Nuts and Bolts of Transaction Processing
This article walks the reader through transaction processing and the ACID properties.
Introduction
Transaction management is one of the most crucial requirements for enterprise application development. Most large enterprise applications in the domains of finance, banking and electronic commerce rely on transaction processing to deliver their business functionality. Given the complexity of today's business requirements, transaction processing remains one of the most complex parts of enterprise-level distributed applications to build, deploy and maintain.
This article walks the reader through the following:
- What is a transaction? What is ACID?
- What are the issues in building transactional applications? Why is transaction management middleware important?
- What is the architecture of a typical transaction processing application? What are the responsibilities of various components of this architecture?
- What are the concepts involved with transaction management systems?
- What are the standards and technologies in the transaction management domain?
This article is not specific to any product and attempts to remain generic while describing the various issues and concepts. It does not aim to compare transaction processing technologies and standards; it is intended as an overview only.
What is a Transaction?
Enterprise applications often require concurrent access to distributed data shared amongst multiple components in order to perform operations on that data. Such applications should maintain the integrity of data (as defined by the business rules of the application) under the following circumstances:
- distributed access to a single resource of data, and
- access to distributed resources from a single application component.
In such cases, it may be required that a group of operations on (distributed) resources be treated as one unit of work. In a unit of work, all the participating operations should either succeed or fail and recover together. This problem is more complicated when
- a unit of work is implemented across a group of distributed components operating on data from multiple resources, and/or
- the participating operations are executed sequentially or in parallel threads requiring coordination and/or synchronization.
In either case, it is required that success or failure of a unit of work be maintained by the application. In case of a failure, all the resources should bring back the state of the data to the previous state (i.e., the state prior to the commencement of the unit of work).
The concept of a transaction, and a transaction manager (or a transaction processing service) simplifies construction of such enterprise level distributed applications while maintaining integrity of data in a unit of work.
A transaction is a unit of work that has the following properties:
- ATOMICITY: A transaction should be done or undone completely and unambiguously. In the event of a failure of any operation, effects of all operations that make up the transaction should be undone, and data should be rolled back to its previous state.
- CONSISTENCY: A transaction should preserve all the invariant properties (such as integrity constraints) defined on the data. On completion of a successful transaction, the data should be in a consistent state. In other words, a transaction should transform the system from one consistent state to another consistent state. For example, in the case of relational databases, a consistent transaction should preserve all the integrity constraints defined on the data.
- ISOLATION: Each transaction should appear to execute independently of other transactions that may be executing concurrently in the same environment. The net effect of executing a set of transactions concurrently should be the same as executing them serially in some order. This requires two things:
- During the course of a transaction, the intermediate (possibly inconsistent) state of the data should not be exposed to other transactions.
- Two concurrent transactions should not be able to interfere with each other while operating on the same data. Database management systems usually implement this feature using locking.
- DURABILITY: The effects of a completed transaction should be persistent; they should survive subsequent failures.
These properties, called the ACID properties, guarantee that a transaction is never incomplete, the data is never inconsistent, concurrent transactions are independent, and the effects of a transaction are persistent. For a brief description of what can go wrong in distributed transaction processing, see Fault Tolerance and Recovery in Transaction Processing Systems.
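To make atomicity concrete, the sketch below wraps two updates on a single database in one JDBC transaction; the connection URL, table and column names are hypothetical. Either both updates are committed together or neither takes effect.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferExample {
    // Moves 'amount' from one account to another as a single unit of work.
    // The JDBC URL, table and column names are illustrative only.
    public static void transfer(String url, long from, long to, double amount) throws SQLException {
        Connection con = DriverManager.getConnection(url);
        try {
            con.setAutoCommit(false); // start a local transaction

            PreparedStatement debit = con.prepareStatement(
                "UPDATE accounts SET balance = balance - ? WHERE id = ?");
            debit.setDouble(1, amount);
            debit.setLong(2, from);
            debit.executeUpdate();

            PreparedStatement credit = con.prepareStatement(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?");
            credit.setDouble(1, amount);
            credit.setLong(2, to);
            credit.executeUpdate();

            con.commit();   // both updates become durable together
        } catch (SQLException e) {
            con.rollback(); // atomicity: neither update takes effect
            throw e;
        } finally {
            con.close();
        }
    }
}
```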
Issues in Building Transactional Applications
To illustrate the issues involved in building transactional applications, consider an order capture and order process application with the architecture shown in Figure 1.
Figure 1: Order Capture and Order Process Application

This application consists of two client components implementing the order capture and order process operations respectively. These two operations constitute a unit of work, or transaction. The order capture and order process components access and operate on four databases holding product, order, inventory and shipping information. In this figure, the dotted arrows indicate read-only data access, while the solid arrows are transactional operations that modify data. The following are the transactional operations in this application:
- create order,
- update inventory,
- create shipping record, and
- update order status.
While implementing these operations as a single transaction, the following issues should be addressed (the sketch after this list shows why coordinating these operations by hand is fragile):
- The application should keep track of all transactional operations and the databases operated upon. The application should therefore define a context for every transaction to include the above four operations.
- Since the order capture and order process transaction is distributed across two components, the transaction context should be global and be propagated from the first component to the second along with the transfer of control.
- The application should monitor the status of the transaction as it occurs.
- To maintain atomicity of the transaction, the application components, and/or database servers should implement a mechanism whereby changes to databases could be undone without loss of consistency of data.
- To isolate concurrent transactions on shared data, the database servers should keep track of the data being operated upon, and lock the data during the course of a transaction.
- The application should also maintain association between database connections and transactions.
- To implement reliable locking, the application components should notify the database servers of transaction termination.
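As referenced above, the following sketch shows the naive approach the application would otherwise have to take: coordinate two databases by committing each JDBC connection itself (the URLs and SQL are hypothetical). The window between the two commits is exactly the gap that a transaction manager and the two-phase commit protocol are designed to close.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class NaiveOrderProcessing {
    // Hypothetical URLs for the orders and inventory databases.
    public static void process(String ordersUrl, String inventoryUrl) throws SQLException {
        Connection orders = DriverManager.getConnection(ordersUrl);
        Connection inventory = DriverManager.getConnection(inventoryUrl);
        try {
            orders.setAutoCommit(false);
            inventory.setAutoCommit(false);

            Statement s1 = orders.createStatement();
            s1.executeUpdate("INSERT INTO orders (product_id, quantity) VALUES (42, 1)");

            Statement s2 = inventory.createStatement();
            s2.executeUpdate("UPDATE inventory SET quantity = quantity - 1 WHERE product_id = 42");

            orders.commit();
            // If the process fails here, the order is durable but the
            // inventory update is lost: the unit of work is broken.
            inventory.commit();
        } catch (SQLException e) {
            // Best-effort cleanup; there is no way to undo a commit that already happened.
            orders.rollback();
            inventory.rollback();
            throw e;
        } finally {
            orders.close();
            inventory.close();
        }
    }
}
```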
Transaction Processing - Architecture
Having seen the issues in building transactional applications from scratch, consider the same application built around a transaction processing architecture as shown in Figure 2. Note that, although there are several architectures possible, as will be discussed in a later section, the one shown in Figure 2 represents the essential features.
Figure 2: Transaction Processing Architecture

This architecture introduces a transaction manager and a resource manager for each database (resource). These components abstract most of the transaction-specific issues from the application components (Order Capture and Order Process), and share the responsibility for implementing transactions. The various components of this architecture are discussed below.
Application Components
Application Components: Responsibilities
- Create and demarcate transactions
- Propagate transaction context
- Operate on data via resource managers
Application components are clients for the transactional resources. These are the programs with which the application developer implements business transactions.
With the help of the transaction manager, these components create global transactions, propagate the transaction context if necessary, and operate on the transactional resources within the scope of these transactions. These components are not responsible for implementing the semantics that preserve the ACID properties of transactions. However, as part of the application logic, these components generally make the decision whether to commit or roll back transactions.
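As an illustration of this division of labour, the fragment below sketches an application component demarcating a global transaction through the JTA javax.transaction.UserTransaction interface (discussed later in this article). How the UserTransaction and the two DataSource references are obtained is environment-specific and simply assumed here; the component only marks the transaction boundaries and decides whether to commit or roll back.

```java
import java.sql.Connection;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class OrderCaptureComponent {
    // References assumed to be supplied by the hosting environment.
    private UserTransaction userTransaction;
    private DataSource ordersDataSource;
    private DataSource inventoryDataSource;

    public void captureOrder(long productId, int quantity) throws Exception {
        userTransaction.begin();            // demarcate: start the global transaction
        try {
            // Connections obtained inside the transaction are enlisted
            // with it by the resource managers / application server.
            Connection orders = ordersDataSource.getConnection();
            Connection inventory = inventoryDataSource.getConnection();

            orders.createStatement().executeUpdate(
                "INSERT INTO orders (product_id, quantity) VALUES ("
                + productId + ", " + quantity + ")");
            inventory.createStatement().executeUpdate(
                "UPDATE inventory SET quantity = quantity - " + quantity
                + " WHERE product_id = " + productId);

            orders.close();
            inventory.close();
            userTransaction.commit();       // business decision: commit
        } catch (Exception e) {
            userTransaction.rollback();     // business decision: roll back
            throw e;
        }
    }
}
```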
Resource Managers
Resource Managers: Responsibilities
- Enlist resources with the transaction manager
- Participate in two-phase commit and recovery protocol
A resource manager is a component that manages a persistent and stable data storage system, and participates in the two-phase commit and recovery protocols with the transaction manager.
A resource manager is typically a driver or a wrapper over a stable storage system, with interfaces for operating on the data (for the application components), and for participating in the two-phase commit and recovery protocols coordinated by a transaction manager. This component may also, directly or indirectly, register resources with the transaction manager so that the transaction manager can keep track of all the resources participating in a transaction. This process is called resource enlistment. To take part in the two-phase commit and recovery protocols, the resource manager must implement supplementary mechanisms (such as logging) by which recovery is possible.
Resource managers provide two sets of interfaces: one set for the application components to get connections and perform operations on the data, and the other set for the transaction manager to participate in the two-phase commit and recovery protocol.
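In JDBC terms, one concrete realization of these two faces is the javax.sql.XADataSource family: the application side sees an ordinary Connection, while the transaction manager side sees a javax.transaction.xa.XAResource. A minimal sketch, assuming some vendor's XADataSource implementation is available:

```java
import java.sql.Connection;
import javax.sql.XAConnection;
import javax.sql.XADataSource;
import javax.transaction.xa.XAResource;

public class ResourceManagerFaces {
    public static void show(XADataSource xaDataSource) throws Exception {
        XAConnection xaConnection = xaDataSource.getXAConnection();

        // Face 1: the interface application components use to operate on data.
        Connection connection = xaConnection.getConnection();

        // Face 2: the interface the transaction manager uses to drive the
        // two-phase commit and recovery protocols (start/end, prepare,
        // commit, rollback, recover).
        XAResource xaResource = xaConnection.getXAResource();

        // In a real system the transaction manager, not the application,
        // would obtain and drive the XAResource.
        connection.close();
        xaConnection.close();
    }
}
```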
Transaction Manager
Transaction Manager: Responsibilities
- Establish and maintain transaction context
- Maintain association between a transaction and the participating resources.
- Initiate and conduct two-phase commit and recovery protocol with the resource managers.
- Make synchronization calls to the application components before the beginning and after the end of the two-phase commit and recovery process
The transaction manager is the core component of a transaction processing environment. Its primary responsibilities are to create transactions when requested by application components, allow resource enlistment and delistment, and to conduct the two-phase commit or recovery protocol with the resource managers.
A typical transactional application begins a transaction by issuing a request to a transaction manager to initiate a transaction. In response, the transaction manager starts a transaction and associates it with the calling thread. The transaction manager also establishes a transaction context. All application components and/or threads participating in the transaction share the transaction context. The thread that initially issued the request for beginning the transaction, or, if the transaction manager allows, any other thread may eventually terminate the transaction by issuing a commit or rollback request.
Before a transaction is terminated, any number of components and/or threads may perform transactional operations on any number of transactional resources known to the transaction manager. If allowed by the transaction manager, a transaction may be suspended or resumed before finally completing the transaction.
Once the application issues the commit request, the transaction manager prepares all the resources for a commit operation (by conducting a vote), and, based on whether or not all resources are ready to commit, issues a commit or rollback request to all the resources.
The following sections discuss various concepts associated with transaction processing.
Transaction Processing - Concepts
Transaction Demarcation
The boundaries of a transaction are specified by what is known as transaction demarcation. Transaction demarcation enables work done by distributed components to be bound by a global transaction. It is a way of marking a group of operations as constituting a transaction.
The most common approach to demarcation is to mark the thread executing the operations for transaction processing. This is called programmatic demarcation. The transaction so established can be suspended by unmarking the thread, and later resumed by explicitly propagating the transaction context from the point of suspension to the point of resumption.
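In JTA terms (introduced later in this article), suspension and resumption look roughly like the sketch below; the javax.transaction.TransactionManager instance is assumed to be supplied by the environment.

```java
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

public class SuspendResumeSketch {
    public static void run(TransactionManager tm) throws Exception {
        tm.begin();                           // the calling thread is now marked transactional

        // ... transactional work ...

        Transaction suspended = tm.suspend(); // unmark the thread; keep the context

        // ... non-transactional work, or hand 'suspended' to another thread ...

        tm.resume(suspended);                 // re-associate the context with this thread

        // ... more transactional work ...

        tm.commit();
    }
}
```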
Transaction demarcation ends with a commit or a rollback request to the transaction manager. The commit request directs all the participating resource managers to record the effects of the operations of the transaction permanently. The rollback request makes the resource managers undo the effects of all operations in the transaction.
An alternative to programmatic demarcation is declarative demarcation. Component-based transaction processing systems such as Microsoft Transaction Server, and application servers based on the Enterprise Java Beans specification, support declarative demarcation. In this technique, components are marked as transactional at deployment time. This has two implications. First, the responsibility for demarcation shifts from the application to the container hosting the component; for this reason, this technique is also called container-managed demarcation. Second, demarcation is postponed from application build time (static) to component deployment time (dynamic).
Transaction Context and Propagation
Since multiple application components and resources participate in a transaction, it is necessary for the transaction manager to establish and maintain the state of the transaction as it occurs. This is usually done in the form of transaction context.
Transaction context is an association between the transactional operations on the resources, and the components invoking the operations. During the course of a transaction, all the threads participating in the transaction share the transaction context. Thus the transaction context logically envelops all the operations performed on transactional resources during a transaction. The transaction context is usually maintained transparently by the underlying transaction manager.
Resource Enlistment
Resource enlistment is the process by which resource managers inform the transaction manager of their participation in a transaction. This process enables the transaction manager to keep track of all the resources participating in a transaction. The transaction manager uses this information to coordinate the transactional work performed by the resource managers and to drive the two-phase commit and recovery protocols.
At the end of a transaction (after a commit or rollback) the transaction manager delists the resources. Thereafter, the association between the transaction and the resources no longer holds.
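A minimal JTA-flavoured sketch of enlistment and delistment is shown below; in practice the application server or resource manager performs these calls, not application code, and the TransactionManager and XAResource references are assumed to be available.

```java
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;
import javax.transaction.xa.XAResource;

public class EnlistmentSketch {
    public static void doWork(TransactionManager tm, XAResource xaResource) throws Exception {
        tm.begin();                                   // start a transaction on the calling thread
        Transaction tx = tm.getTransaction();         // the transaction context for this thread

        tx.enlistResource(xaResource);                // enlist: the transaction manager now
                                                      // tracks this resource for the transaction

        // ... operate on the resource within the transaction ...

        tx.delistResource(xaResource, XAResource.TMSUCCESS); // delist: work for this resource
                                                             // completed normally
        tm.commit();                                  // drives the two-phase commit
    }
}
```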
Two-Phase Commit
This protocol, between the transaction manager and all the resources enlisted for a transaction, ensures that either all the resource managers commit the transaction or they all abort it. In this protocol, when the application asks to commit the transaction, the transaction manager issues a prepare request to all the resource managers involved. Each of these resources in turn replies, indicating whether or not it is ready to commit. Only when all the resource managers are ready to commit does the transaction manager issue a commit request to all of them. Otherwise, the transaction manager issues a rollback request and the transaction is rolled back.
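To make the protocol concrete, here is a deliberately simplified coordinator loop written against the JTA javax.transaction.xa.XAResource interface. It omits logging, recovery and heuristic outcomes, and assumes the transaction manager has already created the Xid and ended the resource associations; it shows only the vote-then-decide structure.

```java
import java.util.ArrayList;
import java.util.List;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class TwoPhaseCommitSketch {
    // Simplified: no transaction log, no recovery, no heuristic handling.
    public static void complete(Xid xid, List<XAResource> resources) throws XAException {
        List<XAResource> prepared = new ArrayList<XAResource>();
        boolean allPrepared = true;

        // Phase 1: ask every resource manager to prepare (the vote).
        for (XAResource resource : resources) {
            try {
                int vote = resource.prepare(xid);  // returns XA_OK or XA_RDONLY;
                                                   // a "no" vote arrives as an XAException
                if (vote == XAResource.XA_OK) {
                    prepared.add(resource);        // prepared, awaiting the decision
                }
                // XA_RDONLY: the resource made no updates and needs no phase 2.
            } catch (XAException e) {
                allPrepared = false;               // a prepare failure counts as a "no" vote
            }
        }

        // Phase 2: commit only if every vote was "yes"; otherwise roll back.
        for (XAResource resource : prepared) {
            if (allPrepared) {
                resource.commit(xid, false);       // false: this is not a one-phase commit
            } else {
                resource.rollback(xid);
            }
        }
    }
}
```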
Transaction Processing - Standards and Technologies
X/Open Distributed Transaction Processing Model
The X/Open Distributed Transaction Processing (DTP) model is a distributed transaction processing model proposed by the Open Group, a vendor consortium. This model is a standard among most of the commercial vendors in transaction processing and database domains.
This model consists of four components:
- Application Programs to implement transactional operations.
- Resource Managers as discussed above.
- Transaction Managers as discussed above.
- Communication Resource Manager to facilitate interoperability between different transaction managers in different transaction processing domains.
This model also specifies the following interfaces:
- TX Interface: This is an interface between the application program and the transaction manager, and is implemented by the transaction manager. It provides transaction demarcation services by allowing application programs to bind transactional operations within global transactions. The interface consists of the following functions:
| Function | Purpose |
| --- | --- |
| tx_open | Opens the transaction manager and the set of associated resource managers. |
| tx_close | Closes the transaction manager and the set of associated resource managers. |
| tx_begin | Begins a new transaction. |
| tx_rollback | Rolls back the transaction. |
| tx_commit | Commits the transaction. |
| tx_set_commit_return | Sets the commit_return characteristic, which determines when tx_commit returns to the caller (after the commit decision is logged, or only after the two-phase commit completes). |
| tx_set_transaction_control | Switches between chained and unchained mode. In chained mode, the work is broken into pieces, each under the control of a flat transaction; once a piece of work is complete, it is committed or rolled back independently of the state of the other pieces. |
| tx_set_transaction_timeout | Sets a transaction timeout interval. |
| tx_info | Returns transaction information, such as its identifier and the state of the transaction. |

Table 1: TX Interface of the X/Open DTP Model
- XA Interface: This is a bidirectional interface between resource managers and transaction managers. It specifies two sets of functions. The first set, the xa_*() functions, is implemented by resource managers for use by the transaction manager:
| Function | Purpose |
| --- | --- |
| xa_start | Directs a resource manager to associate subsequent requests by application programs with the transaction identified by the supplied identifier. |
| xa_end | Ends the association of a resource manager with the transaction. |
| xa_prepare | Prepares the resource manager for the commit operation. Issued by the transaction manager in the first phase of the two-phase commit operation. |
| xa_commit | Commits the transactional operations. Issued by the transaction manager in the second phase of the two-phase commit operation. |
| xa_recover | Retrieves a list of prepared, heuristically committed, or heuristically rolled-back transactions. |
| xa_forget | Forgets the heuristic transaction associated with the given transaction identifier. |

Table 2: XA Interface of the X/Open DTP Model (implemented by resource managers for the transaction manager)
The second set of functions, the ax_*() functions, is implemented by the transaction manager for use by resource managers:

| Function | Purpose |
| --- | --- |
| ax_reg | Dynamically enlists with the transaction manager. |
| ax_unreg | Dynamically delists from the transaction manager. |

Table 3: AX Interface of the X/Open DTP Model (implemented by the transaction manager for resource managers)
- XA+ Interface: This interface is used to support global transactions across different transaction manager domains via communication resource managers.
- TXRPC Interface: This interface provides portability for communication between application programs within a global transaction.
- CRM-OSI TP: An interface between a communication resource manager and the OSI transaction processing services.
The X/Open DTP model is well established in the industry. A number of commercial transaction management products, such as TXSeries/Encina (from Transarc, a wholly owned subsidiary of IBM), Tuxedo and TopEnd (both from BEA Systems), and products from AT&T GIS, support the TX interface. Although Microsoft's Transaction Server does not support the TX interface, it can interoperate with XA-compliant databases such as Oracle. Similarly, most commercial databases (such as Oracle, Sybase, Informix and Microsoft SQL Server) and messaging middleware products (such as IBM's MQSeries and Microsoft's MSMQ Server) provide an implementation of the XA interface.
OMG Object Transaction Service
Object Transaction Service (OTS) is a distributed transaction processing service specified by the Object Management Group (OMG). This specification extends the CORBA model and defines a set of interfaces to perform transaction processing across multiple CORBA objects.
The OTS model is based on the X/Open DTP model with the following enhancements:
- The OTS model replaces the functional XA and TX interfaces with CORBA IDL interfaces.
- The various objects in this model communicate via CORBA method calls over IIOP.
However, the OTS is interoperable with the X/Open DTP model: an application using transactional objects could still use the TX interface with the transaction manager for transaction demarcation.
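As a sketch of what demarcation looks like in OTS terms, the fragment below uses the Java language mapping of the CosTransactions module (the same mapping the JTS builds on, as discussed later). It assumes an initialized ORB with a transaction service registered under the standard "TransactionCurrent" initial reference.

```java
import org.omg.CORBA.ORB;
import org.omg.CosTransactions.Current;
import org.omg.CosTransactions.CurrentHelper;

public class OtsDemarcationSketch {
    public static void run(ORB orb) throws Exception {
        // Obtain the OTS Current pseudo-object for the calling thread.
        Current current = CurrentHelper.narrow(
                orb.resolve_initial_references("TransactionCurrent"));

        current.begin();          // start a transaction associated with this thread

        // ... invoke operations on transactional CORBA objects here ...

        current.commit(true);     // true: report heuristic outcomes, if any
        // current.rollback();    // or roll back instead
    }
}
```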
The OTS architecture consists of the following components:
- Transaction Client: A program or object that invokes operations on transactional objects.
- Transactional Object: A CORBA object that encapsulates or refers to persistent data, and whose behavior depends on whether or not its operations are invoked during a transaction.
- Recoverable Object: A transactional object that directly maintains persistent data, and participates in transaction protocols.
- Transactional Server: A collection of one or more transactional objects.
- Recoverable Server: A collection of objects, at least one of which is recoverable.
- Resource Object: A resource object is an object in the transaction service that is registered for participation in the two-phase commit and recovery protocol.
In addition to the usual transactional semantics, the CORBA OTS provides for the following:
- Nested Transactions: This allows an application to create a transaction that is embedded in an existing transaction. In this model, multiple subtransactions can be embedded recursively in a transaction. Subtransactions can be committed or rolled back without committing or rolling back the parent transaction. However, the results of a commit operation are contingent upon the commitment of all the transaction's ancestors. The main advantage of this model is that transactional operations can be controlled at a finer granularity. The application will have an opportunity to correct or compensate for failures at the subtransaction level, without actually attempting to commit the complete parent transaction.
- Application Synchronization: Using the OTS synchronization protocol, certain objects can be registered with the transaction service for notification before the start of and the completion of the two-phase commit process. This enables such application objects to synchronize transient state and data stored in persistent storage.
The following are the principal interfaces in the CORBA OTS specification.
| Interface | Responsibilities |
| --- | --- |
| Current | Transaction demarcation (begin, commit, rollback, rollback_only, set_time_out); status of the transaction (get_status); name of the transaction (get_transaction_name); transaction context (get_control) |
| TransactionFactory | Explicit transaction creation |
| Control | Explicit transaction context management |
| Terminator | Commit or rollback of a transaction |
| Coordinator | Status of the transaction (get_status, get_parent_status, get_top_level_status); transaction information (is_same_transaction, is_related_transaction, is_ancestor_transaction, is_descendant_transaction, is_top_level_transaction, hash_transaction, hash_top_level_transaction, get_transaction_name, get_txcontext); resource enlistment (register_resource, register_subtran_aware); registration of synchronization objects (register_synchronization); marking the transaction for rollback (rollback_only); creation of subtransactions (create_subtransaction) |
| RecoveryCoordinator | Coordination of recovery in case of failure (replay_completion) |
| Resource | Participation in the two-phase commit and recovery protocol (prepare, rollback, commit, commit_one_phase, forget) |
| Synchronization | Application synchronization before the beginning and after the completion of the two-phase commit (before_completion, after_completion) |
| SubtransactionAwareResource | Commit or rollback of a subtransaction (commit_subtransaction, rollback_subtransaction), called by the transaction service |
| TransactionalObject | A marker interface to be implemented by all transactional objects |

Table 4: CORBA OTS Interfaces

The following products offer implementations of the OTS: Integrated Transaction Service (from Inprise), OrbixOTM (from Iona), OTSArjuna (from Arjuna Solutions Limited), and TPBroker (from Hitachi Software).
JTA and JTS
The Java Transaction Service and the Java Transaction API are the latest entrants into the enterprise distributed computing arena. As a part of the enterprise Java initiative, Sun Microsystems Inc. proposed these specifications in early 1999.
Figure 3: Java Transaction Initiative

The JTS specifies the implementation of a Java transaction manager. This transaction manager supports the JTA, against which application servers can be built to support transactional Java applications. Internally, the JTS implements the Java mapping of the OMG OTS 1.1 specification. The Java mapping is specified in two packages: org.omg.CosTransactions and org.omg.CosTSPortability. Although the JTS is a Java implementation of the OMG OTS 1.1 specification, the JTA retains the simplicity of the XA and TX functional interfaces of the X/Open DTP model.
The JTA specifies an architecture for building transactional application servers and defines a set of interfaces for various components of this architecture. The components are: the application, resource managers, and the application server, as shown in Figure 3.
The JTS thus provides a new architecture for transactional application servers and applications, while complying with the OMG OTS 1.1 interfaces internally. This allows JTA-compliant applications to interoperate with other OTS 1.1-compliant applications through standard IIOP.
As shown in Figure 3, in the Java transaction model, the Java application components can conduct transactional operations on JTA compliant resources via the JTS. The JTS acts as a layer over the OTS. The applications can therefore initiate global transactions to include other OTS transaction managers, or participate in global transactions initiated by other OTS compliant transaction managers.
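The JTA also carries over the synchronization idea described earlier for the OTS: an object implementing javax.transaction.Synchronization can register with the current transaction to be called back immediately before and after the two-phase commit. A minimal sketch, assuming a TransactionManager supplied by the application server:

```java
import javax.transaction.Status;
import javax.transaction.Synchronization;
import javax.transaction.TransactionManager;

public class CacheFlusher implements Synchronization {
    public void beforeCompletion() {
        // Called before the two-phase commit starts:
        // flush any transient state to the resource managers here.
    }

    public void afterCompletion(int status) {
        // Called after the transaction completes, with the outcome.
        if (status == Status.STATUS_COMMITTED) {
            // e.g. keep cached data
        } else {
            // e.g. discard cached data
        }
    }

    public static void register(TransactionManager tm) throws Exception {
        tm.getTransaction().registerSynchronization(new CacheFlusher());
    }
}
```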
For more details on JTS and JTA, see Java Transaction Service.
Microsoft Transaction Server
The Microsoft Transaction Server (MTS) is a component-based transaction server for components based on Microsoft's Component Object Model (COM). The MTS programming model provides interfaces for building transactional COM components, while the MTS run-time environment provides the means to deploy these components and manage their transactions. Using MTS, work done by multiple COM components can be composed into a single transaction.
Unlike other technologies discussed in this section, MTS is a product and is not based on open specifications. Also note that, although the MTS environment offers several other features such as resource pooling, object recycling, access control etc., this section focuses only on the transactional capabilities of MTS, and attempts to map the various transaction management concepts to the MTS environment.
MTS Architecture
The high level architecture of MTS is shown in Figure 4.
Figure 4: Microsoft Transaction Server

The MTS environment consists of the following:
- MTS Run-time: This is the environment in which instances of MTS components execute and are managed. The MTS run-time provides for the deployment and management of MTS components. It has the following features:
- Management of distributed transactions
- Automatic management of processes and threads
- Management of objects (creation, pooling and recycling)
- Distributed security service to control object creation and usage
- MTS Explorer: This is a graphical user interface driven tool for deploying and managing MTS components on the MTS run-time. The MTS Explorer can also be used to monitor transactions with the Distributed Transaction Coordinator.
- Distributed Transaction Coordinator (DTC): DTC is the transaction manager for MTS.
- MTS API: The MTS API (in Microsoft Visual Basic, Microsoft Visual C++ and Microsoft Visual J++) provides certain interfaces and concrete classes for building transactional components.
- Resource Dispensers: An MTS resource dispenser manages nondurable shared data on behalf of MTS applications. MTS provides two resource dispensers:
- ODBC Resource Dispenser: The ODBC resource dispenser is essentially an ODBC driver manager with the following additional capabilities:
- Manage pools of connections to ODBC-compliant databases, including reclamation and reuse of connections
- Enlist and delist database connections on MTS context objects.
- Shared Property Manager: The MTS shared property manager manages application-wide, process-specific properties (name-value pairs) and provides synchronized access to this data.
- Resource Manager: For a resource manager to participate in MTS transactions, it must support one of the following protocols:
- OLE Transactions: This is a COM based two-phase commit protocol used by resource managers to participate in transactions coordinated by the DTC.
- X/Open DTP XA Protocol: With this protocol, MTS requires an OLE Transactions-to-XA mapper. This mapper is provided by the MTS SDK.
MTS Objects and Transaction Context
An MTS object is an instance of an MTS component (a component deployed on and managed by MTS). For each MTS object, MTS creates and maintains a context object (ObjectContext) which provides the execution context for that object. The context object also maintains transaction context information. Resource dispensers and the DTC can access this transaction context information for transaction demarcation, resource enlistment and delistment, two-phase commit, and so on. Note that, in MTS, transaction context information is maintained with each MTS object, as opposed to a single transaction context object shared by all the objects participating in a transaction.
Transaction Outcome
Each MTS object can participate in determining the outcome of a transaction by calling one of the methods on the ObjectContext object:
- SetComplete: Informs the MTS that the object has successfully completed its work, and its work can be committed.
- SetAbort: Informs the MTS that the object's work cannot be committed.
- EnableCommit: Informs the MTS that the object's work is not necessarily complete, but its transactional work can be committed in its current form.
- DisableCommit: Informs the MTS that the object's work cannot be committed in its current form.
Transaction Demarcation
MTS allows both programmatic and declarative demarcation of transactions. A declarative transaction property is set for every component deployed on MTS. In addition, MTS clients can also initiate and end transactions programmatically.
- Declarative Demarcation: Depending on an MTS component's transaction property, MTS automatically begins a transaction on behalf of the application. The following transaction properties (that can be set at the deployment time) are possible:
- Requires Transaction: Instances of the component always execute within the context of a transaction. If the invoking object is executing within a transaction, the instance executes within that transaction; otherwise MTS creates a new transaction for it.
- Requires New Transaction: Instances of the component must execute within their own transactions, irrespective of whether or not the invoking object has started a transaction.
- Supports Transaction: Instances of the component can execute within the scope of the invoking object's transaction (if there is one). This implies that the component is transaction-safe.
- Does not Support Transaction: Instances of the component do not execute within the scope of any transaction. MTS does not associate the work done by such objects with any transaction.
- Programmatic Demarcation: MTS clients can demarcate transactions programmatically using the TransactionContext object. A client begins a transaction by creating an instance of the TransactionContext object, and ends the transaction by calling the Commit or Abort methods of this object. All MTS objects created within these boundaries using the TransactionContext object execute under the same transaction context (except when a component is set to require a new transaction, or does not support transactions). MTS implicitly maintains the association between the TransactionContext object and the transaction.
Resource Enlistment
MTS performs resource enlistment automatically. When an MTS object requests a connection to a resource from the resource dispenser, the resource dispenser obtains the calling object's transaction context and registers the connection with it.
Although MTS is available only for Microsoft Windows platforms, it can interoperate with resource managers that comply with the XA protocol, and such resource managers operating on non-Windows platforms can participate in transactions coordinated by the DTC.
For more information on MTS, refer to the MSDN library. For a quick but elaborate compilation of features of MTS vis-a-vis other competing technologies, refer to MTS FAQ.
Enterprise Java Beans
Enterprise Java Beans (EJB) is a technology specification from Sun Microsystems Inc. that specifies a framework for building component-based distributed applications. Application servers conforming to this technology have begun to appear from various vendors over the past six months, while the specification continues to be refined by Sun Microsystems Inc.
As an application server framework, EJB servers address transaction processing, resource pooling, security, threading, persistence, remote access, life cycle management and so on. However, as in the case of MTS, this section focuses only on the distributed transaction model of the EJB framework.
The EJB framework specifies the construction, deployment and invocation of components called enterprise beans. The EJB specification classifies enterprise beans into two categories: entity beans and session beans. While entity beans abstract persistent domain data, session beans provide session-specific application logic. Both types of beans are maintained by EJB-compliant servers in what are called containers. A container provides the run-time environment for an enterprise bean.
Figure 5 shows a simplified architecture of transaction management in EJB compliant application servers. This figure shows only the essential interactions between the constituent parts of the architecture.
Figure 5: Transactions in an EJB Application Server

An enterprise bean is specified by two interfaces: the home interface and the remote interface. The home interface specifies how a bean can be created or found. With the help of this interface, a client or another bean can obtain a reference to a bean residing in a container on an EJB server. The remote interface specifies the application-specific methods that are relevant to entity or session beans.
Clients obtain references to the home interfaces of enterprise beans via the Java Naming and Directory Interface (JNDI) mechanism; EJB servers typically provide JNDI implementations. Using the reference to the home interface, a client can obtain a reference to the remote interface, and can then invoke the methods specified in the remote interface. The EJB specification designates Java Remote Method Invocation (RMI) as the application-level protocol for remote method invocation; however, an implementation may use IIOP as the wire-level protocol.
In Figure 5, the client first obtains a reference to the home interface, and then a reference to an instance of Bean A via the home interface. The same procedure applies when an instance of Bean A obtains a reference to, and invokes methods on, an instance of Bean B.
EJB Transaction Model
The EJB framework does not mandate any specific transaction service (such as the JTS) or protocol for transaction management. However, the specification requires that the javax.transaction.UserTransaction interface of the JTA be exposed to enterprise beans. This interface is required for programmatic transaction demarcation, as discussed in the next section.
Similar to the MTS, the EJB framework provides for declarative demarcation of transactions. The container performs automatic demarcation depending on the transaction attributes specified at the time of deploying an enterprise bean in a container.
The following attributes determine how transactions are created.
- NotSupported: The container invokes the bean without a global transaction context.
- Required: The container invokes the bean within a global transaction context. If the invoking thread already has a transaction context associated, the container invokes the bean in the same context. Otherwise, the container creates a new transaction and invokes the bean within the transaction context.
- Supports: The bean is transaction-ready. If the client invokes the bean within a transaction, the bean is also invoked within the same transaction. Otherwise, the bean is invoked without a transaction context.
- RequiresNew: The container invokes the bean within a new transaction irrespective of whether the client is associated with a transaction or not.
- Mandatory: The container must invoke the bean within a transaction. The caller should always start a transaction before invoking any method on the bean.
Transaction Demarcation
The EJB framework supports three types of transaction demarcation.
- Declarative Demarcation: This is also called container-managed demarcation. The container demarcates transactions on behalf of the bean. The required transaction attribute is specified in a deployment descriptor at the time of deploying the bean on an EJB server. The bean can use the javax.ejb.EJBContext.setRollbackOnly() method to mark the transaction for rollback.
- Bean Managed Demarcation: The bean itself demarcates transactions programmatically, using the javax.transaction.UserTransaction interface obtained from its EJBContext, much as a client would (see the sketch after this list).
- Client Managed Demarcation: Java clients can use the javax.transaction.UserTransaction interface to demarcate transactions programmatically.
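As referenced above, here is a sketch of bean-managed demarcation for a session bean deployed with bean-managed transactions (the deployment descriptor is not shown). The bean obtains a javax.transaction.UserTransaction from its context; the business method and its contents are hypothetical.

```java
import java.rmi.RemoteException;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.transaction.UserTransaction;

public class OrderProcessBean implements SessionBean {
    private SessionContext context;

    public void processOrder(long orderId) throws RemoteException {
        UserTransaction ut = context.getUserTransaction();
        try {
            ut.begin();    // bean-managed demarcation
            // ... create the shipping record and update the order status,
            //     e.g. through container-managed DataSource connections ...
            ut.commit();
        } catch (Exception e) {
            try {
                ut.rollback();
            } catch (Exception ignored) {
                // best-effort rollback
            }
            throw new RemoteException("Order processing failed", e);
        }
    }

    // EJB 1.1 session bean lifecycle callbacks.
    public void ejbCreate() {}
    public void setSessionContext(SessionContext ctx) { this.context = ctx; }
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}
```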
Resource Enlistment
Resource enlistment is automatic with EJB. The EJB container automatically enlists connections to EJB-aware resource managers whenever a bean obtains a connection.
Application Synchronization
The EJB specification provides the javax.ejb.SessionSynchronization interface for application synchronization. When a bean implements this interface, the container calls its afterBegin, beforeCompletion and afterCompletion methods to synchronize the bean with the two-phase commit process.
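A brief sketch of what such a bean might look like; the cached field and its use are purely illustrative.

```java
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.ejb.SessionSynchronization;

public class CartBean implements SessionBean, SessionSynchronization {
    private double cachedTotal;   // transient, conversational state

    public void afterBegin() {
        // A new transaction has started: (re)load state from the database if needed.
    }

    public void beforeCompletion() {
        // Last chance to write cachedTotal to persistent storage
        // before the two-phase commit begins.
    }

    public void afterCompletion(boolean committed) {
        if (!committed) {
            cachedTotal = 0.0;    // the transaction rolled back: reset transient state
        }
    }

    // EJB 1.1 session bean lifecycle callbacks.
    public void ejbCreate() {}
    public void setSessionContext(SessionContext ctx) {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}
```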
Several vendors currently offer application servers complying with the EJB 1.1 specification, including WebLogic from BEA Systems Inc., GemStone/J from GemStone Systems Inc., iPlanet from Sun Microsystems Inc. and PowerTier for EJB from Persistence Software Inc. Refer to the EJB Directory for a listing of EJB application servers and EJB-aware database servers.
In summary, the EJB framework provides features similar to those of MTS. Both technologies enable component-based, transactional distributed applications, and both abstract the process of transaction demarcation away from application components.
Summary
Transaction processing has always been complex and critical. However, with the advent of MTS and EJB, transaction processing has caught the attention of developers and IT organizations alike. This is not without reason. These recent technologies simplify distributed transaction management, and they are fueled by three major developments:
- Distributed Computing: The two competing distributed computing models (CORBA and COM/DCOM) simplified distributed computing.
- Component-Based Development: Based on the above interface-centric paradigms, component-based distributed application development has become a reality.
- Object Orientation: The maturity of object-oriented programming, assisted by design patterns and frameworks, made implementation of these technologies feasible.
In addition, these technologies address the scalability and robustness that are required for today's enterprise applications.
The purpose of this document is to focus on the issues and concepts involved in distributed transaction management. It is by no means comprehensive enough to cover all the finer details of the underlying technologies. At the same time, this document does not attempt to compare technologies; it instead maps the various concepts onto each of the technologies. Only the nuts and bolts are discussed, not how the nuts and bolts are made, and not how machines are built with them.
If you are interested in learning more about the author, see his web site at http://www.Subrahmanyam.com.
© Subrahmanyam Allamaraju 1999, 2000. All rights reserved.
This document is protected by copyright. No part of this document may be reproduced in any form without prior written consent of the author. This document is for electronic distribution only.
All trademarks acknowledged.