Distributed Transaction Processing
When XCICS arranges function shipping, distributed program link (DPL), asynchronous transaction processing, or transaction routing for you, it establishes a logical data link with a remote system. A data exchange between the two systems then follows. This data exchange is controlled by XCICS-supplied programs, using APPC, LUTYPE6.1, or MRO protocols. The XCICS-supplied programs issue commands to allocate conversations, and to send and receive data between the systems. Equivalent commands are available to application programs, so that applications themselves can converse. The technique of distributing the functions of a transaction over several transaction programs within a network is called distributed transaction processing (DTP). Of the intercommunication facilities, DTP is the most flexible and the most powerful, but it is also the most complex. This section introduces the basic concepts.

Why use DTP

In a multisystem environment, data transfers between systems are necessary because end users need access to remote resources. Managing these resources consumes network capacity, and performance suffers if the network is used excessively. There is therefore a performance gain if application design is oriented toward doing the processing associated with a resource in the resource-owning region. DTP lets you process data at the point where it arises, instead of overloading the network by assembling it at a central processing point. There are, of course, other reasons for using DTP. DTP does the following:
DTP programming

In DTP, transactions pass data to each other directly. While one sends, the other receives. The exchange of data between two transactions is called a conversation. Although several transactions can be involved in a single distributed process, communication between them breaks down into a number of self-contained conversations between pairs. Each such conversation uses a CICS resource known as a session.

Conversation initiation

A transaction starts a conversation by requesting the use of a session to a remote system. Having obtained the session, it causes an attach request to be sent to the other system to activate the transaction that is to be its conversation partner. A transaction can initiate any number of other transactions, and hence conversations. In a complex process, a distinct hierarchy emerges, with the terminal-initiated transaction at the top. The structure of a distributed process is determined dynamically by the programs; it cannot be predefined. Note that, for every transaction, there is only one inbound attach request, but there can be any number of outbound attach requests.

The session that activates a transaction is called its principal facility. A session that a transaction allocates in order to activate another transaction is called an alternate facility. A transaction can therefore have only one principal facility, but any number of alternate facilities.

When a transaction initiates a conversation, it is the front end on that conversation; its conversation partner is the back end. (Some books refer to the front end as the initiator and the back end as the recipient.) It is normally the front end that dominates and determines the way the conversation goes. You can arrange for the back end to take over if you want but, in a complex process, this can cause unnecessary complication. This is explained further in the discussion of synchronization later in this section.
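The facility rules above (one principal facility per transaction, any number of alternate facilities, one inbound attach but many outbound attaches) can be sketched as a small model. This is an illustrative Python sketch only; the class, method, and session names are invented and are not part of any XCICS API:

```python
class Transaction:
    """Illustrative model of a DTP transaction and its facilities
    (all names here are invented for illustration)."""

    def __init__(self, name, principal_facility=None):
        self.name = name
        # The session that attached this transaction; None for a
        # terminal-initiated transaction at the top of the hierarchy.
        self.principal_facility = principal_facility
        # Sessions this transaction has allocated to attach partners.
        self.alternate_facilities = []

    def attach(self, partner_name, sysid):
        """Allocate a session to remote system `sysid` and send an attach
        request for `partner_name`. This transaction is the front end on
        the new conversation; the returned transaction is the back end."""
        session = f"{sysid}:{partner_name}"
        self.alternate_facilities.append(session)
        # For the back end, that same session is its principal facility.
        return Transaction(partner_name, principal_facility=session)


# A terminal-initiated transaction attaches two partners on two systems,
# forming the hierarchy described above.
root = Transaction("TRN1")
back1 = root.attach("TRN2", "SYSB")
back2 = root.attach("TRN3", "SYSC")
```

Note that `root` has no principal facility (it was started from a terminal) but two alternate facilities, while each back end has exactly one principal facility: the session that carried its inbound attach request.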
A conversation transfers data from one transaction to another. For this to work properly, each transaction must know what the other intends. It would be nonsensical for the front end to send data if all the back end wants to do is print the weekly sales report. It is therefore necessary to design, code, and test front end and back end as one software unit. The same applies when there are several conversations and several transaction programs. Each new conversation adds to the complexity of the overall design.

Conversation state and error detection

As a conversation progresses, it moves from one state to another within both conversing transactions. The conversation state determines which commands may be issued. For example, it is no use trying to send or receive data if there is no session linking the front end to the back end. Similarly, if the back end signals end of conversation, the front end cannot receive any more data on that conversation. Either end of the conversation can cause a change of state, usually by issuing a particular command from a particular state. XCICS tracks these changes, and stops transactions from issuing the wrong command in the wrong state.

Synchronization

Many things can go wrong during the running of a transaction. The conversation protocol helps you to recover from errors and ensures that the two sides remain in step with each other. This use of the protocol is called synchronization. Synchronization allows you to protect resources such as transient data queues and files. If anything goes wrong during the running of a transaction, the associated resources should not be left in an inconsistent state.

APPC sync levels

The APPC architecture defines three levels of synchronization (called sync levels):

sync level 0 (NONE)
sync level 1 (CONFIRM)
sync level 2 (SYNCPOINT)
All these levels are supported by XCICS/TS.

At sync level 0, there is no system support for synchronization. It is nevertheless possible to achieve some degree of synchronization through the interchange of data, using the SEND and RECEIVE commands.

If you select sync level 1, you can use special commands for communication between the two conversation partners. One transaction can confirm the continued presence and readiness of the other. The user is responsible for preserving the data integrity of recoverable resources.

The level of synchronization described earlier in this section corresponds to sync level 2. Here, system support is available for maintaining the data integrity of recoverable resources. XCICS implies a syncpoint when it starts a transaction; that is, it initiates logging of changes to recoverable resources, but no control flows take place. XCICS takes a full syncpoint when a transaction terminates normally; a transaction abend causes rollback. The transactions themselves can also initiate syncpoint or rollback requests. However, a syncpoint or rollback request is propagated to another transaction only when the originating transaction is in conversation with that transaction, and only if sync level 2 has been selected for the conversation between them. Remember that syncpoint and rollback are not peculiar to any one conversation within a transaction: they are propagated on every sync level 2 conversation that is currently in bracket.

SYNCPOINT transmission

XCICS/TS supports three different modes of SYNCPOINT transmission over APPC:

simple
SYNCPOINT is performed with a two-phase-only PSH exchange:
implied_forget
SYNCPOINT is managed with a three-phase PSH handshake, using the implied-forget algorithm:
Implied-forget processing is the default.

explicit_forget
SYNCPOINT is managed with a four-phase PSH handshake:
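The three transmission modes above differ in how many PSH flows complete a syncpoint. As a summary of the descriptions above, the choice can be sketched in Python (the dictionary and function names are invented for illustration; only the mode names and phase counts come from the text):

```python
# Number of PSH phases per SYNCPOINT transmission mode, as described
# above. The mode names match the configuration values; the mapping
# itself is just an illustrative summary, not an XCICS data structure.
PSH_PHASES = {
    "simple": 2,           # two-phase-only exchange
    "implied_forget": 3,   # three-phase handshake, forget is implied
    "explicit_forget": 4,  # four-phase handshake with an explicit forget
}

DEFAULT_MODE = "implied_forget"  # used when the connection omits a mode


def phases_for(mode=None):
    """Return the PSH phase count for a configured mode, falling back
    to the implied-forget default when none is configured."""
    return PSH_PHASES[mode if mode is not None else DEFAULT_MODE]
```

For example, a connection with no configured syncpoint mode behaves as `implied_forget` and completes a syncpoint in three PSH flows.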
The type of SYNCPOINT processing can be defined for each connection at configuration time, or modified at run time using xcicsadm. If syncpoint processing is not configured in the connection definition, XCICS/TS handles syncpoints with the implied-forget mechanism by default. For example, in the connection definition:

define connection sysid=P390,

or at run time:

# xcicsadm --set-synctype-implied P390

XLN processing

XCICS/TS supports exchange log name (XLN) processing: during the connection acquisition phase, it exchanges data with the partner to identify itself and to obtain information about the capabilities and status of the remote system. The acquire processing can be activated from XCICS/TS (either automatically at region startup or manually using the xcicsadm utility) or by the remote system.
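One use of the exchanged XLN data is to compare the partner's current log name with the one remembered from a previous connection, so that each side can tell whether the partner's log is still the one it resynchronized against. The sketch below is an assumption-based illustration of that comparison only, with invented names; it is not the actual XCICS/TS XLN flow:

```python
# Illustrative sketch of exchange log name (XLN) checking at connection
# acquisition. Each side compares the log name the partner sends with
# the one it remembered from the previous connection; a mismatch
# suggests the partner's log has changed (for example, a cold start),
# so protected work may need resynchronization. This is an assumption,
# not the actual XCICS/TS implementation.
def check_log_name(remembered, received):
    if remembered is None:
        return "first-contact"   # no previous connection: record the name
    if remembered == received:
        return "warm"            # logs match: resync data still valid
    return "cold"                # partner's log changed: resync needed


check_log_name(None, "LOG.A")      # -> "first-contact"
check_log_name("LOG.A", "LOG.A")   # -> "warm"
check_log_name("LOG.A", "LOG.B")   # -> "cold"
```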
Known limitations

XCICS only supports APPC links; no MRO link is possible with remote regions. XCICS only supports LU6.2 communication. XCICS only supports mapped APPC; EXEC CICS GDS commands are not currently available.

Other docs

XCICS uses the same APPC APIs as IBM CICS. For further information about APPC programming, refer to the following docs: