Share sun-scheduler-bc jobs across instances

General Information: As we know, sun-scheduler-bc provides scheduling capabilities for initiating JBI services. The binding component is powered by Quartz (http://www.opensymphony.com/quartz/) and allows you to schedule triggers that launch (consume) other JBI components. We can use this component to schedule "Simple", "Cron", or "Hybrid" triggers as needed. When you deploy a sun-scheduler-based composite application into the JBI container, it uses the "Quartz RAM Store" by default to hold its job definitions, which is fine if that composite application is going to run in a single instance.

Problem with Quartz RAM Store: What happens if you deploy the same composite application into multiple cluster instances? The answer is that the trigger fires in all the cluster instances at the same time, so the same process runs multiple times on different instances, which defeats the purpose of the cluster configuration.

Solution: To avoid this problem we have to configure sun-scheduler-bc to share the Quartz job details across cluster instances using the "Quartz Persistent Job Store" configuration property, so that only one of the cluster instances hosting the composite application fires a given trigger at a time. Follow the steps below to configure sun-scheduler-bc with a Quartz Persistent Job Store (GlassFish cluster mode).

Note: In this configuration I've used MySQL as the persistent job store for the sun-scheduler-bc composite application's job details. You can choose your own database, as supported by the Quartz API (http://svn.terracotta.org/svn/quartz/tags/quartz-1.8.6/docs/dbTables/).

Step 1: Set Up the MySQL DB for the Job Store
1. Create a schema in the MySQL database (e.g. oe_sun_schedulder_bc_job_store).
2. Download the MySQL-based job store SQL from http://svn.terracotta.org/svn/quartz/tags/quartz-1.8.6/docs/dbTables/tables_mysql.sql, or check the attachment Quartz_Mysql_Job_store for the same SQL scripts.
3. Run...
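
Under the hood, the Quartz Persistent Job Store setting amounts to pointing Quartz at a JDBC job store running in clustered mode. As a rough, standalone illustration of what that configuration looks like at the Quartz level (this is not the actual sun-scheduler-bc code; the scheduler name, datasource name, and credentials below are placeholders), consider this sketch against the MySQL schema created in Step 1:

    import java.util.Properties;
    import org.quartz.Scheduler;
    import org.quartz.impl.StdSchedulerFactory;

    public class ClusteredJobStoreSketch {
        public static void main(String[] args) throws Exception {
            Properties p = new Properties();
            // Same scheduler name on every cluster instance, unique instance id per node.
            p.setProperty("org.quartz.scheduler.instanceName", "SunSchedulerBcJobs");
            p.setProperty("org.quartz.scheduler.instanceId", "AUTO");
            p.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            p.setProperty("org.quartz.threadPool.threadCount", "5");
            // JDBC job store in clustered mode: each trigger fires on only one node.
            p.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
            p.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
            p.setProperty("org.quartz.jobStore.tablePrefix", "QRTZ_");
            p.setProperty("org.quartz.jobStore.isClustered", "true");
            p.setProperty("org.quartz.jobStore.dataSource", "jobStoreDs");
            // MySQL datasource pointing at the schema created in Step 1 (credentials are placeholders).
            p.setProperty("org.quartz.dataSource.jobStoreDs.driver", "com.mysql.jdbc.Driver");
            p.setProperty("org.quartz.dataSource.jobStoreDs.URL", "jdbc:mysql://localhost:3306/oe_sun_schedulder_bc_job_store");
            p.setProperty("org.quartz.dataSource.jobStoreDs.user", "quartz");
            p.setProperty("org.quartz.dataSource.jobStoreDs.password", "quartz");

            Scheduler scheduler = new StdSchedulerFactory(p).getScheduler();
            scheduler.start(); // jobs and triggers are now persisted in MySQL and shared across nodes
        }
    }

With org.quartz.jobStore.isClustered set to true and all instances sharing the same database tables, Quartz ensures that a given trigger fires on only one node, which is exactly the behaviour we want from the clustered composite application.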

GFESBv22_EphemeralSequenceGenerator

Author: Michael.Czapski@sun.com. Updated and verified on OpenESB 2.3 by Jay Shankar Gupta, Logicoy Inc.

Table of Contents
- Introduction
- JVM-Global, Ephemeral Sequence Generator
- Creating POJO for Getting the Sequence
- HL7 Message ID Enricher
- Developing the WSDL and XSDs
- BPEL Design
- Create Composite Application
- Testing
- Summary
- References

Introduction
When working on the HA solutions discussed in my blog [1], I realized that it would be difficult to work out whether messages were delivered in order, as was required, and whether any were missing. I got over the issue by ensuring that my test data was prepared in such a way that the messages in each test file had increasing, contiguous sequence numbers embedded in them. For HL7 v2, the messaging standard I was dealing with, I used the MSH-10 Message Control ID field. I wrote processed messages and acknowledgements to files whose names embedded the MSH-10 Message Control ID, with the sequence number, so that breaks in sequence and out-of-order messages could be readily detected. With multiple message files containing between 1 and 50,000 messages, adding a sequence number to each message by hand was clearly out of the question, so I put GlassFish ESB to use. I constructed a file-to-file BPEL module project to read each test file and prepend a sequence number to each message's MSH-10 field. The only snag was how to get a sequence number that would start at 0 and increase by 1 for each message, such that each BPEL process instance would get the next sequence and messages would be written to the output file in order. This note discusses how I went about...
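
The body of the note develops that generator as a POJO invoked from BPEL. As a rough sketch of the underlying idea (the class and method names here are illustrative, not the ones the article builds later), a JVM-global, ephemeral sequence can be as small as:

    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative sketch only: a JVM-global, ephemeral sequence generator.
    // "Ephemeral" because the counter lives only as long as the JVM; it starts
    // again at 0 whenever the application server is restarted.
    public class EphemeralSequence {

        // One counter per JVM, shared by every BPEL process instance running in it.
        private static final AtomicLong COUNTER = new AtomicLong(0L);

        // Returns 0, 1, 2, ...; getAndIncrement() is atomic, so concurrent
        // process instances each receive a distinct, monotonically increasing value.
        public long nextValue() {
            return COUNTER.getAndIncrement();
        }
    }

Because the counter is a static field, every process instance in the same JVM draws from the same sequence, and because nothing is persisted, the numbering restarts at 0 on server restart, which is what "ephemeral" means here.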

Creating and Configuring Glassfish Cluster with MQ Cluster

1. Install GlassFish with the cluster profile:
   a. Make sure the ANT_HOME environment variable points to the Ant installation folder, for example "C:\Program Files\apache-ant-1.7.1".
   b. Make sure the PATH variable includes Ant's bin folder, for example "C:\Program Files\apache-ant-1.7.1\bin".
   c. Go to the folder where the GlassFish jar (glassfish-installer-v2.1.1-b31g-windows.jar) was extracted.
   d. Run "ant -f setup-cluster.xml". You should see a "BUILD SUCCESSFUL" message.
   e. Repeat the installation on all the systems where GlassFish needs to be installed.
2. Start the GlassFish domain:
   a. Choose one of the GlassFish installations as the Domain Administration Server (DAS). The domain administration server is used to administer all the servers in the cluster.
   b. Start the Domain Administration Server: go to the glassfish\bin folder and run "asadmin start-domain domain1".
3. Create node agents:
   a. A node agent needs to be created on each system where we need a GlassFish cluster instance.
   b. Go to the DAS admin console.
   c. Navigate to Node Agents in the left-hand tree.
   d. Click "New" to create a node agent.
   e. Give the node agent a name and click OK.
   f. We still need to run the "asadmin create-node-agent" command on all the host machines that will participate in this cluster; for now, click OK.
   g. Follow the above steps to create node agents for each machine.
   h. The screen should look like this:
   i. Go to a system that will participate in the cluster. Open a command prompt and change directory to <GlassFishDir>/glassfish/bin.
   j. Run "asadmin create-node-agent --host <host or IP> --port 4848 <node-agent-name>". The node-agent-name should be exactly the same as the one given in the admin console; the host or IP is the DAS host / IP. If the system is not able to...
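
For quick reference, the command-line portion of the steps above boils down to roughly the following (the host, port, and node-agent name are the placeholders used above; starting the node agent with asadmin start-node-agent is the usual follow-on step once it has been created):

    On the DAS machine, from the glassfish\bin folder:
        asadmin start-domain domain1

    On each machine that will host a cluster instance, from <GlassFishDir>/glassfish/bin:
        asadmin create-node-agent --host <DAS host or IP> --port 4848 <node-agent-name>
        asadmin start-node-agent <node-agent-name>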