Introduction

GOALS: Improve the Quality of Service (QoS) for the JBI platform and endpoints
• E.g., latency, fault tolerance, scalability, graceful degradation
• Includes the QoS of interactions between JBI endpoints

APPROACH: Build on QoS-aware middleware
• Examined the Java Message Service (JMS) and Real-Time Java (e.g., RTSJ)
• Currently using the OMG Data Distribution Service (DDS)

[Figure: JBI clients (publishers and subscribers) interacting through JBI server(s) that use DDS internally]

Original Proposed JBI/DDS Architecture

Questions/Challenges:
• Can important pieces of JBI be implemented with DDS?
  – DDS does not include centralized brokering
  – DDS does not include archival data (a limit to queries)
• Can better QoS properties be attained with DDS?
  – Note: DDS QoS parameters such as Deadline, Reliability, Latency Budget, and Resource Limits are useful here

Processing an Information Object (IO)

Scenario: Broker #1 in each JBI server is servicing clients subscribing to ImageInfo objects.
1. JBI Client 1 publishes an ImageInfo object.
2. The PubCatcher converts the XML to C++ binary and sends a DDS publication to the local brokers and to the other JBI servers' PubCatchers.
3. The PubCatcher on JBI Server 2 sends a DDS publication to the brokers on its local DDS domain.
4. JBI Client 4's predicate does not match.
5. The broker converts the publication back to XML to send it to the Disseminator.
6. The Disseminator forwards the publication to JBI Client 3.

[Figure: JBI Client 1 (publisher, IOType: ImageInfo) publishing through the PubCatchers, Brokers #1/#2, and Disseminator(s) of JBI Servers 1 and 2 to JBI Client 2 (subscriber, IOType: SARInfo), JBI Client 3 (subscriber, IOType: ImageInfo), and JBI Client 4 (subscriber, IOType: ImageInfo). Legend distinguishes XML from C++ binary traffic.]

• Bandwidth savings: transmitting C++ binary data rather than XML
• Brokering savings: parsing an IO once and doing most predicate evaluation in DDS

Introduction of a New Information Object Type

When a new IO type is introduced, its XML schema is processed as follows (a hedged code sketch of the resulting artifacts appears below, after the DDS/JBI Connector panel):
1. Parse the XML schema.
2. Generate an IDL (Interface Definition Language) structure.
3. Compile the IDL into DDS-compatible C++ code.
4. Generate XML-to-C++ conversion code.
5. Generate C++-to-XML conversion code.
6. Compile everything into generated object code.

JBI/DDS Architecture Implementation

Completed work:
• Demonstrated feasibility of the proposed architecture
• IDL generation from supported XML schemas
• Prototype of XML IO conversion
• DDS predicate generation from supported XPath predicates

Difficulties:
• Security – dynamic code generation
• Aggressive application of DDS

Potential next steps:
• Automate generation of XML IO conversion code
• Automate compilation of generated source code
• Dynamically load generated code
• Evaluate the expressibility of XML schemas and XPath predicates, as used by JBI, when converted to IDL and DDS predicates
• Apply DDS in a less aggressive manner (see the right panel)

Interim experiment: The 100X JBI uses "JBI Connectors" to connect multiple 100X servers.

The 100X JBI Connector
[Figure: Three 100X JBI servers and their connectors.]

The DDS/JBI Connector
• Executes alongside the 100X JBI servers
• Connects multiple 100X JBI servers
• Candidate scheme for integration
• TAO DDS is used between the JBI servers
• The DDS code is transparent to the JBI servers and JBI clients
[Figure: Three 100X JBI servers and their DDS/JBI connectors.]
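To make the IO-processing walkthrough and the new-IO-type pipeline above more concrete, the following C++ sketch shows roughly what the generated artifacts and the DDS-side brokering could look like. It is illustrative only: the ImageInfo field names, the helper names, and the XPath predicate are assumptions rather than the prototype's actual schema, and the sketch assumes a DDS implementation that supports content-filtered topics (header names are OpenDDS-style).

    // Hypothetical C++ type that the IDL compiler might produce for an ImageInfo
    // IO (field names are illustrative, not the real JBI schema), e.g. from:
    //   module JBI { struct ImageInfo { string source; long resolution; }; };
    //
    // Assumed generated conversion helpers (placeholder names):
    //   JBI::ImageInfo xml_to_binary(const std::string& xml_document);
    //   std::string    binary_to_xml(const JBI::ImageInfo& sample);

    #include <dds/DdsDcpsDomainC.h>   // OpenDDS-style headers; adjust for the DDS
    #include <dds/DdsDcpsTopicC.h>    // implementation actually in use

    // Sketch of mapping an XPath predicate such as
    //   /ImageInfo[resolution > 5 and source = "UAV-1"]
    // onto a DDS content-filtered topic, so that most predicate evaluation
    // happens inside DDS rather than in the JBI broker.
    DDS::ContentFilteredTopic_var
    make_broker_filter(DDS::DomainParticipant_ptr participant,
                       DDS::Topic_ptr image_info_topic)
    {
      DDS::StringSeq no_params;  // no runtime filter parameters in this sketch
      return participant->create_contentfilteredtopic(
          "ImageInfo_Broker1_Filter",             // name of the filtered topic
          image_info_topic,                       // topic carrying ImageInfo samples
          "resolution > 5 AND source = 'UAV-1'",  // DDS SQL-subset filter expression
          no_params);
    }

In the architecture described above, a filter expression like this would be generated automatically from a subscriber's XPath predicate; that automatic mapping is the intent of the "DDS predicate generation" item listed under Completed work and is what enables the brokering savings.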
TAO DDS QoS Policies

• TAO DDS (later renamed to OpenDDS) currently supports five of the 22 QoS policies defined in the DDS specification. A hedged configuration sketch for the policies below appears at the end of this section.
• History/Durability
  – Controls whether samples are discarded by DDS after being sent to all known subscribers, or whether a certain number of samples is kept to send to late-joining subscribers
  – Verified with a three-node configuration experiment, allowing a new or restarted Connector to get a snapshot of the past
• Liveliness
  – Supports the Automatic setting, which indicates that DDS should periodically poll participants at a configurable interval
  – When an SSH tunnel is broken or a DDS/JBI Connector exits, the DDS components on the other nodes are notified within the lease duration. The DDS components are also notified when a previously unreachable DDS/JBI Connector becomes reachable again (the SSH tunnel is restored and/or the DDS/JBI Connector restarts).
  – One major issue identified: the current JBI provides no mechanism through which the DDS/JBI Connector can communicate liveliness information. That is, the DDS/JBI Connector has no way of notifying the JBI that a node has gone down or reappeared.
• Reliability and Resource Limits
  – Only the use of TCP transports with Reliability set to Reliable ensured message delivery. Other settings resulted in samples being dropped under heavy test load (or in an error message indicating that the maximum blocking time had been exceeded).

DDS/JBI Connector details:
• Provides QoS-enabled dissemination between multiple JBI servers and compression of JBI IOs
• The dashed connections between the DDS/JBI Connectors and the DCPSInfoRepo are part of the TAO DDS architecture
• All JBI IO types are represented with just one DDS message type and, currently, one DDS topic (a sketch of such a generic type appears at the end of this section)
• Performance results (latency) were obtained for the 100X Connector, the DDS/JBI Connector with compression, and the DDS/JBI Connector without compression
  – Message type of approximately one kilobyte in size
  – For both the 100X and DDS/JBI Connectors, most of the latency can be attributed to network communication time rather than to connector overhead
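As a rough illustration of how the History/Durability, Liveliness, Reliability, and Resource Limits policies discussed above are set through the standard DDS API, here is a minimal C++ sketch. The numeric values (history depth, lease duration, blocking time, sample limit) are illustrative assumptions, not the values used in the experiments; the QoS structures and enumerators come from the standard DDS C++ API as implemented by TAO DDS/OpenDDS.

    #include <dds/DdsDcpsPublicationC.h>   // OpenDDS-style header; adjust as needed

    // Minimal sketch: configure the QoS policies discussed above on a DataWriter.
    void configure_writer_qos(DDS::Publisher_ptr publisher, DDS::DataWriterQos& qos)
    {
      publisher->get_default_datawriter_qos(qos);

      // History/Durability: keep the last N samples so a late-joining or
      // restarted DDS/JBI Connector can receive a snapshot of the past.
      qos.durability.kind = DDS::TRANSIENT_LOCAL_DURABILITY_QOS;
      qos.history.kind = DDS::KEEP_LAST_HISTORY_QOS;
      qos.history.depth = 10;                       // illustrative depth

      // Liveliness: Automatic setting; DDS asserts liveliness periodically and
      // peers detect a lost participant within the lease duration.
      qos.liveliness.kind = DDS::AUTOMATIC_LIVELINESS_QOS;
      qos.liveliness.lease_duration.sec = 5;        // illustrative lease duration
      qos.liveliness.lease_duration.nanosec = 0;

      // Reliability and Resource Limits: Reliable delivery (over a TCP transport)
      // was the only combination observed to avoid dropped samples under load.
      qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;
      qos.reliability.max_blocking_time.sec = 1;    // bound on blocking writes
      qos.reliability.max_blocking_time.nanosec = 0;
      qos.resource_limits.max_samples = 1000;       // illustrative limit
    }

A TRANSIENT_LOCAL durability setting combined with KEEP_LAST history is the standard DDS mechanism for giving late-joining readers a snapshot of recent samples, which matches the three-node restart experiment described above.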
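The single-message-type representation mentioned above could look roughly like the following. The struct is a hypothetical illustration only: the field names and the compression flag are assumptions, and the Connector's actual type would be defined in IDL and generated rather than written directly in C++.

    #include <string>
    #include <vector>

    // Hypothetical shape of the one generic DDS message type carrying every JBI
    // IO type on a single topic. Illustrative only; not the Connector's actual
    // definition.
    struct GenericJbiIO
    {
      std::string io_type;                 // JBI IO type name, e.g. "ImageInfo" or "SARInfo"
      bool compressed;                     // true if 'payload' holds compressed XML
      std::vector<unsigned char> payload;  // the IO's XML, optionally compressed
    };

One plausible reading of this design is that a single generic type avoids per-IO-type IDL generation (the "less aggressive" application of DDS), at the cost that DDS no longer sees individual IO fields and so cannot perform content filtering on them.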