# Life insurance demonstration
This repository includes a simple example of how to integrate, with minimum disruption, IBM Event Streams (Kafka) with an existing MQ-based framework that manages message distribution between MQ applications. The example is in the life insurance domain but could be applied to other industries.
## Architecture context
The existing solution integrates a queueing framework that receives messages from different applications (front end / mobile for the most part) and addresses message distribution, message transformation, error management, retries, notification, data life-cycle auditing, and governance.
The ask is to assess how an event-driven architecture can support or complement some of the framework's capabilities, and how it can propagate messages as events to event-driven microservices.
The figure below illustrates a generic view of how the existing framework runs:
- At the top, different front-end applications send transactional data (writes to the life insurance model) or non-transactional data.
- The APIs consumed by the front end may be mediated by ESB (IBM IIB) flows; some of those flows publish messages to IBM MQ queues.
- From those queues, different processes run together to enrich and transform the data so that subscriber applications can consume it.
- Other services are responsible for retries, auditing, error management, and notifying end users via mobile push or email back ends.
- An important component of this framework is the transaction event-sourcing capability: keeping the state of changes to transactional data of interest, for example a life insurance offer.
Different flows perform the required work; overall, the framework supports long-running transaction processing and a notification engine.
Also note that, to stay generic, this framework defines different message types (600 of them) and adapts the mediation flows via configuration.
The solution runs on premises, on bare metal or VMs.
## Requirements to demonstrate
- Address how to extend the existing architecture with Kafka-based middleware and stream processing (see the next section).
- Demonstrate stream processing with exactly-once delivery (see the transaction streaming component, and the configuration sketch after this list).
- Ensure event order is preserved: in the queuing approach with subscriptions, a message that arrives after another may be processed before the first one completes, which could impact data integrity.
- Demonstrate data transformation targeting different models, to prepare the data for a specific subscriber (a Kafka consumer) (see this streaming code).
- Support message content-based routing.
- Support a dead-letter queue for data in error.
- Support CloudEvents (cloudevents.io) to carry metadata around the message (see the sketch after this list).
- Support schema management in a registry to control message definitions in a single central repository.
- Demonstrate access control to topics (see the user declaration and the sketch after this list).
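For the exactly-once requirement, Kafka Streams exposes a processing-guarantee setting. The following is a minimal configuration sketch; the application id and broker address are placeholders, not this repository's actual settings:

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfig {
    // Builds a Kafka Streams configuration with exactly-once processing enabled.
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "lf-tx-streaming");   // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        // Exactly-once semantics; EXACTLY_ONCE_V2 needs Kafka brokers 2.5+ and Streams 3.0+.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        return props;
    }
}
```

Regarding ordering, note that Kafka guarantees order within a partition, so keeping a business key (for example the offer id) as the record key keeps related events in order.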
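For the CloudEvents requirement, the CloudEvents Java SDK (io.cloudevents) can wrap a message payload with standard metadata. This is a hypothetical sketch assuming the cloudevents-core dependency; the event type, source URI, and payload are illustrative:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;

public class TxEventFactory {
    // Wraps a life-insurance transaction payload in a CloudEvents v1.0 envelope.
    public static CloudEvent wrap(String json) {
        return CloudEventBuilder.v1()
                .withId(UUID.randomUUID().toString())
                .withType("lf.transaction.created")         // hypothetical event type
                .withSource(URI.create("/lf-tx-simulator")) // hypothetical producer URI
                .withDataContentType("application/json")
                .withData(json.getBytes(StandardCharsets.UTF_8))
                .build();
    }
}
```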
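For topic access control, Event Streams manages users as KafkaUser custom resources. The sketch below is an assumption-laden example: the user name, cluster label, topic, and group are placeholders, and the exact apiVersion depends on the Event Streams release:

```yaml
apiVersion: eventstreams.ibm.com/v1beta2   # assumption: varies by Event Streams release
kind: KafkaUser
metadata:
  name: lf-tx-consumer                     # placeholder user name
  labels:
    eventstreams.ibm.com/cluster: dev      # must match the Event Streams cluster name
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      # Allow reading the transaction topic only.
      - resource:
          type: topic
          name: lf-transactions            # placeholder topic
          patternType: literal
        operation: Read
      # Consumer-group access needed by Kafka consumers.
      - resource:
          type: group
          name: lf-consumer-group          # placeholder group
          patternType: literal
        operation: Read
```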
## Example of non-disruptive integration
The existing framework can be extended, without disruption, by deploying Kafka-based middleware (IBM Event Streams) and adding Kafka MQ source and sink connectors alongside it: transactional or non-transactional data is injected from the queues into different Kafka topics, as events ready to be processed as soon as they are created.
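As an illustration, a KafkaConnector resource for the IBM MQ source connector might look like the sketch below. All names (queue manager, channel, queue, topic, Connect cluster) are placeholders rather than this repository's actual values, and the apiVersion and cluster label depend on the Event Streams / Kafka Connect setup:

```yaml
apiVersion: eventstreams.ibm.com/v1beta2            # assumption: varies by Event Streams release
kind: KafkaConnector
metadata:
  name: mq-source
  labels:
    eventstreams.ibm.com/cluster: connect-cluster   # the Kafka Connect cluster to run in
spec:
  class: com.ibm.eventstreams.connect.mqsource.MQSourceConnector
  tasksMax: 1
  config:
    mq.queue.manager: QM1                           # placeholder queue manager
    mq.connection.name.list: mq-host(1414)          # placeholder host(port)
    mq.channel.name: DEV.APP.SVRCONN                # placeholder server-connection channel
    mq.queue: LF.TX.QUEUE                           # placeholder source queue
    topic: lf-transactions                          # placeholder target Kafka topic
    mq.record.builder: com.ibm.eventstreams.connect.mqsource.builders.DefaultRecordBuilder
    key.converter: org.apache.kafka.connect.storage.StringConverter
    value.converter: org.apache.kafka.connect.storage.StringConverter
```

An equivalent MQ sink connector configuration bridges events back to queues for subscribers that stay on MQ.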
- For stream processing, we propose data enrichment, data validation routing erroneous data to a dead-letter-queue topic, and data transformation publishing to two different topics for downstream subscribers: these validate content-based routing and enrichment, and exactly-once delivery with ordering guarantees (a topology sketch follows this list).
- The subscriber applications illustrated in the figure above could be new applications, or existing ones connected to Kafka directly; for subscribers still connected to queues, an MQ sink Kafka connector delivers the events back to MQ.
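A minimal Kafka Streams topology sketch for this enrichment/validation/routing step could look as follows; the topic names and the `isValid` / `enrich` logic are hypothetical stand-ins for the actual client-event-processing code:

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;

public class TransactionRouting {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Raw transactions produced by the MQ source connector (placeholder topic).
        KStream<String, String> transactions = builder.stream("lf-transactions");

        // Validation: route invalid records to the dead-letter topic.
        Map<String, KStream<String, String>> branches = transactions
                .split(Named.as("tx-"))
                .branch((key, value) -> !isValid(value), Branched.as("invalid"))
                .defaultBranch(Branched.as("valid"));
        branches.get("tx-invalid").to("lf-dead-letter");

        // Content-based routing with a hypothetical enrichment step:
        // offers go to one topic, everything else to another.
        KStream<String, String> valid = branches.get("tx-valid");
        valid.filter((key, value) -> value.contains("\"type\":\"offer\""))
             .mapValues(TransactionRouting::enrich)
             .to("lf-offers");
        valid.filterNot((key, value) -> value.contains("\"type\":\"offer\""))
             .to("lf-events");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "lf-tx-routing");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Exactly-once processing, as in the configuration sketch above.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        new KafkaStreams(builder.build(), props).start();
    }

    private static boolean isValid(String value) {
        return value != null && value.startsWith("{"); // placeholder validation rule
    }

    private static String enrich(String value) {
        return value; // placeholder enrichment
    }
}
```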
Read more:
- To understand Kafka topics and offsets, see this note
- Kafka MQ Source connector lab
- Dead letter queue pattern
## Domain model
We use the life insurance domain; the model is very limited as of now but could be extended in the future. See the design section.
## Components
We leverage the following IBM products:
- Event Streams, with a cluster definition in this eventstreams-dev YAML
- MQ broker with the AMQP protocol enabled (see this folder for a deployment example)
- Kafka Connector
- Event Endpoint Management
- Schema registry
And we develop three components to demonstrate how to support the requirements:
- A transaction simulator to send data to MQ, supporting the different demonstration goals. The app uses the Java Message Service (JMS) API and lives in the lf-tx-simulator folder; it also sends categories at startup (a JMS sketch follows this list).
- A Kafka Streams processing app using the standard Java Kafka Streams API, in the client-event-processing folder.
- The configuration for the MQ source connector; the YAML file is in the environments mq-source folder (a sketch appears in the integration section above).
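For reference, a minimal JMS send in the spirit of the simulator could look like the sketch below; the connection details, credentials, queue name, and payload are placeholders, not the repository's actual configuration:

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class TxSimulator {

    public static void main(String[] args) throws JMSException {
        // Client-mode connection to the MQ queue manager (placeholder values).
        MQConnectionFactory factory = new MQConnectionFactory();
        factory.setHostName("localhost");
        factory.setPort(1414);
        factory.setQueueManager("QM1");
        factory.setChannel("DEV.APP.SVRCONN");
        factory.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        try (Connection connection = factory.createConnection("app", "passw0rd")) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("LF.TX.QUEUE");   // placeholder queue
            MessageProducer producer = session.createProducer(queue);

            // Send one simulated life-insurance transaction as JSON text.
            TextMessage message = session.createTextMessage(
                "{\"offerId\":\"O-001\",\"type\":\"offer\",\"amount\":125000}");
            producer.send(message);
        }
    }
}
```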