Get rid of ZooKeeper and use KRaft

In this post, I will explain how to use KRaft and why you should use it.

KRaft is a consensus protocol that allows Kafka to work independently, without ZooKeeper.

This simplifies the architecture: there is no more need to run two different services (Kafka and ZooKeeper) to share metadata. It also makes Kafka failover control almost instant, so Kafka starts and stops faster.

The feature was introduced in Kafka 2.8 and is enhanced with every new version, BUT for the moment (June 2022) this solution is not yet production ready.

Of course, this slightly changes the infrastructure and the connection method.

With ZooKeeper → Without ZooKeeper:

  • Client and service configuration: zookeeper.connect=zookeeper:2181 → bootstrap.servers=broker:9092
  • Schema Registry configuration: kafkastore.connection.url=zookeeper:2181 → kafkastore.bootstrap.servers=broker:9092
  • Kafka admin tools: kafka-topics --zookeeper zookeeper:2181 → kafka-topics --bootstrap-server broker:9092 ... --command-config <properties to connect to the brokers>
  • REST Proxy API: v1 → v2 and v3
  • Get cluster ID: zookeeper-shell zookeeper:2181 get /cluster/id → kafka-metadata-quorum, or view meta.properties, or confluent cluster describe --url http://broker:8090 --output json

How to configure and start Kafka with KRaft

Generate a cluster UUID:

./bin/kafka-storage.sh random-uuid
xtzWWN4bTjitpL3kfd9s5g

Format the data directory to be compatible with KRaft (run this on each Kafka broker); do not forget to set the cluster ID generated above.

./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties
Formatting /tmp/kraft-combined-logs --cluster-id XXXXXXXXXX

In the file server.properties, do not forget to list all the controller hosts in the line "controller.quorum.voters":

process.roles=controller
node.id=1
listeners=CONTROLLER://controller1.example.com:9093
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
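The sample above is for a dedicated controller (process.roles=controller). For a single-node sandbox, the broker and controller roles can also be combined in one process; a minimal combined-mode server.properties sketch (host names and paths are assumptions to adapt):

```properties
# Combined mode: this process is both a broker and a controller
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
log.dirs=/tmp/kraft-combined-logs
```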

Start Kafka, pointing to the KRaft config file:

./bin/kafka-server-start.sh ./config/kraft/server.properties

You are now ready to work with a broker that has no ZooKeeper dependency to be healthy.

The MQ Explorer

In this post, I will help you understand what the Eclipse-based tool MQ Explorer is, how to use it and what options are available.

When you download the MQ for Developers software (you can find the link here: https://developer.ibm.com/articles/mq-downloads), it contains the MQ service and the MQ Explorer tool.

It allows you to visualize the configuration of queue managers, but also the data state and the metadata of each object (queues, topics, subscriptions, ...).

It is useful both for developers, to debug and configure objects for their future developments, and for administrators, to configure objects or export configurations.

After installing the MQ for Developers edition, start the MQ Explorer software.

When the tool is running, you will see the MQ Explorer – Navigator tab in the top left corner of the Eclipse view.

Let's begin by explaining each element:

  • Queue Managers: This is one of the most important elements. It allows you to create a new local queue manager, transfer a local queue manager to a distant server, connect to a remote queue manager, and also run tests on all the configured local and remote queue managers.
  • Queue Manager Clusters: This element allows you to create a queue manager cluster. It will help you configure all the sender and receiver channels needed to have an up-and-running cluster. The cluster can be configured with two different options:
    • Full repository: each QM member of the cluster will synchronize all objects and data.
    • Partial repository: each QM member of the cluster will only synchronize objects. Data won't be synchronized.
    • The choice between the two options can have an impact on network bandwidth. For instance, only the objects may need to be the same while the queue managers are separated by region (the Europe region will not have the same data as the Asia region, but the applications putting data will behave the same way in Europe and in Asia).
  • JMS Administered Objects: This element is useful to create a JNDI bindings configuration file to share with developers. Each bindings file will have the connection details to connect to a queue manager, a queue manager HA setup or a queue manager cluster, and also the destination queues/topics to send data to. This allows you to control and restrict queue and topic availability.
  • Managed File Transfer: For those who have the MQ FT feature enabled on at least two different queue managers and have configured their agents, this option allows you to test, trace and configure file transfers. It is also possible to schedule file transfers from MQ Explorer.
  • Service Definition Repositories: This option is less used than before. It allows you to create documentation that will become a WSDL to deploy, to provide description and documentation information. It can be compared to IBM WSRR.

Create a local queue manager with MQ Explorer

Create a local queue manager with MQSC
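As a sketch, the equivalent steps from a command prompt might look like this (the queue manager, queue and listener names are just examples; adapt the port to your environment):

```
crtmqm QM1
strmqm QM1
runmqsc QM1 <<'EOF'
DEFINE QLOCAL('DEV.QUEUE.1') DEFPSIST(YES)
DEFINE LISTENER('DEV.LISTENER.TCP') TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER('DEV.LISTENER.TCP')
END
EOF
```

crtmqm creates the queue manager, strmqm starts it, and runmqsc feeds the MQSC commands that define a local queue and a TCP listener.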

Message structure

In this post, I will explain how MQ messages are structured.

An MQ message contains two main sections: the header and the data. (It can be compared to JMS, where a message is split into a JMS header and JMS data.)

  1. The Header:
    1. The header contains the MQMD (Message Queue Message Descriptor), with all the useful information to handle the message metadata.
  2. The data:
    1. The data contains three different pieces of information.
      1. The MQ RFH 1: RFH contains useful metadata information such as:
        1. The reply-to queue, for dynamic request/reply.
        2. The DLQ name, in case of technical error.
        3. The persistence flag: keep the message across an MQ restart or not.
        4. The message type.
        5. The priority number: a long value used to prioritize message reads.
        6. The group ID: reads based on the group number, for security context.
        7. The sequence number: in case the data is partitioned into multiple messages.
      2. The MQ RFH 2: RFH 2 contains useful metadata information such as:
        1. Version.
        2. CCSID: the Coded Character Set ID as a long value (for instance 1208 is UTF-8, 1200 is UTF-16, 13488 is UTF-16 V2, 17584 is UTF-16 V3).
        3. Encoding: a long value representing the number representation (byte order) used by the OS running MQ; 273 is for big-endian Unix systems and 546 is for little-endian Linux and Windows systems.
        4. Format: the format will be MQHRF2.
        5. Structure Length: the size in bytes of the MQRFH2.
      3. The payload:
        1. The data payload can be:
          1. Binary
          2. XML
          3. JSON
          4. DFDL
          5. MIME
          6. MRM
          7. CUSTOM

It is possible to convert an MQ message to a JMS message and vice versa. (On this topic, there are two nodes in ACE allowing you to do it: see the post "The list of Nodes", JMSMQTransform and MQJMSTransform.)

RFHUTIL

In this post, I will explain what RFHUTIL is, where to download it and how to use it.

RFHUTIL is a useful tool to read, write and browse messages located on a queue, or to subscribe and publish to a topic.

The tool can be downloaded here https://github.com/ibm-messaging/mq-rfhutil

From your command prompt, run: git clone https://github.com/ibm-messaging/mq-rfhutil.git

In the folder bin\Release, you will see two exe files: rfhutil.exe and rfhutilc.exe.

The difference between the two files is the connection method.

rfhutil.exe will connect to a local queue manager using the local bindings method.

rfhutilc.exe (also called RFHUTIL client) will connect to any queue manager using a server connection channel (with host, port, protocol and channel name).

A useful option is being able to read a message from a queue, save it into a file, and later load and resend that message. This comes in handy for development and/or debugging.

The list of Nodes

In this post, I will explain each node available in IIB/ACE/CP4I and the purpose of each one.

A detailed description of each node can be found here: https://www.ibm.com/docs/en/app-connect/11.0.0?topic=development-built-in-nodes

Nodes have input terminals and output terminals to allow messages to pass through nodes and be processed.

Some nodes have fixed input/output terminals, but other nodes can be customized with custom input/output terminals. To link a node to another, we connect the output terminal of a node to an input terminal of the next node; this is how flows are developed.

I will browse each node group and describe all the nodes one by one.

Every node with a green line drawn on the left of its icon is called an entry node: it is a starting-point node.

Every node with a blue line drawn on the right of its icon is called an exit node: it is an ending-point node.

To link nodes to each other, the nodes have terminals to make the connection. Some nodes have static terminals such as out, failure and error; other nodes can have both static and custom dynamic terminals.

  • IBM MQ group:
    1. MQ Input will read any incoming message entering the queue.
    2. MQ Output will write the output message to the target queue.
    3. MQ Reply is similar to MQ Output but replies to the replyToQueue name.
    4. MQ Get will read only one message; the read will often be based on message ID or correlation ID.
    5. MQ Header has 4 different options: forward the header, add an MQ header, remove an MQ header and update MQ header values.
    6. MQ connections can be made in 3 different ways: local queue manager (a local binding is used to find the queue manager), MQ client connection properties (the connection can be static or linked to an MQ policy), or a secured connection based on a CCDT (client channel definition table).
    7. MQ Input, MQ Output and MQ Reply can be transactional, and the choice is very important, especially for error management.
  • MQTT group:
    1. MQTT is a lightweight publish/subscribe messaging protocol designed to send minimal messages for IoT devices, based on topics.
    2. MQTTSubscribe will listen to a topic based on the topic name, a host and a port.
    3. MQTTPublish will write the message to a topic based on the topic name, a host and a port.
  • Kafka group:
    1. KafkaConsumer will listen to a topic based on a topic name, a list of Kafka brokers and a consumer group ID (consumer groups will be described in the "Kafka with a Java example" post).
    2. KafkaProducer will write to a topic based on a topic name and a list of Kafka brokers.
    3. KafkaRead can be compared to MQ Get. It will read only one message from a topic based on a topic name, a list of Kafka brokers, the partition number and the offset number (similar to an index from where it will read).
  • JMS group: (Java Message Service; all the nodes are 1.1 and 2.0 compliant, and JMS connections are based on JNDI binding connections that can be created from the MQ Explorer tool)
    1. JMS Input will read any incoming message entering the queue (can be compared to MQ Input).
    2. JMS Output will write the output message to the target queue (can be compared to MQ Output).
    3. JMS Reply is similar to JMS Output but replies to the replyToQueue name (can be compared to MQ Reply).
    4. JMSReceive will read only one message; the read will often be based on message ID, correlation ID and priority (can be compared to MQ Get).
    5. JMSHeader has 4 different options: forward the header, add a JMS header, remove a JMS header and update JMS header values (can be compared to MQ Header).
    6. JMSMQTransform is a node that transforms a JMS message into an MQ message.
    7. MQJMSTransform is a node that transforms an MQ message into a JMS message.
  • HTTP group:
    1. HTTP Input is a web server message receiver.
    2. HTTP Reply replies to a message coming from an HTTP Input. The response is matched based on the HTTP session ID.
    3. HTTP Request will make an HTTP request (such as a curl) with different options available: GET (to extract information from the target server), HEAD (similar to GET but only asks for the header information), POST (to send data to insert into the target server), DELETE (to remove data from the target server), PUT (to update data on the target server), PATCH (to apply a partial update to data on the target server), OPTIONS (to ask the target server which communication options are available).
    4. HTTP Header has 4 different options: forward the header, add an HTTP header, remove an HTTP header and update HTTP header values.
    5. HTTP Async Request is similar to HTTP Request (which is a sync node) but releases the requestor, which gets the response later.
    6. HTTP Async Response is similar to HTTP Reply (which is a sync node) and handles the response for the async requestor, based on the session ID.
  • REST group:
    1. REST Request is the node that sends a JSON request to a REST service and waits for the response (sync node).
    2. REST Async Request is the node that sends a JSON request to a REST service and does not wait for the response (async node).
    3. REST Async Response is the node that waits for a JSON response from a REST service, based on a session ID (async node).
  • Web Services group:
    1. SOAP Input is a SOAP web service listener waiting for a SOAP request. This node has two operation modes:
      1. Specific to a WSDL/XSD, which defines the contract to respect to call the service.
      2. As a gateway, which is a generic mode accepting any valid SOAP request.
    2. SOAP Reply is a SOAP replier that sends a SOAP response based on the HTTP session ID.
    3. SOAP Request is a SOAP client that sends a SOAP request to a SOAP web service.
    4. SOAP Async Request is similar to SOAP Request but will not wait for the response.
    5. SOAP Async Response will wait for the SOAP async response, based on a session ID.
    6. SOAP Envelope is a node that helps create the SOAP envelope.
    7. SOAP Extract will retrieve specific requested information from a SOAP envelope into a target variable, but can also route the message based on certain conditions.
    8. Registry Lookup is a node that can retrieve any entity information from IBM WSRR (WebSphere Service Registry and Repository). WSRR is a tool that helps register web service metadata, but also the WSDL and XSD.
    9. Endpoint Lookup is similar to Registry Lookup but retrieves endpoint information.
  • Adapters group:
    1. PeopleSoft:
      1. PeopleSoft Input is a node that waits for compatible requests from the CRM tool.
      2. PeopleSoft Request is a node that sends compatible requests to the CRM tool.
    2. SAP:
      1. SAP Input is a node that waits for compatible requests from the accounting tool.
      2. SAP Request is a node that sends compatible requests to the accounting tool.
      3. SAP Reply is a node that sends compatible responses to the accounting tool.
    3. Siebel:
      1. Siebel Input is a node that waits for compatible requests from the CRM tool.
      2. Siebel Request is a node that sends compatible requests to the CRM tool.
    4. JDEdwards:
      1. JDEdwards Input is a node that waits for compatible requests from the ERP tool.
      2. JDEdwards Request is a node that sends compatible requests to the ERP tool.
  • Routing group:
    1. Routing:
      1. Filter is an ESQL node to route messages based on any business logic developed in the ESQL filter module. The result can go to different output terminals: TRUE (the condition is respected), FALSE (the condition is not respected), UNKNOWN (the condition matches neither TRUE nor FALSE), FAILURE (technical error).
      2. Label: a label is a routing point that can be called from ESQL (with ROUTE TO LABEL LABEL_NAME) or from the RouteToLabel node.
      3. Publication is a node that can filter a message and publish it to a topic based on the condition of a filter.
      4. RouteToLabel is a node that forwards the message to the target label; the label name must be located in the LocalEnvironment variable "OutputLocalEnvironment.Destination.RouterList.DestinationData[1].labelName". You can specify multiple labels in the DestinationData array to forward the message to multiple labels at once.
      5. Route is a very useful node allowing you to send messages to the next nodes by output terminal. The Route node checks the route condition with an XPath expression and uses the output terminal name to send the message to the next node. This node has 3 default terminals and has the particularity of allowing custom output terminal creation:
        1. Failure: in case of technical error.
        2. Default: the message does not match any XPath filter pattern.
        3. Match: the message matches an XPath filter pattern.
    2. Aggregation: belongs to the aggregation EAI design pattern. All the aggregation nodes need a default queue manager; aggregation sessions are stored in specific system queues.
      1. Aggregate Control is a node that initiates the aggregation session for the subsequent aggregate requests and replies.
      2. Aggregate Reply is the node waiting for all the responses from a backend, marking each sub-session as replied with a response or a timeout. After the Aggregate Reply, the next action will be to loop over the responses to process the aggregation.
      3. Aggregate Request is the node that initiates a sub-session for a new request made to a backend.
      4. Collector is a node that creates collections from different response messages based on filter patterns. It can also group based on the number of responses, on XPath expressions, or on a timeout.
      5. Resequence is a useful node to reorder messages when some messages need to be sent in a specific order.
      6. Sequence is the node used by the Resequence node. It sets a sequence number on the messages.
    3. Grouping: belongs to the aggregation EAI design pattern. These nodes are similar to the aggregation nodes but have the particularity of not needing a default queue manager. They were developed because ACE can be deployed and installed as a standalone integration server without a queue manager (very important for future microservice integration server instances, in containers or not). !!! The session and sub-session mechanism must be implemented by yourself in ESQL, where the aggregation nodes do it for you with MQ system queues !!!
      1. Group Scatter is similar to Aggregate Control and initiates the aggregation session.
      2. Group Gather is a node that marks the handling of a response (reply or timeout).
      3. Group Complete is similar to Aggregate Reply and starts asynchronously when all the Group Gather nodes have a response. From there, the next action will be to loop over the responses to process the aggregation.
  • .NET group:
    1. .NET Input is a node that retrieves data from Microsoft MQ, a file or a DB, based on an assembly (DLL). As an entry node, it initiates the start of a flow.
  • Transformation group:
    1. .NET Compute is a node that processes messages with .NET code wrapped in an assembly (DLL). It is based on .NET classes and interfaces specific to IIB/ACE to process Input, Output, Environment variables and LocalEnvironment variables.
    2. Mapping is a powerful visual data mapper (coming from another IBM product called WTX, WebSphere Transformation Extender). It can work with DFDL, MRM, XML (XSD contract) and JSON (JSON Schema or Swagger 2.0 contract).
    3. XSL Transformation is a node that executes an XSLT file against a message to process transformations.
    4. Compute is the famous ESQL compute module to process messages for transformation and/or routing.
    5. Java Compute is a node that calls a Java class wrapped in a Java project. It is based on classes and interfaces specific to IIB/ACE to process Input, Output, Environment variables and LocalEnvironment variables.
  • Construction group:
    1. Input is a node to declare the entry of a subflow.
    2. Output is a node to declare the exit of a subflow.
    3. Throw will throw an exception in the flow, which propagates recursively to the first linked failure terminal.
    4. Trace is a node to write data to the system log, a local file or a user trace.
    5. TryCatch wraps error management around downstream processing, so some actions can be taken when an exception is caught.
    6. FlowOrder allows you to process the flow sequentially: the first branch runs, and when it is finished, the second branch runs.
    7. Passthrough allows you to version the execution of a flow with a label name.
  • Callable Flow group:
    1. Callable Input is an entry node that starts when the callable flow receives a request.
    2. Callable Reply is an exit node to send the response from the callable flow.
    3. Callable Flow Invoke is the node to invoke a callable flow, so the next node to run will be the Callable Input.
    4. Callable Flow Async Invoke is similar to Callable Flow Invoke, but it sends the request asynchronously and does not wait for the response.
    5. Callable Flow Async Response is the node that waits for the response, based on the session ID of the Callable Flow Async Invoke.
  • Cloud Connectors group:
    1. AppConnectRESTRequest is a node that communicates with an App Connect REST API project. The REST API project and this node use a common Swagger 2.0 contract to be compliant with each other.
    2. SalesforceRequest is a node that performs the default CRUD operations on SFDC objects. It does not allow calling custom SFDC operations; for that, the solution would be to use a SOAP Request or a REST Request node.
  • LoopBack Connectors group:
    1. LoopBackRequest is the node to send CRUD requests to LoopBack services (based on NodeJS) through the LoopBack connector. You can, for instance, send requests to LoopBack connectors for multiple DBs, or create your own LoopBack service to consume.
    2. In a future post, I will show an example of how to configure your connector from this link https://www.ibm.com/docs/en/app-connect/11.0.0?topic=connectors-installing-loopback-connector and consume the LoopBack service of MongoDB (see https://loopback.io/doc/en/lb4/MongoDB-connector.html), or create your own LoopBack service from https://loopback.io/.
  • Database group: !!! Each node of this group needs an ODBC DSN entry !!! To install ODBC on Linux, see this link: https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver16.
    1. Database Input: similar to a DB cron, this node waits for events by running its ESQL code on a polling interval.
    2. Database: this node is similar to Database Input but is executed in the continuity of a flow, without a polling interval; it also needs ESQL code to execute CRUD statements.
    3. Database Retrieve: this node helps you retrieve data from one or multiple tables and extract it into the OutputRoot message.
    4. Database Route: can be compared to the Route node; it queries data in the DB and, based on the result, sends it to different output terminals configured in the filter expression table property.
      1. This node has 4 default terminals and has the particularity of allowing custom output terminal creation:
        1. Failure: in case of technical error.
        2. Default: default behavior, in the else case.
        3. KeyNotFound: the data is not present in the DB.
        4. Match: the data is found in the DB and extracted.
  • File group:
    1. File Input: this useful node allows you to read a file from:
      1. The local host.
      2. An FTP, SFTP or FTPS server. To configure the FTP service connection, either set it on the FTP property tab of the node, apply a BAR file override, or use an FTP policy. The credentials are stored with the setdbparms command line.
    2. File Output: similar to the File Input node, it allows you to write content into a file on the local host or on FTP, SFTP or FTPS.
    3. File Read: this node helps, in the middle of a flow, to read the whole content of a file, read a fixed-length record, read based on a delimiter, or parse a record based on a schema (XSD, DFDL or MRM).
    4. FTE Input: this node only works with IBM MQ File Transfer Edition. It requires that ACE has a default queue manager with the FTE feature enabled. It reads a file and uses system queues to store the file content.
    5. FTE Output: similar to FTE Input, but sends the file to a target agent waiting for the file to write.
    6. CD Input: a node waiting for data from IBM Sterling Connect:Direct; it stores the data on the local queue manager. The connection to Connect:Direct is done with a policy, and the credentials are stored with the setdbparms command line.
    7. CD Output: similar to CD Input, but sends data to the Connect:Direct server.
  • Email group:
    1. Email Input: a node waiting for any message coming from a POP3 or IMAP server. The credentials are stored with the setdbparms command line.
    2. Email Output: a node sending data over the SMTP protocol. The credentials are stored with the setdbparms command line.
  • TCPIP group:
    1. TCPIP Client Input: a node to receive a request from a distant TCPIP socket port over a client connection. You need to specify a host and a port, or a TCPIP Client policy.
    2. TCPIP Client Output: a node to reply with a response to a distant TCPIP socket.
    3. TCPIP Client Receive: a node to send a request to a distant TCPIP socket and wait for a synchronous response.
    4. TCPIP Server Input: a node allowing an integration server to become a TCPIP socket server.
    5. TCPIP Server Output: a node to respond to a client socket requestor.
    6. TCPIP Server Receive: a node to send a request to a distant TCPIP socket server and wait for a synchronous response.
  • CORBA group:
    1. CORBA Request: similar to the COM model, this node allows you to send a CORBA request to a CORBA service.
  • Rules group:
    1. ODM Rules: ODM (Operational Decision Management) is a set of business rules written in XML. The ODM service URL must be specified, or configured in an ODM Server policy. All rules, or a specific rule selected with XPath, can be executed. The ODM XML can also be executed from a Java Compute node.
  • CICS group:
    1. CICS Request: a node to connect to a mainframe by communicating with CICS TG (Transaction Gateway). The request can be sent by COMMAREA: a fixed length of data that ends up, for instance, in the working-storage section of the COBOL CICS application. The request can also be sent by channel, where the link to the CICS application is made by container name.
  • IMS group:
    1. IMS Request: IMS is an old hierarchical DB (non-relational) storing data linked by parents and children. To connect and send a request, you must provide a host, port and datastore name, or an IMS Connect policy.
  • Validation group:
    1. Validate is a very useful node to validate messages of type XMLNS, XMLNSC, SOAP, DFDL, MRM or SAP IDOC. In case of invalid or non-compliant data, the Validate node sends the exception to its failure terminal.
    2. Check is a deprecated node. Check verifies that the message complies with a contract; if it does, the node forwards the message to the next node, and if not, it does not forward the message.
  • Security group:
    1. Security PEP is a very, very useful node to process authentication control. The credentials can be challenged through LDAP, WS-Trust or TFIM (Tivoli Federated Identity Manager). You configure the IAM service with a security profile.
  • Timer group:
    1. Timeout Control is a node that forwards the message to the next node if the time request matches or respects the system clock requirement. The information to check the time is located in an XML message. !!! This node needs a local queue manager !!!
    2. Timeout Notification is a node that behaves like a cron. You can specify a timeout interval, and each time the interval is reached, the node starts a new thread. !!! This node has two operation modes: automatic, where it does not need a local queue manager and uses the timeout interval value to run; and controlled, where it gets the run information from the local queue manager, a Timeout Control node giving the instruction to the Timeout Notification node to run !!!
Timeout Control example:
<TimeoutRequest>
  <Action>SET | CANCEL</Action>
  <Identifier>String (any alphanumeric string)</Identifier>
  <StartDate>String (TODAY | yyyy-mm-dd)</StartDate>
  <StartTime>String (NOW | hh:mm:ss)</StartTime>
  <Interval>Integer (seconds)</Interval>
  <Count>Integer (greater than 0 or -1)</Count>
  <IgnoreMissed>TRUE | FALSE</IgnoreMissed>
  <AllowOverwrite>TRUE | FALSE</AllowOverwrite>
</TimeoutRequest>
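As an illustration, the same TimeoutRequest document can be generated programmatically; here is a minimal Python sketch (in a real flow this tree would typically be built in ESQL or a Compute node; the identifier and interval values are just examples):

```python
import xml.etree.ElementTree as ET

def timeout_request(identifier, interval_s, count=1, action="SET"):
    """Build a TimeoutRequest document for a Timeout Control node."""
    root = ET.Element("TimeoutRequest")
    ET.SubElement(root, "Action").text = action            # SET or CANCEL
    ET.SubElement(root, "Identifier").text = identifier    # any alphanumeric string
    ET.SubElement(root, "StartDate").text = "TODAY"        # or yyyy-mm-dd
    ET.SubElement(root, "StartTime").text = "NOW"          # or hh:mm:ss
    ET.SubElement(root, "Interval").text = str(interval_s) # seconds
    ET.SubElement(root, "Count").text = str(count)         # greater than 0, or -1
    ET.SubElement(root, "IgnoreMissed").text = "TRUE"
    ET.SubElement(root, "AllowOverwrite").text = "TRUE"
    return ET.tostring(root, encoding="unicode")

doc = timeout_request("demo-1", 30)
```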

This concludes the small explanation of each node of IIB/ACE/CP4I.

I had the chance to use almost 90% of those nodes, and I'm a very passionate user of ESBs: I can see the difference between writing a J2EE service, where 100% of the development must be coded, and an ESB, where only 20% must be coded.

Kafka

In this post, I will explain many different topics dedicated to Kafka/ZooKeeper/KRaft.

Kafka is a tool I like a lot, so let me share with you a video in French covering many different topics about Kafka: what Kafka is, how it works in a cluster, how to install it, how to secure it, how to use it with Java, with Apache Camel, with Apache Spark, how to send Kafka and ZooKeeper logs to ELK with Filebeat, how to monitor it with Zabbix, which patterns can be used, and what the best practices are.

The video is quite long, but I really enjoyed making it.

Coming soon:

Kafka pattern usage.

Kafka and security.

Kafka and Apache Camel.

Message Validation, enrichment.

KSQL.

Kafka Connect.

Monitoring with Zabbix/Nagios.

Kafka/ZooKeeper and ELK.