
Ansible Playbook - Setup Kafka Cluster.


This is a simple Kafka setup. Here we run Kafka against a dedicated ZooKeeper service (NOT the standalone ZooKeeper that ships with Kafka).
Before we start, read more about ZooKeeper/Kafka via the links below. The playbook does two things:
  1. Sets up ZooKeeper.
  2. Sets up Kafka, with brokers listening on ports 9091/9092 on each server.

Before we start.

Download kafka_2.9.2- to the file_archives directory.
Download zookeeper-3.4.5-cdh5.1.2.tar.gz to the file_archives directory.
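As a quick pre-flight sketch, you can check that the archives are staged before running the play. Only the ZooKeeper tarball is checked here, since the Kafka tarball name is truncated in the post:

```shell
# Pre-flight sketch: the playbook expects the tarballs staged in file_archives/.
mkdir -p file_archives
if [ -f file_archives/zookeeper-3.4.5-cdh5.1.2.tar.gz ]; then
    echo "zookeeper tarball staged"
else
    echo "missing: file_archives/zookeeper-3.4.5-cdh5.1.2.tar.gz"
fi
```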

Get the script from Github.

Below is the command to clone.
[ahmed@ahmed-server ~]$ git clone

Step 1: Update Hosts File.

Update the hosts file to reflect your server IPs. Currently the hosts file looks as below.

[zookeepers]
zookeeper_id=1
zookeeper_id=2
zookeeper_id=3

[kafka-nodes]
kafka_broker_id1=11 kafka_port1=9091 kafka_broker_id2=12 kafka_port2=9092
kafka_broker_id1=13 kafka_port1=9091 kafka_broker_id2=14 kafka_port2=9092
kafka_broker_id1=15 kafka_port1=9091 kafka_broker_id2=16 kafka_port2=9092
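For reference, an Ansible INI inventory pairs each variable set with a host or IP on the same line. A filled-in sketch is below; the addresses are placeholders, not from the original post:

```ini
# Hypothetical, filled-in hosts file -- replace the IPs with your servers.
[zookeepers]
10.0.0.1 zookeeper_id=1
10.0.0.2 zookeeper_id=2
10.0.0.3 zookeeper_id=3

[kafka-nodes]
10.0.0.4 kafka_broker_id1=11 kafka_port1=9091 kafka_broker_id2=12 kafka_port2=9092
10.0.0.5 kafka_broker_id1=13 kafka_port1=9091 kafka_broker_id2=14 kafka_port2=9092
10.0.0.6 kafka_broker_id1=15 kafka_port1=9091 kafka_broker_id2=16 kafka_port2=9092
```

Each line runs two brokers per server, which matches the 9091/9092 port pair mentioned above.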

Step 2: Update group_vars information as required.

Update the user/password and directory information in the group_vars/all file. Currently we have the below information.
# --------------------------------------
# --------------------------------------

zookeeper_user: zkadmin
zookeeper_group: zkadmin
zookeeper_password: $6$rounds=40000$1qjG/hovLZOkcerH$CK4Or3w8rR3KabccowciZZUeD.nIwR/VINUa2uPsmGK/2xnmOt80TjDwbof9rNvnYY6icCkdAR2qrFquirBtT1

kafka_user: kafkaadmin
kafka_group: kafkaadmin
kafka_password: $6$rounds=40000$1qjG/hovLZOkcerH$CK4Or3w8rR3KabccowciZZUeD.nIwR/VINUa2uPsmGK/2xnmOt80TjDwbof9rNvnYY6icCkdAR2qrFquirBtT1

# --------------------------------------
# --------------------------------------

# Common Location information.
install_base_path: /usr/local
soft_link_base_path: /opt
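The `*_password` values above are SHA-512 crypt strings of the form `$6$rounds=N$salt$digest`. A small sketch pulling the sample hash apart; to generate your own, `openssl passwd -6` produces the same `$6$` scheme, though support for a custom round count varies by tool and version:

```shell
# Split a $6$rounds=N$salt$digest crypt string into its fields.
hash='$6$rounds=40000$1qjG/hovLZOkcerH$CK4Or3w8rR3KabccowciZZUeD.nIwR/VINUa2uPsmGK/2xnmOt80TjDwbof9rNvnYY6icCkdAR2qrFquirBtT1'
scheme=$(echo "$hash" | cut -d'$' -f2)   # "6" -> SHA-512 crypt
rounds=$(echo "$hash" | cut -d'$' -f3)   # "rounds=40000"
salt=$(echo "$hash" | cut -d'$' -f4)     # the salt string
echo "scheme=$scheme $rounds salt=$salt"
```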

Step 3: Update default information in defaults/main.yml.

Update the default values if required.
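For orientation, role defaults typically carry tunables such as version strings and ports. The keys below are illustrative guesses only, not the playbook's actual variable names:

```yaml
# Hypothetical defaults/main.yml entries -- key names are illustrative only.
kafka_version: kafka_2.9.2
zookeeper_version: zookeeper-3.4.5-cdh5.1.2
zookeeper_client_port: 2181
```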

Step 4: Execute the playbook.

Below is the command.
[ahmed@ahmed-server ansible_kafka_tarball]$ ansible-playbook ansible_kafka.yml -i hosts --ask-pass
