
Installing Kafka Single Node - Quick Start

Download and Extract

Download the Kafka tgz file and extract it (a JDK is installed first, since Kafka requires Java).
 [kafka-admin@kafka Downloads]$ ls
 jdk-7u75-linux-x64.rpm  kafka_2.9.2-0.8.2.0.tgz
 [kafka-admin@kafka Downloads]$ sudo rpm -ivh jdk-7u75-linux-x64.rpm
 ...
 [kafka-admin@kafka Downloads]$ sudo tar -xzf kafka_2.9.2-0.8.2.0.tgz -C /opt
 [kafka-admin@kafka Downloads]$ cd /opt
 [kafka-admin@kafka opt]$ sudo ln -s kafka_2.9.2-0.8.2.0 kafka
 [kafka-admin@kafka opt]$ ls
 kafka  kafka_2.9.2-0.8.2.0
 [kafka-admin@kafka opt]$ sudo chown -R kafka-admin:kafka-admin kafka

Now we are ready to start all the required services.

 [kafka-admin@kafka opt]$ cd kafka
 [kafka-admin@kafka kafka]$ ls
 bin  config  libs  LICENSE  logs  NOTICE
 [kafka-admin@kafka kafka]$ bin/zookeeper-server-start.sh config/zookeeper.properties
This starts a ZooKeeper instance on localhost, port 2181. The configuration can be changed in the config/zookeeper.properties file. NOTE : If you want to run ZooKeeper on a separate machine, make sure to update config/server.properties so that the Kafka server points to the correct ZooKeeper. By default it points to localhost:2181.
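For reference, these are the relevant settings in each file (the values shown are the stock defaults in this Kafka release; adjust them for your environment):

```properties
# config/zookeeper.properties
dataDir=/tmp/zookeeper
clientPort=2181

# config/server.properties -- point the broker at the correct ZooKeeper
zookeeper.connect=localhost:2181
```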

Next we start the Kafka server.

 [kafka-admin@kafka kafka]$ bin/kafka-server-start.sh config/server.properties
NOTE : If you want to start multiple brokers, make a copy of the server.properties file for each broker and change the information below.
  1. broker.id - a unique identifier for each broker.
  2. port - the port this broker listens on.
  3. log.dir - the directory where this broker writes its log segments.
 config/server-1.properties:
     broker.id=1
     port=9093
     log.dir=/tmp/kafka-logs-1

 config/server-2.properties:
     broker.id=2
     port=9094
     log.dir=/tmp/kafka-logs-2
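Creating those per-broker files can be scripted. A minimal sketch (in a real setup you would run the loop from the Kafka directory against the shipped config/server.properties; here a throwaway stub file in a temp directory stands in for it):

```shell
# Generate server-1.properties and server-2.properties from a base file.
# The stub server.properties below is created purely for illustration.
workdir=$(mktemp -d)
cat > "$workdir/server.properties" <<'EOF'
broker.id=0
port=9092
log.dir=/tmp/kafka-logs
EOF

for i in 1 2; do
  sed -e "s/^broker\.id=.*/broker.id=$i/" \
      -e "s/^port=.*/port=$((9092 + i))/" \
      -e "s|^log\.dir=.*|log.dir=/tmp/kafka-logs-$i|" \
      "$workdir/server.properties" > "$workdir/server-$i.properties"
done

cat "$workdir/server-1.properties"
# broker.id=1
# port=9093
# log.dir=/tmp/kafka-logs-1
```

Each broker would then be started with its own file, e.g. bin/kafka-server-start.sh config/server-1.properties.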
Our server has now started. For the rest of this guide we assume only one broker is running.

Creating Topics

To create a topic, execute the command below; this creates a topic with a single partition and a replication factor of 1.
 [kafka-admin@kafka kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
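Two related server.properties settings worth knowing about (names as in Kafka 0.8.x; the values shown are the defaults):

```properties
# config/server.properties
# Create a topic automatically the first time a producer or consumer references it
auto.create.topics.enable=true
# Number of partitions for auto-created topics
num.partitions=1
```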
To list the topics currently available, execute the command below.
 [kafka-admin@kafka kafka]$ bin/kafka-topics.sh --list --zookeeper localhost:2181
 test
 [kafka-admin@kafka kafka]$ 
We see that currently we have only one topic. Now we are all set to send and receive messages.

Send Some Messages

Open a new terminal and fire up the Kafka console producer as below, then start typing. Each line (terminated by a newline or carriage return) is sent as a separate message.
 [kafka-admin@kafka kafka]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
 This is a message
 This is a message2

Start a Consumer

Open a new terminal and start the consumer.
The --from-beginning option replays all messages from the start of the topic, so you will see the two messages typed above: This is a message and This is a message2.
 [kafka-admin@kafka kafka]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
 This is a message
 This is a message2
Our single-node Kafka cluster is ready.
