Ansible Playbook - Setup Zookeeper Using tarball.

This is a simple Zookeeper playbook to quickly get Zookeeper running on one or more nodes in clustered mode.
Here is the Script Location on Github: https://github.com/zubayr/ansible_zookeeper_tarball
Below are the steps to get started.

Before we start.

Please download zookeeper-3.4.5-cdh5.1.2.tar.gz and store it in the file_archives directory.

Get the script from Github.

Below is the command to clone.
ahmed@ahmed-server ~]$ git clone https://github.com/zubayr/ansible_zookeeper_tarball
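After cloning, change into the repository and place the downloaded tarball under file_archives. The mirror URL below is a placeholder; substitute wherever you keep the tarball.
ahmed@ahmed-server ~]$ cd ansible_zookeeper_tarball
ahmed@ahmed-server ansible_zookeeper_tarball]$ wget http://<your-mirror>/zookeeper-3.4.5-cdh5.1.2.tar.gz -P file_archives/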

Step 1. Update the variables below as required.

Variables are located in roles/zookeeper_install_tarball/default/main.yml.
# Zookeeper Version.
zookeeper_version: zookeeper-3.4.5-cdh5.1.2

# Zookeeper Storage and Logging.
zookeeper_data_store: /data/ansible/zookeeper
zookeeper_logging: /data/ansible/zookeeper_logging
Global vars are located in group_vars/all.
# --------------------------------------
# USERs
# --------------------------------------

zookeeper_user: zkadmin
zookeeper_group: zkadmin
zookeeper_password: $6$rounds=40000$1qjG/hovLZOkcerH$CK4Or3w8rR3KabccowciZZUeD.nIwR/VINUa2uPsmGK/2xnmOt80TjDwbof9rNvnYY6icCkdAR2qrFquirBtT1

# Common Location information.
common:
  install_base_path: /usr/local
  soft_link_base_path: /opt
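If you prefer not to edit the defaults, the same variables can be overridden at run time with Ansible's --extra-vars (-e) flag. The paths below are illustrative placeholders.
ansible-playbook ansible_zookeeper.yml -i hosts --ask-pass -e "zookeeper_data_store=/srv/zookeeper/data zookeeper_logging=/srv/zookeeper/logs"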

Step 2. User information comes from global_vars.

The username can be changed in the global vars via zookeeper_user.
Currently the password is hdadmin@123.
The password hash can be generated using the python snippet below.
# Password Generated using python command below.
python -c "from passlib.hash import sha512_crypt; import getpass; print sha512_crypt.encrypt(getpass.getpass())"
Here is the execution. After entering the password you will get the encrypted hash, which can be used for user creation.
ahmed@ahmed-server ~]$ python -c "from passlib.hash import sha512_crypt; import getpass; print sha512_crypt.encrypt(getpass.getpass())"
Enter Password: *******
$6$rounds=40000$1qjG/hovLZOkcerH$CK4Or3w8rR3KabccowciZZUeD.nIwR/VINUa2uPsmGK/2xnmOt80TjDwbof9rNvnYY6icCkdAR2qrFquirBtT1
ahmed@ahmed-server ~]$
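The snippet above uses Python 2 syntax. On systems with Python 3, an equivalent one-liner (a sketch, assuming the passlib package is installed; newer passlib releases use hash() in place of the deprecated encrypt()) would be:
ahmed@ahmed-server ~]$ python3 -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=40000).hash(getpass.getpass()))"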

Step 3. Update the playbook.

Update the file ansible_zookeeper.yml (if required) along with the hosts file in the root of the directory structure.
Below is the sample directory structure.
ansible_zookeeper.yml
hosts
global_vars
  --> all
file_archives
  --> zookeeper-3.4.5-cdh5.1.2.tar.gz
  --> ...
roles
  --> zookeeper_install_tarball
  --> ...
Below are the contents of ansible_zookeeper.yml
#
#-----------------------------
# ZOOKEEPER CLUSTER SETUP
#-----------------------------
#

- hosts: zookeepernodes
  remote_user: root
  roles:
    - zookeeper_install_tarball
Steps performed by the zookeeper_install_tarball role.
  1. Create a user to run the zookeeper service. NOTE: user information is in global_vars.
  2. Copy the tgz file and extract it on the destination node.
  3. Change permissions on the directory, setting zookeeper_user as the new owner.
  4. Create a symbolic link. NOTE: soft_link_base_path information is in global_vars.
  5. Update the Zookeeper configuration file.
  6. Create the required directories for Zookeeper.
  7. Initialize the myid file for Zookeeper (see the sketch after this list).
  8. Start the Zookeeper service.
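For example, the myid step could be implemented with a task along these lines (an illustrative sketch using the variables above, not necessarily the exact task in the role):
# Write this node's zookeeper_id (from the hosts file) into the myid file.
- name: Initialize myid file for Zookeeper
  copy:
    content: "{{ zookeeper_id }}"
    dest: "{{ zookeeper_data_store }}/myid"
    owner: "{{ zookeeper_user }}"
    group: "{{ zookeeper_group }}"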
Here are the contents of the hosts file.
In the hosts file, zookeeper_id is used to create the id in the myid file for each zookeeper node in the cluster.
#
# zookeeper cluster
# 

[zookeepernodes]
10.10.18.25 zookeeper_id=1
10.10.18.87 zookeeper_id=2
10.10.18.90 zookeeper_id=3
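For reference, the ZooKeeper configuration generated from these hosts and the variables in Step 1 would look roughly like the below (an illustrative sketch; the exact template lives in the role, the ports are ZooKeeper defaults, and the dataLogDir mapping for zookeeper_logging is an assumption):
# zoo.cfg (illustrative)
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/data/ansible/zookeeper
dataLogDir=/data/ansible/zookeeper_logging
server.1=10.10.18.25:2888:3888
server.2=10.10.18.87:2888:3888
server.3=10.10.18.90:2888:3888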

Step 4. Execute the playbook.

Execute the command below.
ansible-playbook ansible_zookeeper.yml -i hosts --ask-pass
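Once the play completes, each node can be checked with ZooKeeper's ruok four-letter command, which should return imok (10.10.18.25 is one of the nodes from the hosts file above):
ahmed@ahmed-server ~]$ echo ruok | nc 10.10.18.25 2181
imok
ahmed@ahmed-server ~]$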

Access Filter Setup with SSSD ldap_access_filter (string) If using access_provider = ldap , this option is mandatory. It specifies an LDAP search filter criteria that must be met for the user to be granted access on this host. If access_provider = ldap and this option is not set, it will result in all users being denied access. Use access_provider = allow to change this default behaviour. Example: access_provider = ldap ldap_access_filter = memberOf=cn=allowed_user_groups,ou=Groups,dc=example,dc=com Prerequisites yum install sssd Single LDAP Group Under domain/default in /etc/sssd/sssd.conf add: access_provider = ldap ldap_access_filter = memberOf=cn=Group Name,ou=Groups,dc=example,dc=com Multiple LDAP Groups Under domain/default in /etc/sssd/sssd.conf add: access_provider = ldap ldap_access_filter = (|(memberOf=cn=System Adminstrators,ou=Groups,dc=example,dc=com)(memberOf=cn=Database Users,ou=Groups,dc=example,dc=com)) ldap_access_filter accepts standa