
Creating an /etc/hosts file in Chef.

We had a cluster environment in which we needed to update the /etc/hosts file so that the servers could communicate with each other over a private network. Our servers have multiple interfaces, and we need them to talk to each other using the private network.
Our goal is to create an /etc/hosts file listing all the nodes in the cluster with their private IP addresses.

Chef Setup (Assumptions).

  • We have multiple clusters.
  • Each cluster has an environment set.
  • We have multiple interfaces on each node (but this solution should work for a single interface as well).

    Steps we take to get the information from each node.

  1. We use 3 attributes to search the cluster:
    1. Each server within the cluster has a specific FQDN.
    2. Private IP (string search).
    3. Interface to look for on the node.
  2. all_nodes_in_cluster will get all the nodes matching that FQDN (this can be changed, based on requirements, to search by node name, tags, roles, or hostname; see the sketch after this list).
  3. For every node we look for the specific interface and get the private IP.
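
As mentioned in step 2, the search query can be adapted to whatever identifies your cluster. Below is a minimal sketch of a few alternative queries using Chef's search DSL; the role and tag names are hypothetical.
 # Search by FQDN (what this post uses)
 all_nodes_in_cluster = search(:node, 'fqdn:*lab-env-1*')
 # Hypothetical alternatives: search by role, tag, or Chef environment
 all_nodes_in_cluster = search(:node, 'role:cluster-member')
 all_nodes_in_cluster = search(:node, 'tags:cluster-lab-1')
 all_nodes_in_cluster = search(:node, "chef_environment:#{node.chef_environment}")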

    Before we start.

    Before we start, we need to check our search criteria using knife search (more details here).
    Below is an example that looks for servers whose FQDN contains the string env-lab-1.
    knife search node "fqdn:*env-lab-1*"
    

    About the Recipe.

The interface information in the node hash is located as below:
 node_in_cluster['network']['interfaces']['bond.007']['addresses']
This key holds a hash of addresses with multiple values; we are specifically looking for the private IP.
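For illustration, the addresses hash from Ohai typically looks something like the sketch below (the addresses and netmask shown are made up); the keys are the addresses themselves, which is why the code matches on private_interface[0].
 # Illustrative shape of node_in_cluster['network']['interfaces']['bond.007']['addresses']
 {
   '192.168.149.21'         => { 'family' => 'inet',  'netmask' => '255.255.255.0' },
   'fe80::f2d4:e2ff:fe5a:1' => { 'family' => 'inet6', 'scope'   => 'Link' },
   'f0:d4:e2:5a:00:01'      => { 'family' => 'lladdr' }
 }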
 if private_interface[0].include? node['private_ip_search_filter']
Above we are looking for the address that matches our search filter; the required information is in private_interface[0].
Here is how we would write the information to our /etc/hosts file: IP, FQDN, HOSTNAME.
 puts "#{private_interface[0]} #{node_in_cluster['fqdn']} #{node_in_cluster['hostname']}"
Here is the complete Ruby code, which does the same thing as the ERB template file shown later.
 all_nodes_in_cluster.each do |node_in_cluster|
   # Walk every address on the interface we care about
   node_in_cluster['network']['interfaces'][int_to_look]['addresses'].each do |private_interface|
     # Keep only the address matching the private IP search filter
     if private_interface[0].include? node['private_ip_search_filter']
       puts "#{private_interface[0]} #{node_in_cluster['fqdn']} #{node_in_cluster['hostname']}"
     end
   end
 end
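If you would rather not depend on a string prefix, a variant sketch (assuming Ohai reports a 'family' key for each address, as illustrated above) could pick the IPv4 address on the interface instead:
 all_nodes_in_cluster.each do |node_in_cluster|
   addresses = node_in_cluster['network']['interfaces'][int_to_look]['addresses']
   # Take the first IPv4 ('inet') entry instead of matching a string prefix
   ip, _details = addresses.find { |_addr, details| details['family'] == 'inet' }
   puts "#{ip} #{node_in_cluster['fqdn']} #{node_in_cluster['hostname']}" if ip
 end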

Attribute File.

 # Example :  We take the search Criteria to generate /etc/hosts
 default['env_search_filter'] = "fqdn:*lab-env-1*"
 default['private_ip_search_filter'] = "192.168.149"
 default['interface_to_look'] = 'bond.007'
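
Since each cluster has its own environment (see the assumptions above), these defaults could be overridden per cluster. A minimal sketch of a hypothetical environment file, lab-env-1.rb:
 name 'lab-env-1'
 description 'Lab cluster 1'
 default_attributes(
   'env_search_filter'        => 'fqdn:*lab-env-1*',
   'private_ip_search_filter' => '192.168.149',
   'interface_to_look'        => 'bond.007'
 )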

Recipe

 # Search Criteria
 all_nodes_in_cluster = search(:node, node['env_search_filter'])
 int_to_look = node['interface_to_look']

 template '/etc/hosts' do
   source 'etc_hosts_file.erb'
   mode '0755'
   owner 'root'
   group 'root'
   variables({
     all_nodes_in_cluster: all_nodes_in_cluster,
     int_to_look: int_to_look,
     private_ip_search_filter: node['private_ip_search_filter']
   })
 end
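
One small refinement, not in the original recipe: sorting the search results keeps the rendered file stable between runs, so the template resource does not report a change just because the nodes came back in a different order.
 # Sort by FQDN so the rendered /etc/hosts is deterministic between runs
 all_nodes_in_cluster = search(:node, node['env_search_filter']).sort_by { |n| n['fqdn'].to_s }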

Template File etc_hosts_file.erb.

 127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
 <% @all_nodes_in_cluster.each do |node_in_cluster| -%>
   <% node_in_cluster['network']['interfaces'][@int_to_look]['addresses'].each do |private_interface| -%>
     <% if private_interface[0].include? @private_ip_search_filter -%>
 <%= private_interface[0] %>     <%= node_in_cluster['fqdn'] %>      <%= node_in_cluster['hostname'] %>  # Serial Number: <%= node_in_cluster['dmi']['system']['serial_number'] %> ( <%= node_in_cluster['dmi']['system']['manufacturer'] %> ) <%= node_in_cluster['dmi']['system']['product_name'] %>
     <% end -%>
   <% end -%>
 <% end -%>
 ::1     localhost localhost.localdomain localhost6 localhost6.localdomain6
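
For reference, the rendered /etc/hosts would look roughly like this (the IPs, hostnames, and hardware details are made up for illustration):
 127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
 192.168.149.21     node01.env-lab-1.example.com      node01  # Serial Number: ABC1234 ( Dell Inc. ) PowerEdge R630
 192.168.149.22     node02.env-lab-1.example.com      node02  # Serial Number: ABC1235 ( Dell Inc. ) PowerEdge R630
 ::1     localhost localhost.localdomain localhost6 localhost6.localdomain6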

Disclaimer.

  1. This may not be an optimized solution, but it is something that worked for me.
  2. The search, which runs every 30 minutes (on each chef-client run), pulls back all the information for all the nodes, which I think is a time/bandwidth consuming operation on a very large cluster (a single node's information was about 10,000 lines of Ruby Hash for our nodes). A possible mitigation is sketched after this list.
  3. If anyone has a better way to do it, please post it in the comments below. Thanks :)
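
On point 2: Chef's node search supports a filter_result option (Chef 12 and later) that asks the server to return only the attributes you need, which should cut the payload considerably. A minimal sketch; the keys on the left are arbitrary names I chose, and the template would then reference those keys on plain hashes instead of full node objects.
 # Only pull back the attributes the template actually uses
 all_nodes_in_cluster = search(
   :node,
   node['env_search_filter'],
   filter_result: {
     'fqdn'       => ['fqdn'],
     'hostname'   => ['hostname'],
     'interfaces' => ['network', 'interfaces']
   }
 )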
