
Creating an /etc/hosts file in Chef.

We had a cluster environment in which we needed to update the /etc/hosts file so that the servers could communicate with each other over a private network. Our servers have multiple interfaces, and we need them to talk to each other over the private one.
Our goal is to generate an /etc/hosts file listing every node in the cluster with its private IP address.

Chef Setup (Assumptions).

  • We have multiple clusters.
  • Each cluster has a Chef environment set.
  • Each node has multiple interfaces (though this solution should work for a single interface as well).

    Steps we take to gather information from each node.

  1. We use 3 attributes to drive the search:
    1. An fqdn pattern that every server in the cluster shares.
    2. The private IP prefix (string search).
    3. The interface to look for on each node.
  2. all_nodes_in_cluster holds all the nodes matching that fqdn pattern (the search can be changed to match on node name, tags, roles, or hostname as the requirement demands).
  3. For every node, we look up the specified interface and extract the private IP.

    Before we start.

    Before we start, we need to verify our search criteria using knife search (more details here).
    Below is an example that looks for servers whose fqdn contains the string env-lab-1.
    knife search node "fqdn:*env-lab-1*"
    

    About the Recipe.

The interface information lives in the node hash at:
 node_in_cluster['network']['interfaces']['bond.007']['addresses']
This is a dictionary with multiple entries; we are specifically looking for the private IP.
 if private_interface[0].include? node['private_ip_search_filter']
Above, we check each entry against our search filter. Each entry is an [address, details] pair, so the required information is in private_interface[0].
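To make the shape of that hash concrete, here is a small plain-Ruby sketch; the addresses and their details are hypothetical, modeled on what Ohai typically collects for an interface:

```ruby
# Hypothetical contents of node['network']['interfaces']['bond.007']['addresses'].
# Ohai keys this hash by address; the values describe each address (family, netmask, ...).
addresses = {
  'AA:BB:CC:DD:EE:FF' => { 'family' => 'lladdr' },
  '192.168.149.11'    => { 'family' => 'inet', 'netmask' => '255.255.255.0' },
  'fe80::1'           => { 'family' => 'inet6', 'scope' => 'Link' }
}

# Iterating a Ruby hash yields [key, value] pairs, which is why the recipe
# reads the address out of private_interface[0].
private_ips = addresses.keys.select { |addr| addr.include?('192.168.149') }
puts private_ips.first  # => "192.168.149.11"
```

Note that the hash also contains MAC and IPv6 entries, so filtering on the private IP prefix is what singles out the address we want.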
Here is how we write each entry in our /etc/hosts file: IP, FQDN, HOSTNAME.
 puts "#{private_interface[0]} #{node_in_cluster['fqdn']} #{node_in_cluster['hostname']}"
Here is the complete Ruby code, which does the same thing as the ERB template file:
 # Iterate over every node returned by the Chef search.
 all_nodes_in_cluster.each do |node_in_cluster|
   # 'addresses' maps each address on the interface to its details.
   node_in_cluster['network']['interfaces'][int_to_look]['addresses'].each do |private_interface|
     # private_interface is an [address, details] pair; keep only
     # addresses matching the private IP prefix.
     if private_interface[0].include? node['private_ip_search_filter']
       puts "#{private_interface[0]} #{node_in_cluster['fqdn']} #{node_in_cluster['hostname']}"
     end
   end
 end
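The loop can be exercised outside of Chef against plain hashes. The node data below is hypothetical, standing in for what search(:node, ...) would return:

```ruby
# Hypothetical stand-ins for the Chef search result and node attributes.
all_nodes_in_cluster = [
  { 'fqdn' => 'node1.env-lab-1.example.com', 'hostname' => 'node1',
    'network' => { 'interfaces' => { 'bond.007' => { 'addresses' => {
      '192.168.149.11' => { 'family' => 'inet' },
      'fe80::11'       => { 'family' => 'inet6' } } } } } }
]
int_to_look = 'bond.007'
private_ip_search_filter = '192.168.149'

hosts_lines = []
all_nodes_in_cluster.each do |node_in_cluster|
  node_in_cluster['network']['interfaces'][int_to_look]['addresses'].each do |private_interface|
    # Keep only the address entries matching the private IP prefix.
    if private_interface[0].include? private_ip_search_filter
      hosts_lines << "#{private_interface[0]} #{node_in_cluster['fqdn']} #{node_in_cluster['hostname']}"
    end
  end
end
puts hosts_lines  # => 192.168.149.11 node1.env-lab-1.example.com node1
```

The IPv6 entry is skipped because it does not contain the private IP prefix, leaving exactly one hosts line per node.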

Attribute File.

 # Example: we use these search criteria to generate /etc/hosts
 default['env_search_filter'] = "fqdn:*lab-env-1*"
 default['private_ip_search_filter'] = "192.168.149"
 default['interface_to_look'] = 'bond.007'

Recipe

 # Search Criteria
 all_nodes_in_cluster = search(:node, node['env_search_filter'])
 int_to_look = node['interface_to_look']

 template '/etc/hosts' do
   source 'etc_hosts_file.erb'
   mode '0644' # /etc/hosts should not be executable
   owner 'root'
   group 'root'
   variables({
     all_nodes_in_cluster: all_nodes_in_cluster,
     int_to_look: int_to_look,
     private_ip_search_filter: node['private_ip_search_filter']
   })
 end
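One caveat worth noting: Chef search results are not guaranteed to come back in a stable order, so the rendered template can differ between runs even when nothing changed, causing the template resource to rewrite /etc/hosts each time. Sorting the node list (here by fqdn) keeps the output deterministic. The demonstration below uses hypothetical hashes in place of real node objects:

```ruby
# In the recipe this would be:
#   all_nodes_in_cluster = search(:node, node['env_search_filter']).sort_by { |n| n['fqdn'].to_s }
# Hypothetical search results, possibly returned in arbitrary order:
unsorted = [
  { 'fqdn' => 'node2.env-lab-1.example.com' },
  { 'fqdn' => 'node1.env-lab-1.example.com' }
]
all_nodes_in_cluster = unsorted.sort_by { |n| n['fqdn'].to_s }
puts all_nodes_in_cluster.map { |n| n['fqdn'] }
```

With a stable order, the template only re-renders when a node actually joins or leaves the cluster.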

Template File etc_hosts_file.erb.

 127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
 <% @all_nodes_in_cluster.each do |node_in_cluster| -%>
   <% node_in_cluster['network']['interfaces'][@int_to_look]['addresses'].each do |private_interface| -%>
     <% if private_interface[0].include? @private_ip_search_filter -%>
 <%= private_interface[0] %>     <%= node_in_cluster['fqdn'] %>      <%= node_in_cluster['hostname'] %>  # Serial Number: <%= node_in_cluster['dmi']['system']['serial_number'] %> ( <%= node_in_cluster['dmi']['system']['manufacturer'] %> ) <%= node_in_cluster['dmi']['system']['product_name'] %>
     <% end -%>
   <% end -%>
 <% end -%>
 ::1     localhost localhost.localdomain localhost6 localhost6.localdomain6

Disclaimer.

  1. This may not be the most optimized solution, but it is what worked for me.
  2. The search runs on every Chef client run (every 30 minutes in our setup) and pulls the full node object for every node, which I think would be a time- and bandwidth-consuming operation on a very large cluster. (A single node's data was about 10,000 lines of Ruby hash for our nodes.)
  3. If anyone has a better way to do it, please post it in the comments below. Thanks :)
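Regarding point 2, one way to cut the payload (assuming a Chef version that supports it, 12 and later) is the filter_result option to search, which asks the Chef server to return only the attributes the template needs rather than whole node objects. A hedged sketch, not tested against this cookbook:

```ruby
# Sketch: fetch only the attributes the /etc/hosts template uses.
# filter_result maps result keys to attribute paths on the node.
all_nodes_in_cluster = search(:node, node['env_search_filter'],
  filter_result: {
    'fqdn'       => ['fqdn'],
    'hostname'   => ['hostname'],
    'interfaces' => ['network', 'interfaces']
  })
```

Note that the results come back as plain hashes keyed by the filter names (e.g. node_in_cluster['interfaces'] instead of node_in_cluster['network']['interfaces']), so the template lookups would need to change accordingly.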
