
[SOLVED] `ansible` on RHEL 6.6 - dependency failure on `python-jinja2`.

Installing ansible on RHEL 6.6.

Download the EPEL release RPM and install it.
[root@server-cloudera-manager ~]# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@server-cloudera-manager ~]# rpm -ivh epel-release-6-8.noarch.rpm
warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           /etc/yum.repos.d/epel.repo
########################################### [100%]
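The `NOKEY` warning above just means the EPEL GPG key has not been imported yet. As a side note, a minimal sketch for importing it (the key path is the one shipped by `epel-release-6-8`; confirm it on your own system before running):

```shell
# Confirm where the epel-release package dropped its GPG key
rpm -ql epel-release | grep GPG

# Import the EPEL 6 key so later installs are signature-verified
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

# List the public keys rpm now knows about
rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n'
```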
IMPORTANT: As of RHEL 6.6, python-jinja2 has moved from EPEL to the Red Hat optional repository, so we need to enable the optional repository in RHEL.
  • Information on python-jinja2 move from EPEL: http://docs.saltstack.com/en/latest/topics/installation/rhel.html
  • How to enable repo: https://access.redhat.com/solutions/265523
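Before touching any repositories, you can confirm for yourself which repo (if any) currently provides python-jinja2. A quick check, assuming standard yum tooling:

```shell
# Show which repositories carry python-jinja2
# (on RHEL 6.6 nothing is listed until the optional repo is enabled)
yum provides python-jinja2

# Show which repositories are currently enabled
yum repolist enabled
```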
Error seen before enabling the optional repository:
[root@server ~]# yum install ansible
Loaded plugins: product-id, rhnplugin, security, subscription-manager
epel/metalink                                                                                                                                |  12 kB     00:00
epel                                                                                                                                         | 4.3 kB     00:00
epel/primary_db                                                                                                                              | 5.7 MB     00:43
rhel-6-server-rpms                                                                                                                           | 3.7 kB     00:00
rhel-6-server-rpms/primary_db                                                                                                                |  35 MB     01:29
rhel-server-dts-6-rpms                                                                                                                       | 2.9 kB     00:00
rhel-server-dts2-6-rpms                                                                                                                      | 2.9 kB     00:00
Resolving Dependencies
--> Running transaction check
---> Package ansible.noarch 0:1.9.2-1.el6 will be installed
--> Processing Dependency: python-simplejson for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: python-keyczar for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: python-jinja2 for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: python-httplib2 for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: python-crypto2.6 for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: PyYAML for package: ansible-1.9.2-1.el6.noarch
--> Running transaction check
---> Package PyYAML.x86_64 0:3.10-3.1.el6 will be installed
--> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-3.1.el6.x86_64
---> Package ansible.noarch 0:1.9.2-1.el6 will be installed
--> Processing Dependency: python-jinja2 for package: ansible-1.9.2-1.el6.noarch
---> Package python-crypto2.6.x86_64 0:2.6.1-2.el6 will be installed
---> Package python-httplib2.noarch 0:0.7.7-1.el6 will be installed
---> Package python-keyczar.noarch 0:0.71c-1.el6 will be installed
--> Processing Dependency: python-pyasn1 for package: python-keyczar-0.71c-1.el6.noarch
---> Package python-simplejson.x86_64 0:2.0.9-3.1.el6 will be installed
--> Running transaction check
---> Package ansible.noarch 0:1.9.2-1.el6 will be installed
--> Processing Dependency: python-jinja2 for package: ansible-1.9.2-1.el6.noarch
---> Package libyaml.x86_64 0:0.1.3-4.el6_6 will be installed
---> Package python-pyasn1.noarch 0:0.0.12a-1.el6 will be installed
--> Finished Dependency Resolution
Error: Package: ansible-1.9.2-1.el6.noarch (epel)
           Requires: python-jinja2
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
[root@server ~]#
The verbose output above shows the full dependency-resolution failure.
Enable the optional repository.
[root@server-cloudera-manager yum.repos.d]# subscription-manager repos --enable=rhel-6-server-optional-rpms
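After enabling the channel, it is worth verifying that yum actually sees it before retrying the install. A short sketch:

```shell
# Confirm the optional repo now appears in the enabled list
yum repolist enabled | grep optional

# Expire cached metadata so the newly enabled repo is re-read
yum clean expire-cache
```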
Now install ansible again.
[root@server-cloudera-manager ~]# yum install ansible

===========================================================================
 Package           Arch   Version        Repository                   Size
===========================================================================
Installing:
 ansible           noarch 1.9.2-1.el6    epel                        1.7 M
Installing for dependencies:
 PyYAML            x86_64 3.10-3.1.el6   rhel-6-server-rpms          157 k
 libyaml           x86_64 0.1.3-4.el6_6  rhel-6-server-rpms           52 k
 python-babel      noarch 0.9.4-5.1.el6  rhel-6-server-rpms          1.4 M
 python-crypto2.6  x86_64 2.6.1-2.el6    epel                        513 k
 python-httplib2   noarch 0.7.7-1.el6    epel                         70 k
 python-jinja2     x86_64 2.2.1-2.el6_5  rhel-6-server-optional-rpms 466 k
 python-keyczar    noarch 0.71c-1.el6    epel                        219 k
 python-pyasn1     noarch 0.0.12a-1.el6  rhel-6-server-rpms           70 k
 python-simplejson x86_64 2.0.9-3.1.el6  rhel-6-server-rpms          126 k

Transaction Summary
===========================================================================
Install      10 Package(s)


[root@server-cloudera-manager ~]# ansible --version
ansible 1.9.2
  configured module search path = None
[root@server-cloudera-manager ~]#
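Beyond checking the version, a quick smoke test confirms ansible can actually execute a module. A minimal sketch using the built-in `ping` module against the local machine (depending on your setup you may need `localhost` in your inventory):

```shell
# Run the ping module against localhost using the local connection,
# bypassing SSH entirely
ansible localhost -m ping -c local

# A healthy install reports something of the shape:
#   localhost | success >> { "changed": false, "ping": "pong" }
```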
