Issues - Monitoring MongoDB using Nagios XI.

Monitoring MongoDB using Nagios XI is straightforward, but you might run into a few issues during setup.
Here are a few issues that can come up with MongoDB version 3.

Issues getting monitoring data in Nagios.

1. ConnectionFailure object has no attribute strip

[ahmed@localhost libexec]$ ./check_mongodb.py -H 192.168.94.137 -P 27017 -u admin -p admin
Traceback (most recent call last):
  File "./check_mongodb.py", line 1372, in <module>
    sys.exit(main(sys.argv[1:]))
  File "./check_mongodb.py", line 196, in main
    err, con = mongo_connect(host, port, ssl, user, passwd, replicaset)
  File "./check_mongodb.py", line 294, in mongo_connect
    return exit_with_general_critical(e), None
  File "./check_mongodb.py", line 310, in exit_with_general_critical
    if e.strip() == "not master":
AttributeError: 'ConnectionFailure' object has no attribute 'strip'
Solution.
e.strip() expects e to be a string, but here e can also be an exception object (a ConnectionFailure, as in the traceback above), so remove the strip() call. Change the code below at line 310.
  else:
      if e.strip() == "not master":
          print "UNKNOWN - Could not get data from server:", e
          return 3
to
  else:
      if e == "not master":
          print "UNKNOWN - Could not get data from server:", e
          return 3
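If you prefer to keep the whitespace trimming, an equivalent and slightly more defensive variant (a sketch, not part of the upstream plugin) converts e to a string before comparing:
  else:
      # str() is safe whether e is a plain string or an exception object
      if str(e).strip() == "not master":
          print "UNKNOWN - Could not get data from server:", e
          return 3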
After the change you will at least get an error message that gives you more information.
[ahmed@localhost libexec]$ ./check_mongodb_2.py -H 192.168.94.138 -P 27017 -u admin -p admin1 -A databases -W 5 -C 10
CRITICAL - General MongoDB Error: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'37a502d665186449'), ('key', u'd8c683f98a5e720c28a8007018ed7414')]) failed: auth failed
Next, we will try to resolve this auth failure.

2. Auth failure when executing the command from the Nagios server.

[ahmed@localhost libexec]$ ./check_mongodb_2.py -H 192.168.94.138 -P 27017 -u admin -p admin1 -A databases -W 5 -C 10
CRITICAL - General MongoDB Error: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'42110dc29ee7fe6b'), ('key', u'827a2b0e4af97e88560800ab86b04e57')]) failed: auth failed

On the MongoDB server.

Checking the log on the MongoDB server shows that authentication failed because MONGODB-CR credentials are missing from the user document:
2016-09-14T19:11:12.142-0700 I ACCESS   [conn114] Successfully  authenticated as principal admin on admin
2016-09-14T19:11:32.892-0700 I NETWORK  [initandlisten] connection accepted from  192.168.94.130:48657 #115 (2 connections now open)
2016-09-14T19:11:32.894-0700 I ACCESS   [conn115]  authenticate db: admin { authenticate: 1, user: "admin", nonce: "xxx", key: "xxx" }
2016-09-14T19:11:32.894-0700 I ACCESS   [conn115] Failed to authenticate admin@admin with mechanism MONGODB-CR: AuthenticationFailed: MONGODB-CR credentials missing in the user document
2016-09-14T19:11:32.895-0700 I NETWORK  [conn115] end connection 192.168.94.130:48657 (1 connection now open)
2016-09-14T19:11:54.283-0700 I NETWORK  [initandlisten] connection accepted from 192.168.94.130:48663 #116 (2 connections now open)
2016-09-14T19:11:54.284-0700 I NETWORK  [conn116] end connection 192.168.94.130:48663 (1 connection now open)
2016-09-14T19:12:07.860-0700 I NETWORK  [initandlisten] connection accepted from 192.168.94.130:48666 #117 (2 connections now open)
2016-09-14T19:12:07.861-0700 I ACCESS   [conn117] Unauthorized: not authorized on admin to execute command { listDatabases: 1 }
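The underlying problem is a mechanism mismatch: the plugin's driver code authenticates with the legacy MONGODB-CR mechanism, while MongoDB 3.x stores SCRAM-SHA-1 credentials for new users by default, so the user document has no MONGODB-CR credentials. One way to confirm this from the Nagios host is a small pymongo test. This is only a sketch: it reuses the host and credentials from the failing example above and assumes a pymongo version older than 4.0, where Database.authenticate() is still available.
from pymongo import MongoClient

# Host and credentials from the failing check above; adjust for your setup.
client = MongoClient("192.168.94.138", 27017)
db = client["admin"]

# Legacy mechanism used by the plugin - fails against a default MongoDB 3.x user.
try:
    db.authenticate("admin", "admin1", mechanism="MONGODB-CR")
    print("MONGODB-CR authentication succeeded")
except Exception as exc:
    print("MONGODB-CR authentication failed: %s" % exc)

# Default mechanism for MongoDB 3.x users - should succeed for the same user.
try:
    db.authenticate("admin", "admin1", mechanism="SCRAM-SHA-1")
    print("SCRAM-SHA-1 authentication succeeded")
except Exception as exc:
    print("SCRAM-SHA-1 authentication failed: %s" % exc)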
Solution.
  1. Delete the existing users on the database, if any were already created.
  2. Modify the admin.system.version collection so that the authSchema currentVersion is 3 instead of 5.
  3. Version 3 uses MONGODB-CR.
  4. Recreate your users on the database, as shown in the example after the commands below.
NOTE: Do not do this in a PRODUCTION environment; use an update instead, and try it on a test database first.
mongo
use admin
db.system.users.remove({})
db.system.version.remove({})
db.system.version.insert({ "_id" : "authSchema", "currentVersion" : 3 })
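Then recreate the user (step 4). With the auth schema back at version 3, MongoDB stores MONGODB-CR credentials for newly created users. A minimal sketch in the mongo shell; the user name, password, and role below are examples only, so adjust them to what your Nagios check expects (if authorization is enabled, you may need to run this from a shell on the server itself so the localhost exception applies, since all users were just removed):
use admin
db.createUser({
    user: "admin",
    pwd: "admin1",
    roles: [ { role: "root", db: "admin" } ]
})
After recreating the user, re-run check_mongodb.py from the Nagios server; the authentication error should no longer appear.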
