
Fetch Monitoring Data from Items to a CSV File.

I have made some minor changes to suit my needs.
This script fetches monitoring data for items from a Zabbix server and writes it to a CSV file, for a given range of time.
The time range is given in 'yyyy-mm-dd hh:mm:ss' format as a start datetime and an end datetime;
if only the start datetime is specified, the time period runs from the start datetime to now().
The script can do two things:
  1. Write a single key's data into a CSV file.
  2. Read multiple keys from a file and create a single CSV file containing data for all the keys.
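The time-range handling described above can be sketched in a few lines; the Zabbix API expects Unix epoch timestamps (`time_from`/`time_till`), so the 'yyyy-mm-dd hh:mm:ss' strings have to be converted first. The function names here are my own illustration, not necessarily those used in the script:

```python
import time
from datetime import datetime

def to_epoch(dt_string):
    """Parse a 'yyyy-mm-dd hh:mm:ss' string into a Unix epoch timestamp."""
    return int(time.mktime(datetime.strptime(dt_string, '%Y-%m-%d %H:%M:%S').timetuple()))

def time_range(t1, t2=''):
    """Return (time_from, time_till); if t2 is empty, the end defaults to now()."""
    time_from = to_epoch(t1)
    time_till = to_epoch(t2) if t2 else int(time.time())
    return time_from, time_till
```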

Single key data into csv

This option connects to Zabbix and collects monitoring data for a single key.
The CSV file is created as key_name.csv.
python fetch_monitoring_data_to_csv.py -s 127.0.0.1 -n host-in-zabbix -u admin -p zabbix -k key-in-zabbix -t1 '2014-10-16 12:00:00' -v 1

Multiple keys read from a file.

This option connects to Zabbix and collects monitoring data for multiple keys.
The CSV file is created as hostname.csv.
Either a plain text file or a Zabbix XML export file can be used to supply the keys.
python fetch_monitoring_data_to_csv.py -s 127.0.0.1 -n host-in-zabbix -u admin -p zabbix -f sample_key_file.txt -t1 '2014-10-16 12:00:00' -v 1
python fetch_monitoring_data_to_csv.py -s 127.0.0.1 -n host-in-zabbix -u admin -p zabbix -f sample_zabbix_export_file.xml -t1 '2014-10-16 12:00:00' -v 1
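Reading the keys from either file type can be sketched as below; for the XML case this simply collects every `<key>` element from the export, which is an assumption about the export layout rather than the script's exact parsing logic:

```python
import xml.etree.ElementTree as ET

def read_keys(path):
    """Collect item keys: one key per line for a text file,
    or every <key> element from a Zabbix XML export file."""
    if path.endswith('.xml'):
        tree = ET.parse(path)
        return [k.text for k in tree.iter('key') if k.text]
    with open(path) as fh:
        return [line.strip() for line in fh if line.strip()]
```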

Sample Output CSV File.

The CSV file is created in the format below.
key timestamp value
TestKeyFromZabbix_VS.dlBytes 2014-10-14 12:00:00 0
TestKeyFromZabbix_VS.dlBytes 2014-10-14 12:05:00 0
TestKeyFromZabbix_VS.dlBytes 2014-10-14 12:10:00 0
TestKeyFromZabbix_VS.dlBytes 2014-10-14 12:15:00 3517
Here is the raw output. Strictly speaking, it is semicolon-separated values (SSV).
key;timestamp;value
TestKeyFromZabbix_VS.dlBytes;2014-10-14 12:00:00;0
TestKeyFromZabbix_VS.dlBytes;2014-10-14 12:05:00;0
TestKeyFromZabbix_VS.dlBytes;2014-10-14 12:10:00;0
TestKeyFromZabbix_VS.dlBytes;2014-10-14 12:15:00;3517
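Writing that output is straightforward with the `csv` module and a semicolon delimiter. A minimal sketch, assuming the history rows come back from the Zabbix `history.get` API as dicts with `clock` (epoch) and `value` fields:

```python
import csv
from datetime import datetime

def write_history_csv(filename, key, history):
    """Write history rows (dicts with 'clock' and 'value', as returned by
    the Zabbix history.get API) to a semicolon-separated CSV file."""
    with open(filename, 'w', newline='') as fh:
        writer = csv.writer(fh, delimiter=';')
        writer.writerow(['key', 'timestamp', 'value'])
        for point in history:
            # Convert the epoch 'clock' back to 'yyyy-mm-dd hh:mm:ss'
            ts = datetime.fromtimestamp(int(point['clock'])).strftime('%Y-%m-%d %H:%M:%S')
            writer.writerow([key, ts, point['value']])
```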

Usage

usage: fetch_monitoring_data_to_csv.py [-h] -s SERVER_IP -n HOSTNAME
                                       (-k KEY | -f FILE) -u USERNAME -p
                                       PASSWORD [-o OUTPUT] -t1 DATETIME_START
                                       [-t2 DATETIME_END] [-v DEBUG_LEVEL]

Fetch history from aggregator and save it into CSV file

optional arguments:
  -h, --help            show this help message and exit
  -s SERVER_IP, --server-ip SERVER_IP
                        Server IP address
  -n HOSTNAME, --hostname HOSTNAME
                        Hostname of the VM
  -k KEY, --key KEY     Zabbix Item key
  -f FILE, --file FILE  Zabbix Item key File. XML export file from Zabbix.
                        Text file with key in each line. Each key will create
                        its own csv file.
  -u USERNAME, --username USERNAME
                        Zabbix username
  -p PASSWORD, --password PASSWORD
                        Zabbix password
  -o OUTPUT, --output OUTPUT
                        Output file name, default key.csv (will remove all
                        special chars)
  -t1 DATETIME_START, --datetime-start DATETIME_START
                        Start datetime, use this pattern '2014-10-15 19:12:00'
                        if only t1 specified then time period will be t1 to
                        now()
  -t2 DATETIME_END, --datetime-end DATETIME_END
                        end datetime, use this pattern '2014-10-15 19:12:00'
  -v DEBUG_LEVEL, --debug-level DEBUG_LEVEL
                        log level, default 0
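The usage block above maps directly onto Python's argparse. A minimal sketch of a parser that would produce that help output; the option names come from the usage text, while the `build_parser` function name is my own:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description='Fetch history from aggregator and save it into CSV file')
    parser.add_argument('-s', '--server-ip', required=True, help='Server IP address')
    parser.add_argument('-n', '--hostname', required=True, help='Hostname of the VM')
    # Exactly one of -k / -f must be given, matching (-k KEY | -f FILE)
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('-k', '--key', help='Zabbix Item key')
    group.add_argument('-f', '--file', help='Zabbix Item key file')
    parser.add_argument('-u', '--username', required=True, help='Zabbix username')
    parser.add_argument('-p', '--password', required=True, help='Zabbix password')
    parser.add_argument('-o', '--output', help='Output file name, default key.csv')
    parser.add_argument('-t1', '--datetime-start', required=True,
                        help="Start datetime, e.g. '2014-10-15 19:12:00'")
    parser.add_argument('-t2', '--datetime-end',
                        help="End datetime, e.g. '2014-10-15 19:12:00'")
    parser.add_argument('-v', '--debug-level', type=int, default=0,
                        help='log level, default 0')
    return parser
```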

Code Usage

Single key data into csv

from fetch_monitoring_data_to_csv import fetch_monitoring_data_to_csv

fetch_monitoring_data_to_csv('admin','zabbix', "http://127.0.0.1/zabbix", 'BLR-IN-DEVICE',
                             'key', '', '2014-10-14 12:00:00', '', 1)

Multiple keys read from a file.

from fetch_monitoring_data_to_csv import fetch_multi_key_monitoring_data_to_csv

fetch_multi_key_monitoring_data_to_csv('admin','zabbix', "http://127.0.0.1/zabbix", 'BLR-IN-DEVICE',
                             'file.txt', '', '2014-10-14 12:00:00', '', 1)

Code Location
