
zabbix-cloudwatch's People

Contributors

atem18, bsingr, dabest1, ianssoftcom, longc, nserban, omni-lchen, roeezab, yurij-stranger


zabbix-cloudwatch's Issues

This solution works fine in Python 2.7 but gives an error in 2.6

Traceback (most recent call last):
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 415, in <module>
    cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 190, in getCloudWatchData
    aws_metrics = json.loads(open(aws_services_conf).read())
  File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.6/json/decoder.py", line 336, in raw_decode
    obj, end = self._scanner.iterscan(s, **kw).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
    rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 183, in JSONObject
    value, end = iterscan(s, idx=end, context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
    rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 217, in JSONArray
    value, end = iterscan(s, idx=end, context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
    rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 193, in JSONObject
    raise ValueError(errmsg("Expecting , delimiter", s, end - 1))
ValueError: Expecting , delimiter: line 458 column 5 (char 10715)
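The "Expecting , delimiter" error comes from parsing the metrics configuration file, not from Python 2.6 itself; the decoder message already names the line and column of the first problem. A minimal sketch of using that to pinpoint a bad entry (the sample string is an illustration of the typical culprit, a missing comma):

```python
import json

def find_json_error(text):
    """Return None if text is valid JSON, else the decoder's error message
    (which includes the line/column of the first problem)."""
    try:
        json.loads(text)
        return None
    except ValueError as e:  # json.JSONDecodeError subclasses ValueError
        return str(e)

# Typical culprit: a missing comma between fields in aws_services_metrics.conf
bad = '{"RDS": [{"metric": "CPUUtilization" "statistics": "Average"}]}'
```

Running `find_json_error(open("conf/aws_services_metrics.conf").read())` points straight at the offending line in either Python version.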

Lambda data fails to send

Hi,
I am getting the response below. Can you help me find what I am missing?

('this is in sending data', {'data': [{'host': 'localhost', 'value': 4.0, 'key': u'Lambda.Invocations.Sum'}, {'host': 'localhost', 'value': 0.0, 'key': u'Lambda.Errors.Sum'}, {'host': 'localhost', 'value': 257.3425, 'key': u'Lambda.Duration.Average'}, {'host': 'localhost', 'value': 0.0, 'key': u'Lambda.Throttles.Sum'}], 'request': 'sender data'})
('sEND FATA', 'ZBXD\x01;\x01\x00\x00\x00\x00\x00\x00{"data": [{"host": "localhost", "value": 4.0, "key": "Lambda.Invocations.Sum"}, {"host": "localhost", "value": 0.0, "key": "Lambda.Errors.Sum"}, {"host": "localhost", "value": 257.3425, "key": "Lambda.Duration.Average"}, {"host": "localhost", "value": 0.0, "key": "Lambda.Throttles.Sum"}], "request": "sender data"}')
('this is response', [(1, {u'info': u'processed: 0; failed: 4; total: 4; seconds spent: 0.000057', u'response': u'success'})])
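A reply of "processed: 0; failed: 4" means the server accepted the packet but rejected every value, which usually indicates the host name or item keys don't match trapper items on that host. For reference, the wire format visible in the output above (the ZBXD\x01 prefix) is just a 5-byte signature plus a little-endian 64-bit payload length; a sketch of the framing:

```python
import json
import struct

def build_zabbix_packet(payload_dict):
    """Frame a Zabbix sender request: b'ZBXD\\x01' followed by an 8-byte
    little-endian payload length, then the JSON body."""
    body = json.dumps(payload_dict).encode("utf-8")
    return b"ZBXD\x01" + struct.pack("<Q", len(body)) + body

pkt = build_zabbix_packet({"request": "sender data", "data": []})
```

So the framing in the output is correct; the thing to check is that each "key" exists as a Zabbix trapper item on a host named exactly "localhost".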

No data on Zabbix graphs.

Hello,
I want to monitor CloudFront data. I installed the templates but I cannot see any data on the graphs.
Can you help me with CloudFront?

LLD discovery EBS-no data in zabbix

I modified awsLLD.py:

def getEBS(a, r, c):

    account = a
    aws_account = awsAccount(account)
    aws_access_key_id = aws_account._aws_access_key_id
    aws_secret_access_key = aws_account._aws_secret_access_key
    aws_region = r
    component = c

    # Init LLD data
    lldlist = []
    llddata = {"data": lldlist}

    # Connect to the EC2 service
    conn = awsConnection()
    conn.ebsConnect(aws_region, aws_access_key_id, aws_secret_access_key)
    ebsConn = conn._aws_connection

    # Save volume ids in a list
    tdata = []

    # Get a list of EBS volumes
    ebsResults = ebsConn.get_all_volumes()
    for v in ebsResults:
        ebs = v.id
        if re.search(component, ebs) and re.search('', ebs, re.I):
            tdata.append(ebs)

    if tdata:
        for x in tdata:
            item = {}
            # Get aws account
            item["{#AWS_ACCOUNT}"] = a
            # Get aws region
            item["{#AWS_REGION}"] = r
            # Get volume id
            item["{#EBS_ID}"] = x
            lldlist.append(item)

    print json.dumps(llddata, indent=4)

./awsLLD.py -a "default" -r "eu-west-1" -q "EBS" -c ''
{
"data": [
{
"{#AWS_REGION}": "eu-west-1",
"{#EBS_ID}": "vol-895aa039",
"{#AWS_ACCOUNT}": "default"
},
{
"{#AWS_REGION}": "eu-west-1",
"{#EBS_ID}": "vol-3d9b518d",
"{#AWS_ACCOUNT}": "default"
},


zabbixCloudWatch.py:

elif aws_service == 'EBS':
    ebs_id = dimensions['VolumeId']
    zabbix_key = aws_service + '.' + metric_name + '.' + statistics + '["' + account + '","' + aws_region + '","' + ebs_id + '"]'

/opt/zabbix/cloudwatch/zabbix-cloudwatch/cron.d/cron.EBS.sh "vol-0164c6bbf17b06458" "Zabbix_Test" "localhost" "default" "eu-west-1"
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeReadBytes.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeWriteBytes.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeReadOps.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeWriteOps.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeTotalReadTime.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeTotalWriteTime.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeIdleTime.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeQueueLength.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeThroughputPercentage.Average["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.VolumeConsumedReadWriteOps.Sum["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
{'host': 'Zabbix_Test', 'value': 0, 'key': u'EBS.BurstBalance.Average["default","eu-west-1","vol-0164c6bbf17b06458"]', 'clock': 1533563220}
Count: 11

but no data in zabbix

Need some explanation for adding hosts

Can someone explain step 6, "Create a new host and linked with the template"?

What does "new host" mean? Is it, for example, an RDS instance, or something else?

Thanks in advance

SNS discovery: unknown query

I imported the SNS example template and attached it to the Zabbix host.

My topic is named "topic".

AWS-CloudWatch: Zabbix host name
localhost: Zabbix server

./cron.SNS.sh "topic" "AWS-CloudWatch" "localhost" "default" "eu-west-1"

no results

./awsLLD.py -a "default" -r "eu-west-1" -q "SNSTopics" -c "topic"

{u'ListTopicsResponse': {u'ResponseMetadata': {u'RequestId': u'9cce08fb-1215-5273-9761-5b3601c7ec11'}, u'ListTopicsResult': {u'Topics': [{u'TopicArn': u'arn:aws:sns:eu-west-1:233135199200:topic'}], u'NextToken': None}}}
{
"data": []
}

Not sure if I missed some steps (step 5 is confusing for me).

AttributeError: 'NoneType' object has no attribute 'get_metric_statistics'

Hi,

First, thank you for your work. But I have a problem when I launch the CLI command to retrieve metrics from an RDS instance.

When I launch this command:
./cron.RDS.sh "database-zabbix" "FOUQUET RDS" "127.0.0.1" "awsfouquet" "eu-west-3"
Or this command:
./zabbixCloudWatch.py -z "127.0.0.1" -x "FOUQUET RDS" -a "awsfouquet" -r "" -s "RDS" -d "DBInstanceIdentifier=database-zabbix" -p "60" -f "2019-12-05 15:30:00" -t "2019-12-05 16:00:00"

I get these errors:

Traceback (most recent call last):
  File "./zabbixCloudWatch.py", line 394, in <module>
    cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)
  File "./zabbixCloudWatch.py", line 181, in getCloudWatchData
    results = cw.get_metric_statistics(period, start_time, end_time, metric_name, namespace, statistics, dimensions)
AttributeError: 'NoneType' object has no attribute 'get_metric_statistics'

But if I try to get the metrics from the AWS CLI, it works well:
aws cloudwatch get-metric-statistics --namespace 'AWS/RDS' --metric-name 'CPUUtilization' --dimensions Name=DBInstanceIdentifier,Value=database-zabbix --start-time '2019-12-05T12:00:00Z' --end-time '2019-12-05T12:30:00Z' --period 60 --statistics 'Maximum'

Result :
{
"Timestamp": "2019-12-05T12:24:00Z",
"Maximum": 1.47540983606554,
"Unit": "Percent"
}
],
"Label": "CPUUtilization"

Thanks for your help.

File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 370, in <module>

Hi, I got this error msg...

Traceback (most recent call last):
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 370, in <module>
    cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 172, in getCloudWatchData
    aws_metrics = json.loads(open(aws_services_conf).read())
  File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python2.7/json/decoder.py", line 367, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 10 - line 403 column 1 (char 9 - 9190)

RedShift graph no data

Hi ,

I have configured ELB and CloudFront, and those work fine with the provided templates, but RedShift is not working. I am not getting any data in Zabbix.

Cron entry.
/bin/bash /opt/zabbix/cloudwatch/cron.d/cron.RedShift.sh "prod-Redshift-2017" "prod-Redshift-2017" "zabbix.production.com" "default" "us-east-1"

The AWS user has full access to CloudWatch, and the same user is used for monitoring other services too.
The template is imported correctly. Can you help me here, please?

Thanks,
Pratap

data not accepted by zabbix if includes 'clock'

Using Zabbix 3.0.3. Data is only accepted by the Zabbix server if I comment out lines 71-72 in pyZabbixSender.py, i.e. do not include the 'clock' key in __createDataPoint():

def __createDataPoint(self, host, key, value, clock=None):
    '''
    Creates a dictionary using provided parameters, as needed for sending this data.
    '''
    obj = {
        'host': host,
        'key': key,
        'value': value,
    }
    # if clock:
    #     obj['clock'] = clock
    return obj

This is what I get when I run the script with verbose=True:

Failures reported by zabbix when sending:
{"data": [{"host": "ELB-xxxxx-us-pub", "value": 2.0, "key": "ELB.HealthyHostCount.Average", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0.0, "key": "ELB.UnHealthyHostCount.Average", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 110.0, "key": "ELB.RequestCount.Sum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0.11984923536127264, "key": "ELB.Latency.Average", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0, "key": "ELB.HTTPCode_ELB_4XX.Sum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0, "key": "ELB.HTTPCode_ELB_5XX.Sum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 105.0, "key": "ELB.HTTPCode_Backend_2XX.Sum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 5.0, "key": "ELB.HTTPCode_Backend_3XX.Sum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0, "key": "ELB.HTTPCode_Backend_4XX.Sum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0, "key": "ELB.HTTPCode_Backend_5XX.Sum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0, "key": "ELB.BackendConnectionErrors.Sum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0, "key": "ELB.SurgeQueueLength.Maximum", "clock": 1465229160.0}, {"host": "ELB-xxxxx-us-pub", "value": 0, "key": "ELB.SpilloverCount.Sum", "clock": 1465229160.0}], "request": "sender data"}

A reason for the failure could be that the whole request doesn't include a timestamp; see:

https://www.zabbix.org/wiki/Docs/protocols/zabbix_sender/2.0
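Two things stand out in the failing payload: the per-item clock values are floats (1465229160.0), and, as the protocol page above notes, a request carrying per-item timestamps should also carry a top-level "clock". A hedged sketch of normalizing the data before sending (function name and structure are illustrative, not the project's actual code):

```python
import time

def normalize_sender_request(items):
    """Coerce per-item clocks to int and add a top-level clock,
    as the zabbix_sender 2.0 protocol expects."""
    data = []
    for item in items:
        item = dict(item)
        if "clock" in item:
            item["clock"] = int(item["clock"])  # avoid float clocks like 1465229160.0
        data.append(item)
    return {"request": "sender data", "data": data, "clock": int(time.time())}

req = normalize_sender_request(
    [{"host": "ELB-xxxxx-us-pub", "key": "ELB.RequestCount.Sum",
      "value": 110.0, "clock": 1465229160.0}]
)
```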

Problems configuring

I'm trying to configure this plugin, but I have doubts about the configuration,
mostly item 3 of the README.

Could you please help me?

Can you add support for accepting the time in the EST time zone?

Currently the script accepts times in UTC, since the EC2 instance the scripts run on is in UTC. Our Zabbix server is in EST and we want to pass EST times to the script and get the data from CloudWatch for that particular window. Can you do that?
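CloudWatch itself always works in UTC, so rather than changing the API calls, the EST window can be converted to UTC before it is passed to -f/-t. A sketch using only the standard library (the fixed -5:00 offset is an assumption that ignores daylight saving; zoneinfo's "America/New_York" handles DST on Python 3.9+):

```python
from datetime import datetime, timedelta

EST_UTC_OFFSET = timedelta(hours=-5)  # fixed EST offset; DST not handled here

def est_to_utc(est_str, fmt="%Y-%m-%d %H:%M:%S"):
    """Convert an EST timestamp string to the UTC string CloudWatch expects."""
    est = datetime.strptime(est_str, fmt)
    return (est - EST_UTC_OFFSET).strftime(fmt)

utc_start = est_to_utc("2016-06-06 10:00:00")
```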

Pulling metrics for RDS

I am currently running Zabbix 3.2 and running into the issue below when pulling metrics for RDS. However, metrics for ELB work without an issue. Not sure if this is due to the last update that fixed the date timestamp. It looks like it's not returning any metrics in the array as well. Can you please help with this? Thanks!

Traceback (most recent call last):
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 368, in <module>
    sendLatestCloudWatchData(zabbix_server, zabbix_host, cw_data)
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 221, in sendLatestCloudWatchData
    zabbix_key_timestamp = int(time.mktime(sorts[0]['Timestamp'].timetuple()))
UnboundLocalError: local variable 'sorts' referenced before assignment
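The UnboundLocalError means CloudWatch returned no datapoints for at least one metric, so the sorts variable inside sendLatestCloudWatchData is never assigned before it is used. A defensive sketch of the same idea (names follow the traceback, but this is illustrative, not the project's actual code):

```python
import time

def latest_datapoint_clock(datapoints):
    """Return the epoch clock of the newest datapoint, or None if the
    metric returned no data (the case that triggers the UnboundLocalError)."""
    if not datapoints:
        return None
    sorts = sorted(datapoints, key=lambda d: d["Timestamp"], reverse=True)
    return int(time.mktime(sorts[0]["Timestamp"].timetuple()))
```

Callers would then skip metrics where this returns None instead of sending them.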

configuration issue with RDS

Hi
Well, I'm stuck at step 5 when configuring the Zabbix trapper; what do you mean by discovery_item_value?
Then in step 6, when I try to add the new host, how can I find the connection port (default value: 10050)?

Could you please enlighten me on this :)

Can you elaborate more on how to set this up?

As far as I understand, you have prepared everything, including the templates and scripts.
In my case, I just want to integrate ELB metrics into my Zabbix server.
How should I do that? Please correct me if I'm wrong.

  1. Install boto.
  2. Create the folder /opt/zabbix/cloudwatch.
  3. Add AWS credentials.
  4. Because aws_services_metrics.conf has every metric, I don't think I need to add anything.
  5. I don't know how to create a template and items, so I would use the ELB template you created and import it into Zabbix.
  6. Create a new host and link it with the ELB template.
  7. Execute cron.ELB.sh.

Is that correct?

cron job doesn't work

When I execute cron.Service.sh with parameters manually, it works fine and data is sent to Zabbix, but from the cron job nothing happens.

i tried crontab -e

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:/opt/zabbix/cloudwatch/zabbix-cloudwatch/cron.d

*/1 * * * * root /opt/zabbix/cloudwatch/zabbix-cloudwatch/cron.d/cron.Lambda.sh "lambda-cllean-unknown-instances" "aws_north_virginia" "localhost" "default" "us-east-1" >/tmp/2.txt

The file 2.txt is created but it's empty.

The file is not even created if it is specified in the crontab file.

Monitor multiple instances

Let's say I have multiple load balancers that I want to monitor. I would run something like this:
cron.ApplicationELB.sh "LoadBalancer1" "zabbix_host" "zabbix_server" "aws_account" "aws_region"
Now what about LoadBalancer2? I assume I would create another item on the host with a unique key but how would I manage that with the code?
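Each load balancer needs its own item, and the discovery-style key format used elsewhere in this repo (e.g. the EBS keys above, Service.Metric.Statistic["account","region","id"]) embeds the identifier so one item prototype covers them all. A sketch of composing such keys (illustrative helper, not part of the repo):

```python
def build_key(service, metric, statistic, account, region, resource_id):
    """Compose a Zabbix trapper key in the repo's discovery format:
    Service.Metric.Statistic["account","region","id"]."""
    return '%s.%s.%s["%s","%s","%s"]' % (
        service, metric, statistic, account, region, resource_id)

key1 = build_key("ELB", "RequestCount", "Sum", "default", "us-east-1", "LoadBalancer1")
key2 = build_key("ELB", "RequestCount", "Sum", "default", "us-east-1", "LoadBalancer2")
```

With unique keys per load balancer, one cron line per load balancer (LoadBalancer1, LoadBalancer2, ...) can feed the same host without collisions.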

RDS graph [No data], but JSON data is returned

Help me please.
The JSON data returned is below, but the RDS graph shows [No data].

root@xxxx:/opt/zabbix/cloudwatch# /opt/zabbix/cloudwatch/cron.d/cron.RDS.sh "xxxxxx" "xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com" "localhost" "aws_account_1" "ap-southeast-1"
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 12151499.4, 'key': u'RDS.BinLogDiskUsage.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 13.745, 'key': u'RDS.CPUUtilization.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 3.4, 'key': u'RDS.DatabaseConnections.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 0.37352823270672225, 'key': u'RDS.DiskQueueDepth.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 157339648.0, 'key': u'RDS.FreeableMemory.Minimum', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 81383927808.0, 'key': u'RDS.FreeStorageSpace.Minimum', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 0, 'key': u'RDS.ReplicaLag.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 62583603.2, 'key': u'RDS.SwapUsage.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 0.8000537492719946, 'key': u'RDS.ReadIOPS.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 287.7268591888593, 'key': u'RDS.WriteIOPS.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 0.0010479477124183006, 'key': u'RDS.ReadLatency.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 0.0013828101671950967, 'key': u'RDS.WriteLatency.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 9393.58107745714, 'key': u'RDS.ReadThroughput.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 3034772.928814686, 'key': u'RDS.WriteThroughput.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 21776.359648373196, 'key': u'RDS.NetworkReceiveThroughput.Average', 'clock': 1509282120}
{'host': 'xxxxxx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com', 'value': 31777.87956107002, 'key': u'RDS.NetworkTransmitThroughput.Average', 'clock': 1509282120}

Shell script question: variables in the cron.ES.sh script

In the script there are some variables; what should I fill in, in this case?

# Client Id
CLIENT_ID=$1
# Domain Name
DOMAIN_NAME=$2
# Zabbix Host
ZABBIX_HOST=$3
# Zabbix Server
ZABBIX_SERVER=$4
# AWS Account
ACCOUNT=$5
# AWS Region
REGION=$6

What should I fill in for those variables when I run this script?

Wrong values when change timezone

Hello,

I use the timezone BRT (-3), so in the file "cron.d/cron.RDS.sh" I changed the variables "ENDTIME" and "STARTTIME", removing "-u", e.g.:

ENDTIME=$(date "+%F %H:%M:00")
STARTTIME=$(date "+%F %H:%M:00" -d "5 minutes ago")

This way the data arrives at my zabbix-server with the correct timezone, but it creates problems in the values; they are not correct.

How can I solve this problem?

Thank you

ElastiCache metrics returning 0 values for all metrics

As per the AWS documentation, to get the CloudWatch metrics for ElastiCache we need to provide both CacheClusterId and CacheNodeId. But in this solution we only supply CacheClusterId, not CacheNodeId, and we are getting 0 values for all the metrics.

Error when trying to get SNS data from CW

Hello!
I have a problem when I try to get SNS data from CloudWatch...
It doesn't return anything when I use this command:

bash cron.d/cron.SNS.sh "<SNS-TOPIC>" "<ZABBIX-HOST>" "<ZABBIX-SERVER>" "<AWS-PROFILE>" "<AWS-REGION>"

And if I try to check with awsLLD, it doesn't return any data...

python awsLLD.py -a "<AWS-PROFILE>" -r "<AWS-REGION>" -q "SNSTopics" -c "<SNS-TOPIC>"

{
    "data": []
}

It happens with any SNS topic in all the AWS accounts I have, no matter what I try.
I put some prints in the code to track down the problem, and apparently when this line runs: nextToken = topicsResults['ListTopicsResponse']['ListTopicsResult']['NextToken'] my "nextToken" comes back empty...
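The output above shows boto returning u'NextToken': None when all topics fit in one page; if the discovery code treats that as a reason to keep paginating, or filters on the wrong field, the result list comes out empty. A sketch of extracting and filtering topics defensively (the response shape follows the dict printed above; the helper is illustrative):

```python
def extract_topic_names(response, component=""):
    """Pull topic names out of a boto ListTopics response dict and filter
    by substring; NextToken may legitimately be None (no more pages)."""
    result = response["ListTopicsResponse"]["ListTopicsResult"]
    names = []
    for t in result.get("Topics") or []:
        name = t["TopicArn"].split(":")[-1]  # arn:aws:sns:region:account:NAME
        if component in name:
            names.append(name)
    next_token = result.get("NextToken")  # None means pagination is done
    return names, next_token

# Shape copied from the print output above (account id is a placeholder)
resp = {"ListTopicsResponse": {
    "ResponseMetadata": {"RequestId": "example-request-id"},
    "ListTopicsResult": {
        "Topics": [{"TopicArn": "arn:aws:sns:eu-west-1:123456789012:topic"}],
        "NextToken": None}}}
```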

Need Assistance with Setup

Alright, I'm super confused. Here is what I have so far:

  • Install boto on zabbix server via pip
    - Installed (couldn't use pip command on Ubuntu)

  • Create cloudwatch directory in "/opt/zabbix/cloudwatch" and do a git pull from this repository.
    - I created the directory and did a git pull to download the contents to the directory

  • Add AWS account credential to the configuration file "conf/awscred".
    - I created a new IAM account (CLI only) and gave it "ReadOnlyAccess" for all AWS services. I then named the account in the [] brackets to match what I am using in the cron job (treenonprod). I then added in the access ID and secret key values.

  • Find the metrics from AmazonCloudWatch Developer Guide and add metrics to the configuration file "conf/aws_services_metrics.conf".
    - I didn't need to do this as your template already included all the things I needed.

  • Create a zabbix template for an AWS service, then create items with metrics key by using zabbix trapper type. AWS Metric Zabbix Trapper Item Key Format without Discovery or AWS Metric Zabbix Trapper Item Key Format with Discovery.
    - Here's where I started to get confused. Can I not just import your template, assign it to a host and then let the scripts populate the data?

  • Create a new host and linked with the template.
    - I created a Zabbix host for an ELB in AWS (named the same as the AWS object), linked the ELB template to it successfully

  • Create a cloudwatch bash wrapper script for cron job.
    - I am using the cron.ELB.sh script in /opt/zabbix/cloudwatch/cron.d.

  • Create a new cron job to send the cloudwatch metrics to the host.
    - I created a new crontab via "crontab -e" and pasted in the following:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:/opt/zabbix/cloudwatch/cron.d

# ELB monitoring

*/5 * * * * root cron.ELB.sh "adfs-int-elb" "adfs-int-elb" "monitor.lendingtree.com" "treenonprod" "us-east-1" &>/dev/null

=============================================================

Currently the cron job is returning nothing to my Zabbix host in "Latest data". I also tried running the script with arguments manually, without a cron job, and I get this error:

[error screenshots not included]

What the hell do I do next? I don't know what I'm missing. Please advise.

ES and ELB return zero value

I have everything configured; RDS, CloudFront, etc. are OK.
But when I configure monitoring for ES and ELB on AWS, the data returned is zero. I checked CloudWatch on AWS and it's not like that there.

Help me pleass...

CloudWatch ERROR: BotoServerError: 403 Forbidden

I am getting CloudWatch ERROR: BotoServerError: 403 Forbidden.

I have configured everything and I am running the script like this:
./cron.RDS.sh "bettr-dev" "Aws Rds" "localhost" "aws_account_1" "ap-south-1"

my rds DBInstanceIdentifier = "bettr-dev"
my zabbix host name = "Aws Rds"
zabbix server is running at localhost
I have already given the credentials like this:
[aws_account_1]
aws_access_key_id = 'xxxxxxxxxxxxxxxxxxx'
aws_secret_access_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxx'
region = "ap-south-1"

Is everything fine? If not, what am I doing wrong?

line 27: zabbixCloudWatch.py: command not found

Hi, help me please.
I have a problem when I run the CLI below:
/bin/bash /opt/zabbix/cloudwatch/zabbix-cloudwatch/cron.d/cron.RDS.sh "database" "localhost" "databases.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com" "zabbix" "ap-southeast-1a"

Thanks

Solved

PostgreSQL monitoring

Hi,

We've been using this repository for some time to monitor our MySQL databases, with no issues so far.
Recently we added a PostgreSQL instance to our RDS and we weren't able to integrate it into Zabbix. The error we got was:

# ./zabbixCloudWatch.py -z "localhost" -x "pginstance" -a "aws_account" -r "aws_region" -s "RDS" -d "DBInstanceIdentifier=pginstance" -p "300" -f "2018-04-16 13:10:00" -t "2018-04-16 13:15:00"
Traceback (most recent call last):
  File "./zabbixCloudWatch.py", line 368, in <module>
    sendLatestCloudWatchData(zabbix_server, zabbix_host, cw_data)
  File "./zabbixCloudWatch.py", line 221, in sendLatestCloudWatchData
    zabbix_key_timestamp = int(time.mktime(sorts[0]['Timestamp'].timetuple()))
UnboundLocalError: local variable 'sorts' referenced before assignment

After some investigation I found that, because the first metric (BinLogDiskUsage) is MySQL-only, the PostgreSQL instance wasn't able to populate Zabbix.

I swapped lines 62-63 with lines 66-67 in the conf/aws_services_metrics.conf file and now we're able to monitor PostgreSQL as well.
Old

    60	    "RDS": [
    61	        {
    62	            "metric": "BinLogDiskUsage",
    63	            "statistics": "Average"
    64	        },
    65		{
    66	            "metric": "CPUUtilization",
    67	            "statistics": "Average"
    68	        },

New

    60	    "RDS": [
    61	        {
    62	            "metric": "CPUUtilization",
    63	            "statistics": "Average"
    64	        },
    65		{
    66	            "metric": "BinLogDiskUsage",
    67	            "statistics": "Average"
    68	        },

I hope this can help others with the same issue ;)

When I run the cron.RDS.sh script, zabbixCloudWatch.py does not get the variable values ($ZABBIX_SERVER, $ZABBIX_HOST, ...)

When I run the script zabbixCloudWatch.py directly, the CloudWatch information for the EC2 server is transferred to the Zabbix server.

[root@redhat cloudwatch]# /usr/bin/python zabbixCloudWatch.py -x "54.212.211.137" -z "superman-zabbix.man-webplatform.com" -a "CloudWatch-test" -r "cn-north-1" -s "EC2" -d "InstanceId=i-0d3138e5be40eft01" -p "300" -f "2017-12-22 01:40:00" -t "2017-12-22 01:45:00"

I created a cron.AWSInstance.sh script (copied from cron.RDS.sh).
The content of the cron.AWSInstance.sh script is:

#!/bin/bash

PATH=$PATH:/opt/zabbix/cloudwatch
export PATH

#EC2 instance identifier
ID=$1
#Zabbix Host
ZABBIX_HOST=$2
#Zabbix Server
ZABBIX_SERVER=$3
#AWS Account
ACCOUNT=$4
#AWS Region
REGION=$5
#Collecting 5-minute data from cloudwatch
PERIOD="300"
#Set start time and end time for collecting cloudwatch data
ENDTIME=$(date -u "+%F %H:%M:00")
STARTTIME=$(date -u "+%F %H:%M:00" -d "5 minutes ago")

#Send cloudwatch data to Zabbix Server
zabbixCloudWatch.py -z "$Zabbix Server" -x "$ZABBIX_HOST" -a "$ACCOUNT" -r "$REGION" -s "EC2" -d "InstanceId=$ID" -p "$PERIOD" -f "$STARTTIME" -t "$ENDTIME"

When I run the cron.AWSInstance.sh script like this :
./cron.AWSINSTANCE.sh "i-0d3138e5be40eft01" "54.212.211.137" "superman-zabbix.man-webplatform.com" "CloudWatch-test" "cn-north-1" &>/dev/null

The CloudWatch information for the EC2 server is not transferred to the Zabbix server.

My question is:
can these variables defined in the shell script (such as $ZABBIX_SERVER and $ENDTIME) be passed directly as parameters to a python script, like this:
zabbixCloudWatch.py -z "$ZABBIX_SERVER" -x "$ZABBIX_HOST" -a "$ACCOUNT" -r "$REGION" -s "EC2" -d "InstanceId=$ID" -p "$PERIOD" -f "$STARTTIME" -t "$ENDTIME"

It feels like zabbixCloudWatch.py did not get these variable values ($ZABBIX_SERVER, $ZABBIX_HOST, ...).

File "/usr/lib64/python2.7/json/decoder.py" ....... ValueError: Expecting , delimiter: line 20 column 37 (char 393)

Hi! Upon manually running an EC2 wrapper script, I get the following error:

Traceback (most recent call last):
  File "/opt/zabbix/cloudwatch/zabbix-cloudwatch/zabbixCloudWatch.py", line 394, in <module>
    cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)
  File "/opt/zabbix/cloudwatch/zabbix-cloudwatch/zabbixCloudWatch.py", line 172, in getCloudWatchData
    aws_metrics = json.loads(open(aws_services_conf).read())
  File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 20 column 37 (char 393)

The command I was running is:
/opt/zabbix/cloudwatch/zabbix-cloudwatch/cron.d/cron.EC2.sh "instanceid" "hostname" "servername" "acctname" "region"

with the parameters specified accordingly.

Any help/enlightenment would be much appreciated!

error in running zabbixCloudWatch.py script

Hi, while running the command below, I am getting the following error:

/usr/bin/python /opt/zabbix/cloudwatch/zabbixCloudWatch.py -z "172.31.31.18" -x "aniltariyal4c.mylabserver.com" -a "rahprasingh" -r "ap-south-1a" -s "RDS" -d "zabbixdb" -p "300" -f "2019-09-18 16:20:00" -t "2019-09-18 16:25:00"

Traceback (most recent call last):
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 352, in <module>
    dimensions = dimConvert(options.dimensions)
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 50, in dimConvert
    dim[secondSplit[0]] = secondSplit[1]
IndexError: list index out of range

I have only basic knowledge of Python, please help.
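The IndexError comes from the -d argument: dimConvert expects "Name=Value" pairs (e.g. "DBInstanceIdentifier=zabbixdb"), and a bare "zabbixdb" has nothing after the split. A sketch of that parsing with a clearer failure mode (dimConvert's real code is in zabbixCloudWatch.py; this helper is illustrative):

```python
def dim_convert(dimensions_arg):
    """Parse 'Name=Value,Name2=Value2' into a dict, raising a readable
    error instead of IndexError when the '=' is missing."""
    dims = {}
    for pair in dimensions_arg.split(","):
        parts = pair.split("=", 1)
        if len(parts) != 2:
            raise ValueError(
                "dimension %r must be in Name=Value form, "
                "e.g. DBInstanceIdentifier=zabbixdb" % pair)
        dims[parts[0]] = parts[1]
    return dims

dims = dim_convert("DBInstanceIdentifier=zabbixdb")
```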

Lambda function/ELB/Application ELB names

This works like a charm, but how do I add a variable for the region/RDS/Lambda/LB name in items/triggers, so I know which Lambda/RDS/other service has issues?
How do I distinguish them if more than one RDS, for example, is monitored?

Like in the SNS template, for example.

AttributeError: 'NoneType' object has no attribute 'get_metric_statistics'

what's wrong?

root@xxx:/opt/zabbix/cloudwatch/conf# /opt/zabbix/cloudwatch/cron.d/cron.RDS.sh "databasexx" "localhost" "databasexx.clzmdeoafdm1.ap-southeast-1.rds.amazonaws.com" "zabbix" "ap-southeast-1a"
Traceback (most recent call last):
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 370, in <module>
    cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)
  File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 181, in getCloudWatchData
    results = cw.get_metric_statistics(period, start_time, end_time, metric_name, namespace, statistics, dimensions)
AttributeError: 'NoneType' object has no attribute 'get_metric_statistics'

Error talking to server: a bytes-like object is required, not 'str' - python3

I am using Python 3.
I am facing many issues with Python 3 and this plugin; however, I think I am very near to making it work.
I am facing the error below when I remove the proxy:

./cron.ES.sh xxxxxxxxxx cssdev xxxxxxxxxxx xxxxxxxx xxxxxxxxxx ap-southeast-1
Error talking to server: a bytes-like object is required, not 'str'

If I keep the proxy, I get the error below:

Traceback (most recent call last):
  File "/usr/lib/zabbix/externalscripts/zabbixCloudWatch.py", line 394, in <module>
    cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)
  File "/usr/lib/zabbix/externalscripts/zabbixCloudWatch.py", line 181, in getCloudWatchData
    results = cw.get_metric_statistics(period, start_time, end_time, metric_name, namespace, statistics, dimensions)
  File "/var/lib/zabbix/my_app/env/lib64/python3.6/site-packages/boto/ec2/cloudwatch/__init__.py", line 239, in get_metric_statistics
    [('member', Datapoint)])
  File "/var/lib/zabbix/my_app/env/lib64/python3.6/site-packages/boto/connection.py", line 1170, in get_list
    response = self.make_request(action, params, path, verb)
  File "/var/lib/zabbix/my_app/env/lib64/python3.6/site-packages/boto/connection.py", line 1116, in make_request
    return self._mexe(http_request)
  File "/var/lib/zabbix/my_app/env/lib64/python3.6/site-packages/boto/connection.py", line 913, in _mexe
    self.is_secure)
  File "/var/lib/zabbix/my_app/env/lib64/python3.6/site-packages/boto/connection.py", line 705, in get_http_connection
    return self.new_http_connection(host, port, is_secure)
  File "/var/lib/zabbix/my_app/env/lib64/python3.6/site-packages/boto/connection.py", line 747, in new_http_connection
    connection = self.proxy_ssl(host, is_secure and 443 or 80)
  File "/var/lib/zabbix/my_app/env/lib64/python3.6/site-packages/boto/connection.py", line 796, in proxy_ssl
    sock.sendall("CONNECT %s HTTP/1.0\r\n" % host)
TypeError: a bytes-like object is required, not 'str'

Could you please provide a hint to make this work?
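The traceback points at boto 2's `proxy_ssl()` passing a str to `socket.sendall()`, which requires bytes on Python 3. A local patch sketch (not an official boto fix) is to encode the proxy CONNECT line before sending; the host value below is an example:

```python
# Sketch of the Python 3 fix for boto/connection.py line 796: build the proxy
# CONNECT request as bytes, since Python 3 sockets reject str in sendall().
def connect_request(host):
    """Return the proxy CONNECT line encoded as bytes."""
    return ("CONNECT %s HTTP/1.0\r\n" % host).encode("ascii")
```

The same str-vs-bytes issue can appear in other places in boto 2 under Python 3, so each `sendall()` call in `proxy_ssl()` may need the same treatment.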

How can I monitor custom metrics such as DiskSpaceAvailable, DiskSpaceUsed, DiskSpaceUtilization, MemoryAvailable, MemoryUsed, MemoryUtilization?

There are some metrics, such as DiskSpaceAvailable, DiskSpaceUsed, DiskSpaceUtilization, MemoryAvailable, MemoryUsed, and MemoryUtilization.

These are custom metrics.

This article describes them:

http://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/mon-scripts.html

According to step 4 of your method:

Find the metrics from AmazonCloudWatch Developer Guide and add metrics to the configuration file "conf/aws_services_metrics.conf".

For example, I added these custom metrics to the configuration file:

"EC2": [
    { "metric": "DiskSpaceAvailable",   "statistics": "Average" },
    { "metric": "DiskSpaceUsed",        "statistics": "Average" },
    { "metric": "DiskSpaceUtilization", "statistics": "Average" },
    { "metric": "MemoryAvailable",      "statistics": "Average" },
    { "metric": "MemoryUsed",           "statistics": "Average" },
    { "metric": "MemoryUtilization",    "statistics": "Average" }
]

According to step 5 of your method:

Create a Zabbix template for an AWS service, then create items with metric keys using the Zabbix trapper type.

Sample templates can be found in "templates" folder.

AWS Metric Zabbix Trapper Item Key Format without Discovery.

Key: <aws_service>.<metric>.<statistic>

For example, I created an item named DiskSpaceUtilization with the key EC2.DiskSpaceUtilization.Average.

But when I run the zabbix-cloudwatch Python script, these custom metric items show no data in the Zabbix server dashboard.

Some standard EC2 metric items in the Zabbix server, such as CPUUtilization, do have data.

So the questions are:
Does the zabbix-cloudwatch Python script support custom metrics?
If it does, what steps should I take to achieve this?
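One likely cause, stated as an assumption: the metrics published by AWS's mon-scripts (mon-put-instance-data.pl) live in the custom "System/Linux" namespace, not in "AWS/EC2", so listing them under "EC2" in aws_services_metrics.conf makes the script query the wrong namespace and CloudWatch returns no datapoints. The helper below only illustrates the request parameters such a query would need; it is a sketch, not the plugin's API:

```python
# Sketch: custom metrics from the CloudWatch monitoring scripts must be
# queried in the "System/Linux" namespace with matching dimensions
# (e.g. InstanceId for memory metrics), or no datapoints come back.
import datetime

def custom_metric_request(metric, instance_id):
    """Illustrative parameters for a custom-namespace CloudWatch query."""
    end = datetime.datetime.utcnow()
    return {
        "namespace": "System/Linux",               # custom namespace, not AWS/EC2
        "metric_name": metric,                     # e.g. "MemoryUtilization"
        "dimensions": {"InstanceId": instance_id},
        "statistics": ["Average"],
        "period": 300,
        "start_time": end - datetime.timedelta(minutes=10),
        "end_time": end,
    }
```

Note that the disk metrics additionally carry Filesystem and MountPath dimensions in the mon-scripts documentation, so those would have to match too.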

issue with cron.ELB.sh

When I run the Python script directly it works fine, but when it is run through the bash wrapper (cron.ELB.sh) it throws this error:
[root@ip-10-204-96-207 cron.d]# ./cron.ELB.sh he-dportal-dev-web-elb HL-DirectPortal-Web-ELB zbxserverdev.clouddqt.capitalone.com GR_GG_COF_AWS_COAF_Dev_Developer us-east-1us-east-1
Traceback (most recent call last):
File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 415, in <module>
cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)
File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 198, in getCloudWatchData
results = cw.get_metric_statistics(period, start_time, end_time, metric_name, namespace, statistics, dimensions)
AttributeError: 'NoneType' object has no attribute 'get_metric_statistics'
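The command above passes "us-east-1us-east-1" as the region (two region strings run together), and boto then returns None for the connection, which produces exactly this AttributeError. A simple sanity check on the wrapper's region argument catches this early; the pattern below is a deliberately simplified sketch, not a complete list of valid AWS region formats:

```python
# Sketch: reject obviously malformed region strings (like a doubled
# "us-east-1us-east-1") before handing them to boto.
import re

def looks_like_region(region):
    """Crude shape check for AWS region names such as 'us-east-1'."""
    return re.fullmatch(r"[a-z]{2}(-[a-z]+)+-\d", region) is not None
```

Calling this at the top of the script and exiting with a clear message would make the cron wrapper's argument mistake obvious instead of failing deep inside getCloudWatchData().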

Request for clarity of execution

Hello,

I am new to the Zabbix monitoring platform, and I am working to configure your module to integrate AWS CloudWatch with Zabbix and my AWS environment. I have a few questions regarding the setup of the module.

Under the "Installation" section of the README.md, I have the following questions,

Starting with step 3, I applied the credentials for connecting to the AWS account in the "awscred" file of the "/opt/zabbix/cloudwatch/conf" directory.

[redsomething-sandbox]
aws_access_key_id=***********************************
aws_secret_access_key=************************

Next, in step 4, after reviewing "conf/aws_services_metrics.conf", I noticed the settings for the Lambda metrics I want, so I copied the original file to "aws_services_metrics.conf.orig" and stripped the working file down to only include:

Updated File: "aws_service_metrics.conf"
{
    "Lambda": [
        { "metric": "Invocations", "statistics": "Sum" },
        { "metric": "Errors",      "statistics": "Sum" },
        { "metric": "Duration",    "statistics": "Average" },
        { "metric": "Throttles",   "statistics": "Sum" }
    ]
}

So, assuming that will work, on to the next step, step 5.

Now, in regards to the template, I copied the Template_AWS_Lambda.xml to Template_AWS_Lambda_RR.xml and left everything in place.

Now it is step 6 where it begins to become unclear, partly because I am new to the Zabbix platform. The following is what I performed based on a best guess.

a. Host Name: RR-Application-Server # This is a name I made up, since there is no server or static IP address associated with the application.

b. Visible Name: RR-App-Server

Everything else is left at the defaults, with the Groups and the localhost of 127.0.0.1. This is where it becomes a little unclear. I went on to import the Template_AWS_Lambda.xml template and linked it to the host; it is listed under the template column as "Template AWS Lambda".

In steps 7 and 8, I created the wrapper cron.RR-Lambda.sh and the corresponding cron job as directed.

Now, I am not sure how to confirm the configuration is working, nor how to properly execute this task to confirm the operation of this monitoring job. Can someone provide some additional direction on this matter, please?

SNS LLD no data in Zabbix

Changed macro values (AWS region, AWS account and topic)

./awsLLD.py -a "default" -r "eu-west-1" -q "SNSTopics" -c 'topic'
{
"data": [
{
"{#AWS_REGION}": "eu-west-1",
"{#AWS_ACCOUNT}": "default",
"{#TOPIC_INAME}": "",
"{#TOPIC_NAME}": "topic"
}
]
}

AWS-CloudWatch: display name of the Zabbix host
localhost: Zabbix server
default: AWS account

./cron.SNS.sh "topic" "AWS-CloudWatch" "localhost" "default" "eu-west-1"
{'host': 'AWS-CloudWatch', 'value': 2.0, 'key': u'SNS.NumberOfMessagesPublished.Sum["default","eu-west-1","topic"]', 'clock': 1532180160}
{'host': 'AWS-CloudWatch', 'value': 2.0, 'key': u'SNS.NumberOfNotificationsDelivered.Sum["default","eu-west-1","topic"]', 'clock': 1532180160}
{'host': 'AWS-CloudWatch', 'value': 0.0, 'key': u'SNS.NumberOfNotificationsFailed.Average["default","eu-west-1","topic"]', 'clock': 1532180160}
Count: 3

Monitoring-Latest Data-empty

Imported example SNS template and attached it to host

[screenshot: capture]

I tried the AWS Lambda template and it works without issues, with the same terminal output as for the SNS template.
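When the sender output looks fine but Latest Data stays empty, the usual cause is that the trapper key pushed by the script does not exactly match the item key in the template (every quote and bracket counts). Rebuilding the key the same way the output above shows makes the comparison easy; this helper is a sketch for checking, not the plugin's own function:

```python
# Sketch: reconstruct the trapper item key exactly as it appears in the
# sender output, so it can be compared character-for-character against the
# item key defined in the Zabbix template.
def trapper_key(service, metric, statistic, account, region, dimension):
    return '%s.%s.%s["%s","%s","%s"]' % (
        service, metric, statistic, account, region, dimension)
```

If the template item key differs in any character (host name mismatches count too, since trapper values are matched per host), the server silently drops the value and only reports a failed count.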

Pushing SQS stats not working

I've had success with the other templates in pushing CloudFront statistics to Zabbix, but pushing of SQS statistics doesn't seem to work - the template items are never created from the discovery templates.

Trying to push SQS stats yields no usable information from the Zabbix server, even at DebugLevel=4, apart from noting that all the trapped events have failed.

Using zabbix_sender.printData() in zabbixCloudWatch.py yields the following, which just re-confirms that the push has failed but doesn't explain why.

{'host': 'localhost', 'value': 0.0, 'key': u'SQS.ApproximateNumberOfMessagesDelayed.Average["aws_account_1","eu-west-1","production_core_services_medium"]', 'clock': 1478256240} {'host': 'localhost', 'value': 5.0, 'key': u'SQS.ApproximateNumberOfMessagesNotVisible.Average["aws_account_1","eu-west-1","production_core_services_medium"]', 'clock': 1478256240} {'host': 'localhost', 'value': 0.0, 'key': u'SQS.ApproximateNumberOfMessagesVisible.Average["aws_account_1","eu-west-1","production_core_services_medium"]', 'clock': 1478256240} {'host': 'localhost', 'value': 0.0, 'key': u'SQS.NumberOfEmptyReceives.Average["aws_account_1","eu-west-1","production_core_services_medium"]', 'clock': 1478256240} {'host': 'localhost', 'value': 0.0, 'key': u'SQS.NumberOfMessagesDeleted.Average["aws_account_1","eu-west-1","production_core_services_medium"]', 'clock': 1478256240} {'host': 'localhost', 'value': 0.0, 'key': u'SQS.NumberOfMessagesReceived.Average["aws_account_1","eu-west-1","production_core_services_medium"]', 'clock': 1478256240} {'host': 'localhost', 'value': 0.0, 'key': u'SQS.NumberOfMessagesSent.Average["aws_account_1","eu-west-1","production_core_services_medium"]', 'clock': 1478256240} Count: 7

Failures reported by zabbix when sending: {"data": [{"host": "localhost", "value": 0.0, "key": "SQS.ApproximateNumberOfMessagesDelayed.Average[\"aws_account_1\",\"eu-west-1\",\"production_core_services_medium\"]", "clock": 1478256240}, {"host": "localhost", "value": 5.0, "key": "SQS.ApproximateNumberOfMessagesNotVisible.Average[\"aws_account_1\",\"eu-west-1\",\"production_core_services_medium\"]", "clock": 1478256240}, {"host": "localhost", "value": 0.0, "key": "SQS.ApproximateNumberOfMessagesVisible.Average[\"aws_account_1\",\"eu-west-1\",\"production_core_services_medium\"]", "clock": 1478256240}, {"host": "localhost", "value": 0.0, "key": "SQS.NumberOfEmptyReceives.Average[\"aws_account_1\",\"eu-west-1\",\"production_core_services_medium\"]", "clock": 1478256240}, {"host": "localhost", "value": 0.0, "key": "SQS.NumberOfMessagesDeleted.Average[\"aws_account_1\",\"eu-west-1\",\"production_core_services_medium\"]", "clock": 1478256240}, {"host": "localhost", "value": 0.0, "key": "SQS.NumberOfMessagesReceived.Average[\"aws_account_1\",\"eu-west-1\",\"production_core_services_medium\"]", "clock": 1478256240}, {"host": "localhost", "value": 0.0, "key": "SQS.NumberOfMessagesSent.Average[\"aws_account_1\",\"eu-west-1\",\"production_core_services_medium\"]", "clock": 1478256240}], "request": "sender data"}

Environment: Ubuntu 14.04.5LTS, Python 2.7.6, Zabbix_server 3.2.0

File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 370, in <module> cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)

I was really happy to see that someone created this integration. It's well done. I'm having one issue collecting data.

Traceback (most recent call last):
File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 370, in <module>
cw_data = getCloudWatchData(aws_account, aws_region, aws_service, dimensions)
File "/opt/zabbix/cloudwatch/zabbixCloudWatch.py", line 154, in getCloudWatchData
aws_account = awsAccount(account)
File "/opt/zabbix/cloudwatch/awsAccount.py", line 20, in __init__
options = Config.options(account)
File "/usr/lib/python2.7/ConfigParser.py", line 279, in options
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: ''
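`No section: ''` means an empty account name reached awsAccount, i.e. the account argument was missing or expanded to an empty string. A guard before reading the credentials file makes the failure explicit; this is a sketch using Python 3's configparser (the traceback shows Python 2's ConfigParser, but the behavior is the same), with the section name taken from the awscred example earlier in this thread:

```python
# Sketch: fail with a clear message when the account argument is empty or the
# section is missing from the awscred file, instead of a bare NoSectionError.
import configparser

AWSCRED = """\
[redsomething-sandbox]
aws_access_key_id=AKIA...
aws_secret_access_key=...
"""

def load_account(account, cred_text=AWSCRED):
    if not account:
        raise ValueError("AWS account name must not be empty")
    cp = configparser.ConfigParser()
    cp.read_string(cred_text)
    if not cp.has_section(account):
        raise ValueError("no [%s] section in awscred" % account)
    return dict(cp.items(account))
```

In practice the fix on the caller's side is simply to pass a non-empty, correctly quoted account name that matches a section header in conf/awscred.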
