# Integration examples

Connecting data sources to Acure is done by setting up Data Streams. A detailed guide on working with Data Streams is available here.

# Network interactions of typical integrations

| Integration example | Incoming connections (port/protocol) | Outgoing connections (port/protocol) | Notes | Used template |
|---|---|---|---|---|
| Zabbix | - | 80, 443 / tcp | Connecting to the Zabbix API | Zabbix default |
| Zabbix (webhooks) | 80, 443 / tcp | - | Sending data to Acure | AnyStream default |
| SCOM | - | 1433 / tcp | Connecting to the SCOM DBMS | SCOM default |
| Prometheus | 80, 443 / tcp | - | Sending data to Acure | Prometheus default |
| ntopng | 80, 443 / tcp | - | Sending data to Acure | ntopng default |
| Nagios XI | - | 80, 443 / tcp | Connecting to the Nagios XI API | Nagios default |
| Nagios Core | 80, 443 / tcp | - | Sending data to Acure | AnyStream default |
| Fluentd (Fluent Bit) | 80, 443 / tcp | - | Sending data to Acure | AnyStream default |
| Splunk | 80, 443 / tcp | - | Sending data to Acure | AnyStream default |
| Logstash | 80, 443 / tcp | - | Sending data to Acure | AnyStream default |
| VMWare vCenter | - | 80, 443 / tcp | Connecting to the vCenter API | vCenter default |

# Example of standard integration with Zabbix

To connect a data source of the Zabbix type, you must first properly configure the Zabbix side:

  1. Create a user group with read permissions for the hosts that you want to send data about to the Acure system. To do this, go to the Administration->User groups section and click Create user group, then enter a name and select host groups on the Permissions tab.
  2. Create a user in the created group. To do this, go to the Administration->Users section and click Create user; in the window that opens, enter the user data and select the group to which you want to add the user. Copy the user's alias and password for further configuration.

Next, go to the Acure menu section Data Collection->Data Streams and configure a data stream with the Zabbix default template. On the Settings tab, fill in the fields:

  1. apiUri - must contain URL in the format http://zabbix.example.com/api_jsonrpc.php
  2. login - Zabbix login
  3. password - Zabbix password

and click Save.
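
To verify the credentials independently of Acure, you can call the Zabbix API login method directly. A minimal sketch with curl; the user parameter name is assumed for Zabbix 5.x (newer versions use username):

    # Log in to the Zabbix JSON-RPC API; a token in "result" confirms the login/password pair
    # <login> and <password> are the values from the created Zabbix user
    curl -s -X POST http://zabbix.example.com/api_jsonrpc.php \
      -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","method":"user.login","params":{"user":"<login>","password":"<password>"},"id":1}'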

If necessary, configure the launching intervals for the Acure agent tasks:

  • Zabbix - Events Data Flow (default - 10 seconds)
  • Zabbix - Api Connection Check (default - 30 seconds)
  • Zabbix - Version Check (default - 5 minutes)

Click Start at the upper right of the page to enable the data stream.

Acure implements interaction with the Zabbix monitoring system by setting up links between Acure triggers and Zabbix triggers through configuration items (CIs). Detailed information on bound objects can be found in the corresponding section of the documentation.

To configure the additional functionality of synchronizing the state of Zabbix triggers with Acure triggers, you need to set up a direct connection to the Zabbix database (tables: auditlog, auditlog_details, triggers).

This functionality automatically deactivates triggers in Acure after they are manually deactivated in Zabbix.

Restrictions: the Zabbix database must be MySQL; Zabbix versions up to 6.0 are supported.
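
To verify the direct database connection, you can query the triggers table from the host that will perform the synchronization. A sketch with the mysql client, assuming hypothetical host and account names (in the Zabbix schema, status = 1 marks a disabled trigger):

    # List disabled triggers; triggers manually deactivated in Zabbix should appear here
    # zabbix-db.example.com and acure_sync are placeholders for your host and account
    mysql -h zabbix-db.example.com -u acure_sync -p \
      -e "SELECT triggerid, description, status FROM triggers WHERE status = 1 LIMIT 10;" zabbix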

Reference

The order in which an event about trigger deactivation or removal in Zabbix is passed:

  1. The pl-connector-dispatcher-api-service-runner service periodically picks up events from the sm-zabbix-connector-api-service-zabbix-api service and sends them to the cl-stream-data-collector-api-service service via an HTTP request.
  2. Next, the generated event is sent via RabbitMQ with the cl.stream-raw-event.new key to cl-stream-data-preprocessor-service, where it is enriched with labels (stream-ready-event.zabbix.new), and then sent with the cl.stream-processed-event.new key to the cl-stream-schema-validator-service service.
  3. If there are several events in the request, they are split, validated, and sent one by one with the cl.stream-validated-event.new key to the cl-stream-data-service-buffer service.
  4. Events are accumulated, written to the database, and then sent to RabbitMQ with the key specified in the labels (cl.stream-ready-event.zabbix.new).
  5. The event queue from Zabbix is handled by the sm-zabbix-connector-api-service-autodiscovery service.
  6. From the sm-zabbix-connector-api-service-autodiscovery service, events are sent with the cl.stream-ready-event.new key in parallel to the pl-router-service service, where they are routed via websockets to the Raw events and logs screen, and to the pl-automaton-prefilter-service service.
  7. The pl-automaton-prefilter-service service applies rules for launching events into the automaton and sends them to the pl-automaton-runner-service service.

# Example of Zabbix integration via webhook

Using this example, you can implement receiving data from any source that supports Webhook.

  1. Add a new data stream or go to the configuration page of an existing data stream with the AnyStream default configuration template and copy the API-key - you will need it later.
  2. Configure sending messages from the data source (in this case - Zabbix):
    • In the Zabbix 5.0 frontend, go to Administration->Media types and create a new media type. Enter a name and select the Webhook type. Fill in the Parameters table, which forms the content of the JSON message that will be sent to the Acure system:

      EventID: {EVENT.ID}
      EventName: {EVENT.NAME}
      EventSeverity: {EVENT.SEVERITY}
      HostName: {HOST.NAME}
      TriggerStatus: {TRIGGER.STATUS}
      
    • In the Script field, paste the JavaScript code that forms and sends a POST request to the API of your Acure system:

      // Parse the macros passed in the Parameters table and POST them to Acure
      var req = new CurlHttpRequest();
      var params = JSON.parse(value);
      req.AddHeader('Content-Type: application/json');
      req.Post('https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}', JSON.stringify(params));
      // Zabbix expects the webhook script to return a value
      return 'OK';
      

      {GLOBAL_DOMAIN} – the address of your Acure space, for example my-userspace.acure.io.

      {API-KEY} – the API-key copied in the first step.


  3. Save the new notification method. In the Configuration->Actions section, configure the response to Zabbix events and select the created media type as the operation.
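
To check that the stream accepts events before wiring up Zabbix, you can post a test message with the same fields manually. A sketch with curl; the field values are arbitrary:

    # Send one test event to the AnyStream endpoint; it should appear on the Raw events screen
    curl -X POST "https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}" \
      -H 'Content-Type: application/json' \
      -d '{"EventID":"1","EventName":"Test event","EventSeverity":"Warning","HostName":"test-host","TriggerStatus":"PROBLEM"}'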

# Example of integration with SCOM

First, create a user in the SCOM system database. To do this, connect to the target OperationsManager database using a DBMS client (for example, SQL Server Management Studio) and create a new user:

  1. In the General section, enter a username, select SQL Server Authentication, and enter a password. Copy the name and password - you will need them later.
  2. In the Server Roles section, select the public role.
  3. In the User Mapping section, map the user to the OperationsManager database and select the db_datareader and public roles.
  4. Check the summary list of rights in the Securables section - permissions must include CONNECT SQL, VIEW ANY DATABASE, and VIEW ANY DEFINITION.
  5. Confirm the creation of the user - click OK.
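
The same user can be created from the command line instead of the UI. A minimal sketch via sqlcmd, assuming hypothetical server and account names:

    # Create a read-only login for Acure in the OperationsManager database
    # scom-db.example.com, acure_reader and the passwords are placeholders
    sqlcmd -S scom-db.example.com -U sa -P '<sa-password>' <<'SQL'
    CREATE LOGIN acure_reader WITH PASSWORD = '<password>';
    GO
    USE OperationsManager;
    CREATE USER acure_reader FOR LOGIN acure_reader;
    ALTER ROLE db_datareader ADD MEMBER acure_reader;
    GO
    SQL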

Next, go to the Acure menu section Data Collection->Data Streams and configure a data stream with the SCOM default template. On the Settings tab, fill in the fields:

  1. host - DBMS address
  2. login - Username
  3. password - Password
  4. dbName - database name (OperationsManager)
  5. port - DBMS connection port (1433)

Click Save.

Click Start at the upper right of the page to enable the data stream.

# Example of integration with Prometheus

Go to the Acure menu section Data Collection->Data Streams, configure a data stream with the Prometheus default template, and copy its API-key.

Next, configure the alertmanager.yaml file in Prometheus:

  1. Add receiver 'web.hook':

    receivers:
    - name: 'web.hook'
      webhook_configs:
      - send_resolved: true
        url: 'https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}'
    

    {GLOBAL_DOMAIN} – the address of your Acure space, for example my-userspace.acure.io.

    {API-KEY} – the API-key copied from the data stream page.

  2. In the route block, add the group_by grouping order and the sending method via receiver 'web.hook'; fill in the group_by key manually:

    route:
      group_by: ['<Group tags>']
      group_wait: 30s
      group_interval: 30s
      repeat_interval: 1h
      receiver: 'web.hook'
    
  3. Restart alertmanager.

    An example of the final configuration file alertmanager.yaml

    global:
      resolve_timeout: 5m
    route:
      group_by: ['ingress']
      group_wait: 30s
      group_interval: 30s
      repeat_interval: 1h
      receiver: 'web.hook'
    receivers:
    - name: 'web.hook'
      webhook_configs:
      - send_resolved: true
        url: 'https://sm.example.com/api/public/cl/v1/stream-data?streamKey=e4da3b7f-bbce-2345-d777-2b0674a31z65'
    inhibit_rules:
    - source_match:
        severity: 'critical'
      target_match:
        severity: 'warning'
      equal: ['alertname', 'dev', 'instance']
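
After the restart, you can fire a synthetic alert through Alertmanager to confirm delivery to Acure. A sketch using the Alertmanager v2 API, assuming it listens on localhost:9093:

    # Post a test alert; Alertmanager should forward it to the web.hook receiver
    # The label values here are arbitrary test data
    curl -X POST http://localhost:9093/api/v2/alerts \
      -H 'Content-Type: application/json' \
      -d '[{"labels":{"alertname":"AcureTest","severity":"warning","ingress":"test"},"annotations":{"summary":"test alert"}}]'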

Click Start at the upper right of the page to enable the data stream.

# Example of integration with Ntopng

Go to the Acure menu section Data Collection->Data Streams, configure a data stream with the ntopng default template, and copy its API-key.

Next, go to the ntopng system interface, to the Settings->Preferences->Alert Endpoints section, and activate the Toggle Webhook Notification switch. Then paste the address https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY} in the Notification URL field.

{GLOBAL_DOMAIN} – the address of your Acure space, for example my-userspace.acure.io.

{API-KEY} – the API-key copied from the data stream page.

# Example of integration with Nagios XI

In the deployed Nagios XI, add a new user - go to the section Admin->Add new user accounts->Add new user. In the user creation window, enter a name, password, and email, and check the boxes Can see all hosts and services, Has read-only access, and API access.

Click Add User.

Now select the created user from the list. On the user page, in the LDAP Settings block, copy the key from the API-Key field.
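
You can verify the key before configuring the stream by calling the Nagios XI REST API directly. A sketch, assuming a hypothetical server address:

    # A valid key returns host status records instead of an authentication error
    # nagios.example.com is a placeholder for your Nagios XI address
    curl -s "https://nagios.example.com/nagiosxi/api/v1/objects/hoststatus?apikey=<API-Key>&pretty=1"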

Next, go to the Acure menu section Data Collection->Data Streams and configure a data stream with the Nagios default template. On the Settings tab, fill in the apiUri field, paste the previously copied key into the apiKey field, and click Save.

Click Start at the upper right of the page to enable the data stream.

# Example of integration with Nagios Core

In the Acure menu section Data Collection->Data Streams, create an integration with the AnyStream default template and copy its API-key.

Nagios Core does not natively provide an HTTP API. Integration with the monitoring system is configured by adding a custom notification script.

⚠️ This model uses the following static fields, since their values cannot be obtained from notification macros: INSTANCE_ID="1", OBJECT_ID="1", LAST_HARD_STATE=0

To configure the stream from the Nagios side:

  1. Enable environment_macros:

    enable_environment_macros=1
    
  2. Add commands:

    define command {
        command_name    send-service-event-to-sm
        command_line    /usr/local/bin/send_sm 2 > /tmp/sm.log 2>&1
    }
    
    define command {
        command_name    send-host-event-to-sm
        command_line    /usr/local/bin/send_sm 1 > /tmp/sm.log 2>&1
    }
    
  3. Add contact:

    define contact {
        use                              generic-contact
        contact_name                     sm
        alias                            Service Monitor
        service_notification_period      24x7
        host_notification_period         24x7
        host_notifications_enabled       1
        service_notifications_enabled    1
        service_notification_options     w,u,c,r,f
        host_notification_options        d,u,r,f
        service_notification_commands    send-service-event-to-sm
        host_notification_commands       send-host-event-to-sm
        register                         1
    }
    
  4. Modify the current contactgroup by adding the created contact to it:

    define contactgroup {
        contactgroup_name    admins
        alias                Nagios Administrators
        members              nagiosadmin,sm
    }
    
  5. Create a script:

    cat > /usr/local/bin/send_sm <<'EOF'
    #!/bin/bash
    #############################
    ##### Define constants ######
    #############################
    SM_URI="<sm uri with proto>"
    CONNECTOR_KEY="<key>"
    # Static fields: their values cannot be obtained from notification macros
    INSTANCE_ID="1"
    OBJECT_ID="1"
    LAST_HARD_STATE=0
    #################################
    ##### Define dynamic fields #####
    #################################
    STATE_TIME=$(date '+%F %T')
    OBJECTTYPE_ID=$1
    HOST_NAME=$NAGIOS_HOSTNAME
    SERVICE_DESCRIPTION=$NAGIOS_SERVICEDESC
    # Argument 1 = host event, 2 = service event (see the command definitions above)
    if [[ "$1" == "1" ]]; then
        STATE=$NAGIOS_HOSTSTATEID
        LAST_STATE=$NAGIOS_LASTHOSTSTATEID
        STATE_TYPE_NAME=$NAGIOS_HOSTSTATETYPE
        ATTEMPT=$NAGIOS_HOSTATTEMPT
        MAX_ATTEMPTS=$NAGIOS_MAXHOSTATTEMPTS
        OUTPUT=$NAGIOS_HOSTOUTPUT
    else
        STATE=$NAGIOS_SERVICESTATEID
        LAST_STATE=$NAGIOS_LASTSERVICESTATEID
        STATE_TYPE_NAME=$NAGIOS_SERVICESTATETYPE
        ATTEMPT=$NAGIOS_SERVICEATTEMPT
        MAX_ATTEMPTS=$NAGIOS_MAXSERVICEATTEMPTS
        OUTPUT=$NAGIOS_SERVICEOUTPUT
    fi
    if [[ "$STATE" != "$LAST_STATE" ]]; then
        STATE_CHANGE=1
    else
        STATE_CHANGE=0
    fi
    if [[ "$STATE_TYPE_NAME" == "HARD" ]]; then
        STATE_TYPE=1
    else
        STATE_TYPE=0
    fi
    #############################
    ##### Send http request #####
    #############################
    curl -X POST -H "Content-Type: application/json" "$SM_URI/api/public/sm/v1/events-aggregator?connectorKey=$CONNECTOR_KEY" \
    -d "{
    \"recordcount\": \"1\",
    \"stateentry\": [
        {
        \"instance_id\": \"$INSTANCE_ID\",
        \"state_time\": \"$STATE_TIME\",
        \"object_id\": \"$OBJECT_ID\",
        \"objecttype_id\": \"$1\",
        \"host_name\": \"$HOST_NAME\",
        \"service_description\": \"$SERVICE_DESCRIPTION\",
        \"state_change\": \"$STATE_CHANGE\",
        \"state\": \"$STATE\",
        \"state_type\": \"$STATE_TYPE\",
        \"current_check_attempt\": \"$ATTEMPT\",
        \"max_check_attempts\": \"$MAX_ATTEMPTS\",
        \"last_state\": \"$LAST_STATE\",
        \"last_hard_state\": \"$LAST_HARD_STATE\",
        \"output\": \"$OUTPUT\"
        }
    ]
    }"
    EOF
    chmod +x /usr/local/bin/send_sm
    
  6. Restart Nagios Core to apply the new config.
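
To test the script without waiting for a real notification, you can export the macros Nagios would normally provide and invoke it directly. A sketch for a host event (argument 1); the values are arbitrary:

    # Simulate the environment macros for a host going DOWN (state id 1) in a HARD state
    export NAGIOS_HOSTNAME="test-host"
    export NAGIOS_HOSTSTATEID=1
    export NAGIOS_LASTHOSTSTATEID=0
    export NAGIOS_HOSTSTATETYPE="HARD"
    export NAGIOS_HOSTATTEMPT=3
    export NAGIOS_MAXHOSTATTEMPTS=3
    export NAGIOS_HOSTOUTPUT="CRITICAL - host unreachable"
    /usr/local/bin/send_sm 1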

# Example of integration with Fluentd

An example of setting up a data stream with an external "Fluentd" service through the AnyStream default configuration template.

To send log messages to the Acure system, the following conditions must be met:

  • The log message contains a @timestamp field in the format "2019-11-02T17:23:59.301361+03:00";
  • Fluentd sends logs in the application/json format;
  • Sending is performed through the out_http module.
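
To confirm that the stream accepts messages in this shape, you can post one manually before touching the Fluentd config. A sketch with curl; GNU date is assumed for producing the required timestamp format:

    # Send a single log record with a correctly formatted @timestamp field
    curl -X POST "https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}" \
      -H 'Content-Type: application/json' \
      -d "{\"@timestamp\": \"$(date '+%Y-%m-%dT%H:%M:%S.%6N%:z')\", \"message\": \"test log line\"}"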

Next, configure fluentd:

  1. Install the out_http plugin for Fluentd:

    fluent-gem install fluent-plugin-out-http
    
  2. Add a timestamp entry to the log - to do this, add a filter block to the configuration file, for example, for entries with the tag kubernetes.var.log.containers.nginx-ingress-**.log.

    <filter kubernetes.var.log.containers.nginx-ingress-**.log>
      @type record_transformer
      enable_ruby
      <record>
        @timestamp ${time.strftime('%Y-%m-%dT%H:%M:%S.%6N%:z')}
      </record>
    </filter>
    
  3. Add dispatching of logs to Acure to the output block, using the @type copy mechanism:

    <match **>
      @type copy
      <store>
        @type stdout
        format json
      </store>
      ...
      <store>
        @type http
        endpoint_url https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}
        http_method post
        serializer json
        rate_limit_msec 0
        raise_on_error false
        recoverable_status_codes 503
        buffered true
        bulk_request false
        custom_headers {"X-Smon-Userspace-Id": "1"}
        <buffer>
          ...
        </buffer>
      </store>
    </match>
    

    {GLOBAL_DOMAIN} – the address of your Acure space, for example my-userspace.acure.io.

    {API-KEY} – the API-key copied from the data stream page.

  4. Apply the settings and check the cl-stream-data-collector-service microservice logs in follow mode.

    If Fluentd runs in a Docker container inside Kubernetes, rebuild the container with the plugin.

    The example uses fluentd-kubernetes-daemonset:v1.10-debian-elasticsearch7-1.

    mkdir fluentd-kubernetes; cd fluentd-kubernetes
    cat > Dockerfile << EOF
    FROM fluent/fluentd-kubernetes-daemonset:v1.10-debian-elasticsearch7-1
    
    RUN fluent-gem install fluent-plugin-out-http
    
    ENTRYPOINT ["tini", "--", "/fluentd/entrypoint.sh"]
    EOF
    docker build -t fluentd-kubernetes-daemonset:v1.10-debian-elasticsearch7-1_1 .
    

Click Start at the upper right of the page to enable the data stream.

# Example of integration with Fluent Bit

An example of setting up a data stream with an external "Fluent Bit" service through the AnyStream default configuration template.

The Fluent Bit processor can handle a variety of formats. Below, receiving syslog over UDP and reading a local Docker log file are considered (see the Fluent Bit documentation for other methods of receiving data).

Scheme of sending data to Acure: each input is routed to its own Acure data stream (syslog to the first, Docker logs to the second).

  1. On the Acure side, create two data streams with the AnyStream default configuration template and copy their API-keys.

  2. Configure Fluent Bit as follows:

    cat /etc/td-agent-bit/td-agent-bit.conf

    [SERVICE]
        flush        5
        daemon       Off
        log_level    info
        parsers_file parsers.conf
        plugins_file plugins.conf
        http_server  On
        http_listen  0.0.0.0
        http_port    2020
        storage.metrics on
    
    @INCLUDE inputs.conf
    @INCLUDE outputs.conf
    @INCLUDE filters.conf
    

    cat /etc/td-agent-bit/inputs.conf

    [INPUT]
        Name     syslog
        Parser   syslog-rfc3164
        Listen   0.0.0.0
        Port     514
        Mode     udp
        Tag      syslog
    
    [INPUT]
        Name              tail
        Tag               docker
        Path              /var/lib/docker/containers/*/*.log
        Parser            docker
        DB                /var/log/flb_docker.db
        Mem_Buf_Limit     10MB
        Skip_Long_Lines   On
        Refresh_Interval  10
    

    cat /etc/td-agent-bit/outputs.conf

    [OUTPUT]
        Name             http
        Host             ${acure_URL}
        Match            syslog
        URI              /api/public/cl/v1/stream-data
        Header           x-smon-stream-key ${KEY1}
        Header           Content-Type application/x-ndjson
        Format           json_lines
        Json_date_key    @timestamp
        Json_date_format iso8601
        allow_duplicated_headers false
    
    [OUTPUT]
        Name             http
        Host             ${acure_URL}
        Match            docker
        URI              /api/public/cl/v1/stream-data
        Header           x-smon-stream-key ${KEY2}
        Header           Content-Type application/x-ndjson
        Format           json_lines
        Json_date_key    @timestamp
        Json_date_format iso8601
        allow_duplicated_headers false
    

    ${acure_URL} – the address of your Acure space, for example my-userspace.acure.io.

    ${KEY1}, ${KEY2} – the API-keys copied from the Acure data stream pages.

  3. After modifying the config files, restart Fluent Bit to apply the new settings.
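
You can generate a test syslog message for the UDP input with the logger utility. A sketch, assuming util-linux logger on the same host:

    # Send one message to the Fluent Bit syslog input; it should reach the first data stream
    logger --udp --server 127.0.0.1 --port 514 "test message for Acure"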

The example uses the standard parsers provided with Fluent Bit. If necessary, you can implement a new parser and add it to the configuration (see the Fluent Bit documentation).

# Example of integration with Logstash

Consider receiving a log file from a server via Logstash.

Create a data stream in the Acure system with the AnyStream default configuration template and copy its API-key.

Install the logstash component of the ELK stack on the server from which the logs will be transferred:

root@server$ apt-get install logstash

Create a configuration file acure-logstash.conf in the Logstash directory with the following content:

input {
    stdin {
        type => "logstash-acure"
    }
}

filter {

}

output {
    http {
        url => "https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}"
        http_method => "post"
    }
}

{GLOBAL_DOMAIN} – the address of your Acure space, for example my-userspace.acure.io.

{API-KEY} – API-key copied from the data stream page.

In this example, the log file is transferred to Acure through the standard input <STDIN> without additional processing or filtering by logstash.

For more information on working with logstash, see the ELK documentation.

Run the following command on the server with logstash to send the log file:

root@server$ cat {logfile} | nice /usr/share/logstash/bin/logstash -f acure-logstash.conf

Go to the Raw events screen of the Acure platform, in the list of data streams select the previously created integration, and view the data received from the log file.

# Example of integration with VMWare vCenter

In order to receive topology synchronization events and VM migration events from VMWare and build the vSphere service model in Acure, do the following:

  1. Create a data stream with the vCenter default configuration template.

  2. Go to the data stream Settings tab, fill in the fields:

    • apiUri - the address at which the VMWare vCenter web-interface is available (specifying the protocol is not required)

    ⚠️ apiUri must contain the URL in the format vcenter.company.com, without the protocol or the path to the SDK

    • login - a vCenter user with sufficient rights to receive events about topology changes or state changes of the objects to be synchronized.
    • password - the vCenter user's password.
  3. Go to the Configuration tab.

    • For the tasks of collecting data from vCenter, specify a Coordinator with a connected agent that has network access to the vCenter server (80/tcp, 443/tcp).
    • Add the Handler CM autobuild routing to route events to the SM Autobuild service.
  4. Click Save to save the settings.

  5. Click Start at the upper right of the page to enable the data stream.
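
Before starting the stream, you can verify the apiUri, login, and password values against the vCenter REST API. A sketch with curl, assuming vSphere 6.5 or later; valid credentials return a session token in the value field:

    # Open (and thereby validate) a vCenter API session with the configured user
    # vcenter.company.com matches the apiUri value; -k skips certificate verification
    curl -sk -X POST -u '<login>:<password>' 'https://vcenter.company.com/rest/com/vmware/cis/session'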

Reference

In the current version, Acure supports the following types of vCenter events:

  • VmMigratedEvent;
  • DrsVmMigratedEvent;
  • HostAddedEvent;
  • HostRemovedEvent;
  • VmCreatedEvent;
  • VmRemovedEvent.

Example of event VmMigratedEvent:

[
  {
    "Key": 11518,
    "EventType: "vim.event.VmMigratedEvent",
    "ChainId": 11515,
    "CreatedTime": "2021-08-10T06:39:25.448Z",
    "UserName": "VSPHERE.LOCAL\\username",
    "Net": null,
    "Dvs": null,
    "FullFormattedMessage": "Migration of virtual machine vcenter-test from pion02.devel.ifx, Storwize3700 to pion01.devel.ifx, Storwize3700 completed",
    "ChangeTag": null,
    "Vm": {
      "Id": "vm-23",
      "Name": "vcenter-test",
      "Type": "VirtualMachine",
      "TargetHost": {
        "Id": "host-15",
        "Name": "pion01.devel.ifx",
        "Type": "HostSystem",
        "Cluster": {
          "Id": "domain-c7",
          "Name": "clHQ-test",
          "Type": "ClusterComputeResource",
          "Datacenter": {
            "Id": "datacenter-2",
            "Name": "dcHQ-test",
            "Type": "Datacenter"
          }
        }
      },
      "SourceHost": {
        "Id": "host-12",
        "Name": "pion02.devel.ifx",
        "Type": "HostSystem",
        "Cluster": {
          "Id": "domain-c7",
          "Name": "clHQ-test",
          "Type": "ClusterComputeResource",
          "Datacenter": {
            "Id": "datacenter-0",
            "Name": "dcHQ-test",
            "Type": "Datacenter"
          }
        }
      }
    }
  },
  {
    "Key": 11946,
    "ChainId": 11943,
    "CreatedTime": "2021-08-10T20:37:30.995999Z",
    "UserName": "VSPHERE.LOCAL\\username",
    "Net": null,
    "Dvs": null,
    "FullFormattedMessage": "Migration of virtual machine vcenter-test from pion01.devel.ifx, Storwize3700 to pion02.devel.ifx, Storwize3700 completed",
    "ChangeTag": null,
    "Vm": {
      "Id": "vm-23",
      "Name": "vcenter-test",
      "Type": "VirtualMachine",
      "TargetHost": {
        "Id": "host-12",
        "Name": "pion02.devel.ifx",
        "Type": "HostSystem",
        "Cluster": {
          "Id": "domain-c7",
          "Name": "clHQ-test",
          "Type": "ClusterComputeResource",
          "Datacenter": {
            "Id": "datacenter-2",
            "Name": "dcHQ-test",
            "Type": "Datacenter"
          }
        }
      },
      "SourceHost": {
        "Id": "host-15",
        "Name": "pion01.devel.ifx",
        "Type": "HostSystem",
        "Cluster": {
          "Id": "domain-c7",
          "Name": "clHQ-test",
          "Type": "ClusterComputeResource",
          "Datacenter": {
            "Id": "datacenter-0",
            "Name": "dcHQ-test",
            "Type": "Datacenter"
          }
        }
      }
    }
  }
]