
influxdb-relay's Issues

Question regarding recovery procedure and shard ids

Hi,

I have 2 questions about the basic recovery example from the readme:

  1. shard id
    Essentially, do we have any guarantee that the shard id will be the same for both InfluxDB instances, which - outside of having some databases and retention policies in common - know nothing about each other? In other words, if I back up shard X of database Y and retention policy Z from server DB1, do I have any guarantee that X will be the same on DB2? From what I saw, the shard backups are just plain tar files with full paths to the relevant tsm files (so if X can differ, influxd itself is not prepared for such a restore case ...)

  2. live restoration
    Formally, influxdb documentation claims that:
    "Note: Restoring from backup is only supported while the InfluxDB daemon is stopped."

On the other hand, the readme file says it's OK to do the restoration if the relevant shard/policy/db combination is no longer being written to. Are there no other risks in this context (e.g. maybe something related to metadata or anything else)?

Debian package fails to install due to malformed version number

Package generated using the Docker container, as per README instructions. Attempting to install it in an Ubuntu 16.04 Docker container.

$ apt install ./influxdb-relay_adaa2ea_amd64.deb 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Note, selecting 'influxdb-relay' instead of './influxdb-relay_adaa2ea_amd64.deb'
The following packages were automatically installed and are no longer required:
  geoip-database libgd3 libgeoip1 libvpx3 libxpm4 libxslt1.1
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  influxdb-relay
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 0 B/2540 kB of archives.
After this operation, 8583 kB of additional disk space will be used.
Get:1 /root/influxdb-relay_adaa2ea_amd64.deb influxdb-relay amd64 adaa2ea-1 [2540 kB]
debconf: delaying package configuration, since apt-utils is not installed
dpkg: error processing archive /root/influxdb-relay_adaa2ea_amd64.deb (--unpack):
 parsing file '/var/lib/dpkg/tmp.ci/control' near line 2 package 'influxdb-relay':
 error in 'Version' field string 'adaa2ea-1': version number does not start with digit
Errors were encountered while processing:
 /root/influxdb-relay_adaa2ea_amd64.deb
N: Can't drop privileges for downloading as file '/root/influxdb-relay_adaa2ea_amd64.deb' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
E: Sub-process /usr/bin/dpkg returned an error code (1)

Support per-backend HTTP authentication

There was a sub-discussion in #13 about enabling per-backend HTTP authentication. This ticket will be a discussion place to talk about what that would look like and how it would behave now that Authentication pass-through has been enabled.

data is not synchronised

Hi,
We are trying to implement HA for InfluxDB using influxdb-relay.
After the initial setup, I tried testing the relay by doing a manual curl POST. What I observed is that the data doesn't get written to one of the servers. Below are my relay config and the database output as well.
my relay config
[[http]]
name = "simple-relay"
bind-addr = "0.0.0.0:9096"
output = [
{ name="influx1", location = "http://172.16.23.15:9096/write" },
{ name="influx2", location = "http://172.16.23.20:8086/write" },
]

from influx1 db

select * from test;
name: test
time                 value
----                 -----
1486730660357106979  1

from influx2 db

select * from test;
name: test
time                 value
----                 -----
1486729766273384897  1
1486729777494460423  1
1486730066964361329  1
1486730660357106979  1
1486732208067048977  2
1486732320456879542  5

InfluxDB HA Subscriptions for Kapacitor

I am currently integrating Kapacitor into my alerting system and wanted to address HA concerns related to subscriptions. Currently the only way to reliably do this is to have a subscription on each InfluxDB server, which unfortunately doubles data and alerts. InfluxDB relay is in a unique position where it could be in charge of distributing the subscriptions itself, since that is basically what it is already doing with the multiple InfluxDB databases.

With the new HTTP Kapacitor endpoints you can treat Kapacitor as just another InfluxDB backend, which seems like the best solution so far. Are there any other recommendations for official HA solutions for Kapacitor?

data inconsistency

I was looking at the architecture and I'm a bit puzzled by the fact that data inconsistency can occur and I'm wondering how I'm supposed to fix those.

Here's an example:

  • client writes to relay
  • relay writes to InfluxDB A and InfluxDB B
  • InfluxDB A write is successful
  • InfluxDB B write is not successful
  • client got success message, because the write to A was first

So now I have 2 databases with different data. Although A has all the correct and complete data.
Now imagine the same process as above again, but B gets the successful write and the client also gets the success message, because the write to B was first (and successful). In this case now both databases have incorrect and incomplete data.

Or am I missing something here?

Error in sending data from Telegraf to Influxdb-relay

Hi,

I am trying to send data from Telegraf to influxdb-relay and from there to InfluxDB, i.e. Telegraf -> influxdb-relay -> InfluxDB. So I have configured the influxdb-relay IP in telegraf.conf as follows:

[[outputs.influxdb]]
urls = ["http://172.29.29.12:9096/write"]

But the issue is that Telegraf is not able to write to influxdb-relay; I am getting the error "Database creation Failed: post http://172.29.29.12:9096/query?q=CREATE DATABASE getsockopt: connection refused".

Does anyone know the solution for this? Can anyone please suggest the correct way to let Telegraf send data to influxdb-relay?

Any help is much appreciated. Thank you
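One thing worth checking: the relay has no /query endpoint, so Telegraf's automatic CREATE DATABASE probe can never succeed against it (and "connection refused" suggests the relay may not be reachable on that port at all). If your Telegraf version supports it, the probe can be skipped; a sketch, with the IP taken from the report and the database name as a placeholder:

```
[[outputs.influxdb]]
  # point at the relay; Telegraf appends /write itself,
  # so give the base URL rather than .../write
  urls = ["http://172.29.29.12:9096"]
  database = "telegraf"
  # available in newer Telegraf releases; avoids the CREATE DATABASE query,
  # which the relay cannot answer
  skip_database_creation = true
```

With this set, the databases must already exist on the backend InfluxDB instances.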

how to implement in Load balancer for recovery?

In the Recovery part, there is the step "Tell the load balancer to stop sending query traffic to the server that was down".
I have already set up all of the components with nginx, but how do I implement this step?

Can anybody help me? Many thanks!

The relay leaks client sockets in TIME_WAIT state when QPS is high

It seems the sockets connecting to the backends are not closed in time, which eventually exhausts the socket ports on the machine. I guess reusing the HTTP transport to the backends would resolve the issue; it is not necessary to create a new one for each write request.

By the way, I found this on Windows.

Support /ping endpoint

I'm using InfluxDB Relay to forward data to 2 InfluxDB instances + 1 Kapacitor instance. Eventually I would like kapacitor to write to Relay, the only problem now is that with the following kapacitor config:

[[influxdb]]
  enabled = true
  name = "localhost"
  default = true
  urls = ["http://192.168.50.11:9096", "http://192.168.50.12:9096"]
  disable-subscriptions = true
  ...

Kapacitor tries to ping a URL before it uses it, and if the ping fails then it assumes the URL is not working and tries the next one (link to Kapacitor v0.13.1 client code).
So if I'm correct, Kapacitor won't be able to write to InfluxDB Relay unless:

  1. Relay supports /ping
  2. Kapacitor doesn't use ping to determine if a client is working or not

My guess would be that (1) is cleaner?

Maintained fork by Vente-Privée

Hello,

First of all, thank you InfluxData & contributors for providing us with such great tools. You've done amazing work.

We are very happy users of InfluxDB Relay at Vente-Privée, but it seems that it's not maintained anymore (the last commit is a bit old), and I respectfully understand that choice. That's why we plan to maintain it at https://github.com/vente-privee/influxdb-relay/tree/develop. The first main feature we want to add is handling the Prometheus remote_write case.

What we already have done:

  • Add basic tests with golint & pylint
  • Add CHANGELOG (sorry, we started with our fork; we'll work on that)
  • Add CONTRIBUTING guide
  • Merge #65
  • Merge #52
  • Merge #59
  • Merge #43
  • Merge #57

Feel free to join & help us by filing more and more issues / PRs 😄

=> https://github.com/vente-privee/influxdb-relay

Error parsing packet in relay : unable to parse

Hello,

I am sending metrics from collectd over UDP directly to the InfluxDB relay, and I get this message when I run the relay:


/usr/local/go/bin/influxdb-relay -config relay.toml 
2016/06/10 16:11:50 starting relays...
2016/06/10 16:11:50 Starting HTTP relay "example-http" on 127.0.0.1:9096
2016/06/10 16:11:50 Starting UDP relay "influxdb-udp" on 0.0.0.0:25826
2016/06/10 16:11:50 Error parsing packet in relay "influxdb-udp" from 127.0.0.1:35398: unable to parse '<binary collectd payload: host1.local; interface eth0/eth1 if_octets, if_errors; disk vda disk_octets, disk_ops, disk_time, disk_merged; memory used, buffered, cached; load ...>': missing fields
unable to parse '<binary collectd payload, continued: memory used, cached, free, buffered ...>': invalid field format

Here is the configuration I have in the relay:


[[http]]
name = "example-http"
bind-addr = "127.0.0.1:9096"
output = [
    { name="local1", location = "http://127.0.0.1:8086/write" },
    { name="local2", location = "http://127.0.0.1:7086/write" },
]

[[udp]]
name = "influxdb-udp"
bind-addr = "0.0.0.0:25826"
read-buffer = 0 # default
output = [
    { name="influxdb1", location="host1.local:25826", mtu=512 },
    { name="influxdb2", location="host2.local:25826", mtu=1024 },
]

I have collectd version 5.4 and InfluxDB version 0.13 (although the metrics never actually reach the database ...).

Do you have an idea of why the relay cannot correctly parse the metrics sent by collectd?

Thank you.

Regards.

Enhancements to logging when backends are down

I've been testing a setup of InfluxDB/InfluxDB-Relay. It tolerates reboots of instances very well with us experiencing 0 lost records or inconsistencies in our data so far.

A nice enhancement would be to the logging...

  1. When a backend is detected down & comes back online.
  2. Record count and data size of buffer-size-mb consumed. (Useful for estimating how much node downtime we can tolerate).

How to handle prometheus remote_write case with influxdb-relay

How should I configure the relay in the Prometheus case?
Prometheus config:

# Remote write configuration (for Graphite, OpenTSDB, or InfluxDB).
remote_write:
  - url: "http://relay.Loadbalancer:9096/api/v1/prom/write?u=foo&p=bar&db=prometheus"

relay config:

bind-addr = "0.0.0.0:9096"
# Array of InfluxDB instances to use as backends for Relay.
output = [

 { name="influxdb1", location="http://10.35.96.144:8086/write", timeout="10s" },
 { name="influxdb2", location="http://10.35.96.145:8086/write", timeout="10s" },

]

Loadbalancer nginx config:

http {
  client_max_body_size 20M;
  sendfile    on;
  tcp_nopush  on;
  tcp_nodelay on;

  upstream influxdb-relay {
    server relay:9096 fail_timeout=0;
  }

  upstream influxdb {
    server influxdb1:8086 fail_timeout=0;
    server influxdb2:8086 fail_timeout=0;
  }

  server {
    listen      9096;
    server_name relay.Loadbalancer;

    location /write {
      limit_except POST {
        deny all;
      }
      proxy_pass http://influxdb-relay;
    }

    location /api/v1/prom/write {
      limit_except POST {
        deny all;
      }
      proxy_pass http://influxdb-relay;
    }
  }

  server {
    listen      8086;
    server_name influxdb.Loadbalancer;

    location /query {
      limit_except GET {
        deny all;
      }
      proxy_pass http://influxdb;
    }

    location /read {
      limit_except GET {
        deny all;
      }
      proxy_pass http://influxdb;
    }

    location /api/v1/prom/read {
      limit_except GET {
        deny all;
      }
      proxy_pass http://influxdb;
    }
  }
}

and I get this error:
2018/06/01 14:08:02 [error] 10360#0: *69959 readv() failed (104: Connection reset by peer) while reading upstream, client: prometheus_host, server: relay.Loadbalancer, request: "POST /api/v1/prom/write?u=foo&p=bar&db=prometheus HTTP/1.1", upstream: "http://relay:9096/api/v1/prom/write?u=foo&p=bar&db=prometheus", host: "relay.Loadbalancer:9096"

HTTP authentication not working?

With HTTP authentication enabled in InfluxDB, I cannot write to the relay (port 8096 in this example).

Ex. authentication enabled writing to the relay

curl -i -XPOST -u test:test 'http://statsdb01:8096/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64'
HTTP/1.1 401 Unauthorized
Content-Type: application/json
Date: Fri, 22 Apr 2016 13:20:38 GMT
Content-Length: 50

{"error":"unable to parse Basic Auth credentials"}

Ex. Writing directly to influxdb (port 8086)

curl -i -XPOST -u test:test 'http://statsdb01:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64'   
HTTP/1.1 204 No Content
Request-Id: ae607db4-088e-11e6-8012-000000000000
X-Influxdb-Version: 0.12.2
Date: Fri, 22 Apr 2016 13:32:38 GMT

Ex. authentication disabled writing to the relay

curl -i -XPOST 'http://statsdb01:8096/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64'                
HTTP/1.1 204 No Content
Date: Fri, 22 Apr 2016 13:35:50 GMT

Relay config:

[[http]]
name = "statsdb01"
bind-addr = "0.0.0.0:8096"
output = [
    { name="statsdb01", location = "http://statsdb01:8086/write", buffer-size = 1000, max-delay-interval = "1m" },
    { name="statsdb02", location = "http://statsdb02:8086/write", buffer-size = 1000, max-delay-interval = "1m" },
]

Grafana is not working with relay

I'm trying to set up influxdb-relay in my environment, but I'm stuck on a problem with Grafana.
I've added a new datasource with http://my.local.ip:9086/, but got an error:

InfluxDB Error Response: invalid write endpoint

My current relay.toml:

[[http]]
name = "influxdb-http"
bind-addr = ":9086"
output = [
    { name = "influxdb_a", location = "http://my.local.ip:8086/write" },

]

[[udp]]
name = "influxdb-udp"
bind-addr = ":9086"
read-buffer = 0 # default
output = [
    { name = "influxdb_a", location = "my.local.ip:8086", mtu = 512 },

]

I'm using docker images: appcelerator/influxdb-relay, influxdb:1.2.4 and grafana:4.1.2

Influxdb-relay installation error: not enough arguments in call to models.ParsePointsWithPrecision

I am running the following command to install influxdb-relay:
sudo go get -u github.com/influxdata/influxdb-relay


gocode/src/github.com/influxdata/influxdb-relay/relay/http.go:169:48: not enough arguments in call to models.ParsePointsWithPrecision
        have ([]byte, time.Time, string)
        want ([]byte, []byte, time.Time, string)
gocode/src/github.com/influxdata/influxdb-relay/relay/udp.go:163:48: not enough arguments in call to models.ParsePointsWithPrecision
        have ([]byte, time.Time, string)
        want ([]byte, []byte, time.Time, string)
root@lx-05:~#  not enough arguments in call to models.ParsePointsWithPrecision

Is this due to a missing package or an actual bug?

influxdb-relay doesn't build with Go versions older than 1.5

go install github.com/influxdata/influxdb-relay
warning: GOPATH set to GOROOT (/usr/lib/golang) has no effect
# github.com/influxdata/influxdb-relay/relay
/usr/lib/golang/src/github.com/influxdata/influxdb-relay/relay/udp.go:210: undefined: bytes.LastIndexByte

how to test whether influxdb-relay is working on multiple nodes?

server1: I installed influxdb and influxdb-relay, and both are running.
server2: I installed influxdb and influxdb-relay, and both are running.
So what next? I want the exact steps to test whether influxdb-relay is working.

Why not just a replication stream and master failover?

Having a relay just adds complexity and multiple failure points where ordering can get messed up. I don't see the advantage or safety aspect of using a relay.

        ┌─────────────────┐               
        │writes & queries │               
        └─────────────────┘               
                 │                        
                 ▼                        
         ┌───────────────┐                
         │               │                
┌────────│ Load Balancer │────────────┐   
│        │               │    ┌──────┐│   
│        └───────────────┘    │/query││   
│           │          |      └──────┘│   
│           │          |              │   
│┌──────┐   │┌──────┐  |              │   
││/query│   ││/write│  | Failover     │
│└──────┘   │└──────┘  |_ _ _ _ _     │
│           │                    │    │
│           ▼   ┌───────────┐    ▼    ▼  
│  ┌──────────┐ │replication│┌──────────┐
│  │          │ └───────────┘│          │
└─▶│ InfluxDB │─────────────▶│ InfluxDB │ 
   │          │              │          │ 
   └──────────┘              └──────────┘ 

Thoughts?

UDP Write Buffers

Do you plan on supporting buffering for UDP writes as well? I was really interested in this feature, but it doesn't help me much since we don't write to HTTP endpoints.

Thanks,
Ben

gzip mirrored traffic

How about gzipping mirrored traffic? Otherwise you get considerable traffic overhead between nodes. Does this bother anybody?

installation of influxdb relay and dockerizing it

Hello,
Thanks for the update of the README.md. I am still having issues.
When I run

$ go get -u github.com/influxdata/influxdb-relay
Error:
package github.com/influxdata/influxdb-relay: directory "/usr/local/go/src/github.com/influxdata/influxdb-relay" is not using a known version control system

So I removed the directory influxdb-relay and tried again and got,

package github.com/influxdata/influxdb-relay: cannot download, $GOPATH not set. For more details see: go help gopath

I have the right gopath as we can see in the ~/.profile

if [ -n "$BASH_VERSION" ]; then
# include .bashrc if it exists
if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
fi

if [ -d "$HOME/bin" ] ; then
PATH="$HOME/bin:$PATH"
fi
export PATH=$PATH:/usr/local/go/bin

It still gives the GOPATH not set error.
Can you please help me out? Also, is there a reliable ready-made image on Docker Hub for influxdb-relay?
Thanks and regards
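For what it's worth, the profile above only extends PATH; the pre-module go tool also needs GOPATH itself exported. A sketch (paths are examples) of what belongs in ~/.profile:

```shell
# GOPATH is a separate variable from PATH: PATH only makes the go binary
# findable, while GOPATH tells the go tool where to download and build code.
export GOPATH="$HOME/go"
export PATH="$PATH:/usr/local/go/bin:$GOPATH/bin"
```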

Two influxdb nodes, the data is not synchronized

I have two influxdb nodes, A and B. While A was down and B was up, I inserted data into influxdb through influx-relay, and I can see this data on B. Then I brought A back up, but A doesn't have this data. What's wrong?

Unable to start relay with init.d

I built a .deb file. After installing it, I tried to start the service, but I get the following error (inside the log file):

flag provided but not defined: -pidfile
Usage of influxdb-relay:
  -config string
        Configuration file to use

I tried to tinker around a bit, but didn't manage to get it to work properly.
This was on Ubuntu 14.04, so it is using init.d scripts.

Unable to install and run Influxdb-relay

I am following all of the instructions in the readme.md, yet it fails at the very first step, at
$ go install github.com/influxdata/influxdb-relay
Error:
can't load package: package github.com/influxdata/influxdb-relay: no buildable Go source files.

I even tried cloning the code in the directory and tried running build.py
It gives an error saying it could not find the dependency "go".
I have installed Go properly on my system. I am sure there is no issue on your part, but I am fairly new to both influxdb-relay and Go. Also, if someone could dockerize the relay, it would be a great help.
Thanks

A question about the use of the relay with the collectd

I've got some trouble using influxdb-relay with collectd. I checked udp.go, and it seems that it cannot parse the data from collectd.
Does the relay support listening for writes from collectd and writing the data to each InfluxDB?

how to do query in relay?

Does anybody use queries through the relay?
I set up two relays,
relay1[http:9096] relay2[http:9096]
When I write points, everything works fine,
but when I query, port 9096 doesn't seem to work. So how do I query through the load balancer?

thanks
Vita
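The relay only implements the write path, so /query has to bypass it and go to the InfluxDB instances directly, typically via the load balancer. A minimal nginx sketch (hostnames and ports are placeholders):

```
upstream influxdb {
    server influxdb1:8086;
    server influxdb2:8086;
}

server {
    listen 8086;
    # writes go to the relay on 9096; queries go straight to the backends
    location /query {
        proxy_pass http://influxdb;
    }
}
```

Note that after a backend has been down, it may answer queries with stale data until it has been backfilled, so a health check that removes lagging nodes is worth considering.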

How to config for self-signed certificate

My influxdb is enabled with self-signed certificate:
https-enabled = true
https-certificate = "/etc/ssl/influxdb-selfsigned.crt"
https-private-key = "/etc/ssl/influxdb-selfsigned.key"

How to config influx-relay in order to write to my influxdb instance?
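Some relay builds expose a per-output skip-tls-verification option (check config.go in your build to confirm). If present, the config would look like the sketch below; note that this disables certificate validation entirely, so the relay will trust any certificate, not just yours:

```
[[http]]
name = "example-https"
bind-addr = "127.0.0.1:9096"
output = [
    # assumption: this build supports skip-tls-verification per output
    { name="local1", location = "https://127.0.0.1:8086/write", skip-tls-verification = true },
]
```

A safer alternative is to add the self-signed certificate to the host's trusted CA store so the relay validates it normally.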

duplicate relay when a new relay is started

I followed the steps mentioned in the document and started a relay with the command below:
$GOPATH/bin/influxdb-relay -config relay.toml
Relay started successfully

Now, I want to stop the above relay and update relay.toml and start relay with new config. How do I do that?

Since I didn't find a way to stop the relay, I started a new relay again, which gives me a 'duplicate relay' error.

How do I handle such a use case?

Can not write to Kapacitor with influxdb-relay

As mentioned in one of the configurations, influxdb-relay can write data to kapacitor as they are wire compatible.

But when I try the same, Kapacitor is not receiving any data. The configurations for influxdb-relay and Kapacitor are below.

Kapacitor -

# The hostname of this node.
# Must be resolvable by any configured InfluxDB hosts.
hostname = "localhost"
# Directory for storing a small amount of metadata about the server.
data_dir = "/var/lib/kapacitor"

[http]
  # HTTP API Server for Kapacitor
  # This server is always on,
  # it serves both as a write endpoint
  # and as the API endpoint for all other
  # Kapacitor calls.
  bind-address = ":9092"
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = false
  https-enabled = false
  https-certificate = "/etc/ssl/kapacitor.pem"

[[influxdb]]
  # Connect to an InfluxDB cluster
  # Kapacitor can subscribe, query and write to this cluster.
  # Using InfluxDB is not required and can be disabled.
  enabled = true
  default = true
  name = "localhost"
  urls = ["http://localhost:8086"]
  username = "admin"
  password = "admin"
  timeout = 0

Influxdb-relay

[[http]]
name = "kapacitor-http"
bind-addr = "0.0.0.0:9096"
default-retention-policy = "default"
output = [
    { name="influxdb1", location = "http://influxdb1:8086/write" },
    { name="influxdb2", location = "http://influxdb2:8086/write" },
    { name="kapacitor1", location = "http://kapacitor1:9092/write" },
]

Is there any additional configuration required, or is something wrong with this config?

[feature request] Add support for specifying trusted certificates

It can be difficult to configure Relay to communicate with an InfluxDB server that is using a self-signed SSL certificate that is not trusted by the host system. It would be great if there was a configuration option for specifying a set of trusted certificates to use for communication.

installation issue

go install github.com/influxdata/influxdb-relay
work/src/github.com/influxdata/influxdb-relay/relay/http.go:16:2: cannot find package "github.com/influxdata/influxdb/models" in any of:
/usr/local/go/src/github.com/influxdata/influxdb/models (from $GOROOT)
/root/work/src/github.com/influxdata/influxdb/models (from $GOPATH)
work/src/github.com/influxdata/influxdb-relay/relay/config.go:6:2: cannot find package "github.com/naoina/toml" in any of:
/usr/local/go/src/github.com/naoina/toml (from $GOROOT)
/root/work/src/github.com/naoina/toml (from $GOPATH)


install fails

go get -u github.com/influxdata/influxdb-relay
package github.com/influxdata/influxdb/v2/pkg/escape: cannot find package "github.com/influxdata/influxdb/v2/pkg/escape" in any of:
/usr/local/go/src/github.com/influxdata/influxdb/v2/pkg/escape (from $GOROOT)
/root/go/src/github.com/influxdata/influxdb/v2/pkg/escape (from $GOPATH)

CentOS 7.5, golang 1.5.4 (I have also tried 1.7.6 and 1.14.2)

I am not good at golang. Has the package it depends on been removed or moved to a different directory?

HA influx on relay method

Hi team,
It's a great pleasure to have HA in InfluxDB. I am new to this DB deployment; I have deployed InfluxDB, connected Prometheus to it, and everything is good. Coming to enabling HA, most of the documents point to using the relay method, but I am not getting clarity on it: how do we write data through the relay, how do we verify it, and a few other important aspects I need to know. Can you please guide me on this, or is there any good documentation with a practical example? (I read the documentation on the official web site and it is not clear.)

Wrong rp in influx write causing ingest issues

A client was writing data to the backend InfluxDBs through the relay using the wrong retention policy (rp). Both backend InfluxDBs returned 5xx errors for it. The relay then kept accruing these data points in its buffer, eventually filling the buffers, and all other valid writes from other clients started failing.

This implies any bad actor out there can fill up the relay's buffers. Is there a way for the relay to drop these errors and not keep them in memory? A human error in a client-side config can choke the whole relay's memory.

It may be by design, but I want to confirm ....

Support relay of collectd UDP

I looked at #54 and #29
These seem to be a couple of years old.

Is there no way to use an HA-based deployment for collectd-based ingest traffic?

What is the problem that one type of UDP payload (line protocol) can be relayed but not the other (collectd)? Is there any technical problem that prevents relaying collectd-based ingest data?

Support for sharding layer

Hi folks, this is more of a question for clarification than an issue.

I don't think the README is clear on whether the relay supports (or is planned to support) acting as the sharding layer mentioned.

RPM/DEB installers

It would be nice if there were RPM/DEB installers for influxdb-relay available in the official yum/apt repos.

Many people don't have much experience with Go, and since the infrastructure is already in place, it would be fairly easy to implement and would greatly help speed up adoption for most people.
