
InfluxDB Connector for Google Data Studio

CircleCI codecov code style: prettier License GitHub issues GitHub pull requests Slack Status

This is not an official Google product.

This Data Studio Connector lets users query datasets from InfluxDB v2 instances through the InfluxDB API.

How it works

Connect your InfluxDB to Google Data Studio and start exploring your data in minutes.

How to add the InfluxDB Connector to Data Studio

Direct link

To add the InfluxDB Connector to Data Studio, use this link:

Use InfluxDB DataSource.

From Data Studio

TBD: If you are already in Data Studio, click the "Create" button and select "Data Source". From there you can search for the InfluxDB Connector.

Connect your InfluxDB to Data Studio

To access your InfluxDB, enter your Connection information:

  • InfluxDB URL
  • Token
  • Organization
  • Bucket
  • Measurement

Set up Metrics

Once you are connected, Data Studio will show you a list of all the fields available from your Measurement. This includes your Tag set, Field set and Timestamp.

Visualize your data in Data Studio

After you have reviewed the fields, press the "CREATE REPORT" button to create your report.

InfluxDB 1.8 compatibility

InfluxDB 1.8.0 introduced forward-compatibility APIs for InfluxDB 2.0. These allow you to move easily from InfluxDB 1.x to InfluxDB 2.0 Cloud or open source.

Connector usage differences:

  1. Use the form username:password for the authentication token, e.g. my-user:my-password. Use an empty string ("") if the server doesn't require authentication.
  2. The organization parameter is not used. Use the string - as its value.
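As an illustration of how such a token could map onto the HTTP Authorization header used by the compatibility API (a hypothetical sketch; the function name is not the connector's actual code):

```javascript
// Hypothetical sketch: map a v1.8 "username:password" token onto the
// v2-style Authorization header. Not the connector's actual code.
function buildAuthHeader(token) {
  // An empty token means the server does not require authentication,
  // so no Authorization header is sent at all.
  if (token === '') {
    return {};
  }
  return {Authorization: 'Token ' + token};
}

console.log(buildAuthHeader('my-user:my-password'));
// { Authorization: 'Token my-user:my-password' }
```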

Troubleshooting

This app isn't verified

When authorizing the connector, an OAuth consent screen may be presented to you with the warning "This app isn't verified". This is because the connector has requested authorization to make requests to an external API (e.g., to fetch data from the service you're connecting to).

This warning will no longer be displayed once the connector is included in the Partner connectors gallery - see #2

Query optimization

The connector uses two types of query: a schema query and a data query. Please check that both of them work correctly with your dataset.

Schema query

It is used to determine all the fields of the configured Bucket and Measurement. This query runs only during configuration.

import "influxdata/influxdb/v1"

bucket = "my-bucket"
measurement = "my-measurement"
start_range = duration(v: uint(v: 1970-01-01) - uint(v: now()))

v1.tagKeys(
  bucket: bucket,
  predicate: (r) => r._measurement == measurement,
  start: start_range
) |> filter(fn: (r) => r._value != "_start" and r._value != "_stop" and r._value != "_measurement" and r._value != "_field")
  |> yield(name: "tags")

from(bucket: bucket)
  |> range(start: start_range)
  |> filter(fn: (r) => r["_measurement"] == measurement)
  |> keep(fn: (column) => column == "_field" or column == "_value")
  |> unique(column: "_field")
  |> yield(name: "fields")

You can optimize this query with the Schema Query Range setting in the first step of the Data Studio connection wizard; if you set it to -6h, for example, the start_range of the query becomes -6h.
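As a sketch of that behavior (a hypothetical helper, not the connector's actual code), the configured range simply replaces the default epoch-to-now start_range:

```javascript
// Hypothetical sketch (not the connector's actual code) of how a
// "Schema Query Range" setting could override the default start_range,
// which otherwise spans from the Unix epoch to now.
function schemaStartRange(configuredRange) {
  const DEFAULT_RANGE = 'duration(v: uint(v: 1970-01-01) - uint(v: now()))';
  return configuredRange && configuredRange.trim() !== ''
    ? configuredRange // e.g. '-6h'
    : DEFAULT_RANGE;
}

console.log(schemaStartRange('-6h')); // '-6h'
```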

Data query

It is used to retrieve data from InfluxDB. The time range is configured via the Date Range control widget.

bucket = "my-bucket"
measurement = "my-measurement"
// Configure DataRange in Data Studio - https://support.google.com/datastudio/answer/6291067?hl=en
start = time(v: 1) // or start of specified Data Range
stop = now() // or end of specified Data Range

from(bucket: bucket) 
|> range(start: start, stop: stop) 
|> filter(fn: (r) => r["_measurement"] == measurement) 
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
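In a community connector, the selected date range arrives in the getData() request as request.dateRange.startDate and request.dateRange.endDate. A hypothetical sketch of assembling the query from those values (the helper name and the exact time formatting are assumptions, not the connector's real code):

```javascript
// Hypothetical sketch of assembling the data query from a Data Studio
// getData() request. The request.dateRange.{startDate, endDate} fields are
// part of the community-connector request format; the helper name and the
// exact time formatting here are assumptions.
function buildDataQuery(bucket, measurement, dateRange) {
  // Fall back to the widest possible range when no Date Range widget is set.
  const start = dateRange ? dateRange.startDate + 'T00:00:00Z' : 'time(v: 1)';
  const stop = dateRange ? dateRange.endDate + 'T23:59:59Z' : 'now()';
  return [
    'from(bucket: "' + bucket + '")',
    '  |> range(start: ' + start + ', stop: ' + stop + ')',
    '  |> filter(fn: (r) => r["_measurement"] == "' + measurement + '")',
    '  |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")',
  ].join('\n');
}
```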

Development

Open Apps Script project in your browser:

$ yarn open

or

https://script.google.com/home/projects/18YPFhvO1TMR7QFCw2iFuIH_1_iPvFIYrnT3fg8J7skoqUd5x7YAnPD7_/edit

Push your local changes to Apps Script:

$ yarn push

Update the production deployment of connector:

$ yarn deploy

Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/influxdata/influxdb-gds-connector.

License

The connector is available as open source under the terms of the MIT License.


influxdb-gds-connector's Issues

Not an issue

Removed because it was not an actual issue.

InfluxDB Connector Issue

I'm trying to understand this error more when I try to authorize the connector. Is anyone able to advise?

"GetBuckets from: https://xxx.xxx.com/" returned an error:Exception: Could not parse text.

InfluxDB Connector Issue

Very excited to try this connector.
I have a bucket with a lot of data in it, which seemed to make the connector 'fall over':

Community Connector Error
There was an error caused by the community connector. Please report the issue to the provider of this community connector if this issue persists.

Connector details
"GetFields from: https://eu-central-1-1.aws.cloud2.influxdata.com" returned an error:Exception: Request failed for https://eu-central-1-1.aws.cloud2.influxdata.com returned code 400. Truncated server response: {"code":"invalid","message":"runtime error @1:110-1:143: drop: schema collision detected: column "_value" is both of type int and float"} (use muteHttpExceptions option to examine full response)

Error ID: fb5b13d2

I'm quite experienced with InfluxDB, and there is nothing fundamentally wrong with _value being an int for some _field values and a float for others; in fact it's fairly fundamental. The connector must be assuming something about InfluxDB 2.0 data that is (in general) not always true.

Any ideas how I'd move forward to use this excellent tool?

Metric data type cannot be changed back to its original data type.

The issue:

I noticed that if you change the metrics data type from 'Date & Time' to 'Text' you can't change it back.

Before

This is because the 'Date & Time' option is no longer available in the LOVs.

After

Steps to reproduce:
List the minimal actions needed to reproduce the behavior.

  1. Using GDS, create a connection to the InfluxData Cloud2 service, setting the InfluxDB URL, Token, Organisation, Bucket, and Measurement.
  2. Within GDS, in the field-editing section, change the 'time' metric data type from 'Date & Time' to 'Text'.
  3. Now try to change it back from 'Text' to 'Date & Time'.

Expected behavior:
You should be able to change the data type back using the LOVs.

Actual behavior:
The original data type of 'Date & Time' is no longer available in the LOVs.

Specifications:

  • Client Version: 2021.10
  • InfluxDB Version: 2.x Cloud
  • Platform: n/a

InfluxDB Connector Issue

Hi - Trying to use the InfluxDB connector to connect to our Azure-tenant-hosted InfluxDB 2.1.1 instance. The connector authorization throws an error when I provide the URL, Token, and Org. I took the query it was running and ran it in the Data Explorer on that instance, and it didn't return an error. I know the URL and token are correct because I use that same info to connect from Redash. Any thoughts? Version compatibility?

Metrics and Dimensions are imported around the wrong way

Currently, when data is imported into Data Studio from InfluxDB, time is imported as a metric (blue) while the metrics are imported as dimensions (green). This should be the opposite way round, and it currently makes the connector unusable.

Publish connector as a Partner connector

Partner Connector requirements

https://developers.google.com/datastudio/connector/pscc-requirements

Apps Script

  1. Share view access of your Apps Script project with both of these addresses:
  2. Create a deployment named Production and update the Production deployment to contain the version of code you want to publish.
  3. If you have updated your code since creating the Production deployment, ensure that the correct version of the code is selected for the deployment before you submit your connector.

Manifest

You must include the following in your connector's manifest. View manifest reference for more information about each property in manifest.

  1. In description, make sure you provide all information and instructions necessary to have a basic understanding of the connector and how to use it. Connectors with vague and incomplete descriptions will be rejected during review.
  2. addOnUrl should be a dedicated hosted page about your connector, preferably hosted on your own domain.
    • This page must contain or link to your Privacy Policy and Terms of Use hosted on the same domain as the addOnUrl (see examples: https://supermetrics.com/privacy-policy, https://supermetrics.com/terms-of-service).
    • This page should contain any details the user will need to know to use your connector.
    • If users need to sign up for an account to use your connector, the sign up link should be available from this page.
    • The page cannot be hosted at https://sites.google.com/.
    • See example pages from existing partners: Funnel, Supermetrics, CallRail.
  3. supportUrl should be a hosted page where users can go to get support for your connector. This cannot be an email or mailto link.
  4. You should populate the sources property with all the sources your connector connects to. See Sources in Manifest reference for details.
    • You can view the existing list of sources at our Data Registry Repository. If the source you are connecting to does not exist in the repository, send a pull request to the Data Registry Repository to add the source. Your connector will fail the review process if the sources in your manifest do not exist in the repository.
    • This is additional metadata for the connector that will be indexed for the search feature in the gallery. Your connector will show up in the search results when users search for a specific data source in the gallery.
    • The gallery will let users discover connectors by data sources by providing a Connectors by Data Source interface.
  5. You should limit the number of endpoints called by UrlFetchApp in your connector to those absolutely required for connector functionality. Add the urlFetchWhitelist property to the root level of your manifest. View the urlFetchWhitelist reference for more info.
    • This property should contain all URLs your connector connects to using the UrlFetchApp call.
    • If your connector does not execute a UrlFetchApp call then set the property value to an empty list [].
    • If your connector does not connect to a fixed domain or the endpoint prefix varies, omit the urlFetchWhitelist property in the manifest.
  6. Manifest should contain values for
    • shortDescription
    • authType
    • feeType
    • privacyPolicyUrl
    • termsOfServiceUrl
  7. Connector name should be directly representative of what the connector specifically does. This makes it clear what the connector does and helps the user to identify the correct connector for their need.
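As a sketch, the manifest requirements above might translate into an appsscript.json fragment like the following. All values are placeholders, not the connector's actual manifest; since the InfluxDB URL is user-supplied (the endpoint prefix varies), the urlFetchWhitelist property is omitted per item 5. Check exact property names against the manifest reference.

```json
{
  "dataStudio": {
    "name": "InfluxDB",
    "company": "InfluxData",
    "logoUrl": "https://example.com/logo.png",
    "addOnUrl": "https://example.com/influxdb-gds-connector",
    "supportUrl": "https://example.com/support",
    "shortDescription": "Connect to InfluxDB v2 instances",
    "description": "Query datasets from InfluxDB v2 instances through the InfluxDB API.",
    "authType": ["KEY"],
    "feeType": ["FREE"],
    "privacyPolicyUrl": "https://example.com/privacy",
    "termsOfServiceUrl": "https://example.com/terms",
    "sources": ["INFLUXDB"]
  }
}
```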

Template and report

  1. If your connector has a fixed schema, create a report template for your connector and add it to the manifest. Enable Sharing by link for the report.
  2. Create at least one demo report using your connector and submit the report to Data Studio gallery. This report can be a replica of your template report or a separate report that displays even broader functionalities of your connector.
    • Adding the demo reports makes your connector eligible for promotional opportunities (getting featured, mentions in newsletters and blog posts, case studies etc).

Connector

  1. If the user needs an account to use the connector, make sure the connector description or the addOnUrl link provides instructions to help the user create one.
  2. Your connector cannot be in unfinished or beta status. You have to publish a complete and functional connector. You can always update your connector but the production deployment that is released to users should be tested and feature complete.
  3. Provide meaningful and actionable error messages to users when users encounter a Connector internal error. This includes cases when a user provides invalid/blank input at configuration.
  4. Your connector's shortDescription, description, addOnUrl link, supportUrl link, and OAuth page (if applicable) should be free of spelling and grammatical errors.
  5. Use appropriate authentication method in getAuthType(). Do not request credentials via getConfig().
  6. Complete the OAuth Client Verification process. The verification is mandatory for all connectors regardless of the authentication method in getAuthType(). The verification process is handled by a separate team. Consult the OAuth API Verification FAQ for more info on this. Your connector will not be published if the OAuth Client Verification process is not completed.
    • During the OAuth verification process, add your connector's required OAuth scopes as part of the OAuth consent screen configuration. If you fail to add all required scopes, you might pass the OAuth verification process but your connector will still show the Unverified app screen. This will cause the Partner Connector verification process to fail.
      Authorize and test your connector using a new account after passing the OAuth verification process to ensure that Unverified app screen is not displayed to your users.
  7. Ensure you adhere to the Data Studio Galleries Terms of Service (Submitter).

Open-Ended Questions

  1. Which Google account will we use to publish the connector?
  2. Could a GitHub repository be the home page of the connector, or should it be a page on www.influxdata.com?
  3. Which demo report will be submitted to the Data Studio gallery? Could we use the COVID-19 report from the examples?

TODO

  1. Move GitHub repository to InfluxData organization
  2. Create Privacy Policy doc
  3. Create Terms of Use doc
  4. Complete OAuth Client Verification

Publish to verification process

Publish your Partner Connector

Improve error message

Proposal:
No buckets have been detected. Please verify your organization name is accurate and your token has permissions to read the selected bucket.

Current behavior:
(screenshot)

Error on measurements with a tag that has different values

Steps to reproduce:
Configure the InfluxDB connector with the following data:

  1. URL https://us-central1-1.gcp.cloud2.influxdata.com
  2. Token given at InfluxDB Cloud 2.0
  3. Organization configured at InfluxDB Cloud 2.0
  4. Select one of the buckets you already created
  5. Select a measurement
  6. Leave "Schema Query Range" empty, since you want to have a date range selector

Expected behavior:
A window with all the metrics and dimensions detected by GDS

Actual behavior:
I got the following error:

There was an error caused by the community connector. Please report the issue to the provider of this community connector if this issue persists.

Connector Details
Cannot retrieve a Schema of your Measurement. InfluxDB Error Response: "invalid". Requested Query: "import "influxdata/influxdb/v1" bucket = "brechas" measurement = "Temperatura" start_range = duration(v: uint(v: 1970-01-01) - uint(v: now())) v1.tagKeys( bucket: bucket, predicate: (r) => r._measurement == measurement, start: start_range ) |> filter(fn: (r) => r._value != "_start" and r._value != "_stop" and r._value != "_measurement" and r._value != "_field") |> yield(name: "tags") from(bucket: bucket) |> range(start: start_range) |> filter(fn: (r) => r["_measurement"] == measurement) |> keep(fn: (column) => column == "_field" or column == "_value") |> unique(column: "_field") |> yield(name: "fields")"

Error ID: undefined

Specifications:

  • Client Version: Microsoft Edge and Google chrome
  • InfluxDB Version: Cloud 2.0
  • Platform: ??

P.S.
This happens only with some measurements; I am trying to figure out what is different about the measurements for which I get the error.

Since I am storing data in both databases (Cloud 2.0 and InfluxDB v1.x), so far I don't get any errors using InfluxDB v1.x.

no buckets found in organization

Hi,

We are using InfluxDB Cloud v2 with the Google Data Studio connector, and we have had the following error for a few days:

"GetBuckets from: https://eu-central-1-1.aws.cloud2.influxdata.com" returned an error:Exception: Request failed for https://eu-central-1-1.aws.cloud2.influxdata.com returned code 404. Truncated server response: {"code":"not found","message":"no buckets found in organization **************"} (use muteHttpExceptions option to examine full response)


The identifiers and Uri have not changed.

Do you have any idea of the source of the problem?

Thank you

GDS Connector not found

Hello,

I know what I will report is not a real issue, but I don't know where I should open it; please let me know and I will move the post wherever you say. Mine is a simple question: I am not able to find the connector in GDS. Has it been retired? Should I install something or subscribe somewhere?

Thanks in advance

Datasource cannot retrieve list of Buckets

Hi,
I am unable to connect the datasource. The following error is generated:

Cannot retrieve a list of Buckets. InfluxDB Error Response: "Invalid argument: xxx.119.230.43:50500/api/v2/query?org=Home"

I did try with a new read/write token and even with a fresh organisation, but cannot get it to connect.
Port 50500 has been checked and is open, pointing to the VM's port 8086 on the internal network.

