gusutabopb / aioinflux
Asynchronous Python client for InfluxDB
License: MIT License
Hello, how can we write multiple points in one call, like with influxdb-python?
json_body = {
    "tags": {
        "host": "server01",
        "region": "us-west"
    },
    "points": [{
        "measurement": "cpu_load_short",
        "fields": {
            "value": 0.64
        },
        "time": "2009-11-10T23:00:00Z",
    },
    {
        "measurement": "cpu_load_short",
        "fields": {
            "value": 0.67
        },
        "time": "2009-11-10T23:05:00Z"
    }]
}
await influx_client.write(json_body, measurement='cpu_load_short')
But we get exception=KeyError('fields').
Thanks.
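Not a maintainer, but if I read the docs right, aioinflux's write accepts an iterable of complete point dicts rather than a single dict with a "points" key, so one way around the KeyError is to merge the shared tags into each point first. A sketch (expand_points is a hypothetical helper, not part of the library):

```python
def expand_points(shared_tags, points):
    """Merge shared tags into each point so every dict in the list is a
    complete point of the shape aioinflux expects (measurement/tags/fields/time)."""
    return [{**p, "tags": {**shared_tags, **p.get("tags", {})}} for p in points]

shared = {"host": "server01", "region": "us-west"}
points = [
    {"measurement": "cpu_load_short", "fields": {"value": 0.64},
     "time": "2009-11-10T23:00:00Z"},
    {"measurement": "cpu_load_short", "fields": {"value": 0.67},
     "time": "2009-11-10T23:05:00Z"},
]
# await influx_client.write(expand_points(shared, points))
```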
When I call await influx_client.write(somedata)
in sanic, it raises "RuntimeError: Timeout context manager should be used inside a task".
Below are the details:
File "C:\Users\zzc\Envs\py37_sanic\lib\site-packages\aioinflux\client.py", line 292, in write
async with self._session.post(url, params=params, data=data) as resp:
File "C:\Users\zzc\Envs\py37_sanic\lib\site-packages\aiohttp\client.py", line 1005, in __aenter__
self._resp = await self._coro
File "C:\Users\zzc\Envs\py37_sanic\lib\site-packages\aiohttp\client.py", line 417, in _request
with timer:
File "C:\Users\zzc\Envs\py37_sanic\lib\site-packages\aiohttp\helpers.py", line 568, in __enter__
raise RuntimeError('Timeout context manager should be used '
RuntimeError: Timeout context manager should be used inside a task
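Not a maintainer, but a commonly suggested workaround is to make sure the write coroutine runs inside an asyncio Task on the running loop, since that is what aiohttp's timeout machinery requires. A minimal self-contained sketch (write_point is a stand-in for influx_client.write):

```python
import asyncio

async def write_point(data):
    # stand-in for influx_client.write(data); aiohttp's timeout context
    # manager requires the surrounding coroutine to run inside a Task
    await asyncio.sleep(0)
    return data

async def handler():
    # wrapping the coroutine in a Task ties it to the running loop,
    # which avoids the RuntimeError raised by aiohttp's timer
    task = asyncio.ensure_future(write_point({"fields": {"value": 1}}))
    return await task

result = asyncio.run(handler())
```

The error can also appear when the client is created on a different event loop than the one sanic runs; recreating the client inside the running loop is another thing worth trying.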
Hi
While processing this query:
SELECT ROUND(LAST(Free_Megabytes) / 1024) AS free, ROUND(Free_Megabytes / 1024 / (Percent_Free_Space / 100)) AS total, ROUND(Free_Megabytes / 1024 * ((100 - Percent_Free_Space) / Percent_Free_Space)) AS used, (100 - Percent_Free_Space) as percent, instance as path FROM win_disk WHERE host = 'ais-pc-16003' GROUP BY instance
This is the raw data that InfluxDBClient.query
returned.
{'results': [{'series': [{'columns': ['time',
'free',
'total',
'used',
'percent',
'path'],
'name': 'win_disk',
'tags': {'instance': 'C:'},
'values': [[1577419571000000000,
94,
238,
144,
60.49140930175781,
'C:']]},
{'columns': ['time',
'free',
'total',
'used',
'percent',
'path'],
'name': 'win_disk',
'tags': {'instance': 'D:'},
'values': [[1577419571000000000,
1727,
1863,
136,
7.3103790283203125,
'D:']]},
{'columns': ['time',
'free',
'total',
'used',
'percent',
'path'],
'name': 'win_disk',
'tags': {'instance': 'HarddiskVolume1'},
'values': [[1577419330000000000,
0,
0,
0,
29.292930603027344,
'HarddiskVolume1']]},
{'columns': ['time',
'free',
'total',
'used',
'percent',
'path'],
'name': 'win_disk',
'tags': {'instance': '_Total'},
'values': [[1577419571000000000,
1821,
2101,
280,
13.345237731933594,
'_Total']]}],
'statement_id': 0}]}
And I want to use this code to get parsed dicts:
def dict_parser(*x, meta):
    return dict(zip(meta['columns'], x))

g = fixed_iterpoints(r, dict_parser)
But I only got the first row ("instance": "C:").
And below is the source of iterpoints. As you can see, the for-loop returns on the first iteration.
def iterpoints(resp: dict, parser: Optional[Callable] = None) -> Iterator[Any]:
    for statement in resp['results']:
        if 'series' not in statement:
            continue
        for series in statement['series']:
            if parser is None:
                return (x for x in series['values'])
            elif 'meta' in inspect.signature(parser).parameters:
                meta = {k: series[k] for k in series if k != 'values'}
                meta['statement_id'] = statement['statement_id']
                return (parser(*x, meta=meta) for x in series['values'])
            else:
                return (parser(*x) for x in series['values'])
    return iter([])
I modified this function as a workaround:
def fixed_iterpoints(resp: dict, parser: Optional[Callable] = None):
    for statement in resp['results']:
        if 'series' not in statement:
            continue
        gs = []
        for series in statement['series']:
            if parser is None:
                part = (x for x in series['values'])
            elif 'meta' in inspect.signature(parser).parameters:
                meta = {k: series[k] for k in series if k != 'values'}
                meta['statement_id'] = statement['statement_id']
                part = (parser(*x, meta=meta) for x in series['values'])
            else:
                part = (parser(*x) for x in series['values'])
            if len(statement['series']) == 1:
                return part
            gs.append(part)
        return gs
    return iter([])
It works for me, but it returns a nested generator, which might be weird.
I'd like to know if you have a better idea.
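One possible cleaner alternative (a sketch, not the library's actual fix): make the function itself a generator, so every series of every statement is yielded lazily and the early returns disappear entirely:

```python
import inspect
from typing import Any, Callable, Iterator, Optional

def iterpoints_all(resp: dict, parser: Optional[Callable] = None) -> Iterator[Any]:
    # yields rows from every series of every statement, flat
    for statement in resp['results']:
        for series in statement.get('series', []):
            if parser is None:
                yield from series['values']
            elif 'meta' in inspect.signature(parser).parameters:
                meta = {k: series[k] for k in series if k != 'values'}
                meta['statement_id'] = statement['statement_id']
                for x in series['values']:
                    yield parser(*x, meta=meta)
            else:
                for x in series['values']:
                    yield parser(*x)
```

Callers get a single flat iterator over all rows; per-series grouping, if needed, can be recovered from the meta dict passed to the parser (it still carries name and tags).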
Currently, when querying multiple series with chunked=True, there is no way to know which series a data point belongs to; the interface only provides the point and does not pass along the name of the measurement.
I fixed this for myself here: https://github.com/plugaai/aioinflux/compare/master...miracle2k:multiseries?expand=1 but it's not a clean pull request because it depends on my changes to make the library run on Python 3.5 (for PyPy).
Hello,
We have a scenario where we receive timestamps with millisecond precision, and we would like to specify a precision of ms when writing to InfluxDB so the value is stored correctly. As of now, it shows up as a 1970 timestamp, because the time is in ms but InfluxDB interprets it as ns. I see in the code that this is not presently implemented. I'm not sure what is required to implement it; if it is not overly complicated, I may be able to do it.
In the meantime, I guess we can convert our time from ms to ns and then write with this client, but it would be great not to have to do that.
Thanks.
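For what it's worth, the interim ms-to-ns conversion mentioned above is a one-liner; a sketch (ms_to_ns is a hypothetical helper, not part of aioinflux):

```python
def ms_to_ns(ts_ms: int) -> int:
    # InfluxDB assumes nanosecond precision by default, so a millisecond
    # timestamp must be scaled by 10**6 before writing
    return ts_ms * 1_000_000

point = {
    "measurement": "cpu_load_short",
    "fields": {"value": 0.64},
    "time": ms_to_ns(1577419571000),  # ms epoch -> ns epoch
}
```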
Hi Gustavo,
I tried to follow the demo in our project, but it doesn't work. Could you help me figure out the reason?
Here is my code:
async def read_influxdb(userid, starttime, endtime):
    # logger = logging.getLogger("influxDB read demo")
    async with InfluxDBClient(host='localhost', port=8086, username='admin',
                              password='123456', db=db_name) as client:
        user_id = '\'' + str(userid) + '\''
        sql_ecg = 'SELECT point FROM wave WHERE (person_zid = {}) AND (time > {}s) AND (time < {}s)'.format(user_id, starttime, endtime)
        await client.query(sql_ecg, chunked=True)

if __name__ == '__main__':
    user_id = 973097
    starttime = '2018-09-26 18:08:48'
    endtime = '2018-09-27 18:08:48'
    starttime_posix = utc_to_local(starttime)
    endtime_posix = utc_to_local(endtime)
    asyncio.get_event_loop().run_until_complete(read_influxdb(user_id, starttime_posix, endtime_posix))
When I run this code, I get the errors below:
sys:1: RuntimeWarning: coroutine 'query' was never awaited
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x10f78f630>
Best
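If I understand the API correctly, with chunked=True the query returns an async generator of result chunks, which must be consumed with async for rather than awaited once; that mismatch is what produces the "never awaited" warning. A self-contained sketch (fake_query stands in for client.query):

```python
import asyncio

async def fake_query(sql, chunked=False):
    # stand-in for InfluxDBClient.query(..., chunked=True), which yields
    # result chunks instead of returning a single awaitable response
    for i in range(3):
        yield {"results": [{"statement_id": 0, "chunk": i}]}

async def main():
    chunks = []
    async for chunk in fake_query("SELECT ...", chunked=True):  # async for, not await
        chunks.append(chunk)
    return chunks

chunks = asyncio.run(main())
```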
Thanks for creating this library! Does it support UDP inserts via asyncio?
i.e.
udp_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_port_tuple = (host, udp_port)
udp_socket.sendto(data_str, udp_port_tuple)
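I don't believe the library exposes UDP itself, but the raw-socket snippet above has an asyncio-native equivalent via create_datagram_endpoint; a sketch (send_udp is a hypothetical helper, not part of aioinflux):

```python
import asyncio

async def send_udp(data: bytes, host: str, port: int) -> None:
    # fire-and-forget UDP send on the running loop; no aioinflux involved
    loop = asyncio.get_event_loop()
    transport, _ = await loop.create_datagram_endpoint(
        asyncio.DatagramProtocol, remote_addr=(host, port)
    )
    transport.sendto(data)
    await asyncio.sleep(0)  # give the loop a tick to flush the datagram
    transport.close()

# asyncio.run(send_udp(b"cpu_load_short value=0.64", host, udp_port))
```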
I have no intention of using pandas and don't want to see the following warning when my application starts or worry about suppressing a warning:
"Pandas/Numpy is not available. Support for 'dataframe' mode is disabled."
Please consider removing this warning on import.
Is it possible to add extra tags to a user object directly from the client's .write() method? I'd like to be able to add extra tags when calling client.write(data, **extra_tags), but this appears to only be possible with a dictionary, not a user-defined class.
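As a workaround until the user-type path supports it, the tag merge can be done on the dict representation before writing; a sketch (with_extra_tags is a hypothetical helper, not part of the library):

```python
def with_extra_tags(point: dict, **extra_tags) -> dict:
    # returns a copy of the point with extra tags merged in;
    # explicit per-point tags win over the extras
    merged = dict(point)
    merged["tags"] = {**extra_tags, **point.get("tags", {})}
    return merged

point = {"measurement": "m", "tags": {"host": "server01"}, "fields": {"v": 1.0}}
tagged = with_extra_tags(point, region="us-west")
# await client.write(tagged)
```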
Does this package support the new flux query syntax?
Currently, the blocking mode won't work on Python 3.7 running on Jupyter. The code below:
import aioinflux
c = aioinflux.InfluxDBClient(db='mydb', mode='blocking')
c.show_measurements()
Raises RuntimeError: This event loop is already running
This is caused by the fact that the latest versions of Tornado (which is used by Jupyter/ipykernel) run an asyncio loop on the main thread by default:
# Python 3.7
import asyncio
asyncio.get_event_loop()
# <_UnixSelectorEventLoop running=True closed=False debug=False>
# Python 3.6 (w/ tornado < 5)
import asyncio
asyncio.get_event_loop()
# <_UnixSelectorEventLoop running=False closed=False debug=False>
This is being discussed in jupyter/notebook#3397.
From an aioinflux perspective, a possible workaround would be to start a new event loop on a background thread and use asyncio.run_coroutine_threadsafe to run the coroutine and return a concurrent.futures.Future object that wraps the result.
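The workaround described above can be sketched with pure asyncio (coro stands in for the actual client call):

```python
import asyncio
import threading

# start a private event loop on a background thread
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def coro():
    # stand-in for e.g. client.show_measurements()
    return 42

# submit the coroutine to the background loop; the returned
# concurrent.futures.Future can be waited on synchronously,
# even while Jupyter's own loop is running on the main thread
future = asyncio.run_coroutine_threadsafe(coro(), loop)
result = future.result(timeout=5)
```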
Given the datetime datetime.datetime(2018, 4, 27, 20, 26, 38, 456123):
Before 65f9862, this resulted in a timestamp of 1524860798456123000 being sent (pd.Timestamp(x).value).
After, it results in 1524857198456123000 being sent (int(x.timestamp()) * 10 ** 9 + x.microsecond * 1000).
The first one interprets the time as UTC. The second one interprets it as local time.
Not sure which one you prefer, just wanted to point out this (backwards-incompatible) change.
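For concreteness, the two interpretations can be reproduced with the stdlib alone (the local-time value depends on the machine's timezone, so only the UTC variant has a fixed result):

```python
import datetime

x = datetime.datetime(2018, 4, 27, 20, 26, 38, 456123)

# pre-65f9862 behavior: naive datetime treated as UTC
utc_ns = (int(x.replace(tzinfo=datetime.timezone.utc).timestamp()) * 10**9
          + x.microsecond * 1000)

# post-65f9862 behavior: naive datetime treated as local time
local_ns = int(x.timestamp()) * 10**9 + x.microsecond * 1000
```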
PEP 563 behavior is available from Python 3.7 (using from __future__ import annotations) and will become the default in Python 3.10.
Among the changes introduced by PEP 563, the type annotations in the __annotations__ attribute of an object are stored in string form. This breaks the function below, because all the tests expect type objects.
aioinflux/aioinflux/serialization/usertype.py
Lines 57 to 67 in 77f9d24
A minimal reproduction with lineprotocol():
from typing import NamedTuple
import aioinflux

@aioinflux.lineprotocol
class Production(NamedTuple):
    total_line: aioinflux.INT
# Works well as is
Add from __future__ import annotations at the top and you get:
SchemaError: Must have one or more non-empty field-type attributes [~BOOL, ~INT, ~DECIMAL, ~FLOAT, ~STR, ~ENUM]
at import time.
Using https://docs.python.org/3/library/typing.html#typing.get_type_hints has the same behavior (returns a dict with values as type objects) with or without from __future__ import annotations. Furthermore, the author of PEP 563 advises using it.
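The difference is easy to see with plain string annotations, which is effectively what PEP 563 turns every annotation into (a sketch; Point is just an illustrative class, not from aioinflux):

```python
import typing

class Point:
    # writing the annotations as strings simulates what
    # `from __future__ import annotations` does to every annotation
    x: "int"
    y: "float"

raw = Point.__annotations__           # strings: breaks checks that expect type objects
hints = typing.get_type_hints(Point)  # resolved back to real type objects
```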
This lib could be useful for manipulating large amounts of data, and PyPy can help speed things up for those use cases, but PyPy is marked compatible only with Python 3.5 and cannot be installed with the python_requires=3.6 specified in the setup.py file.
I don't think anything in the code requires 3.6 other than the async generators. Would you consider a PR lowering python_requires to 3.5 and adding a dependency on async_generator to support 3.5?
Recently I have had a problem installing the aioinflux PyPI package using pip:
pip install aioinflux==0.9.0
Here's what I get back:
Collecting aioinflux==0.9.0
Using cached aioinflux-0.9.0-py3-none-any.whl (16 kB)
Collecting aiohttp>=3.0
Using cached aiohttp-3.7.4.post0-cp39-cp39-manylinux2014_x86_64.whl (1.4 MB)
Collecting ciso8601
Using cached ciso8601-2.1.3.tar.gz (15 kB)
Requirement already satisfied: async-timeout<4.0,>=3.0 in ./env/lib/python3.9/site-packages (from aiohttp>=3.0->aioinflux==0.9.0) (3.0.1)
Requirement already satisfied: multidict<7.0,>=4.5 in ./env/lib/python3.9/site-packages (from aiohttp>=3.0->aioinflux==0.9.0) (5.1.0)
Requirement already satisfied: yarl<2.0,>=1.0 in ./env/lib/python3.9/site-packages (from aiohttp>=3.0->aioinflux==0.9.0) (1.6.3)
Requirement already satisfied: typing-extensions>=3.6.5 in ./env/lib/python3.9/site-packages (from aiohttp>=3.0->aioinflux==0.9.0) (3.10.0.0)
Requirement already satisfied: chardet<5.0,>=2.0 in ./env/lib/python3.9/site-packages (from aiohttp>=3.0->aioinflux==0.9.0) (4.0.0)
Requirement already satisfied: attrs>=17.3.0 in ./env/lib/python3.9/site-packages (from aiohttp>=3.0->aioinflux==0.9.0) (21.2.0)
Requirement already satisfied: idna>=2.0 in ./env/lib/python3.9/site-packages (from yarl<2.0,>=1.0->aiohttp>=3.0->aioinflux==0.9.0) (2.10)
Building wheels for collected packages: ciso8601
Building wheel for ciso8601 (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/benyamin/PycharmProjects/ivms/env/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-6vlfria8/ciso8601_eb27a37138244a729c0bbd6b5fd97f7a/setup.py'"'"'; __file__='"'"'/tmp/pip-install-6vlfria8/ciso8601_eb27a37138244a729c0bbd6b5fd97f7a/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-6nn7ilbn
cwd: /tmp/pip-install-6vlfria8/ciso8601_eb27a37138244a729c0bbd6b5fd97f7a/
Complete output (18 lines):
running bdist_wheel
running build
running build_py
package init file 'ciso8601/__init__.py' not found (or not a regular file)
creating build
creating build/lib.linux-x86_64-3.9
creating build/lib.linux-x86_64-3.9/ciso8601
copying ciso8601/__init__.pyi -> build/lib.linux-x86_64-3.9/ciso8601
copying ciso8601/py.typed -> build/lib.linux-x86_64-3.9/ciso8601
running build_ext
building 'ciso8601' extension
creating build/temp.linux-x86_64-3.9
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DCISO8601_VERSION=2.1.3 -I/home/benyamin/PycharmProjects/ivms/env/include -I/usr/include/python3.9 -c module.c -o build/temp.linux-x86_64-3.9/module.o
module.c:1:10: fatal error: Python.h: No such file or directory
1 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for ciso8601
Running setup.py clean for ciso8601
Failed to build ciso8601
Installing collected packages: ciso8601, aiohttp, aioinflux
Running setup.py install for ciso8601 ... error
ERROR: Command errored out with exit status 1:
command: /home/benyamin/PycharmProjects/ivms/env/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-6vlfria8/ciso8601_eb27a37138244a729c0bbd6b5fd97f7a/setup.py'"'"'; __file__='"'"'/tmp/pip-install-6vlfria8/ciso8601_eb27a37138244a729c0bbd6b5fd97f7a/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-w9l_7mmo/install-record.txt --single-version-externally-managed --compile --install-headers /home/benyamin/PycharmProjects/ivms/env/include/site/python3.9/ciso8601
cwd: /tmp/pip-install-6vlfria8/ciso8601_eb27a37138244a729c0bbd6b5fd97f7a/
Complete output (18 lines):
running install
running build
running build_py
package init file 'ciso8601/__init__.py' not found (or not a regular file)
creating build
creating build/lib.linux-x86_64-3.9
creating build/lib.linux-x86_64-3.9/ciso8601
copying ciso8601/__init__.pyi -> build/lib.linux-x86_64-3.9/ciso8601
copying ciso8601/py.typed -> build/lib.linux-x86_64-3.9/ciso8601
running build_ext
building 'ciso8601' extension
creating build/temp.linux-x86_64-3.9
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DCISO8601_VERSION=2.1.3 -I/home/benyamin/PycharmProjects/ivms/env/include -I/usr/include/python3.9 -c module.c -o build/temp.linux-x86_64-3.9/module.o
module.c:1:10: fatal error: Python.h: No such file or directory
1 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/benyamin/PycharmProjects/ivms/env/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-6vlfria8/ciso8601_eb27a37138244a729c0bbd6b5fd97f7a/setup.py'"'"'; __file__='"'"'/tmp/pip-install-6vlfria8/ciso8601_eb27a37138244a729c0bbd6b5fd97f7a/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-w9l_7mmo/install-record.txt --single-version-externally-managed --compile --install-headers /home/benyamin/PycharmProjects/ivms/env/include/site/python3.9/ciso8601 Check the logs for full command output.
Issue:
Pandas/NumPy are not required for Influx interaction, but are dependencies for aioinflux.
When developing for a Raspberry target, this becomes an issue, as Pandas/NumPy do not provide compiled packages for ARMv7.
Compiling these packages on a Raspberry 3 takes +/- 1 hour. That's a bit much for an unused dependency.
Desired behavior:
No functional changes for clients that use dataframe serialization functionality.
No functional changes for clients that don't - but they can drop the Pandas/Numpy packages from their dependency stack.
Proposed solution:
Pandas/NumPy dependencies in setup.py can move to an extras_require collection.
client.py does not use NumPy, and only uses Pandas to define PointType. The PointType definition can equally easily be made conditional.
serialization.py makes more extensive use of the dependencies. make_df() and parse_df() are Pandas-only functions, and can move to a conditional include. The isinstance(data, pd.DataFrame) check can also be made conditional (if using_pd and isinstance(data, pd.DataFrame)).
_parse_fields(): the checks for np.integer and np.isnan() can be placed behind evaluation of a using_np variable, or a pd is None check.
Required effort:
Between 2 hours and 2 days.
Practical consideration:
We're actively using aioinflux (apart from the compile time issues it works great), and I can make the time to make a PR.
The bigger issue is whether this is a desired feature for the main repository. If not, I can fork and implement it downstream.
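The guarded-import pattern the proposal relies on looks roughly like this (using_pd/using_np are the names suggested above; the serialize function is only illustrative):

```python
try:
    import pandas as pd
    import numpy as np
except ImportError:
    pd = None
    np = None

using_pd = pd is not None
using_np = np is not None

def serialize(data):
    # the DataFrame branch is only reachable when pandas imported cleanly,
    # so clients without pandas/NumPy never touch pd at runtime
    if using_pd and isinstance(data, pd.DataFrame):
        return "dataframe"
    return "mapping"
```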
Hi, thanks again for this great package, been very helpful.
The sync InfluxDB client has a path parameter in its constructor:
https://influxdb-python.readthedocs.io/en/latest/api-documentation.html#influxdb.InfluxDBClient
path (str) – path of InfluxDB on the server to connect, defaults to ‘’
And the URL is built as follows:
self.__baseurl = "{0}://{1}:{2}{3}".format(
    self._scheme,
    self._host,
    self._port,
    self._path)
However, in aioinflux there is no path parameter, and the URL is built as follows:
https://github.com/gusutabopb/aioinflux/blob/master/aioinflux/client.py#L163
return f'{"https" if self.ssl else "http"}://{self.host}:{self.port}/{{endpoint}}'
So it seems that I cannot connect to our Influx deployment with aioinflux, as, for reasons unknown to me, it is hosted under a path.
Currently, I have created a quick monkey patch as follows:
class MonkeyPatchedInfluxDBClient(InfluxDBClient):
    def __init__(self, *args, path='/', **kwargs):
        super().__init__(*args, **kwargs)
        self._path = path

    @property
    def path(self):
        return self._path

    @property
    def url(self):
        return '{protocol}://{host}:{port}{path}{{endpoint}}'.format(
            protocol='https' if self.ssl else 'http',
            host=self.host,
            port=self.port,
            path=self.path,
        )
Thanks for placing the url in a property, that was useful.
When the query has a GROUP BY clause other than time, for example:
SELECT COUNT(*) FROM "db"."rp"."measurement" WHERE time > now() - 7d GROUP BY "category"
the dataframe output mode returns a dictionary instead of a dataframe. The keys seem to be strings like "measurement_name, category=A", "measurement_name, category=B", ..., and the values of the dictionary are dataframes. Is this expected?
I am wondering if aioinflux can write data points to a specific retention policy?
First of, thank you! Great repo with excellent documentation. I use it with a Starlette project I am working on.
In the project I've implemented a simple way to parse a pandas.DataFrame from a chunked response. It works, and I added it to my fork; I am wondering if you would welcome such a feature.
Here is the MVP implementation in my fork.
I'll clean up the code, remove exceptions, move it to serialization/dataframe.py, and add tests if you're OK with it.
Hello, should the output parameter be set during instantiation or afterwards? E.g., the default example yields:
1)
import asyncio
from aioinflux import InfluxDBClient

point = {
    'time': '2009-11-10T23:00:00Z',
    'measurement': 'cpu_load_short',
    'tags': {'host': 'server01',
             'region': 'us-west'},
    'fields': {'value': 0.64}
}

async def main():
    client = InfluxDBClient(host='localhost', port=8086, username='root',
                            password='root', db='db')
    client.output = 'dataframe'
    await client.create_database(db='db')
    await client.write(point)
    resp = await client.query('SELECT value FROM cpu_load_short')
    print(resp)

asyncio.get_event_loop().run_until_complete(main())
{'results': [{'statement_id': 0, 'series': [{'name': 'cpu_load_short', 'columns': ['time', 'value'], 'values': [[1257894000000000000, 0.64]]}]}]}
2)
However, when you set output like this, it still yields the same dict as above and not a dataframe:
client = InfluxDBClient(host='localhost', port=8086, username='root',
                        password='root', db='db', output='dataframe')
3)
In contrast to that, if you set the parameter on the next line like this:
client = InfluxDBClient(host='localhost', port=8086, username='root',
                        password='root', db='db')
client.output = 'dataframe'
then you successfully get a DataFrame as a result, i.e.:
value
2009-11-10 23:00:00+00:00 0.64
From debugging the code, I saw that the output parameter is being overwritten by async def get_tag_info(self). In the docstring you have specified that this method is necessary for the library to correctly parse dataframes. However, if you don't specify the output type during instantiation, then get_tag_info won't be run. Hence, no correct parsing?
On the other hand, if you specify the parameter during instantiation and then overwrite it before querying with client.output = 'dataframe', then you will get an error like this:
Task exception was never retrieved
future: <Task finished coro=<get_tag_info() done, defined at /Python-Projects/Testing/venv/python3/lib/python3.6/site-packages/aioinflux/client.py:327> exception=KeyError('results',)>
Traceback (most recent call last):
File "/Python-Projects/Testing/venv/python3/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 3063, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 140, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 162, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1492, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1500, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'results'
When trying to write an integer64 field, I was getting an error due to the presence of missing values. The missing values were in the form of pd.NA rather than np.nan, and they were not being excluded during serialization.
I made an attempt to fix this and it worked, though it might not be the most elegant solution. In the _replace function, I added a new replacement tuple to the list of replacements, very similar to the one that handles the NaNs:
def _replace(df):
    obj_cols = {k for k, v in dict(df.dtypes).items() if v is np.dtype('O')}
    other_cols = set(df.columns) - obj_cols
    obj_nans = (f'{k}="nan"' for k in obj_cols)
    other_nans = (f'{k}=nani?' for k in other_cols)
    obj_nas = (f'{k}="<NA>"' for k in obj_cols)
    other_nas = (f'{k}=<NA>i?' for k in other_cols)
    replacements = [
        ('|'.join(chain(obj_nans, other_nans)), ''),
        ('|'.join(chain(obj_nas, other_nas)), ''),
        (',{2,}', ','),
        ('|'.join([', ,', ', ', ' ,']), ' '),
    ]
    return replacements
Hope this ends up helping someone.
Hi,
If you find the time for some maintenance, could you include the LICENSE file in the next PyPI release? This simplifies integration of the package through Yocto/BitBake into embedded Linux applications.
Best regards
When doing exploratory data analysis with Jupyter (using blocking or dataframe mode), the following warning often shows up:
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x112138048>
That happens whenever a new AsyncInfluxDBClient object is created and assigned to the same name that another AsyncInfluxDBClient was previously assigned to. In other words, when a cell with the following code is executed more than once:
client = AsyncInfluxDBClient(host='my.host.io', mode='dataframe', db='mydb')
Reimplementing __del__ (removed in #4) would likely solve this issue.
Related issue: aio-libs/aiohttp#1175
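A sketch of what a reinstated __del__ might look like (StubSession stands in for aiohttp.ClientSession so the snippet is self-contained; a real implementation would need the same care around event loops that led to #4 in the first place):

```python
import asyncio

class StubSession:
    """Stand-in for aiohttp.ClientSession."""
    def __init__(self):
        self.closed = False
    async def close(self):
        self.closed = True

class Client:
    def __init__(self):
        self._session = StubSession()

    def __del__(self):
        # best-effort cleanup to silence the "Unclosed client session" warning
        session = getattr(self, "_session", None)
        if session is None or session.closed:
            return
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:
            return  # no usable loop at interpreter shutdown; give up quietly
        if loop.is_running():
            loop.create_task(session.close())
        else:
            loop.run_until_complete(session.close())
```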
Would you be open to adding support for trio (https://github.com/python-trio/trio) and curio (https://github.com/dabeaz/curio) as alt event loops?
This would mostly require supporting an optional non-aiohttp backend for the http calls.
Actually, I don't see a reason to use this library without Python 3.5 support. Not all LTS Linux distros provide Python 3.6.
Just wondering if this library is still actively maintained, since it hasn't had a commit or merged PR since last summer. No judgment; just wondering, since I like the idea of this client vs. influx-python.