aio-libs / aiopg
aiopg is a library for accessing a PostgreSQL database from asyncio.
Home Page: http://aiopg.readthedocs.io
License: BSD 2-Clause "Simplified" License
Hi,
Do you plan to add support for http://initd.org/psycopg/docs/advanced.html#asynchronous-notifications by any chance?
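For reference, newer aiopg versions expose incoming notifications on Connection.notifies, which is an asyncio.Queue. Below is a minimal sketch of the consumption pattern, with a plain queue standing in for the real one so the example runs without a database; the tuple payloads are hypothetical stand-ins for psycopg2 Notify objects.

```python
import asyncio

async def listen(notifies: asyncio.Queue) -> list:
    # With a real connection you would first run:
    #   await cur.execute("LISTEN channel")
    # and then drain conn.notifies exactly like this.
    received = []
    for _ in range(2):
        msg = await notifies.get()   # a psycopg2 Notify object in real use
        received.append(msg)
    return received

async def main():
    notifies = asyncio.Queue()       # stand-in for conn.notifies
    notifies.put_nowait(("channel", "payload-1"))
    notifies.put_nowait(("channel", "payload-2"))
    return await listen(notifies)

messages = asyncio.run(main())
print(messages)
```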
Hi,
I've got a problem with aiopg when using transactions. I've been using simple code for the transaction:
trans = yield from conn.begin()
try:
    yield from fn(self, conn, *args, **kwargs)
except:
    yield from trans.rollback()
    raise
else:
    yield from trans.commit()
But if a timeout occurs, I get the following traceback:
Traceback (most recent call last):
...
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/sa/connection.py", line 100, in execute
yield from cursor.execute(str(compiled), post_processed_params[0])
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/cursor.py", line 112, in execute
yield from self._conn._poll(waiter, timeout)
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/connection.py", line 162, in _poll
yield from asyncio.wait_for(self._waiter, timeout, loop=self._loop)
File "/usr/lib/python3.4/asyncio/tasks.py", line 381, in wait_for
raise futures.TimeoutError()
concurrent.futures._base.TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/sa/connection.py", line 185, in _rollback_impl
yield from cur.execute('ROLLBACK')
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/cursor.py", line 107, in execute
self._impl.execute(operation, parameters)
psycopg2.ProgrammingError: execute cannot be used while an asynchronous query is underway
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
...
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/sa/transaction.py", line 67, in rollback
yield from self._do_rollback()
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/sa/transaction.py", line 95, in _do_rollback
yield from self._connection._rollback_impl()
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/sa/connection.py", line 187, in _rollback_impl
cur.close()
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/cursor.py", line 49, in close
self._impl.close()
psycopg2.ProgrammingError: close cannot be used while an asynchronous query is underway
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
...
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/sa/engine.py", line 190, in exit
self._engine.release(self._conn)
File "/home/alex/work/python/.envs/stload3/lib/python3.4/site-packages/aiopg/sa/engine.py", line 133, in release
raise InvalidRequestError("Cannot release a connection with "
aiopg.sa.exc.InvalidRequestError: Cannot release a connection with not finished transaction
I think that if a TimeoutError is raised while the query is executing, the query should be cancelled.
Thanks
tbl2 = sa.Table('usert2', metadata,
                sa.Column('id', INTEGER, primary_key=True),
                sa.Column('webid', UUID),
                sa.Column('totyear', SMALLINT),
                sa.Column('brewdata', SMALLINT))

engine = yield from create_engine(user='postgres', database='t1',
                                  host='127.0.0.1', password='pg')
with (yield from engine) as conn:
    query = sa.select([tbl2])
    res = yield from conn.execute(query)
    alarm = res.fetchone()
    print('alarm=', alarm)
The output is:
<generator object fetchone at 0x7f7b27eff5e8>
This is not right according to SQLAlchemy's rules; see the 'Selecting' section at:
http://docs.sqlalchemy.org/en/rel_1_0/core/tutorial.html#executing
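The output above is the classic symptom of calling a coroutine without awaiting it: in aiopg the fetch* methods are coroutines, so a bare res.fetchone() returns the coroutine (generator) object itself rather than a row. A self-contained illustration, using modern async syntax and a stub fetchone in place of the real result object:

```python
import asyncio

async def fetchone():
    return (1, "abc")            # stand-in for a real row

async def main():
    wrong = fetchone()           # forgot to await -> coroutine object
    right = await fetchone()     # awaited -> the actual row
    wrong.close()                # silence the "never awaited" warning
    return wrong, right

wrong, right = asyncio.run(main())
print(type(wrong).__name__, right)
```

With `yield from`-era generators the forgotten call prints `<generator object ...>` instead, which is exactly the output reported above.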
The Cursor documentation says that all methods are coroutines. I get that you don't want to document that on each method explicitly, but two things tripped me up while using it:
The Cursor.close method is not a coroutine. The docs should probably mention that you don't need to wait for it.
There is no __iter__ for async cursors. As far as I know you can't have one, because there's no way to implement it correctly (is that right?). The docs say that
Cursor objects are iterable [...]
That isn't true here, so it should probably be adjusted.
My proposal as a fix would be to introduce something like fetchiter, which yields each row separately (if that is possible). Otherwise the only ways I found are to use fetchall, or (if you have many rows) to handle it yourself using fetchmany.
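A fetchiter-style helper along the proposed lines can already be built on top of fetchmany. A sketch, with a hypothetical StubCursor standing in for an aiopg cursor so the example runs without a database:

```python
import asyncio

class StubCursor:
    """Stand-in for an aiopg cursor: serves rows in fetchmany batches."""
    def __init__(self, rows):
        self._rows = list(rows)

    async def fetchmany(self, size):
        chunk, self._rows = self._rows[:size], self._rows[size:]
        return chunk

async def fetchiter(cur, size=2):
    # Drain the cursor in fetchmany-sized batches; a real helper would
    # yield each row instead of collecting them into a list.
    rows = []
    while True:
        batch = await cur.fetchmany(size)
        if not batch:
            break
        rows.extend(batch)
    return rows

rows = asyncio.run(fetchiter(StubCursor(range(5))))
print(rows)
```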
Could you please point me to the docs where the difference between two-phase and nested transactions is explained?
File "/home/tag/.virtualenvs/moxie/lib/python3.4/site-packages/aiopg/sa/connection.py", line 53, in scalar
return (yield from res.scalar())
File "/home/tag/.virtualenvs/moxie/lib/python3.4/site-packages/aiopg/sa/result.py", line 417, in scalar
row = yield from self.first()
File "/home/tag/.virtualenvs/moxie/lib/python3.4/site-packages/aiopg/sa/result.py", line 405, in first
self._non_result()
File "/home/tag/.virtualenvs/moxie/lib/python3.4/site-packages/aiopg/sa/result.py", line 334, in _non_result
"This result object does not return rows. "
aiopg.sa.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.
Code is similar to:
engine = yield from aiopg.sa.create_engine(DATABASE_URL)
with (yield from engine) as conn:
    reschedule = (dt.datetime.utcnow() + job.interval)
    yield from conn.scalar(
        update(
            Job.__table__
        ).where(
            Job.name == job.name
        ).values(
            active=True,
            scheduled=reschedule,
        ))
Database has:
moxie=# SELECT * FROM job;
id | name | description | command | image | scheduled | interval | active | maintainer_id
----+----------------+-----------------------+----------------+--------------+----------------------------+----------+--------+---------------
3 | test-job-three | Test Job number Three | update holyoke | paultag/pupa | 2014-06-28 14:37:09.501019 | 1 day | f | 1
4 | test-job-four | Test Job number Four | update holyoke | paultag/pupa | 2014-06-28 14:37:09.50455 | 1 day | f | 2
5 | test-job-five | Test Job number Five | update holyoke | paultag/pupa | 2014-06-28 14:37:09.508053 | 1 day | f | 2
6 | test-job-six | Test Job number Six | update holyoke | paultag/pupa | 2014-06-28 14:37:09.512476 | 1 day | f | 2
1 | test-job-one | Test Job number One | update holyoke | paultag/pupa | 2014-06-29 14:37:15.559101 | 1 day | t | 1
2 | test-job-two | Test Job number Two | update holyoke | paultag/pupa | 2014-06-29 14:37:15.56207 | 1 day | t | 1
(6 rows)
Failed on entry test-job-three.
Add a chapter to the documentation and make an example of using SA for DDL manipulations.
Hi,
I am using aiopg with a Pool object that is shared among coroutines. Every once in a while I randomly see the following error:
psycopg2.ProgrammingError: close cannot be used while an asynchronous query is underway
I can reproduce the issue with the following:
pool = yield from aiopg.create_pool(dsn)
with (yield from pool.cursor(timeout=1.0)) as cur:
    yield from cur.execute("SELECT pg_sleep(2)")
I believe it would be more appropriate to raise TimeoutError rather than ProgrammingError. What do you think?
Thanks,
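For what it's worth, asyncio.wait_for already cancels the awaited task when the timeout fires, so propagating that cancellation into the pending query would match asyncio's own behavior. A runnable miniature, with asyncio.sleep standing in for SELECT pg_sleep(2):

```python
import asyncio

cancelled = False

async def slow_query():
    global cancelled
    try:
        await asyncio.sleep(10)        # stands in for SELECT pg_sleep(2)
    except asyncio.CancelledError:
        cancelled = True               # the "query" gets cancelled here
        raise

async def main():
    try:
        await asyncio.wait_for(slow_query(), timeout=0.05)
    except asyncio.TimeoutError:
        return "TimeoutError"

result = asyncio.run(main())
print(result, cancelled)
```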
Hey aiopg people! Nice work so far, loving this!
It'd be awesome to find a way to get session
objects working with asyncio!
sync psycopg vs aiopg vs aiopg.sa vs aiopg.sa with query caching
When inserting, I get the following internal error from aiopg:
File "/usr/local/lib/python3.4/dist-packages/aiopg/sa/connection.py", line 115, in scalar
res = yield from self.execute(query, *multiparams, **params)
File "/usr/local/lib/python3.4/dist-packages/aiopg/sa/connection.py", line 102, in execute
yield from cursor.execute(str(compiled), post_processed_params[0])
File "/usr/local/lib/python3.4/dist-packages/aiopg/cursor.py", line 106, in execute
self._impl.execute(operation, parameters)
UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 4198: ordinal not in range(128)
The invocation is:
@asyncio.coroutine
def create(self, **kwargs):
    with (yield from self.db.engine) as conn:
        runid = yield from conn.scalar(insert(Run.__table__).values(**kwargs))
It's likely breaking on the log of a Run:
class Run(Base):
    __tablename__ = 'run'

    id = Column(Integer, primary_key=True)
    failed = Column(Boolean)
    job_id = Column(Integer, ForeignKey('job.id'))
    job = relationship("Job", foreign_keys=[job_id], backref='runs')
    log = Column(Text)
    start_time = Column(DateTime)
    end_time = Column(DateTime)
I'm guessing it's the log field.
E.g. aiopg.OperationalError
Hi, I replaced "yield from" with "await" in my code and received a traceback:
"TypeError: object Engine can't be used in 'await' expression"
async def db_psql_middleware(app, handler):
    async def middleware(request):
        db = app.get('db_psql')
        if not db:
            app['db_psql'] = db = await create_engine(app['psql_dsn'], minsize=1, maxsize=5)
        request.app['db_psql'] = db
        return (await handler(request))
    return middleware

async def psql_select(request):
    with (await request.app['db_psql']) as conn:
        result = await conn.execute(models.select())
Traceback
[2015-09-17 14:50:29 +0300] [26045] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/vvv/src/backend-tools/python/asyncio/venv35/lib/python3.5/site-packages/aiohttp/server.py", line 272, in start
yield from self.handle_request(message, payload)
File "/Users/vvv/src/backend-tools/python/asyncio/venv35/lib/python3.5/site-packages/aiohttp/web.py", line 85, in handle_request
resp = yield from handler(request)
File "/Users/vvv/src/backend-tools/python/asyncio/app.py", line 39, in middleware
return (await handler(request))
File "/Users/vvv/src/backend-tools/python/asyncio/app.py", line 46, in psql_select
with (await request.app['db_psql']) as conn:
TypeError: object Engine can't be used in 'await' expression
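The error itself is easy to reproduce without aiopg: `await x` requires x to be a coroutine or to define __await__, while old-style awaitables that only implement __iter__ (for use with `yield from`) trigger exactly this TypeError. A minimal reproduction with a hypothetical LegacyEngine class:

```python
import asyncio

class LegacyEngine:
    """Old-style awaitable: works with `yield from`, not with `await`."""
    def __iter__(self):
        return iter(())

async def main():
    try:
        await LegacyEngine()     # no __await__, not a coroutine
    except TypeError as exc:
        return str(exc)

message = asyncio.run(main())
print(message)
```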
What is the most generic, proper way of handling database connections in real web apps? The naive way is just to create a connection pool and pick/release a cursor per query (or per request?).
But the naive approach has some pitfalls:
So what is the way to connect/re-connect that suits basic needs and avoids the naive approach's problems?
It would be great to have some ideas and examples on that in the docs.
Thank you!
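One common answer, sketched below under the assumption that a single shared pool per application is acceptable: create the pool lazily once and guard its creation with a lock so concurrent first requests don't each build their own. The fake_create_pool coroutine is a hypothetical stand-in for aiopg.create_pool.

```python
import asyncio

class PoolHolder:
    """Create the pool once per application and hand the same one to
    every caller; the lock keeps concurrent first calls from racing."""

    def __init__(self, factory):
        self._factory = factory          # e.g. lambda: aiopg.create_pool(dsn)
        self._pool = None
        self._lock = asyncio.Lock()      # constructed inside the running loop

    async def get(self):
        if self._pool is None:
            async with self._lock:
                if self._pool is None:   # re-check under the lock
                    self._pool = await self._factory()
        return self._pool

created = 0

async def fake_create_pool():            # stand-in for aiopg.create_pool
    global created
    await asyncio.sleep(0)               # force a context switch
    created += 1
    return object()

async def main():
    holder = PoolHolder(fake_create_pool)
    return await asyncio.gather(*(holder.get() for _ in range(10)))

pools = asyncio.run(main())
print(created, len(set(pools)))
```

Ten concurrent callers still produce exactly one pool, and everyone gets the same object.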
I'm getting an error when using sqlalchemy and nothing is getting inserted.
Fatal error on aiopg connection: bad state in _ready callback
connection: <aiopg.connection.Connection object at 0x7f0c3ab44710>
Not sure what's causing it; maybe this additional information will help. Is it possible to use a SQLAlchemy model defined using declarative to get the table object used by aiopg?
from aiopg.sa import create_engine
engine = create_engine(database='db')
Base.metadata.bind = engine
table = Base.metadata.tables['table']
And is it possible to provide only the 'database' argument, like above, if in the non-aiopg version I define it as follows, with just a db name, because it uses local authentication?
engine = engine_from_config({'sqlalchemy.url': 'postgresql+psycopg2:///db'})
I'm running the insert statements like this (not sure if I need the commented parts).
with (yield from engine) as conn:
    # tr = yield from conn.begin()
    yield from conn.execute(table.insert(), *multivalues)
    # yield from tr.commit()
Currently the client 'hangs'.
Probably the best way to handle the issue is adding timeouts.
Extracted from a private email (translated from Russian):
In pool.py, in _fill_free_pool, after obtaining a new connection we have to re-check that the pool is still not full, because in the meantime a concurrent task could have filled it already. Otherwise the pool ends up with more connections than maxsize, which later leads to an error during release (it fails on the assert that checks the pool is not full).
Here is how I patched it locally:
@asyncio.coroutine
def _fill_free_pool(self):
    while self.freesize < self.minsize and self.size < self.maxsize:
        conn = yield from connect(
            self._dsn, loop=self._loop,
            **self._conn_kwargs)
        if self.freesize < self.minsize and self.size < self.maxsize:
            yield from self._free.put(conn)
        else:
            conn._close()
This example, https://github.com/aio-libs/aiopg#example-of-sqlalchemy-optional-integration, doesn't work; it gives the error:
psycopg2.ProgrammingError: relation "tbl" does not exist
LINE 1: INSERT INTO tbl (val) VALUES ('abc') RETURNING tbl.id
This happens because the table tbl isn't created by default. As I understand it, we should add something like yield from conn.execute(CreateTable(tbl)) before yield from conn.execute(tbl.insert().values(val='abc')). Right?
@popravich knows better what the bug is.
I realize (now) that the docs say all the methods are coroutines (except close). However, the examples for fetchone, fetchmany, fetchiter, etc. do not use 'yield from', and I wasted a bit of time wondering why I was getting an empty list from my select statement before I noticed the (correct) example at the top of the section. Also, the example of using the cursor as an iterator does not work for me (it prints no data). I'm using 0.6.1.
Now the tests just hang.
We should describe transactions via the cursor object.
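A sketch of what such a chapter could show: BEGIN before the work, COMMIT on success, ROLLBACK on any error, all issued through cursor.execute(). The StubCursor below is a hypothetical stand-in that records the SQL it receives, so the example runs without a database:

```python
import asyncio

class StubCursor:
    def __init__(self):
        self.executed = []

    async def execute(self, sql):
        self.executed.append(sql)
        if sql == "BOOM":                # simulate a failing statement
            raise RuntimeError("query failed")

async def in_transaction(cur, statements):
    await cur.execute("BEGIN")
    try:
        for sql in statements:
            await cur.execute(sql)
    except Exception:
        await cur.execute("ROLLBACK")    # undo on any error
        raise
    else:
        await cur.execute("COMMIT")      # persist on success

async def main():
    ok = StubCursor()
    await in_transaction(ok, ["INSERT 1"])
    bad = StubCursor()
    try:
        await in_transaction(bad, ["BOOM"])
    except RuntimeError:
        pass
    return ok.executed, bad.executed

ok_sql, bad_sql = asyncio.run(main())
print(ok_sql, bad_sql)
```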
This package is almost the same in aiopg and aiomysql. So I think we should extract it as a separate library, shouldn't we?
I was not able to iterate over a select query result when running from the view. Here is the flow:
I run a repository function from the view:
# views.py
users = yield from find_all()
find_all() is the function in the repository module:
# app/models/user.py
@asyncio.coroutine
def find_all():
    yield from run(User.table.select())
Finally, here is how run looks:
# app/models/__init__.py
@asyncio.coroutine
def run(query):
    engine = yield from create_engine(user='test',
                                      database='test',
                                      host='127.0.0.1',
                                      password='test')
    with (yield from engine) as conn:
        yield from conn.execute(query)
So the problem is that users is None, but if I iterate over the result in the same file where the query is executed (app/models/__init__.py), then it works fine:
res = yield from conn.execute(query)
for row in res:
    print(row)
Could you please add more examples of how to get a query result in such a case?
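The likely culprit in the code above is that neither find_all() nor run() returns its result: a coroutine that falls off the end returns None to its caller. A minimal illustration of the difference, in modern async syntax and with no database needed:

```python
import asyncio

async def run_without_return():
    rows = ["row-1", "row-2"]    # stands in for conn.execute(query)
    # no return statement -> the caller gets None

async def run_with_return():
    rows = ["row-1", "row-2"]
    return rows                  # the caller gets the rows

async def main():
    return await run_without_return(), await run_with_return()

missing, returned = asyncio.run(main())
print(missing, returned)
```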
or something like that
I am facing an issue where my db queries start failing after some time. Here is the code I use to create a pool:
class PostgresStore:
    _pool = None
    _connection_params = {}

    @classmethod
    def connect(cls, database: str, user: str, password: str, host: str, port: int):
        """
        Sets connection parameters
        """
        cls._connection_params['database'] = database
        cls._connection_params['user'] = user
        cls._connection_params['password'] = password
        cls._connection_params['host'] = host
        cls._connection_params['port'] = port

    @classmethod
    def use_pool(cls, pool: Pool):
        """
        Sets an existing connection pool instead of using connect() to make one
        """
        cls._pool = pool

    @classmethod
    @coroutine
    def get_pool(cls) -> Pool:
        """
        Yields:
            existing db connection pool
        """
        if len(cls._connection_params) < 5:
            raise ConnectionError('Please call SQLStore.connect before calling this method')
        if not cls._pool:
            cls._pool = yield from create_pool(**cls._connection_params)
        return cls._pool
I use aiohttp to create a web server, and once the server has been up and running for a few hours, db queries start failing. All other APIs work perfectly fine. Here are the logs:
2015-07-14 16:19:18,531 ERROR [base_events:698] Fatal read error on socket transport protocol: transport:
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/selector_events.py", line 459, in _read_ready
    data = self._sock.recv(self.max_size)
TimeoutError: [Errno 110] Connection timed out
2015-07-14 17:26:42,070 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback connection:
2015-07-14 17:26:58,017 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback connection:
2015-07-14 17:27:02,606 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback connection:
2015-07-14 17:27:03,226 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback connection:
2015-07-14 17:27:14,691 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback connection:
2015-07-14 18:47:51,427 ERROR [base_events:698] Fatal read error on socket transport protocol: transport:
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/selector_events.py", line 459, in _read_ready
    data = self._sock.recv(self.max_size)
TimeoutError: [Errno 110] Connection timed out
2015-07-14 18:50:02,499 ERROR [base_events:698] Fatal read error on socket transport protocol: transport:
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/selector_events.py", line 459, in _read_ready
    data = self._sock.recv(self.max_size)
TimeoutError: [Errno 110] Connection timed out
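One common mitigation when connections die while idle is to validate each connection with a cheap probe before use (in real code, cur.execute("SELECT 1")) and reconnect when the probe fails. Everything below is hypothetical stand-in code sketching that pattern, not aiopg API:

```python
import asyncio

class StubConnection:
    """Stand-in for a pooled connection that may have died while idle."""
    def __init__(self, alive=True):
        self.alive = alive

    async def ping(self):            # real code: cur.execute("SELECT 1")
        if not self.alive:
            raise OSError("connection timed out")

async def checked_acquire(pool, reconnect):
    conn = pool.pop()
    try:
        await conn.ping()            # probe before handing it out
    except OSError:
        conn = await reconnect()     # drop the dead one, dial a fresh one
    return conn

async def main():
    fresh = StubConnection(alive=True)

    async def reconnect():
        return fresh

    pool = [StubConnection(alive=False)]   # a connection that died overnight
    conn = await checked_acquire(pool, reconnect)
    return conn is fresh

replaced = asyncio.run(main())
print(replaced)
```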
The following code should work according to http://aiopg.readthedocs.org/en/stable/sa.html#aiopg.sa.SAConnection.execute:
import asyncio
from aiopg.sa import create_engine
import sqlalchemy as sa

metadata = sa.MetaData()

table = sa.Table(
    'table',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('value', sa.String(255)),
)

@asyncio.coroutine
def go():
    engine = yield from create_engine('postgresql://localhost/aiopg_issue')
    with (yield from engine) as conn:
        yield from conn.execute(
            table.insert(),
            {"id": 1, "value": "v1"},
            {"id": 2, "value": "v2"},
        )
        res = yield from conn.execute(table.select())
        for row in res:
            print(row.id, row.value)

loop = asyncio.get_event_loop()
loop.run_until_complete(go())
However, I get an aiopg.sa.exc.ArgumentError: "Don't mix sqlalchemy clause and execution with parameters" error because of the dictionaries passed to execute. Are the docs out of date?
on releasing back to pool
I believe aiopg doesn't have server-side cursor support; are there any plans to change this?
It looks like the aiopg.sa module relies on reimplementing some of the lower-level parts of SQLAlchemy and providing alternatives with a similar API. I wonder if this could instead be done using SQLAlchemy's support for different execution strategies?
There is an example of doing this for Twisted at https://github.com/alex/alchimia, which defers things to a thread pool, largely because it wants to work with any database, not just PostgreSQL. However, I wonder if doing the same here would be possible and might deduplicate some of the code.
Hi!
I would like to have SQL statement logging go straight to stdout when I run my applications in debug mode. Normally (i.e. with a regular sqlalchemy-backed application) I set up the standard Python logger with the following structure:
logging:
  version: 1
  disable_existing_loggers: True
  formatters:
    generic:
      format: "%(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s"
  handlers:
    console:
      level: DEBUG
      class: logging.StreamHandler
      stream: ext://sys.stdout
      formatter: generic
  loggers:
    root:
      level: DEBUG
      handlers: [console]
    asyncio:
      level: DEBUG
      handlers: [console]
    sqlalchemy:
      # "level = INFO" logs SQL queries.
      # "level = DEBUG" logs SQL queries and results.
      # "level = WARN" logs neither. (Recommended for production systems.)
      level: DEBUG
      handlers: [console]
    sqlalchemy.engine:
      # "level = INFO" logs SQL queries.
      # "level = DEBUG" logs SQL queries and results.
      # "level = WARN" logs neither. (Recommended for production systems.)
      level: INFO
      handlers: [console]
      propagate: False
But apparently this is not enough for aiopg. Just adding an aiopg logger to that definition makes no difference to the final output. So the question is how to set it up properly. Is there any working example of it?
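As far as I can tell, aiopg does not echo SQL statements the way sqlalchemy.engine does, which would explain why the logger configuration has no effect. One workaround is a thin application-level wrapper that logs each statement before delegating. A sketch, with a stub execute coroutine standing in for the real connection's execute:

```python
import asyncio
import logging

logging.basicConfig(level=logging.DEBUG, format="%(name)s %(message)s")
log = logging.getLogger("app.sql")

async def execute(sql):                  # stand-in for conn.execute
    return "result for " + sql

async def logged_execute(sql):
    log.debug("SQL: %s", sql)            # statement shows up on stdout
    return await execute(sql)

result = asyncio.run(logged_execute("SELECT 1"))
print(result)
```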
I'm playing with the development version and noticed that the Pool and Connection cursor() methods have different result types, which makes the single-connection and pool interfaces non-interchangeable when making queries.
Is it a bug or a feature?
I've found some unexpected behavior with SQLAlchemy. I'm using the following versions:
This is the table definition for the example:
import asyncio
from aiopg.sa import create_engine
from sqlalchemy import (
    MetaData,
    Column,
    Integer,
    Table,
    String,
)

metadata = MetaData()

users = Table('users', metadata,
              Column('id_user', Integer, primary_key=True, nullable=False),
              Column('name', String(20), unique=True),
              Column('age', Integer, nullable=False, default=0),
)
Now if I try to execute a simple insert into the table, populating only id_user and name, the column age should be auto-generated with the default value 0, right? Let's see...
@asyncio.coroutine
def go():
    engine = yield from create_engine('postgresql://USER@localhost/DB')
    data = {'id_user': 1, 'name': 'Jimmy'}
    stmt = users.insert(values=data, inline=False)
    with (yield from engine) as conn:
        result = yield from conn.execute(stmt)

loop = asyncio.get_event_loop()
loop.run_until_complete(go())
This is the resulting statement with the corresponding error:
INSERT INTO users (id_user, name, age) VALUES (1, 'Jimmy', null);
psycopg2.IntegrityError: null value in column "age" violates not-null constraint
I didn't provide the age column, so where is that age = null value coming from? I was expecting something like this:
INSERT INTO users (id_user, name) VALUES (1, 'Jimmy');
Or, if the default flag actually works, it should be:
INSERT INTO users (id_user, name, Age) VALUES (1, 'Jimmy', 0);
Could you put some light on this?
Running the tests on master with psycopg2==2.6.1 and postgres 9.4.3 I get a test failure in test_binary_protocol_error. Instead of the expected OperationalError, I'm getting a DatabaseError. One fix would be to look for both, the other would be to check the version (assuming that's the issue).
Hello there. We started to use your modern framework aioREST in our projects with the aiopg driver on top of SQLAlchemy, and I ran into a problem with a JSON field.
https://github.com/aio-libs/aiopg/blob/master/aiopg/sa/result.py#L49
We have code like this:
@classmethod
@coroutine
def get_by_id(cls, id):
    '''Return user or None'''
    with (yield from db.engine) as conn:
        cursor = yield from conn.execute(
            users.select().where(users.c.id == id).limit(1))
        user = yield from cursor.first()
        return cls(user) if user else None
As far as I understand, psycopg2 already deserializes JSON into a native datatype. I started investigating with pdb, and here's what I got:
> /projecti/auth/models.py(81)__init__()
-> print(sq_row['contacts'])
(Pdb) s
--Call--
> /project/var/env/lib/python3.4/site-packages/aiopg/sa/result.py(27)__getitem__()
-> def __getitem__(self, key):
(Pdb) s
> /project/var/env/lib/python3.4/site-packages/aiopg/sa/result.py(28)__getitem__()
-> try:
(Pdb) s
> /project/var/env/lib/python3.4/site-packages/aiopg/sa/result.py(29)__getitem__()
-> processor, obj, index = self._keymap[key]
(Pdb) s
> /project/var/env/lib/python3.4/site-packages/aiopg/sa/result.py(45)__getitem__()
-> if index is None:
(Pdb) print(processor)
<function JSON.result_processor.<locals>.process at 0x10ecd4488>
(Pdb) print(processor.__name__)
process
(Pdb) s
> /project/var/env/lib/python3.4/site-packages/aiopg/sa/result.py(49)__getitem__()
-> if processor is not None:
(Pdb) s
> /project/var/env/lib/python3.4/site-packages/aiopg/sa/result.py(50)__getitem__()
-> return processor(self._row[index])
(Pdb) print(self._row[index])
[{'type': 'phone', 'value': ''}]
(Pdb) s
--Call--
> /project/var/env/lib/python3.4/site-packages/sqlalchemy/dialects/postgresql/json.py(202)process()
-> def process(value):
(Pdb) s
> /project/var/env/lib/python3.4/site-packages/sqlalchemy/dialects/postgresql/json.py(203)process()
-> return json_deserializer(value)
but the value is already a native Python data structure:
(Pdb) print(type(value))
<class 'list'>
so finally:
TypeError: the JSON object must be str, not 'list'
for returning back to pool on gc
on cursor/connection closing and on explicit cancellation of waiter
Hello.
How about adding this?
dsn = "postgresql+psycopg2://aiopg:passwd@localhost:5432/aiopg"

def parse_uri(uri):
    from urllib.parse import urlparse
    uri = urlparse(uri)
    return dict(
        user=uri.username,
        password=uri.password,
        database=uri.path[1:],
        port=str(uri.port),
        host=uri.hostname,
    )

engine = yield from create_engine(**parse_uri(dsn))
If we use SQLAlchemy, we need the same URI to access the db.
Hi.
How do I work with Alchemy objects?
I mean, right now it's possible to execute SQL queries via conn.execute() and get scalars back from it.
Is it possible to get objects as well?
Currently there's no database backend using aiopg for Django.
Two backends should be implemented: one that provides the same behavior as the original psycopg2 adapter, and one that also provides connection pooling. The first backend should be a drop-in replacement.
How do we advance this issue further?
I assume that yield from should be added here and here but there are probably other places where we should be async.
Care to provide some feedback? I'll create a pull request once I have further instructions on how to proceed.