
ydf0509 / distributed_framework

323 stars · 14 watchers · 89 forks · 70.64 MB

pip install function_scheduling_distributed_framework — a general-purpose distributed function scheduling framework for Python, built for very high concurrency. The project has been renamed funboost; updates here have stopped and continue only in the funboost framework.

License: Apache License 2.0

Python 94.20% Batchfile 0.03% CSS 2.59% JavaScript 0.16% HTML 3.02%
distributed-framework function-scheduling rabbitmq rocketmq kafka nsq redis disk sqlachemy consume-confirm

distributed_framework's Introduction

nb_time pip install nb_time
NbTime, an object-oriented datetime wrapper class that simplifies time conversion and timezone handling and supports unlimited method chaining; far more convenient than using datetime or the third-party arrow package directly,
and much better than the procedural style of piling hundreds of isolated conversion functions into a time_utils.py.
db_libs pip install db_libs
Thin wrappers for various databases: they mostly just create the connection and add few new methods, delegating to the native clients.
async_pool_executor pip install async_pool_executor
Its API mirrors concurrent.futures, making asyncio concurrent programming roughly ten times simpler.
flexible_thread_pool pip install flexible_thread_pool
flexible_thread_pool runs both sync functions and `async def` coroutine functions concurrently. A thread pool that can grow and shrink automatically, implemented more simply than threadpool_executor_shrink_able, with throughput 200% above concurrent.futures.ThreadPoolExecutor.
sync2asyncio pip install sync2asyncio
A quick, general-purpose way to adapt synchronous Python code to asyncio.
object_pool_proj pip install universal_object_pool
A universal object pool that can pool instances of any custom class, useful for quickly building any kind of pool (thread pools excepted).
nb_http_client pip install nb_http_client
Powered by object_pool_proj.
nb_http_client claims to be the highest-performance HTTP client ever written in Python, many times faster than any request library.
celery_demo — demonstrates using celery from a complex, deeply nested directory layout that follows none of the usual conventions.
funboost_support_celery_demo — the same unconventional deep-path layout, but using funboost to configure and drive celery automatically, with drastically simplified code.
nb_filelock pip install nb_filelock
A cross-process, cross-interpreter lock for a single machine, using a file on disk as the medium.
tps_threadpool_executor pip install tps_threadpool_executor
A rate-controlled thread pool: you specify exactly how many function calls run per second, rather than how many threads run concurrently in the pool.
auto_run_on_remote pip install auto_run_on_remote
Click run on a Python script locally and have it automatically execute on a remote Linux machine;
far more convenient than PyCharm Professional's remote-interpreter support.
auto_restart pip install auto_restart
A cold-deploy auto-restart tool: when it detects changes in the git repository, it restarts the service automatically, with no manual redeploy.
base_decorator pip install base_decorator
A generic decorator base class that makes writing decorators simpler.
decorator_libs pip install decorator_libs
A collection of commonly used, general-purpose decorators.
fastapi_use_funboost — demo of FastAPI with the distributed function scheduling framework funboost as the background consumer.
uwsgi_flask_funboost — demo of uwsgi-deployed Flask with funboost as the background consumer.
django_use_funboost — demo of Django with funboost as the background consumer.
funboost_django_orm_demo — demo of Django + funboost where the consumed function uses the ORM.
funboost_vs_celery_benchmark — a rigorous controlled-variable benchmark comparing the performance of funboost and celery.
pysnooper_click_able pip install pysnooper_click_able — a remarkable debugging decorator (implementation difficulty: five stars). Debug without breakpoints or scattered print calls: it shows the exact, clickable execution trace of your code, and can count precisely how many lines Python interprets behind a single function call, so you know exactly how much CPU a function costs.
pythonpathdemo — a dedicated project on why mastering PYTHONPATH matters: the difference between per-session and permanent environment variables, the benefits of PYTHONPATH, and its clever uses. Once learned, reusing shared code across dozens of projects becomes effortless.
kuai_log pip install kuai_log — the fastest Python logger, simpler than nb_log, with no third-party dependencies and no config file required.

distributed_framework's People

Contributors

ydf0509


distributed_framework's Issues

Calling after batch configuration

For example: consumer_add = get_consumer('queue_test569', consuming_function=add, broker_kind=9)
and I publish with consumer_add.publisher_of_same_queue.publish(dict(x=1,y=2))

As you previously suggested, define a list whose elements are dicts, each dict giving the configuration of one consumer task:

from function_scheduling_distributed_framework import get_consumer

consumer_kwargs_list = [
    {'queue_name': 'queue1', 'consuming_function': f1, 'qps': 10},
    {'queue_name': 'queue2', 'consuming_function': f2, 'qps': 5},
]

for consumer_kwargs in consumer_kwargs_list:
    get_consumer(**consumer_kwargs).start_consuming_message()

With this style, how do I publish messages when calling it?
Could you please write a complete example? Thanks.
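In reply, here is a minimal runnable sketch of the list-driven pattern, assuming the `get_consumer` and `publisher_of_same_queue` APIs quoted above; it is not verified against any particular framework version, and whether `start_consuming_message` blocks may vary:

```python
# Sketch based on the APIs quoted in this issue (get_consumer and
# publisher_of_same_queue); not verified against any framework version.

def f1(x, y):
    print(f'f1: {x + y}')

def f2(x, y):
    print(f'f2: {x * y}')

consumer_kwargs_list = [
    {'queue_name': 'queue1', 'consuming_function': f1, 'qps': 10},
    {'queue_name': 'queue2', 'consuming_function': f2, 'qps': 5},
]

def main():
    # Imported lazily so the configuration above can be reused by code
    # that only publishes and never consumes.
    from function_scheduling_distributed_framework import get_consumer

    consumers = {kw['queue_name']: get_consumer(**kw) for kw in consumer_kwargs_list}

    # Publishing: each consumer exposes a publisher bound to its own queue.
    consumers['queue1'].publisher_of_same_queue.publish(dict(x=1, y=2))
    consumers['queue2'].publisher_of_same_queue.publish(dict(x=3, y=4))

    # Consuming: start every queue. Whether start_consuming_message blocks
    # may depend on the framework version.
    for c in consumers.values():
        c.start_consuming_message()

# main()  # uncomment to run against a configured broker
```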

Impressions

I tried the framework and studied the source; the author's care shows. Still, result retrieval depends solely on redis, which could be improved. The framework's advantages can also be fully covered by other open-source frameworks, and concurrent operation raises quite a few task-handling issues (not enumerated here). The framework feels like n packages blended together: the layering is somewhat muddled, the essentials are not highlighted, and unimportant features are too numerous. A good framework only needs to define its interfaces, with implementations supplied as defaults; tests should be kept separate.
In summary, the author is clearly skilled! I hope naming and semantics become more consistent in style and the main features more prominent. Also, the framework is not truly distributed, only single-node distributed: distributed should mean that a node can be deployed as a cluster or standalone. If it is only single-node distributed, the value is limited, since any message middleware can achieve that. Fortunately the framework does support multiple consumers, though the sheer number of features is dizzying. (Spring Cloud is still easier to use.)

Can the task queue size be limited?

Suppose my crawler, inside the consumer, keeps pushing new fan-out tasks into the redis task queue based on the returned data. If the consumers cannot keep up with the producers, won't the task queue grow without bound and exhaust memory?
I would like a way to quickly read the current queue length inside the consumer, and to block the consumer or drop tasks once the length exceeds a threshold.
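The requested backpressure can be implemented by polling the queue length before each push. A hedged, framework-agnostic sketch follows; in practice `get_queue_len` could be `lambda: redis_conn.llen('queue_name')` with redis-py:

```python
import time

def wait_for_capacity(get_queue_len, max_len, poll_seconds=1.0, drop=False):
    """Gate a push on queue capacity.

    get_queue_len: callable returning the current queue length, e.g.
                   lambda: redis_conn.llen('queue_test') with redis-py.
    Returns True once it is OK to push; if drop=True, returns False
    immediately when the queue is full instead of blocking.
    """
    while get_queue_len() >= max_len:
        if drop:
            return False          # caller discards the task
        time.sleep(poll_seconds)  # block the producer until consumers catch up
    return True
```

The polling interval trades responsiveness against load on the broker; a real deployment might also cache the length for a few seconds rather than issuing one LLEN per push.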

A few suggestions

  1. The name is too long: hard to type and hard to remember.
     You could call it Spinach, Lettuce, Carrot, anything short.

function_scheduling_distributed_framework is simply too long to keep writing out.

  2. I skimmed the code and the style is not very standardized. You are clearly capable, but code reflects its author, and I can imagine you do not sweat the small details.
     https://www.python.org/dev/peps/pep-0008/

  3. The documentation is neither plain nor concise; perhaps you are not very familiar with markdown.

No offense intended! I maintain open-source projects myself; once something is open-sourced, polish it like a work of your own!

Production deployment on Windows

I have a Windows production server that needs asynchronous task scheduling, and I would like to know how to deploy this in production.
The code structure is as follows:
##### sync.py
from function_scheduling_distributed_framework import get_consumer, patch_frame_config, ConsumersManager, show_frame_config

patch_frame_config(REDIS_HOST='127.0.0.1', REDIS_PASSWORD='', REDIS_PORT=6379, REDIS_DB=0)

def send_to_submit(strdata, returntime):
    print(strdata, returntime)

consumer_submit = get_consumer('send_to_submit', consuming_function=send_to_submit, broker_kind=2)

if __name__ == '__main__':
    consumer_submit.start_consuming_message()

Is it enough to simply run python sync.py from the console?

A logging-output compatibility problem

Hello, I am running a Django + tensorflow + fsdf stack, and I found that with all three present the default logging raises an error.

python3.6/logging/__init__.py", line 996, in emit    
      stream.write(msg) 
ValueError: I/O operation on closed file.

Testing shows that Django + fsdf, fsdf + tensorflow, or Django + tensorflow alone raise no exception.
Could it be that nb_log conflicts with some other logging setup when they coexist?

Graceful worker shutdown

When I update code and need to restart a worker, I would like to send it a TERM signal and have it exit only after the tasks currently executing have finished. Is that supported today?
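Independent of framework support, the requested behavior can be sketched with a SIGTERM handler that stops fetching new messages and lets in-flight work drain; `fetch_message` and `run_task` below are hypothetical stand-ins for the worker's internals:

```python
import signal
import threading

stop_event = threading.Event()

def handle_term(signum, frame):
    # On TERM: stop fetching new messages; in-flight tasks run to completion.
    stop_event.set()

signal.signal(signal.SIGTERM, handle_term)

def consume_loop(fetch_message, run_task):
    """fetch_message and run_task are hypothetical stand-ins for the
    worker's broker-fetch and task-execution internals."""
    while not stop_event.is_set():
        msg = fetch_message()
        if msg is not None:
            run_task(msg)
    # a real worker would ack/drain outstanding work here before exiting
```

Note that `signal.signal` must be called from the main thread; thread-pool workers inherit the shutdown decision through the shared event.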

Runtime error: TypeError: attrs() got an unexpected keyword argument 'eq'

The program being run:

@task_deco('ocr_content_and_audit', broker_kind=BrokerEnum.LOCAL_PYTHON_QUEUE, concurrent_mode=ConcurrentModeEnum.THREADING, qps=10)
def test_multi_thread_ocr(x):
    xxx

if __name__ == '__main__':
    for i in range(10):
        test_multi_thread_ocr(i)
    run_consumer_with_multi_process(test_multi_thread_ocr, 10)

The run log:
15:33:41 "/usr/local/lib/python3.6/dist-packages/function_scheduling_distributed_framework-9.4-py3.6.egg/function_scheduling_distributed_framework/utils/decorators.py:28" 操作系统类型是 posix
Traceback (most recent call last):
File "AuditEngineScheduler.py", line 6, in
from function_scheduling_distributed_framework import task_deco, BrokerEnum, ConcurrentModeEnum, run_consumer_with_multi_process
File "/usr/local/lib/python3.6/dist-packages/function_scheduling_distributed_framework-9.4-py3.6.egg/function_scheduling_distributed_framework/__init__.py", line 9, in
from function_scheduling_distributed_framework.consumers.base_consumer import ExceptionForRequeue, ExceptionForRetry,
File "/usr/local/lib/python3.6/dist-packages/function_scheduling_distributed_framework-9.4-py3.6.egg/function_scheduling_distributed_framework/consumers/base_consumer.py", line 49, in
from function_scheduling_distributed_framework.factories.publisher_factotry import get_publisher
File "/usr/local/lib/python3.6/dist-packages/function_scheduling_distributed_framework-9.4-py3.6.egg/function_scheduling_distributed_framework/factories/publisher_factotry.py", line 16, in
from function_scheduling_distributed_framework.publishers.rabbitmq_pika_publisher import RabbitmqPublisher
File "/usr/local/lib/python3.6/dist-packages/function_scheduling_distributed_framework-9.4-py3.6.egg/function_scheduling_distributed_framework/publishers/rabbitmq_pika_publisher.py", line 6, in
from pikav0 import BasicProperties
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 656, in _load_unlocked
File "", line 626, in _load_backward_compatible
File "/usr/local/lib/python3.6/dist-packages/pikav0-0.1.2b0-py3.6.egg/pikav0/__init__.py", line 15, in
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 656, in _load_unlocked
File "", line 626, in _load_backward_compatible
File "/usr/local/lib/python3.6/dist-packages/pikav0-0.1.2b0-py3.6.egg/pikav0/adapters/__init__.py", line 35, in
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 656, in _load_unlocked
File "", line 626, in _load_backward_compatible
File "/usr/local/lib/python3.6/dist-packages/pikav0-0.1.2b0-py3.6.egg/pikav0/adapters/twisted_connection.py", line 16, in
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/reactor.py", line 38, in
from twisted.internet import default
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/default.py", line 55, in
install = _getInstallFunction(platform)
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/default.py", line 43, in _getInstallFunction
from twisted.internet.epollreactor import install
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/epollreactor.py", line 19, in
from twisted.internet import posixbase
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/posixbase.py", line 20, in
from twisted.internet import error, udp, tcp
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/udp.py", line 57, in
from twisted.internet import base, defer, address
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/base.py", line 38, in
from twisted.internet._resolver import (
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/_resolver.py", line 33, in
from twisted.internet.address import IPv4Address, IPv6Address
File "/usr/local/lib/python3.6/dist-packages/Twisted-21.2.0-py3.6.egg/twisted/internet/address.py", line 99, in
@attr.s(hash=False, repr=False, eq=False)
TypeError: attrs() got an unexpected keyword argument 'eq'
Could you advise what is causing this?

About the PySnooper package imports

Suggestion: in utils/custom_pysnooper.py, change:

import pysnooper  # requires pip install pysnooper==0.0.11
from pysnooper.pysnooper import get_write_function
from pysnooper.tracer import Tracer, get_local_reprs, get_source_from_frame

to

from pysnooper.tracer import Tracer, get_local_reprs, get_write_function

and comment out the code that touches get_source_from_frame, since it is unused anyway.

This would remove the pin on the PySnooper version; I occasionally need that package myself, and the recurring version conflicts are annoying.

Cannot connect to RabbitMQ; I could not find anywhere else to discuss this, so I am asking here

Using REDIS works fine.

I used the code from docs section 4.1, changing the decorator from
broker_kind=BrokerEnum.PERSISTQUEUE
to
broker_kind=BrokerEnum.RABBITMQ_AMQPSTORM
and ran it. The final two lines of output are:

(192.168.0.107,DESKTOP-N56Q994)-[p14292_t14556] 2021-10-14 17:33:55 - RabbitmqPublisherUsingAmqpStorm--queue_test_f01 - "base_publisher.py:298" - WARNING - first use of method [concrete_realization_of_publish]; running init_broker to initialize
(192.168.0.107,DESKTOP-N56Q994)-[p14292_t14556] 2021-10-14 17:33:55 - RabbitmqPublisherUsingAmqpStorm--queue_test_f01 - "rabbitmq_amqpstorm_publisher.py:23" - WARNING - connecting to mq using the AmqpStorm package

After waiting a while, it fails with the following error:

2021-10-14 17:31:24 - function_error - "D:\Python\Python38\lib\site-packages\function_scheduling_distributed_framework\utils\decorators.py:89" - __handle_exception - ERROR - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
error logged for method --> [ concrete_realization_of_publish ], retry attempt 0; error type: <class 'amqpstorm.exception.AMQPConnectionError'> Connection timed out
Traceback (most recent call last):
File "D:\Python\Python38\lib\site-packages\function_scheduling_distributed_framework\utils\decorators.py", line 74, in __handle_exception
result = func(*args, **keyargs)
File "D:\Python\Python38\lib\site-packages\function_scheduling_distributed_framework\publishers\base_publisher.py", line 299, in _deco_mq_conn_error
self.init_broker()
File "D:\Python\Python38\lib\site-packages\function_scheduling_distributed_framework\publishers\rabbitmq_amqpstorm_publisher.py", line 24, in init_broker
self.connection = amqpstorm.UriConnection(
File "D:\Python\Python38\lib\site-packages\amqpstorm\uri_connection.py", line 46, in __init__
super(UriConnection, self).__init__(hostname, username,
File "D:\Python\Python38\lib\site-packages\amqpstorm\connection.py", line 75, in __init__
self.open()
File "D:\Python\Python38\lib\site-packages\amqpstorm\connection.py", line 224, in open
self._wait_for_connection_state(state=Stateful.OPEN)
File "D:\Python\Python38\lib\site-packages\amqpstorm\connection.py", line 378, in _wait_for_connection_state
raise AMQPConnectionError('Connection timed out')
amqpstorm.exception.AMQPConnectionError: Connection timed out

It then retries the connection, repeating like this ten times until it hits the retry maximum and exits.

In distributed_frame_config.py I changed the RabbitMQ section to the following:

RABBITMQ_USER = 'guest'
RABBITMQ_PASS = 'guest'
RABBITMQ_HOST = '127.0.0.1'
RABBITMQ_PORT = 15672
RABBITMQ_VIRTUAL_HOST = 'rabbitmq_virtual_host'

I am sure RabbitMQ installed successfully, and I can reach the management console at http://localhost:15672/. As for the virtual host, I performed only one step and do not know whether anything else needs to be configured.

From a crawler newbie who was talked into trying this

If I use Python's built-in queue as the broker and I have a list, how can I treat that list as the queue and consume items from it?
Also, once consumer = get_consumer() has been created successfully, how do I use it? Is creating the object all the program needs to do?

Please consider a rename

This framework's name is really too long; I sincerely suggest renaming it. A name need not describe exactly what the project does, but such a long package name makes it painful to reference.

About tasks

1. Are task priorities supported, and how does one create and publish a priority task?
2. Does the consumption-time estimate refer to the estimated drain time of the whole queue, or the estimated duration of a single task?
3. Are chained task invocations supported, as in celery?

Does it filter characters?

The string {"table":"swap/candle900s","data":[{"candle":["2020-07-10T02:00:00.000Z","9224.3","9230","9224.3","9227.9","2920","31.6394"],"instrument_id":"BTC-USD-SWAP"}]} loses characters after passing through publisher_of_same_queue.publish(dict(res_str=res_str)).
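For what it is worth, JSON serialization itself is lossless for this payload. A quick standard-library check follows; it does not reproduce the framework's publish path, so any real loss would have to happen elsewhere (for example, log-line truncation):

```python
import json

# The exact payload from the report.
res_str = ('{"table":"swap/candle900s","data":[{"candle":'
           '["2020-07-10T02:00:00.000Z","9224.3","9230","9224.3","9227.9",'
           '"2920","31.6394"],"instrument_id":"BTC-USD-SWAP"}]}')

# Wrap, serialize, and deserialize, mirroring publish(dict(res_str=res_str)).
wire = json.dumps(dict(res_str=res_str))
assert json.loads(wire)['res_str'] == res_str  # every character survives
```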

Upstream/downstream task scheduling

How do people in other roles publish tasks directly to this framework? How are remote calls made? Is there a remote API, like celery's flower?

Are task pause/resume and parent/child task orchestration supported?

Hello author. I came across this excellent task framework by chance; we are planning to migrate and refactor our old tasks and have also evaluated celery and rq. Having skimmed this framework today, I would like to ask whether it supports something like celery's chord. The features we need include suspending a task and, after a failure, resuming its retries downstream once a human has intervened. I hope you find time to answer. Thank you.

Please consider memory usage on low-memory machines

This project really is many times faster at execution than celery, but its memory usage is correspondingly higher, and under high concurrency that matters. I run a 1-core / 2 GB / 3 Mbps CentOS 7 cloud host on a Hong Kong line; since I am researching cryptocurrencies, this machine holds websocket connections to exchanges to collect trade data. With celery, redis never crashed: memory would run short and writes would fail, but it stayed usable. After switching to this library, with the same processing, redis died within 5 minutes. I had no choice but to stop storing data on this host and use it only to receive data and submit it to another system's API. I tried the parameters concurrent_num=10 and concurrent_mode=2, which only kept redis alive a little longer, still under 10 minutes. Is there a setting that would help?

A problem combining get_consumer with fsdf_background_scheduler

When get_consumer and fsdf_background_scheduler are used together, I get: AttributeError: 'PersistQueueConsumer' object has no attribute 'is_decorated_as_consume_function'

Does fsdf_background_scheduler only work with the decorator style, and not with get_consumer?

Test code:
def test_job():
    from datetime import datetime
    print(f"{datetime.now()} the job ran once")

if __name__ == "__main__":
    ss = get_consumer(consuming_function=test_job, queue_name='test_job', broker_kind=BrokerEnum.SQLACHEMY,
                      create_logger_file=False, log_level=10)
    cron = "0/5 * * * * ? *".split(' ')
    cron_rel = dict(second=cron[0], minute=cron[1], hour=cron[2], day=cron[3],
                    month=cron[4])
    fsdf_background_scheduler.add_job(timing_publish_deco(ss), trigger='cron', **cron_rel)
    fsdf_background_scheduler.start()
    ss.start_consuming_message()

The failing code reports:
\lib\site-packages\function_scheduling_distributed_framework\timing_job\__init__.py", line 17, in _deco
if getattr(consuming_func_decorated_or_consumer, 'is_decorated_as_consume_function') is True:
AttributeError: 'PersistQueueConsumer' object has no attribute 'is_decorated_as_consume_function'

Can the startup code be simplified?

If I have a hundred tasks to process, does the code need a hundred get_consumer() calls and a hundred start_consuming_message() calls? Can it be reduced to a single configuration and a single start? Or is my approach wrong, and one configuration plus one start is already possible?

With celery, one configuration and one start was enough, so I am wondering whether distributed_framework can work the same way.
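A single-config/single-start layer can be built on top with a small registry. In this sketch, `register_task` and `TASK_REGISTRY` are hypothetical helpers, not part of the framework; `start_all` takes the framework's `get_consumer` factory as an argument so the pattern stays testable without a broker:

```python
# Hypothetical one-config / one-start layer; register_task and
# TASK_REGISTRY are illustration-only helpers, not framework API.
TASK_REGISTRY = {}

def register_task(queue_name, **consumer_kwargs):
    """Decorator that records a function plus its consumer configuration."""
    def deco(func):
        TASK_REGISTRY[queue_name] = dict(
            queue_name=queue_name, consuming_function=func, **consumer_kwargs)
        return func
    return deco

@register_task('queue_a', qps=10)
def task_a(x):
    return x * 2

@register_task('queue_b', qps=5)
def task_b(x):
    return x + 1

def start_all(get_consumer):
    # get_consumer would come from the framework, e.g.
    # from function_scheduling_distributed_framework import get_consumer
    for kwargs in TASK_REGISTRY.values():
        get_consumer(**kwargs).start_consuming_message()
```

With a hundred tasks, each gains one decorator line, and the whole worker still starts with a single `start_all(get_consumer)` call.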

Setting function_timeout on Python 3.9 raises an exception

After a quick trial I found that specifying the function_timeout parameter makes the program raise an exception; omitting the parameter works fine. Straight to the code:

目录结构:

├── consumer.py
├── distributed_frame_config.py
├── funcs.py
├── producer.py

funcs.py

import time

from function_scheduling_distributed_framework import task_deco, BrokerEnum


@task_deco('queue1', qps=3, broker_kind=BrokerEnum.RABBITMQ_AMQPSTORM, is_show_message_get_from_broker=True,
           is_using_distributed_frequency_control=True, function_timeout=600)
def task_fun(x, y):
    print(f'{x} + {y} = {x + y}')
    time.sleep(3)

consumer.py

from funcs import task_fun

if __name__ == '__main__':
    task_fun.consume()

producer.py

from funcs import task_fun

if __name__ == '__main__':
    for i in range(20):
        task_fun.push(i, i * 2)

Error log

 (10.8.0.2,dm.local)-[p7047_t123145617371136] 2021-11-14 19:35:13 - RabbitmqConsumerAmqpStorm--queue1 - "base_consumer.py:554" - DEBUG - message fetched from the rabbitmq broker queue1: {"x": 9, "y": 18, "extra": {"task_id": "queue1_result:bcaeff96-d55d-494b-913c-a6e99ad1135b", "publish_time": 1636889710.6115, "publish_time_format": "2021-11-14 19:35:10"}}
2021-11-14 19:35:13 - _CustomThread - "/Users/<redacted>/venv/lib/python3.9/site-packages/function_scheduling_distributed_framework/concurrent_pool/custom_threadpool_executor.py:172" - run - DEBUG - started new thread 123146003529728
19:35:13  "/Users/<redacted>/funcs.py:13"   9 + 18 = 27
 (10.8.0.2,dm.local)-[p7047_t123145734897664] 2021-11-14 19:35:14 - RabbitmqConsumerAmqpStorm--queue1 - "base_consumer.py:660" - ERROR - function task_fun errored on run 1; function runtime was 3.0011,
  arguments: {'x': 1, 'y': 2}
 cause: <class 'AttributeError'> '__KThread' object has no attribute 'isAlive'
Traceback (most recent call last):
  File "/Users/<redacted>/venv/lib/python3.9/site-packages/function_scheduling_distributed_framework/consumers/base_consumer.py", line 639, in _run_consuming_function_with_confirm_and_retry
    function_result_status.result = function_run(**function_only_params)
  File "/Users/<redacted>/venv/lib/python3.9/site-packages/function_scheduling_distributed_framework/utils/decorators.py", line 539, in _
    alive = thd.isAlive()
AttributeError: '__KThread' object has no attribute 'isAlive'
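The error comes from `Thread.isAlive`, a long-deprecated camelCase alias that was finally removed in Python 3.9; the call in the framework's decorators.py would need to become `thd.is_alive()`. A minimal demonstration of the surviving API:

```python
import threading

# Since Python 3.9 only the snake_case spelling exists; calling
# t.isAlive() there raises AttributeError, exactly as in the log above.
t = threading.Thread(target=lambda: None)
t.start()
t.join()
assert t.is_alive() is False  # a joined thread reports not-alive
```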
