
pushl's Issues

ERROR:asyncio:SSL error in data received

This happens when running pushl against RSS/Atom feeds or entries served over SSL. It seems to be an upstream bug in aiohttp, but it could also be a problem with connection pooling on the Pushl side.

Output log gets spammed with:

ERROR:asyncio:SSL error in data received
protocol: <asyncio.sslproto.SSLProtocol object at 0xf45838ac>
transport: <_SelectorSocketTransport fd=47 read=polling write=<idle, bufsize=0>>
Traceback (most recent call last):
  File "/usr/lib/python3.7/asyncio/sslproto.py", line 526, in data_received
    ssldata, appdata = self._sslpipe.feed_ssldata(data)
  File "/usr/lib/python3.7/asyncio/sslproto.py", line 207, in feed_ssldata
    self._sslobj.unwrap()
  File "/usr/lib/python3.7/ssl.py", line 767, in unwrap
    return self._sslobj.shutdown()
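
Until the upstream fix lands, one possible mitigation (assuming these errors really are benign noise from connections closing mid-shutdown) would be to filter them out of the asyncio logger. This is an illustrative workaround, not part of Pushl:

import logging

class SuppressSSLShutdownNoise(logging.Filter):
    """Drop the benign 'SSL error in data received' spam from asyncio."""
    def filter(self, record):
        return 'SSL error in data received' not in record.getMessage()

logging.getLogger('asyncio').addFilter(SuppressSSLShutdownNoise())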

Rework the href vs. link thing

Right now the href-vs-link distinction is causing a bunch of weird behavior, and its caching could be a lot better.

Possible refactoring (a rough sketch follows the list):

  1. entries.Entry.get_targets returns a list of (url,href) pairs (looks like it already does this but the naming is confusing)
  2. webmentions.get_target only takes (and caches) the url value
  3. webmentions.Target.__init__ stores the self.canonical value from the request.url response
  4. webmentions.Target._get_endpoint can override self.canonical if the document provides a <link rel="canonical">
  5. webmentions.Target.send takes source,href parameters, and emits a compatibility warning if href != self.canonical (but only if self.endpoint is not None).
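
A rough sketch of the shape this could take; everything here is illustrative, and LOGGER and the elided bodies are placeholders:

import logging

LOGGER = logging.getLogger(__name__)

class Target:
    def __init__(self, request):
        # The post-redirect URL is canonical until the document says otherwise
        self.canonical = str(request.url)
        self.endpoint = None

    def _get_endpoint(self, document):
        # Find the webmention endpoint; if the document provides a
        # <link rel="canonical">, override self.canonical here
        ...

    async def send(self, source, href):
        if self.endpoint is not None and href != self.canonical:
            LOGGER.warning("%s: link href %s resolved to canonical %s",
                           source, href, self.canonical)
        ...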

AttributeError in webmention handler

remote: ERROR:asyncio:Task exception was never retrieved
remote: future: <Task finished coro=<Pushl.send_webmention() done, defined at /home/fluffy/.local/share/virtualenvs/beesbuzz.biz-RK9-tIok/lib/python3.7/site-packages/pushl/__init__.py:162> exception=AttributeError("'Target' object has no attribute 'href'")>
remote: Traceback (most recent call last):
remote:   File "/home/fluffy/.local/share/virtualenvs/beesbuzz.biz-RK9-tIok/lib/python3.7/site-packages/pushl/__init__.py", line 178, in send_webmention
remote:     await target.send(self, entry)
remote:   File "/home/fluffy/.local/share/virtualenvs/beesbuzz.biz-RK9-tIok/lib/python3.7/site-packages/pushl/webmentions.py", line 159, in send
remote:     LOGGER.debug("%s -> %s via %s %s", entry.url, self.href,
remote: AttributeError: 'Target' object has no attribute 'href'

I think this was already fixed by 4836b49 but it's worth double-checking.

Rewrite to use async

Instead of the silly ThreadPoolExecutor stuff, this would be a really good candidate for asyncio and aiohttp.
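
A minimal illustration of the intended shape (not Pushl's actual code; the URL and function names are placeholders):

import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main(urls):
    # One shared session; the connector handles pooling instead of threads
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, u) for u in urls))

if __name__ == '__main__':
    asyncio.run(main(['https://example.com/feed']))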

Add support for media targets

<img src>, <video src>, etc. should also send webmentions. For media targets it should only be necessary to HEAD the resource rather than GET it; <script src> should probably be excluded, and there is no reason to look at rel on these elements.
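
A sketch of what the extraction and check might look like, assuming a BeautifulSoup-parsed document (the tag/attribute table is illustrative):

import aiohttp
from bs4 import BeautifulSoup

# <script src> is deliberately absent from this table
MEDIA_TAGS = {'img': 'src', 'video': 'src', 'audio': 'src', 'source': 'src'}

def get_media_targets(html):
    soup = BeautifulSoup(html, 'html.parser')
    for tag, attr in MEDIA_TAGS.items():
        for node in soup.find_all(tag):
            if node.get(attr):
                yield node[attr]

async def check_media(session, url):
    # HEAD is enough for media; we never need the body
    async with session.head(url) as response:
        return response.status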

Add configuration for top-level container(s)

When a page doesn't use <article> or h-entry markup, the entire page is used as the source of outgoing links. For forums or older blogs it would be helpful to be able to specify which containers should be considered entry content (ignoring e.g. signatures and user profiles).
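
One way this could work is as a list of CSS selectors in the configuration; a sketch, with the selectors parameter being hypothetical:

from bs4 import BeautifulSoup

def get_content_roots(html, selectors=None):
    # selectors is hypothetical config, e.g. ['.post-body', '#content'];
    # with no match (or no config), fall back to the whole page
    soup = BeautifulSoup(html, 'html.parser')
    if selectors:
        roots = [node for sel in selectors for node in soup.select(sel)]
        return roots or [soup]
    return [soup]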

Rethink the caching, targeting, etc behavior

The actual run loop has gotten fragile and messy.

Each feed should simply get the list of entries, plus the list of entries which were in the previous cached version.

Each entry should simply get its links, XORed with the links from the previous cached version.

Each ping should only be sent once.

There probably isn’t really a good reason to be running so many threads. Maybe do one active connection per domain, and use that as the gating mechanism for aiohttp.

Basically I feel like a lot of the guts need to be torn out and replaced now that I have a better idea of what I’m doing.
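
In code, those diffs could be plain set operations (attribute names hypothetical):

# All entries worth looking at: current ones plus whatever the
# previous cached version of the feed had
entries = set(feed.entries) | set(previous.entries)

# Links that changed since last time: the XOR of the two link sets
changed_links = set(entry.links) ^ set(previous_entry.links)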

If caching is enabled, keep track of already-sent pings

At least in the case of Pingback, most sites do not seem to appreciate getting the same ping multiple times (unlike Webmention, where re-sending is part of the spec).

So, if persistent storage is enabled, pushl should only send pingback pings which haven't been sent before.
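
A sketch of the check, with the cache API names and the sender being hypothetical:

async def send_pingback(cache, source, target):
    key = 'pingback-sent ' + source + ' ' + target
    if cache and cache.get(key):
        return  # this (source, target) pair has already been pinged
    await do_pingback(source, target)  # hypothetical XML-RPC sender
    if cache:
        cache.set(key, True)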

Add file-based locking to cache files

To better support concurrency, add a Cache.lock primitive, which would look something like:

with cache.lock(prefix, url) as lock:
    previous = lock.get(schema_version)
    # ...
    lock.save(current)

While a lock is held on a file, any other attempt at acquiring that lock should block.
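
On POSIX this could be built on fcntl.flock; a minimal sketch, with the get/save signatures loosely following the example above:

import contextlib
import fcntl
import os
import pickle
from urllib.parse import quote

class _Locked:
    def __init__(self, path):
        self._path = path
        self._schema = None

    def get(self, schema_version):
        self._schema = schema_version
        try:
            with open(self._path, 'rb') as file:
                version, data = pickle.load(file)
            return data if version == schema_version else None
        except FileNotFoundError:
            return None

    def save(self, data):
        with open(self._path, 'wb') as file:
            pickle.dump((self._schema, data), file)

@contextlib.contextmanager
def lock(cache_dir, prefix, url):
    path = os.path.join(cache_dir, prefix, quote(url, safe=''))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path + '.lock', 'w') as lockfile:
        fcntl.flock(lockfile, fcntl.LOCK_EX)  # blocks until released
        try:
            yield _Locked(path)
        finally:
            fcntl.flock(lockfile, fcntl.LOCK_UN)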

KeyError: link while processing a feed

Traceback (most recent call last):
  File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/feedparser.py", line 398, in __getattr__
    return self.__getitem__(key)
  File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/feedparser.py", line 356, in __getitem__
    return dict.__getitem__(self, key)
KeyError: 'link'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/bin/pushl", line 11, in <module>
    sys.exit(main())
  File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/pushl/__main__.py", line 123, in main
    worker.wait_finished()
  File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/pushl/__main__.py", line 60, in wait_finished
    queued.result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/pushl/__main__.py", line 82, in process_feed
    self.submit(self.process_entry, entry.link)
  File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/feedparser.py", line 400, in __getattr__
    raise AttributeError("object has no attribute '%s'" % key)
AttributeError: object has no attribute 'link'
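
A defensive fix would be to skip entries that have no link; feedparser entries support dict-style .get, so in process_feed this could look roughly like:

for entry in feed.entries:
    link = entry.get('link')
    if not link:
        LOGGER.warning("Feed %s: skipping entry with no link (%r)",
                       feed_url, entry.get('title'))
        continue
    self.submit(self.process_entry, link)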

TypeError: shield() got an unexpected keyword argument 'loop'

With Python 3.10, I'm getting

ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-174' coro=<Pushl.send_webmention() done, defined at /home/tomi/.local/pipx/venvs/pushl/lib/python3.10/site-packages/pushl/__init__.py:180> exception=TypeError("shield() got an unexpected keyword argument 'loop'")>
Traceback (most recent call last):
  File "/home/tomi/.local/pipx/venvs/pushl/lib/python3.10/site-packages/pushl/__init__.py", line 190, in send_webmention
    target, code, cached = await webmentions.get_target(self, dest)
  File "/home/tomi/.local/pipx/venvs/pushl/lib/python3.10/site-packages/async_lru.py", line 212, in wrapped
    return (yield from asyncio.shield(fut, loop=_loop))
TypeError: shield() got an unexpected keyword argument 'loop'

Handle pending tasks better when killed

If a KeyboardInterrupt or similar occurs in main, it should check the status of the pending tasks, cancel them appropriately, and maybe print what was being waited on. Otherwise you get a big list of opaque blobs like:

KeyboardInterrupt
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_feed() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:35> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10f41d0a8>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_entry() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:80> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x1113b27c8>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_entry() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:80> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x111391678>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_feed() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:35> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x1114811f8>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_feed() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:35> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x1113b2be8>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_entry() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:80> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x112175558>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_entry() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:80> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x111ec0f18>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.send_webmention() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:110> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x112380e88>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.send_webmention() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:110> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x111ec0348>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
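
A sketch of what main could do instead, using only standard asyncio calls (Task.get_coro needs Python 3.8+):

import asyncio

def run(main_coro):
    loop = asyncio.new_event_loop()
    try:
        loop.run_until_complete(main_coro)
    except KeyboardInterrupt:
        pending = asyncio.all_tasks(loop)
        for task in pending:
            print('Cancelling:', task.get_coro())  # show what was in flight
            task.cancel()
        # Let the tasks run their cancellation handlers before exiting
        loop.run_until_complete(
            asyncio.gather(*pending, return_exceptions=True))
    finally:
        loop.close()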

HTML encoding detection could be better

if ctype.startswith('text/html'):

Right now, if the content-type doesn't declare an encoding, it assumes ISO-8859-1. While this is probably fine for everything Pushl needs, technically it should decode as US-ASCII (or similar) and then look for a <meta charset> or <meta http-equiv> declaration of the character encoding.
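
A sketch of the sniffing logic (the regex is simplified; real-world <meta> parsing has more edge cases):

import re

META_CHARSET = re.compile(
    rb'<meta[^>]+charset=["\']?([a-zA-Z0-9_-]+)', re.IGNORECASE)

def decode_html(body, ctype):
    # Prefer an explicit charset from the Content-Type header
    match = re.search(r'charset=([^;\s]+)', ctype)
    if match:
        return body.decode(match.group(1), errors='replace')
    # Otherwise sniff the first couple of KB for a <meta> declaration
    match = META_CHARSET.search(body[:2048])
    encoding = match.group(1).decode('ascii') if match else 'iso-8859-1'
    return body.decode(encoding, errors='replace')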

Track pending pings to retry later

Save the scheduled and failed pings into an item that gets written to the cache after each ping completes (removing the ping or marking it as failed, accordingly). Merge (OR) these items into the scheduled list when the target list is computed.

This will help with the case that Pushl gets interrupted partway through (or the receiving endpoint happens to be down/erroring at the time), so pings aren’t lost forever.
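
Roughly (the cache API and the sender are hypothetical):

async def process_pings(cache, entry_url, targets):
    # Merge in whatever failed (or never ran) on the previous run
    pending = set(targets) | set(cache.get('pending ' + entry_url) or ())
    for target in sorted(pending):
        try:
            await send_ping(entry_url, target)  # hypothetical sender
            pending.discard(target)
        except Exception:
            pass  # stays pending; will be retried next run
        cache.set('pending ' + entry_url, list(pending))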

Allow self-pings by configuration

Currently, all webmentions to the same domain are suppressed. Someone might want to allow self-pings, though (for example, for community blogs all hosted on the same domain).

Don't assume namespace names

While most uses of RFC 5005 use the namespace prefix fh, it could really be anything. Look at the feed's declared namespaces to figure out which prefix to use for the tag.
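
feedparser already exposes the declared namespaces as a prefix-to-URI mapping, so the lookup could be:

FH_NS = 'http://purl.org/syndication/history/1.0'

def fh_prefix(feed):
    # feed.namespaces maps prefix -> URI; find whichever prefix this
    # document actually bound to the RFC 5005 namespace
    for prefix, uri in feed.namespaces.items():
        if uri == FH_NS:
            return prefix
    return None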

aiohttp.client_exceptions.ClientPayloadError on feeds using chunked encoding

Comment feeds from phpBB are delivered using chunked encoding in a way that makes aiohttp barf. Fetching via curl works:

$ curl -i https://songfight.net/forums/app.php/feed/posts
HTTP/1.1 200 OK
Date: Sun, 21 Jun 2020 07:30:08 GMT
Server: Apache
Cache-Control: private, must-revalidate
Set-Cookie: phpbb3_fhvs1_u=1; expires=Mon, 21-Jun-2021 07:30:08 GMT; path=/; domain=.songfight.net; HttpOnly
Set-Cookie: phpbb3_fhvs1_k=; expires=Mon, 21-Jun-2021 07:30:08 GMT; path=/; domain=.songfight.net; HttpOnly
Set-Cookie: phpbb3_fhvs1_sid=da219fd5cb7c6839f6f2ecaae0ce9dd7; expires=Mon, 21-Jun-2021 07:30:08 GMT; path=/; domain=.songfight.net; HttpOnly
Upgrade: h2
Connection: Upgrade
Last-Modified: Sun, 21 Jun 2020 07:21:51 GMT
Cache-Control: max-age=172800
Expires: Tue, 23 Jun 2020 07:30:08 GMT
Vary: User-Agent
Transfer-Encoding: chunked
Content-Type: application/atom+xml

<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-gb">
...

but in pushl it fails:

$ pipenv run pushl -rvvvk --rel-exclude '' https://songfight.net/forums/app.php/feed/posts
DEBUG:asyncio:Using selector: EpollSelector
DEBUG:pushl:++WAIT: https://songfight.net/forums/app.php/feed/posts: get feed
DEBUG:pushl.feeds:++WAIT: cache get feed https://songfight.net/forums/app.php/feed/posts
DEBUG:pushl.feeds:++DONE: cache get feed https://songfight.net/forums/app.php/feed/posts
DEBUG:pushl.feeds:++WAIT: request get https://songfight.net/forums/app.php/feed/posts None)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=0)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=1)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=2)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=3)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=4)
WARNING:utils:https://songfight.net/forums/app.php/feed/posts: Exceeded maximum retries; errors: {'Response payload is not completed'}
DEBUG:pushl.feeds:++DONE: request get https://songfight.net/forums/app.php/feed/posts
ERROR:pushl.feeds:Could not get feed https://songfight.net/forums/app.php/feed/posts: -1
DEBUG:pushl:++DONE: https://songfight.net/forums/app.php/feed/posts: get feed
INFO:pushl.main:Completed all tasks

There is probably some configuration that needs to be sent to aiohttp to make it more tolerant of chunked encoding weirdness.
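
One possible workaround (not verified against this server) is to read the stream manually and keep whatever arrived before the payload error, since the truncation presumably happens at the very end of the chunked stream:

import aiohttp

async def get_lenient(session, url):
    async with session.get(url) as response:
        body = b''
        try:
            async for chunk in response.content.iter_chunked(8192):
                body += chunk
        except aiohttp.ClientPayloadError:
            # The server mangled the end of the chunked stream;
            # keep the partial body rather than failing outright
            pass
        return response.status, body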

Spider for related feeds on the same site

It would be useful if entry processing also discovered related feeds, for WebSub et al. Perhaps a -r/--recurse parameter could tell process_entry to also submit any <link rel="alternate"> links for consideration.
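
A sketch of the discovery step, assuming a BeautifulSoup-parsed page:

from bs4 import BeautifulSoup

FEED_TYPES = {'application/atom+xml', 'application/rss+xml'}

def find_alternate_feeds(html):
    soup = BeautifulSoup(html, 'html.parser')
    for link in soup.find_all('link', rel='alternate'):
        if link.get('type') in FEED_TYPES and link.get('href'):
            yield link['href']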

Handle source redirection

If an entry URL changes, Pushl should re-send all of its mentions from the old URL so the endpoint knows to update them.

This can happen in Pushl.process_entry by checking whether url != entry.url and making the target set entry.get_targets(self) | previous.get_targets(self). The API for Pushl.process_entry then changes to take the entry URL instead of the entry object, and it sends based on the original URL rather than the resolved one.
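
Sketched out (the get_entry helper is hypothetical; the rest follows the proposal above):

async def process_entry(self, url):
    entry, previous = await entries.get_entry(self, url)  # hypothetical
    targets = entry.get_targets(self)
    if url != entry.url:
        # The entry moved: re-send everything the old version linked to,
        # so receivers can update the source URL of existing mentions
        targets |= previous.get_targets(self)
    for target in targets:
        await self.send_webmention(entry, target)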

-r should only look at feeds on the same domain probably

In Publ an item can redirect to something on another site, which can then cause recursion to take place on external feeds. It'd be better to restrict -r traversal to feeds that are on the same domain as a feed that was specified in the original options.

e.g. process_feed can whitelist domains that process_entry will recurse into.
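
For example (a sketch; how the whitelist gets threaded through is up for grabs):

from urllib.parse import urlparse

def make_domain_filter(feed_urls):
    # Only recurse into feeds on domains named in the original options
    allowed = {urlparse(url).netloc.lower() for url in feed_urls}

    def should_recurse(url):
        return urlparse(url).netloc.lower() in allowed

    return should_recurse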

Better error reporting

ERROR:urllib3.connection:Certificate did not match expected hostname: themindfulnessapp.com. Certificate: {'subject': ((('organizationalUnitName', 'Domain Control Validated'),), (('organizationalUnitName', 'EssentialSSL Wildcard'),), (('commonName', '*.binero.se'),)), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'COMODO CA Limited'),), (('commonName', 'COMODO RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': '4AD42FAD2417190F22820224FF009436', 'notBefore': 'Apr  3 00:00:00 2018 GMT', 'notAfter': 'Apr  3 23:59:59 2019 GMT', 'subjectAltName': (('DNS', '*.binero.se'), ('DNS', 'binero.se')), 'OCSP': ('http://ocsp.comodoca.com',), 'caIssuers': ('http://crt.comodoca.com/COMODORSADomainValidationSecureServerCA.crt',), 'crlDistributionPoints': ('http://crl.comodoca.com/COMODORSADomainValidationSecureServerCA.crl',)}

It would be nice if these errors also showed what the origin of the connection request was.

Option to auto-ping Internet Archive

Per an in-person discussion with Marty McGuire and Kitt Hodsen: if I link to something that might go away in the future, it's useful for the Internet Archive to have a snapshot of it for later. The Internet Archive provides an API for requesting a snapshot, so, if configured, Pushl could request one for every outgoing link target.
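
The Wayback Machine's Save Page Now endpoint makes this a single request per target; a sketch (rate limiting and the authenticated variant of the API not shown):

import aiohttp

async def archive_url(session, target):
    # Save Page Now archives whatever URL is appended to the endpoint
    async with session.get('https://web.archive.org/save/' + target) as resp:
        return resp.status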

Re-ping everyone if content changes

If a pertinent mf2 property (p-name, p-summary, e-content) of an entry changes, pings should be re-sent to all targets, not just the ones whose URLs changed.

However, this should only happen when the actual entry content changes, not based purely on the overall page hash (e.g. nav links, “15 minutes ago” publish times, and so on).
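
One way to scope the change detection, assuming the entry has already been parsed into an mf2 item:

import hashlib
import json

PERTINENT = ('name', 'summary', 'content')  # p-name, p-summary, e-content

def entry_content_hash(mf2_item):
    # Hash only the properties that matter, so nav links and relative
    # timestamps elsewhere on the page don't trigger re-pings
    props = mf2_item.get('properties', {})
    pertinent = {key: props.get(key) for key in PERTINENT}
    digest = hashlib.sha256(
        json.dumps(pertinent, sort_keys=True, default=str).encode())
    return digest.hexdigest()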

Allow pinging from an old URL

Currently, pinging an entry by URL only sends pings from the current canonical URL. Sometimes it's necessary to manually send pings from an old or non-canonical URL; for example, to force a receiving website to update the canonical URL on an outdated ping (when the interim update failed for whatever reason, or the user hasn't been using the caching mechanism).

So, there should be a command-line option that makes process_entry_mentions send the pings from url instead of entry.url.

When a canonical URL changes, re-send all pings from both URLs

The fix for #35 didn't cover the case where the URL in the feed links to a redirector and the old redirection target doesn't redirect to the new target.

An attempted fix in 17fd5ac had the unfortunate effect of breaking support for rel="canonical", so it has been reverted; a better solution would be to enqueue re-pinging all of the prior targets from the prior URL, rather than re-pinging everything indiscriminately.

Sometimes Pushl hangs indefinitely

On occasion, pushl hangs indefinitely; since I use flock -n to prevent multiple instances from running simultaneously, this means push notifications stop happening for days at a time until I notice.

The logic for when to let the process exit probably needs work.

Support h-feed

A lot of IndieWeb folks are moving away from Atom/RSS and towards h-feed. If a feed fails to parse with feedparser, fall back to https://github.com/microformats/mf2py to parse out the h-feed and apply the same logic.

Also add support for <link rel="feed"> for feed discovery in recursive mode.

Remember to support WebSub, which should probably be added to content pages as well.
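
A sketch of the fallback using mf2py's parse API:

import mf2py

def parse_hfeed(html, url):
    parsed = mf2py.parse(doc=html, url=url)
    for item in parsed['items']:
        if 'h-feed' in item['type']:
            return [child for child in item.get('children', [])
                    if 'h-entry' in child['type']]
    # No explicit h-feed; fall back to top-level h-entries
    return [item for item in parsed['items'] if 'h-entry' in item['type']]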

Frequent connection-closing exceptions

It looks like there's something wrong with connection pooling where the server might disconnect and the pool expects the connection to still be there. Example:

WARNING:pushl.entries:Entry http://beesbuzz.biz/blog/3978-Reprogramming-my-sleep-cycle: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/blog/8118-So-what-is-Subl-anyway: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/experiments/5720-sawbench-test: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/3987-Strangers-Official-video: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/covers/5274-I-Dont-Believe-You-Magnetic-Fields: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/experiments/2314-error-pages: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/1702-Boffo-Yux-Dudes: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/demos/771-Good-Luck-Charm: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/experiments/2346-bowed-bass: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/blog/7902-More-Authl-thoughts: got ServerDisconnectedError: None
INFO:pushl.webmentions:Sending Webmention http://beesbuzz.biz/blog/3743-More-fun-with-Webmentions -> https://webmention.io/
WARNING:pushl.entries:Entry https://beesbuzz.biz/comics/journal/555-Resolutions-for-2016: got ServerDisconnectedError: None
WARNING:pushl.entries:Entry https://beesbuzz.biz/comics/journal/839-July-1-2017-Re-refactor: got ServerDisconnectedError: None
WARNING:pushl.entries:Entry https://beesbuzz.biz/blog/6665-Some-more-site-template-update-thinguses: got ServerDisconnectedError: None

There is probably a setting hidden in aiohttp.TCPConnector that would make this retry, or at least stop reusing dead connections.
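
If stale pooled connections are the culprit, TCPConnector can be told not to reuse them at all, at the cost of extra handshakes; whether this actually fixes the problem is unverified:

import aiohttp

async def make_session():
    connector = aiohttp.TCPConnector(
        force_close=True,            # never reuse a connection
        enable_cleanup_closed=True,  # reap abruptly-closed transports
    )
    return aiohttp.ClientSession(connector=connector)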

TypeError: Passing coroutines is forbidden, use tasks explicitly.

Running from commit a13a2e7, installed with pipx and without system-site-packages, I get the following error:

$ pushl -vc "${XDG_CACHE_HOME-$HOME/.cache}/pushl" -e "https://seirdy.one/notes/2023/01/04/against-chasing-growth/"
Traceback (most recent call last):
  File "/path/to/pipx/bin/pushl", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/path/to/pipx/venvs/pushl/lib64/python3.11/site-packages/pushl/__main__.py", line 128, in main
    loop.run_until_complete(_run(args))
  File "/usr/lib64/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/path/to/pipx/venvs/pushl/lib64/python3.11/site-packages/pushl/__main__.py", line 163, in _run
    _, timed_out = await asyncio.wait(tasks, timeout=args.max_time)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/asyncio/tasks.py", line 415, in wait
    raise TypeError("Passing coroutines is forbidden, use tasks explicitly.")
TypeError: Passing coroutines is forbidden, use tasks explicitly.
sys:1: RuntimeWarning: coroutine 'Pushl.process_entry' was never awaited

System info: Fedora 37, python 3.11.1.
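
Python 3.8 deprecated, and 3.11 removed, passing bare coroutines to asyncio.wait, so the fix in _run is presumably to wrap them in tasks first:

# Before (fails on 3.11 because `tasks` holds bare coroutines):
#   _, timed_out = await asyncio.wait(tasks, timeout=args.max_time)

pending = [asyncio.ensure_future(task) for task in tasks]
_, timed_out = await asyncio.wait(pending, timeout=args.max_time)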
