HCA DCP Query Service

The HCA DCP Query Service provides an interface for scientists and developers to query metadata associated with experimental and analysis data stored in the Human Cell Atlas Data Coordination Platform (DCP). Metadata from the DCP Data Store are indexed and stored in an AWS Aurora PostgreSQL database.

Queries to the database can be sent over HTTP through the Query Service API, which is available together with its documentation at https://query.staging.data.humancellatlas.org/.

For long-running queries (runtime over 20 seconds), the Query Service supports asynchronous tracking of query results. When a long-running query triggers this mode, the caller will receive a 301 Moved Permanently response status code with a Retry-After header. The caller is expected to wait the specified amount of time before checking the redirect destination, or use the query job ID returned in the response JSON body to check the status of the query job. The caller may turn off this functionality (and cause the API to time out and return an error when a long-running query is encountered) by setting the async=False flag when calling /query.

For large query results, the Query Service may deposit results in S3 instead of returning them verbatim in the response body. In this case, the client will receive a 302 Found response status code sending them to the response data location. In this mode, response data are confidential to the caller, and remain accessible for 7 days. The caller may turn off this functionality by setting the async=False flag when calling /query.
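
The redirect handling can be scripted. Below is a minimal client sketch; the endpoint path and the request payload shape are assumptions and may differ from the deployed API, so consult the API documentation for the exact contract.

# Minimal client sketch for the asynchronous / redirect behavior described above.
# The endpoint path ("query") and the JSON payload shape are assumptions.
import time
from urllib.parse import urljoin

import requests

API = "https://query.staging.data.humancellatlas.org/"

resp = requests.post(urljoin(API, "query"),
                     json={"query": "SELECT count(*) FROM bundles"},
                     allow_redirects=False)

while resp.status_code == 301:
    # Long-running query: wait as instructed, then check the redirect target.
    time.sleep(int(resp.headers.get("Retry-After", "20")))
    resp = requests.get(urljoin(API, resp.headers["Location"]), allow_redirects=False)

if resp.status_code == 302:
    # Large result set deposited in S3: follow the redirect to fetch it.
    resp = requests.get(resp.headers["Location"])

print(resp.json())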

Query Service internal architecture

Query architecture

Query API

The REST API is a Chalice app that adopts OpenAPI's approach to specification-driven development and leverages Connexion for input parameter validation. The Chalice app is deployed with Amazon API Gateway and AWS Lambda. The full API documentation is available at the URL listed above.

Subscribing to Data Updates

The query service is subscribed to all updates to the DCP Data Store (DSS). When data is added to the DSS, a webhook containing the bundle id is sent to the Query API. The bundle_id is added to the dcp-query-data-input-queue-[deployment-stage] SQS queue, which eventually calls the query-load-data-[deployment-stage] lambda, which retrieves the bundle metadata and loads it into the database.
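
The notification handler itself is conceptually simple. The sketch below illustrates the flow under stated assumptions: the queue name suffix, the webhook payload field names, and the handler name are illustrative, not the production code.

# Simplified sketch of the ingest flow described above (not the production handler).
import json

import boto3

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="dcp-query-data-input-queue-dev")

def handle_dss_notification(notification: dict) -> None:
    # Receive a DSS update notification and enqueue the bundle for loading.
    bundle_id = notification["match"]["bundle_uuid"]  # field names are assumptions
    queue.send_message(MessageBody=json.dumps({"bundle_id": bundle_id}))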

Developing the Query Service

Prerequisites: Ubuntu

Run apt-get install jq moreutils gettext make virtualenv postgresql zip unzip.

Prerequisites: Mac OS

Run:

brew install jq moreutils gettext postgresql@10
ln -s /usr/local/opt/gettext/bin/envsubst /usr/local/bin
ln -s /usr/local/Cellar/postgresql@10/*/bin/psql /usr/local/bin

Most components of the Query Service are written in Python (Python version 3.6 or higher is required). After cloning this repository, you can run the Query Service in local mode to experiment with it. In this mode, a PostgreSQL server is expected to be running locally on the standard port (5432), and the current user is expected to have admin access to the database referenced by APP_NAME.

It is recommended that you set up a dedicated Python virtualenv to work with the Query Service, then install the development requirements and run the Chalice server in local mode:

mkdir venv
virtualenv --python python3.6 venv/36
source venv/36/bin/activate
pip install -r requirements-dev.txt
source environment
chalice local

Errors while running source environment can be ignored if you are just experimenting with the Query Service in local mode. After starting chalice local, you can open the Query Service in your browser at http://127.0.0.1:8000 to experiment with it.

App Configuration

The environment file

Global app configuration variables are stored in this file in the root of this repository. The file is a Bash script, intended to be sourced in the shell by running source environment. Some of these environment variables are made available to the deployed AWS Lambda functions and to the Terraform scripts via the EXPORT_* lists at the bottom of the file.

dcpquery/_config.py

Python runtime app configuration is centralized in this file. Some of the config values are imported from the environment. In Python, the instance of the dcpquery._config.DCPQueryConfig class is available as dcpquery.config.

Configuring logging

Python logging configuration is centralized in dcpquery.config.configure_logging(). Call this method from any entry point that loads dcpquery.

Logging verbosity is controlled through the DCPQUERY_DEBUG environment variable, set in the environment file:

  • Set DCPQUERY_DEBUG=0 to disable debugging and set the app log level to ERROR.
  • Set DCPQUERY_DEBUG=1 to change the log level to INFO and cause stack traces to appear in error responses.
  • Set DCPQUERY_DEBUG=2 to change the log level to DEBUG and cause stack traces to appear in error responses.
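
For example, an entry point might wire this up as follows (a minimal sketch; the log level is driven by DCPQUERY_DEBUG as described above):

# Minimal entry-point sketch: configure logging before doing any other work.
import logging

from dcpquery import config

config.configure_logging()  # honors DCPQUERY_DEBUG from the sourced environment
logging.getLogger(__name__).info("Query Service logging configured")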

AWS Secrets Manager

Several configuration items, including database credentials, are saved by Terraform in the AWS Secrets Manager, and accessed by the Python app from there.

Updating the requirements

Runtime Python dependencies for the Query Service app are listed in requirements.txt.in. Test/development Python dependencies are listed in requirements-dev.txt.in. These files are compiled into pip freeze output files, which are stored in requirements.txt and requirements-dev.txt, respectively.

To update the requirements files for the application, edit the *.in files and run make refresh-all-requirements, then commit all the requirements* file changes to version control.

Loading Test Data

To load a test dataset into the Query Service for experimenting, run make load-test-data. This script fetches and loads a small (<10 MB) set of HCA production metadata.

Deployment

The Query Service requires AWS to deploy. It uses several AWS services, including Lambda, S3, API Gateway, SQS, ACM, and Route53. To set up the deployment, first ensure that your AWS credentials are configured by installing the AWS CLI and running aws configure or setting the AWS environment variables.

The Query Service uses Terraform to manage the deployment process. Before deploying, edit the environment file to set essential variables to control the deployment:

  • STAGE - Application deployment stage (such as dev, staging or production)
  • APP_NAME - The name that the query service will use to refer to itself
  • API_DOMAIN_NAME - The domain name that the query service will register for itself using Route 53 and API Gateway. Before continuing, ensure that a certificate for this domain name is available (in VERIFIED state) in the AWS Certificate Manager.
  • API_DNS_ZONE - The name of a Route 53 zone that must be present in your AWS account, with a registered domain pointing to it. You can set up both at https://console.aws.amazon.com/route53/.

Run source environment to set the results of the above edits in your shell. (You can also source environment-specific convenience files such as environment.staging that set the STAGE variable and then source environment for you.)

In the same shell, run make install-secrets to initialize and upload credentials that Query Service components need to communicate with each other.

Finally, to deploy the Query Service, run make deploy in the same shell.

Minor app updates

After deploying, you can update just the Lambda function codebase by running make update-lambda (this is faster, but no dependencies, routes, or IAM policies will be updated).

Database management

Run python -m dcpquery.db --help for a list of commands available to manage the database. By default, this script connects to the PostgreSQL database running on localhost. To connect to the remote database listed in the service configuration, add --db remote to the command.

  • Running python -m dcpquery.db connect will connect to the database using psql.

  • Running python -m dcpquery.db load (or make load) will load all data from DSS. This requires substantial time and resources. To load a test dataset for experimenting, see "Loading Test Data".

Connecting directly to the database

To connect directly to the remote database listed in the service configuration, run python -m dcpquery.db connect --db remote.

Testing

Run make test to run unit tests. Run make integration-test to run integration tests, which require the Query Service to be deployed.

Monitoring your app

To get logs for the last 5 minutes from the app, type make get_logs in this directory.

Lambda is automatically set up to emit logs to CloudWatch Logs, which you can browse in the AWS console by selecting the log group for your Lambda. You can see built-in CloudWatch metrics (invocations, errors, etc.) by selecting your Lambda in the Lambda AWS console and going to the Monitoring tab. To tail and filter logs on the command line, you can use the logs command in the Aegea package, for example: aegea logs /aws/lambda/dcpquery-api-dev --start-time=-15m (this is what make get_logs runs).

Troubleshooting

For more help troubleshooting failures in the system, run scripts/trace.py and follow the instructions.

Metric dashboards for core DCP deployments

You can view metric dashboards for each deployment stage at the links below

Bugs & Feature Requests

Please report bugs, issues, feature requests, etc. on GitHub. Contributions are welcome; please read CONTRIBUTING.md.

Migrations

When to Create a Migration

  • Anytime you make changes to the database schema (adding a table, changing a field name, creating or updating an enum, etc.)

Creating Migration Files

  • Autogenerate a migration file based on changes made to the ORM. On the command line run make create-migration

    • This will create a migration in dcpquery/alembic/versions. Review the generated migration to ensure it represents the changes you wish to make to the database. Potential issues with migration autogeneration are listed below under "Autogenerate can't detect".
    • If you see the following error, you need to apply the migrations you've already created to the database (or delete them) before you can create a new one:
    ERROR [alembic.util.messaging] Target database is not up to date.
     FAILED: Target database is not up to date.

    • Note that this will create a migration even if you have not made any changes to the database (in that case it will just be an empty migration file, which you should delete)
  • To create a blank migration file run alembic revision -m "description of changes"

    • The description of changes will be appended to the migration file's name so you'll want to keep it short (less than 40 chars); spaces will be replaced with underscores
    • You can then edit the newly created migration file (in dcpquery/alembic/versions)
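
In either case, the file created in dcpquery/alembic/versions has roughly the following shape (the revision IDs, table, and column shown here are hypothetical):

# Hypothetical file: dcpquery/alembic/versions/a1b2c3d4e5f6_add_size_column_to_files.py
"""add size column to files

Revision ID: a1b2c3d4e5f6
Revises: 000000000000
Create Date: ...
"""
import sqlalchemy as sa
from alembic import op

revision = "a1b2c3d4e5f6"
down_revision = "000000000000"
branch_labels = None
depends_on = None

def upgrade():
    op.add_column("files", sa.Column("size", sa.BigInteger(), nullable=True))

def downgrade():
    op.drop_column("files", "size")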

Applying new migrations to the database

  • Ensure you are connected to the correct database (run python -m dcpquery.db connect to see the database url, use the --db remote flag if necessary)
  • From the command line run python -m dcpquery.db migrate; use the --db remote flag if necessary
  • To unapply a migration run (locally) alembic downgrade migration_id (the migration_id is the string in front of the underscore in the migration name, for file 000000000000_init_db.py the migration id is 000000000000)
  • To unapply a migration in a remote database, it is simplest to hardcode the db_url in alembic/env.py and then run alembic downgrade migration_id

Autogenerate can't detect

  • Changes of table name. These will come out as an add/drop of two different tables, and should be hand-edited into a name change instead.
  • Changes of column name. Like table name changes, these are detected as a column add/drop pair, which is not at all the same as a name change.
  • Anonymously named constraints. Give your constraints a name, e.g. UniqueConstraint('col1', 'col2', name="my_name").
  • Special SQLAlchemy types such as Enum when generated on a backend which doesn't support ENUM directly. This is because the representation of such a type in the non-supporting database (i.e. a CHAR + CHECK constraint) could be any kind of CHAR + CHECK; for SQLAlchemy to determine that this is actually an ENUM would only be a guess, which is generally a bad idea.
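
For example, when autogenerate emits an add/drop pair for a renamed table or column, the migration can be hand-edited into explicit rename operations instead (the table and column names here are hypothetical):

from alembic import op

def upgrade():
    # Replace the autogenerated drop/create table pair with an explicit rename.
    op.rename_table("specimen", "specimens")
    # Replace the autogenerated add/drop column pair with a column rename.
    op.alter_column("files", "file_name", new_column_name="name")

def downgrade():
    op.alter_column("files", "name", new_column_name="file_name")
    op.rename_table("specimens", "specimen")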

query-service's Issues

Monitoring

Need / User Story

  • As a Query Service maintainer
  • I want to easily determine the current status of the service
  • so I can easily determine if anything is broken

Definition of Done

  • use grafana template in the dcp-monitoring repository
  • link to dashboard in readme/documentation

Related docs

Create elasticsearch vs. postgres UX worksheet

Need

For product development of the query service, we have real-world users of data repositories try to complete queries in each query language and report on their experience with each.

The task in this ticket is to create a worksheet for participants in the UX study with the queries and some questions for us to learn about their experience.

Update system blueprint

Need

  • As a Developer
  • I want a blueprint of how data flows through the system
  • so that I can visualize what currently exists/what is planned

Approach

  • use lucid chart to show architecture plans
  • update lucid chart to show reality

Stand up a storage instance

Need

  • As a developer of the query service
  • I want an infrastructure-as-code deployment of the query service database
  • so that I can carry out UX studies and develop the ETL job.

Approach

  • Terraform configuration for chosen database
  • Deploy that database

Add fields to the files table

Need

Some of the user stories require file fields that we are not currently storing.

Approach

Add the following to the files table.

  • File size in bytes
  • File extension

Health Check

Need

  • As a logging service
  • I want an endpoint I can check every minute
  • to ensure the service is still up

Approach

  • create a health check endpoint
  • create a Route 53 health check and cloudwatch alarm in dcp-monitoring (ask @mweiden)

Code Coverage

Need

  • As a developer
  • I want a minimum level of code coverage
  • so I can be confident changes won't break functionality

Approach

  • add code coverage threshold for merging to master

Propagate SQL errors to HTTP API users

User story

  • As a user of the HTTP API
  • I want the API to return SQL errors
  • so that I can debug my queries.

Approach

Serialize SQL errors and return them in the response JSON.

Create Runbook

Need

  • As a developer
  • I want to know how to rollout/deploy the service
  • particularly when rolling out a backwards incompatible version

Approach

  • data migration
  • switchover
  • service rollback plan
  • dev -> integration -> staging -> prod release tooling and pipelines

Set up a HTTP API framework implementation

Need

  • As a Query Service developer
  • I need an API framework in place in the Query Service
  • so that I can develop API methods efficiently and with a consistent interface.

Approach

Use the now well-understood tech stack used by the Data Storage Service and Expression Matrix Service.

  • AWS Route 53 for DNS configuration
  • AWS Chalice for API Gateway HTTP and Lambda configuration
  • Swagger for docs and API specification

Users can submit asynchronous, long running queries via the API

Need

  • As a user of the Query Service,
  • I want to submit queries to the API asynchronously,
  • so that I can issue queries that take longer than HTTP timeouts.

Approach

  • separate endpoint or query parameter to indicate asynchronous status of request
  • job queueing system, dead-letter queue for failures
  • job state tracking system (maybe dynamodb)
  • asynchronous process to query RDS, update state, and store results

lifecycle rule prefix fix

@mweiden

two possible fixes

  • create lifecycle rule for each env
  • refactor the query-service S3 bucket key structure so the topic (query_results, terraform) comes first and it is then split by environment

Design the database schema

Need

  • As a user of the Query Service
  • I want a database table schema that is easy and intuitive to use
  • so that I can query for metadata efficiently.

Approach

  • Decide on how raw bundle metadata will be divided up into tables in the database

Create Facets table

Create MATERIALIZED VIEW facets_table
  AS
    Select b.bundle_uuid        as bundle_uuid,
           b.bundle_version     as bundle_version,
           'organ'              as field,
           specimen->'organ'->>'text' as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'specimen_from_organisms') as specimen

    UNION
    Select b.bundle_uuid              as bundle_uuid,
           b.bundle_version           as bundle_version,
           'organ_part'               as field,
           specimen-> 'organ_part'->>'text' as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'specimen_from_organisms') as specimen

    -- METHODS
    UNION

    Select b.bundle_uuid                      as bundle_uuid,
           b.bundle_version                   as bundle_version,
           'method-library-construction'      as field,
           sp->'sequencing_approach'->>'text' as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'sequencing_protocols') as sp

    UNION

    Select b.bundle_uuid                                as bundle_uuid,
           b.bundle_version                             as bundle_version,
           'method-instrument-model'                    as field,
           sp->'instrument_manufacturer_model'->>'text' as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'sequencing_protocols') as sp

    -- DONOR INFO
    UNION

    Select b.bundle_uuid                              as bundle_uuid,
           b.bundle_version                           as bundle_version,
           'sex'                                      as field,
           donor->>'sex'                              as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'donor_organisms') as donor


    UNION

    Select b.bundle_uuid       as bundle_uuid,
           b.bundle_version    as bundle_version,
           'age'               as field,
           donor->>'organism_age' as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'donor_organisms') as donor

    UNION

    Select b.bundle_uuid                   as bundle_uuid,
           b.bundle_version                as bundle_version,
           'Genus Species'                 as field,
           donor->'genus_species'->0->>'text' as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'donor_organisms') as donor

    UNION

    Select b.bundle_uuid                    as bundle_uuid,
           b.bundle_version                 as bundle_version,
           'age unit'                       as field,
           donor->'organism_age_unit'->>'text' as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'donor_organisms') as donor

    UNION

    Select b.bundle_uuid                    as bundle_uuid,
           b.bundle_version                 as bundle_version,
           'disease'                        as field,
           d->>'text'                       as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'donor_organisms') as donor,
         jsonb_array_elements(donor->'diseases') as d


  -- ADDITIONAL SAMPLE INFO
    UNION

    Select b.bundle_uuid                              as bundle_uuid,
           b.bundle_version                           as bundle_version,
           'project'                                  as field,
           project->'project_core'->>'project_title'  as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'projects') as project


    UNION

    Select b.bundle_uuid                    as bundle_uuid,
           b.bundle_version                 as bundle_version,
           'Laboratory'                     as field,
           c->>'institution'                as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'projects') as project,
         jsonb_array_elements(project->'contributors') as c

    UNION

    Select b.bundle_uuid                                                          as bundle_uuid,
           b.bundle_version                                                       as bundle_version,
           'file format'                                                          as field,
           substring(file->>'name', '^(?:(?:.){1,}?)((?:[.][a-z]{1,5}){1,2})$')   as field_value
    from bundles as b,
         jsonb_array_elements(b.json->'manifest'->'files') as file;

create index on facets_table (bundle_uuid, bundle_version);
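
As a sanity check, the view can then be queried for facet counts. A hedged sketch using psycopg2 against a local database follows; the database name and connection parameters are assumptions.

# Count bundles per organ using the facets_table materialized view.
import psycopg2

conn = psycopg2.connect(dbname="dcpquery")  # connection parameters are assumptions
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT field_value, count(DISTINCT (bundle_uuid, bundle_version)) AS n_bundles
        FROM facets_table
        WHERE field = 'organ'
        GROUP BY field_value
        ORDER BY n_bundles DESC;
    """)
    for organ, n_bundles in cur.fetchall():
        print(organ, n_bundles)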

Identify first UX study users

Need

For our first UX study on whether users prefer the UX of postgres vs elasticsearch, we need a pool of test users.

Approach

We can start small:

  • Get a list of volunteers, perhaps 4-7 people; probably focus on RFA2 grantees
  • Get a sense for what kinds of questions they would want to ask of the data

Implement the ETL module

Need

  • As a developer of the Query Service
  • I want an ETL module for taking raw bundle metadata, transforming it, and loading it into the database
  • so that we can load the database.

Approach

  • Extraction Library
  • Extraction Library Tests
  • Transform Library
  • Transform Library Tests
  • Load Library
  • Load Library Tests

Production readiness

Need

  • As a developer of the query service
  • I want baseline production readiness measures in place
  • so that I can safely test, deploy, and monitor the query service.

Initial feedback on SQL/jsonb UX

Need

Now that real data is loaded into the RDS, we should hold an early feedback round (before API completion) on the language and table schema design with a small set of bioinformaticians and app developers. This could inform future schema design and technology choices.

Approach

  • Gather 3-5 bioinformaticians and app developers.
  • Send them required reading on the Postgres jsonb syntax and the Query Service table structure.
  • Ask them to complete a small set of defined queries. Examples:
    1. Query for the UUIDS of all the bundles in the Tabula Muris dataset
    2. Across all projects, count the number of male and female donors
    3. ...
  • Ask them to issue three or four queries from their research projects against the dataset
  • Have respondents complete a short diary entry on their experiences with the system
    1. What parts of the system were useful?
    2. Could you answer important research questions that you couldn't otherwise?
    3. What could be improved?
    4. What were the pros and cons of the schema design from your perspective?
    5. ...

Configure the database with database schema

Need

  • As an operator of the query service
  • I want the table schemas for the database to be populated as part of the deployment
  • so that the system is more plug-and-play.

Definition of done

  • There is a make target to set the database schema

Getting metadata files from DSS needs to be optimized

Need

Currently, the checkout process is started for every GET /v1/files request against the DSS. This makes sense for individual users selecting specific subsets of the total data set, but is inefficient for data consumer applications that must scan the whole data set, such as the query service.

For these applications, we must figure out a strategy for serving metadata files from DSS in a scalable and cost effective manner.

User story 1

  • As a Query Service developer or third party app developer
  • I want bundle metadata files to be served quickly (either in events or from the API)
  • so that I can build search indices outside of DSS.

User story 2

  • As an operator of the DSS
  • I don't want all metadata files to be checked out
  • as it's not cost effective.

UX Study #1: query language and core use cases

Need

For the query service to reach MVP, we need better UX understanding in two areas:

1. Query language usability

As a developer, I need to know whether users find SQL or Elasticsearch easier to use for finding data of interest.

2. Core search use cases

As a developer, I need to know the core search use cases for:

  • Primary data/metadata
  • Secondary data/metadata
  • Matrix data/metadata

These core search use cases will depend on the type of user using the system. Key user types that we intend to serve:

  • Researcher with a keyboard
  • DCP Developer or advanced analysis portal developer
  • Wrangler

Approach

Work with @GenevieveHaliburton to evaluate the UX of the query language and API.

Update: research plan here: https://docs.google.com/document/d/1GkHRB5Tf1QpD-MEbP1gni2fetIndeV50NA73aIt-U-s/edit

Functional Tests

Need

  • As a developer on the Query Service
  • I want to be confident my changes won't break current functionality

Approach

  • ???????

Can we use postgres for faceted search?

Need

To understand what kinds of user-facing features we can build with Postgres, we should test whether faceted searches can be made performant. Given their use in the data browser, these kinds of searches are likely to show up in other data consumer applications.

Faceted search

Given a field path in bundle JSON documents, index a mapping of field path to all field values, and from the tuple (field path, field value) to all bundle FQIDs.
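
A toy in-memory illustration of the two mappings described above (for exposition only; a production implementation would live in Postgres, e.g. as a materialized view like the facets_table sketched earlier):

# Toy facet index: field path -> values, and (field path, value) -> bundle FQIDs.
from collections import defaultdict

field_values = defaultdict(set)   # field path -> set of observed values
facet_index = defaultdict(set)    # (field path, field value) -> set of bundle FQIDs

def index_bundle(fqid, doc, path=""):
    # Walk a bundle JSON document, recording every (field path, leaf value) pair.
    if isinstance(doc, dict):
        for key, value in doc.items():
            index_bundle(fqid, value, f"{path}.{key}" if path else key)
    elif isinstance(doc, list):
        for item in doc:
            index_bundle(fqid, item, path)
    else:
        field_values[path].add(doc)
        facet_index[(path, doc)].add(fqid)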

Build a CI/CD pipeline for the query service

Need

  • As a developer of the Query Service
  • I want the Query Service to be tested and deployed in a CI/CD pipeline
  • so I can automate testing and deployment toil.

Approach

  • Infrastructure check-plan in a GitLab CI/CD pipeline
  • Run unit tests in a GitLab CI/CD pipeline
  • Run integration tests in a GitLab CI/CD pipeline
  • Deployment from a GitLab CI/CD pipeline

Users can submit short-lived queries via the API

Need

  • As a user of the Query Service API
  • I want an endpoint for submitting queries synchronously
  • so that I can leverage metadata queries in building interactive apps.

Approach

  • Set up an endpoint (perhaps GET /?q=...) that accepts a query and can return query results synchronously.
  • For large query results, a pagination strategy may be required.

Alerts

Need / User Story

  • as a maintainer of the Query Service
  • I want to be alerted if it is broken
  • so I am aware of any issues

Definition of Done

  • create query-service-alerts slack channel
  • black-box monitoring
  • white-box monitoring (number of messages in the DLQ, queue length, any alerts on important RDS metrics)

Related docs

Documentation

Need

  • As a user of the Query Service

  • I want to know how to use it

  • As a developer on the Query Service

  • I want to know how to develop on it

Approach

  • examples for users
  • instructions for deployment
  • instructions for developing/getting started

Operational Logs

Need

  • As a developer
  • I want to be able to see/query service logs
  • for help with debugging/monitoring the system

Approach

  • integrate with logs service
  • log aggressively in beta

Load ~10e6 bundles with recent metadata versions

Need

  • As a UX researcher for the Query Service
  • I want some real-life bundle metadata in the system
  • so that I can show non-trivial use cases to users.

Approach

  • Query some recent metadata (Tabula Muris, SpaceTx, the preview datasets) from DSS
  • Load them with the ETL module into the database
  • Make the credentials available to @GenevieveHaliburton and test users
  • Reach significant scale to understand performance characteristics

Code Style

Need

  • As a developer
  • I want a linter
  • to be lazy about white space but still have pretty code

Approach

  • configure linter to run on every push

Capacity Planning

Need

  • As a developer
  • I want to predict the system's needs
  • and build accordingly

Approach

  • determine beta use cases
  • determine system needs to support those use cases
  • determine what the estimated load means for cost/computation against the DSS

Scale Test

Need

Ensure the Query Service is able to accommodate load consistent with other DCP components' scalability limits.

Done when

Read operations

A scale test is committed to the Query Service codebase that runs periodically and ramps up concurrent query requests until a performance limit is found. The performance limit is documented.

Write operations

An environment-wide ETL (extract/transform/load) or a synthetic ETL runs periodically, and its total runtime is tracked over time. The performance limitations of the ETL are recorded to reflect the system's write operation throughput limits.

Get into Postgres Aurora Serverless Beta

Andrey has been communicating with our AWS reps to get into the serverless postgres beta program.

Goal: get into the beta program so that we can simplify our integration with postgres (remove pgbouncer).

Investigate adding a `links` table

Need

  • As a user of the query service
  • I want to easily understand the relationship between bundles
  • so that I can understand what analysis is in the system

Publish a v0 specification of the Query Service

Need

Now that we've completed some early experiments on how the Query Service should be designed, we should publish a v0 spec of what the design should be.

This specification can, of course, change based on further UX and performance evaluations.

Architecture

Need

  • Rough capacity planning
  • ETL job Architecture
  • Tentative plan on technology choice
  • Search API design
  • Indexer interface design (tentatively)
  • Event Bus (with DSS)
  • Rough migration strategy
  • ... others?
