
👷 Covid Tracking API

THIS CODE IS BEING MOVED TO THE WEBSITE REPOSITORY

PLEASE VISIT https://github.com/COVID19Tracking/website/tree/master/build


A Cloudflare Worker that makes a request to fetch the sheet data, cleans it up a bit, and returns an array of values.

Initial Basic API

The default response format is JSON. If you'd like CSV, append .csv to the end of the URL. For example: https://covidtracking.com/api/states.csv

If you want to filter /api/us/daily, you can add a query param like ?state=NY to show only cases in New York, or ?state=NY&date=20200316 to show the result for a specific date.
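Conceptually, each query parameter is matched against the corresponding field of every row. A minimal sketch of that behavior in plain JavaScript (the actual worker uses lodash's _.filter; the row data and helper names here are illustrative only):

```javascript
// Hypothetical rows resembling /api/us/daily entries.
const rows = [
  { state: 'NY', date: 20200316, positive: 950 },
  { state: 'NY', date: 20200317, positive: 1374 },
  { state: 'MN', date: 20200316, positive: 54 },
];

// Turn "?state=NY&date=20200316" into { state: 'NY', date: '20200316' }.
function parseQuery(search) {
  return Object.fromEntries(new URLSearchParams(search));
}

// Keep only rows whose fields match every query parameter (string-compared).
function filterRows(rows, search) {
  const query = parseQuery(search);
  return rows.filter((row) =>
    Object.entries(query).every(([key, value]) => String(row[key]) === value)
  );
}

filterRows(rows, '?state=NY&date=20200316');
// → [{ state: 'NY', date: 20200316, positive: 950 }]
```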

GraphQL API

Technical How

Currently every request passes through Netlify to Cloudflare, where a worker makes an API request to fetch the resource, cleans it up, and decides on a format (CSV/JSON) before returning the result. For caching we use a Cloudflare KV store.
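The format decision can be sketched as follows. This is a simplified illustration, not the worker's actual code; the helper names are made up, and the CSV serialization is a minimal hand-rolled version:

```javascript
// Serialize an array of objects to CSV, quoting fields that contain
// commas, quotes, or newlines.
function toCsv(rows) {
  const headers = Object.keys(rows[0]);
  const escape = (v) => {
    const s = v == null ? '' : String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const lines = rows.map((row) => headers.map((h) => escape(row[h])).join(','));
  return [headers.join(','), ...lines].join('\n');
}

// JSON by default; CSV when the request path ends in ".csv".
function formatResponse(pathname, rows) {
  if (pathname.endsWith('.csv')) {
    return { body: toCsv(rows), contentType: 'text/csv' };
  }
  return { body: JSON.stringify(rows), contentType: 'application/json' };
}
```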

Staging

Errors

There is no error console, unfortunately.

Easy Deploy w/ Wrangler


  • Add a wrangler.toml file.
  • Get a Google API key: https://console.developers.google.com/
  • Add the Google API key to the Cloudflare environment with the command wrangler secret put GOOGLE_API_KEY
  • Publish to Cloudflare with the command wrangler publish
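For reference, a minimal wrangler.toml might look like the following. All IDs and names below are placeholders, not this project's actual values (the repository's real config defines staging and production environments with a COVID KV binding):

```toml
name = "covid-tracking-api"
type = "webpack"
account_id = "YOUR_ACCOUNT_ID"
workers_dev = true

[env.staging]
kv-namespaces = [
  { binding = "COVID", id = "YOUR_STAGING_KV_NAMESPACE_ID" }
]

[env.production]
kv-namespaces = [
  { binding = "COVID", id = "YOUR_PRODUCTION_KV_NAMESPACE_ID" }
]
```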

Serverless

To deploy using serverless add a serverless.yml file.
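A hypothetical serverless.yml for a Cloudflare Worker, assuming the serverless-cloudflare-workers plugin; every value here is a placeholder, not the project's actual configuration:

```yaml
service:
  name: covid-tracking-api
provider:
  name: cloudflare
  config:
    accountId: YOUR_ACCOUNT_ID
    zoneId: YOUR_ZONE_ID
plugins:
  - serverless-cloudflare-workers
functions:
  worker:
    name: covid-worker
    script: worker/script
```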

Testing locally

Install cloudflare-worker-local and nodemon, then build and run the worker locally:

```shell
yarn global add cloudflare-worker-local
yarn global add nodemon
wrangler build && cloudflare-worker-local worker/script.js covid.cape.io 3000 wrangler.toml staging
```

To rebuild and restart on changes:

```shell
nodemon --watch worker/script.js --signal SIGHUP --exec 'cloudflare-worker-local worker/script.js covid.cape.io 3000'
```

KV Cache Keys

```shell
wrangler kv:key list --binding=COVID --env=staging
wrangler kv:key list --binding=COVID --env=production --prefix="/states.json"
wrangler kv:key delete --binding=COVID --env=staging "/screenshots"
wrangler kv:key delete --binding=COVID --env=production "/press"
```

Secrets

```shell
wrangler secret put GOOGLE_API_KEY --env=staging
```

History

Google often requires an API key or uses some strange formatting. I just wanted an array of values that reflected the sheet rows, with no long, complicated URL.

At first the worker was just a simple proxy, making an API request for every worker request. Then I added cf: { cacheEverything: true, cacheTtl: 120 } to the fetch() options so Cloudflare could cache the fetch result. Some endpoints requested XML from AWS, and parsing a really big XML file takes some time, so we started storing the parsed result in Cloudflare KV storage. A CSV download was requested, so that was added too. Then people wanted to do some basic queries, so I turned the search query args into an object and passed it to _.filter. That allows /states/daily?state=MN to return the values for a specific state.

It's tricky to stay within the 50ms processing time limit when parsing XML or building large CSV files, so Cloudflare had to increase our limits. We put a TTL (like an hour) on every file saved to cache. On a new request we return the previously generated result from the cache, then look up the age of the item; if it's more than 5 minutes old we make a new request and save the result to the cache for next time. This way the user gets a fast response before we update an entry. If no one requests an item for an hour, the cached copy expires and the next request has to wait for full processing, but that doesn't happen for the popular endpoint/query options.

No volunteers were excited to help with the API because local development is too difficult. There are no official tools for local development. There is https://github.com/gja/cloudflare-worker-local, but it's incomplete and has bugs. There is https://github.com/dollarshaveclub/cloudworker, but it has a big "no longer actively maintained" message at the top of its page. Ultimately this pushed the project toward building hundreds of static files (JSON/CSV) for many of the most popular queries during the site build/compile process, so it's easier for others to help.

I had a GraphQL endpoint for a bit, but I think it's easier to send the data into a Postgres database and put Hasura in front of it to provide GraphQL functionality automatically.
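The stale-while-revalidate caching described above can be sketched as a small function. This is an illustration under assumptions, not the worker's actual code: `store` stands in for the Cloudflare KV binding, names and thresholds mirror the prose, and in a real worker the background refresh would be wrapped in event.waitUntil:

```javascript
const CACHE_TTL_SECONDS = 3600;   // KV entry expires after an hour
const REFRESH_AGE_SECONDS = 300;  // refresh entries older than five minutes

async function cachedFetch(store, key, fetchFresh, now = Date.now()) {
  const cached = await store.get(key);
  if (cached) {
    const { savedAt, body } = JSON.parse(cached);
    // Serve the stale copy immediately; kick off a background refresh
    // if the entry is more than five minutes old.
    if ((now - savedAt) / 1000 > REFRESH_AGE_SECONDS) {
      fetchFresh().then((fresh) =>
        store.put(key, JSON.stringify({ savedAt: now, body: fresh }),
                  { expirationTtl: CACHE_TTL_SECONDS }));
    }
    return body;
  }
  // Cache miss: the caller has to wait for a full fetch and process.
  const fresh = await fetchFresh();
  await store.put(key, JSON.stringify({ savedAt: now, body: fresh }),
                  { expirationTtl: CACHE_TTL_SECONDS });
  return fresh;
}
```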

Contributors

webmasterkai, zachlipton, samskeller, burritojustice
