
heissepreise's Introduction

Heisse Preise

A terrible grocery price search "app". Fetches data from big Austrian grocery chains daily and lets you search them. See https://heisse-preise.io.

The project consists of a trivial Node.js Express server responsible for fetching the product data, massaging it, and serving it to the front end (see server.js). The front end is a least-effort vanilla HTML/JS app (see the sources in site/).

Requirements

  • Node.js

Running

Development

Install Node.js, then run the following in a shell of your choice.

git clone https://github.com/badlogic/heissepreise
cd heissepreise
mkdir -p data
npm install
npm run dev

The first time you run this, the data needs to be fetched from the stores. You should see log output like this:

Fetching data for date: 2023-05-23
Fetched LIDL data, took 0.77065160000324 seconds
Fetched MPREIS data, took 13.822936070203781 seconds
Fetched SPAR data, took 17.865891209602356 seconds
Fetched BILLA data, took 52.95784649944306 seconds
Fetched HOFER data, took 64.83968291568756 seconds
Fetched DM data, took 438.77065160000324 seconds
Merged price history
App listening on port 3000

The app listens on port 3000 by default. Once it is up, open http://localhost:3000 in your browser.

Subsequent starts will fetch the data asynchronously, so you can start working immediately.

Production

Clone the repository, install the production dependencies, and start the server:

git clone https://github.com/badlogic/heissepreise
cd heissepreise
node --dns-result-order=ipv4first /usr/bin/npm install --omit=dev
npm run start

The app listens on port 3000 by default. Once it is up, open http://localhost:3000 in your browser.

Using data from heisse-preise.io

You can also get the raw data. The raw data is returned as a JSON array of items. An item has the following fields:

  • store: the store id (billa, spar, hofer, dm, lidl, mpreis, ...)
  • name: the product name.
  • price: the current price in €.
  • priceHistory: an array of { date: "yyyy-mm-dd", price: number } objects, sorted in descending order of date.
  • unit: the unit the product is sold in. May be undefined.
  • quantity: the quantity the product is sold at for the given price.
  • bio: whether the product is classified as organic ("Bio").
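For illustration, here is a single made-up item with these fields, plus a quick sketch of computing the latest price change from priceHistory (the values are invented, not real data from heisse-preise.io):

```javascript
// A hypothetical item, shaped like an entry of the JSON array.
const item = {
  store: "billa",
  name: "Teebutter 250g",
  price: 3.49,
  priceHistory: [
    { date: "2023-05-23", price: 3.49 },
    { date: "2023-05-01", price: 3.29 },
  ],
  unit: "g",
  quantity: 250,
  bio: false,
};

// priceHistory is sorted by date descending, so the first entry is the
// current price and the second entry is the previous one.
const [current, previous] = item.priceHistory;
const changePercent = ((current.price - previous.price) / previous.price) * 100;
console.log(changePercent.toFixed(1)); // relative change since the last recorded price
```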

If you run the project locally, you can use the data from the live site including the historical data as follows:

cd heissepreise
rm data/latest-canonical.*
curl -o data/latest-canonical.json https://heisse-preise.io/data/latest-canonical.json

Restart the server with either npm run dev or npm run start.

Historical Data Credits

The live site at heisse-preise.io features historical data from:

heissepreise's People

Contributors

badlogic, dakralex, eltociear, flobauer, h43z, hannesoberreiter, iantsch, mhochsteger, pretzelhands, schwindp, simmac, slhck, tiefenb, unki2aut, xsuchy

heissepreise's Issues

Add support for MPREIS

Research

Website: mpreis.at

Found the following APIs while browsing around

trbo.com

example

used to get related products on the site, no token required

emporix.io

developers guide

used to fetch search results (40 per page). requires token

Please add additional findings here.

Foodora Shop

link

Unit calc

Well, 6 x 0,5 = 3. But 6 x 0,5 lt should be 3 lt, not ml. Maybe this can be fixed somehow.
image

No raw data for Unimarkt

For other shops we return (and keep) all the raw data in fetchData and extract the relevant information in getCanonical.
The Unimarkt code already extracts in fetchData (thus, the raw data is thrown away).

I guess this is not on purpose?

Interest in Hofer support?

What are the reasons for Hofer missing in the shops (legal, technical)?

I started to work on it and managed to fetch about 800 articles (need to get pagination working, also you need an access token, so it's more involved).

Is there any interest in adding Hofer support?

Generate site sources with lightweight templating

Having to modify the headers in every .html file is annoying. Bundling all the .js would also be nice. I'm writing a tiny site generator with the most basic templating possible. This will require restructuring the site/ folder hierarchy.

  • site/data: the new location of momentum-cart.json. We may want to add default carts or other data in the future.
  • site/js: vendored third party JS dependencies, e.g. AlaSQL and Chart.js
  • site/: .html and .js files for each page of the site. Also contains files beginning with an _, like _header.html.

The generator is given the site/ directory as an input directory, and an arbitrary output directory. It will then:

  1. For each file in the input directory recursively
    1. If the file starts with _, ignore it
    2. Load the content of the file and replace any occurrence of %%$filename%% with the contents of $filename. $filename is resolved relative to the currently processed file
    3. Write the content to the output directory

The generator is called both by server.js (previously index.js) and pages.js. The server will also watch the input directory for changes and regenerate the output accordingly. The latest-canonical.${store}.compressedjson files will be directly written to the output directory.
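A minimal sketch of the replacement step described above (the function name and the in-memory file table are hypothetical, standing in for fs.readFileSync):

```javascript
// Replace every %%$filename%% marker with the contents of that file,
// resolved via a caller-supplied reader (relative to the current file).
function expandIncludes(content, readInclude) {
  return content.replace(/%%([^%]+)%%/g, (_, filename) => readInclude(filename.trim()));
}

// Example with an in-memory "file system":
const files = { "_header.html": "<header>Heisse Preise</header>" };
const page = "%%_header.html%%\n<main>...</main>";
console.log(expandIncludes(page, (name) => files[name]));
```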

No, I do not want to use a pre-existing "solution". There's nothing as lightweight out there.

RFC @iantsch @mhochsteger @simmac @pretzelhands @slhck before I fuck shit up.

i18n

Since we are adding different stores from different countries, should we add some localization library to allow users to view in their preferred language?

I'm usually working with i18n or i18next

network.external.name is deprecated

When I clone the project and try to run it in production I get the following warnings/errors:
CleanShot 2023-05-16 at 21 01 23

Can you help me or tell me which settings I have to change to get it running?

Visualisation Ideas

I am currently experimenting with the visualisation of product price changes:
image

What I have found so far is that many prices oscillate within a specific range. The boxes you see are the general range of each product; the points are the price changes from the price history. All are displayed on the same price axis.

I would like to find out which products are more speculative than others. Any ideas or feedback is welcome.

Request: Warenkorb pricewatch

I do not have the means to implement something like this, but I would like to suggest a feature:

The shopping baskets themselves are interesting as an analysis tool. However, to make them useful in everyday life, it would be exciting if the prices of the noted products could be checked and then sorted across two or more previously selected favourite supermarkets. This way, one could save a weekly shopping list as a basket and, shortly before shopping, check which combination of supermarkets would yield the most savings. Often, supermarkets of different chains are located in close proximity to each other, especially in urban areas.

Expose logs on the web

Cause Mario is a lazy admin. Wrap logging and write to file that's located in the served static file dir. Append only, include timestamps.

Hackathon: Data Visualisation & Analysis

I'm organizing a hackathon with around 100 participants at the end of July, and one challenge will specifically target this project's datasets. There will be 1-5 teams working on the challenge, visualising the dataset in a new way and/or finding out something we haven't looked for yet. Would any of the other contributors like to join me at the event? It's on 29.07 - 30.07 in Vienna, and the quality of the teams is pretty high. We have 2 chefs cooking for us the whole time, and the summer vibes are pretty cool.

Better compression for latest-canonical.json

As we add new stores and accrue more historic pricing info, the latest-canonical.json file gets a bit too large for my taste. We are currently at 3 MB gzipped, which is pretty decent, but largely due to the compressor being able to exploit repetitions within the file. The latest addition of the url field blew things up from 2 MB gzipped.

We want to keep things snappy. latest-canonical.json is only updated once a day, so a user will have to re-download it at least once, after which it will be cached by the browser. We want this file to be as small as possible, both after compression and after decompression (JSON parser speeds ain't great either).

Consider other encoding and/or compression schemes, e.g. a binary format, shortening of the url field contents, etc.

Swatch conditions for Phrasen, exclusion,...

Maybe it's implemented and I just don't know about it, but it would be nice to use search patterns like "Milch -joghurt -schokolade" to narrow search results.
Or allow regular expressions?

Damn autocorrection, it should be titled search... not Swatch :(

RFC: automatic categorization

Problem

Searching for "butter" gives you a lot of different types of products, e.g. tea butter, butter cookies, or buttermilk.

Search engines in online shops are usually hand-tuned to return expected results/rankings for common searches, like "eggs" or "milk".

We can't do this in our case: too many products and stores.

The second best solution is to allow the user to filter search results by product category. However, none of the store API end points give us meaningful categories. The stores' categories also differ greatly, which would require mapping to a canonical set of categories.

We need:

  • A predefined set of (hierarchical) categories
  • A way to assign each item a category, either based on the info in its raw form (e.g. name + description) or its canonical form (just the name, maybe the quantity if we get that as correct as possible)

Idea

I've spent 2 hours today trying a few things.

First, I needed to come up with a set of fixed categories. I didn't want to reinvent the wheel, so I took the hierarchical categories from Billa. They have a three-level-deep hierarchy; I only adopted the first two layers, see
https://github.com/badlogic/heissepreise/blob/main/stores/utils.js#L1

My first thought was to feed ChatGPT that set of categories and tell it to assign them to a list of item names. This is too slow, too costly, and the results are bad.

Next, I figured out that you can query all Billa products for a specific category. This gives us perfect category information for all Billa items. I've switched fetchData() in billa.js over to that way of querying. I do not yet exploit or store the category info in there.

Instead, I duplicated the querying code in the new categorize.js file. It's a little prototype I use to evaluate how well this categorization approach works. The basic idea is this:

I fetch all items from Billa for all 2nd-level categories. I then assign the category to each item. Next, I create a normalized n-gram vector from the lower-cased, stop-word-filtered, and stemmed tokens derived from an item's name. I thus end up with a set of vectors, each with a category assigned to it. Let's call them category vectors.

To assign a category to a non-Billa item, I create a vector for it as described above. Next, I compare this vector's similarity with every category vector and pick the n category vectors most similar to it. Similarity is calculated via cosine similarity. Since all vectors are unit vectors, this boils down to a dot product.

The non-Billa item is then assigned either the category of the most similar category vector, or the most common category found among the n most similar category vectors. I haven't decided on a strategy yet.

The underlying code was actually born out of the work for the "sort by name similarity" feature. It's a similar principle, though in this case it's just plain old kNN classification.

This works surprisingly well, especially for brand products. It works less well for products with no brand identifier or very few tokens, like "Spar Gemüsemais". Another peculiarity is that Billa uses the term Erdapfel instead of Kartoffel for the vegetable; Kartoffel only turns up in non-vegetable items. Fun! You can try it out by running node categorize.js items.json. items.json can be either a cart's JSON export or a query result's JSON export from the front end.
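A toy sketch of the approach, stripped down to token count vectors and 1-nearest-neighbor (the real code also stop-word-filters, stems, and uses n-grams; the item names and categories below are made up):

```javascript
// Build a normalized (unit-length) term vector from an item name.
function vectorize(name) {
  const counts = {};
  for (const token of name.toLowerCase().split(/\s+/)) {
    counts[token] = (counts[token] || 0) + 1;
  }
  const norm = Math.sqrt(Object.values(counts).reduce((s, c) => s + c * c, 0));
  for (const t in counts) counts[t] /= norm;
  return counts;
}

// Cosine similarity of two unit vectors is just their dot product.
function dot(a, b) {
  let sum = 0;
  for (const t in a) if (b[t]) sum += a[t] * b[t];
  return sum;
}

// Labeled Billa items act as category vectors (illustrative examples).
const labeled = [
  { name: "Billa Teebutter", category: "Milchprodukte" },
  { name: "Billa Butterkekse", category: "Süßes" },
];
const categoryVectors = labeled.map((i) => ({ ...i, vec: vectorize(i.name) }));

// Assign the category of the single most similar labeled item.
function categorize(name) {
  const vec = vectorize(name);
  let best = null, bestSim = -1;
  for (const cv of categoryVectors) {
    const sim = dot(vec, cv.vec);
    if (sim > bestSim) { bestSim = sim; best = cv.category; }
  }
  return best;
}

console.log(categorize("Spar Teebutter"));
```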

My guesstimate is that we can assign perfect categories to about 70% of items this way. The remaining 30% will have slightly or entirely wrong categories.

Before I invest more time in this, I wanted to RFC you folks. Do we think that this error rate is acceptable? Do you have a better idea to solve the problem?

For a baseline evaluation we can use the Momentum Eigenmarken Warenkorb. It consists of pairs of Spar and Billa items. Since we have perfect categories for the Billa items, we know what category should be assigned to the Spar item. Not great, but at least something.

@mhochsteger @iantsch @simmac @HannesOberreiter

Idea: Display categories (e.g. butter, cheese etc.) with average prices over the last 18 months

First of all, thanks for the great work!

While using your application I was thinking that it would be nice to display charts of certain categories on the main page. The categories could be an aggregate of multiple items in that category (like cheese, take 5 comparable items of cheese), calculate the average and display it as a graph.

Maybe (also as a possible alternative to this) there's also the possibility to add a separate category section where the user can click through different categories?

Unit/quantity based searches

Currently, the simple keyword search checks for token matches in the item.search field, which itself is created like this:

https://github.com/badlogic/heissepreise/blob/main/site/utils.js#L86-L87

As a first step, we can extend item.search by adding aliases for the units, e.g. g -> gramm, etc. We can also convert the quantity to kilos and liters and add the resulting numbers to the item.search field, along with l and liter, and kg and kilo.

This will allow queries like teebutter 250 g or cola 1 liter to give better (albeit not perfect) matches.
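A sketch of that first step, assuming canonical quantity/unit fields on the item; the alias table is illustrative, not exhaustive:

```javascript
// Hypothetical unit alias table: canonical unit -> long-form alias.
const UNIT_ALIASES = { g: "gramm", ml: "milliliter", l: "liter", stk: "stück" };

// Build an extended search string for an item, adding unit aliases and
// kilo/liter equivalents so queries like "teebutter 250 g" match.
function buildSearchString(item) {
  const parts = [item.name.toLowerCase(), `${item.quantity} ${item.unit}`];
  if (UNIT_ALIASES[item.unit]) parts.push(`${item.quantity} ${UNIT_ALIASES[item.unit]}`);
  if (item.unit === "g") parts.push(`${item.quantity / 1000} kg`, `${item.quantity / 1000} kilo`);
  if (item.unit === "ml") parts.push(`${item.quantity / 1000} l`, `${item.quantity / 1000} liter`);
  return parts.join(" ");
}

console.log(buildSearchString({ name: "Teebutter", quantity: 250, unit: "g" }));
```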

We could add additional inputs for min/max quantity searched, e.g. 2 numeric inputs, and a drop down that lets you select the unit. searchItems would then interpret those additional search parameters.

For the SQL-like AlaSql searches, we don't really need to do anything. Can already query like

!name like "%teebutt%" and unit="g" and quantity >= 250

or

!name like "%windel%" and unit="stk" and quantity > 30

Mark unavailable items

When merging historical data with the current data returned by end points, we now get 9894 items that are not in the current data.

We should mark them, e.g. add an "available" flag and set it to false, and decide how to handle them. They can be interesting for historical analysis, but may otherwise just take up space.
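A minimal sketch of such a flag, assuming items carry a unique id field (the helper name is hypothetical):

```javascript
// After merging, mark any item whose id is absent from the latest crawl
// as no longer available, instead of silently keeping or dropping it.
function markAvailability(mergedItems, latestItems) {
  const latestIds = new Set(latestItems.map((i) => i.id));
  for (const item of mergedItems) {
    item.available = latestIds.has(item.id);
  }
  return mergedItems;
}

const merged = [{ id: "billa-1" }, { id: "billa-2" }];
const latest = [{ id: "billa-1" }];
markAvailability(merged, latest);
console.log(merged.map((i) => i.available)); // the stale item is flagged false
```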

latest-canonical has not all items

I host the project myself. Now I wanted to get the historical data as described in the README, but latest-canonical.json does not contain everything. For example, "S-Budget Blue" is not in there.

Need to adjust DM query

Getting

hp_site | DM Query matches 1002 items, but API only returns first 1000. Adjust queries. Query: allCategories.id=020000&price.value.from=18

@simmac halp plz

Endpoints not returning full assortment

Today I found that (at least) the Billa endpoint does not always return the same products.

The current logic in analysis.js, specifically mergePriceHistory(), will throw away all products that are not in the latest canonical list of items.

We should probably keep items that are in the previous canonical item list. This may end up with stale products no longer offered, but it will also ensure we have some history of products that get randomly omitted from the daily crawl.

Wrong Mpreis data?

On the aktionen page there are items shown which are not returned by the API endpoint of the Mpreis implementation.

Where does this data come from? Is that an error?

image
image

Performance opportunities

You have some performance opportunities per a Lighthouse test:

  • Vendor TailwindCSS to only serve the necessary styles. This is the biggest impact at the moment.
  • Minify all JavaScripts before serving them
  • Split JavaScript files into smaller bundles if they are not used on every page
  • Enable caching from the server side (via express, set cache headers)

I can also have a look at one of those points.

Unify units

A follow up on this comment: #7 (comment)

I've yet to find time to work on unit normalization. It would be very nice if one could search for units properly. You can currently enter a number as part of the search query, but it's not always effective enough.

I can have a look at the raw data processing when I have time. I guess the first step is to have some proper quantity in the canonical data, like

"quantity": 125,
"unit": "g"

instead of
"unit": "125 Gramm Packung"

To keep it simple, I would convert the canonical data into either "g" and "ml". From that you can easily derive "Euro/kg" etc. and sort by unit price.
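A sketch of that normalization, assuming raw unit strings like the one above; the mapping table is illustrative and far from complete:

```javascript
// Map raw unit words to a canonical unit ("g" or "ml") plus a conversion factor.
const UNIT_MAP = {
  gramm: { unit: "g", factor: 1 },
  g: { unit: "g", factor: 1 },
  kg: { unit: "g", factor: 1000 },
  ml: { unit: "ml", factor: 1 },
  liter: { unit: "ml", factor: 1000 },
  l: { unit: "ml", factor: 1000 },
};

// Parse a raw string like "125 Gramm Packung" into { quantity, unit },
// handling the decimal comma used in Austrian data.
function parseUnit(raw) {
  const match = raw.toLowerCase().match(/([\d,.]+)\s*([a-zäöü]+)/);
  if (!match) return null;
  const quantity = parseFloat(match[1].replace(",", "."));
  const mapping = UNIT_MAP[match[2]];
  if (!mapping) return null;
  return { quantity: quantity * mapping.factor, unit: mapping.unit };
}

console.log(parseUnit("125 Gramm Packung")); // { quantity: 125, unit: "g" }
console.log(parseUnit("0,5 l"));             // { quantity: 500, unit: "ml" }
```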

Can't start server, Unexpected token in JSON at position 0

I just did a pull and now I can't start the server anymore:

➜ node index.js
undefined:1
�
^

SyntaxError: Unexpected token  in JSON at position 0
    at JSON.parse (<anonymous>)
    at Object.readJSON (/Users/werner/Documents/Software/heissepreise/analysis.js:20:17)
    at /Users/werner/Documents/Software/heissepreise/index.js:49:35
    at Object.<anonymous> (/Users/werner/Documents/Software/heissepreise/index.js:80:3)
    at Module._compile (node:internal/modules/cjs/loader:1254:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1308:10)
    at Module.load (node:internal/modules/cjs/loader:1117:32)
    at Module._load (node:internal/modules/cjs/loader:958:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
    at node:internal/main/run_main_module:23:47

This is my content in data:

➜ ll data/                           
total 971104
-rw-r--r--  1 werner  staff   12505515 Jun  2 17:29 billa-2023-05-30.json.gz
-rw-r--r--  1 werner  staff   12503074 Jun  2 17:29 billa-2023-06-02.json.gz
-rw-r--r--  1 werner  staff    3367252 Jun  2 17:29 dm-2023-05-30.json.gz
-rw-r--r--  1 werner  staff    3397462 Jun  2 17:29 dm-2023-06-02.json.gz
-rw-r--r--  1 werner  staff    3456070 Jun  2 17:29 dmDe-2023-05-30.json.gz
-rw-r--r--  1 werner  staff    3465108 Jun  2 17:29 dmDe-2023-06-02.json.gz
-rw-r--r--  1 werner  staff     360676 Jun  2 17:29 hofer-2023-05-30.json.gz
-rw-r--r--  1 werner  staff     372567 Jun  2 17:29 hofer-2023-06-02.json.gz
-rw-r--r--  1 werner  staff   38604224 Jun  2 13:56 latest-canonical.json
-rw-r--r--  1 werner  staff     174390 Jun  2 17:29 lidl-2023-05-30.json.gz
-rw-r--r--  1 werner  staff     213130 Jun  2 17:29 lidl-2023-06-02.json.gz
-rw-r--r--  1 werner  staff   20171402 Jun  2 17:29 mpreis-2023-05-30.json.gz
-rw-r--r--  1 werner  staff  228688054 Jun  2 13:55 mpreis-2023-06-02.json
-rw-r--r--  1 werner  staff     678068 Jun  2 13:55 penny-2023-06-02.json
-rw-r--r--  1 werner  staff    3766875 Jun  2 13:55 reweDe-2023-06-02.json
-rw-r--r--  1 werner  staff   82036386 May 30 20:38 spar-2023-05-30.json
-rw-r--r--  1 werner  staff   82417204 Jun  2 13:55 spar-2023-06-02.json
-rw-r--r--  1 werner  staff     494554 May 30 20:38 unimarkt-2023-05-30.json
-rw-r--r--  1 werner  staff     494048 Jun  2 13:55 unimarkt-2023-06-02.json

I assume this is because of the recent changes wrt compression? Did something require migration?

My previous local commit for running the server was f21ac58.

Node v18.16.0 on macOS 13.

Memory optimizations

Use dev tools and memory snapshots to cut down on required memory.

  • Don't calculate the full url in processItems(); construct it on demand in items-list.js during item rendering. 166 MB -> 159 MB
  • De-duplicate strings in ids, names, dates, etc. 159 MB -> 145 MB
  • Use getters instead of precalculated values for fields used in alasql queries. 145 MB -> 92 MB
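The getter idea can be sketched like this (field names are illustrative, not the actual ones):

```javascript
// Instead of storing a precalculated unitPrice on every item, expose it
// as a getter so no memory is spent until an alasql query reads it.
function makeItem(name, price, quantity) {
  const item = { name, price, quantity };
  Object.defineProperty(item, "unitPrice", {
    enumerable: true,
    get() {
      return this.price / this.quantity;
    },
  });
  return item;
}

const item = makeItem("Teebutter", 3.0, 250);
console.log(item.unitPrice); // computed on access, not stored
```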

Handling of "discounts-only" stores like LIDL and PENNY

LIDL and Penny both only show currently discounted products in the online shops.

When we fetch new data from them, we merge the latest assortment with what we recorded before. This leads to "orphaned" items which are no longer available on the website.

To handle this, these types of stores get a flag removeOld in site/model/stores.js. mergePriceHistory() will then remove any items which are not in the latest data from the site.
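A sketch of how the removeOld flag could drive the merge (the config shape and helper name are assumptions, not the actual code):

```javascript
// Store config: discounts-only stores carry removeOld (shape is illustrative).
const storeConfig = {
  lidl: { removeOld: true },
  billa: { removeOld: false },
};

function mergeForStore(store, previousItems, latestItems) {
  if (storeConfig[store].removeOld) {
    // Discounts-only store: keep only what is currently on the website.
    return latestItems;
  }
  // Regular store: keep stale items so randomly omitted products retain history.
  const latestIds = new Set(latestItems.map((i) => i.id));
  const stale = previousItems.filter((i) => !latestIds.has(i.id));
  return [...latestItems, ...stale];
}

console.log(mergeForStore("lidl", [{ id: "a" }, { id: "b" }], [{ id: "b" }]));
```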

Compression improvements

  • chunk by something other than store to avoid different sizes in the compressed files,
    e.g. total items / stores
  • move all store data from utils into store/*.js and provide via loadItems e.g. stores.compressed.json => not needed anymore, since we build now and have this data in model/stores.js
  • inject the stores with decompress/compress to avoid duplication
  • Investigate brotli instead of gzip
  • #51
  • Split url compress/decompress data from logic

Moar users!!

I think a lot of people could benefit from this awesome software. Unfortunately, getting it up and running is not easy enough for the average Billa/Spar/Hofer customer.

I tried an easy setup, which might be beneficial for Windows (and potentially also MacOS or Linux) users:

  • Download and extract the Node.js Windows binary (.zip)
  • Put the heissepreise src alongside node (two directories)
  • add the following script (run.bat)
cd heissepreise
call ..\node\npm install
start "" http://localhost:3000
call ..\node\node.exe index.js
pause

-> Zip the result and provide it here as a binary release for Windows. Automating these steps is quite easy (and there's no need to certify executables etc., but the release size is larger: 30 MB in my test).

To install it, you just need to download and extract a zip file and double-click run.bat.

I did test this on Windows with the following code: https://github.com/mhochsteger/heissepreise/tree/local_deployment
Only two changes were needed: path fixes (path.sep instead of /) and hosting the site as a static directory (no separate web server).

Open questions:

  • How to do updates? (use node script to check for and download source release from github?)
  • Any plans on an Android app? (that would be an even greater boost, but I have no expertise here)

results loaded, but not displayed

Hi,

I have a strange issue running docker/control.sh start: the results seem to get loaded but are not displayed... After the navigation and controls, the site is just white, without a table...

# docker logs -f hp_site

up to date, audited 102 packages in 580ms

11 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
npm notice
npm notice New major version of npm available! 8.19.4 -> 9.6.6
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v9.6.6>
npm notice Run `npm install -g [email protected]` to update!
npm notice
RUNNING STUFF 2
Fetching data for date: 2023-05-17
Fetched SPAR data, took 18.967726425000002 seconds
Fetched BILLA data, took 68.71422416600026 seconds
Merged price history
Example app listening on port 3000

Browsing at /api/index, I see the results.

How can I troubleshoot that further?
Thanks!

DM Germany crawl throws errors

@simmac I added DM Germany support in the latest commit. Getting a bunch of warnings/errors:

Fetching data for date: 2023-05-30
Example app listening on port 3000
Fetched LIDL data, took 1.5741576662063599 seconds
DM-DE Query returned more than 1000 items! Items may be missing. Adjust queries. Query: allCategories.id=010000&price.value.from=4&price.value.to=7
DM-DE Query returned more than 1000 items! Items may be missing. Adjust queries. Query: allCategories.id=010000&price.value.from=7&price.value.to=10
Fetched UNIMARKT data, took 7.120512875556946 seconds
DM-DE API returned 429, retrying in 2s.
Fetched MPREIS data, took 11.233189957618713 seconds
DM-DE API returned 429, retrying in 2s.
Fetched SPAR data, took 17.845787750244142 seconds
DM-DE API returned 429, retrying in 2s.
DM-DE API returned 429, retrying in 2s.
DM-DE API returned 429, retrying in 4s.
Unknown unit in dm: 'undefined
Unknown unit in dm: 'undefined
Fetched DM data, took 29.484291208267212 seconds
Fetched BILLA data, took 30.971627750396728 seconds
DM-DE API returned 429, retrying in 4s.
DM-DE API returned 429, retrying in 8s.
DM-DE API returned 429, retrying in 16s.
DM-DE Query returned more than 1000 items! Items may be missing. Adjust queries. Query: allCategories.id=030000&price.value.to=8
DM-DE API returned 429, retrying in 2s.
DM-DE API returned 429, retrying in 4s.
DM-DE API returned 429, retrying in 8s.
Fetched HOFER data, took 82.25956470775604 seconds
DM-DE API returned 429, retrying in 2s.
DM-DE API returned 429, retrying in 2s.
DM-DE API returned 429, retrying in 4s.
DM-DE API returned 429, retrying in 8s.
DM-DE Query returned more than 1000 items! Items may be missing. Adjust queries. Query: allCategories.id=050000&price.value.to=4
DM-DE Query returned more than 1000 items! Items may be missing. Adjust queries. Query: allCategories.id=050000&price.value.from=4
DM-DE Query returned more than 1000 items! Items may be missing. Adjust queries. Query: allCategories.id=060000&price.value.to=4
Fetched DMDE data, took 108.94952433300018 seconds
3811 not in latest list.
Items: 98107
Merged price history

I'm not quite sure how to adjust the queries, as it seems fine to me that the query returns more than 1000 items? Pointers welcome.

Shareable searches

Similar to sharing a cart, generate a link that has all the info needed to reproduce a search in index.html. Should also contain info which items have been checked for charting. Must distinguish between simple keyword search + filters and SQL-like AlaSql search.

AlaSQL searches can already be shared, but lack info on which items are checked for charting.
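One possible encoding, sketched with the built-in URLSearchParams; the parameter names and checked-items field are assumptions, not the site's actual scheme:

```javascript
// Serialize a search state (query, search mode, charted items) into a link.
function encodeSearch({ query, isAlaSql, checkedItemIds }) {
  const params = new URLSearchParams();
  params.set("q", query);
  params.set("sql", isAlaSql ? "1" : "0");
  if (checkedItemIds.length) params.set("checked", checkedItemIds.join(";"));
  return "index.html?" + params.toString();
}

// Restore the search state from a shared link.
function decodeSearch(link) {
  const params = new URLSearchParams(link.split("?")[1]);
  return {
    query: params.get("q"),
    isAlaSql: params.get("sql") === "1",
    checkedItemIds: (params.get("checked") || "").split(";").filter(Boolean),
  };
}

const link = encodeSearch({ query: "teebutter", isAlaSql: false, checkedItemIds: ["billa-1"] });
console.log(decodeSearch(link).query); // round-trips the search state
```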
