
Plains Cree Dictionary Database Management

This repository contains scripts and documentation for managing the multiple data sources for ALTLab's Plains Cree dictionary, which can be viewed online here. This repository does not (and should not) contain the actual data; that data is stored in the private ALTLab repo under crk/dicts.

The database uses the Data Format for Digital Linguistics (DaFoDiL) as its underlying data format, a set of recommendations for storing linguistic data in JSON.

Sources

ALTLab's dictionary database is (or will be) aggregated from the following sources:

  • Arok Wolvengrey's nêhiyawêwin: itwêwina / Cree: Words (CW)
    • This is a living source.
  • Maskwacîs Nehiyawêwina Pîkiskwewinisa / Dictionary of Cree Words (MD)
    • This is a static source. We are using a manually edited version of the original dictionary.
  • Alberta Elders' Cree Dictionary (AECD or AE or ED)
    • This is a static source.
  • Albert Lacombe's Dictionnaire de la langue des Cris (DLC)
    • This will be a static source.
  • The Student's Dictionary of Literary Plains Cree, Based on Contemporary Texts
    • This source has already been integrated into Cree: Words.
  • ALTLab's internal database
    • This is mostly a set of overrides, where we can store information about certain entries permanently.

Also check out the Plains Cree Grammar Pages.

Process

At a high level, the process for aggregating the sources is as follows:

  1. Convert each data source from its original format to DaFoDiL and save it as an NDJSON file.
  2. Import the data into the Plains Cree database, using an algorithm that first matches entries and then aggregates the information in them.
  3. Create outputs:
    • the import JSON database for itwêwina
    • the FST LEXC files

The Database

The database is located in the private ALTLab repo at crk/dicts/database.ndjson. This repo includes the following JavaScript utilities for working with the database, both located in lib/utilities:

  • readNDJSON.js: Reads all the entries from the database (or any NDJSON file) into memory and returns a Promise that resolves to an Array of the entries for further querying and manipulation.
  • writeNDJSON.js: Accepts an Array of database entries (or any JavaScript Objects) and saves it to the specified path as an NDJSON file.
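
A minimal usage sketch for these utilities, inside an ES module (the default-export style and the entry.pos filter are assumptions; adjust to the actual module exports and entry schema):

  // Read the database, filter it, and write the result back out as NDJSON.
  import readNDJSON  from './lib/utilities/readNDJSON.js';
  import writeNDJSON from './lib/utilities/writeNDJSON.js';

  const entries = await readNDJSON('data/database.ndjson');
  const verbs   = entries.filter(entry => entry.pos?.startsWith('V')); // hypothetical filter
  await writeNDJSON(verbs, 'data/verbs.ndjson');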

Building & Updating the Database

To build and/or update the database, follow the steps below. Each of these steps can be performed independently of the others. You can also rebuild the entire database with a single command (see the end of this section).

  1. Download the original data sources. These are stored in the private ALTLab repo in crk/dicts. Do not commit these files to git.

    • ALTLab data: altlab.tsv
    • Cree: Words: Wolvengrey.toolbox
    • Maskwacîs dictionary: Maskwacis.tsv
  2. Install Node.js. This will allow you to run the JavaScript scripts used by this project. Note that the Node installation includes the npm package manager, which allows you to install Node packages.

  3. Install the dependencies for this repo: npm install.

  4. Convert each data source by running node bin/convert-*.js <inputPath> <outputPath>, where * stands for the abbreviation of the data source, e.g. node bin/convert-CW.js data/Wolvengrey.toolbox data/CW.ndjson.

    You can also convert individual data sources by running the conversion scripts as modules. Each conversion script is located in lib/convert/{ABBR}.js, where {ABBR} is the abbreviation for the data source. Each module exports a function which takes two arguments: the path to the data source and optionally the path where you would like the converted data saved (this should have a .ndjson extension). Each module returns an array of the converted entries as well.
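
    For example, converting the CW Toolbox file from another script might look like the following sketch (the function signature follows the description above, but the import path and default-export assumption are mine):

      // Run the CW conversion programmatically; the second argument is optional.
      import convertCW from './lib/convert/CW.js';

      const entries = await convertCW('data/Wolvengrey.toolbox', 'data/CW.ndjson');
      console.log(`Converted ${entries.length} CW entries.`);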

  5. Import each data source into the dictionary database with node bin/import-*.js <sourcePath> <databasePath>, where * stands for the abbreviation of the data source, <sourcePath> is the path to the individual source database, and <databasePath> is the path to the combined ALTLab database.

    You can also import individual data sources by running the import scripts as modules. Each import script is located in /lib/import/{ABBR}.js, where {ABBR} is the abbreviation for the data source.

    Entries from individual sources are not imported as main entries in the ALTLab database. Instead they are stored as subentries (using the dataSources field). The import script merely matches entries from individual sources to a main entry, or creates a main entry if none exists. An aggregation script then does the work of combining information from each of the subentries into a main entry (see the next step).

    Each import step prints a table to the console, showing how many entries from the original data source were unmatched.

    When importing the Maskwacîs database, you can add an -r or --report flag to output a list of unmatched entries to a file. The flag takes the file path as its argument.
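
    A sketch of running an import as a module (mirroring the CLI arguments above; the default-export assumption is mine):

      // Match CW entries against the ALTLab database, adding subentries or new main entries.
      import importCW from './lib/import/CW.js';

      await importCW('data/CW.ndjson', 'data/database.ndjson');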

  6. Aggregate the data from the individual data sources: node bin/aggregate.js <inputPath> <outputPath> (the output path can be the same as the input path; this will overwrite the original).

  7. For convenience, you can perform all the above steps with a single command in the terminal: npm run build (or yarn build). In order for this command to work, you will need each of the following files to be present in the /data directory, with these exact filenames:

    • ALTLab.tsv
    • Maskwacis.tsv
    • Wolvengrey.toolbox

    The database will be written to data/database.ndjson.

    You can also run this script as a JavaScript module. It is located in lib/buildDatabase.js.
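
    A sketch of running the build programmatically (assuming a default export and that the source files listed above are already in /data):

      import buildDatabase from './lib/buildDatabase.js';

      await buildDatabase(); // writes data/database.ndjson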

Steps to incrementally update the production database

  1. Clear the existing database: rm src/crkeng/db/db.sqlite3
  2. Start a virtual environment: pipenv shell
  3. Migrate the database: ./crkeng-manage migrate
  4. Import latest version of database: ./crkeng-manage importjsondict {path/to/database.importjson}
    • incremental update: --incremental
    • don't translate wordforms (runs faster): --no-translate-wordforms
  5. Run a local server to test results: ./crkeng-manage runserver
  6. Build test database: ./crkeng-manage buildtestimportjson --full-importjson {path/to/database.importjson}
  7. Run tests: pipenv run test
    • If either the structure of the database or the definitions of the test entries have changed, the tests may fail. You will need to update the tests.
  8. Log into U Alberta VPN using Cisco VPN or similar.
  9. Save the latest version of the import JSON to the private ALTLab repo (under home/morphodict/altlab) or your user directory. (It can't be copied directly to its final destination because you must assume the morphodict user in order to have write access to the morphodict/ directory.)
  10. SSH into the ALTLab gateway and tunnel to the morphodict server.
  11. Become the morphodict user: sudo -i -u morphodict
  12. Update the import JSON file located at /opt/morphodict/home/morphodict/src/crkeng/resources/dictionary/crkeng_dictionary.importjson by copying it from the private ALTLab repo located at /opt/morphodict/home/altlab/crk/dicts.
  13. Get the ID of the current Docker container:
    1. cd /opt/morphodict/home/morphodict/src/crkeng/resources/dictionary
    2. docker ps | grep crkeng (docker ps lists docker processes)
    3. Copy container ID.
  14. Run the incremental import on the new version of the database: docker exec -it --user=morphodict {containerID} ./crkeng-manage importjsondict --purge --incremental {path/to/database}
    • The morphodict user is required to write changes.
    • The path to the database will be src/crkeng/resources/dictionary/crkeng_dictionary.importjson or some variation thereof.

Tests

Tests for this repository are written using Mocha + Chai. The tests check that the conversion scripts are working properly, and test for known edge cases. There is one test suite for each conversion script (and some other miscellaneous unit tests as well), located alongside that script in lib with the extension .test.js. You can run the entire test suite with npm test.

crk-db's Issues

MD: handle homographs

Add a unique key to each MD entry, which also handles homographs. Remember that MD entries occasionally begin their definitions with a homograph number, like so:

nipiy	ᓂᐱᕀ	n	1. Water.
nipiy	ᓂᐱᕀ	n	2. A leaf.
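
A sketch of one way to derive homograph-aware keys (the field names sro and definition are hypothetical; the real converter may name them differently):

  // Build a unique key per MD entry, using a leading homograph number in the definition when present.
  function createKey(entry) {
    const base  = entry.sro.normalize('NFD').replace(/[\u0300-\u036f]/gu, ''); // strip diacritics
    const match = entry.definition.match(/^(\d+)\.\s*/u);                      // e.g. "2. A leaf."
    return match ? `${base}${match[1]}` : base;
  }

  createKey({ sro: 'nipiy', definition: '2. A leaf.' }); // => 'nipiy2'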

add MD > CW mappings to repo

Add the MD > CW mappings to the repo (though don't commit them to git, because they contain the definitions), and write a script that parses the mappings file and returns a JS Map Object with all the entries, for use in the import script.
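
A sketch of such a parser, assuming the mappings file is a TSV whose first two columns are the MD headword and the matching CW headword (the actual column layout is an assumption):

  import { readFile } from 'node:fs/promises';

  // Parse the MD > CW mappings file into a Map keyed by MD headword.
  async function loadMappings(path) {
    const mappings = new Map();
    for (const line of (await readFile(path, 'utf8')).split('\n')) {
      if (!line.trim()) continue;
      const [md, cw] = line.split('\t');
      mappings.set(md, cw);
    }
    return mappings;
  }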

convert Maskwacîs dictionary

Convert the Maskwacîs dictionary from TSV to JSON. For now just do a simple JSON conversion, then open issues for advanced processing of each field in turn.

handle parsing errors

The CW conversion script currently produces 216 parsing errors. All parsing errors either need to be handled by the conversion script or fixed in the underlying Toolbox database.

save database as NDJSON

The aggregated database should be saved as an NDJSON file with a version hash whenever new commits are made, with the previous backup deleted (ensuring that each commit has a single associated backup in a human-readable text format). Writing a quick utility script for this would be helpful.

The database should be stored in the ALTLab repo.

incorporate semantic classifications of CW

Incorporate Daniel Dacanay's semantic classifications of CW words into the ALTLab database.

These files should already live in the ALTLab repo (add them if not).

This could even be its own independent dictionary database; however, Arok would probably like this information in his Toolbox file as well.

MD: standardize `POS` field

Standardize the POS field, and extract relevant information to other fields.

  • Store the lexical category in Sense.category.
  • Store information about animacy in Lexeme.features.
  • Store information about plurality in Lexeme.features or possibly in Sense.usages.
  • Store information about verb type (imperative) in Lexeme.features or possibly Sense.usages.
  • If the entry is a prefix, store the class it attaches to in Sense.category, the class it derives in Sense.derives (if it's derivational), and the type of morpheme in Lexeme.morphemeType.

compare definitions

Write a script that compares two normalized definition strings (see #21) and returns information about their similarity.

  • entire strings match = 100% similarity
  • entire string of DefA is a substring of DefB = % of words matched; also return direction of the match
  • otherwise, return % of words matched; also return direction of match (which definition is longer)
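
A sketch of such a comparison, operating on already-normalized definitions:

  // Compare two normalized definition strings and report word overlap plus match direction.
  function compareDefinitions(defA, defB) {
    const wordsA = new Set(defA.split(' '));
    const wordsB = new Set(defB.split(' '));
    const shared = [...wordsA].filter(word => wordsB.has(word)).length;
    return {
      similarity: shared / Math.max(wordsA.size, wordsB.size), // 1 = identical word sets
      longer:     wordsA.size >= wordsB.size ? 'A' : 'B',      // direction of the match
    };
  }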

clean up the MD database

First create an ALTLab version of the MD database and archive the original version.

  • extract examples + example translations into their own fields
  • extract cross-references into their own field
  • consistently represent separate senses with 1., 2., etc.
  • extract parentheticals into their own field (so that there's just one sentence per definition)
  • update mappings

Will require an initial planning meeting to finalize the details of each task.

integrate Maskwacîs dictionary

  • write a script to convert to JSON while cleaning/normalizing the data (#37)
  • update import/aggregation script to incorporate MD-specific considerations

See Notes on the Maskwacîs Dictionary for more information about this data source.

Katie is working on a manual transcription of the MD entries and creating a canonical SRO form for each. This would help in mapping the MD entries to the CW entries, or at least to a canonical SRO representation.

We're not showing MD entries in itwêwina unless they have a match in CW (and even then, only some of them are shown, depending on how much overlap the definitions have with CW).

create a user interface for modifying dictionary content

More properly, in the longer term, aggregating Cree: Words and the Maskwacîs Dictionary should be handled by creating a proper lexical database of the Maskwacîs Dictionary content, which includes the following information:

  1. Original MD dictionary entry.
  2. Normatized MD dictionary entry.
  3. Matching CW dictionary entry, if any. May require the creation of a unique ID on CW side, if match is ambiguous.
  4. CW-MD English gloss comparison classification (reflecting how the English glosses are presented if comparable).
  5. Cree stem (when appropriate, not for multiword phrases) <- can be extracted from CW, if match established, but likely good to specify under MD dictionary entry as well.
  6. POS and inflectional class (following CW style).
  7. Rapid Words semantic classification numbers
  8. Rapid Words semantic classification labels
  9. POS-parsed version of MD English gloss (useful for Eng-to-Cree search and in creating inflected English paradigm cells, if not already in CW).

The above information, coupled with comparably organized CW content, will allow for the instantaneous and easy dynamic creation of the aggregated Cree dictionaries' content.

Eventually, this will be extended as a result of the validation of the new words and sentences from the Spoken Dictionary of Maskwacîs Cree project.

We should consider whether this should be part of the validation project (as it serves it) or a separate small project.

Originally posted by @aarppe in UAlbertaALTLab/morphodict#165 (comment)

generate slug for each entry

In some cases, itwêwina relies on search queries rather than URL parameters for routing. (I think this happens specifically with homographs?) It might be useful to generate a unique slug for each entry, allowing itwêwina to simplify its routing-related code. Homographs could then get routed as /atim1 and /atim2, for example.

The slug could be the same as the key, or it could be the full headword with diacritics + a homograph number if needed.
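
A sketch of slug generation along those lines (the head and homographNumber field names are hypothetical):

  // Derive a URL slug from the headword plus an optional homograph number.
  function createSlug(entry) {
    const base = entry.head
      .normalize('NFD')
      .replace(/[\u0300-\u036f]/gu, '') // strip diacritics, e.g. â -> a
      .toLowerCase();
    return entry.homographNumber ? `${base}${entry.homographNumber}` : base;
  }

  createSlug({ head: 'atim', homographNumber: 2 }); // => 'atim2'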

CW: process `\ps` field

Process the \ps field in CW, and map it to the following places:

  • Lexeme.pos: the original value of the \ps field
  • Sense.category: the lexical category
  • Sense.inflectionClass: the inflectional class

Entries with multiple \ps fields should be separated into distinct senses with the same definition but different parts of speech.

import MD entries

Import the MD entries into the ALTLab database. Attempt to match the MD entry to an existing ALTLab entry first, then create a new entry if none is found.

set up MongoDB

Set up the MongoDB database where the ALTLab database will live.

Also add notes to the README about using MongoDB Compass to easily access data.

Incorporate corpus/lemma and dictionary/morpheme frequencies

To replace the current file: ~/giella/art/dicts/crk/Wolvengrey/W_aggr_corp_morph_log_freq.txt, with the process described here:

UAlbertaALTLab/morphodict#163

... we'd want to implement the incorporation of comparable information with our aggregate dictionary database.

Based on the materials we have for Cree, I'd presume one or more corpus-based frequencies (not only Ahenakew-Wolfart but also Bloomfield), as well as a dictionary/morpheme-based ranking, which might be corpus-weighted as well. So these would seem features to be added to the aggregate dictionary entries.

improve matching algorithm

The matching algorithm already looks at the MD > CW mappings to match entries. Improve this algorithm by using other factors as well:

  • If the inflectional class of the query entry is known or can be guessed, match the entry with the same inflectional class (see #22).
  • Otherwise, match the entry containing a definition that most closely matches the definition of the query entry, when both definitions are normalized (see #20 and #21).
  • As a final fallback, match to the entry with the highest frequency. (Get this data from elsewhere and store it in the database.) (see #51)
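
A sketch of how those fallbacks might be chained (compareDefinitions is the similarity helper sketched under "compare definitions" above; the field names are assumptions):

  // Pick the best candidate main entry for a query entry using the fallbacks above.
  function matchEntry(queryEntry, candidates) {
    // 1. Prefer candidates with the same (known or guessed) inflectional class.
    const byClass = candidates.filter(c =>
      queryEntry.inflectionClass && c.inflectionClass === queryEntry.inflectionClass);
    if (byClass.length === 1) return byClass[0];

    // 2. Otherwise, prefer the candidate whose normalized definition is most similar.
    const pool   = byClass.length ? byClass : candidates;
    const scored = pool
      .map(c => [compareDefinitions(queryEntry.definition, c.definition).similarity, c])
      .sort((a, b) => b[0] - a[0]);
    if (scored.length && scored[0][0] > 0) return scored[0][1];

    // 3. Final fallback: the candidate with the highest frequency.
    return pool.reduce((best, c) => ((c.frequency ?? 0) > (best.frequency ?? 0) ? c : best), pool[0]);
  }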

Examples

Cree: Words

ayâw  VII-v  it is, it is there                                          /ay-/ + /-â/  ayâ-
ayâw  VTI-2  s/he has s.t., s/he owns s.t.                               /ay-/ + /-â/  ayâ-
ayâw  VAI-v  s/he is, s/he is there; s/he lives there, s/he stays there  /ay-/ + /-â/  ayâ-

Maskwacîs Dictionary

ayaw  He owns: he has.                                                 finance; have_wealth; own_possess
ayaw  He is here/there.  (Animate)  e.g. Ekota ki ayaw. He was there.  here_there; location
  • The first entry is VTI-2, but we don’t know this from the entry.
  • The second entry is VAI-v, but we don't know this from the entry.

import CW into ALTLab database

Write a script to import the CW database into the ALTLab dictionary database. For each entry:

  1. Check whether the entry already exists in the ALTLab database. Create it if not.
  2. If the entry exists, update the dataSources field with the CW record.

guess inflectional class

Write a script that accepts a normalized English definition (see #21) and attempts to guess the inflectional class of the associated Cree word.

  1. has both subject and object pronouns and it doesn't have the word "inanimate": VTA
  2. of the leftovers, if they don't have the object pronouns, you can be fairly sure that you've got VTI
  3. of the leftovers, if they don't have neuter object pronouns, you can be sure it's VTI
  4. deal with reciprocals
  5. repeat procedure for intransitive cases

If using the FST, this is probably easiest done as a preprocessing step for MD, after normalizing the definition, but before the aggregation step.
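
A rough sketch of such a guesser, keying off the pronoun conventions of normalized definitions (the cues below are assumptions, not the final rules, and real MD definitions will need more care):

  // Guess a verb class from a normalized English definition.
  function guessVerbClass(definition) {
    const animateSubject = /\b(s\/he|she)\b/.test(definition);
    const animateObject  = /\bsomeone\b/.test(definition);
    const neuterObject   = /\bsomething\b/.test(definition);

    if (animateSubject && animateObject) return 'VTA';
    if (animateSubject && neuterObject)  return 'VTI';
    if (animateSubject)                  return 'VAI';
    if (/\bit\b/.test(definition))       return 'VII';
    return null; // no confident guess
  }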

CW: process `\mrp` fields

Process the \mrp fields into an array of morphemes in the Lexeme.forms.components field.

Only do this for the topmost layer of derivation. Later (in a separate issue) you can create distinct entries for each morpheme, and those entries can themselves contain references to other morpheme entries.

integrate Alberta Elders' Cree Dictionary

  • write a script to convert to JSON
  • write a script to clean/normalize the data
  • update import/aggregation script to incorporate AECD-specific considerations

Notes

  • We only want to import the Cree > English entries, not the English > Cree entries.
  • Vowel length is inconsistently marked.
  • The Elders' Dictionary includes information about inflectional classes (but only at the II, AI, TI, and TA level), while the MD and DLC dictionaries do not. This information needs to be added to ALTLab's dictionary for any entries that lack it.
  • Arok has incorporated entries from this dictionary through .

aggregate data sources into a unified Plains Cree lexical database

This is a meta-issue for tracking initial aggregation of the current dictionary data sources. Other sources may be added later, but this issue can be considered complete once the following issues are done, and we have an initial aggregation process and ALTLab-specific database in place.

To Do

  • integrate Arok Wolvengrey's Cree Words (#9)
  • integrate Maskwacîs dictionary (#5)
  • convert / prepare database for itwêwina (#10)

Notes

(See also these Database Specification Notes on Google Drive.)

Reviewing a small number of differences between a version of CW from this January and one from 2014, I'm starting more and more to question whether the manual evaluation of CW vs. MD content is realistic, given that AEW is updating CW continuously, and MD content will be expanded as well with the "new" words collected in the recordings.

In principle, we'd need to run through every new version of CW, and then figure out which CW entries have changed or are new ones, and then somehow automatically contrast these with the entire content of MD as to whether some CW entries might have substantial semantic overlap with MD. Only when the English words in the definitions are exactly the same, or when the English words of one definition are a complete subset of the English words in the other dictionary could we automatically decide that the entries in the two dictionaries can be "merged". In all other cases, this would need to be assessed manually by a linguist. E.g. we know of some dictionary entries that concern the same sense, but do not have practically any overlap in their definitions, for instance:

MD: mêscihew <- meschihew ᒣᐢᒋᐦᐁᐤ vp He kills them all, wipes them out.
CW: mêscihew <- s/he kills s.o. off, s/he annihilates s.o., s/he exterminates s.o. (VTA-1)

An alternative is that 1) we base matching dictionary entries from multiple sources based on the assumption that the combination of the dictionary entry head and the inflectional category is unique in all sources, so can be used to map 1:1 potentially similar dictionary entries; and 2) we merge dictionary entries from multiple sources only when that can be done automatically, i.e. when the content words in the English definitions overlap completely (or almost completely), i.e. the English content words in one source can all be found in the dictionary entry in the other source, and vice versa.

Thus, we leave the manual assessment of sense similarity until later. But in the interim, what we need to do in any event, also as a necessary step for eventual manual assessment, is a) the standardization of the orthographical form of the lexical entries in non-CW sources, b) the assignment of inflectional category for all dictionary entries, and c) the assignment of the stem for all dictionary entries (at least those not in CW).

This strategy would allow us to aggregate not only MD but also AECD. In the case of three sources, the merging consideration would be applied pairwise, in that it might be that the English definitions of only two sources are practically identical, but a third source might have narrower or broader definitions.

create import script

Write a script which imports new versions of any of our data sources into the ALTLab database incrementally. This script should:

  1. identify new / removed / changed entries (and ignore the rest)
    1. get an ordered set of the keys to any existing subentries in the ALTLab database
    2. get an ordered set of the keys to the entries in the data source
    3. in order, use assert.deep(Strict)Equal() to compare the records in each set (or maybe just use the lastUpdated property)

(The above procedure will likely be somewhat slow, but will avoid the need for passing both the previous and current versions of the data to the script. The entirety of each of our original data sources is stored within the ALTLab database (using the alternativeAnalyses fields), so we can use that to determine what updates are needed.)

For each difference:

  1. add / remove / update the subentries (Lexeme/alternativeAnalyses) in the ALTLab database as appropriate
    • The remove / update actions will likely be the same regardless of the data source.
    • The add action will likely be specific to the data source, and will have to consider which of the matching lemmas the new subentry should be added to (see #19).
    • The add action can also do the work of guessing the inflectional class of the entry, for easier matching.
  2. normalize / clean the subentries (in memory)
    • The normalization scripts will be specific to the data source.
  3. aggregate the normalized subentries, updating the main entry with the result (see #18)

The import script should also produce a change report each time it is run.

Input

  • data source (String): The abbreviation for the data source being imported (CW, AECD, etc.).
  • data path (String): Path to the new version of the JSON-formatted data source that you would like to import.

orthographic normalization

The above cases illustrate that in matching MD with CW, there are some minor but still significant orthographical differences, which result in certain matches not being made that should be identified.

E.g. MD mitsow 'He eats.' is not matched with CW mîcisow, and likewise MD wapimew is not matched with CW wâpamêw.

This probably should be made into its own issue, but under the Dictionary Database project.

Originally posted by @aarppe in UAlbertaALTLab/morphodict#197 (comment)

aggregate matched entries

Write a script that accepts (normalized) Lexeme objects from multiple data sources, and decides what the resulting aggregate entry should look like. It should also mark entries for review.

The script should only accept one entry per data source. The task of deciding which entries should constitute a match should already be completed.

Notes

  • If @katieschmirler has already given the senses a comparison, attempt to use that.
  • If 90% of the definition is shared between CW and MD, use the CW definition. Otherwise show both definitions. (This same strategy probably applies when comparing other sources as well.)
  • If the words in one of the definitions are a subset of the words in the other definition, use the longer definition.
  • Each sense needs to indicate the data source it came from.
  • If an MD entry's definition is similar enough to a CW entry, just list both CW and MD in the dataSources.

add FST stems

Add an fstStem field for any entries that require it. This information can be acquired from the TSV version of the Toolbox database, and should be stored permanently in the main entries of the database. There are roughly 1,500 entries in total where this information has been added; this data should be retrieved and added to the main entry in the dictionary database. Forms marked as "CHECK" are ones that need FST stems.

CW: process part of speech field (`\ps`)

Perform advanced processing of the CW \ps (part of speech) field.

  • Lexeme.pos: the original value of the \ps field
  • Sense.category: the lexical category (general word class)
  • Sense.subcategory: the lexical subcategory/subclass (specific word class)
  • Sense.inflectionClass: the inflectional class (e.g. VTA-1 etc.; this isn't technically to spec, but the DaFoDiL spec should be updated to reflect this)

Entries with multiple \ps fields should be separated into distinct senses with the same definition but different parts of speech.

matching MD <ts> cases with CW <c> or <ci> cases

Besides the straightforward linking of MD and CW content by undoing vowel length for CW dictionary entries, and converting <ch> to <c> and turning prefix-spaces into hyphens for MD entries, there are other orthographical divergences that are not as systematic and are thus harder to match. One of the more frequent ones is the MD convention of using <ts> for either <c> or <ci> in CW. So while this can largely be automated, it will need manual final fixing and validation.

An example is MD mitsow 'He eats.', which is not matched with CW mîcisow.

A similar but less frequently observed variation is MD using a short <i> for an unstressed short vowel such as <a> in CW, e.g. MD wapimew is not matched with CW wâpamêw; there might be other CW short vowels rendered as <i> in MD as well.
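
A sketch of the systematic part of that normalization, for generating candidate match keys (the <ts> rule will over-generate, and the <i>-for-short-vowel cases are not handled at all; both still need manual validation, as noted above):

  // Reduce CW and MD headwords to a shared, orthographically "loose" key.
  function looseKey(headword, source) {
    let key = headword.toLowerCase();
    if (source === 'CW') {
      key = key.normalize('NFD').replace(/[\u0300-\u036f]/gu, ''); // undo vowel length: â ê î ô -> a e i o
    }
    if (source === 'MD') {
      key = key
        .replace(/ch/gu, 'c')    // MD <ch> -> CW <c>
        .replace(/ts/gu, 'c')    // MD <ts> -> CW <c> or <ci> (approximate)
        .replace(/\s+/gu, '-');  // prefix spaces -> hyphens
    }
    return key;
  }

  looseKey('mîcisow', 'CW'); // => 'micisow'
  looseKey('mitsow', 'MD');  // => 'micow' (still not an exact match; such cases need manual fixing)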

normalize definitions

Write a script that accepts a definition string in English and normalizes it:

  • remove articles: a(n), the, some(?)
  • he | she | s/he → she
  • s.o. → someone
  • s.t. → something
  • s.w. → somewhere (?)
  • lowercase
  • remove punctuation
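
A sketch implementing those rules (mapping he/she/s/he to the single form "she" is shown here; the exact target form is an assumption):

  // Normalize an English definition string for comparison.
  function normalizeDefinition(definition) {
    return definition
      .toLowerCase()
      .replace(/\bs\.o\./gu, 'someone')
      .replace(/\bs\.t\./gu, 'something')
      .replace(/\bs\.w\./gu, 'somewhere')
      .replace(/\b(s\/he|he|she)\b/gu, 'she')
      .replace(/\b(an?|the|some)\b/gu, '')  // remove articles
      .replace(/[.,;:!?()"']/gu, '')        // remove punctuation
      .replace(/\s+/gu, ' ')
      .trim();
  }

  normalizeDefinition('He kills them all, wipes them out.');
  // => 'she kills them all wipes them out'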

integrate Lacombe's dictionary

  • get high-quality scans of Lacombe's dictionary (#13)
  • OCR Lacombe's dictionary
  • learn about documentary context of Lacombe dictionary (helps with OCR correction process)
  • correct OCR text (using FST? see attached PDF)
  • write a script to convert to JSON
  • write a script to clean/normalize the data
  • update import/aggregation script to incorporate DLC-specific considerations

CW: store separate versions of lemma for FST and itwêwina

@dwhieb You do not really want to regularize <ý> to <y> for Plains Cree either, when importing the data from Arok's CW into the dictionary database.

Currently, we retain the <ý> when e.g. we create the stems in LEXC code - this is useful as that allows us to convert ý -> y for Plains Cree, and ý -> {th} for Woods Cree. I don't include <ý> in the lemmas for now, as that might have complications in the use of the FSTs, as accessing <ý> on most regular keyboards is not trivial.

I could imagine us retaining <ý> in the dictionary database as well, and then having an option allowing users to select whether they want to see that marking within itwêwina or not. Arok in fact has on a few occasions been inclined to make that the default behavior, but I've managed to successfully argue that it'd be better to offer it as an option - in terms of the simplicity of a regular writing system, we ought not to force users to make explicit distinctions that are primarily historical/linguistic.

For the few Swampy Cree forms with <ń>, I'd convert those to <ý>, but keep a note somewhere of their provenance. Or then we could keep <ń>, but I'd need to add that to the morphophonological rewrite rules.

Originally posted by @aarppe in #30 (comment)

add entries for morphemes

Add entries for individual morphemes to the ALTLab dictionary database.

  • CW (\mrp fields; NB: Arok doesn't want these displaying on itwêwina until they've been reviewed)
  • Cook & Muehlbauer

Then consider retroactively adding them back into Arok's database as well - open a separate issue for that if so.

CW: add homograph number `1` to first homograph

Currently, the CW conversion script only adds homograph numbers to homographs other than the first one encountered. For example, if there are two entries with the headword itwêwina, the keys for those two entries will be itwewina and itwewina2. They should be itwewina1 and itwewina2.
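
A sketch of key assignment that gives the first homograph a number as well (a two-pass approach; the key field name is an assumption):

  // Assign keys so that every homograph, including the first, gets a number.
  function assignKeys(entries) {
    const totals = new Map();
    for (const entry of entries) {
      totals.set(entry.key, (totals.get(entry.key) ?? 0) + 1);
    }

    const seen = new Map();
    for (const entry of entries) {
      const base = entry.key;
      if (totals.get(base) > 1) {            // only homographs get numbers
        const n = (seen.get(base) ?? 0) + 1;
        seen.set(base, n);
        entry.key = `${base}${n}`;           // itwewina1, itwewina2, ...
      }
    }
    return entries;
  }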
