
ncbo_annotator's People

Contributors

alexskr, dazza-codes, dependabot[bot], jvendetti, mdorf, msalvadores, ncbo-deployer, palexander


ncbo_annotator's Issues

Develop a way to remove individual ontology from Annotator cache

When a new ontology submission is uploaded and processed, the cache is re-populated with all of the existing and new terms contained in the submission. However, if a term was removed between submissions, its entries remain in the Annotator cache. Currently, the only way to remove these orphan terms is to regenerate the entire Annotator cache.

We've observed an increasing number of cases where this behavior causes incorrect annotations to be returned.

This functionality could be implemented as a cron job that runs weekly to remove orphan entries.
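A minimal sketch of such a cleanup job, assuming a per-ontology Redis hash of cached terms and a set of term IDs taken from the latest submission; the key layout and all names below are hypothetical, not the Annotator's actual schema:

require 'redis'
require 'set'

# Delete cached entries whose terms are absent from the latest submission.
def remove_orphan_terms(redis, acronym, current_term_ids)
  cache_key = "annotator:terms:#{acronym}" # hypothetical key layout
  redis.hscan_each(cache_key, count: 1000) do |term_id, _entry|
    redis.hdel(cache_key, term_id) unless current_term_ids.include?(term_id)
  end
end

A weekly cron entry could then call remove_orphan_terms(Redis.new, acronym, current_term_ids) for each ontology.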

Optimize Mgrep dictionary generation process

Currently, the dictionary is re-generated every time an ontology (submission) is processed. This process takes over an hour due to retrieving a huge data structure from Redis in a single call:

https://github.com/ncbo/ncbo_annotator/blob/master/lib/ncbo_annotator.rb#L122

There is room for optimization here. Possible avenues to pursue:

  1. Incremental dictionary file population

We may not need to rebuild the dictionary file for the entire system on every ontology parse. Updating it incrementally could drastically improve performance.

  2. Retrieve data from Redis iteratively

Instead of using all = redis.hgetall(dict_holder), it's possible to iterate over the data structure using HSCAN:

          cursor = 0
          loop do
            cursor, key_values = redis.hscan(dict_holder, cursor, count: 1000)
            key_values.each do |key, value|
              # process each field/value pair here instead of holding
              # the entire hash in memory at once
            end
            break if cursor == "0"
          end
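redis-rb also wraps this pattern in hscan_each, which yields each field/value pair lazily; a minimal equivalent:

          redis.hscan_each(dict_holder, count: 1000) do |key, value|
            # each pair is yielded without materializing the entire hash
          end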

Warnings: Pipelining commands on a Redis instance is deprecated and will be removed in Redis 5.0.0

While running the unit tests (bundle exec rake test TESTOPTS="-v"), I noticed the log is cluttered with instances of the following warning:

Pipelining commands on a Redis instance is deprecated and will be removed in Redis 5.0.0.

redis.pipelined do
  redis.get("key")
end

should be replaced by

redis.pipelined do |pipeline|
  pipeline.get("key")
end

(called from /Users/vendetti/.rbenv/versions/2.6.8/lib/ruby/gems/2.6.0/bundler/gems/ncbo_annotator-2725353b0852/lib/ncbo_annotator.rb:446:in `annotate_direct')

test_generate_dictionary_file unit test intermittently fails

The test_generate_dictionary_file unit test is intermittently failing on this line:

assert refresh_timestamp > start_timestamp

See the following GitHub Action runs for examples:

https://github.com/ncbo/ncbo_annotator/runs/5698818078
https://github.com/ncbo/ncbo_annotator/runs/6068868691

In subsequent commits with no related code changes, the test passes. The overall functionality of the Annotator appears unaffected.
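One speculative explanation: if the compared timestamps carry only one-second resolution, a refresh that completes within the same second as the initial generation fails the strict > comparison. A self-contained sketch of that failure mode and one possible stabilization; all names are hypothetical, not the actual test's code:

require 'minitest/autorun'
require 'tmpdir'

class DictionaryTimestampTest < Minitest::Test
  def test_refresh_updates_timestamp
    path = File.join(Dir.tmpdir, 'dictionary.txt')
    File.write(path, "1\tterm")
    start_timestamp = File.mtime(path)
    sleep 1                       # guard against same-second timestamps
    File.write(path, "1\tterm")   # stands in for the dictionary refresh
    refresh_timestamp = File.mtime(path)
    assert refresh_timestamp > start_timestamp
  end
end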

Missing ancestors in hierarchy from annotator?

If we look at https://data.bioontology.org/ontologies/MEDDRA/classes/http%3A%2F%2Fpurl.bioontology.org%2Fontology%2FMEDDRA%2F10042945/paths_to_root to see the paths_to_root from MEDDRA/10042945, we see that there are 3 paths to root, each with 3 ancestors.

When annotating the text "Systemic lupus erythematosus" with expand_class_hierarchy=true and focusing on MEDDRA (https://data.bioontology.org/annotator?text=Systemic%20lupus%20erythematosus&class_hierarchy_max_level=25&expand_class_hierarchy=true&ontologies=MEDDRA) the first item in the result is MEDDRA/10042945 mentioned above. It contains hierarchy items with distances [1, 1, 1, 2, 3], but I expected it to have more items with distances [1, 2, 3, 1, 2, 3, 1, 2, 3] to reflect all the ancestors seen in the first link above.

Issues with some special characters in annotator api

Request:

from typing import Any, Dict

import requests

apikey = ""
text = "Parkinson Disease % Pneumonia"
ontologies_to_search = ["MONDO"]
params: Dict[str, Any] = {
    "apikey": apikey,
    "format": "json",
    "ontologies": ",".join(ontologies_to_search),
    "mappings": "true",
    "longest_only": "true",
    "exclude_synonyms": "false",
    "expand_class_hierarchy": "false",
    "class_hierarchy_max_level": 0,
    # requests percent-encodes query values, so '%' is sent as '%25'
    "text": text,
}
url = "http://services.data.bioontology.org/annotatorplus"
r = requests.get(url=url, params=params)
r.raise_for_status()

Issue with %

If the input text contains a standalone % (note the surrounding whitespace in the sample below), the API returns a 500 Internal Server Error.

Sample input:
text=Parkinson Disease % Pneumonia

Server response:

<body>
    <h1>HTTP Status 500 – Internal Server Error</h1>
    <hr class="line" />
    <p><b>Type</b> Exception Report</p>
    <p><b>Message</b> Unexpected end of input at 1:1</p>
    <p><b>Description</b> The server encountered an unexpected condition that prevented it from fulfilling the request.
    </p>
    <p><b>Exception</b></p>
    <pre>com.eclipsesource.json.ParseException: Unexpected end of input at 1:1
	com.eclipsesource.json.JsonParser.error(JsonParser.java:490)
	com.eclipsesource.json.JsonParser.expected(JsonParser.java:484)
	com.eclipsesource.json.JsonParser.readValue(JsonParser.java:193)
	com.eclipsesource.json.JsonParser.parse(JsonParser.java:152)
	com.eclipsesource.json.JsonParser.parse(JsonParser.java:91)
	com.eclipsesource.json.Json.parse(Json.java:295)
	org.sifrproject.annotations.input.BioPortalJSONAnnotationParser.parseAnnotations(BioPortalJSONAnnotationParser.java:65)
	org.sifrproject.servlet.AnnotatorServlet.doPost(AnnotatorServlet.java:177)
	org.sifrproject.servlet.AnnotatorServlet.doGet(AnnotatorServlet.java:118)
	javax.servlet.http.HttpServlet.service(HttpServlet.java:655)
	javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
	org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
	org.sifrproject.util.CharacterSetFilter.doFilter(CharacterSetFilter.java:24)
</pre>
    <p><b>Note</b> The full stack trace of the root cause is available in the server logs.</p>
    <hr class="line" />
    <h3>Apache Tomcat/9.0.62</h3>
</body>
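A raw % in a URL begins a percent-escape, so an unencoded % can reach the server as truncated input; percent-encoding the text before building the URL is worth testing. For example, in Ruby:

require 'cgi'

# '%' must travel as '%25'; CGI.escape also encodes spaces as '+'.
CGI.escape('Parkinson Disease % Pneumonia')
# => "Parkinson+Disease+%25+Pneumonia"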

Issue with ;

  1. If the text prefix contains ;, the API returns 200 OK, but the body contains an error

    Sample input:
    text: ;Disease

    Sample output:

    [
        {
            "error": "{"errors":["A text to be annotated must be supplied using the argument text=<text to be annotated>"],"status":400}
    "}]
    
  2. If the text contains ;, only entities before the ; are annotated.
    Sample input1 (no semicolon):
    text: PARKINSON DISEASE PARKINSON's DISEASE

Sample output1:

[
    {
        "annotatedClass": {
            "definition": [
                "A progressive degenerative disorder of the central nervous system characterized by loss of dopamine producing neurons in the substantia nigra and the presence of Lewy bodies in the substantia nigra and locus coeruleus. Signs and symptoms include tremor which is most pronounced during rest, muscle rigidity, slowing of the voluntary movements, a tendency to fall back, and a mask-like facial expression."
            ],
            "prefLabel": "Parkinson disease",
            "synonym": [
                "paralysis agitans",
                "Parkinson disease",
                "Parkinson's disease"
            ],
..........................
        "hierarchy": [],
        "annotations": [
            {
                "from": 1,
                "to": 17,
                "matchType": "PREF",
                "text": "PARKINSON DISEASE"
            },
            {
                "from": 19,
                "to": 37,
                "matchType": "SYN",
                "text": "PARKINSON'S DISEASE"
            }
        ],
        "mappings": []
    }
]

Sample input2:
text = PARKINSON DISEASE; PARKINSON's DISEASE

Sample output2:

    [
        {
            "annotatedClass": {
                "definition": [
                    "A progressive degenerative disorder of the central nervous system characterized by loss of dopamine producing neurons in the substantia nigra and the presence of Lewy bodies in the substantia nigra and locus coeruleus. Signs and symptoms include tremor which is most pronounced during rest, muscle rigidity, slowing of the voluntary movements, a tendency to fall back, and a mask-like facial expression."
                ],
                "prefLabel": "Parkinson disease",
                "synonym": [
                    "paralysis agitans",
                    "Parkinson disease",
                    "Parkinson's disease"
                ],
    ............................
            "annotations": [
                {
                    "from": 1,
                    "to": 17,
                    "matchType": "PREF",
                    "text": "PARKINSON DISEASE"
                }
            ],
            "mappings": []
        }
    ]

As shown above, only the first occurrence of PARKINSON DISEASE was annotated; everything after the semicolon was dropped. Some servers treat an unencoded ; as a query-parameter separator, so percent-encoding it as %3B may be a viable workaround.

Internal Server Error when expand_mappings=true is used in conjunction with include=

Semantic types aren't universally recognized

Per user report:

I have been experimenting with the annotator found here:
https://bioportal.bioontology.org/annotator

Settings:
Text entered into annotator: invasive mammary carcinoma
Select ontology: National Cancer Institute Thesaurus (NCIT)
Select UMLS semantic type: neoplastic process

Result: no annotations found

I do not understand why the result is 'No annotations found'. The term 'invasive mammary carcinoma' does exist in the NCIT, with semantic type 'Neoplastic process'.

https://bioportal.bioontology.org/ontologies/NCIT?p=classes&conceptid=http%3A%2F%2Fncicb.nci.nih.gov%2Fxml%2Fowl%2FEVS%2FThesaurus.owl%23C9245

Maximum size limited for NCBO annotator service?

When I submit a very long text query to the NCBO annotator service using Python 3.5 with urllib3, it gives me this error:

exceptions.MaxRetryError: HTTPConnectionPool(host='data.bioontology.org', port=80): Max retries exceeded with url: /annotator?text=Prevention+and+Early+Detection+of+ ... (Caused by ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')))

When I query short text to annotate, it is processed without any problem, but when the query text grows to around 20 KB, I get the error above.

Presumably there is a maximum query length allowed by the annotator service. If so, can you tell me exactly how large it is?

For your information, I have attached my Python code.

Thanks,
Jeongmin

import json
import urllib3
import urllib
import traceback
import sys
import re
import glob
from time import sleep

# user parameters
TEXT_DIR = '../data/text/'
JSON_DIR = '../data/json/'

apikey=''
REST_URL = "http://data.bioontology.org"

ontology_list = 'ICD9CM,LOINC,NDFRT,RXNORM,SNOMEDCT'
tui_list = 'T017,T029,T023'
options = '&longest_only=true&exclude_numbers=false&whole_word_only=true&exclude_synonyms=false'
param = '&ontologies=' + ontology_list + '&semantic_types=' + tui_list + options


def get_json(text):
    # create request_url
    request_url = REST_URL + "/annotator?text=" + text.replace(' ','+') + param + "&apikey=" + apikey
    # get data as json type
    http = urllib3.PoolManager() 
    r = http.request('POST', request_url, headers={'Authorization': 'apikey token=' + apikey})
    print('request_url: '+request_url)
    print('http status: '+ str(r.status))
    data_json = json.loads(r.data.decode('utf-8'))
    return data_json

def main():
    for filename in glob.glob(TEXT_DIR+'*.txt'):
        # for each file load file 
        text = ''
        lines = open(filename,"r").read().splitlines()
        for l in lines:
            text = text + l.rstrip()
        # remove special characters
        text = re.sub('[^A-Za-z0-9]+', ' ', text)
        # get json
        data = get_json(text)
        # save to json file
        filename_nodir = filename.split('/')[-1].split('.')[0]
        json_fn = '' + filename_nodir + '.json'
        # print(json_fn)
        with open(JSON_DIR+json_fn, 'w') as outfile:
            json.dump(data, outfile)

if __name__ == "__main__":
    main()
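A connection reset at around 20 KB is consistent with a URL length limit somewhere in the stack, since the code above places the entire text in the query string. A hedged sketch in Ruby (the repository's language) of submitting the text in a form-encoded POST body instead; that the endpoint accepts its parameters this way is an assumption worth verifying:

require 'net/http'
require 'uri'
require 'json'

apikey = ''
text = File.read('../data/text/sample.txt') # hypothetical path to the long query text
uri = URI('http://data.bioontology.org/annotator')
# The text travels in the request body, not the URL, so URL-length
# limits no longer apply.
response = Net::HTTP.post_form(uri,
                               'apikey' => apikey,
                               'text' => text,
                               'ontologies' => 'ICD9CM,LOINC,NDFRT,RXNORM,SNOMEDCT',
                               'semantic_types' => 'T017,T029,T023',
                               'longest_only' => 'true')
annotations = JSON.parse(response.body)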

Result sets include classes that no longer exist in latest submission

To reproduce, execute the following REST call that includes ancestors specified to 3 levels deep:

http://data.bioontology.org/annotator?text=Melanoma%20is%20a%20malignant%20tumor%20of%20melanocytes%20which%20are%20found%20predominantly%20in%20skin%20but%20also%20in%20the%20bowel%20and%20the%20eye.&ontologies=MEDLINEPLUS&expand_class_hierarchy=true&class_hierarchy_max_level=3

In the resulting JSON, there's one ancestral annotatedClass object with a distance of 2 that no longer exists in the latest submission of the MEDLINEPLUS ontology.


Clicking on the "self" link results in a 404 error:

"Resource 'http://purl.bioontology.org/ontology/MEDLINEPLUS/C1456590' not found in ontology MEDLINEPLUS submission 13"

@alexskr performed a full regeneration of the Annotator cache for all ontologies on Oct. 2nd. I checked the log file on the production parsing box (/srv/ncbo/ncbo_cron/logs/cache.log) and see no errors for the MEDLINEPLUS ontology:

I, [2019-10-02T22:08:06.271420 #22045]  INFO -- : Creating Annotator cache for http://data.bioontology.org/ontologies/MEDLINEPLUS (http://data.bioontology.org/ontologies/MEDLINEPLUS/submissions/13) - 1142/1238 ontologies
I, [2019-10-02T22:08:06.333339 #22045]  INFO -- : ["Caching classes of MEDLINEPLUS"]
I, [2019-10-02T22:08:07.448444 #22045]  INFO -- : ["Page 1 of 1 - 2258 classes retrieved in 1.110196227 sec."]
I, [2019-10-02T22:08:20.611879 #22045]  INFO -- : ["Page 1 of 1 cached in Annotator in 13.163305221 sec."]
I, [2019-10-02T22:08:20.703788 #22045]  INFO -- : ["Completed caching ontology: MEDLINEPLUS (http://data.bioontology.org/ontologies/MEDLINEPLUS/submissions/13) in 14.278680390212685 sec. 2258 classes."]

A side effect of this issue is that our example code for working with the Annotator throws exceptions - so far noticed by at least one end user (see: ncbo/ncbo_rest_sample_code#5)
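Until stale entries are purged, client code can guard against them by tolerating 404s when dereferencing annotatedClass links. A hypothetical Ruby sketch (not the sample code repository's actual implementation):

require 'net/http'
require 'uri'
require 'json'

# Treat a 404 on an annotatedClass "self" link as a stale cache entry
# and skip it rather than raising.
def fetch_class(self_link, apikey)
  uri = URI(self_link)
  request = Net::HTTP::Get.new(uri)
  request['Authorization'] = "apikey token=#{apikey}"
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(request)
  end
  return nil unless response.is_a?(Net::HTTPSuccess)
  JSON.parse(response.body)
end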

hyphenated terms not found by annotator given non-hyphenated words

Recently we used our lookup tool, which calls the Annotator, and noticed a discrepancy between how the Annotator searches for ontology class matches and how a manual search behaves. The Annotator does not seem to have the flexibility to deal with hyphens or similar characters, while a manual search in an ontology will find matches. For example, given the input term “sodium iodide symporter”: MESH has a “sodium-iodide symporter”, but the Annotator does not find it; instead, it matches only “sodium iodide” (see Excel attachment). Is this an issue you are already aware of? If so, is there a plan for an Annotator version update, or would a fix be simple enough to implement in the near future?

annotator cache generation failure - Unsupported command argument type: RDF::URI

Annotator fails to create cache for a number of ontologies with an error Unsupported command argument type: RDF::URI

According to the parsing logs the annotator cache was generated successfully in the past but started to fail consistently. This issue applies to both 4store (prod) and AG (stage) backends.

example ontologies: FOVT GSSO BMTO ONS PP PORO BSPO PLACES ECTO EDAMT VTO NeuroFMA NCI MMIRNAO DATASET MLTX PEAO ITO IOBC

E, [2023-01-18T10:02:09.765678 #11136] ERROR -- : Unsupported command argument type: RDF::URI
    redis-client-0.11.2/lib/redis_client/command_builder.rb:75:in `block in generate'
    redis-client-0.11.2/lib/redis_client/command_builder.rb:68:in `map!'
    redis-client-0.11.2/lib/redis_client/command_builder.rb:68:in `generate'
    redis-client-0.11.2/lib/redis_client.rb:218:in `call_v'
    redis-5.0.5/lib/redis/client.rb:73:in `call_v'
    redis-5.0.5/lib/redis.rb:167:in `block in send_command'
    redis-5.0.5/lib/redis.rb:166:in `synchronize'
    redis-5.0.5/lib/redis.rb:166:in `send_command'
    redis-5.0.5/lib/redis/commands/hashes.rb:26:in `hset'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:676:in `create_term_entry'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:337:in `block (3 levels) in create_term_cache_for_submission'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:336:in `each'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:336:in `block (2 levels) in create_term_cache_for_submission'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:292:in `each'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:292:in `block in create_term_cache_for_submission'
    ruby/2.7.0/benchmark.rb:308:in `realtime'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:273:in `create_term_cache_for_submission'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:190:in `block in create_term_cache_from_ontologies'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:166:in `each_index'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:166:in `create_term_cache_from_ontologies'
    ncbo_annotator-71d41e3afb35/lib/ncbo_annotator.rb:106:in `create_term_cache'
    bin/ncbo_ontology_annotate_generate_cache:113:in `block in <top (required)>'
    ruby/2.7.0/benchmark.rb:308:in `realtime'
    bin/ncbo_ontology_annotate_generate_cache:112:in `<top (required)>'
    ... (gem paths abbreviated; trailing bundler/thor invocation frames omitted)
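The trace shows create_term_entry handing an RDF::URI directly to hset, and redis-client 0.11+ no longer coerces arbitrary objects to strings. A sketch of the kind of fix, with the method name taken from the trace and the body an assumption:

# Hypothetical fix sketch: stringify values explicitly before they reach
# redis-client, which accepts only strings and numerics as arguments.
def create_term_entry(redis, key, field, value)
  redis.hset(key.to_s, field.to_s, value.to_s)
end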

deprecation warnings `Redis#exists(key)` will return an Integer in redis-rb 4.3

Some code in Annotator throws warnings:

Redis#exists(key) will return an Integer in redis-rb 4.3. exists? returns a boolean, you should use it instead. To opt-in to the new behavior now you can set Redis.exists_returns_integer = true. To disable this message and keep the current (boolean) behaviour of 'exists' you can set Redis.exists_returns_integer = false, but this option will be removed in 5.0. (/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/sparql-client-2ab48e2b7b2a/lib/sparql/client.rb:419:in `block in cache_invalidate_graph')

deprecation warnings `Redis#exists(key)` will return an Integer in redis-rb 4.3 #3

The ncbo_cron logs show deprecation warnings:
Redis#exists(key) will return an Integer in redis-rb 4.3. exists? returns a boolean, you should use it instead. To opt-in to the new behavior now you can set Redis.exists_returns_integer = true. To disable this message and keep the current (boolean) behaviour of 'exists' you can set Redis.exists_returns_integer = false, but this option will be removed in 5.0. (/srv/ontoportal/ncbo_cron_deployments/shared/bundle/ruby/2.5.0/bundler/gems/ncbo_annotator-08176ef39822/lib/ncbo_annotator.rb:118:in `generate_dictionary_file')
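As with the pipelining warning above, the fix the message itself suggests is mechanical:

redis.exists(key)

should be replaced by

redis.exists?(key)

which always returns a boolean, matching the current behavior.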
