
mmdbctl's Issues

MMDBs with nested structures break read and export

When attempting to use this project to export from an MMDB with nested objects, it throws an error: err: failed to get record for next subnet: maxminddb: cannot unmarshal map into type string. The read operation fails similarly: err: couldn't get data for <requested_ip>.

In both cases the failure seems to stem from the assumption that records are map[string]string rather than map[string]interface{}. Unfortunately, fixing that opens up a whole host of problems with representing records in output formats that cannot express nested structures natively, like CSV/TSV.
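For reference, a minimal sketch of the fix on the read side, assuming the record target simply becomes map[string]interface{} when decoding with the underlying github.com/oschwald/maxminddb-golang reader (the file name and IP below are placeholders):

package main

import (
	"fmt"
	"log"
	"net"

	"github.com/oschwald/maxminddb-golang"
)

func main() {
	db, err := maxminddb.Open("nested.mmdb") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// map[string]interface{} tolerates nested maps and arrays, unlike
	// map[string]string, which fails with "cannot unmarshal map into
	// type string" as soon as a value is itself a map.
	var record map[string]interface{}
	if err := db.Lookup(net.ParseIP("1.2.3.4"), &record); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%#v\n", record)
}

Flat formats like CSV/TSV would then still need either a flattening convention (e.g. dotted key paths) or an explicit error for nested values.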

File size difference

GeoLite2-City.mmdb (64.2 MB) exported to city.json (4.53 GB).
city.json (4.53 GB) imported to city.mmdb (287 MB).

Why is the re-imported file (city.mmdb, 287 MB) so much larger than the original (GeoLite2-City.mmdb, 64.2 MB)?

Am I missing any parameters?
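One thing worth comparing is the search-tree metadata of the two files; a different record size or node count alone changes the tree size considerably. A small sketch using github.com/oschwald/maxminddb-golang to print those numbers (file names as above):

package main

import (
	"fmt"
	"log"

	"github.com/oschwald/maxminddb-golang"
)

func main() {
	for _, path := range []string{"GeoLite2-City.mmdb", "city.mmdb"} {
		db, err := maxminddb.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		m := db.Metadata
		// Each tree node stores two records of record_size bits, so the
		// tree occupies node_count * record_size / 4 bytes.
		fmt.Printf("%s: record size %d, node count %d, tree ~%d bytes\n",
			path, m.RecordSize, m.NodeCount, m.NodeCount*m.RecordSize/4)
		db.Close()
	}
}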

panic: runtime error on iplocation.mmdb

Hello,
I am running into errors using mmdbctl_1.4.4_linux_386 verify with iplocation.mmdb (e.g. standard_location.mmdb).
Maybe it's related to the large size of the MMDB?

./mmdbctl_1.4.4_linux_386 verify iplocation.mmdb
panic: runtime error: index out of range [654620380] with length 654620376

goroutine 1 [running]:
github.com/oschwald/maxminddb-golang.nodeReader32.readRight(...)
        /home/runner/go/pkg/mod/github.com/oschwald/[email protected]/node.go:54
github.com/oschwald/maxminddb-golang.(*Networks).Next(0x9c50090)
        /home/runner/go/pkg/mod/github.com/oschwald/[email protected]/traverse.go:142 +0x195
github.com/oschwald/maxminddb-golang.(*verifier).verifySearchTree(0x9c56f08)
        /home/runner/go/pkg/mod/github.com/oschwald/[email protected]/verifier.go:106 +0x7e
github.com/oschwald/maxminddb-golang.(*verifier).verifyDatabase(0x9c56f08)
        /home/runner/go/pkg/mod/github.com/oschwald/[email protected]/verifier.go:90 +0x25
github.com/oschwald/maxminddb-golang.(*Reader).Verify(0x9c00180)
        /home/runner/go/pkg/mod/github.com/oschwald/[email protected]/verifier.go:21 +0x51
github.com/ipinfo/mmdbctl/lib.CmdVerify({0x0}, {0x9c10438, 0x1, 0x1}, 0x825e31c)
        /home/runner/work/mmdbctl/mmdbctl/lib/cmd_verify.go:49 +0x142
main.cmdVerify()
        /home/runner/work/mmdbctl/mmdbctl/cmd_verify.go:35 +0x100
main.main()
        /home/runner/work/mmdbctl/mmdbctl/main.go:42 +0x141

Low-level data about MMDB

Hey @UmanShahzad.

To make mmdbctl even more awesome, it would be great to be able to display some low-level data about an MMDB file, such as

  • Tree size in bytes
  • Data section start/end offsets
  • Data section size in bytes
  • Metadata section start offset

This is helpful, for example, if you want to inspect the actual data section with hexdump, or if you want to estimate the relative contribution of the tree and data sections to the file size.

A simple example: let's say we want to find out whether the MMDB writer deduplicates written objects (replacing duplicates with pointers) or not. I'll use my MMDB parser to display the offsets mentioned above.

  • Case 1 -- write two different objects:
$ echo -e '{"range":"1.0.0.0/24","value":{"col":"nested1"}}\n{"range":"2.0.0.0/24", "value":{"col":"nested2"}}' | mmdbctl import --no-network -j -o test.mmdb
writing to test.mmdb (2 entries)

$ python3 ./parser.py  test.mmdb
Namespace(file='test.mmdb', meta=False, data=None, ip=None)
Data section offset 1096 (data starts at 1112)  # <===
Metadata section offset: 1146 (metadata starts at 1160)
Data section size 34 bytes (3.4e-05 MB)  # <===
Record size: 32
Node count: 137
Tree size: 1096 (bytes)
ip_version: 6
First data record at 153 pointer

# Knowing the offset/size, we can inspect specific portion of the file:
$ hd -s 1112 -n 34 test.mmdb
00000458  e1 45 76 61 6c 75 65 e1  43 63 6f 6c 47 6e 65 73  |.Evalue.CcolGnes|
00000468  74 65 64 31 e1 20 01 e1  20 08 47 6e 65 73 74 65  |ted1. .. .Gneste|
00000478  64 32                                             |d2|
0000047a
  • Case 2 -- write duplicate objects:
$ echo -e '{"range":"1.0.0.0/24","value":{"col":"nested1"}}\n{"range":"2.0.0.0/24", "value":{"col":"nested1"}}' | mmdbctl import --no-network -j -o test.mmdb
writing to test.mmdb (2 entries)

$ python3 ./parser.py  test.mmdb
Namespace(file='test.mmdb', meta=False, data=None, ip=None)
Data section offset 1096 (data starts at 1112)  # <===
Metadata section offset: 1132 (metadata starts at 1146)
Data section size 20 bytes (2e-05 MB)   # <===
Record size: 32
Node count: 137
Tree size: 1096 (bytes)
ip_version: 6
First data record at 153 pointer

$ hd -s 1112 -n 20 test.mmdb
00000458  e1 45 76 61 6c 75 65 e1  43 63 6f 6c 47 6e 65 73  |.Evalue.CcolGnes|
00000468  74 65 64 31                                       |ted1|   # Note it removed whole second object and tree points directly to the first one
0000046c

So it does deduplicate objects. Looks like it even deduplicates nested objects, which is great.

The point is that it's really convenient to know these offsets when doing this kind of inspection.

I'm not sure whether the Go MMDB reader exposes this data, but it should be easy to find the section separators (see the spec) even without parsing the file, e.g. by mmap-ing the file and using string search functions: https://github.com/svbatalov/construct_mmdb_parser/blob/11b13ef946b7d85cec4e21a538af49b5b44f22a1/parser.py#L13-L19
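For what it's worth, a rough Go sketch of that marker-search approach (per the MMDB spec, the metadata section begins immediately after the last occurrence of the 14-byte marker \xab\xcd\xefMaxMind.com; the file name is a placeholder):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("test.mmdb") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}

	// Per the spec, the metadata section starts right after the last
	// occurrence of this 14-byte marker.
	marker := []byte("\xab\xcd\xefMaxMind.com")
	i := bytes.LastIndex(data, marker)
	if i < 0 {
		log.Fatal("metadata marker not found")
	}
	fmt.Printf("metadata marker at offset %d, metadata starts at %d\n",
		i, i+len(marker))
	// Combined with node_count and record_size from the metadata, the
	// rest follows: tree size = node_count * record_size / 4 bytes, and
	// the data section begins 16 (zero) bytes after the tree.
}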

Thanks,
Sergey

Memory Issue

So I have been playing around with mmdbctl, doing a simple MMDB export to JSON and an import back to MMDB. However, the process took up to 46 GB of memory. Luckily my machine had plenty of memory, so the import succeeded, but systems with less memory may run into problems.

I did some digging into the source and found that mmdbctl holds all data in memory and writes it to the file in one go after processing is done. Could we do buffered writing and flush incrementally to prevent huge memory usage?

fmt.Fprintf(os.Stderr, "writing to %s (%v entries)\n", f.Out, entrycnt)
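For the write path, a sketch of what buffered output could look like, assuming the serializer is github.com/maxmind/mmdbwriter's Tree.WriteTo (note this only smooths out the final write; with mmdbwriter the whole tree still has to fit in memory, so it wouldn't fix peak usage on its own):

package main

import (
	"bufio"
	"log"
	"os"

	"github.com/maxmind/mmdbwriter"
)

func writeTree(tree *mmdbwriter.Tree, path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// Wrap the file in a buffered writer so serialization is flushed in
	// large chunks instead of many small writes.
	w := bufio.NewWriterSize(f, 1<<20)
	if _, err := tree.WriteTo(w); err != nil {
		return err
	}
	return w.Flush()
}

func main() {
	// Hypothetical usage; building a real tree is omitted here.
	tree, err := mmdbwriter.New(mmdbwriter.Options{DatabaseType: "example"})
	if err != nil {
		log.Fatal(err)
	}
	if err := writeTree(tree, "out.mmdb"); err != nil {
		log.Fatal(err)
	}
}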

Task Manager: [screenshot 2024-02-08 161631]

MMDB file used: GeoLite2-City.mmdb

Is conversion of the ipapi CSVs supported?

I want to use the CSV DB file from https://ipapi.is/developers.html#geolocation-database, but when I try to convert it with:
mmdbctl import --in geolocationDatabaseIPv4.csv --out data.mmdb
I get this error:
err: couldn't parse cidr "4/32": invalid CIDR address: 4/32

Why is it giving this error?
Could you look into making the ipapi CSVs compatible with mmdbctl?

Thanks in advance.

PS: I'm surprised no one has mentioned the ipapi.is DBs.

Compatibility with MaxMind datasets

For some reason mmdbctl doesn't work with MaxMind's GeoIP and GeoLite datasets: reads for any IP return err: couldn't get data for x.x.x.x.

mmdbctl read -f json-pretty 8.8.8.8 ./GeoLite2-Country.mmdb 
err: couldn't get data for 8.8.8.8

mmdbctl read -f json-pretty 1.1.1.0/31 ./GeoIP2-Country.mmdb
err: couldn't get data for 1.1.1.0
err: couldn't get data for 1.1.1.1

The IPinfo dataset works fine:

mmdbctl read -f json-pretty 8.8.8.8 ./Ipinfo-Country.mmdb 
{
  "continent": "NA",
  "continent_name": "North America",
  "country": "US",
  "country_name": "United States"
}

Metadata:

mmdbctl metadata ./GeoIP2-Country.mmdb 
- Binary Format 2.0
- Database Type GeoIP2-Country
- IP Version    6
- Record Size   24
- Node Count    1201540
- Description   
    en GeoIP2 Country database
- Languages     de, en, es, fr, ja, pt-BR, ru, zh-CN
- Build Epoch   1647287311

Is it possible to make them work together?

Corrupted Database

Hi,
I'm trying to update the MMDB database, since some IPs are missing data that Graylog needs in order to process the logs.
However, after I update the MMDB, the Graylog lookup table doesn't work.
The result from the test is this:

{
  "single_value": null,
  "multi_value": null,
  "string_list_value": null,
  "has_error": false,
  "ttl": 9223372036854776000
}

Normally, it would be this:

{
  "single_value": "TZ",
  "multi_value": {
    "continent": {
      "code": "AF",
      "geoname_id": 6255146,
      "names": {
        "de": "Afrika",
        "ru": "Африка",
        "pt-BR": "África",
        "ja": "アフリカ",
        "en": "Africa",
        "fr": "Afrique",
        "zh-CN": "非洲",
        "es": "África"
      }
    },
    "country": {
      "confidence": null,
      "geoname_id": 149590,
      "is_in_european_union": false,
      "iso_code": "TZ",
      "names": {
        "de": "Tansania",
        "ru": "Танзания",
        "pt-BR": "Tanzânia",
        "ja": "タンザニア連合共和国",
        "en": "Tanzania",
        "fr": "Tanzanie",
        "zh-CN": "坦桑尼亚",
        "es": "Tanzania"
      }
    },
    "traits": {
      "autonomous_system_number": null,
      "autonomous_system_organization": null,
      "connection_type": null,
      "domain": null,
      "ip_address": "41.59.208.37",
      "is_anonymous": false,
      "is_anonymous_proxy": false,
      "is_anonymous_vpn": false,
      "is_anycast": false,
      "is_hosting_provider": false,
      "is_legitimate_proxy": false,
      "is_public_proxy": false,
      "is_residential_proxy": false,
      "is_satellite_provider": false,
      "is_tor_exit_node": false,
      "isp": null,
      "mobile_country_code": null,
      "mobile_network_code": null,
      "network": "41.59.0.0/16",
      "organization": null,
      "user_type": null,
      "user_count": null,
      "static_ip_score": null
    },
    "represented_country": {
      "confidence": null,
      "geoname_id": null,
      "is_in_european_union": false,
      "iso_code": null,
      "names": {},
      "type": null
    },
    "registered_country": {
      "confidence": null,
      "geoname_id": 149590,
      "is_in_european_union": false,
      "iso_code": "TZ",
      "names": {
        "de": "Tansania",
        "ru": "Танзания",
        "pt-BR": "Tanzânia",
        "ja": "タンザニア連合共和国",
        "en": "Tanzania",
        "fr": "Tanzanie",
        "zh-CN": "坦桑尼亚",
        "es": "Tanzania"
      }
    }
  },
  "string_list_value": null,
  "has_error": false,
  "ttl": 9223372036854776000
}

What I have done:

  1. Installed the app:
wget "https://github.com/ipinfo/mmdbctl/releases/download/mmdbctl-1.4.4/mmdbctl_1.4.4_linux_amd64.tar.gz" && tar xvf mmdbctl_1.4.4_linux_amd64.tar.gz && rm mmdbctl_1.4.4_linux_amd64.tar.gz && mv mmdbctl_1.4.4_linux_amd64 /usr/local/bin/mmdbctl
sudo apt-get update && sudo apt-get install jq
  2. Exported:
    mmdbctl export --in ${importfrom} --out ${exportto}

  3. Changed the content using Python:

import json

input_file = '/opt/scripts/tmp/GeoLite2-City.json'
output_file = '/opt/scripts/tmp/GeoLite2-City-Changes.json'

def add_subdivisions_to_json_line(json_line):
    data = json.loads(json_line)
    original_data = data.copy()  # Make a copy of the original data for comparison

    if 'subdivisions' not in data:
        country_info = data.get('country', {})
        if country_info:
            subdivisions = [{
                "geoname_id": country_info.get("geoname_id"),
                "iso_code": country_info.get("iso_code"),
                "names": {"en": country_info["names"]["en"]}
            }]
            data['subdivisions'] = subdivisions

    return data, data != original_data  # Return both the data and a flag indicating if a change was made

with open(input_file, 'r') as infile, open(output_file, 'w') as outfile:
    for line in infile:
        updated_data, is_changed = add_subdivisions_to_json_line(line)
        if is_changed:
            updated_line = json.dumps(updated_data)
            # Note: only changed records are written; records that already
            # had subdivisions (or had no country info) are dropped here.
            outfile.write(updated_line + '\n')

print(f"Processed file saved as {output_file}")
  4. Imported the changes to the MMDB:
    mmdbctl import --disable-metadata-pointers false --in ${updatedjson} --out ${mmdbbkp}

  5. Tests:
    mmdbctl metadata GeoLite2-City-updated.mmdb

  • Binary Format 2.0
  • Database Type ipinfo GeoLite2-City-updated.mmdb
  • IP Version 6
  • Record Size 32
  • Node Count 3971781
  • Description
    en ipinfo GeoLite2-City-updated.mmdb
  • Languages en
  • Build Epoch 1721934123

mmdbctl metadata GeoLite2-City.mmdb

  • Binary Format 2.0
  • Database Type GeoLite2-City
  • IP Version 6
  • Record Size 28
  • Node Count 3971880
  • Description
    en GeoLite2City database
  • Languages de, en, es, fr, ja, pt-BR, ru, zh-CN
  • Build Epoch 1721747951

mmdbctl read -f json-pretty 41.59.208.37 GeoLite2-City.mmdb # Original File
{
  "continent": {
    "code": "AF",
    "geoname_id": 6255146,
    "names": {
      "de": "Afrika",
      "en": "Africa",
      "es": "África",
      "fr": "Afrique",
      "ja": "アフリカ",
      "pt-BR": "África",
      "ru": "Африка",
      "zh-CN": "非洲"
    }
  },
  "country": {
    "geoname_id": 149590,
    "iso_code": "TZ",
    "names": {
      "de": "Tansania",
      "en": "Tanzania",
      "es": "Tanzania",
      "fr": "Tanzanie",
      "ja": "タンザニア連合共和国",
      "pt-BR": "Tanzânia",
      "ru": "Танзания",
      "zh-CN": "坦桑尼亚"
    }
  },
  "location": {
    "accuracy_radius": 1000,
    "latitude": -6.8227,
    "longitude": 39.2936,
    "time_zone": "Africa/Dar_es_Salaam"
  },
  "registered_country": {
    "geoname_id": 149590,
    "iso_code": "TZ",
    "names": {
      "de": "Tansania",
      "en": "Tanzania",
      "es": "Tanzania",
      "fr": "Tanzanie",
      "ja": "タンザニア連合共和国",
      "pt-BR": "Tanzânia",
      "ru": "Танзания",
      "zh-CN": "坦桑尼亚"
    }
  }
}

mmdbctl read -f json-pretty 41.59.208.37 GeoLite2-City-updated.mmdb # New File
{
  "continent": {
    "code": "AF",
    "geoname_id": 6255146,
    "names": {
      "de": "Afrika",
      "en": "Africa",
      "es": "África",
      "fr": "Afrique",
      "ja": "アフリカ",
      "pt-BR": "África",
      "ru": "Африка",
      "zh-CN": "非洲"
    }
  },
  "country": {
    "geoname_id": 149590,
    "iso_code": "TZ",
    "names": {
      "de": "Tansania",
      "en": "Tanzania",
      "es": "Tanzania",
      "fr": "Tanzanie",
      "ja": "タンザニア連合共和国",
      "pt-BR": "Tanzânia",
      "ru": "Танзания",
      "zh-CN": "坦桑尼亚"
    }
  },
  "location": {
    "accuracy_radius": 1000,
    "latitude": -6.8227,
    "longitude": 39.2936,
    "time_zone": "Africa/Dar_es_Salaam"
  },
  "network": "41.59.208.0/20",
  "registered_country": {
    "geoname_id": 149590,
    "iso_code": "TZ",
    "names": {
      "de": "Tansania",
      "en": "Tanzania",
      "es": "Tanzania",
      "fr": "Tanzanie",
      "ja": "タンザニア連合共和国",
      "pt-BR": "Tanzânia",
      "ru": "Танзания",
      "zh-CN": "坦桑尼亚"
    }
  },
  "subdivisions": [
    {
      "geoname_id": 149590,
      "iso_code": "TZ",
      "names": {
        "en": "Tanzania"
      }
    }
  ]
}

Graylog log:

WARN [MaxmindDataAdapter] Unable to look up city data for IP address /123.185.49.44, returning empty result.
java.lang.NullPointerException: Cannot invoke "Object.getClass()" because "parameters[index]" is null
at com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:450) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decodeMap(Decoder.java:341) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decodeByType(Decoder.java:162) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decode(Decoder.java:151) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decode(Decoder.java:89) ~[graylog.jar:?]
at com.maxmind.db.NoCache.get(NoCache.java:17) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decode(Decoder.java:116) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:434) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decodeMap(Decoder.java:341) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decodeByType(Decoder.java:162) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decode(Decoder.java:151) ~[graylog.jar:?]
at com.maxmind.db.Decoder.decode(Decoder.java:76) ~[graylog.jar:?]
at com.maxmind.db.Reader.resolveDataPointer(Reader.java:411) ~[graylog.jar:?]
at com.maxmind.db.Reader.getRecord(Reader.java:185) ~[graylog.jar:?]
at com.maxmind.geoip2.DatabaseReader.get(DatabaseReader.java:280) ~[graylog.jar:?]
at com.maxmind.geoip2.DatabaseReader.getCity(DatabaseReader.java:365) ~[graylog.jar:?]
at com.maxmind.geoip2.DatabaseReader.city(DatabaseReader.java:348) ~[graylog.jar:?]
at org.graylog.plugins.map.geoip.MaxMindIPLocationDatabaseAdapter.maxMindCity(MaxMindIPLocationDatabaseAdapter.java:39) ~[graylog.jar:?]
at org.graylog.plugins.map.geoip.MaxmindDataAdapter.doGet(MaxmindDataAdapter.java:186) ~[graylog.jar:?]
at org.graylog2.plugin.lookup.LookupDataAdapter.get(LookupDataAdapter.java:143) ~[graylog.jar:?]
at org.graylog2.lookup.LookupTable.lambda$lookup$0(LookupTable.java:73) ~[graylog.jar:?]
at org.graylog2.lookup.caches.CaffeineLookupCache.lambda$get$2(CaffeineLookupCache.java:161) ~[graylog.jar:?]
at com.github.benmanes.caffeine.cache.LocalCache.lambda$statsAware$2(LocalCache.java:167) ~[graylog.jar:?]
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2688) ~[graylog.jar:?]
at java.base/java.util.concurrent.ConcurrentHashMap.compute(Unknown Source) [?:?]
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2686) [graylog.jar:?]
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2669) [graylog.jar:?]
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:112) [graylog.jar:?]
at com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62) [graylog.jar:?]
at org.graylog2.lookup.caches.CaffeineLookupCache.get(CaffeineLookupCache.java:182) [graylog.jar:?]
at org.graylog2.lookup.LookupTable.lookup(LookupTable.java:73) [graylog.jar:?]
at org.graylog2.lookup.LookupTableService$Function.lookup(LookupTableService.java:635) [graylog.jar:?]
at org.graylog2.inputs.extractors.LookupTableExtractor.run(LookupTableExtractor.java:66) [graylog.jar:?]
at org.graylog2.plugin.inputs.Extractor.runExtractor(Extractor.java:226) [graylog.jar:?]
at org.graylog2.filters.ExtractorFilter.filter(ExtractorFilter.java:79) [graylog.jar:?]
at org.graylog2.messageprocessors.MessageFilterChainProcessor.process(MessageFilterChainProcessor.java:105) [graylog.jar:?]
at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.handleMessage(ProcessBufferProcessor.java:167) [graylog.jar:?]
at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.dispatchMessage(ProcessBufferProcessor.java:137) [graylog.jar:?]
at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:107) [graylog.jar:?]
at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:52) [graylog.jar:?]
at org.graylog2.shared.buffers.PartitioningWorkHandler.onEvent(PartitioningWorkHandler.java:52) [graylog.jar:?]
at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:167) [graylog.jar:?]
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:122) [graylog.jar:?]
at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66) [graylog.jar:?]
at java.base/java.lang.Thread.run(Unknown Source) [?:?]

MMDB sizes:
-rw-r--r-- 1 root root 237M Jul 26 00:38 GeoLite2-City-updated.mmdb
-rw-r--r-- 1 root root 50M Jul 25 07:28 GeoLite2-City.mmdb

JSON sizes:
-rw-r--r-- 1 root root 963M Jul 25 14:35 GeoLite2-City-Changes.json
-rw-r--r-- 1 root root 3.8G Jul 25 08:36 GeoLite2-City-Dump.json

Why is the database corrupted in such a way that Graylog can't read it but the command line can?

Granular data section metadata

As an extension to #25, it'd be nice to see not only the data section size but also a breakdown of the sizes of the individual data types within that section (e.g. string, pointer, int, map, etc.).

File extensions not triggering default flags in import

According to mmdbctl import help, the file extension should trigger the default flag:

    -t, --tsv
      interpret input file as TSV.
      by default, the .tsv extension will turn this on.

but it gives an error if the flag isn't specified explicitly:

$ mmdbctl import abc.csv 
err: input file type unknown

It gives the error for .json and .csv files as well.

Support for merging mmdb or csv files?

Is it supported to merge two MMDB or CSV files that record different information?

For example, one is an MMDB or CSV file that records country information, and the other records ASNs. Is it possible to merge these two files into one that contains both ASN and country information, just like the three free databases you provide officially?

If it is possible, can you give an example of a specific command? If it is not currently possible, could it be implemented in the future?
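I don't believe mmdbctl exposes a merge command, but as a sketch of the idea, MaxMind's github.com/maxmind/mmdbwriter library can load one database and overlay records from another source; the file names, network, and field names below are made up for illustration:

package main

import (
	"log"
	"net"
	"os"

	"github.com/maxmind/mmdbwriter"
	"github.com/maxmind/mmdbwriter/inserter"
	"github.com/maxmind/mmdbwriter/mmdbtype"
)

func main() {
	// Load the existing country database as the base tree.
	tree, err := mmdbwriter.Load("country.mmdb", mmdbwriter.Options{})
	if err != nil {
		log.Fatal(err)
	}

	// Overlay one ASN record; in practice this would loop over the
	// rows of the ASN CSV.
	_, network, err := net.ParseCIDR("8.8.8.0/24")
	if err != nil {
		log.Fatal(err)
	}
	asnData := mmdbtype.Map{
		"asn":    mmdbtype.Uint32(15169),
		"as_org": mmdbtype.String("Google LLC"),
	}
	if err := tree.InsertFunc(network, inserter.TopLevelMergeWith(asnData)); err != nil {
		log.Fatal(err)
	}

	f, err := os.Create("merged.mmdb")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := tree.WriteTo(f); err != nil {
		log.Fatal(err)
	}
}

inserter.TopLevelMergeWith merges the new top-level keys into whatever data is already stored for each network, which matches the country + ASN combination described above.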

Compatibility issue with IP2Location DBs, which use integer format for IP range start and end

Command: mmdbctl import -c --in IP2LOCATION-LITE-DB3.CSV --out IP2LOCATION-LITE-DB3.mmdb
Output: err: couldn't parse cidr "16777216/32": invalid CIDR address: 16777216/32

IP2LOCATION-LITE-DB3.CSV file format:

"0","16777215","-","-","-","-"
"16777216","16777471","US","United States of America","California","San Jose"
"16777472","16778239","CN","China","Fujian","Fuzhou"
"16778240","16778495","AU","Australia","Tasmania","Glebe"
"16778496","16779263","AU","Australia","Victoria","Melbourne"
"16779264","16781311","CN","China","Guangdong","Guangzhou"
"16781312","16785407","JP","Japan","Tokyo","Tokyo"
