
link-move's Introduction


LinkMove

LinkMove is a model-driven, dynamically-configurable framework for acquiring data from external sources and saving it in your database. Its primary motivation is to facilitate domain-driven design (DDD) architectures. In DDD terms, LinkMove is a tool for synchronizing data between related models from different "bounded contexts". It can also be used as a general-purpose ETL framework.

LinkMove connects data models in a flexible way that anticipates independent changes between sources and targets. It reuses your existing ORM mapping for the target database, reducing configuration to just describing the source. JDBC, XML, JSON, and CSV sources are supported out of the box.

Support

There are two options:

  • Open an issue on GitHub with a label of "help wanted" or "question" (or "bug" if you think you found a bug).
  • Post your question on the LinkMove forum.

Getting Started

Add LinkMove dependency:

<dependency>
    <groupId>com.nhl.link.move</groupId>
    <artifactId>link-move</artifactId>
    <version>3.0.M5</version>
</dependency>

The core module above supports relational and XML sources. The following optional modules may be added if you need to work with other formats:

<!-- for JSON -->
<dependency>
    <groupId>com.nhl.link.move</groupId>
    <artifactId>link-move-json</artifactId>
    <version>3.0.M5</version>
</dependency>
<!-- for CSV -->
<dependency>
    <groupId>com.nhl.link.move</groupId>
    <artifactId>link-move-csv</artifactId>
    <version>3.0.M5</version>
</dependency>

Use it:

// bootstrap a shared runtime that will run tasks
DataSource srcDS = ...;            // define how you'd connect to the data source
ServerRuntime targetRuntime = ...; // Cayenne setup for the data target; targets are mapped in Cayenne
File rootDir = ...;                // the parent dir of extractor XML descriptors

LmRuntime lm = LmRuntime.builder()
        .connector(JdbcConnector.class, "myconnector", new DataSourceConnector(srcDS))
        .targetRuntime(targetRuntime)
        .extractorModelsRoot(rootDir)
        .build();

// create a reusable task for a given transformation
LmTask task = lm.getTaskService()
        .createOrUpdate(MyTargetEntity.class)
        .sourceExtractor("my-etl.xml")
        .matchBy(MyTargetEntity.NAME)
        .task();

// run task, e.g. in a scheduled job
Execution e = task.run();
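
For instance, a minimal sketch of a scheduled job using only java.util.concurrent (assuming the task may simply be re-run at a fixed interval):

// run the ETL task once an hour using a plain JDK scheduler
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(task::run, 0, 1, TimeUnit.HOURS);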

Extractor XML Format

Extractor XML format is described by a formal schema: https://nhl.github.io/link-move/xsd/extractor_config_3.xsd

An example using a JDBC connector for the source data:

<?xml version="1.0" encoding="utf-8"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="https://nhl.github.io/link-move/xsd/extractor_config_3.xsd https://nhl.github.io/link-move/xsd/extractor_config_3.xsd"
        xmlns="https://nhl.github.io/link-move/xsd/extractor_config_3.xsd">
	
	<type>jdbc</type>
	<connectorId>myconnector</connectorId>
	
	<extractor>
		<!-- Optional source to target attribute mapping -->
		<attributes>
			<attribute>
				<type>java.lang.Integer</type>
				<source>AGE</source>
				<target>db:age</target>
			</attribute>
			<attribute>
				<type>java.lang.String</type>
				<source>DESCRIPTION</source>
				<target>db:description</target>
			</attribute>
			<attribute>
				<type>java.lang.String</type>
				<source>NAME</source>
				<target>db:name</target>
			</attribute>
		</attributes>
		<!-- JDBC connector properties. -->
		<properties>
			<!-- Query to run against the source. Supports full Cayenne 
			     SQLTemplate syntax, including parameters and directives.
			-->
			<extractor.jdbc.sqltemplate>
			       SELECT age, description, name FROM etl1
			</extractor.jdbc.sqltemplate>
		</properties>
	</extractor>
</config>

Logging Configuration

LinkMove uses the Slf4j logging abstraction, which works with most common logging frameworks (Log4j2, Logback, etc.). Whichever framework you use, you will need to configure the following log levels depending on the desired verbosity of your ETL tasks.

Logging ETL Progress

To log the progress of ETL tasks, configure the com.nhl.link.move.log logger. The following table shows what is logged at each log level:

| Log Level | What is Logged |
|-----------|----------------|
| WARN | Nothing |
| INFO | Task start/stop with stats |
| DEBUG | Same as INFO, but also includes start/stop of each segment with segment stats |
| TRACE | Same as DEBUG, but also includes IDs of all affected target objects (deleted, created, updated) |

Logging SQL

ETL-related SQL generated by Cayenne is extremely verbose and barely human-readable. You need to configure the org.apache.cayenne.log logger to turn it on and off:

| Log Level | What is Logged |
|-----------|----------------|
| WARN | Nothing |
| INFO | Cayenne-generated SQL queries and updates |
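
For example, a minimal Logback sketch (assuming Logback as the Slf4j backend) that logs task progress at INFO while keeping Cayenne SQL off:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- ETL progress; raise to DEBUG or TRACE for more detail -->
    <logger name="com.nhl.link.move.log" level="INFO"/>

    <!-- Cayenne-generated SQL; set to INFO to see the queries -->
    <logger name="org.apache.cayenne.log" level="WARN"/>

    <root level="WARN">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>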

link-move's People

Contributors

andrus, atomashpolskiy, dkoyro, mikhailovseychuk


link-move's Issues

Refactoring ETL stack to expose processing chain in the object design

An attempt to make the stack more explicit. First, the task link-etl currently provides is a "create-or-update" operation, so we need a class called CreateOrUpdateTask.

Second, the stages of the transformation are spread across various classes and methods. It would be great to have them in one place, with explicit methods named after the stages. So we will create CreateOrUpdateSegmentProcessor with the following stage methods:

  • convert
  • map
  • match
  • merge
  • commit

A side benefit of this design is that in the future we can create listeners at any processing stage.
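
A rough sketch of the intended shape (the stage signatures below are illustrative, not a final API):

public class CreateOrUpdateSegmentProcessor<T> {

    public void process(Execution exec, CreateOrUpdateSegment<T> segment) {
        convert(segment); // normalize raw source rows
        map(segment);     // compute matching keys for source rows
        match(segment);   // look up existing targets by key
        merge(segment);   // apply source values to matched or newly created targets
        commit(segment);  // flush the changes to the target
    }

    // each stage mutates the segment state; bodies omitted
    void convert(CreateOrUpdateSegment<T> segment) { /* ... */ }
    void map(CreateOrUpdateSegment<T> segment) { /* ... */ }
    void match(CreateOrUpdateSegment<T> segment) { /* ... */ }
    void merge(CreateOrUpdateSegment<T> segment) { /* ... */ }
    void commit(CreateOrUpdateSegment<T> segment) { /* ... */ }
}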

Long / Integer values in HashMap keys

There are two lines in the com.nhl.link.etl.load.CreateOrUpdateLoader.process(List<Map<String, Object>>) method:

        Object key = mapper.keyForTarget(t);
        Map<String, Object> src = mutableSrcMap.remove(key);

The Cayenne entity has Integer fields, while the external data source returns Long values in the SQL query result set. The hash codes of the two keys are then the same, but the keys are not equal because of the type difference, which is why java.util.Map.remove(Object) returns null. For example:

The key object is a HashMap:
{String: "a", Integer: "1"}
{String: "b", Integer: "2"}
{String: "c", Integer: "3"}

while mutableSrcMap contains a key that is the following HashMap:

{String: "a", Long: "1"}
{String: "b", Integer: "2"}
{String: "c", Integer: "3"}

This throws an EtlRuntimeException with a message like "Invalid key: ...", even though the key looks right and there are no duplicates in the database.
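
The underlying JDK behavior is easy to reproduce with a plain HashMap; a self-contained sketch:

import java.util.HashMap;
import java.util.Map;

public class KeyTypeMismatch {
    public static void main(String[] args) {
        Map<Object, String> srcMap = new HashMap<>();
        srcMap.put(1L, "source row"); // key built from a Long column value

        // key built from the Integer entity field: same hashCode as 1L, but not equal
        System.out.println(srcMap.remove(1)); // prints "null" - the row is "lost"
    }
}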

Deprecate/remove explicit relationship mapping from CreateOrUpdateBuilder

CreateOrUpdateBuilder has a number of methods to explicitly map to-one and to-many relationships for a given task. Those are not needed: the Cayenne model is already present and contains those relationships, so link-etl can easily find them.

Moreover, to-many relationships do not make sense in the current state of the ETL framework, so those methods did not do anything anyway.

So this task is about deprecating all the 'withTo..Relationship' methods and implementing auto-discovery for to-ones.

NPE on syncing nullable FK

For a job syncing a nullable to-one:

builder.withToOneRelationship("x", X.class, "x");

The following exception happens when NULL is encountered:

java.lang.IllegalArgumentException: Null PK
at org.apache.cayenne.Cayenne.buildId(Cayenne.java:501)
at org.apache.cayenne.Cayenne.objectForPK(Cayenne.java:372)
at com.nhl.link.etl.load.cayenne.DefaultCayenneCreateOrUpdateStrategy.resolveRelationshipObject(DefaultCayenneCreateOrUpdateStrategy.java:72)
at com.nhl.link.etl.load.cayenne.DefaultCayenneCreateOrUpdateStrategy.writeRelationship(DefaultCayenneCreateOrUpdateStrategy.java:55)
at com.nhl.link.etl.load.cayenne.DefaultCayenneCreateOrUpdateStrategy.writeProperty(DefaultCayenneCreateOrUpdateStrategy.java:47)
at com.nhl.link.etl.load.cayenne.CayenneCreateOrUpdateWithPKStrategy.update(CayenneCreateOrUpdateWithPKStrategy.java:79)
at com.nhl.link.etl.load.cayenne.CayenneCreateOrUpdateLoader.update(CayenneCreateOrUpdateLoader.java:42)
at com.nhl.link.etl.load.CreateOrUpdateLoader.process(CreateOrUpdateLoader.java:65)
at com.nhl.link.etl.load.cayenne.CayenneCreateOrUpdateLoader.process(CayenneCreateOrUpdateLoader.java:36)
at com.nhl.link.etl.batch.BatchRunner.run(BatchRunner.java:71)
at com.nhl.link.etl.runtime.task.DefaultTaskBuilder$1.run(DefaultTaskBuilder.java:312)
at com.nhl.link.etl.runtime.task.DefaultTaskBuilder$1.run(DefaultTaskBuilder.java:284)

FileExtractorModelLoader - a file-based IExtractorModelLoader

To simplify the integration of link-etl into user apps, we need to provide an out-of-the-box strategy for looking up extractor configs. Since we envision that most configs will be stored on the filesystem outside the ClassLoader, a FileExtractorModelLoader seems like a useful general-purpose loader. It should take a DI-bound String that is a root directory to resolve ExtractorModelContainer locations against.

When this is done, telling the framework to load extractors from files should be as simple as this:

new EtlRuntimeBuilder().extractorModelsRoot("/somewhere/in/the/filesystem");

Rename Matchers to Mappers

com.nhl.link.etl.load.matcher.Matcher was redone to calculate source and target keys instead of finding objects. A more appropriate name here is probably Mapper.

MatchingTaskBuilder will be renamed to DefaultTaskBuilder; the "matching" aspect is not important enough to emphasize in the name. Moreover, we'll hide the implementation behind a new TaskBuilder interface.

Normalize 'sources' map keys

'sources' is a List<Map<String, Object>> created from the source rows during the first stage of most transformation flows. Currently the keys in the source maps can be an unpredictable mix of obj: and db: expressions, defined by the XML mapping, automatic mapping of source column names, etc.

Per #49, this breaks when matchers are defined using a different style of expression (db: vs obj:). This task will ensure that "sources" are uniformly keyed by "db:column_name" expressions, no matter how the original extractor mapping was created.

UPGRADE NOTES: if you had listeners that read from or write to the "sources" collection, you will need to take the new rules into account.

A default IConnectorFactory using target DataSource for source

Sometimes the source and target live on the same DB server, so we can use the target's default DataSource to get the source data, and we need an IConnectorFactory for this scenario. Here is the new API implemented in 1.1:

new EtlRuntimeBuilder().withConnectorFromTarget().with...;

Automatic JDBC extractor that does not require an XML descriptor

If the source and target schemas are identical (or the target schema is a subset of the source's), we can eliminate the XML descriptor altogether. The API may look like this:

Expression sourceFilter = E.SOME_PROPERTY.ge(1000);
taskService.createOrUpdate(E.class)
      .autoJdbcExtractor("someconnector")
      .sourceFilter(sourceFilter)
      .task();

and link-etl would build a source query based on the target DbEntity, applying an optional qualifier to it.

We will probably start with a less ambitious approach per #41, as this solution has a number of downsides:

  • We lose the ability to dynamically (or otherwise) remap the extractor when the source changes.
  • We can't use a custom query.
  • We can't use XML descriptors to document the data flow between sources and targets.
  • The solution is JDBC-specific (although potentially we can come up with algorithms to auto-build LDAP, CSV or XML extractors).

Refactor TaskBuilder to be able to split it into specialized builders

Since DeleteTaskBuilder is coming (#23), our current TaskBuilder should be renamed to CreateOrUpdateTaskBuilder. We will also remove previously deprecated methods and shorten the builder method names (dropping the 'with' prefix) with deprecation. Methods in ITaskService will also be aligned to indicate the type of builder returned.

Parameterized queries for Extractors

It would be nice to have parameterized extractor queries. Some thoughts on implementation details:

We may add an overloaded version of the com.nhl.link.etl.runtime.task.MatchingTaskBuilder#withExtractor method with an additional parameter: a map of parameters (for named parameters) or a list of parameters (for position-indexed parameters). Default values should probably be provided by Java code. At the extractor configuration level, all parameters should be considered mandatory; when a value for a parameter is not provided, an exception should be thrown (at what moment: when building or when running the ETL task?).

All currently existing extractor implementations seem to support some kind of query language (XPath, SQL, LDAP queries), so adding parameters to queries should not be a problem. We may need to create simple query parsers for XPath and LDAP queries, though. Determining the exact moment when a query is parsed is related to the exception logic above.
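
A hypothetical usage sketch of the named-parameters variant (the overload and the parameter name are illustrative):

Map<String, Object> params = new HashMap<>();
params.put("minAge", 21);

// pass the parameter map alongside the extractor name
taskBuilder.withExtractor("my-etl.xml", params);

// an SQL extractor template could then reference the parameter,
// e.g. via Cayenne SQLTemplate syntax:
//   SELECT age, description, name FROM etl1 WHERE age >= #bind($minAge)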

Matcher refactoring

To resolve the various matcher issues that we have, we need a bit of refactoring of the Matcher.

As it turns out, our sister project LinkRest has exactly the same matcher pattern: on update, it matches a JSON collection (src) with database data (target). And we've mostly figured it out:

https://github.com/nhl/link-rest/blob/master/src/main/java/com/nhl/link/rest/ObjectMapper.java

ObjectMapper also supports mapping targets to sources and creating Cayenne queries to fetch targets. We arrived at the current ObjectMapper API after some trial and error. The advantage of ObjectMapper over the LinkETL Matcher is that it is a stateless object that simply defines object-to-key mapping (and key-to-expression conversion). This makes it much more pluggable, as it doesn't imply any specific object processing: callers decide how to use the key.

So the goal of this task is to switch LinkETL to something similar to LinkRest ObjectMapper.

XML Schema for extractor configs

We finally need to formalize how the Extractor config is structured and what all the attributes mean. There is already significant ambiguity about what can be in an extractor, and this needs to be resolved and documented.

Matchers must handle expression variants in Rows

Let's consider a table column "phone_number" mapped to a target object property "phoneNumber". Currently a matcher on "phoneNumber" will not find a row property specified as "db:phone_number", though it should. This became especially noticeable after #41 was implemented: dynamic attributes are all based on "db:" expressions.

We need to come up with a mechanism to handle these variants. Perhaps normalize all RowAttribute targetNames to db: expressions, and ensure downstream code understands that. Standardizing the row format in this way should have a more general positive effect, making the data format coming through ETL fully predictable and correctly representing tabular DB data (as opposed to forcing an ORM object abstraction on unstructured data).

SourceKeysTask: a task for extraction of source keys

A new task called SourceKeysTask that has the following stages:

BatchRunner.iterate_over_source_rows
    for_each_segment
        convertSources
        collectSourceKeys

Unlike other tasks, this one does not do anything with the target. It is useful as a sub-task in operations like delete, etc.

not null field with default value on database entity

If a database field is NOT NULL and has a default value (e.g. '1'), and the field is missing from the ETL .xml config, the ETL fails with a 'Validation failure for org.etl.MyEntity.myField: "myField" is required.' exception.

Descriptor versioning

Per #42, we will need to change the structure of the XML descriptor, which brings up the issue of descriptor versioning. Schema versions will be identified by unique schema URLs. Once the XML is loaded, the version will be checked and processing delegated to a version-specific handler. All handlers will need to produce a compatible version of the runtime ExtractorConfig (or whatever ExtractorConfig becomes going forward).

Default version "1" is our initial pre-schema version. So these are equivalent:

<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    xsi:schemaLocation="http://nhl.github.io/link-etl/xsd/extractor_config_1.xsd"
    xmlns="http://nhl.github.io/link-etl/xsd/extractor_config_1.xsd"> 
   ... 
</config>

and

<config> ... </config>

DeleteTask - a new callback between getting source keys and matching them against targets

Sometimes source and target keys do not match exactly, and the source data needs to be normalized before matching it against the target. For that we'll need to add a new callback event to the delete chain.

It should be invoked between @AfterTargetsMapped and @AfterMissingTargetsFiltered. Perhaps we can reuse @AfterSourceRowsConverted for this purpose (and perhaps we need to pass it through the row converter?).

Rename "transform" package to "load"

What our "transform" package classes do is actually much closer to the "load" phase of the ETL. So rename packages accordingly. The most visible renaming will be changing of TransformListener to LoadListener.

KeyBuilder refactoring - renaming to KeyMapAdapter

  1. KeyBuilder is not a "builder" in a pattern sense. It creates a HashMap-friendly adapter for a given unique key. So rename KeyBuilder to KeyMapAdapter (same for KeyBuilderFactory).
  2. Add a method to uncover the key value from the key adapter.

The result may be something like this:

interface KeyMapAdapter {
    Object toMapKey(Object rawKey);
    Object fromMapKey(Object mapKey);
}

  3. Move the new adapter to the com.nhl.link.etl.map.key package, and the factory to com.nhl.link.etl.runtime.map.key.
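
As an illustration of the renamed API (not part of this task), an adapter for byte[] keys, which have identity-based equals/hashCode and so can't serve as HashMap keys directly, might look like this:

import java.nio.ByteBuffer;

public class ByteArrayKeyMapAdapter implements KeyMapAdapter {

    @Override
    public Object toMapKey(Object rawKey) {
        // ByteBuffer has content-based equals/hashCode, unlike byte[]
        return ByteBuffer.wrap((byte[]) rawKey);
    }

    @Override
    public Object fromMapKey(Object mapKey) {
        return ((ByteBuffer) mapKey).array();
    }
}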

Extractor: support for extraction by a set of IDs

For operations like delete (#23), we need to query segments of the source data that match a collection of IDs, so our Extractor should be capable of such queries. API:

RowReader getReader(List<Map<String, Object>> ids, ExtractorParameters parameters);

I can think of two naive implementations with horrible performance, and one reasonably performing implementation that requires complicated extractor scripting:

  1. Create an extractor template to fetch by a single ID (should be easy in SQL and LDAP). Break the collection of N IDs into N queries internally and chain the results in a single RowReader. This results in one single-row select per target ID.
  2. Create an extractor template to fetch everything from the source, then filter for the right keys in memory. This results in one full table select per batch (and lots of in-memory filtering). A possible optimization is caching the full table result in memory.
  3. Attempt to build a single query per batch, e.g. "WHERE ID IN ($list)" or similar. This will require some advanced scripting in SQLTemplate, especially for the very common multi-column matches. For LDAP extractors (outside of link-etl for now) we'll have to add some real scripting to achieve similar functionality. It is also unclear whether an LDAP "or" query can support 500 clauses (500 is the default batch size).

EtlAdapter to package extensions to LinkETL

EtlAdapter is the ultimate extension mechanism for the ETL stack: it allows packaging multiple DI extensions into a single class. It is essentially an interface similar to a Cayenne DI Module:

interface LinkEtlAdapter  {
     void contributeToRuntime(Binder binder);
}

EtlRuntimeBuilder would accept one or more adapters.
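
A hypothetical adapter might look like this (MyService/MyServiceImpl are placeholder names; the exact Binder calls depend on the DI API):

public class MySourceSystemAdapter implements LinkEtlAdapter {

    @Override
    public void contributeToRuntime(Binder binder) {
        // bundle any number of DI bindings for a given source system
        binder.bind(MyService.class).to(MyServiceImpl.class);
    }
}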

Automatic mapping of row attributes

Allow link-etl users to omit the "attributes" section in the extractor configuration. In many cases, source and target names already match; in others (JDBC connectors), it is easy to make them match using SQLTemplate scripting. Finally, custom type and name conversions can still be made via stage listeners.

So let's cut down on the amount of required configuration. Note that this task is a less ambitious version of #39, without the downsides of #39.

The implied generated configuration will be equivalent to something like this:

<attribute>
    <type>java.lang.Integer</type>
    <source>id</source>
    <target>db:id</target>
</attribute>
<attribute>
    <type>java.lang.String</type>
    <source>phone_number</source>
    <target>db:phone_number</target>
</attribute>

Execution: support for a map of arbitrary 'attributes'

"Execution" object provides a context for a single task run. It should be useful to track various things happening in this scope. E.g. we might run CreateOrUpdate , collecting source IDs . And then run a separate delete task ( #23 ), feeding it a collection of source IDs. Possible API, reminiscent of Servlet API:

Execution.java:
   void setAttribute(String key, Object value);
   Object getAttribute(String key);
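
Hypothetical usage, assuming the API above: a stage listener records the source IDs seen during a create-or-update run, and the caller retrieves them afterwards to feed a delete task.

// in a stage listener, during the create-or-update run:
exec.setAttribute("sourceIds", sourceIds);

// after the run, in the calling code:
@SuppressWarnings("unchecked")
Set<Object> ids = (Set<Object>) exec.getAttribute("sourceIds");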

Additive 'matchBy' in CreateOrUpdateBuilder/DeleteBuilder/MapperBuilder

"matchBy" methods in CreateOrUpdateBuilder/DeleteBuilder/MapperBuilder reset any previously specified attributes. Per this task we will change that to append to the attributes to the existing list. This seems like a more useable and intuitive behavior.

UPGRADE NOTES;

If you have more than one 'matchBy' in your builder chain, you will need to rewrite it, as the task will now work differently.

Disallow ID updates for auto-generated IDs

Currently, when syncing the ID of an entity with an auto-generated ID, LinkETL quietly uses the auto-generated values, resulting in a data mismatch between source and target. For now, let's solve the issue by throwing an exception when an autogen ID sync is attempted.

Rename LinkETL to LinkMove

UPGRADE NOTES:

LinkETL project was renamed to LinkMove. This means the following to the end users:

  • pom.xml should now include com.nhl.link.move:link-move artifact.
  • The com.nhl.link.etl package is now com.nhl.link.move. Classes starting with Etl now start with Lm (e.g. EtlRuntime -> LmRuntime), so you will need to do the corresponding renaming in your code.

db:path syntax in <target> tags

We should allow full Cayenne path syntax in the <target> tag of the extractor descriptor, including the "db:somepath" variety. This would allow matching IDs, FKs, etc., taking ETL closer to the database, potentially improving performance, and resulting in a clearer extraction model.

Straighten mapping by ID

The old API uses the name of a DbAttribute for ID mapping:

class CreateOrUpdateBuilder {
    CreateOrUpdateBuilder<T> matchById(String idAttribute) {
    }
}

All other matchBy methods use ObjAttribute names (i.e. object properties). This is confusing both in the backend (the same attribute collection can contain obj or db attributes depending on the call sequence) and on the API end. Moreover, we can figure out the ID name from the Cayenne model, so passing the name is redundant. A suggested change:

class CreateOrUpdateBuilder {
    // this will become the default and doesn't have to be called...
    // if called, no attribute needs to be specified
    CreateOrUpdateBuilder<T> matchById() {
    }
}

UPGRADE NOTES:

XML descriptors that map ID columns MUST BE CHANGED to use Cayenne db: expression syntax:

<attribute>
    <type>java.lang.Integer</type>
    <source>SOURCE_ID</source>
    <target>db:TARGET_ID</target>
</attribute>

If your IDE shows an error for the 'matchById(String)' method, that is a hint that you will need to review the descriptors and switch ID attributes to "db:" expressions.

Task stage listeners

Now that we've broken down each segment's processing into stages, a useful API would be listeners that can be notified after each stage completes. They can then look at (and alter) the state of the segment, do any needed processing, and store state in the Execution attributes (#31).

Listeners may be arbitrary annotated methods:

   @AfterSourcesMapped
   void afterSourcesMapped(Execution exec, CreateOrUpdateSegment<MyType> segment);
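
Registering such a listener on a task builder might then look like this (the stageListener method name is illustrative):

LmTask task = lm.getTaskService()
        .createOrUpdate(MyTargetEntity.class)
        .sourceExtractor("my-etl.xml")
        .matchBy(MyTargetEntity.NAME)
        .stageListener(new MyListener()) // an object with @AfterSourcesMapped methods
        .task();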

Cayenne upgrade to 4.0.M3.debfa94

We will upgrade the Cayenne dependency to an M3 master build off of the debfa94 commit. It has the new DataSource API and fixes an important limitation of DbPath expressions (https://issues.apache.org/jira/browse/CAY-2013) that we need in order to implement generic expression-based matchers.

UPGRADE NOTES: using link-etl in your apps with an older version of Cayenne will likely break the jobs, so your apps must use 4.0.M3.debfa94 or newer.

Multi-extractor configs

Support placing multiple extractor configs in a single XML file. This would make it possible to quickly map a large number of tables. Potentially, the whole extractor config can be run as a single task.

The extractor model will be structured as two classes: ExtractorModel and ExtractorModelContainer. A container will be associated with the location it was loaded from and will provide a namespace for extractors. To find a model, a caller will need to provide two identifiers: a container location and an extractor name within that location.

The new descriptor structure:

<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://nhl.github.io/link-etl/xsd/extractor_config_2.xsd"
        xmlns="http://nhl.github.io/link-etl/xsd/extractor_config_2.xsd">

    <!-- shared information -->
    <type>jdbc</type>
    <connectorId>myconnector</connectorId>

    <!-- Extractors -->
    <extractor>
        <!-- name is optional; the default implicit name is "default_extractor" -->
        <name>MyExtractor1</name>
        <!-- attributes are optional per #41 -->
        <attributes> ... </attributes>
        <properties> ... </properties>
    </extractor>

    <extractor>
        <properties> ... </properties>
    </extractor>

    <extractor>
        <name>MyExtractor3</name>
        <!-- Type and connector can be optionally overridden per extractor -->
        <type>jdbc</type>
        <connectorId>myconnector</connectorId>
        <properties> ... </properties>
    </extractor>
</config>

Prerequisite: #52

Match by foreign key value

The ETL task's .matchBy() method cannot compare records by a foreign key value, because the FK is not a property of the Cayenne object model; it is effectively a hidden field. For instance, none of matchBy("game_id"), matchBy("gameId"), matchBy("game.id") or matchBy("game") currently works.
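
If matchers learn to understand db: expressions (see the "db:path syntax in <target> tags" issue above), matching directly on the FK column might look like this hypothetical sketch:

// match records by the FK column rather than an object property
taskService.createOrUpdate(GameResult.class)  // hypothetical target entity
        .sourceExtractor("games.xml")         // hypothetical descriptor
        .matchBy("db:game_id")                // FK column, not an object property
        .task();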
