fregata's People

Contributors

aidenpan0x, bjuthjliu, icarusion, takun2s

fregata's Issues

A question about SparkTrainer

I have a question about trainer.ps.set(ws) at the end of the "run" function in SparkTrainer.scala. If we use ps.set() rather than ps.adjust(), doesn't epochNum become meaningless? Do I understand the code correctly?
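
For context, here is a minimal sketch of the set-vs-adjust distinction the question hinges on; the trait below is illustrative, not fregata's actual interface:

  // Illustrative sketch (hypothetical trait, not fregata's real API).
  trait ParameterServer {
    def get(i: Int): Double
    // set overwrites the stored weights wholesale; if every epoch ends by
    // set-ting the same locally computed ws, extra epochs change nothing,
    // which is what the reporter suspects.
    def set(ws: Array[Double]): Unit
    // adjust applies an increment, so updates accumulate across epochs.
    def adjust(i: Int, delta: Double): Unit
  }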

Question about RDT

Hi,
The RDT code example is not working with 0.0.2:

  1. new RDT(numTrees: Int, depth: Int, numFeatures: Int) only supports 3 args
  2. rdt.train(Array) accepts an array, not a SparseVector
  3. where is the method asNum in (l, asNum(c), ps(l.toInt))?
  4. how do I save the model?
  5. how do I find the accuracy on each label? Say I have two (0/1). (A generic sketch follows below.)

Please help.
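
On point 5, per-label accuracy can be computed outside the library from (label, prediction) pairs; a minimal generic Scala sketch, assuming the predictions are already materialized:

  // Per-label accuracy from (label, prediction) pairs; not fregata-specific.
  def perLabelAccuracy(pairs: Seq[(Double, Double)]): Map[Double, Double] =
    pairs.groupBy(_._1).map { case (label, ps) =>
      label -> ps.count { case (l, p) => l == p }.toDouble / ps.size
    }

  // Example with two labels 0/1:
  val acc = perLabelAccuracy(Seq((0.0, 0.0), (0.0, 1.0), (1.0, 1.0)))
  // acc: Map(0.0 -> 0.5, 1.0 -> 1.0)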

Logistic regression error: NullPointerException when the global epoch is larger than 1

Here is the stack trace:
java.lang.NullPointerException
at fregata.param.LocalParameterServer$$anonfun$adjust$1.apply$mcVID$sp(Parameter.scala:35)
at fregata.util.VectorUtil$.forV(VectorUtil.scala:44)
at fregata.param.LocalParameterServer.adjust(Parameter.scala:34)
at fregata.optimize.sgd.StochasticGradientDescent$$anonfun$run$1.apply(StochasticGradientDescent.scala:25)
at fregata.optimize.sgd.StochasticGradientDescent$$anonfun$run$1.apply(StochasticGradientDescent.scala:20)
at scala.collection.immutable.Stream.foreach(Stream.scala:594)
at fregata.optimize.sgd.StochasticGradientDescent.run(StochasticGradientDescent.scala:20)
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:83)
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:74)
at fregata.model.ModelTrainer$$anonfun$run$1.apply$mcVI$sp(ModelTrainer.scala:21)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at fregata.model.ModelTrainer$class.run(ModelTrainer.scala:19)
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:74)
at fregata.spark.model.SparkTrainer$$anonfun$1.apply(SparkTrainer.scala:26)
at fregata.spark.model.SparkTrainer$$anonfun$1.apply(SparkTrainer.scala:24)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

A question about the parameters of LogisticRegression.run(trainData, localEpochNum, epochNum)

Why does LogisticRegression.run(trainData, localEpochNum, epochNum) throw an error whenever localEpochNum or epochNum is set to a value other than 1? (A repro sketch follows after the stack trace below.)

ERROR TaskSetManager: Task 699 in stage 4.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 699 in stage 4.0 failed 4 times, most recent failure: Lost task 699.3 in stage 4.0: java.lang.NullPointerException

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1443)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1430)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1430)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:810)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:810)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:810)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1652)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1600)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1874)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1994)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1127)
at org.apache.spark.rdd.RDD$$anonfun$treeReduce$1.apply(RDD.scala:1058)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.treeReduce(RDD.scala:1036)
at fregata.spark.model.SparkTrainer.run(SparkTrainer.scala:28)
at fregata.spark.model.SparkTrainer$$anonfun$run$1.apply$mcVI$sp(SparkTrainer.scala:15)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at fregata.spark.model.SparkTrainer.run(SparkTrainer.scala:13)
at fregata.spark.model.classification.LogisticRegression$.run(LogisticRegression.scala:29)
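
For reference, a minimal repro sketch of the failing call; the three-argument signature is taken from the report, and the loading step follows fregata's README-style usage (assumed, with sc an existing SparkContext):

  import fregata.spark.data.LibSvmReader
  import fregata.spark.model.classification.LogisticRegression

  // Assumed README-style LibSVM loading; sc is an existing SparkContext.
  val (_, trainData) = LibSvmReader.read(sc, "train.libsvm", -1)
  // Reportedly fine with both epoch counts at 1, but throws the
  // NullPointerException above once either is greater than 1:
  val model = LogisticRegression.run(trainData, 2, 2)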

Question about GSA

Hi @xiongxie @Hierarch devs:
I have read your paper on Greedy Step Averaging: a parameter-free SGD method. In Algorithm 2, I see that you use line search to get the step size; why not obtain the step size directly, as in the main algorithm? Can you give an example of a case where line search is needed? Thank you!
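
For readers without the paper at hand, the greedy step the question refers to can be stated schematically as follows (our notation, a hedged restatement rather than the paper's exact formulation):

  % Greedy per-sample step size (schematic restatement, notation ours):
  % w_t: current weights, (x_t, y_t): current sample, \ell: per-sample loss,
  % g_t = \nabla_w \ell(w_t; x_t, y_t): stochastic gradient.
  \eta_t = \operatorname*{arg\,min}_{\eta \ge 0} \ \ell\left(w_t - \eta\, g_t;\ x_t, y_t\right),
  \qquad w_{t+1} = w_t - \eta_t\, g_t
  % When \ell has no closed-form minimizer in \eta, \eta_t must be found by a
  % one-dimensional line search, which appears to be Algorithm 2's role.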

I get a model result with all Wi = NaN

I trained a model with 2,500,000 samples and 900 features, but the resulting Wi were all NaN. When I randomly chose 1000 samples from the dataset, the result looked fine.
Is there a limitation on the size of the dataset or on the feature values?
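
One way to narrow this down is to check the trained weights and the raw feature magnitudes; a minimal Scala sketch (generic diagnostics, not fregata API calls):

  // Fraction of NaN entries in a weight vector.
  def nanFraction(ws: Array[Double]): Double =
    ws.count(_.isNaN).toDouble / ws.length

  // Largest absolute feature value seen; very large unscaled features are a
  // common cause of numeric blow-up (NaN) in SGD-style training.
  def maxAbsValue(rows: Iterator[Array[Double]]): Double =
    rows.foldLeft(0.0)((m, row) => row.foldLeft(m)((a, v) => math.max(a, math.abs(v))))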

A question about submit parameters

My dataset has 3 million rows, the feature dimension is about 100 million, and each row has roughly 200 non-zero values. Submitting with the parameters below always fails with exit code 143 (a memory problem). Could you share the submit parameters you use to train at the 1 billion x 1 billion scale? Thanks!

--conf spark.driver.maxResultSize=12g
--conf spark.network.timeout=1200s
--conf spark.executor.heartbeatInterval=1200s
--master yarn-client
--num-executors 1000
--executor-cores 2
--executor-memory 12g
--driver-memory 10g
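
For completeness, these flags slot into a full submit command like the one below; the main class and application jar are placeholders, not part of the report:

  # Placeholder class and jar; flags copied verbatim from the report above.
  spark-submit \
    --master yarn-client \
    --num-executors 1000 \
    --executor-cores 2 \
    --executor-memory 12g \
    --driver-memory 10g \
    --conf spark.driver.maxResultSize=12g \
    --conf spark.network.timeout=1200s \
    --conf spark.executor.heartbeatInterval=1200s \
    --class com.example.TrainLR \
    my-fregata-job.jar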

IndexOutOfBoundsException: 2 not in [-2,2)

I ran softmax on a tiny dataset, but got an error. The data:
1 2:1.0 3:1.0
2 1:1.0 2:1.0 3:1.0
3 1:1.0 2:1.0 3:1.0

Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 7, executor id: 49, host: hadoop-afd-tretw.bj): java.lang.IndexOutOfBoundsException: 2 not in [-2,2)
at breeze.linalg.DenseVector$mcD$sp.apply$mcD$sp(DenseVector.scala:71)
at fregata.util.VectorUtil$$anonfun$wxpb$2.apply$mcVID$sp(VectorUtil.scala:29)
at fregata.util.VectorUtil$.forV(VectorUtil.scala:44)
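
The message "2 not in [-2,2)" points at a length-2 dense vector being indexed at position 2, which suggests the labels are treated as 0-based class indices internally; this is a hedged reading of the trace, not a confirmed diagnosis. If it is the cause, relabeling the classes to start at 0 should avoid the out-of-bounds access:

  0 2:1.0 3:1.0
  1 1:1.0 2:1.0 3:1.0
  2 1:1.0 2:1.0 3:1.0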

Fail to train model with java.lang.NullPointerException (null)

17/02/10 18:20:31 INFO TaskSetManager: Lost task 345.2 in stage 1.0 (TID 18697) on executor worker8.spark.training.m.com: java.lang.NullPointerException (null) [duplicate 908]
Exception in thread "main" 17/02/10 18:20:31 INFO TaskSetManager: Lost task 141.2 in stage 1.0 (TID 18698) on executor worker3.spark.training.m.com: java.lang.NullPointerException (null) [duplicate 909]
org.apache.spark.SparkException: Job aborted due to stage failure: Task 370 in stage 1.0 failed 4 times, most recent failure: Lost task 370.3 in stage 1.0 (TID 18666, worker4.spark.training.m.com): java.lang.NullPointerException
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:83)
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:73)
at fregata.model.ModelTrainer$$anonfun$run$1.apply$mcVI$sp(ModelTrainer.scala:21)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at fregata.model.ModelTrainer$class.run(ModelTrainer.scala:19)
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:73)
at fregata.spark.model.SparkTrainer$$anonfun$1.apply(SparkTrainer.scala:26)
at fregata.spark.model.SparkTrainer$$anonfun$1.apply(SparkTrainer.scala:24)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1127)
at org.apache.spark.rdd.RDD$$anonfun$treeReduce$1.apply(RDD.scala:1058)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.treeReduce(RDD.scala:1036)
at fregata.spark.model.SparkTrainer.run(SparkTrainer.scala:28)
at fregata.spark.model.SparkTrainer$$anonfun$run$1.apply$mcVI$sp(SparkTrainer.scala:15)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at fregata.spark.model.SparkTrainer.run(SparkTrainer.scala:13)
at fregata.spark.model.classification.LogisticRegression$.run(LogisticRegression.scala:29)
at com.meitu.rec.longTermRecTest$.fregata_lr(longTermRecTest.scala:69)
at com.meitu.rec.longTermRecTest$.run(longTermRecTest.scala:59)
at com.meitu.rec.longTermRecTest$$anonfun$main$1.apply(longTermRecTest.scala:49)
at com.meitu.rec.longTermRecTest$$anonfun$main$1.apply(longTermRecTest.scala:48)
at scala.Option.map(Option.scala:145)
at com.meitu.rec.longTermRecTest$.main(longTermRecTest.scala:48)
at com.meitu.rec.longTermRecTest.main(longTermRecTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:83)
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:73)
at fregata.model.ModelTrainer$$anonfun$run$1.apply$mcVI$sp(ModelTrainer.scala:21)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at fregata.model.ModelTrainer$class.run(ModelTrainer.scala:19)
at fregata.model.classification.LogisticRegression.run(LogisticRegression.scala:73)
at fregata.spark.model.SparkTrainer$$anonfun$1.apply(SparkTrainer.scala:26)
at fregata.spark.model.SparkTrainer$$anonfun$1.apply(SparkTrainer.scala:24)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
