samsung / kv2streamer

License: Other

C++ 90.28% Makefile 2.52% C 2.51% CMake 4.41% Batchfile 0.28%

kv2streamer's Introduction

KV2Streamer

Kinect V2 Streamer (KV2Streamer) is a library that allows developers to access the new Kinect V2 sensor data and tracking capabilities from a non-Windows OS.

KV2Streamer provides both a server-side application (KV2Server) that can stream out Kinect V2 data, including tracking data, to multiple client-side applications accessing the client-side API (KV2Client) running on non-Windows OS over LAN.


1. KV2Server

KV2Streamer provides the server-side application that retrieves data from the Kinect sensor and streams it out to a specified multicast IP address.

Unlike its client-side counterpart, KV2Server is a standalone application. Once it is built, you can run the executable from the terminal, supplying a multicast IP argument that will be used to identify the server.

However, you have to run the application from the GStreamer bin directory.

In summary:

  1. Build the application using Visual Studio (see instructions below)
  2. Open the command prompt.
  3. Change directory: $ cd C:\gstreamer\1.0\x86\bin
  4. Run the executable: $ [executable path] <multicast-IP>, for example: $ C:\Users\demo\projects\kv2streamer\server\kv2server_VS12\Debug\kv2server_application.exe 224.1.1.1
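The multicast IP argument must be a valid IPv4 multicast address (first octet in 224–239). As a quick sanity check before launching, something like the following helper can validate the argument; this is purely illustrative and is not a function kv2server ships:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Returns true if the dotted-quad string is a valid IPv4 multicast
// address (first octet in 224..239). Illustrative helper only;
// kv2server itself does not perform this check.
bool IsMulticastIP(const std::string& ip)
{
    unsigned a, b, c, d;
    char tail;
    // sscanf returns 4 only when exactly four octets parse with no trailing text
    if (std::sscanf(ip.c_str(), "%u.%u.%u.%u%c", &a, &b, &c, &d, &tail) != 4)
        return false;
    if (a > 255 || b > 255 || c > 255 || d > 255)
        return false;
    return a >= 224 && a <= 239;
}
```

For instance, the example address 224.1.1.1 passes, while a unicast LAN address such as 192.168.8.2 does not.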

1.1 How To Build KV2Server

Follow the steps below to build and run the kv2streamer server application. It is assumed that you have installed the Kinect SDK, which you can find here: https://www.microsoft.com/en-us/download/details.aspx?id=44561

  1. Install gstreamer from http://gstreamer.freedesktop.org/data/pkg/windows/:
    gstreamer-1.0-devel-x86-1.4.0.msi
    gstreamer-1.0-x86-1.4.0.msi

  2. Install boost: http://www.boost.org/users/history/version_1_56_0.html

  3. Open the Visual Studio solution file in /server/kv2server_VS12 and try building. If it doesn't work, see below:

Note: The following settings should already be configured in the project you cloned, but in case you need to set things up yourself:

  1. Add ALL .cpp files contained in the following directories to your project (right click on project > Add > Existing Item):

kv2server-applications/kv2server
kv2server-applications/oscpack (BUT EXCLUDE kv2server-applications/oscpack/ip/posix)
kv2streamer-lib/gst-wrapper
kv2streamer-lib/oscpack-wrapper

  2. Configuration Properties > C/C++ > General > Additional Include Directories:

    C:\Program Files\boost\boost_1_56_0
    $(KINECTSDK20_DIR)\inc
    $(GSTREAMER_1_0_ROOT_X86)\include\gstreamer-1.0
    $(GSTREAMER_1_0_ROOT_X86)\include\glib-2.0
    $(GSTREAMER_1_0_ROOT_X86)\lib\glib-2.0\include
    $(SolutionDir)..
    $(SolutionDir)..\kv2server
    $(SolutionDir)..\oscpack
    $(SolutionDir)..\..\kv2streamer-lib

  3. Configuration Properties > VC++ Directories > Library Directories:

C:\Program Files\boost\boost_1_56_0\stage\lib
$(KINECTSDK20_DIR)\lib\x86
$(GSTREAMER_1_0_ROOT_X86)\lib

  4. Configuration Properties > Linker > Input:

Ws2_32.lib
winmm.lib
kinect20.lib
gstreamer-1.0.lib
glib-2.0.lib
gobject-2.0.lib
gmodule-2.0.lib
gthread-2.0.lib
gstapp-1.0.lib

  5. Setting up your network. Both machines, server (Windows 8) and client (Ubuntu), should be connected to the same switch. On Windows 8: go to Control Panel > Network and Sharing Center > Ethernet > Properties > TCP/IPv4:

IP address: 192.168.8.2
Subnet Mask: 255.255.255.0
Default Gateway: 192.168.8.250
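The server and client addresses only work together because they agree under the netmask. That condition can be sketched as a self-contained check (illustrative only, not part of kv2streamer):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// Parse a dotted quad into a 32-bit host-order value (0 on parse failure).
uint32_t ParseIPv4(const char* s)
{
    unsigned a, b, c, d;
    if (std::sscanf(s, "%u.%u.%u.%u", &a, &b, &c, &d) != 4) return 0;
    return (a << 24) | (b << 16) | (c << 8) | d;
}

// Two hosts share a subnet exactly when their addresses match after
// masking: e.g. 192.168.8.2 and 192.168.8.3 under 255.255.255.0.
bool SameSubnet(const char* ip1, const char* ip2, const char* mask)
{
    uint32_t m = ParseIPv4(mask);
    return (ParseIPv4(ip1) & m) == (ParseIPv4(ip2) & m);
}
```

This matters here because multicast traffic with the default TTL never crosses the subnet boundary, so a client configured on a different subnet will silently receive nothing.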

1.2 The Coordinate Mapper Table Generator Application

You can find the Visual Studio solution in kv2server-applications/KV2ServerCorrdinateMapperTableGenerator. Run it and you will find instructions in the terminal. Each Kinect V2 camera has its own unique calibration that can be saved as a binary file using this application. If you copy that binary into the same directory as your client application executable, the client will be able to access the conversion table used to map the color image to the depth image, e.g. if you're using the colored-depth streamer on the client.
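The exact on-disk format is defined by the generator application; the sketch below only illustrates the save-then-load-next-to-the-client-executable idea with a raw float table and a hypothetical filename:

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Write a table of floats as a raw binary blob (as the generator app
// conceptually does with the camera's calibration table).
bool SaveTable(const char* path, const std::vector<float>& table)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    size_t n = std::fwrite(table.data(), sizeof(float), table.size(), f);
    std::fclose(f);
    return n == table.size();
}

// Read the blob back; the client would do this from its own directory.
std::vector<float> LoadTable(const char* path, size_t count)
{
    std::vector<float> table(count);
    FILE* f = std::fopen(path, "rb");
    if (!f) return {};
    size_t n = std::fread(table.data(), sizeof(float), count, f);
    std::fclose(f);
    return n == count ? table : std::vector<float>{};
}
```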

All the project settings should already be configured, but in case you need to do everything yourself, simply include the Kinect SDK and you should be all set:

  1. Configuration Properties > C/C++ > General > Additional Include Directories

    $(KINECTSDK20_DIR)\inc

  2. Configuration Properties > VC++ Directories > Library Directories

    $(KINECTSDK20_DIR)\lib\x86

  3. Configuration Properties > Linker > Input

kinect20.lib


2. KV2Client API

KV2Streamer also provides the client-side API that developers can use to access data from a specific server once the server is running.

This section describes how you can include the KV2Streamer client-side API in your application which runs on Ubuntu.

You're free to use any libraries/frameworks to render the received images, but a sample application is provided in sample-client-applications/codelite. The sample application uses GLFW and runs in the CodeLite IDE.

To use the client API of KV2Streamer, perform the following steps on your Ubuntu machine:

  1. Setup and test your network
  2. Install all the dependencies: boost, gstreamer, kv2streamer version of oscpack, and the kv2streamer library.
  3. Include "KV2ClientCommon.h" in your application.

2.1 KV2Client API Instructions

  1. Setting up your network
    Both machines, server (Windows 8) and client (Ubuntu), will be connected to the same switch. On Ubuntu: go to System Settings > Network > choose the right interface > Options > IPv4 Settings:
    Address: 192.168.8.3
    Netmask: 255.255.255.0
    Gateway: 192.168.8.250
    Test by pinging each machine from the other. Make sure that you're not connected to any other network through a different network card.

  2. Install boost (1.56.0): http://www.boost.org/doc/libs/1_56_0/more/getting_started/unix-variants.html

  3. Install gstreamer-1.0

  4. Install the modified version of oscpack found in kv2server-applications/oscpack by running "sudo make install".

  5. Install kv2streamer-lib using CMake, and don't forget to run "sudo make install" at the end to place the built binaries in /usr/local/lib and the headers in /usr/local/include.

  6. In your build environment, include the following header search paths:

kv2streamer & oscpack:
/usr/local/include
boost:
/usr/local/include/boost
gstreamer:
/usr/local/include/gstreamer-1.0
/usr/include/glib-2.0
/usr/lib/x86_64-linux-gnu/glib-2.0/include

  7. Also add the following library search paths in your build environment:

    /usr/local/lib
    /usr/local/lib/gstreamer-1.0

  8. And link to the following libraries:

    libkv2client
    libgst-wrapper
    liboscpack-wrapper
    liboscpack
    libboost_system
    libboost_thread
    libgstreamer-1.0
    libgstapp-1.0
    libglib-2.0
    libgmodule-2.0
    libgthread-2.0
    libgobject-2.0

  9. Finally, include the following in your application:
    #include "KV2ClientCommon.h"

And you should be all set!

2.2 Sample Build Instructions for Codelite

A sample application, which you can find in /sample-client-applications, is provided so that you can test the library quickly. Follow the steps below to run the sample application.

  1. Install Codelite
  2. Install glfw: http://www.glfw.org/
  3. Run the sample application. You should be able to see live camera images (color, depth, skeleton, etc.) from the server.

If you decide to create a new project in CodeLite from scratch, you must do the steps described above, PLUS link to the following libraries: libGL and libglfw, and include the following header: #include <GLFW/glfw3.h>.

Important note: to run the executable, open the terminal, then:

  1. Navigate to the executable directory: usually /Debug
  2. Run the following once: $ export LD_LIBRARY_PATH="/usr/local/lib"
  3. Run the executable: $ ./kv2clientApplication

Summary

In short, to use KV2Streamer:

  1. Build and run the server application on a Windows 8 machine.
  2. Create your own project and include the KV2Client API on Ubuntu.
  3. Run your application.

Note

  • Only works on the same subnet (Multicast TTL default = 1)
  • The server can keep running while multiple clients connect and disconnect, but not the reverse: restarting the server requires restarting all clients.
  • The ./shared folder contains code shared by both the server application and the client-side API.
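As noted above, multicast datagrams default to TTL 1, so they never leave the local subnet. A sender wanting to cross a router would need to raise IP_MULTICAST_TTL on its UDP socket; the sketch below shows the standard POSIX call (illustrative only — kv2streamer's sockets live inside its GStreamer pipelines, not in user code):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Set the multicast TTL on a UDP socket. TTL 1 (the default) confines
// traffic to the local subnet; larger values let it cross that many
// router hops. Returns 0 on success, -1 on error (like setsockopt).
int SetMulticastTTL(int sock, unsigned char ttl)
{
    return setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));
}
```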

Contact

Please direct questions and comments to: [email protected] and [email protected]

kv2streamer's People

Contributors

kvnwinata, seo-young


kv2streamer's Issues

Server Side API

Hi Seo-Young and Anette,
Is the server-side portion of this missing, or did I overlook something? Thanks! I look forward to implementing this.

How to map color and depth data correctly in client machine

Hi Kevin.
This is Kirubha again. I am facing an issue when displaying point cloud data from the secondary sensor. I have attached two cloud images: the first is from the primary sensor, which is connected to the same machine; the second is from the secondary sensor on the server machine. When I project the point cloud data on my client machine, the human appears doubled. I don't know where the issue is. To make it clear, I have also added my logic below.

Code logic:

// Getting color buffer data from your code KV2ClientExample.cpp (secondary sensor)
void KV2ClientExample::AcquireAndProcessColorFrame()
{
    IColorFramePtr colorFrame;
    if (colorStreamer->AcquireLatestFrame(&colorFrame))
    {
        UINT bufferSize;
        unsigned char* buffer;
        colorFrame->AccessRawUnderlyingBuffer(&bufferSize, &buffer);
    }
}

In our logic, we did it like this:

BYTE* bufferbyte = nullptr;

// In this function we receive the buffer data from the secondary sensor
void AcquireAndProcessColorFrame()
{
    kv2s::IColorFramePtr colorFrame;
    if (colorStreamer->AcquireLatestFrame(&colorFrame))
    {
        UINT bufferSize;
        unsigned char* buffer;
        colorFrame->AccessRawUnderlyingBuffer(&bufferSize, &buffer);
        // converting unsigned char* to BYTE buffer
        bufferbyte = reinterpret_cast<BYTE*>(buffer);
    }
}

// This function gets the body index buffer
void AcquireAndProcessBodyIndexFrame()
{
    IBodyIndexFramePtr bodyIndexFrame;
    signed char* s2buffer;
    if (bodyIndexStreamer->AcquireLatestFrame(&bodyIndexFrame))
    {
        UINT bufferSize;
        bodyIndexFrame->AccessRawUnderlyingBuffer(&bufferSize, &s2buffer);
        unsigned char* output = bodyIndexFrameRenderBuffer;
        const signed char* bufferEnd = s2buffer + DEPTH_MULTICAST_WIDTH * DEPTH_MULTICAST_HEIGHT;
        while (s2buffer < bufferEnd)
        {
            signed char index = *s2buffer;
            *output = color_mapping[3*(index+1)+0]; ++output;
            *output = color_mapping[3*(index+1)+1]; ++output;
            *output = color_mapping[3*(index+1)+2]; ++output;
            ++s2buffer;
        }
        indexSensor2buffer = reinterpret_cast<BYTE*>(s2buffer);
        bodyIndexFrame.reset();
    }
}
// In this function, we display the RGB point cloud data in our viewport
void PointCloudGL::PointCloudDisplay()
{
    float fl_x = 1063.118f;
    float fl_y = 1065.233f;
    float pp_x = 962.473f;
    float pp_y = 526.789f;

    // UINT16* depthSensor2buffer - data comes from the secondary sensor
    // CameraSpacePoint* pCSS2Points - depthSensor2buffer mapped into camera space
    pCoordinateMapper->MapColorFrameToCameraSpace(512 * 424, depthSensor2buffer, 1920 * 1080, pCSS2Points);

    // pDepthS2SpaceBuffer - DepthSpacePoint buffer for the secondary sensor
    pCoordinateMapper->MapColorFrameToDepthSpace(512 * 424, depthSensor2buffer, 1920 * 1080, pDepthS2SpaceBuffer);

    glBegin(GL_POINTS);
    // Looping through the RGB resolution, 1920 x 1080
    for (int i = 0; i < 1920; i++)
    {
        for (int j = 0; j < 1080; j++)
        {
            int colorIndex = i + (j * 1920);
            if (pDepthS2SpaceBuffer != NULL)
            {
                // retrieve the depth-space coordinates for this color pixel
                DepthSpacePoint p2 = pDepthS2SpaceBuffer[colorIndex];
                const CameraSpacePoint& rPt1 = pCSS2Points[colorIndex];
                if (p2.X != -std::numeric_limits<float>::infinity() && p2.Y != -std::numeric_limits<float>::infinity())
                {
                    int depthX = static_cast<int>(p2.X + 0.5);
                    int depthY = static_cast<int>(p2.Y + 0.5);
                    if ((depthX >= 0 && depthX < 512) && (depthY >= 0 && depthY < 424))
                    {
                        BYTE player = indexSensor2buffer[depthX + (depthY * 512)];
                        if (player != 0xff && rPt1.Z > 0)
                        {
                            float xx = (i - pp_x) * rPt1.Z / fl_x;
                            float yy = (j - pp_y) * rPt1.Z / fl_y;
                            if (Sensor2Selection == 0) // sensor 1 selected
                            {
                                if (bufferbyte != nullptr)
                                {
                                    glColor4ub(bufferbyte[3 * colorIndex], bufferbyte[3 * colorIndex + 1], bufferbyte[3 * colorIndex + 2], bufferbyte[3 * colorIndex + 3]);
                                    glVertex3f(xx / 15, -yy / 15, rPt1.Z / 15);
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    glEnd();
}
// In the above logic, the following code is used to draw the 3D point cloud:
float fl_x = 1063.118f;
float fl_y = 1065.233f;
float pp_x = 962.473f;
float pp_y = 526.789f;

xx = (i - pp_x) * rPt1.Z / fl_x; // 3D point X
yy = (j - pp_y) * rPt1.Z / fl_y; // 3D point Y
zz = rPt1.Z;                     // 3D point Z

glColor4ub(bufferbyte[3 * colorIndex], bufferbyte[3 * colorIndex + 1], bufferbyte[3 * colorIndex + 2], bufferbyte[3 * colorIndex + 3]);
glVertex3f(xx, -yy, rPt1.Z);
If we use the above logic, we get the screenshot named Convertedtoours.jpg; as primarysensor.jpg shows, the primary sensor has no overlapping of the human.
(Screenshots: convertedtoours, primarysensor)

In our existing logic, we use the following code to display the point cloud from the primary sensor:

glColor4ub(pColorBuffer[4 * colorIndex], pColorBuffer[4 * colorIndex + 1], pColorBuffer[4 * colorIndex + 2], pColorBuffer[4 * colorIndex + 3]);
glVertex3f(xx, -yy, rPt.Z);

Note: but if we use 4 * colorIndex for the secondary sensor, we get the following error:
Unhandled exception: Access violation reading location

Please guide me how to proceed with our existing logic.

Thanks
Kiruba
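The pinhole back-projection used in the issue above can be checked in isolation. The intrinsics are the values quoted in the issue (not official Kinect V2 calibration constants); a color pixel at the principal point with a given depth should map to X ≈ Y ≈ 0:

```cpp
#include <cmath>

struct Point3 { float x, y, z; };

// Back-project color pixel (i, j) with depth Z (metres) to camera space
// using the focal lengths and principal point quoted in the issue above.
Point3 BackProject(int i, int j, float Z)
{
    const float fl_x = 1063.118f, fl_y = 1065.233f;
    const float pp_x = 962.473f,  pp_y = 526.789f;
    return { (i - pp_x) * Z / fl_x, (j - pp_y) * Z / fl_y, Z };
}
```

If the doubled human persists even with correct back-projection, the mismatch is more likely in the color/depth buffer indexing than in this formula.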

kv2Streamer not running in x64 Configuration

Hi ,
I am trying to run kv2streamer in the x64 architecture in Visual Studio 2012 on Windows. I have modified all supporting DLLs to x64.

I am getting the following linker errors.

error LNK2019: unresolved external symbol gst_app_src_set_callbacks referenced in function "public: void __cdecl kv2s::GstAppSrcPipeline::Initialize(class std::basic_string<char,struct std::char_traits,class std::allocator >)" (?Initialize@GstAppSrcPipeline@kv2s@@QEAAXV?$basic_string@DU?$char_traits@D@std@@v?$allocator@D@2@@std@@@z)

error LNK2019: unresolved external symbol gst_app_sink_set_drop referenced in function "public: void __cdecl kv2s::GstAppSinkPipeline::Initialize(class std::basic_string<char,struct std::char_traits,class std::allocator >)"

error LNK2019: unresolved external symbol gst_app_sink_pull_preroll referenced in function "private: static enum GstFlowReturn __cdecl

Please suggest which GStreamer library I have to modify in order to resolve the above errors.

Thanks

Error in RGB Multicasting Pipeline String

Hi,
In kv2streamer, the GStreamer pipeline builder (GstreamerPipelines.h) used for encoding/decoding bytes over the network is the following function.

// RGB Multicasting Pipeline
static std::string CreateAppSrc_RGB8_EncodedMulticastingPipeline(VideoMulticastingPipelineCreationParameters input)
{
std::stringstream pipelineString;

pipelineString

<< "rtpbin name=" << RTPBIN_NAME

<< NEW_LINKAGE

<< "appsrc name=" << APPSRC_NAME << " is-live=true block=true stream-type=0 format=3 do-timestamp=true min-latency=0" 
<< LINK << "video/x-raw, format=" << input.pixelFormat << ", width=" << input.inputWidth << ", height=" << input.inputHeight << ", framerate=30/1" << LINK
<< "videoconvert"
<< LINK
<< "videoscale"
<< LINK << "video/x-raw, width=" << input.multicastWidth << ", height=" << input.multicastHeight << LINK 
<< "queue"
<< LINK
<< "x264enc tune=zerolatency speed-preset=ultrafast"; if (input.useHighestQuality) pipelineString << " qp-min=0 qp-max=0 qp-step=0"; pipelineString
<< LINK << "video/x-h264, profile=baseline" <<  LINK
<< "rtph264pay"
<< LINK
<< RTPBIN_NAME << PAD << "send_rtp_sink_0"

<< NEW_LINKAGE

<< RTPBIN_NAME << PAD << "send_rtp_src_0"
<< LINK
<< "udpsink port=" << input.port << " host=" << input.multicastIP << " auto-multicast=true" 
;
return pipelineString.str();

}

Basically, in my application both the kv2streamer server and client run on Windows. I think this pipeline string is unable to send the full color bytes over the network, which is why in my output the depth and color are not mapped correctly. Can you help me modify this pipeline string so it delivers lossless data over the network on Windows? Also, which pipeline string is suitable for receiving full-resolution (1920*1080) color bytes?

I have referred to some GStreamer pipeline links on the internet but have not been able to determine which pipeline string is suitable for receiving 1920*1080 color bytes:
http://processors.wiki.ti.com/index.php/Example_GStreamer_Pipelines#DM355
https://gstreamer.ti.com/gf/project/gstreamer_ti/wiki/?pagename=NotesOnDM365Performance
http://lists.freedesktop.org/archives/gstreamer-devel/2013-June/041229.html
https://developer.ridgerun.com/wiki/index.php?title=Introduction_to_network_streaming_using_GStreamer

Please help me to solve this issue.
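For experimentation, the same stringstream pattern can be reproduced in a self-contained sketch, with the rtpbin/linkage macros inlined as plain literals so it compiles without the kv2streamer headers (a simplification; the real function links through rtpbin as shown above):

```cpp
#include <sstream>
#include <string>

// Stand-alone reduction of CreateAppSrc_RGB8_EncodedMulticastingPipeline:
// build a GStreamer launch string for an RGB appsrc -> x264 -> RTP ->
// multicast UDP chain. Parameter names mirror the struct fields above.
std::string BuildRGBPipeline(int width, int height, const std::string& ip,
                             int port, bool highestQuality)
{
    std::stringstream s;
    s << "appsrc name=appsrc0 is-live=true block=true"
      << " ! video/x-raw, format=RGB, width=" << width
      << ", height=" << height << ", framerate=30/1"
      << " ! videoconvert ! queue"
      << " ! x264enc tune=zerolatency speed-preset=ultrafast";
    // qp-min=qp-max=0 forces lossless quantisation, at a large bitrate cost
    if (highestQuality) s << " qp-min=0 qp-max=0 qp-step=0";
    s << " ! video/x-h264, profile=baseline ! rtph264pay"
      << " ! udpsink port=" << port << " host=" << ip
      << " auto-multicast=true";
    return s.str();
}
```

Note that even with qp forced to 0, chroma subsampling in the H.264 path can still alter color values, which may matter for exact color/depth registration.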

How to Build ClientAPI in Windows platform

Hi,
Now I am trying to run the kv2streamer source on Windows. I compiled the KinectV2 server successfully, but I am facing some issues configuring the Kinect client-side API. I am also a beginner in C++. These are the errors I am facing now:

Error 30 error LNK2019: unresolved external symbol "public: __thiscall kv2s::KV2Client::~KV2Client(void)" (??1KV2Client@kv2s@@QAE@XZ) referenced in function "public: void * __thiscall kv2s::KV2Client::`scalar deleting destructor'(unsigned int)" (??_GKV2Client@kv2s@@QAEPAXI@Z) KinectClient
Error 59 error LNK2019: unresolved external symbol "public: __thiscall kv2s::GstAppSrcPipeline::GstAppSrcPipeline(class std::basic_string<char,struct std::char_traits,class std::allocator >,int)" (??0GstAppSrcPipeline@kv2s@@QAE@V?$basic_string@DU?$char_traits@D@std@@v?$allocator@D@2@@std@@h@Z) referenced in function "public: __thiscall KV2ClientExample::KV2ClientExample(int,int,char const *)" (??0KV2ClientExample@@QAE@HHPBD@Z) KinectClient

Error 67 error LNK2019: unresolved external symbol __imp__wglShareLists@8 referenced in function __glfwCreateContext KinectClient
Error 66 error LNK2019: unresolved external symbol __imp__wglMakeCurrent@8 referenced in function __glfwPlatformMakeContextCurrent
..\KinectClient\KinectClient\glfw3.lib(wgl_context.obj) KinectClient

Question: Network load + lag?

Hi,

what's the approximate network load when streaming the complete data? Or, put differently: would this also work for streaming data from multiple Kinect v2 sensors (each with its own PC running the server) to the same client?

Also, does it add a noticeable lag?

Thanks.


How to show multiple kinect server output in single client computer

Hi,

After a long time I have a question about the kv2streamer application: multiple clients can connect to a single server using a multicast IP. But is it possible to connect multiple servers to a single client (i.e., to show multiple servers' Kinect data on a single client machine)?

Please suggest me with some ideas.

Thanks in Advance
Kiruba

Networking Streams not rendering properly

Hi,
In my scenario I connected two Kinect V2 sensors: one on the same PC (primary), and another on a networked PC (secondary). Using the kv2streamer application we receive streams from the networked PC, and we map the (color and depth) point cloud with the coordinate mapper from the primary. When I access the primary sensor's color buffer it gives (1920 * 1080 * 4) = 8294400 bytes, but the secondary color buffer throws an access-violation error when I use 4 bytes per pixel, so I multiply (1920 * 1080 * 3) = 6220800 and it gives point cloud output. The primary sensor gives an exactly isolated human, but the secondary sensor's depth and color are not mapped correctly; it shows a shadow around the human cloud. Please suggest how to get full-resolution color and depth data from the networked PC so that human depth and color map exactly. I have attached the results I get from the primary and secondary sensors.


Thanks,
Kirubha
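The buffer sizes quoted above are consistent with a bytes-per-pixel mismatch: the Kinect SDK delivers BGRA (4 bytes/pixel) locally, while the streamed color frame appears to carry 3 bytes per pixel (the server's RGB8 pipeline), so indexing it with a 4-byte stride reads past the end of the buffer. The arithmetic, as a self-contained check:

```cpp
#include <cstddef>

// A 1920x1080 BGRA frame (Kinect SDK, 4 bytes/pixel) and an RGB frame
// (3 bytes/pixel) differ in total size, so per-pixel byte offsets must
// use the matching stride or the last rows read out of bounds.
constexpr std::size_t kWidth = 1920, kHeight = 1080;

constexpr std::size_t FrameBytes(std::size_t bytesPerPixel)
{
    return kWidth * kHeight * bytesPerPixel;
}

constexpr std::size_t PixelOffset(std::size_t colorIndex, std::size_t bytesPerPixel)
{
    return colorIndex * bytesPerPixel;
}
```

This explains the access violation with 4 * colorIndex; it does not by itself explain the shadow, which may instead come from lossy encoding or imperfect color-to-depth registration between the two sensors.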
