
Comments (5)

tnowotny avatar tnowotny commented on July 17, 2024

I assume @mstimberg will get this anyway, but can I also pull in @neworderofjamie for an opinion on whether anything might have changed on the GeNN side, and if so, what? I am currently traveling and will not be able to do detective work at the moment.

from brian2genn.

neworderofjamie avatar neworderofjamie commented on July 17, 2024

thesamovar avatar thesamovar commented on July 17, 2024

I think what you're looking for is https://git-scm.com/docs/git-bisect. :)

denisalevi avatar denisalevi commented on July 17, 2024

Alright, I found the problem... I almost couldn't reproduce my results from above... what a pain... looks like it was me after all, nothing in GeNN or brian2GeNN.

I got rid of the lastupdate variable when it's not needed, to save memory (see PR brian-team/brian2#979). Since those changes are not merged yet (and might not get merged, as lastupdate might be removed entirely for synapses that are not event-driven, see the PR discussion), I just add them through a diff file, which I apply whenever I check out a new brian2 version. That diff file is tracked in our brian2cuda repo. For the blue plot above, those changes were not included in the diff file.
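For scale, here is a back-of-envelope estimate of what that per-synapse double costs in the COBAHH benchmark discussed in this thread. The 2% connection probability and the 80/20 excitatory/inhibitory split are assumptions taken from the standard Brian2 COBAHH example, not from the report itself:

```python
# Back-of-envelope memory cost of the per-synapse lastupdate variable for the
# COBAHH benchmark with N = 1e5 neurons. The 2% connection probability and the
# 80/20 exc/inh split are assumptions based on the standard Brian2 example.
N = 100_000
p = 0.02
n_exc, n_inh = int(0.8 * N), int(0.2 * N)

# expected synapse counts for the excitatory and inhibitory pathways
syn_exc = int(n_exc * N * p)
syn_inh = int(n_inh * N * p)
total_synapses = syn_exc + syn_inh

bytes_per_double = 8
lastupdate_bytes = total_synapses * bytes_per_double
# (the generated code allocates this array on both host and device, so
# roughly twice this amount is used in practice)
print(f"{total_synapses:.2e} synapses -> "
      f"{lastupdate_bytes / 1e9:.1f} GB for lastupdate alone")
```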

I ran the brian2 COBAHH example, modified for N=1e5 neurons, with set_device('genn'), no monitors, and with print(device._last_run_time) at the end of the script (needs the changes from my PR #65 to work). With the lastupdate variable, last_run_time is ~2 s; without it, it's ~10 s.

Here is a diff of the generated code. My guess would be that it has to do with the missing convert_dynamic_arrays_2_sparse_synapses calls?
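For context, convert_dynamic_arrays_2_sparse_synapses packs Brian's flat pre/post index arrays into the CSR-like layout of GeNN's SparseProjection (indInG row offsets plus ind column indices), permuting per-synapse variables such as lastupdate along the way. A rough sketch of that conversion (my reconstruction for illustration, not brian2genn's actual code):

```python
# Sketch of packing flat (pre, post) synapse lists into a CSR-like layout as
# used by GeNN's SparseProjection: ind_in_g holds per-presynaptic-neuron row
# offsets, ind holds postsynaptic targets grouped by presynaptic neuron.
# This is a reconstruction for illustration, not brian2genn's actual code.
def to_sparse_projection(syn_pre, syn_post, n_pre):
    # stable sort of synapse indices by presynaptic neuron; any per-synapse
    # variable (e.g. lastupdate) would have to be permuted by the same order
    order = sorted(range(len(syn_pre)), key=lambda i: syn_pre[i])
    ind = [syn_post[i] for i in order]

    # row offsets: count synapses per presynaptic neuron, then prefix-sum
    ind_in_g = [0] * (n_pre + 1)
    for p in syn_pre:
        ind_in_g[p + 1] += 1
    for i in range(n_pre):
        ind_in_g[i + 1] += ind_in_g[i]
    return ind_in_g, ind

# tiny example: 3 presynaptic neurons, 4 synapses
ind_in_g, ind = to_sparse_projection([0, 2, 0, 1], [5, 3, 7, 2], 3)
print(ind_in_g, ind)  # -> [0, 2, 3, 4] [5, 7, 2, 3]
```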

Anyway, if the lastupdate variable gets removed for good, brian2genn will need some changes too, I guess. :)

Diff of the generated code:

diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/code_objects/synapses_1_synapses_create_generator_codeobject.cpp GeNNworkspace_slow/code_objects/synapses_1_synapses_create_generator_codeobject.cpp
--- GeNNworkspace_fast/code_objects/synapses_1_synapses_create_generator_codeobject.cpp	2018-08-15 15:31:43.582184617 +0200
+++ GeNNworkspace_slow/code_objects/synapses_1_synapses_create_generator_codeobject.cpp	2018-08-15 15:21:00.006339188 +0200
@@ -386,7 +386,6 @@
     _dynamic_array_synapses_1__synaptic_post.resize(newsize);
     _dynamic_array_synapses_1__synaptic_pre.resize(newsize);
     _dynamic_array_synapses_1_delay.resize(newsize);
-    _dynamic_array_synapses_1_lastupdate.resize(newsize);
 	// Also update the total number of synapses
 	_ptr_array_synapses_1_N[0] = newsize;
 
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/code_objects/synapses_synapses_create_generator_codeobject.cpp GeNNworkspace_slow/code_objects/synapses_synapses_create_generator_codeobject.cpp
--- GeNNworkspace_fast/code_objects/synapses_synapses_create_generator_codeobject.cpp	2018-08-15 15:31:43.618185280 +0200
+++ GeNNworkspace_slow/code_objects/synapses_synapses_create_generator_codeobject.cpp	2018-08-15 15:21:00.230343308 +0200
@@ -386,7 +386,6 @@
     _dynamic_array_synapses__synaptic_post.resize(newsize);
     _dynamic_array_synapses__synaptic_pre.resize(newsize);
     _dynamic_array_synapses_delay.resize(newsize);
-    _dynamic_array_synapses_lastupdate.resize(newsize);
 	// Also update the total number of synapses
 	_ptr_array_synapses_N[0] = newsize;
 
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/definitions.h GeNNworkspace_slow/magicnetwork_model_CODE/definitions.h
--- GeNNworkspace_fast/magicnetwork_model_CODE/definitions.h	2018-08-15 15:31:53.450366374 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/definitions.h	2018-08-15 15:26:56.020889812 +0200
@@ -87,13 +87,9 @@
 extern double * inSynsynapses;
 extern double * d_inSynsynapses;
 extern SparseProjection Csynapses;
-extern double * lastupdatesynapses;
-extern double * d_lastupdatesynapses;
 extern double * inSynsynapses_1;
 extern double * d_inSynsynapses_1;
 extern SparseProjection Csynapses_1;
-extern double * lastupdatesynapses_1;
-extern double * d_lastupdatesynapses_1;
 
 #define Conductance SparseProjection
 /*struct Conductance is deprecated. 
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/runner.cc GeNNworkspace_slow/magicnetwork_model_CODE/runner.cc
--- GeNNworkspace_fast/magicnetwork_model_CODE/runner.cc	2018-08-15 15:31:53.470366742 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/runner.cc	2018-08-15 15:26:56.132891874 +0200
@@ -72,9 +72,6 @@
 __device__ unsigned int *dd_indInGsynapses;
 unsigned int *d_indsynapses;
 __device__ unsigned int *dd_indsynapses;
-double * lastupdatesynapses;
-double * d_lastupdatesynapses;
-__device__ double * dd_lastupdatesynapses;
 double * inSynsynapses_1;
 double * d_inSynsynapses_1;
 __device__ double * dd_inSynsynapses_1;
@@ -83,9 +80,6 @@
 __device__ unsigned int *dd_indInGsynapses_1;
 unsigned int *d_indsynapses_1;
 __device__ unsigned int *dd_indsynapses_1;
-double * lastupdatesynapses_1;
-double * d_lastupdatesynapses_1;
-__device__ double * dd_lastupdatesynapses_1;
 
 //-------------------------------------------------------------------------
 /*! \brief Function to convert a firing probability (per time step) 
@@ -221,8 +215,6 @@
   Csynapses.remap= NULL;
     deviceMemAllocate(&d_indInGsynapses, dd_indInGsynapses, 100001 * sizeof(unsigned int));
     deviceMemAllocate(&d_indsynapses, dd_indsynapses, Csynapses.connN * sizeof(unsigned int));
-    cudaHostAlloc(&lastupdatesynapses, Csynapses.connN * sizeof(double), cudaHostAllocPortable);
-    deviceMemAllocate(&d_lastupdatesynapses, dd_lastupdatesynapses, Csynapses.connN * sizeof(double));
 }
 
 void createSparseConnectivityFromDensesynapses(int preN,int postN, double *denseMatrix){
@@ -240,8 +232,6 @@
   Csynapses_1.remap= NULL;
     deviceMemAllocate(&d_indInGsynapses_1, dd_indInGsynapses_1, 100001 * sizeof(unsigned int));
     deviceMemAllocate(&d_indsynapses_1, dd_indsynapses_1, Csynapses_1.connN * sizeof(unsigned int));
-    cudaHostAlloc(&lastupdatesynapses_1, Csynapses_1.connN * sizeof(double), cudaHostAllocPortable);
-    deviceMemAllocate(&d_lastupdatesynapses_1, dd_lastupdatesynapses_1, Csynapses_1.connN * sizeof(double));
 }
 
 void createSparseConnectivityFromDensesynapses_1(int preN,int postN, double *denseMatrix){
@@ -252,10 +242,8 @@
 size_t size;
 size = Csynapses.connN;
   initializeSparseArray(Csynapses, d_indsynapses, d_indInGsynapses,100000);
-CHECK_CUDA_ERRORS(cudaMemcpy(d_lastupdatesynapses, lastupdatesynapses, sizeof(double) * size , cudaMemcpyHostToDevice));
 size = Csynapses_1.connN;
   initializeSparseArray(Csynapses_1, d_indsynapses_1, d_indInGsynapses_1,100000);
-CHECK_CUDA_ERRORS(cudaMemcpy(d_lastupdatesynapses_1, lastupdatesynapses_1, sizeof(double) * size , cudaMemcpyHostToDevice));
 }
 
 void initmagicnetwork_model()
@@ -295,8 +283,6 @@
     CHECK_CUDA_ERRORS(cudaFree(d_indInGsynapses));
     CHECK_CUDA_ERRORS(cudaFreeHost(Csynapses.ind));
     CHECK_CUDA_ERRORS(cudaFree(d_indsynapses));
-    CHECK_CUDA_ERRORS(cudaFreeHost(lastupdatesynapses));
-    CHECK_CUDA_ERRORS(cudaFree(d_lastupdatesynapses));
     CHECK_CUDA_ERRORS(cudaFreeHost(inSynsynapses_1));
     CHECK_CUDA_ERRORS(cudaFree(d_inSynsynapses_1));
     Csynapses_1.connN= 0;
@@ -304,8 +290,6 @@
     CHECK_CUDA_ERRORS(cudaFree(d_indInGsynapses_1));
     CHECK_CUDA_ERRORS(cudaFreeHost(Csynapses_1.ind));
     CHECK_CUDA_ERRORS(cudaFree(d_indsynapses_1));
-    CHECK_CUDA_ERRORS(cudaFreeHost(lastupdatesynapses_1));
-    CHECK_CUDA_ERRORS(cudaFree(d_lastupdatesynapses_1));
 }
 
 void exitGeNN(){
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/runnerGPU.cc GeNNworkspace_slow/magicnetwork_model_CODE/runnerGPU.cc
--- GeNNworkspace_fast/magicnetwork_model_CODE/runnerGPU.cc	2018-08-15 15:31:53.474366816 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/runnerGPU.cc	2018-08-15 15:26:56.136891947 +0200
@@ -63,14 +63,12 @@
 void pushsynapsesStateToDevice()
  {
     size_t size = Csynapses.connN;
-    CHECK_CUDA_ERRORS(cudaMemcpy(d_lastupdatesynapses, lastupdatesynapses, size * sizeof(double), cudaMemcpyHostToDevice));
     CHECK_CUDA_ERRORS(cudaMemcpy(d_inSynsynapses, inSynsynapses, 100000 * sizeof(double), cudaMemcpyHostToDevice));
     }
 
 void pushsynapses_1StateToDevice()
  {
     size_t size = Csynapses_1.connN;
-    CHECK_CUDA_ERRORS(cudaMemcpy(d_lastupdatesynapses_1, lastupdatesynapses_1, size * sizeof(double), cudaMemcpyHostToDevice));
     CHECK_CUDA_ERRORS(cudaMemcpy(d_inSynsynapses_1, inSynsynapses_1, 100000 * sizeof(double), cudaMemcpyHostToDevice));
     }
 
@@ -118,14 +116,12 @@
 void pullsynapsesStateFromDevice()
  {
     size_t size = Csynapses.connN;
-    CHECK_CUDA_ERRORS(cudaMemcpy(lastupdatesynapses, d_lastupdatesynapses, size * sizeof(double), cudaMemcpyDeviceToHost));
     CHECK_CUDA_ERRORS(cudaMemcpy(inSynsynapses, d_inSynsynapses, 100000 * sizeof(double), cudaMemcpyDeviceToHost));
     }
 
 void pullsynapses_1StateFromDevice()
  {
     size_t size = Csynapses_1.connN;
-    CHECK_CUDA_ERRORS(cudaMemcpy(lastupdatesynapses_1, d_lastupdatesynapses_1, size * sizeof(double), cudaMemcpyDeviceToHost));
     CHECK_CUDA_ERRORS(cudaMemcpy(inSynsynapses_1, d_inSynsynapses_1, 100000 * sizeof(double), cudaMemcpyDeviceToHost));
     }
 
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/synapseFnct.cc GeNNworkspace_slow/magicnetwork_model_CODE/synapseFnct.cc
--- GeNNworkspace_fast/magicnetwork_model_CODE/synapseFnct.cc	2018-08-15 15:31:53.654370132 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/synapseFnct.cc	2018-08-15 15:26:56.184892831 +0200
@@ -32,7 +32,6 @@
                  using namespace synapses_weightupdate_simCode;
                 addtoinSyn =  (6.00000000000000079e-09);
 inSynsynapses[ipost] += addtoinSyn;
-lastupdatesynapses[Csynapses.indInG[ipre] + j] = t;
                 }
             }
         }
@@ -48,7 +47,6 @@
                  using namespace synapses_1_weightupdate_simCode;
                 addtoinSyn =  (6.70000000000000044e-08);
 inSynsynapses_1[ipost] += addtoinSyn;
-lastupdatesynapses_1[Csynapses_1.indInG[ipre] + j] = t;
                 }
             }
         }
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/synapseKrnl.cc GeNNworkspace_slow/magicnetwork_model_CODE/synapseKrnl.cc
--- GeNNworkspace_fast/magicnetwork_model_CODE/synapseKrnl.cc	2018-08-15 15:31:53.486367037 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/synapseKrnl.cc	2018-08-15 15:26:56.148892168 +0200
@@ -48,7 +48,6 @@
                         ipost = dd_indsynapses[prePos];
                         addtoinSyn =  (6.00000000000000079e-09);
 atomicAddSW(&dd_inSynsynapses[ipost], addtoinSyn);
-dd_lastupdatesynapses[prePos] = t;
                         }
                     }
                 
@@ -93,7 +92,6 @@
                         ipost = dd_indsynapses_1[prePos];
                         addtoinSyn =  (6.70000000000000044e-08);
 atomicAddSW(&dd_inSynsynapses_1[ipost], addtoinSyn);
-dd_lastupdatesynapses_1[prePos] = t;
                         }
                     }
                 
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model.cpp GeNNworkspace_slow/magicnetwork_model.cpp
--- GeNNworkspace_fast/magicnetwork_model.cpp	2018-08-15 15:31:43.794188522 +0200
+++ GeNNworkspace_slow/magicnetwork_model.cpp	2018-08-15 15:21:00.546349120 +0200
@@ -66,13 +66,11 @@
  
 // initial variables (synapses)
 // one additional initial variable for hidden_weightmatrix
-double synapses_ini[2]= {
- 0.0,
+double synapses_ini[1]= {
 };
 
 double *synapses_postsyn_ini= NULL;
-double synapses_1_ini[2]= {
- 0.0,
+double synapses_1_ini[1]= {
 };
 
 double *synapses_1_postsyn_ini= NULL;
@@ -318,8 +316,6 @@
   s.pNames.clear(); 
   s.dpNames.clear();
   // step 1: variables
-  s.varNames.push_back("lastupdate");
-  s.varTypes.push_back("double");
   // step 2: scalar (shared) variables
   s.extraGlobalSynapseKernelParameters.clear();
   s.extraGlobalSynapseKernelParameterTypes.clear();
@@ -327,8 +323,7 @@
   s.pNames.push_back("we");
   // step 4: add simcode
   s.simCode= "$(addtoinSyn) =  $(we);\n\
-$(updatelinsyn);\n\
-$(lastupdate) = t;";
+$(updatelinsyn);";
   s.simLearnPost= "";
   s.synapseDynamics= "";
   s.simCode_supportCode= "\n\
@@ -449,8 +444,6 @@
   s.pNames.clear(); 
   s.dpNames.clear();
   // step 1: variables
-  s.varNames.push_back("lastupdate");
-  s.varTypes.push_back("double");
   // step 2: scalar (shared) variables
   s.extraGlobalSynapseKernelParameters.clear();
   s.extraGlobalSynapseKernelParameterTypes.clear();
@@ -458,8 +451,7 @@
   s.pNames.push_back("wi");
   // step 4: add simcode
   s.simCode= "$(addtoinSyn) =  $(wi);\n\
-$(updatelinsyn);\n\
-$(lastupdate) = t;";
+$(updatelinsyn);";
   s.simLearnPost= "";
   s.synapseDynamics= "";
   s.simCode_supportCode= "\n\
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/main.cpp GeNNworkspace_slow/main.cpp
--- GeNNworkspace_fast/main.cpp	2018-08-15 15:31:43.866189848 +0200
+++ GeNNworkspace_slow/main.cpp	2018-08-15 15:21:00.634350738 +0200
@@ -89,14 +89,8 @@
   // translate to GeNN synaptic arrays
    allocatesynapses(brian::_dynamic_array_synapses__synaptic_pre.size());
    vector<vector<int32_t> > _synapses_bypre;
-      convert_dynamic_arrays_2_sparse_synapses(brian::_dynamic_array_synapses__synaptic_pre, brian::_dynamic_array_synapses__synaptic_post,
-                                               brian::_dynamic_array_synapses_lastupdate, Csynapses, lastupdatesynapses,
-                                               100000, 100000, _synapses_bypre, b2g::FULL_MONTY);
       allocatesynapses_1(brian::_dynamic_array_synapses_1__synaptic_pre.size());
    vector<vector<int32_t> > _synapses_1_bypre;
-      convert_dynamic_arrays_2_sparse_synapses(brian::_dynamic_array_synapses_1__synaptic_pre, brian::_dynamic_array_synapses_1__synaptic_post,
-                                               brian::_dynamic_array_synapses_1_lastupdate, Csynapses_1, lastupdatesynapses_1,
-                                               100000, 100000, _synapses_1_bypre, b2g::FULL_MONTY);
       initmagicnetwork_model();
 
   // copy variable arrays
@@ -146,9 +140,7 @@
 
   // translate GeNN arrays back to synaptic arrays
   
-      convert_sparse_synapses_2_dynamic_arrays(Csynapses, lastupdatesynapses, 100000, 100000, brian::_dynamic_array_synapses__synaptic_pre, brian::_dynamic_array_synapses__synaptic_post, brian::_dynamic_array_synapses_lastupdate, b2g::FULL_MONTY);
      
-      convert_sparse_synapses_2_dynamic_arrays(Csynapses_1, lastupdatesynapses_1, 100000, 100000, brian::_dynamic_array_synapses_1__synaptic_pre, brian::_dynamic_array_synapses_1__synaptic_post, brian::_dynamic_array_synapses_1_lastupdate, b2g::FULL_MONTY);
     
   // copy variable arrays
  
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/objects.cpp GeNNworkspace_slow/objects.cpp
--- GeNNworkspace_fast/objects.cpp	2018-08-15 15:31:43.358180495 +0200
+++ GeNNworkspace_slow/objects.cpp	2018-08-15 15:20:59.618332052 +0200
@@ -56,13 +56,11 @@
 std::vector<int32_t> _dynamic_array_synapses_1__synaptic_post;
 std::vector<int32_t> _dynamic_array_synapses_1__synaptic_pre;
 std::vector<double> _dynamic_array_synapses_1_delay;
-std::vector<double> _dynamic_array_synapses_1_lastupdate;
 std::vector<int32_t> _dynamic_array_synapses_1_N_incoming;
 std::vector<int32_t> _dynamic_array_synapses_1_N_outgoing;
 std::vector<int32_t> _dynamic_array_synapses__synaptic_post;
 std::vector<int32_t> _dynamic_array_synapses__synaptic_pre;
 std::vector<double> _dynamic_array_synapses_delay;
-std::vector<double> _dynamic_array_synapses_lastupdate;
 std::vector<int32_t> _dynamic_array_synapses_N_incoming;
 std::vector<int32_t> _dynamic_array_synapses_N_outgoing;
 
@@ -400,19 +398,6 @@
 	{
 		std::cout << "Error writing output file for _dynamic_array_synapses_1_delay." << endl;
 	}
-	ofstream outfile__dynamic_array_synapses_1_lastupdate;
-	outfile__dynamic_array_synapses_1_lastupdate.open("results/_dynamic_array_synapses_1_lastupdate_6875119916677774017", ios::binary | ios::out);
-	if(outfile__dynamic_array_synapses_1_lastupdate.is_open())
-	{
-        if (! _dynamic_array_synapses_1_lastupdate.empty() )
-        {
-			outfile__dynamic_array_synapses_1_lastupdate.write(reinterpret_cast<char*>(&_dynamic_array_synapses_1_lastupdate[0]), _dynamic_array_synapses_1_lastupdate.size()*sizeof(_dynamic_array_synapses_1_lastupdate[0]));
-		    outfile__dynamic_array_synapses_1_lastupdate.close();
-		}
-	} else
-	{
-		std::cout << "Error writing output file for _dynamic_array_synapses_1_lastupdate." << endl;
-	}
 	ofstream outfile__dynamic_array_synapses_1_N_incoming;
 	outfile__dynamic_array_synapses_1_N_incoming.open("results/_dynamic_array_synapses_1_N_incoming_-5364435978754666149", ios::binary | ios::out);
 	if(outfile__dynamic_array_synapses_1_N_incoming.is_open())
@@ -478,19 +463,6 @@
 	{
 		std::cout << "Error writing output file for _dynamic_array_synapses_delay." << endl;
 	}
-	ofstream outfile__dynamic_array_synapses_lastupdate;
-	outfile__dynamic_array_synapses_lastupdate.open("results/_dynamic_array_synapses_lastupdate_562699891839928247", ios::binary | ios::out);
-	if(outfile__dynamic_array_synapses_lastupdate.is_open())
-	{
-        if (! _dynamic_array_synapses_lastupdate.empty() )
-        {
-			outfile__dynamic_array_synapses_lastupdate.write(reinterpret_cast<char*>(&_dynamic_array_synapses_lastupdate[0]), _dynamic_array_synapses_lastupdate.size()*sizeof(_dynamic_array_synapses_lastupdate[0]));
-		    outfile__dynamic_array_synapses_lastupdate.close();
-		}
-	} else
-	{
-		std::cout << "Error writing output file for _dynamic_array_synapses_lastupdate." << endl;
-	}
 	ofstream outfile__dynamic_array_synapses_N_incoming;
 	outfile__dynamic_array_synapses_N_incoming.open("results/_dynamic_array_synapses_N_incoming_6651214916728133133", ios::binary | ios::out);
 	if(outfile__dynamic_array_synapses_N_incoming.is_open())
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/objects.h GeNNworkspace_slow/objects.h
--- GeNNworkspace_fast/objects.h	2018-08-15 15:31:43.362180565 +0200
+++ GeNNworkspace_slow/objects.h	2018-08-15 15:20:59.622332125 +0200
@@ -25,13 +25,11 @@
 extern std::vector<int32_t> _dynamic_array_synapses_1__synaptic_post;
 extern std::vector<int32_t> _dynamic_array_synapses_1__synaptic_pre;
 extern std::vector<double> _dynamic_array_synapses_1_delay;
-extern std::vector<double> _dynamic_array_synapses_1_lastupdate;
 extern std::vector<int32_t> _dynamic_array_synapses_1_N_incoming;
 extern std::vector<int32_t> _dynamic_array_synapses_1_N_outgoing;
 extern std::vector<int32_t> _dynamic_array_synapses__synaptic_post;
 extern std::vector<int32_t> _dynamic_array_synapses__synaptic_pre;
 extern std::vector<double> _dynamic_array_synapses_delay;
-extern std::vector<double> _dynamic_array_synapses_lastupdate;
 extern std::vector<int32_t> _dynamic_array_synapses_N_incoming;
 extern std::vector<int32_t> _dynamic_array_synapses_N_outgoing;

denisalevi avatar denisalevi commented on July 17, 2024

@mstimberg Since lastupdate has now been removed in brian2, this issue might come up again.
