
convnetjs's Introduction

I like deep neural nets.


convnetjs's Issues

MagicNet's convergence

I tried to train MagicNet on a simple data set of about 100 records; it has been running for over 100 hours and looks like it will run forever without returning. With fewer than 40 records it works fine.

GPU version of your Q-learning demo : enhancement

Dear Andrej,

I really like your deep Q-learning demo page 👍 May I ask whether you, or anyone you know, have tried to use Torch, Theano or Caffe, with IPython notebooks or something else, to build a GPU version of the demo?

I realize this would be a lot of work, but it would also be a lot of fun 👍 Since you have made the big first step of proving it can be done, it also gives a clear map of how to do it!

Best wishes, Aj

ReferenceError: deepqlearn is not defined

I followed your example, but it cannot find deepqlearn. I'm doing this in Node.js.

var convnetjs = require('convnetjs');
// the normal examples with layer_defs etc. worked, but not this:
var brain = new deepqlearn.Brain(3, 2); // 3 inputs, 2 possible outputs (0,1)
var state = [Math.random(), Math.random(), Math.random()];
for(var k=0;k<10000;k++) {
    var action = brain.forward(state); // returns index of chosen action
    var reward = action === 0 ? 1.0 : 0.0;
    brain.backward(reward); // <-- learning magic happens here
    state[Math.floor(Math.random()*3)] += Math.random()*2-0.5;
}
brain.epsilon_test_time = 0.0; // don't make any more random choices
brain.learning = false;
// get an optimal action from the learned policy
var action = brain.forward(array_with_num_inputs_numbers);

It seems it can't find deepqlearn. I saw that node_modules/convnet/deepqlearn.js has an export statement, but your package.json specifies convnet.js as the main entry point.
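
One possible workaround (an untested sketch; the exact path inside the installed package is an assumption, adjust it to wherever deepqlearn.js actually lives) is to require that file directly rather than expecting the main convnet.js entry point to re-export it:

var convnetjs = require('convnetjs');
// hypothetical path -- point this at the deepqlearn.js file shipped with the package
var deepqlearn = require('convnetjs/deepqlearn');
var brain = new deepqlearn.Brain(3, 2); // 3 inputs, 2 possible outputs (0,1)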

Build with Node instead of Ant?

This is a JavaScript project, so it would be better to build it with JavaScript/bash/command-line tools. The objective is to remove the Java dependency; a simple bash or command-line script can do what the compile/ folder does today.

For the best cross-platform compatibility we could use Node directly, or, to avoid a dependency on Node as well, a cross-platform script might do the job. But I prefer Node (cross-platform + JavaScript).

What do you think about this?
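
For example, a minimal Node build step could be as simple as the sketch below (the file list and its order are illustrative only, not the project's actual dependency order):

// build.js -- hypothetical replacement for the Ant build; file names are illustrative
var fs = require('fs');
var files = ['convnet_util.js', 'convnet_vol.js', 'convnet_net.js', 'convnet_trainers.js']; // etc.
var out = files.map(function(f) { return fs.readFileSync('src/' + f, 'utf8'); }).join('\n');
fs.writeFileSync('build/convnet.js', out);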

Possible issue with batch_size parameter

I was looking through your code: you initialize the gradient to zero on each backward pass. Won't this affect the use of batch_size? Doesn't the gradient need to be accumulated so that you get the average gradient for the weight update? You already set the gradient to zero on each weight update.
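
For reference, the usual mini-batch pattern accumulates the parameter gradient over the batch and only zeroes it after the weight update; a toy sketch for a single weight (not the library's actual code):

// toy sketch of mini-batch averaging for one weight; values are made up
var w = 0.5, dw = 0.0, learning_rate = 0.1, batch_size = 4;
var grads = [0.2, -0.1, 0.3, 0.0];       // per-example gradients from the backward passes
for (var b = 0; b < batch_size; b++) {
  dw += grads[b];                        // accumulate across the batch
}
w -= learning_rate * dw / batch_size;    // apply the averaged gradient once
dw = 0;                                  // zero only after the update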

importTrainData() not found

Hello,
the automatic.html demo refers to importTrainData() but I cannot find where it is defined.
Can you help?
Thanks. Guido

Convnetjs is not bower installable.

In case you don't know, Bower is a package manager for JavaScript libraries. I use Bower when working on my web apps, and it would be useful if I could add convnetjs to my current project. I've taken the liberty of creating a bower.json file and I'll have a pull request for you in a jiffy.

Reinforcement Learning with negative rewards?

I wrote a very simple simulation to test the Reinforcement Learning Module. I only set up the current action as input, and the output is "left" or "right". Going right feeds the reward 1 back into the network while going left returns the reward -1.

To my astonishment, returning an hour later after I let it train, the 'creature' was moving very confidently to the left, and only to the left! Goes without saying, the average reward of the network was negative! What could be an explanation for this?

Regarding the setup of the network, I basically copied all settings from your apples/poison example - including the layer defs.

npm version out of date?

Installing convnetjs for Node.js via npm install convnetjs gets you this version, last published a year ago. Should it be updated?

The version currently on npm doesn't expose the deepqlearn module, which I needed, so I put together a fork of convnetjs that uses npm to build, compile, test, and expose the library. I'd be happy to polish those changes into a pull request if you're at all interested in that kind of automation.

Question: Multiple outputs..

I would like to know how to design a CNN (with this module) to handle multiple outputs instead of a single binary output.
For example, I would like to retrieve several relevant words (not just one word) given an image.
Any idea?

Thanks
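
One possible direction (a sketch only, not necessarily the intended use of the library; word_count and the layer sizes are illustrative): use a regression loss with one output neuron per candidate word and treat each output as an independent relevance score, then pick the top-scoring words at prediction time.

var convnetjs = require('convnetjs'); // or include convnet.js in the browser
var word_count = 100;                 // illustrative vocabulary size
var layer_defs = [];
layer_defs.push({type:'input', out_sx:32, out_sy:32, out_depth:3});
layer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});
layer_defs.push({type:'pool', sx:2, stride:2});
layer_defs.push({type:'fc', num_neurons:50, activation:'relu'});
layer_defs.push({type:'regression', num_neurons:word_count}); // one score per word
var net = new convnetjs.Net();
net.makeLayers(layer_defs);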

Brain fromJSON

Is there any way to read Brain data from stored JSON? toJSON + fromJSON combo doesn't seem to work.
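
One workaround sketch (it assumes the Brain exposes its inner network as brain.value_net, as in deepqlearn.js; the Brain object itself also holds replay memory and counters that this does not cover):

var json = JSON.stringify(brain.value_net.toJSON());   // save just the value network
// later, on a freshly constructed Brain with the same layer sizes:
brain.value_net.fromJSON(JSON.parse(json));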

Confusion about formula for H2 and W2 and D2

In your code you specify that

layer_defs.push({type:'input', out_sx:32, out_sy:32, out_depth:3}); // declare size of input
// output Vol is of size 32x32x3 here

This is fine and makes sense.

However, this volume calculation is odd to me.

layer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});
// the layer will perform convolution with 16 kernels, each of size 5x5.
// the input will be padded with 2 pixels on all sides to make the output Vol of the same size
// output Vol will thus be 32x32x16 at this point

In your lecture notes you say

H2 = (H1 - F) + (2 * P) / S + 1
H2 = (32 - 5) + (2 * 2)/ 1 + 1
H2 = (27) + (4) / 2
H2 = 15.5
W2 = (W1 - F) + (2 * P) / S + 1
W2 = 15.5

D2 = K
D2 = 16
Resulting Volume
15.5 * 15.5 * 16

I am confused about how you arrived at the 32 x 32 x 16 volume.
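
For comparison, the standard output-size formula divides the whole quantity (H1 - F + 2P) by the stride before adding 1, which with these numbers gives 32 again:

// worked check of H2 = (H1 - F + 2*P)/S + 1
var H1 = 32, F = 5, P = 2, S = 1;
var H2 = (H1 - F + 2*P) / S + 1;  // (32 - 5 + 4)/1 + 1 = 32
console.log(H2);                  // 32, so the output Vol is 32x32x16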

Is there a missing sqrt in the adadelta calculations?

I was wondering if there is a missing sqrt call on this line https://github.com/karpathy/convnetjs/blob/master/src/convnet_trainers.js#L117

According to this article (which links to the Adadelta paper), http://sebastianruder.com/optimizing-gradient-descent/index.html#adadelta, it would suggest that:

var dx = - Math.sqrt((xsumi[j] + this.eps)/(gsumi[j] + this.eps)) * gij;

should be:

var dx = - (Math.sqrt(xsumi[j] + this.eps)/Math.sqrt(gsumi[j] + this.eps)) * gij;

Would be great if someone could correct my understanding if not.

Newbie question about deepQ

I've successfully adapted and trained my network.

How can I use it to predict a result?

Say I have 4 inputs and 3 possible outputs to choose from.

I want to give 4 new input values to the trained network and get back one of the 3 possible choices, and probably retrain with the new values if it turns out the result is wrong.
Thank you
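
For what it's worth, the pattern used elsewhere in these examples is to turn off exploration and learning and then call forward, which returns the index of the chosen action; a sketch (the 4-element state below is illustrative):

brain.epsilon_test_time = 0.0;      // no more random exploration
brain.learning = false;             // stop updating the network
var state = [0.2, 0.5, 0.1, 0.9];   // your 4 new input values
var action = brain.forward(state);  // index of one of the 3 possible choices
// to keep training on new data later, re-enable learning and feed rewards again:
// brain.learning = true; brain.forward(state); brain.backward(reward);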

How can data be serialized?

Can a network that is already trained be serialized? In which format can it be serialized?
Can I train a network with other tools and have convnetjs only read it?
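
For the first two questions, a trained convnetjs network can be round-tripped through plain JSON (sketch below). Importing a model trained with other tools would mean converting its weights into this same JSON layout yourself; the library does not do that conversion.

var json = JSON.stringify(net.toJSON());  // serialize a trained net to a JSON string
// later, rebuild an identical net from that string
var net2 = new convnetjs.Net();
net2.fromJSON(JSON.parse(json));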

unhelpful bug messages

Hey! I get the following error:
TypeError: Vw is undefined

I am 100% sure it's because I set up the convnet for regression incorrectly, since if I try the example given in the getting started guide it works fine.

var layer_defs = [];
layer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2});
layer_defs.push({type:'fc', num_neurons:5, activation:'sigmoid'});
layer_defs.push({type:'regression', num_neurons:1});
var net = new convnetjs.Net();
net.makeLayers(layer_defs);

var x11 = new convnetjs.Vol([1,2,3,4,5,6]);

// train on this datapoint, saying [0.5, -1.3] should map to value 0.7:
// note that in this case we are passing it a list, because in general
// we may want to regress multiple outputs and in this special case we
// used num_neurons:1 for the regression to only regress one.
var trainer = new convnetjs.SGDTrainer(net,
{learning_rate:0.01, momentum:0.0, batch_size:1, l2_decay:0.001});
trainer.train(x11, [1,2,3,4,5,6]);

This raises that error, so something is breaking and Vw ends up undefined. I'll go through the docs more carefully to understand the setup. I am interested in regressing pairs of the form (xi, yi), which is why I set it up the way you see above...
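
For what it's worth, a setup along these lines keeps the sizes consistent (a sketch: the input Vol has as many values as the input layer's out_depth, and the target list has as many values as the regression layer's num_neurons):

var layer_defs = [];
layer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2});  // two input values
layer_defs.push({type:'fc', num_neurons:5, activation:'sigmoid'});
layer_defs.push({type:'regression', num_neurons:1});               // one regressed output
var net = new convnetjs.Net();
net.makeLayers(layer_defs);

var x = new convnetjs.Vol([0.5, -1.3]);  // length matches out_depth:2
var trainer = new convnetjs.SGDTrainer(net,
  {learning_rate:0.01, momentum:0.0, batch_size:1, l2_decay:0.001});
trainer.train(x, [0.7]);                 // length matches num_neurons:1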

What kind of neural network do I need?

Hello,
I am a beginner and not sure this is the correct place to post for 'support', so please forgive me ahead of time.

I would like to present a scenario and ask which type of neural network I should use to solve it, and anything else I should be aware of when I implement it as a beginner to neural networks but an apt user of JavaScript.

I'm building something that would give me data about the game Hearthstone; the data might look like this:

input:
     turn_played:2
output:
    card_id: EX_11

I've experimented with brainjs but that only accepts numerics as inputs/outputs

I would like to put those data points into the neural network so it gets an idea of what was played every turn for a game

Then I would ask the neural net things like:
if turn 1 had a card id of EX_13, what is most likely to be played on turn 2?
Is there a neural network capable of guessing that information based on previous data?

Where is the Model Zoo

I wonder if anyone has converted pre-trained Caffe (or other) models like AlexNet, VGG, or HYBRID to the ConvNetJS format?

If not, how could it be done?

[Question] Why use javascript ?

Hello, I'm a newcomer to deep learning.
I'm just curious why you chose JavaScript to implement a convnet, which needs
expensive computation. Why not just use Python or C++? Is there a specific reason?

Question: trainer network error

Hi,

Thanks a ton for the fantastic library!
I'm using a deep network for NLP, with a varying input size of 12000 down to 4000 nodes.
I've made sure to enable 8GB of RAM for node:

node --max-old-space-size=8192

My network looks like this (although I'd like to try different types and architectures):

var layers = [];
layers.push({type: 'input', out_sx: 1, out_sy: 1, out_depth: input_size});
layers.push({type: 'fc', num_neurons: 200, activation: 'relu'});
layers.push({type: 'fc', num_neurons: 100, activation: 'relu'});
layers.push({type: 'fc', num_neurons: 50, activation: 'relu'});
layers.push({type: 'fc', num_neurons: 25, activation: 'relu'});
layers.push({type: 'fc', num_neurons: 10, activation: 'relu'});
layers.push({type: 'softmax', num_classes: 2});

var net = new convnet.Net();
net.makeLayers(layers);

My data has been parsed in a json array of objects, which I then randomly shuffle and partition into a training set and testing set.

Each json object has a vector (which is simply an array of floats), and a score which is a single value.

My training loop is basically the following:

var trainer = new convnet.Trainer(net, {learning_rate: 0.1, l2_decay: 0.001});
var epochs = 1000;
for (var i = 0; i < epochs; i++)
{
    for (var index in dataset.training())
    {
        var input = new convnet.Vol(json[index].vector);
        var output = new convnet.Vol(json[index].score);
        trainer.train(input, output);
    }
}

It runs, but I have no way of validating it.
Is there a Mean Square Error or Average Cross Entropy or any other network-error measurement? AFAIK, the only way to test the network's accuracy is to cross-validate using my testing samples, and see (a) if they are classified correctly, or (b) how far the actual output is from my target/ideal output.

I took a peek into the convnet.js source file, but I don't see Trainer.train returning any kind of network error (unless I missed something, which is very possible!).

Last but not least, referencing your library, do you have a citation you'd like me to use?

PS: is there a way to save a trained network?

Best regards,
Alex
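
On the network-error question, a sketch of one way to keep a running training loss (this assumes trainer.train() returns a stats object with a loss field, as it appears to in convnet_trainers.js, and that the softmax target is an integer class index):

var total = 0, n = 0;
for (var index in dataset.training()) {
  var input = new convnet.Vol(json[index].vector);
  var stats = trainer.train(input, json[index].score); // score used as the class index here
  total += stats.loss;
  n++;
}
console.log('average training loss:', total / n);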

MagicNet for Node.js

Like others, I have been using ConvNetJS in Node.js, server-side.

In order to actually use MagicNet from Node.js given a CSV or similar data, however, one needs to implement preprocessing code like that in automatic.js.

To avoid rewriting such code, in my own experimental branch I separated the preprocessing code out of automatic.js, leaving only the UI layer there. I have tested it and am happy to submit a pull request, but before I do so I want to see if this type of change is wanted and whether there are any suggestions as far as naming and structure.

Have a look here:
bittlingmayer/convnetjs@master...bittlingmayer:csv
(I based it off of my fork, but if #49 is not merged it would be easy to rebase.)

Backprop zeroing on FullyConnLayer

I'm curious why the input gradients are zero'd out in FullyConnLayer.backward() and not the biases and filter gradients.

So zero out input, filter and bias gradients like this:

backward: function() {
      var V = this.in_act;
      V.dw = global.zeros(V.w.length); // zero out the gradient in input Vol

      // compute gradient wrt weights and data
      for(var i=0;i<this.out_depth;i++) {
          var tfi = this.filters[i];
          tfi.dw = global.zeros(tfi.dw.length); // zero out the gradient in filter Vol
          this.biases.dw[i] = 0;                // zero out the bias gradient
          var chain_grad = this.out_act.dw[i];
          for(var d=0;d<this.num_inputs;d++) {
              V.dw[d] += tfi.w[d]*chain_grad; // grad wrt input data
              tfi.dw[d] += V.w[d]*chain_grad; // grad wrt params
          }
          this.biases.dw[i] += chain_grad;
      }
    }

Instead of the original:

    backward: function() {
      var V = this.in_act;
      V.dw = global.zeros(V.w.length); // zero out the gradient in input Vol

      // compute gradient wrt weights and data
      for(var i=0;i<this.out_depth;i++) {
        var tfi = this.filters[i];
        var chain_grad = this.out_act.dw[i];
        for(var d=0;d<this.num_inputs;d++) {
          V.dw[d] += tfi.w[d]*chain_grad; // grad wrt input data
          tfi.dw[d] += V.w[d]*chain_grad; // grad wrt params
        }
        this.biases.dw[i] += chain_grad;
      }
    }

When I use the original code, my 1 layer, 1 neuron net oscillates around the training set rather than steadily decreasing the cost.

When I zero out the input, filter, and bias gradients, I get a steadily decreasing cost. Isn't the filter gradient supposed to be with respect to the input, rather than the sum of past backprop gradients?

Thanks for helping me understand!
Seth

Saving trained networks

Hey,

Awesome job! I'm learning a bunch.
My question: how can I save a trained network as JSON (and load it back up again)? I noticed it was done in some of the demos, but I didn't see how to do so in the docs. Thanks again!
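
A sketch of the round trip with toJSON/fromJSON, written here for Node (in the browser you could keep the string in localStorage instead):

var fs = require('fs');
fs.writeFileSync('net.json', JSON.stringify(net.toJSON()));      // save
// in a later session:
var net2 = new convnetjs.Net();
net2.fromJSON(JSON.parse(fs.readFileSync('net.json', 'utf8')));  // load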

Document motivation and benefits (and limits) of doing this in JS?

I was really happy to see this library and your post (Hacker's Guide to Neural Networks). I work mostly with JavaScript on the server and client and am considering building some machine learning functionality into our web application. We are also working towards taking it offline, so a pure JS solution is really interesting.

However, a lot of people dismiss JS as a good language for this kind of heavy computational work. Other hits against it are no immutable data structures (although now we have a couple of great options), floating-point math (0.1*0.2), and the lack of static typing.

I think it would be interesting to talk more in the intro to this repo about

  • What motivated you to build it in JS
  • If or how JS floating point math problems would affect results
  • Limitations on size of learning data sets
  • For large data sets, presumably it's recommended to train on the server and load the saved trained networks in the browser. How big are saved trained networks for large data sets? Giving some rough guidelines would help determine if this is even an option for some projects.

I was interested in these questions previously, but then read through the Hacker News thread on your post (https://news.ycombinator.com/item?id=8553307). Cynical comments as usual, but there are good questions in there.

Thanks

SVM to classify text sentences

Hi, newbie machine learner here. I am trying to classify news articles based on their headlines, mainly into two classes: "crime" and "non-crime". I have used Natural's NaiveBayes classifier, but I want to improve on its accuracy. Would an SVM improve the prediction, and if so, please guide me on how to implement it using your library!
By the way, I love reading your blog and have learned a lot from it :) Please do complete the book, I'm eagerly waiting for the remaining chapters!
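
If you want to experiment with this library, one possible sketch (the feature extraction is up to you; the bag-of-words size and layer sizes below are illustrative) uses the 'svm' loss layer with two classes:

var convnetjs = require('convnetjs');
var num_features = 500;  // illustrative size of a bag-of-words vector per headline
var layer_defs = [];
layer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:num_features});
layer_defs.push({type:'fc', num_neurons:20, activation:'relu'});
layer_defs.push({type:'svm', num_classes:2});  // 0 = non-crime, 1 = crime
var net = new convnetjs.Net();
net.makeLayers(layer_defs);
var trainer = new convnetjs.SGDTrainer(net, {learning_rate:0.01, l2_decay:0.001});
// trainer.train(new convnetjs.Vol(bag_of_words_vector), label); // label is 0 or 1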

Loaded JSON runs slower

I'm noticing a framerate drop with nets loaded from JSON.

I set up a test in JSFiddle with the library included; the only thing that jumped out at me was dw in Vol.toJSON, so I added it to see if that helped.
http://jsfiddle.net/MrYellow/xLy0Lp45/7/

  1. Hit start
  2. Run until loss drops.
  3. Log some timings.
  4. Save/Load.
  5. Re-enable learning.

I'm going from 145-150 or so to 120.

some variable definition...

In convnet_net.js, line 133:

"var layer_reponse = this.layers[i].getParamsAndGrads();"

Er... should that be 'layer_response'?

How do you take sample input in MNIST

Sir,
We are very interested in your work and are excited to contribute to your project. It would be of great help if you could kindly help us understand how you take input in the MNIST demo. We cannot find the specific code that reads the sample inputs. Any pointers to external sources and experts would also be helpful and appreciated.

Thanks in advance.

Question for deepqlearn.js

The following is my understanding of the backward function in deepqlearn.js.

During learning, after one forward pass, it begins backpropagation. It does SGD with a batch size of N, where the N samples are randomly selected from the replay memory. The code seems to show that the weight matrix of the network is updated N times per call to backward, but in my opinion SGD should update the weight matrix only once, by averaging the weight updates over the batch of N samples.

Please correct me if I have misunderstood something.

File "convnet_layers_transform.js" (with QuadTransformLayer?) missing

I noticed that convnet_layers_transform.js is missing, even though it is required by build.xml.

In the same commit where that requirement appeared in build.xml, a bunch of code for a QuadTransformLayer appeared in build/convnet.js.

Is there a file that you haven't git added?

All the Best
Martin
:-)

MaxoutLayer uses different options to initialize "switches" array depending on fromJSON or base constructor

Hello All -

I was wondering whether this is intended behavior. I've been reading the source over the holidays and noticed this in the MaxoutLayer constructor:

(convnet_layers_nonlinearities.js:128)
this.switches = global.zeros(this.out_sx*this.out_sy*this.out_depth); // useful for backprop

Which doesn't seem to match with the from_json initialization:

(convnet_layers_nonlinearities.js:224)
this.switches = global.zeros(this.group_size);

Was wondering if this was a bug or maybe you could educate me on why the switches would be a different size in this case?

Thank you.

how to reference Brain()?

I feel this is a really stupid Node question to ask, but I really can't figure it out after a few hours. So here is the code that embarrasses me:

var convnetjs = require("convnetjs");
var net = new convnetjs.Net();
var brain = new convnetjs.deepqlearn.Brain(3, 2);

Basically it can't find the Brain function:

weiwe-macbookpro:test weiwe$ node index.js
/Users/weiwe/test/index.js:5
var brain = new convnetjs.deepqlearn.Brain(3, 2);
^

TypeError: Cannot read property 'Brain' of undefined
at Object.<anonymous> (/Users/weiwe/test/index.js:5:37)
at Module._compile (module.js:556:32)
at Object.Module._extensions..js (module.js:565:10)
at Module.load (module.js:473:32)
at tryModuleLoad (module.js:432:12)
at Function.Module._load (module.js:424:3)
at Module.runMain (module.js:590:10)
at run (bootstrap_node.js:394:7)
at startup (bootstrap_node.js:149:9)
at bootstrap_node.js:509:3

I'm pretty sure the npm module is installed correctly, because I can use functions (e.g., Net()) in convnet.js just fine. I'm scratching my head over why this doesn't work...

npm: 404 not found

I'm new to node and npm so I might be doing something wrong, but when I try:

$ npm install convnetjs

I get:

npm http GET https://registry.npmjs.org/convnetjs
npm http 304 https://registry.npmjs.org/convnetjs
npm http GET https://registry.npmjs.org/convnetjs/-/convnetjs-0.2.0.tgz
npm http 404 https://registry.npmjs.org/convnetjs/-/convnetjs-0.2.0.tgz
npm ERR! fetch failed https://registry.npmjs.org/convnetjs/-/convnetjs-0.2.0.tgz
npm ERR! Error: 404 Not Found
npm ERR!     at WriteStream.<anonymous> (/usr/local/lib/node_modules/npm/lib/utils/fetch.js:57:12)
npm ERR!     at WriteStream.EventEmitter.emit (events.js:117:20)
npm ERR!     at fs.js:1598:14
npm ERR!     at /usr/local/lib/node_modules/npm/node_modules/graceful-fs/graceful-fs.js:105:5
npm ERR!     at Object.oncomplete (fs.js:107:15)
npm ERR! If you need help, you may report this *entire* log,
npm ERR! including the npm and node versions, at:
npm ERR!     <http://github.com/npm/npm/issues>

npm ERR! System Darwin 13.1.0
npm ERR! command "node" "/usr/local/bin/npm" "install" "convnetjs"
npm ERR! cwd /Users/christian/repos/github/2Q48/js
npm ERR! node -v v0.10.26
npm ERR! npm -v 1.4.7
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR!     /Users/christian/repos/github/2Q48/js/npm-debug.log
npm ERR! not ok code 0

Format of MagicNet Parameters

I have some trouble with the parameters of MagicNet:

var magicNet = new convnetjs.MagicNet(train_data, train_labels, opts);

Is train_data a JSON string, an object, a CSV string, etc?
Also, what format would train_labels be in?

Could you provide an example of each?

Thanks!
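
As far as I can tell (a sketch; the option names should be checked against the docs), train_data is a plain array of convnetjs.Vol objects and train_labels a parallel array of integer class labels:

var train_data = [];
var train_labels = [];
train_data.push(new convnetjs.Vol([0.3, 1.2, -0.5])); // one example as a Vol of features
train_labels.push(0);                                  // its integer class label
train_data.push(new convnetjs.Vol([1.1, -0.2, 0.7]));
train_labels.push(1);
var opts = {train_ratio: 0.7, num_folds: 3, num_candidates: 10, num_epochs: 20}; // illustrative options
var magicNet = new convnetjs.MagicNet(train_data, train_labels, opts);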

WebGL implementation of calculations

(this is a continuation of the discussion started in #11, so it can be 'closed' cleanly)

... jpcnn, which in turn relies on underscore ...

Hmm - that is a lot of machinery to include 100-200 lines of WebGL. The least-intrusive method for including the end result (i.e. what the client sees) would be to have a separate convnet.webgl.min.js which, if it's there, sets up a webgl flag for the regular convnet.min.js to call into - or even overwrite the re-implemented methods themselves.

On the source side, however, I've got to think there's a more direct way of making the BLAS.js code ready-to-use. I also think it makes sense to go the BLAS-compatible route, since it's a standard, and one avoids having to continuously re-invent the wheel... I'll have a poke around for a cleaner set of includes.

All the Best
Martin
:-)

Ok if I use convnetjs as a 'reference implementation' for my gpu-based neural net library?

Hi Andrej,

Just asking out of politeness really. Are you ok with my using convnetjs as a 'reference implementation' for my gpu-based neural net library? I doubt my maths a bit :-P and convnetjs seems to be widely used, therefore highly likely to have good correctness, and relatively straightforward to read, therefore ideal for a 'reference implementation'? I've provisionally added it here https://github.com/hughperkins/ClConvolve/tree/ca0752db840772a8c0efb6d7a454af9cd2ccba47/prototyping/convnetjs-reference

Hugh

Saving experiences in Q-Learning

How would I go about saving the experiences of the "Brain" object in deep Q-learning, and subsequently restoring them and continuing training? It seems that saving the "experience" object would suffice, but I'm not sure...

Thanks
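
A sketch of that idea (the field names assume deepqlearn.js internals such as experience, value_net and age; adjust to whatever the Brain actually exposes):

// save
var snapshot = JSON.stringify({
  experience: brain.experience,         // replay memory
  value_net: brain.value_net.toJSON(),  // learned value network
  age: brain.age                        // frames seen so far
});
// restore into a Brain constructed with the same sizes
var saved = JSON.parse(snapshot);
brain.experience = saved.experience;
brain.value_net.fromJSON(saved.value_net);
brain.age = saved.age;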

Running the brain demo gets warning every iteration

I get a warning every loop when running the brain demo code:

  if(this.regression && y.constructor !== Array)
    console.log("Warning: a regression net requires an array as training output vector.");

This is the basic code from demo/rldemo.html

function start() {
    // var brain = new deepqlearn.Brain(num_inputs, num_actions);
    var brain = new deepqlearn.Brain(3, 2); // 3 inputs, 2 possible outputs (0,1)
    var state = [Math.random(), Math.random(), Math.random()];
    for(var k=0;k<10000;k++) {
        // var action = brain.forward(array_with_num_inputs_numbers);
        var action = brain.forward(state); // returns index of chosen action
        var reward = action === 0 ? 1.0 : 0.0;
        brain.backward(reward); // <-- learning magic happens here
        state[Math.floor(Math.random()*3)] += Math.random()*2-0.5;
    }
    brain.epsilon_test_time = 0.0; // don't make any more random choices
    brain.learning = false;
    // get an optimal action from the learned policy
    var action = brain.forward([.9, .3, .1]);
    console.log(action);
}

Speed-up of ConvLayer.forward

I noticed that ConvLayer.forward is being benchmarked by convnet-benchmarks, and I thought I'd have a go at some optimisation. With a little type-hinting here and there (plus some slight loop-order modifications and constant extraction), I think I've got at least a 2x speed-up (YMMV, of course). It's functionally identical (AFAICT).

Here's the run-down of the benchmark timings

      // Orig   #5 iteration : 4880ms (original)
      // Dupe   #5 iteration : 5067ms (+1 console.log!)
      // oxoy   #5 iteration : 4155ms (move oy,ox calc outside of inner loop)
      // xy|0   #5 iteration : 4155ms (type hint on x and y)
      // xyst|0 #5 iteration : 2607ms (type hint on stride_x and stride_y)
      // more|0 #5 iteration : 2662ms (type hint on f>depth - WORSE)
      // hint|0 #5 iteration : 2591ms (type hint on constructors)
      // ox out #5 iteration : 2586ms (move ox variable setting outside y loop (splitting 'if' makes it WORSE, though))
      // xy->yx #5 iteration : 2398ms (switch loop order, so that faster moving indices inside (better cache perf))
      // contru #5 iteration : 2366ms (type-hinting into constructor of A)
      // VolSet #5 iteration : 2322ms (type-hinting into Vol.set())

One issue with submitting the patch, though, is that my build/convnet.js is also updated, which seems wasteful. OTOH, since you want the concatted-minimised version in the repo, I don't see how to get away from including it...

In addition, the current state-of-play has both forward_orig and forward(new) in it - as well as some commentary about things that don't work, etc. Would you like me to clean them up before submitting?

All the Best
Martin
:-)
