matiasvlevi / dann
Deep Neural Network Library for JavaScript.
Home Page: https://dannjs.org
License: MIT License
More manual tests are in test/manual-tests/browser and test/manual-tests/node.
Examples have to be simple and easy to understand. You can use the templates test/manual-tests/browser/empty-example and test/manual-tests/node/empty-example to create a new one.
Maybe they could also be run as tests along with the mocha unit tests.
Recurrent Neural Networks for Dannjs
Adding a new class for RNNs would be interesting. The class would likely be called Rann, for Recurrent Artificial Neural Network, as opposed to the Dann acronym, which stands for Deep Artificial Neural Network. This would be a pretty time-consuming feature to implement, and it would require a lot of testing before we could publish it to the master branch. Creating another testing/dev branch for this feature might be necessary.
These are really early examples and might be completely different once we implement the feature.
This would create the RNN.
const rnn = new Rann(input_neurons, hidden_neurons, output_neurons);
We could feed a Rann a set of sequences
let rnn = new Rann(2, 20, 2);
rnn.train([
  [1, 2],
  [3, 4],
  [5, 6],
  [7, 8]
], [9, 10]);
rnn.feed([
  [1, 2],
  [3, 4]
]);
// would return [5, 6]
This is, of course, early speculation about a big addition to the library. It might take quite some time to create & test, and usability might change a whole lot throughout development. I'm assigning this issue to myself because I want to start working on it, but help and criticism are welcome.
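Since the API above is only speculation, here is an equally speculative sketch of the recurrent step such a Rann class could build on: the hidden state feeds back into the next time step. All names here (recurrentStep, W, U, b) are illustrative and match nothing in the library.

```javascript
// Speculative sketch of one recurrent time step: h' = tanh(W*x + U*h + b).
// Nothing here exists in the library; names are purely illustrative.
function recurrentStep(x, h, W, U, b) {
  return h.map((_, i) => {
    let sum = b[i];
    for (let j = 0; j < x.length; j++) sum += W[i][j] * x[j]; // input weights
    for (let j = 0; j < h.length; j++) sum += U[i][j] * h[j]; // recurrent weights
    return Math.tanh(sum);
  });
}
```

Feeding a sequence would then amount to calling this step once per sequence element, carrying the hidden state along.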
Batch Back-Propagation / Training
In batch back-propagation, we split the full input set into smaller batches. During training, the model feeds forward through all the input/target pairs of a batch without changing the weights or biases, but keeps accumulating the gradients (I am not sure of the exact math behind it). Once the batch is complete, it updates the weights and biases based on an overall picture of the batch rather than using one input. This could help the model find the right weights and biases much faster than single-input back-propagation.
Here is how I would expect the final method to look:
// say arch = 1 -> 2 -> 2
input = [1];
target = [1, 0];
nn.backpropagate(input, target);
// for one input/target pair

input = [[1], [-1], [-5], [5], [2]];
target = [[1, 0], [0, 1], [0, 1], [1, 0], [1, 0]];
nn.backpropagate(input, target);
// expression remains the same; internally it just needs to check if Array.isArray(input[0]) => then batch train.
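The accumulate-then-update idea can be sketched minimally, assuming gradients are plain arrays; computeGradient and applyUpdate are hypothetical stand-ins for Dann's internals, not real library functions.

```javascript
// Hypothetical sketch: accumulate per-sample gradients over a batch and
// apply one averaged update at the end. computeGradient and applyUpdate
// are stand-ins, not actual Dann internals.
function trainBatch(input, target, computeGradient, applyUpdate) {
  // Single input/target pair: ordinary back-propagation.
  if (!Array.isArray(input[0])) {
    applyUpdate(computeGradient(input, target));
    return;
  }
  // Batch: sum the gradients, update once with the average.
  let sum = null;
  for (let i = 0; i < input.length; i++) {
    const grad = computeGradient(input[i], target[i]);
    if (sum === null) sum = grad.slice();
    else grad.forEach((g, j) => (sum[j] += g));
  }
  applyUpdate(sum.map((g) => g / input.length));
}
```

The same Array.isArray(input[0]) check mentioned above decides which path to take, so the public method signature would not change.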
Additional context
I think this is available in all major ML libraries due to its great efficiency, and it would also help in creating distributed training capabilities.
We could have aliases for the feedForward & backpropagate methods of the Dann class.
These aliases would make the method names a little more uniform when we add in the Rann class for RNNs, since feedForward & backpropagate would not be the most accurate terms for RNNs. See the issue on RNNs here.
feedForward is located in src/classes/dann/methods/feedForward.js
backpropagate is located in src/classes/dann/methods/backpropagate.js
These would be the aliases:
Dann.prototype.backpropagate to Dann.prototype.train
Dann.prototype.feedForward to Dann.prototype.feed
It is important to note that we do not want to remove the backpropagate & feedForward names. I think having machine-learning terms for methods helps to get a grasp of what the neural network is doing, since you can look up the terms.
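The aliasing itself could be as simple as assigning on the prototype; a sketch with a stand-in Dann class (the method bodies below are placeholders, not the real implementations):

```javascript
// Sketch of prototype aliasing with a stand-in class; the real change
// would go in the Dann source files listed above.
class Dann {
  feedForward(input) { return input; }            // placeholder body
  backpropagate(input, target) { return target; } // placeholder body
}
// Shorter, RNN-friendly names that keep the originals intact.
Dann.prototype.feed = Dann.prototype.feedForward;
Dann.prototype.train = Dann.prototype.backpropagate;
```

Both names point at the same function object, so documentation and behavior stay in sync automatically.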
Small datasets typically used for testing should be included in the library as an easy way to access them (so you wouldn't need to define an XOR dataset just to test your neural network).
Datasets such as:
We are getting errors with npm run test and npm run build.
I am not sure why this is happening. I am using Node 14.17.1, by the way, on Windows.
Originally posted by @Labnann in #22 (comment)
Add new activation functions along with their derivatives.
As of now, yourmodel.load(); needs its value name to be referenced as the first argument.
This works in some HTML environments and not in others (JSBin or other coding sandboxes).
Seems like net loading is broken. On every net.load('name')
I get Uncaught ReferenceError: name is not defined onchange https://null.jsbin.com/runner:1
Dann is loaded from https://cdn.jsdelivr.net/gh/matiasvlevi/[email protected]/build/dann.min.js
Check it out here: https://jsbin.com/kedijibewi/edit?js,console,output
It's the only major bug preventing me from deeper experiments with Dann.
We could use toLocaleLowerCase to specify the activation functions without worrying about capitalization.
These statements would all be valid if we ignore capitalization:
nn.addHiddenLayer(x, 'leakyrelu');
nn.addHiddenLayer(x, 'LEAKYRELU');
nn.addHiddenLayer(x, 'leakyReLU');
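A sketch of what the lookup could do, assuming a map keyed by lowercase names; the activations table here is illustrative, not the actual Layer.stringToFunc internals:

```javascript
// Illustrative case-insensitive activation lookup; the real table lives
// in Layer.stringToFunc, this map is only a stand-in.
const activations = {
  sigmoid: (x) => 1 / (1 + Math.exp(-x)),
  leakyrelu: (x) => (x > 0 ? x : 0.05 * x),
};
function stringToFunc(name) {
  // Normalize once, so 'leakyrelu', 'LEAKYRELU' and 'leakyReLU' all match.
  return activations[name.toLocaleLowerCase()];
}
```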
Additional context
This also means all activation names need to be in lowercase, and when we add a new activation with Add.activation, we should also convert the input name to lowercase.
Changes should be in these methods:
Layer.stringToFunc parses the activation names into an object containing the derivative & the activation.
Add.activation adds new activations, and also needs to respect the case insensitivity.
Dann methods using activation strings will need some minor adjustments.
Linter for documentation examples
A grunt task that would lint the examples in the inline documentation & overwrite them in between the <code>
tags.
We could adapt the documentation for mobile devices.
Here is an example of what could be done to the documentation on mobile:
Additional context
We are using handlebars with yui-doc for the documentation. Even if you have never used yui-doc or handlebars before, this task only requires CSS knowledge, since the content/elements don't change.
The documentation templates are located in
docs/yuidoc-dannjs-theme/partials/
And all the css is located in
docs/yuidoc-dannjs-theme/assets/css/
Softmax activation function.
Here is the softmax function I wrote not so long ago:
/**
* Softmax function
* @method softmax
* @param z An array of numbers (vector)
* @return An array of numbers (vector)
**/
function softmax(z) {
  let ans = [];
  let denom = 0;
  for (let j = 0; j < z.length; j++) {
    denom += Math.exp(z[j]);
  }
  for (let i = 0; i < z.length; i++) {
    let top = Math.exp(z[i]);
    ans.push(top / denom);
  }
  return ans;
}
This function is not implemented in the repository yet.
For this function to work in a Neural Network, we would need to write the derivative of this function. This might be a difficult task since this function takes in & outputs vectors. Vectors that are represented as arrays.
These two functions would need to be implemented in src/core/functions/actfuncs.js
.
For this function to work with a Dann model, we would need to change how activations are handled, since it expects a vector instead of a number value. I could work on that once the derivative is implemented.
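The derivative could follow the standard softmax Jacobian, dS_i/dz_j = S_i(delta_ij - S_j). Here is a sketch under that assumption; softmaxJacobian is a hypothetical name, and none of this is in the repository yet:

```javascript
// Sketch of the softmax derivative as a full Jacobian matrix, using the
// standard formula dS_i/dz_j = S_i * (delta_ij - S_j). Hypothetical names.
function softmax(z) {
  const denom = z.reduce((acc, v) => acc + Math.exp(v), 0);
  return z.map((v) => Math.exp(v) / denom);
}
function softmaxJacobian(z) {
  const s = softmax(z);
  return s.map((si, i) => s.map((sj, j) => si * ((i === j ? 1 : 0) - sj)));
}
```

Because the derivative is a matrix rather than a single number, this confirms why the activation-handling code would need to change to accept vectors.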
Unless I've missed it, there doesn't seem to be any public-facing documentation that describes the changes introduced in a given release.
E.g., comparing v2.4.0 to v2.4.1c, it looks like some of the recent code-level changes may include:
name.toLocaleLowerCase() calls (added in parallel?)
a saveLoss configuration option (that populates nn.losses?)
I suspect there are no "breaking" changes here (and didn't notice anything obviously broken when I upgraded locally), but it would be helpful to have some context for what has changed beyond spelunking in the diff between release tags. (Also, if saveLoss does what I think it does, that would be neat to add to the log(options) documentation.)
(See above)
For an elaborate example of this kind of documentation, see Electron.js's release notes but for what it's worth I'd be satisfied with a simple CHANGES.md file or whatever in the root directory that describes:
a. breaking changes,
b. new features, and
c. bugs fixed
in a few short bullet points (the "bugs fixed" one is probably optional, since "fixed in release N" is probably being added to the actual bug report anyway, and if anyone knows or cares about a specific issue they can probably find that info there).
It doesn't need to be exhaustive or especially detailed; I'm just looking for a clue from the contributors about the intended or expected impact of the changes in a given release.
Additional Context
Thanks for this library, BTW. I don't mean to appear ungrateful, but even rudimentary release notes would make it easier to upgrade versions with confidence, and it's hard for someone more on the "API consumer" side of this to contribute to that. (It's probably best for the contributors directly involved with the changes, or at least the troubleshooting, to capture that info.)
It would be nice to have the code of the "San Francisco house price predictions" live demo, and maybe have it explained in detail?
Dann.save(); is not using any function to convert a Dann model to a savable JSON object; the conversion is happening directly in the Dann.save(); function, which is not optimal. Creating a Dann function that outputs a JSON object, to be called in Dann.save();, would result in cleaner code. The function could also be called on its own to use the JSON object in another way. The decoder equivalent of this function is Dann.applyToModel(jsonObject);. Documentation on this function is still missing; it would be relevant to add it when creating the converter function.
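A minimal sketch of the split, using a stand-in class; the toJSON name and the fields shown are illustrative, not the actual Dann model layout:

```javascript
// Stand-in sketch: the model-to-object conversion lives in its own method,
// and save() only consumes it. Field names are illustrative, not Dann's
// real state.
class Model {
  constructor() {
    this.arch = [2, 4, 1];
    this.lr = 0.1;
  }
  toJSON() {
    // Can also be called on its own to use the object in another way.
    return { arch: this.arch, lr: this.lr };
  }
  save() {
    return JSON.stringify(this.toJSON());
  }
}
```

With this shape, the converter is the natural counterpart of applyToModel, and documenting the two together becomes straightforward.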
A function that would create an XOR dataset with X
number of inputs.
We currently have a static 2 input XOR dataset for testing/examples purposes. We also have a makeBinary
function that creates a dataset of binary digits with X
bits, so you can create a custom dataset to test a neural network. What if XOR had a similar function allowing for the creation of a 3 or 4 input XOR?
const dataset = makeXOR(3);
console.log(dataset);
[
  {
    input: [0, 0, 0],
    output: [0]
  },
  {
    input: [0, 0, 1],
    output: [1]
  },
  {
    input: [0, 1, 0],
    output: [1]
  },
  {
    input: [0, 1, 1],
    output: [0]
  },
  //...
  {
    input: [1, 1, 1],
    output: [1]
  }
]
This is a 3 input XOR table for reference
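Assuming the N-input XOR generalizes to the parity of the bits (which is what the table above suggests), a makeXOR sketch could look like this; it is not implemented in the library yet:

```javascript
// Sketch of makeXOR, assuming N-input XOR means bit parity; hypothetical,
// not yet part of the library.
function makeXOR(n) {
  const dataset = [];
  for (let i = 0; i < (1 << n); i++) {
    // Bits of i, most significant first, form one input row.
    const input = Array.from({ length: n }, (_, b) => (i >> (n - 1 - b)) & 1);
    const parity = input.reduce((acc, bit) => acc ^ bit, 0);
    dataset.push({ input, output: [parity] });
  }
  return dataset;
}
```

makeXOR(2) would then reproduce the existing static 2-input dataset, while makeXOR(3) gives the 8-row table shown above.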
Unit tests in a browser environment
Running the mocha unit tests in the browser.
We currently have unit tests with mocha that run in the command line with node. Having the same tests in a browser environment would eliminate potential errors in the future.
A static function that takes in an array of values and returns a string, just like Rann.stringToNum reversed. This method should be added to the RNN branch.
Existing stringToNum method
Rann.stringToNum('hey');
// Returns [0.7604166666666666, 0.7291666666666666, 0.9375]
numToString method
Rann.numToString([0.7604166666666666, 0.7291666666666666, 0.9375]);
// Should return 'hey'
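Judging from the sample output above, stringToNum appears to map each character to (charCode - 31) / 96, so the inverse could be sketched like this; the scaling is inferred from the example, not confirmed against the RNN branch:

```javascript
// Sketch of numToString as the inverse of stringToNum; the
// (charCode - 31) / 96 scaling is inferred from the sample output above.
function stringToNum(str) {
  return [...str].map((c) => (c.charCodeAt(0) - 31) / 96);
}
function numToString(nums) {
  return nums.map((v) => String.fromCharCode(Math.round(v * 96) + 31)).join('');
}
```

Rounding before decoding also makes the method tolerant of the small floating-point drift a network output would have.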
Additional context
Be sure to commit changes to the RNN branch.
Do you plan on adding GPU support?
Segmented sin wave dataset for Rann
Just like we have XOR & binary digit testing datasets for Dann models, it would be nice to have a testing dataset for the upcoming Rann model. The changes would have to be applied to the origin/RNN branch. The method creating the dataset should be referenced in the module.exports in src/io/exports.js.
The source file for this method should be in src/core/datasets/
We train a Rann model this way: we feed an array of sequences to the Rann model. The sequence lengths must be the same as the number of input neurons the Rann model has.
rnn.train([
  [0, 1],
  [2, 3],
  [4, 5]
]);
We could technically have a sin wave in an array of sequences to later train a model with.
let data = [
  [0, sinus values..., ],
  [sinus values continuation..., ],
  [sinus values continuation..., ],
  [sinus values continuation..., ],
];
Here is an example of how the method could work.
let dataset = makeSinWave(sequence_length, total_length, resolution);
console.log(dataset);
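One possible shape for the method, sampling a sine wave and slicing it into fixed-length sequences; the parameter names follow the example call above, and everything else is an assumption:

```javascript
// Sketch of makeSinWave: sample sin at a fixed step, then slice the wave
// into sequences of sequenceLength values. Hypothetical, not implemented.
function makeSinWave(sequenceLength, totalLength, resolution) {
  const wave = [];
  for (let i = 0; i < totalLength; i++) {
    wave.push(Math.sin(i * resolution));
  }
  const dataset = [];
  for (let i = 0; i + sequenceLength <= wave.length; i += sequenceLength) {
    dataset.push(wave.slice(i, i + sequenceLength));
  }
  return dataset;
}
```

Matching sequenceLength to the Rann model's input neuron count would make the output directly usable with rnn.train.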
Add callbacks for the browser version of Dann.save(); with a custom error argument.
The console.log provides a function with malformed arrow statements.
// logging the following snippet toFunction will produce malformed arrow statements
// ex.
// a[1]=(1+Math.exp(-t)=>);a[2]=(1+Math.exp(-t)=>);a[3]=(1+Math.exp(-t)=>);
// initialize Dann with 1 input and 1 output
let nn = new Dann(1, 1)
// Number of neuron layers in the neural net
nn.addHiddenLayer(3, "sigmoid")
nn.addHiddenLayer(3, "sigmoid")
// how to calculate output
nn.outputActivation("sigmoid")
// assign random weights to layers
nn.makeWeights()
// How fast should it learn?
nn.lr = 0.1
// mean squared error loss
nn.setLossFunction("mse")
// show info about the neural network
nn.log()
// Training data
for(let count=0; count < 1000; count++) {
let randNum = Math.random()*10 - 5
nn.backpropagate([randNum], [randNum < 0 ? 0 : 1])
}
console.log(nn.loss)
// log the function
console.log(nn.toFunction())
// Logging the function produces the following -
// function myDannFunction(input){let w=[];w[0]=[[118.16350397261459],[125.7198197305115],[-61.353668013979465]];w[1]=[[-0.3268324018128853,-0.10547783949606436,1.2385617474541086],[-4.756040201258138,-5.530586211507047,2.0654393849840065],[-6.638909077737027,-6.373098375160245,3.4436506766914343]];w[2]=[[-0.27843549703223947,-1.8499126518203834,-2.4900563442361467]];let b=[];b[0]=[[6.431720708819896],[5.712699746115524],[-8.928168116731628]];b[1]=[[-0.8442173421026961],[-1.8347328438421329],[-1.6789862537895264]];b[2]=[[1.2994856424588022]];let c=[1,3,3,1];let a=[];a[1]=(1+Math.exp(-t)=>);a[2]=(1+Math.exp(-t)=>);a[3]=(1+Math.exp(-t)=>);let l=[];l[0]=[];for(let i=0;i<1;i++){l[0][i]=[input[i]]};for(let i=1;i<4;i++){l[i]=[];for(let j=0;j<c[i];j++){l[i][j]=[0]}};for(let m=0;m<3;m++){for(let i=0;i<w[m].length;i++){for(let j=0;j<l[m][0].length;j++){let sum=0;for(let k=0;k<w[m][0].length;k++){sum+=w[m][i][k]*l[m][k][j]};l[m+1][i][j]=sum}};for(let i=0;i<l[m+1].length;i++){for(let j=0;j<l[m+1][0].length;j++){l[m+1][i][j]=l[m+1][i][j]+b[m][i][j]}};for(let i=0;i<l[m+1].length;i++){for(let j=0;j<l[m+1][0].length;j++){l[m+1][i][j]=a[m+1](l[m+1][i][j])}}};let o=[];for(let i=0;i<1;i++){o[i]=l[3][i][0]};return o}
// passing data to the model
nn.feedForward([25], {log: true, decimals: 3})
Generate a usable function
Logging produces the following statements:
a[1]=(1+Math.exp(-t)=>);
a[2]=(1+Math.exp(-t)=>);
a[3]=(1+Math.exp(-t)=>);