
Comments (6)

rbodo commented on June 4, 2024

Hi Matthias,

Unfortunately I don't have an example that works out of the box, but I can share some scripts that I used years ago for quantization experiments with the Distiller library. I won't be able to help much with debugging at this point, but hopefully you'll find some pointers that get you going.

low_precision.zip

(A good starting point would be low_precision/distiller/run_ptq.py for post-training quantization or run_qat.py for quantization-aware training.)
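As a rough, library-independent illustration of what the post-training path does to a weight tensor (this sketches the linear-quantization idea behind run_ptq.py, not its actual Distiller code; all values and shapes are illustrative only):

# Library-independent sketch of post-training linear quantization:
# map float weights to n-bit integers with a per-tensor scale,
# then dequantize ("fake quantization").
import numpy as np

def quantize_linear(w, n_bits=8):
    """Symmetric linear quantization of a weight tensor."""
    qmax = 2 ** (n_bits - 1) - 1        # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax      # one scale per tensor
    w_q = np.clip(np.round(w / scale), -qmax, qmax)
    return w_q * scale                  # dequantized weights

w = np.random.randn(120, 84).astype(np.float32)
print("max abs quantization error:", np.abs(w - quantize_linear(w)).max())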


Matthias-Hoefflin commented on June 4, 2024

Thank you very much for your fast response; I will have a look. In the meantime, however, I'm facing another issue.

I use a LeNet-5 on the Fashion MNIST dataset. The parsed model is really close to the ANN accuracy, at 94%. If I use temporal_mean_rate encoding, I achieve about 92% accuracy, which is good. But if I switch to "ttfs", the parsed model keeps the same accuracy while the SNN drops to around 60%. Do you know why this happens? The config I use is below.

I am really confused that it works well with temporal_mean_rate but not with ttfs, because ttfs performs well on the MNIST dataset, so I would expect it to also work well on Fashion MNIST.

config = configparser.ConfigParser()
config['paths'] = {
    'path_wd': WORKING_DIR,
    'dataset_path': DATASET_DIR,
    'filename_ann': MODEL_NAME,
    'runlabel': MODEL_NAME + '_' + str(NUM_STEPS_PER_SAMPLE)
}
config['tools'] = {
    'evaluate_ann': True,
    'parse': True,
    'normalize': True,
    'simulate': True,
    'convert': True
}
config['conversion'] = {
    'spike_code': 'ttfs',
    'softmax_to_relu': True
}
config['simulation'] = {
    'simulator': 'INI',
    'duration': NUM_STEPS_PER_SAMPLE,
    'num_to_test': NUM_TEST_SAMPLES,
    'batch_size': BATCH_SIZE,
    'keras_backend': 'tensorflow'
}
config['output'] = {
    'verbose': 2,
    'plot_vars': {
        'input_image',
        'spiketrains',
        'spikerates',
        'spikecounts',
        'operations',
        'normalization_activations',
        'activations',
        'correlation',
        'v_mem',
        'error_t'
    },
    'overwrite': True
}
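(For reference, a config like this is typically written to disk and handed to the toolbox's entry point. A minimal sketch, assuming the snntoolbox.bin.run.main entry point used in the packaged examples, with WORKING_DIR as defined above:)

# Write the config to disk and run the toolbox on it.
import os

config_filepath = os.path.join(WORKING_DIR, 'config')
with open(config_filepath, 'w') as configfile:
    config.write(configfile)

from snntoolbox.bin.run import main
main(config_filepath)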


rbodo commented on June 4, 2024

Remember that the TTFS encoding uses only a single spike per neuron to represent a floating-point activation value. Temporal mean rate uses many times that number of spikes, so it can achieve higher precision / accuracy more easily, at the cost of increased computation. To illustrate one of the issues with a single spike: in our coding scheme, large activations result in early ("fast") spikes, small activations in late ("slow") spikes. So if a neuron receives both slow and fast input spikes, it may fire an output spike due to the fast input spike without waiting for the slow input spike (which could have inhibited the firing). We explain this in a bit more detail in the paper.
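(To make the failure mode concrete, here is a toy sketch, not the toolbox implementation. It uses a hypothetical encoding t = T * (1 - a), so large activations spike early, and shows an integrate-and-fire neuron firing on an early excitatory spike before a late inhibitory spike can cancel it:)

# Toy illustration of the single-spike failure mode described above.
# Hypothetical TTFS-style encoding: t = T * (1 - a) for a in [0, 1].
T = 100  # number of simulation steps

def spike_time(activation):
    """Larger activations spike earlier."""
    return int(T * (1.0 - activation))

# One excitatory (fast) and one inhibitory (slow) input spike.
inputs = [(spike_time(0.9), +1.0),   # arrives at t=10
          (spike_time(0.2), -0.8)]   # arrives at t=80
threshold, v_mem = 0.9, 0.0

for t in range(T):
    v_mem += sum(w for t_spike, w in inputs if t_spike == t)
    if v_mem >= threshold:
        print(f"Output spike at t={t}, before the inhibitory spike "
              f"at t={spike_time(0.2)} could suppress it.")
        break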

Unfortunately, MNIST is not a good predictor of the success of a method. Fashion MNIST was designed explicitly to be harder than MNIST while still being small and easy to handle. We've struggled to make TTFS work with CIFAR, for example, so I'm not surprised that the accuracy dropped for Fashion MNIST. There are a couple of things we tried to improve performance: a dynamic threshold, and training with quantized / clipped activations.


matthiashoefflin commented on June 4, 2024

Thank you for your response.
OK, so you would not expect a bug or something like that?

I was confused because a paper that uses your toolbox reported an accuracy of 88.9% with TTFS on the Fashion MNIST dataset. Therefore, I assumed there was a mistake somewhere.


rbodo commented on June 4, 2024

I don't think it is a bug; more likely a configuration issue / finding the right hyperparameters. Or training the ANN in a certain way before conversion: it usually helps if the activations of all layers are distributed roughly equally between 0 and 1, or are clipped or quantized, because the converted SNN effectively quantizes and clips the spike rates due to the finite simulation resolution and maximum firing rate. Perhaps you could ask the authors of that paper to share their config?
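(A minimal sketch of the clipped-activation idea in Keras, using the built-in ReLU max_value; the layer sizes are placeholders, not taken from this thread:)

# Train with activations clipped to [0, 1] before conversion.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, 5, input_shape=(28, 28, 1)),
    tf.keras.layers.ReLU(max_value=1.0),   # clipped activation
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])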


matthiashoefflin commented on June 4, 2024

Thanks for your help.

