
nest-simulator's Introduction

The Neural Simulation Tool - NEST


NEST is a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. The development of NEST is coordinated by the NEST Initiative. General information on the NEST Initiative can be found at its homepage at https://www.nest-initiative.org.

NEST is ideal for networks of spiking neurons of any size, for example:

  • Models of information processing e.g. in the visual or auditory cortex of mammals,
  • Models of network activity dynamics, e.g. laminar cortical networks or balanced random networks,
  • Models of learning and plasticity.

For copyright information please refer to the LICENSE file and to the information header in the source files.

How do I use NEST?

You can use NEST either via Python (PyNEST) or as a stand-alone application (nest). PyNEST provides a set of commands to the Python interpreter which give you access to NEST's simulation kernel. With these commands, you describe and run your network simulation. You can also complement PyNEST with PyNN, a simulator-independent set of Python commands to formulate and run neural simulations. While you define your simulations in Python, the actual simulation is executed within NEST's highly optimized simulation kernel which is written in C++.

A NEST simulation tries to follow the logic of an electrophysiological experiment, with the difference that the experiment takes place inside a computer and that the neural system to be investigated must be defined by the experimenter.

The neural system is defined by a possibly large number of neurons and their connections. In a NEST network, different neuron and synapse models can coexist. Any two neurons can have multiple connections with different properties. Thus, the connectivity cannot, in general, be described by a weight or connectivity matrix, but rather by an adjacency list.
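To see why a single matrix entry per neuron pair is not enough, consider a pair of neurons linked by two connections with different weights and delays. The following minimal pure-Python sketch (names and structure are illustrative, not NEST's actual data structures) shows an adjacency-list representation that accommodates this:

```python
from collections import defaultdict

# Hypothetical adjacency-list connectivity: each source neuron maps to a
# list of (target, properties) entries, so the same (source, target) pair
# can appear multiple times with different weights or delays.
connections = defaultdict(list)

def connect(source, target, weight, delay):
    """Record one connection; duplicates between the same pair are allowed."""
    connections[source].append({"target": target, "weight": weight, "delay": delay})

# Two distinct connections between neuron 1 and neuron 2:
connect(1, 2, weight=0.5, delay=1.0)
connect(1, 2, weight=-0.2, delay=2.5)

# A weight matrix would have to merge these into a single entry;
# the adjacency list keeps both.
print(len(connections[1]))  # 2
```

A weight matrix forces exactly one value per pair; the list keeps every connection with its own parameters, which is what NEST's connection infrastructure requires.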

To manipulate or observe the network dynamics, the experimenter can define so-called devices which represent the various instruments (for measuring and stimulation) found in an experiment. These devices write their data either to memory or to file.

NEST is extensible and new models for neurons, synapses, and devices can be added.

To get started with NEST, please see the Documentation Page for Tutorials.

Why should I use NEST?

To learn more about the capabilities of NEST, please read the complete feature summary.

  • NEST provides over 50 neuron models, many of which have been published. Choose from simple integrate-and-fire neurons with current- or conductance-based synapses, through the Izhikevich or AdEx models, to Hodgkin-Huxley models.
  • NEST provides over 10 synapse models, including short-term plasticity (Tsodyks & Markram) and different variants of spike-timing dependent plasticity (STDP).
  • NEST provides many examples that help you get started with your own simulation project.
  • NEST offers convenient and efficient commands to define and connect large networks, ranging from algorithmically determined connections to data-driven connectivity.
  • NEST lets you inspect and modify the state of each neuron and each connection at any time during a simulation.
  • NEST is fast and memory efficient. It makes best use of your multi-core computer and compute clusters with minimal user intervention.
  • NEST runs on a wide range of UNIX-like systems, from MacBooks to supercomputers.
  • NEST has minimal dependencies. All it really needs is a C++ compiler. Everything else is optional.
  • NEST developers use agile, continuous-integration-based workflows in order to maintain high code quality standards for correct and reproducible simulations.
  • NEST has one of the largest and most experienced developer communities of all neural simulators. NEST was first released in 1994 under the name SYNOD and has been extended and improved ever since.

License

NEST is open source software and is licensed under the GNU General Public License v2 or later.

Installing NEST

Please see the online NEST Installation Instructions to find out how to install NEST.

Getting help

  • You can run the help command in the NEST interpreter to find documentation and learn more about available commands.
  • For queries regarding NEST usage, please use the NEST users mailing list.
  • Information on the Python bindings to NEST can be found in ${prefix}/share/doc/nest/README.md.
  • For those looking to extend NEST, developer documentation on Contributing to NEST is available.

Citing NEST

Please cite NEST if you use it in your work.

nest-simulator's People

Contributors

ackurth, akorgor, apeyser, babsey, clinssen, gtrensch, hakonsbm, hanjiajiang, helveg, heplesser, jakobj, janhahne, janvogelsang, jessica-mitchell, jhnnsnk, jougs, lekshmideepu, med-ayssar, nicolossus, pnbabu, sanjayankur31, sarakonradi, sbillaudelle, sdiazpier, silmathoron, steffengraber, stinebuu, tammoippen, terhorstd, willemwybo


nest-simulator's Issues

Incomplete documentation of 'noise_generator'

If the noise generator is connected to a neuron, the relation between its standard deviation 'std' and the actual fluctuations seen in the membrane potential of the neuron is not clear from the documentation.

Feature request: Script for checking coding style compliance

The C++ code style guidelines suggest running three tools (clang-format, vera++, cppcheck) to verify code formatting. The build.sh script for Travis can run these automatically (although there seems to be some trouble).

I would very much appreciate a small script that I could use myself locally to run all code style checks in one go, something like check_code_style.sh.
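A minimal sketch of what such a script could look like, written here in Python rather than shell: it runs each tool that is installed and aggregates the exit statuses. The tool names come from the issue above; the command-line flags and the overall structure are assumptions, not NEST's actual configuration.

```python
"""Sketch of a local all-in-one style checker in the spirit of the
requested check_code_style.sh. Tool flags below are illustrative
guesses, not NEST's actual invocations."""
import shutil
import subprocess
import sys

# (tool, argument list) pairs; the flags are assumptions for illustration.
DEFAULT_CHECKS = [
    ("clang-format", ["--dry-run", "-Werror"]),
    ("vera++", []),
    ("cppcheck", ["--error-exitcode=1"]),
]

def run_checks(files, checks=DEFAULT_CHECKS):
    """Run each installed tool on the given files; return True if all pass."""
    ok = True
    for tool, args in checks:
        if shutil.which(tool) is None:
            print("skipping %s (not installed)" % tool)
            continue
        ok = subprocess.run([tool] + args + list(files)).returncode == 0 and ok
    return ok

# Stand-in demonstration: a tool that is certainly absent is skipped,
# so the overall result stays True.
print(run_checks(["neuron.cpp"], checks=[("no-such-style-tool", [])]))
```

A thin command-line wrapper could then call `run_checks(sys.argv[1:])` and exit nonzero on failure, mirroring what a check_code_style.sh would do.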

@tammoippen you are maybe the ideal candidate to implement this?

Communicator accessibility

Currently, communicators are internal structures. It would be helpful to expose the communicator, particularly for MUSIC, which uses a sub-communicator as part of keeping as much of the machinery as possible outside of NEST rather than buried in internal structures.

So either by integrating pymusic:
s = music.Setup()
s.getcomm()

or by adding a nest call:
nest.getcomm()

Invalid URL in deprecation warning for new Connection Management

Problem: old URL

The deprecation warning about changes in the connection management contains a link to the old site:
http://nest-initiative.org/Connection_Management
When clicking the link the following message shows up:

"This is somewhat embarrassing, isn't it? It seems we can't find what you're looking for. Perhaps searching can help."

Error Reproduction

This code will reproduce the warning:

import nest
nodes = nest.Create('iaf_psc_delta', 2)
nest.Connect(nodes[0:1], nodes[1:], 'all_to_all', model='static_synapse')

Output:

lib/python2.7/site-packages/nest/hl_api.py:84: UserWarning: 
The argument 'model' is there for backward compatibility with the old Connect function 
and will be removed in NEST 2.6. Please change the name of the keyword argument 
from 'model' to 'syn_spec'. For details, see the documentation at:
http://nest-initiative.org/Connection_Management

Origin and probable fix of the problem

The wrong url is in pynest/nest/hl_api.py (SHA1: ae79942), lines 80 and 1098.
I assume the URL should point to this page instead:
http://www.nest-simulator.org/connection_management/

Additionally, the wording of the message in line 1096 should be changed to
"will be removed in a future version of NEST", like in line 78.

Add summary for build process in TravisCI

Currently, the log files of Travis contain different types of information in one large block of text:

  • configuration log and summary
  • build output
  • output from the coding guideline checker
  • static code analysis results
  • logs from the testsuite

In case of failures, it is hard to find the exact cause of failure.

Once the log files are available on S3 (#98), the logs shown by TravisCI could be shortened to contain only summaries of the information detailed above and a link to the full logs. They could also point out more prominently what the exact cause of the failure was.

@lekshmideepu @tammoippen: Could you please look into this?

tau_minus_triplet defined but no triplet STDP

[This is a summary of things discussed in trac.766]
Neurons have the property tau_minus_triplet, but this does not seem to be used, which may be confusing for users. In the ticket triage session on August 27, 2014, we came to the conclusion that triplet STDP from the developer module should be made available in the public NEST version.

Turn off notification noise

The current travis settings are noisy. By deleting the notification section, the default scheme is used, which is that messages only go to the submitter. Given the pull system, there's no need for everyone to get every build error, as in the SVN/Jenkins setup.

Travis does not report test suite failures

Travis will report success on builds even if the testsuite (make installcheck) fails. An example is Travis build 141.8 with one failing PyNEST test; see also #88.

The underlying problem is that make installcheck returns exit code 0 even when tests fail.
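The fix amounts to making the CI driver capture and propagate the test suite's exit status rather than discarding it. A pure-Python sketch of the intended behavior (the command is a stand-in, not NEST's actual build system):

```python
"""Sketch: a CI driver must forward a test suite's exit code instead of
swallowing it, which is the failure mode described above for
`make installcheck`. The failing command is a stand-in."""
import subprocess
import sys

def run_step(cmd):
    """Run one build or test step and return its exit code unchanged."""
    return subprocess.run(cmd).returncode

# Stand-in for a failing `make installcheck`: exits with a nonzero code.
failing_suite = [sys.executable, "-c", "import sys; sys.exit(1)"]

code = run_step(failing_suite)
# A correct driver exits with this code so CI reports a failure; the bug
# reported here is equivalent to always exiting with 0 instead.
print("suite exit code:", code)
```

As long as the top-level script exits with the propagated code, Travis will mark the build red whenever the suite fails.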

Configuring a custom module fails

Hi,

I'm going through this tutorial in an attempt to get a custom module running (I've already installed PyNEST successfully):

https://nest.github.io/nest-simulator/synapse_models

Everything works up to the point when I run the command

../MyModule/configure --with-nest=/usr/local/bin/nest-config

Then it stops due to an error, see here (I replaced some long file structure with /path/to):

=== configuring in libltdl (/Users/haffi/Documents/path/to/nest/mmb/libltdl)
configure: running /bin/sh ../../MyModule/libltdl/configure --disable-option-checking '--prefix=/usr/local/Cellar/nest/2.6.0'  '--with-nest=/usr/local/bin/nest-config' '--enable-ltdl-convenience' --cache-file=/dev/null --srcdir=../../MyModule/libltdl
configure: error: cannot find install-sh, install.sh, or shtool in ../../../../../path/to/MyModule "../../MyModule/libltdl"/../../../../../path/to/MyModule
configure: error: ../../MyModule/libltdl/configure failed for libltdl

I'm running this on Mac OS X, and I know of at least one other user who got the same error message. I've been trying to understand what the issue is, but I can't solve it. If anyone here could help me, that would be very helpful (and lead to an update of that tutorial page if this is a common problem).

Edit: I also tried downloading the Ubuntu image and running it in VirtualBox. There I managed to run the configure script above, but the make command crashes.

Don't work on your master

Working on your master leads to really nasty merge histories. You'll work, then merge into our master, and then continue working without pulling from upstream, then remerge. You should keep your master pristine, work on a branch, and then open a pull request from the branch. After that, you should delete the branch and make a new branch from master.

This is something to check for in a code review --- that the history isn't crazy. If so, the author should be asked to rebase; otherwise it will eventually become very hard to identify where bugs occurred, and merges will become increasingly difficult to do correctly.

Suggestion: Create structured Travis log output

Right now, the content written to stdout (and maybe stderr?) during the whole build and test process on Travis is just put into a single log file. Access to separate build artifacts is only possible by asking our GitHub team for a special build with output to S3.

Maybe life could be made a little bit easier by structuring the output to stdout during Travis builds in a more explicit way. This would require changes in ".travis.yml" and in "build.sh".

Variation 1: Introduce markers in stdout which contain file paths and names and provide in addition a post-processing script which can parse such a log and convert it into a directory structure with separate files.

Variation 2: Redirect all output to stdout and stderr (if possible?) during the build process to temporary files; from these files a structured tar file is created at the end of the build process and then written to stdout to get its contents into the Travis build log.

Additional suggestions are welcome... :-)

Create MPI-test for plasticity with pre-synaptic neurons using spike multiplicity

Several neuron models, especially pp_* models, use set_multiplicity() to convey that several spikes have been fired within a single time step. Since these models have proxies, their spikes are conveyed via Network::send_remote(), which converts a SpikeEvent with multiplicity n into n SpikeEvents of multiplicity 1. Thus, it does not matter that STDPConnection::send() and other plastic synapses do not heed event multiplicity.

Now, it is not inconceivable that this behavior will change in the future, i.e., that event multiplicity is transmitted. Then, plasticity would not be handled correctly. We should therefore do the following:

  • add assert(e.get_multiplicity() == 1) to the send() methods of all plastic synapse models
  • add an MPI test that checks that spikes sent with > 1 multiplicity are handled properly wrt plasticity.
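The splitting behavior described above can be modeled in a few lines of Python. Class and function names mimic the issue text (SpikeEvent, send_remote, the proposed assert in the synapse's send method); this is an illustrative model, not NEST's C++ implementation:

```python
"""Illustrative model: on remote delivery, a spike event of multiplicity n
is split into n events of multiplicity 1, so plastic synapses never see
multiplicity > 1. Names mimic the issue text; not NEST's actual code."""

class SpikeEvent:
    def __init__(self, sender, multiplicity=1):
        self.sender = sender
        self.multiplicity = multiplicity

def send_remote(event):
    """Split an event of multiplicity n into n unit-multiplicity events."""
    return [SpikeEvent(event.sender, 1) for _ in range(event.multiplicity)]

def stdp_send(event):
    # The proposed guard: plastic synapses assume unit multiplicity.
    assert event.multiplicity == 1
    # ... weight update would go here ...

for e in send_remote(SpikeEvent(sender=7, multiplicity=3)):
    stdp_send(e)  # passes: every delivered event has multiplicity 1
```

If event multiplicity were ever transmitted unsplit, the assert in the synapse's send path would fire, which is exactly the safeguard proposed in the first bullet.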

Review current input and its documentation in iaf_psc_exp

iaf_psc_exp handles current input supplied via CurrentEvents differently than other neuron models. Input via rport 0 is handled the normal way, while input via rport 1 is filtered as excitatory synaptic input.
This behavior is documented in one sentence, but that sentence is too inconspicuous for an additional feature such as this. It should at least be on a line of its own, probably as a Remark.

I am also struggling to understand the physical basis of this filtering. An arriving spike releases transmitter vesicles which dock at receptors and thus evoke an input current. The exponential decay of the current captures the process in which "all" channels open instantly and then close over time as transmitter molecules detach again. Thus, there is a physical basis for a current that persists beyond the arrival time of the spike.

But when we inject currents via CurrentEvents, we model injection of currents via an electrode. Consider now a case where the electrode injects current only during a single time step. Then, immediately after that time step, the physical input current to the neuron is zero. But in the "filtered current" model, this current would persist as an exponentially decaying current for several milliseconds. What is the model behind this? It should be explained in the documentation. If one just wants the possibility of injecting a low-pass filtered current into a neuron, one could use a step_current_generator, setting the current amplitudes to filtered values.
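The alternative suggested above, filtering the current trace offline before handing it to a step_current_generator, amounts to a first-order exponential low-pass filter. A pure-Python sketch (time constant and amplitudes are illustrative, not tied to any NEST model parameters):

```python
"""Sketch of offline low-pass filtering of a current trace, as an
alternative to the rport-1 filtering in iaf_psc_exp. Pure Python;
dt, tau, and the pulse shape are illustrative values."""
import math

def lowpass(current, dt, tau):
    """First-order exponential low-pass filter of a sampled current."""
    out, y = [], 0.0
    decay = math.exp(-dt / tau)
    for i in current:
        y = y * decay + i * (1.0 - decay)
        out.append(y)
    return out

dt, tau = 0.1, 2.0            # ms; illustrative values
pulse = [100.0] + [0.0] * 49  # current injected for a single time step

filtered = lowpass(pulse, dt, tau)
# After the pulse ends, the filtered current decays exponentially instead
# of dropping to zero -- the "filtered current" behavior in question.
print(filtered[0] > filtered[1] > filtered[10] > 0.0)  # True
```

The filtered values could then be used as amplitudes for a step_current_generator, making the filtering explicit in the script rather than implicit in the neuron model.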

@jschuecker, could you take a look at this?

NEST 2.6.0 crashes when using different synapse parameters with a large number of connections

When using NEST v2.6.0 - together with Music- to run one of our simulations, on some occasion we get this error, and the simulation crashes:

python: ../nestkernel/scheduler.h:789: static nest::delay nest::Scheduler::get_modulo(nest::delay): Assertion `static_cast::size_type>(d) < moduli_.size()' failed.

With version 2.4.2 there are no problems.
After digging into the problem and testing different cases, it seems that it only happens when running simulations with a large number of connections and different parameters for each of them (e.g. delay and weight change from one connection to another). And even in this case, it's not consistent.
It could be worth checking whether it's related to the new rounding strategy introduced in 2.6, because in one of my tests, filtering out synapses with a delay below a certain threshold (i.e. 0.1) seemed to fix the issue.

test_rdv_param_setting fails for some compilers

Test test_rdv_param_setting.sli fails when NEST is compiled with g++ 5.1.0 or Apple clang 6.1.0 with -O2.

The reason for this is unsafe integer-overflow detection in librandom::UniformIntRandomDev::set_status(), which leads to some overflow cases not being detected.

@apeyser provided a pointer to a safe implementation at https://www.securecoding.cert.org/confluence/display/c/INT32-C.+Ensure+that+operations+on+signed+integers+do+not+result+in+overflow

Since there is a failing test (provided one uses the compilers given above, currently not in the NEST CI system), I am not adding an additional regression test.
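The overflow-safe pattern referenced above (CERT INT32-C) checks the precondition before performing the operation, instead of testing the already wrapped result afterwards. A sketch translated to Python for illustration, with explicit signed 32-bit bounds (Python integers do not overflow, so the bounds are simulated):

```python
"""Sketch of the CERT INT32-C pattern: validate operands *before* the
arithmetic, never inspect a wrapped result. Bounds simulate a signed
32-bit int; illustrative, not NEST's librandom code."""
INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def safe_add(a, b):
    """Return a + b, raising OverflowError if int32 addition would overflow."""
    if b > 0 and a > INT32_MAX - b:
        raise OverflowError("positive overflow")
    if b < 0 and a < INT32_MIN - b:
        raise OverflowError("negative overflow")
    return a + b
```

The unsafe variant in the failing code presumably performed the addition first and checked the result, which signed-overflow semantics in C++ make unreliable at higher optimization levels.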

stdp synapses on pp_psc_delta neuron are not plastic

stdp_synapses onto pp_psc_delta neurons are apparently not plastic at the moment. I have made a simple example simulation script which is available here:
https://github.com/mdeger/nest-simulator/blob/master/testsuite/manualtests/test_pp_psc_delta_stdp.py

The result of the script is shown in the attached pictures. Both neurons spike shortly after each spike of a sequence of spikes from the same presynaptic neuron. However, only the iaf_psc_delta neuron changes its synaptic weight; pp_psc_delta's weight is unchanged.

The only difference between the two models that I found with respect to STDP behavior is that the archiver_length of the neurons differs. Both are instances of ArchivingNode, and this is important for STDP. However, I do not know how this difference in archiver_length may occur. I speculate that the problem occurs when the STDP connection is instantiated: somehow the pp_psc_delta neuron does not increment its connection counter, while the iaf_psc_delta does so correctly.

Any help on the issue is appreciated.

test_pp_psc_delta_stdp_fig1
test_pp_psc_delta_stdp_fig2

MyModule does not work on BlueGene

User modules (MyModule) do not work on BlueGene at present. Loading them as dynamically linked modules fails as well as building NEST with MyModule linked in. This is most likely due to the fact that the MyModule build setup does not take BlueGene peculiarities into account.

Rename function `test_connect_helpers.test_synapse`

Function test_connect_helpers.test_synapse is a helper function; it does not implement a test. We therefore need to change its name to something not containing test, so that nosetests won't run it as a test.

This has led to failing tests for some time (see e.g. Travis build 141.8), which were not reported due to #89.

TravisCI fails on removed files

When a file is removed in a commit, build.sh still tries to run cppcheck, vera++ and clang-format on it, hence all builds will fail. The solution is to first check whether the file in the changeset under testing is still present - so before build.sh:115 have something like:

if test -e "$f" ...

lib/sli/rcsinfo.sli

Ok, I have this file in the tree, and it gets updated to:

statusdict /rcsinfo (no_rcsinfo_available) put

without which tests fail. Is this thing still necessary?

@jougs

Precise models don't work with stdp_synapse?

The following script:

import nest

cell1 = nest.Create('iaf_neuron')
cell2 = nest.Create('iaf_psc_exp_ps')

conn_dict = {"rule": "all_to_all"}
syn_dict = {"model": "stdp_synapse"}
nest.Connect(cell1, cell2, conn_dict, syn_dict)

gives

Traceback (most recent call last):
  File "tmp.py", line 8, in <module>
    nest.Connect(cell1, cell2, conn_dict, syn_dict)
  File "....lib/python2.7/site-packages/nest/hl_api.py", line 153, in stack_checker_func
    return f(*args, **kwargs)
  File "..../lib/python2.7/site-packages/nest/hl_api.py", line 1123, in Connect
    sr('Connect')
  File "....lib/python2.7/site-packages/nest/__init__.py", line 81, in catching_sli_run
    raise hl_api.NESTError("{0} in {1}{2}".format(errorname, commandname, message))
pynestkernel.NESTError: IllegalConnection in Connect_g_g_D_D: Creation of connection is not possible.

If I replace 'iaf_psc_exp_ps' with 'iaf_psc_exp', it works fine.

nest.GetDefaults('iaf_psc_exp_ps') shows no tau_minus parameter, and the same is true for some other precise models I looked at.

Is there a reason why iaf_psc_exp_ps cannot work with stdp_synapse? If so, please could this be documented (unless I missed the documentation). If not, please could this be implemented?

PyNEST tests using AssertGreater fail

All PyNEST tests using the compatibility versions of AssertGreater fail with older versions of Python and SciPy (here Python 2.6.6, SciPy 0.7.2). The error messages look like this:

======================================================================
ERROR: testRPortDistribution (nest.tests.test_connect_all_patterns.TestAllToAll)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_all_to_all.py", line 88, in testRPortDistribution
    self.assertGreater(p, self.pval, 'Chi2 test failed.')
TypeError: <lambda>() takes exactly 3 arguments (4 given)

======================================================================
ERROR: testRPortDistribution (nest.tests.test_connect_all_to_all.TestAllToAll)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_all_to_all.py", line 88, in testRPortDistribution
    self.assertGreater(p, self.pval, 'Chi2 test failed.')
TypeError: <lambda>() takes exactly 3 arguments (4 given)

clang-format line length

@tammoippen

100 is way, way too long. 80 should be the absolute maximum length of a line, and I'd argue for 70.

I know that some editors will tend to drive you towards long lines (Xcode) with line wrapping, but on that point, objectively, the lines should be single concepts, single tests, and the wrapping should be fixed by the programmer. A line like:

  if ( ( new_nmin < 0 && new_nmax > max + new_nmin ) || ( new_nmax - new_nmin == max ) )

is way, way too long.

Look at the default line lengths for TeX --- thin columns, and that's for literature.

Add a status flag indicating whether a model support precise events

As far as I can see, it is currently not possible to detect whether a neuron model supports precise spike times, short of reading the documentation. This is unfortunate for the user, and makes automated testing problematic if that testing requires adaptation for models supporting precise times.

Add a flag to the status dictionary, returning the value provided by Node::is_off_grid().

Compiler errors on K

The current NEST code produces compiler errors on the K computer. There are simple workarounds for all of them; nevertheless, it would be nice to add a configure option similar to "--enable-bluegene", or some other fix.

When compiling with:

../nest-simulator/configure --prefix=[...]/nest-simulator.install --with-openmp=-Kopenmp --with-mpi --with-gsl=[...]/gsl-1.15.install --without-python --without-readline --without-pthread CC=mpifccpx CXX=mpiFCCpx --host=sparc64-unknown-linux-gnu --build=x86_64-unknown-linux-gnu CFLAGS="-Nnoline -DUSE_PMA -DIS_K" CXXFLAGS="--alternative_tokens -O3 -Kfast,openmp, -Nnoline, -Nquickdbg -NRtrap -DUSE_PMA -DIS_K"

The following compiler errors occur:

in librandom/clipped_randomdev.h:

"../../nest-simulator/librandom/clipped_randomdev.h", line 337: error: class member designated by a using-declaration must be visible in a direct base class
using RandomDev::operator();

in lines 97, 204, 205, 337, 446, 447

simple workaround on K: comment out ("//") these lines


in nestkernel/nest.h

"../../nest-simulator/nestkernel/nest.h", line 84: error: identifier "LONG_LONG_MAX" is undefined
const tic_t tic_t_max = LONG_LONG_MAX;

in lines 84,85 - reason: LONG_LONG_MAX is unknown

simple workaround on K: replace LONG_LONG_MAX with LONG_MAX


in nestkernel/connector_model_impl.h

"../../nest-simulator/nestkernel/connector_model_impl.h", line 277: error: expected an identifier
if ( !std::isnan( delay ) )

in several lines - reason: std::isnan is unknown

simple workaround on K: #include <cmath> and remove "std::" from the affected lines

Review role and use of Events in connection creation and signal transmission

We use Events in NEST both during connection creation and during transmission of signals via connections. Events are used differently in both cases, and different subsets of Event public functions cater to the different uses. This is not properly documented at present.

As an example, e.get_sender().get_gid() and e.get_sender_gid() will yield inconsistent results, even though one would expect them to yield the same result. In particular, e.get_sender_gid() will trigger an assertion (sender_gid_>0 fails) when called from handle_test_event().

The reason for this inconsistency is that Event objects are used differently during network construction and network simulation.

During network construction, we send a pointer to a node of the sender node type. This is not, generally, a pointer to the actual sender: for senders on remote MPI processes, this pointer is not available. Instead, we send a pointer to a proxy node. No GID is set on that proxy node (proxy nodes always have GID 0).

send_test_event() now sets only the Node* sender_ field on the Event it sends (to this), but not the sender_gid_, since that is not consistently available (i.e., not for proxy neurons representing remote sources).

When delivering spikes, on the other hand, only the sender_gid_ field is set in an Event, but not the sender_. This because a pointer to sender is not available for remote neurons and passing pointers to proxy neurons would not make sense at this point.

As a further twist, if the target is a recording device, connections will only be created from local sources (the connect mechanism takes care of this), whence the pointer returned by e.get_sender() will actually be a pointer to the real sender object (not a proxy), and e.get_sender().get_gid() will return the proper GID. e.get_sender_gid(), on the other hand, should be used only during spike delivery, not during connection creation.

Regression in NEST 2.8.0 for 'aeif_cond_exp' with Delta_T = 0

When Delta_T = 0, the membrane voltage should go to infinity as soon as "V_th" is reached (Scholarpedia). This is handled correctly in NEST 2.6.0, but in NEST 2.8.0 there is no spike until the voltage reaches "V_peak".

Example:

import matplotlib.pyplot as plt
import nest

neuron = nest.Create('aeif_cond_exp')
nest.SetStatus(neuron,
               dict(Delta_T=0.0, I_e=1000.0, V_th=-50.0, V_peak=-45.0))

recorder = nest.Create('multimeter')
nest.SetStatus(recorder, {'record_from': ['V_m'],
                          'interval': nest.GetKernelStatus('resolution')})
nest.Connect(recorder, neuron)

nest.Simulate(100.0)

data = nest.GetStatus(recorder, 'events')[0]
t = data['times']
vm = data['V_m']

plt.plot(t, vm)
plt.xlabel("Time (ms)")
plt.ylabel("V_m (mV)")
plt.title(nest.version())
plt.ylim(-70, -45)
plt.savefig("aeif_conf_exp_{}.png".format(nest.version().replace(" ", "_")))

aeif_conf_exp_nest_2 6 0
aeif_conf_exp_nest_2 8 0
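The expected spike condition can be stated in a couple of lines: with Delta_T = 0 the exponential term vanishes and the membrane potential diverges at V_th, so the spike should be detected there; only with Delta_T > 0 is the trajectory integrated up to V_peak. An illustrative sketch (not NEST's solver; the function name is hypothetical):

```python
"""Minimal sketch of the spike condition at issue for aeif_cond_exp.
Illustrative helper, not NEST's implementation."""

def spike_threshold(Delta_T, V_th, V_peak):
    """Voltage at which a spike should be detected (per Scholarpedia)."""
    # With Delta_T = 0 the upswing is instantaneous at V_th; with
    # Delta_T > 0 the trajectory is integrated up to V_peak.
    return V_th if Delta_T == 0.0 else V_peak

# Expected (NEST 2.6.0) behavior for the parameters in the example above:
print(spike_threshold(0.0, V_th=-50.0, V_peak=-45.0))  # -50.0
# The NEST 2.8.0 regression corresponds to always waiting for V_peak:
print(spike_threshold(4.0, V_th=-50.0, V_peak=-45.0))  # -45.0
```

In the example script above, the 5 mV gap between V_th and V_peak is what delays the first spike under the buggy behavior.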

"make distclean" returns an error

When running make distclean, distclean-recursive throws an error when trying to run distclean in the pynest directory:

Making distclean in pynest
make[1]: Entering directory '/home/jordan/opt/nest-simulator.build/pynest'
Makefile:630: ../nest/.deps/pynestkernel_la-neststartup.Plo: No such file or directory
make[1]: *** No rule to make target '../nest/.deps/pynestkernel_la-neststartup.Plo'. Stop.
make[1]: Leaving directory '/home/jordan/opt/nest-simulator.build/pynest'
make: *** [distclean-recursive] Error 1

I am assuming this is not the desired behaviour?

PyNEST tests using scipy.stats.kstest fail

All PyNEST tests using the function scipy.stats.kstest fail with older versions of Python and SciPy (here Python 2.6.6, SciPy 0.7.2). The error messages look like this:

======================================================================
ERROR: testExponentialClippedDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 130, in testExponentialClippedDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 444, in check_ks
    D, p = scipy.stats.kstest(M, get_clipped_cdf(params), alternative='two-sided')
TypeError: 'NoneType' object is not iterable

======================================================================
ERROR: testExponentialDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 119, in testExponentialDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
    D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable

======================================================================
ERROR: testGammaClippedDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 152, in testGammaClippedDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 444, in check_ks
    D, p = scipy.stats.kstest(M, get_clipped_cdf(params), alternative='two-sided')
TypeError: 'NoneType' object is not iterable

======================================================================
ERROR: testGammaDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 140, in testGammaDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
    D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable

======================================================================
ERROR: testLognormalClippedDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 174, in testLognormalClippedDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 444, in check_ks
    D, p = scipy.stats.kstest(M, get_clipped_cdf(params), alternative='two-sided')
TypeError: 'NoneType' object is not iterable

======================================================================
ERROR: testLognormalDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 162, in testLognormalDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
    D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable

======================================================================
ERROR: testNormalClippedDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 76, in testNormalClippedDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
    D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable

======================================================================
ERROR: testNormalDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 64, in testNormalDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
    D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable

======================================================================
ERROR: testUniformDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 206, in testUniformDist
    is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
  File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
    D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable
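All of these failures share one pattern: the TypeError is raised at the tuple-unpacking line itself, not inside scipy. A plausible reading (an assumption, not verified against this scipy build) is that `scipy.stats.kstest` returned None instead of a `(D, p)` tuple, e.g. because of a broken or shadowed scipy installation. A minimal stdlib-only reproduction of that unpacking failure:

```python
# kstest_stub is a hypothetical stand-in for a scipy.stats.kstest call
# that returns None instead of the expected (statistic, pvalue) tuple.
def kstest_stub(data, cdf, args=(), alternative="two-sided"):
    return None  # a healthy kstest would return (D, p)

try:
    # Unpacking None raises the TypeError seen in the tracebacks above
    # (Python 2.6 phrases it as: 'NoneType' object is not iterable).
    D, p = kstest_stub([0.1, 0.5, 0.9], "norm")
except TypeError as err:
    print("TypeError:", err)
```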

Dead link to 'Overview of scheduling and update strategies' on homepage index.md

In index.md there is a dead link to 'Overview of scheduling and update strategies'.

I assume this page will contain an overview of the internal simulation loop, i.e. which node/connection functions are called by the simulator and when. Comparable slides were shown at the NEST User Workshop, and I guess these would be a great help for people starting in development.

One idea would be to make the PDF of those slides available under this link while the page is not yet finished or published.

Test failure when NEST is installed system-wide

I have prepared a Debian packaging for NEST v2.8 targeting Debian proper (a PR will come shortly). I have sorted out most things already. However, I am facing an issue when running the test suite for a NEST installation in /usr. The culprit is this:

Running test 'unittests/test_round_validate.sli'... 
   > Running  mpirun -np 1 /usr/bin/nest /usr/share/doc/nest/unittests/test_round_validate.sli
   > NEST v2.8.0 (C) 2004 The NEST Initiative
   > 
   > Sep 30 17:34:23 file [Error]: FileOpenError
   >     Could not open the following file for writing: 
   >     "/usr/share/doc/nest/help/sli/array.hlp".
   > --------------------------------------------------------------------------
   > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
   > with errorcode 126.
   > 
   > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
   > You may or may not see output from other processes, depending on
   > exactly when Open MPI kills them.
   > --------------------------------------------------------------------------
   > --------------------------------------------------------------------------
   > mpirun has exited due to process rank 0 with PID 3952 on
   > node meiner exiting improperly. There are two reasons this could occur:
   > 
   > 1. this process did not call "init" before exiting, but others in
   > the job did. This can cause a job to hang indefinitely while it waits
   > for all processes to call "init". By rule, if one process calls "init",
   > then ALL processes must call "init" prior to termination.
   > 
   > 2. this process called "init", but exited without calling "finalize".
   > By rule, all processes that call "init" MUST call "finalize" prior to
   > exiting or it will be considered an "abnormal termination"
   > 
   > This may have caused other processes in the application to be
   > terminated by signals sent by mpirun (as reported here).
   > --------------------------------------------------------------------------
-> 126 (Failed: error in test script)

Is there a way to make this work -- maybe by writing into a temp dir? If not, I would exclude this test -- it is the only one that fails.
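A sketch of the temp-dir workaround suggested above. `help_output_dir` is a hypothetical helper, not existing NEST code; it falls back to a freshly created temporary directory whenever the install prefix is not writable:

```python
import os
import tempfile

def help_output_dir(install_dir):
    """Return install_dir if it exists and is writable, else a temp dir."""
    if os.path.isdir(install_dir) and os.access(install_dir, os.W_OK):
        return install_dir
    # Read-only system prefix (e.g. /usr/share/doc/nest/help/sli on a
    # packaged install): write generated help files somewhere safe.
    return tempfile.mkdtemp(prefix="nest-help-")

target = help_output_dir("/usr/share/doc/nest/help/sli")
```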

It would be nice to be able to run (a subset of) the tests without having to install NEST somewhere, but I could not figure out how -- it seems to heavily rely on its installation prefix.

Other open issues where I would appreciate your input are:

  • is Python3 supported?
  • if yes, is it possible to build for multiple Python versions with your autotools setup? (It doesn't look like it.)
  • can you confirm that all source code is (C) NEST Initiative and GPL-2+? What about contributed pieces like ./testsuite/manualtests/stdp_prot.m?

Thanks!

Sidenote: it would be nice if the HEAD of master could be advanced to include the v2.8.0 tag.

Revise the build setup for user-defined modules (MyModule)

Once #28 is fixed, user-defined modules based on MyModule should work on all architectures, including BlueGene. At the same time, it is clear that our way of building such modules is not the most convenient one and deviates from good practices for such modules. We should therefore revise the build mechanism. This is a follow-up to trac.526.

Note that the philosophy of trac.526 was to move entirely to dynamically loaded modules loaded via Install. At least for BlueGene this is not feasible/advisable; there we need statically linked libraries. But the build process in that case might still be improved, e.g., by making the main NEST build process also build user-defined modules when they are linked as static libraries.

For script compatibility, we should make Install a no-op in case the pertaining module has been linked in.

See also comments on #28 and code changes in #55.

Review iaf_psc_exp_multisynapse

iaf_psc_exp_multisynapse::update() contains a number of "not sure about this" comments. It needs review and proper testing urgently.

`parrot_neuron` should exploit spike multiplicity

parrot_neuron currently emits spikes using a loop

for ( ulong_t i_spike = 0; i_spike < current_spikes_n; i_spike++ )
  network()->send( *this, se, lag );

This is inefficient. Why don't we set the event's multiplicity and make just a single network()->send() call?
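A toy Python illustration (not NEST kernel code) of why the two are equivalent: the receiver scales the delivered weight by the event's multiplicity, so one send carrying multiplicity n has the same effect as n single-spike sends:

```python
def deliver_loop(weight, n_spikes):
    # current approach: one send() per spike in the loop
    total = 0.0
    for _ in range(n_spikes):
        total += weight
    return total

def deliver_multiplicity(weight, n_spikes):
    # proposed approach: a single send() carrying multiplicity n_spikes
    return weight * n_spikes

assert deliver_loop(0.5, 4) == deliver_multiplicity(0.5, 4)
```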

Should it be possible to connect binary neurons to normal neurons?

NEST currently comes with two "binary neuron" models, ginzburg_neuron and mcculloch_pitts_neuron, both derived from the class binary_neuron. These neurons communicate via SpikeEvents that do not really represent spikes but state transitions, and they repurpose the event's multiplicity entry to communicate which states they are transitioning between. But since they use SpikeEvents, they can still be connected to, and receive input from, arbitrary neuron models. To me, this makes little sense. Should this be prohibited, to prevent users from building networks that do not make sense?

One way to achieve this would be to define a new event type, e.g. BinarySignalEvent, and let binary neurons support only that event. A problem with that approach is that remote sending only supports SpikeEvent (the binary neurons currently exploit the implementation of multiplicity transmission in a too(?) clever way to send the necessary information via spikes; this needs better documentation and a solid MPI test!). Alternatively, one could modify the connection-handshaking methods (send_test_event, handles_test_event) to check the type of the source/target model and throw an exception if it is not derived from binary_neuron.
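The second option can be sketched in a few lines of Python (a toy model; the real check would live in the C++ handshaking methods):

```python
class Node:
    """Toy base class standing in for nest::Node."""

class BinaryNeuron(Node):
    """Toy stand-in for models derived from binary_neuron."""

def handles_test_event(target, source):
    # Reject the connection during handshaking unless both endpoints are
    # binary neurons (or both are not); mirrors the exception idea above.
    if isinstance(source, BinaryNeuron) != isinstance(target, BinaryNeuron):
        raise TypeError("binary and non-binary neurons must not be connected")
```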

git diff --name-only $TRAVIS_COMMIT_RANGE

@tammoippen: in build.sh, we do

file_names=`git diff --name-only $TRAVIS_COMMIT_RANGE`

Inside my repo, on some pushes this seems to return incorrect/unreachable references. I haven't yet identified the cause or found a Travis bug report on this, but it may be a problem with history rewrites on pushes, so that the range ends up referencing old, non-existent commits.

Add test that neuron models heed event multiplicity

Events, in particular SpikeEvents, can include a multiplicity: one event object represents several spikes emitted by a single sender in a single time step. The handle() method of the receiving neuron must read out this information and apply it, typically by multiplying the weight by the multiplicity. I think this only makes sense for SpikeEvents; for all other event types it should lead to an error.

We currently have no test checking that all models heed multiplicity information. Add a test for this.
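The test could be built around the following invariant (a toy Python sketch with hypothetical class names, not the C++ API): handling one event with multiplicity n must have exactly the same effect on the receiver as handling n single events:

```python
class SpikeEvent:
    def __init__(self, weight, multiplicity=1):
        self.weight = weight
        self.multiplicity = multiplicity

class ToyReceiver:
    """Minimal model of a neuron whose handle() heeds multiplicity."""
    def __init__(self):
        self.total_input = 0.0

    def handle(self, event):
        # The invariant the missing test should check for every model:
        # weight is scaled by the event's multiplicity.
        self.total_input += event.weight * event.multiplicity

a, b = ToyReceiver(), ToyReceiver()
a.handle(SpikeEvent(0.2, multiplicity=3))
for _ in range(3):
    b.handle(SpikeEvent(0.2))
assert abs(a.total_input - b.total_input) < 1e-12
```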

pynestkernel not compiling on OSX

I'm having trouble getting a compiled pynestkernel library on OSX. Standard installation doesn't even attempt to compile it and put it in the build directory, so I get:

File "/Users/rgerkin/Desktop/nest-2.6.0/pynest/nest/__init__.py", line 52, in <module>
from . import pynestkernel as _kernel
ImportError: cannot import name 'pynestkernel'

due to the file being missing. The installation script isn't even trying to compile it into a shared library (.so).

If I try to cythonize it myself, I can get a .cpp file, but I can never get it to compile, although perhaps there is a magic set of flags that will work.

Does anyone have a reproducible recipe for installation on OSX 10.9?

Several testsuite tests fail when compiling with clang under OSX

As first reported by Mario Mulansky on the NEST User mailing list (17 July 2015), several testsuite tests fail when compiling NEST under OSX using the clang compiler.

To reproduce:

  • OSX 10.10.4
  • Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn)
  • ../src/configure --prefix=`pwd`/install --without-openmp
  • GSL 1.16 from homebrew
  • NEST master branch aeb4165

The following tests fail:

  • Running test unittests/test_aeif_cond_alpha_multisynapse.sli... Failed: segmentation fault
  • Running test unittests/test_mip_corrdet.sli... Failed: segmentation fault
  • Running test unittests/test_recorder_close_flush.sli... Failed: missed C++ assertion
  • Running test regressiontests/ticket-80-175-179.sli... Failed: segmentation fault
  • All 15 nest.tests.test_connect_distributions tests

The invalid memory is freed.

While executing a test program on the K computer from

testsuite/manualtests/ticket-458

I get:

jwe1050i-w The hardware barrier couldn't be used and continues processing using the software barrier.
taken to (standard) corrective action, execution continuing.
jwe1603i-w The invalid memory is freed.
(Address:0  Free(function:std::basic_ifstream<char, std::char_traits<char>>::~basic_ifstream()  line:0))
 error occurs at _ZNSt14basic_ifstreamIcSt11char_traitsIcEED1Ev loc 0000000000ae1610 offset 0000000000000090 
 _ZNSt14basic_ifstreamIcSt11char_traitsIcEED1Ev     at loc 0000000000ae1580 called from loc 0000000000d6c944 in _ZNK10SLIStartup9checkpathERKSsRSs      
 _ZNK10SLIStartup9checkpathERKSsRSs     at loc 0000000000d6c340 called from loc 0000000000d718fc in _ZN10SLIStartup4initEP14SLIInterpreter      
 _ZN10SLIStartup4initEP14SLIInterpreter     at loc 0000000000d70a00 called from loc 0000000000d58df4 in _ZN9SLIModule7installERSoP14SLIInterpreter      
 _ZN9SLIModule7installERSoP14SLIInterpreter     at loc 0000000000d58d80 called from loc 0000000000c1b40c in _ZN14SLIInterpreter9addmoduleEP9SLIModule      
 _ZN14SLIInterpreter9addmoduleEP9SLIModule     at loc 0000000000c1b3c0 called from loc 000000000011da38 in _Z11neststartupiPPcR14SLIInterpreterRPN4nest7NetworkE      
 _Z11neststartupiPPcR14SLIInterpreterRPN4nest7NetworkE     at loc 000000000011d880 called from loc 0000000000111ea0 in main          
 main         at loc 0000000000111e80 called from o.s.  
taken to (standard) corrective action, execution continuing.
--------------------------------------------------------------------------
[mpi::mpi-api::mpi-abort]
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
with errorcode 126.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[i42-036:18488] /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(orte_errmgr_base_error_abort+0x84) [0xffffffff008df684]
[i42-036:18488] /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(ompi_mpi_abort+0x51c) [0xffffffff0068389c]
[i42-036:18488] /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(MPI_Abort+0x6c) [0xffffffff0069b3ac]
[i42-036:18488] /opt/FJSVtclang/GM-1.2.0-18/lib64/libtrtmet_c.so.1(MPI_Abort+0x2c) [0xffffffff00159bf0]
[i42-036:18488] ./nest [0x992cac]
[i42-036:18488] ./nest [0x11dd04]
[i42-036:18488] ./nest(main+0x38) [0x111eb8]
[i42-036:18488] /lib64/libc.so.6(__libc_start_main+0x194) [0xffffffff0323381c]
[i42-036:18488] ./nest [0x111d2c]
[ERR.] PLE 0019 plexec One of MPI processes was aborted.(rank=0)(nid=0x210a0034)(CODE=1938,793745140674134016,32256)

Below is my submission script:

#!/bin/sh

#PJM -S
#PJM --rsc-list "elapse=10:00"
#PJM --rsc-list "rscgrp=micro"
#PJM --rsc-list "node=12"
#PJM --mpi "assign-online-node"
. /home/system/Env_base

export PARALLEL=1
export OMP_RUN_THREADS=1
export FLIB_FASTOMP=false

mpiexec -np 1 ./nest conf.cli run_benchmark_458.sli

I wonder if other people have seen a similar error on other supercomputers?

Use of synapse types in NEST + PyNN Issue #377

This issue is related to the pyNN issue 377 "PyNN exhausts NEST 2.6/2.8 synapse model storage" (NeuralEnsemble/PyNN#377).

  1. In pyNN, whenever a pyNN.Projection is created to establish connections, a new synapse type is created in NEST, which is a misuse of NEST's synapse types. A synapse type in NEST specifies the dynamical model of the synapse (e.g. static, STDP, etc.), not the projection/connection it belongs to. It also does not specify whether the synapse is excitatory or inhibitory, because that is determined independently by the synaptic weights.
    This causes pyNN to exhaust NEST's maximal number of synapse types (256).
  2. Some of the NEST examples create new synapse types to distinguish between excitatory and inhibitory synapses, which is also a misuse of the synapse type concept. We should discourage NEST users from using synapse types for things like the distinction between exc. and inh. synapses if the only difference between them is the sign of the synaptic weight.

Examples are the brunel network scripts.
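The exhaustion described in (1) can be illustrated with a toy registry (hypothetical names; NEST's real synapse model table lives in the C++ kernel):

```python
MAX_SYNAPSE_TYPES = 256  # NEST's limit mentioned above

registry = ["static_synapse"]  # toy model of the synapse model table

def copy_model(base_model, new_name):
    if base_model not in registry:
        raise KeyError(base_model)
    if len(registry) >= MAX_SYNAPSE_TYPES:
        raise RuntimeError("synapse model table exhausted")
    registry.append(new_name)

# Misuse: one copied model per projection exhausts the table.
created = 0
try:
    for i in range(300):
        copy_model("static_synapse", "projection_%d" % i)
        created += 1
except RuntimeError:
    pass  # fails once 256 entries exist

# Correct use: one model, the sign carried by the per-connection weight.
excitatory = ("static_synapse", +0.5)
inhibitory = ("static_synapse", -0.5)
```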

PyNEST on an ARMv8 hardware

I just wanted to let the community know that one of the PyNEST examples executed successfully on an ARMv8 platform (Fedora 22 aarch64).

$ python ~/projects/nest-simulator/pynest/examples/CampbellSiegert.py 

              -- N E S T --

  Copyright (C) 2004 The NEST Initiative
  Version 2.8.0-git Oct 25 2015 01:32:57

This program is provided AS IS and comes with
NO WARRANTY. See the file LICENSE for details.

Problems or suggestions?
  Visit http://www.nest-simulator.org

Type 'nest.help()' to find out more about NEST.

Oct 25 01:38:23 Network::clear_models [Info]: 
    Models will be cleared and parameters reset.
mean membrane potential (actual / calculated): -57.8512478094 / -57.8189416312
variance (actual / calculated): 0.687992608406 / 0.689739852871
firing rate (actual / calculated): 0.2 / 0.289849301849
