
neurokernel's Introduction

Neurokernel

Package Description

Project Website | GitHub Repository | Online Documentation | Mailing List | Forum

Neurokernel is a Python framework for developing models of the fruit fly brain and executing them on multiple NVIDIA GPUs.

Prerequisites

Neurokernel requires

  • Linux (other operating systems may work, but have not been tested);
  • Python;
  • at least one NVIDIA GPU with Fermi architecture or later;
  • NVIDIA's GPU drivers;
  • CUDA 5.0 or later;
  • OpenMPI 1.8.4 or later compiled with CUDA support.

To check what GPUs are in your system, you can use the inxi command available on most Linux distributions:

inxi -G

You can verify that the drivers are loaded as follows:

lsmod | grep nvidia

If no drivers are present, you may have to manually load them by running something like:

modprobe nvidia

as root.

Although some Linux distributions do include CUDA in their stock package repositories, you are encouraged to use the packages distributed by NVIDIA because they are often more up-to-date and include more recent releases of the GPU drivers. See this page for download information.

If you install Neurokernel in a virtualenv environment, you will need to install OpenMPI. See this page for OpenMPI installation information. Note that OpenMPI 1.8 cannot run on Windows.

Some of Neurokernel's demos require either ffmpeg or libav to be installed in order to generate visualizations (see Examples).

Installation

Conda

The easiest way to get neurokernel is to install it in a conda environment:

conda create -n nk python=3.7 c-compiler compilers cxx-compiler openmpi -c conda-forge -y
conda activate nk
python -m pip install neurokernel

Make sure to enable CUDA support in the installed OpenMPI by setting:

export OMPI_MCA_opal_cuda_support=true
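
To confirm that MPI is usable from Python in the new environment, a quick sanity check with mpi4py (which Neurokernel uses internally) can help. This is a minimal sketch, not part of Neurokernel itself:

# check_mpi.py -- minimal MPI sanity check, assuming mpi4py is present
# in the same environment
from mpi4py import MPI

print(MPI.Get_library_version())   # should report Open MPI
comm = MPI.COMM_WORLD
print('rank %d of %d' % (comm.Get_rank(), comm.Get_size()))

Running mpiexec -n 2 python check_mpi.py should print two distinct ranks.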

Examples

Introductory examples of how to use Neurokernel to build and integrate models of different parts of the fly brain are available in the Neurodriver package. To install it, run the following:

git clone https://github.com/neurokernel/neurodriver
cd neurodriver
python setup.py develop

Other models built using Neurokernel are available on GitHub.

Building the Documentation

To build Neurokernel's HTML documentation locally, you will need to install Sphinx, along with any extensions listed in the documentation's build requirements.

Once these are installed, run the following:

cd ~/neurokernel/docs
make html

Authors & Acknowledgements

See the included AUTHORS file for more information.

License

This software is licensed under the BSD License. See the included LICENSE file for more information.

neurokernel's People

Contributors

chungheng, jonmarty, kpsychas, lebedov, mkturkcan, nikulukani, tk-21st, yiyin

neurokernel's Issues

require networkx >= 1.9

Networkx versions before 1.9 have a bug affecting conversion of boolean values in GEXF files, i.e., networkx.readwrite.gexf.GEXF.convert_bool is missing certain mappings. Neurokernel should require networkx >= 1.9 upon installation.
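
A minimal sketch of the corresponding change (the surrounding setup() arguments are elided and illustrative):

# setup.py (excerpt)
from setuptools import setup

setup(
    name='neurokernel',
    install_requires=[
        'networkx>=1.9',   # versions before 1.9 mishandle boolean GEXF values
        # ...
    ],
)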

Multiple gpot neurons can't share self.V pointer

Error

Adding multiple gpot neurons to the same simulation causes an invalid value issue:

Process LPU-4:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/apope/class/neurokernel/neurokernel/core.py", line 776, in run
    while curr_steps < self._steps:
  File "/home/apope/class/neurokernel/neurokernel/LPU/LPU.py", line 263, in pre_run
    self._init_objects()
  File "/home/apope/class/neurokernel/neurokernel/LPU/LPU.py", line 381, in _init_objects
    self.neurons = [ self._instantiate_neuron(i,t,n) for i,(t,n) in enumerate(self.n_list) ]
  File "/home/apope/class/neurokernel/neurokernel/LPU/LPU.py", line 811, in _instantiate_neuron
    self.dt, debug=self.debug, LPU_id=self.id)
  File "/home/apope/class/neurokernel/neurokernel/LPU/neurons/MorrisLecarCopy.py", line 31, in __init__
    cuda.memcpy_htod(int(self.V), np.asarray(n_dict['initV'], dtype=np.double))
LogicError: cuMemcpyHtoD failed: invalid value

Replication

This can be replicated by copying an existing neuron class and renaming it; the error occurs as soon as multiple neuron classes try to access the same pointer using cuda.memcpy_htod(int(self.V), np.asarray(n_dict['initV'], dtype=np.double)). An example of this can be seen in this branch of my fork, in the retina-lamina connection IPython notebook.

PathLikeSelector.make_index() could be sped up by relying upon MultiIndex.from_tuples()

Since pandas.MultiIndex.from_tuples() inserts NaN to fill in index rows for tuples that contain fewer elements than other tuples, we could rewrite make_index() to not explicitly extract levels from the expanded selector by relying upon from_tuples(); this might speed up the method. The presence of NaN values in the index shouldn't cause any problems.
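
For reference, a small illustration of the padding behavior in question (the exact fill value may be None or NaN depending on the pandas version):

import pandas as pd

idx = pd.MultiIndex.from_tuples([('x', 'y', 'z'), ('a', 'b')])
print(idx.tolist())   # the shorter tuple is padded out to three levels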

PathLikeSelector.select() does not return results in order of specified selector

Since DataFrame.select() applies the specified selection function in the order of the entries in the DataFrame's index, PathLikeSelector.select() will not return results in the order of the selector it is passed. This behavior can cause problems in other parts of Neurokernel that assume that the selected results will be returned in the order of the selector.

Change spread of ommatidia on the hemisphere

Added as a suggestion

According to Pick's 1977 paper, the axes of adjacent cells differ by 2.2 degrees on average.
Right now we choose the value (1/n_rings)*90 degrees, if I read Nikul's solution correctly.
For the 16 rings we simulate, that gives 90/16 = 5.625 degrees, which is more than a factor of two off.

We only need to change the first line of _get_cartesian_omm_loc, and possibly also the grid, so that latitude runs from 0 to pi/4 instead of 0 to pi/2.

cannot create a Pattern with selectors containing different numbers of levels

With c5b56bd, attempting to run the following results in an exception:

import nk.pattern
pat = nk.pattern.Pattern('/a/b','/x/y/z')

Error trace:

/home/lev/Work/projects/bionet/python/nk1/neurokernel/pattern.pyc in __init__(self, *selectors, **kwargs)
   1078         # consecutively:
   1079         for i, s in enumerate(selectors):
-> 1080             self.interface[s, 'interface'] = i
   1081 
   1082         # Create a MultiIndex that can store mappings between identifiers in the

/home/lev/Work/projects/bionet/python/nk1/neurokernel/pattern.pyc in __setitem__(self, key, value)
    181                                       len(self.index.shape))
    182         for k, v in data.iteritems():
--> 183             self.data[k].ix[s] = v
    184 
    185     @property

/home/lev/Work/miniconda/envs/NK1/lib/python2.7/site-packages/pandas/core/indexing.pyc in __setitem__(self, key, value)
    112 
    113     def __setitem__(self, key, value):
--> 114         indexer = self._get_setitem_indexer(key)
    115         self._setitem_with_indexer(indexer, value)
    116 

/home/lev/Work/miniconda/envs/NK1/lib/python2.7/site-packages/pandas/core/indexing.pyc in _get_setitem_indexer(self, key)
    107 
    108         try:
--> 109             return self._convert_to_indexer(key, is_setter=True)
    110         except TypeError:
    111             raise IndexingError(key)

/home/lev/Work/miniconda/envs/NK1/lib/python2.7/site-packages/pandas/core/indexing.pyc in _convert_to_indexer(self, obj, axis, is_setter)
   1110                 mask = check == -1
   1111                 if mask.any():
-> 1112                     raise KeyError('%s not in index' % objarr[mask])
   1113 
   1114                 return _values_from_object(indexer)

KeyError: "[('a', 'b')] not in index"

support for connecting external input signals/output recorders to ports

Neurokernel should support connecting input signal sources and output recording mechanisms to the ports exposed by a module; only input ports may connect to the former, and only output ports may connect to the latter. Fan-out from one port to a destination port in a pattern and to an output recorder should be supported, but fan-in from a source port in a pattern and an input signal source should not. Initially, input/output signal support should be limited to HDF5 files, with each input or output array associated with a single port and the port identifier used as the label of its associated array.
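
A minimal sketch of what such an HDF5 input file might look like, assuming h5py; the port identifiers and file name are illustrative:

import h5py
import numpy as np

n_steps = 100
with h5py.File('input.h5', 'w') as f:
    # one array per port, labeled with the associated port identifier
    f.create_dataset('/lpu0/in/0', data=np.random.rand(n_steps))
    f.create_dataset('/lpu0/in/1', data=np.random.rand(n_steps))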

Port Mapper maps empty spike or gpot port data to None

I am on commit cf7acc5 of the master branch. The PortMapper initializes a port mapper with self.data = None when the input data is, for example, np.zeros(0, dtype).

Therefore, if a Module is initialized with, for example, only gpot ports and the spike port data set to a zero-length array, the Module class in core.py will initialize self.pm['spike'] as None (line 148 in core.py).
But on line 305,
spike_data = np.array([], self.pm['spike'].dtype)
will raise an AttributeError, since None does not have attribute 'dtype'.

Neurokernel core configuration mechanism

The core Neurokernel infrastructure should provide a means of explicitly configuring certain options not related to specific LPU models. These might include

  • location of OpenMPI installation
  • MCA options to use when launching an emulation
  • global logging setup

A simple interface to such options could resemble that of mpi4py or matplotlib, e.g.,

import neurokernel.rc
neurokernel.rc['openmpi_home'] = '/opt/openmpi-1.10.1'
neurokernel.rc['mca_opts'] = ['--mca', 'btl_cuda_if_include', 'eth0']
neurokernel.rc['log_opts'] = {'screen': True, 'file_name': 'nk.log'}

import neurokernel.mpi_relaunch
...

move compute plane machinery into separate repo

As of the submission of this issue, the neurokernel repository contains code that implements

  1. the inter-LPU communication mechanism and the draft LPU implementation, and
  2. the various neuron/synapse models it currently supports (i.e., neurokernel.LPU).

To facilitate independent development of each of the above, the draft LPU implementation should be moved to a separate repository and developed as a separate Python package. The remaining code in the current neurokernel repository should treat the new package as an installation requirement.

speed up Pattern instantiation

Instantiation of a Pattern containing many ports is currently quite slow because of the repeated index creation performed by its __setitem__() method. This could be remedied by combining instantiation with specification of the connections contained in the Pattern, or by providing classmethods that process connection data en masse, as illustrated below.
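
A rough pandas-level illustration of the cost difference (the DataFrame stands in for the Pattern's internal data structure; port identifiers are illustrative):

import pandas as pd

ports = ['/a/%d' % i for i in range(1000)]

# en masse: the index is constructed once
df = pd.DataFrame({'conn': 1}, index=pd.Index(ports))

# incremental: each enlargement assignment rebuilds the index,
# analogous to what repeated __setitem__ calls currently do
df2 = pd.DataFrame(columns=['conn'])
for p in ports:
    df2.loc[p] = 1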

modify GEXF representation to reflect port-based LPU interface architecture

The draft LPU implementation in the port_ids branch makes several assumptions about how an LPU is specified in GEXF format that should be altered to better represent the LPU interface architecture implemented in the branch:

  • Neurons and input ports are stored as nodes while synapses are stored as edges; output ports are implicitly stored by assigning port identifiers to neurons that emit output. This should be changed to store all modeling elements (input ports, output ports, neurons, synapses, etc.) as nodes and use edges exclusively for expressing relationships (i.e., either ownership or directional data flow) between elements.
  • All nodes must have identifiers, but only port nodes should have path-like selectors.
  • Port nodes must have attributes that indicate whether they receive input or emit output and whether they transmit spike data or graded potential data.
  • Neurons shouldn't have any 'extern' or 'public' attributes; whether or not a neuron can receive or emit data must be determined entirely by whether it is connected to an input or output port.

Adding @nikulukani, @kpsychas, @yiyin to the issue.
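
A minimal sketch of the proposed representation using networkx; the node identifiers and attribute names are illustrative, not a final schema:

import networkx as nx

g = nx.DiGraph()
# every modeling element is a node with an identifier ...
g.add_node('neuron_0', model='MorrisLecar')
# ... but only port nodes carry path-like selectors
g.add_node('port_out_0', selector='/lpu0/out/0',
           port_io='out', port_type='spike')
# edges express relationships only (here, directional data flow)
g.add_edge('neuron_0', 'port_out_0')
nx.write_gexf(g, 'lpu.gexf')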

pattern.Pattern requires its interfaces to have same number of levels

In revision 6fb5317, in the sensory integration example, the vision LPUs have port selectors with 3 levels, while the olfaction LPU has port selectors with 4 levels. Thus, when the integration LPU is set up with selectors of either 3 or 4 levels, it will be compatible with only one sensory LPU, namely the medulla or the antennal lobe, but not both.

improve efficiency of spiking port info transmission

Currently, Neurokernel transmits the actual port identifiers for ports that emit a spike during an execution step. This can be quite inefficient for large numbers of ports or port identifiers with many levels (e.g., /foo/bar/qux/...). Given that the ports associated with a module are currently assumed to not change after module creation, the mapping between the port identifiers and the consecutive integers [0:(number of spiking ports)] should be exploited to transmit the integer indices for each port rather than the port identifiers themselves.
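
A minimal sketch of the proposed scheme; the port identifiers are illustrative:

# fixed at module creation, since the port set is assumed not to change
spiking_ports = ['/lpu0/out/0', '/lpu0/out/1', '/lpu0/out/2']
port_to_idx = {p: i for i, p in enumerate(spiking_ports)}

# ports that emitted a spike during this step
emitted = ['/lpu0/out/0', '/lpu0/out/2']

# transmit compact integer indices instead of identifier strings
payload = [port_to_idx[p] for p in emitted]   # [0, 2]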

Add (py)CUDA test/info script

A simple Python script to check the local installation of (py)CUDA would be very helpful, giving info on the current configuration, hardware specs, etc.
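
A minimal sketch of such a script, assuming PyCUDA is installed:

import pycuda.driver as drv

drv.init()
print('CUDA driver version: %s' % (drv.get_version(),))
print('Detected devices: %d' % drv.Device.count())
for i in range(drv.Device.count()):
    dev = drv.Device(i)
    cc = dev.compute_capability()
    print('  [%d] %s, compute capability %d.%d, %d MB' %
          (i, dev.name(), cc[0], cc[1], dev.total_memory() // 1024 ** 2))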

problem running MPI-based branch using SLURM

The self-launching mechanism in the MPI-based branch of NK doesn't appear to work properly when launched with SLURM (assuming that one task is requested), either because SLURM sets certain environment variables or because SLURM's PMI mechanism confuses mpiexec.

mapping of integer indices to spiking port identifiers can break spike data transmission for compatible interfaces comprising different spiking ports or port identifier order

As of 00cbc59, all of the spiking ports in a pattern's interface are directly mapped to integer indices that are transmitted in lieu of the actual port identifiers. Since Neurokernel currently allows for the interfaces of a module and a pattern to be connected if they share a common subset of ports, we can't assume that a mapping from all of the ports to sequential integers in one interface will be equivalent to that of the ports in a compatible interface that could contain a different set (or different ordering) of ports.
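
An illustration of the mismatch, with hypothetical identifiers:

module_iface = ['/x/0', '/x/1', '/x/2']   # the module's spiking ports
pattern_iface = ['/x/1', '/x/2']          # compatible pattern interface (a subset)

# independently computed mappings to sequential integers disagree:
module_map = {p: i for i, p in enumerate(module_iface)}     # '/x/1' -> 1
pattern_map = {p: i for i, p in enumerate(pattern_iface)}   # '/x/1' -> 0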

Copying @nikulukani.

error when module id not in routing table

This error can occur when simulating an LPU that is not connected to any other. That is because IDs are added to the routing table when a connection is added, not when a Module is defined.

The error can be reproduced by running the retina example in the LPU_mpi branch (options -g -i are required the first time the retina demo is run).

Stacktrace:

2015-04-23T22:46:02Z:ERROR:prc 0 |File "/home/kpsychas/NK1/neurokernel/neurokernel/mpi_backend.py", line 74, in <module>
2015-04-23T22:46:02Z:ERROR:prc 0 | instance = target(**kwargs)
2015-04-23T22:46:02Z:ERROR:prc 0 |File "/home/kpsychas/NK1/neurokernel/neurokernel/LPU/LPU.py", line 454, in __init__
2015-04-23T22:46:02Z:ERROR:prc 0 | device=device, debug=debug, time_sync=time_sync)
2015-04-23T22:46:02Z:ERROR:prc 0 |File "/home/kpsychas/NK1/neurokernel/neurokernel/core_gpu.py", line 100, in __init__
2015-04-23T22:46:02Z:ERROR:prc 0 | raise ValueError('routing table must contain specified module ID')
2015-04-23T22:46:02Z:ERROR:prc 0 |ValueError: routing table must contain specified module ID

move current Neurokernel demos into separate repos

The demos in the examples/ directory of the neurokernel repo that contain actual LPU models (olfaction, sensory_int, and vision) should be moved (along with their commit histories) to independent repos.

The multi and timing demos should remain in the neurokernel repo because they either illustrate or assess the performance of the inter-LPU API independently of LPU design. The intro and generic demos should be moved to the new repo that will contain the draft LPU implementation code.

The neuroml demo will remain in the existing neurokernel repo for the time being.

New demo repos need not contain any installation scripts; the code should be runnable if the core Neurokernel code has been properly installed.

Logging of execution step number

Given that neu/syn is not updated during the first call to run_step() when the LPU is not running by itself, this should be reflected in the logging mechanism of BaseModule and Module.

modify interface compatibility check to permit connections between identifier subsets

If two interfaces contain sets of port identifiers with a non-empty intersection, Neurokernel should permit any subset of that intersection to be connected (and hence be regarded as an instance of compatible interfaces). For example, one should be able to connect the ports corresponding to [b,c,d] in two interfaces respectively containing the ports [a,b,c,d] and [b,c,d,e].
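
A pure-Python sketch of the proposed check, using the identifiers from the example above:

iface1 = {'a', 'b', 'c', 'd'}
iface2 = {'b', 'c', 'd', 'e'}
requested = {'b', 'c', 'd'}

# permit the connection iff the requested ports lie within the intersection
assert requested <= (iface1 & iface2)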

Can't run vision example

Hi. I think I set up everything right, but when I run the vision demo I get:

[ec2-user@ip-172-31-23-207 vision]$ python vision_demo.py
Process LPU-14:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/ec2-user/neurokernel/neurokernel/core.py", line 423, in run
    self.pre_run()
  File "/home/ec2-user/neurokernel/neurokernel/LPU/LPU.py", line 460, in pre_run
    self._initialize_gpu_ds()
  File "/home/ec2-user/neurokernel/neurokernel/LPU/LPU.py", line 598, in _initialize_gpu_ds
    np.float64)
  File "/usr/local/lib64/python2.7/site-packages/pycuda-2014.1-py2.7-linux-x86_64.egg/pycuda/gpuarray.py", line 939, in zeros
    result = GPUArray(shape, dtype, allocator, order=order)
  File "/usr/local/lib64/python2.7/site-packages/pycuda-2014.1-py2.7-linux-x86_64.egg/pycuda/gpuarray.py", line 199, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
LogicError: cuMemAlloc failed: invalid context
[ec2-user@ip-172-31-23-207 vision]$

merge base classes into core module

Since all modules must necessarily expose the same interface to enable compatibility, we should remove the separate BaseModule and Manager classes in neurokernel.base that don't make assumptions about port types during module instantiation, connection, and execution.

cross-module fan-in check should only look at connected ports

As of 54575d5, BaseModule.connect() checks for fan-in by looking at the input ports of modules that send data to the current module regardless of whether those ports are actually connected in the Pattern instance linking the modules. This check should be modified to only consider those ports that are actually linked by a connection.

handling of emulation with single module in mpi branch is broken

The following code fails when run with commit 6af2819:

import neurokernel.mpi_relaunch

import mpi4py.MPI as MPI
import numpy as np

from neurokernel.core import CTRL_TAG, GPOT_TAG, SPIKE_TAG, Module, Manager
from neurokernel.mpi import setup_logger
from neurokernel.plsel import Selector, SelectorMethods
logger = setup_logger(screen=True, mpi_comm=MPI.COMM_WORLD, multiline=True)

man = Manager()

m1_int_sel_in_gpot = Selector('/a/in/gpot0,/a/in/gpot1')
m1_int_sel_out_gpot = Selector('/a/out/gpot0,/a/out/gpot1')
m1_int_sel_in_spike = Selector('/a/in/spike0,/a/in/spike1')
m1_int_sel_out_spike = Selector('/a/out/spike0,/a/out/spike1')
m1_int_sel = Selector.union(m1_int_sel_in_gpot, m1_int_sel_out_gpot,
                            m1_int_sel_in_spike, m1_int_sel_out_spike)
m1_int_sel_in = m1_int_sel_in_gpot+m1_int_sel_in_spike
m1_int_sel_out = m1_int_sel_out_gpot+m1_int_sel_out_spike
m1_int_sel_gpot = m1_int_sel_in_gpot+m1_int_sel_out_gpot
m1_int_sel_spike = m1_int_sel_in_spike+m1_int_sel_out_spike
N1_gpot = SelectorMethods.count_ports(m1_int_sel_gpot)
N1_spike = SelectorMethods.count_ports(m1_int_sel_spike)

m1_id = 'm1   '
man.add(Module, m1_id, m1_int_sel, m1_int_sel_in, m1_int_sel_out,
        m1_int_sel_gpot, m1_int_sel_spike,
        np.zeros(N1_gpot, dtype=np.double),
        np.zeros(N1_spike, dtype=int),
        ['interface', 'io', 'type'],
        CTRL_TAG, GPOT_TAG, SPIKE_TAG, time_sync=True)
man.spawn()
man.start(10)
man.wait()

Error logs:

2015-04-22T23:24:52Z:INFO:man       |manager instantiated
2015-04-22T23:24:52Z:INFO:man       |adding class Module
2015-04-22T23:24:53Z:INFO:man       |sending steps message (10)
2015-04-22T23:24:53Z:INFO:man       |sending start message
2015-04-22T23:24:54Z:ERROR:prc 0     |File "/home/lev/Work/projects/bionet/python/nk2/neurokernel/mpi_backend.py", line 78, in <module>
2015-04-22T23:24:54Z:ERROR:prc 0     |    instance = target(**kwargs)
2015-04-22T23:24:54Z:ERROR:prc 0     |File "/home/lev/Work/projects/bionet/python/nk2/neurokernel/core.py", line 86, in __init__
2015-04-22T23:24:54Z:ERROR:prc 0     |    raise ValueError('routing table must contain specified module ID')
2015-04-22T23:24:54Z:ERROR:prc 0     |ValueError: routing table must contain specified module ID

speed up emulation by not triggering port selector lookup for spiking ports

BaseModule._put_out_data() avoids triggering the time-consuming port identifier lookup mechanism during emulation by using integer indices into the graded potential port data array computed in BaseModule._init_port_dicts(). We need to do something similar for spiking ports to avoid slowing down processing of models that make heavy use of spiking ports (e.g., the current olfactory model).

Copying @nikulukani, @chungheng.

reduce invocations of PathLikeSelector.multiindex_row_in() in select() method

Reducing the number of times multiindex_row_in() is invoked in select() should speed up execution of the latter; this could be accomplished by

  • directly passing the fully expanded selector's list of port identifier tuples to DataFrame.ix[] if the selector is unambiguous;
  • replacing the '*' and '[:]' tokens in an ambiguous selector with slices that can then be used with IndexSlice (assuming pandas 0.14+), as sketched below.
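
A small sketch of the IndexSlice approach (assuming pandas 0.14+; the data are illustrative):

import pandas as pd

idx = pd.MultiIndex.from_tuples([('a', 0), ('a', 1), ('b', 0)])
df = pd.DataFrame({'v': [1, 2, 3]}, index=idx).sort_index()

# a selector like '/a/*' becomes a slice over the first level
print(df.loc[pd.IndexSlice['a', :], :])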
