
uwb-biocomputing / graphitti

A project to facilitate construction of high-performance simulations of graph-structured systems.

Home Page: https://uwb-biocomputing.github.io/Graphitti/

License: Apache License 2.0

Languages: C++ 84.63%, CMake 1.26%, C 0.85%, Starlark 0.53%, Shell 0.67%, Python 11.37%, Cuda 0.62%, Objective-C 0.07%
Topics: c-plus-plus, neural-simulator, graph-simulator, gpu-acceleration, performance-neural-simulations, emergency-services

graphitti's Issues

Clarify comment for ConnGrowth::updateSynapsesWeights()

Right now, the comment indicates that the method updates weights. It should be clarified to indicate that the method takes a weight matrix that has already been computed and merely pushes those weights into the synapse data structures that reside on the device(s).

Bring code up to C++11 standard

See issue UWB-Biocomputing/BrainGrid#232 for the bug aspect of this. Currently, the Makefile compiles code against the C++98 standard, which causes a problem with the initialization of static const member variables. More generally, there is room for code cleanup and modernization (which might also have a small positive impact on performance). This is a worthwhile project.
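A minimal sketch of the static const issue (the class and member names here are made up for illustration):

```cpp
#include <cassert>

// Illustrative sketch of the pitfall: in C++98, only static const members of
// integral type may be initialized in-class; C++11's constexpr lifts that
// restriction for other literal types such as double.
struct NeuronDefaults {
    static const int kMaxThreads = 256;                 // legal even in C++98
    static constexpr double kRestingPotential = -70.0;  // requires C++11
};
```

Under C++98, the second initializer is a compile error and the constant must be defined out of class in a .cpp file; C++11 allows it in the header.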

What is affected by this?

Simulator


Makefile cleanup needed


What is affected by this?

Affects simulator. Currently, there is a CXX variable for g++. We should add a CXX_cuda variable so we can set the CUDA compiler; the rules at the bottom of the Makefile should then use CXX_cuda instead of hard-coding "nvcc".


Ability to create output file isn't being checked

When the simulator is run and can't create the output file ("stateOutputFileName", from the config file) at the end of the simulation, it just terminates without any error message, leaving the reason why there's no output file a mystery. It should at least produce an error message. Even better, it could check at startup that it can create the file, terminating with a useful error message right away (rather than after several days of chugging along).
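A startup check along the lines suggested could look like this (the function name and error text are hypothetical):

```cpp
#include <cassert>
#include <fstream>
#include <stdexcept>
#include <string>

// Probe the result file at startup so a bad path fails immediately with a
// clear message instead of after days of simulation. Append mode avoids
// truncating a file we may legitimately overwrite later.
void verifyWritable(const std::string& path) {
    std::ofstream probe(path, std::ios::app);
    if (!probe) {
        throw std::runtime_error("cannot open output file for writing: " + path);
    }
}
```

Calling this for each configured output path before the simulation loop starts would surface path mistakes immediately.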

What is affected by this?

Simulator

Rename saveData methods

The current name works, but something like "saveResults", or another name that communicates that it produces simulation result output files for other tools (for analysis, etc.), would be clearer.

rng/Norm.h: Norm class should inherit from MTRand

Right now, there is a #define in MersenneTwister.h that should be removed, because it makes it almost impossible to figure out what's going on in this code (and Doxygen surely can't figure it out either).

Merge ClusterInfo into Cluster

ClusterInfo contains per-cluster data, rather than data global to all clusters. So, there's no need for this extra class; its contents can become part of Cluster.

What is affected by this?

Simulator


SparseMatrix operator()

SparseMatrix::operator() creates a zero value element if the element doesn't exist. This appears to violate the spirit of what a sparse matrix should be. On the other hand, it isn't immediately clear what else to do if you want to use this as an Lvalue. Perhaps there is no better option. But, a const accessor version is an obvious approach to partially ameliorate this.

Then again, I don't remember what was going through my mind at the time, and we shouldn't assume I was completely confused (no comments from the peanut gallery, please). This needs to be investigated thoroughly; maybe I will need to do it myself.
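One way the const-accessor idea could look, sketched on a toy map-backed class rather than the real SparseMatrix:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <utility>

// Illustrative sketch (not the real SparseMatrix): a const accessor that
// reports 0 for absent elements without inserting them, alongside the
// mutating operator() used as an lvalue.
class SparseMatrixSketch {
public:
    // Lvalue access: creates a zero element if missing (current behavior).
    double& operator()(int r, int c) { return data_[{r, c}]; }

    // Const access: returns 0 for missing elements, leaves storage untouched.
    double at(int r, int c) const {
        auto it = data_.find({r, c});
        return it == data_.end() ? 0.0 : it->second;
    }

    std::size_t nonZeroCount() const { return data_.size(); }

private:
    std::map<std::pair<int, int>, double> data_;
};
```

Read-only call sites would use at() (or the const overload would be selected automatically through const references), so only genuine assignments grow the storage.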

AllSpikingSynapses::createSynapse() cleanup

There are lines of code that access member variables as this->varname because a local variable or parameter has the same name. It would be better to rename the local variable/parameter to avoid the shadowing.
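A before/after sketch with made-up names:

```cpp
#include <cassert>

// Illustrative sketch: renaming the parameter removes the need for this->
// qualification and the risk of silently referring to the wrong variable.
class SynapseSketch {
public:
    // Before: void setDecay(double decay) { this->decay = decay; }
    void setDecay(double newDecay) { decay = newDecay; }
    double getDecay() const { return decay; }

private:
    double decay = 0.0;
};
```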

Need a cleaner way to specify parameter file paths

Right now, we specify the main parameter file name on the command line, and the neuron lists have relative path names inside that file. Those paths are relative to the working directory where BG is run, not to the parameter file's location. We should either make this behavior more obvious, or add a command line argument that specifies the parameter file directory and make the NList file paths relative to that (which would also eliminate the need to provide the path in the command line argument for the main parameter file).
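Assuming C++17's &lt;filesystem&gt; is acceptable, the parameter-file-relative interpretation could be sketched as follows (function and argument names are hypothetical):

```cpp
#include <cassert>
#include <filesystem>
#include <string>

// Interpret a neuron-list path from the parameter file relative to the
// parameter file's own directory rather than the current working directory.
std::filesystem::path resolveAgainstConfig(const std::string& parameterFile,
                                           const std::string& listedPath) {
    std::filesystem::path p(listedPath);
    if (p.is_absolute()) return p;
    return std::filesystem::path(parameterFile).parent_path() / p;
}
```

Absolute paths pass through unchanged, so existing configurations that use them keep working.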

Think about architecture for supporting different neuron/synapse types, and interdependencies between them

Right now, we attach attributes such as "excitatory" or "inhibitory", etc. to the synapse, which is the "correct" way to do things. However, this doesn't work for the Izhikevich model, because there are some neuron parameters that vary by type. Right now, we encode this information in the Layout. We should think through how to deal with this in the long run.

See segundo86, figure 22 for starting point for many different synapse (actually, PSP) types. Need to do a little research about whether neurons can actually have different synapse types.

Create device configuration subsystem

This should do the following:

  1. Get information about simulation configuration (number of devices, which devices, which models, etc.)
  2. Read device information for all requested devices
  3. Allocate neurons to devices (incoming synapses live on the same device as their neuron)
  4. Calculate threads per block, number of blocks per device.

How do we specify the specific devices to use in a way that makes sense and will work on multiple machines (not just raiju or cerberus)?

Think about separating integration method from models

Ideally, a method/kernel like advanceNeuron(s) or advanceSynapse(s) should take a function that returns the derivatives of the model state variable(s) and a function that implements an integrator, and then use those two to update the state variable(s). Right now, all of that is intermixed in the model state variable update. With this (admittedly not minor) change, the advance method/kernel could basically be inherited from the top of the class hierarchy.

This will require some thought, because we want it to be efficient on both the CPU and the GPU.
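The split described above can be sketched as follows. All names are illustrative, not Graphitti's actual API, and std::function is used only for brevity:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// The model supplies a derivative function, the integrator supplies the
// update rule, and a generic advance() composes them.
using Derivative = std::function<double(double /*state*/, double /*t*/)>;
using Integrator = std::function<double(const Derivative&, double /*state*/,
                                        double /*t*/, double /*dt*/)>;

// Forward Euler as one interchangeable integrator.
double eulerStep(const Derivative& f, double y, double t, double dt) {
    return y + dt * f(y, t);
}

// Generic advance loop that could live at the top of the class hierarchy.
double advance(const Derivative& f, const Integrator& step, double y0,
               double t0, double dt, int nSteps) {
    double y = y0, t = t0;
    for (int i = 0; i < nSteps; ++i) {
        y = step(f, y, t, dt);
        t += dt;
    }
    return y;
}
```

For GPU kernels, templates or plain functors would replace std::function, since type-erased calls don't inline well and std::function isn't usable in device code; that is exactly the CPU/GPU efficiency tension noted above.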

FClassOfCategory cleanup as part of parameter refactoring


So, a few things about this class to look into:

  1. Is the class name the best name to use for a factory class?
  2. This is actually a set of four factory classes; will that be a problem as we refactor parameter reading?
  3. This also implements a singleton pattern, using the static get() method. Is this the "classic" pattern implementation approach?
  4. The code that registers the available neuron, synapse, connection, and layout classes is in the FClassOfCategory constructor. It would probably be better if this were in the classes that might be instantiated, to establish a good pattern for additional ones, to avoid the need for others to modify the factory, and to be consistent with our plans for registering parameters to a parameter reading class.
  5. The createX() methods all read the name of the class to create and then call a createXWithName() method to do the instantiation. The class name reading needs to be moved into the "phase 1" parameter reading method.
  6. The readParameters() method appears to be reading all of the instantiated classes' parameters, via the VisitEnter() method. This is the top-level core of the "phase 2" (model component instance) parameter reading code.
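Regarding item 3, the "classic" C++11 implementation is usually taken to be the Meyers singleton, a function-local static whose initialization has been thread-safe since C++11. A sketch (class name is made up):

```cpp
#include <cassert>

// Meyers singleton: constructed once, on first use, with thread-safe
// initialization guaranteed by C++11; copying is disabled.
class FactorySketch {
public:
    static FactorySketch& get() {
        static FactorySketch instance;
        return instance;
    }
    FactorySketch(const FactorySketch&) = delete;
    FactorySketch& operator=(const FactorySketch&) = delete;

private:
    FactorySketch() = default;
};
```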


Need new architecture for passing operations to contained objects


Right now, we have a high degree of nesting of objects in other objects, which means that operations done at a high level, for example from main(), must be passed down a long chain of method calls to reach the object that can actually perform the operation. We need to fix this.


Test that all files are writeable before simulation itself starts

Right now, the serialization file isn't opened until after the simulation runs. The same may be true for the results file. We should test that it's possible to open these files for writing before the simulation runs, so that the user doesn't have to wait until after the simulation to realize that they are in the wrong working directory.


DynamicLayout class needs minor parameter cleanup

The first level below the class in the parameter file is "LayoutFiles". This was copied from FixedLayout. However, the parameters below that are fractions, used to generate random layouts, not file names. The "LayoutFiles" tag should be changed to read "LayoutStatistics" or the like.

Generate list of others' code that is part of BG+WB

This is a precursor to investigating their licenses, whether those are problematic for us and our licenses, and what we need to do to be in compliance with their licenses. Please generate one additional issue for each package that we're using, so someone else can follow up with details.

Naming cleanup needed


In both code and parameter files.

### Code

  • SimulationInfo: maxSteps -> numEpochs
  • SimulationInfo: currentStep -> currentEpoch
  • Simulator.cpp::simulate(): currentStep -> currentEpoch
  • SimulationInfo: resultOutputFileName -> resultFileName
  • SimulationInfo: resultInputFileName -> parameterFileName
  • SimulationInfo: stimulusInputFileName -> stimulusFileName

### Parameter file

  • numSims -> numEpochs
  • Tsim -> epochDuration


Intelligent GPU query/configuration capability

Can we automatically determine the number of threads per block, etc., based on GPU capabilities? At a minimum, we should query the GPU capabilities and exit gracefully, with informative feedback, if they don't match the simulation's requirements.

Rename methods in AllSynapses (and subclasses) for clarity

Method names are ad hoc. Let's make them clearer and indicate whether they apply to single synapses (the class name already implies all synapses, so there's no need to state that explicitly).

uploads/baa6d1ed-3667-495c-aa58-b035dd15668e/IMG_9203.JPG


Pretty much all things named "SingleThreaded..." should be renamed "CPU..."


Everything is now inherently multi-threaded (even if the number of threads is 1). What we really mean, in this case, is simulation thread(s) that run on the CPU.


Implementing stopgap parameter validation

It appears that many of the classes that read parameters have a checkNumParameters() function. Do all of them? How is this used as a sanity check? Assuming it's actually used, we could deal with some of the issues we discussed in our 5/23 lab meeting by at least implementing this function correctly for each class; this should be a quick thing.

GPU simulation finalization is combined with memory cleanup


Right now, simulator::finish() copies information back from the GPU to the CPU, frees memory on the GPU, and also triggers deletion of some CPU-side objects. We should split off the last of those three into a separate method, so that one method copies everything back from the GPU and frees the GPU storage (or maybe that should even be two methods) and another deletes CPU-side objects. In fact, maybe we can move all deletion of CPU-side state into destructors, which would accomplish the same thing.


Fix random number generator seeding

There are two (sets of) RNGs: rng in Globals.cpp, which is used to randomize parameters during setup and is seeded with a hard-coded constant; and the rgNormrnd/GPU Mersenne Twister RNG(s) (created in SingleThreadedSpikingModel.cpp or GPUSpikingModel.cu), which are used to generate noise during simulation. The latter is seeded from the parameter file; the former is not.
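One possible shape for the fix, sketched with hypothetical names: derive both seeds from a single parameter-file value so runs are reproducible and nothing is hard-coded.

```cpp
#include <cassert>
#include <random>

// Illustrative sketch: one configured seed drives both generators, with an
// offset so the setup and noise streams are distinct.
struct SimRngs {
    std::mt19937 setupRng;  // randomizes parameters during setup
    std::mt19937 noiseRng;  // generates noise during simulation

    explicit SimRngs(unsigned seed) : setupRng(seed), noiseRng(seed + 1) {}
};
```

Two runs constructed with the same parameter-file seed then produce identical sequences from both generators.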

Optimize variables in AllNeurons and AllSynapses

@fumik added comments in AllNeurons.h and AllSynapses.h. There are four types of structure members: LOCAL CONSTANT, LOCAL VARIABLE, GLOBAL CONSTANT, and GLOBAL VARIABLE. All GLOBAL members can be scalars rather than arrays. Also, some structure members are neuron- or synapse-type dependent, so they can be optimized.

Fix Izhikevich synaptic weights

We also need to look at the units we use in the code, how they differ from those in the Matlab reference code, and which makes sense for us to use.

Variable renaming needed in GPUSpikingModel::setupSim()

Clean up the following:
```cpp
// Initialize Mersenne Twister.
// Assumes neuron_count >= 100 and a multiple of 100.
// Note: rng_mt_rng_count must be <= MT_RNG_COUNT.
int rng_blocks = 25;  // number of blocks the kernel will use
int rng_nPerRng = 4;  // iterations per thread (rands generated per thread)
int rng_mt_rng_count = sim_info->totalNeurons / rng_nPerRng;  // threads needed for neuron_count rands
int rng_threads = rng_mt_rng_count / rng_blocks;              // threads per block
```

Fix performance calculations

It seems some performance numbers are totaled over the entire simulation, while others cover only the last kernel/function call. Make this consistent and have it report an average per call.

Think about factory class registration implementation pattern

Right now, the factory class's constructor has a bunch of calls to register the various types of neurons, etc. I think that it would be simpler conceptually if the registerX() calls were made from the neuron, etc. classes. That way, there would be a very clear implementation pattern for a new class, and no need to modify other code (like the factory class). Perhaps this would require a static method for each of those classes, plus a mechanism to trigger all of those static methods at program load time? I believe I have some musty old code that demonstrates this, in SourceVersions.

SourceVersions.zip
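The load-time registration idea can be sketched as follows (all names here are hypothetical, not Graphitti's actual classes):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Each concrete class registers its own creator via a file-scope static,
// so adding a class never requires editing the factory.
struct Neuron {
    virtual ~Neuron() = default;
};

class NeuronFactory {
public:
    using Creator = std::function<std::unique_ptr<Neuron>()>;

    static NeuronFactory& get() {
        static NeuronFactory instance;  // Meyers singleton
        return instance;
    }
    void registerClass(const std::string& name, Creator create) {
        creators_[name] = std::move(create);
    }
    std::unique_ptr<Neuron> create(const std::string& name) const {
        auto it = creators_.find(name);
        if (it == creators_.end()) return nullptr;
        return it->second();
    }

private:
    NeuronFactory() = default;
    std::map<std::string, Creator> creators_;
};

struct IzhNeuron : Neuron {};

// This initializer runs at program load, before main(), registering the class.
[[maybe_unused]] static const bool izhRegistered = [] {
    NeuronFactory::get().registerClass(
        "IzhNeuron", [] { return std::unique_ptr<Neuron>(new IzhNeuron()); });
    return true;
}();
```

Each concrete class would carry one such static in its own .cpp file; the main caveat is ensuring the linker doesn't discard those translation units (e.g., when building a static library).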
