
course-content's Introduction

NeuroMatch Academy (NMA) Computational Neuroscience syllabus

July 10-28, 2023

Please check out expected prerequisites here!

The content should primarily be accessed from our ebook: https://compneuro.neuromatch.io/ [under continuous development]

Schedule for 2023: https://github.com/NeuromatchAcademy/course-content/blob/main/tutorials/Schedule/daily_schedules.md


Licensing


The contents of this repository are shared under a Creative Commons Attribution 4.0 International License.

Software elements are additionally licensed under the BSD (3-Clause) License.

Derivative works may use the license that is more appropriate to the relevant context.

course-content's People

Contributors

actions-user, annkennedy, athenaakrami, bgalbraith, carsen-stringer, courtneydean33, ebatty, eejd, gaganab, gunnarblohm, iamzoltan, jamenendez11, jesselivezey, kshitijd20, marius10p, mk-mccann, mpbrigham, mrkrause, msarvestani, mstimberg, mwaskom, patrickmineault, siddsuresh97, slinderman, spiroschv, ssnio, taravanviegen, titipata, vincentvalton, yifr


course-content's Issues

Add script for removing orphan derivative files

As exercises get revised, the derivative files (solution images and scripts) will become orphaned. We need a little clean-up script to remove those files from the repo to keep everything tidy.
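A minimal sketch of such a script, assuming derivative files live in each day's static/ folder and are referenced by filename from the notebooks (the paths and naming scheme here are assumptions):

```python
import glob
import os

def find_orphans(day_dir):
    """Return static files not referenced by any notebook in day_dir."""
    notebook_text = ""
    for nb_path in glob.glob(os.path.join(day_dir, "*.ipynb")):
        with open(nb_path, encoding="utf-8") as f:
            notebook_text += f.read()
    # A static file is an orphan if no notebook mentions its filename
    return [f for f in glob.glob(os.path.join(day_dir, "static", "*"))
            if os.path.basename(f) not in notebook_text]

# Dry run: list orphans for every tutorial day before deleting anything
for day_dir in sorted(glob.glob("tutorials/W?D?_*")):
    for orphan in find_orphans(day_dir):
        print("orphan:", orphan)
```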

W1D2T2 - Micro-tutorial 7

Just a small point of confusion: in Micro-tutorial 7, the step-by-step instructions say to take the average after filtering, when what is actually taken is the final value.


Videos are not working

Are any modifications taking place? I was going through the optimal control tutorials, but the videos have been removed and now show as private.

W2D2 - Tutorial 1

In the helper function
'plot_trajectory(system, params, initial_condition, dt=0.1, T=6, figtitle=None)':

solution = solve_ivp(system, t_span=(0, T), y0=x0, t_eval=t, args=(params), dense_output=True)

Inside the function, y0 is set to x0, but the argument passed to the helper is named 'initial_condition', so x0 is undefined.
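A hedged sketch of the fix, using the function's own argument name so the call is self-consistent (the time grid t is assumed to be built from dt and T; the plotting code is elided):

```python
import numpy as np
from scipy.integrate import solve_ivp

def plot_trajectory(system, params, initial_condition, dt=0.1, T=6,
                    figtitle=None):
    t = np.arange(0, T, dt)
    solution = solve_ivp(system,
                         t_span=(0, T),
                         y0=initial_condition,  # was y0=x0 (undefined here)
                         t_eval=t,
                         args=(params,),  # (params,) is a one-element tuple;
                                          # bare (params) is just params
                         dense_output=True)
    # ... plotting unchanged ...
    return solution
```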

W1D4T1 - Wrong equation

In the notebook, the log-likelihood for the Poisson GLM is given as:

[figure W1D4T1: the log-likelihood equation as written in the notebook]

leading to the wrong answer in the exercise.

But in the video it is:

[figure W1D4T1_2: the log-likelihood equation as written in the video]

which is the correct one.
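For reference, the textbook form of the Poisson GLM log-likelihood is shown below. This is general background rather than a transcription of either figure, so the notation may differ from the notebook's:

\begin{align}
\log \mathcal{L}(\theta) = \sum_{i=1}^N \left( y_i \log \lambda_i - \lambda_i - \log y_i! \right), \qquad \lambda_i = \exp(\theta^\top x_i)
\end{align}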

Typo in the "hint" figures

I noticed a typo in the links for the "hint" figures (at least in W1D1 and W1D2): in the notebooks as opened in Colab, the figures are not shown (https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D1-ModelTypes/static/W1D1_Tutorial1_Solution_7183720f_0.png) because of a dash where an underscore belongs (the correct link is https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D1_ModelTypes/static/W1D1_Tutorial1_Solution_7183720f_0.png), probably because a commit renamed the folders.

W1D1-Tutorial 2 has missing videos

The first video is missing from W1D1 Tutorial 2; it shows as removed by the uploader.
Also, tutorial videos 6 and 7 from the W1D1 playlist do not appear in any of the tutorials.

W1D5 Tutorials should use np.linalg.eigh

Porting over the discussion from #121:

Context: the notebook was running fine on colab but failing the notebook check:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-15-0edb6db6447f> in <module>
     23   X_mean = np.mean(X,0)
     24   X_reconstructed = reconstruct_data(score,evectors,X_mean,K)
---> 25   plot_MNIST_reconstruction(X ,X_reconstructed)

<ipython-input-4-148b545ffdc9> in plot_MNIST_reconstruction(X, X_reconstructed)
     51     for k2 in range(3):
     52       k = k+1
---> 53       plt.imshow(np.reshape(X_reconstructed[k,:],(28,28)),extent=[(k1+1)*28,k1*28,(k2+1)*28,k2*28],vmin=0,vmax=255)
     54   plt.xlim((3*28,0))
     55   plt.ylim((3*28,0))

~/miniconda3/envs/nma/lib/python3.7/site-packages/matplotlib/pyplot.py in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, data, **kwargs)
   2649         filternorm=filternorm, filterrad=filterrad, imlim=imlim,
   2650         resample=resample, url=url, **({"data": data} if data is not
-> 2651         None else {}), **kwargs)
   2652     sci(__ret)
   2653     return __ret

~/miniconda3/envs/nma/lib/python3.7/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
   1563     def inner(ax, *args, data=None, **kwargs):
   1564         if data is None:
-> 1565             return func(ax, *map(sanitize_sequence, args), **kwargs)
   1566
   1567         bound = new_sig.bind(ax, *args, **kwargs)

~/miniconda3/envs/nma/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
    356                 f"%(removal)s.  If any parameter follows {name!r}, they "
    357                 f"should be pass as keyword, not positionally.")
--> 358         return func(*args, **kwargs)
    359
    360     return wrapper

~/miniconda3/envs/nma/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
    356                 f"%(removal)s.  If any parameter follows {name!r}, they "
    357                 f"should be pass as keyword, not positionally.")
--> 358         return func(*args, **kwargs)
    359
    360     return wrapper

~/miniconda3/envs/nma/lib/python3.7/site-packages/matplotlib/axes/_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs)
   5613                               resample=resample, **kwargs)
   5614
-> 5615         im.set_data(X)
   5616         im.set_alpha(alpha)
   5617         if im.get_clip_path() is None:

~/miniconda3/envs/nma/lib/python3.7/site-packages/matplotlib/image.py in set_data(self, A)
    692                 not np.can_cast(self._A.dtype, float, "same_kind")):
    693             raise TypeError("Image data of dtype {} cannot be converted to "
--> 694                             "float".format(self._A.dtype))
    695
    696         if not (self._A.ndim == 2

TypeError: Image data of dtype complex128 cannot be converted to float

The error happens because np.linalg.eig returns complex eigenvectors, and imshow fails if it is passed a complex array.

But why is it not failing on colab? I'm not 100% sure, but I could only reproduce it locally inconsistently while playing around with numpy versions. I don't think it depends on the numpy version per se; rather, changing the version gave me numpy builds linked against different linear algebra libraries.

np.linalg.eig internally checks whether the imaginary parts of the eigenvalues are all zero, and returns float or complex data depending on the outcome.

The notebook uses np.linalg.eig on a covariance matrix, so the eigenvalues should be real. But presumably a small amount of numerical error in the computation is handled differently by different linear algebra libraries, which would account for the difference in return type.

Solution: the W1D5 notebooks should use np.linalg.eigh instead of np.linalg.eig. This function always returns real-valued data, and unlike np.linalg.eig it also orders the outputs by sorting the eigenvalues, making the manual sorting currently done in the tutorials unnecessary.
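A small sketch of the proposed change (made-up data): on a symmetric matrix, np.linalg.eig may return complex output because of floating-point noise, while np.linalg.eigh guarantees a real dtype and returns eigenvalues in ascending order, so PCA only needs a flip rather than a manual sort:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
cov = np.cov(X.T)  # symmetric by construction

evals, evecs = np.linalg.eigh(cov)  # real dtype, eigenvalues ascending
evals, evecs = evals[::-1], evecs[:, ::-1]  # descending order for PCA

print(evecs.dtype)  # float64, so plt.imshow accepts the reconstructions
```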

Tutorial notebooks open in colab with original filename

I guess Colab embeds the filename as metadata and uses that in the UI, so we end up with files like "Copy of Copy of Model Fitting Tutorial 3.ipynb", which is messy.

I'm not sure whether this is best solved in the automated workflow or by making sure canonical versions are manually assigned standardized names during revision.
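A hedged sketch of the automated-workflow option, assuming Colab stores the display name at metadata.colab.name (worth verifying on a real Colab export before relying on it):

```python
import glob
import os
import nbformat

# Rewrite the embedded name to match the canonical filename
for path in glob.glob("tutorials/W?D?_*/*.ipynb"):
    nb = nbformat.read(path, as_version=4)
    canonical = os.path.basename(path)
    colab_meta = nb.metadata.setdefault("colab", {})
    if colab_meta.get("name") != canonical:
        colab_meta["name"] = canonical
        nbformat.write(nb, path)
```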

Video links in PythonWorkshop

The video links in PythonWorkshop point to the Bayesian Statistics Day videos. Will this part be replaced soon, or is there no video for the workshop?

W1D1T3 - Explaining the difference in conditions between penultimate and last example

Hey there,
Maybe more of a comment than anything major, but something I stumbled over:
The simulation part leaves one with the impression that exponential ISI distributions should yield the most entropy (when the mean is fixed), but then in the interactive tool the "uniform" neurons show the higher entropies.
I think a sentence or two pointing out the different conditions would be helpful.

Bug in W1D3 Tutorial 7 solution

AIC[order] = 2*K + n_samples * np.log(sse/n_samples)

The linked line calculates AIC using n_samples.

However, in the previous cell (shown below), n_samples was last assigned for the test dataset.

# You've seen this code before!

### Generate training data
np.random.seed(0)
n_samples = 50
x_train = np.random.uniform(-2, 2.5, n_samples) # sample from a uniform distribution over [-2, 2.5)
noise = np.random.randn(n_samples) # sample from a standard normal distribution
y_train =  x_train**2 - x_train - 2 + noise

### Generate testing data
n_samples = 20
x_test = np.random.uniform(-3, 3, n_samples) # sample from a uniform distribution over [-3, 3)
noise = np.random.randn(n_samples) # sample from a standard normal distribution
y_test =  x_test**2 - x_test - 2 + noise

### Fit polynomial regression models
max_order = 5
theta_hat = solve_poly_reg(x_train, y_train, max_order)

It's the training dataset that is being used to calculate AIC, and since the training set is larger, the resulting AIC is inaccurate (though it doesn't affect model selection).

Solution: either

  1. replace n_samples in that line with `len(y_hat)`, or
  2. use the test dataset for the AIC calculation.

This will also require redoing the solution figure for this cell in the tutorial 7 notebook.
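A minimal sketch of option 1, wrapped as a helper for clarity (variable names follow the tutorial; y_hat is assumed to hold the training-set predictions for the given order):

```python
import numpy as np

def aic_gaussian(y_true, y_hat, K):
    """AIC for a least-squares fit with K parameters."""
    n = len(y_hat)  # always matches the data actually used, even if a
                    # global n_samples has since been reassigned
    sse = np.sum((y_true - y_hat) ** 2)
    return 2 * K + n * np.log(sse / n)
```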

Add tutorial review checklist

We should add a pull request template that auto-generates a checklist for late-stage tutorial notebook reviewing. It should list all the things we want to double-check that aren't automated (naming scheme, etc.).
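A hypothetical starting point for the template (`.github/PULL_REQUEST_TEMPLATE.md`); the actual items should come from the review guidelines:

```markdown
## Tutorial review checklist

- [ ] Filename follows the WxDy naming scheme
- [ ] Student version regenerated, with solutions removed
- [ ] Video links resolve and are public
- [ ] Hint-figure links point at the current folder names
- [ ] Notebook runs top to bottom on Colab
```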

Standard for specific libraries

What is the standard way for content creators to add non-standard python modules/libraries?

One possibility is to add a requirements.txt file alongside the notebooks and insert a cell with a single command after the title of the notebook:
!pip install -r ./requirements.txt

Week 1, Day 3, Tutorial 1, Equation incorrect

The equation for the optimal theta_hat

\begin{align}
\hat\theta = \sum_{i=1}^N \frac{x_i y_i}{x_i^2}
\end{align}

seems to be incorrect? Shouldn't it be:

\begin{align}
\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}
\end{align}
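A quick numerical sanity check of the corrected formula (a sketch with made-up data, not taken from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 100)
y = 1.5 * x + rng.standard_normal(100)

# Corrected closed form: ratio of sums, not sum of ratios
theta_hat = np.sum(x * y) / np.sum(x ** 2)

# Compare against least squares on the same design
theta_lstsq, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)
print(np.isclose(theta_hat, theta_lstsq[0]))  # True
```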

Best,

Jan

W2D2_Tutorial3 wrong equilibrium variance equation in Part C

In part C of W2D2_Tutorial3 the equilibrium variance of the OU process is calculated as:
Var = sig^2 / (2*(1-lambda))

However, I think this is not the correct analytical solution for the equilibrium variance. A simple mathematical proof (for equilibrium mean = 0) is shown below (the ~ sign means "follows the distribution"):

x0 = x0
x1 = lambda*x0 + N(0, sig^2) ~ N(lambda*x0, sig^2)
x2 = lambda*x1 + N(0, sig^2) ~ lambda*N(lambda*x0, sig^2) + N(0, sig^2) ~ N(lambda^2*x0, (lambda^2+1)*sig^2)
x3 = lambda*x2 + N(0, sig^2) ~ N(lambda^3*x0, (lambda^4+lambda^2+1)*sig^2)
.
.
.
xn ~ N(lambda^n*x0, (lambda^(2*n-2) + ... + lambda^4+lambda^2+1)*sig^2)

Note that:

lambda^(2*n-2) + ... + lambda^4+lambda^2+1 = (1-lambda^(2*n)) / (1-lambda^2)

And since lambda < 1, as n -> inf we have lambda^(2*n) -> 0, therefore:

lambda^(2*n-2) + ... + lambda^4+lambda^2+1 = 1 / (1-lambda^2)

Therefore,

xn ~ N(lambda^n*x0, sig^2/(1-lambda^2))

So the equilibrium variance should be sig^2/(1-lambda^2) instead of sig^2/(2*(1-lambda))
And if you use this new analytical solution in exercise 3C, the fit to the diagonal line is better.
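A simulation check of the corrected value, using the same discrete update as the proof above (a sketch; lambda and sig are arbitrary choices, not the tutorial's parameters):

```python
import numpy as np

lam, sig, n_steps = 0.9, 0.5, 200_000
rng = np.random.default_rng(0)

x = np.zeros(n_steps)
for t in range(1, n_steps):
    x[t] = lam * x[t - 1] + sig * rng.standard_normal()

print(np.var(x[1000:]))          # empirical equilibrium variance, ~1.32
print(sig**2 / (1 - lam**2))     # proposed analytical value: ~1.316
print(sig**2 / (2 * (1 - lam)))  # tutorial's value: 1.25
```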

Questions on Colab->Github workflow in reviewing-tutorials.md

Re #6. If the QC workflow fails, revise the notebook on Colab then repeat Step 3.

What indicates that the QC has failed?

Re #7. Once the final pieces (e.g. video links) are in place, repeat Step 2 and Step 3, then remove the "draft" status from the PR on github and @ the reviewers for the merge.

How is the merge allowed without reviewers?

W1D1-Tutorial2

In the Parameter Exploration section, the code fails if one increases the inh_rate variable too much. As a result there are no spikes, which leads to "ValueError: zero-size array to reduction operation minimum which has no identity" when the distribution statistics are computed (the _amin function). If this wasn't left in on purpose, I'm afraid it may confuse some students.
Details are in the screenshot below.
[screenshot: Too_strong_inhibition]
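A hedged sketch of a possible guard (the function and variable names here are hypothetical; the tutorial's actual plotting code may differ): skip the distribution statistics when no spikes were produced.

```python
import numpy as np

def safe_isi_stats(spike_times):
    """Return (min ISI, mean ISI), or None when there are too few spikes."""
    if len(spike_times) < 2:
        # Avoids "zero-size array to reduction operation minimum" errors
        return None
    isis = np.diff(spike_times)
    return isis.min(), isis.mean()
```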

Proposal on information displayed

Maybe something like the following is easier to read? @patrickmineault @GunnarBlohm

Mon, July 13: Introduction to Computational Neuroscience and NMA

Description Introduction of datasets (spikes, EEG, fMRI + behavior), and questions about them. These questions will foreshadow the whole summer school.

Day 1 Schedule

| Time (Hour) | Lecture | Detail |
| --- | --- | --- |
| 0:00 - 0:50 | Intro / keynote & tutorial setup | NMA organization, expectations, code of conduct, modeling vs. data |
| 1:30 - 2:05 | Lecture & Tutorial 1 | Data intro, preprocessing |
| 2:10 - 2:45 | Lecture & Tutorial 2 | Link of neural data to behavior |
| 4:35 - 5:30 | Recap, Q&A | Outlook on school |
| 5:30 - 6:00 | Professional development | Being a good NMA participant |

W3D1T1 - typo

"The cell below defines the function to simulate the LIF neuron when receiving external current inputs. You can use v, sp = run_LIF(pars, I) to get the membrane potential and spike train with giveN [?] pars and input current I."

Syllabus for W1D4 is off

The syllabus lists 4 tutorials, though there are only 2, and the themes are different. Here's an updated entry:

| Time (Hour) | Lecture | Details |
| --- | --- | --- |
| 0:00-0:30* | Intro / keynote & tutorial setup | We want to predict (scikit-learn) |
| 0:30-0:45 | Pod Q&A | Lecture discussion with pod TA |
| 0:50-2:05 | Tutorial 1 + nano-lectures | Introduction to GLMs and predicting neural responses |
| 2:05-2:25 | Discussion 1 | Discussion with pod TA |
| 2:25-3:25 | Big break | BREAK |
| 3:25-4:40 | Tutorial 2 + nano-lectures | Logistic regression, regularization, and decoding neural activity |
| 4:40-5:00 | Discussion 2 | Discussion with pod TA |
| 5:05-5:35 | Outro | Recap session, promises and pitfalls of ML for neuroscience |
| 5:35-6:00 | Q&A | Q&A with lecturers/mentors |

Come up with standard structure for pulling in tutorials

The naming scheme we have right now in tutorials won't scale to a large number of tutorials. Instead of placing the Bayes day tutorials in a folder called Bayes, how about:

WxDy-CamelCaseSubject, e.g. W2D1-BayesianStatistics

Inside, we can have the student version at the top level, the solutions inside TA_solutions, and static content specific to that day inside static, for example:
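One day's folder might then look like this (file names are illustrative):

```
tutorials/
└── W2D1-BayesianStatistics/
    ├── W2D1_Tutorial1.ipynb            # student version at the top level
    ├── TA_solutions/
    │   └── W2D1_Tutorial1_Solution.ipynb
    └── static/
        └── hint_figure.png             # day-specific static content
```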

Implementing case consistency in repo

When would it be a good time to implement case consistency across the repo?

Some direct links have been shared on social media, for example:
https://github.com/NeuromatchAcademy/course-content/tree/master/tutorials/reviewing-tutorials.md

This markdown file should be named ReviewingTutorials.md following the new guidelines.

W1D4 doesn't pass CI

Merged despite tests failing because the tutorials needed to be available by tomorrow morning; will take a look. The student version was not generated for this reason.

Code cells not hidden in local jupyter notebook

I think the cells with import ... (and some function definitions) are intended to be hidden by default, but after cloning the repo, installing the dependencies with conda, and running jupyter notebook, the cells are not hidden.

W3D3 Tutorial 2

In Exercise 1: Try to approximate causation with correlation

with plt.xkcd():
    fig, axs = plt.subplots(1,2, figsize=(10,5))
    plot_connectivity_matrix(X[:,[1]], ax=axs[0])

The activity matrix X is plotted as the ground truth instead of the connectivity matrix A.
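A sketch of the presumed fix, reusing the tutorial's own names (plot_connectivity_matrix, A, and the pyplot import come from the notebook):

```python
with plt.xkcd():
    fig, axs = plt.subplots(1, 2, figsize=(10, 5))
    plot_connectivity_matrix(A, ax=axs[0])  # was X[:, [1]]
```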

Solution needed for cut tutorials

The post-pod review process is going to cut at least one full tutorial, and probably more. We will need to stay on top of removing vestigial tutorials or else they will get picked up by the automated README script, etc.

I don't know whether we want some mechanism for preserving entire "bonus" tutorial documents, but that is a topic for discussion.

W1D3T1 - Wrong equation

When you analytically minimize the MSE in the notebook, the result should be sum(x*y)/sum(x**2), as in the video, and not sum(x*y/x**2).
