deeplearning.ai-summary's People

Contributors

abhagat-splunk, adityassrana, akashadhikari, debajyotidatta, dotslash21, iiey, jacobyan, jarpit96, jenglamlow, kaddynator, kaushal28, mbadry1, mbuet2ner, mrbeann, nguyenng1802, us, vernikagupta, vladkha, wangzhenhui1992, willychen123, xfge, xisisu

deeplearning.ai-summary's Issues

Difficult to understand

Language model and sequence generation

The job of a language model is given any sentence give a probability of that sentence. Also what is the next sentence probability given a sentence.

It is difficult to understand. Does it mean the following?

The job of a language model is to give the probability of any given sentence, and also the probability of the next sentence.

Maybe there is something wrong in your notes.

Title: Normalizing activations in a network
In the last line, the last expression reads:
if gamma = sqrt(variance + epsilon) and beta = mean then Z_tilde[i] = Z_norm[i]
I think it should be: if gamma = sqrt(variance + epsilon) and beta = mean then Z_tilde[i] = Z[i]
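
For reference, a quick numpy check of this identity, assuming the usual definitions Z_norm = (Z - mean) / sqrt(variance + epsilon) and Z_tilde = gamma * Z_norm + beta:

    import numpy as np

    eps = 1e-8
    Z = np.random.randn(64)

    mean, var = Z.mean(), Z.var()
    Z_norm = (Z - mean) / np.sqrt(var + eps)

    gamma = np.sqrt(var + eps)   # the special choice discussed above
    beta = mean
    Z_tilde = gamma * Z_norm + beta

    print(np.allclose(Z_tilde, Z))   # True: these gamma/beta recover Z, not Z_norm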

Places to be corrected

I found some places which need to be corrected.

Other regularization methods
incorrect: For example in OCR, you'll need the distort the digits.
correct : For example in OCR, you'll need to distort the digits.

Vanishing / Exploding gradients
incorrect: And If W < I (Identity matrix) The weights will explode
correct : And If W < I (Identity matrix) The weights will vanish
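
A tiny numpy sketch (depth and values are made up) of why weights below the identity make the signal vanish while weights above it make it explode:

    import numpy as np

    L = 50                          # hypothetical network depth
    a_small = a_large = np.ones(2)

    W_small = 0.9 * np.eye(2)       # entries below the identity
    W_large = 1.1 * np.eye(2)       # entries above the identity

    for _ in range(L):
        a_small = W_small @ a_small
        a_large = W_large @ a_large

    print(a_small)   # ~0.9**50 ≈ 0.005 -> vanishes
    print(a_large)   # ~1.1**50 ≈ 117   -> explodes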

Thank You

CNN number of parameters correction

In the Convolutional Neural Networks course, the full network example that is followed by a table describing the output size and number of parameters of each layer is wrong here; it has been updated in the course itself in this section.

@mbadry1 if you don't mind, I can create a pull request for this, either by editing the image or by adding text with the corrections.
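
For reference, a small sketch of the counting I would use for the corrected table (the example sizes at the bottom are only illustrative, not the exact numbers from the notes):

    def conv_params(f, n_c_prev, n_c):
        """A CONV layer has n_c filters of size f x f x n_c_prev, plus one bias per filter."""
        return (f * f * n_c_prev + 1) * n_c

    def fc_params(n_in, n_out):
        """A fully connected layer has an n_in x n_out weight matrix plus one bias per output unit."""
        return (n_in + 1) * n_out

    print(conv_params(5, 3, 8))    # e.g. 8 filters of 5x5 over a 3-channel input -> 608 parameters
    print(fc_params(400, 120))     # e.g. 400 -> 120 fully connected              -> 48120 parameters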

Momentum formulation

Say in iteration t: Vdw = Beta*Vdw + (1-Beta)*dw ---> the Vdw in the first term on the right should actually be the exponentially weighted average up to iteration t-1. Since we are calculating Vdw for iteration t, we cannot use Vdw itself on the right-hand side; we need the Vdw averaged up to the (t-1)th iteration.
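
In code form, a minimal sketch of what I mean (variable names are mine):

    def momentum_step(w, dw, v_prev, beta=0.9, learning_rate=0.01):
        """One momentum update at iteration t; v_prev is V_dW accumulated up to iteration t-1."""
        v = beta * v_prev + (1 - beta) * dw   # the right-hand side uses the t-1 average, not v itself
        w = w - learning_rate * v
        return w, v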

Error in Course 2

In batch normalization and in typical normalization, the batch is divided by the standard deviation, not the variance.

Standard_deviation = sqrt(variance)
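
A one-line numpy check of the relationship:

    import numpy as np

    x = np.random.randn(1000)
    print(np.isclose(x.std(), np.sqrt(x.var())))   # True: std = sqrt(variance), not sqrt(variance**2)
    x_norm = (x - x.mean()) / x.std()              # normalization divides by the standard deviation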

Mistakes?

Regularization

The L2 Regularization version: J(w,b) = (1/m) * Sum(L(y'(i),y'(i))) + (Lmda/2m) * ||W||2
The L1 Regularization version: J(w,b) = (1/m) * Sum(L(y'(i),y'(i))) + (Lmda/2m) * (||W||)
The normal cost function that we want to minimize is: J(W1,b1...,WL,bL) = (1/m) * Sum(L(y'(i),y'(i)))
The L2 Regularization version: J(w,b) = (1/m) * Sum(L(y'(i),y'(i))) + (Lmda/2m) * Sum((||W[l]||) ^2)

The loss function should be L(y(i), y'(i)), not L(y'(i), y'(i)), right?
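
For what it's worth, here is how I read the corrected L2 cost in code (losses holds the per-example L(y(i), y'(i)) values and weights holds the W[l] matrices; the names are mine):

    import numpy as np

    def l2_regularized_cost(losses, weights, lambd):
        """J = (1/m) * sum(L(y(i), y'(i))) + (lambda / 2m) * sum over layers of ||W[l]||^2."""
        m = len(losses)
        cross_entropy_part = np.sum(losses) / m
        l2_part = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
        return cross_entropy_part + l2_part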

Limitation

Recurrent Neural Network Model
So limitation of the discussed architecture is that it learns from behind.
I think it should be:
So a limitation of the discussed architecture is that it cannot learn from behind (i.e. it cannot use information from later in the sequence).
What is your opinion?

Mistake

Thanks for your summary.
I'm now on the fourth course of deeplearning.ai.
And I found a mistake.

Convolutional Implementation of Sliding Windows

As you can see in the above image, we turned the FC layer into a Conv layer using a convolution with the width and height of the filter is the same as the with and height of the input.

It should be "width" instead of "with".

Ambiguity ?

Gradient checking implementation notes

Remember if you use the normal Regularization to add the value of the additional check to your equation
(lamda/2m)sum(W[l])

I think the meaning of this sentence is hard to understand because of the ambiguity. Which of the following readings is intended?

  1. If you use normal (L2) regularization, remember to add the (lamda/2m)sum(W[l]) term to your cost equation.
  2. Remember whether or not you used normal regularization and added the (lamda/2m)sum(W[l]) term to your equation.

Could you tell me which one is right?
Thank you very much.
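
If the first reading is the intended one, here is a toy sketch of how I understand it (my own tiny logistic-regression cost, not code from the notes): the cost that gradient checking differentiates numerically has to include the regularization term, otherwise the numerical and analytic gradients will not match.

    import numpy as np

    def cost_with_l2(W, X, Y, lambd):
        """Toy regularized cost: cross-entropy plus (lambda / 2m) * sum of squared weights."""
        m = X.shape[1]
        A = 1 / (1 + np.exp(-(W @ X)))
        cross_entropy = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
        return cross_entropy + (lambd / (2 * m)) * np.sum(np.square(W))

    def numerical_gradient(W, X, Y, lambd, eps=1e-7):
        """Two-sided finite differences of the same regularized cost."""
        grad = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            W_plus, W_minus = W.copy(), W.copy()
            W_plus[idx] += eps
            W_minus[idx] -= eps
            grad[idx] = (cost_with_l2(W_plus, X, Y, lambd) - cost_with_l2(W_minus, X, Y, lambd)) / (2 * eps)
        return grad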

mislabeled variable

In "A simple convolution network example" on the third layer, the variable padding "p2" is initialized instead of "p3".
Same goes for "Convolutional neural network example" in the second layer, filter and stride variables are "f1p" and "s1p" instead of "f2p" and "s2p".

If we are reusing the variable then its cool for me 👍

Mistake!

Train / Dev / Test sets

If size of the dataset is 100 to 1000000 ==> 60/20/20
If size of the dataset is 1000000 to INF ==> 99/1/1 or 99.5/0.25/0.25

According to this lesson, it should be 98/1/1, not 99/1/1.
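
One quick sanity check in favour of 98/1/1: the three percentages should add up to 100.

    for name, parts in {"99/1/1": (99, 1, 1), "98/1/1": (98, 1, 1)}.items():
        print(name, sum(parts), "%")   # 99/1/1 sums to 101 %, 98/1/1 sums to 100 %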

download.py cannot pull the image files

pandoc_version: 1.16.0.2
pandoc_path: /usr/bin/pandoc

Converting 05- Sequence Models
Traceback (most recent call last):
  File "./download.py", line 48, in <module>
    main()
  File "./download.py", line 42, in main
    outputfile=(key + ".pdf")
  File "/home/he/.local/lib/python3.5/site-packages/pypandoc/__init__.py", line 140, in convert_file
    outputfile=outputfile, filters=filters)
  File "/home/he/.local/lib/python3.5/site-packages/pypandoc/__init__.py", line 325, in _convert_input
    'Pandoc died with exitcode "%s" during conversion: %s' % (p.returncode, stderr)
RuntimeError: Pandoc died with exitcode "43" during conversion: b"pandoc: Could not find image `Images/01.png', skipping...\npandoc: Could not find image `Images/02.png', skipping...\npandoc: Could not find image `Images/03.png', skipping...\npandoc: Could not find image `Images/04.png', skipping...\npandoc: Could not find image `Images/05.png', skipping...\npandoc: Could not find image ................................... LaTeX Error: File `lmodern.sty' not found.\n\nType X to quit or <RETURN> to proceed,\nor enter new name. (Default extension: sty)\n\nEnter file name: \n! Emergency stop.\n<read *> \n \nl.3 \usepackage\n\npandoc: Error producing PDF\n"

Unable to understand

GloVe word vectors

For stop words like this, is, the it gives it a low weight, also for words that doesn't occur so much.
It's difficult to understand, and I can't work out what it means. I think it should be rewritten with correct grammar.
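
For context, the sentence seems to be describing the weighting function f(X_ij) from the original GloVe paper; a minimal sketch of it (x_max = 100 and alpha = 0.75 are the paper's defaults): pairs that never co-occur get zero weight, and the contribution of very frequent pairs, such as ones involving stop words like "this", "is", "the", is capped.

    def glove_weight(x_ij, x_max=100.0, alpha=0.75):
        """Weighting f(X_ij) applied to each term of the GloVe objective."""
        if x_ij == 0:
            return 0.0                              # pairs that never co-occur contribute nothing
        return min((x_ij / x_max) ** alpha, 1.0)    # very frequent pairs are capped at weight 1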
