mikeudacity / ai-term2-beta-feedback
Please use the waffleboard for this Repo to provide feedback. Thank you.
Lesson DL 27 supposedly exists but has no content, and it has the same name as DL 25.
In addition to things already pointed out (static video, low image resolution), I think it would be important to explain why and how DNNs differ from other ML algorithms, even if only in a few words (which is harder, I know). At this point the advantages of DNNs are not obvious, which can confuse students and dampen their motivation.
The video mentions links, but there are currently no links to the proof of why the trick works.
Would be nice to have more links to pages / papers/ websites / tools for students who wish to study extra material on given topics.
"sigmoid sigmoid...."
At the 30-second mark, the equation should use subscripts; x1 and x2 are confusing as written.
Pace and content is engaging and intriguing.
(To me it seems a bit on the slow side, but I have a good computer vision background, so I think it should be fine.)
The quiz phrase "should be converted to grayscale" may be better as "could be converted", since color could result in better accuracy for some tasks in this list.
QUIZ QUESTION
For each application, check the box if the image data should be converted to grayscale. Leave the box unchecked if the images should be in color.
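For context on the grayscale quiz, here is a minimal sketch of the conversion itself. This is illustrative code, not course material; the helper name is hypothetical, and it assumes the standard ITU-R BT.601 luminance weights.

```python
import numpy as np

def to_grayscale(image):
    """Convert an H x W x 3 RGB image to grayscale using the
    standard ITU-R BT.601 luminance weights (0.299, 0.587, 0.114)."""
    weights = np.array([0.299, 0.587, 0.114])
    # Weighted sum over the last (channel) axis.
    return image @ weights

# A tiny 1x2 "image": one pure-red pixel and one pure-white pixel.
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.float64)
gray = to_grayscale(img)
```

The red pixel maps to 0.299 * 255, while the white pixel keeps its full brightness of 255, since the weights sum to 1.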
Affectiva link does not work
I really liked the structure of this lesson. The pacing, level and examples were perfect for me.
The notebooks all run, but I had trouble making my environment work on my mac using the provided instructions.
The time it takes to complete the notebooks is far outside my attention span. Understanding that this is the status quo with deep learning, I wonder whether there are training tasks that could illustrate the lesson's concepts on a timescale that fits within the lesson. I finished the lessons and my convolutional notebook is only on epoch 4/20.
Quiz missing.
Static images, and a low-quality image of a child playing in the sand at 1:30.
Towards the end of the video, instructor refers to the quiz underneath. But the quiz beneath has no instructions besides 'Get started by adding a new file'.
All good, a solid explanation, good that two versions of perceptron architecture are explained.
Missing quiz: Convolutional Layers in Keras.
Great link to an intro to CNN!
Strange rustling noises in the audio.
In the DL 8 video, and others, the quiz answer video appears in the dialog box after submitting the quiz, and then the video below shows the same video.
I think the answer video on the main page can be removed since it's a duplicate. Users should only get to watch the answer video after they have submitted the quiz.
The previous term had setup instructions using conda virtual environments. Not a problem, but something that should probably be addressed.
These two videos in particular are too soft and require turning up the volume.
Looks like the slides are not ready. Lots of notes on the slide for animations.
Video suggested additional material below, but none found.
Also, the video could do a better job explaining where the 0.1 in (0.4, 0.5, 0.1) comes from, given the learning rate.
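For reference, the (0.4, 0.5, 0.1) values are consistent with one step of the perceptron trick at learning rate 0.1. A minimal sketch, assuming (my reconstruction, not confirmed from the lesson) an example line 3*x1 + 4*x2 - 10 = 0 and a misclassified point (4, 5):

```python
def perceptron_trick(weights, bias, point, learning_rate=0.1):
    """One perceptron-trick step: nudge the boundary toward a
    misclassified positive point by adding learning_rate * (x1, x2, 1)
    to (w1, w2, b). The bias "input" is the constant 1."""
    new_weights = [w + learning_rate * x for w, x in zip(weights, point)]
    new_bias = bias + learning_rate * 1
    return new_weights, new_bias

# With point (4, 5) and learning rate 0.1, the update added to
# (w1, w2, b) is (0.4, 0.5, 0.1).
w, b = perceptron_trick([3, 4], -10, [4, 5])
```

So the 0.1 is simply the learning rate times the bias input of 1, just as 0.4 and 0.5 are the learning rate times the point's coordinates.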
A couple of instances of message notifications in the background of the audio: 1:55-1:58 (the first aligns well with "Perceptron returns a yes", so not really sure if this was intended) and 3:35-3:38.
The perceptron solution video is in the pop-up modal and at the bottom of the page
I found this explanation/coverage incomplete as motivation for the use of the log likelihood.
At 1:21, multiplication is shown with asterisks. In general, there is a lack of proper mathematical notation.
Small subtitle issue around 1:38. I believe Luis says "this is what a neural network does", but the subtitles show "this is what a neural network test".
Nice to have:
Image processing and Computer Vision on GPUs.
As most of this work now runs on GPUs, it would be nice to add a section with a gentle introduction to GPU acceleration options for image processing and computer vision. Maybe introduce OpenCL / CUDA and give a simple example of how to write a simple kernel. Perhaps mention OpenCV 3.0 options for GPU acceleration.
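As an illustration of what such a section might contain (a hypothetical sketch, not course material): a tiny OpenCL-style brightness kernel shown as a source string, alongside a NumPy reference implementation. Actually launching the kernel would require a binding such as pyopencl and an OpenCL device, so only the CPU reference runs here.

```python
import numpy as np

# A simple OpenCL kernel (as a source string) that adds a constant
# brightness offset to every pixel and clamps at 255. Running it
# would require an OpenCL runtime and a binding such as pyopencl.
BRIGHTEN_KERNEL = """
__kernel void brighten(__global const uchar *src,
                       __global uchar *dst,
                       const int offset) {
    int i = get_global_id(0);
    int v = src[i] + offset;
    dst[i] = (uchar)(v > 255 ? 255 : v);
}
"""

def brighten_reference(pixels, offset):
    """CPU reference for the kernel above: add offset, clamp to [0, 255]."""
    return np.clip(pixels.astype(np.int32) + offset, 0, 255).astype(np.uint8)

out = brighten_reference(np.array([0, 100, 250], dtype=np.uint8), 10)
```

The point of such an example would be that the per-pixel body is identical in both versions; the GPU version just runs it once per work-item in parallel.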
There isn't a "no" option in the quiz question.
The first video ends with just one quiz answer showing ("o Closer"). The video with the answer to the quiz appears both in the pop-up window and below the quiz (not necessarily a big deal, but it can be confusing).
Maybe also introduce the Lab color space and apply it to iris classification / identification.
0:23-0:25 - Luis says "What does this means?" A slip of the tongue, nothing more, but better to correct it.
Would it be useful to point out the principal difference between W and b?
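To illustrate the W-versus-b distinction asked about above (a minimal sketch, not from the lesson): W sets the orientation of the decision boundary W·x + b = 0, while b shifts that boundary parallel to itself.

```python
import numpy as np

def decision(W, b, x):
    """Score of a linear model: W.x + b. W determines which input
    directions matter and how much (the boundary's orientation);
    b offsets the boundary from the origin without rotating it."""
    return float(np.dot(W, x) + b)

W = np.array([1.0, 2.0])
x = np.array([3.0, 1.0])

# Changing only b shifts every score by the same constant, moving
# the boundary W.x + b = 0 parallel to itself.
s0 = decision(W, 0.0, x)
s1 = decision(W, -5.0, x)  # x now lies exactly on the boundary
```

With these numbers, s0 is 5.0 and lowering b by 5 brings the same point's score to exactly 0, i.e. onto the boundary, without changing which direction the boundary faces.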
Overall the video is very good, if a little bit rushed.
---> If you'd like to learn more about Affectiva and emotion AI, check out their website.
There is a loud background noise from 0:03-0:07, and constant background noise throughout this video. Other instances: the audio gets stuck for a couple of seconds from 2:28-2:31, and there are a couple of message notifications at 3:03 and 3:11.
I was not a fan of this lesson. I found it to be at too high a level to be interesting. Nothing was confusing, and the reasoning by analogy is likely to be helpful for some, but I would have liked a more focused and technical approach. The tone of the convolution lesson was much more to my liking.
Both videos appear to be same in the lesson.
The transition between two videos has a big difference in audio quality.
Very good point: the fact that the linear separation is not perfect and makes a few mistakes in close cases! I wonder whether this video, or at least this point, should be featured in a separate video rather than after the quiz; the question is easy to answer, and I'm worried not all students will watch the post-quiz explanation. Also, the way I would solve the quiz is by triangulating to the relevant point on the plane and looking at what colors are around me; I would not necessarily draw a line...
Perhaps show the image of grayscale birds so we can see how hard it is to distinguish them.
Two overfitting videos; the second is missing content.
Maybe state that pixels are typically 1 byte per channel, but that higher-quality content uses 10 bits per channel. Maybe add an aside on HDR content, and explain other color spaces and how they differ?
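A small illustration of the bit-depth point above (illustrative values, not course content): 8-bit channels give 256 levels, 10-bit channels give 1024, and since NumPy has no 10-bit dtype, 10-bit samples are usually carried in uint16 containers.

```python
import numpy as np

# Number of representable levels per channel.
levels_8bit = 2 ** 8    # 256 levels: the common 1-byte-per-channel case
levels_10bit = 2 ** 10  # 1024 levels: typical of HDR pipelines

# There is no uint10 dtype, so 10-bit samples are normally stored
# in uint16 arrays with values constrained to [0, 1023].
sample_10bit = np.array([0, 512, 1023], dtype=np.uint16)
```

The 4x jump in levels is why banding that is visible in 8-bit gradients often disappears in 10-bit content.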
A video is expected here, but none is shown.
Very good, no remarks at all! :)
I followed the instructions and cannot load Keras. I have Jupyter installed globally, and when I run in the virtual environment it cannot find Keras.