Machine Learning and Deep Learning are growing at a rapid pace. According to MIT, in the near future roughly 8.5 out of every 10 sectors will rely on AI in some form.
Andrew Y. Ng is a prestigious name in the field of machine learning. He is a co-founder of Coursera, an associate professor in Stanford University’s Computer Science and Electrical Engineering departments, a former Chief Scientist at Baidu, and a former head of Google Brain.
Machine Learning Yearning is Andrew Ng’s latest work, written to teach students how to structure machine learning projects. You can download a free draft copy from the link at the bottom of the page.
Machine Learning Yearning provides guidelines to machine learning practitioners (developers and managers) to help them make decisions related to design, data collection, debugging, etc.
It introduces only the necessary background and presents best practices and common pitfalls through case studies.
Machine Learning Yearning broadly focuses on how to structure machine learning projects and how to make ML algorithms work in a more efficient, less time-consuming manner.
As Andrew Ng puts it in the book, many AI classes will give you a hammer; Machine Learning Yearning teaches you how to use that hammer.
The major concepts Andrew Ng covers in Machine Learning Yearning include how to prioritize the most promising directions for an AI project, how to diagnose errors in a machine learning system, and how to build ML in complex settings, such as mismatched training/test sets.
Machine Learning Yearning also covers how to set up an ML project to compare to and/or surpass human-level performance, and when and how to apply end-to-end learning, transfer learning, and multi-task learning.
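One of the book’s recurring ideas is comparing a system’s errors against human-level (near-optimal) performance to decide what to fix next. The sketch below illustrates that style of diagnosis; the function name, thresholds, and suggested remedies are illustrative assumptions, not code from the book.

```python
# Hedged sketch of bias/variance diagnosis in the spirit of
# Machine Learning Yearning. All names and values here are
# illustrative assumptions, not material from the book itself.

def diagnose(train_error, dev_error, optimal_error=0.0):
    """Classify the dominant source of error in an ML system.

    avoidable bias = training error minus the optimal
                     (e.g. human-level) error rate
    variance       = dev-set error minus training error
    """
    avoidable_bias = train_error - optimal_error
    variance = dev_error - train_error
    if avoidable_bias > variance:
        return "high bias: try a bigger model or longer training"
    return "high variance: try more data or regularization"

# Training error 15% vs. human-level 2%: bias dominates.
print(diagnose(train_error=0.15, dev_error=0.16, optimal_error=0.02))
# Training error 5% but dev error 20%: variance dominates.
print(diagnose(train_error=0.05, dev_error=0.20))
```

The point of the comparison to optimal error is that a 15% training error is not automatically “high bias”; it only is when humans (or some other proxy for the optimal error rate) do substantially better.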
A specialty of Andrew Ng’s books is that they always appear simple, so anyone can quickly understand them; Machine Learning Yearning follows the same style.
In my opinion, Machine Learning Yearning is a beautiful distillation of what Andrew Ng has learned over his entire career.
Machine Learning Yearning is not a book wrapped in heavy machine learning mathematics. It is not aimed at someone interested in machine learning research who is looking for a mathematically rigorous introduction to the field; you won’t find heavy math or deep theory in it.
It is a book that fits anyone with a basic understanding of machine learning and what it actually does. Good knowledge of the programming language you will use for ML will also help you absorb Machine Learning Yearning quickly.
Machine Learning Yearning is also very helpful for data scientists to understand how to set technical directions for a machine learning project.
More about the author, Andrew Ng:
Andrew Ng was born in London in the UK in 1976. His parents were both from Hong Kong. He spent time in Hong Kong and Singapore and later graduated from Raffles Institution in Singapore in 1992.
In 1997, he received his undergraduate degree with a triple major in computer science, statistics, and economics at the top of his class from Carnegie Mellon University in Pittsburgh, Pennsylvania.
Andrew Ng earned his master’s degree from the Massachusetts Institute of Technology in Cambridge, Massachusetts in 1998 and received his Ph.D. from the University of California, Berkeley in 2002.
He started working at Stanford University in 2002 and currently lives in Los Altos Hills, California. MIT Technology Review named Ng and his wife, roboticist Carol E. Reiley, an AI power couple.
Ng researches primarily in machine learning and deep learning and is one of the world’s most famous computer scientists. His early work includes the Stanford Autonomous Helicopter project, which developed one of the most capable autonomous helicopters in the world, and the STAIR (Stanford Artificial Intelligence Robot) project, which resulted in ROS, a widely used open-source robotics software platform.
In 2011, Ng founded the Google Brain project at Google, which developed large-scale artificial neural networks using Google’s distributed computing infrastructure.
Among its notable results was a neural network trained with deep learning algorithms on 16,000 CPU cores that learned to recognize cats after watching only YouTube videos, without ever having been told what a “cat” is.
The project’s technology is also currently used in the Android Operating System’s speech recognition system.
Table of Contents:
2 How to use this book to help your team
3 Prerequisites and Notation
4 Scale drives machine learning progress
5 Your development and test sets
6 Your dev and test sets should come from the same distribution
7 How large do the dev/test sets need to be?
8 Establish a single-number evaluation metric for your team to optimize
9 Optimizing and satisficing metrics
10 Having a dev set and metric speeds up iterations
11 When to change dev/test sets and metrics
12 Takeaways: Setting up development and test sets
13 Build your first system quickly, then iterate
14 Error analysis: Look at dev set examples to evaluate ideas
15 Evaluating multiple ideas in parallel during error analysis
16 Cleaning up mislabeled dev and test set examples
17 If you have a large dev set, split it into two subsets, only one of which you look at
18 How big should the Eyeball and Blackbox dev sets be?
19 Takeaways: Basic error analysis
20 Bias and Variance: The two big sources of error
21 Examples of Bias and Variance
22 Comparing to the optimal error rate
23 Addressing Bias and Variance
24 Bias vs. Variance tradeoff
25 Techniques for reducing avoidable bias
26 Error analysis on the training set
27 Techniques for reducing variance
28 Diagnosing bias and variance: Learning curves
29 Plotting training error
30 Interpreting learning curves: High Bias
31 Interpreting learning curves: Other cases
32 Plotting learning curves
33 Why we compare to the human-level performance
34 How to define human-level performance
35 Surpassing human-level performance
36 When you should train and test on different distributions
37 How to decide whether to use all your data
40 Generalizing from the training set to the dev set
41 Identifying Bias, Variance, and Data Mismatch Errors
42 Addressing data mismatch
43 Artificial data synthesis
44 The Optimization Verification test
45 The general form of Optimization Verification test
46 Reinforcement learning example
47 The rise of end-to-end learning
48 More end-to-end learning examples
49 Pros and cons of end-to-end learning
50 Choosing pipeline components: Data availability
51 Choosing pipeline components: Task simplicity