- By Andrew Ng, offered by DeepLearning.AI
COURSE 1 : SUPERVISED ML REGRESSION & CLASSIFICATION
- Week 01 : Intro to ML
- Overview of ML
- Learning Objective
- Applications of ML
- Supervised vs Unsupervised ML
- What is ML
- Supervised Learning
- Supervised Learning Types
- Unsupervised Learning
- Regression Model
- Linear Regression with one variable
- Lab : Model Representation
- Cost Function
- Cost Function Intuition
- Visualizing the cost function
- Lab : Cost Function
- Training the model with GD
- Gradient Descent
- Implementing Gradient Descent
- Gradient Descent Intuition
- Learning Rate
- Gradient Descent for Linear Regression
- Running Gradient Descent
- Lab : Optimizing w and b with Gradient Descent
- Linear Regression from Scratch (see the sketch below)
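A minimal NumPy sketch of this week's gradient-descent loop for one-variable linear regression. The squared-error cost and update rules follow the lectures; the data, learning rate, and iteration count are made up for illustration.

```python
import numpy as np

def gradient_descent(x, y, alpha=0.05, iters=5000):
    """Fit f(x) = w*x + b by batch gradient descent on the squared-error cost."""
    w, b = 0.0, 0.0
    m = len(x)
    for _ in range(iters):
        err = (w * x + b) - y      # prediction error on all m examples
        dj_dw = (err @ x) / m      # dJ/dw = (1/m) * sum(err * x)
        dj_db = err.sum() / m      # dJ/db = (1/m) * sum(err)
        w -= alpha * dj_dw         # simultaneous update of both parameters
        b -= alpha * dj_db
    return w, b

# Toy data generated by y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])
print(gradient_descent(x, y))      # approximately (2.0, 1.0)
```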
- Week 02 : Regression with Multiple input variables
- Multiple Linear Regression
- Multiple Features
- Vectorization
- Lab : Python, NumPy and Vectorization
- Gradient Descent for Multiple Linear Regression
- Lab : Multiple Linear Regression
- Gradient Descent with Multiple Variables
- Gradient Descent in Practice
- Feature Scaling
- Mean Normalization
- Z-score Normalization
- Checking Gradient Descent for Convergence
- Choosing the Learning Rate
- Lab : Feature Scaling and Learning Rate (Multi-Variable)
- Polynomial Regression
- Lab : Feature Engineering & Polynomial Regression
- Selecting Features
- Scaling Features
- Linear Regression using scikit-learn (see the sketch below)
- Practice Lab : Linear Regression
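A short sketch of this week's scikit-learn workflow: z-score normalization followed by a linear fit. The feature matrix is illustrative, and `LinearRegression` (a closed-form solver) stands in for the gradient-descent-based regressors the labs use, just to keep the sketch compact.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Illustrative data: house size (sqft) and bedrooms -> price (in $1000s)
X = np.array([[2104, 5], [1416, 3], [1534, 3], [852, 2]], dtype=float)
y = np.array([460.0, 232.0, 315.0, 178.0])

scaler = StandardScaler()             # z-score normalization: (x - mu) / sigma
X_norm = scaler.fit_transform(X)

model = LinearRegression()
model.fit(X_norm, y)
print(model.coef_, model.intercept_)  # learned weights w and bias b

# New inputs must go through the same scaler before prediction
print(model.predict(scaler.transform([[1200.0, 3.0]])))
```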
- Week 03 : Classification
- Classification with Logistic Regression
- Classification
- Lab : Classification
- Linear Regression Approach
- Logistic Regression
- Lab : Logistic Regression
- Sigmoid or Logistic Function
- Decision Boundary
- Lab : Logistic Regression, Decision Boundary
- Cost Function for Logistic Regression
- Logistic Loss Function
- Lab : Logistic Regression, Logistic Loss
- Simplified Cost Function for Logistic Regression
- Lab : Cost Function for Logistic Regression
- Gradient Descent for Logistic Regression
- Lab : Gradient Descent for Logistic Regression
- Logistic Gradient Descent
- Lab : Logistic Regression Using scikit-Learn
- The Problem of Overfitting
- The Problem of Overfitting
- Regularization to Reduce Overfitting
- Lab : Overfitting
- Cost Function with Regularization
- Regularized Linear Regression
- Regularized Logistic Regression (see the sketch below)
- Lab : Regularized Cost and Gradient
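A sketch of the sigmoid function and the regularized logistic cost from this week, assuming the (lambda/2m)·Σw² penalty from the lectures; the toy data and parameter values are made up.

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def regularized_cost(X, y, w, b, lambda_=1.0):
    """Mean logistic loss plus the L2 penalty (lambda/(2m)) * sum(w_j^2).

    The bias b is not regularized, matching the lectures.
    """
    m = X.shape[0]
    f = sigmoid(X @ w + b)                                    # outputs in (0, 1)
    loss = -(y * np.log(f) + (1 - y) * np.log(1 - f)).mean()  # logistic loss
    reg = (lambda_ / (2 * m)) * np.sum(w ** 2)                # regularization term
    return loss + reg

X = np.array([[0.5, 1.5], [1.0, 1.0], [1.5, 0.5], [3.0, 0.5]])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(regularized_cost(X, y, w=np.array([0.5, -0.5]), b=0.0))
```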
COURSE 2 : ADVANCED LEARNING ALGORITHMS
- Week 01 : Neural Network
- Neural Network Intuition
- Neural Networks
- Demand Prediction
- Example : Recognizing images
- Neural Network Model
- Neural Network Layer
- More Complex Neural Networks
- Forward Propagation
- Handwritten digit recognition
- Lab : Neurons and Layers
- TensorFlow Implementation
- Inference in Code
- Build the model using TensorFlow
- Model for Digit Classification
- Data in TensorFlow
- Feature vectors
- Activation vector
- Building a neural network architecture
- Lab : Coffee Roasting in TensorFlow
- Neural Network Implementation in Python (see the sketch at the end of this week)
- Forward Propagation in a single layer
- Lab : Coffee Roasting in NumPy
- Speculations on Artificial General Intelligence (AGI)
- Artificial Narrow Intelligence (ANI)
- Artificial General Intelligence (AGI)
- Vectorization
- How neural networks are implemented efficiently
- Matrix Multiplication
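A NumPy sketch of forward propagation through dense layers, as referenced above. The (inputs, units) weight shape and the `dense()` helper follow the coffee-roasting labs' convention, but the weights here are random placeholders, not trained values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense(a_in, W, b):
    """One dense layer of forward propagation: a_out = g(a_in @ W + b).

    a_in : (m, n) activations from the previous layer
    W    : (n, units) one column of weights per neuron
    b    : (units,) one bias per neuron
    """
    return sigmoid(a_in @ W + b)

rng = np.random.default_rng(0)
x = np.array([[0.5, 1.2]])                    # one example, two features
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
a1 = dense(x, W1, b1)                         # hidden layer, 3 units
a2 = dense(a1, W2, b2)                        # output layer, 1 unit
print(a2)                                     # predicted probability
```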
- Week 02 : Neural Network Training
- Neural Network Training
- TensorFlow implementation
- Training Details
- Activation Function
- Alternatives to the Sigmoid activation
- Choosing Activation Functions
- Why do we need activation functions
- Lab : ReLu activation
- Multiclass Classification
- Multiclass
- Softmax
- Neural network with softmax output
- Improved implementation of softmax (see the sketch at the end of this week)
- Classification with multiple outputs
- Lab : Softmax Function
- Lab : Multiclass
- Additional Neural Network Concepts
- Advanced Optimization
- Additional Layer Types
- Convolutional layer
- Convolutional Neural Network
- Back Propagation
- What is a derivative
- Computation Graph
- Larger Neural Network
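A sketch of the improved softmax setup referenced above: the output layer stays linear and `from_logits=True` lets TensorFlow fold softmax into the loss for numerical stability. Layer sizes (25/15/10) are illustrative, not prescribed.

```python
import tensorflow as tf

# Output layer emits logits; from_logits=True folds softmax into the loss
# computation, which is numerically more stable than a softmax activation
# in the final layer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(25, activation="relu"),
    tf.keras.layers.Dense(15, activation="relu"),
    tf.keras.layers.Dense(10, activation="linear"),  # 10-class logits
])
model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
)
# After model.fit(...), convert logits to probabilities explicitly:
# probs = tf.nn.softmax(model(X_batch))
```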
- Week 03 : Advice for Applying ML
- Advice for Applying Machine Learning
- Deciding what to try next
- Evaluating a model
- Model Selection and Training/Cross validation/test sets
- Lab : Model Evaluation and Selection
- Bias and Variance
- Diagnosing bias and variance
- Regularization and Bias/Variance
- Establishing a baseline level of performance
- Learning Curves
- Deciding what to try next revisited
- Bias/Variance and Neural Networks
- Lab : Diagnosing Bias and Variance
- Machine Learning Development Process
- Iterative Loop of ML development
- Error Analysis
- Adding Data
- Transfer Learning
- Full cycle of a machine learning project
- Skewed Datasets
- Error metrics for skewed datasets (see the sketch at the end of this week)
- Trading off precision and recall
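A small sketch of the error metrics for skewed datasets referenced above: precision, recall, and F1 computed from true/false positive and negative counts. The labels are made up for illustration.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 from true/false positive and negative counts."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged, how many are real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real, how many were caught
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return precision, recall, f1

y_true = np.array([1, 0, 0, 1, 0, 0, 0, 1])
y_pred = np.array([1, 0, 0, 0, 0, 1, 0, 1])
print(precision_recall_f1(y_true, y_pred))           # (0.667, 0.667, 0.667)
```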
- Week 04 : Decision Trees
- Decision Trees
- Decision Tree Model
- Learning Process
- Decision Tree Learning
- Entropy as a measure of impurity
- Choosing a split : Information Gain (see the sketch at the end of this week)
- Decision Tree Learning
- Using one-hot encoding of categorical features
- Continuous Valued Features
- Regression Trees
- Lab : Decision Trees
- Tree Ensembles
- Using Multiple Decision Trees
- Tree ensemble
- Sampling with replacement
- Random Forest Algorithm
- XGBoost
- When to use decision trees
- Decision Trees vs Neural Networks
- Lab : Tree Ensembles - Heart Failure Prediction
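A sketch of the entropy and information-gain computations referenced above, for binary labels; the node labels and split indices are made up to illustrate one candidate split.

```python
import numpy as np

def entropy(p):
    """H(p) for a binary node; defined as 0 when p is 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def information_gain(y, left_idx, right_idx):
    """Entropy at the node minus the weighted entropy of the two branches."""
    w_left = len(left_idx) / len(y)
    w_right = len(right_idx) / len(y)
    children = (w_left * entropy(y[left_idx].mean())
                + w_right * entropy(y[right_idx].mean()))
    return entropy(y.mean()) - children

y = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # node labels
left = np.array([0, 1, 2, 3])             # indices sent down the left branch
right = np.array([4, 5, 6, 7, 8, 9])      # indices sent down the right branch
print(information_gain(y, left, right))   # ~0.26 for this split
```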
COURSE 3 : UNSUPERVISED LEARNING, RECOMMENDERS, REINFORCEMENT LEARNING
- Week 01 : Unsupervised Learning
- Welcome
- Clustering
- What is clustering
- Applications of Clustering
- K-means intuition
- K-means Algorithm (see the sketch at the end of this week)
- Optimization Objective
- Initializing K-means
- Choosing the number of clusters
- Anomaly Detection
- Finding unusual events
- Anomaly Detection
- Density estimation
- Gaussian (Normal) Distribution
- Anomaly Detection Algorithm
- Developing and evaluating an anomaly detection system
- Anomaly Detection vs Supervised Learning
- Choosing what features to use
- Practice Lab : K-means Clustering
- Image Compression
- Practice Lab : Anomaly Detection in server computers
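A plain NumPy sketch of the K-means loop referenced above, alternating the assignment and centroid-update steps, with centroids initialized from randomly chosen data points as in the lectures. The data and K are illustrative.

```python
import numpy as np

def kmeans(X, K, iters=10, seed=0):
    """Plain K-means: alternate the assignment step and the centroid-update step."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=K, replace=False)]  # init from data points
    for _ in range(iters):
        # Assignment: index of the closest centroid for every example
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update: move each centroid to the mean of the points assigned to it
        for k in range(K):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5.0])
centroids, labels = kmeans(X, K=2)
print(centroids)   # one centroid near (0, 0), the other near (5, 5)
```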
- Week 02 : Recommender Systems
- Collaborative Filtering
- Making Recommendations
- Using per-item features
- Collaborative Filtering Algorithm
- Recommender Systems Implementation
- Mean Normalization
- TensorFlow implementation of collaborative filtering
- Finding Related items
- Content Based Filtering
- Collaborative Filtering vs Content-based Filtering
- Deep learning for content-based filtering
- Recommending from a large catalogue
- TensorFlow implementation of content-based filtering
- Principal Component Analysis
- Reducing the number of features
- PCA algorithm
- PCA in code (see the sketch at the end of this week)
- Applications of PCA
- Lab : PCA and data visualization
- Practice Lab : Collaborative Filtering Recommender Systems
- Recommender system for movies
- Practice Lab : Deep Learning for Content-based Filtering
- Recommender for movies
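A sketch of "PCA in code" using scikit-learn's `PCA`; the synthetic 3-D data is made up so that most of its variance lies along a single direction.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic 3-D data whose variance lies mostly along one direction
rng = np.random.default_rng(0)
X = (rng.normal(size=(100, 1)) @ np.array([[1.0, 2.0, 3.0]])
     + 0.05 * rng.normal(size=(100, 3)))

pca = PCA(n_components=2)               # keep the top two principal components
Z = pca.fit_transform(X)                # project onto the new axes
print(pca.explained_variance_ratio_)    # variance captured by each component
X_approx = pca.inverse_transform(Z)     # approximate reconstruction in 3-D
```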
- Week 03 : Reinforcement Learning
- Reinforcement Learning Introduction
- What is Reinforcement Learning
- The return in Reinforcement Learning
- Making decisions : Policies of reinforcement learning
- The goal of reinforcement learning
- Markov Decision Process (MDP)
- State-action value function
- State-action value function definition (Q function)
- Lab : State-action value function
- Mars rover example
- Bellman Equation (see the sketch at the end of this week)
- Random (stochastic) environment
- Continuous state space
- Examples of continuous state space applications
- Lunar lander
- Learning the state-value function
- Algorithm refinement : Improved neural network architecture
- Algorithm refinement : ε-greedy policy
- Mini-batching and soft-updates
- The state of reinforcement learning
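A sketch of the Bellman equation referenced above, iterated to a fixed point on the six-state Mars rover example from the lectures (terminal rewards of 100 and 40, discount factor 0.5). The iteration count is arbitrary but more than enough to converge.

```python
import numpy as np

# Six-state Mars rover example: terminal rewards of 100 (leftmost state)
# and 40 (rightmost state), zero elsewhere, gamma = 0.5.
rewards = np.array([100.0, 0.0, 0.0, 0.0, 0.0, 40.0])
terminal = np.array([True, False, False, False, False, True])
gamma = 0.5

Q = np.zeros((len(rewards), 2))        # Q[s, a]: action 0 = left, 1 = right
for _ in range(100):                   # iterate the Bellman equation to a fixed point
    for s in range(len(rewards)):
        if terminal[s]:
            Q[s, :] = rewards[s]       # the return at a terminal state is its reward
            continue
        for a, s_next in ((0, s - 1), (1, s + 1)):
            # Q(s, a) = R(s) + gamma * max over a' of Q(s', a')
            Q[s, a] = rewards[s] + gamma * Q[s_next].max()

print(Q)                               # e.g. Q[1] = [50, 12.5], Q[4] = [6.25, 20]
print(Q.argmax(axis=1))                # greedy policy: best action in each state
```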