
ssmaheswar2001/Machine_Learning_Specialzation_Coursea


MACHINE LEARNING SPECIALIZATION

  • By Andrew Ng, offered by DeepLearning.AI

  • Week 01 : Intro to ML
    1. Overview of ML
      • Learning Objective
      • Applications of ML
    2. Supervised vs Unsupervised ML
      • What is ML
      • Supervised Learning
      • Supervised Learning Types
      • Unsupervised Learning
    3. Regression Model
      • Linear Regression with one variable
      • Lab : Model Representation
      • Cost Function
      • Cost Function Intuition
      • Visualizing the cost function
      • Lab : Cost Function
    4. Training the model with GD
      • Gradient Descent
      • Implementing Gradient Descent
      • Gradient Descent Intuition
      • Learning Rate
      • Gradient Descent for Linear Regression
      • Running Gradient Descent
      • Lab : Optimizing w and b with gradient descent
    5. Linear Regression from Scratch
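A minimal sketch of what the Week 01 material covers, in NumPy. The data and hyperparameters below are illustrative, not the lab's own:

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Mean squared error cost J(w,b) = (1/2m) * sum((w*x + b - y)^2)."""
    m = x.shape[0]
    return np.sum((w * x + b - y) ** 2) / (2 * m)

def gradient_descent(x, y, alpha=0.01, iters=1000):
    """Fit w, b for one-variable linear regression by batch gradient descent."""
    w, b = 0.0, 0.0
    m = x.shape[0]
    for _ in range(iters):
        err = w * x + b - y          # prediction error for each example
        w -= alpha * np.dot(err, x) / m   # dJ/dw step
        b -= alpha * np.sum(err) / m      # dJ/db step
    return w, b

# Toy data generated from y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])
w, b = gradient_descent(x, y, alpha=0.1, iters=2000)
```

On this noiseless toy data, gradient descent recovers w ≈ 2 and b ≈ 1.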

  • Week 02 : Regression with Multiple Input Variables
    1. Multiple Linear Regression
      • Multiple Features
      • Vectorization
      • Lab : Python, Numpy and Vectorization
      • Gradient Descent for Multiple Linear Regression
      • Lab : Multiple Linear Regression
      • Gradient Descent with Multiple Variables
    2. Gradient Descent in Practice
      • Feature Scaling
      • Mean Normalization
      • Z-score Normalization
      • Checking Gradient descent for convergence
      • Choosing the Learning Rate
      • Lab : Feature Scaling and Learning Rate (Multi-Variable)
      • Polynomial Regression
      • Lab : Feature Engineering & Polynomial Regression
      • Selecting Features
      • Scaling Features
      • Linear Regression using scikit-learn
    3. Practice Lab Linear Regression
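The two main ideas of Week 02, vectorization and z-score normalization, can be sketched together as follows (the example data is made up for illustration):

```python
import numpy as np

def zscore_normalize(X):
    """Z-score normalization: rescale each feature to mean 0, std 1."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

def gradient_descent_multi(X, y, alpha=0.1, iters=1000):
    """Vectorized batch gradient descent for multiple linear regression."""
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(iters):
        err = X @ w + b - y               # residuals for all m examples at once
        w -= alpha * (X.T @ err) / m      # all n partial derivatives in one product
        b -= alpha * err.sum() / m
    return w, b

# Made-up data from y = 3*x1 - 2*x2 + 5, with features on very different scales
X = np.array([[100.0, 1.0], [200.0, 2.0], [150.0, 3.0], [120.0, 4.0]])
y = 3 * X[:, 0] - 2 * X[:, 1] + 5
Xn, mu, sigma = zscore_normalize(X)       # scaling lets a large alpha converge
w, b = gradient_descent_multi(Xn, y, alpha=0.1, iters=5000)
pred = Xn @ w + b
```

Without the normalization step, the same learning rate would diverge because the first feature's scale dominates the gradient.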

  • Week 03 : Classification
    1. Classification with Logistic Regression
      • Classification
      • Lab : Classification
      • Linear Regression Approach
      • Logistic Regression
      • Lab : Logistic Regression
      • Sigmoid or Logistic Function
      • Decision Boundary
      • Lab : Logistic Regression, Decision Boundary
    2. Cost Function for Logistic Regression
      • Logistic Loss Function
      • Lab : Logistic Regression, Logistic Loss
      • Simplified Cost Function for Logistic Regression
      • Lab : Cost Function for Logistic Regression
    3. Gradient Descent for Logistic Regression
      • Lab : Gradient Descent for Logistic Regression
      • Logistic Gradient Descent
      • Lab : Logistic Regression Using scikit-learn
    4. The Problem of Overfitting
      • The Problem of Overfitting
      • Regularization to Reduce Overfitting
      • Lab : Overfitting
      • Cost Function with Regularization
      • Regularized Linear Regression
      • Regularized Logistic Regression
      • Lab : Regularized Cost and Gradient
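The sigmoid, the logistic loss, and the regularized cost from this week fit in a few lines of NumPy. A sketch with made-up example data:

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def regularized_cost(X, y, w, b, lam):
    """Logistic cost with L2 regularization on w (b is not regularized)."""
    m = X.shape[0]
    f = sigmoid(X @ w + b)                               # predicted probabilities
    loss = -np.mean(y * np.log(f) + (1 - y) * np.log(1 - f))
    return loss + (lam / (2 * m)) * np.sum(w ** 2)

# Sanity check on made-up data: at w = 0, b = 0 every prediction is 0.5,
# so the cost is -log(0.5) = log(2) regardless of the labels, and the
# regularization term is zero.
X = np.array([[0.5, 1.5], [1.0, 1.0], [1.5, 0.5], [3.0, 0.5]])
y = np.array([0.0, 0.0, 1.0, 1.0])
cost0 = regularized_cost(X, y, np.zeros(2), 0.0, lam=1.0)
```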

  • Week 03 : Advice for Applying ML
    1. Advice for Applying Machine Learning
      • Deciding what to try next
      • Evaluating a model
      • Model Selection and Training/Cross validation/test sets
      • Lab : Model Evaluation and Selection
    2. Bias and Variance
      • Diagnosing bias and variance
      • Regularization and Bias/Variance
      • Establishing a baseline level of performance
      • Learning Curves
      • Deciding what to try next revisited
      • Bias/Variance and Neural Networks
      • Lab : Diagnosing Bias and Variance
    3. Machine Learning Development Process
      • Iterative Loop of ML development
      • Error Analysis
      • Adding Data
      • Transfer Learning
      • Full cycle of a machine learning project
    4. Skewed Datasets
      • Error metrics for skewed datasets
      • Trading off precision and recall
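The error metrics for skewed datasets reduce to counting the confusion-matrix cells. A small sketch with made-up labels, where 80% of examples are negative:

```python
def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN), plus their F1 combination."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Skewed example: 8 negatives, 2 positives. A classifier that always
# predicted 0 would be 80% accurate yet have recall 0 — which is why
# accuracy alone is misleading here.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
p, r, f1 = precision_recall(y_true, y_pred)
```

Raising the decision threshold typically trades recall for precision; F1 summarizes the trade-off in one number.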

  • Week 04 : Decision Trees
    1. Decision Trees
      • Decision Trees
      • Learning Process
    2. Decision Tree Learning
      • Entropy as a measure of impurity
      • Choosing a split : Information Gain
      • Decision Tree Learning
      • Using one-hot encoding of categorical features
      • Continuous Valued Features
      • Regression Trees
      • Lab : Decision Trees
    3. Tree Ensembles
      • Using Multiple Decision Trees
      • Tree ensemble
      • Sampling with replacement
      • Random Forest Algorithm
      • XGBoost
      • When to use decision trees
      • Decision Trees vs Neural Networks
      • Lab : Tree Ensembles - Heart Failure Prediction
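The split-selection idea of this week, entropy as impurity and information gain as the split criterion, can be sketched directly (the labels below are illustrative):

```python
import math

def entropy(p):
    """Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p); 0 when a node is pure."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def information_gain(parent, left, right):
    """Reduction in weighted entropy from splitting `parent` labels into left/right."""
    def frac_pos(labels):
        return sum(labels) / len(labels)
    w_left = len(left) / len(parent)
    w_right = len(right) / len(parent)
    children = w_left * entropy(frac_pos(left)) + w_right * entropy(frac_pos(right))
    return entropy(frac_pos(parent)) - children

# Example: 10 labels, half positive, and a split that separates them perfectly.
parent = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
gain = information_gain(parent, [1, 1, 1, 1, 1], [0, 0, 0, 0, 0])
```

A perfect split of a 50/50 node removes all impurity, so its information gain equals the parent's entropy of 1 bit; the learning algorithm greedily picks the feature whose split maximizes this gain.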

  • Week 03 : Reinforcement Learning
    1. Reinforcement Learning Introduction
      • What is Reinforcement Learning
      • The return in Reinforcement Learning
      • Making decisions : Policies in reinforcement learning
      • The goal of reinforcement learning
      • Markov Decision Process (MDP)
    2. State-action value function
      • State-action value function definition (Q function)
      • Lab : State action value function
        • mars rover example
      • Bellman Equation
      • Random (stochastic) environment
    3. Continuous state space
      • Examples of continuous state space applications
      • Lunar lander
      • Learning the state-value function
      • Algorithm refinement : Improved neural network architecture
      • Algorithm refinement : ε-greedy policy
      • Mini-batching and soft-updates
      • The state of reinforcement learning
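The Q function and Bellman equation from this week can be sketched on a mars-rover-style line of states. The rewards and discount factor below are assumptions for illustration, not necessarily the course's exact numbers:

```python
# Six states in a row; terminal rewards at the two ends, 0 in between.
rewards = [100, 0, 0, 0, 0, 40]
gamma = 0.5                      # discount factor
n = len(rewards)

# Q[s][a] with a = 0 (move left) or a = 1 (move right).
# Bellman equation: Q(s,a) = R(s) + gamma * max_a' Q(s',a'),
# solved here by repeated fixed-point (value-iteration) updates.
Q = [[0.0, 0.0] for _ in range(n)]
for _ in range(50):
    newQ = [row[:] for row in Q]
    for s in range(n):
        if s in (0, n - 1):
            newQ[s] = [rewards[s], rewards[s]]   # terminal: return is just R(s)
            continue
        for a, s2 in ((0, s - 1), (1, s + 1)):
            newQ[s][a] = rewards[s] + gamma * max(Q[s2])
    Q = newQ
```

At convergence, the optimal policy is to head left (toward the 100 reward) from the first few non-terminal states and right only when the 40 reward is close enough to outweigh the discounting.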

About

ML certification by Andrew Ng
