80 Days of AI Mastery: Learn Deep Learning Day by Day

Machine Learning (ML) Overview – Day 1

Day 1

Integrating ML into iOS Apps – Day 2

Day 2

Model-Based Learning, Instance-Based Learning, and Train-Test Splits: The Building Blocks of Machine Learning Explained – Day 3

Day 3

Regression & Classification with MNIST – Day 4

Day 4

Mathematical Explanation Behind the SGD Algorithm in Machine Learning – Day 5

Day 5

Can We Make Predictions Without Iterating? Yes, with the Normal Equation – Day 6

Day 6

What Is Gradient Descent in Machine Learning? – Day 7

Day 7

3 Types of Gradient Descent: Batch, Stochastic & Mini-Batch – Day 8

Day 8

Deep Learning: Perceptrons – Day 9

Day 9

Regression vs Classification with Multilayer Perceptrons (MLPs) – Day 10

Day 10

Activation Functions – Day 11

Day 11

Activation Functions, Hidden Layers, and Non-Linearity – Day 12

Day 12

What Is Keras? – Day 13

Day 13

Sequential, Functional, and Model Subclassing APIs in Keras – Day 14

Day 14

Sequential vs Functional Keras API, Part 2 – Day 15

Day 15

TensorFlow: Using TensorBoard, Callbacks, and Model Saving in Keras – Day 16

Day 16

Hyperparameter Tuning with Keras Tuner – Day 17

Day 17

Automatic vs Manual Optimization in Keras – Day 18

Day 18

Mastering Hyperparameter Tuning & Neural Network Architectures: Exploring Bayesian Optimization – Day 19

Day 19

The Vanishing Gradient Problem Explained in Detail – Day 20

Day 20

Weight Initialization in Deep Learning, Well Explained – Day 21

Day 21

How to Create a Deep Learning API to Earn Money, and the Best Way for Mac Users – Day 22

Day 22

Weight Initialization, Part 2 – Day 23

Day 23

Progress of Activation Functions in Deep Learning: ReLU, ELU, SELU, GELU, Mish, etc. (with Tables and Graphs) – Day 24

Day 24

Batch Normalization – Day 25

Day 25

Batch Normalization, Part 2 – Day 26

Day 26

Batch Normalization: Trainable and Non-Trainable Parameters – Day 27

Day 27

Understanding Gradient Clipping in Deep Learning – Day 28

Day 28

Transfer Learning – Day 29

Day 29

How to Do Transfer Learning in a Deep Learning Model, with an Example – Day 30

Day 30

Fundamentals of labeled vs unlabeled data in Machine Learning – Day 31

Day 31

Mastering Deep Neural Network Optimization: Techniques and Algorithms for Faster Training – Day 32

Day 32

Momentum Optimization in Machine Learning: A Detailed Mathematical Analysis and Practical Application – Day 33

Day 33

Momentum vs Normalization in Deep Learning, Part 2 – Day 34

Day 34

Momentum, Part 3 – Day 35

Day 35

NAG as an Optimizer in Deep Learning – Day 36

Day 36

A Comprehensive Guide to AdaGrad: Origins, Mechanism, and Mathematical Proof – Day 37

Day 37

AdaGrad vs RMSProp vs Adam: Why Is Adam the Most Popular? – Day 38

Day 38

Adam vs SGD vs AdaGrad vs RMSprop vs AdamW – Day 39

Day 39

The Adam Optimizer Deeply Explained by Understanding Local Minima – Day 40

Day 40

Deep Learning Optimizers: NAdam, AdaMax, AdamW, and NAG Comparison – Day 41

Day 41

The Power of Learning Rates in Deep Learning and Why Schedules Matter – Day 42

Day 42

Theory Behind 1Cycle Learning Rate Scheduling & Learning Rate Schedules – Day 43

Day 43

Exploring Gradient Clipping & Weight Initialization in Deep Learning – Day 44

Day 44

Learning Rate: 1Cycle Scheduling, Exponential Decay, and Cyclic Exponential Decay (CED), Part 4 – Day 45

Day 45

Comparing TensorFlow (Keras), PyTorch, & MLX – Day 46

Day 46

Understanding Regularization in Deep Learning – Day 47

Day 47

Dropout and Monte Carlo Dropout (MC Dropout) – Day 48

Day 48

Max-Norm Regularization to Avoid Overfitting: Theory, Importance, and Proof in Deep Learning – Day 49

Day 49

Deep Neural Networks vs Dense Networks – Day 50

Day 50

Deep Learning Examples: A Short Overview – Day 51

Day 51

Integrating Deep Learning Models into iOS Apps, Briefly Explained – Day 52

Day 52

CNN – Convolutional Neural Networks Explained by INGOAMPT – Day 53

Day 53

Mastering the Mathematics Behind CNNs (Convolutional Neural Networks) in Deep Learning – Day 54

Day 54

RNNs in Deep Learning – Part 1 – Day 55

Day 55

Understanding Recurrent Neural Networks (RNNs) – Part 2 – Day 56

Day 56

Time Series Forecasting with Recurrent Neural Networks (RNNs) – Part 3 – Day 57

Day 57

Understanding RNNs: Why Not Compare Them with FNNs to Better Understand the Math? – Day 58

Day 58

To Learn RNNs (Recurrent Neural Networks), Why Not Understand ARIMA and SARIMA First? – RNN Series, Part 5 – Day 59

Day 59

Step-by-Step Explanation of RNNs for Time Series Forecasting – Part 6 – Day 60

Day 60

1. Iterative Forecasting (Predicting One Step at a Time), 2. Direct Multi-Step Forecasting with RNNs, 3. Seq2Seq Models for Time Series Forecasting – Day 61

Day 61

Unlocking RNNs, Layer Normalization, and LSTMs – Mastering the Depth of RNNs in Deep Learning – Part 8 of the RNN Series by INGOAMPT – Day 62

Day 62

Natural Language Processing (NLP) and RNNs – Day 63

Day 63

Why Are Transformers Better for NLP? Let’s See the Math Behind It – Day 64

Day 64

The Transformer Model Revolution from GPT to DeepSeek: How They’re Radically Changing the Future of AI – Day 65

Day 65

Transformers in Deep Learning: Breakthroughs from ChatGPT to DeepSeek – Day 66

Day 66

Want a 2-Minute Summary of BERT (Bidirectional Encoder Representations from Transformers)? – Day 67

Day 67

A Brief Overview of How ChatGPT Works – Day 68

Day 68

Can ChatGPT Truly Understand What We’re Saying? A Powerful Comparison with BERT – Day 69

Day 69

How ChatGPT Works, Step by Step – Day 70

Day 70

Mastering NLP: Unlocking the Math Behind It for Breakthrough Insights, with a Scientific Paper Study – Day 71

Day 71

The Rise of Transformers in Vision and Multimodal Models – Hugging Face – Day 72

Day 72

Unlock the Secrets of Autoencoders, GANs, and Diffusion Models: Why You Must Know Them – Day 73

Day 73

Understanding Unsupervised Pretraining Using Stacked Autoencoders – Day 74

Day 74

Breaking Down Diffusion Models in Deep Learning – Day 75

Day 75

Generative Adversarial Networks (GANs) in Deep Learning – Day 76

Day 76

How the DALL·E Image Generator Works – Day 77

Day 77

Reinforcement Learning: An Evolution from Games to Real-World Impact – Day 78

Day 78

DeepNet: What Happens When Scaling Transformers to 1,000 Layers? – Day 79

Day 79

Let’s Go Through the DeepSeek-R1 Paper: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning – Day 80

Day 80