Machine Learning (ML) Overview – Day 1
Integrating ML into iOS Apps – Day 2
Model-Based Learning, Instance-Based Learning, and Train-Test Splits: The Building Blocks of Machine Learning Explained – Day 3
Regression & Classification with MNIST – Day 4
The Mathematical Explanation Behind the SGD Algorithm in Machine Learning – Day 5
Can We Make Predictions Without Iterating? Yes, with the Normal Equation – Day 6
What Is Gradient Descent in Machine Learning? – Day 7
3 Types of Gradient Descent: Batch, Stochastic & Mini-Batch – Day 8
Deep Learning: Perceptrons – Day 9
Regression vs Classification with Multi-Layer Perceptrons (MLPs) – Day 10
Activation Functions – Day 11
Activation Functions, Hidden Layers, and Non-Linearity – Day 12
What Is Keras? – Day 13
Sequential, Functional, and Model Subclassing APIs in Keras – Day 14
Sequential vs Functional Keras API – Part 2 – Day 15
TensorFlow: Using TensorBoard, Callbacks, and Model Saving in Keras – Day 16
Hyperparameter Tuning with Keras Tuner – Day 17
Automatic vs Manual Optimization in Keras – Day 18
Mastering Hyperparameter Tuning & Neural Network Architectures: Exploring Bayesian Optimization – Day 19
The Vanishing Gradient Problem Explained in Detail – Day 20
Weight Initialization in Deep Learning Explained – Day 21
How to Create a Deep Learning API to Earn Money, and the Best Way for Mac Users – Day 22
Weight Initialization – Part 2 – Day 23
The Progress of Activation Functions in Deep Learning – ReLU, ELU, SELU, GELU, Mish, etc., with Tables and Graphs – Day 24
Batch Normalization – Day 25
Batch Normalization – Part 2 – Day 26
Batch Normalization – Trainable and Non-Trainable Parameters – Day 27
Understanding Gradient Clipping in Deep Learning – Day 28
Transfer Learning – Day 29
How to Do Transfer Learning in a Deep Learning Model, with an Example – Day 30
Fundamentals of Labeled vs Unlabeled Data in Machine Learning – Day 31
Mastering Deep Neural Network Optimization: Techniques and Algorithms for Faster Training – Day 32
Momentum Optimization in Machine Learning: A Detailed Mathematical Analysis and Practical Application – Day 33
Momentum vs Normalization in Deep Learning – Part 2 – Day 34
Momentum – Part 3 – Day 35
NAG as an Optimizer in Deep Learning – Day 36
A Comprehensive Guide to AdaGrad: Origins, Mechanism, and Mathematical Proof – Day 37
AdaGrad vs RMSProp vs Adam: Why Is Adam the Most Popular? – Day 38
Adam vs SGD vs AdaGrad vs RMSprop vs AdamW – Day 39
The Adam Optimizer Explained in Depth by Understanding Local Minima – Day 40
Deep Learning Optimizers: NAdam, AdaMax, AdamW, and NAG Comparison – Day 41
The Power of Learning Rates in Deep Learning and Why Schedules Matter – Day 42
Theory Behind 1Cycle Learning Rate Scheduling & Learning Rate Schedules – Day 43
Exploring Gradient Clipping & Weight Initialization in Deep Learning – Day 44
Learning Rate – 1-Cycle Scheduling, Exponential Decay, and Cyclic Exponential Decay (CED) – Part 4 – Day 45
Comparing TensorFlow (Keras), PyTorch, & MLX – Day 46
Understanding Regularization in Deep Learning – Day 47
Dropout and Monte Carlo Dropout (MC Dropout) – Day 48
Max-Norm Regularization to Avoid Overfitting: Theory, Importance, and Proof in Deep Learning – Day 49
Deep Neural Networks vs Dense Networks – Day 50
Deep Learning Examples: A Short Overview – Day 51
Integrating Deep Learning Models into iOS Apps, Briefly Explained – Day 52
CNNs – Convolutional Neural Networks Explained by INGOAMPT – Day 53
Mastering the Mathematics Behind CNNs (Convolutional Neural Networks) in Deep Learning – Day 54
RNN Deep Learning – Part 1 – Day 55
Understanding Recurrent Neural Networks (RNNs) – Part 2 – Day 56
Time Series Forecasting with Recurrent Neural Networks (RNNs) – Part 3 – Day 57
Understanding RNNs: Why Not Compare Them with Feedforward Neural Networks, with a Simple Example Showing the Math Behind Them? – Day 58
To Learn What RNNs (Recurrent Neural Networks) Are, Why Not Understand ARIMA and SARIMA First? – RNN Learning – Part 5 – Day 59
Step-by-Step Explanation of RNNs for Time Series Forecasting – Part 6 – Day 60
1) Iterative Forecasting: Predicting One Step at a Time, 2) Direct Multi-Step Forecasting with RNNs, 3) Seq2Seq Models for Time Series Forecasting – Day 61
RNNs, Layer Normalization, and LSTMs – Part 8 of RNN Deep Learning – Day 62
Natural Language Processing (NLP) and RNNs – Day 63
Why Are Transformers Better for NLP? Let’s See the Math Behind It – Day 64
The Revolution of Transformer Models – Day 65
Transformers in Deep Learning – Day 66
Do You Want a 2-Minute Summary of What BERT (Bidirectional Encoder Representations from Transformers) Is? – Day 67
Leveraging Scientific Research to Uncover How ChatGPT Supports Clinical and Medical Applications – Day 68
Can ChatGPT Truly Understand What We’re Saying? A Powerful Comparison with BERT – Day 69
How ChatGPT Works, Step by Step – Day 70
Mastering NLP: Unlocking the Math Behind It for Breakthrough Insights, with a Scientific Paper Study – Day 71
The Rise of Transformers in Vision and Multimodal Models – Hugging Face – Day 72
Unlock the Secrets of Autoencoders, GANs, and Diffusion Models: Why You Must Know Them – Day 73
Understanding Unsupervised Pretraining Using Stacked Autoencoders – Day 74
Breaking Down Diffusion Models in Deep Learning – Day 75
Generative Adversarial Networks (GANs) in Deep Learning – Day 75
How the DALL·E Image Generator Works – Day 76
Reinforcement Learning: An Evolution from Games to Real-World Impact – Day 77