Weight Initialization in Deep Learning Well Explained - Day 21
Weight Initialization in Deep Learning: Classic and Emerging Techniques

Understanding how to initialize the weights of a deep learning model correctly is crucial for effective training and convergence. This post explores both classic and more advanced weight initialization strategies, providing mathematical insights and practical code examples.

Part 1: Classic Weight Initialization Techniques

1. Glorot (Xavier) Initialization

Glorot initialization is designed to keep the variance of activations roughly constant across layers, and is particularly effective with saturating activation functions such as tanh and sigmoid.

Mathematical formula, where $n_\text{in}$ and $n_\text{out}$ are the fan-in and fan-out of the layer:

Uniform distribution: $W \sim \mathcal{U}\left(-\sqrt{\tfrac{6}{n_\text{in}+n_\text{out}}},\ \sqrt{\tfrac{6}{n_\text{in}+n_\text{out}}}\right)$

Normal distribution: $W \sim \mathcal{N}\left(0,\ \tfrac{2}{n_\text{in}+n_\text{out}}\right)$

Code example in Keras:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.initializers import GlorotUniform, GlorotNormal

model = Sequential()

# Using Glorot Uniform
model.add(Dense(64, kernel_initializer=GlorotUniform(), activation='tanh'))

# Using Glorot Normal
model.add(Dense(64, kernel_initializer=GlorotNormal(), activation='tanh'))

2. He Initialization

He initialization is designed for ReLU and its variants; it scales the weights so that gradient magnitudes stay in a reasonable range as they propagate through the layers.

Mathematical formula:

Uniform distribution: $W \sim \mathcal{U}\left(-\sqrt{\tfrac{6}{n_\text{in}}},\ \sqrt{\tfrac{6}{n_\text{in}}}\right)$

Normal distribution: $W \sim \mathcal{N}\left(0,\ \tfrac{2}{n_\text{in}}\right)$

Code example in Keras:

from tensorflow.keras.initializers import HeUniform, HeNormal

# Using He Uniform
model.add(Dense(64, kernel_initializer=HeUniform(), activation='relu'))

# Using He Normal
model.add(Dense(64, kernel_initializer=HeNormal(), activation='relu'))

3. LeCun Initialization

LeCun initialization is used with the SELU activation function, as it preserves the self-normalizing property of the network.

Mathematical formula:

Normal distribution: $W \sim \mathcal{N}\left(0,\ \tfrac{1}{n_\text{in}}\right)$

Code example in Keras:

from tensorflow.keras.initializers import LecunNormal

# Using LeCun Normal
model.add(Dense(64, kernel_initializer=LecunNormal(), activation='selu'))

Summary Table:

Initialization     Recommended activations    Weight variance
Glorot (Xavier)    tanh, sigmoid              2 / (n_in + n_out)
He                 ReLU and variants          2 / n_in
LeCun              SELU                       1 / n_in
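To make the variance-scaling idea behind these formulas concrete, here is a minimal NumPy sketch (not from the original post; the function names, layer sizes, and the sanity check are illustrative assumptions) implementing the normal variants of the three rules by hand. The built-in Keras initializers used above apply the same scaling.

import numpy as np

rng = np.random.default_rng(0)

def glorot_normal(fan_in, fan_out):
    # Var(W) = 2 / (fan_in + fan_out); suited to tanh / sigmoid
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))

def he_normal(fan_in, fan_out):
    # Var(W) = 2 / fan_in; suited to ReLU and its variants
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def lecun_normal(fan_in, fan_out):
    # Var(W) = 1 / fan_in; suited to SELU
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))

# Compare the weight scales the three rules produce for a 256 -> 256 layer.
for name, init in [("Glorot", glorot_normal), ("He", he_normal), ("LeCun", lecun_normal)]:
    print(f"{name:>6} weight std: {init(256, 256).std():.4f}")

# Sanity check: with He initialization and ReLU, the spread of the activations
# stays roughly constant as the signal passes through several layers.
x = rng.normal(size=(1024, 256))
for layer in range(5):
    x = np.maximum(0.0, x @ he_normal(256, 256))
    print(f"layer {layer}: activation std = {x.std():.3f}")

The uniform variants correspond to the same weight variances: a uniform distribution on $[-a, a]$ has variance $a^2/3$, which is why the limits above contain a factor of 6 rather than 2.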