
Vanishing gradient explained in detail – day 20

First, let’s explain what the vanishing gradient problem in neural networks is.

Understanding and Addressing the Vanishing Gradient Problem in Deep Learning

Part 1: What is the Vanishing Gradient Problem and How to Solve It?

In the world of deep learning, as models grow deeper and more complex, they bring with them a unique set of challenges. One such challenge is the vanishing gradient problem—a critical issue that can prevent a neural network from learning effectively. In this first part of our discussion, we’ll explore what the vanishing gradient problem is, how to recognize it in your models, and the best strategies to address it.

What is the Vanishing Gradient Problem?

The vanishing gradient problem occurs during the training of deep neural networks, particularly in models with many layers. When backpropagating errors through the network to update weights, the gradients of the loss function with respect to the weights can become exceedingly small. As a result, the updates to the weights become negligible, especially in the earlier layers of the network. This makes it difficult, if not impossible, for the network to learn the underlying patterns in the data.

Why does this happen? The root cause lies in how gradients are propagated through layers during backpropagation. In deep networks, each layer’s gradient is a product of several small derivatives, particularly when using activation functions like the sigmoid or tanh. These activation functions tend to squash input values into a small range, causing their derivatives to be small as well. As these small values are multiplied together across many layers, the gradient can shrink exponentially, leading to the vanishing gradient problem.
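
To see the effect numerically, here is a minimal NumPy sketch (added for illustration, not part of the original derivation): it checks that the sigmoid derivative never exceeds 0.25 and shows how quickly a product of per-layer factors shrinks as the network gets deeper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

z = np.linspace(-5, 5, 1001)
print("max sigmoid'(z) =", sigmoid_prime(z).max())  # ~0.25, reached at z = 0

# During backpropagation each sigmoid layer contributes a factor of roughly
# sigmoid'(z_k) * W_k. Even in the most favourable case (z_k = 0) and with
# modest weights of 0.5, that factor is at most 0.25 * 0.5 = 0.125.
w = 0.5
for depth in (3, 10, 30):
    attenuation = (0.25 * w) ** (depth - 1)
    print(f"upper bound on gradient attenuation over {depth} layers: {attenuation:.2e}")
```

Already at ten layers the bound falls below $10^{-8}$, which is why the earliest layers stop receiving a usable error signal.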

How to Recognize the Vanishing Gradient Problem in Your Model

Detecting the vanishing gradient problem in your model is crucial for addressing it early on. Here are some signs that your model might be suffering from this issue:

  • Slow or No Learning in Early Layers: If you notice that the weights in the early layers of your network are barely changing during training, it could be due to vanishing gradients. These layers might not be learning effectively because the gradient updates are too small.
  • Poor Performance Despite Deep Architecture: If your deep model is performing worse than a shallower version, even after sufficient training, the vanishing gradient could be at fault. This can happen because the network fails to propagate error signals back through the layers effectively.
  • Gradients Close to Zero: By monitoring the gradient values during training, you can spot if they are consistently close to zero, particularly in the earlier layers. This is a direct indication that the vanishing gradient problem might be occurring; a short monitoring sketch follows this list.
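
A practical way to watch for this is to log per-layer gradient norms after each backward pass. The sketch below is one hedged way to do it in PyTorch; the toy model, random data, and layer sizes are made up purely for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Deliberately deep stack of sigmoid layers so the effect is visible.
model = nn.Sequential(
    nn.Linear(10, 10), nn.Sigmoid(),
    nn.Linear(10, 10), nn.Sigmoid(),
    nn.Linear(10, 10), nn.Sigmoid(),
    nn.Linear(10, 1),
)
x, y = torch.randn(32, 10), torch.randn(32, 1)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# Gradient norms that are orders of magnitude smaller in the first layers
# than in the last one are a strong hint of vanishing gradients.
for name, p in model.named_parameters():
    if p.grad is not None:
        print(f"{name:12s} grad L2 norm = {p.grad.norm().item():.2e}")
```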

Solutions to the Vanishing Gradient Problem

Fortunately, several strategies have been developed to mitigate the vanishing gradient problem, allowing deep networks to train effectively:

Use of ReLU and its Variants

  • ReLU (Rectified Linear Unit): Unlike sigmoid and tanh, ReLU does not squash inputs into a small range. It allows for gradients to remain larger and more stable as they are backpropagated. However, ReLU can suffer from “dying ReLUs” where neurons stop activating. Variants like Leaky ReLU, Parametric ReLU (PReLU), and Exponential Linear Unit (ELU) help address this issue by allowing a small gradient for negative inputs.
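
As a quick illustration of why this helps, the following small Python snippet (illustrative only) compares the local derivative that sigmoid, ReLU, and Leaky ReLU contribute to the backward pass at a few input values.

```python
import math

def sigmoid_prime(z):
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)

def relu_prime(z):
    return 1.0 if z > 0 else 0.0       # gradient passes through unchanged for z > 0

def leaky_relu_prime(z, alpha=0.01):
    return 1.0 if z > 0 else alpha     # small slope keeps "dead" units trainable

for z in (-2.0, -0.5, 0.5, 2.0):
    print(f"z={z:+.1f}  sigmoid'={sigmoid_prime(z):.3f}  "
          f"relu'={relu_prime(z):.2f}  leaky_relu'={leaky_relu_prime(z):.2f}")
```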

Better Weight Initialization

  • Xavier (Glorot) Initialization: This method helps keep the output variance consistent across layers by initializing weights appropriately. It’s particularly useful with sigmoid and tanh activations.
  • He Initialization: Optimized for ReLU and its variants, He initialization further ensures that gradients don’t diminish too quickly by scaling the weights based on the number of input units.
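
The sketch below shows the two scaling rules side by side, using the commonly quoted "normal" variants (Var(W) = 2/(fan_in + fan_out) for Xavier, 2/fan_in for He); exact formulas differ slightly between papers and library implementations, so treat this as an approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 512, 512

# Xavier/Glorot (normal variant): Var(W) = 2 / (fan_in + fan_out)
w_xavier = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_out, fan_in))

# He (for ReLU): Var(W) = 2 / fan_in
w_he = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

print("Xavier std:", w_xavier.std())   # ~0.044 for fan_in = fan_out = 512
print("He std:    ", w_he.std())       # ~0.0625 for fan_in = 512
```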

Batch Normalization

  • Batch normalization normalizes the inputs of each layer, ensuring that they have a mean of zero and a variance of one. This not only accelerates training but also helps keep gradients from vanishing by stabilizing the distribution of activations throughout the network.
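
Conceptually, the training-time computation is just the standardization below; this bare-bones NumPy sketch treats the scale and shift as plain scalars and omits the running statistics that real batch-norm layers also maintain for inference.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta              # learnable scale and shift

rng = np.random.default_rng(0)
activations = rng.normal(3.0, 5.0, size=(64, 8))   # badly scaled layer inputs
normalized = batch_norm(activations)
print(normalized.mean(axis=0).round(3))   # ~0 per feature
print(normalized.std(axis=0).round(3))    # ~1 per feature
```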

Residual Networks (ResNets)

  • ResNets introduce skip connections, allowing gradients to bypass certain layers. This direct gradient path helps mitigate the vanishing gradient problem, making it possible to train networks with hundreds or even thousands of layers effectively.
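
Here is a minimal sketch of the idea in PyTorch (a plain fully connected block, assumed for simplicity rather than the convolutional blocks of the original ResNet paper): adding the input to the block's output gives gradients an identity path backwards.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        # The "+ x" skip connection means d(output)/dx = d(body)/dx + I, so the
        # gradient never has to pass only through the body's (possibly small) Jacobian.
        return torch.relu(self.body(x) + x)

x = torch.randn(4, 16)
print(ResidualBlock(16)(x).shape)   # torch.Size([4, 16])
```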

Monitoring and Gradient Clipping

  • While gradient clipping is more commonly used to handle exploding gradients, monitoring and occasionally clipping gradients that are too small can also help in managing vanishing gradients.
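
For reference, a hedged PyTorch snippet: torch.nn.utils.clip_grad_norm_ caps the total gradient norm and also returns the norm it measured, so it doubles as a cheap diagnostic; the tiny model and random data here are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Sigmoid(), nn.Linear(10, 1))
loss = nn.functional.mse_loss(model(torch.randn(8, 10)), torch.randn(8, 1))
loss.backward()

# Rescales gradients whose total norm exceeds max_norm and returns that norm.
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
print(f"total gradient norm before clipping: {float(total_norm):.4f}")
```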

How to Address the Vanishing Gradient Problem in Practice

When training your model, consider implementing these strategies from the start, particularly when working with very deep architectures. If you observe signs of the vanishing gradient problem, try switching to ReLU or its variants, ensure proper weight initialization, and consider adding batch normalization layers.
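
Putting those pieces together, one possible PyTorch starting point looks like the sketch below: ReLU activations, He (Kaiming) initialization, and batch normalization in each block. The layer sizes are arbitrary placeholders, not a recommendation from the original text.

```python
import torch
import torch.nn as nn

def make_block(n_in, n_out):
    linear = nn.Linear(n_in, n_out)
    nn.init.kaiming_normal_(linear.weight, nonlinearity="relu")  # He initialization
    nn.init.zeros_(linear.bias)
    return nn.Sequential(linear, nn.BatchNorm1d(n_out), nn.ReLU())

model = nn.Sequential(
    make_block(784, 256),   # layer sizes are placeholders, not recommendations
    make_block(256, 256),
    make_block(256, 256),
    nn.Linear(256, 10),
)

print(model(torch.randn(32, 784)).shape)  # torch.Size([32, 10])
```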

By proactively applying these techniques, you can significantly improve the learning capability of your deep networks, ensuring that all layers contribute effectively to the final output.


Now let’s dive into an example to better understand the mathematics behind the vanishing gradient.

Part 2: A Simple Example Demonstrating the Vanishing Gradient Problem

In the first part of this series, we explored what the vanishing gradient problem is and discussed various strategies to address it. In this second part, we will dive into a simple, step-by-step mathematical example using a real neural network. This example will help you understand how the vanishing gradient problem manifests in practice and what you can do to solve it.

Setting Up the Example

Let’s consider a simple neural network with the following structure:

  • 1 Input Neuron: $x_0 = 1$
  • 3 Layers: Each with 1 neuron
  • Sigmoid Activation Function for each neuron
  • Target Output: $y_{\text{target}} = 0.8$
  • Initial Weights: $W_1 = 0.5$, $W_2 = 0.5$, $W_3 = 0.5$
  • Learning Rate: $\eta = 0.1$

Step 1: Forward Pass

In the forward pass, we calculate the output of the network layer by layer.

Layer 1:

\[
z_1 = W_1 \cdot x_0 = 0.5 \cdot 1 = 0.5
\]
\[
y_1 = \sigma(z_1) = \frac{1}{1 + e^{-0.5}} \approx 0.6225
\]

Layer 2:

\[
z_2 = W_2 \cdot y_1 = 0.5 \cdot 0.6225 \approx 0.31125
\]
\[
y_2 = \sigma(z_2) = \frac{1}{1 + e^{-0.31125}} \approx 0.5772
\]

Layer 3:

\[
z_3 = W_3 \cdot y_2 = 0.5 \cdot 0.5772 \approx 0.2886
\]
\[
y_3 = \sigma(z_3) = \frac{1}{1 + e^{-0.2886}} \approx 0.5716
\]

Output of the network:

\[
y_{\text{output}} = y_3 \approx 0.5716
\]

Step 2: Calculating the Loss

The loss function we are using is the Mean Squared Error (MSE):

\[
\mathcal{L} = \frac{1}{2} (y_{\text{output}} - y_{\text{target}})^2 = \frac{1}{2} (0.5716 - 0.8)^2 \approx 0.0261
\]

Step 3: Backpropagation

Now, let’s compute the gradients for each weight using backpropagation.

Gradient with Respect to $W_3$:

\[
\frac{\partial \mathcal{L}}{\partial W_3} = \frac{\partial \mathcal{L}}{\partial y_3} \cdot \frac{\partial y_3}{\partial z_3} \cdot \frac{\partial z_3}{\partial W_3}
\]

Step-by-step:

  1. Gradient of the loss with respect to output $y_3$:
    \[
    \frac{\partial \mathcal{L}}{\partial y_3} = y_3 - y_{\text{target}} = 0.5716 - 0.8 = -0.2284
    \]
  2. Derivative of the sigmoid function:
    \[
    \sigma'(z_3) = \sigma(z_3) \cdot (1 - \sigma(z_3)) = 0.5716 \cdot (1 - 0.5716) \approx 0.2449
    \]
  3. Partial derivative with respect to weight $W_3$:
    \[
    \frac{\partial z_3}{\partial W_3} = y_2 \approx 0.5772
    \]

So,

\[
\frac{\partial \mathcal{L}}{\partial W_3} \approx -0.2284 \cdot 0.2449 \cdot 0.5772 \approx -0.0323
\]

Gradient with Respect to $W_2$:

\[
\frac{\partial \mathcal{L}}{\partial W_2} = \frac{\partial \mathcal{L}}{\partial y_3} \cdot \frac{\partial y_3}{\partial z_3} \cdot \frac{\partial z_3}{\partial y_2} \cdot \frac{\partial y_2}{\partial z_2} \cdot \frac{\partial z_2}{\partial W_2} \approx -0.2284 \cdot 0.2449 \cdot 0.5 \cdot 0.244 \cdot 0.6225 \approx -0.0042
\]

Gradient with Respect to $W_1$:

\[
\frac{\partial \mathcal{L}}{\partial W_1} = \frac{\partial \mathcal{L}}{\partial y_3} \cdot \frac{\partial y_3}{\partial z_3} \cdot \frac{\partial z_3}{\partial y_2} \cdot \frac{\partial y_2}{\partial z_2} \cdot \frac{\partial z_2}{\partial y_1} \cdot \frac{\partial y_1}{\partial z_1} \cdot \frac{\partial z_1}{\partial W_1}
\]

Step-by-step:

  1. Gradient of the loss with respect to output $y_3$: $-0.2284$
  2. Derivative of the sigmoid function at layer 3: $0.2449$
  3. Weight of layer 3: $W_3 = 0.5$
  4. Derivative of the sigmoid function at layer 2: $0.244$
  5. Weight of layer 2: $W_2 = 0.5$
  6. Derivative of the sigmoid function at layer 1:
    \[
    \sigma'(z_1) = \sigma(z_1) \cdot (1 - \sigma(z_1)) = 0.6225 \cdot (1 - 0.6225) \approx 0.235
    \]
  7. Partial derivative with respect to weight $W_1$:
    \[
    \frac{\partial z_1}{\partial W_1} = x_0 = 1
    \]

So,

\[
\frac{\partial \mathcal{L}}{\partial W_1} \approx -0.2284 \cdot 0.2449 \cdot 0.5 \cdot 0.244 \cdot 0.5 \cdot 0.235 \cdot 1 \approx -0.0008
\]

Step 4: Updating the Weights

Using gradient descent, we update each weight:

\[
W_i^{\text{new}} = W_i^{\text{old}} - \eta \cdot \frac{\partial \mathcal{L}}{\partial W_i}
\]

For each weight:

  • $W_3^{\text{new}} = 0.5 - 0.1 \cdot (-0.0323) = 0.50323$
  • $W_2^{\text{new}} = 0.5 - 0.1 \cdot (-0.0042) = 0.50042$
  • $W_1^{\text{new}} = 0.5 - 0.1 \cdot (-0.0008) = 0.50008$

Step 5: Observe the Gradients Over Iterations

Now, let’s iterate this process a few more times and observe the gradient values.

Second Iteration:

  • Forward Pass: Using updated weights $W_1 = 0.50008$, $W_2 = 0.50042$, $W_3 = 0.50323$
  • Layer 1:
    \[
    z_1 = W_1 \cdot x_0 = 0.50008 \cdot 1 = 0.50008
    \]
    \[
    y_1 = \sigma(z_1) = \frac{1}{1 + e^{-0.50008}} \approx 0.6225
    \]
  • Layer 2:
    \[
    z_2 = W_2 \cdot y_1 = 0.50042 \cdot 0.6225 \approx 0.3115
    \]
    \[
    y_2 = \sigma(z_2) = \frac{1}{1 + e^{-0.3115}} \approx 0.5772
    \]
  • Layer 3:
    \[
    z_3 = W_3 \cdot y_2 = 0.50323 \cdot 0.5772 \approx 0.2905
    \]
    \[
    y_3 = \sigma(z_3) = \frac{1}{1 + e^{-0.2905}} \approx 0.5721
    \]

New Output of the Network:

\[
y_{\text{output}} = y_3 \approx 0.5721
\]

Calculating the New Loss:

\[
\mathcal{L} = \frac{1}{2} (y_{\text{output}} - y_{\text{target}})^2 = \frac{1}{2} (0.5721 - 0.8)^2 \approx 0.0260
\]

Backpropagation:

Gradient with Respect to $W_3$:

\[
\frac{\partial \mathcal{L}}{\partial W_3} = \frac{\partial \mathcal{L}}{\partial y_3} \cdot \frac{\partial y_3}{\partial z_3} \cdot \frac{\partial z_3}{\partial W_3}
\]
\[
\frac{\partial \mathcal{L}}{\partial y_3} = y_3 - y_{\text{target}} \approx 0.5721 - 0.8 = -0.2279
\]
\[
\sigma'(z_3) = \sigma(z_3) \cdot (1 - \sigma(z_3)) = 0.5721 \cdot (1 - 0.5721) \approx 0.2448
\]
\[
\frac{\partial \mathcal{L}}{\partial W_3} \approx -0.2279 \cdot 0.2448 \cdot 0.5772 \approx -0.0322
\]

Gradient with Respect to $W_2$:

\[
\frac{\partial \mathcal{L}}{\partial W_2} \approx -0.2279 \cdot 0.2448 \cdot 0.50323 \cdot 0.2440 \cdot 0.6225 \approx -0.0043
\]

Gradient with Respect to $W_1$:

\[
\frac{\partial \mathcal{L}}{\partial W_1} \approx -0.2279 \cdot 0.2448 \cdot 0.50323 \cdot 0.2440 \cdot 0.50042 \cdot 0.2350 \approx -0.0008
\]

Updating the Weights:

  • $W_3^{\text{new}} = 0.50323 - 0.1 \cdot (-0.0322) = 0.50645$
  • $W_2^{\text{new}} = 0.50042 - 0.1 \cdot (-0.0043) = 0.50085$
  • $W_1^{\text{new}} = 0.50008 - 0.1 \cdot (-0.0008) = 0.50016$

Third Iteration:

  • Forward Pass: Using updated weights $W_1 = 0.50016$, $W_2 = 0.50085$, $W_3 = 0.50645$
  • Layer 1: \[ z_1 = W_1 \cdot x_0 = 0.50016 \cdot 1 = 0.50016 \] \[ y_1 = \sigma(z_1) = \frac{1}{1 + e^{-0.50016}} \approx 0.6225 \]
  • Layer 2: \[ z_2 = W_2 \cdot y_1 = 0.50085 \cdot 0.6225 \approx 0.3118 \] \[ y_2 = \sigma(z_2) = \frac{1}{1 + e^{-0.3118}} \approx 0.5773 \]
  • Layer 3: \[ z_3 = W_3 \cdot y_2 = 0.50645 \cdot 0.5773 \approx 0.2924 \] \[ y_3 = \sigma(z_3) = \frac{1}{1 + e^{-0.2924}} \approx 0.5726 \]

New Output of the Network:

\[ y_{\text{output}} = y_3 \approx 0.5726 \]

Calculating the New Loss:

\[ \mathcal{L} = \frac{1}{2} (y_{\text{output}} - y_{\text{target}})^2 = \frac{1}{2} (0.5726 - 0.8)^2 \approx 0.0259 \]

Backpropagation:

Gradient with Respect to $W_3$:

\[ \frac{\partial \mathcal{L}}{\partial W_3} = \frac{\partial \mathcal{L}}{\partial y_3} \cdot \frac{\partial y_3}{\partial z_3} \cdot \frac{\partial z_3}{\partial W_3} \] \[ \frac{\partial \mathcal{L}}{\partial y_3} = y_3 - y_{\text{target}} \approx 0.5726 - 0.8 = -0.2274 \] \[ \sigma'(z_3) = \sigma(z_3) \cdot (1 - \sigma(z_3)) = 0.5726 \cdot (1 - 0.5726) \approx 0.2447 \] \[ \frac{\partial \mathcal{L}}{\partial W_3} \approx -0.2274 \cdot 0.2447 \cdot 0.5773 \approx -0.0321 \]

Gradient with Respect to $W_2$:

\[ \frac{\partial \mathcal{L}}{\partial W_2} \approx -0.2274 \cdot 0.2447 \cdot 0.50645 \cdot 0.2440 \cdot 0.6225 \approx -0.0043 \]

Gradient with Respect to $W_1$:

\[ \frac{\partial \mathcal{L}}{\partial W_1} \approx -0.2274 \cdot 0.2447 \cdot 0.50645 \cdot 0.2440 \cdot 0.50085 \cdot 0.2350 \approx -0.0008 \]

Observing the Vanishing Gradient Effect

Now, let’s analyze the gradients after three iterations:

  • Gradient $\frac{\partial \mathcal{L}}{\partial W_3}$:
    • First iteration: $-0.0323$
    • Second iteration: $-0.0322$
    • Third iteration: $-0.0321$
  • Gradient $\frac{\partial \mathcal{L}}{\partial W_2}$:
    • First iteration: $-0.0042$
    • Second iteration: $-0.0043$
    • Third iteration: $-0.0043$
  • Gradient $\frac{\partial \mathcal{L}}{\partial W_1}$:
    • First iteration: $-0.0008$
    • Second iteration: $-0.0008$
    • Third iteration: $-0.0008$

Two things stand out. Across iterations the gradients barely change, but across layers they shrink dramatically: in every iteration the gradient for $W_2$ is roughly eight times smaller than the gradient for $W_3$, and the gradient for $W_1$ is roughly forty times smaller. Each additional sigmoid layer multiplies the backpropagated signal by another small factor, so the earlier a weight sits in the network, the smaller its update. This is the hallmark of the vanishing gradient problem: in deeper networks the updates to the first layers become negligible, and those layers effectively stop learning.
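
If you want to verify these numbers yourself, the short NumPy script below (added for convenience, not part of the original post) repeats the forward pass, backpropagation, and weight update for three iterations; the text rounds to four decimal places, so the last digit may differ slightly.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x0, y_target, lr = 1.0, 0.8, 0.1
W = np.array([0.5, 0.5, 0.5])          # W1, W2, W3

for it in range(1, 4):
    # forward pass
    z1 = W[0] * x0;  y1 = sigmoid(z1)
    z2 = W[1] * y1;  y2 = sigmoid(z2)
    z3 = W[2] * y2;  y3 = sigmoid(z3)
    loss = 0.5 * (y3 - y_target) ** 2

    # backward pass (chain rule, exactly as in the text)
    d3 = (y3 - y_target) * y3 * (1 - y3)            # dL/dz3
    d2 = d3 * W[2] * y2 * (1 - y2)                  # dL/dz2
    d1 = d2 * W[1] * y1 * (1 - y1)                  # dL/dz1
    grads = np.array([d1 * x0, d2 * y1, d3 * y2])   # dL/dW1, dL/dW2, dL/dW3

    print(f"iter {it}: loss={loss:.4f}  dW3={grads[2]:+.4f}  "
          f"dW2={grads[1]:+.4f}  dW1={grads[0]:+.4f}")
    W -= lr * grads                                 # gradient descent update
```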

Mathematical Proof of the Vanishing Gradient Problem

Let’s formalize why the gradients diminish as we propagate them backward through the network. For a deep network, the gradient for a weight in an earlier layer ($W_1$) is given by:

\[ \frac{\partial \mathcal{L}}{\partial W_1} \propto \prod_{k=2}^{L} \sigma'(z_k) \cdot \frac{\partial z_k}{\partial y_{k-1}} \]

Here, each $\sigma'(z_k)$ is a small number (since for sigmoid activation, $\sigma'(z)$ is at most $0.25$). As we multiply these small numbers together across many layers, the product becomes very small, leading to a tiny gradient.

In our example:

\[ \frac{\partial \mathcal{L}}{\partial W_1} \approx -0.2274 \cdot (0.2447 \cdot 0.50645) \cdot (0.2440 \cdot 0.50085) \cdot 0.2350 \approx -0.0008 \]

Each bracketed factor is only about $0.12$, and as we multiply more of them together (which would happen in a deeper network), the result approaches zero. This explains why the gradient for $W_1$ is roughly forty times smaller than the gradient for $W_3$.
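
To make the exponential decay explicit, note that $\sigma'(z) \le 0.25$ for every $z$, so each backpropagated factor satisfies $|\sigma'(z_k) \cdot W_k| \le |W_k|/4$, and the magnitude of the gradient is bounded by

\[
\left| \frac{\partial \mathcal{L}}{\partial W_1} \right| \le \left| y_L - y_{\text{target}} \right| \cdot \sigma'(z_1) \cdot |x_0| \cdot \prod_{k=2}^{L} \frac{|W_k|}{4}
\]

With weights near $0.5$, as in our example, each factor $|W_k|/4$ is at most $0.125$, so ten such layers already attenuate the gradient by a factor of roughly $0.125^{9} \approx 7.5 \times 10^{-9}$; unless the weights grow close to $4$ or beyond, the bound shrinks exponentially with depth.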

Solution to the Vanishing Gradient Problem

There are several techniques to address the vanishing gradient problem:

  • ReLU Activation Function: ReLU has a derivative of 1 for positive inputs, which helps maintain larger gradients. This prevents the gradient from shrinking too much as it propagates through the layers.
  • He Initialization: Proper initialization (like He initialization) ensures that the weights start with values that prevent the gradients from vanishing as quickly.
  • Batch Normalization: Normalizes the inputs of each layer, ensuring that they have a mean of zero and a variance of one. This helps maintain a consistent gradient size across layers.
  • Residual Networks (ResNets): Introduces shortcut connections that allow the gradient to bypass certain layers, which helps preserve the gradient’s magnitude as it propagates backward.

Conclusion

In this example, we’ve seen how gradients shrink layer by layer during backpropagation when the sigmoid activation function is used: even in our tiny three-layer network, the gradient reaching $W_1$ was roughly forty times smaller than the gradient at $W_3$. As networks get deeper, this shrinkage compounds exponentially, and the vanishing gradient problem can become a significant obstacle, leading to ineffective learning.

By using the strategies mentioned, such as ReLU activations, proper initialization, and batch normalization, you can mitigate the vanishing gradient problem and ensure that your deep networks learn effectively, even with many layers.

This completes our detailed exploration of the vanishing gradient problem. By understanding both the theory and practical examples, you are now equipped to recognize and address this issue in your own deep learning models.