
Mastering Hyperparameter Tuning & Neural Network Architectures: Exploring Bayesian Optimization – Day 19

In conclusion, Bayesian optimization does not change the internal structure of the model—things like the number of layers, the activation functions, or the gradients. Instead, it focuses on external hyperparameters. These are settings that control how the model behaves during training and how it processes the data, but they are not part of the model’s architecture itself.

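To make this concrete, here is a minimal sketch of what such a tuning setup typically looks like, using scikit-learn's SVC tuned with scikit-optimize's BayesSearchCV. The dataset, search ranges, and iteration count below are illustrative assumptions, not the exact values from the original code.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from skopt import BayesSearchCV
from skopt.space import Real, Categorical

# Illustrative dataset; any classification dataset would work here.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# External hyperparameters to search over (ranges are assumptions for this sketch).
search_space = {
    "C": Real(1e-3, 1e3, prior="log-uniform"),      # regularization strength
    "gamma": Real(1e-4, 1e1, prior="log-uniform"),  # kernel coefficient
    "kernel": Categorical(["linear", "rbf"]),       # shape of the decision boundary
}

# Bayesian optimization over the search space; the SVC itself is never modified.
opt = BayesSearchCV(
    estimator=SVC(),
    search_spaces=search_space,
    n_iter=30,        # number of hyperparameter configurations to evaluate
    cv=3,             # 3-fold cross-validation for each configuration
    random_state=42,
)
opt.fit(X_train, y_train)

print("Best hyperparameters:", opt.best_params_)
print("Held-out accuracy:", opt.score(X_test, y_test))
```
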
For instance, in a setup like the sketch above, Bayesian optimization adjusts:

  1. C (Regularization Parameter): This determines how much the model tries to fit the training data exactly versus how much it tries to keep things simple. A higher C means the model will try harder to match the training data, while a lower C encourages simpler solutions that generalize better to unseen data.
  2. Gamma (Kernel Coefficient): This affects how much influence each individual data point has on the decision boundary. A small gamma means each point has a broad influence, leading to simpler decision boundaries, while a larger gamma focuses the model on specific data points, creating more complex boundaries (the kernel formula after this list makes this concrete).
  3. Kernel Type: This controls the type of mathematical function used to separate the data. For example, a “linear” kernel creates a flat plane or straight line to divide the data, while an “rbf” (radial basis function) kernel allows for curved, more flexible boundaries.

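For reference, the behavior of gamma described in point 2 can be read directly off the standard RBF kernel, which is what scikit-learn uses when kernel="rbf":

```latex
K(x, x') = \exp\!\left( -\gamma \,\lVert x - x' \rVert^{2} \right)
```

A small gamma makes this similarity decay slowly with distance, so each training point shapes the decision boundary over a wide region; a large gamma makes the similarity drop off quickly, so each point only matters in its immediate neighborhood, which is what produces the more complex, tightly fitted boundaries.
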
So, while the model’s internal structure—like layers and activations—remains unchanged, Bayesian optimization helps you choose the best external hyperparameters. This results in a better-performing model without needing to re-architect or directly modify the model’s components.
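
This is easy to verify if you continue from the sketch above: the tuned model returned by the search is still an ordinary SVC, only instantiated with the winning hyperparameter values (the attribute names below are from scikit-optimize's BayesSearchCV; adjust if your tuning library differs).

```python
# The refit model is a plain SVC; only its external hyperparameters changed.
best_model = opt.best_estimator_
print(type(best_model).__name__)                      # -> "SVC"
print(best_model.C, best_model.gamma, best_model.kernel)
```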
