
L1 & L2 Regularization for Preventing Model Overfitting

As we learned in the previous lesson, regularization prevents models from memorizing irrelevant information and helps them learn more generalizable patterns.

L1 Regularization and L2 Regularization are the two most widely used regularization methods. In this lesson, we will explore how each technique works.


L1 Regularization (Lasso Regularization)

L1 Regularization adds the sum of the absolute values of the weights (the L1 penalty) to the loss function. This penalty drives the weights of irrelevant features to exactly 0 as the model learns the data, automatically removing unnecessary variables.

As a result, the model becomes simpler and uses only essential information.

L1 regularization is especially useful when a model has a large number of features, since it improves performance by retaining only the most important ones and eliminating the rest.
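As a concrete illustration, here is a minimal sketch using scikit-learn's Lasso (linear regression with an L1 penalty) on synthetic data. The dataset shape and the alpha value are illustrative choices, not values from this lesson.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 100 samples, 10 features, but only
# features 0 and 3 actually influence the target.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.1, size=100)

# Lasso = linear regression with an L1 penalty.
# alpha controls the penalty strength (illustrative value).
model = Lasso(alpha=0.1)
model.fit(X, y)

# Most weights are driven to exactly 0; only the truly
# relevant features (0 and 3) keep nonzero weights.
print(np.round(model.coef_, 3))
```

Printing the learned coefficients shows the hallmark of L1: the weights of the eight irrelevant features come out as exactly 0, so the model effectively uses only the two informative ones.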


L2 Regularization (Ridge Regularization)

L2 Regularization adds the sum of the squared weights (the L2 penalty) to the loss function, shrinking all weights so they stay within a reasonable range.

During training, if the model treats certain features as excessively important and their weights grow too large, it can come to over-rely on those specific patterns.

To prevent this, L2 regularization keeps any single weight from growing too large, shrinking all weights toward zero and maintaining balance among them.

Unlike L1, L2 regularization does not discard any feature entirely; it adjusts all of them so the model learns more general patterns.

This is effective when all features carry useful information but their influence needs to be kept at an appropriate level.
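For contrast, here is the same setup sketched with scikit-learn's Ridge (linear regression with an L2 penalty); again, the data and the alpha value are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Same kind of synthetic data as in the Lasso sketch above.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.1, size=100)

# Ridge = linear regression with an L2 penalty.
# A larger alpha shrinks all weights more strongly.
model = Ridge(alpha=1.0)
model.fit(X, y)

# Unlike Lasso, no weight becomes exactly 0: every feature
# is kept, but all weights are shrunk toward zero.
print(np.round(model.coef_, 3))
```

Comparing the two printouts makes the difference concrete: Lasso zeroes out the irrelevant weights, while Ridge keeps all ten weights nonzero but small.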


In the next lesson, we will explore Transfer Learning, which reuses previously trained models to solve new problems.
