Logistic regression: Loss and regularization
Logistic regression models are trained using the same process as linear regression models, with two key distinctions:

- Logistic regression models use Log Loss as the loss function instead of squared loss.
- Applying regularization is critical to prevent overfitting.

The following sections discuss these two considerations in more depth.
Log Loss
In the Linear regression module, you used squared loss (also called L2 loss) as the loss function. Squared loss works well for a linear model where the rate of change of the output values is constant. For example, given the linear model $y' = b + 3x_1$, each time you increment the input value $x_1$ by 1, the output value $y'$ increases by 3.
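For instance, here is a minimal Python sketch of computing squared loss for that model; the bias value and the labeled examples are hypothetical:

```python
# Minimal sketch: squared (L2) loss for the linear model y' = b + 3*x_1.
# The bias b = 2.0 and the (x_1, y) examples below are hypothetical.

def predict(x1, b=2.0):
    return b + 3.0 * x1

examples = [(1.0, 5.5), (2.0, 7.9), (3.0, 11.2)]  # (x_1, label) pairs

# Squared loss: sum of squared differences between labels and predictions.
squared_loss = sum((y - predict(x1)) ** 2 for x1, y in examples)
print(squared_loss)
```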
However, the rate of change of a logistic regression model is *not* constant. As you saw in Calculating a probability, the sigmoid curve is S-shaped rather than linear. When the log-odds ($z$) value is closer to 0, small increases in $z$ result in much larger changes to $y$ than when $z$ is a large positive or negative number. The following table shows the sigmoid function's output for input values from 5 to 10, as well as the corresponding precision required to capture the differences in the results.
| input | logistic output | required digits of precision |
|-------|-----------------|------------------------------|
| 5     | 0.993           | 3                            |
| 6     | 0.997           | 3                            |
| 7     | 0.999           | 3                            |
| 8     | 0.9997          | 4                            |
| 9     | 0.9999          | 4                            |
| 10    | 0.99995         | 5                            |
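You can reproduce these values with a few lines of Python; this sketch simply evaluates the sigmoid function at each input:

```python
import math

def sigmoid(z):
    """Maps a log-odds value z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Outputs approach 1 asymptotically as z grows.
for z in range(5, 11):
    print(f"{z}: {sigmoid(z):.5f}")
```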
If you used squared loss to calculate errors for the sigmoid function, as the output got closer and closer to `0` and `1`, you would need more memory to preserve the precision needed to track these values.
Instead, the loss function for logistic regression is
Log Loss. The
Log Loss equation returns the logarithm of the magnitude of the change, rather
than just the distance from data to prediction. Log Loss is calculated as
follows:
\(\text{Log Loss} = \sum_{(x,y)\in D} -y\log(y') - (1 - y)\log(1 - y')\)
where:
- \((x,y)\in D\) is the dataset containing many labeled examples, which are \((x,y)\) pairs.
- \(y\) is the label in a labeled example. Since this is logistic regression, every value of \(y\) must either be 0 or 1.
- \(y'\) is your model's prediction (somewhere between 0 and 1), given the set of features in \(x\).
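As an illustration, here is a minimal sketch of the Log Loss summation; the dataset of (label, prediction) pairs is hypothetical:

```python
import math

def log_loss(examples):
    """Sums per-example Log Loss over (y, y_prime) pairs."""
    return sum(-y * math.log(y_prime) - (1 - y) * math.log(1 - y_prime)
               for y, y_prime in examples)

# Hypothetical dataset D: each pair is (label y, model prediction y').
D = [(1, 0.9), (0, 0.2), (1, 0.7)]
print(log_loss(D))
```

Note that a confidently wrong prediction (for example, \(y = 1\) with \(y' = 0.01\)) contributes a large loss term, which pushes the model toward well-calibrated probabilities.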
Regularization in logistic regression
Regularization, a mechanism for
penalizing model complexity during training, is extremely important in logistic
regression modeling. Without regularization, the asymptotic nature of logistic
regression would keep driving loss towards 0 in cases where the model has a
large number of features. Consequently, most logistic regression models use one
of the following two strategies to decrease model complexity:
- L2 regularization
- Early stopping: Limiting the number of training steps to halt training while loss is still decreasing.
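As one illustration of the first strategy, here is a minimal sketch that adds an L2 penalty to the Log Loss defined earlier; the lambda value, weights, and examples are hypothetical, and real training code would minimize this quantity with gradient descent:

```python
import math

def l2_penalized_log_loss(examples, weights, lam=0.1):
    """Log Loss plus an L2 penalty on the model weights."""
    data_loss = sum(-y * math.log(y_prime) - (1 - y) * math.log(1 - y_prime)
                    for y, y_prime in examples)
    # The L2 term penalizes large weights, discouraging the model from
    # driving predictions all the way to 0 or 1.
    penalty = lam * sum(w ** 2 for w in weights)
    return data_loss + penalty

print(l2_penalized_log_loss([(1, 0.9), (0, 0.2)], weights=[0.5, -1.2]))
```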
Note: You'll learn more about regularization in the Datasets, Generalization, and Overfitting module of the course.

Key terms: gradient descent, linear regression, Log Loss, logistic regression, loss function, overfitting, regularization, squared loss