Explore binary classification, sigmoid function, log loss, log odds & decision thresholds
Core Concept Log Loss (also called Binary Cross-Entropy) is the standard loss function for logistic regression. It measures how well predicted probabilities match the actual binary labels.
L(y, p) = −[y · log(p) + (1 − y) · log(1 − p)], where y is the true label (0 or 1) and p is the model's predicted probability of class 1.
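A minimal NumPy sketch of this formula (the function name `log_loss` and the sample arrays are illustrative):

```python
import numpy as np

def log_loss(y, p, eps=1e-15):
    """Binary cross-entropy: mean of -(y*log(p) + (1-y)*log(1-p))."""
    p = np.clip(p, eps, 1 - eps)  # avoid log(0) for p exactly 0 or 1
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

y = np.array([1, 0, 1, 1])          # true labels
p = np.array([0.9, 0.1, 0.8, 0.6])  # predicted probabilities of class 1
print(log_loss(y, p))
```

Confident correct predictions (p near the true label) contribute almost nothing to the loss, while confident wrong predictions are penalized heavily.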
Core Concept Log Odds (also called the logit) is the raw output of the logistic regression model before applying the sigmoid function: z = w · x + b, with p = σ(z) = 1 / (1 + e^(−z)) and, inversely, z = ln(p / (1 − p)). It is the natural link between a linear model and probabilities.
| Log Odds (z) | Odds | Probability (p) |
|---|---|---|
| −4.6 | 1:100 | 0.01 |
| −2.2 | 1:9 | 0.10 |
| −0.41 | 2:3 | 0.40 |
| 0 | 1:1 | 0.50 |
| +0.41 | 3:2 | 0.60 |
| +2.2 | 9:1 | 0.90 |
| +4.6 | 100:1 | 0.99 |
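The rows of this table can be checked with a few lines of Python (the function names `sigmoid` and `logit` are illustrative):

```python
import math

def sigmoid(z):
    """Map log odds z to a probability in (0, 1)."""
    return 1 / (1 + math.exp(-z))

def logit(p):
    """Inverse of sigmoid: map a probability back to log odds."""
    return math.log(p / (1 - p))

# Reproduce the table: each z maps to odds e**z and probability sigmoid(z)
for z in (-4.6, -2.2, -0.41, 0.0, 0.41, 2.2, 4.6):
    print(f"z = {z:+.2f}  odds = {math.exp(z):.3f}  p = {sigmoid(z):.2f}")
```

Note the symmetry: negating z swaps the odds ratio (9:1 becomes 1:9) and maps p to 1 − p.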
Each weight wi tells you: "For every 1-unit increase in feature xi (holding the other features fixed), the log odds of class 1 increase by wi" — equivalently, the odds are multiplied by e^wi.
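A small sketch of that multiplicative effect on the odds, using an assumed fitted weight and baseline (the values 0.7 and 1.0 are made up for illustration):

```python
import math

w_i = 0.7        # hypothetical fitted weight for one feature
z_before = 1.0   # assumed log odds at some baseline value of x_i
z_after = z_before + w_i  # x_i increased by exactly 1 unit

odds_before = math.exp(z_before)
odds_after = math.exp(z_after)

# The ratio equals e**w_i regardless of the baseline: here exp(0.7), so
# a 1-unit increase in x_i roughly doubles the odds of class 1.
print(odds_after / odds_before)
```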
Look at the sigmoid plot above to see how each data point's z-score (log odds) maps to a predicted probability through the sigmoid curve.
Key Insight Logistic regression outputs a probability, not a class label. The decision threshold is a separate, tuneable parameter that converts probability into a binary prediction: classify as Class 1 if p ≥ threshold, else Class 0.
When you lower the threshold (e.g., from 0.5 to 0.3), more points are classified as Class 1: recall rises (fewer false negatives), but precision typically falls as false positives increase.

When you raise the threshold (e.g., from 0.5 to 0.7), fewer points are classified as Class 1: precision typically rises (fewer false positives), but recall falls as false negatives increase.
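This precision–recall trade-off can be sketched with a threshold sweep over made-up labels and predicted probabilities (all values here are illustrative, not from the demo's dataset):

```python
import numpy as np

y = np.array([0, 0, 0, 1, 0, 1, 1, 1])                  # true labels
p = np.array([0.1, 0.2, 0.35, 0.4, 0.6, 0.65, 0.8, 0.9])  # predicted p

for threshold in (0.3, 0.5, 0.7):
    pred = (p >= threshold).astype(int)      # classify as 1 if p >= threshold
    tp = int(np.sum((pred == 1) & (y == 1)))  # true positives
    fp = int(np.sum((pred == 1) & (y == 0)))  # false positives
    fn = int(np.sum((pred == 0) & (y == 1)))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"t={threshold}: precision={precision:.2f}  recall={recall:.2f}")
```

On this toy data the low threshold maximizes recall at the cost of precision, and the high threshold does the reverse.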
At threshold = 0.5, the decision boundary lies where z = 0 (log odds = 0, meaning equal odds). Moving the threshold t shifts the effective z-cutoff to the logit of t: z = ln(t / (1 − t)). For example, t = 0.3 gives z ≈ −0.85, and t = 0.7 gives z ≈ +0.85.
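A short sketch of the threshold-to-z-cutoff relationship (the function name `z_cutoff` is illustrative):

```python
import math

def z_cutoff(threshold):
    """Log-odds value at which p = sigmoid(z) crosses the given threshold."""
    return math.log(threshold / (1 - threshold))

for t in (0.3, 0.5, 0.7):
    print(f"threshold {t} -> z-cutoff {z_cutoff(t):+.2f}")
```

Thresholds equidistant from 0.5 give z-cutoffs that are mirror images around 0, matching the symmetry of the sigmoid.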
Try it now! Use the Decision Threshold slider in the controls panel and watch how the scatter plot, sigmoid chart, confusion matrix, and all metrics update simultaneously.