#Definition
An unbiased predictor is a forecaster whose probability estimates are well-calibrated—events they predict at 70% probability occur roughly 70% of the time, events at 30% occur roughly 30% of the time, and so on across all probability levels. Unbiased prediction doesn't mean always being right; it means your confidence levels match actual frequencies.
In prediction markets, unbiased predictors are the ideal participants. Their trades move prices toward accurate probabilities, and their track records demonstrate genuine forecasting skill rather than luck.
#Why It Matters in Prediction Markets
Unbiased prediction is the foundation of prediction market value:
Market accuracy: Prediction markets produce accurate forecasts because they aggregate information from many traders. The more unbiased the participants, the more accurate the market prices.
Skill identification: Distinguishing skilled forecasters from lucky ones requires checking calibration, not just hit rates. An unbiased predictor demonstrates genuine skill.
Self-improvement: Tracking your own calibration reveals systematic biases (overconfidence, underconfidence) that you can correct over time.
Forecast reliability: When evaluating prediction market prices, understanding calibration helps assess how much to trust the probabilities they imply.
Trading edge: If you're more calibrated than the market, your subjective probabilities provide trading edge.
#How It Works
#What Calibration Means
A perfectly calibrated predictor has this property:
- Of all predictions made at 50% confidence, 50% come true
- Of all predictions made at 70% confidence, 70% come true
- Of all predictions made at 90% confidence, 90% come true
- And so on for every probability level
This doesn't mean predicting 70% and being right—it means predicting 70% many times and being right on 70% of those occasions.
#Calibration vs. Resolution
Two components of forecast quality:
Calibration: Do your probability levels match actual frequencies? (Are you unbiased?)
Resolution: Do you assign different probabilities to different events, or do you predict 50% for everything? (Are you informative?)
A forecaster who predicts 50% for everything can be well calibrated (provided outcomes split roughly evenly) yet useless: they never distinguish likely from unlikely outcomes. Good forecasting requires both calibration (being unbiased) and resolution (being informative).
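To make the distinction concrete, here is a minimal Python sketch with made-up data: two hypothetical forecasters score the same ten events, one always saying 50% and the other committing to 80% or 20%. Both come out calibrated, but only the second separates likely from unlikely events.

```python
# Toy illustration with hypothetical data: two forecasters on the same ten events.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]                      # 5 of 10 occur
preds_a  = [0.5] * 10                                           # A: calibrated, no resolution
preds_b  = [0.8, 0.8, 0.8, 0.8, 0.2, 0.8, 0.2, 0.2, 0.2, 0.2]   # B: calibrated AND informative

def frequency_by_level(preds, outcomes):
    """Actual 'Yes' frequency among predictions made at each stated probability."""
    by_level = {}
    for p, o in zip(preds, outcomes):
        by_level.setdefault(p, []).append(o)
    return {p: sum(os) / len(os) for p, os in sorted(by_level.items())}

print(frequency_by_level(preds_a, outcomes))   # {0.5: 0.5}            -> calibrated but useless
print(frequency_by_level(preds_b, outcomes))   # {0.2: 0.2, 0.8: 0.8}  -> calibrated and useful
```

The Brier score introduced later in this article rewards the second forecaster for exactly this reason (0.16 versus 0.25 on this toy data).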
#Visualizing Calibration
A calibration curve plots predicted probabilities against actual frequencies:
(Figure: calibration curve; the blue line shows perfect calibration along the diagonal, the orange line shows an overconfident predictor falling below it.)
Perfect calibration: Points fall on the diagonal line (predicted = actual)
Overconfidence: Points below the line (predicting 80%, but only 60% come true)
Underconfidence: Points above the line (predicting 60%, but 80% come true)
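A minimal sketch of how those curve points are computed, assuming hypothetical prediction data and no plotting library: bin the forecasts into probability ranges, then compare the average stated probability in each bin with the share of events that actually occurred.

```python
def calibration_points(preds, outcomes, n_bins=5):
    """Return one (mean predicted, actual frequency, count) tuple per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(preds, outcomes):
        i = min(int(p * n_bins), n_bins - 1)   # e.g. 0.0-0.2, 0.2-0.4, ... with 5 bins
        bins[i].append((p, o))
    points = []
    for pairs in bins:
        if pairs:
            mean_pred = sum(p for p, _ in pairs) / len(pairs)
            freq      = sum(o for _, o in pairs) / len(pairs)
            points.append((round(mean_pred, 2), round(freq, 2), len(pairs)))
    return points

# Hypothetical overconfident forecaster: high-confidence calls come true less often than stated.
preds    = [0.9, 0.9, 0.9, 0.9, 0.8, 0.8, 0.8, 0.2, 0.2, 0.1]
outcomes = [1,   1,   0,   1,   1,   0,   0,   0,   1,   0]
print(calibration_points(preds, outcomes))
# [(0.1, 0.0, 1), (0.2, 0.5, 2), (0.86, 0.57, 7)] -> the high bin sits well below the diagonal
```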
#Checking for Bias
```python
def check_bias(predictions, outcomes):
    """
    Compare average predicted probability to overall actual frequency.
    Positive bias = overestimating probability on average.
    Negative bias = underestimating probability on average.
    """
    avg_prediction = sum(predictions) / len(predictions)
    actual_rate = sum(outcomes) / len(outcomes)
    bias = avg_prediction - actual_rate
    print(f"Avg Prediction: {avg_prediction:.1%}")
    print(f"Actual Outcome Rate: {actual_rate:.1%}")
    print(f"Bias: {bias:.1%}")
    if abs(bias) < 0.05:
        return "Unbiased (Well-calibrated)"
    elif bias > 0:
        return "Positive Bias (Optimistic/Overestimating)"
    else:
        return "Negative Bias (Pessimistic/Underestimating)"

# Example: 10 predictions averaging 70%, but only 4 resolved 'Yes'
preds = [0.7, 0.8, 0.6, 0.9, 0.5, 0.7, 0.8, 0.4, 0.9, 0.7]
results = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]  # 4 wins
print(check_bias(preds, results))
# Bias: 30.0% -> Positive Bias (severe overestimation)
```
#Numerical Example
A forecaster makes 100 binary predictions over a year:
| Predicted Probability | Number of Predictions | Number Correct | Actual Frequency |
|---|---|---|---|
| 50-60% | 20 | 11 | 55% |
| 60-70% | 25 | 17 | 68% |
| 70-80% | 30 | 23 | 77% |
| 80-90% | 15 | 13 | 87% |
| 90-100% | 10 | 9 | 90% |
This forecaster is reasonably well-calibrated—actual frequencies roughly match prediction ranges. Their 70-80% predictions came true 77% of the time, close to the expected 75%.
#Common Calibration Errors
Overconfidence (most common):
- Predicting 90% when true probability is 70%
- Predictions cluster at extremes (high confidence) but outcomes are more moderate
- 90% predictions come true only 75% of the time
Underconfidence (less common):
- Predicting 60% when true probability is 80%
- Reluctance to commit to strong predictions
- 60% predictions come true 80% of the time
Base rate neglect:
- Ignoring how often outcomes typically occur
- Predicting rare events too frequently or vice versa
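If your track record shows one of the first two errors above, one simple and purely illustrative correction is to rescale your forecasts in log-odds space before acting on them: a factor below 1 dampens overconfidence, while a factor above 1 pushes back against underconfidence. The sketch below assumes a fixed factor chosen from past calibration data.

```python
import math

def shrink_toward_half(p, k=0.8):
    """Rescale a forecast's log-odds by k.

    k < 1 pulls the forecast toward 50% (dampens overconfidence);
    k > 1 pushes it toward the extremes (counters underconfidence).
    """
    p = min(max(p, 1e-6), 1 - 1e-6)          # guard against log(0) at 0% or 100%
    log_odds = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-k * log_odds))

print(round(shrink_toward_half(0.90), 3))    # ~0.853: a 90% call is toned down to about 85%
print(round(shrink_toward_half(0.50), 3))    # 0.5:    an even-odds call is left unchanged
```

The factor itself should come from your own calibration history, and re-checking calibration after applying it is the only way to know whether the adjustment actually helps.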
#Examples
Well-calibrated market: A prediction market for primary elections shows 75% probability for the frontrunner. Across many similar situations (frontrunners at 75%), roughly 75% win. The market is an unbiased predictor—its prices translate to accurate probabilities.
Overconfident forecaster: A political pundit predicts elections with high confidence—typically 85-95% certainty. Reviewing their track record, their 90% predictions come true only 70% of the time. They're overconfident, not an unbiased predictor. Trading against their extreme predictions might be profitable.
Calibrated trader: A prediction market trader tracks all their trades. Positions entered when they estimated 65% probability win 63% of the time; positions at 80% probability win 78% of the time. They're approximately unbiased, suggesting genuine skill rather than luck.
Market miscalibration: During a specific period, a prediction market systematically overestimates underdogs—events priced at 20% actually occur 30% of the time. The market is biased, creating opportunities for traders who recognize and exploit this pattern.
#Risks and Common Mistakes
Small sample sizes: Calibration requires many predictions to assess. A forecaster with ten predictions can't be meaningfully evaluated for calibration—the sample is too small.
Cherry-picking predictions: Evaluating only remembered predictions (usually the correct ones) inflates apparent calibration. Track all predictions, not just memorable ones.
Ignoring the extremes: Calibration near 50% says little about skill, since a hedged coin-flip forecast carries little information either way. Calibration at 90% or 10% is harder to achieve and more revealing of genuine forecasting ability.
Conflating calibration with accuracy: A calibrated forecaster isn't always right—they're right at the rate they predict. A 70% prediction that fails isn't a mistake; 30% of 70% predictions should fail.
Assuming market calibration: Prediction markets are generally well-calibrated but not perfectly so. Systematic biases (like the favorite-longshot bias) create persistent miscalibration.
#Measuring Calibration: Brier Score
The Brier score measures forecast accuracy, incorporating both calibration and resolution:
Brier Score = (1/N) × Σ(prediction - outcome)²
Where:
- prediction = your probability (0 to 1)
- outcome = what happened (0 or 1)
- N = number of predictions
Interpretation:
- Perfect predictions: 0
- Random guessing (always 50%): 0.25
- Always wrong with high confidence: approaches 1
- Lower is better
Example calculation:
- You predict 80% (0.80), event happens (1): (0.80 - 1)² = 0.04
- You predict 80% (0.80), event doesn't happen (0): (0.80 - 0)² = 0.64
- You predict 30% (0.30), event doesn't happen (0): (0.30 - 0)² = 0.09
As a rough benchmark, a forecaster whose Brier score stays consistently below 0.20 on genuinely uncertain questions is performing well.
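The sketch below, using hypothetical data, computes the Brier score straight from the formula above and also splits it into the standard reliability, resolution, and uncertainty terms, which ties back to the calibration-versus-resolution distinction from earlier: reliability measures miscalibration (lower is better), resolution measures informativeness (higher is better).

```python
def brier_score(preds, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(preds)

def brier_decomposition(preds, outcomes):
    """Split the Brier score into reliability - resolution + uncertainty.

    Groups by exact forecast value, so the identity
    brier = reliability - resolution + uncertainty holds exactly.
    """
    n = len(preds)
    base_rate = sum(outcomes) / n
    groups = {}
    for p, o in zip(preds, outcomes):
        groups.setdefault(p, []).append(o)
    reliability = sum(len(os) * (p - sum(os) / len(os)) ** 2 for p, os in groups.items()) / n
    resolution  = sum(len(os) * (sum(os) / len(os) - base_rate) ** 2 for os in groups.values()) / n
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty

# Hypothetical forecaster: mostly sensible, slightly overconfident at 80%.
preds    = [0.8, 0.8, 0.8, 0.8, 0.6, 0.6, 0.6, 0.3, 0.3, 0.3]
outcomes = [1,   1,   0,   1,   1,   0,   1,   0,   0,   1]

bs = brier_score(preds, outcomes)
rel, res, unc = brier_decomposition(preds, outcomes)
print(f"Brier: {bs:.3f}  (reliability {rel:.3f} - resolution {res:.3f} + uncertainty {unc:.3f})")
# Brier: 0.211  (reliability 0.003 - resolution 0.032 + uncertainty 0.240)
```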
#Practical Tips for Traders
- Track all predictions: Record every probability estimate you make, not just the ones you act on. This enables calibration analysis.
- Use probability ranges: Instead of "70%," try "65-75%." This acknowledges uncertainty and reveals whether you're consistently biased within your ranges.
- Check calibration regularly: Periodically analyze your prediction history. Are your 80% predictions coming true 80% of the time?
- Adjust for discovered biases: If you find you're overconfident at high probabilities, consciously adjust future estimates downward.
- Compare to market calibration: If markets at 70% resolve Yes 75% of the time, you can exploit this by buying at 70% (see the worked sketch after this list).
- Distinguish calibration from luck: A short track record of correct predictions might be luck. Calibration across many predictions at various probability levels suggests skill.
- Train with calibration exercises: Practice predicting everyday events and checking calibration. This builds intuition for well-calibrated probability estimation.
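As a worked version of the "compare to market calibration" tip, the numbers below are hypothetical and ignore fees, spreads, and the possibility that the apparent edge is just noise.

```python
# Hypothetical edge: contracts priced at $0.70 have historically resolved Yes 75% of the time.
price     = 0.70   # cost of one Yes share (pays $1.00 if the event occurs)
true_prob = 0.75   # your estimate of the real resolution frequency at this price

ev_per_share = true_prob * 1.00 - price                    # expected payout minus cost
print(f"EV per $0.70 share: ${ev_per_share:.2f}")          # $0.05
print(f"Expected return:    {ev_per_share / price:.1%}")   # ~7.1%, before fees and spread
```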
#Related Terms
- Market Efficiency
- Subjective Probability
- Posterior Probability
- Expected Value (EV)
- Information Aggregation
- Wisdom of Crowds
- Survivorship Bias
#FAQ
#What is an unbiased predictor in simple terms?
An unbiased predictor is someone whose confidence levels match reality. When they say "70% chance," it happens about 70% of the time. They're not overconfident (saying 90% for things that happen 70% of the time) or underconfident (saying 50% for things that happen 80% of the time). Their probability estimates can be taken at face value.
#How is calibration different from being right?
Being right means a specific prediction came true. Calibration means your confidence levels match long-term frequencies. A calibrated predictor who says "70%" will be wrong 30% of the time—that's not failure, that's correct calibration. You evaluate calibration across many predictions, not by whether any single prediction was right.
#Are prediction markets unbiased predictors?
Generally yes, though not perfectly. Research shows prediction market prices are reasonably well-calibrated—events priced at 70% happen roughly 70% of the time. However, some biases exist, like the favorite-longshot bias where very low-probability events are slightly overpriced. Overall, prediction markets are among the most calibrated forecasting mechanisms available.
#How many predictions do I need to check calibration?
At least 50-100 predictions provide a rough picture. For reliable calibration assessment, especially at extreme probabilities (90%+), you need hundreds of predictions. This is why professional forecasters and prediction markets—which make thousands of predictions—can be more confidently evaluated than individual pundits making occasional calls.
#Can I become more calibrated?
Yes. Research shows calibration improves with practice and feedback. Track your predictions, check outcomes, analyze where you're over- or underconfident, and consciously adjust. Calibration training exercises—making many predictions about verifiable events and checking results—measurably improve forecasting accuracy over time.