#Definition
Posterior probability is the probability of an outcome after taking new evidence into account. It represents your updated belief, calculated by combining your prior probability (what you believed before the evidence) with the likelihood of observing that evidence given different outcomes.
In prediction markets, posterior probability is how traders should update their positions when polls release, news breaks, or events unfold. It's the mathematical framework for rational belief revision.
#Why It Matters in Prediction Markets
Posterior probability is the foundation of rational trading in dynamic markets:
Continuous updating: Prediction markets constantly incorporate new information. Understanding posterior probability helps you update appropriately—neither overreacting nor underreacting to new evidence.
News interpretation: When a poll releases or news breaks, posterior probability tells you how much to shift your probability estimate based on both the news and your prior beliefs.
Avoiding cognitive biases: Many heuristics lead to poor updating—anchoring too heavily on priors or overweighting new information. Bayesian reasoning provides a corrective framework.
Information edge: Traders who update more accurately than the market gain edge. If others overreact to news, correct posterior reasoning identifies mispricings.
Price discovery understanding: Market prices update through traders applying posterior reasoning. Understanding the process helps interpret price movements.
#How It Works
#Bayes' Theorem
The mathematical foundation for calculating posterior probability:
P(Outcome | Evidence) = [P(Evidence | Outcome) × P(Outcome)] / P(Evidence)
Where:
- P(Outcome | Evidence): Posterior probability—what we want
- P(Outcome): Prior probability—what we believed before
- P(Evidence | Outcome): Likelihood—how likely this evidence if the outcome occurs
- P(Evidence): Marginal probability of the evidence under all scenarios
#Simplified Intuition
The key insight: the strength of updating depends on how much more likely the evidence is under one hypothesis versus another.
```python
def bayesian_update(prior, likelihood_true, likelihood_false):
    """
    Update probability based on new evidence.

    prior: P(H) - initial belief
    likelihood_true: P(E|H) - probability of the evidence if the hypothesis is true
    likelihood_false: P(E|~H) - probability of the evidence if the hypothesis is false
    """
    numerator = likelihood_true * prior
    denominator = numerator + likelihood_false * (1 - prior)
    return numerator / denominator

# Example: prior = 50%, a new poll shows the candidate leading,
# P(Poll | Win) = 70%, P(Poll | Lose) = 20%
post = bayesian_update(0.50, 0.70, 0.20)
print(f"Updated probability: {post:.1%}")  # Result: 77.8%
```
Strong updating: Evidence that's much more likely if Outcome A is true than if Outcome B is true strongly shifts probability toward A.
Weak updating: Evidence that's roughly equally likely under both outcomes barely moves probabilities.
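Both cases can be checked numerically with the same update rule. The sketch below re-defines the earlier helper and uses illustrative likelihood pairs (the specific numbers are assumptions, chosen only to make the contrast visible):

```python
def bayesian_update(prior, likelihood_true, likelihood_false):
    """Posterior for a binary hypothesis via Bayes' theorem."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1 - prior))

prior = 0.50

# Strong updating: evidence 9x more likely under A than under B.
strong = bayesian_update(prior, 0.90, 0.10)
print(f"Strong evidence: {strong:.1%}")  # 90.0%

# Weak updating: evidence roughly equally likely either way.
weak = bayesian_update(prior, 0.55, 0.50)
print(f"Weak evidence: {weak:.1%}")  # 52.4%
```

The ratio of the two likelihoods (the likelihood ratio) is what drives the size of the shift, not either likelihood on its own.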
#Numerical Example: Poll Release
Prior belief: Candidate has 50% chance of winning
New poll: Shows candidate leading by 5 points
Question: What's the posterior probability?
Step 1: Estimate likelihood of this poll result under each scenario
- If candidate will win: 70% chance of seeing a +5 poll (winners often lead)
- If candidate will lose: 20% chance of seeing a +5 poll (possible but unlikely)
Step 2: Apply Bayes' theorem
P(Win | Poll+5) = [P(Poll+5 | Win) × P(Win)] / P(Poll+5)
P(Poll+5) = P(Poll+5 | Win) × P(Win) + P(Poll+5 | Lose) × P(Lose)
P(Poll+5) = 0.70 × 0.50 + 0.20 × 0.50 = 0.35 + 0.10 = 0.45
P(Win | Poll+5) = (0.70 × 0.50) / 0.45 = 0.35 / 0.45 = 0.778
Posterior probability: ~78% (up from 50% prior)
#Sequential Updating
When multiple pieces of evidence arrive, update sequentially—each posterior becomes the prior for the next update.
Day 1: Prior 50% → Poll A releases → Posterior 65%
Day 2: Prior 65% → Poll B releases → Posterior 72%
Day 3: Prior 72% → Debate happens → Posterior 68%
Each update builds on the previous, incorporating all accumulated evidence.
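The day-by-day chain can be sketched as a loop where each posterior feeds back in as the next prior. The likelihood pairs below are illustrative assumptions, chosen to roughly match the timeline above:

```python
def bayesian_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior for a binary hypothesis via Bayes' theorem."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Each tuple is (P(evidence | win), P(evidence | lose)) - illustrative values.
evidence_stream = [
    (0.70, 0.35),  # Day 1: favorable poll
    (0.60, 0.45),  # Day 2: mildly favorable poll
    (0.40, 0.55),  # Day 3: shaky debate performance
]

belief = 0.50  # starting prior
for day, (lt, lf) in enumerate(evidence_stream, start=1):
    belief = bayesian_update(belief, lt, lf)
    print(f"Day {day}: posterior = {belief:.1%}")
# Day 1 ≈ 66.7%, Day 2 ≈ 72.7%, Day 3 ≈ 66.0%
```

Note that the final number depends only on the prior and the full set of evidence, not on the order in which independent pieces of evidence arrive.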
#The Prior Matters
Same evidence can lead to different posteriors depending on starting beliefs:
Scenario A: Prior 20%, +5 poll → Posterior ~47%
Scenario B: Prior 80%, +5 poll → Posterior ~93%
(both computed with the same poll likelihoods as the worked example: 70% if winning, 20% if losing)
The poll moves both estimates up, but by different amounts. Strong priors require stronger evidence to shift substantially.
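Using the poll likelihoods from the worked example (70% if winning, 20% if losing), the dependence on the prior is easy to verify:

```python
def bayesian_update(prior, likelihood_true, likelihood_false):
    """Posterior for a binary hypothesis via Bayes' theorem."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1 - prior))

# Same evidence, three different starting beliefs.
for prior in (0.20, 0.50, 0.80):
    post = bayesian_update(prior, 0.70, 0.20)
    print(f"Prior {prior:.0%} -> Posterior {post:.1%}")
# Prior 20% -> Posterior 46.7%
# Prior 50% -> Posterior 77.8%
# Prior 80% -> Posterior 93.3%
```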
#Examples
Endorsement impact: A major newspaper endorses Candidate X. You estimated 55% win probability. How much should this endorsement shift your estimate? Consider: how often do endorsed candidates win vs. non-endorsed? If endorsement historically correlates weakly with winning (60% vs. 50%), the posterior shifts modestly to perhaps 58%. If endorsement strongly predicts winning (80% vs. 40%), the shift is larger.
Polling error adjustment: Your prior has Candidate A at 60%. A new poll shows them at 48%. Rather than jumping to 48%, you calculate the posterior considering: poll methodology (quality), sample size (reliability), and historical polling error. If polls are noisy, posterior might be 54%—weighted between prior and poll.
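One crude way to operationalize that weighting is a shrinkage rule: pull the poll reading toward the prior in proportion to how reliable polls are. This is a simplification, not a full Bayesian model, and the function name and 0.5 reliability figure are illustrative assumptions:

```python
def shrink_toward_prior(prior, poll, reliability):
    """
    Blend a noisy poll reading with the prior belief.
    reliability in [0, 1]: 1.0 = trust the poll fully, 0.0 = ignore it.
    """
    return reliability * poll + (1 - reliability) * prior

# Prior 60%, poll reads 48%, polls assumed only moderately reliable.
post = shrink_toward_prior(prior=0.60, poll=0.48, reliability=0.5)
print(f"Posterior: {post:.0%}")  # 54%
```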
Cascading evidence: Early primary results come in. Each county shifts your posterior. A candidate wins County 1 (posterior goes up), loses County 2 (posterior goes down), wins County 3 (posterior goes up again). The final posterior incorporates all evidence sequentially.
Debate performance: Before a debate, you estimate 50%. The candidate performs poorly. Your assessment: 30% chance of a poor performance if they'll win (bad nights happen), 60% chance if they'll lose (consistent with losing). The posterior shifts toward losing: roughly a 33% win probability now.
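Plugging those numbers (50% prior, 30% vs. 60% likelihoods) directly into Bayes' theorem gives the exact figure:

```python
def bayesian_update(prior, likelihood_true, likelihood_false):
    """Posterior for a binary hypothesis via Bayes' theorem."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1 - prior))

# Prior 50%; a poor debate is 30% likely if they'll win, 60% if they'll lose.
post = bayesian_update(0.50, 0.30, 0.60)
print(f"Win probability after poor debate: {post:.1%}")  # 33.3%
```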
#Risks and Common Mistakes
Base rate neglect: Overweighting new evidence while ignoring prior probabilities. A single poll showing a 10-point lead doesn't mean 90% win probability if historical base rates suggest leads often evaporate.
Anchoring too strongly on priors: The opposite error—refusing to update sufficiently when strong evidence arrives. If multiple high-quality polls all point the same direction, your prior should shift substantially.
Ignoring evidence quality: Not all evidence is equal. A professional poll with 1,000 respondents should move posteriors more than a Twitter survey. Weight evidence by its diagnosticity.
Double-counting evidence: If market prices already incorporate a poll, trading on that poll involves using the same evidence twice. The market price is already a posterior reflecting public information.
Misestimating likelihoods: Posterior accuracy depends on correctly estimating how likely evidence is under different scenarios. Getting these wrong produces miscalibrated posteriors.
Treating opinions as evidence: Someone's prediction isn't diagnostic evidence unless they have relevant private information. Consensus opinion often just reflects the same public information you already have.
#Practical Tips for Traders
- Estimate likelihoods explicitly: When evidence arrives, ask "how much more likely is this evidence if Outcome A is true versus Outcome B?" Make this explicit rather than intuitive.
- Use the market as a prior: When you disagree with market prices, use Bayesian reasoning. What evidence would you need to move the market to your view? How strong is your evidence?
- Update incrementally: Don't jump to new positions based on single pieces of evidence. Update your posterior and compare to market price. Trade when the gap is sufficient.
- Distinguish evidence strength: A high-quality poll moves posteriors more than rumors. Weight your updates by evidence reliability.
- Track your updating accuracy: Are you consistently moving posteriors in the right direction? Miscalibrated updating suggests you're misestimating likelihoods or prior quality.
- Beware of confirmation bias: People tend to overweight evidence supporting their current positions and underweight contrary evidence. Apply Bayesian reasoning equally to both.
- Use multiple evidence streams: Combine different evidence types (polls, expert opinions, historical patterns) to triangulate more robust posteriors.
#Related Terms
- Conditional Probability
- Expected Value (EV)
- Information Aggregation
- Price Discovery
- Market Efficiency
- Heuristics
- News Trader
#FAQ
#What is posterior probability in simple terms?
Posterior probability is your updated belief after seeing new evidence. Before a poll, you might think a candidate has a 50% chance. After the poll shows them leading, your posterior—your new estimate—might be 65%. It's the answer to "what do I believe now that I know this new information?"
#How is posterior probability different from conditional probability?
Conditional probability P(A|B) asks "what's the probability of A given B is true?" Posterior probability is a specific application: it's the probability of a hypothesis (like "Candidate wins") given observed evidence (like "Poll shows lead"). Posterior probability uses Bayes' theorem to flip conditional probabilities around.
#Why can't I just use the new information directly?
Because the new information combines with what you already knew. A poll showing 55% support doesn't mean 55% win probability—it depends on how reliable polls are, how much can change before election day, and your prior assessment. The posterior weighs new evidence against prior knowledge.
#How do markets incorporate posterior updating?
When new information arrives, traders update their beliefs using some version of posterior reasoning. They buy if their posterior probability exceeds market price, sell if below. The collective trading activity pushes prices toward a consensus posterior. Markets are essentially distributed Bayesian updating machines.
#What if I have different priors than the market?
Then your posteriors will differ from market prices even after the same evidence. If you had stronger prior conviction, the same evidence moves you less. If your prior was different, you end up in a different place. This creates trading opportunities when you believe your prior was better justified than the market's.