Predictions are the tax on beliefs. If you claim to know something about the future, you should be willing to put a number on it and be held accountable.

This page is my public prediction log. Each prediction has a date, a specific claim, a probability, and (eventually) a resolution. The goal isn’t to be right; it’s to be calibrated. When I say 80%, I should be right about 80% of the time.

Inspired by Gwern’s prediction tracking, Scott Alexander’s predictions, and Philip Tetlock’s work on superforecasting.


How to Read This

  • Probability: My confidence level. 50% = coin flip. 90% = very confident.
  • Resolution date: When this can be checked. Some are specific dates, others are ranges.
  • Status: 🔮 Open | ✅ Correct | ❌ Wrong | ⏸️ Voided (if the question became meaningless)
  • Predictions are grouped by domain and sorted by date made.
  • I don’t edit predictions after making them. Updates go in the resolution notes.

AI & Machine Learning

Made: February 2026

| # | Prediction | Probability | Resolves | Status |
|---|-----------|-------------|----------|--------|
| 1 | By end of 2027, at least one major AI lab will publish a paper demonstrating meaningful progress on mechanistic interpretability of frontier models (understanding >50% of a specific circuit’s function). | 85% | Dec 2027 | 🔮 |
| 2 | By end of 2028, AI agents will be capable of autonomously completing multi-step software engineering tasks (e.g., implementing a full feature from a spec) with >80% reliability in production codebases. | 70% | Dec 2028 | 🔮 |
| 3 | The “open source vs. closed” AI debate will effectively be won by open-weight models for most practical applications by 2028. Closed models will retain an edge only for frontier capabilities. | 65% | Dec 2028 | 🔮 |
| 4 | By end of 2026, the transformer architecture will still dominate production AI systems. No alternative architecture will have >10% market share. | 90% | Dec 2026 | 🔮 |
| 5 | AI-generated code will account for >30% of all new code committed to production repositories at major tech companies by end of 2027. | 75% | Dec 2027 | 🔮 |

Technology & Industry

Made: February 2026

| # | Prediction | Probability | Resolves | Status |
|---|-----------|-------------|----------|--------|
| 6 | Remote work will remain the dominant mode for senior software engineers through 2030. The “return to office” push will largely fail for top talent. | 80% | Dec 2030 | 🔮 |
| 7 | At least one major AI startup (valued >$5B) will collapse or be acquired at a massive discount by end of 2027 due to inability to build a sustainable business model. | 85% | Dec 2027 | 🔮 |
| 8 | The “vibe coding” trend (non-engineers building software with AI) will produce at least 3 notable product successes but will not significantly reduce demand for senior engineers by 2028. | 75% | Dec 2028 | 🔮 |

Geopolitics

Made: February 2026

| # | Prediction | Probability | Resolves | Status |
|---|-----------|-------------|----------|--------|
| 9 | US-China semiconductor competition will intensify. China will not achieve cutting-edge chip manufacturing (<5nm) domestically by 2030. | 70% | Dec 2030 | 🔮 |
| 10 | The EU AI Act will be meaningfully enforced (at least 3 major fines or enforcement actions) by end of 2028. | 60% | Dec 2028 | 🔮 |
| 11 | Taiwan will not be invaded by China before 2030. | 90% | Dec 2030 | 🔮 |

Personal

Made: February 2026

| # | Prediction | Probability | Resolves | Status |
|---|-----------|-------------|----------|--------|
| 12 | I will still be working primarily in AI/ML engineering by end of 2028. | 85% | Dec 2028 | 🔮 |
| 13 | At least one of my ventures under Mindent AI will generate >$10K MRR by end of 2027. | 55% | Dec 2027 | 🔮 |
| 14 | I will change my mind on at least 2 beliefs currently listed on my Beliefs page during 2026. | 90% | Dec 2026 | 🔮 |
| 15 | My AI timelines will shift significantly (>1 year in either direction) at least once during 2026. | 75% | Dec 2026 | 🔮 |

Calibration Review

No resolutions yet. First review planned for December 2026.

When enough predictions resolve, I’ll analyze my calibration here:

  • Of predictions I rated 90%, how many were right?
  • Of predictions I rated 60%, how many were right?
  • Am I systematically overconfident? Underconfident?

Perfect calibration means my 70% predictions come true 70% of the time. That’s the goal. I expect to be overconfident at first; most people are.
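The review above is a small computation: bucket predictions by stated probability, compare each bucket’s stated probability to its observed hit rate, and compute an overall Brier score. Here’s a sketch of that check; the (probability, outcome) pairs are made up for illustration, not my actual record.

```python
# Calibration check over (stated probability, came_true) pairs.
# The data below is invented for illustration only.
from collections import defaultdict

predictions = [
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, True), (0.7, False),
    (0.6, True), (0.6, False),
]

# Group outcomes by the probability I assigned.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

# Perfect calibration: the hit rate in each bucket matches its probability.
for p in sorted(buckets):
    outcomes = buckets[p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"rated {p:.0%}: {hit_rate:.0%} correct ({len(outcomes)} predictions)")

# Brier score: mean squared error between probability and outcome.
# 0 is perfect; always answering 50% scores 0.25.
brier = sum((p - o) ** 2 for p, o in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```

A consistently lower hit rate than the stated probability in the high buckets is the overconfidence signature I expect to find at first.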


Rules for This Page

  1. No editing predictions after the fact. If I want to update my view, I add a note with a new date; the original prediction stays.
  2. Specific and falsifiable. “AI will be important” doesn’t count. “GPT-5 will score >90th percentile on the bar exam by 2027” does.
  3. Honest probabilities. No hedging with 50% on everything. If I believe something, I should be willing to go above 50%.
  4. Resolve honestly. If I was wrong, I say so. The point is learning, not ego protection.
  5. Review annually. Every December, I’ll resolve what can be resolved and add new predictions.

Last updated: February 2026

Think one of my predictions is off? Tell me why. I’ll consider updating my probability (with a note, of course).


See also: My Epistemic Approach | What I Believe | How I’ve Changed | Questions I’m Exploring