Predictions are the tax on beliefs. If you claim to know something about the future, you should be willing to put a number on it and be held accountable.
This page is my public prediction log. Each prediction has a date, a specific claim, a probability, and (eventually) a resolution. The goal isn’t to be right; it’s to be calibrated. When I say 80%, I should be right about 80% of the time.
Inspired by Gwern’s prediction tracking, Scott Alexander’s predictions, and Philip Tetlock’s work on superforecasting.
## How to Read This
- Probability: My confidence level. 50% = coin flip. 90% = very confident.
- Resolution date: When this can be checked. Some are specific dates, others are ranges.
- Status: 🔮 Open | ✅ Correct | ❌ Wrong | ⏸️ Voided (if the question became meaningless)
- Predictions are grouped by domain and sorted by date made.
- I don’t edit predictions after making them. Updates go in the resolution notes.
## AI & Machine Learning
Made: February 2026
| # | Prediction | Probability | Resolves | Status |
|---|---|---|---|---|
| 1 | By end of 2027, at least one major AI lab will publish a paper demonstrating meaningful progress on mechanistic interpretability of frontier models (understanding >50% of a specific circuit’s function). | 85% | Dec 2027 | 🔮 |
| 2 | By end of 2028, AI agents will be capable of autonomously completing multi-step software engineering tasks (e.g., implementing a full feature from a spec) with >80% reliability in production codebases. | 70% | Dec 2028 | 🔮 |
| 3 | The “open source vs. closed” AI debate will effectively be won by open-weight models for most practical applications by 2028. Closed models will retain an edge only for frontier capabilities. | 65% | Dec 2028 | 🔮 |
| 4 | By end of 2026, the transformer architecture will still dominate production AI systems. No alternative architecture will have >10% market share. | 90% | Dec 2026 | 🔮 |
| 5 | AI-generated code will account for >30% of all new code committed to production repositories at major tech companies by end of 2027. | 75% | Dec 2027 | 🔮 |
## Technology & Industry
Made: February 2026
| # | Prediction | Probability | Resolves | Status |
|---|---|---|---|---|
| 6 | Remote work will remain the dominant mode for senior software engineers through 2030. The “return to office” push will largely fail for top talent. | 80% | Dec 2030 | 🔮 |
| 7 | At least one major AI startup (valued >$5B) will collapse or be acquired at a massive discount by end of 2027 due to inability to build a sustainable business model. | 85% | Dec 2027 | 🔮 |
| 8 | The “vibe coding” trend (non-engineers building software with AI) will produce at least 3 notable product successes but will not significantly reduce demand for senior engineers by 2028. | 75% | Dec 2028 | 🔮 |
## Geopolitics
Made: February 2026
| # | Prediction | Probability | Resolves | Status |
|---|---|---|---|---|
| 9 | US-China semiconductor competition will intensify. China will not achieve cutting-edge chip manufacturing (<5nm) domestically by 2030. | 70% | Dec 2030 | 🔮 |
| 10 | The EU AI Act will be meaningfully enforced (at least 3 major fines or enforcement actions) by end of 2028. | 60% | Dec 2028 | 🔮 |
| 11 | Taiwan will not be invaded by China before 2030. | 90% | Dec 2030 | 🔮 |
## Personal
Made: February 2026
| # | Prediction | Probability | Resolves | Status |
|---|---|---|---|---|
| 12 | I will still be working primarily in AI/ML engineering by end of 2028. | 85% | Dec 2028 | 🔮 |
| 13 | At least one of my ventures under Mindent AI will generate >$10K MRR by end of 2027. | 55% | Dec 2027 | 🔮 |
| 14 | I will change my mind on at least 2 beliefs currently listed on my Beliefs page during 2026. | 90% | Dec 2026 | 🔮 |
| 15 | My AI timelines will shift significantly (>1 year in either direction) at least once during 2026. | 75% | Dec 2026 | 🔮 |
## Calibration Review
No resolutions yet. First review planned for December 2026.
When enough predictions resolve, I’ll analyze my calibration here:
- Of predictions I rated 90%, how many were right?
- Of predictions I rated 60%, how many were right?
- Am I systematically overconfident? Underconfident?
Perfect calibration means my 70% predictions come true 70% of the time. That’s the goal. I expect to be overconfident at first; most people are.
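The review above can be sketched as a few lines of Python: bucket resolved predictions by their stated probability, compare each bucket's stated confidence to its observed hit rate, and compute an overall Brier score. The data below is purely illustrative, not the actual predictions from this page.

```python
# Calibration check sketch: group resolved predictions by stated probability
# and compare to the observed hit rate. Data is illustrative only.
from collections import defaultdict

# (stated probability, resolved correctly?) pairs -- example data, not real
resolved = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False),
    (0.6, True),
]

# Bucket outcomes by the probability I originally stated
buckets = defaultdict(list)
for p, correct in resolved:
    buckets[p].append(correct)

# Perfect calibration: hit rate in each bucket matches the stated probability
for p in sorted(buckets):
    outcomes = buckets[p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: {hit_rate:.0%} correct over {len(outcomes)} predictions")

# Brier score: mean squared gap between stated probability and outcome.
# 0 is perfect; always guessing 50% scores 0.25; lower is better.
brier = sum((p - correct) ** 2 for p, correct in resolved) / len(resolved)
print(f"Brier score: {brier:.3f}")
```

With only 15 open predictions the buckets will be thin, so early reviews will be noisy; the Brier score is the more robust single number until there are enough resolutions per bucket.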
## Rules for This Page
- No editing predictions after the fact. If I want to update my view, I add a note with a new date; the original prediction stays.
- Specific and falsifiable. “AI will be important” doesn’t count. “GPT-5 will score >90th percentile on the bar exam by 2027” does.
- Honest probabilities. No hedging with 50% on everything. If I believe something, I should be willing to go above 50%.
- Resolve honestly. If I was wrong, I say so. The point is learning, not ego protection.
- Review annually. Every December, I’ll resolve what can be resolved and add new predictions.
Last updated: February 2026
Think one of my predictions is off? Tell me why. I’ll consider updating my probability (with a note, of course).
## Related
See also: My Epistemic Approach | What I Believe | How I’ve Changed | Questions I’m Exploring