To help reduce my anxiety and stay sane, I made several predictions about the COVID situation, and the economy in general, in early April. I then avoided all news and social media for the rest of the month. Last night my information quarantine ended, and I scored the predictions.
Instead of saying “here’s the number that I think will happen,” I picked several metrics and gave both probabilities and ranges. For example, I predicted a “90% chance the number of infected will be above 800k and below 5 million.” For each metric, I also gave ranges at the 80%, 70%, and 60% confidence levels. My reason for doing this was to produce scores at the end that would tell me how accurate my sense of confidence is.
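The scoring scheme above is simple to mechanize. Here's a minimal sketch of how it could work; the prediction list, the actual values, and all the names are my own illustration, not anything from the original exercise:

```python
# Each prediction is (metric, stated confidence, low bound, high bound).
# The numbers here are hypothetical examples, not the author's real data.
predictions = [
    ("us_infections", 0.90, 800_000, 5_000_000),
    ("us_infections", 0.60, 1_200_000, 1_800_000),
    ("us_deaths",     0.60, 60_000, 120_000),
]

# Hypothetical observed outcomes at the end of the quarantine period.
actuals = {"us_infections": 1_100_000, "us_deaths": 63_000}

def score(preds, actuals):
    """Return the hit rate at each stated confidence level.

    A prediction 'hits' when the actual value falls inside its range.
    Good calibration means the hit rate at each level roughly matches
    the stated confidence (e.g. ~60% of 60%-confidence ranges hit).
    """
    hits, totals = {}, {}
    for metric, conf, low, high in preds:
        totals[conf] = totals.get(conf, 0) + 1
        if low <= actuals[metric] <= high:
            hits[conf] = hits.get(conf, 0) + 1
    return {conf: hits.get(conf, 0) / totals[conf] for conf in totals}

print(score(predictions, actuals))  # hit rate keyed by confidence level
```

With these example numbers, the 90% range hits but the tighter 60% infection range misses low, which is exactly the kind of feedback the exercise is designed to surface.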
Anxiety is often driven by a sense that something bad will happen. By actively testing my ability to predict the future, my anxiety goes down. I feel more confident in my ability to weed out scenarios that are unlikely but, because they sound so scary, grab more of my attention than they deserve. Maintaining careful control of my attention, and directing it to purposes that serve me, makes me a lot happier than letting it be drawn into whatever scary thing pops up, whether in my imagination or a news feed.
Here’s how I did:
My 60% confidence ranges for COVID were 1.2M-1.8M infections and 60k-120k deaths. In both cases, reality came in right around the bottom of the interval, with infections a tad lower. My biggest miss was the 90% confidence that the S&P 500 would be lower than 280. That one miss actually helps my calibration at the 90% level: if you make ten predictions at 90% confidence and get every one of them right, your confidence estimates are off (too conservative).
Overall, I got around 60% of my 60%-confidence predictions right, and just over 90% right on anything I predicted with higher confidence. That means I should be more aggressive about choosing tighter bounds for my 70% and 80% confidence intervals. Next time I'll also pick a 50% confidence estimate and count how many metrics come in over versus under it.
Anyone can make accurate predictions about the future, as long as they predict a super broad range of outcomes. For example, you know for certain that there will be more than 0 and fewer than 8 billion cases in May. My goal in this project is to narrow my range of predicted outcomes while keeping an accurate sense of my own confidence intervals. Doing this helps me feel more confident rejecting scenarios that feel scary but are wildly unlikely.
It sure as hell beats scrolling the news and clicking refresh.