The OP is anxious to predict the future because he has been wrong or late in the past, but that should simply be seen as a first attempt.
Because there is enormous value in predicting even somewhat well, it's only fair that a first round doesn't go very well. In fact, there are studies showing that professional economists routinely perform worse than "the ordinary person in the street" in their economic forecasts!
It's a great idea to publish predictions and then look back and reflect, because it fine-tunes (to use a fashionable AI term) one's ability to predict.
Also, check out the research by Philip E. Tetlock and co-workers, who have studied people's ability to predict, and the book "Superforecasting" for the story of some people who are particularly good at it (I once met a member of that group, an MIT physics Ph.D. who works for the European Commission).
Tetlock's research tended to focus on very specific, falsifiable True/False questions rather than open-ended questions involving predicted quantities.
A good Tetlock question might be: "Will ChatGPT-5 be released in 2025?"
But not: "Will we see AGI in 2025?" (too non-specific on what AGI is)
Even this one is iffy: "Will OpenAI's valuation increase by at least 50% in 2025?" (because there's a number in the statement)
Once you have a binary question, superforecasters can start with Fermi estimation to find bounds, then set a Bayesian prior and update it as new information and continuous research come in. The answers are typically of the form, "Yes, with a 76.2% probability."
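The updating step can be sketched in a few lines. This is a minimal illustration of Bayes' rule in odds form for a binary question; the base rate and likelihood ratios below are made-up numbers, not from any real forecast or forecaster.

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Update P(event) given evidence, where likelihood_ratio is
    P(evidence | event) / P(evidence | no event) (Bayes' rule, odds form)."""
    odds = prior / (1 - prior)        # convert probability to odds
    odds *= likelihood_ratio          # multiply in the evidence
    return odds / (1 + odds)          # convert back to a probability

# Start from a hypothetical base rate (say, 30% of comparable products
# shipped on schedule), then fold in two invented pieces of news.
p = 0.30
p = update(p, 3.0)   # e.g. a credible leak: 3x likelier if the event will happen
p = update(p, 1.5)   # e.g. a hiring spree: weak supporting evidence
print(f"Yes, with a {p:.1%} probability.")  # prints "Yes, with a 65.9% probability."
```

The point of the odds form is that each new piece of evidence is a simple multiplication, which makes continuous updating cheap as research accumulates.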
So "superforecasters" are prediction specialists on a narrowly defined scope, but they are not prophets. The kinds of predictions that they make don't make for fun, entertaining reading.