The Algorithm Knows Your Future—But Who Controls It?
As AI predictions shape every aspect of our lives, three scholars reveal who really benefits from our algorithmic oracle and what we're losing in the process.
Right now, as you read this sentence, algorithms on some distant server are busy predicting your next move. What you'll click, what you'll buy, even whether you'll commit a crime. With over 1 billion predictions generated every second, we're living inside an invisible oracle that few of us truly understand.
The question isn't whether this predictive layer exists—it's who controls it, and what that means for the rest of us.
The Prediction Machine We Never Asked For
Oxford economist Maximilian Kasy pulls back the curtain in his new book The Means of Prediction. Most AI predictions shaping our lives rely on "supervised learning"—algorithms trained on massive datasets to spot patterns and make educated guesses about future outcomes.
But these aren't just guesses. They determine whether you get that mortgage, land that job, or receive parole. The algorithm doesn't care about your potential or circumstances—it cares about what people like you have done before.
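That phrase, "what people like you have done before," is almost a literal description of how the simplest supervised-learning predictors work. The following is a minimal sketch, not any real lender's system: a nearest-neighbor classifier over invented loan-history data, where a new applicant is scored purely by the recorded outcomes of the most similar past applicants.

```python
# Hypothetical loan-repayment history: (income in $k, prior defaults) -> repaid?
# Every number here is invented purely for illustration.
HISTORY = [
    ((25, 2), 0), ((40, 1), 0), ((60, 0), 1), ((80, 0), 1),
    ((30, 3), 0), ((90, 1), 1), ((55, 2), 0), ((70, 0), 1),
]

def predict_repayment(income_k, defaults, k=3):
    """Score a new applicant by what the k most similar past
    applicants did: 'people like you', and nothing more."""
    def dist(record):
        (inc, dfl), _ = record
        # Scale income so it weighs roughly the same as the defaults count
        return ((inc - income_k) / 25) ** 2 + (dfl - defaults) ** 2
    neighbors = sorted(HISTORY, key=dist)[:k]
    return sum(label for _, label in neighbors) / k

print(predict_repayment(85, 0))  # resembles past repayers -> 1.0
print(predict_repayment(28, 3))  # resembles past defaulters -> 0.0
```

Note what the function never looks at: the applicant's circumstances, intentions, or potential. It can only echo the historical pattern it was handed, which is exactly the point the book makes.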
"If an algorithm selecting what you see on social media promotes outrage, thereby maximizing engagement and ad clicks," Kasy writes, "that's because promoting outrage is good for profits from ad sales." The same logic applies to hiring algorithms that screen out candidates "likely to have family-care responsibilities" or insurance systems that flag people "likely to develop chronic health problems."
This isn't a bug—it's the feature. What's profitable for companies rarely aligns with what's good for individuals or society.
The Rationality Trap
How did we get here? UC Berkeley's Benjamin Recht traces the problem back to World War II in The Irrational Decision. The mathematical models that helped win the war convinced a generation of scientists that computers should be designed as "ideal rational agents"—machines that make optimal decisions by quantifying uncertainty and maximizing utility.
This "mathematical rationality" infected everything. Supply chains, flight schedules, social media feeds—all optimized according to the same casino-like logic of costs, benefits, and statistical probability.
The apostles of this worldview—think Nate Silver, Steven Pinker, and Silicon Valley's tech bros—genuinely believe we'd all be better off if we made decisions like computers. But Recht points out the absurdity: "Advances in clean water, antibiotics, and public health brought life expectancy from under 40 in the 1850s to 70 by 1950"—all without formal optimization algorithms.
Humans managed to build democracy, discover quantum mechanics, and invent airplanes using something algorithms can't quantify: judgment, intuition, and moral reasoning.
Predictions as Self-Fulfilling Prophecies
Oxford philosopher Carissa Véliz offers the sharpest insight in Prophecy: predictions aren't neutral forecasts—they're "magnets that bend reality toward themselves."
Consider Gordon Moore's famous 1965 prediction that computer chip density would double every two years. "Moore's Law" didn't just happen to come true—an entire industry spent billions making it come true because it served their financial interests.
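The sheer scale of what the industry committed itself to delivering is easy to underestimate. A quick back-of-the-envelope calculation (the fifty-year span below is an illustrative assumption, not a figure from the book) shows what "doubling every two years" compounds to:

```python
# Moore's 1965 observation restated as arithmetic:
# density doubles roughly every two years.
def moore_factor(years, doubling_period=2):
    """Multiplicative growth after `years` under a fixed doubling period."""
    return 2 ** (years / doubling_period)

# Illustrative span: fifty years of sustained doubling.
print(f"{moore_factor(50):,.0f}")  # 2**25, about a 33.5-million-fold increase
```

A prediction that demands roughly 33 million times more transistors per chip does not come true on its own; as Véliz argues, billions of dollars were spent bending reality toward it.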
The same dynamic plays out today. When AI boosters promise that artificial general intelligence will "solve humanity's final problem," they're not just making a prediction—they're reshaping how we think about AI's role while distracting us from present-day problems like job displacement, privacy erosion, and algorithmic bias.
Predictions are power moves disguised as analysis. As Véliz notes, "When we believe a prediction and act in accordance with it, it's akin to obeying an order."
The Democratic Alternative
Kasy proposes "data trusts"—collective bodies where citizens democratically decide how their data gets used. It's a compelling vision, but he's realistic about the challenges: "This won't be easy to implement. Or happen overnight."
The race is on between algorithmic control and democratic alternatives. Which will move faster—the systems turning our brains to "goo," or our ability to build something better?
Meanwhile, Recht suggests we remember that mathematical rationality isn't the only way to make good decisions. Some of humanity's greatest achievements came from trusting human judgment over algorithmic optimization.
The future belongs to whoever controls the predictions. The question is whether that will be us—or the machines we built to think for us.
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.