Why People Should Not Fear AI
I was working in AI before LLMs, back when models were niche, demonstrating meaningful results took massive engineering effort, and systems got shipped anyway to feed the data-driven narrative of the moment. LLMs have genuinely improved things: much of that brittleness has fallen away, enabling far more free-form workflows. Still, we are nowhere near the singularity, and people should not be panicking as though humanity has run into a dead end. Nothing is more frustrating than seeing individuals suggest there is no point in learning anymore, when the opposite is true. To make that case, it's important to first lay out why LLMs are not on a path to the singularity.
From Magic Wand to Probabilistic Engine
The fundamental misunderstanding of large language models (LLMs) is thinking of them as reasoning engines. They are not. At their core, they are extraordinarily complex probabilistic models. When you give an LLM a prompt, it isn't "thinking"; it is calculating the most statistically likely sequence of words to follow, based on the patterns it learned from ingesting a vast portion of the internet. Think of it as autocomplete on a cosmic scale.
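To make "autocomplete on a cosmic scale" concrete, here is a minimal sketch of the only decision an LLM makes at each step. The vocabulary and scores below are made up for illustration; a real model does this over a vocabulary of tens of thousands of tokens, one token at a time.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Convert raw scores (logits) into probabilities via softmax, then sample.

    This is the entire "decision" an LLM makes at each step: no reasoning,
    just a weighted dice roll over the vocabulary.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

# A toy four-word vocabulary and made-up scores a "model" might assign
# after the prompt "I'd like a cup of".
vocab = ["coffee", "tea", "strategy", "the"]
logits = [2.1, 1.4, -0.5, 0.3]

choice = sample_next_token(logits, temperature=0.8)
print(vocab[choice])  # usually "coffee": the statistically likely continuation
```

Lowering the temperature sharpens the distribution toward the single most likely token; nothing resembling deliberation happens anywhere in the loop.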
This is why the common "bad workflow" is so frustrating. A user starts with a high-level task like, "Write a marketing plan for my new coffee shop," and receives a generic AI document. Believing the AI "misunderstood," the user responds, "No, stop using dashes and sound more human!" The AI, still just playing a probability game, slightly adjusts its output but lacks the core context to succeed. This back-and-forth is like trying to guide a blindfolded person by shouting vague directions.
The "good workflow" starts from a different premise. It accepts the AI is a probabilistic engine and that the human's job is to constrain the field of probabilities to make a correct answer more likely. Effective AI use is not about asking it for goals; it's about using it to gain precision about goals you already have. A "prompt engineer" is simply someone skilled at providing the necessary context to bias the model's output toward a desired result.
Consider drafting a memo on open-source software risks:
- Bad Prompt: "Write about the risks of open-source software." This will produce a generic, Wikipedia-style article, as it's the most probable response to such a vague query.
- Good Prompt: "Act as a senior counsel at a technology firm. I need a two-page internal memo for our engineering leads. The memo should focus specifically on the viral nature of the GPLv3 license and the potential risks of commingling GPL-licensed code with our core product, 'Project Atlas.' Use a professional but direct tone. Explain the concept of 'copyleft' in simple terms and recommend a clear action item: a mandatory code audit."
This prompt works because it drastically prunes the AI's probability tree by providing guardrails: a persona, a format, specific constraints, and a desired outcome. The human, with their prior understanding, provides the crucial context.
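In practice, those guardrails map directly onto the structure of a chat-style API call. Here is a sketch using the OpenAI Python client; the model name and temperature value are illustrative assumptions, and any API with system/user roles works the same way.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The guardrails: persona and tone live in the system message...
system_msg = (
    "Act as a senior counsel at a technology firm. "
    "Write in a professional but direct tone."
)

# ...while format, constraints, and the desired outcome live in the user message.
user_msg = (
    "Write a two-page internal memo for our engineering leads focusing on the "
    "viral nature of the GPLv3 license and the risks of commingling "
    "GPL-licensed code with our core product, 'Project Atlas'. Explain "
    "'copyleft' in simple terms and recommend one clear action item: "
    "a mandatory code audit."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
    temperature=0.3,  # lower temperature = fewer detours off the pruned tree
)
print(response.choices[0].message.content)
```

Every line of context here removes branches from the probability tree before the model generates a single word.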
The Last Mile is the Most Valuable Mile
This brings us to the fears about AI's economic impact. History shows that technology primarily automates tasks, not jobs, and in doing so, it makes human judgment more valuable, not less. The spreadsheet didn't eliminate the accountant; it freed them from manual calculation and created the financial analyst, who could now model complex future scenarios.
AI is doing the same for knowledge work. It excels at the first 80% of a task: the research, the data synthesis, the first draft, the code boilerplate. This automation of the repetitive and predictable makes the final 20%, the human "last mile," dramatically more important. This is the realm of:
- Strategic Judgment: Is this the right direction? Does this align with our core principles?
- Creative Synthesis: Combining disparate ideas into something truly novel.
- Contextual Nuance: Understanding the unspoken needs of a client or the political dynamics of a team.
Worse still, the definition of "correctness" for humans keeps evolving. Consider what makes a "good" email: the first principles of psychology may not change, but how those principles are expressed shifts over time. Alpha, to borrow the finance term, is often a niche signal a few individuals spot before everyone else catches on. In more scientifically grounded fields correctness is more standardized, but a large portion of the human economy still revolves around relationships and social abstractions.
The new economy boils down to a first-principles understanding of people. It asks us to move beyond being knowledge workers in isolation and to become philosophers and engineers: to reason about the likelihood of correctness, to ask the meaningful questions, and to guide the proper sequence of actions so that information compounds into more information, over and over.
We should not frown upon this evolution. This is the fundamental promise of technology: to liberate us from repetitive labor and allow us to operate more purely in the philosophical and strategic realms. By automating the drudgery, AI elevates the uniquely human skills of critical reasoning and judgment to the forefront of value creation.
Why the Singularity Isn't Coming from LLMs
But what about the future? While progress will continue, the current LLM paradigm has fundamental, structural ceilings that make it a very unlikely path to a true, autonomous general intelligence, or "Singularity."
- As we've established, these are probabilistic models, not reasoning models. More robust probability trees are still probability trees. They are masters of interpolation within their training data but struggle with true extrapolation or novel causal reasoning.
- Fuel is running out. Models like GPT-4 have been trained on a significant portion of the high-quality, publicly accessible internet. We are approaching the limits of available training data. Future gains will be harder to achieve, and we face the risk of "model collapse," where AIs trained on synthetic data from other AIs begin a cycle of degradation (a toy illustration follows this list).
- They face the ambiguity of correctness. Our best method for improving models is Reinforcement Learning from Human Feedback (RLHF), which boils down to asking a person whether response A or B is better; humans can only guess at quality. Even assuming this works as well as it can, it optimizes for preference, not objective truth. An LLM has no clear objective function for what "good" means, nor does it possess an autonomous feedback loop for real-world tasks. A self-driving car gets direct, physical feedback when it fails. An LLM that gives flawed strategic advice receives no inherent signal that it was wrong.
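The "model collapse" risk is easy to feel in a toy version of that loop. The sketch below stands in for real training with something far simpler, fitting a Gaussian, so treat it as an intuition pump rather than evidence about any particular model: each generation is fitted only to synthetic samples drawn from the previous generation's fit, and the estimated spread drifts away from the truth.

```python
import random
import statistics

random.seed(7)  # fixed seed so the run is reproducible

# Generation 0: "real" data with genuine variety.
data = [random.gauss(0.0, 1.0) for _ in range(50)]

for generation in range(25):
    # "Train a model": here, just estimate a mean and standard deviation.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f} stdev={sigma:.3f}")
    # The next generation never sees real data, only the previous fit's output.
    data = [random.gauss(mu, sigma) for _ in range(50)]

# Each refit compounds the previous fit's sampling error instead of being
# anchored to fresh real-world data; run long enough, the spread collapses.
```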
This leaves us with an incredibly powerful tool, but a tool nonetheless. It is a librarian of unimaginable scope, a tireless intern for first drafts, and a simulator for exploring ideas. It is a lever that allows those with clear judgment and deep domain knowledge to compound their knowledge and scale their output like never before. The future belongs to the augmented human, not the autonomous agent (unless a more robust, dynamic RL solution is cracked, but that's a topic for another piece).