This Is The Worst AI Will Ever Be
If AI capabilities are commoditizing, what's your actual competitive advantage?
I was reading a recent Substack post titled "The AI revolution is here. Will the economy survive the transition?"
If you're not familiar with it, some influential names in AI jumped into a Google doc to debate certain topics involving AI. The authors included:
Michael Burry, the guy who called the 2008 crash.
Jack Clark, co-founder of Anthropic.
Dwarkesh Patel, who's interviewed everyone from Mark Zuckerberg to Tyler Cowen.
Patrick McKenzie, the moderator.
As I was reading through it, one of their exchanges gave me pause.
Jack Clark said something regarding discussions with policymakers that applies just as much to the enterprise:
"This is the worst it will ever be! and it's really hard to convey to them just how important that ends up being."
Read that again and think about it. The AI capabilities causing disruption today are the weakest they will ever be. Every concern you have about AI right now?
It's going to get worse…or better, depending on whether you're ready.
The Obvious
When Clark tells policymakers this is the worst it’ll ever be, he’s making a point about trajectory that most people miss. If you last played with LLMs in November, you’re now wildly mis-calibrated about the frontier.
Think about what that means for your business. Whatever AI can do today represents the floor.
Not the ceiling. The floor.
Is your competitor experimenting with AI-powered customer service? If so, then that chatbot is the least sophisticated their AI strategy will ever be. Those coding assistants that seem merely helpful? They'll only get more capable. Fast.
The question isn't whether AI will transform your industry. It's whether you'll lead that transformation or be forced to catch up.
Strategic Implications
Most executives are trained to plan based on current market conditions with reasonable projections for gradual change. But AI doesn't follow gradual improvement curves. It accelerates.
The competitive window is shrinking
That "wait and see" approach to AI adoption isn't cautious. It's falling behind in real-time. Every quarter you delay, your competitors aren't just moving ahead. They're moving ahead faster because the tools themselves are improving.
According to a 2024 McKinsey survey, organizations that adopted AI early were 2.3 times more likely to report significant revenue increases attributed to AI than late adopters.
But here's the part that should keep you up at night. The gap between early and late adopters widened by 40% compared to the previous year's survey. The distance between leaders and laggards isn't staying constant. It's growing.
And it's compounding in ways most people don't grasp yet.
Infrastructure choices compound
The data strategies, cloud architectures, and integration frameworks you build now will either enable or constrain your ability to leverage increasingly powerful AI systems. Building for today's capabilities means rebuilding constantly. Building for adaptability means staying ahead.
Here's an example. If you're building data pipelines that assume current AI models can only process structured data, you're already behind.
The latest models handle video, audio, images, PDFs, and unstructured text with equal ease. Your infrastructure choices made in 2026 based on 2025 capabilities are already outdated.
How much of your current software stack is hardcoded to work with specific tools, specific versions, or even specific platform- and API-versioned data schemas? Now imagine needing to upgrade your AI capabilities every six months. Systems that take nine months (let alone years) to integrate new capabilities will leave you perpetually behind.
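To make the point concrete, here's a minimal sketch of a modality-agnostic ingestion layer. Everything here (`Document`, `register_loader`, `ingest`) is illustrative, not a real library: the idea is that supporting a new input type (PDF, audio, video) becomes a registration, not a pipeline rewrite.

```python
# Illustrative sketch: ingestion that doesn't assume "structured text only".
# New modalities plug in via a registry instead of hardcoded branches.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Document:
    modality: str   # "text", "pdf", "audio", "video", ...
    content: bytes  # raw payload; downstream models decide how to use it
    metadata: dict

_loaders: Dict[str, Callable[[bytes], Document]] = {}

def register_loader(modality: str):
    """Decorator: add support for a new modality without touching callers."""
    def wrap(fn):
        _loaders[modality] = fn
        return fn
    return wrap

@register_loader("text")
def load_text(raw: bytes) -> Document:
    return Document("text", raw, {"chars": len(raw)})

def ingest(modality: str, raw: bytes) -> Document:
    if modality not in _loaders:
        raise ValueError(f"No loader registered for {modality!r}")
    return _loaders[modality](raw)
```

When the next model handles a modality your pipeline has never seen, the change is one new loader, not a re-architecture.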
Talent needs are shifting underneath you
The skills your organization needs today are different from what you'll need in six months. The same logic applies here: if you're hiring for today's AI landscape, you're already hiring for yesterday's needs.
The World Economic Forum's 2025 Future of Jobs Report found that 39% of workers' core skills are expected to change by 2030.
Not just be assisted with. Change.
And that's down slightly from 44% predicted in their 2023 report, not because disruption is slowing but because companies are finally investing more in training to keep pace.
Think about it differently. You can hire the world's best prompt engineer today. In a year, that skill might be as relevant as being an expert at operating a fax machine, because prompt-free workflow engines already exist today.
The winning move isn't hiring for specific AI skills. It's building teams that can learn and adapt faster than the technology evolves. See my article titled The Rise of the Forward Deployed Engineer for a great example of a new position designed for adaptability.
Why We Get This Wrong
There are understandable reasons why leadership teams underestimate AI's trajectory. Let's be honest about them.
Pattern matching to the wrong examples
We've seen hype cycles before. Blockchain. VR. 3D printing. Many found niche applications but didn't transform entire industries overnight.
The temptation is to pattern-match AI to these examples. Don’t worry, it’s natural. But AI is fundamentally different. It's a general-purpose technology more like electricity or the internet than any specific application.
Electricity didn't just create electric lighting. It transformed manufacturing, communication, transportation, and entertainment. AI is following the same path, just faster.
Goldman Sachs Research forecasts that AI could boost U.S. labor productivity by 15% over 10 years, with the technology's impact on global GDP potentially reaching 7% (about $7 trillion) or even climbing to 10-15% in their more recent projections.
For context, $7 trillion is roughly the entire annual GDP of France and the UK combined.
We're not talking about a niche technology finding its market. According to their January 2026 analysis, AI-related spending already accounted for almost one percentage point of U.S. real GDP growth in the first half of 2025 alone.
Exponential change breaks our brains
Our brains are wired for linear thinking. When you see an AI system that's 70% as good as a human at a task, it's natural to think "we have time."
But the jump from 70% to 90% might happen in months. Not years. And 90% to 95% might be enough to fundamentally change human-to-human workflows.
By early 2026, we're seeing Claude Opus 4.5 break 80% on SWE-bench Verified (real GitHub bug fixes), Gemini 3 Pro hit 92.6% on GPQA Diamond (PhD-level science questions), and GPT-5.2 lead on GDPval (real-world professional work across 44 occupations).
The pace hasn't slowed. It's accelerated.
The difference between "interesting toy" and "replaces entire job categories" can be as small as a 15% improvement in capability. You're not watching a gradual evolution. You're watching phase transitions.
Recency bias is hurting you
You evaluate AI based on what it can do today. Maybe you tried ChatGPT for a few tasks or saw a demo. But the version you tested is already outdated by the time you formed an opinion about it.
Imagine you're an executive at a Fortune 500 company that dismissed AI in 2025 after testing it on tasks where it performed poorly. The decision seemed justified at the time. The models were clunky, error-prone, not ready for prime time.
It's January 2026 and you're ready to engage once again. What you don't realize is that you've gifted your competitors a leg up across their last four quarterly earnings announcements. They now have nearly 12 months of organizational learning you don't.
How?
On the technology side, custom integrations were built. Workflow optimizations were developed. Training cycles were completed. Examples were captured and curated. Agents began learning their respective domains.
What's even worse is that your competitors learned how to use AI effectively while it was still at a lower capability level, while your organization sat still. What did they accomplish?
Let's examine this:
Custom integrations built, and built to scale - check.
System of Records (SOR) vendor relationships/capabilities examined and contracts reviewed - check.
Human-to-human workflow analysis, alignment, and optimization completed - check.
FDE’s (Forward Deployed Engineers) up-skilled within the organization - check.
All of this accomplished even though the agents' quality rate sat at just 20%.
Some of this might sound extreme. Maybe even alarming, but the landscape is changing faster than most can grasp. The reason? We've spent decades building enterprise systems around deterministic logic. If X happens, then do Y every time.
Predictable.
AI doesn't work that way. It's probabilistic, contextual, sometimes brilliant and sometimes wrong in ways your existing systems were never designed to handle. You won't fully grasp how transformational this is until you pilot it, capture the context of how you operate and train the agents.
By then, your competitors will have spent 18 months learning to work with and negotiate that uncertainty and risk.
Uncomfortable Economics
Michael Burry, the investor who predicted the 2008 crash and later had a movie made about him, also participated in this conversation. I'd like to take some time to examine his skepticism.
Burry pointed out that in past technology cycles, when one company made a major investment like adding an escalator, competitors had to follow. In the end, neither benefited from that expensive project. No durable margin improvement. Both companies ended up in the same spot.
He worries that's how most AI implementations will play out. Trillions in spending with no clear path to utilization by the real economy. Most companies won't benefit because their competitors will benefit to the same extent. Neither will have a competitive advantage.
The answer: Your company's system of context (more about this in an upcoming article).
When AI commoditizes intelligence itself, the companies that win won't be the ones with the best models or agents. They'll be the ones with the richest, most interconnected understanding of their business, customers, and operations.
Your proprietary data. Your institutional knowledge. Your customer relationships and feedback loops. Your documented processes and hard-won lessons.
This is why companies that wait are making a worse bet than Burry realizes. You're not just missing out on productivity gains. You're missing the window to build your context graph while your competitors are building theirs.
Every month of customer interactions, every workflow optimization, every integration, every piece of feedback creates context that makes your AI more valuable than a competitor's AI using the same underlying model.
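A minimal sketch of that compounding-context idea. `ContextStore` and its methods are hypothetical, not a real library; the point is that the same base model plus your accumulated institutional knowledge produces output a competitor can't replicate:

```python
# Illustrative sketch: institutional knowledge accumulates into a store
# that gets inlined into prompts, so the same base model improves for YOU.
from collections import defaultdict

class ContextStore:
    def __init__(self):
        self._notes = defaultdict(list)

    def record(self, topic: str, lesson: str) -> None:
        """Capture a workflow lesson, customer signal, or integration note."""
        self._notes[topic].append(lesson)

    def prompt_context(self, topic: str, last_n: int = 5) -> str:
        """Inline the most recent institutional knowledge into a prompt."""
        return "\n".join(self._notes[topic][-last_n:])

store = ContextStore()
store.record("refunds", "Enterprise-tier refunds require finance sign-off.")
# The part of the prompt a competitor can't copy: your history.
prompt = f"Context:\n{store.prompt_context('refunds')}\n\nDraft a refund reply."
```

The model is a commodity; the store is not. That's the moat Burry's escalator analogy misses.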
What This Means For Your Strategy
Understanding that this is the worst it'll ever be should reshape how you think about AI strategy. Not next quarter. Now.
Build for adaptability, not today’s capabilities
Don't ask "what can AI do for us right now?"
Ask "how do we build an organization that can rapidly integrate increasingly powerful AI capabilities?"
This means creating modular systems that can swap in better AI models as they become available. Establishing data pipelines and governance frameworks that will scale with the rapid rate of capability creation. Building teams that can experiment, learn, and pivot quickly.
The winning approach is building abstraction layers. Don't build directly on top of one AI model. Build systems that can plug in whatever the best available model is at any given time.
Yes, this requires more upfront architectural thinking. It's worth it.
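Here's what that abstraction layer can look like at its simplest. All class names are hypothetical stand-ins; the point is that application code talks to a stable interface, and swapping in a better model is a one-line change, not a rewrite:

```python
# Sketch of a model-agnostic abstraction layer: callers depend on the
# interface, never on a specific vendor or model version.
from abc import ABC, abstractmethod

class ModelClient(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ModelOfTheQuarter(ModelClient):
    """Stand-in for whichever frontier model you currently route to."""
    def complete(self, prompt: str) -> str:
        return f"[model output for: {prompt}]"

class Router:
    def __init__(self, default: ModelClient):
        self._default = default

    def swap(self, client: ModelClient) -> None:
        # Upgrading models becomes a config change, not a rewrite.
        self._default = client

    def complete(self, prompt: str) -> str:
        return self._default.complete(prompt)
```

Application code holds a `Router`; when a better model ships, you call `swap()` and nothing downstream changes.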
Accelerate your learning curve beyond recognition
Your organization's AI literacy is a depreciating asset if you're not actively developing it. What seemed cutting-edge six months ago is table stakes today.
The executives who'll thrive aren't necessarily the ones who understand AI deeply right now. They're the ones who are learning the fastest (a mentor of mine has taught me that — thank you Rashmi).
I've been part of companies that created AI councils meeting quarterly to discuss strategy. By the time they make a decision, the landscape has shifted. I've also been part of companies that empowered small teams to experiment daily, fail fast, and share learnings across the organization.
A 2024 study by Boston Consulting Group found that companies with decentralized AI experimentation programs saw 3x faster adoption rates than those with centralized, committee-driven approaches.
Speed of learning beats depth of planning when the environment changes this fast.
Every executive should be using AI tools daily for real work. Not playing with them. Using them. You can't make informed strategic decisions about technology you don't understand at a visceral level.
Your VP of Strategy should be using AI to analyze competitive threats. Your CFO should be using it to spot patterns in financial data. Your CHRO should be using it to identify retention risks.
Rethink what creates defensible value
If your competitive advantage relies on tasks that AI is even marginally good at today, that advantage is eroding. Fast.
The defensible moats in an AI-enabled world look different. Proprietary data that gets better with use still matters. Deeply integrated customer relationships still matter. Brand trust and regulatory positioning still matter. Speed of adaptation and organizational learning matters more than ever.
But here's what doesn't create a moat anymore: having smart people who can do analysis faster than average. AI is already better than “average” at most analytical tasks. It'll be better than “good” by the next model upgrade announcement this quarter.
The pyramid is flattening.
Plan for workforce transformation
Many executives think about AI as an augmentation tool. Something that makes existing workers more productive. That’s not wrong, but it’s incomplete.
As capabilities improve, entire job categories will transform. The question isn’t just “how do we make our analysts 20% more efficient?” It’s “what does the analyst role become when AI can do 80% of traditional analyst work?”
According to research from the University of Pennsylvania and OpenAI, around 80% of the U.S. workforce could have at least 10% of their work tasks affected by AI. Approximately 19% of workers may see at least 50% of their tasks impacted. This research was published in 2023 based on capabilities that already existed then.
The capabilities have improved dramatically since. The impact percentages have gone up.
Here’s what smart companies are doing. They’re not just thinking about productivity gains. They’re thinking about role transformation. What happens when your customer service team spends 80% less time answering routine questions? Do you cut headcount? Or do you redeploy those people to higher-value work that AI can’t do yet?
The companies that treat this as a cost-cutting exercise only will see short-term gains and long-term strategic weakness. The companies that treat it as a transformation opportunity will build entirely new capabilities.
This is the worst AI will ever be. Proceed accordingly.