Capital over context?
The arbitrage opportunity hiding in plain sight.

The Silicon Valley playbook has worked the same way for decades. Raise venture capital. Recruit elite engineers from MIT, Stanford, Carnegie Mellon. Build sophisticated software. Then fly into Cincinnati or Milwaukee or Kansas City to explain to manufacturers and logistics companies why they need your product.
Here’s what nobody says out loud: the people who actually understand those manufacturers’ problems already left Cincinnati for Mountain View.
And here’s what’s changing: they don’t need to anymore.
We’re entering an era where context is no longer overshadowed by capital. The hard-won operational knowledge, the decade of watching orders flow through a system, the battle scars from five ERP implementations, the trust earned from solving real problems: all of this matters more than your tech stack.
Agentic AI and coding LLMs are picking apart the moat that separated people who understand problems from people who can build software. The question isn’t whether this shift is happening. The question is whether the people with context will realize it before the next wave of well-funded startups figures out how to truly take advantage of it.
The Capital-Driven Model Runs on Extraction
The formula is clean.
Raise a seed round. Hire engineers at $200K base plus equity. Build for six months. Launch. Iterate. Raise Series A. Scale the team.
Eventually you’ll need to understand your customers, but that comes later. First: product. The assumption is that smart people can figure out any domain if they’re smart enough.
This works until it doesn’t. But the extraction continues either way.
Product or Customer-First?
When you raise $10 million before you have customers, you build what investors find exciting.
That's not cynicism; it's incentives.
You build elegant APIs. Beautiful dashboards. Impressive demonstrations. You optimize for the next funding round, not for the warehouse manager who'll actually use the software at 5 AM when the overnight shift is short-staffed.
Imagine a team of engineers building inventory management software. It's technically sophisticated, with genuinely impressive architecture. They demo it to a mid-sized distributor in Ohio. The distributor's ops manager asks: "How does this handle partial pallet picks with mixed lot numbers when the forklift driver doesn't have both hands free?"
Silence. That scenario wasn't in the user stories.
The ops manager has been managing that warehouse for nineteen years. She knows every workaround in their current system. She knows why they track things the way they do, even if it seems weird to outsiders. She knows what breaks when you're short-staffed, when a SKU gets discontinued, when a supplier changes packaging mid-season.
The engineers will learn this eventually. They'll hire domain experts, usually expensive consultants or a VP who worked at a competitor. They'll do customer research, build personas, iterate.
It works well enough. But they're starting from zero on the understanding curve while charging premium prices from day one.
What Context Actually Means
Context isn't tribal knowledge or institutional memory or any of those other soft phrases people use to avoid being specific. Context is concrete.
It's knowing that the production line always jams on Tuesdays because that's when they run the narrow-gauge material and the tensioner was installed slightly off-spec in 2019.
It's understanding that Purchase Order 4000-series numbers mean the order came through the EDI system and needs different handling than manually entered orders.
It's recognizing that when Customer X says they need something by Friday, they actually mean it needs to ship by Wednesday because their receiving dock is closed Thursdays.
This knowledge lives in people's heads, in email chains, in the muscle memory of experienced workers. Very little of it makes it into documentation. Almost none of it makes it into the requirements doc a consultant writes after three weeks of discovery interviews.
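To make "context" concrete in software terms: rules like the ones above eventually end up as a few lines of deliberate code. Here is a toy sketch, assuming hypothetical order shapes and customer names (nothing here comes from a real system):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Order:
    po_number: str        # e.g. "4017-2231" (hypothetical format)
    customer: str
    requested_date: date  # the date the customer asked for

def is_edi_order(order: Order) -> bool:
    # Context rule: 4000-series PO numbers came in through the EDI
    # system and need different downstream handling.
    return order.po_number.startswith("4")

def effective_ship_date(order: Order) -> date:
    # Context rule: this customer's receiving dock is closed Thursdays,
    # so a Friday need-by really means "ship by Wednesday".
    if order.customer == "Customer X" and order.requested_date.weekday() == 4:
        return order.requested_date - timedelta(days=2)  # back up to Wednesday
    return order.requested_date
```

Neither rule is in any requirements doc; both live in the heads of people who have processed those orders for years.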
Hard-Won Lessons Beat Book Learning
You can’t teach someone in a conference room what it feels like when the system goes down during peak season. You can describe it. You can show them the incident reports. But they won’t know it the way the person who lived through it knows it.
That person understands which workarounds actually work under pressure. Which reports people will ignore no matter how prominently you display them. Which integrations are held together with duct tape and which ones are solid. Where the edge cases hide.
A developer fresh from a coding bootcamp can learn React in three months. Learning how a regional health system actually coordinates patient transfers across facilities? That takes years. And you need to be there, in the building, watching it happen.
Process Knowledge Is Underrated and Underpriced
Every company has an official process and an actual process. The gap between them is where context lives.
The official process says:
Sales receives order, enters it into the CRM, sends it to operations, operations schedules production, production updates the ERP, shipping coordinates delivery.
The actual process says:
Sales receives order over text message at 7 PM, calls their contact in operations because the online form times out half the time, operations checks with production lead verbally because the schedule in the system is always forty-eight hours behind reality, production updates a Google Sheet that three people check, shipping coordinates delivery by calling the customer directly because the delivery notes field in the ERP doesn’t sync with the carrier integration.
Software built around the official process fails.
Software built around the actual process works. But you only learn the actual process by being there.
Data Tells Stories If You Know How to Listen
Systems of record contain more truth than anyone wants to admit.
Not the sanitized data warehouse truth.
The messy operational database truth.
Why does that customer have seventeen different ship-to addresses? Because they’re a hospital network and each facility orders separately, but they all roll up to the same billing entity, and the address labeled “Main Campus” is actually a loading dock that’s only open Wednesdays.
Why are there three SKUs that seem identical? Because one is the old vendor code from before the acquisition, one is the new consolidated SKU, and one is what the legacy system calls it, and changing any of them would break reporting that Finance depends on.
Why does this order have a manual discount code applied? Because that customer’s buyer negotiated a deal in 2017 that’s not in any contract anyone can find, but everyone knows to honor it because the last person who didn’t got an angry call from the VP of Sales.
A data analyst from outside the company looks at this and sees data quality problems. Someone with context looks at it and sees the archaeological record of how the business actually works.
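When someone with context finally builds a tool on top of this data, the "three identical SKUs" situation typically gets encoded as an alias table rather than cleaned away, because the old codes can't be touched. A minimal sketch with invented SKU codes:

```python
# Context rule: these three codes refer to the same physical item.
# "OLD-..." is the pre-acquisition vendor code, "LEG-..." is what the
# legacy system calls it, "CON-..." is the consolidated SKU. Finance
# reporting still depends on the old codes, so we normalize at
# analysis time instead of changing the source data.
SKU_ALIASES = {
    "OLD-88412": "CON-10077",
    "LEG-0441": "CON-10077",
}

def canonical_sku(sku: str) -> str:
    """Map any known alias to the consolidated SKU; pass through otherwise."""
    return SKU_ALIASES.get(sku, sku)
```

An outside analyst would "fix" the duplicates; the insider knows the fix is a mapping layer that leaves the archaeological record intact.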
Leveling the Playing Field
For thirty years, if you understood a business problem deeply but couldn't code, you had three options.
Hire developers.
Buy off-the-shelf software and force your process to fit it.
Or do nothing.
That calculus has changed.
Coding LLMs Democratized Implementation
A senior software engineer at a major tech company makes between $300K and $500K in total compensation, according to 2024 levels.fyi data. That engineer brings deep technical knowledge, architectural thinking, and years of experience.
A domain expert with moderate technical literacy can now scaffold a working application using Claude, GPT-4, or Cursor in a fraction of the time it would've taken them to write specs for a developer.
The code isn't always elegant. It doesn't always follow best practices. But it often works.
I'm not saying LLMs write production-ready enterprise software on their own.
They don't.
But they compress the distance between "I know what needs to happen" and "I built something that does it."
That compression is nonlinear.
Tasks that once required senior engineering talent (setting up authentication, building API integrations, creating database schemas, writing business logic) are now accessible to people who understand the problem but lack traditional coding expertise.
You still need to know what you're building and why. You still need to debug, test, iterate. But the barrier to entry dropped from "spend four years learning computer science" to "spend four weeks learning how to prompt and read code."
Technical Moats Are Eroding Fast
Every SaaS vendor’s pitch deck used to include a slide about their proprietary technology. Their advanced algorithms. Their sophisticated architecture. Their technical differentiation.
Most of that is becoming commoditized.
Building a React frontend? LLMs can scaffold it.
Setting up a PostgreSQL database with proper indexing? LLMs know the patterns.
Integrating with Salesforce’s API? LLMs have read the documentation.
Creating a scheduled job that processes data overnight? LLMs have written thousands of variations.
The technical complexity that used to justify $100M valuations is increasingly just configuration and integration work. Still necessary, still needs to be done right, but no longer a sustainable competitive advantage on its own.
What LLMs can’t commoditize is knowing what to build. Understanding that the nightly batch process needs to run at 2 AM, not midnight, because that’s when the upstream system finishes its own processing.
Recognizing that the “delivery date” field needs to be calculated differently for drop-ship orders versus warehouse stock. Knowing which reports people actually look at versus which ones get generated and ignored.
You can’t prompt your way to that knowledge. You earn it.
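Once earned, that kind of knowledge usually amounts to only a few lines of code; the hard part was knowing the rule existed. A hedged sketch of the drop-ship rule mentioned above, with invented lead times and field names:

```python
from datetime import date, timedelta

def delivery_date(order_date: date, fulfillment: str) -> date:
    """Compute the customer-facing delivery date (illustrative only)."""
    if fulfillment == "drop_ship":
        # Context rule: drop-ship orders come straight from the
        # supplier, so the vendor's lead time applies (7 days assumed).
        return order_date + timedelta(days=7)
    if fulfillment == "warehouse":
        # Warehouse stock ships next business morning (simplified).
        return order_date + timedelta(days=1)
    raise ValueError(f"unknown fulfillment type: {fulfillment}")
```

The code is trivial. Knowing that the two fulfillment paths need different math, and that the nightly batch feeding it must wait for the upstream system to finish at 2 AM, is the part no prompt can supply.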
Agentic AI Accelerates the Shift
We’re moving past “LLMs help you write code faster” into “LLMs can manage entire workflows with minimal human intervention.” Agentic AI systems that can plan, execute, and iterate are still early. But they’re getting better quickly.
This matters because it further reduces the advantage of having a large engineering team.
A startup with fifty developers can build faster than a company with five developers. But if those five developers are equipped with AI agents that handle routine implementation, testing, and deployment?
The gap narrows.
The constraint shifts from “how fast can we write code” to “how well do we understand what needs to be built.”
And that brings us back to context.
Credit Where It’s Due
None of this happens without Silicon Valley.
The irony of this entire argument is that the tools enabling domain experts to compete are themselves products of the capital-driven model we've been critiquing.
OpenAI, Anthropic, Google DeepMind, these companies raised billions to build the foundational models that make agentic AI possible.
They hired the world's best researchers. They built massive compute infrastructure. They iterated relentlessly on model architectures that most people still don't fully understand.
Without that investment, without that concentration of talent and resources, we wouldn't have Claude or Gemini 3 or any of the tools that compress the gap between understanding and execution.
The same goes for the infrastructure companies. Vercel makes deployment trivial. Stripe handles payments. AWS and Google Cloud provide enterprise-grade hosting that used to require entire ops teams. These are all products of the Valley ecosystem, built by well-funded teams solving hard technical problems at scale.
This isn’t a story about one model winning and another losing.
It’s about how the outputs of the capital-driven model (the AI platforms, the developer tools, the cloud infrastructure) are now available to people who historically couldn’t access them.
The Valley built the foundation. What’s changing is who gets to build on top of it.
The question isn't whether we need Silicon Valley's technical contributions.
We do.
The question is whether those contributions remain concentrated in companies that lack operational context, or whether they diffuse to people who've spent careers understanding real problems.
That diffusion is happening now. And it's happening because Valley companies chose to make their tools accessible rather than keeping them proprietary.
Context Without Capital
The Midwest doesn't have venture capital density. It doesn't have the same concentration of technical talent. It doesn't have the ecosystem of startups, acquirers, and experienced operators recycling through companies.
What it has is proximity to real complexity.
Manufacturing. Logistics. Agriculture. Healthcare delivery outside major metros.
These industries are operationally intricate in ways that make typical SaaS companies look straightforward.
They've been underserved by software for decades, not because there isn't money in them, but because the people who understand them and the people who can build software historically lived in different zip codes.
The People Who Stayed
Not everyone left for the coasts. Some people graduated with CS degrees and took jobs at local companies. Some taught themselves to code while working in operations or finance or logistics. Some came back after a few years in San Francisco or Seattle, burnt out on startup culture.
These people have something valuable that’s hard to acquire later: they never lost touch with the problems.
They’re embedded in industries that need better software.
They have relationships with potential customers because they’ve worked alongside them for years. They understand the constraints, the workflows, the politics, the unspoken rules. They know which problems are worth solving because they’ve felt the pain personally.
What they historically lacked was the ability to build software at the level of sophistication that venture-backed startups could produce.
That gap is closing.
Context as Competitive Advantage
A Silicon Valley startup selling to manufacturers will hire a few people with industry experience. Maybe a VP of Sales who worked at a competitor. Maybe a solutions engineer who came from the industry. They’ll do discovery calls, build personas, run design sprints.
It helps.
It’s better than nothing.
But it’s not the same as ten years on the plant floor.
The person with ten years on the plant floor knows things that don’t come up in discovery calls. They know the seasonal patterns, the personality dynamics, the unwritten rules. They know which features will actually get used and which ones will look good in demos but gather dust in production.
They can build something that fits how people actually work instead of how they’re supposed to work.
That’s worth more than elegant code.
Building Capability vs. Buying Solutions
There’s a critical choice companies face when adopting AI.
Do you build internal capability or outsource to vendors?
Most take the path of least resistance.
Hire a consultant. Buy a platform. Let someone else handle the complexity. It’s faster. It’s cleaner. It’s also how you end up dependent on vendors who don’t understand your operations.
The problem with outsourcing AI implementation is the same problem we’ve been discussing throughout this article. The vendor builds what they think you need based on discovery calls and requirements docs. They deploy their generic solution. They train your team on their interface. Then they leave.
You’re locked into their workflow, their update cycle, their pricing model. When your business changes, you’re waiting on their product roadmap.
Building internal capability is harder. It requires investment in your own team. It means accepting that your first attempts won’t be perfect. It means resisting the impulse to hand the problem to experts who promise to solve everything.
But it’s also how you maintain control over your competitive advantage.
The Arbitrage Opportunity
There’s a gap opening up. Silicon Valley vendors are selling features without understanding workflow reality. People with context can now build solutions that actually fit.
That’s an arbitrage opportunity.
Selling Features vs. Solving Problems
Enterprise software sales runs on feature checklists. Does it have SSO? Does it integrate with Salesforce? Does it support role-based permissions? Does it have an API?
Procurement departments use these lists to evaluate vendors. Everyone knows it’s somewhat absurd. Having a feature and having a feature that works well for your specific use case aren’t the same thing. But it’s an easy way to narrow the field.
Meanwhile, the actual users, the people who’ll spend eight hours a day in the software, care about different things. Does it make their job easier? Does it fit their workflow? Does it spare them unnecessary steps? Does it surface the information they need when they need it?
Vendors optimized for the procurement process often miss on the actual user experience. They build broad, generic tools that technically check all the boxes but feel clunky in practice.
Someone with context builds narrow, specific tools that skip the procurement checklist but nail the user experience. In a world where you can build software cheaply with AI assistance, that’s a viable strategy.
Being the Customer You Serve
Product-market fit is easier when you are the market. You don’t need to do user research to understand the pain points. You feel them. You don’t need to validate demand.
You know it exists because you’d pay for a solution yourself.
This removes whole categories of risk that plague traditional startups. You’re not guessing whether anyone will want this. You’re not iterating based on survey responses from people you met at a conference. You’re building for yourself and people like you.
It also means you can start small and bootstrap. You don’t need venture capital to validate the idea. You need enough revenue from your first few customers to keep developing.
That’s achievable when those customers are people you already know and the development costs are low because you’re using AI to accelerate execution.
Defending Against Disruption
Incumbent companies usually lose to startups because they’re slow to adapt and burdened by legacy systems. But what if the incumbent’s people build the new tools themselves?
That’s the ultimate defense against disruption. You maintain your context advantage, your customer relationships, your domain expertise. You just upgrade your technical capability.
A manufacturer that builds its own production planning tools, tailored exactly to its processes, doesn’t need to fear the generic MES vendor.
A regional health system that builds its own patient coordination software, designed around its actual workflows, doesn’t need to fear the EHR add-on modules.
The technology stops being the bottleneck.
Context becomes the moat.
What Hasn’t Changed
AI hasn’t eliminated the need for capital, technical skill, or go-to-market strategy. It’s shifted the equation, not erased it.
AI Tools Still Require Sophistication
Prompting an LLM to write code is easy. Getting it to write good code is hard. Knowing when the code it wrote is wrong, knowing how to fix it, knowing how to architect a system that won’t collapse under real-world load: these things still require expertise.
You can learn this expertise faster than you could’ve learned traditional software development. The on-ramp is shorter.
But there’s still an on-ramp.
Someone with zero technical background can’t just start shipping production software after reading a blog post about prompt engineering.
They can start building useful tools for themselves and their team. That’s not nothing. But scaling from there to a sustainable enterprise-grade solution requires learning.
Capital Still Matters for Scale
Building your first version cheaply doesn’t mean you can scale cheaply. At some point you need infrastructure, security, compliance, customer support, sales, marketing.
That costs money.
Bootstrapping works for some businesses. Venture capital works for others. The shift isn’t that capital becomes irrelevant.
It’s that you can get further before you need it.
You can prove more, de-risk more, build more leverage.
That changes the power dynamic in fundraising conversations. It doesn’t eliminate the need to have them.
Valley Startups Can Hire Domain Experts
Nothing stops a well-funded startup from hiring people with deep industry experience.
They do it all the time. They pay well for it.
The difference is timing and cost. They usually hire domain experts after they’ve built the first version, when they realize they need help understanding customers.
By then they’ve made architectural decisions that are expensive to change. They’ve built features that seemed important but aren’t. They’ve skipped things that seemed minor but matter.
Hiring expertise later is possible. It’s just less efficient than starting with it.
Network Effects and Ecosystem Advantages
Silicon Valley has network effects that won’t disappear. Experienced operators who’ve built companies before. Investors who’ve seen patterns across hundreds of deals. Talent density that makes hiring easier. Acquirers who look there first.
These advantages compound. They’re real. They’re valuable. They’re also not insurmountable, especially for businesses that don’t need to scale to billions in revenue to be successful.
A company doing $20M in revenue with strong margins and happy customers doesn’t need to be in San Francisco. It needs to be where its customers are and where its domain experts want to live.
A Rebalancing, Not a Revolution
We’re not witnessing the death of venture capital or the end of Silicon Valley. We’re watching context get repriced.
For decades, technical execution capability was scarce and valuable. Domain knowledge was common and cheap. Software companies could hire industry experts for a fraction of what they paid engineers. The engineers were the constraint.
That ratio is shifting. Technical execution is becoming less scarce as AI tools democratize development.
Domain knowledge (real, deep, operational expertise) remains as hard to acquire as ever. Maybe harder, given how many experienced practitioners left their industries for tech jobs.
The people who stayed close to real problems, who kept learning how things actually work, who maintained relationships with potential customers: those people now have leverage they didn’t have before. They can build solutions that well-funded outsiders struggle to replicate.
This doesn’t mean every domain expert should quit their job and start a software company.
Most won’t. Most shouldn’t.
But some will. And when they do, they’ll have advantages that capital alone can’t overcome.
The next decade of B2B software might look less like “Stanford CS grads disrupt industry X” and more like “industry X builds its own tools.”
That would be a welcome change.
The best solutions to complex problems usually come from people who’ve lived with those problems long enough to understand them completely. We’re finally reaching a point where those people can build the solutions themselves.
They just needed the tools to catch up with their knowledge.

