Building Defensible Economics in AI
Why margin discipline will separate winners from losers in the next funding cycle.
I spent three years building Moltin before writing this article. The experience has taught me something uncomfortable: technical elegance means nothing if the unit economics don't work.
Now I watch AI startups pitch with the same reverence for model performance I once had, completely blind to the margin trap they're building for themselves.
Angel investors have started walking away from AI deals. Yes, it has begun.
Not because the technology isn't impressive. It is. They're walking because the numbers don't add up.
When Bessemer Venture Partners analyzed fast-growing AI startups in 2025, they found "Supernovas" averaging 25% gross margins while traditional SaaS companies cruise at 75% or higher.
Some AI companies were running negative margins entirely.
This isn't FUD (Fear, Uncertainty, and Doubt) from people who don't understand AI. This is pattern recognition from investors who've seen infrastructure plays crater before. The concerns are valid. But they're also solvable if you face them early instead of pretending compute costs will magically fix themselves.
The Real Problem With AI Isn’t Hype, It’s Math
Traditional software has near-zero marginal costs. Build it once, sell it a million times. Every new user is almost pure profit. AI applications burn real money with every inference. Your 1,000th customer costs you actual dollars in compute, not just server pennies.
According to Stanford's 2025 AI Index Report, inference costs for GPT-3.5-level performance dropped 280-fold between November 2022 and October 2024.
That sounds encouraging until you read the fine print.
Those cost reductions apply to older, established models. Frontier models that startups actually need to compete? They're getting more expensive.
The agentic workflows everyone's building now consume 10-100x more tokens per task than simple chat completions did in December 2023, per an analysis from SaaStr.
Here's what that means in practice.
Early reports showed GitHub Copilot losing Microsoft an average of $20 per subscriber per month, with heavy users costing as much as $80 against a $10 price. In mid-2025, Cursor reportedly paid Anthropic $650 million annually while generating $500 million in revenue. Negative 30% gross margin.
Their response? Build proprietary models. Because at that margin profile, you don't have a choice.
The margin compression isn't subtle.
Bessemer's data shows AI "Shooting Stars" with more sustainable growth hitting 60% gross margins, which sounds better until you realize that's the floor for Series B conversations in traditional software.
Below 60%, investors start questioning whether you've built a software company or a services business with extra steps.
Why Investor Skepticism Is Actually Healthy
Every cycle has its moment of reckoning. For AI, we've hit the point where "we'll figure out margins later" doesn't fly anymore. This scrutiny isn't the market turning against AI. It's the market maturing.
Investors watched the zero-interest era fund companies that never had to prove unit economics.
That's over.
Interest rates rose, capital got expensive, and suddenly everyone cares about cash flow. PitchBook's 2025 data shows early-stage AI valuations compressing from a 74% premium over non-AI startups to just 30%, year-over-year.
Translation: investors are demanding proof.
The uncomfortable truth? Most AI startups haven't built businesses yet. They've built impressive demos that cost more to run than customers will pay. OpenAI itself lost $5 billion in 2024 on $3.7 billion in revenue, according to Market Clarity's analysis of profitable AI startups.
If the most successful AI company in the world operates at a loss, everyone else should probably have a plan.
But here's what the doom-and-gloom narratives miss. Anthropic expects gross margins to have improve from -94% in 2024 to 40% in 2025 and 77% by 2028.
OpenAI's compute margin jumped from 35% in January 2024 to 70% by October 2025, The Information reported. The trajectory is clear for companies that optimize deliberately instead of assuming scale will solve everything.
The question isn't whether AI companies can be profitable. It's which ones will be.
A Framework for Sustainable Economics
If you're building an AI startup right now, you need to make peace with a hard fact: your margins will be worse than traditional SaaS for at least the next few years. The game is showing investors a credible path from where you are to where software margins live. Here's how.
Know Your True Unit Economics From Day One
Most early-stage founders treat unit economics like a later-stage concern. That's backward. You need to know your cost-per-customer down to the token before you set pricing.
Build a model that tracks compute cost per customer, gross margin by user segment, and how those metrics change with volume. Include both your platform fees and the actual inference costs you're eating.
You don't want to discover, months in, that your power users cost 10x what light users do while everyone pays the same price. You can't fix that problem if you don't measure it. Track usage patterns by cohort. Calculate what heavy usage actually costs you. Then price accordingly, or accept that you're subsidizing growth at the expense of margins.
Your dashboard should show:
Cost-per-inference
Blended margin across customer segments
CAC payback by cohort
If you can't pull those numbers in under five minutes, your financial infrastructure isn't ready for investor diligence.
Design Pricing That Reflects Reality
Flat subscription pricing made sense when software had zero marginal cost. With AI, it's financial malpractice. You can't charge every user $50/month when some consume $5 of compute and others burn $200. That's not a business model. That's a transfer payment from your light users to your heavy ones.
Bessemer's research shows that sustainable AI companies embrace hybrid models:
Base subscription plus usage tiers.
Compute credits that scale with consumption.
Separate pricing for AI features versus traditional functionality.
Replit demonstrates this well. Their model layers a base subscription with usage credits for compute, AI agent calls, and hosting. Heavy users pay more. Light users get predictable costs. Margins improve as usage scales.
The key is aligning what customers pay with what they consume. Usage-based pricing feels uncomfortable if you've spent years in traditional SaaS.
Get over it.
The alternative is selling a dollar for eighty cents and calling it growth.
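Mechanically, the hybrid pattern is simple: a base fee covers an included allowance, and consumption beyond it is billed at a metered rate. Here's a sketch; the fee, allowance, and overage rate are made-up numbers, not any vendor's real pricing.

```python
def hybrid_price(base_fee: float, included_credits: int,
                 credits_used: int, overage_rate: float) -> float:
    """Base subscription plus usage-based overage.
    All rates here are illustrative placeholders."""
    overage = max(0, credits_used - included_credits)
    return base_fee + overage * overage_rate

# A light user stays inside the included allowance; a heavy user pays
# for what they consume instead of being subsidized by everyone else.
print(hybrid_price(20.0, included_credits=500, credits_used=300, overage_rate=0.04))   # 20.0
print(hybrid_price(20.0, included_credits=500, credits_used=5000, overage_rate=0.04))  # 200.0
```

The design choice that matters is where you set the included allowance: generous enough that light users get predictable costs, tight enough that heavy usage actually shows up on the bill.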
Build Margin-Enhancing Architecture
This is where your technical background matters. Inference costs aren't fixed. They're a function of architecture choices you make. Every decision about model selection, caching strategy, and prompt engineering directly impacts your margins.
Start with caching. If you're hitting your model provider for the same or similar queries repeatedly, you're burning money for no reason. Intelligent caching can reduce GPU costs by 5-10x according to infrastructure providers like Tensormesh.
Semantic similarity matching lets you serve responses from cache instead of recomputing. The first implementation is trivial. The margin improvement is immediate.
Model selection matters more than most founders realize. You don't need GPT-4 for every task. Build a router that uses cheaper models for simple queries and reserves frontier models for complex reasoning.
Progressive loading works: start with a fast, cheap model, escalate only when needed.
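A router plus escalation path might look like the following sketch. The model names, the routing heuristic, and the confidence check are all illustrative placeholders; real implementations typically gate escalation on a validator, a logprob threshold, or a cheap self-evaluation call.

```python
def looks_confident(draft: str) -> bool:
    # Placeholder quality gate; swap in a real validator or
    # self-evaluation check in production.
    return "not sure" not in draft.lower()

def route_model(prompt: str, needs_reasoning: bool) -> str:
    """Pick the cheapest model likely to handle the request."""
    if needs_reasoning or len(prompt) > 2000:
        return "frontier-model"  # expensive, reserved for hard tasks
    return "small-model"         # cheap, handles the bulk of traffic

def answer(prompt: str, call) -> str:
    """Progressive escalation: cheap model first, frontier only on failure."""
    draft = call("small-model", prompt)
    return draft if looks_confident(draft) else call("frontier-model", prompt)

# Demo with a stubbed API call standing in for a real provider client.
def fake_call(model: str, prompt: str) -> str:
    return "not sure" if model == "small-model" else f"[{model}] answer"

print(answer("hard question", fake_call))  # escalates to the frontier model
```

Even a crude router like this shifts the bulk of your traffic onto the cheap tier, and your blended cost per request follows.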
Cursor cut their compute costs dramatically by building proprietary models optimized for their specific use case instead of relying entirely on third-party APIs.
Prompt engineering seems like a minor optimization until you're processing millions of requests.
Shorter prompts mean fewer tokens. Fewer tokens mean lower costs.
Strip unnecessary context. Compress system messages. Format outputs efficiently. These micro-optimizations compound.
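To see how they compound, put numbers on it. This back-of-envelope sketch uses a rough ~4-characters-per-token heuristic and an invented input-token price; for real figures, use your provider's tokenizer and rate card.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 chars per token); use your provider's
    # actual tokenizer for real numbers.
    return max(1, len(text) // 4)

VERBOSE_SYSTEM = (
    "You are an extremely helpful, friendly, and knowledgeable assistant. "
    "Always think carefully, be thorough, and provide detailed answers "
    "with plenty of context and caveats for the user."
)
COMPACT_SYSTEM = "Answer concisely and accurately."

saved_per_request = estimate_tokens(VERBOSE_SYSTEM) - estimate_tokens(COMPACT_SYSTEM)
requests_per_month = 10_000_000
price_per_million_tokens = 3.00  # illustrative input-token price, not a real rate

monthly_savings = (saved_per_request * requests_per_month
                   / 1_000_000 * price_per_million_tokens)
print(f"~{saved_per_request} tokens saved per request, "
      f"~${monthly_savings:,.0f}/month at 10M requests")
```

A few dozen tokens trimmed from a system message is invisible on one request and very visible at ten million.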
Infrastructure decisions have long-term margin implications. Negotiate annual contracts with your cloud provider instead of paying usage rates. Evaluate when self-hosted inference makes sense. Consider multi-provider strategies to optimize for cost versus latency versus availability.
These aren't problems you solve on day one, but you need a roadmap.
Diversify Beyond Pure AI Features
The companies with the best margins aren’t pure AI plays. They’re platforms where AI is one component in a larger value stack. Look at Replit. Their margins on hosting and infrastructure are significantly higher than margins on AI agent inference. The blended model works because not every dollar of revenue costs the same to deliver.
Canva proves this at scale. AI features enhance the product, but most of the value comes from design tools, templates, and collaboration features that don’t scale linearly with compute costs.
The AI makes the product better. The platform makes the margins defensible.
Build features that create switching costs without burning compute. For example:
Data storage and organization.
Collaboration and workflow tools.
Integrations with existing systems.
Marketplace dynamics where users create value for each other.
These layers don’t just improve retention. They improve margins.
Think about what you’re really selling. If it’s just a thin wrapper on someone else’s expensive model, you don’t have a business. You have a distribution play that ends the moment your supplier decides to compete with you.
Anthropic launched Claude Code. OpenAI has Codex. If your entire value proposition is “our UX on their model,” that advantage evaporates fast.
Verticalize and Differentiate
Horizontal AI tools face a race to the bottom on both pricing and margins. Everyone has access to the same foundation models. Everyone pays roughly the same inference costs.
Differentiation becomes almost impossible. You’re competing on UX and distribution against companies that can afford to lose money longer than you can.
Vertical specialization changes the equation. Build deep expertise in one domain. Healthcare AI can charge premium prices because healthcare customers pay for accuracy and compliance. Legal AI commands higher ACVs because the value is clear and the switching costs are real. Financial services, enterprise security, specialized manufacturing: the list goes on. What these verticals share is dollars and pain.
Proprietary data creates real moats.
If you’re training on unique datasets your competitors can’t access, you’re building something defensible. Fine-tuned models that solve vertical problems better than generic models give you pricing power. Integration depth into existing workflows makes switching painful.
The decision is strategic: either charge premium prices through vertical differentiation, or optimize costs through focused specialization. Ideally, do both. Pick a vertical, own it, then expand from strength.
What to Show Investors
Transparency beats optimism in due diligence. Don’t pretend margins will magically improve. Show the path from current state to target state with specific initiatives and timelines.
Your deck needs a margin bridge. Here’s current gross margin, here’s where we’re going, here’s how we get there. Break it down: X percentage points from infrastructure optimization, Y points from pricing changes, Z points from product mix evolution. Give quarters, not hand-waving.
Sensitivity analysis matters. How do margins change as we scale? What happens if inference costs don’t drop as fast as we’re modeling? What if they rise? Show that you’ve pressure-tested the assumptions. Investors won’t believe a straight-line path to 80% margins. Show them the scenarios.
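The bridge and the sensitivity check fit in a few lines. The initiative names and the point contributions below are illustrative targets, not forecasts from any real company.

```python
def margin_bridge(current: float, initiatives: dict[str, float]) -> float:
    """Current gross margin plus expected points from each named initiative."""
    return current + sum(initiatives.values())

# Illustrative bridge: 30% today, with owned initiatives and their expected lift.
bridge = {
    "caching & model routing": 0.12,     # infrastructure optimization
    "usage-based pricing": 0.10,         # pricing changes
    "higher-margin platform mix": 0.08,  # product mix evolution
}
base_case = margin_bridge(0.30, bridge)

# Sensitivity: what if every initiative delivers only half its modeled lift?
downside = margin_bridge(0.30, {k: v * 0.5 for k, v in bridge.items()})
print(f"base case: {base_case:.0%}, downside: {downside:.0%}")  # 60%, 45%
```

Attach a quarter and an owner to each line of the dictionary and you have the slide investors are actually asking for.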
Be honest about which features make money and which lose money. Every AI company has features that are strategic losses. That’s fine. What’s not fine is not knowing which ones they are.
If your analytics tool attracts users but costs $15/month to serve at a $10 price point, own it. Explain why it’s worth the subsidy and when that changes.
Red flags that kill deals:
“We’ll figure out margins later.”
“Inference costs will solve themselves.”
“Everyone in AI has this problem.”
“We’ll get acquired before profitability matters.”
These are startup suicide notes. Investors have seen this movie. It ends badly.
The Path Forward
This isn’t an argument against AI startups. I run an AI platform. I believe agents will transform how we work. But belief in the technology doesn’t excuse ignoring economics.
The companies that win won’t have the most impressive models. They’ll have the best margins. Success comes from building sustainability into the architecture from day one, not bolting it on after you’ve raised $50 million at prices you can’t support.
Ask yourself these questions:
Can we reach 60%+ gross margins at scale?
If not, why are we building software instead of a services company?
Do we have differentiated value that commands pricing power, or are we selling someone else’s AI with our UX?
Are we building features that don’t scale linearly with compute costs?
Do we have a roadmap to improve margins over time with specific initiatives and owners?
If those answers are fuzzy, you’re not ready for institutional capital.
That’s okay. Get ready.
The angel investors raising concerns about AI economics aren’t wrong. They’re right. The question is whether you’re building a company that proves the bears wrong or confirms their thesis. Margin discipline separates the two.
This scrutiny is healthy. It forces rigor.
The AI companies that survive the next funding cycle won’t be the ones that grew fastest. They’ll be the ones that built actual businesses with real unit economics and defensible margins.
Start there.

