On Trust and AI — Applied

When Coding Becomes Cheap, What Happens to SaaS?

A weekend experiment with AI-built software, and what it reveals about where SaaS value is actually heading.

Saturday night, I was staring at a familiar kind of mess: raw server logs, referrers that don’t quite line up, timestamps that mean nothing until you’ve normalized them, and the quiet irritation that comes from knowing the answers are in there while also knowing what it usually costs to pull them out.

I wanted the basic things people always say they want to know, and rarely instrument cleanly: Who’s showing up? Where did they come from? Do they bounce, or do they read? What do journeys look like when you can actually see them?

I also had a second curiosity running in the background. I’ve been experimenting with the idea of making the site navigable for agents: clean routes, predictable structures, explicit artifacts, pages that are readable without a browser pretending to be a person. I wanted to see whether anything would notice.

And I wanted to prove something to myself.

I care about data. I don’t like arguing from vibes, and I don’t trust my own intuition when the underlying economics are shifting. If the cost of coding is actually collapsing the way everyone claims, the best way to understand what that means is to build something that used to be expensive and see what it costs now.

So I re-ran an experiment on myself that I’ve been periodically running for the last two years.

The experiment

I let the AI build my project, a full analytics dashboard, from scratch. End-to-end. The goal was to get a feel for how far along we are on the curve of AI collapsing the cost of software production.

Not “help me write a query.” Not “suggest some charts.” I mean: parse the logs, design the aggregation layer, generate the dashboard, wire the visualizations, ship a working analytics surface.

I didn’t touch the code. I treated the model like a contractor and myself like the reviewer who shows up at the end with a flashlight.

Two hours later I had a deployed dashboard with enough surface area to answer real questions: 262 unique visitors, 606 page views, a 33-second average session, a 65.6% bounce rate. Device, OS, and browser breakdowns. Entry pages by source. Continue rates by source. Page engagement time. Visual visitor journeys that show the looping patterns you normally only infer from funnels.
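To make the shape of that work concrete, here is a minimal sketch of the kind of aggregation a dashboard like this rests on, assuming combined-format access logs. The 30-minute session gap and the skip-on-malformed-line behavior are illustrative choices, not what the AI actually generated:

```python
from collections import defaultdict
from datetime import datetime, timedelta
import re

# One line of a combined-format access log:
# IP, identity, user, timestamp, request, status, size, referrer, user agent.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<ua>[^"]*)"'
)

def aggregate(lines, session_gap=timedelta(minutes=30)):
    """Group hits into per-visitor sessions and compute the headline metrics."""
    hits = defaultdict(list)
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip malformed lines rather than crash on weird data
        ts = datetime.strptime(m["ts"].split()[0], "%d/%b/%Y:%H:%M:%S")
        hits[m["ip"]].append((ts, m["path"]))

    # A new session starts whenever a visitor goes quiet longer than session_gap.
    sessions = []
    for visitor, events in hits.items():
        events.sort()
        current = [events[0]]
        for ev in events[1:]:
            if ev[0] - current[-1][0] > session_gap:
                sessions.append(current)
                current = [ev]
            else:
                current.append(ev)
        sessions.append(current)

    page_views = sum(len(s) for s in sessions)
    bounces = sum(1 for s in sessions if len(s) == 1)  # single-hit sessions
    return {
        "unique_visitors": len(hits),
        "page_views": page_views,
        "sessions": len(sessions),
        "bounce_rate": bounces / len(sessions) if sessions else 0.0,
    }
```

Everything downstream (entry pages by source, continue rates, journeys) is a further slicing of the same per-session structure.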

Then the agent traffic panel lit up with a small but unmistakable signal. Only a few requests, because the system was brand new, but already coming from the places you’d expect. One request asked for /book.md as markdown. Others touched /articles and /contact. Tiny volume. Clear shape.
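A panel like that can start from a crude heuristic. The sketch below is illustrative, not the actual implementation; the user-agent substrings and the markdown-route rule are assumptions I'm making for the example:

```python
# Heuristic classifier for "agent" traffic. The substring list is an
# illustrative assumption, not a definitive catalog of agent user agents.
AGENT_UA_HINTS = ("bot", "gpt", "claude", "crawler", "python-requests", "curl")

def is_agent_request(path: str, user_agent: str) -> bool:
    """Flag requests that look like they come from an agent, not a browser."""
    if path.endswith(".md"):
        # Asking for the markdown artifact directly is the clearest signal:
        # browsers pretending to be people don't request /book.md.
        return True
    ua = user_agent.lower()
    return any(hint in ua for hint in AGENT_UA_HINTS)
```

The point isn't precision; it's that a few lines of heuristic are enough to separate the new traffic shape from the old one.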

A few years ago, I would have paid for this. More realistically, I would have paid to avoid building it. I would have justified a subscription because the alternative was hiring time I didn’t want to spend.

Now I built it as a weekend project.

That difference is the beginning of the question this post is actually about: when the cost of coding drops toward zero, how does the SaaS business model survive?

When costs fall in steps, business models break

Cost reduction is not always gradual. When it is, business models float. Companies adjust pricing, find efficiencies, shift positioning. The market adapts because the change is slow enough to absorb.

But sometimes costs don’t decline. They vanish. A new technology eliminates an entire category of expense in a single step, and the businesses built on the assumption that the expense was permanent find themselves standing on nothing.

Sending a message across the ocean used to be slow and expensive enough that it shaped how you worked. You wrote carefully because iterations hurt. You waited days because days were normal. Then email arrived, then instant messaging, and the transport cost dropped so far that nobody budgets for it. Communication still has costs (attention, coordination, human bandwidth), but moving the message stopped being the constraint.

Long-distance voice did the same thing. International calls used to be expensive enough to schedule. Then VoIP made voice transport effectively free, and the constraint moved somewhere else: meetings, time zones, fatigue, context switching.

Software is walking into one of those steps right now. The effort to produce working systems is falling fast enough that old assumptions (“this takes a team,” “this takes months,” “this requires a specialized translator”) are becoming stale.

That doesn’t make all software business models worthless. It reallocates value away from the parts of the pipeline that have collapsed and pushes it toward whatever remains scarce: verification, accountability, operational reliability, distribution, data, and the real-world constraints software hooks into.

Why SaaS worked when coding was expensive

SaaS pricing made sense in a world where code generation was a genuinely scarce resource, constrained by human capital. You needed developers, and developers were expensive. You needed product managers to shape what they built, project managers to coordinate the work, trainers to onboard the organization, and support staff to keep it running. The entire apparatus existed because translating business intent into working software was a slow, labor-intensive process.

The SaaS model amortized all of that across millions of users. Pay the upfront cost once, sell the outcome many times, keep iterating fast enough that switching feels painful.

Accounting went through a version of this transition a generation ago. Before personal computing, a tax firm could justify an army of accountants doing manual aggregation and reconciliation. The firm wasn’t only selling wisdom; it was selling labor capacity. Humans adding sums, humans cross-checking, humans producing ledgers. Spreadsheets changed the economics because the mechanical part moved into a machine. Value shifted toward judgment, interpretation, compliance, and advising.

Software is now living through a similar inversion. The mechanical translation, turning an idea into a pile of syntax, used to be expensive. AI is taking direct aim at that translation layer.

So the question for SaaS becomes blunt: if “we built it so you don’t have to” stops being scarce, what exactly are you selling?

The verification bottleneck, applied to software

I’ve written about this elsewhere as The Verification Gap: in any AI workflow, verification becomes the scarce resource. This post is specifically exploring what that means for the business of software, since that’s where the majority of AI investment is currently headed.

In a world where software can be produced cheaply, verification becomes the expensive part. Not verification as a slogan. Verification as the thing that costs time and attention:

  • Are the numbers correct?
  • Are the definitions consistent?
  • Does this behave the same way tomorrow as it did today?
  • Will it fail cleanly when the data gets weird?
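Some of those questions can be pinned down as executable invariants, which at least produces an artifact every time the check runs. A minimal sketch, using hypothetical field names for the dashboard's aggregates:

```python
def verify_stats(stats: dict) -> list[str]:
    """Return a list of invariant violations; empty means the numbers cohere."""
    problems = []
    # Every visitor generates at least one page view.
    if stats["page_views"] < stats["unique_visitors"]:
        problems.append("more visitors than page views")
    # Every visitor has at least one session.
    if stats["sessions"] < stats["unique_visitors"]:
        problems.append("fewer sessions than visitors")
    # A rate is a rate.
    if not 0.0 <= stats["bounce_rate"] <= 1.0:
        problems.append("bounce rate outside [0, 1]")
    return problems
```

Checks like these don't prove the numbers are right, but they catch a class of wrongness cheaply, and the returned list is something you can log.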

Most people won’t do this work unless they’re forced to. And even when they do, they won’t log it. We don’t yet have tools that capture verification in a way that laypeople can understand and evaluate. The work happens, it produces no artifact, and it evaporates.

That matters because the old signal of trust is gone. Software used to be expensive enough that its very existence was proof that someone had invested millions in getting it right. If an AI could have built it last night, you can no longer assume that the company behind it has a million-dollar stake in the correctness of what the software is telling you.

If you’re shipping AI-built systems and you aren’t proving you’re verifying them, everyone should assume you aren’t. Furthermore, if you’re shipping anything that an AI can plausibly create, people should assume the default path was exactly that: an AI created it. Maybe with a human nudging prompts and merging branches. Maybe with a human doing a quick skim at the end. Unless you can demonstrate otherwise, there’s no rational reason for a buyer to believe they’re paying for scarce human craftsmanship.

That assumption is going to feel unfair to teams doing real engineering work. It’s still the equilibrium we’re drifting toward, because the cost of producing software is collapsing faster than the market’s ability to distinguish “carefully built” from “generated and shipped.”

So the differentiator stops being whether you used AI. Everyone will. The differentiator becomes whether you can show your work: what you tested, what you validated, what you measured in production, and what accountability exists when something breaks.

They aren’t paying for code anymore. They’re paying for confidence.

The uncomfortable truth for software firms

If you run a software company, the value of your engineering team is increasingly being reduced to a trust signal in your product line.

Not because your team can type faster than an AI. Because your team represents a process that says: someone competent verified this, and someone competent will be accountable when it breaks.

That used to be hard to replicate from the outside. It’s getting easier.

You probably need fewer engineers than you did before. And you can probably afford to devote more of them to solving internal niche problems rather than assuming your SaaS product will justify their expense on its own. The technologists who are valuable now are the ones who invest themselves in delivering a quality outcome for customers. People who translate requirements into code and take pride not in the problem they solve, but in the craftsmanship of the coding process itself, are doing what is now largely an AI job.

If I’m a buyer and I can’t see into your process, I can’t tell whether you’re running a disciplined engineering organization or letting an AI generate most of the stack and calling it a day. You can tell me you have QA. You can tell me you have secure SDLC. You can tell me you have review gates. If I can’t observe any of it, I’m back to buying on faith.

Meanwhile, the buy-versus-build math is shifting under everyone’s feet.

I don’t need a full engineering department to replicate a feature set anymore. In a lot of cases I need one engineer I trust, a clear description of the behavior I want, and a pile of AI time. Sometimes the “clear description” is a video of the feature working.

Six months ago, that same feature might have represented a million-dollar build once you factor in salaries, coordination, and calendar time. Today, I can build it internally for pennies on the dollar, even after paying someone solid to own it and support it. The engineer isn’t there to translate intent into syntax. The engineer is there to verify the output, set guardrails, handle the sharp edges when reality shows up, and be accountable when something breaks.

If I’m not willing to do that internally, another firm will. They’ll undercut you with a smaller team running an AI fleet and offer me the same familiar guarantees: “enterprise grade,” “secure,” “compliant,” “battle tested.” From my side of the table, you’re both selling trust. The question becomes why I should trust the established player more.

Brand reputation helps, but it erodes faster than most incumbents want to admit. In a world where feature replication is cheap and fast, reputation stops being a moat you inherit. It becomes something you earn continuously, with evidence that your verification process is real and your accountability is real.

Software becomes a recipe, not a secret

Source code was never the secret that made software valuable. Source code is a working example, one particular realization of a behavior.

When you sell software, you are selling behavior. You are also showing the world what that behavior looks like.

In an agentic world, behavior is easy to copy. Observation becomes a spec. A feature demo becomes a blueprint. A screen recording becomes requirements. Once you can reproduce behavior cheaply, the economics start to look like manufacturing: the moment a product ships, someone can buy one, tear it down, and build a knockoff. You don’t win by having a design. You win by having distribution, brand, operational competence, regulatory positioning, unique inputs, or a trust posture that’s hard to imitate.

Software is inheriting that problem, and it’s doing it at machine speed.

So what does SaaS become?

Pure-play SaaS, the kind that sells access to features behind a login screen, has to evolve into something else or die. The product was never really the code. It was the outcome the code delivered and the trust that it was delivered correctly. When code generation stops being scarce, what’s left to sell?

The remaining scarce commodities, with examples:

  • Visible verification: processes, proofs, and accountability that customers can observe and evaluate, not just take on faith. Examples: published test coverage, third-party audits, public incident postmortems, SLA dashboards.
  • Operational reliability: running systems at scale with uptime, incident response, and hard-earned maturity that takes years to build. Examples: 99.99% uptime guarantees, 24/7 on-call engineering, multi-region failover, battle-tested migration tooling.
  • Data advantage: proprietary datasets, data gravity, and network effects that are expensive to reproduce. Examples: aggregated industry benchmarks, cross-customer learning models, years of historical records that new entrants start without.
  • Regulatory positioning: real control environments and compliance certifications, not marketing pages about security. Examples: FedRAMP authorization, SOC 2 Type II, HIPAA BAAs, validated data residency controls.
  • Embedded workflows: deep integration into how a customer operates, where switching is not trivial. Examples: ERP connectors, SSO/SCIM provisioning, custom workflow automation, data pipelines that feed downstream systems.
  • Outcomes, not features: tying software to something scarce in the real world that code alone cannot replicate. Examples: managed logistics, payment processing with underwriting, infrastructure provisioning, contractual liability backing.

The really scary part for most incumbents is structural. They are large. They have built slow-moving enterprises around the old model: big sales teams, long implementation cycles, annual contracts, and feature roadmaps designed to justify next year’s renewal. The moat they relied on to keep others out, the sheer expense of building a competitive product, has suddenly dried up. A startup with three engineers and an AI fleet can now reproduce in weeks what took the incumbent years and tens of millions of dollars to assemble.

The most fragile SaaS businesses are the ones that sell features as if features are inherently defensible. In the environment we’re walking into, features are cheap. Proof is expensive.

The dashboard as a small proof

I’m not interested in arguing this from theory. I prefer to work with evidence, data, and concrete examples.

This dashboard was my small proof. I took a surface area I would have purchased not long ago and demonstrated that I can create it myself, quickly, cheaply, and easily iterate on it by asking for what I want.

It also demonstrated to me the parts that don’t collapse: my own time verifying whether I trust what it says, and my own responsibility standing behind the deployment.

That’s the shape of the shift.

Coding gets cheap. Verification stays costly. SaaS has to stop selling “we wrote the code” and start selling “you can trust the outcome, and here’s why.”

If you’re building software for a living, that’s the future you’re pricing against, whether you want to or not.