Online course · Self-paced

Writing code used to be the bottleneck.
Now it's releasing without breaking things.

A blueprint for how engineering teams can perfect the new discipline of software inspection — the rate-limiting step in a world where AI writes the code and humans are still accountable for what ships.

The course
Software Stability in the Age of AI
Format
25 modules · 5 parts · self-paced
Price
$299
Enroll for $299
Instant access · Lifetime updates

Something real has changed. Most engineering orgs haven't named it yet.

For forty years, writing code was the expensive part of software. Organizations designed themselves around that constraint: engineers built things, reviewers glanced at diffs, and the whole system held together because code was slow and deliberate by default.

That assumption is gone. With Opus 4.6 and 4.7, serious engineers are going weeks without writing a line by hand. Product managers are shipping full features. In some cases, non-technical leaders throughout the organization are encouraged to open PRs. The volume of code entering your repo has quietly doubled, then doubled again.

And the review culture you built for the old world is now the rate-limiting step. Senior engineers rubber-stamp plausible-looking diffs. Secondary bugs — the ones that break things outside the feature being changed — slip through. Silent bugs sit in production for weeks before someone notices. Your codebase is accumulating a category of risk your current process wasn't designed to catch.

Every engineering organization is about to split into two groups: the ones that figure out how to release AI-written code safely, and the ones that calcify because their only remaining defense against bugs is to slow everything down. There is no third option that involves more AI.

Every other form of engineering figured this out. We build bridges with three roles, not one.

A bridge is designed by an architect. It is built by a contractor. And it is inspected — rigorously, separately, by someone whose entire job is to find what the builder missed. No civil engineer would sign off on a bridge they designed, built, and inspected alone. The profession won't allow it, and for good reason.

Software has always collapsed these three roles into one person, because for forty years that person was the bottleneck no matter how you organized the work. You couldn't afford to split it. The architect-builder-inspector hybrid was a compromise the economics demanded.

AI broke that compromise. The builder is now fast, cheap, and tireless. The work that matters — and the work that's now in short supply — is on either side of it. Product Architects describe what should be built and why. Software Stability Engineers read what the AI produced, stratify the risk, and decide whether it is safe to ship.

"Inspection is where the differential value lives. Every org will figure out how to prompt an AI. Only the good ones will figure out how to inspect its output."

In a mature project, the ratio is roughly one inspector for every person prompting. One-to-one. Partly that's because a single prompter can now do in a week what used to take three to five developers — so there's a lot more output per builder to inspect. Partly it's because the work of review itself has gotten bigger: it now absorbs a lot of what the developer used to do during the project — familiarizing with the feature, thinking about edge cases, running through the flows by hand.

If the one-to-one number surprises you, it's because you're still thinking of inspection as review — something that happens in the thirty minutes between writing code and merging it. It isn't. It's a parallel discipline with its own arc.

A distinct discipline: Software Inspection.

Inspection isn't QA, and it isn't code review with more time. It's four compounding disciplines the course teaches in depth.

Install all four well and your builders ship fearlessly. Skip them, and you spend the next three years unwinding the debt.

Five parts. Twenty-five modules. Self-paced.

Part 1

The Basics

  • 1.1 Unleashing the non-technical developer: why this is worth it
  • 1.2 How other engineering disciplines manage risk
  • 1.3 The Software Stability Engineer
  • 1.4 The Product Architect
  • 1.5 Why this needs humans — and won't just be AI
Part 2

Classifying Development Disasters

  • 2.1 Primary vs secondary bugs
  • 2.2 Silent risks
  • 2.3 Security risks
  • 2.4 Performance risks
  • 2.5 Transition states
  • 2.6 Rollback
Part 3

The Software Inspection Process

  • 3.1 The Inspection Report and post-mortems
  • 3.2 Incoming materials
  • 3.3 Familiarization
  • 3.4 Risk analysis
  • 3.5 Manual testing
  • 3.6 Automatic testing
  • 3.7 AI bugbots
  • 3.8 Resolutions
  • 3.9 Gamification: making this role not suck
  • 3.10 Training new inspectors: parallel and secondary inspections
Part 4

Pre-empting problems with non-technical Product Architects

  • 4.1 Rulesets for non-technical developers
  • 4.2 Some PRs are going to get closed
Part 5

Tools to manage this

  • 5.1 The lame way
  • 5.2 The cool way (with a prompt!)

From engineering leaders who took the course.

"The risk taxonomy alone was worth the price. My team went from 'I'm nervous about this PR' to 'I'm worried about secondary bugs in the billing path, silent and serious' in about three weeks. That shift changed every release conversation we have."
Dan P. — CTO
"Having a serious system for stability means we can let non-technical leaders build things, and that's transformative to our business. Our velocity as a company is at least 3× what it was last year."
Jason K. — CEO

Enroll in this course.

Full access to every module and every future update. Buy it once, keep it forever.

$299
One-time · Lifetime access
Enroll now
  • All 25 modules across 5 parts, self-paced
  • The risk taxonomy and scoring system your team can adopt
  • A ready-to-use prompt for building your own inspection software
  • Templates for inspection reports and post-mortems
  • Free access to all future updates of the course