3 Comments
Paul LaPosta

This is a solid retrospective. What works especially well is that you track capability shifts without collapsing them into hype cycles. The throughline from narrow models to systems that can plan and act is clear.

One frame I’d add looking forward: the biggest change over the last ten years isn’t just better models, it’s where authority moved.

Tool invocation is the threshold.

When AI moved from analysis to action, invoking tools and changing state, the question stopped being “how capable is the model?” and became “how governable is the system?”

When artifacts become cheap and execution becomes delegable, output stops being evidence of understanding. Organizations can look productive while becoming increasingly illegible. That’s not an intelligence problem, it’s a control problem.

Output got cheap, so we added standards. Information got cheap, so we added traceability. Execution is getting cheap, so we need enforceable delegation.

That’s the Illegibility Crisis: leaders remain accountable for outcomes they can no longer clearly see, explain, or reconstruct once decision chains vanish.

More on that mechanism:

https://leanpub.com/illegibility_crisis

Collaboration with agents only holds if authority is scoped, revocable, and auditable. Otherwise it turns into speed without control.
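To make "scoped, revocable, auditable" concrete: here is a minimal Python sketch of what those three properties can mean mechanically in a tool-invocation layer. This is not DAS-1's actual spec or API; the `Grant` class and its methods are hypothetical names invented for illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Grant:
    """A delegation of authority to an agent: scoped, revocable, auditable."""
    agent: str
    allowed_tools: set        # scope: only these tools may be invoked
    expires_at: float         # scope: authority is time-bounded
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def revoke(self):
        # Revocable: authority can be withdrawn at any time.
        self.revoked = True

    def invoke(self, tool, *args):
        ok = (not self.revoked
              and tool.__name__ in self.allowed_tools
              and time.time() < self.expires_at)
        # Auditable: every attempt is recorded, allowed or denied,
        # so the decision chain can be reconstructed later.
        self.audit_log.append((time.time(), self.agent, tool.__name__, ok))
        if not ok:
            raise PermissionError(f"{self.agent} may not call {tool.__name__}")
        return tool(*args)
```

The point of the sketch is the asymmetry: the agent gets speed (direct `invoke`), while the principal keeps control (scope, expiry, `revoke`) and legibility (the log survives even denied attempts).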

I’ve been working on an open spec (DAS-1) as a thin control plane between agentic systems and actionable governance:

https://github.com/forgedculture/das-1

Artifacts are cheap. Judgment is scarce.

John Samuel

The frontier models feel like a commodity layer, while the real leverage is moving into data, distribution, and deeply‑embedded workflows.

Larry Lippincott

AI WILL destroy the human race.