4 Comments
Dave Foulkes

"imprisoned, executed, or worst, sent to Australia." made me laugh anyway :)

Brent Naseath

Clear, logical, and relevant. And much needed. Well done!

Emily Burnett

I really enjoyed this piece, and just subscribed for updates about your book. Your point about needing an expert/oracle (a human with experience) to vet results is spot on in my experience using it for various things, including creating an app. It was "helpfully" prone to overcomplicating the process, and if I didn't know better, I would've implemented its convoluted "solutions."

Anne Steinacker

Really appreciated your framing of LLMs as oracles – it elegantly captures the shift from automation to context-driven augmentation.

I’m developing a human–machine logic model called KSODI in my off-hours – not commercially, just as a structured side project rooted in training, coaching, and systems logic.

It’s designed to frame epistemic clarity and resonance as operators – and might offer a meta-structure for the kinds of interaction shifts you’re describing.

Not mainstream (yet), but could be interesting for teams thinking in these layers:

github.com/Alkiri-dAraion/KSODI-Methode
