Hi HN, I wrote this after spending the last two years on the AI coding adoption curve.
Most takes I'm reading treat AI failures as a memory or context-window problem. I'd argue it's an orientation problem. LLMs are probabilistic generation engines; enterprise codebases are deterministic structures. Forcing probability into a deterministic system without strict boundaries produces compounding divergence between the end goals and what the agents actually write.
I've been experimenting with a deterministic orientation substrate (repo-graph): a three-layer truth model that forces agents to respect architectural boundaries before they generate syntax.
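For concreteness, here's a minimal sketch of what "respect boundaries before generating syntax" could mean in practice. Every name here (the layer map, `violates_boundary`, `check_edit`) is my own illustration, not the actual repo-graph implementation:

```python
# Hypothetical sketch: assign each module to a layer, where lower layers are
# more foundational. The agent's proposed dependency edges are validated
# against this map before any code generation is allowed.

LAYERS = {
    "core/models.py": 0,       # layer 0: domain truths, no outward deps
    "services/billing.py": 1,  # layer 1: business logic
    "api/routes.py": 2,        # layer 2: delivery / edge
}

def violates_boundary(src: str, dst: str) -> bool:
    """A module may only depend on modules at the same or a lower layer."""
    return LAYERS[src] < LAYERS[dst]

def check_edit(proposed_imports: list[tuple[str, str]]) -> list[str]:
    """Return the boundary violations in an agent's proposed edit."""
    return [f"{s} -> {d}" for s, d in proposed_imports if violates_boundary(s, d)]

# An agent proposing that the core layer import from the API layer is rejected
# deterministically, before a single line of the edit is generated:
print(check_edit([("core/models.py", "api/routes.py")]))
# → ['core/models.py -> api/routes.py']
```

The point of the sketch is that the check is a deterministic lookup, not another LLM call, so it can't drift along with the agent.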
I also argue that traditional safety-critical processes are about to become highly relevant to everyday agentic development, not only for compliance, but because heavy process is an effective containment vessel for AI-generated entropy.
Curious to hear how other teams are preventing architectural drift when deploying agents at scale.