Agenthicc
There’s a familiar pattern in AI puff pieces: things aren’t working, but that’s actually a good sign, because it means we’re “early.” Adoption is “nascent,” tools are “not very good,” and the solution - of course - is “change management.” The blame is subtly transferred from toolmakers to overstretched users who apparently just don’t want productivity badly enough.
We’re told the issue is fragmented data, and the future lies in stitching it all together so AI can work its magic. Sounds great - until you realise there’s zero mention of access controls, tenant separation, or preventing AI from blurting out sensitive financial projections to a tier-1 support request. The moment you connect everything, you’re one ill-scoped prompt away from a GDPR disaster. But that part? Not in the keynote.
It’s not just a technical issue. The trust boundary assumptions in these enterprise environments are decades old, often deeply flawed, and incompatible with the idea of one agent surfing across CRMs, inventory, and finance systems like it’s a summer internship. Before you can even dream of automation, you need a complete rethink of how permissions, context, and intent are handled - across departments and legal domains.
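To make the access-governance point concrete, here is a minimal deny-by-default sketch of what a permission gate in front of agent tool calls might look like. Every name here (`TOOL_ROLES`, `Caller`, `authorize`, the tool identifiers) is hypothetical, invented for illustration - no product or library works exactly this way:

```python
from dataclasses import dataclass

# Hypothetical policy table: each tool an agent can invoke is mapped
# to the set of roles allowed to call it. Unknown tools get nothing.
TOOL_ROLES = {
    "crm.lookup_customer": {"support", "sales"},
    "finance.projections": {"finance"},
}

@dataclass(frozen=True)
class Caller:
    """The identity on whose behalf the agent is acting."""
    tenant: str
    roles: frozenset

def authorize(caller: Caller, tool: str, resource_tenant: str) -> bool:
    """Deny by default: a cross-tenant request or a missing role blocks the call."""
    if caller.tenant != resource_tenant:
        return False  # tenant separation: never reach across customer boundaries
    allowed = TOOL_ROLES.get(tool)
    if not allowed:
        return False  # unlisted tool: no policy means no access
    return bool(caller.roles & allowed)

# A tier-1 support identity can look up customers in its own tenant...
support = Caller(tenant="acme", roles=frozenset({"support"}))
assert authorize(support, "crm.lookup_customer", "acme")

# ...but cannot pull financial projections, and cannot cross tenants.
assert not authorize(support, "finance.projections", "acme")
assert not authorize(support, "crm.lookup_customer", "globex")
```

The point is not the thirty lines themselves but that the agent inherits the caller’s permissions rather than running with god-mode credentials over every connected system - which is precisely the layer the keynotes skip.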
And yet we’re served the same reheated narrative: bots will boost productivity, just as soon as we re-architect the entire data layer, restructure all workflows, retrain all staff, and ignore the small matter of access governance. Maybe in another 20 years, we’ll circle back and say “this AI thing really took off” - right after it leaks a boardroom memo into a customer ticket response.