Discussion about this post

Myles Bryning

The smart data layer and current context issues cause a fair few problems with agents at the moment. If these are resolved, I can see agents making decisions that far outperform humans in certain areas for sure.

Andrew Searls

This maps closely to something I just wrote about from the implementation side: the gap between "AI produces an output" and "someone owns the decision" is exactly where the engineering problem lives. I've been framing it as a bandwidth problem: another pair of eyes only helps if it can see what the first pair is missing. Your decision gap diagram names the structural version of that.

The piece I keep coming back to in practice is your point about confidence being inferred from fluency. That's the failure mode I see most often.
