3 Comments
Myles Bryning

Issues with the smart data layer and current context cause a fair few problems for agents at the moment. If these are resolved, I can see agents making decisions that far outperform humans in certain areas, for sure.

Dominika Michalska

Agree that data and context are major bottlenecks. But even if those are solved, decision-making introduces a different layer of complexity: ownership, accountability, and how outputs are interpreted in practice. In my experience, the challenge isn't just whether agents can outperform humans, but how decisions are structured around them so they can be used safely.

Andrew Searls

This maps closely to something I just wrote about from the implementation side: the gap between "AI produces an output" and "someone owns the decision" is exactly where the engineering problem lives. I've been framing it as a bandwidth problem: another pair of eyes only helps if it can see what the first pair is missing. Your decision-gap diagram names the structural version of that.

The piece I keep coming back to in practice is your point about confidence being inferred from fluency. That's the failure mode I see most often.