AI systems generate answers, but real decisions require context, accountability, and control. This article explores why AI is not designed for decision-making and what’s missing in real-world AI systems.
The smart data layer and current context issues cause a fair few problems with agents at the moment. If these are resolved, I can see agents making decisions that far outperform humans in certain areas.
Agree that data and context are major bottlenecks. But even if those are solved, decision-making introduces a different layer of complexity: ownership, accountability, and how outputs are interpreted in practice. In my experience, the challenge isn't just whether agents can outperform humans, but how decisions are structured around them so they can be used safely.
Those trust and uncertainty layers are doing more work than most people appreciate right now. And the problem compounds when you factor in how volatile everything else is simultaneously: trade, institutions, what 'reliable information' even means. The gap isn't just a design problem. The ground those layers are supposed to anchor to is shifting too.
You're raising something important. If the ground itself is unstable (what counts as reliable information, which institutions hold up, what trade frameworks even mean), then any decision layer built on top of that inherits the instability.
I think that's exactly why the structural piece matters more, not less. When the environment is volatile, the temptation is to move faster and trust AI outputs more because the situation feels too complex for human judgment alone. But that's when the gap between output and owned decision becomes most dangerous. Structure doesn't eliminate the instability. But it makes visible where assumptions are being made and who is accountable when those assumptions turn out to be wrong.
The alternative is decisions that feel informed but have no traceable foundation. And in a shifting environment, that's where real damage happens unnoticed.
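To make that concrete, here is a minimal sketch (Python, all names hypothetical, not drawn from any real framework) of what a structural decision record might capture, so that assumptions and ownership become explicit fields rather than implicit context:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical record of one owned decision built on an AI output."""
    question: str              # what is actually being decided
    ai_output_summary: str     # what the system produced
    owner: str                 # the named human accountable for the call
    assumptions: list[str]     # conditions that must hold for the output to be valid
    known_unknowns: list[str]  # uncertainty acknowledged rather than hidden
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_traceable(self) -> bool:
        # A record with no owner or no stated assumptions is exactly the
        # "feels informed but has no traceable foundation" case above.
        return bool(self.owner) and bool(self.assumptions)
```

Nothing about this removes the volatility; it just forces the assumption-making and the accountability to be written down where they can be audited later.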
Focusing on "where things break" is exactly the right lens. Here's what I find fascinating: organizations spent decades building systems precisely because they didn't trust humans — double-checks, approval flows, ISO. The premise was always "humans make mistakes." Then AI arrives, and suddenly those same never-trusted humans are promoted to judge AI output. Maybe what's broken isn't AI reliability — it's that organizations haven't figured out where to place the checkpoint.
This is a sharp observation. And I think it goes one step further.
The control systems organizations built (double-checks, approvals, ISO) all assumed the decision-maker and the decision-generator were the same person. The human made the judgment and carried the accountability. AI breaks that assumption. Now the system generating the output has no stake, and the person acting on it often has no visibility into how it was formed.
So it's not just that organizations haven't figured out where to place the checkpoint. It's that the checkpoint was designed for a world where the person deciding was also the person reasoning. That's no longer true, and almost nothing has been redesigned to account for it.
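As a hedged illustration (all names here are invented), a redesigned checkpoint might refuse to let an AI output graduate into an action until a named human attaches ownership and a rationale, restoring the link between reasoner and decider that the old controls took for granted:

```python
class UnownedDecisionError(Exception):
    """Raised when an AI output is about to be acted on without an accountable owner."""

def checkpoint(ai_output: str, owner: str | None, rationale: str | None) -> str:
    """Gate between 'AI produced an output' and 'someone owns the decision'."""
    if not owner:
        raise UnownedDecisionError("no accountable owner attached to this output")
    if not rationale:
        raise UnownedDecisionError("owner must state why the output is trusted")
    # Only now does the output become a decision that a named person carries.
    return f"decision owned by {owner}: {ai_output!r} (rationale: {rationale!r})"

# e.g. checkpoint("approve vendor X", owner=None, rationale=None) raises UnownedDecisionError
```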
Honestly, decision-making is an exclusively human role in my opinion. I am currently writing a piece on utilitarianism, and a concern with AI decision-making is that it would be unable to balance human rights and individualism with the 'greater good'.
I'm keen to go in the same direction as your page, so any feedback is very welcome!
Thank you, Lucy! The utilitarianism angle is interesting and I think it connects directly to this. One of the underexplored tensions is that AI systems optimize toward outcomes without holding the moral weight of those outcomes. A utilitarian framework at least demands that trade-offs are acknowledged. Most AI-assisted decision flows skip that step entirely. The trade-off happens, but no one names it.
I'd be curious how you're framing the accountability side in your piece. That's where I think the real gap is, not in what AI optimizes for, but in who holds responsibility when the optimization is wrong.
This maps closely to something I just wrote about from the implementation side; the gap between "AI produces an output" and "someone owns the decision" is exactly where the engineering problem lives. I've been framing it as a bandwidth problem: another pair of eyes only helps if they can see what the first pair is missing. Your decision gap diagram names the structural version of that.
The piece I keep coming back to in practice is your point about confidence being inferred from fluency. That's the failure mode I see most often.
Andrew, I like the bandwidth framing, but I think it slightly mislocates the problem.
In practice, it’s not that we can’t see enough. It’s that we don’t have a clear structure for turning what we see into a decision. Even with more eyes (human or AI), the core questions remain unresolved: who owns the decision, how uncertainty is handled, and what happens if we’re wrong. Without that, additional visibility just produces more plausible options, not better decisions. That’s where the fluency issue becomes dangerous. Confidence gets inferred from how coherent something sounds, not from how well it holds up under consequences. So the gap isn’t just perceptual. It’s structural.
This is a useful framing. I wonder whether the missing layer is not only context, but experience. In major decisions, what we used to value in human judgment was not simply access to more information, but accumulated salience: memory of consequences, long-term associations, felt trade-offs, and pattern recognition shaped by responsibility over time. AI systems can generate contextually plausible outputs, but without durable memory, salience weighting, and long-term association across cases, they lack the broader experiential grounding that makes judgment more than answer production.
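One way to read "salience weighting" in code terms (a toy sketch under my own assumptions, not a claim about how any current system works): past cases could be retrieved not by similarity alone, but weighted by how severe their consequences were and discounted by age:

```python
import math
from datetime import datetime, timezone

def salience(similarity: float, consequence_severity: float,
             decided_at: datetime, half_life_days: float = 365.0) -> float:
    """Toy salience score for retrieving a past case.

    similarity: 0..1 match between the past case and the current decision.
    consequence_severity: 0..1 weight for how consequential the outcome was.
    """
    age_days = (datetime.now(timezone.utc) - decided_at).days
    recency = math.exp(-math.log(2) * age_days / half_life_days)  # halves every half_life_days
    # Consequential outcomes count for more than near-misses of equal similarity.
    return similarity * (1.0 + consequence_severity) * recency
```

The point of the toy is only that outcome weight and time enter the ranking at all, which is the part similarity-only retrieval leaves out.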
Since 2023 or 2024 we have been handed the hot potato to play with: AI. No one told the planet that it comes with severe limitations, some of which carry legal weight and privacy issues.
I'm glad that more and more people now talk about the responsibilities around AI and how decisions are shaped by those inputs.
AI can speed up the process, but even then it's flawed and you need to double-verify the input as well. Instead of using the logical part of the brain, we are now lost in verifying whatever mumbo jumbo the machine is spewing at us, making the work twice as hard as before.