Secretary of Defense Pete Hegseth recently stood before the Senate and insisted that “AI is not making lethal decisions.” He wants the American public to believe that a human finger is always on the trigger. But as the conflict over Iran intensifies, that narrative is beginning to crumble. Software like Palantir’s Maven and Anthropic’s Claude has compressed the “OODA loop” (Observe, Orient, Decide, Act) so tightly that human “judgment” has become a mere formality. The Pentagon’s claim of AI accountability is a dangerous myth.
The “Sprinting with Scissors” Doctrine
The military’s hunger for “decision advantage” has led to a ruthless prioritization of speed over certainty. The Pentagon is currently locked in a legal battle with Anthropic because the AI firm dared to suggest limits on how its technology could be used. Hegseth’s response was to call the company’s CEO an “ideological lunatic.”

Retired Gen. Michael Kurilla boasted that the AI-supported system can now process over a thousand targets every 24 hours. No human, or team of humans, can meaningfully “verify” that many lethal targets in a day without relying almost entirely on the algorithm’s output; the arithmetic below makes that plain. AI is dramatically accelerating the rate at which lethal commands are executed. When the machine flags a target, the human “operator” often has only seconds to agree or disagree, essentially reducing them to a glorified “Accept” button.
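A rough back-of-envelope calculation, assuming for illustration that the thousand-target pace is sustained around the clock, shows just how little time that leaves:

    24 hours × 3,600 seconds/hour = 86,400 seconds per day
    86,400 seconds ÷ 1,000 targets ≈ 86 seconds per target

Less than a minute and a half per target, with no time budgeted for cross-checking the intelligence, consulting legal counsel, or weighing collateral damage.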
The Human Cost of “Smart” Targeting
In February 2026, a U.S. strike hit an Iranian elementary school, reportedly killing 168 children. While the Pentagon investigates, the central question remains: was the data fed into the commander’s workflow curated by an AI that couldn’t tell the difference between a military barracks and a classroom?
AI is only as good as the signals intelligence it receives. If the data is flawed, the AI’s “recommendation” is a death sentence based on a lie. Machines can calculate trajectories, but they cannot weigh the moral calculus of collateral damage. Handing the curation of targets to software creates a psychological distance that makes killing easier and accountability harder to pin down.
“We are sprinting as fast as we can with scissors, and when we trip, it’s the civilians who get cut.”
The Ethics of the “Homeland Defender”
Calling the Pentagon’s AI use “accountable” is a farce. By rebranding drone operators and intelligence analysts as “Homeland Defenders” and pushing them to close “kill chains” at lightning speed, the administration has removed the one thing that keeps war from becoming a total massacre: hesitation.
The Pentagon’s AI accountability is a dangerous myth because it uses the presence of a human to justify the actions of a machine. If a commander approves a thousand targets a day based on AI “detections,” they aren’t exercising judgment; they are participating in an automated assembly line of death. Hegseth can talk about “following the law” all he wants, but when 168 children die in a single strike, the law clearly isn’t working, and neither is the accountability.
If an AI sifts through data to flag a target and a human clicks “fire” in under three seconds, can we honestly say a human made the decision, or has the Pentagon successfully automated the conscience out of warfare?