The Algorithmic Commander: How AI is Redefining Mission Command
By Dr. İpek İpek
The fog of war has always been the commander’s greatest enemy. Carl von Clausewitz warned of the friction inherent in military operations—the accumulation of minor difficulties that transform the simple into the complex. Today, as militaries worldwide grapple with the exponentially growing volume of battlefield data and the compressed timelines of modern conflict, a new paradigm is emerging. The true revolution in military IT isn’t found in autonomous weapons or unmanned platforms alone, but in the sophisticated marriage of human judgment with artificial intelligence that’s fundamentally redefining how commanders make decisions under fire.

The Convergence of Crisis and Capability
The urgency driving this transformation cannot be overstated. As the U.S. Army demonstrated during Project Convergence Capstone 5 at Fort Irwin in March 2025, traditional command structures—with their paper-based processes and disconnected systems—are woefully inadequate for the speed and complexity of contemporary warfare. Maj. Gen. Patrick Ellis, director of the Army’s command-and-control modernization, captured this reality starkly: “There’s probably a headquarter somewhere today at an exercise where an intel officer is going to write everything down on a piece of sticky note that came out of his intel system, walk across the [Tactical Operations Center], hand it over to the fires guy who has to type it into the fires system to make it work.”
This antiquated approach is not merely inefficient—it’s potentially catastrophic. In an era where China and Russia field increasingly sophisticated military capabilities, the side that can process information faster, make decisions more accurately, and execute actions more precisely will hold a decisive advantage. The Pentagon recognizes this reality, requesting over $3 billion for AI and Joint All-Domain Command and Control (JADC2) initiatives in recent budget cycles, with the explicit goal of achieving “information advantage at the speed of relevance.”

The Architecture of Augmented Command
The U.S. Army’s Next Generation Command and Control (NGC2) program represents the most ambitious attempt to realize this vision. Rather than replacing human commanders with machines, NGC2 creates what military theorists call “human-machine teaming”—a collaborative framework in which AI amplifies human cognitive capabilities while humans provide the judgment, creativity, and ethical oversight that remain uniquely human.
At its core, NGC2 operates through three fundamental functions that mirror human cognition: sense, make sense, and act. The “sense” function employs advanced algorithms to process the overwhelming volume of intelligence, surveillance, and reconnaissance (ISR) data streaming from modern battlefield sensors. Machine learning models, including sophisticated object detection algorithms such as YOLO (You Only Look Once), can identify and classify military targets from full-motion video more quickly and accurately than human analysts working alone.
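In practice, raw detector output still needs machine-side filtering before it reaches an analyst. The sketch below is a hypothetical illustration, not any fielded system: high-confidence detections of relevant classes pass through automatically, while everything else is queued for human review.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier output, e.g. "tank", "truck"
    confidence: float  # model score in [0, 1]
    frame_id: int      # source video frame

def triage(detections, threshold=0.8, targets=("tank", "artillery")):
    """Split detections into auto-accepted hits and a human-review queue.

    Only high-confidence detections of militarily relevant classes are
    accepted automatically; everything ambiguous goes to an analyst.
    """
    auto, review = [], []
    for d in detections:
        if d.label in targets and d.confidence >= threshold:
            auto.append(d)
        else:
            review.append(d)
    return auto, review
```

The class names, threshold, and `triage` interface are invented for illustration; a real pipeline would also apply non-maximum suppression and cross-frame tracking before triage.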
The “make sense” function represents where AI truly distinguishes itself. Here, generative AI and machine learning algorithms synthesize disparate data streams—from drone footage to signals intelligence to social media feeds—creating a coherent operational picture that would be impossible for human analysts to assemble in actionable timeframes. Chad Nash, project lead for NGC2, explains how this integration breaks down traditional silos: “Today, they’re pulling from several different sources and as you go up classification, that database is not the same database that you’re using at the lower level. We’ve broken that paradigm and we’re using a single data layer, single map service to provide across different platforms.”
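A single data layer of the kind Nash describes amounts, at minimum, to normalizing every feed into one common schema before fusion, so that every consumer queries the same time-ordered picture regardless of origin. A toy sketch (all field names are invented for illustration):

```python
from datetime import datetime, timezone

def normalize_drone(hit):
    """Drone feed: epoch-seconds timestamps, flat lat/lon fields."""
    return {
        "time": datetime.fromtimestamp(hit["ts"], tz=timezone.utc),
        "lat": hit["lat"],
        "lon": hit["lon"],
        "type": hit["cls"],
        "source": "drone",
    }

def normalize_sigint(hit):
    """SIGINT feed: ISO timestamps, position as a (lat, lon) pair."""
    return {
        "time": datetime.fromisoformat(hit["timestamp"]),
        "lat": hit["position"][0],
        "lon": hit["position"][1],
        "type": hit["emitter"],
        "source": "sigint",
    }

def common_picture(drone_hits, sigint_hits):
    """Merge heterogeneous feeds into one time-ordered event list:
    the 'single data layer' every downstream system reads from."""
    events = [normalize_drone(h) for h in drone_hits]
    events += [normalize_sigint(h) for h in sigint_hits]
    return sorted(events, key=lambda e: e["time"])
```

The hard part in practice is not the merge but agreeing on the schema across services and classification levels, which is precisely the paradigm NGC2 claims to have broken.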

The Human Element in Machine-Age Warfare
Critics of military AI often frame the debate as a binary choice between human control and machine autonomy. However, the most effective military AI systems emerging from current experimentation don’t seek to eliminate human decision-making—they seek to enhance it. The design goal is augmentation, not replacement: machines absorb the volume and tempo of data, while humans retain judgment over consequential choices.
During recent tests at Project Convergence, soldiers demonstrated this synergy in action. Tank crews could seamlessly access real-time intelligence feeds, examine vehicle maintenance data, and coordinate fires—all while maintaining tactical mobility. The AI didn’t make targeting decisions for them; it provided them with a level of situational awareness and analytical support that transformed their effectiveness.
This approach addresses a critical vulnerability in fully autonomous systems: the inability to adapt to truly novel situations. As Paul Lushenko of the U.S. Army War College notes, “Not every AI model will be trained for every battlefield scenario, and the AI will have its limitations.” Human operators supply the adaptability that pre-trained models lack when the battlefield departs from their training data.

Operational Models from the Battlefield
The war in Ukraine offers compelling evidence of AI’s impact on the battlefield through human-machine collaboration. Ukrainian forces have successfully employed AI-coordinated drone swarms that can identify and engage targets with minimal human intervention while maintaining human oversight for critical decisions. These systems compress what military strategists call the “kill chain”—the process from target identification to engagement—from minutes to seconds.
Similarly, the Pentagon’s Project Maven demonstrates how AI can accelerate the OODA (Observe, Orient, Decide, Act) loop without removing human judgment from lethal decisions. By automating the labor-intensive process of analyzing surveillance footage, AI enables human operators to focus on higher-level tactical decisions while retaining accountability for engagement choices.
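The division of labor Maven illustrates—machines handling observe and orient, humans retaining decide—can be sketched as a simple gated loop. All names here are illustrative assumptions, not Maven's actual interfaces:

```python
def ooda_step(raw_detections, approve, threshold=0.85):
    """One pass through a human-gated OODA loop.

    Observe/orient are automated: detections are filtered for noise
    and ranked by score. Decide is not: every candidate must pass
    through `approve`, a human-in-the-loop callback, before it can
    become an action.
    """
    # Observe + orient: machine filters and ranks candidates
    candidates = sorted(
        (d for d in raw_detections if d["score"] >= threshold),
        key=lambda d: d["score"],
        reverse=True,
    )
    # Decide: only a human can promote a candidate to an action
    return [d for d in candidates if approve(d)]
```

For example, with three detections scored 0.97, 0.91, and 0.40, the machine never surfaces the third, and the operator can still decline either of the first two—speed from automation, accountability from the gate.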
These applications reveal AI’s most significant military value: not replacing human decision-makers, but enabling them to operate at a speed and scale previously impossible. As one Ukrainian military blogger noted, AI algorithms can “constantly review all reconnaissance data and detect even the slightest change,” providing commanders with unprecedented situational awareness.

The Data Imperative and Network Resilience
The effectiveness of AI-augmented command relies fundamentally on data—its quality, accessibility, and security. The Pentagon’s JADC2 strategy recognizes data as a “strategic military asset” that requires careful management and protection. This has driven investments in resilient, cybersecure networks that can function in what the military terms DDIL environments—denied, degraded, intermittent, and limited communications scenarios.
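The basic mechanism behind DDIL resilience is store-and-forward: when the link drops, outbound reports are buffered locally and flushed in order once connectivity returns. A minimal sketch, with a generic `send` callable standing in for whatever transport is available:

```python
from collections import deque

class ResilientLink:
    """Buffers messages while the link is down; flushes in order on reconnect."""

    def __init__(self, send):
        self.send = send          # transport callable (radio, SATCOM, mesh...)
        self.up = True
        self.backlog = deque()

    def transmit(self, msg):
        if self.up:
            self.send(msg)
        else:
            self.backlog.append(msg)   # denied/degraded: queue locally

    def set_link(self, up):
        self.up = up
        while up and self.backlog:     # intermittent link restored: drain in order
            self.send(self.backlog.popleft())
```

Real DDIL architectures layer much more on top—prioritization, deduplication, encryption, multi-path routing—but the queue-and-drain pattern is the core that keeps sensor-to-shooter data flowing through communications gaps.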
Lt. Gen. Dennis Crall, the Joint Staff’s chief information officer, emphasizes that JADC2 “transcends any single capability, platform, or system,” instead representing a fundamental shift in how military forces manage and share information. The goal is to create a system where critical data can flow seamlessly from sensors to decision-makers to shooters, even when adversaries attempt to disrupt communications.
This network-centric approach also addresses interoperability challenges that have long plagued coalition warfare. By establishing common data standards and interfaces, AI-enabled command systems can integrate not just across military services but across allied nations—a capability that will prove essential in future conflicts involving multiple partners.

The Strategic Implications for Defense Leadership
For senior defense leaders, the implications of this AI-driven transformation extend far beyond tactical improvements. Nations that master human-machine teaming in military operations will possess decisive advantages in future conflicts. The side that can process intelligence faster, coordinate operations more effectively, and adapt to changing circumstances more rapidly will control the tempo and outcome of military engagements.
However, this transformation also presents significant challenges. Military organizations must fundamentally rethink training, doctrine, and organizational structures to optimize human-machine collaboration. They must also address valid concerns about over-reliance on AI systems and the potential for adversaries to exploit technological dependencies.
The investment requirements are substantial. Beyond the direct costs of AI development and deployment, militaries must modernize networks, train personnel, and develop new operational concepts. Yet the costs of falling behind are far greater. As Deputy Defense Secretary Kathleen Hicks warns, maintaining information and decision advantage requires sustained focus on “initiatives and programs which enhance department capabilities to face current and future threats.”

The Path Forward: Building Trust Through Transparency
The greatest challenge in implementing AI-augmented command lies not in technology but in human psychology. Military personnel must develop trust in AI systems while maintaining healthy skepticism about their limitations. This requires what researchers call “calibrated trust”—understanding when to rely on AI recommendations and when human judgment should override algorithmic suggestions.
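Calibrated trust can be made concrete: compare a model's stated confidence with its observed accuracy on past cases, bucketed by confidence. A well-calibrated model is right about 90% of the time when it reports 0.9; large gaps tell the operator exactly where algorithmic suggestions deserve skepticism. A minimal sketch of such a calibration check:

```python
def calibration_gaps(predictions, n_bins=5):
    """Bucket (confidence, was_correct) pairs by confidence.

    Returns per-bin (mean_confidence, observed_accuracy, count) so an
    operator can see where the model is over- or under-confident; a
    mean confidence far above observed accuracy flags overconfidence.
    """
    bins = [[] for _ in range(n_bins)]
    for conf, correct in predictions:
        i = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into top bin
        bins[i].append((conf, correct))
    report = []
    for b in bins:
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        report.append((round(mean_conf, 2), round(accuracy, 2), len(b)))
    return report
```

If the top bin shows a mean confidence of 0.93 against an observed accuracy of 0.33, the operator has a quantitative reason to override high-confidence recommendations in that regime rather than defer to them.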
Stuart Young from the Army Research Laboratory emphasizes the importance of natural language interfaces that allow “soldiers to interact with the robot in a way that they would naturally interact with a teammate.” This human-centric approach to AI design ensures that technology serves military professionals rather than overwhelming them.
The Pentagon’s SABER (Securing Artificial Intelligence for Battlefield Effective Robustness) program addresses another critical concern: ensuring AI systems remain resilient against adversarial attacks. As Lt. Col. Nathaniel Bastian explains, “Our warfighters deserve to know the AI they’re using is secure and resilient to adversarial threats.”

Conclusion: The Algorithmic Advantage
The transformation of military command and control through AI represents more than technological advancement—it represents an evolution in the fundamental nature of military leadership. The commanders of tomorrow will not choose between human intuition and machine intelligence; they will seamlessly integrate both to achieve decision superiority on increasingly complex battlefields.
The U.S. Army’s NGC2 program, the Pentagon’s JADC2 strategy, and similar efforts by allied nations are not merely procurement programs—they are investments in a new form of warfare where information advantage translates directly to battlefield dominance. The militaries that master this integration of human judgment with AI capabilities will write the rules for future conflicts.
As we stand at this inflection point, defense leaders must recognize that the question is not whether AI will transform military operations, but how quickly they can adapt their organizations to harness its potential. The algorithmic commander is not a distant future concept—it is an emerging reality that demands immediate attention, substantial investment, and clear-eyed understanding of both its promise and its perils.
The fog of war will never be eliminated, but for the first time in military history, commanders have tools that can pierce through much of its obscurity. In this new paradigm, victory does not belong solely to those with the most advanced technology, but to those who most effectively combine human wisdom with machine intelligence in service of strategic objectives. The future of warfare will be written by algorithmic commanders—leaders who understand that in the age of AI, the most potent weapon is the seamless partnership between human and machine.