The US Army is experimenting with integrating AI chatbots into its strategic planning, albeit within the confines of a war game simulation based on the popular video game StarCraft II. 

The study, led by the US Army Research Laboratory, analyzes the battlefield strategies of OpenAI’s GPT-4 Turbo and GPT-4 Vision. 

This is part of OpenAI’s collaboration with the Department of Defense (DOD) following the DOD’s establishment of a generative AI task force last year. 

AI’s use on the battlefield is hotly debated, with a similar recent study on AI wargaming finding that LLMs like GPT-3.5 and GPT-4 tend to escalate diplomatic tactics, sometimes leading to nuclear war. 

This recent research from the US Army used StarCraft II to simulate a battlefield scenario involving a limited number of military units.

Researchers dubbed this system “COA-GPT,” with COA standing for the military term “Courses of Action,” which essentially describes military tactics. 

COA-GPT assumed the role of a military commander’s assistant, tasked with devising strategies to obliterate enemy forces and capture strategic points. 

COA-GPT is an AI-powered decision support system that assists command and control personnel in developing Courses of Action (COAs). It uses LLMs constrained by predefined guidelines. Command and control personnel input mission information, and COA-GPT generates candidate COAs. Through an iterative, natural-language process, the human operators and COA-GPT collaborate to refine and select the most suitable COA for the mission objectives. Source: ArXiv.

The researchers note that traditional COA development is notoriously slow and labor-intensive. COA-GPT makes decisions in seconds while integrating human feedback into the AI’s learning process. 
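
To give a sense of what such a human-in-the-loop workflow looks like, here is a minimal sketch of an iterative COA refinement loop. It assumes the OpenAI chat completions API; the prompts and function names (generate_coa, refine_interactively) are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of an iterative, human-in-the-loop COA refinement loop.
# Prompts and function names are illustrative assumptions, not the paper's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a military commander's assistant. Given mission information, "
    "propose a Course of Action (COA) as a numbered list of unit orders."
)

def generate_coa(mission_info: str, feedback_history: list[str]) -> str:
    """Ask the model for a COA, folding in any prior human feedback."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Mission information:\n{mission_info}"},
    ]
    for note in feedback_history:
        messages.append({"role": "user", "content": f"Commander feedback: {note}"})
    response = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
    return response.choices[0].message.content

def refine_interactively(mission_info: str) -> str:
    """Propose a COA, show it to the human operator, and refine until accepted."""
    feedback: list[str] = []
    while True:
        coa = generate_coa(mission_info, feedback)
        print(coa)
        note = input("Feedback (leave blank to accept): ").strip()
        if not note:
            return coa
        feedback.append(note)
```

The key design point the paper emphasizes is the loop itself: the model drafts a plan in seconds, and the human commander steers successive drafts through natural-language feedback rather than writing the plan from scratch.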

Illustrates the iterative process of developing Courses of Action (COAs) with human input. In (a), COA-GPT generates an initial COA without human guidance, displaying force movements (blue arrows) across bridges and engagement directives (red arrows) against hostile units. Panel (b) shows the COA after a human commander adjusts it, specifying that friendly aviation forces should directly engage hostile aviation. Finally, (c) demonstrates further COA refinement, with forces splitting to address both enemy artillery units and reconnaissance units receiving orders to advance to the northern bridge. Source: ArXiv.

COA-GPT outperforms other methods, but at a cost

COA-GPT outperformed existing methods in generating strategic COAs and could adapt to real-time feedback. 

However, there were flaws. Most notably, COA-GPT incurred higher casualties in accomplishing mission objectives.

The study states, “We observe that the COA-GPT and COA-GPT-V, even when enhanced with human feedback, exhibits higher friendly force casualties compared to other baselines.”

Does this deter the researchers? Seemingly not.

The study says, “In conclusion, COA-GPT represents a transformative approach in military C2 operations, facilitating faster, more agile decision-making and maintaining a strategic edge in modern warfare.”

It’s worrying that an AI system that caused more unnecessary casualties than the baseline is described as a “transformative approach.”

The DOD has already identified other avenues for exploring AI’s military uses, but concerns about the technology’s readiness and ethical implications loom. 

For example, who’s responsible when military AI applications go wrong? The developers? The person in charge? Or someone further down the chain?

AI warfare systems are already deployed in the Ukraine and Palestine-Israel conflicts, but these questions remain largely untested.

Let’s hope it stays that way.

This article was originally published at dailyai.com