When the user asks “What enemy military unit is in the region?” the AIP Assistant guesses that it’s “likely an armor attack battalion based on the pattern of the equipment.” This prompts the analyst to request an MQ-9 Reaper drone to survey the scene. They then ask the AIP Assistant to “generate 3 courses of action to target this enemy equipment,” and within moments, the assistant suggests attacking the unit with an “air asset,” “long range artillery,” or a “tactical team.” The user tells the assistant to send these options to a fictional commander, who ultimately chooses the tactical team.

The final steps play out quickly: The analyst asks the AIP Assistant to “analyze the battlefield,” then “generate a route” for troops to reach the enemy, and finally “assign jammers” to sabotage their communications equipment. Within seconds, the analyst gives the battle plan a final review and orders the troops to mobilize.

In this scenario, Claude would be the “voice” of the AIP Assistant and the “reasoning” it uses to generate responses. Other AIP demos show users interacting with large language models in much the same way. In a blog post published last week, for example, Palantir detailed how NATO, a Maven Smart System customer, could use an AIP Agent within the tool.

In one graphic, Palantir shows how a third-party defense contractor can select from several of Palantir’s built-in AI models, including different versions of OpenAI’s ChatGPT and Meta’s Llama. The user selects OpenAI’s GPT-4.1, but presumably, this is also where a soldier would have the option to pick Claude instead.

An analyst then views a digital map showing the locations of troops and weapons. In a panel labeled “COA” (courses of action), they click a button that prompts a tool powered by GPT-4.1 to generate five possible military strategies, including one called “Support-by-Fire-then-Penetration-Shock-and-Destruction.”

Another example shows how the system could help interpret satellite imagery: The analyst selects three tanker truck detections on a map, loads them into the AIP Agent’s chat interface, and asks it to “interpret” the imagery and suggest options for what to do next.

Claude may also be used by the military to create intelligence assessments that could inform strike planning down the line. In June 2025, WIRED viewed a demonstration given by Kunaal Sharma, a public sector lead at Anthropic, showing how the enterprise version of Claude could be used to generate “advanced” reports about a real Ukrainian drone strike dubbed “Operation Spider’s Web.” In the demo, Sharma explained, Claude was relying only on publicly available information. But by partnering with Palantir, he said, the federal government can also pull from internal datasets.

“This is typically something that I might sit for like five hours with a cup of coffee, and read Google, and go into think tanks, and start writing reports and writing a citation, et cetera, et cetera,” Sharma said. “But I don’t have that kind of time.”

In the demo, Sharma asked Claude to create an “interactive dashboard” with information about Operation Spider’s Web, and then translate it into “object types” that could be analyzed in Foundry, one of Palantir’s off-the-shelf software products. He also asked Claude to write a detailed analysis of recent developments in Russia’s border provinces, as well as a 200-word synopsis of the operation’s “military and political effects.”

“Frankly, I’ve been reading these types of things for twenty years—I used to write them, I used to be an academic myself,” Sharma said. “This is actually pretty good.”
