Russia Is Lying About Its AI Capabilities: How Russia Is Using Emerging Technologies to Hide Human Rights Violations

Lauren Kahn is a research fellow at the Council on Foreign Relations.

Since Russia’s full-scale invasion of Ukraine began, Russian President Vladimir Putin’s military forces have flouted international norms and international humanitarian law. Russian forces in Ukraine have looted, mistreated prisoners of war, forcibly moved Ukrainians to Russia, and targeted civilians. It is no surprise, then, that Russia is also making extensive use of anti-personnel mines, anti-tank mines, booby traps, and improvised devices in Ukraine, often scattering them from a distance with artillery systems. According to experts at the Halo Trust, an international demining organization, over 450 different types of munitions have been found throughout Ukraine, including the notorious PFM-1 “butterfly” or “petal” mine, which inflicts heavy casualties and is especially dangerous to children because of its small, toy-like plastic appearance.

Due to the indiscriminate harm these weapons inflict, often long after conflicts subside, 164 countries have become party to the 1997 Convention on the Prohibition of the Use, Stockpiling, Production, and Transfer of Anti-Personnel Mines. Russia is not one of them. (Neither is the United States, though its policy essentially aligns with the convention.) While the Russian military’s use of anti-personnel mines is not uncommon—Human Rights Watch has documented their use in both Libya and Syria in the past decade—Russia’s use of these weapons in Ukraine “marks a rare circumstance . . . in which a country not party to the 1997 Mine Ban Treaty uses the weapon on the territory of a party to the treaty.”

But what makes Russia’s use of anti-personnel mines and cluster munitions in Ukraine unique is that Moscow justifies it by claiming that the mines and munitions are precisely targeted with artificial intelligence (AI). What makes anti-personnel mines harmful, and what has landed them on the short list of weapons banned in war, is that, by design, they cannot distinguish between military personnel and civilians. Anti-personnel mines are meant to prevent anything and anyone from approaching them. Even “smart” mines, which can self-destruct after a set amount of time and can be deactivated, cannot discern what is approaching them while they are active.

Russia claims that it has solved the mine discrimination problem, however, with a “smarter” anti-personnel mine that relies on AI—the POM-3 “Medallion.” In a Russian TV interview, the head of the research and engineering institute that produces the POM-3 claimed that it is the first mine of its kind because it is no longer indiscriminate. He said its developers carefully studied the movement of a variety of objects in varying physical settings and successfully used that data to train an algorithm to differentiate, say, the walking pattern of an approaching civilian, such as a farmer, from that of a soldier. Even more impressively, he said, the POM-3 could tell a Russian soldier from a Ukrainian one. According to him, the munition’s fuse sensors use this algorithm to weigh all these factors and then “rank” potential incoming targets. This, in theory, allows Russian soldiers “to pass through their minefield like a knife through butter” while rendering the minefield “insurmountable for the enemy” yet safe for civilians.

It is highly unlikely that the POM-3 is AI-enabled at all, let alone able to distinguish an enemy soldier’s footsteps from a civilian’s. Even if such minute differences in movement patterns did exist, building an algorithm that could tell them apart with any accuracy would require an enormous amount of training data and development, even in the best possible case. We know from the many problems facing autonomous vehicle development that AI is brittle—it breaks when introduced to complex environments. Self-driving cars, which have been under development for decades, are still not reliable enough to operate in the real world, even on roadways where traffic laws introduce some constraints and predictability. No such parameters can be assumed for where, how, and when combatants will deploy mines in a war zone. The claim is even harder to believe given Russia’s failure to keep pace with its own AI ambitions and the fact that it has yet to field other AI-enabled weapons in Ukraine.

This is not the first time Russia has claimed to use smarter, more precise versions of existing, indiscriminate munitions. For example, during the Russian intervention in Syria in 2015, Russian state-sponsored media outlets published photos of “Russian KAB-500S precision-guided bombs—a weapon the Russian Defense Ministry was thought to have rejected in 2012 due to high costs—strapped to the bellies of advanced fighter planes parked on Syrian runways.” Data suggested that actual use of the weapons in the conflict was minimal. Precision-guided bombs, also known as smart bombs, are touted as more humane weapons: because they are more accurate than unguided “dumb” bombs, they reduce the likelihood of collateral damage. Russia possessed these smart-bomb capabilities and promoted propaganda suggesting their use. But strong evidence of Russia’s reliance on unguided munitions in the conflict led the United Nations to allege in 2018 that Russia purposely used dumb bombs in Syria in a potential “effort to shift responsibility for possible war crimes and civilian deaths to their ally, the Syrian regime of Bashar al-Assad.”

Russia’s claim to be using higher-tech weaponry than it really possesses serves a dual purpose. First, it makes it easier to deflect Western criticism of civilian deaths and to maintain some semblance of rhetorical support for international human rights norms. Second, it allows Russia to present its military as modern and innovative. Whereas smart bombs served these purposes for Russia in the 2010s, claims of AI-enabled smart munitions and technologies play this role today. The war in Ukraine has been unique in that many technologies are being used on the battlefield for the first time, and existing technologies are being used at scale or in new, innovative ways. The Ukrainian military has received significant media attention for successfully capitalizing on emerging technologies, including facial recognition, open-source satellite imagery, drones, and loitering munitions. Russia, on the other hand, has suffered very public logistical failings and demonstrated an inability to leverage emerging technologies as effectively as Ukraine. For example, a video posted to Twitter in late August 2022 shows Russian propagandists attempting to fly a drone, which Ukrainian forces then hijacked with electronic warfare technology. Given how successful Ukraine has been at this kind of information and social media warfare, it is no surprise that Russia has begun to make exaggerated, unverifiable claims about advanced AI military capabilities.

Russia’s false narrative that the POM-3 is an AI-enabled, discriminate weapon speaks to the symbolic role weapons play beyond their use on the battlefield. As Scott Sagan argues, weapons can even serve “functions similar to those of flags, airlines, and Olympic teams: they are part of what modern states believe they have to possess to be legitimate, modern states.” Therefore, in addition to the raw tactical benefits of smarter weapons (whether they are actually smart or not), such weapons offer a secondary, normative benefit based on perception. They serve as potential evidence both of respect for international norms on the conduct of warfare and human rights (even if that respect is neither complete nor genuine) and of a technologically advanced military. In the case of smart bombs, the technologies were real and could be analyzed. But in the emerging realm of AI and deep learning, officials can get away with more exaggerated claims, even ones that border on science fiction, because the underlying technologies are not commonly understood. As states continue to develop and use new technologies in warfare, we will no doubt see more dubious claims made to skirt norms surrounding the use of force and to exaggerate military technological capabilities.

Works Cited

“Ukraine: Apparent War Crimes in Russia-Controlled Areas,” Human Rights Watch, April 3, 2022, https://www.hrw.org/news/2022/04/03/ukraine-apparent-war-crimes-russia-controlled-areas.

Peter Beaumont, “Danger in Every Step: The ‘Chaotic and Complex’ Work of Ukraine’s De-Miners,” The Guardian, October 3, 2022, https://www.theguardian.com/world/2022/oct/03/ukraine-de-miners-russia-war.

“Anti-Personnel Landmines Convention,” United Nations Office for Disarmament Affairs, December 10, 2019, https://www.un.org/disarmament/anti-personnel-landmines-convention/.

“Fact Sheet: Changes to US Anti-Personnel Landmine Policy,” The White House, June 21, 2022, https://www.whitehouse.gov/briefing-room/statements-releases/2022/06/21/fact-sheet-changes-to-u-s-anti-personnel-landmine-policy/.

“Ukraine: Russia Uses Banned Antipersonnel Landmines,” Human Rights Watch, March 29, 2022, https://www.hrw.org/news/2022/03/29/ukraine-russia-uses-banned-antipersonnel-landmines.

“IHL Treaties and the Regulation of Weapons,” Canadian Red Cross, accessed October 10, 2022, https://www.redcross.ca/how-we-help/international-humanitarian-law/what-is-international-humanitarian-law/weapons-and-international-humanitarian-law/ihl-treaties-and-the-regulation-of-weapons.

“Human Rights Watch Position Paper on ‘Smart’ (Self-Destructing) Landmines,” Human Rights Watch, February 2004, https://www.hrw.org/sites/default/files/report_pdf/smartmines_formatted.pdf.

“POM-3 Landmine,” Collective Awareness to UXO, accessed October 10, 2022, https://cat-uxo.com/explosive-hazards/landmines/pom-3-landmine.

Dmitri Drozdenko, “Проклятие вражеской пехоты: чем страшна мина ПОМ-3 [Curse of the Enemy Infantry: The Terrible POM-3 Landmine],” TV Zvezda, September 26, 2017, https://tvzvezda.ru/news/201709260758-b9rq.htm.

Toby Walsh, “Bet You’re on the List: How Criticizing ‘Smart Weapons’ Got Me Banned from Russia,” The Conversation, June 20, 2022, https://theconversation.com/bet-youre-on-the-list-how-criticising-smart-weapons-got-me-banned-from-russia-185399.

Gregory C. Allen, “Russia Probably Has Not Used AI-Enabled Weapons in Ukraine, but That Could Change,” Center for Strategic & International Studies, May 26, 2022, https://www.csis.org/analysis/russia-probably-has-not-used-ai-enabled-weapons-ukraine-could-change.

Paul McCleary, “Putin’s Smart Bombs Aren’t That Smart,” Foreign Policy, October 14, 2015, https://foreignpolicy.com/2015/10/14/putin-smart-bombs-arent-all-that-smart/.

Lauren Kahn and Michael C. Horowitz, “Who Gets Smart? Explaining How Precision Bombs Proliferate,” Journal of Conflict Resolution, July 11, 2022, https://journals.sagepub.com/doi/10.1177/00220027221111143.

“Smart Bombs Have Gone Global,” The Economist, March 25, 2021, https://www.economist.com/graphic-detail/2021/03/25/smart-bombs-have-gone-global.

Kareem Shaheen, “Russia Suspected of Using ‘Dumb’ Bombs to Shift Blame for Syria War Crimes,” The Guardian, March 6, 2018, https://www.theguardian.com/world/2018/mar/06/russia-suspected-of-using-dumb-bombs-to-shift-blame-for-syria-war-crimes.

Lauren Kahn, “How Ukraine is Remaking War,” Foreign Affairs, August 29, 2022, https://www.foreignaffairs.com/ukraine/how-ukraine-remaking-war.

Andrew Perpetua, Twitter post, August 27, 2022, 4:05 p.m., https://twitter.com/AndrewPerpetua/status/1563618807536656385.

Scott D. Sagan, “Why Do States Build Nuclear Weapons?: Three Models in Search of a Bomb,” International Security 21, no. 3 (Winter 1996-1997): 54-86, https://www.jstor.org/stable/2539273.


Lauren Kahn is a research fellow at the Council on Foreign Relations, where her work focuses on defense innovation and the impact of emerging technologies on international security, with a particular emphasis on artificial intelligence.