Onward, Robot Soldiers?

I’ve written multiple times about basic values and technology trends, and how they can cause unintended consequences.

Today I’m exploring the topic of autonomous weapons, reasons behind their development, and potential outcomes. This is a big topic that I will certainly return to multiple times.

Autonomous weapons are characterized by their ability to understand battlefield goals and to find ways to achieve those goals without human action. Such weapons are currently being researched, developed, and tested as intelligent wingmen for fighter pilots, as support vehicles carrying supplies and fuel, and as offensive weapons.

Basic Values and Technology Development

As for reasons to pursue their development, Robert Work, US Deputy Secretary of Defense from 2014 to 2017, has said that it is “a moral imperative to at least pursue [the] hypothesis” that autonomous weapons are less likely to make mistakes than humans in battle.

If that hypothesis is correct (and even if it is not), why wouldn’t a military want to explore autonomous weapons? If a country’s military believes that it acts justly, or at least better than others, and that autonomous weapons may improve its capabilities, the moral imperative claim makes sense. This is just another step in the long pursuit of better weapons. Basic values would require the exploration of autonomous weaponry if it confers an advantage.

But autonomous weapons obviously feel different. And, as expected, there is a campaign to stop their development.

The sense of inevitability driven by technological improvement heightens with competition. That’s the argument that if a country doesn’t develop a military capability, its enemies may, and that would pose an existential threat from which there may be no recovery.

We see competition and tech-driven inevitability in other domains too. I’ve written about arms races in universities and how they lead to second-order effects. I wrote about better processing power, higher-resolution cameras, networked systems, and machine learning enabling facial recognition and making surveillance inevitable. I’ve written multiple times about autonomous vehicles, their optimization for safety, and their potential systemic risks.

Still, the autonomous weapons question scares more people than these other examples. There is more uncertainty about outcomes, and autonomous weapons just seem… scarier.

Recent Warfare Inflection Points

WWI is often thought of as the transition between traditional and modern warfare. It was the point at which technological advancements made earlier, more “personal” styles of fighting unsustainable. At the start of WWI, French officers entered battle wearing red pants, carrying swords, and, depending on rank, sporting plumes in their caps. By the end they were in camouflage. Close-range fighting was increasingly replaced by long-range artillery, armored tanks, reconnaissance planes, and poison gas.

For the question of autonomous weapons, poison gas offers an interesting comparison. It’s an example of a new technology that provoked broad disgust. Fritz Haber, who pioneered the use of chlorine gas as a weapon (but who also made synthetic fertilizers possible), was broadly condemned.

Poison gas was considered inhumane, and in 1925 countries agreed to ban it, though it has obviously been deployed again more recently in Syria. One interpretation of the ban’s success is cynical: since gas can blow back toward those deploying it, it is not a very effective weapon anyway.

But the explosion of technological capabilities from the start of WWI to the end of WWII, a span of just over 30 years, was incredible.

WWII saw the only use of atomic weapons in warfare, and afterward came successful negotiations to limit their use. The nuclear non-proliferation treaties worked in a world dominated by the bilateral US-USSR standoff. Those agreements became more difficult as the world changed. Today there are state actors that may be less predictable than the US and USSR seemed during the Cold War, as well as the fear of non-state actors obtaining nuclear weapons.

But the Cold War also produced a semi-autonomous weapons system: the USSR’s “dead hand” system, built to launch a nuclear counterattack if its leadership were wiped out.

Relatedly, it once took human intervention to avert disaster when a system misinterpreted data as a US attack on the USSR. In 1983, Stanislav Petrov, the “man who saved the world,” paused and determined that the USSR’s missile detection system was wrong in warning that five missiles had been launched from North Dakota in the US. The “missiles” turned out to be sunlight glinting off clouds.

Robot Soldiers

The Campaign to Stop Killer Robots is working to build support for a ban on autonomous weapons.

But is a ban on autonomous weapons possible? What are the inputs needed for a successful ban? What is needed for a moratorium to last? What other effects might there be?

There are several differences that make a moratorium on autonomous weapons even more challenging than the earlier nuclear non-proliferation efforts were.

With autonomous weapons:

  • Capabilities are emerging (though predictable), rather than already present.
  • There is as yet no example of their use to scare people into collaboration.
  • Supporting technology is more broadly distributed.
  • The big countries want to develop them first (and possibly only then pursue a moratorium or ban).
  • There are more potential actors. The world is not bilateral, or not as bilateral as it once was, and there are small non-state actors.
  • It is easier to hide an autonomous weapons program than a nuclear testing program (which itself can be hidden to a degree).

Polling for a Plan

Public opinion matters to different degrees around the world. Is it likely that public opinion can really manage or sway autonomous weapons development?

While public opinion turned against nuclear weapons, it also turned against nuclear power. Accidents at nuclear power plants, including Three Mile Island in the US, Chernobyl in the former USSR, and Fukushima in Japan, stifled the nuclear power industry. As a result, the world consumed more fossil fuels than it otherwise would have. With the added pollution from burning those fuels, how many lives were shortened?

We have many examples of public opinion being cultivated and distorted in support of previous wars, such as the Spanish-American War (claims that the Spanish sank the USS Maine), the Vietnam War (claims that a US ship was attacked in the Gulf of Tonkin), and the Second Iraq War (claims of weapons of mass destruction).

But public opinion that once supported a war can also reverse. Given time, such a reversal may be inevitable, and it can sway a nation’s willingness to continue a protracted war, as in Vietnam, the Second Gulf War, and the War in Afghanistan.

Now that civilians contribute so much to military technology, public opinion can make a difference there as well, as when Google engineers petitioned the company to stop supporting Project Maven, which applied AI to the analysis of drone surveillance footage.

Countries that rely more on public opinion may be constrained in what they can legitimately do (or they must do it in secret). Conversely, it is a great advantage to be able to make the civilian creators of your enemies’ weapons unwilling to contribute to the effort.

A Different Discussion on Autonomy

It’s also interesting how different the default discussion is for autonomous weapons compared to the discussion around autonomous vehicles.

With autonomous vehicles (AVs), the default discussion pushed by their promoters is that human-driven cars are unnecessarily dangerous and AVs are a path to safety with great social benefits. That can be true of AVs even as their largely ignored rise in systemic risk is also true.

Could proponents apply the same tactic to autonomous weapons? For example, promoters of autonomous weapons could make the case that traditional warfare is worse than it needs to be: humans make mistakes and target the wrong people, humans commit war crimes, and autonomous weapons would reduce the need for humans in the military or for a draft. Autonomous weapons could be a path to safety. I won’t be surprised to see attempts to move the discussion in that direction.