A United Nations panel agreed Friday to consider guidelines and potential limitations for military uses of artificial intelligence amid concerns from human rights groups and other leaders that so-called “killer robots” could pose a long-term, lethal threat to humanity.
Advocacy groups warned about the threats posed by such "killer robots" and aired a chilling video illustrating their possible uses on the sidelines of the first formal U.N. meeting of government experts on Lethal Autonomous Weapons Systems this week. More than 80 countries took part.
Ambassador Amandeep Gill of India, who chaired the gathering, said participants plan to meet again in 2018. He said ideas discussed this week included the creation of a legally binding instrument, a code of conduct, or a technology review process.
The Campaign to Stop Killer Robots, an umbrella coalition of advocacy groups, says 22 countries support a ban on the weapons and the list is growing. Human Rights Watch, one of its members, called for an agreement to regulate them by the end of 2019 — admittedly a longshot.
The meeting falls under the U.N.'s Convention on Certain Conventional Weapons — also known as the Inhumane Weapons Convention — a 37-year-old agreement that has set limits on the use of arms and explosives like mines, blinding laser weapons and booby traps over the years.
The group operates by consensus, so the least ambitious goals are likely to prevail, and countries including Russia and Israel have firmly staked out opposition to any formal ban. The United States has taken a go-slow approach, rights groups say.
Russian President Vladimir Putin said last September that the country that masters artificial intelligence would be the “ruler in the world,” adding that AI and its use in weaponry raises “colossal opportunities and threats that are difficult to predict now.”
Tesla founder and tech entrepreneur Elon Musk warned earlier this year that “AI is a fundamental risk to the existence of human civilization.”
U.N. officials say fully autonomous, computer-controlled weapons don't exist yet, but defining exactly what killer robots are, and how much human interaction is involved, was a key focus of the meeting. The United States argued that it was "premature" to establish a definition.
The concept alone stirs the imagination and fears, as dramatized in futuristic Hollywood science-fiction films that have depicted uncontrolled robots deciding on their own to fire weapons and kill people.
Ambassador Gill played down such concerns.
"Ladies and gentlemen, I have news for you: The robots are not taking over the world. So that is good news, humans are still in charge ... We have to be careful in not emotionalizing or dramatizing this issue," he told reporters Friday.
The United States, in comments presented, said autonomous weapons could help improve guidance of missiles and bombs against military targets, thereby "reducing the likelihood of inadvertently striking civilians." Autonomous defensive systems could help intercept enemy projectiles, one U.S. text said.
Some top academics like Stephen Hawking, technology experts and human rights groups have warned about the threats posed by artificial intelligence, amid concerns that it might one day control such systems — and perhaps sooner rather than later.
"The bottom line is that governments are not moving fast enough," said Steven Goose, executive director of arms at Human Rights Watch. He said a treaty by the end of 2019 is "the kind of timeline we think this issue demands."
The Associated Press contributed to this report.