I don't get his call for regulation. While it may apply to the U.S., I don't see how it would apply to other countries. China already has the world's top 2 supercomputers.
Maybe Elon Musk should stage a real-world demonstration, like OpenAI's win in Dota 2 — for example, a deer-hunting robot that hunts better than human hunters — to prove the point about how dangerous killer robots are. Maybe then we'll see some regulatory activity, once the envelope has been pushed and the veil lifted on Pandora's box...
Thank you for posting. The text of the letter is copied below, along with a link to the full text and list of signatories.
Apparently the more than 100 AI and robotics leaders from around the world who signed the letter take the threats from AI very seriously, in this case autonomous weapons.
Perhaps a few of those who reflexively challenge everything Elon Musk says will reconsider whether this is a serious threat given the number of other scientists and tech leaders who are raising the alarm.
An Open Letter to the United Nations Convention on Certain Conventional Weapons
As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm.

We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.
There was an interesting piece in Bloomberg today on Russia's efforts to develop militarized AI applications, following up on Elon's tweets yesterday.
I thought this summary of reports in Russian state-sponsored media was noteworthy.
State-sponsored media reports on the potential military uses of AI have picked up in recent months. They include an AI system to help pilots fly fighter planes, a project by St. Petersburg-based Kronstadt Group to equip drones with artificial intelligence, a similar effort for missiles by the Tactical Missiles Corporation, and a Kalashnikov combat module using neural networks. The details of these efforts are not public, and the agencies may be exaggerating their importance for propaganda effect. But Russia is known to be experimenting with network-centric warfare -- including during its Syrian military operation -- so AI implementations are a logical step.
It's likely that, as in Soviet times, the military applications of AI in Russia are outpacing consumer ones. With guaranteed government financing, they face fewer constraints than Russian private companies or academic researchers do, given the Silicon-Valley-centric nature of the business.
I wonder if that will make a few more people take this seriously.
The punch line of the article is:
AI is far more dangerous as part of weapons than as a potential replacement of the human brain in civilian applications. Nations will be killing with AI long before the technology can cause mass unemployment. In this sense, Musk's alarmism -- and Putin's words about AI-based global dominance -- should be taken seriously.