Technology has advanced to the point where autonomous weaponry could plausibly be deployed in battle in the near future. Prominent figures such as Elon Musk and Stephen Hawking have expressed opposition to the military use of autonomous intelligent robots in an open letter addressed to the public. Their stance is notable given that both owe much of their success to work in science and technology, but their standing in these fields makes their viewpoint one that should be taken seriously as robots become more autonomous. We chose to analyze why figures like Elon Musk and Stephen Hawking would oppose autonomous weaponry, the pros and cons of its presence in the military, and which ethical test would best evaluate that presence.
Many people hold a stake in this topic. In fact, everyone would be affected: if autonomous weaponry came to the forefront, it would become the next revolutionary method of warfare, and major countries would enter an arms race to develop these weapons in order to compete. Musk and Hawking warn that if autonomous military weaponry becomes a reality, "autonomous weapons will become the Kalashnikovs of tomorrow." Arms manufacturers could stand to prosper or suffer depending on whether they could incorporate autonomy into their weapons. There would be less need for ground infantry if robots could be sent into dangerous situations instead, but it would be difficult for AI to differentiate between civilians and combatants. The biggest issue with AI weaponry, however, as Musk and Hawking explain, is that it would be difficult to control.
They go on to assert that if this scenario occurs, "it will only be a matter of time until they appear on the black market." From a utilitarian point of view, anyone with the resources could obtain these dangerous weapons and use them to attack any group of people they please. One can deduce that the combination of low production cost and relatively easy access gives automated weaponry the potential to spread into many different hands, with few means of controlling its distribution. From a justice point of view, the arms race would be unfair to countries that are not already advanced in technology and resources. Overall, though, this technology would be difficult to control. Musk and Hawking affirm this sentiment, closing the paragraph with the emphatic statement that these would be "Weapons beyond meaningful human control."
Another viewpoint holds that humans remain the controlling factor, since it would be our choice whether or not to start this arms race; this would be a test of the virtues of the human race. The letter touches on this when it says "the key question for humanity today is whether to start a global AI arms race or prevent it from starting." In this view, it is up to humans whether they want to see how the technology would affect warfare. One can understand why this argument might be accepted, since automated weaponry has thus far been largely left alone. In fact, as Musk and Hawking explain, "most AI researchers have no interest in building AI weapons." Just as chemists and biologists have refrained from venturing further into chemical and biological warfare, AI researchers do not want to taint artificial intelligence by inviting major public backlash against a field that could greatly benefit society. Most chemists, biologists, and physicists have supported international agreements to prevent the production of biological, chemical, and nuclear weapons, and AI researchers appear to be of the same mindset. All stakeholders could benefit if businesses and AI researchers focused their efforts on developing AI to help civilians caught in war zones rather than on creating new weapons. Human infantry would then have one less burden, since civilians would already be protected.
We concluded that the utilitarian test would be the best way to evaluate automated weaponry: the central issue is how this technology would affect all humans, since its introduction would have consequences on a world scale. Musk and Hawking do believe in the potential benefits of artificial intelligence, and think that we should focus on how to use it to help society, but they warn that a preemptive "ban on offensive autonomous (weaponry)" is needed to prevent the case in which this technology propagates beyond human control.

Link to open letter: https://futureoflife.org/open-letter-autonomous-weapons/