The US military’s AI initiatives for autonomous robots reignite the debate over their authorization to kill


Camille Coirault

November 27, 2023 at 5:21 p.m.



AI is a non-negotiable technology for a modern armed force © Gorodenkoff / Shutterstock

The growing use of AI in the US military is rekindling the debate over the autonomy of these systems and their ethical implications, in particular their ability to make lethal decisions.

The US Air Force’s Replicator project perfectly illustrates how essential AI has become to the logic of war: almost 6 billion dollars invested to build 2,000 XQ-58A Valkyrie autonomous fighters. While fully autonomous drones already raise concerns, the current situation poses a crucial question: to what extent can a fully autonomous machine be allowed to kill? It is a theme that served as the central pillar of the Terminator franchise and the series Ghost in the Shell.

Advances and applications of AI in the military sector

In total, the US Army is managing more than 800 AI-based projects. Their applications are highly varied: surveillance drones (MQ-9 Reaper), autonomous vehicles (Multi-Utility Tactical Transport), air defense systems (Phalanx CIWS), and logistics support robots.

However, integrating artificial intelligence into military applications is no easy process. Gregory Allen, former head of AI at the Pentagon, expresses this concern: “the Department of Defense struggles to adopt AI developments from the latest advances in machine learning.” A statement that clearly illustrates the difficulties the American army faces in implementing AI in its operations.


The MQ-9 Reaper, a semi-autonomous drone used by the US Air Force for surveillance or attack operations © Master Sgt. Robert W. Valenca / US Air Force

Ethics and responsibility of AI in the military

Today, the prospect of fully autonomous lethal weapons proliferating remains a major source of concern, especially from an ethical point of view. According to experts, there is little doubt that this type of device will be developed massively within the American armed forces in the coming years. In response, the Pentagon seeks to be reassuring, asserting that “humans will always be in control.”

It is an assertion put to the test by rapid technological developments and ever-increasing data processing speeds. How can we not imagine, in this case, that humans will be pushed into the background, relegated to a simple supervisory role?

Replicator Project Ambitions and Strategic Direction

Even if the Replicator project still has its gray areas, it represents a major strategic ambition: to automate part of the American strike force. Christian Brose, former staff director of the Senate Armed Services Committee, supports this initiative, unsurprisingly: “The issue is gradually shifting: it is no longer so much a question of whether it is the right decision, but rather of how to actually proceed, within the tight deadlines required.”

Added to this is the technology race with China, which has intensified considerably since the beginning of the 2000s. The Replicator project fits squarely into the Pentagon’s desire to stimulate military innovation in order to keep pace with, or even technologically surpass, its adversaries. And ethics in all this? If anything must be sacrificed for military supremacy, it is ethics, something the United States understood long ago.

Sources: AP News, Geo
