Essays 14 September 2020, 18:14

Is AI Takeover Possible? Rebellion of Machines Could be Different than We Think

Artificial intelligence makes our lives easier, increases unemployment and raises concerns even among those who make money on it. Is it possible for it to rebel against humans at some point?


Are we building Skynet right now? We actually are

Skynet – a fictional military defense system that serves as the main antagonist of the films and games in the Terminator series.

If we're talking about a machine rebellion, it's not because we're afraid toasters will start strangling us with power cords. Rather, we're thinking of strictly military systems, which will first be created to wage our wars. Is this already happening? Of course it is. Most armies around the world are developing such systems in many different dimensions.

Autonomous systems define the modern battlefield.

American MQ-9 Reaper drones have been in service since 2006. In 2018, Israel announced work on a new version of the Merkava tank, in which the crew would be supported by an AI assistant. The Kalashnikov Concern has developed the Uran-9, an unmanned tracked vehicle that has taken part in operations in Syria.

These are just three examples selected from countless others. Unmanned vehicles offer a huge advantage. Take the aforementioned Uran-9 – armed with a 30-mm automatic cannon, a machine gun, and guided anti-tank missiles. Weighing a mere 10 tons, it's a fraction of the weight of the American Bradley, while offering similar firepower. The Bradley is much bigger because it has to fit three crew members and seven passengers.

A smaller size means fewer materials needed for production, lower fuel consumption, better maneuverability, and easier, faster transport. A smaller vehicle is easier to conceal, and driver fatigue doesn't limit combat operations. If the vehicle is remotely controlled rather than fully autonomous, operators can swap freely during a mission. Automation could also ease the Navy's manpower shortages. A modern destroyer requires a crew of about 300 at any given time, and since crews rotate, the real number is even higher. That's why autonomous ships would be highly desirable.

The U.S. Navy has allocated $400 million for two unmanned vessels, up to 300 feet long, in 2020-21. A total of 10 such ships, costing about $2.7 billion, are planned to be ready by 2024.

Robert Czulda, PhD, Foreign Policy and Security, University of Lodz

Needless to say, losing equipment is nothing compared to losing lives.

The biggest advantage of unmanned systems, however, is reducing the risk of fatalities in your own army. A Bradley driving over a mine means potentially ten dead or wounded soldiers. You can translate that into money needed for funerals, disability benefits, and training new soldiers, of course. But above all, it has a huge impact on society. A war in which a country loses several hundred soldiers will trigger protests, discontent, maybe even the collapse of a government. A war in which it loses a couple of million dollars' worth of equipment is something else entirely.

It's the network, stupid

But the real revolution is the use of networks of interconnected systems that work together, exchange information, actively investigate situations and make decisions.

The satellite automatically detected an enemy ship. However, because it was unable to accurately identify the threat and pinpoint its location, it passed the information to a drone, which collected additional data. The drone then sent the data to the command center, which contained a database of various information. The command center chose the destroyer that was best positioned to attack.

The most important element of this exercise was that the first human in the chain of command was the commander of the destroyer. Every other step, including how the information was routed and what actions were needed, was handled by the machines themselves – and almost instantly. That's not theory – those are capabilities we already have.

Gen. David Goldfein, Chief of Staff, U.S. Air Force

Right now, as Robert Czulda points out, the "man in the loop" rule is critical. As in the example given above, there must be a human somewhere in the process to make the decision. And this is where the moral dilemmas begin.
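To make the idea concrete, here's a minimal, hypothetical sketch (in Python) of such a chain: the sensors and the network hand data to each other automatically, but the decision to engage is gated behind explicit human approval. All names, numbers, and functions here are illustrative assumptions – this is not any real military system's API.

```python
# A minimal, hypothetical sketch of the "man in the loop" rule: machines pass
# and refine information automatically, but only a human can authorize force.

from dataclasses import dataclass


@dataclass
class Contact:
    position: tuple        # estimated coordinates of the detected object
    confidence: float      # how sure the automated classifiers are (0.0-1.0)
    classification: str    # e.g. "unknown", "enemy ship"


def satellite_detect() -> Contact:
    # Initial, low-confidence detection, passed down the chain automatically.
    return Contact(position=(54.3, 18.6), confidence=0.4, classification="unknown")


def drone_refine(contact: Contact) -> Contact:
    # The drone collects additional data and sharpens the picture.
    contact.confidence = 0.93
    contact.classification = "enemy ship"
    return contact


def human_approves(contact: Contact) -> bool:
    # The one step a machine never takes: a commander reviews and decides.
    answer = input(f"Engage {contact.classification} at {contact.position} "
                   f"(confidence {contact.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"


def engagement_chain():
    contact = drone_refine(satellite_detect())   # machine-to-machine, near-instant
    if contact.confidence > 0.9 and human_approves(contact):
        print("Weapon released by order of the commander.")
    else:
        print("No engagement: human authorization withheld.")


if __name__ == "__main__":
    engagement_chain()
```

Everything up to the `human_approves` call happens without anyone touching a keyboard; remove that single check and the same chain becomes fully autonomous.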

Can a machine kill a man?

We could already have machines decide whether or not to pull the trigger. A sentry turret that opens fire at any detected movement or heat source is something a team of students could build, and they wouldn't even have to be particularly talented. But that's nowhere near sophisticated enough – you can imagine how such a simple system would behave. That's why the military wants more.
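For illustration, here is a deliberately naive sketch of that kind of "student project" logic – made-up thresholds, no identification of any kind, just "fire at anything warm or moving":

```python
# A crude, hypothetical sentry-turret rule: the thresholds and readings are
# invented, and that's the point – this is how unsophisticated such a system is.

MOTION_THRESHOLD = 0.2     # fraction of pixels that changed between camera frames
HEAT_THRESHOLD_C = 30.0    # surface temperature in degrees Celsius


def should_fire(motion_level: float, surface_temp_c: float) -> bool:
    # No friend-foe identification, no notion of surrender, no civilians:
    # exactly the lack of sophistication the article is pointing at.
    return motion_level > MOTION_THRESHOLD or surface_temp_c > HEAT_THRESHOLD_C


# Example readings: a stray dog trotting past on a warm day would be engaged.
print(should_fire(motion_level=0.35, surface_temp_c=38.0))   # True
```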

The world's leading armies are working on advanced systems for visual friend-foe identification. According to representatives of manufacturers, the Super Aegis II will be built within the next few years. It will increase the autonomy of military robots.

Robert Czulda, PhD, Foreign Policy and Security, University of Lodz

Work on tactical robots continues all over the world.

Most of us don't realize how many problems this entails. A combat robot's algorithm must comply with international conventions. It must be able to judge which objects it may destroy and whom it may kill.

It would also have to be clever enough not to be fooled by an enemy pretending to be something else. It couldn't shoot enemies who surrender. How is the robot supposed to know that? How would it fare in asymmetric conflicts, where the only thing distinguishing enemies from civilians is that they're carrying weapons, possibly concealed ones?

It seems an insurmountable problem for a machine to grasp the complexities of human emotions and behavior. We'd like to think so, but it doesn't have to be true.

Mood is just a label. It's simply a name for a set of processes taking place in the human nervous system that can cause specific reactions: eyelid twitching, muscle tension, changes in temperature or skin resistance. We learn to recognize these reactions, and from them we determine someone's mood. Then we learn to anticipate these reactions, and therefore moods. The next step is analyzing the data and determining which stimuli can affect emotions and moods. From there, it's only one step to finding optimal strategies for influencing individuals or entire communities.

Piotr Biczyk, QED Software

Algorithms can analyze huge amounts of data, and a combat machine could be equipped with sensors that help it determine the attitude of the person in front of it. Would that be enough to distinguish between a civilian who's scared at the sight of a combat robot and a guerrilla who's scared at the sight of a combat robot? It's quite possible, but a lot depends on whether we can provide the algorithms with enough data to learn from. The open question is: how do we get that data?
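As a rough illustration of what "learning from data" means here, the sketch below trains a standard off-the-shelf classifier on made-up physiological features (twitching, muscle tension, skin temperature, skin resistance) with made-up labels. With random stand-in data it learns nothing – which is exactly the point: everything hinges on whether real, labeled recordings of this kind can be obtained at all.

```python
# A hedged sketch of "mood as a label predicted from measurable reactions".
# The data is random noise standing in for the real recordings we would need.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [eyelid_twitch_rate, muscle_tension, skin_temp_delta, skin_resistance]
X = rng.normal(size=(1000, 4))
# Hypothetical labels: 0 = "frightened civilian", 1 = "frightened combatant" –
# the very distinction the article asks whether a machine could ever learn.
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# On noise this hovers around 0.5, i.e. a coin flip; only real, labeled data
# could push it meaningfully higher.
print(f"Accuracy on noise: {model.score(X_test, y_test):.2f}")
```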

Imagine if there was a terrorist in that crowd. Can a machine recognize them?

HOW DOES TESLA DO THAT?

A number of companies are trying to create a car that can drive itself on public roads. That's a very difficult task: anything can happen on the road, and the car has to be able to respond to all of it. That's why Teslas collect as much data as possible. The information recorded over millions of miles is ultimately meant to teach autonomous machines what to do.
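A simplified illustration of that idea (not Tesla's actual pipeline – the format and function names are assumptions): each car logs what its sensors saw and what the driver did about it, and millions of miles of such records become the examples a driving model is later trained on.

```python
# A hypothetical sketch of fleet data collection: one record per moment,
# pairing what the car perceived with what the human driver did about it.

import json
import time


def log_frame(sensors: dict, driver_action: dict, logfile: str = "fleet_log.jsonl"):
    # Append a single (situation -> reaction) example to the local log;
    # in a real fleet these logs would later be uploaded and merged.
    record = {"t": time.time(), "sensors": sensors, "action": driver_action}
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")


# Example: the driver brakes moderately when an obstacle appears 12.5 m ahead.
log_frame({"speed_kmh": 48, "obstacle_ahead_m": 12.5}, {"brake": 0.6, "steer": 0.0})
```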

There's still a debate as to whether machine warfare would be ethical. Opponents say governments will be more willing to start wars if they know that human lives are not at stake. They also point out that a machine has no mercy and simply executes its program, even where humans would hesitate.

On the other hand, supporters point out that soldiers often lose control during conflicts. The sight of wounded and dead comrades evokes a hatred that sometimes ends in the execution of captives, torture, rape, and robbery. An army of machines would never have carried out the Nanking Massacre... unless, of course, it was part of their code.

Ultimately, it's clear that modern armies will be increasingly saturated with AI systems, if only because such solutions are effective. The question is: once networks that command entire fleets and armies are established, will a mutiny be possible?

Martin Strzyzewski


He began at Gamepressure in the Editorials department and later became head of the technology department, which covered news and publications as well as the tvtech channel. He previously worked in several places, including the Onet portal. A Russianist by education, he has been planning to return to diving for years, but for now he's mostly busy with a dog, a rabbit, and a YouTube channel where he talks about the countries of the former USSR.

