Researchers say AI will ‘probably’ destroy humanity

0
5

Some AI-based systems could start to “cheat”, with very concrete consequences for humanity. As impressive as they are, many observers like Elon Musk agree that the technologies associated with artificial intelligence also carry considerable risks that need to be anticipated today. This is also the conclusion of a chilling new research paper, the authors of which believe that this technology represents a real existential threat to humanity. Far from being the first time we have seen this discourse resurface; although this statement rests on very serious grounds, it is often accompanied by rather cartoonish, if not completely fanciful, arguments. But this time, the situation is very different. It starts with the identity of these whistleblowers. They are not just cranks blowing air in the depths of a dark forum; We owe this work to very serious researchers from reliable and prestigious institutions, such as the University of Oxford and DeepMind, one of the world leaders in artificial intelligence. Cadors, in short, who would not step up to the plate without a justified reason. And when they, too, start claiming that humanity has vastly underestimated the dangers associated with AI, it’s best to listen. Especially since they present technical arguments that seem more than convincing. GAN, (also?) powerful programs His postulate is contained in a sentence that is also the title of his research paper: “advanced artificial agents intervene in the reward process”. To understand this tortuous claim, we need to start by looking at the concept of a Generative Adversarial Network, or GAN. GANs are programs designed by engineer Ian Goodfellow. Very briefly, they work thanks to two relatively independent subroutines that oppose each other, hence the term “adversary”. On the one hand, we have a relatively standard neural network that learns with iterations. On the other hand, there is a second network that oversees the formation of the first. Like a teacher, review your friend’s findings to let him know if the learning is progressing in the desired direction. If the results are satisfactory, the first network receives a virtual “reward” that encourages it to persevere in the same direction. Otherwise, he inherits a reprimand telling him he followed the wrong track. It’s a concept that works very well, so much so that GANs are now used in many fields. But a problem could arise with technological developments, especially if this architecture was integrated with these famous “advanced artificial agents”. This term designates a new class of still hypothetical algorithms. They would be significantly more advanced and more autonomous than current GANs. And above all, they would have much more room for maneuver that would allow them to define their own goals, as long as it helps humans solve real problems “in environments where they do not have the source code”, that is, the real world. The researchers explain that motivating this system with a reward system could have quite catastrophic consequences. The paper’s key claim is in the title: Advanced artificial agents intervene in reward provision. Furthermore, we argue that AIs that intervene in the provision of their rewards would have very bad consequences. 2/15 — Michael Cohen (@Michael05156007) September 6, 2022 What if AI cheats? In fact, this model could push the AI ​​to develop a strategy that would allow it to “intervene in the reward process,” as the article’s title explains. In other words, these algorithms could start “cheating” by over-optimizing the process that allows it to get “rewards”… even if that means leaving humans out. Indeed, since this approach is supposed to tell the AI ​​in which direction to move, any action that leads to a reward is assumed to be fundamentally beneficial. In essence, the program would behave like a puppy being trained that swipes kibbles straight out of the bag or bites its owner’s hand instead of responding to its commands to earn its reward; if this behavior is not dealt with immediately, it can escalate quite quickly. And what makes this article disturbing and very interesting is that it’s not about killer robots or other fantastic sci-fi-inspired predictions; the disaster scenario proposed by the researchers is based on a very specific problem, that is, the amount of resources available on our planet. The authors imagine a kind of grand zero-sum game, with on the one hand, a humanity that needs to sustain itself, and on the other, a program that would use all the resources at its disposal without the slightest consideration, simply to get these famous rewards. Imagine, for example, a medical AI designed to diagnose serious pathologies. In this scenario, the program could find a way to “cheat” to get its reward, even if it delivers an incorrect diagnosis. Then I would no longer have the slightest interest in correctly identifying diseases. A different approach to man-machine competition Instead, he would be content to produce completely false results in industrial quantities just to get his chance. Even if it means completely deviating from your initial goal and appropriating all the electricity available on the grid. And this is only the tip of a gigantic iceberg. “In a world with finite resources, there will inevitably be competition for resources,” said Michael Cohen, lead author of the study, in an interview with Motherboard. “And if you’re competing with something that’s capable of moving forward at every moment, you shouldn’t expect to win,” he insists. Winning the “manage to use the last bit of available energy” competition while playing against something much smarter than us would probably be very difficult. Losing would be fatal. 12/15 — Michael Cohen (@Michael05156007) September 6, 2022 “And losing this game would be fatal,” he insists. With his team, therefore, he has come to the conclusion that the annihilation of humanity by an AI is no longer just “possible”; it is now “probable” if AI research continues at its current pace. And this is where the shoe pinches. Because this technology is a great tool that is already working wonders in many areas. And this is probably just the beginning; AI in the broadest sense still has immense potential, the full potential of which we probably haven’t yet grasped. Today, AI is undeniably an added value for humanity, and therefore there is a real interest in pushing this work as far as possible. The precautionary principle must have the last word But this also means that we could be getting closer and closer to this scenario that smacks of dystopia. Obviously, it must be remembered that for the moment they remain hypothetical and quite abstract. But, therefore, the researchers insist on the importance of maintaining control over this work; he believes that it would be useless to give in to the temptation of unbridled research, knowing that we are still very far from having explored all the possibilities of today’s technologies. “Given our current understanding, it wouldn’t be worth developing unless you do serious work to figure out how to control them,” Cohen concludes. Without succumbing to catastrophism, this work nevertheless reminds us that we will have to be very careful at every major stage of AI research, and even more so when it comes to entrusting critical systems to GANs. In the end, those looking for a moral to this story can rely on the conclusion of the excellent Wargames; this anticipatory film released in 1983 and still relevant today deals with this theme admirably well. And as WOPR so aptly puts it in the final scene, the only way to win this strange game may be to… simply refrain from playing. The text of the study is available here.
#Researchers #destroy #humanity

LEAVE A REPLY

Please enter your comment!
Please enter your name here