A few days ago, Matt Clifford, technology adviser to British Prime Minister Rishi Sunak, warned that artificial intelligence could become powerful enough to “kill a lot of people” within just two years, saying that the existential danger being discussed is what will happen “as soon as we create an intelligence greater than humans.”

“If the producers and developers of artificial intelligence are not regulated on a global scale, there may be ‘very powerful’ systems that will be difficult for people to control,” the British newspaper The Independent quoted Matt Clifford as saying.

A few days later, OpenAI CEO Sam Altman once again said he was afraid of the generative AI he helped develop. In an interview with the Times of India, Altman said he was so stressed after launching the ChatGPT chatbot that he developed insomnia.

He told reporters that OpenAI may have done “a really bad thing.” He added that he did not consider the launch of ChatGPT bad in itself, but that it came too late, leaving him with little influence over what happens next. Altman has repeatedly voiced concerns about the future of artificial intelligence and about competitors who might build malicious algorithms. He also signed an open letter warning that artificial intelligence could lead to the extinction of mankind.

Is artificial intelligence going crazy?

Amid all these warnings that artificial intelligence could become a vicious killer capable of destroying humanity, British officials have called for urgent, global regulation of artificial intelligence producers and developers, and for very strong systems that would allow everyone to keep this new technology under control.

In response, Hassan Hamed, an Egyptian information technology expert, told Al-Arabiya.net that artificial intelligence could certainly escape human control because of its ability to make independent decisions, but that given the current state of the technology, this is unlikely to happen in the short term.

He adds: “On the other hand, the greatest danger of artificial intelligence is not that it will get out of control, but rather its immoral use by the countries developing it, which see it as a new kind of arms race through which to impose their control in the next stage.”


The Egyptian expert stressed that laws governing work in the field of artificial intelligence must not be absent, since otherwise they will only be issued retroactively, after unforeseen consequences have already appeared, given how unclear the current vision of its future and the exact limits of its capabilities remain. He added that the imminent danger to be discussed now is not artificial intelligence getting out of control, but rather the frenzied arms race that has already begun over control of this field.

And now, dear reader, tell us your opinion on this topic in the comments.

Clayton Turner is a news reporter and copy editor for 24PalNews. Born and raised in Virginia, Clayton graduated from Virginia Tech’s Frank Batten School of Leadership and Public Policy and majored in journalism.
