
AI warning from Anthropic CEO: "Destruction, slavery, and drone armies"

In a new essay, Dario Amodei, CEO and co-founder of the artificial intelligence company Anthropic, calls for urgent measures against the risks that artificial intelligence may pose.

In a new 38-page essay published on his personal website, Dario Amodei, CEO and co-founder of the artificial intelligence company Anthropic, called for urgent measures against the risks that "superintelligent" artificial intelligence may pose.

In the text titled "The Adolescence of Technology," Amodei suggested that self-improving artificial intelligence systems could emerge within one to two years, warning that this could lead to consequences such as the enslavement of humanity and "mass destruction."

In his article, Amodei addresses both the known and unforeseen dimensions of AI-driven risks. Scenarios such as AI-powered bioterrorism, drone armies controlled by malicious AI, and AI rendering human labor dysfunctional across society are discussed at length.

A wide range of solutions, from industry self-regulation to amendments to the US Constitution, are proposed to counter these threats.

SHOULD AI BE TREATED LIKE A HUMAN?

However, the essay also falls into a trap often criticized in AI discussions: treating AI like a human. Amodei describes AI as a conscious and "psychologically complex" entity, claiming that current language models develop a kind of self-identity around being "a good human." According to critics, this approach is the very trap the author himself warned against.

According to Mashable, criticism also centers on Amodei's predictions that superintelligence is "always at the doorstep." The CEO had previously argued, in another article written in 2024, that superintelligence could emerge within one to two years. Repeating a similar prediction years later has prompted comments that "doomsday scenarios" in the field of artificial intelligence create the perception of a near future that is constantly postponed but never arrives.

According to experts, while large language models (LLMs) are extremely powerful tools, they do not possess consciousness, emotion, or true empathy. A study by Apple argued that these models offer only an "illusion of thought."

Some commentators argue that instead of Skynet-like doomsday scenarios, focus should be placed on the concrete problems that AI is causing today.