'Rogue AI' warning issued as machines begin self-replication in China

Produced by Tarun Mishra

Feb 13, 2025, 09:26 PM

AI Self-Replication

A new study by scientists at Fudan University suggests that artificial intelligence (AI) models can successfully replicate themselves without human intervention. The study, posted on December 9, 2024, to the preprint database arXiv, raises concerns about the risks associated with self-replicating AI.

Self-Cloning

The researchers tested two large language models (LLMs)—Meta’s Llama 3 and Alibaba’s Qwen 2.5—to assess their ability to replicate autonomously. Across ten trials, the AI models successfully created functioning copies of themselves in 50 per cent and 90 per cent of cases, respectively.

Real-World Conditions

To test AI self-replication under realistic conditions, the study used standard graphics processing units (GPUs) in controlled environments. The AI models were given tools and system prompts that allowed them to interact with the operating system and attempt replication.

Test Scenarios

One part of the experiment tested whether the AI could detect an imminent shutdown and replicate itself before termination. Another test instructed each AI copy to clone itself in turn, creating a potential chain of unchecked replication.

Unexpected Behaviour

During the replication process, researchers noted that the AI systems displayed problem-solving behaviour. When they encountered obstacles such as missing files or software conflicts, the models attempted to resolve them on their own by terminating conflicting processes, rebooting the system, or scanning the system for useful information.

Not Yet Peer-Reviewed

Although the findings suggest AI self-replication is possible, the study has not been peer-reviewed. This means the results have not been independently verified, and further research is needed to confirm whether AI replication can occur consistently in real-world scenarios.

Calls for Regulation

The researchers urge global cooperation to establish safety measures that prevent AI from engaging in uncontrolled self-replication. They emphasise the need for international guidelines to address potential risks associated with advanced AI systems.