On 6 March 2025, a Chinese startup named Monica introduced Manus AI, an artificial intelligence agent designed to make decisions and execute tasks with minimal human oversight. Unlike conventional chatbots, which respond turn by turn to user prompts, Manus can carry out multi-step tasks autonomously after receiving a single instruction, raising concerns over the erosion of human control in AI operations.
Manus AI operates as a multi-agent system, breaking complex tasks into subtasks and completing them without step-by-step supervision. It can browse the internet, execute financial transactions, and interact with a range of digital platforms. Left unchecked, such capabilities could be exploited for automated cyber operations, misinformation campaigns, or large-scale surveillance.
Manus reportedly outperformed OpenAI’s Deep Research on the GAIA benchmark, which measures an AI assistant’s ability to reason, use tools, and complete real-world tasks. Its capacity to execute tasks with speed and precision raises questions about whether systems like Manus could be deployed in economic warfare, state surveillance, or cyberattacks.
Currently, Manus AI is available by invitation only, with no open-source access and no regulatory oversight. Its developers have declined to disclose full details of how it works, prompting speculation that it may be backed by Chinese government interests. If Manus is state-controlled, its applications could extend beyond commercial use to intelligence gathering or autonomous decision-making in critical infrastructure.
With its ability to operate online without human monitoring, Manus AI could potentially access, analyse, and manipulate sensitive data. Cybersecurity experts warn that AI systems with this degree of autonomy could circumvent digital security measures, creating risks for financial systems, government institutions, and global corporations.
China has long prioritised AI supremacy as part of its national strategy. Manus AI’s emergence suggests a shift from research-driven innovation to real-world deployment of autonomous systems. In a worst-case scenario, such AI models could be used to outmanoeuvre global rivals in economic strategy, political influence, and cybersecurity.
AI systems like Manus challenge existing regulatory frameworks and ethical guidelines. If an autonomous AI gains widespread adoption without safeguards, there is a risk of it being used beyond human control. Without transparency in its design and purpose, the question remains: Who—or what—is truly in charge?