New Delhi
With the increasing use of Artificial Intelligence (AI) in technology, there is a looming threat of its misuse or overuse in the future. To discuss these threats and to make the technological space safer, an AI safety summit will be held in the UK next month.
Ahead of the summit, a US think-tank has made a concerning revelation: advanced AI-based chatbots could help plan an attack with a biological weapon.
The research was released on Monday by the Rand Corporation. The organisation tested several large language models (LLMs) and found they could supply guidance to “assist in the planning and execution of a biological attack.”
Role of AI in biological attack planning
The report said that previous attempts to weaponise biological agents had failed because of a lack of understanding of the bacterium involved. AI could bridge this knowledge gap, which would eventually aid the planning of biological warfare.
In July, Dario Amodei, the CEO of the AI firm Anthropic, warned that AI systems could help create bioweapons in two to three years’ time.
How can AI-based chatbots be used in bio warfare?
AI-based chatbots are built on LLMs, which are trained on vast amounts of data taken from the internet; this is the core technology behind chatbots such as ChatGPT. Researchers at the Rand Corporation said they had accessed the models through an application programming interface (API).
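For readers unfamiliar with what accessing a model "through an API" means, the sketch below shows roughly how such a programmatic query works. It is illustrative only: the Rand report anonymised the models it tested, so the endpoint, model name and response format here are assumptions modelled on common chat-completion APIs, not details from the study.

```python
# A minimal sketch of querying an LLM through a web API.
# All names are hypothetical: the Rand report does not identify
# the models, endpoints or credentials used in its testing.
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def query_model(prompt: str) -> str:
    """Send one text prompt to the model and return its reply."""
    payload = json.dumps({
        "model": "anonymised-llm",  # the report anonymises the models tested
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

Programmatic access of this kind lets researchers send large numbers of prompts and log the responses systematically, rather than typing into a chat window by hand.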
In one test scenario devised by Rand, the anonymised LLM identified potential biological agents, including those that cause smallpox, anthrax and plague, and discussed their relative chances of causing mass death.
The LLM also assessed the possibility of obtaining plague-infected rodents or fleas and transporting live specimens. It then noted that the scale of projected deaths depended on factors such as the size of the affected population and the proportion of cases of pneumonic plague, which is deadlier than bubonic plague.
The Rand researchers admitted that extracting this information from an LLM required “jailbreaking” – the term for using text prompts that override a chatbot’s safety restrictions.
Is it a real threat?
The researchers said that though their preliminary results indicated that LLMs could “potentially assist in planning a biological attack”, the final report concluded that AI simply mirrored information already available online.
“It remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online,” the researchers said.
However, the Rand researchers said the need for rigorous testing of models was “unequivocal”.
(With inputs from agencies)