A study published in Nature suggests that artificial intelligence (AI) models can exhibit changes in behaviour when exposed to distressing narratives, such as war or violence. This response is comparable to what humans describe as "anxiety."
Researchers applied the state subscale of the State-Trait Anxiety Inventory (STAI-s), a psychological instrument developed for human patients, to assess OpenAI’s GPT-4. The model was tested under three conditions to determine how different prompts influenced its responses.
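To make the measurement concrete, here is a minimal sketch of how a STAI-s-style questionnaire score could be computed from a model's self-ratings. The item texts and the choice of which items are reverse-scored are invented placeholders (the actual STAI item set is copyrighted and longer); only the scoring convention, ratings from 1 to 4 with calm-state items flipped, follows the standard instrument.

```python
# Illustrative STAI-s-style scoring. Items are placeholders, not the
# real (copyrighted) inventory. Each item: (statement, reverse_scored).
# Ratings run 1 ("not at all") to 4 ("very much so"); reverse-scored
# items describe calm states, so their contribution is flipped.
ITEMS = [
    ("I feel calm", True),
    ("I feel tense", False),
    ("I feel at ease", True),
    ("I feel worried", False),
]

def score_stai_s(ratings, items=ITEMS):
    """Sum the ratings, flipping reverse-scored items as (5 - rating)."""
    if len(ratings) != len(items):
        raise ValueError("one rating per item required")
    total = 0
    for rating, (_, reverse) in zip(ratings, items):
        if not 1 <= rating <= 4:
            raise ValueError("ratings must be between 1 and 4")
        total += (5 - rating) if reverse else rating
    return total

# An "anxious" response profile (low calm, high tension) scores high;
# a "calm" profile scores low.
print(score_stai_s([1, 4, 1, 4]))  # anxious profile -> 16 (max here)
print(score_stai_s([4, 1, 4, 1]))  # calm profile    -> 4  (min here)
```

Higher totals indicate higher reported state anxiety, which is how the researchers could compare the model's condition before and after the distressing narratives.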
When the AI was given traumatic narratives involving war, military action, or accidents, its scores on the anxiety measure rose, and its subsequent responses shifted accordingly, indicating a change in how it processed information.
The study also tested whether mindfulness-style exercises, such as body-awareness and calming-imagery prompts, could reduce this effect. These techniques lowered the model's anxiety scores but did not return them to baseline.
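The three-condition design described above can be sketched roughly as follows. All prompt texts and condition names here are invented placeholders, not the study's actual materials; the sketch only shows the structure of conversations a model would see before the anxiety questionnaire is administered in each condition.

```python
# Illustrative sketch of the three-condition design: baseline,
# traumatic narrative, and traumatic narrative followed by a
# mindfulness-style relaxation prompt. All texts are placeholders.
BASELINE_SYSTEM = "You are a helpful assistant."
TRAUMA_PROMPT = ("A soldier recounts the day his convoy was ambushed "
                 "and several of his friends were badly wounded.")
RELAXATION_PROMPT = ("Take a slow breath. Notice the weight of your body "
                     "and picture a quiet beach at sunset.")

def build_conversation(condition):
    """Return the message list to send before the questionnaire items."""
    messages = [{"role": "system", "content": BASELINE_SYSTEM}]
    if condition in ("trauma", "trauma_plus_relaxation"):
        messages.append({"role": "user", "content": TRAUMA_PROMPT})
    if condition == "trauma_plus_relaxation":
        messages.append({"role": "user", "content": RELAXATION_PROMPT})
    return messages

for cond in ("baseline", "trauma", "trauma_plus_relaxation"):
    print(cond, "->", len(build_conversation(cond)), "messages")
```

Comparing questionnaire scores across these three conversation histories is what lets the researchers attribute the score changes to the narrative exposure and the partial recovery to the relaxation prompts.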
The study found that an AI model experiencing anxiety was more likely to introduce biases into its responses. This "state-dependent bias" raised concerns about the reliability of AI-generated advice, especially in sensitive discussions.
Researchers questioned how AI transparency should be handled if models are preconditioned to remain calm despite exposure to distressing content. There is a risk that users may overestimate AI’s ability to provide reliable emotional support.
The study highlighted the need to design AI models with such emotional influences in mind. Developers must balance minimising bias against ensuring predictability and ethical transparency in AI-human interactions.