
Even as Artificial Intelligence has become an integral part of our lives, the question of how to communicate effectively with AI chatbots continues to puzzle and frustrate people.
However, a recent study aimed at refining prompts given to a chatbot model revealed a surprising finding: Star Trek may hold the key.
Rick Battle and Teja Gollapudi of California software firm VMware, the study's authors, found that asking an AI chatbot to speak as if it were in Star Trek significantly improved its ability to solve basic maths problems.
The study, first reported by New Scientist and published on arXiv, examined the impact of "positive thinking" prompts on AI chatbot performance.
The authors, both machine-learning researchers, observed that the quality of a chatbot's responses is heavily influenced by the nature of the prompts given to it, a phenomenon that is not entirely understood.
"It's both surprising and irritating that trivial modifications to the prompt can exhibit such dramatic swings in performance," said the study's authors, as reported by Business Insider.
The study involved feeding three Large Language Models (LLMs) 60 human-written prompts encouraging positive thinking. These prompts were then used to guide the AI in solving grade-school-level maths problems, with the quality of the output determining the success of the prompts.
The results showed that automatic optimisation consistently outperformed handwritten prompts, indicating that machine-learning models are more adept at crafting prompts for themselves than humans. However, positive statements in the prompts yielded unexpected results, such as one AI's improved performance when prompted with a Star Trek-themed statement.
The study's authors say they did not set out to reveal that the AI model was a Trekkie, or a fan of Star Trek. However, one of the best-performing prompts was: "System Message: Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation."
The researchers highlighted that while the model's proficiency in mathematical reasoning seemed to be enhanced by expressing an affinity for Star Trek, this doesn't mean the AI should be asked to speak like a Starfleet commander.
Instead, it underscores the complex interplay of factors that influence AI performance.
Catherine Flick from Staffordshire University emphasised that AI models are essentially black boxes, making it challenging to understand their decision-making process.
"One thing is for sure: the model is not a Trekkie," said Flick.
"It doesn't 'understand' anything better or worse when preloaded with the prompt, it just accesses a different set of weights and probabilities for acceptability of the outputs than it does with the other prompts," she added.
So, how does one write the perfect prompt? Study author Rick Battle suggests letting the model generate its own prompts rather than hand-writing them.
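In spirit, Battle's advice amounts to a search loop: have the model propose candidate prompts, score each one on a set of test problems, and keep the best. The sketch below illustrates that loop in miniature. It is a hypothetical illustration, not the study's actual code: the model call is stubbed out with a simple arithmetic evaluator, and the candidate-prompt generator just picks from a few themed templates, where a real system would query an LLM for both steps.

```python
import random

def score_prompt(prompt: str, problems: list) -> float:
    """Fraction of maths problems answered correctly when the model is
    given `prompt`. The model is stubbed with eval() on the arithmetic
    expression, so the prompt has no effect here -- the point is the
    shape of the optimisation loop, not the model itself."""
    correct = 0
    for question, answer in problems:
        model_answer = eval(question)  # stand-in for a real LLM call
        correct += (model_answer == answer)
    return correct / len(problems)

def generate_candidate_prompts(n: int) -> list:
    """Stand-in for asking the model to write its own system prompts."""
    themes = ["a patient maths tutor",
              "a Starfleet science officer",
              "a meticulous accountant"]
    return [f"You are {random.choice(themes)}. Solve step by step."
            for _ in range(n)]

# A tiny test set of (question, correct answer) pairs.
problems = [("2 + 3", 5), ("7 * 6", 42)]

# Generate candidates, score each, and keep the highest-scoring prompt.
candidates = generate_candidate_prompts(4)
best = max(candidates, key=lambda p: score_prompt(p, problems))
print(best)
```

With a real model behind `score_prompt`, different candidate prompts would score differently, and the loop would surface oddities like the Star Trek prompt automatically, without a human ever writing them.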
(With inputs from agencies)