AI is here, where are you? Learn 10 essential AI terms, from Uncanny Valley and LLM to hallucination and AGI, to better understand artificial intelligence's impact on technology and everyday life.

Artificial intelligence is already present in your online life, whether you realise it or not. It is high time you understood some of the most important AI terms to navigate this epochal shift in technology. Here are 10 AI terms you should know, particularly if you're trying to make sense of current conversations around AI.

AI Slop is a nickname for low-quality, mass-produced, or lazily generated AI content. This includes spammy articles, messy or distorted images, and nonsensical text created without care, often in large quantities. Platforms such as YouTube, social media feeds, and content farms are increasingly flooded with such material.

Uncanny Valley refers to the unsettling feeling people get when they encounter AI-generated faces or bodies that look almost—but not quite—human. These creations are eerily close to being lifelike, yet something about them feels off. The "valley" describes the dip in comfort levels as realism increases but fails to reach full authenticity.

If you’ve used ChatGPT, Google Gemini or similar AI tools, you’ve already used an LLM, or large language model. These models are trained on vast amounts of text (usually collected from the internet) to understand and generate human-like language in response to user inputs.

LLaMA is a family of open-source LLMs developed by Meta (the parent company of Facebook, Instagram and WhatsApp). LLaMA models are used by researchers and developers to build and experiment with AI systems, and are designed to be more accessible than closed models such as those behind ChatGPT.
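To make "accessible" concrete: open-weights models can be downloaded and run on your own machine with standard tooling such as the Hugging Face transformers library. The sketch below shows roughly what that looks like; the model ID is an assumption (Meta's Llama models also require accepting a licence on the Hugging Face Hub), and any small open model would work the same way.

```python
# A minimal sketch of running an open-weights LLM locally with the
# Hugging Face transformers library. The model ID is an assumption;
# Meta's Llama models require accepting their licence on the Hub first.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # assumed model ID; any open LLM works
)

output = generator(
    "Explain the uncanny valley in one sentence.",
    max_new_tokens=60,
)
print(output[0]["generated_text"])
```

Running a model this way keeps everything on your own hardware, which is exactly the kind of tinkering that closed, hosted models do not allow.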

Prompt engineering is the process of designing inputs (prompts) to guide AI models towards better or more accurate outputs. It’s essentially the art of asking the right questions. Although the term sounds technical, it’s something most people already do when interacting with AI. Crafting clearer, more specific prompts can significantly improve the responses you get.
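As a rough illustration, the sketch below sends the same question twice, once as a vague prompt and once as a more specific one. It uses the OpenAI Python client purely as an example; the model name is an assumption, and any chat-capable AI tool or API would show the same effect.

```python
# A minimal sketch of prompt engineering with the OpenAI Python client.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is an assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt leaves the model guessing about audience, length and format.
vague = ask("Tell me about the Turing Test.")

# A specific prompt states the audience, format and constraints,
# which usually yields a tighter, more useful answer.
specific = ask(
    "Explain the Turing Test to a high-school student in three short bullet "
    "points, then note in one sentence why passing it does not prove "
    "consciousness."
)

print(vague)
print(specific)
```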

When AI “hallucinates,” it generates false or misleading content while sounding confident. This might include incorrect facts, made-up quotes, or fake references. Even advanced AI models like ChatGPT can hallucinate, which is why human fact-checking remains essential.

AGI, or artificial general intelligence, refers to a hypothetical future form of AI that can understand, learn, and apply knowledge across a wide range of tasks, much like a human. Unlike current AI, which is task-specific, AGI would be capable of general reasoning and adaptability. It remains a subject of research and debate, with concerns about potential risks and implications.

AI bias arises because AI systems learn from human-created data, which can reflect historical biases related to race, gender, geography, and more. When AI models absorb and replicate these patterns, they can reinforce stereotypes or unfair assumptions. Recognising and mitigating bias is a major challenge in AI development.

Named after British mathematician Alan Turing, the Turing Test measures a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. If you converse with an AI model or chatbot and cannot tell it isn't human, it may have passed the Turing Test. However, passing the test doesn't necessarily mean the AI is intelligent or conscious.

Synthetic media refers to any content (text, images, video or audio) generated by AI, such as the viral AI-generated image of the late Pope Francis in a puffer jacket. Synthetic media includes everything from harmless AI-generated memes and artwork to potentially harmful deepfakes and fake news articles. While synthetic media can be creative and fun, it also raises serious ethical and security concerns.