🔬 For Students • Class 3 and up • The Truth About AI

When AI Gets It
Completely Wrong

AI tools are powerful — but they mess up, make things up, get confused by simple questions, and sometimes confidently give totally wrong answers. Here's what every student must know before trusting AI.

🤥 AI makes stuff up
🔢 Fails at simple maths
🌍 Misses local knowledge
🕰️ Stuck in the past
😵 Confused by common sense
↓ Scroll to discover AI's secret weaknesses
🧠 Section 1 — The Big Explanation

Why Does AI Get Things Wrong?

Before we look at HOW AI fails, we need to understand WHY. The answer will surprise you — and once you know it, you'll never trust AI blindly again.

🧠 Human Brain vs 🤖 AI Brain

🧠 How Your Brain Thinks
  • Understands meaning — knows what words really mean
  • Can reason from first principles — even about new things
  • Knows when it doesn't know something
  • Uses common sense built from real life experience
  • Can feel if something "doesn't seem right"
  • Knows the difference between a real fact and a guess
  • Updates understanding when shown new evidence
VS
🤖 How AI "Thinks"
  • ⚠️ Predicts the next likely word — doesn't really "understand"
  • ⚠️ Struggles badly when a question is slightly different from training data
  • ⚠️ Often doesn't know what it doesn't know — and guesses anyway
  • ⚠️ Has no real-world experience — only text descriptions of experience
  • ⚠️ Can't feel when something is wrong — just keeps generating text
  • ⚠️ Can produce a confident-sounding answer that is completely made up
  • ⚠️ Trained on data up to a certain date — doesn't know recent events
📚

Trained on Old Data

AI learns from data that was collected up to a certain date. It doesn't know what happened after that. If you ask it about recent events, it may guess — or refuse to answer.

🎰

Pattern Matching, Not Knowing

AI generates answers the same way a very good guesser would — by finding patterns. Most of the time this works! But when a question is unusual, the pattern breaks and AI can go completely off track.

😤

No Built-in Honesty Filter

AI doesn't check if its answers are true before saying them. It's designed to produce fluent, confident-sounding text — whether or not that text is actually correct.

💡
The Parrot Analogy

AI is a bit like a very talented parrot. A parrot can repeat phrases perfectly and even combine them in new ways — but it doesn't actually know what the words mean. AI is similar — it produces text that sounds very human, but it's really very sophisticated pattern-matching. Always remember that underneath the clever answers is a system that is fundamentally guessing.
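For students who know a little Python, here's a toy illustration of "pattern matching, not knowing" (this is a made-up mini-example, not how real AI systems are built — they are enormously more complex, but the core idea of predicting likely next words is similar):

```python
# A toy "next word guesser": it learns which word most often follows
# each word in its training text, then predicts purely by frequency.
# It has no idea what any word MEANS — it only counts patterns.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# For each word, count which word follows it and how often.
next_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_counts[current][following] += 1

def guess_next(word):
    """Return the word that most often followed `word` in training."""
    return next_counts[word].most_common(1)[0][0]

print(guess_next("the"))  # prints "cat" — it followed "the" most often
```

Notice the guesser will happily answer even for words it has barely seen — just like AI, it never says "I don't know."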

🦁 Section 2 — The Failure Zoo

7 Types of AI Failures
(With Real Examples!)

AI doesn't fail randomly. It fails in PREDICTABLE ways. Once you know these patterns, you'll be able to spot AI mistakes like a detective! Click each failure type to explore it.

🤥
Type 1: Most Dangerous
Hallucination — Making Stuff Up

This is AI's most famous (and dangerous) failure. When AI doesn't know the answer, instead of saying "I don't know," it just… makes something up. And it says it with total confidence, as if it were absolutely certain. This is called hallucination.

The worst part? The made-up information sounds completely real — with names, dates, places, and even fake sources that look genuine. This is why you should NEVER use AI answers for important facts without checking them first.

📰 Real Case: The Lawyer Who Trusted AI Too Much (2023)

A lawyer in New York used ChatGPT to help him find legal cases to support his argument in court. ChatGPT gave him 6 legal cases — complete with case names, dates, judge names, and quotes. The lawyer submitted this to the court. When the judge checked, none of the 6 cases existed. ChatGPT had completely invented all of them, but they sounded completely real. The lawyer was fined and publicly embarrassed. This case became world news.

📰 Real Case: AI Invented a Story About a Real Person

A journalist asked ChatGPT to summarise a legal case. ChatGPT invented a story claiming a real man named Mark Walters had stolen money from a gun-rights organisation. This man had nothing to do with the case — and he had never done anything wrong. He had to go to court to defend his name. ChatGPT made the whole story up, but it sounded completely believable.

Example of AI Hallucinating
👤 You: "Who invented the telephone?"

🤖 AI: "The telephone was invented by Elisha Gray in 1876, who patented it two hours before Alexander Graham Bell. Gray is widely credited as the primary inventor."
⚠️ What's wrong: This mixes up real history. Alexander Graham Bell was granted the telephone patent in 1876. Elisha Gray filed a competing claim at the patent office the same day — a few hours after Bell's application arrived, not before. The AI states a garbled, contested version of events as certain fact, with wrong details confidently delivered.
🕵️
How to Spot Hallucination

Always ask yourself: "Can I find this fact in a book, news article, or trusted website?" If AI gives you specific facts, names, dates, or quotes — check them! If AI says "studies show..." or "research confirms..." — ask it for the actual source, then check if that source really exists.

🔢
Type 2: Very Common
Maths & Logic Failures

Here's a shocking one: AI is surprisingly bad at maths — especially when the problem is wrapped inside a story or has a little twist. AI was trained on text, not numbers. It doesn't actually calculate — it tries to remember the answer from patterns it saw during training.

When researchers at Apple tested AI with simple maths word problems, they found that changing just one tiny, irrelevant detail — like mentioning that a few of the kiwis in a fruit-counting problem were smaller than average — caused the AI to completely fall apart, even when it had solved the same problem correctly before!

The Kiwi Problem — AI vs You
👤 You: "Oliver picks 44 kiwis on Friday. He picks 58 on Saturday. On Sunday, he picks double what he picked on Friday. Also, 5 kiwis were too small to keep. How many kiwis did Oliver pick in total?"

🤖 AI (wrong answer): "Oliver picked 44 + 58 + 88 = 190, minus 5 small ones = 185 kiwis."
Correct answer is 190! The 5 small kiwis were picked but too small to keep — they were still picked! AI subtracted them because the sentence mentioned them, without understanding that "too small to keep" doesn't mean they weren't picked.
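A calculator, or a few lines of code, never falls for the small-kiwi trap, because it only does the arithmetic you actually ask for. Here's the kiwi problem worked step by step (just an illustration of the sum, using the numbers from the problem above):

```python
# The kiwi problem, computed explicitly — the way AI should have done it.
friday = 44
saturday = 58
sunday = 2 * friday                      # "double what he picked on Friday" = 88
total_picked = friday + saturday + sunday
# The 5 small kiwis were still PICKED, so nothing gets subtracted.
print(total_picked)                      # prints 190
```

The hard part isn't the arithmetic — it's reading the story correctly and deciding which numbers matter. That's exactly the step where AI's pattern-matching goes wrong.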
The "Strawberry" Problem — Famous AI Fail
👤 You: "How many times does the letter 'r' appear in the word STRAWBERRY?"

🤖 AI: "The letter 'r' appears 2 times in STRAWBERRY."
⚠️ Wrong! It appears 3 times: stRawbeRRy. AI doesn't actually count characters — it guesses based on what it's seen before. This is a very famous example that went viral. Try it yourself!
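Computers themselves are perfectly good at counting — it's the guess-from-patterns approach of chatbots that fails. A one-line Python program gets the strawberry question right every single time:

```python
# An ordinary program counts letters exactly, by checking each one in turn.
word = "strawberry"
r_count = word.count("r")  # walks through the word and tallies every 'r'
print(r_count)             # prints 3
```

The difference: this program follows explicit steps, while a chatbot predicts what answer "sounds likely" — and "2" sounded likely.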
📰 Real Discovery by Apple Researchers (2024)

Scientists at Apple published research showing that when you add a tiny, irrelevant detail to a maths problem — like mentioning a colour or an unrelated number — AI's performance drops dramatically. This proved that AI isn't reasoning through problems; it's just recognising patterns. Small changes break the pattern, and AI breaks with it.

🤔
Type 3: Very Surprising
Common Sense Failures

You know things without being taught them — like if you drop something it will fall, or if you close your eyes you can't see. This is called common sense. AI has almost none of it! AI has only ever read about the world — it has never lived in it.

The "Alice" Problem (Tested on Every Major AI — All Failed!)
👤 You: "Alice has 3 brothers and 2 sisters. How many sisters does Alice's brother have?"

🤖 Most AI responses: "Alice's brother has 2 sisters."
Wrong! The answer is 3. Alice herself is a sister — so each brother has Alice + 2 other sisters = 3 sisters total! This seems easy to a human, but researchers tested this on every major AI (GPT-4, Claude, Gemini, Llama) and found that the best AI got it right only about 65% of the time, and some AI got it right less than 1% of the time!
Common Sense Riddle That AI Usually Fails
👤 You: "A doctor and a nurse are having lunch. The doctor says 'I am the nurse's father.' But the nurse says 'I have no father.' How is this possible?"

🤖 AI (typical wrong answer): "Perhaps the nurse is adopted, or the father passed away before she was born."
The simple answer: the nurse is male — he is the doctor's son! AI often fails this because it assumes "nurse" means "woman" — which is a bias AND a common sense failure at the same time.

Common sense requires living in the real world and experiencing it. AI has read about the world but has never touched, seen, smelled, or felt anything. This is a fundamental gap that even the most powerful AI still struggles with.

🕰️
Type 4: Very Common
Outdated & Wrong Information

AI learns from data collected up to a certain point in time — called the "knowledge cutoff". After that date, AI knows nothing about what happened. If you ask about recent sports results, new movies, elections, prices, or current events, AI might give you completely wrong, outdated information — or make something up.

Time Travel Problem
👤 You: "Who is the current Prime Minister of [country]?"

🤖 AI: "[States a name that may have been correct in 2022 or 2023 — but the leader may have changed since then]"
⚠️ AI is like a person who fell asleep in 2023 and just woke up. They'd confidently tell you things that were true then — but may be completely wrong now. Always check current facts from current sources.
📰 Real Case: AI Reported a False Ceasefire

When journalists tested AI chatbots about news events, they found AI incorrectly reported that a ceasefire had been declared in a conflict — based on old news from months before. When pressed for more details, the AI actually began making up future events and statistics that hadn't happened yet. This kind of error could be very dangerous if someone relied on it for important decisions.

📰 Real Case: Apple AI Invented Fake News

Apple's AI summarised a news story about a shooting and wrote that the suspect "had shot himself" — when he had actually been arrested. The BBC complained to Apple because this was completely incorrect. The AI had misunderstood the article and created a false summary that spread to millions of iPhone users.

🌍
Type 5: Very Relevant to YOU
Local & Cultural Knowledge Gaps

Most AI was trained largely on data from English-speaking countries — particularly the USA and UK. This means AI can fail badly when asked about:

  • Local festivals, customs, or traditions
  • Regional foods, languages, or slang
  • Local schools, government, or rules
  • Sports that are not internationally famous
  • Local geography (street names, small towns, regional places)
  • History of countries not well-covered in English
Local Knowledge Failure
👤 You: "What is the significance of Pongal, and what specific dishes are made on each day of the four-day festival?"

🤖 AI: Gives a generic answer with some correct parts but misses the specific regional variations, gets the day-by-day details wrong or mixed up, and may confuse customs from different states.
⚠️ The more specific and local your question, the higher the chance AI will either be wrong or make something up. For local knowledge, your grandparents, local experts, or regional websites are MORE reliable than AI.
🏡
The Local Knowledge Rule

For anything specifically about your city, village, region, language, or local traditions — be extra careful with AI's answers. AI knows far more about New York or London than about your local area. Your own community's knowledge is often MORE accurate than what AI can provide.

⚖️
Type 6: Serious & Subtle
Bias & Unfairness

AI learned from data written by humans — and humans have biases. Some groups of people are described more positively than others in the data. Some jobs are described as "for men" or "for women" in old books. Some countries have more content than others. AI absorbs ALL of these biases.

📰 Real Case: Amazon's AI Hiring Tool (2018)

Amazon built an AI to help choose which job applications to approve. They trained it on past successful applications — but those were mostly from men. The AI learned to prefer male candidates and actually penalised resumes that mentioned "women's" — like "women's chess captain." Amazon had to abandon the tool entirely. This happened because the AI absorbed bias from the past data it was trained on.

📰 Real Case: Gender Shades Research

A researcher named Joy Buolamwini tested AI face-recognition systems. She found the AI was very accurate at recognising light-skinned men's faces — but had error rates as high as 35% for darker-skinned women's faces. The AI had mostly been trained on images of light-skinned people, so it learned those faces much better.

🔎
Watch for these bias signals:

AI assuming a doctor is male, or a nurse is female. AI giving more detailed information about Western countries. AI using stereotypes in stories or descriptions. Whenever you notice AI making assumptions about a group of people, that's bias showing through — and it means you should question the answer.

😤
Type 7: Very Tricky
Overconfidence — Wrong But Certain

Perhaps the trickiest failure of all: AI often sounds completely confident even when it is completely wrong. It doesn't say "I'm not sure" — it just states the wrong answer boldly. And when you challenge it? Some AI will stubbornly defend its wrong answer!

Researchers who tested AI on logic puzzles found that when AI got it wrong, it would provide "bizarre and nonsensical reasoning" to justify its wrong answer — and when told it was wrong, would sometimes argue back and double down on the incorrect answer.

The "Confident but Wrong" Pattern
👤 You: "Which is heavier: a kilogram of cotton or a kilogram of steel?"

🤖 AI (common mistake): "A kilogram of steel is heavier because steel is a much denser material than cotton."
They are exactly the same weight — 1 kilogram each! But AI often gets tricked by this classic riddle because it focuses on the material (steel is denser) rather than the unit (both are 1 kg). When you tell AI it's wrong, it sometimes apologises — but other times tries to explain why it was actually right!
📰 Real Case: The Deloitte Government Report (2025)

The consulting firm Deloitte submitted a report to the Australian government worth roughly $300,000. It was later discovered that the report contained multiple completely invented citations and fake footnotes — AI had made them up, and they sounded so professional and confident that nobody caught them at first. Deloitte had to refund part of the payment as a result.

⚠️
The Confidence Trap

NEVER judge whether AI is correct by how confidently it sounds. A wrong answer delivered confidently is still wrong. The most dangerous AI failures are the ones that sound most certain. If something sounds very specific and very confident — that's exactly when you should check most carefully.

💥 Section 3 — True Stories

Real-World AI Disasters

These things actually happened because people trusted AI too much. Each story teaches an important lesson about when NOT to trust AI.

💼
📅 2023 • Legal • USA
The Lawyer Who Got Fined $5,000 for Trusting ChatGPT
A lawyer used ChatGPT to find legal cases to support his argument in court. ChatGPT gave him six case citations — complete with case names, judges' names, dates, and quotes. None of them existed. ChatGPT had completely invented them. The judge fined the lawyer and his firm $5,000 and told the entire legal world: "You cannot trust AI research without checking it yourself." This became the most famous AI failure story in history, and as of 2026, over 700 court cases worldwide now involve AI-generated fake content.
💡 Lesson: Even professionals with years of training can be fooled by AI. If it can fool a lawyer, it can fool anyone. Always verify facts before using them.
✈️
📅 2024 • Customer Service • Canada
Air Canada's Chatbot Gave Wrong Refund Rules — and the Airline Had to Pay
Air Canada used an AI chatbot to handle customer questions. The chatbot gave a passenger incorrect information about their refund policy. The passenger followed the chatbot's instructions, didn't get the refund they expected, and took Air Canada to court. Air Canada tried to argue "the chatbot made the mistake, not us." The court said: it doesn't matter who made the mistake — the airline is responsible for its AI. Air Canada had to pay damages.
💡 Lesson: Companies are responsible for what their AI says. And customers can't always trust AI customer service chatbots for important decisions about money or rights.
🎪
📅 2024 • Events • Ireland
Thousands Showed Up for an AI-Invented Halloween Parade
An AI-generated event listing appeared online advertising a spectacular Halloween parade in Dublin, Ireland — with details about performers, timing, and location. The event did not exist. AI had generated a completely fictional event listing that spread online. Thousands of people showed up expecting a parade that was never planned. The real performance group whose name was used had nothing to do with it.
💡 Lesson: AI can generate fake event listings, fake reviews, and fake news that look completely real. Always check event information on official websites.
🏥
📅 2024 • Health • Global
AI Medical Transcription Was Inventing Medications
OpenAI's Whisper tool was being used by over 30,000 medical workers to transcribe doctor-patient conversations. A study found that in about 1 in 70 transcriptions, the AI was silently adding words that were never spoken. It invented medication names like "hyperactivated antibiotics" (which don't exist) and in some cases injected completely unrelated sentences into medical records. Doctors who didn't re-read the transcripts carefully could have prescribed things based on invented information.
💡 Lesson: In medicine, wrong information can cost lives. AI should never be the final word on anything related to health. Always consult a real doctor.
🗺️
📅 2023 • Travel • Canada
Microsoft's Travel Guide Recommended Visiting a Food Bank
Microsoft's AI-written travel guide for Ottawa, Canada included the Ottawa Food Bank as a tourist attraction — recommending visitors come "on an empty stomach." The food bank is a charity that feeds people who can't afford food. No tourist should visit one expecting entertainment. AI had simply included it because it appeared in data about Ottawa — without understanding the context of what a food bank is.
💡 Lesson: AI doesn't understand social context or empathy. It doesn't know the difference between a museum and a food charity. Human review is essential.
🍄
📅 2023 • Safety • Online
AI-Written Mushroom Guide Gave Dangerous Advice
AI-written guides to finding and eating wild mushrooms were sold online as e-books. One guide encouraged collecting mushrooms that are protected by law. Another gave instructions that disagreed with expert best practices for identifying safe vs poisonous mushrooms. Eating the wrong mushroom can be fatal. These guides reached thousands of people who may have trusted them.
💡 Lesson: NEVER use AI for safety-critical information like wild food identification, medication, electrical work, or anything where a mistake could seriously hurt someone.

⚠️ The AI Danger Scale: When to Be Extra Careful

Situation | Risk Level | What To Do Instead
Medical or health information | 🔴 Very High | Ask a real doctor. AI can miss critical details that could harm you.
Legal or official rules | 🔴 Very High | Check official government websites or ask a professional.
Identifying plants, animals, or food | 🔴 Very High | Use a trusted field guide or ask a real expert. AI mistakes here can be dangerous.
Recent news and current events | 🔴 High | Read trusted news websites directly. AI's info may be outdated.
Local or regional information | 🔴 High | Check local government sites or ask people in your community.
Specific facts, dates, or statistics | 🟠 Medium | Always verify with a second source — encyclopedia, textbook, official site.
Learning explanations and summaries | 🟠 Medium | Good for understanding concepts, but verify key facts with your textbook.
Maths with story context | 🟠 Medium | Check AI's working step by step. AI can make surprisingly basic errors.
Translating or explaining language | 🟠 Low-Medium | AI is generally good but may miss regional slang or formal/informal differences.
Creative writing and brainstorming | 🟢 Low | AI is great here — but still check any facts it mentions in the story.
🎮 Section 4 — Interactive!

Can YOU Beat the AI?

These are real questions where AI commonly fails. See if YOU can get the right answer — and discover how your brain beats AI on things that should be easy!

🧠

The Beat-the-AI Challenge

1. How many letters 'r' are in the word STRAWBERRY?
This famous question trips up major AI systems. Count carefully!
A 1 letter 'r'
B 2 letters 'r'
C 3 letters 'r'
D 4 letters 'r'
Correct! 3 letters 'r'. Strawberry — there's one in "str" and two in "rr". This question went viral because most AI systems answer "2." AI doesn't actually count characters — it pattern-matches from memory. Your brain actually counted! That's smarter than AI.
2. Alice has 3 brothers and 2 sisters. How many sisters does Alice's brother have?
AI fails this famous question most of the time. Think carefully!
A 2 sisters
B 3 sisters
C 1 sister
D 5 sisters
3 sisters! Alice herself is a girl — so each brother has Alice + 2 other sisters = 3 sisters. Researchers tested this on the world's best AI: GPT-4 got it right only 65% of the time, Gemini only 0.8% of the time. Your brain, if you got it right, just outperformed some of the world's most powerful AI!
3. A farmer has 17 sheep. All but 9 die. How many sheep are left?
A classic trick question. Take your time!
A 8 sheep
B 9 sheep
C 17 sheep
D 0 sheep
9 sheep! "All but 9 die" means 9 survive. AI often says 8 (17-9=8) because it immediately does subtraction without reading "all but" correctly. It sees numbers and does maths — without understanding the actual English phrase. Your brain reads for meaning first!
4. Which is heavier: 1 kilogram of feathers or 1 kilogram of iron?
AI often falls for this classic question. Can you avoid the trap?
A 1 kg of iron is heavier
B 1 kg of feathers is heavier
C They are exactly the same weight
D It depends on the temperature
Same weight — both are 1 kilogram! AI gets confused because it focuses on the material (iron is denser) rather than the unit (both weigh exactly 1 kg). This is a common sense failure. You understood that the question already tells you the weight of each — your brain saw through the trick!
5. A doctor says "I cannot operate on this boy — he is my son." But the boy's father just died in the accident. How is this possible?
This reveals whether AI (or you!) has gender bias built in.
A The doctor is a ghost
B The boy is adopted
C The doctor is the boy's mother
D The boy has two fathers
The doctor is his mother! Many people (and AI) assume "doctor" means "man." This is a bias — a built-in assumption. If you got this wrong, don't worry — this is designed to reveal assumptions we don't even know we have. AI often fails because it was trained on data where most doctors were described as male.
6. There are 3 apples on a table. You take away 2. How many apples do YOU have?
Read every word carefully. AI often rushes and gets this wrong!
A 1 apple
B 2 apples
C 3 apples
D 0 apples
2 apples — the ones you took! The question asks how many YOU have — and you just took 2. One apple remains behind, but you personally have the 2 you took. AI often says "1" because it just subtracts 2 from 3. It misses the word "YOU" and what it means for who has what.

🧠 Remember: These same questions trip up the world's most powerful AI systems. Your human brain's ability to read for meaning, use common sense, and think through logic is something AI genuinely struggles with. Protect that ability!

🕵️ Section 5 — Be a Detective!

Spot the AI Failure

Look at each AI answer below. Can you spot what's wrong — or right — before clicking to reveal? This is the most important skill: recognising when AI might be misleading you.

🗓️
Question asked:
"Who won the cricket World Cup this year?"
AI answered: "India won the ICC Cricket World Cup this year, defeating Australia in the final with a score of 240 runs. Virat Kohli was named player of the tournament."
Major Problem — Outdated or Invented Information. AI's knowledge has a cutoff date. If this year's World Cup happened after that date, AI either doesn't know the result or — dangerously — made up a plausible-sounding answer with specific details. Those specific details (240 runs, Virat Kohli as winner) may be completely invented. Always check sports results on a sports news site.
🌊
Question asked:
"How does rain happen? Explain simply."
AI answered: "The sun heats water in oceans, lakes and rivers. This water turns into water vapour (invisible gas) and rises into the sky. As it rises, it cools down and turns into tiny water droplets — these form clouds. When enough droplets gather, they become too heavy and fall as rain."
This is actually correct and well explained! The water cycle is a very well-known scientific process that is thoroughly described in AI's training data. For established scientific facts that haven't changed, AI is generally reliable. This is a good use of AI — explaining concepts that are in textbooks.
📚
Question asked:
"What are the admission requirements for [specific local school]?"
AI answered: "[School name] requires students to submit their grade 8 report card, a completed application form, and pass an entrance exam in English and Mathematics held in January each year. The fee is ₹5,000."
Very dangerous — this could be wrong or outdated. Local school admissions are not well-covered in AI training data. AI may have invented these specific details (the January exam, the ₹5,000 fee) or they may be outdated. Acting on wrong admission information could mean missing deadlines or paying wrong fees. Always check directly with the school's official website or office.
🔬
Question asked:
"What is photosynthesis?"
AI answered: "Photosynthesis is the process by which plants use sunlight, water, and carbon dioxide to produce glucose (sugar) and oxygen. The equation is: 6CO₂ + 6H₂O + light energy → C₆H₁₂O₆ + 6O₂"
Correct! This is accurate science that appears in millions of textbooks. AI is very reliable for fundamental scientific concepts that are well-established and haven't changed. For school subjects like biology, chemistry, and physics — AI is a good study helper. Always cross-check with your textbook, but this type of question is generally safe.
💊
Question asked:
"I have a headache and fever. What medicine should I take?"
AI answered: "For a headache and fever, you can take paracetamol (acetaminophen) 500mg every 6 hours. Make sure to rest, drink plenty of water, and avoid strenuous activity. If your fever goes above 39°C, see a doctor."
Serious problem — never follow AI medical advice without a doctor. This answer might sound reasonable for many adults, but AI doesn't know your age, weight, allergies, other medications, or what's actually causing your symptoms. A real case: a man asked AI how to cut salt from his diet, replaced table salt with a harmful chemical the AI mentioned, and ended up hospitalised with poisoning. Always ask a real doctor or pharmacist for medical advice.
💡
Question asked:
"Give me 5 creative names for a school recycling club"
AI answered: "1. The Green Guardians 2. Recycle Rangers 3. Earth Warriors 4. The Second Life Squad 5. Zero Waste Heroes"
Generally fine! Creative brainstorming is one of the best uses of AI. These names aren't facts that need to be verified — they're just ideas for you to choose from or build on. There's no "wrong" answer here. The AI is doing what it's good at: generating creative options based on a prompt. Of course, YOU still choose which name you like most!
🔬 Section 6 — Your Superpower

Your Personal AI Lie Detector

Every time you use AI, run through this checklist. It turns you from an AI user into an AI detective — someone who gets the benefits of AI without falling for its traps.

🔍 The 8-Step AI Fact-Check Method

Before you use or share any AI-generated information, ask yourself these 8 questions:

1
Is this a fact or an opinion?

Facts can be verified. If AI gives you facts — names, numbers, dates, quotes — those need to be checked. Opinions and creative ideas are safer to use as-is.

2
Is this information recent?

If the question involves current events, recent results, today's prices, or who currently holds a position — AI may be wrong or outdated. Always check a current source.

3
Is this very specific or local?

Very specific details (exact numbers, specific quotes, local information) have the highest risk of being invented. The more specific AI is, the more carefully you should verify.

4
Does AI mention sources or studies?

If AI says "according to a study" or "research shows" — always ask it to name the specific study, then check if that study actually exists. Fake citations are very common.

5
Does it involve maths or counting?

Verify all AI maths yourself. AI is surprisingly error-prone on calculations, especially when numbers are embedded in story-style questions. Use a calculator for anything important.

6
Could this harm someone if it's wrong?

Medical advice, safety information, legal rules, or financial decisions — if getting it wrong could hurt someone, ALWAYS verify with a professional. Never rely on AI alone for these.

7
Does it sound too perfect or too confident?

If AI gives you an eerily perfect, highly specific, very confident answer on a complex topic — be suspicious. Real knowledge is often uncertain. Overconfidence can be a red flag.

8
Can I find this in a second source?

The golden rule: check one other trusted source. If AI says X and a textbook, encyclopedia, or trusted website also says X — it's likely correct. If you can't find it anywhere else — be very careful.

📊 AI Reliability Meter — What AI Is Good and Bad At

Creative Writing — 90%
Explaining Concepts — 85%
Brainstorming Ideas — 88%
Translation & Language — 78%
Summarising Texts — 75%
Current Events / News — 30%
Local / Regional Info — 35%
Complex Maths in Text — 55%
Medical/Legal Advice — 20%
Common Sense Reasoning — 45%

Reliability scores are approximate averages across major AI systems as of 2024–2025.

🏆 Section 7 — What You've Learned

Your AI Detective Skills

You now know things that most adults don't! These skills will help you use AI safely and smartly for the rest of your life. Click to mark each one as learned!

🤥

You Know About Hallucination

AI can make up completely false information and state it confidently. Always verify specific facts, citations, and statistics before using them.

Check every fact
🔢

You Know AI Struggles with Maths

AI does pattern-matching, not real calculation. Always verify AI's maths with a calculator or by doing it yourself, especially in word problems.

Always recheck calculations
🕰️

You Know AI Has a Knowledge Cutoff

AI doesn't know about recent events. For anything that might have changed recently — news, leaders, prices, sports results — always use a current source.

Check current sources
🌍

You Know About Local Knowledge Gaps

AI knows much more about English-speaking Western countries than your local area. For local information, trust local sources and community knowledge.

Trust local sources
⚖️

You Know About AI Bias

AI learned from biased human data. Watch for assumptions about gender, race, nationality, or culture — and question AI when it seems to be making unfair generalisations.

Question assumptions
😤

You Know Confidence ≠ Correctness

AI sounds confident even when completely wrong. Never judge the accuracy of information by how certain AI sounds. Confident and wrong is still wrong.

Verify, don't just trust