© 2026 Promptise by Manser Ventures. All rights reserved.


Hallucination Basics & Quick Checks

Beginner’s guide to reducing LLM hallucinations. Learn to spot weak answers, add source checks, and use confidence fields. Includes a lab on adding a verify-before-answering step to boost reliability.

September 4, 2025
8 min read
Promptise Team
Beginner
Prompt Engineering · Beginner Path · Hallucinations · Confidence Ratings · Source Checking · Safer Outputs

Large language models sometimes hallucinate—they produce fluent text that sounds right but isn’t grounded in reliable sources. A hallucination can be a made-up fact, an invented citation, or a confident answer to a question the model can’t truly verify.

Two companion ideas help beginners tame this: quick checks (fast ways to spot shaky claims) and nudges (small prompt additions that push the model to show sources and uncertainty). You won’t eliminate hallucinations completely, but you can catch most of the risky ones before they reach users.

In this guide you’ll learn the core mental model for hallucinations, a tiny workflow to reduce them, and a few copy-paste prompts. You’ll finish with a short lab where you add a “verify-before-answering” step and a confidence field to the output.

Definition recap: Hallucination = the model asserts content that isn’t supported by accessible evidence. Uncertainty = an explicit signal that the model may be wrong (e.g., confidence rating or “I don’t know”). Verification step = a deliberate move in your prompt that asks the model to check facts against sources before answering.


Mental Model: Three Gaps

Think of hallucinations as appearing in one of three gaps:

  1. Knowledge gap: The model was never trained on the needed fact or forgot it.

  2. Retrieval gap: The model “knows” roughly, but you didn’t point it to evidence.

  3. Discipline gap: The model could hedge, but your prompt rewarded confident fluency.

Your job is to close the gaps with short, mechanical constraints: require sources, allow “I don’t know,” and ask for a confidence estimate tied to evidence.

💡 Insight: When you ask for format + evidence + uncertainty together, the model shifts from storytelling to reporting. Even a simple “List your sources and say how sure you are” cuts errors.


Walkthrough: A 4-Step “Verify-Before-Answering” Loop

We’ll use a small factual question as our running example: “When did the Eiffel Tower open to the public?” The same loop applies to features, settings, definitions—anything that could be wrong.

  1. Set the role & rule: Tell the model it’s a cautious research assistant. Permit “I don’t know.”

  2. Ask for sources first: Require 1–2 sources (title + URL if tools are available; otherwise “source name + date”).

  3. Answer with ties: The answer must be traceable to the sources.

  4. Declare uncertainty: Include a one-line confidence and why.

⚠️ Pitfall: If you only ask for sources at the end, the model may invent them to satisfy your format. Ask for verification before answering.
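The four-step loop above can be captured as a small prompt builder. This is a minimal sketch: the function name and exact wording are illustrative, not a fixed API, and you would adapt the phrasing to your own task.

```python
def build_verify_prompt(question: str) -> str:
    """Assemble a verify-before-answering prompt for one factual question.

    The wording mirrors the four steps: role & rule, sources first,
    answer tied to sources, explicit uncertainty.
    """
    return "\n".join([
        # Step 1: role & rule, with permission to say "I don't know"
        'You are a cautious research assistant. If you are not certain, say "I don\'t know."',
        # Step 2: sources before the answer
        "Before answering, list 1-2 sources you would check (name + date, or title + URL).",
        f"Task: {question}",
        # Step 3: answer tied to the listed sources
        "Then answer in at most 2 sentences, tying each claim to a listed source.",
        # Step 4: declared uncertainty
        "Finish with one line: Confidence: high|medium|low - <one-line reason>.",
    ])

print(build_verify_prompt("When did the Eiffel Tower open to the public?"))
```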


Practical: Copy-Paste Prompts

A starter system prompt

Sets tone and guardrails for most beginner tasks.

text

You are a careful research assistant. Follow these rules:

- If you are not certain, say "I don't know."
- Before answering, identify 1–2 likely sources; prefer official docs, primary data, or reputable references.
- Do not fabricate titles, URLs, quotes, or numbers.
- Always include a confidence rating: high / medium / low with a one-line justification.
- Keep answers concise; include only information supported by your sources.

A structured user template

This is your everyday pattern to reduce hallucinations.

text

Task: {{QUESTION}}

Deliverable (use this exact structure):
- Sources considered (2): {{source_name_or_title}} — {{why relevant}}
- Answer (2–4 sentences), each claim tied to a source.
- Confidence: {{high|medium|low}} — {{one-line reason}}

If no suitable sources are available or you’re unsure, say "I don't know" and explain what would be needed to verify.

“Vague vs. precise” example

Vague: “Tell me about the Eiffel Tower.”

Precise with checks: “Task: When did the Eiffel Tower open to the public? Use the structure above. Do not answer until you list sources you would check. If dates conflict, report both and state confidence.”

A JSON-shaped output option

Useful when your app expects structured fields.

json

{
  "answer": "{{concise answer tied to sources}}",
  "sources": [
    {"name": "{{source 1}}", "access_path": "{{url or reference}}"},
    {"name": "{{source 2}}", "access_path": "{{url or reference}}"}
  ],
  "confidence": {
    "level": "high | medium | low",
    "rationale": "{{why this level given the sources}}"
  },
  "unknown_allowed": true
}
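If your app consumes this JSON shape, a few lines of validation can reject malformed or incomplete outputs before they reach users. A minimal sketch, assuming the field names shown above:

```python
import json

ALLOWED_LEVELS = {"high", "medium", "low"}

def validate_answer(raw: str) -> list[str]:
    """Return a list of problems with a JSON-shaped model answer (empty = OK)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["response is not valid JSON"]
    problems = []
    if not data.get("answer", "").strip():
        problems.append("missing answer")
    sources = data.get("sources", [])
    if not isinstance(sources, list) or not sources:
        problems.append("no sources listed")
    elif any(not s.get("name") for s in sources):
        problems.append("a source entry is missing a name")
    conf = data.get("confidence", {})
    if conf.get("level") not in ALLOWED_LEVELS:
        problems.append("confidence level must be high, medium, or low")
    if not conf.get("rationale", "").strip():
        problems.append("confidence rationale missing")
    return problems

ok = ('{"answer": "2007", "sources": [{"name": "Apple press release"}], '
      '"confidence": {"level": "high", "rationale": "primary source"}}')
bad = '{"answer": "2007", "sources": [], "confidence": {"level": "certain", "rationale": ""}}'
print(validate_answer(ok))   # expect []
print(validate_answer(bad))  # lists the missing sources and bad confidence fields
```

Rejecting a reply is cheap; you can simply re-prompt with the list of problems appended.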


Troubleshooting & Trade-offs

Models love fluency; your prompts must reward restraint. If you still see shaky claims, try one of these adjustments.

  • Tighten the format: Reduce room for storytelling. Ask for 2 sentences max and require the confidence field.

  • Raise the bar for sources: Say “Use official docs or primary data when possible.”

  • Add refusal scaffolding: “If sources conflict or are missing, respond with ‘I don’t know’ plus what to check.”

  • Separate steps: First produce sources, then the answer. If you can, run the steps as two calls and validate the sources programmatically.

The trade-off: more verification means more latency and occasionally fewer creative flourishes. For beginner tasks, that’s almost always worth it.
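The “separate steps” idea can be run as two model calls with a programmatic check in between. In this sketch, `ask` is a placeholder for whatever client call you use; the flow, not the API, is the point.

```python
def answer_with_verification(question, ask):
    """Two-call flow: get sources first, check them, then get the answer.

    `ask` is any callable that sends a prompt string to a model and
    returns its text reply (a stand-in for your actual client).
    """
    # Call 1: sources only, no answer yet
    sources_reply = ask(
        f"List 1-2 sources (one per line, 'name - why relevant') you would "
        f"check to answer: {question}. Do not answer yet."
    )
    sources = [line.strip() for line in sources_reply.splitlines() if line.strip()]
    if not sources:
        # Programmatic gate: refuse rather than proceed without evidence
        return "I don't know - no sources could be identified."
    # Call 2: answer constrained to the verified source list
    return ask(
        f"Task: {question}\nUsing only these sources:\n" + "\n".join(sources) +
        "\nAnswer in <=2 sentences, then one line: Confidence: high|medium|low - reason."
    )

# Demo with a scripted fake model (no real API call):
replies = iter([
    "Apple press release - primary announcement",
    "2007. Confidence: high - primary source agrees.",
])
print(answer_with_verification("What year was the first iPhone released?",
                               lambda prompt: next(replies)))
```

In a real pipeline, the gate between the two calls is where you would also check source names against an allowlist or fetch the URLs.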


Mini Exercise / Lab

Goal: Add a verify-before-answering step and a confidence field.

Scenario: You’re answering: “What year was the first iPhone released?” (Safe, well-known, and easy to cross-check.)

Your prompt to the model (combine system + user):

text

[System]
You are a careful research assistant… (use the starter system prompt above).

[User]
Task: What year was the first iPhone released?

Deliverable:
- Sources considered (2): {{name/title}} — {{why relevant}}
- Answer (≤2 sentences), tie claims to sources.
- Confidence: {{high|medium|low}} — {{one-line reason}}

If unsure or if sources conflict, say "I don't know" and what to verify.

Expected output (example shape, not authoritative content):

Sources considered:
1) Apple press release — primary announcement
2) Reputable tech history page — cross-check date

Answer:
The first iPhone was released in 2007, as announced by Apple (press release) and confirmed by a reputable tech history page.

Confidence: high — two independent, primary/secondary sources agree on the same year.

Reflection: Did the answer list sources first? Is the confidence field present with a reason? If anything was missing, tighten your format or split the steps.


Summary & Conclusion

Hallucinations happen when the model fills gaps with confident prose. You counter them with small, reliable constraints: ask for sources before answering, allow “I don’t know,” and require a brief confidence statement. This shifts the model from storytelling to reporting.

We walked through a simple four-step loop and gave you a starter system prompt, a structured user template, and a JSON option. The lab shows how to embed verification and uncertainty in a few lines—no heavy tooling required. Expect a modest cost in verbosity and time, repaid by fewer wrong answers and clearer limits when the model isn’t sure.

Keep an eye on two pitfalls: invented citations and confident tone without evidence. When in doubt, escalate the standard—better a cautious answer than a plausible fiction.

Next steps:

  • Apply the user template to three of your common questions; save the strongest as a snippet.

  • Add a validator that checks the presence of sources and confidence fields before accepting answers.

  • Try a two-step flow: first ask for sources only, then ask for the final answer referencing them.
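For the plain-text deliverable format (as opposed to the JSON shape), the validator mentioned above can be as small as two checks over the reply. The field labels match the templates in this guide; adjust them to whatever structure you settle on.

```python
import re

def check_text_deliverable(reply: str) -> list[str]:
    """Check a plain-text reply for the sources and confidence fields."""
    problems = []
    if "sources" not in reply.lower():
        problems.append("no 'Sources' section")
    # Accept only the three confidence levels the templates allow
    if not re.search(r"confidence:\s*(high|medium|low)", reply, re.IGNORECASE):
        problems.append("no confidence rating of high/medium/low")
    return problems

sample = (
    "Sources considered:\n1) Apple press release\n\n"
    "Answer: The first iPhone was released in 2007.\n"
    "Confidence: high - primary source."
)
print(check_text_deliverable(sample))   # expect []
```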
