
Roleplay Prompts: Wearing Different Hats

This guide shows how to design roleplay prompts with Role Cards that define role, stance, mandate, evidence rules, and format. Through examples, scaffolds, and a short lab, you will see how role choice shapes tone, cuts fluff, prevents errors, and makes outputs genuinely useful.

September 4, 2025
8 min read
Promptise Team
Beginner
Prompt Engineering · Role Prompting · Prompt Structuring · Applied Techniques

Most people try “pretend you are X” and stop there. You’ll go further. A role is more than a costume; it’s a bundle of constraints that shape what the model notices, how it argues, and what it delivers. In this upgraded guide you’ll turn loose “roleplay” into role engineering—small, reusable components that sharpen tone, focus, and usefulness.

We’ll build a compact mental model, then turn it into copy-paste scaffolds. You’ll see how adding mandates, stance, evidence rules, and format contracts makes answers both clearer and safer. Finally, you’ll run a short lab that compares hats and shows the impact in minutes.


Mental Model: The Role Card

A good role has five parts. Think of them as a Role Card you hand to the model.

  • Who: the hat, audience, domain boundary.

  • Stance: how to reason (optimistic vs skeptical, planner vs critic).

  • Mandate: the job to do and the success criteria.

  • Evidence policy: what to cite, what to defer, how to flag uncertainty.

  • Format contract: headings or schema the answer must follow.

💡 Insight: Role ≠ voice. Stance and evidence policy do the heavy lifting for quality, while format enforces consistency. Voice can be thin; the other three can’t.


Walkthrough: Same Question, Engineered Hats

Question: “Is Naples a good weekend destination?”

Below are engineered hats—notice how each adds stance, evidence, and format.

Travel Agent (planner stance)

text

ROLE CARD
Who: Frugal European travel agent for first-time visitors to Italy.
Stance: Planner; prefer concrete logistics over generalities; minimize risk.
Mandate: Build a 2-day Naples plan under €250 (excluding flights) with wet-weather options.
Evidence: Use current-known general price ranges; if a fact varies by season, mark it "estimate."
Format:
1) Summary (≤80 words)
2) Day-by-day schedule with times (Fri evening–Sun)
3) Budget table (line-items + subtotal)
4) Booking checklist (3–5 items)
Redlines: No invented deals; no personal anecdotes.

Historian (analytic stance)

text

ROLE CARD
Who: Modern Italian historian writing for casual travelers.
Stance: Measured; privilege context, causes, and primary terms over hype.
Mandate: Explain why Naples matters culturally for a weekend visitor and what to notice.
Evidence: Name 2–3 dates/names; include 2 reputable source titles (no links needed).
Format:
1) Context (1 short paragraph)
2) Timeline highlights (3–4 bullets with dates)
3) "What to notice" (1 short paragraph)
Caveats: Mark speculation explicitly.

City-Break Critic (comparative stance)

text

ROLE CARD
Who: European city-break critic writing for budget-conscious travelers.
Stance: Comparative; make a call with stated criteria.
Mandate: Judge Naples vs Rome for a 2-day weekend; recommend by traveler type.
Evidence: List the criteria used; call out trade-offs honestly.
Format:
1) Criteria (list, ≤6)
2) Pros/Cons table (Naples vs Rome)
3) Verdict (2 sentences: who should pick which)
Rules: If any criterion is uncertain, label it "varies."

⚠️ Pitfall: “Be X” without a mandate becomes cosplay. Add a job and a yardstick.


Practical Scaffolds (Copy-Paste)

1) Reusable System “Policy” (drop-in once per session)

This keeps hats disciplined without being heavy.

text

You will receive a ROLE CARD and a TASK.
Follow the card's stance, mandate, evidence policy, and format contract exactly.
Ask at most one targeted clarification, and only if the mandate cannot be completed as stated.
If a claim is uncertain, mark it and proceed; do not invent specifics.
Honor the requested headings/schema verbatim.
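If you drive a model programmatically, the session policy, Role Card, and task can be assembled into a chat-style message list. A minimal Python sketch; the `SYSTEM_POLICY` constant and the OpenAI-style `{"role", "content"}` message shape are assumptions for illustration, not part of the guide:

```python
# Assemble the session policy, a Role Card, and a task into a
# chat-style message list. The {"role", "content"} dict shape is an
# assumption (OpenAI-style); adapt to your client library.

SYSTEM_POLICY = (
    "You will receive a ROLE CARD and a TASK. Follow the card's stance, "
    "mandate, evidence policy, and format contract exactly."
)

def build_messages(role_card: str, task: str, context: str = "") -> list[dict]:
    """Return a message list: policy once as system, then card + task as user."""
    user_turn = f"{role_card}\n\nTASK: {task}"
    if context:
        user_turn += f"\nCONTEXT: {context}"
    return [
        {"role": "system", "content": SYSTEM_POLICY},
        {"role": "user", "content": user_turn},
    ]

msgs = build_messages("ROLE CARD\nWho: Frugal travel agent.", "Plan 2 days in Naples.")
```

Because the policy lives in the system turn, you can swap Role Cards per question without repeating the discipline rules.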

2) Role Card Template (fill the braces)

Use this to author consistent hats in seconds.

text

ROLE CARD
Who: {{ROLE}} for {{AUDIENCE}} within {{DOMAIN_BOUNDARY}}.
Stance: {{STANCE}} (e.g., planner, skeptic, comparator, tutor). Prefer {{PRIORITY}}.
Mandate: {{JOB_TO_DO}} with {{SUCCESS_CRITERIA}}.
Evidence: {{EVIDENCE_RULES}} (cite titles only / flag uncertainty / no personal anecdotes).
Format:
1) {{SECTION_1}}
2) {{SECTION_2}}
3) {{SECTION_3}}
Rules/Redlines: {{PROHIBITIONS_OR_SAFETY}}
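Filling the braces can be automated so hats stay consistent. A hypothetical helper (name and behavior are illustrative) that substitutes `{{NAME}}` placeholders and fails loudly if one is left unfilled:

```python
import re

def fill_role_card(template: str, values: dict[str, str]) -> str:
    """Replace {{KEY}} placeholders; raise KeyError if a value is missing."""
    def sub(match: re.Match) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing value for placeholder {key}")
        return values[key]
    # \w+ matches the uppercase placeholder names used in the template.
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

card = fill_role_card(
    "ROLE CARD\nWho: {{ROLE}} for {{AUDIENCE}}.",
    {"ROLE": "Auditor", "AUDIENCE": "engineers"},
)
# card == "ROLE CARD\nWho: Auditor for engineers."
```

Raising on a missing value is deliberate: a half-filled card ships vague constraints, which defeats the point of the template.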

3) Task Wrapper (the per-question prompt)

TASK: {{QUESTION_OR_INPUT}}
CONTEXT: {{ANY_LIMITS_OR_ASSUMPTIONS}}
OUTPUT: Follow the Role Card’s format exactly. Keep to {{WORD_BUDGET}} where stated.

4) Self-Check & Tighten (automatic second pass)

Ask the model to audit itself against the card, then rewrite.

text

SELF-CHECK
Compare the draft to the ROLE CARD:
- Did I meet the mandate and success criteria?
- Did I adhere to the stance and evidence policy?
- Did I follow the format exactly?
List 3 fixes, then produce a corrected final.
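The draft-then-check pass is a two-call chain. A minimal sketch, assuming a generic `call_model(prompt) -> str` callable as a stand-in for whatever client you use; the stub model below exists only to make the flow testable:

```python
SELF_CHECK = (
    "SELF-CHECK\nCompare the draft to the ROLE CARD. "
    "List 3 fixes, then produce a corrected final."
)

def draft_then_check(call_model, role_card: str, task: str) -> str:
    """Two-pass chain: first draft, then a self-check rewrite.

    call_model(prompt) -> str is a stand-in for your model client.
    """
    draft = call_model(f"{role_card}\n\nTASK: {task}")
    return call_model(f"{role_card}\n\nDRAFT:\n{draft}\n\n{SELF_CHECK}")

# Stub model for illustration: records prompts and numbers its outputs.
calls = []
def stub(prompt: str) -> str:
    calls.append(prompt)
    return f"output {len(calls)}"

final = draft_then_check(stub, "ROLE CARD\nWho: Tutor.", "Explain recursion.")
# final == "output 2"; the first draft is embedded in the second prompt.
```

The key detail is that the Role Card is repeated in the second call, so the audit is judged against the same contract that produced the draft.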

5) Role Stack (primary + secondary lens)

When one hat isn’t enough, add a lightweight secondary lens.

text

PRIMARY: {{HAT}} (driver)
SECONDARY LENS: {{HAT_2}} (advisory; influences criteria only, not tone)
Rule: If the two disagree, explain the trade-off in one sentence in the Verdict.

💡 Insight: A critic+historian stack yields decisive picks that still respect context. A planner+coach stack keeps logistics sharp while minding user constraints.


Ideas to Expand Your Hat Bank

Keep it small and sharp. Each hat below includes a stance and a signature move.

  • Coach (Socratic stance): asks 1–3 questions first; then offers a 3-step plan.

  • Auditor (skeptical stance): finds violations against a brief checklist; outputs a risk score.

  • Tutor (diagnostic stance): explains in the learner’s words; ends with a tiny quiz and answer key.

  • Product Manager (prioritization stance): trades off scope vs time; outputs a MoSCoW list and next 3 actions.

  • Safety Officer (precautionary stance): enumerates failure modes; proposes layered mitigations.

Use the Role Card for each. Swap stance and mandate to change behavior without rewriting everything.


Quality Levers that Move the Needle

There are dozens of knobs. These five deliver the biggest gains fast:

  1. Mandate clarity: “Do X by Y standard” beats “Analyze.”

  2. Evidence policy: define what must be cited or flagged.

  3. Format contract: headings or schema you can eyeball for compliance.

  4. Word budgets: force concision (“≤80 words” for summaries).

  5. Stance toggles: optimistic ↔ skeptical, planner ↔ critic, teacher ↔ examiner.

💡 Insight: Word budgets and format contracts reduce “answer entropy.” The model fills the boxes you draw.
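Format contracts and word budgets are also easy to lint mechanically before a human ever reads the draft. A small sketch; the heading names and the 80-word budget are illustrative assumptions, not a fixed standard:

```python
def check_contract(text: str, required_headings: list[str],
                   summary_budget: int = 80) -> list[str]:
    """Return a list of violations against a simple format contract."""
    problems = []
    for heading in required_headings:
        if heading not in text:
            problems.append(f"missing heading: {heading}")
    # Word-budget check on the section right after "Summary", if present.
    if "Summary" in text:
        summary = text.split("Summary", 1)[1].split("\n\n", 1)[0]
        if len(summary.split()) > summary_budget:
            problems.append(f"summary exceeds {summary_budget} words")
    return problems

draft = "Summary\nShort trip plan.\n\nBudget table\n..."
issues = check_contract(draft, ["Summary", "Budget table", "Booking checklist"])
# issues == ["missing heading: Booking checklist"]
```

An empty list means the draft at least honors the boxes you drew; anything else can trigger an automatic rewrite pass.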


Troubleshooting & Trade-offs

If results feel theatrical, your stance is too vibe-heavy. Switch to “planner” or “skeptic” and add a mandate with a metric. If the model hedges forever, your mandate lacks a decision rule; add “make a call using these criteria.” If it fabricates, your evidence policy is weak; add “flag uncertainty; do not invent.” If it rambles, your format contract is slack; add headings plus word budgets. If it becomes stiff, relax one lever—usually the budget or the number of sections.

⚠️ Pitfall: Over-stacking roles can create contradictions. Keep one driver and one lens, and specify which breaks ties.


Bonus: JSON Schema Format Contract (when you need perfect structure)

If your downstream tool expects structured data, use a schema instead of headings.

json

Format: Return JSON matching this schema exactly.
{
  "summary": "string (≤80 chars)",
  "itinerary": [
    {"day": "Fri|Sat|Sun", "time_block": "string", "activity": "string", "est_cost_eur": "number"}
  ],
  "budget": {"lodging_eur": "number", "food_eur": "number", "transport_eur": "number", "total_eur": "number"},
  "assumptions": ["string"]
}
Rules: No extra keys. If unsure, put null and add a note in "assumptions".
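Downstream, you can reject noncompliant replies before they reach your tool. A minimal validator for the schema above, hand-rolled for illustration rather than using a JSON Schema library; it checks only key sets and the summary budget, not every field type:

```python
import json

ALLOWED_KEYS = {"summary", "itinerary", "budget", "assumptions"}

def validate_reply(raw: str) -> list[str]:
    """Check a model reply against the itinerary schema; return violations."""
    problems = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["not valid JSON"]
    extra = set(data) - ALLOWED_KEYS
    if extra:
        problems.append(f"extra keys: {sorted(extra)}")
    missing = ALLOWED_KEYS - set(data)
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    summary = data.get("summary")
    if isinstance(summary, str) and len(summary) > 80:
        problems.append("summary exceeds 80 chars")
    return problems

ok = validate_reply('{"summary": "Naples in 2 days", "itinerary": [], '
                    '"budget": {}, "assumptions": []}')
# ok == []
```

An empty result means the structure is usable; anything else is a concrete error message you can feed back to the model for a retry.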

💡 Insight: A schema doesn’t just format output; it prunes the search space, which improves fidelity and reduces hallucination.


Summary & Conclusion

Roleplay becomes reliable when you move from “be X” to a Role Card with stance, mandate, evidence policy, and a format contract. These levers reduce ambiguity and channel the model’s attention. You saw how the same question transforms under three hats: the planner commits to logistics, the historian guards context, and the critic makes a call.

The trade-off is flexibility. Strong constraints boost precision but can flatten voice. Loosen word budgets or add a secondary lens when you need nuance. Watch for fabrication: clear evidence rules and uncertainty flags keep you out of trouble.

As you adopt Role Cards, you’ll see sharper tone, fewer rewrites, and outputs that fit downstream workflows. That’s the real impact: less entropy, more decisions, faster.

Next steps

  1. Build a 5-hat bank for your domain using the Role Card template; keep each to ~6 lines.

  2. Add the Self-Check pass to your default prompt chain; measure acceptance rate before/after.

  3. For one task, try a role stack (e.g., critic + historian) and write a one-sentence tie-breaker rule; compare results.
