
Prompting 101: clear asks, roles, constraints

This guide shows how to write prompts that models can’t misunderstand. Learn the ARCF framework—Ask, Role, Constraints, Format—to get reliable outputs in bullets, steps, tables, or JSON. Includes acceptance criteria, reusable system prompts, common fixes, and practical examples so you can reduce drift, cut fluff, and hit the exact format you need every time.

September 3, 2025
18 min read
Promptise Team
Beginner
prompt engineering

You open a new chat. You type, “Summarize this.”
The model dutifully spins paragraphs you didn’t ask for, forgets the bullets you wanted, and somehow adds a motivational quote. It’s not trying to be difficult—it’s doing what language models do: filling every gap in your instructions with its own guesses.

This guide teaches you how to give instructions the model can’t misunderstand. You’ll learn a compact mental model, see it applied end-to-end, and practice with a short lab. By the end, you’ll hit target formats—bullets, tables, steps—without wrestling the output.


Why prompts fail (and how to fix them)

LLMs are excellent at following structure and mediocre at guessing your intent. If you say “summarize,” the model invents a structure. If you say “five bullets, last bullet a risk,” it stops guessing and starts following.

Two quick definitions so we’re on the same page:

  • Prompt: Everything you send the model in the conversation, including system and user messages.

  • Context window: The model’s “short-term memory.” Put critical instructions up front so they’re hard to miss.

💡 Insight: Pretend you’re delegating to a literal but fast intern. If it must appear, name it. If it must not appear, forbid it.


The tiny mental model: ARCF

You don’t need a hundred “prompting tricks.” You need one reliable skeleton.

  • Ask — the one sentence task you want done.

  • Role — the voice and expertise you want applied.

  • Constraints — measurable rules (length, tone, do/don’ts).

  • Format — the exact output shape (bullets, steps, table, JSON).

Add a final layer—acceptance criteria—so success is testable by anyone on your team. Think of them as your mini-rubric.

⚠️ Pitfall: “Be concise” is not measurable. “≤120 words” is.


A gentle ramp from zero → useful

Let’s take the common disaster prompt—“Explain cloud storage”—and walk it to something the model can’t mess up.

First, name the target. Who is this for? “A non-technical adult” is clearer than “anyone.” Second, give the model a voice. Patient educator beats vague neutrality. Third, fence the language and size. No acronyms. ≤100 words. Fourth, choose the shape. Bullets, not paragraphs. Fifth, write checks you can verify in ten seconds.

Here’s the transformation:

Vague → Precise (with acceptance criteria)

A: Explain cloud storage to a non-technical adult.
R: You are a patient consumer tech educator.
C: Use everyday analogies; avoid acronyms; ≤100 words; no sales pitch.
F: 5 bullets, each ≤18 words. Final bullet: one privacy caution.

Acceptance criteria:
- Exactly 5 bullets.
- No acronyms (e.g., “S3”, “SLA”, “RAID”).
- ≤100 words total.
- Last bullet includes a privacy caution.

Read it once and notice how your brain now knows what a correct answer looks like. That’s the whole game.
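Criteria this concrete are mechanical enough to check with a script. Here is a minimal sketch in Python; the function name and the heuristics (such as treating lines that start with a dash as bullets) are illustrative choices, not part of any tool:

```python
import re

def check_cloud_storage_answer(text: str) -> list[str]:
    """Return a list of acceptance-criteria violations (empty list = pass)."""
    failures = []
    # Treat lines starting with a bullet marker as bullets.
    bullets = [line for line in text.splitlines()
               if line.strip().startswith(("-", "*", "•"))]
    if len(bullets) != 5:
        failures.append(f"expected 5 bullets, got {len(bullets)}")
    if len(text.split()) > 100:
        failures.append("over 100 words")
    for acronym in ("S3", "SLA", "RAID"):
        if re.search(rf"\b{acronym}\b", text):
            failures.append(f"contains acronym {acronym}")
    if bullets and "privacy" not in bullets[-1].lower():
        failures.append("last bullet has no privacy caution")
    return failures

sample = "\n".join([
    "- Your files live on a company's computers, not just your own device.",
    "- It works like a storage unit you can open from anywhere.",
    "- Copies are kept in more than one place, so a broken laptop is not a disaster.",
    "- You pay for the space you use, like renting a bigger or smaller unit.",
    "- Privacy caution: anything you upload is only as private as your password and provider.",
])
print(check_cloud_storage_answer(sample))  # → []
```

Ten seconds of reading becomes ten milliseconds of checking, and the checks double as documentation of what “correct” means.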


Your reusable guardrails: the system prompt

Set this at the start of a chat and reuse it. It keeps the model calm and format-focused.

You are a concise, friendly educator. Follow ARCF:
- Ask: address the stated task only.
- Role: write from the provided role for the intended audience.
- Constraints: obey length limits, style rules, and do/don’ts; prefer plain language.
- Format: match the requested structure exactly; no extra sections.

Quality rules:
- If acceptance criteria are given, satisfy them before optimizing style.
- If criteria conflict, prioritize: Format > Constraints > Role.
- Never invent sources or data. If unsure, say “Not confirmed.”
- Keep average sentence length 12–18 words unless told otherwise.

Keep it in a snippet manager. Paste once. Enjoy fewer surprises.


A tour of formats (with live-feeling examples)

Bullets (fast consumption). Use bullets for skimmable updates, highlights, and “what changed.” Specify how many and how long.

A: List habits that improve sleep for new parents.
R: You are a pediatric sleep coach.
C: ≤90 words total; plain language; no medical claims or products.
F: 6 bullets; each ≤12 words; last bullet is a caution.

Acceptance: bullets == 6; each ≤12 words; total ≤90; last bullet contains a caution

Numbered steps (procedures). Steps are for action. Start each line with a verb; keep one sentence per step to avoid drift.

A: Give me a 4-step plan to declutter a desk in 20 minutes.
R: You are a calm organizer.
C: Timeboxed actions; include a single timer cue; no shaming.
F: 4 numbered steps; one sentence each; each starts with a verb.

Acceptance: steps == 4; exactly one timer cue; every step begins with a verb.

Tables (comparisons). Tables are where models love to freestyle column names. Don’t let them.

A: Compare three note-taking options for students.
R: You are a neutral productivity reviewer.
C: Focus on first-week experience; no affiliate language.
F: Markdown table: App | Best for | Key limitation. 3 rows.

Acceptance: headers match exactly; rows == 3; each cell ≤8 words; no empty cells.
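Locked column names make table checks mechanical too. Here is a hedged Python sketch; the expected header list and the parsing shortcuts (leading-pipe rows, a skipped separator row) are my own assumptions about the output shape:

```python
EXPECTED_HEADERS = ["App", "Best for", "Key limitation"]

def check_table(markdown: str) -> list[str]:
    """Return acceptance-criteria violations for the comparison table."""
    lines = [l.strip() for l in markdown.strip().splitlines() if "|" in l]
    # Ignore the markdown separator row (e.g. | --- | --- | --- |).
    rows = [l for l in lines if not set(l) <= set("|-: ")]
    if not rows:
        return ["no table found"]

    def cells(row: str) -> list[str]:
        return [c.strip() for c in row.strip("|").split("|")]

    failures = []
    if cells(rows[0]) != EXPECTED_HEADERS:
        failures.append("headers do not match exactly")
    body = rows[1:]
    if len(body) != 3:
        failures.append(f"expected 3 rows, got {len(body)}")
    for row in body:
        for cell in cells(row):
            if not cell:
                failures.append("empty cell")
            elif len(cell.split()) > 8:
                failures.append("cell over 8 words")
    return failures

table = """| App | Best for | Key limitation |
| --- | --- | --- |
| Paper | Quick daily notes | Weak search |
| Grid | Structured class notes | Steep learning curve |
| Ink | Handwriting capture | No free plan |"""
print(check_table(table))  # → []
```

If the model renames a column, this fails loudly instead of silently breaking whatever consumes the table.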

JSON (machine-friendliness). When another tool will read the output, lock keys and counts.

A: Produce a checklist for a website launch.
R: You are a process engineer.
C: ≤5 items; plain names; no extra keys.
F: Valid JSON array of objects with keys: ["step","owner","risk"].

Acceptance: parses as JSON; length ≤5; keys match exactly; no null values.
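When the contract is JSON, validation can be fully automatic. A minimal sketch using Python's standard json module; the validator name and failure messages are illustrative:

```python
import json

REQUIRED_KEYS = {"step", "owner", "risk"}  # exactly the keys the prompt locks in

def validate_checklist(raw: str) -> list[str]:
    """Return acceptance-criteria violations for the launch checklist."""
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return ["does not parse as JSON"]
    if not isinstance(items, list):
        return ["top level is not a JSON array"]
    failures = []
    if len(items) > 5:
        failures.append(f"length {len(items)} exceeds 5")
    for i, item in enumerate(items):
        if not isinstance(item, dict) or set(item) != REQUIRED_KEYS:
            failures.append(f"item {i}: keys do not match exactly")
        elif any(v is None for v in item.values()):
            failures.append(f"item {i}: null value")
    return failures

output = '[{"step": "Freeze content", "owner": "Editor", "risk": "Late copy edits"}]'
print(validate_checklist(output))  # → []
```

Run this between the model and the downstream tool, and a bad output becomes a retry instead of a broken pipeline.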

💡 Insight: If the output will feed another system (slides, trackers, scripts), your Format and Acceptance are not optional—they’re your contract.


When the model misbehaves (and how to nudge it back)

If extra paragraphs sneak in, your format was underspecified. Add “no extra sections” and restate counts. If jargon appears, make it illegal: “no acronyms; define new terms in one sentence.” If it ignores you twice, halve the scope and tighten the limits. Smaller asks get crisp results.

A reliable order of operations when rules conflict: Format → Constraints → Role. Format first keeps your structure intact; you can polish tone later.


A quick end-to-end vignette

Imagine you’re sending a one-slide update to executives. You have five bullet slots and 90 words to spend. You need status, a risk, and a next step. Here’s a prompt that ensures you get exactly that:

A: Produce a one-slide project update for executives.
R: You are a crisp PM communicating status and risks.
C: ≤90 words; plain language; include one risk and one next step.
F: 5 bullets; each ≤16 words; bullet 4 names a risk; bullet 5 is the next step.

Acceptance: bullets == 5; each ≤16 words; total ≤90; bullet 4 is a risk; bullet 5 is actionable.

Run it, then check your criteria like a flight checklist. If a rule is broken, state it again, more literally, and rerun.


Mini lab (10–15 minutes)

You’ll rewrite two vague prompts. Type your answers in your own editor so you can tweak and reuse them.

A. Event recap (bullets). Vague: “Write a quick event recap.” Your goal: 6 bullets, CTA in the last bullet, tight word budget.

One strong answer to compare with:

A: Recap the meetup for attendees who missed it.
R: You are a crisp community manager.
C: ≤110 words; plain language; no sales pitch.
F: 6 bullets; each ≤14 words; last bullet is a next step.

Acceptance: bullets == 6; each ≤14 words; total ≤110; last bullet is actionable.

B. Onboarding plan (steps). Vague: “Help me onboard a new teammate.” Your goal: 5 steps, step 3 ends with a three-item checklist, no internal tool names.

One strong answer to compare with:

A: Provide a 5-step onboarding plan for a remote teammate’s first week.
R: You are an organized team lead.
C: Plain language; avoid internal tool names; include a checklist in step 3.
F: 5 numbered steps; one sentence each; step 3 ends with a 3-item checklist.

Acceptance: steps == 5; step 3 ends with three checklist items; no tool names.

Score yourself with this quick rubric (0–2 each):

  • Ask specific to a reader.
  • Role guides tone.
  • Constraints measurable.
  • Format explicit.
  • Acceptance testable.


What to remember (and actually use)

  • Name the shape you want (bullets, steps, table) and count it.

  • Make rules measurable so you can check them fast.

  • Freeze voice with a role that fits your audience.

  • Add acceptance criteria so anyone can verify success.

  • Keep a system prompt loaded so every task starts on rails.

That’s the whole craft. Clear asks. Specific roles. Hard constraints. Exact formats. You’ll spend less time fixing outputs and more time using them.
