
Summarization that Doesn’t Lose the Plot (Chain-of-Density)

Learn how to write summaries that keep key names, dates, places, and numbers. Explore Chain of Density, a prompt pattern that replaces vague text with grounded entities. Includes prompts, examples, troubleshooting, and a mini lab on real articles.

September 4, 2025
8 min read
Promptise Team
Beginner
Summarization · Chain-of-Density (CoD) · Prompt engineering · Entity-rich outputs · Faithfulness & hallucination control · LLM best practices

Promise: You’ll learn a simple, reliable way to write crisp summaries that keep the important names, dates, places, and numbers. We’ll use Chain-of-Density (CoD)—an iterative prompting trick that compacts a summary while steadily adding missing key entities. You’ll compare it with a vanilla summary on three short articles.

Key terms. A summary is a shorter version of a text. An entity is a concrete thing you can name: a person, place, organization, date, number, or defined term. Chain-of-Density is a prompt pattern that builds a summary in small rounds, each round adding missing entities without getting longer.

Why this matters now: LLMs tend to write smooth but vague summaries that drop specifics. That’s bad for research, notes, and decision-making. CoD fixes this by turning “make it shorter” into “make it tighter and more specific.”

💡 Insight: Density is not length. Two summaries can be 120 words, but the denser one squeezes in more verified entities (from the source text), replacing fluff with specifics.


Mental Model: “Tighten the net without changing the size”

Think of summarization as tightening a net around the facts. CoD keeps the same word budget but swaps generalities for named entities each round. You stop when you can’t add meaningful entities without breaking the length or inventing facts.

Mini example (toy paragraph): “The city approved a plan to upgrade transit next year.”

  • Vanilla (10 words): The city okayed a transit upgrade planned for next year.

  • CoD, Round 1 (11 words): Zurich approved the 2026 ZVV S-Bahn upgrade plan on September 1. Roughly the same length, far richer entities: Zurich, 2026, ZVV S-Bahn, September 1. (In a real run, those specifics come from the full source article, never from thin air.)
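The round-by-round tightening can be sketched as a small driver loop. This is a minimal sketch, not a full implementation: `call_llm` is a hypothetical stand-in for whatever completion client you use, and the inline prompts are condensed versions of the templates shown later in this guide.

```python
def chain_of_density(text, word_budget, rounds=2, call_llm=None):
    """Vanilla summary first, then `rounds` density passes at a fixed word budget.

    `call_llm` is a hypothetical callable (prompt string -> completion string);
    plug in any chat-completion client here.
    """
    # Round 0: plain summary at the target budget.
    prompt = f"Summarize the text below in {word_budget} words.\nText: {text}"
    summary = call_llm(prompt)

    # Each CoD round keeps the budget but asks for more source-grounded entities.
    for _ in range(rounds):
        prompt = (
            f"Improve the prior summary without changing the {word_budget}-word budget. "
            "Add missing entities that ARE present in the text (names, dates, places, "
            "organizations, numbers). Do not add new claims. Keep one paragraph.\n"
            f"Text: {text}\nPrior summary: {summary}"
        )
        summary = call_llm(prompt)
    return summary
```

Stopping after two rounds is a pragmatic default; you could also stop early when a round adds no new entities.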


Walkthrough: CoD on a short article

We’ll use a synthetic micro-article so you can copy-paste.

Source text (≈170 words): On September 1, the Zurich City Council approved a CHF 120 million plan to extend the BlueLine tram to Oerlikon by 2028. The project, proposed by the Zurich Transport Authority (ZVV), adds five new stops and upgrades signaling to reduce wait times. A pilot bus lane on Hofwiesenstrasse will run from October 2025 to March 2026 to measure traffic impact. Local businesses, represented by the Oerlikon Merchants Association, voiced concerns about construction noise. In response, the plan funds sound barriers and limits nighttime work. Environmental groups supported the extension, citing a projected 12% drop in car trips. The canton will co-finance 40% if federal matching is confirmed by mid-2026. If approvals proceed on schedule, ground-breaking begins in Q1 2027, with driver training in late 2028 and service launch before the 2028 holiday season.

Step 1 — Vanilla summary (target 110–130 words). You ask for a straightforward summary. It’ll usually read fine but lose specifics.

Step 2 — CoD Round 1 (same length). Ask the model to keep the word budget but add missing entities (names, dates, numbers) drawn only from the text.

Step 3 — CoD Round 2 (same length). Repeat once more, adding still-missing entities. Stop when further additions would force fabrication or break the length.

⚠️ Pitfall: Don’t let the model invent facts. Always say “Use entities only if explicitly stated in the text.”


Practical Prompts (copy-paste)

Start with a reusable system prompt that sets tone, rules, and outputs.

Starter system prompt (use once per chat):

```text
You are a careful summarizer. Write crisp, entity-rich summaries.
Rules:
- Use ONLY facts present in the provided text.
- Prefer named entities (people, orgs, places, dates, numbers, program names).
- Keep within the word budget.
- No speculation or external knowledge.
- If a requested entity isn’t in the text, omit it.
Output style: 1 short paragraph. Neutral tone.
```

Vanilla summary prompt:

```text
Summarize the text below in {{WORD_BUDGET}} words.

Text: {{TEXT}}
```

Chain-of-Density, Round 1 prompt:

```text
Improve the prior summary without changing the {{WORD_BUDGET}}-word budget.
Add missing entities that ARE present in the text (names, dates, places, organizations, numbers, program names).
Replace general phrases with specific entities. Do not add new claims.
Return:
SUMMARY: <one paragraph>
ENTITIES_ADDED: <comma-separated list>
```

Chain-of-Density, Round 2 prompt (repeat as needed):

```text
Tighten the previous SUMMARY again with the SAME {{WORD_BUDGET}}-word budget.
Add any remaining entities present in the text that increase specificity and relevance.
Do not remove correct entities. Do not fabricate.
Return:
SUMMARY: <one paragraph>
ENTITIES_ADDED: <comma-separated list>
```

One-shot CoD (when you want a single final answer):

```text
Write a {{WORD_BUDGET}}-word summary that is dense with entities from the text.
Process silently in 2–3 internal passes to add missing entities without exceeding the budget.
Return only:
SUMMARY: <one paragraph>
ENTITIES_INCLUDED: <comma-separated list>

TEXT: {{TEXT}}
```
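To use these templates programmatically, you need two small helpers: one to fill the `{{WORD_BUDGET}}`/`{{TEXT}}` slots and one to parse the `SUMMARY:`/`ENTITIES_ADDED:` return format back out of the model's reply. A minimal sketch (the template text here is abbreviated for brevity):

```python
import re

ROUND_PROMPT = (
    "Improve the prior summary without changing the {{WORD_BUDGET}}-word budget. "
    "Add missing entities that ARE present in the text. Do not add new claims.\n"
    "Return:\nSUMMARY: <one paragraph>\nENTITIES_ADDED: <comma-separated list>"
)

def fill(template, **slots):
    # Substitute {{NAME}} placeholders with the given values.
    for name, value in slots.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

def parse_round(reply):
    # Pull SUMMARY and ENTITIES_ADDED out of a model reply; fall back
    # gracefully if the model ignored the requested format.
    summary = re.search(r"SUMMARY:\s*(.*?)\s*ENTITIES_ADDED:", reply, re.S)
    entities = re.search(r"ENTITIES_ADDED:\s*(.*)", reply, re.S)
    return (
        summary.group(1).strip() if summary else reply.strip(),
        [e.strip() for e in entities.group(1).split(",")] if entities else [],
    )
```

Keeping the parse step forgiving matters in practice: models occasionally reorder or drop the labels, and you want the summary back even when the entity list is lost.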


Worked Example (using the micro-article)

Vanilla (≈120 words): Zurich approved a CHF 120 million plan to extend the BlueLine tram to Oerlikon by 2028, adding five stops and upgrading signaling to cut waits. A pilot bus lane on Hofwiesenstrasse will run from October 2025 to March 2026 to gauge traffic effects. Local businesses raised noise concerns; the plan funds sound barriers and limits nighttime work. Environmental groups support the project, projecting fewer car trips. The canton will cover 40% if federal matching arrives by mid-2026. If approvals continue, construction could start in early 2027, with driver training in late 2028 and service launching before the 2028 holidays.

CoD Round 1 (≈120 words): On September 1, the Zurich City Council approved a CHF 120 million plan to extend the BlueLine tram to Oerlikon by 2028, adding five stops and upgrading signaling to cut waits. A pilot bus lane on Hofwiesenstrasse will run October 2025–March 2026 to gauge traffic. The Oerlikon Merchants Association flagged construction noise; the plan funds sound barriers and curbs night work. Environmental groups backed the project, citing a projected 12% drop in car trips. The canton will co-finance 40% if federal matching is confirmed by mid-2026. If approvals proceed, ground-breaking begins Q1 2027, with driver training in late 2028 and service launching before the 2028 holiday season.

CoD Round 2 (≈120 words): On September 1, the Zurich City Council approved a CHF 120 million ZVV plan to extend the BlueLine tram to Oerlikon by 2028, adding five stops and signaling upgrades to cut waits. A pilot bus lane on Hofwiesenstrasse runs Oct 2025–Mar 2026 to measure traffic impact. The Oerlikon Merchants Association raised noise concerns; funding covers sound barriers and limits on nighttime work. Environmental groups supported the extension, projecting a 12% drop in car trips. The canton will fund 40% if federal matching is confirmed by mid-2026. With timely approvals, Q1 2027 ground-breaking leads to late-2028 driver training and a pre-holiday 2028 launch.

Notice how Round 2 kept the length but increased entity coverage (ZVV, timing details) without inventing facts.
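You can make this round-to-round comparison mechanical: collect each round's ENTITIES_ADDED list and diff it against the previous round's. A small sketch:

```python
def new_entities(previous, current):
    """Entities in the current round's list that were not in the previous one.

    Comparison is case-insensitive; order of the current list is preserved.
    """
    seen = {e.lower() for e in previous}
    return [e for e in current if e.lower() not in seen]
```

When `new_entities` comes back empty, CoD has stabilized and further rounds only risk length drift.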


Troubleshooting & Trade-offs

CoD excels when the source has many concrete details. If the text is very sparse, you’ll hit a ceiling: there aren’t enough entities to add. That’s fine—CoD will stabilize early. Another trade-off is fluency vs density. Very dense summaries can feel staccato; you can balance by asking for “smooth prose, not bullet-like compression.”

Common failure modes include length drift, hallucinated entities, and generic paraphrases replacing specifics. Counter them by keeping a strict word budget, restating “facts must come from the text,” and asking the model to output an entity list so you can spot fabrications quickly. If the model keeps drifting, lower the word budget slightly; paradoxically, tighter budgets can force sharper choices.
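Length drift is the easiest of those failure modes to catch automatically, since it needs no model judgment at all. A minimal budget check (the ±10% tolerance is an assumption, tune it to taste):

```python
def check_budget(summary, budget, tolerance=0.10):
    """Flag length drift: is the summary within +/- tolerance of the word budget?

    Returns (ok, word_count) so the actual count can be logged or fed back
    into the next round's prompt.
    """
    count = len(summary.split())
    ok = abs(count - budget) <= budget * tolerance
    return ok, count
```

If a round fails the check, re-run that round with the count stated explicitly ("your last summary was 138 words; the budget is 120").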

💡 Insight: “Entities first, style second.” Lock down the entities list before you fuss over phrasing.


Mini Exercise / Lab

Goal: Compare vanilla vs CoD on three short articles and judge entity coverage and faithfulness qualitatively.

Setup: Pick any three short texts (150–400 words). They can be news posts, docs, or synthetic snippets.

Procedure:

  1. For each article, produce a Vanilla summary at 110–130 words.

  2. Run CoD for two rounds (same word budget).

  3. For each pair, note:

    • Entity Coverage: How many distinct, correct entities made it in?

    • Faithfulness: Any additions not in the source? Any dropped must-haves?

Expected output (snippet for one article):

```text
VANILLA (121 words): <paragraph>

CoD ROUND 1 (120 words): <paragraph>
ENTITIES_ADDED: September 1, Zurich City Council, CHF 120 million, Oerlikon, five stops

CoD ROUND 2 (119 words): <paragraph>
ENTITIES_ADDED: ZVV, Hofwiesenstrasse, Oct 2025–Mar 2026, 12%, Q1 2027

NOTES: CoD included dates, amounts, and orgs; no hallucinations; same length.
```

If you want a quick scoring rubric, give 1 point per correct entity mentioned in the source, minus 2 points per hallucinated entity. CoD should outperform vanilla on points while keeping faithfulness high.
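That rubric is simple enough to turn into a helper, given the summary's entity list and a hand-built list of entities actually present in the source. A sketch (case-insensitive matching is an assumption; real entity matching may need fuzzier comparison):

```python
def rubric_score(summary_entities, source_entities):
    """Apply the rubric: +1 per entity grounded in the source, -2 per hallucination."""
    source = {e.strip().lower() for e in source_entities}
    grounded = [e for e in summary_entities if e.strip().lower() in source]
    hallucinated = len(summary_entities) - len(grounded)
    return len(grounded) - 2 * hallucinated
```

The 2:1 penalty deliberately makes one hallucination wipe out two correct entities, so a dense-but-sloppy summary cannot beat a careful one.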


Summary & Conclusion

You learned how Chain-of-Density preserves the plot: it keeps the word budget fixed while adding missing, source-grounded entities in small passes. The mental model is simple—tighten the net, don’t enlarge it. In practice, CoD turns vague summaries into decision-ready notes by swapping fluff for names, dates, numbers, and organizations.

The main trade-off is fluency vs density, and the main pitfall is hallucinating specifics. You reduce both by enforcing a word budget, requiring entities to appear in the text, and asking the model to reveal which entities it added. After two rounds, you’ll usually hit a sweet spot of clarity and specificity.

Next steps:

  • Run the lab on three texts and tally entity coverage vs hallucinations.

  • Add a post-check: “Bold any entity you’re uncertain about” to surface weak spots.

  • Try a 60–80 word budget to see how density and readability shift on the same articles.
