Build the tool APIs
your agents call.
The Model Context Protocol is how AI agents discover and call tools. Promptise MCP is the production framework for building those servers — typed, authenticated, observable, and ready for compliance from the first request.
from promptise import MCPServer

server = MCPServer("acme-tools")

@server.tool()
async def revenue_report(
    quarter: str,
    year: int,
) -> dict:
    """Get quarterly revenue data."""
    return await db.query(...)

What is Promptise MCP?
Your tools. Their protocol.
Production infrastructure included.
The Model Context Protocol is how AI agents discover and call tools. Promptise gives you both sides of the wire: a production server framework where one decorator gets you authentication, authorization, rate limiting, and audit logs — and a native client that auto-discovers tools from any MCP server, federates many of them into a single tool suite, and adapts them straight to your agent.
Without Promptise
You expose raw MCP tools with no auth, no rate limiting, no audit trail. Any agent on the network calls anything. No middleware. No guards. No testing framework.
With MCPServer
@server.tool() auto-generates schemas from type hints. Stack JWT auth, role and scope guards, rate limiting, circuit breakers, and audit logs as middleware. Every handler receives a typed RequestContext with the authenticated client and session state baked in.
What you get
JWT + OAuth + API key auth, per-tool role guards, 14+ middleware types, job queue with progress, HMAC-chained audit logs, live dashboard, routers with namespaces, OpenAPI import, and 3 transports.
MCPServer = @tool() + auth + guards + middleware + queue + audit + dashboard
The complete MCP server stack
Everything you need to build production MCP servers. Click any capability to explore.
Decorate. Type-hint. Ship.
Tools are plain async functions. Schemas auto-generate from your type hints. The decorator wires in authentication, authorization, dependencies, middleware, and a typed RequestContext — without you writing a line of plumbing.
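Under the hood, this kind of schema generation is standard Python introspection. A minimal sketch of how a decorator can derive a JSON-style parameter schema from type hints and defaults (illustrative only, not the Promptise source; `tool_schema` and `_TYPE_MAP` are hypothetical names):

```python
import inspect
import typing

# Map Python annotations to JSON Schema type names.
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}

def tool_schema(fn):
    """Derive a tool description from a function's signature and docstring."""
    hints = typing.get_type_hints(fn)
    sig = inspect.signature(fn)
    params = {}
    for name, param in sig.parameters.items():
        entry = {"type": _TYPE_MAP.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            entry["required"] = True       # no default -> caller must supply it
        else:
            entry["default"] = param.default
        params[name] = entry
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": params,
    }

async def revenue_report(quarter: str, year: int) -> dict:
    """Get quarterly revenue data."""
    ...

schema = tool_schema(revenue_report)
# schema["parameters"]["quarter"] -> {"type": "string", "required": True}
```

The same introspection that powers editor autocomplete is enough to tell an agent exactly what a tool accepts.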
@server.tool(
    auth=True,
    guards=[HasRole("analyst"), HasScope("read")],
    middleware=[RateLimit(60), Cache(ttl=300)],
)
async def revenue_report(
    ctx: RequestContext,
    quarter: str,
    year: int,
) -> dict:
    """Get quarterly revenue."""
    print(ctx.client.client_id)  # "agent-007"
    print(ctx.client.roles)      # {"analyst"}
    print(ctx.client.extra)      # {"org": "acme"}
    return await db.query(quarter, year)

One decorator, full stack: identity, authorization, rate limiting, caching, and a typed context — already on the function signature.
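Stacked middleware generally composes like an onion: each layer wraps the next, seeing the request on the way in and the response on the way out. A toy sketch of that composition pattern (illustrative; `compose`, `logging_mw`, and `auth_mw` are hypothetical names, not the Promptise internals):

```python
import asyncio

def compose(middleware, handler):
    """Wrap `handler` in each middleware, outermost first."""
    for mw in reversed(middleware):
        handler = mw(handler)
    return handler

def logging_mw(next_handler):
    async def wrapper(request):
        request.setdefault("trace", []).append("log:in")
        response = await next_handler(request)
        request["trace"].append("log:out")
        return response
    return wrapper

def auth_mw(next_handler):
    async def wrapper(request):
        if request.get("client") is None:
            return {"error": "unauthenticated"}  # short-circuit: tool never runs
        request["trace"].append("auth:ok")
        return await next_handler(request)
    return wrapper

async def tool(request):
    return {"result": 42}

pipeline = compose([logging_mw, auth_mw], tool)
ok = asyncio.run(pipeline({"client": "agent-007"}))
# Authenticated request reaches the tool; anonymous ones stop at auth_mw.
```

Because each layer only knows about the next handler, layers can be reordered, added, or removed per tool without touching the others.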
Trust that survives a compliance audit.
Authentication integrates with your existing infrastructure. Every tool call is recorded with cryptographic integrity.
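Hash-chaining is the standard way to make a log tamper-evident: each entry's MAC covers the previous entry's MAC, so editing or dropping any record breaks every MAC after it. A minimal sketch of the idea (illustrative, not the Promptise implementation; the key and helper names are hypothetical):

```python
import hashlib
import hmac
import json

SECRET = b"audit-signing-key"  # hypothetical; load from a KMS/secret store in practice

def append_entry(log, record):
    """Append a record whose MAC covers the previous entry's MAC."""
    prev_mac = log[-1]["mac"] if log else ""
    payload = json.dumps(record, sort_keys=True) + prev_mac
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "mac": mac})

def verify_chain(log):
    """Recompute every MAC in order; any tampering breaks verification."""
    prev_mac = ""
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True) + prev_mac
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev_mac = entry["mac"]
    return True

log = []
append_entry(log, {"tool": "revenue_report", "client": "agent-007"})
append_entry(log, {"tool": "search_users", "client": "agent-007"})
assert verify_chain(log)
```

An auditor who holds the key can verify the whole history in one pass; an attacker who rewrites one record would have to re-sign every subsequent entry.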
JWT Bearer
JWTAuth (HS256) and AsymmetricJWTAuth (RS256/ES256) with LRU token caching
OAuth 2.0
Client credentials grant with refresh token rotation. Built-in token endpoint for dev/test
API Keys
APIKeyAuth with timing-safe comparison via hmac.compare_digest. Per-key roles and metadata
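The timing-safe comparison mentioned above matters because a naive `==` on strings can exit early at the first mismatched byte, leaking key prefixes through response timing. A sketch of the pattern with per-key metadata (illustrative; the key store and `authenticate` helper are hypothetical, not the Promptise API):

```python
import hmac

# Hypothetical key store: key -> per-key roles and metadata.
API_KEYS = {
    "sk-live-abc123": {"roles": {"analyst"}, "org": "acme"},
}

def authenticate(presented_key: str):
    """Check the presented key against every known key in constant time."""
    found = None
    for key, meta in API_KEYS.items():
        # compare_digest examines all bytes regardless of where they differ
        if hmac.compare_digest(key.encode(), presented_key.encode()):
            found = meta
    return found
```

On a match the caller gets the key's roles and metadata, which is what guards like `HasRole` can then check.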
Failures are not failures. They are retries.
Network blips, database timeouts, rate limits — all handled automatically. Circuit breakers prevent cascade failures. Fallback chains provide graceful degradation.
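A circuit breaker trips after a run of consecutive failures and then rejects calls outright until a cooldown expires, so a struggling dependency gets room to recover instead of being hammered. A toy version of the state machine (illustrative only, not the Promptise API):

```python
import time

class CircuitBreaker:
    """Open after `threshold` consecutive failures; probe again after `cooldown` seconds."""

    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # None = closed (normal operation)

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one probe call through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result

# Demo: two failures trip the breaker; further calls fail fast until cooldown.
breaker = CircuitBreaker(threshold=2, cooldown=60.0)
def flaky():
    raise ValueError("db timeout")
for _ in range(2):
    try:
        breaker.call(flaky)
    except ValueError:
        pass
assert breaker.opened_at is not None
```

Failing fast is the point: a rejected call returns in microseconds instead of tying up a worker for a full timeout.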
See everything. Understand anything.
Prometheus metrics and OpenTelemetry tracing built in. Your MCP server metrics flow into any OTLP-compatible backend.
Some tools take minutes. That is fine.
Mark any tool as async and it returns immediately with a job ID. The agent polls for completion. Long-running operations become first-class citizens.
queue = MCPQueue(server, max_workers=4)
@queue.job(name="generate_report")

Turn any REST API into MCP tools.
Point OpenAPIProvider at an OpenAPI spec and it generates MCP tools automatically. Your existing REST APIs become AI-callable without rewriting anything.
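Conceptually the import is a mechanical mapping: each OpenAPI operation becomes one tool, with its `operationId` as the tool name and its parameters as the schema. A simplified sketch of that flattening (illustrative; `operations_to_tools` is a hypothetical helper, and real specs also need `$ref` resolution, request bodies, and auth handling):

```python
def operations_to_tools(spec):
    """Flatten an OpenAPI spec's paths into MCP-style tool descriptions."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            params = {
                p["name"]: {"type": p["schema"]["type"], "required": p.get("required", False)}
                for p in op.get("parameters", [])
            }
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "parameters": params,
            })
    return tools

# Minimal hand-written spec fragment for illustration.
spec = {
    "paths": {
        "/users": {
            "get": {
                "operationId": "list_users",
                "summary": "List employees.",
                "parameters": [
                    {"name": "department", "required": False,
                     "schema": {"type": "string"}},
                ],
            }
        }
    }
}
# operations_to_tools(spec) yields one tool named "list_users".
```

Because the spec already carries names, descriptions, and parameter types, the API's existing documentation becomes the agent's tool documentation for free.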
OpenAPIProvider("openapi.json")

Test like you mean it.
TestClient gives you an in-memory server for unit tests. Mock dependencies cleanly. Validate schemas. Check auth flows. All without touching the network.
async def test_revenue_report():
    async with TestClient(server) as client:
        result = await client.call(
            "revenue_report",
            quarter="Q4",
            year=2024,
        )
        assert result["total"] > 0

The agent perspective
What agents see when they connect
to your MCP server.
When an agent connects, it doesn't see your code. It sees auto-generated tool descriptions, typed parameter schemas, and structured responses. Everything that makes tool calling reliable happens behind the scenes — without the agent writing a single line of retry or auth logic.
You write this (5 lines)
@server.tool()
async def search_users(
    name: str,
    department: str = "all"
) -> list[dict]:
    """Find employees by name."""
    return await db.search(name, department)

The agent receives this (automatic)
{
  "name": "search_users",
  "description": "Find employees by name.",
  "parameters": {
    "name": { "type": "string", "required": true },
    "department": { "type": "string", "default": "all" }
  }
}
+ JWT validation ✓ + rate limiting ✓
+ input validation ✓ + audit log ✓

Growth path
Starts simple.
Scales to thousands of tools.
Hello world
One @server.tool() decorator. Zero config. Agents connect immediately.
@server.tool()
async def hello(): ...

Organized
MCPRouter groups tools by domain. Prefix namespacing keeps names clean.
users = MCPRouter(prefix="users")
orders = MCPRouter(prefix="orders")

Production
JWT auth, rate limiting, circuit breakers, background job queue. Full middleware stack.
server.middleware = [AuthMiddleware(jwt), RateLimitMiddleware(100)]  # 100 calls/min

Enterprise
Multi-server composition. OpenAPI import. HMAC audit logs. Live dashboard. Registry discovery.
server.mount(users_server)
server.mount(billing_server)

Comparison
What you get that
raw MCP doesn't.
| Capability | Promptise MCP | Raw MCP SDK | Build from scratch |
|---|---|---|---|
| Schema from Python types | ✓ | — | — |
| JWT + OAuth + API key auth | ✓ | — | ✓ |
| Per-tool role guards | ✓ | — | ✓ |
| Rate limiting + circuit breakers | ✓ | — | ✓ |
| HMAC-chained audit logs | ✓ | — | — |
| In-process TestClient | ✓ | — | — |
| Background job queue | ✓ | — | ✓ |
| OpenAPI → MCP import | ✓ | — | — |
| Live terminal dashboard | ✓ | — | — |
| Dependency injection | ✓ | — | ✓ |
| Composable middleware | ✓ | — | ✓ |
| Router namespacing | ✓ | — | ✓ |
"Build from scratch" = you write the middleware, auth, testing, and audit yourself. Estimate: 2-4 weeks of engineering.
Build your first tool API
in 60 seconds.
One decorator. Schema from types. Auth middleware. Audit logs. In-process testing. Ship today.
$ pip install promptise
$ promptise serve my_tools:server --transport http --port 8080