Frequently Asked Questions¶
Honest, complete answers about Promptise Foundry — the production Python framework for building AI agents, MCP servers, and autonomous runtimes.
Foundations¶
What is Promptise Foundry?¶
Promptise Foundry is a production-grade Python framework for building AI agents, MCP (Model Context Protocol) servers, prompt engineering systems, and autonomous runtimes. It is secure by default, model-agnostic, MCP-native, and designed for teams shipping agentic AI to production. One coherent stack instead of gluing libraries together.
Is Promptise Foundry production-ready?¶
Yes. No stub implementations, no half-finished features, no NotImplementedError in shipped code. Built-in: access control, capability-based policies, audit logging, encrypted transport, sandboxed code execution, observability with 8 transporters, crash-recovery via journals. Every backend listed as a parameter option works — if it's documented, it's implemented.
Is Promptise Foundry open source?¶
Apache 2.0 — source at github.com/promptise-com/foundry.
How do I install Promptise Foundry?¶
Python 3.10 or newer required. Optional extras for memory backends, sandboxing, and observability — see Installation Extras.
Comparisons¶
How is Promptise Foundry different from LangChain?¶
LangChain is a general-purpose LLM toolkit with hundreds of integrations. Promptise Foundry is a focused production framework — one coherent stack covering agents, MCP servers, prompt engineering, and autonomous runtime, with built-in access control, audit trails, sandboxing, and governance. Promptise has fewer abstractions, no silent fallbacks (errors raise instead of degrading), is async-first, and uses MCP-native tool discovery instead of manual wiring.
How does Promptise Foundry compare to LangGraph?¶
LangGraph is an orchestration layer focused on stateful graphs. Promptise Foundry includes a full Agent Runtime with crash recovery via journals, governance (budget, health, mission), and five trigger types (cron, event, message, webhook, file watch) — built for long-running autonomous agents that survive restarts, not just stateful conversations.
How does Promptise Foundry compare to CrewAI?¶
CrewAI focuses on multi-agent role-playing workflows. Promptise Foundry covers single agents, multi-agent coordination via the cross-agent delegation system (ask_peer / broadcast over HTTP+JWT), plus the production infrastructure — MCP servers, governance, observability, sandboxing — that real deployments need.
Models & Local-First¶
Which LLM models does Promptise Foundry support?¶
Any model. Use a string like "openai:gpt-5-mini", "anthropic:claude-sonnet-4.5", or "ollama:llama3", or pass any LangChain BaseChatModel directly. Switching providers requires changing one string in build_agent().
Does Promptise Foundry support self-hosted or local LLMs?¶
Yes. Use any Ollama model via "ollama:model-name". Local embeddings via SentenceTransformers, local guardrail models (DeBERTa, GLiNER, Llama Guard), and local memory backends (ChromaDB) make air-gapped deployments fully supported.
MCP — Model Context Protocol¶
What is MCP and why does Promptise Foundry use it?¶
MCP (Model Context Protocol) is the open standard for connecting LLMs to tools, resources, and prompts. Promptise Foundry is MCP-native — agents auto-discover tools from any MCP server URL, schemas convert to typed tools automatically, and you can build your own MCP servers with the included SDK.
Does Promptise Foundry have its own MCP client?¶
Yes — built from scratch, no third-party MCP client dependencies:
- `MCPClient` for single servers
- `MCPMultiClient` for connecting to N servers with a unified tool list and auto-routing
- `MCPToolAdapter` for converting MCP tools to LangChain `BaseTool` objects with recursive schema handling
Supports HTTP, SSE, and stdio transports. Bearer token and API key authentication.
Can I build my own MCP server with Promptise Foundry?¶
Yes — same relationship to MCP that FastAPI has to REST. Decorators for tools, resources, and prompts. Schema auto-generated from type hints. Middleware chain. Authentication, guards, caching, health checks, metrics, exception handlers, webhooks, dependency injection (FastAPI-style Depends), session state, versioning, namespace transforms, OpenAPI tool generation, streaming, elicitation, sampling, and a TestClient for in-process testing. See Building Production MCP Servers.
How do I deploy a Promptise Foundry MCP server?¶
Supports stdio, streamable HTTP, and SSE transports with configurable CORS. Auth gate at the transport level. CLI flags: --dashboard (live terminal UI), --reload (hot-reload during development). Kubernetes health probes (liveness, readiness, startup) built in.
Agent Runtime¶
What is the Agent Runtime?¶
The operating system for autonomous AI agents. Turns stateless LLM calls into persistent, governed processes. Each AgentProcess has lifecycle states (CREATED → STARTING → RUNNING → SUSPENDED → STOPPING → STOPPED/FAILED), trigger queues, heartbeat monitoring, concurrency control, conversation buffer, and journaled state for crash recovery.
What's the difference between PromptiseAgent and the Agent Runtime?¶
| | PromptiseAgent | Agent Runtime |
|---|---|---|
| What | Single agent created via `build_agent()` | Wraps an agent in an `AgentProcess` |
| Invocation | `.run()` or `.chat()` | Triggered by events, cron, webhooks, files |
| Lifecycle | Stateless between calls | Long-running, persistent, recoverable |
| Use case | Request/response | Autonomous, ambient, scheduled |
What trigger types can launch an autonomous agent?¶
Five built-in:
- `CronTrigger` — cron expressions
- `EventTrigger` — `EventBus` subscription
- `MessageTrigger` — topic-based pub/sub with wildcards
- `WebhookTrigger` — HTTP POST with HMAC verification
- `FileWatchTrigger` — directory monitoring with glob patterns
Multiple triggers compose on one process. Custom trigger types can be registered.
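The HMAC verification the `WebhookTrigger` performs can be sketched with the standard library (the helper names here are illustrative, not the framework's actual API):

```python
import hmac
import hashlib

def sign_payload(secret: bytes, body: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature a webhook sender attaches."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Constant-time comparison avoids timing attacks on the signature."""
    expected = sign_payload(secret, body)
    return hmac.compare_digest(expected, signature)

secret = b"shared-secret"
body = b'{"event": "deploy"}'
sig = sign_payload(secret, body)
assert verify_webhook(secret, body, sig)
assert not verify_webhook(secret, b'{"event": "tampered"}', sig)
```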
How does Promptise Foundry handle crash recovery?¶
Through journals. InMemoryJournal and FileJournal record every state transition, trigger event, and invocation result. The ReplayEngine reconstructs state from a checkpoint plus replay log. When a process crashes, it restarts from the last known good state — no lost conversations, no lost trigger queue, no data loss.
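The journal-and-replay idea can be illustrated with a minimal stdlib sketch (the `FileJournalSketch` class and record shapes are invented for illustration; the real `FileJournal`/`ReplayEngine` API may differ):

```python
import json
import tempfile
from pathlib import Path

class FileJournalSketch:
    """Append-only journal: one JSON record per line."""
    def __init__(self, path):
        self.path = Path(path)

    def append(self, record: dict) -> None:
        with self.path.open("a") as f:
            f.write(json.dumps(record) + "\n")

    def replay(self, initial_state: dict) -> dict:
        """Rebuild state by folding every recorded event over a checkpoint."""
        state = dict(initial_state)
        if not self.path.exists():
            return state
        for line in self.path.read_text().splitlines():
            event = json.loads(line)
            if event["type"] == "transition":
                state["lifecycle"] = event["to"]
            elif event["type"] == "trigger":
                state.setdefault("queue", []).append(event["payload"])
        return state

with tempfile.TemporaryDirectory() as d:
    journal = FileJournalSketch(Path(d) / "agent.journal")
    journal.append({"type": "transition", "to": "RUNNING"})
    journal.append({"type": "trigger", "payload": {"cron": "* * * * *"}})
    # Simulated crash: a fresh replay rebuilds the same state from disk.
    state = journal.replay({"lifecycle": "CREATED"})
    assert state["lifecycle"] == "RUNNING"
    assert state["queue"] == [{"cron": "* * * * *"}]
```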
Does Promptise Foundry support distributed deployments?¶
Yes. RuntimeCoordinator for multi-node coordination, with StaticDiscovery and RegistryDiscovery for node discovery. Health checks over HTTP. No etcd or Consul dependency required.
What governance does the Agent Runtime provide?¶
Four subsystems:
| Subsystem | What it does |
|---|---|
| Budget | Per-run and daily limits on tool calls, LLM turns, cost units, irreversible actions |
| Health | Anomaly detection — stuck loops, repeating patterns, empty responses, high error rates |
| Mission | LLM-as-judge evaluation against success criteria with confidence thresholds |
| Secrets | Per-process credentials with TTL expiry, rotation without restart, zero-fill revocation, never serialized to journal |
Escalation via webhook POST and EventBus emission. Enforcement: log, pause, stop, or escalate.
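The Budget subsystem's hard-limit behavior can be sketched as a counter that raises instead of silently degrading (class and method names here are hypothetical, not the framework's API):

```python
class BudgetExceeded(RuntimeError):
    pass

class RunBudget:
    """Track per-run counters against hard limits; raise instead of degrading."""
    def __init__(self, max_tool_calls: int, max_llm_turns: int):
        self.limits = {"tool_calls": max_tool_calls, "llm_turns": max_llm_turns}
        self.used = {"tool_calls": 0, "llm_turns": 0}

    def charge(self, kind: str, amount: int = 1) -> None:
        self.used[kind] += amount
        if self.used[kind] > self.limits[kind]:
            raise BudgetExceeded(f"{kind} budget exhausted ({self.limits[kind]})")

budget = RunBudget(max_tool_calls=2, max_llm_turns=10)
budget.charge("tool_calls")
budget.charge("tool_calls")
try:
    budget.charge("tool_calls")   # third call exceeds the limit of 2
    exceeded = False
except BudgetExceeded:
    exceeded = True
assert exceeded
```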
Security & Multi-User¶
Does Promptise Foundry handle multi-user access control?¶
Yes. The MCP server SDK ships:
- `JWTAuth` (HS256), `AsymmetricJWTAuth` (RS256/ES256), `APIKeyAuth`
- Capability-based access policies
- Per-tool permission guards: `HasRole`, `HasAllRoles`, `RequireAuth`, `RequireClientId`
- Per-user audit trails (HMAC-chained for tamper detection)
- `CallerContext` propagation across the entire stack via async contextvars
Built for multi-tenant production deployments. See Building Multi-User Systems.
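The HMAC-chained audit trail works on the same principle as a hash chain: each entry's MAC covers the previous entry's MAC, so editing any record invalidates everything after it. A stdlib sketch of the idea (function names are illustrative):

```python
import hashlib
import hmac
import json

GENESIS = "0" * 64

def chain_entry(key: bytes, prev_mac: str, payload: dict) -> dict:
    """Each entry's MAC covers the previous MAC, so edits break the chain."""
    body = prev_mac + json.dumps(payload, sort_keys=True)
    mac = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

def verify_chain(key: bytes, entries: list) -> bool:
    prev = GENESIS
    for entry in entries:
        body = prev + json.dumps(entry["payload"], sort_keys=True)
        expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

key = b"audit-key"
log, prev = [], GENESIS
for action in ("login", "call:search", "logout"):
    entry = chain_entry(key, prev, {"user": "alice", "action": action})
    log.append(entry)
    prev = entry["mac"]
assert verify_chain(key, log)
log[1]["payload"]["action"] = "call:delete"   # tamper with the middle entry
assert not verify_chain(key, log)             # the break is detected
```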
How does Promptise Foundry handle prompt injection and security?¶
PromptiseSecurityScanner with six detection heads:
| Head | Detects |
|---|---|
| DeBERTa | Prompt injection (ML model) |
| Regex (69 patterns) | PII |
| Regex (96 patterns) | Credentials |
| GLiNER | Named entities |
| Llama Guard / Azure AI | Content safety |
| Custom rules | Domain-specific |
All models run locally. Input blocking and output redaction built in. Memory retrieval and semantic cache responses are also rescanned.
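The regex heads boil down to pattern tables applied for blocking or redaction. A toy two-pattern redactor (the real scanner reportedly ships 69 PII and 96 credential patterns; these two and the `redact` helper are illustrative only):

```python
import re

# Two illustrative patterns; the production scanner has far more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace every PII match with a bracketed label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

out = redact("Contact alice@example.com, SSN 123-45-6789.")
assert out == "Contact [EMAIL], SSN [SSN]."
```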
Can I sandbox code execution?¶
Yes. Docker-based sandbox with seccomp syscall filtering, capability dropping (~40 caps), read-only rootfs, resource limits (CPU, memory, time), and network isolation (none, restricted, full). Optional gVisor kernel for stronger isolation. Five agent tools auto-injected when sandbox is enabled — execute, read file, write file, list files, install package. Path traversal and shell injection prevention built in.
Memory, Cache, and Persistence¶
What memory backends does Promptise Foundry support?¶
Three providers ship in the framework:
- `InMemoryProvider` — testing
- `ChromaProvider` — local vector search, persistent
- `Mem0Provider` — enterprise-grade graph search
Configured on build_agent(). Before every invocation, the agent auto-searches memory and injects relevant results into the system prompt with prompt-injection mitigation built in.
Does Promptise Foundry support conversation persistence?¶
Yes — `ConversationStore` protocol with four backends:

- `InMemoryConversationStore`
- `SQLiteConversationStore`
- `PostgresConversationStore`
- `RedisConversationStore`
The chat() method handles load → invoke → persist automatically. Session ownership is enforced.
Does Promptise Foundry support semantic caching?¶
Yes. SemanticCache with in-memory or Redis backends. Per-user, per-session, or shared scope isolation. Serves cached responses for semantically similar queries — typically 30–50% cost reduction. Output guardrails re-scan cached responses. GDPR purge_user() supported. Encrypted-at-rest option available for Redis.
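The core mechanism — embedding-similarity lookup with scope isolation — can be sketched in plain Python (the class, the cosine threshold, and the toy vectors are all illustrative; the real `SemanticCache` uses proper embeddings and backends):

```python
import math

class SemanticCacheSketch:
    """Cache keyed by embedding similarity, isolated per scope (e.g. per-user)."""
    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries = {}   # scope -> list of (vector, response)

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def get(self, scope, vector):
        for cached_vec, response in self.entries.get(scope, []):
            if self._cosine(vector, cached_vec) >= self.threshold:
                return response   # semantically similar enough: cache hit
        return None

    def put(self, scope, vector, response):
        self.entries.setdefault(scope, []).append((vector, response))

    def purge_user(self, scope):
        self.entries.pop(scope, None)   # GDPR-style purge of one scope

cache = SemanticCacheSketch()
cache.put("user:alice", [1.0, 0.0, 0.1], "Paris")
assert cache.get("user:alice", [0.99, 0.0, 0.12]) == "Paris"  # near-duplicate hits
assert cache.get("user:bob", [1.0, 0.0, 0.1]) is None          # scope isolation
cache.purge_user("user:alice")
assert cache.get("user:alice", [1.0, 0.0, 0.1]) is None
```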
Prompt Engineering¶
Does Promptise Foundry have prompt engineering features?¶
Yes — prompts as software components:
- `@prompt` decorator
- 8 PromptBlock types (`Identity`, `Rules`, `OutputFormat`, `ContextSlot`, `Section`, `Examples`, `Conditional`, `Composite`) with priority-based token budgeting
- `ConversationFlow` — system prompts that transform across phases
- 5 composable strategies: `ChainOfThought`, `StructuredReasoning`, `SelfCritique`, `PlanAndExecute`, `Decompose`
- 4 built-in perspectives: `Analyst`, `Critic`, `Advisor`, `Creative`
- Guards: `ContentFilterGuard`, `LengthGuard`, `SchemaStrictGuard`, custom validators
- 14 context providers — Tool, Memory, Task, Blackboard, User, Environment, Conversation, Team, Error, Output, Static, Callable, Conditional, World
- `PromptInspector` for tracing assembly step-by-step
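Priority-based token budgeting amounts to a greedy pack: keep the highest-priority blocks that fit, drop the rest. A stdlib sketch under the simplifying assumption that one token is one whitespace-separated word (the function name and block tuples are illustrative, not the framework's API):

```python
def assemble_prompt(blocks, token_budget, count_tokens=lambda s: len(s.split())):
    """Keep highest-priority blocks that fit the budget; drop the rest."""
    chosen = set()
    remaining = token_budget
    # Visit blocks from highest to lowest priority.
    for idx in sorted(range(len(blocks)), key=lambda i: -blocks[i][0]):
        cost = count_tokens(blocks[idx][1])
        if cost <= remaining:
            chosen.add(idx)
            remaining -= cost
    # Emit surviving blocks in declaration order, not priority order.
    return "\n".join(blocks[i][1] for i in range(len(blocks)) if i in chosen)

blocks = [
    (100, "You are a careful analyst."),              # Identity: highest priority
    (50, "Answer in JSON."),                          # OutputFormat
    (10, "Example 1 ... a very long example " * 20),  # Examples: dropped first
]
prompt = assemble_prompt(blocks, token_budget=12)
assert "careful analyst" in prompt
assert "Answer in JSON." in prompt
assert "Example 1" not in prompt   # the oversized low-priority block was dropped
```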
Multi-Agent Systems¶
Does Promptise Foundry support multi-agent systems?¶
Yes. Cross-agent delegation:
- `ask_peer()` — send a question to another agent over HTTP+JWT and await the answer
- `broadcast()` — send to multiple peers in parallel with a timeout
Graceful degradation if a peer fails. SuperAgent YAML files declare cross-agent references with cycle detection.
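The parallel fan-out with graceful degradation can be sketched with stdlib asyncio — a slow or failing peer yields `None` instead of failing the whole broadcast (the peer callables below stand in for the framework's HTTP+JWT transport; this is not its real API):

```python
import asyncio

async def broadcast(peers: dict, question: str, timeout: float = 2.0) -> dict:
    """Ask every peer in parallel; failures and timeouts degrade to None."""
    async def ask_one(ask):
        try:
            return await asyncio.wait_for(ask(question), timeout)
        except Exception:          # timeout or peer error: degrade gracefully
            return None
    results = await asyncio.gather(*(ask_one(f) for f in peers.values()))
    return dict(zip(peers, results))

async def fast_peer(q):
    return f"answer to {q!r}"

async def hung_peer(q):
    await asyncio.sleep(60)        # never answers within the timeout

answers = asyncio.run(broadcast({"fast": fast_peer, "hung": hung_peer},
                                "status?", timeout=0.1))
assert answers["fast"] == "answer to 'status?'"
assert answers["hung"] is None     # degraded, not crashed
```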
Configuration Files¶
What is a SuperAgent file?¶
A `.superagent` YAML file that defines an entire agent declaratively — model, instructions, MCP servers, memory, sandbox, observability, cache, guardrails, cross-agents. Environment variable resolution via `${VAR}` and `${VAR:-default}`. Loadable via the CLI with `promptise agent <file>`.
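A minimal sketch of what such a file might look like — the field names below are illustrative guesses based on the capabilities listed, not the framework's documented schema; only the `${VAR:-default}` resolution syntax is confirmed:

```yaml
# my_agent.superagent — hypothetical example, field names are illustrative
model: "anthropic:claude-sonnet-4.5"
instructions: |
  You are a support agent. Be concise.
mcp_servers:
  - url: ${MCP_URL:-http://localhost:8000/mcp}
memory:
  provider: chroma
sandbox:
  enabled: true
```

A file like this would then run with `promptise agent my_agent.superagent`, per the CLI reference below.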
What is an .agent manifest?¶
A YAML manifest for the Agent Runtime — declares model, instructions, MCP servers, triggers, world state, memory, journal, open mode, budget, health, mission, and secrets. Validated, savable, deployable from the CLI. Distinct from .superagent (one-shot agents); .agent is for runtime processes with triggers and lifecycle.
What is Open Mode?¶
Self-modifying agents with 14 meta-tools — modify_instructions, create_tool, connect_mcp_server, add_trigger, remove_trigger, spawn_process, list_processes, store_memory, search_memory, forget_memory, list_capabilities, get_secret, check_budget, check_mission. Guardrails: max instruction length, max custom tools, MCP URL whitelist, mandatory sandbox for agent-written code. Hot-reload without losing conversation state. Rollback to original config.
Operations¶
How does observability work?¶
Four levels — OFF, BASIC, STANDARD, FULL. Every LLM turn, tool call, token count, latency, retry, cache hit/miss is recorded. 8 transporters: HTML report, JSON file, structured log, console, Prometheus, OpenTelemetry, webhook, callback. Ring buffer with configurable max entries. Thread-safe.
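The ring-buffer recorder pattern is a thread-safe bounded deque: once full, the oldest entries are evicted. A stdlib sketch (the `RecorderSketch` class is illustrative, not the framework's recorder API):

```python
import threading
from collections import deque

class RecorderSketch:
    """Thread-safe ring buffer: oldest entries are evicted at capacity."""
    def __init__(self, max_entries: int):
        self._entries = deque(maxlen=max_entries)  # deque drops from the left
        self._lock = threading.Lock()

    def record(self, event: dict) -> None:
        with self._lock:
            self._entries.append(event)

    def snapshot(self) -> list:
        with self._lock:
            return list(self._entries)

rec = RecorderSketch(max_entries=3)
for i in range(5):
    rec.record({"turn": i, "latency_ms": 10 * i})
events = rec.snapshot()
assert len(events) == 3
assert [e["turn"] for e in events] == [2, 3, 4]   # turns 0 and 1 were evicted
```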
Is there a dashboard for monitoring agents?¶
Yes. Two dashboards — one for the MCP server (six tabs: server overview, tool stats, agents, request log, performance, raw logs) and one for the Agent Runtime (process state, invocation counts, trigger status, context inspection, memory usage, journal history). Both are live terminal UIs.
Can I track costs with Promptise Foundry?¶
Promptise Foundry does not estimate or track LLM provider prices — those change weekly and would require constant maintenance. The Budget governance system tracks tool calls, LLM turns, and abstract cost units that you can map to your own pricing model. ToolCostAnnotations on tools let you assign per-call cost weights.
What about job queues?¶
The MCP server SDK includes MCPQueue with priority scheduling, retry with exponential backoff, progress reporting, and cancellation. Auto-registered tools — queue_submit, queue_status, queue_result, queue_cancel, queue_list. Background tasks supported for fire-and-forget work after a handler returns.
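Two of the mechanics named above — exponential backoff and priority scheduling — are standard and easy to sketch with the stdlib (function and variable names here are illustrative, not `MCPQueue`'s API):

```python
import heapq
import itertools

def backoff_delays(base: float = 1.0, factor: float = 2.0, retries: int = 4):
    """Exponential backoff schedule: base, base*factor, base*factor^2, ..."""
    return [base * factor ** n for n in range(retries)]

assert backoff_delays() == [1.0, 2.0, 4.0, 8.0]

# Priority scheduling: heapq pops the lowest (priority, seq) tuple first;
# the sequence counter keeps FIFO order among jobs of equal priority.
counter = itertools.count()
queue = []
for priority, job in [(5, "low"), (1, "urgent"), (5, "low-2")]:
    heapq.heappush(queue, (priority, next(counter), job))
order = [heapq.heappop(queue)[2] for _ in range(3)]
assert order == ["urgent", "low", "low-2"]
```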
How is testing done?¶
For MCP servers: TestClient runs the full pipeline in-process — validation, dependency injection, guards, middleware, handler — with no network. For prompts: mock_llm(), mock_context(), assert_schema(), assert_contains(), assert_latency(), assert_guard_passed() helpers work with pytest.
Does Promptise Foundry have a CLI?¶
Yes:
| Command | What it does |
|---|---|
| `promptise agent <file>` | Run a `.superagent` |
| `promptise validate` | Validate a config file |
| `promptise list-tools` | Discover tools from MCP servers |
| `promptise run` | Run a one-shot prompt |
| `promptise serve` | Serve an MCP server |
The runtime has its own CLI for managing processes, triggers, and manifests.
Examples and Community¶
Where can I find examples?¶
Runnable examples at github.com/promptise-com/foundry/tree/main/examples — covering agents, MCP servers (examples/mcp/), prompt engineering (examples/prompts/), and runtime use cases (examples/runtime/). Every example uses real LLM calls — no mocks, no stubs.
How can I contribute?¶
Issues and pull requests welcome at github.com/promptise-com/foundry. Conventional commits required (feat:, fix:, docs:, refactor:, test:, chore:). Type hints on all public APIs (Python 3.10+ syntax), Google-style docstrings, and tests for all new functionality. See Contributing.