Prompts API Reference¶
Prompt engineering framework — decorators, blocks, flows, strategies, guards, context providers, chaining, loading, versioning, and testing.
Core¶
prompt decorator¶
promptise.prompts.core.prompt(model='openai:gpt-5-mini', *, observe=False, inspect=None)
¶
Decorator that turns a function into a `Prompt`.
The function's docstring becomes the prompt template. Parameters
become template variables ({param_name}). The return type
annotation determines output parsing.
Usage::
@prompt(model="openai:gpt-5-mini")
async def summarize(text: str, max_words: int = 100) -> str:
"""Summarize in {max_words} words: {text}"""
result = await summarize("long article...")
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `str` | LLM model identifier (e.g. `'openai:gpt-5-mini'`). | `'openai:gpt-5-mini'` |
| `observe` | `bool` | Enable observability recording. | `False` |
| `inspect` | `_Inspector \| None` | Optional `_Inspector` for recording traces. | `None` |
Returns:

| Type | Description |
|---|---|
| `Callable[[Callable[..., Any]], Prompt]` | Decorator that produces a `Prompt`. |
Prompt¶
promptise.prompts.core.Prompt
¶
Dual-mode prompt: standalone LLM caller + agent instruction source.
Wraps a function whose docstring is the prompt template. Decorate
the function with `@prompt(model=...)` to create a `Prompt` instance.
Standalone mode — await prompt("text")::
result = await analyze("quarterly report")
Agent-integrated mode — render without LLM call::
text = await prompt.render_async(ctx) # full prompt text
messages = await prompt.to_messages(ctx) # LangChain messages
All with_*() methods return copies (immutable composition).
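The copy-on-write style behind `with_*()` can be sketched in plain Python (a stand-alone illustration of the pattern, not the library's actual implementation):

```python
from dataclasses import dataclass, replace

# Stand-alone sketch of the immutable with_*() style: every "mutation"
# returns a new copy and leaves the original untouched.
@dataclass(frozen=True)
class SketchPrompt:
    model: str
    constraints: tuple[str, ...] = ()

    def with_model(self, model: str) -> "SketchPrompt":
        # replace() builds a new frozen instance; self is untouched
        return replace(self, model=model)

    def with_constraints(self, *texts: str) -> "SketchPrompt":
        return replace(self, constraints=self.constraints + texts)

base = SketchPrompt(model="openai:gpt-5-mini")
tuned = base.with_model("openai:gpt-5").with_constraints("Be concise")
```

Because each call returns a fresh copy, a base prompt can be safely specialized in several directions without the variants interfering with each other.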
model
property
¶
LLM model identifier.
name
property
¶
Prompt name (derived from the function name).
return_type
property
¶
Declared return type.
template
property
¶
Raw prompt template text.
__call__(*args, **kwargs)
async
¶
Execute the full prompt pipeline with an LLM call.
Pipeline:

1. Bind arguments to function signature
2. Render template with variables
3. Build PromptContext
4. Run on_before hook
5. Run context providers
6. Apply perspective
7. Apply strategy
8. Inject constraints + schema instructions
9. Run input guards
10. Call LLM
11. Parse strategy output
12. Parse return type
13. Run output guards
14. Run on_after hook
15. Return typed result
on_after(fn)
¶
Return a copy with an after-execution hook.
on_before(fn)
¶
Return a copy with a before-execution hook.
on_error(fn)
¶
Return a copy with an error-handling hook.
render(ctx=None, **kwargs)
¶
Render the full prompt text WITHOUT calling the LLM.
Runs: blocks → template → perspective → strategy → constraints.
Async providers are skipped; use `render_async` for the full pipeline.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ctx` | `PromptContext \| None` | Optional `PromptContext`. | `None` |
| `**kwargs` | `Any` | Template variables for rendering. | `{}` |

Returns:

| Type | Description |
|---|---|
| `str` | Rendered prompt text. |
render_async(ctx=None, **kwargs)
async
¶
Async render with context providers (no LLM call).
Full pipeline: blocks → template → context providers → perspective → strategy → constraints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ctx` | `PromptContext \| None` | Optional `PromptContext`. | `None` |
| `**kwargs` | `Any` | Template variables. | `{}` |

Returns:

| Type | Description |
|---|---|
| `str` | Rendered prompt text with all dynamic context injected. |
to_messages(ctx=None, **kwargs)
async
¶
Produce LangChain message objects for agent integration.
Returns [SystemMessage(rendered_prompt), HumanMessage(input)]
when kwargs are provided, or just [SystemMessage] for
system prompt use.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ctx` | `PromptContext \| None` | Optional `PromptContext`. | `None` |
| `**kwargs` | `Any` | Template variables. | `{}` |

Returns:

| Type | Description |
|---|---|
| `list[Any]` | List of LangChain message objects. |
with_blocks(*blocks)
¶
Return a copy with additional prompt blocks.
with_constraints(*texts)
¶
Return a copy with additional constraints.
with_context(*providers)
¶
Return a copy with additional context providers.
with_guards(*guards)
¶
Return a copy with additional guards.
with_inspector(inspector)
¶
Return a copy with a PromptInspector attached.
with_model(model)
¶
Return a copy with a different model.
with_perspective(perspective)
¶
Return a copy with a cognitive perspective.
with_strategy(strategy)
¶
Return a copy with a reasoning strategy.
with_world(**contexts)
¶
Return a copy with pre-populated world contexts.
PromptStats¶
promptise.prompts.core.PromptStats
dataclass
¶
Execution statistics for a single prompt call.
constraint¶
promptise.prompts.core.constraint(text)
¶
Decorator that attaches a constraint to a `Prompt`.
Constraints are hard requirements appended to the prompt text as numbered instructions before the LLM call.
Usage::
@prompt(model="openai:gpt-5-mini")
@constraint("Must cite at least 2 sources")
@constraint("Under 300 words")
async def write_argument(topic: str) -> str:
"""Write a persuasive argument about: {topic}"""
Blocks (Layer 1)¶
Composable prompt blocks with priority-based token budgeting.
Block¶
promptise.prompts.blocks.Block
¶
Bases: Protocol
A composable unit of prompt content.
Implement this protocol to create custom block types.
The priority determines survival when token budgets are tight
(10 = always included, 1 = nice-to-have).
Three ways to create custom blocks:

1. Class: Implement the Block protocol::

       class MyBlock:
           name = "my_block"
           priority = 5

           def render(self, ctx=None) -> str:
               return "Custom content"

2. @block decorator: Turn a function into a block::

       @block("my_block", priority=5)
       def my_block(ctx=None) -> str:
           return "Custom content"

3. SimpleBlock: Inline with just a string::

       my_block = SimpleBlock("my_block", "Custom content", priority=5)
BlockContext¶
promptise.prompts.blocks.BlockContext
dataclass
¶
Runtime context available to blocks during rendering.
Identity¶
promptise.prompts.blocks.Identity
¶
Who the agent is — always included (priority 10).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `description` | `str` | Core identity statement. | *required* |
| `name` | `str` | Block name. Defaults to `'identity'`. | `'identity'` |
| `traits` | `list[str] \| None` | Optional list of personality / capability traits. | `None` |
Rules¶
promptise.prompts.blocks.Rules
¶
Behavioral constraints (priority 9).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `rules` | `list[str]` | List of rule strings. | *required* |
| `name` | `str` | Block name. Defaults to `'rules'`. | `'rules'` |
OutputFormat¶
promptise.prompts.blocks.OutputFormat
¶
Response structure specification (priority 8).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `format` | `str` | Output format. | `'text'` |
| `schema` | `type \| None` | Optional Pydantic model or dataclass for JSON output. | `None` |
| `instructions` | `str` | Additional formatting instructions. | `''` |
| `name` | `str` | Block name. Defaults to `'output_format'`. | `'output_format'` |
ContextSlot¶
promptise.prompts.blocks.ContextSlot
¶
Dynamic injection point, filled at runtime (priority configurable).
Use `fill` to provide content. Unfilled slots render the
default value (empty string if not set).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Slot identifier (also the block name). | *required* |
| `priority` | `int` | Importance 1-10. Default 6. | `6` |
| `default` | `str` | Fallback text when unfilled. | `''` |
fill(content)
¶
Return a copy with content filled.
Section¶
promptise.prompts.blocks.Section
¶
Custom named section (priority configurable).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Section identifier (also the block name). | *required* |
| `content` | `str \| Callable[..., str]` | Static text or a callable returning text. | *required* |
| `priority` | `int` | Importance 1-10. Default 5. | `5` |
Examples¶
promptise.prompts.blocks.Examples
¶
Few-shot examples with auto-truncation (priority 4).
When token budget is tight, fewer examples are included.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `examples` | `list[dict[str, str]]` | List of example dicts. | *required* |
| `name` | `str` | Block name. Defaults to `'examples'`. | `'examples'` |
| `max_count` | `int \| None` | Maximum examples to include. Default: all. | `None` |
Conditional¶
promptise.prompts.blocks.Conditional
¶
Block that renders only when a condition is true.
Inherits priority from the inner block.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Block name. | *required* |
| `block` | `Block` | The inner block to conditionally render. | *required* |
| `condition` | `Callable[[BlockContext \| None], bool]` | Predicate deciding whether the block renders. | *required* |
Composite¶
promptise.prompts.blocks.Composite
¶
Groups multiple blocks as a single unit.
Priority is the maximum of all inner blocks.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Block name. | *required* |
| `blocks` | `list[Block]` | List of inner blocks. | *required* |
| `separator` | `str` | Join string between blocks. | `'\n\n'` |
SimpleBlock¶
promptise.prompts.blocks.SimpleBlock
¶
A block created from a string or callable.
The simplest way to create a custom block::
disclaimer = SimpleBlock("disclaimer", "This is not financial advice.", priority=9)
dynamic = SimpleBlock("stats", lambda ctx: f"Users: {ctx.metadata['users']}", priority=5)
ToolsBlock¶
promptise.prompts.blocks.ToolsBlock
¶
Render available tool schemas for the LLM.
Auto-formats tool names, descriptions, and parameter schemas
from BaseTool instances. Priority 9 (always included —
tools are essential for agentic reasoning).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `tools` | `list \| None` | List of LangChain `BaseTool` instances. | `None` |
| `show_schemas` | `bool` | Include full JSON parameter schemas. | `True` |
| `max_tools` | `int` | Maximum tools to show (rest truncated with count). | `50` |
PhaseBlock¶
promptise.prompts.blocks.PhaseBlock
¶
Stage-specific instructions that change per reasoning phase.
Each phase has its own instructions. The block renders only the instructions for the current phase. Priority 8 (guides behavior at each step).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `instructions` | `dict[str, str] \| None` | Mapping of phase names to instruction text. | `None` |
| `current_phase` | `str` | The currently active phase. | `''` |
set_phase(phase)
¶
Return a new block with the phase set.
PlanBlock¶
promptise.prompts.blocks.PlanBlock
¶
Render the current plan with subgoal progress.
Shows each subgoal with completion status. Priority 7 (important for multi-step reasoning).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `subgoals` | `list[str] \| None` | List of subgoal descriptions. | `None` |
| `completed` | `list[str] \| None` | List of completed subgoal descriptions. | `None` |
| `active` | `str` | The currently active subgoal. | `''` |
update(subgoals, completed, active)
¶
Return a new block with updated plan state.
ObservationBlock¶
promptise.prompts.blocks.ObservationBlock
¶
Inject recent tool results into the prompt.
Shows the most recent tool observations from state. Priority 6 (trimmed if token budget is tight).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `observations` | `list[dict] \| None` | List of observation dicts. | `None` |
| `max_observations` | `int` | Maximum observations to show. | `5` |
| `max_result_length` | `int` | Truncate each result to this length. | `500` |
update(observations)
¶
Return a new block with updated observations.
ReflectionBlock¶
promptise.prompts.blocks.ReflectionBlock
¶
Inject past learnings from reflection/evaluation.
Shows recent mistakes and corrections to prevent repeating them. Priority 4 (dropped early under token pressure).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `reflections` | `list[dict] \| None` | List of reflection dicts with iteration, mistake, correction, confidence fields. | `None` |
| `max_reflections` | `int` | Maximum reflections to show. | `3` |
update(reflections)
¶
Return a new block with updated reflections.
PromptAssembler¶
promptise.prompts.blocks.PromptAssembler
¶
Assembles blocks into a final prompt with optional token budgeting.
When a token_budget is given to `assemble`, blocks are
dropped lowest-priority-first until the total fits within budget.
Higher-priority blocks survive; lower-priority blocks are listed
in excluded.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*initial_blocks` | `Block` | Blocks to include initially. | `()` |
| `separator` | `str` | String between rendered blocks. | `'\n\n'` |
| `token_budget` | `int \| None` | Default token budget (`None` disables budgeting). | `None` |
add(block)
¶
Add a block. Returns self for chaining.
assemble(ctx=None, *, token_budget=None)
¶
Assemble all blocks into a final prompt.
When a token budget is set (either here or in the constructor), blocks are dropped lowest-priority-first until the assembled prompt fits within the budget. Blocks with the same priority are dropped in reverse insertion order (later blocks first).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ctx` | `BlockContext \| None` | Optional rendering context for blocks. | `None` |
| `token_budget` | `int \| None` | Override the constructor budget for this call. | `None` |

Returns:

| Type | Description |
|---|---|
| `AssembledPrompt` | An `AssembledPrompt` with excluded block names, token estimate, and per-block traces. |
fill_slot(slot_name, content)
¶
Fill a `ContextSlot` by name. Returns self for chaining.
remove(name)
¶
Remove a block by name. Returns self for chaining.
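The drop-lowest-priority-first rule can be modeled in a few lines (a simplified stand-alone sketch, not the library's code; the ~4-characters-per-token cost estimate is an assumption for illustration):

```python
# Simplified model of the assembler's budgeting rule: drop the
# lowest-priority block first, breaking ties in reverse insertion order,
# until the estimated token cost fits within the budget.
def fit_to_budget(blocks, token_budget):
    """blocks: list of (name, priority, text) tuples."""
    kept = list(blocks)
    excluded = []

    def cost(items):
        # crude token estimate: ~4 characters per token (assumption)
        return sum(len(text) for _, _, text in items) // 4

    while kept and cost(kept) > token_budget:
        # lowest priority goes first; among ties, the latest-inserted block
        idx, _ = min(reversed(list(enumerate(kept))), key=lambda p: p[1][1])
        excluded.append(kept.pop(idx)[0])
    return [name for name, _, _ in kept], excluded
```

With a budget of 25 "tokens" and four 40-character blocks, the two priority-4 example blocks are excluded (latest first) while the identity and rules blocks survive.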
AssembledPrompt¶
promptise.prompts.blocks.AssembledPrompt
dataclass
¶
Result of assembling prompt blocks.
block (decorator)¶
promptise.prompts.blocks.block(name, *, priority=5)
¶
Decorator that turns a function into a Block.
Usage::
@block("safety_rules", priority=9)
def safety_rules(ctx=None) -> str:
return "1. Never share personal data\n2. Always cite sources"
# Use in PromptAssembler or PromptNode:
assembler = PromptAssembler(Identity("Analyst"), safety_rules)
blocks (utility)¶
promptise.prompts.blocks.blocks(*block_list)
¶
Decorator that attaches blocks to a `Prompt`.
Blocks are assembled and prepended to the system prompt at execution time.
Usage::
@prompt(model="openai:gpt-5-mini")
@blocks(Identity("Expert analyst"), Rules(["Cite sources"]))
async def analyze(text: str) -> str:
"""Analyze: {text}"""
PromptBuilder¶
Fluent runtime prompt construction.
PromptBuilder¶
promptise.prompts.builder.PromptBuilder
¶
Fluent builder for constructing `Prompt` instances at runtime.
All methods return self for chaining. Call `build` to
produce the final `Prompt`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Prompt name (used for logging and observability). | `'builder_prompt'` |
build()
¶
Construct the `Prompt` from accumulated configuration.
Creates a synthetic function from the template text, then wraps it in a Prompt with all configured components.
Returns:

| Type | Description |
|---|---|
| `Prompt` | The configured `Prompt`. |
constraint(*texts)
¶
Add constraints.
context(*providers)
¶
Add context providers.
env(env_ctx)
¶
Set the environment context.
guard(*guards)
¶
Add guards (applied to both input and output).
model(name)
¶
Set the LLM model identifier.
observe(enabled=True)
¶
Enable or disable observability recording.
on_after(fn)
¶
Set the after-execution hook.
on_before(fn)
¶
Set the before-execution hook.
on_error(fn)
¶
Set the error-handling hook.
output_type(t)
¶
Set the output type for structured parsing.
perspective(p)
¶
Set the cognitive perspective.
strategy(s)
¶
Set the reasoning strategy.
system(text)
¶
Set the system/instruction text (prepended to template).
template(text)
¶
Set the prompt template with {variable} placeholders.
user(user_ctx)
¶
Set the user context.
world(**contexts)
¶
Add world contexts by name.
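The return-self chaining style the builder relies on can be sketched stand-alone (an illustration of the pattern only; the real `PromptBuilder` produces a `Prompt`, and the dict here is a stand-in):

```python
# Stand-alone sketch of the fluent-builder style: each method mutates
# internal state and returns self, so calls chain left to right.
class SketchBuilder:
    def __init__(self, name: str = "builder_prompt"):
        self._config = {"name": name, "model": "", "template": "", "constraints": []}

    def model(self, name: str) -> "SketchBuilder":
        self._config["model"] = name
        return self  # returning self is what enables chaining

    def template(self, text: str) -> "SketchBuilder":
        self._config["template"] = text
        return self

    def constraint(self, *texts: str) -> "SketchBuilder":
        self._config["constraints"].extend(texts)
        return self

    def build(self) -> dict:
        # the real build() produces a Prompt; a plain dict stands in here
        return dict(self._config)

p = (
    SketchBuilder("summarize")
    .model("openai:gpt-5-mini")
    .template("Summarize: {text}")
    .constraint("Under 100 words")
    .build()
)
```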
Conversation Flows¶
ConversationFlow¶
promptise.prompts.flows.ConversationFlow
¶
Base class for conversation flow state machines.
Subclass this and use the @phase decorator to define phase
handlers. Set base_blocks for blocks that are always active.
Attributes:

| Name | Type | Description |
|---|---|---|
| `base_blocks` | `list[Block]` | Blocks always included in the prompt (class-level). |
get_prompt()
¶
Get the current prompt without advancing the turn counter.
next_turn(user_message, *, assistant_message='', transition_to=None)
async
¶
Process a conversation turn.
Records the message in history, runs the current phase handler, and returns the updated prompt.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `user_message` | `str` | The user's message this turn. | *required* |
| `assistant_message` | `str` | Optional assistant reply to record. | `''` |
| `transition_to` | `str \| None` | Force a phase transition before processing. | `None` |

Returns:

| Type | Description |
|---|---|
| `AssembledPrompt` | The assembled prompt for this turn. |
reset()
¶
Reset the flow to its initial state.
start()
async
¶
Initialize the flow and enter the initial phase.
Returns the first assembled prompt.
Raises:

| Type | Description |
|---|---|
| `ValueError` | If no initial phase is defined. |
transition(phase_name)
async
¶
Explicitly transition to a new phase.
Returns the updated prompt after the transition.
Phase¶
promptise.prompts.flows.Phase
dataclass
¶
A named phase in a conversation flow.
Phases can carry blocks that are automatically activated on entry and deactivated on exit, plus optional lifecycle hooks.
TurnContext¶
promptise.prompts.flows.TurnContext
¶
Mutable context passed to @phase handlers each turn.
Phase handlers use this to activate/deactivate blocks, fill context slots, and trigger phase transitions.
history
property
¶
Conversation message history (read-only view).
phase
property
¶
Current phase name.
state
property
¶
Arbitrary flow state dict. Mutate freely.
turn
property
¶
Current turn number (0-based).
activate(block)
¶
Add a block to the active prompt composition.
deactivate(name)
¶
Remove a block by name from the active composition.
fill_slot(name, content)
¶
Fill a `ContextSlot` block by name.
get_prompt()
¶
Assemble the current prompt (base + active blocks).
transition(phase_name)
¶
Request a transition to another phase.
The transition happens after the current handler completes.
phase (decorator)¶
promptise.prompts.flows.phase(name, *, initial=False, blocks=None, on_enter=None, on_exit=None)
¶
Decorator that marks a method as a phase handler.
Usage::
class MyFlow(ConversationFlow):
@phase("greeting", initial=True)
async def greet(self, ctx: TurnContext):
ctx.activate(Section("greet", "Say hello."))
@phase("working", blocks=[OutputFormat(format="json")])
async def work(self, ctx: TurnContext):
...
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Phase name (used for transitions). | *required* |
| `initial` | `bool` | Whether this is the starting phase. | `False` |
| `blocks` | `list[Block] \| None` | Blocks auto-activated when entering this phase. | `None` |
| `on_enter` | `Callable[..., Any] \| None` | Callback invoked on phase entry. | `None` |
| `on_exit` | `Callable[..., Any] \| None` | Callback invoked on phase exit. | `None` |
|
Strategies and Perspectives¶
Reasoning strategies and cognitive perspectives. Both are composable with +.
Strategy¶
promptise.prompts.strategies.Strategy
¶
Bases: Protocol
Protocol for reasoning strategies.
wrap() transforms the prompt text before the LLM call to inject
reasoning instructions. parse() extracts the final answer from
the LLM's raw output.
Perspective¶
promptise.prompts.strategies.Perspective
¶
Bases: Protocol
Protocol for cognitive perspectives.
Different from Strategy:

- Strategy = HOW to reason (step-by-step, critique, decompose)
- Perspective = FROM WHERE to reason (analyst, critic, advisor)

They are orthogonal and composable.
apply(prompt_text, ctx)
¶
Prepend or inject the perspective framing into prompt_text.
chain_of_thought¶
promptise.prompts.strategies.chain_of_thought = ChainOfThoughtStrategy()
module-attribute
¶
self_critique¶
promptise.prompts.strategies.self_critique = SelfCritiqueStrategy()
module-attribute
¶
structured_reasoning¶
promptise.prompts.strategies.structured_reasoning = StructuredReasoningStrategy()
module-attribute
¶
plan_and_execute¶
promptise.prompts.strategies.plan_and_execute = PlanAndExecuteStrategy()
module-attribute
¶
decompose¶
promptise.prompts.strategies.decompose = DecomposeStrategy()
module-attribute
¶
analyst¶
promptise.prompts.strategies.analyst = AnalystPerspective()
module-attribute
¶
critic¶
promptise.prompts.strategies.critic = CriticPerspective()
module-attribute
¶
advisor¶
promptise.prompts.strategies.advisor = AdvisorPerspective()
module-attribute
¶
creative¶
promptise.prompts.strategies.creative = CreativePerspective()
module-attribute
¶
perspective¶
promptise.prompts.strategies.perspective(role, instructions='')
¶
Create a `CustomPerspective` from a role and optional instructions.
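The wrap/parse contract of the Strategy protocol can be illustrated with a toy strategy (a stand-alone sketch of the protocol shape, not a library class; the `ANSWER:` marker is an invented convention for this example):

```python
# Toy strategy: wrap() injects reasoning instructions before the LLM
# call; parse() extracts the final answer from the raw LLM output.
class ToyChainOfThought:
    def wrap(self, prompt_text: str, ctx=None) -> str:
        return prompt_text + "\n\nThink step by step, then write ANSWER: followed by the result."

    def parse(self, raw_output: str) -> str:
        # everything after the last ANSWER: marker is the final answer
        marker = "ANSWER:"
        if marker in raw_output:
            return raw_output.rsplit(marker, 1)[-1].strip()
        return raw_output.strip()

s = ToyChainOfThought()
wrapped = s.wrap("What is 2 + 2?")
answer = s.parse("Step 1: add the numbers.\nANSWER: 4")
```

Because a perspective only reframes the prompt while a strategy rewrites and reparses it, the two compose without interfering, which is why they are presented as orthogonal.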
Context Providers¶
Pluggable async context that gets injected into the prompt at runtime.
context (decorator)¶
promptise.prompts.context.context(*providers)
¶
Decorator that attaches context providers to a `Prompt`.
Usage::
@prompt(model="openai:gpt-5-mini")
@context(tool_context(), memory_context())
async def analyze(text: str) -> str:
"""Analyze: {text}"""
BaseContext¶
promptise.prompts.context.BaseContext
¶
Extensible context container.
Accepts arbitrary keyword arguments. Predefined subclasses add typed convenience fields but NEVER restrict what developers can store.
Example::
# Predefined fields
user = UserContext(user_id="123", name="Alice")
# Custom fields — no subclassing needed
user = UserContext(user_id="123", department="eng", clearance="high")
# Entirely custom context
project = BaseContext(sprint="2026-Q1", budget=50000)
# Access
project.sprint # "2026-Q1"
project["budget"] # 50000
project.get("missing") # None
# Extend after creation
project.deadline = "March 2026"
# Merge
combined = project.merge(BaseContext(team_size=5))
ContextProvider¶
promptise.prompts.context.ContextProvider
¶
Bases: Protocol
Pluggable source of dynamic context for prompts.
Implement this protocol to create custom context providers. Return empty string to skip injection when data isn't available.
provide(ctx)
async
¶
Generate context text at runtime.
PromptContext¶
promptise.prompts.context.PromptContext
dataclass
¶
The agent's complete world during prompt execution.
Every context provider, strategy, perspective, guard, and hook receives this object. It carries everything the agent knows.
The `world` dict holds `BaseContext` instances keyed
by name. Predefined keys (`"user"`, `"environment"`, etc.)
have convenience properties. Developers add custom contexts via
`world["project"] = BaseContext(...)`.
UserContext¶
promptise.prompts.context.UserContext
¶
Bases: BaseContext
Who the agent is serving. Extend with any user-specific fields.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `user_id` | `str` | Unique user identifier. | `''` |
| `name` | `str` | Display name. | `''` |
| `preferences` | `dict[str, Any] \| None` | User preference dict. | `None` |
| `expertise_level` | `str` | User expertise level. | `'intermediate'` |
| `language` | `str` | Preferred language. | `'english'` |
| `**kwargs` | `Any` | Any additional fields. | `{}` |
ConversationContext¶
promptise.prompts.context.ConversationContext
¶
Bases: BaseContext
Conversation history and state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[dict[str, str]] \| None` | List of message dicts. | `None` |
| `turn_count` | `int` | Number of conversation turns. | `0` |
| `summary` | `str` | Compressed summary of older turns. | `''` |
| `**kwargs` | `Any` | Any additional fields. | `{}` |
EnvironmentContext¶
promptise.prompts.context.EnvironmentContext
¶
Bases: BaseContext
Runtime environment. Extend with deployment-specific fields.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `timestamp` | `float \| None` | Epoch timestamp (defaults to now). | `None` |
| `timezone` | `str` | IANA timezone string. | `''` |
| `platform` | `str` | OS platform (darwin, linux, windows). | `''` |
| `available_apis` | `list[str] \| None` | List of available API identifiers. | `None` |
| `**kwargs` | `Any` | Any additional fields. | `{}` |
ErrorContext¶
promptise.prompts.context.ErrorContext
¶
Bases: BaseContext
Previous errors for retry and recovery.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `errors` | `list[dict[str, Any]] \| None` | List of error dicts. | `None` |
| `retry_count` | `int` | How many retries have been attempted. | `0` |
| `last_error` | `str` | The most recent error message. | `''` |
| `**kwargs` | `Any` | Any additional fields. | `{}` |
OutputContext¶
promptise.prompts.context.OutputContext
¶
Bases: BaseContext
Expected output characteristics.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `format` | `str` | Output format. | `''` |
| `schema_description` | `str` | Human-readable schema description. | `''` |
| `examples` | `list[dict[str, Any]] \| None` | List of example outputs. | `None` |
| `constraints` | `list[str] \| None` | List of output constraint strings. | `None` |
| `**kwargs` | `Any` | Any additional fields. | `{}` |
TeamContext¶
promptise.prompts.context.TeamContext
¶
Bases: BaseContext
Other agents in the team.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agents` | `list[dict[str, Any]] \| None` | List of agent info dicts. | `None` |
| `completed_tasks` | `list[dict[str, Any]] \| None` | List of completed task dicts. | `None` |
| `**kwargs` | `Any` | Any additional fields. | `{}` |
Guards¶
Guard¶
promptise.prompts.guards.Guard
¶
Bases: Protocol
Protocol for prompt input/output guards.
Implement check_input and check_output to create a custom
guardrail. Return the (possibly transformed) value to pass, or
raise `GuardError` to reject.
GuardError¶
promptise.prompts.guards.GuardError
¶
Bases: Exception
Raised when a guard rejects input or output.
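A custom guard following this protocol can be sketched stand-alone (the `GuardError` class below is a local stand-in for `promptise.prompts.guards.GuardError`, and `BlockedWordsGuard` is a hypothetical example, not a library class):

```python
# Stand-alone sketch of the Guard protocol shape: pass the value
# through, transform it, or raise GuardError to reject.
class GuardError(Exception):
    pass

class BlockedWordsGuard:
    def __init__(self, blocked: list[str]):
        self.blocked = list(blocked)

    def check_input(self, value: str) -> str:
        # reject: raise GuardError
        if any(word in value for word in self.blocked):
            raise GuardError("blocked term in input")
        return value

    def check_output(self, value: str) -> str:
        # transform: guards may also rewrite the value they pass through
        for word in self.blocked:
            value = value.replace(word, "[redacted]")
        return value

g = BlockedWordsGuard(blocked=["secret"])
```

Rejecting on input and redacting on output shows the two ends of the contract: a guard may either stop the pipeline or silently clean the value.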
guard (decorator)¶
promptise.prompts.guards.guard(*guards)
¶
Decorator that attaches guards to a `Prompt`.
Usage::
@prompt(model="openai:gpt-5-mini")
@guard(content_filter(blocked=["secret"]), length(max_length=2000))
async def analyze(text: str) -> str:
"""Analyze: {text}"""
Chaining¶
Composable execution primitives: sequential, parallel, conditional branching, retry, and fallback.
chain¶
promptise.prompts.chain.chain(*prompts)
¶
Create a sequential chain of prompts.
Each prompt's output is passed as input to the next prompt. The final prompt's output is returned.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*prompts` | `Prompt` | Two or more `Prompt` instances to execute in sequence. | `()` |

Returns:

| Type | Description |
|---|---|
| `_Chain` | Callable chain that executes prompts sequentially. |
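The output-feeds-input semantics can be sketched with plain async callables (a stand-alone model only; the real `chain()` composes `Prompt` objects):

```python
import asyncio

# Stand-alone sketch of chain() semantics: each step's output is passed
# as input to the next, and the last step's output is returned.
def sketch_chain(*steps):
    async def run(value):
        for step in steps:
            value = await step(value)
        return value
    return run

async def upper(text: str) -> str:
    return text.upper()

async def exclaim(text: str) -> str:
    return text + "!"

pipeline = sketch_chain(upper, exclaim)
result = asyncio.run(pipeline("hello"))
```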
parallel¶
promptise.prompts.chain.parallel(**prompts)
¶
Execute multiple prompts concurrently.
All prompts receive the same input. Returns a dict mapping prompt names to their results.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `**prompts` | `Prompt` | Named `Prompt` instances. | `{}` |

Returns:

| Type | Description |
|---|---|
| `_Parallel` | Callable that executes all prompts concurrently. |
branch¶
promptise.prompts.chain.branch(condition, routes, default=None)
¶
Route to different prompts based on a condition.
The condition callable receives the same arguments as the prompts and must return a string key matching one of the routes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `condition` | `Callable[..., str]` | Function that returns a route key. | *required* |
| `routes` | `dict[str, Prompt]` | Mapping of route keys to `Prompt` instances. | *required* |
| `default` | `Prompt \| None` | Fallback prompt when no route matches. | `None` |

Returns:

| Type | Description |
|---|---|
| `_Branch` | Callable that routes to the appropriate prompt. |
retry¶
promptise.prompts.chain.retry(target, max_retries=3, backoff=1.0)
¶
Wrap a prompt with exponential backoff retry.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `target` | `Prompt` | The `Prompt` to wrap. | *required* |
| `max_retries` | `int` | Maximum number of retry attempts. | `3` |
| `backoff` | `float` | Base backoff duration in seconds (doubles each attempt). | `1.0` |

Returns:

| Type | Description |
|---|---|
| `_Retry` | Callable that retries on failure. |
fallback¶
promptise.prompts.chain.fallback(primary, *alternatives)
¶
Try prompts in order until one succeeds.
If the primary prompt fails, each alternative is tried in order. The first successful result is returned.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `primary` | `Prompt` | The preferred `Prompt`. | *required* |
| `*alternatives` | `Prompt` | Fallback prompts tried in order. | `()` |

Returns:

| Type | Description |
|---|---|
| `_Fallback` | Callable that tries prompts until success. |
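The exponential-backoff behavior of `retry()` can be modeled stand-alone (a sketch of the described semantics, not the library implementation):

```python
import asyncio

# Stand-alone sketch of retry() semantics: on failure, sleep for the
# backoff delay, double it, and try again up to max_retries times.
def sketch_retry(target, max_retries: int = 3, backoff: float = 1.0):
    async def run(*args, **kwargs):
        delay = backoff
        for attempt in range(max_retries + 1):
            try:
                return await target(*args, **kwargs)
            except Exception:
                if attempt == max_retries:
                    raise  # out of retries: surface the last error
                await asyncio.sleep(delay)
                delay *= 2
    return run

calls = {"count": 0}

async def flaky(x: str) -> str:
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return x + x

result = asyncio.run(sketch_retry(flaky, max_retries=3, backoff=0.001)("ok"))
```

The flaky target fails twice and succeeds on the third attempt, which is exactly the case retry wrappers exist for.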
Inspector¶
Introspect prompt assembly step-by-step: which blocks were included, token counts, context providers, guard results.
PromptInspector¶
promptise.prompts.inspector.PromptInspector
¶
Collects and displays prompt assembly and execution traces.
Attach to prompts, flows, or graphs to record every step of prompt composition and execution.
graph_traces
property
¶
All recorded graph traces.
traces
property
¶
All recorded prompt traces.
clear()
¶
Discard all recorded traces.
last()
¶
Most recent prompt trace, or None.
last_graph()
¶
Most recent graph trace, or None.
record_assembly(assembled, prompt_name, model)
¶
Record a prompt assembly. Returns the trace for further updates.
record_context(trace, provider_name, chars_injected, render_time_ms)
¶
Record a context provider execution within a trace.
record_execution(trace, output, latency_ms)
¶
Update a trace with execution results.
record_graph(graph_name, node_traces, total_duration_ms, path, iterations, final_state)
¶
Record a complete graph execution.
record_guard(trace, guard_name, passed)
¶
Record a guard check result.
summary()
¶
Human-readable summary of all recorded traces.
PromptTrace¶
promptise.prompts.inspector.PromptTrace
dataclass
¶
Complete trace of a prompt assembly + execution.
Registry and Versioning¶
PromptRegistry¶
promptise.prompts.registry.PromptRegistry
¶
Singleton registry for versioned prompts.
Stores prompts keyed by (name, version) with a latest pointer
for each name. Supports rollback to previous versions.
clear()
¶
Remove all registered prompts.
get(name, ver=None)
¶
Retrieve a prompt by name and optional version.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Prompt name. | *required* |
| `ver` | `str \| None` | Version string. When `None`, the latest version is returned. | `None` |

Returns:

| Type | Description |
|---|---|
| `Prompt` | The registered `Prompt`. |

Raises:

| Type | Description |
|---|---|
| `KeyError` | Prompt or version not found. |
latest_version(name)
¶
Return the latest version string for a prompt.
Raises:

| Type | Description |
|---|---|
| `KeyError` | Prompt not found. |
list()
¶
List all registered prompts and their versions.
Returns:

| Type | Description |
|---|---|
| `dict[str, list[str]]` | Dict mapping prompt names to lists of version strings. |
register(name, ver, p)
¶
Register a prompt under name at version ver.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Prompt name. | *required* |
| `ver` | `str` | Semantic version string (e.g. `'1.0.0'`). | *required* |
| `p` | `Prompt` | The `Prompt` to register. | *required* |
rollback(name)
¶
Remove the latest version and return the new latest.
Raises:

| Type | Description |
|---|---|
| `KeyError` | Prompt not found or only one version exists. |
registry (singleton)¶
promptise.prompts.registry.registry = PromptRegistry()
module-attribute
¶
version (decorator)¶
promptise.prompts.registry.version(ver)
¶
Decorator that registers a `Prompt` in the global registry.
Usage::
@version("1.0.0")
@prompt(model="openai:gpt-5-mini")
async def summarize(text: str) -> str:
"""Summarize: {text}"""
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ver` | `str` | Semantic version string. | *required* |

Returns:

| Type | Description |
|---|---|
| `Any` | Decorator that registers and returns the Prompt. |
Loader and Templates¶
Load .prompt YAML files and render templates.
load_prompt¶
promptise.prompts.loader.load_prompt(path, *, register=False)
¶
Load a prompt from a .prompt YAML file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str \| Path` | Path to the `.prompt` file. | *required* |
| `register` | `bool` | If `True`, also register the loaded prompt in the global registry. | `False` |

Returns:

| Type | Description |
|---|---|
| `Any` | The loaded `Prompt` instance. |

Raises:

| Type | Description |
|---|---|
| `PromptFileError` | File not found, invalid format, or parse error. |
| `PromptValidationError` | Schema validation failed. |
Example::
prompt = load_prompt("prompts/analyze.prompt")
result = await prompt(text="quarterly figures...")
load_directory¶
promptise.prompts.loader.load_directory(path, *, register=False)
¶
Load all .prompt files from a directory into a registry.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str \| Path | Directory path to scan (recursively). | required |
| register | bool | If True, also register all loaded prompts in the global registry. | False |
Returns:
| Type | Description |
|---|---|
| Any | A :class:PromptRegistry containing all loaded prompts. |
Example::
registry = load_directory("prompts/")
load_url¶
promptise.prompts.loader.load_url(url, *, register=False)
async
¶
Load a prompt from a URL (e.g. GitHub raw file).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| url | str | HTTP(S) URL pointing to a .prompt file. | required |
| register | bool | If True, also register the loaded prompt in the global registry. | False |
Returns:
| Type | Description |
|---|---|
| Any | The loaded :class:Prompt. |
Raises:
| Type | Description |
|---|---|
| PromptFileError | HTTP error or parse error. |
Example::
prompt = await load_url(
"https://raw.githubusercontent.com/org/prompts/main/analyze.prompt"
)
save_prompt¶
promptise.prompts.loader.save_prompt(prompt, path, *, version=None, author=None, description=None, tags=None)
¶
Save a :class:~promptise.prompts.core.Prompt to a YAML file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prompt | Any | The Prompt instance to save. | required |
| path | str \| Path | Destination file path. | required |
| version | str \| None | Version string (overrides prompt's stored version). | None |
| author | str \| None | Author name (overrides prompt's stored author). | None |
| description | str \| None | Description (overrides prompt's stored description). | None |
| tags | list[str] \| None | Tags list (overrides prompt's stored tags). | None |
Example::
save_prompt(my_prompt, "prompts/analyze.prompt",
version="2.0.0", author="data-team")
PromptFileError¶
promptise.prompts.loader.PromptFileError
¶
Bases: SuperAgentError
Error loading or saving a .prompt file.
PromptValidationError¶
promptise.prompts.loader.PromptValidationError
¶
A loaded .prompt file failed schema validation.
TemplateEngine¶
promptise.prompts.template.TemplateEngine
¶
Template engine with include registry and optional shell interpolation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| includes | dict[str, str] \| None | Mapping of template names to template text, used by {% include "name" %} directives. | None |
| shell_executor | ShellExecutor \| Callable[[str], str] \| None | Optional callable that runs a shell command and returns its stdout. When provided, occurrences of !`command` are replaced with the command's output. | None |
render(template, variables)
¶
Render template with variables.
Processing order:
1. {% include "name" %} — resolved from the includes registry
2. !`shell command` — replaced with stdout (only if a shell_executor was provided to the engine; otherwise the syntax is left as-is)
3. {% if condition %}...{% else %}...{% endif %} — truthiness
4. {% for item in items %}...{% endfor %} — iteration
5. {{ and }} — literal brace escapes
6. {variable} — interpolation via :meth:str.format_map
Raises:
| Type | Description |
|---|---|
| KeyError | A referenced variable is missing from variables. |
| ValueError | An {% include %} references an unknown template name. |
| ShellExecutionError | A shell command failed during !`...` interpolation. |
render_template¶
promptise.prompts.template.render_template(template, variables, includes=None, *, shell_executor=None)
¶
Convenience function — create a :class:TemplateEngine and render.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| template | str | Template text with {variable} placeholders and template directives. | required |
| variables | dict[str, Any] | Values to substitute. | required |
| includes | dict[str, str] \| None | Optional mapping of template names for {% include %} directives. | None |
| shell_executor | ShellExecutor \| Callable[[str], str] \| None | Optional callable enabling !`command` shell interpolation. | None |
Returns:
| Type | Description |
|---|---|
| str | Rendered text. |
Suite¶
Group related prompts with shared defaults (strategy, perspective, constraints, guards, context).
PromptSuite¶
promptise.prompts.suite.PromptSuite
¶
Group of prompts sharing world configuration.
Subclass and set class attributes to configure shared defaults.
Decorate methods with @prompt(...) to define prompts that
inherit suite-level configuration.
Class Attributes
- context_providers: Shared context providers.
- default_strategy: Default reasoning strategy.
- default_perspective: Default cognitive perspective.
- default_constraints: Default constraints.
- default_guards: Default guards.
- default_world: Default world contexts (dict of BaseContext).
prompts
property
¶
Discover all :class:Prompt instances on this suite.
Returns:
| Type | Description |
|---|---|
| dict[str, Prompt] | Dict mapping prompt name to Prompt instance. |
__init_subclass__(**kwargs)
¶
Apply suite defaults to all @prompt-decorated methods.
render_async(ctx=None)
async
¶
Async render all prompts with context providers.
Returns combined text from all prompts in the suite.
system_prompt()
¶
Render a combined system prompt from all suite prompts.
Returns a static text block suitable for build_agent()
when the suite is used as agent instructions. For dynamic
context, use an individual prompt's render_async().
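The `__init_subclass__` mechanism that applies suite defaults to decorated methods can be sketched in plain Python. This is an illustrative sketch of the pattern, not the library's code; `SuiteBase`, `fake_prompt`, and the `_is_prompt` marker are hypothetical names:

```python
class SuiteBase:
    """Sketch: copy class-level defaults onto opted-in members at subclass time."""
    default_model: str = "stub"

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for name, member in vars(cls).items():
            # Only touch members marked by the decorator, and only fill gaps,
            # so a prompt's own explicit model is never overridden.
            if getattr(member, "_is_prompt", False) and getattr(member, "model", None) is None:
                member.model = cls.default_model


def fake_prompt(fn):
    """Stand-in for @prompt: mark the function and leave its model unset."""
    fn._is_prompt = True
    fn.model = None
    return fn


class ReportSuite(SuiteBase):
    default_model = "openai:gpt-5-mini"

    @fake_prompt
    def summarize(self, text: str) -> str:
        """Summarize: {text}"""


# The subclass hook ran at class-definition time and filled in the default.
assert ReportSuite.summarize.model == "openai:gpt-5-mini"
```

The key property is that defaults are applied once, when the subclass body is executed, rather than per call.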
Testing¶
PromptTestCase¶
promptise.prompts.testing.PromptTestCase
¶
Base class for prompt tests.
Set the prompt class attribute to the :class:Prompt under test.
Use :meth:mock_llm and :meth:mock_context to isolate tests
from real LLM calls and external context sources.
Works with both unittest.TestCase and pytest patterns.
Attributes:
| Name | Type | Description |
|---|---|---|
| prompt | Prompt \| None | The :class:Prompt under test. |
assert_contains(result, substring)
¶
Assert the result (as string) contains a substring.
assert_context_provided(stats, provider_name)
¶
Assert a specific context provider was used.
assert_guard_passed(result)
¶
Assert the result was not blocked by a guard.
Checks that the result is not None (guards raise exceptions, so a non-None result means all guards passed).
assert_latency(stats, max_ms)
¶
Assert the call latency is within limit.
assert_not_contains(result, substring)
¶
Assert the result does NOT contain a substring.
assert_schema(result, expected_type)
¶
Assert the result matches the expected type.
Works with dataclasses, Pydantic models, and basic types.
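A minimal sketch of how such a type assertion can cover dataclasses, Pydantic-style models, and basic types (this is an assumption about the approach, not the library's implementation; `check_schema` is a hypothetical helper):

```python
import dataclasses
from typing import Any


def check_schema(result: Any, expected_type: type) -> bool:
    """Sketch: does result match expected_type?"""
    if dataclasses.is_dataclass(expected_type):
        # For dataclasses, also confirm every declared field is present.
        return isinstance(result, expected_type) and all(
            hasattr(result, f.name) for f in dataclasses.fields(expected_type)
        )
    # Pydantic models are ordinary classes, so isinstance covers them,
    # and basic types (str, int, dict, ...) fall through to the same check.
    return isinstance(result, expected_type)


@dataclasses.dataclass
class Report:
    title: str


assert check_schema(Report(title="Q3"), Report)
assert check_schema("plain text", str)
assert not check_schema(42, str)
```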
mock_context(**contexts)
¶
Temporarily add world contexts to the prompt.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **contexts | BaseContext | World contexts to inject. | {} |
mock_llm(response)
¶
Mock the LLM call to return a fixed response.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| response | str | The string response the LLM should return. | required |
run_prompt(*args, **kwargs)
async
¶
Execute the prompt under test.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| *args | Any | Positional arguments passed to the prompt. | () |
| **kwargs | Any | Keyword arguments passed to the prompt. | {} |
Returns:
| Type | Description |
|---|---|
| Any | The prompt's output. |
Raises:
| Type | Description |
|---|---|
| ValueError | No prompt configured. |
run_with_stats(*args, **kwargs)
async
¶
Execute the prompt and return both result and stats.
Returns:
| Type | Description |
|---|---|
| tuple[Any, PromptStats \| None] | Tuple of (result, PromptStats or None). |
Observability¶
PromptObserver¶
promptise.prompts.observe.PromptObserver
¶
Bridge between prompt execution and ObservabilityCollector.
Records timeline events for prompt start, end, error, guard blocks, and context provider execution.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| collector | Any | An :class:ObservabilityCollector (or compatible) instance that receives the recorded events. | required |
record_context(prompt_name, provider_name, chars_injected)
¶
Record context provider execution.
record_end(prompt_name, model, latency_ms, output_length)
¶
Record prompt execution end.
record_error(prompt_name, error)
¶
Record prompt execution error.
record_guard_block(prompt_name, guard_name, reason)
¶
Record a guard blocking execution.
record_start(prompt_name, model, input_text)
¶
Record prompt execution start.
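The observer-to-collector bridge described above can be sketched as a thin adapter that turns each `record_*` call into a timeline event. This is an illustrative sketch; `Collector`, `Observer`, and the event field names are hypothetical, not the library's API:

```python
import time


class Collector:
    """Stand-in for an ObservabilityCollector: an append-only event timeline."""

    def __init__(self):
        self.timeline: list[dict] = []

    def record(self, kind: str, **fields) -> None:
        self.timeline.append({"kind": kind, "ts": time.time(), **fields})


class Observer:
    """Sketch mirroring PromptObserver's record_* hooks."""

    def __init__(self, collector: Collector):
        self.collector = collector

    def record_start(self, prompt_name: str, model: str, input_text: str) -> None:
        # Record input size rather than the raw text to keep events lightweight.
        self.collector.record("start", prompt=prompt_name, model=model,
                              input_chars=len(input_text))

    def record_end(self, prompt_name: str, model: str,
                   latency_ms: float, output_length: int) -> None:
        self.collector.record("end", prompt=prompt_name, model=model,
                              latency_ms=latency_ms, output_length=output_length)

    def record_error(self, prompt_name: str, error: Exception) -> None:
        self.collector.record("error", prompt=prompt_name, error=repr(error))


obs = Observer(Collector())
obs.record_start("summarize", "openai:gpt-5-mini", "long article...")
obs.record_end("summarize", "openai:gpt-5-mini", latency_ms=412.0, output_length=96)
assert [e["kind"] for e in obs.collector.timeline] == ["start", "end"]
```

Keeping the observer stateless and pushing everything into the collector makes the timeline easy to inspect in tests.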