Agent API Reference¶
Core agent creation, invocation, identity, conversations, and the full reasoning engine — every public class across promptise.agent, promptise.engine, and promptise.conversations.
Building Agents¶
build_agent¶
promptise.agent.build_agent(*, servers, model, instructions=None, trace_tools=False, cross_agents=None, sandbox=None, observer=None, observer_agent_id=None, observe=None, memory=None, memory_auto_store=False, extra_tools=None, flow=None, conversation_store=None, conversation_max_messages=0, optimize_tools=None, guardrails=None, cache=None, approval=None, events=None, max_invocation_time=0, adaptive=None, context_engine=None, agent_pattern=None, pattern=None, graph_blocks=None, node_pool=None, max_agent_iterations=25)
async
¶
Build an MCP-first agent and return a :class:PromptiseAgent.
Discovers tools from the configured MCP servers, converts them into
LangChain tools, and builds an agent graph. The result is always a
:class:PromptiseAgent with observability and memory as opt-in
capabilities.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `servers` | `Mapping[str, ServerSpec]` | Mapping of server name to spec (HTTP/SSE recommended). | *required* |
| `model` | `ModelLike` | A LangChain chat model instance or a provider id string (e.g. `"openai:gpt-5-mini"`). | *required* |
| `instructions` | `str \| Any \| None` | Optional system prompt. Defaults to the built-in prompt. | `None` |
| `trace_tools` | `bool` | Print each tool invocation and result to stdout. | `False` |
| `cross_agents` | `Mapping[str, CrossAgent] \| None` | Optional mapping of peer name → `CrossAgent`. Each peer is exposed to the agent as a tool. | `None` |
| `memory` | `Any \| None` | Optional memory provider (e.g. `InMemoryProvider`). | `None` |
| `memory_auto_store` | `bool` | When `True`, conversation turns are stored in the memory provider automatically. | `False` |
| `sandbox` | `bool \| dict[str, Any] \| None` | Optional sandbox configuration. | `None` |
| `observer` | `Any \| None` | Optional observer instance for tool-event recording. | `None` |
| `observer_agent_id` | `str \| None` | Agent identifier for tool-event recording. | `None` |
| `observe` | `bool \| Any \| None` | Plug-and-play observability. Pass `True` to enable the built-in collector, or a custom configuration object. | `None` |
| `extra_tools` | `list[BaseTool] \| None` | Optional additional LangChain `BaseTool` instances. | `None` |
| `flow` | `Any \| None` | Optional flow configuration. | `None` |
| `conversation_store` | `Any \| None` | Optional conversation store for automatic session persistence. | `None` |
| `conversation_max_messages` | `int` | Maximum messages to keep per session when using the conversation store. | `0` |
Returns:
| Type | Description |
|---|---|
| `PromptiseAgent` | A configured :class:`PromptiseAgent`. Call `shutdown()` when finished. |
PromptiseAgent¶
promptise.agent.PromptiseAgent
¶
The unified Promptise agent.
Always returned by :func:build_agent. Observability and memory
are opt-in capabilities activated by constructor parameters — disabled
features no-op or return sensible defaults, so callers never need to
check what type they got back.
.. code-block:: python

    # Simple — no observe, no memory
    agent = await build_agent(servers=..., model="openai:gpt-5-mini")
    result = await agent.ainvoke({"messages": [...]})
    await agent.shutdown()

    # With observability
    agent = await build_agent(..., observe=True)
    result = await agent.ainvoke({"messages": [...]})
    stats = agent.get_stats()
    agent.generate_report("report.html")
    await agent.shutdown()

    # With memory
    agent = await build_agent(..., memory=InMemoryProvider())
    result = await agent.ainvoke({"messages": [...]})  # auto-injects context
    await agent.shutdown()
Attributes:
| Name | Type | Description |
|---|---|---|
| `collector` | | The observability collector, when `observe` is enabled. |
| `provider` | | The memory provider, when `memory` is configured. |
ainvoke(input, config=None, *, caller=None, **kwargs)
async
¶
Invoke the agent asynchronously.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input` | `Any` | LangGraph-style input dict with a `messages` key. | *required* |
| `config` | `dict[str, Any] \| None` | LangGraph config dict (callbacks, etc.). | `None` |
| `caller` | `CallerContext \| None` | Optional `CallerContext` carrying per-request identity. | `None` |
When memory is enabled, relevant context is searched and injected
as a SystemMessage before the inner graph runs. When
observability is enabled, a callback handler is attached to
capture every LLM turn, tool call, and token count.
astream(input, config=None, *, caller=None, **kwargs)
async
¶
Stream the agent asynchronously with memory and observability.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `caller` | `CallerContext \| None` | Optional `CallerContext` carrying per-request identity. | `None` |
astream_with_tools(input, config=None, *, caller=None, include_arguments=True, tool_display_names=None, **kwargs)
async
¶
Stream agent execution with tool visibility.
Yields structured :class:StreamEvent objects that show the
complete agent reasoning process: tool calls, their results,
LLM tokens, and the final response.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input` | `Any` | Agent input (same format as `ainvoke`). | *required* |
| `config` | `dict[str, Any] \| None` | LangChain config dict. | `None` |
| `caller` | `CallerContext \| None` | Per-request identity for multi-user deployments. | `None` |
| `include_arguments` | `bool` | Include tool arguments in events. | `True` |
| `tool_display_names` | `dict[str, str] \| None` | Custom display names for tools. | `None` |
Yields:
| Type | Description |
|---|---|
| `AsyncIterator[Any]` | An async iterator of :class:`StreamEvent` objects. |
Example::

    async for event in agent.astream_with_tools(input, caller=caller):
        if event.type == "tool_start":
            print(f"🔧 {event.tool_display_name}...")
        elif event.type == "token":
            print(event.text, end="", flush=True)
invoke(input, config=None, *, caller=None, **kwargs)
¶
Invoke the agent synchronously.
Memory injection requires async I/O. When a running event loop
is detected (e.g. inside Jupyter), memory injection is skipped
for the sync path — use :meth:ainvoke instead.
chat(message, *, session_id, user_id=None, caller=None, metadata=None, system_prompt=None)
async
¶
Send a message and get a response, with automatic session persistence.
This is the high-level API for building chat applications. The conversation store (if configured) handles loading history, persisting new messages, and session lifecycle automatically.
If no conversation store is configured, this still works — it just has no history beyond the current call.
Ownership enforcement: When user_id (or caller.user_id)
is provided and the session already exists, the store checks that
the session belongs to that user. If it belongs to a different
user, :class:~promptise.conversations.SessionAccessDenied is raised.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `message` | `str` | The user's message text. | *required* |
| `session_id` | `str` | Unique identifier for this conversation session. Use `generate_session_id()` to create one. | *required* |
| `user_id` | `str \| None` | Optional user identifier. Shorthand for passing a `CallerContext` with only `user_id` set. | `None` |
| `caller` | `CallerContext \| None` | Optional `CallerContext` carrying per-request identity. | `None` |
| `metadata` | `dict[str, Any] \| None` | Optional metadata to attach to the user message (e.g. source, IP, device). | `None` |
| `system_prompt` | `str \| None` | Optional per-call system prompt override. | `None` |
Returns:
| Type | Description |
|---|---|
| `str` | The assistant's response text. |

Raises:
| Type | Description |
|---|---|
| `SessionAccessDenied` | If the session belongs to a different user. |
Example::

    from promptise.conversations import generate_session_id

    agent = await build_agent(..., conversation_store=store)
    sid = generate_session_id()
    reply = await agent.chat("Hello!", session_id=sid, user_id="user-42")
    reply = await agent.chat("What did I say?", session_id=sid, user_id="user-42")
get_session(session_id, *, user_id=None)
async
¶
Get session metadata.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `session_id` | `str` | The session to retrieve. | *required* |
| `user_id` | `str \| None` | When provided, verifies the session belongs to this user. | `None` |

Returns:
| Type | Description |
|---|---|
| `Any` | The session metadata object. |

Raises:
| Type | Description |
|---|---|
| `RuntimeError` | If no conversation store is configured. |
| `SessionAccessDenied` | If the session belongs to a different user. |
list_sessions(*, user_id=None, limit=50, offset=0)
async
¶
List conversation sessions from the configured store.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `user_id` | `str \| None` | Filter by user. | `None` |
| `limit` | `int` | Maximum sessions to return. | `50` |
| `offset` | `int` | Pagination offset. | `0` |

Returns:
| Type | Description |
|---|---|
| `list[Any]` | List of session metadata objects. |

Raises:
| Type | Description |
|---|---|
| `RuntimeError` | If no conversation store is configured. |
delete_session(session_id, *, user_id=None)
async
¶
Delete a conversation session and all its messages.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `session_id` | `str` | The session to delete. | *required* |
| `user_id` | `str \| None` | When provided, verifies the session belongs to this user before deleting. | `None` |

Returns:
| Type | Description |
|---|---|
| `bool` | Whether the session was deleted. |

Raises:
| Type | Description |
|---|---|
| `RuntimeError` | If no conversation store is configured. |
| `SessionAccessDenied` | If the session belongs to a different user. |
update_session(session_id, *, calling_user_id=None, user_id=..., title=..., metadata=...)
async
¶
Update session metadata (title, user_id, custom metadata).
Only provided fields are updated — omitted fields are left unchanged.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `session_id` | `str` | The session to update. | *required* |
| `calling_user_id` | `str \| None` | The user making this request. When provided, verifies ownership before applying changes. | `None` |
| `user_id` | `str \| None` | New user_id to assign (for ownership transfer). | `...` |
| `title` | `str \| None` | New title. | `...` |
| `metadata` | `dict[str, Any] \| None` | New metadata dict. | `...` |

Raises:
| Type | Description |
|---|---|
| `RuntimeError` | If no conversation store is configured, or the store does not support session updates. |
| `SessionAccessDenied` | If `calling_user_id` does not own the session. |
shutdown()
async
¶
Flush transporters, close MCP connections, and release resources.
Always safe to call — no-ops for features that are not enabled. Call this when the agent is no longer needed.
get_stats()
¶
Return aggregate observability statistics.
Returns an empty dict when observability is not enabled.
generate_report(path, title='Agent Observability Report')
¶
Generate an interactive HTML report and return its file path.
Raises:
| Type | Description |
|---|---|
RuntimeError
|
When observability is not enabled. |
CallerContext¶
promptise.agent.CallerContext
dataclass
¶
Identity and metadata for the caller of an agent invocation.
Pass this to ainvoke() or chat() to carry per-request
identity through the entire invocation — guardrails, conversation
ownership, observability, and (future) MCP token forwarding.
Attributes:
| Name | Type | Description |
|---|---|---|
| `user_id` | `str \| None` | Unique user identifier. Used for conversation session ownership and observability correlation. |
| `bearer_token` | `str \| None` | JWT or OAuth token for the caller. Currently available for guardrails and logging; MCP token forwarding is a planned enhancement. |
| `roles` | `set[str]` | Caller's roles (e.g. `{"analyst"}`). |
| `scopes` | `set[str]` | OAuth scopes granted to the caller. |
| `metadata` | `dict[str, Any]` | Arbitrary key-value metadata (IP, user-agent, etc.). |
Example::

    caller = CallerContext(
        user_id="user-42",
        bearer_token="eyJhbGciOiJIUzI1NiIs...",
        roles={"analyst"},
    )
    result = await agent.ainvoke(input, caller=caller)
    reply = await agent.chat("Hello", session_id=sid, caller=caller)
get_current_caller¶
promptise.agent.get_current_caller()
¶
Return the :class:CallerContext for the current invocation.
Safe to call from guardrails, context providers, observability
handlers, or any code running inside an ainvoke() / chat()
call. Returns None outside of an invocation.
Reasoning Graph Engine¶
PromptGraphEngine¶
promptise.engine.execution.PromptGraphEngine
¶
Adaptive graph traversal engine.
Traverses a PromptGraph by executing nodes and following edges.
The graph is copied per invocation so concurrent calls are safe
and runtime mutations don't affect the original graph.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `graph` | `PromptGraph` | The graph to traverse. | *required* |
| `model` | `BaseChatModel` | LangChain `BaseChatModel` used to execute nodes. | *required* |
| `max_iterations` | `int` | Maximum total node executions per run. | `50` |
| `max_node_iterations` | `int` | Maximum times a single node can execute (prevents infinite tool-calling loops). | `25` |
| `hooks` | `list[Any] \| None` | List of hook instances for interception. | `None` |
| `allow_self_modification` | `bool` | Allow the LLM to modify the graph via structured-output mutations. | `True` |
| `max_mutations_per_run` | `int` | Cap on graph mutations per run. | `10` |
last_report
property
¶
The execution report from the last ainvoke() call.
ainvoke(input, config=None, **kwargs)
async
¶
Run the graph to completion.
Returns {"messages": [...]} matching the LangGraph contract.
astream_events(input, *, config=None, version='v2', **kwargs)
async
¶
Stream execution events matching LangGraph v2 format.
Yields event dicts consumed by PromptiseAgent.astream_with_tools():
- on_tool_start, on_tool_end, on_tool_error
- on_chat_model_stream
- on_node_start, on_node_end (engine-specific)
PromptGraph¶
promptise.engine.graph.PromptGraph
¶
A directed graph of reasoning nodes.
Nodes are added via add_node(). Edges via add_edge().
The entry point is set via set_entry().
The graph supports runtime mutation via apply_mutation()
and copy() for per-invocation isolation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Human-readable graph name (for visualization and logs). | `'graph'` |
edges
property
¶
All edges in the graph.
entry
property
¶
The entry node name.
nodes
property
¶
All nodes in the graph.
add_edge(from_node, to_node, *, condition=None, label='', priority=0)
¶
Add a directed edge. Returns self for chaining.
Warns if from_node or to_node do not exist in the graph
yet (they may be added later, so this is a warning, not an error).
add_node(node)
¶
Add a node to the graph. Returns self for chaining.
always(from_node, to_node)
¶
Add an unconditional edge: A always goes to B.
apply_mutation(mutation)
¶
Apply a single mutation to the graph.
Called by the engine during execution to modify the live graph copy. Validates mutations before applying.
copy()
¶
Copy the graph for per-invocation isolation.
Nodes are shared references (they are stateless config objects).
Edges use copy-on-write — shared until the copy mutates them,
at which point they are lazily copied via _ensure_edges_owned().
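The copy-on-write idea can be illustrated with a small standalone sketch (class and method names here are illustrative, not the library's internals):

```python
class CowGraph:
    """Shares its edge list with the source graph until the first mutation."""

    def __init__(self, edges=None):
        self._edges = edges if edges is not None else []
        self._owned = edges is None  # a freshly created list is already ours

    def copy(self) -> "CowGraph":
        clone = CowGraph(self._edges)
        clone._owned = False  # keep sharing until the clone mutates
        return clone

    def _ensure_edges_owned(self):
        if not self._owned:
            self._edges = list(self._edges)  # lazy copy on first write
            self._owned = True

    def add_edge(self, edge):
        self._ensure_edges_owned()
        self._edges.append(edge)

    @property
    def edges(self):
        return self._edges
```

Copies are therefore cheap for the common case where an invocation never mutates the graph; only a mutating run pays for the list copy.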
describe()
¶
Human-readable graph description.
from_pool(nodes, *, system_prompt='', name='autonomous')
classmethod
¶
Create a graph from a pool of nodes (autonomous mode).
The engine will use AutonomousNode to dynamically
build the execution path from the node pool. Nodes with
is_entry=True start first. Nodes with
is_terminal=True can end the graph.
get_edges_from(node_name)
¶
Get all outgoing edges from a node, sorted by priority (desc).
Uses a precomputed adjacency index — O(1) lookup instead of O(E) scan.
get_node(name)
¶
Get a node by name. Raises KeyError if not found.
has_node(name)
¶
Check if a node exists.
loop_until(node_name, exit_to, *, condition, max_iterations=5)
¶
Add a loop: node re-enters itself until condition, then exits.
on_confidence(from_node, to_node, min_confidence=0.7)
¶
Add an edge that fires when output confidence exceeds threshold.
on_error(from_node, to_node)
¶
Add an edge that fires when a node has an error.
on_guard_fail(from_node, to_node)
¶
Add an edge that fires when any guard fails.
on_no_tool_call(from_node, to_node)
¶
Add an edge that fires when the node made NO tool calls (final answer).
on_output(from_node, to_node, key, value=True)
¶
Add an edge that fires when output[key] == value.
on_tool_call(from_node, to_node)
¶
Add an edge that fires when the node made tool calls.
remove_node(name)
¶
Remove a node and all edges referencing it.
sequential(*node_names)
¶
Chain nodes with always edges: A → B → C → ...
set_entry(node_name)
¶
Set the entry node. Returns self for chaining.
to_mermaid()
¶
Export graph as Mermaid diagram syntax.
validate()
¶
Validate the graph topology. Returns list of error strings.
Checks:
- Entry node is set and exists
- All edge targets exist (or are __end__)
- All transition targets in node configs exist
- No unreachable nodes (except entry)
- No dead-end nodes without __end__ transition
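The unreachable-node check above can be sketched as a breadth-first walk from the entry node; this standalone version works over plain `(from, to)` edge pairs:

```python
from collections import deque

def unreachable_nodes(nodes: set[str], edges: list[tuple[str, str]], entry: str) -> set[str]:
    """Return the nodes that no path from the entry can reach."""
    adjacency: dict[str, list[str]] = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)
    seen, queue = {entry}, deque([entry])
    while queue:
        for nxt in adjacency.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return nodes - seen
```

Any node left over after the walk would surface as a "no unreachable nodes" validation error.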
when(from_node, to_node, condition, label='')
¶
Add a conditional edge: A goes to B when condition is True.
Edge¶
promptise.engine.graph.Edge
dataclass
¶
A directed edge between two nodes.
Attributes:
| Name | Type | Description |
|---|---|---|
| `from_node` | `str` | Source node name. |
| `to_node` | `str` | Target node name. |
| `condition` | `Callable[[NodeResult], bool] \| None` | Optional callable; the edge fires when it returns `True` for the last `NodeResult`. `None` means unconditional. |
| `label` | `str` | Human-readable label for visualization. |
| `priority` | `int` | When multiple conditional edges match, the one with the highest priority wins. Default is 0. |
Graph State¶
GraphState¶
promptise.engine.state.GraphState
dataclass
¶
The complete state of a graph execution.
Passed to every node's execute() method. Nodes read from
and write to this state. The engine updates it after each node
execution.
The graph field carries the live (mutable) copy of the
PromptGraph so nodes can inspect and modify the graph
topology at runtime.
Attributes:
| Name | Type | Description |
|---|---|---|
| `messages` | `list[Any]` | Full LangChain message history. |
| `context` | `dict[str, Any]` | Key-value state that persists across nodes. |
| `current_node` | `str` | Name of the node currently being executed. |
| `visited` | `list[str]` | Ordered list of node names visited (for cycle detection). |
| `iteration` | `int` | Global iteration counter (incremented each time any node executes). |
| `node_iterations` | `dict[str, int]` | Per-node execution counter (for detecting stuck nodes). |
| `graph` | `Any` | The live `PromptGraph` copy for this invocation. |
| `plan` | `list[str]` | Current subgoals (set by planning nodes). |
| `completed` | `list[str]` | Completed subgoals. |
| `observations` | `list[dict[str, Any]]` | Tool results accumulated during execution. |
| `reflections` | `list[dict[str, Any]]` | Past learnings/mistakes from reflection nodes. |
| `tool_calls_made` | `int` | Total tool calls across all nodes. |
| `node_timings` | `dict[str, float]` | Cumulative milliseconds per node name. |
| `total_tokens` | `int` | Cumulative token usage across all nodes. |
| `node_history` | `list[NodeResult]` | Ordered list of `NodeResult` records, one per node execution. |
active_subgoal
property
¶
Return the first uncompleted subgoal, or None.
all_subgoals_complete
property
¶
Return True if every planned subgoal is completed.
add_observation(tool_name, result, args=None, success=True, duration_ms=0.0)
¶
Record a tool observation.
Observations are capped at 50 entries. When the cap is reached, the oldest entries are discarded.
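The cap-and-discard behaviour described above is exactly what `collections.deque(maxlen=...)` provides; a standalone sketch with a small cap for illustration (the engine's cap is 50):

```python
from collections import deque

# A bounded buffer: appending past maxlen silently drops the oldest entry.
observations: deque = deque(maxlen=3)

for i in range(5):
    observations.append({"tool_name": f"tool_{i}", "success": True})
# Only tool_2, tool_3, tool_4 remain; tool_0 and tool_1 were discarded.
```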
add_reflection(iteration, mistake, correction, confidence=0.5, stage='')
¶
Record a reflection from a reflect/evaluate node.
complete_subgoal(subgoal)
¶
Mark a subgoal as completed.
increment_node_iteration(node_name)
¶
Increment and return the per-node iteration count.
record_node_timing(node_name, ms)
¶
Add ms to the cumulative timing for node_name.
trim_messages()
¶
Trim message history to max_messages if exceeded.
Keeps all system messages (essential context) plus the most recent non-system messages to stay within the configured cap.
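The trimming rule (keep every system message, then the most recent non-system messages) can be sketched over plain role-tagged dicts; for simplicity this sketch groups system messages first, which the real method need not do:

```python
def trim(messages: list[dict], max_messages: int) -> list[dict]:
    """Keep all system messages plus the most recent non-system messages."""
    if len(messages) <= max_messages:
        return messages
    system = [m for m in messages if m["role"] == "system"]
    others = [m for m in messages if m["role"] != "system"]
    keep = max(max_messages - len(system), 0)
    if keep:
        return system + others[-keep:]
    return system  # system messages alone already fill the budget
```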
NodeResult¶
promptise.engine.state.NodeResult
dataclass
¶
Everything that happened during a single node execution.
This is the richest observability unit in the engine. Every field
is populated by the node's execute() method and the engine's
post-processing. Stored in GraphState.node_history for full
traceability.
NodeEvent¶
promptise.engine.state.NodeEvent
dataclass
¶
An event emitted during streaming execution.
These are engine-level events (on_node_start, on_node_end,
on_graph_mutation) that complement the standard LangChain events
(on_tool_start, on_chat_model_stream, etc.).
NodeFlag¶
promptise.engine.state.NodeFlag
¶
Bases: str, Enum
Typed flags that declare a node's role and capabilities.
Use these instead of bare strings for type safety and IDE support::

    PlanNode("plan", flags={NodeFlag.ENTRY, NodeFlag.INJECT_TOOLS})
CACHEABLE = 'cacheable'
class-attribute
instance-attribute
¶
This node's output can be cached for identical inputs.
CRITICAL = 'critical'
class-attribute
instance-attribute
¶
This node must succeed — graph aborts on failure.
ENTRY = 'entry'
class-attribute
instance-attribute
¶
This node starts the reasoning graph.
INJECT_TOOLS = 'inject_tools'
class-attribute
instance-attribute
¶
Receives MCP tools at runtime from build_agent().
ISOLATED_CONTEXT = 'isolated_context'
class-attribute
instance-attribute
¶
Don't inherit context from previous nodes — start fresh.
LIGHTWEIGHT = 'lightweight'
class-attribute
instance-attribute
¶
Use a smaller/faster model for this node (if model_override not set).
NO_HISTORY = 'no_history'
class-attribute
instance-attribute
¶
Don't inject conversation history into this node's context.
OBSERVABLE = 'observable'
class-attribute
instance-attribute
¶
Emit extra observability events for this node.
PARALLEL_SAFE = 'parallel_safe'
class-attribute
instance-attribute
¶
Safe to run concurrently with other parallel_safe nodes.
READONLY = 'readonly'
class-attribute
instance-attribute
¶
This node only reads state, never writes. Safe for parallel execution.
REQUIRES_HUMAN = 'requires_human'
class-attribute
instance-attribute
¶
This node pauses for human input before proceeding.
RETRYABLE = 'retryable'
class-attribute
instance-attribute
¶
This node can be retried on failure.
SKIP_ON_ERROR = 'skip_on_error'
class-attribute
instance-attribute
¶
Skip this node if a previous node errored (don't abort).
STATEFUL = 'stateful'
class-attribute
instance-attribute
¶
This node modifies state.context (for dependency tracking).
SUMMARIZE_OUTPUT = 'summarize_output'
class-attribute
instance-attribute
¶
Auto-summarize long outputs before passing to next node.
TERMINAL = 'terminal'
class-attribute
instance-attribute
¶
Reaching this node can end the graph.
VALIDATE_OUTPUT = 'validate_output'
class-attribute
instance-attribute
¶
Auto-validate output against output_schema before proceeding.
VERBOSE = 'verbose'
class-attribute
instance-attribute
¶
Include full prompt and response in observability logs.
GraphMutation¶
promptise.engine.state.GraphMutation
dataclass
¶
A single graph modification requested during execution.
Mutations are applied to the live graph copy (not the original). The engine validates mutations before applying them.
Attributes:
| Name | Type | Description |
|---|---|---|
| `action` | `str` | The mutation action to perform (adding or removing a node or edge). |
| `node_name` | `str` | Target node name (for add/remove node). |
| `node_config` | `dict[str, Any]` | Configuration dict for new nodes (used with add-node mutations). |
| `from_node` | `str` | Source node for edge operations. |
| `to_node` | `str` | Target node for edge operations. |
| `condition` | `str` | Optional condition string for conditional edges. |
ExecutionReport¶
promptise.engine.state.ExecutionReport
dataclass
¶
Summary of a complete graph execution.
Produced by PromptGraphEngine after ainvoke() completes.
summary()
¶
Human-readable execution summary.
Base Node Types¶
BaseNode¶
promptise.engine.base.BaseNode
¶
Concrete base class for graph nodes.
Provides default implementations and common configuration.
Subclass this for custom node behaviour, or use one of the
built-in types (PromptNode, ToolNode, etc.).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Unique node identifier within the graph. | *required* |
| `instructions` | `str` | Natural-language description of what this node does, injected into the LLM prompt. | `''` |
| `description` | `str` | Short description shown in graph visualization and `describe()` output. | `''` |
| `transitions` | `dict[str, str] \| None` | Mapping of output keys to next-node names. | `None` |
| `default_next` | `str \| None` | Fallback node name if no transition matches. | `None` |
| `max_iterations` | `int` | Maximum times this node can execute in a single graph run (prevents infinite loops). | `10` |
| `metadata` | `dict[str, Any] \| None` | Arbitrary metadata accessible by hooks and observability. | `None` |
is_entry
property
¶
Whether this node starts the graph.
is_terminal
property
¶
Whether this node can end the graph.
execute(state, config)
async
¶
Execute this node. Subclasses must override.
has_flag(flag)
¶
Check if this node has a specific flag.
stream(state, config)
async
¶
Stream execution events.
Default implementation executes the node and yields a single
on_node_end event. Override for fine-grained streaming.
NodeProtocol¶
promptise.engine.base.NodeProtocol
¶
@node decorator¶
promptise.engine.base.node(name, *, instructions='', transitions=None, default_next=None, max_iterations=10, metadata=None, flags=None, is_entry=False, is_terminal=False)
¶
Decorator that turns an async function into a graph node.
Usage::

    @node("fetch_data", default_next="process")
    async def fetch_data(state: GraphState) -> NodeResult:
        data = await api.get(state.context["url"])
        state.context["data"] = data
        return NodeResult(node_name="fetch_data", output=data)

    graph.add_node(fetch_data)
The decorated function can accept:
- (state: GraphState)
- (state: GraphState, config: dict)
- () (for side-effect-only nodes)
Returns:
| Type | Description |
|---|---|
| `Any` | A node object wrapping the function, ready to pass to `graph.add_node()`. |
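Accepting all three signatures typically works by inspecting the function's arity; a standalone sketch of that dispatch (not the library's actual implementation — `call_node_fn` and `greet` are illustrative):

```python
import asyncio
import inspect

async def call_node_fn(fn, state, config):
    """Call fn with (state, config), (state,), or () based on its signature."""
    params = [
        p for p in inspect.signature(fn).parameters.values()
        if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)
    ]
    if len(params) >= 2:
        return await fn(state, config)
    if len(params) == 1:
        return await fn(state)
    return await fn()

async def greet(state):  # a one-argument node body
    return f"hello {state}"

result = asyncio.run(call_node_fn(greet, "world", {}))
```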
Standard Nodes¶
PromptNode¶
promptise.engine.nodes.PromptNode
¶
Bases: BaseNode
A complete reasoning unit in the graph.
Each PromptNode is a self-contained processing pipeline:
Input → Preprocess → Context Assembly → LLM Call → Tool Execution → Postprocess → Guards → Output
Every aspect is configurable:

- What the LLM sees (blocks, strategy, perspective, context layers)
- What the LLM can do (tools, tool_choice)
- What the LLM must produce (output_schema, guards)
- How data flows in (input_keys read from state.context)
- How data flows out (output_key writes to state.context)
- Pre/post processing (preprocessor, postprocessor callables)
- Context from previous node (inherit_context_from)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Unique node identifier. | *required* |
| `blocks` | `list[Any] \| None` | PromptBlocks to assemble for this node's system prompt. | `None` |
| `strategy` | `Any \| None` | Reasoning strategy (ChainOfThought, SelfCritique, etc.). | `None` |
| `perspective` | `Any \| None` | Cognitive perspective (Analyst, Critic, etc.). | `None` |
| `tools` | `list[BaseTool] \| None` | Tools available at THIS node. | `None` |
| `tool_choice` | `str` | Tool-choice mode passed to the LLM. | `'auto'` |
| `inject_tools` | `bool` | When `True`, this node receives MCP tools at runtime from `build_agent()`. | `False` |
| `output_schema` | `type \| None` | Pydantic model for structured output. | `None` |
| `guards` | `list[Any] \| None` | Guards that validate the output before proceeding. | `None` |
| `context_layers` | `dict[str, int] \| None` | Extra named context layers with priorities. Merged into the system prompt during assembly. | `None` |
| `max_tokens` | `int` | Max response tokens for this node. | `4096` |
| `temperature` | `float` | LLM temperature for this node. | `0.0` |
| `input_keys` | `list[str] \| None` | Keys to read from `state.context` into this node's context. | `None` |
| `output_key` | `str \| None` | Key to write this node's output to in `state.context`. | `None` |
| `inherit_context_from` | `str \| None` | Name of a previous node whose output should be injected as context. | `None` |
| `preprocessor` | `Callable \| None` | Async callable run before context assembly. | `None` |
| `postprocessor` | `Callable \| None` | Async callable run on the node's output. | `None` |
| `include_observations` | `bool` | Auto-inject recent tool results from state. | `True` |
| `include_plan` | `bool` | Auto-inject current plan/subgoals from state. | `True` |
| `include_reflections` | `bool` | Auto-inject past learnings from state. | `True` |
inject_tools
property
¶
Whether this node receives MCP tools at runtime.
execute(state, config)
async
¶
Execute the full node pipeline:
- Preprocessor (custom data transformation)
- Context assembly (blocks + input_keys + inherited context + state)
- LLM call (with tools if configured)
- Tool execution (auto-loop if tools called)
- Postprocessor (custom output transformation)
- Guards (validate output)
- Write output to state.context[output_key]
from_config(config)
classmethod
¶
Create a PromptNode from a configuration dict.
Handles all PromptNode parameters including data flow, preprocessing, and context control.
stream(state, config)
async
¶
Stream LLM tokens and tool events.
ToolNode¶
promptise.engine.nodes.ToolNode
¶
Bases: BaseNode
Execute a specific tool or let the engine pick from available tools.
Unlike PromptNode (where tools are called by the LLM), ToolNode executes tools directly based on state context. Useful when tool selection is deterministic rather than LLM-driven.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `tools` | `list[BaseTool] \| None` | Available tools for this node. | `None` |
| `validate_inputs` | `bool` | Validate tool arguments against schema. | `True` |
| `deduplicate` | `bool` | Block identical (tool, args) calls. | `True` |
| `max_result_chars` | `int` | Cap tool result size. | `4000` |
| `tool_selector` | `Callable[[GraphState], tuple[str, dict]] \| None` | Optional callable that selects which tool to call based on state. If omitted, tool calls are taken from state. | `None` |
execute(state, config)
async
¶
Execute tools from state or selector.
RouterNode¶
promptise.engine.nodes.RouterNode
¶
Bases: BaseNode
Lightweight LLM call that decides which path to take.
No tool calling. The LLM sees the current state (via context blocks) and picks from a list of named routes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `routes` | `dict[str, str] \| None` | Mapping of route names to next-node names. | `None` |
| `context_blocks` | `list[Any] \| None` | Blocks to render for the LLM's context. | `None` |
| `model_override` | `BaseChatModel \| None` | Optional different model for routing (e.g. a smaller/faster model for routing decisions). | `None` |
execute(state, config)
async
¶
Ask the LLM to choose a route.
GuardNode¶
promptise.engine.nodes.GuardNode
¶
Bases: BaseNode
Validate state against guards and route based on pass/fail.
No LLM call. Runs guards against the current state and routes
to on_pass or on_fail nodes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `guards` | `list[Any] \| None` | List of guard instances to run. | `None` |
| `target_key` | `str \| None` | Key in `state.context` whose value the guards validate. | `None` |
| `on_pass` | `str` | Node name if all guards pass. | `'__end__'` |
| `on_fail` | `str \| None` | Node name if any guard fails. | `None` |
execute(state, config)
async
¶
Run guards and route accordingly.
ParallelNode¶
promptise.engine.nodes.ParallelNode
¶
Bases: BaseNode
Run multiple child nodes concurrently and merge results.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `nodes` | `list[BaseNode] \| None` | Child nodes to execute in parallel. | `None` |
| `merge_strategy` | `str` | How to merge child results. | `'concatenate'` |
| `merge_fn` | `Callable \| None` | Custom merge function, used when a custom merge strategy is selected. | `None` |
execute(state, config)
async
¶
Execute all child nodes concurrently.
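Concurrent fan-out with a concatenate merge reduces to `asyncio.gather`; a standalone sketch with hypothetical child callables standing in for child nodes:

```python
import asyncio

async def run_parallel(children, state):
    """Run all children concurrently and concatenate their outputs in order."""
    results = await asyncio.gather(*(child(state) for child in children))
    return "\n".join(str(r) for r in results)

async def summarize(state):  # hypothetical child nodes
    return f"summary of {state}"

async def critique(state):
    return f"critique of {state}"

merged = asyncio.run(run_parallel([summarize, critique], "draft"))
```

`asyncio.gather` preserves input order in its results, so the merged output is deterministic even though the children run concurrently.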
LoopNode¶
promptise.engine.nodes.LoopNode
¶
Bases: BaseNode
Repeat a subgraph until a condition is met.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `body_node` | `BaseNode \| None` | The node to repeat (can be a SubgraphNode). | `None` |
| `condition` | `Callable[[GraphState], bool] \| None` | Callable that receives state and returns `True` when the loop should stop. | `None` |
| `max_loop_iterations` | `int` | Maximum iterations before forcing exit. | `5` |
execute(state, config)
async
¶
Execute body node in a loop until condition met.
HumanNode¶
promptise.engine.nodes.HumanNode
¶
Bases: BaseNode
Pause execution and wait for human input.
Integrates with the existing ApprovalPolicy system.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `prompt_template` | `str` | Template shown to the human. | `'Approve this action?'` |
| `timeout` | `float` | Seconds to wait before applying `on_timeout`. | `300.0` |
| `on_approve` | `str` | Next node if human approves. | `'__end__'` |
| `on_deny` | `str \| None` | Next node if human denies. | `None` |
| `on_timeout` | `str` | Next node if timeout expires. | `'__end__'` |
execute(state, config)
async
¶
Pause for human input.
TransformNode¶
promptise.engine.nodes.TransformNode
¶
Bases: BaseNode
Transform state data without calling the LLM.
Useful for formatting, aggregation, data extraction, or preparing state for the next node.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `transform` | `Callable[[GraphState], Any] \| None` | Callable that receives state and returns a value to store in `state.context[output_key]`. | `None` |
| `output_key` | `str` | Key in `state.context` to write the result to. | `'transform_result'` |
execute(state, config)
async
¶
Execute the transform function.
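A transform callable is ordinary Python. In this hedged sketch, state is modeled as a plain dict and the field names (`line_items`, `amount`) are purely illustrative:

```python
def extract_total(state):
    # Aggregate data gathered by earlier nodes; no LLM call involved.
    return sum(item["amount"] for item in state["context"]["line_items"])

state = {"context": {"line_items": [{"amount": 10}, {"amount": 5}]}}
# The node would store the return value under output_key ('transform_result').
state["context"]["transform_result"] = extract_total(state)
print(state["context"]["transform_result"])  # 15
```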
SubgraphNode¶
promptise.engine.nodes.SubgraphNode
¶
Bases: BaseNode
Embed a complete sub-graph as a single node.
The subgraph runs to completion, then the parent graph continues.
State can be shared (inherit_state=True) or isolated.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `subgraph` | `Any` | The `PromptGraph` to run as this node. | `None` |
| `inherit_state` | `bool` | If `True`, the subgraph shares the parent graph's state; otherwise it runs with isolated state. | `True` |
execute(state, config)
async
¶
Run the subgraph to completion.
AutonomousNode¶
promptise.engine.nodes.AutonomousNode
¶
Bases: BaseNode
Meta-node: the LLM receives a pool of configured nodes and dynamically composes its own reasoning path at runtime.
Instead of following a pre-defined graph, the agent:
- Sees all available nodes as a menu of capabilities
- Decides which node to execute next based on the task
- Executes that node (with its tools, blocks, guards)
- Evaluates the result
- Decides the next node or finishes
The developer provides the LEGO blocks. The agent builds the model.
Usage::

    autonomous = AutonomousNode(
        "agent",
        node_pool=[
            web_researcher("search", tools=search_tools),
            data_analyst("analyze", tools=db_tools),
            fact_checker("verify"),
            summarizer("conclude"),
            planner("plan"),
        ],
        planner_instructions="You are a research assistant. "
        "Use the available steps to answer the user's question thoroughly.",
    )

    graph = PromptGraph("autonomous")
    graph.add_node(autonomous)
    graph.set_entry("agent")
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `node_pool` | `list[BaseNode] \| None` | Available nodes the agent can choose from. Each node is fully configured with its own tools, blocks, guards, and instructions. | `None` |
| `planner_instructions` | `str` | How the agent should approach selecting and sequencing nodes. | `''` |
| `allow_repeat` | `bool` | Whether the agent can re-use the same node multiple times. | `True` |
| `max_steps` | `int` | Maximum nodes the agent can execute per run. | `15` |
execute(state, config)
async
¶
Autonomously select and execute nodes from the pool.
Reasoning Nodes¶
ThinkNode¶
promptise.engine.reasoning_nodes.ThinkNode
¶
Bases: PromptNode
Analyzes current state, identifies gaps, recommends next step.
Pre-configured with:
- Instructions for structured gap analysis
- Auto-injects observations and plan from state
- No tools (pure reasoning)
- Outputs: gap_analysis, confidence, next_step, reasoning

The developer just creates ThinkNode("think") — everything else is handled internally.
PlanNode¶
promptise.engine.reasoning_nodes.PlanNode
¶
Bases: PromptNode
Creates structured plans with self-evaluation and subgoal tracking.
Pre-configured with:
- Instructions for creating prioritized subgoals
- Quality self-evaluation (re-plans if below threshold)
- Auto-manages state.plan and state.completed
- Outputs: subgoals, priorities, active_subgoal, quality_score
execute(state, config)
async
¶
Execute planning and update state.plan.
ReflectNode¶
promptise.engine.reasoning_nodes.ReflectNode
¶
Bases: PromptNode
Reviews what happened, identifies mistakes, generates corrections.
Pre-configured with:
- Instructions for honest self-assessment
- Auto-injects observations and past reflections
- Auto-stores new reflections in state
- Outputs: progress_assessment, mistake, correction, confidence, route
execute(state, config)
async
¶
Execute reflection and auto-store in state.
CritiqueNode¶
promptise.engine.reasoning_nodes.CritiqueNode
¶
Bases: PromptNode
Challenges the current answer/plan with counter-arguments.
Pre-configured with:
- Instructions for adversarial review
- Severity scoring (routes to revision if above threshold)
- Outputs: weaknesses, counter_arguments, improvements, severity
execute(state, config)
async
¶
Execute critique and route based on severity.
ObserveNode¶
promptise.engine.reasoning_nodes.ObserveNode
¶
Bases: PromptNode
Processes raw tool results into structured, actionable data.
Pre-configured with:
- Instructions to extract entities, facts, and key findings
- Auto-injects recent observations
- Enriches state.context with extracted data
- Outputs: summary, entities, facts, key_findings
execute(state, config)
async
¶
Execute observation and enrich state with extracted data.
SynthesizeNode¶
promptise.engine.reasoning_nodes.SynthesizeNode
¶
Bases: PromptNode
Combines all gathered data into a comprehensive final answer.
Pre-configured with:
- Instructions for synthesis with source citation
- Auto-injects ALL observations, reflections, plan progress
- Default terminal node (ends the graph)
- Outputs: answer, confidence, sources
ValidateNode¶
promptise.engine.reasoning_nodes.ValidateNode
¶
Bases: PromptNode
LLM evaluates whether output meets specified criteria.
Pre-configured with:
- Natural language validation criteria
- Pass/fail routing
- Outputs: passes, issues, suggestions, confidence
execute(state, config)
async
¶
Execute validation and route based on pass/fail.
JustifyNode¶
promptise.engine.reasoning_nodes.JustifyNode
¶
Bases: PromptNode
Forces explicit justification of the last decision/action.
Pre-configured with:
- Instructions for structured reasoning chain
- Auto-injects the last action from state history
- Stores justification in state for audit trail
- Outputs: reasoning_chain, evidence, conclusion, confidence
execute(state, config)
async
¶
Execute justification and store in state.
RetryNode¶
promptise.engine.reasoning_nodes.RetryNode
¶
FanOutNode¶
promptise.engine.reasoning_nodes.FanOutNode
¶
Prebuilt Graph Factories¶
Factory functions that build common reasoning patterns. All exposed as static methods on PromptGraph (e.g. PromptGraph.react(...)).
build_react_graph¶
promptise.engine.prebuilts.build_react_graph(tools=None, system_prompt='', *, blocks=None, max_node_iterations=15)
¶
Build a standard ReAct agent graph.
Single PromptNode with tools. The LLM decides when to call tools and when to produce the final answer. The engine handles the tool-calling loop automatically.
This is the default graph used by build_agent().
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `tools` | `list[BaseTool] \| None` | Tools available to the agent. | `None` |
| `system_prompt` | `str` | System prompt text. | `''` |
| `blocks` | `list[Any] \| None` | Optional PromptBlocks for the reasoning node. | `None` |
| `max_node_iterations` | `int` | Max tool-calling iterations. | `15` |
Returns:
| Type | Description |
|---|---|
| `PromptGraph` | A ready-to-run ReAct `PromptGraph`. |
build_peoatr_graph¶
promptise.engine.prebuilts.build_peoatr_graph(tools=None, system_prompt='', *, planning_instructions='', acting_instructions='', thinking_instructions='', reflecting_instructions='', blocks=None)
¶
Build a PEOATR (Plan → Act → Think → Reflect) graph.
Four-stage reasoning pattern where the agent:
1. Plans subgoals with self-evaluation
2. Executes tools to achieve subgoals
3. Analyzes tool results (think)
4. Reflects on progress and routes (replan/continue/answer)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `tools` | `list[BaseTool] \| None` | Tools available during the Act stage. | `None` |
| `system_prompt` | `str` | Base system prompt prepended to all stages. | `''` |
| `planning_instructions` | `str` | Extra instructions for the Plan stage. | `''` |
| `acting_instructions` | `str` | Extra instructions for the Act stage. | `''` |
| `thinking_instructions` | `str` | Extra instructions for the Think stage. | `''` |
| `reflecting_instructions` | `str` | Extra instructions for the Reflect stage. | `''` |
| `blocks` | `list[Any] \| None` | Optional PromptBlocks shared across all stages. | `None` |
Returns:
| Type | Description |
|---|---|
| `PromptGraph` | A ready-to-run PEOATR `PromptGraph`. |
build_research_graph¶
promptise.engine.prebuilts.build_research_graph(search_tools=None, synthesis_tools=None, system_prompt='', *, blocks=None, verify=True)
¶
Build a Search → Verify → Synthesize research pipeline.
Three-stage pattern for research tasks:
1. Search: use search tools to gather information
2. Verify: cross-check findings (optional)
3. Synthesize: combine findings into a final answer
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `search_tools` | `list[BaseTool] \| None` | Tools for the search stage. | `None` |
| `synthesis_tools` | `list[BaseTool] \| None` | Optional tools for the synthesis stage. | `None` |
| `system_prompt` | `str` | Base system prompt. | `''` |
| `blocks` | `list[Any] \| None` | Optional PromptBlocks shared across stages. | `None` |
| `verify` | `bool` | Whether to include a verification stage. | `True` |
Returns:
| Type | Description |
|---|---|
| `PromptGraph` | A ready-to-run research-pipeline `PromptGraph`. |
build_autonomous_graph¶
promptise.engine.prebuilts.build_autonomous_graph(node_pool=None, system_prompt='', *, tools=None, max_steps=15)
¶
Build an autonomous agent that composes its own reasoning path.
The agent receives a pool of configured nodes and dynamically decides which to execute at each step. If no node_pool is provided, a default set is created from the tools.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `node_pool` | `list \| None` | Pre-configured nodes the agent can choose from. | `None` |
| `system_prompt` | `str` | Base instructions for the autonomous planner. | `''` |
| `tools` | `list[BaseTool] \| None` | Tools to create default nodes from (if no pool given). | `None` |
| `max_steps` | `int` | Maximum reasoning steps. | `15` |
Returns:
| Type | Description |
|---|---|
| `PromptGraph` | A ready-to-run autonomous `PromptGraph`. |
build_deliberate_graph¶
promptise.engine.prebuilts.build_deliberate_graph(tools=None, system_prompt='', **kwargs)
¶
Build a deliberate reasoning graph: Think → Plan → Act → Observe → Reflect.
Slower but higher quality. The agent thinks before acting, observes results carefully, and reflects before continuing.
build_debate_graph¶
promptise.engine.prebuilts.build_debate_graph(system_prompt='', *, max_rounds=5, **kwargs)
¶
Build a debate graph: Proposer → Critic → Judge.
Two adversarial nodes alternate until consensus.
build_pipeline_graph¶
promptise.engine.prebuilts.build_pipeline_graph(*nodes)
¶
Build a simple sequential pipeline from a list of nodes.
Usage::

    graph = PromptGraph.pipeline(
        planner("plan"),
        web_researcher("search", tools=my_tools),
        summarizer("conclude"),
    )
Engine Hooks¶
Observers that the engine calls at each node boundary. Used for logging, metrics, cycle detection, and budget enforcement.
Hook¶
promptise.engine.hooks.Hook
¶
Bases: Protocol
Protocol for engine hooks.
All methods are optional — implement only the ones you need.
The engine checks hasattr before calling each method.
on_error(node, error, state)
async
¶
Called when a node raises. Return a next-node name to redirect, or None to re-raise.
on_graph_mutation(mutation, state)
async
¶
Called before a graph mutation is applied. Return False to block it.
post_node(node, result, state)
async
¶
Called after a node executes. Can modify result.
post_tool(tool_name, result, args, state)
async
¶
Called after a tool executes. Can modify result. Return modified result.
pre_node(node, state)
async
¶
Called before a node executes. Can modify state.
pre_tool(tool_name, args, state)
async
¶
Called before a tool executes. Can modify args. Return modified args.
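Because the protocol is duck-typed and all methods are optional, a hook can be any object implementing the callbacks it cares about. A minimal sketch (the hook class and the `api_key` field name are illustrative, not part of the library):

```python
import asyncio

class RedactSecretsHook:
    """Implements only pre_tool; the engine's hasattr check skips the rest."""

    async def pre_tool(self, tool_name, args, state):
        # Drop a sensitive argument before any tool receives it,
        # returning the modified args as the protocol requires.
        return {k: v for k, v in args.items() if k != "api_key"}

hook = RedactSecretsHook()
cleaned = asyncio.run(hook.pre_tool("search", {"query": "mcp", "api_key": "s3cret"}, None))
print(cleaned)  # {'query': 'mcp'}
```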
LoggingHook¶
promptise.engine.hooks.LoggingHook
¶
TimingHook¶
promptise.engine.hooks.TimingHook
¶
Enforce per-node time budgets.
If a node exceeds its time budget, the hook sets an error on the result but does NOT abort the graph (the engine's error recovery handles that).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `default_budget_ms` | `float` | Default time budget in milliseconds. | `30000` |
| `per_node_budgets` | `dict[str, float] \| None` | Override budgets per node name. | `None` |
CycleDetectionHook¶
promptise.engine.hooks.CycleDetectionHook
¶
Detect infinite loops by tracking visit patterns.
If the same sequence of N nodes repeats max_repeats times,
the hook forces the graph to end.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `sequence_length` | `int` | Length of the pattern to detect. | `3` |
| `max_repeats` | `int` | How many times the pattern must repeat before triggering. | `3` |
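One plausible way to detect such a repeating visit pattern is sketched below. This is an illustration of the idea, not the hook's actual algorithm:

```python
def has_repeating_cycle(visits, sequence_length=3, max_repeats=3):
    # True if the last sequence_length * max_repeats visits are
    # max_repeats back-to-back copies of the same node pattern.
    window = sequence_length * max_repeats
    if len(visits) < window:
        return False
    tail = visits[-window:]
    pattern = tail[:sequence_length]
    return all(
        tail[i : i + sequence_length] == pattern
        for i in range(0, window, sequence_length)
    )

looping = has_repeating_cycle(["plan", "act", "reflect"] * 3)
print(looping)  # True: the 3-node pattern repeated 3 times
```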
MetricsHook¶
promptise.engine.hooks.MetricsHook
¶
Collects per-node metrics: tokens, latency, errors, call counts.
Access metrics via hook.metrics dict after execution.
BudgetHook¶
promptise.engine.hooks.BudgetHook
¶
Preprocessors and Postprocessors¶
Pluggable functions that run before/after a node's LLM call. Compose with chain_preprocessors and chain_postprocessors.
context_enricher¶
promptise.engine.processors.context_enricher(*, include_timestamp=True, include_iteration=True)
¶
Add timestamp and iteration info to state.context.
state_summarizer¶
promptise.engine.processors.state_summarizer(*, max_context_chars=2000)
¶
Truncate long string values in state.context to save tokens.
input_validator¶
promptise.engine.processors.input_validator(*, required_keys)
¶
Validate that required keys exist in state.context.
json_extractor¶
promptise.engine.processors.json_extractor(*, keys=None)
¶
Parse JSON from LLM output string, optionally filter to specific keys.
confidence_scorer¶
promptise.engine.processors.confidence_scorer()
¶
Add a confidence score based on hedging language in text output.
state_writer¶
promptise.engine.processors.state_writer(*, fields)
¶
Write specific output fields to state.context keys.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `fields` | `dict[str, str]` | Mapping of output_key → state.context_key, e.g. `{"answer": "final_answer"}`. | required |
output_truncator¶
promptise.engine.processors.output_truncator(*, max_chars=4000)
¶
Truncate output text to a maximum length.
chain_preprocessors¶
promptise.engine.processors.chain_preprocessors(*fns)
¶
Combine multiple preprocessors into one. Runs in order.
chain_postprocessors¶
promptise.engine.processors.chain_postprocessors(*fns)
¶
Combine multiple postprocessors into one. Pipes output through each.
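The chaining helpers amount to left-to-right function composition: each processor's output feeds the next. A minimal sketch of the idea (`chain` and `normalize` are illustrative names, not the library's implementation):

```python
def chain(*fns):
    # Compose callables left to right into a single callable.
    def run(value):
        for fn in fns:
            value = fn(value)
        return value
    return run

normalize = chain(str.strip, str.lower)
result = normalize("  Hello World ")
print(result)  # hello world
```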
Conversations¶
ConversationStore¶
promptise.conversations.ConversationStore
¶
Bases: Protocol
Interface for conversation persistence backends.
All methods are async. Implementations must handle their own
thread-safety and connection management. Call :meth:close when
the store is no longer needed.
Security note: Stores do NOT enforce session ownership internally.
Ownership checks happen at the PromptiseAgent layer via
_enforce_ownership() before calling store methods. If you use stores
directly (outside of build_agent()), you MUST verify session ownership
yourself by calling get_session() and checking user_id before
load_messages() or delete_session().
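The ownership check described above can be sketched as follows. `load_for_user` and `FakeStore` are hypothetical; the sketch only mirrors the protocol's `get_session`/`load_messages` contract:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class FakeSessionInfo:
    user_id: str

class FakeStore:
    # Stand-in for a ConversationStore backend with one canned session.
    async def get_session(self, session_id):
        return FakeSessionInfo(user_id="alice") if session_id == "s1" else None

    async def load_messages(self, session_id):
        return ["hi"]

async def load_for_user(store, session_id, user_id):
    # Verify ownership before touching messages, as the note requires.
    info = await store.get_session(session_id)
    if info is None or info.user_id != user_id:
        raise PermissionError("session does not belong to this user")
    return await store.load_messages(session_id)

msgs = asyncio.run(load_for_user(FakeStore(), "s1", "alice"))
print(msgs)  # ['hi']
```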
Implementing a custom store requires four data methods plus
:meth:close::
    class MyStore:
        async def load_messages(self, session_id: str) -> list[Message]: ...
        async def save_messages(self, session_id: str, messages: list[Message]) -> None: ...
        async def delete_session(self, session_id: str) -> bool: ...
        async def list_sessions(self, *, user_id=None, limit=50, offset=0) -> list[SessionInfo]: ...
        async def close(self) -> None: ...
close()
async
¶
Release resources (connections, file handles).
delete_session(session_id)
async
¶
Delete a session and all its messages.
Returns True if the session existed and was deleted.
get_session(session_id)
async
¶
Return session metadata, or None if the session does not exist.
Used for ownership checks before loading messages.
list_sessions(*, user_id=None, limit=50, offset=0)
async
¶
List sessions, optionally filtered by user.
Returns sessions ordered by updated_at descending (most
recent first).
load_messages(session_id)
async
¶
Load all messages for a session, ordered by creation time.
Returns an empty list if the session does not exist.
save_messages(session_id, messages)
async
¶
Persist the full message list for a session.
Creates the session if it does not exist. Replaces all messages if the session already exists.
Message¶
promptise.conversations.Message
dataclass
¶
A single conversation message.
Attributes:
| Name | Type | Description |
|---|---|---|
| `role` | `str` | The message role (e.g. `user`, `assistant`, `system`, `tool`). |
| `content` | `str` | The message text. |
| `metadata` | `dict[str, Any]` | Optional metadata (tool calls, token counts, latency, etc.). |
| `created_at` | `datetime` | UTC timestamp. Defaults to now. |
SessionInfo¶
promptise.conversations.SessionInfo
dataclass
¶
Metadata about a conversation session.
Attributes:
| Name | Type | Description |
|---|---|---|
| `session_id` | `str` | Unique session identifier. |
| `user_id` | `str \| None` | Optional user identifier for multi-user applications. |
| `title` | `str \| None` | Optional human-readable title. |
| `message_count` | `int` | Number of messages in the session. |
| `created_at` | `datetime` | When the session was first created. |
| `updated_at` | `datetime` | When the session was last updated. |
| `metadata` | `dict[str, Any]` | Application-specific metadata (tags, source, etc.). |
| `expires_at` | `datetime \| None` | Optional expiry time for data-retention enforcement. |
Built-in Stores¶
promptise.conversations.InMemoryConversationStore
¶
Dict-backed conversation store for testing and prototyping.
No persistence. All data lives in-process and is lost when the process exits.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `max_sessions` | `int` | Maximum sessions to keep. Oldest evicted first. | `0` |
| `max_messages_per_session` | `int` | Maximum messages per session. Oldest messages are dropped when the limit is reached. | `0` |
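The rolling-window behavior of `max_messages_per_session` (shared by all built-in stores) can be sketched as a simple trim, where `0` disables the limit. `trim_rolling` is an illustrative helper, not store API:

```python
def trim_rolling(messages, max_messages_per_session):
    # 0 (the default) disables the limit; otherwise keep the newest N.
    if max_messages_per_session <= 0:
        return messages
    return messages[-max_messages_per_session:]

kept = trim_rolling(["m1", "m2", "m3", "m4"], 3)
print(kept)  # ['m2', 'm3', 'm4'] — the oldest message was dropped
```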
cleanup_expired()
async
¶
Remove all expired sessions. Returns the count of sessions removed.
Call periodically for data-retention enforcement.
close()
async
¶
No-op for in-memory store.
delete_session(session_id)
async
¶
Delete a session and its messages.
get_session(session_id)
async
¶
Return session metadata or None.
list_sessions(*, user_id=None, limit=50, offset=0)
async
¶
List sessions ordered by most recently updated. Expired sessions excluded.
load_messages(session_id)
async
¶
Load all messages for a session. Returns empty for expired sessions.
save_messages(session_id, messages)
async
¶
Persist messages, creating the session if needed.
session_count()
¶
Return the number of active sessions.
update_session(session_id, *, user_id=..., title=..., metadata=...)
async
¶
Update session metadata.
Only provided fields are updated. Returns True if the
session existed.
promptise.conversations.SQLiteConversationStore
¶
SQLite-backed conversation store using aiosqlite.
Auto-creates tables on first use. Requires
pip install aiosqlite.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the SQLite database file. Use `:memory:` for an in-memory database. | `'conversations.db'` |
| `table_prefix` | `str` | Prefix for table names. | `'promptise_'` |
| `max_messages_per_session` | `int` | Rolling window limit. | `0` |
close()
async
¶
Close the database connection.
delete_session(session_id)
async
¶
Delete a session and all its messages.
get_session(session_id)
async
¶
Return session metadata or None.
list_sessions(*, user_id=None, limit=50, offset=0)
async
¶
List sessions ordered by most recently updated.
load_messages(session_id)
async
¶
Load all messages for a session.
save_messages(session_id, messages)
async
¶
Persist messages, creating the session if needed.
update_session(session_id, *, user_id=..., title=..., metadata=...)
async
¶
Update session metadata fields.
promptise.conversations.PostgresConversationStore
¶
PostgreSQL-backed conversation store using asyncpg.
Auto-creates tables and indexes on first use. Requires
pip install asyncpg.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `dsn` | `str` | PostgreSQL connection string (e.g. `postgresql://user:pass@localhost:5432/dbname`). | required |
| `table_prefix` | `str` | Prefix for table names. | `'promptise_'` |
| `pool_min` | `int` | Minimum connection pool size. | `2` |
| `pool_max` | `int` | Maximum connection pool size. | `10` |
| `max_messages_per_session` | `int` | Rolling window limit. | `0` |
close()
async
¶
Close the connection pool.
delete_session(session_id)
async
¶
Delete a session and all its messages (cascade).
get_session(session_id)
async
¶
Return session metadata or None.
list_sessions(*, user_id=None, limit=50, offset=0)
async
¶
List sessions ordered by most recently updated.
load_messages(session_id)
async
¶
Load all messages for a session.
save_messages(session_id, messages)
async
¶
Persist messages, creating the session if needed.
update_session(session_id, *, user_id=..., title=..., metadata=...)
async
¶
Update session metadata fields. Only provided fields are changed.
promptise.conversations.RedisConversationStore
¶
Redis-backed conversation store using redis.asyncio.
Stores sessions and messages as JSON in Redis hashes and sorted sets. Ideal for ephemeral sessions, caching, or high-throughput scenarios.
Requires pip install redis.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `url` | `str` | Redis connection URL (e.g. `redis://localhost:6379/0`). | `'redis://localhost:6379'` |
| `key_prefix` | `str` | Prefix for all Redis keys. | `'promptise:'` |
| `ttl` | `int` | Default TTL in seconds for session data. | `0` |
| `max_messages_per_session` | `int` | Rolling window limit. | `0` |
close()
async
¶
Close the Redis connection.
delete_session(session_id)
async
¶
Delete a session from Redis.
get_session(session_id)
async
¶
Return session metadata or None.
list_sessions(*, user_id=None, limit=50, offset=0)
async
¶
List sessions from Redis sorted set index.
load_messages(session_id)
async
¶
Load all messages for a session from a Redis sorted set.
save_messages(session_id, messages)
async
¶
Persist messages as a Redis list.
update_session(session_id, *, user_id=..., title=..., metadata=...)
async
¶
Update session metadata in Redis hash.