Agent
The @agent decorator is the primary way to define a deployable agent in Python. It registers a function as an agent entry point that the CLI can discover and the runtime shim can execute.
Papayya does not ship LLM provider adapters and does not call LLMs on your behalf. You bring your own LLM SDK (OpenAI, Anthropic, etc.) and call it directly inside your @agent function. Papayya wraps your function with durable execution, budget enforcement, and observability.
`model` is a display label only. It appears in the dashboard and on deployed agent metadata. It is never used to route to a provider, select an SDK, or compute cost. Your code picks the provider.
Python
Defining an agent
```python
from papayya import agent
from openai import OpenAI
import json

@agent(name="research-bot", model="gpt-4o-mini", budget_usd=1.00)
def research_bot(input_data):
    """Your agent loop — call your LLM SDK directly."""
    client = OpenAI()
    prompt = input_data if isinstance(input_data, str) else json.dumps(input_data)
    messages = [
        {"role": "system", "content": "You are a research assistant."},
        {"role": "user", "content": prompt},
    ]
    for _ in range(10):
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages,
        )
        choice = response.choices[0]
        if not choice.message.tool_calls:
            return choice.message.content
        # ... handle tool calls ...
    return "Max steps reached."

# Run locally — just call the function
if __name__ == "__main__":
    print(research_bot("What are the latest AI trends?"))
```

Multiple agents per file
```python
@agent(name="researcher", model="gpt-4o-mini", budget_usd=1.00)
def researcher(input_data):
    ...

@agent(name="summarizer", model="gpt-4o-mini", budget_usd=0.50)
def summarizer(input_data):
    ...
```

The CLI discovers all @agent functions automatically. The runtime shim uses PAPAYYA_AGENT_FUNCTION to select which one to run.
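The selection mechanism can be pictured like this — an illustrative sketch, not Papayya's actual shim code; the registry dict and the two dummy functions here are hypothetical:

```python
import os

# Illustrative sketch only — not Papayya's shim. Shows how one entry point
# could be chosen from several registered functions via an environment variable.
def researcher(input_data):
    return f"researching: {input_data}"

def summarizer(input_data):
    return f"summarizing: {input_data}"

REGISTERED = {"researcher": researcher, "summarizer": summarizer}

# PAPAYYA_AGENT_FUNCTION names the function to run; fall back to a default.
name = os.environ.get("PAPAYYA_AGENT_FUNCTION", "researcher")
entry_point = REGISTERED.get(name, researcher)
```

Setting `PAPAYYA_AGENT_FUNCTION=summarizer` before launch would select the second function instead.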
Deploying
```shell
# Zero-arg — discovers agent.py in cwd
papayya deploy

# Explicit file
papayya deploy agents.py
```

Triggering runs programmatically
```python
from papayya import Client

client = Client()

# Blocking — waits for completion, returns output
result = client.run_sync(agent_id="research-bot", input="AI trends", budget_cents=100)
print(result)         # the output string
print(result.run_id)  # the run ID

steps = client.get_steps(result.run_id)

# Non-blocking — returns immediately
run = client.run(agent_id="research-bot", input="AI trends")
# poll with client.get_status(run["id"])
```

@agent decorator parameters
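The non-blocking flow usually wants a small poll loop. Here is a minimal sketch; it assumes `get_status` returns a dict with a "status" key, and the terminal values "completed"/"failed" are assumptions to check against your API responses:

```python
import time

def wait_for_run(client, run_id, interval_s=2.0, timeout_s=300.0):
    """Poll a non-blocking run until it reaches a terminal state.

    Assumes get_status(run_id) returns a dict with a "status" key; the
    "completed"/"failed" terminal values are assumptions, not confirmed API.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = client.get_status(run_id)
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"run {run_id} did not finish in {timeout_s}s")
```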
| Parameter | Type | Required | Description |
|---|---|---|---|
| `name` | str | Yes | Agent identifier (used as slug for deploy/lookup) |
| `model` | str | Yes | Display label (e.g. `gpt-4o-mini`). Not used for routing. |
| `instructions` | str | No | System prompt (stored as agent config metadata) |
| `max_steps` | int | No | Maximum execution steps (default: 50) |
| `budget_usd` | float | No | Spend cap in USD |
Registry functions
| Function | Description |
|---|---|
| `get_registry()` | Returns dict of all registered @agent functions. |
| `get_agent(name)` | Look up a specific agent registration by name. |
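The registry pattern itself is simple. As an illustrative sketch (not Papayya's actual source), the decorator records each function plus its config in a module-level dict, which is what these lookup functions read from:

```python
# Illustrative sketch of a decorator registry — not Papayya's real implementation.
_REGISTRY = {}

def agent(name, model, **config):
    def decorator(fn):
        _REGISTRY[name] = {"fn": fn, "model": model, **config}
        return fn  # the function is returned unchanged, so local calls still work
    return decorator

def get_registry():
    return dict(_REGISTRY)

def get_agent(name):
    return _REGISTRY[name]

@agent(name="demo", model="gpt-4o-mini", budget_usd=0.10)
def demo(input_data):
    return f"echo: {input_data}"
```

Because the decorator returns the original function, `demo("hi")` still runs locally exactly as an undecorated function would.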
TypeScript
In TypeScript, AgentConfig.model is a ModelClient — an object with a chat(messages, tools, system) method that you implement around whichever provider SDK you use. Papayya does not ship provider adapters.
```typescript
import { Agent, defineTool } from "papayya";
import type { ModelClient, ModelResponse } from "papayya";
import Anthropic from "@anthropic-ai/sdk";

const searchWeb = defineTool(
  {
    name: "search_web",
    description: "Search the web",
    parameters: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
  },
  async (input) => {
    const { query } = input as { query: string };
    return callSearchAPI(query);
  },
);

// You write this adapter once — it's ~30 lines and lives in your codebase,
// not in Papayya. This keeps provider SDK churn out of the critical path.
const anthropic = new Anthropic();
const client: ModelClient = {
  async chat(messages, tools, system): Promise<ModelResponse> {
    const response = await anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 4096,
      system,
      messages: messages as any,
      tools: tools.map((t) => ({
        name: t.name,
        description: t.description,
        input_schema: t.input_schema,
      })),
    });
    return {
      content: response.content as any,
      stop_reason:
        response.stop_reason === "tool_use" ? "tool_use" :
        response.stop_reason === "max_tokens" ? "max_tokens" : "end_turn",
      usage: {
        input_tokens: response.usage.input_tokens,
        output_tokens: response.usage.output_tokens,
      },
    };
  },
};

const agent = new Agent({
  name: "research-bot",
  model: client,
  model_label: "claude-sonnet-4-20250514", // optional, for dashboard display
  instructions: "You are a research assistant.",
  max_steps: 10,
  tools: [searchWeb],
});

const result = await agent.run("What are the latest AI trends?");
console.log(result.output);
```

AgentConfig
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Agent identifier |
| `model` | ModelClient | Yes | Your client — Papayya does not ship provider adapters |
| `model_label` | string | No | Display label for dashboard metadata (not used for routing) |
| `instructions` | string \| (() => string) | Yes | System prompt (or factory function) |
| `tools` | ToolDefinition[] | Yes | Tools the agent can call |
| `max_steps` | number | Yes | Maximum execution steps |
| `description` | string | No | Human-readable description |
| `budget_usd` | number | No | Spend cap in USD |
Methods
| Method | Returns | Description |
|---|---|---|
| `agent.run(input, options?)` | Promise\<TaskResult\> | Execute locally using the ModelClient you supplied. Options: `signal`, `on_step`. |
| `agent.deploy(cloud?)` | Promise\<AgentMetadata\> | Deploy definition to cloud |
| `agent.trigger(input, cloud?)` | Promise\<{ task_id }\> | Trigger a cloud run |
| `agent.status(taskId, cloud?)` | Promise\<RunStatus\> | Get cloud run status |
| `agent.toDefinition()` | AgentDefinition | Serialize (strips execute functions) |
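Because `model` is just an object with a `chat` method, local tests can swap in a stub that never touches a provider. A sketch — the response shape here is inferred from the adapter example above, not a confirmed Papayya type:

```typescript
// Stub "model client" for offline tests — no Papayya or provider SDK needed.
// StubResponse mirrors the ModelResponse shape used in the adapter above;
// treat the exact fields as assumptions to verify against the real types.
type StubResponse = {
  content: { type: "text"; text: string }[];
  stop_reason: "end_turn" | "tool_use" | "max_tokens";
  usage: { input_tokens: number; output_tokens: number };
};

const stubClient = {
  // Always ends the turn immediately with a canned reply.
  async chat(): Promise<StubResponse> {
    return {
      content: [{ type: "text", text: "stubbed reply" }],
      stop_reason: "end_turn",
      usage: { input_tokens: 0, output_tokens: 0 },
    };
  },
};
```

Passing a stub like this as `model` lets you exercise tool wiring and step limits deterministically, with zero token spend.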
Why BYOF?
Papayya is infrastructure. Shipping first-class provider adapters would mean tracking every SDK update, pricing change, and tool-call format quirk from every provider — and when one breaks, it breaks your runs mid-flight. Instead you own a tiny adapter (≈30 lines), pinned to the SDK version you control, and Papayya focuses on durability, checkpointing, and observability.