Quickstart
Get a durable AI agent running in under 5 minutes. Papayya is a durable execution runtime — it does not ship LLM provider adapters. You bring your own LLM SDK (anthropic, openai, bedrock, ollama, ...) and Papayya wraps your calls with checkpointing and observability.
Prerequisites
- Python 3.10+ or Node.js 20+
- An LLM API key for whichever provider you plan to use
Path A: Wrap Existing Code (Local)
The fastest way to add Papayya to a project you already have. No containers, no deploy step.
Python
```shell
pip install papayya
```

```python
# anywhere in your existing codebase
from papayya.durable import papayya

t = papayya()  # auto-detects API key, persists to cloud
run = t.run("my-agent", budget_usd=5.0)

# Wrap your existing functions — each call is checkpointed
search = run.task("search", my_search_function)
summarize = run.task("summarize", my_llm_summarize)

results = search("AI agents in production")
summary = summarize(results)
run.complete(summary)
```

If your process crashes and restarts with the same run_id, completed tasks are replayed from cache — no re-execution.
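The replay guarantee can be pictured with a toy checkpoint store. This is not Papayya's implementation, only the idea behind task-level durability: each result is persisted under a key derived from the run ID, task name, and arguments, and a wrapped task consults the store before executing. All names here (`ToyRun`, the `.checkpoints` directory) are illustrative.

```python
import json
from pathlib import Path

class ToyRun:
    """Illustrative checkpoint store: results keyed by run ID + task + args."""

    def __init__(self, run_id: str, store_dir: str = ".checkpoints"):
        self.path = Path(store_dir) / f"{run_id}.json"
        self.path.parent.mkdir(exist_ok=True)
        self.cache = json.loads(self.path.read_text()) if self.path.exists() else {}

    def task(self, name, fn):
        def wrapped(*args):
            key = f"{name}:{json.dumps(args)}"
            if key in self.cache:  # already checkpointed: replay, don't re-run
                return self.cache[key]
            result = fn(*args)     # first execution
            self.cache[key] = result
            self.path.write_text(json.dumps(self.cache))  # checkpoint to disk
            return result
        return wrapped

# First process: executes the task and checkpoints the result.
run = ToyRun("demo-run")
search = run.task("search", lambda q: [f"result for {q}"])
print(search("AI agents"))

# "Restarted" process with the same run ID: replays from the store.
run2 = ToyRun("demo-run")
search2 = run2.task("search", lambda q: 1 / 0)  # would crash if executed
print(search2("AI agents"))  # -> ['result for AI agents'], function never runs
```

The real runtime adds durable remote storage, timing, and cost tracking on top of the same memoize-by-key idea.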
TypeScript
```shell
npm install papayya
```

```typescript
import { papayya } from "papayya";

const t = papayya();
const run = t.run({ agent: "my-agent", budgetUsd: 5.0 });

const search = run.task("search", mySearchFunction);
const summarize = run.task("summarize", myLLMSummarize);

const results = await search("AI agents in production");
const summary = await summarize(results);
await run.complete(summary);
```

View in Dashboard
Open the Dashboard and click the Local tab to see your run with checkpoints, timing, and cost.
Path B: Deploy to the Cloud
Deploy your agent code and let Papayya run it in managed containers.
1. Sign up
Create an account in the Dashboard (opens in a new tab) or via the CLI:
```shell
# Python
papayya signup

# TypeScript
papayya signup
```

2. Store your LLM API key
Your agent runs inside containers. LLM keys must be stored as project secrets:
```shell
# Set whichever provider key your agent code imports
papayya secrets set ANTHROPIC_API_KEY sk-ant-...
# or
papayya secrets set OPENAI_API_KEY sk-proj-...
```

3. Define your agent
Python
```python
# agent.py
import json

from openai import OpenAI
from papayya import agent

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_company_info",
            "description": "Look up basic information about a company.",
            "parameters": {
                "type": "object",
                "properties": {"company_name": {"type": "string"}},
                "required": ["company_name"],
            },
        },
    },
]

def get_company_info(company_name: str) -> str:
    data = {
        "anthropic": "Anthropic is an AI safety company that created Claude.",
        "stripe": "Stripe is a financial infrastructure platform.",
    }
    return data.get(company_name.lower(), f"No data for '{company_name}'.")

@agent(name="research-bot", model="gpt-4o-mini", budget_usd=1.00)
def research_bot(input_data):
    """Your agent loop — call your LLM SDK directly."""
    client = OpenAI()
    prompt = input_data if isinstance(input_data, str) else json.dumps(input_data)
    messages = [
        {"role": "system", "content": "You are a research assistant."},
        {"role": "user", "content": prompt},
    ]
    for _ in range(10):
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS,
        )
        choice = response.choices[0]
        if not choice.message.tool_calls:
            return choice.message.content
        messages.append(choice.message)
        for tc in choice.message.tool_calls:
            args = json.loads(tc.function.arguments)
            result = get_company_info(**args)
            messages.append({"role": "tool", "tool_call_id": tc.id, "content": result})
    return "Max steps reached."

# Run locally: python agent.py
if __name__ == "__main__":
    print(research_bot("Tell me about Anthropic"))
```

The @agent decorator registers your function for deployment. Locally, just call it directly; the runtime shim discovers it automatically.
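What does budget_usd enforce? Conceptually, a running cost meter that is updated from each response's token usage and checked between steps. The sketch below is an assumption about the mechanism: the per-token prices, the `CostMeter` class, and the `BudgetExceeded` exception are all made up for illustration, and Papayya's real accounting may differ.

```python
class BudgetExceeded(RuntimeError):
    pass

class CostMeter:
    """Toy per-run cost meter; the rates below are illustrative, not real."""

    # hypothetical $ per 1M tokens: (input, output)
    PRICES = {"gpt-4o-mini": (0.15, 0.60)}

    def __init__(self, budget_usd: float):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def record(self, model: str, input_tokens: int, output_tokens: int):
        in_rate, out_rate = self.PRICES[model]
        self.spent_usd += (input_tokens * in_rate + output_tokens * out_rate) / 1e6
        if self.spent_usd > self.budget_usd:
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.4f} of ${self.budget_usd:.2f} budget"
            )

meter = CostMeter(budget_usd=0.01)
meter.record("gpt-4o-mini", input_tokens=2_000, output_tokens=500)  # well under
print(f"${meter.spent_usd:.6f}")  # -> $0.000600
```

Under this model, a run that exceeds its cap fails between steps rather than mid-call, which is why the agent loop itself never needs budget logic.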
TypeScript
```shell
papayya init --name my-agent
```

This creates a single agent.ts file with example tools. Edit it to fit your use case.
4. Deploy
```shell
# Zero-arg — auto-discovers agent.py in cwd and @agent functions in it
papayya deploy

# Or specify the file explicitly
papayya deploy agent.py
```

5. Run
Trigger a run via the API or SDK:
```python
from papayya import Client

client = Client()
result = client.run_sync(agent_id="research-bot", input="Tell me about Anthropic")
print(result)
```

Or via CLI:

```shell
papayya run --input "Tell me about Anthropic" --cloud
```

6. Observe
Open the Dashboard (opens in a new tab) — click the Cloud tab to see:
- Run status and timing
- Step-by-step execution trace with tool calls
- Token usage and cost per step
- Cancel or replay controls
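If you would rather watch a run from code than from the dashboard, run_sync blocks until completion; for long-running agents a submit-and-poll loop with capped exponential backoff is the usual alternative. In the sketch below, `client.submit` and `client.get_status` are assumed names used only for illustration, not documented Papayya APIs; the backoff helper itself is concrete.

```python
import itertools

def backoff_delays(base: float = 0.5, cap: float = 8.0, factor: float = 2.0):
    """Yield capped exponential poll intervals in seconds."""
    delay = base
    while True:
        yield delay
        delay = min(delay * factor, cap)

# Hypothetical submit-and-poll loop (submit/get_status are assumed names):
#
#   run_id = client.submit(agent_id="research-bot", input="...")
#   for delay in backoff_delays():
#       if client.get_status(run_id) in ("succeeded", "failed"):
#           break
#       time.sleep(delay)

print(list(itertools.islice(backoff_delays(), 6)))  # -> [0.5, 1.0, 2.0, 4.0, 8.0, 8.0]
```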
Check on runs (CLI)
```shell
# Python
papayya status <run-id>
papayya logs <run-id>

# TypeScript
papayya status <task-id>
papayya logs <task-id>
```

What's next?
- Triggers — three ways to start runs: API calls, cron schedules, and webhooks
- Core Concepts — understand runs, steps, durability, and budgets
- SDK Reference — Agent class, tools, local execution, model support
- CLI Reference — all commands and flags
- API Reference — REST API for programmatic access
- Dashboard Guide — navigating the observability UI