Context API
The context object is passed to your handler function. It provides access to PIE's capabilities.
Overview
async function handler(input, context) {
// context.fetch() - Make HTTP requests
// context.db.query() - Run read-only Postgres queries
// context.managedDb.query() - Full CRUD queries on the PIE-managed database
// context.secrets - Access developer secrets
// context.userConfig - User-configurable settings
// context.user - User information
// context.oauth - OAuth operations (connectors only)
// context.ai - AI capabilities (chat, analyze, summarize, listModels)
// context.widget - Control plugin widget (show, update, hide)
// context.publicApp - Public app metadata / public action response API
// context.files - Upload, list, and manage files
// context.pdf - Extract text from PDF files
// context.notify - Post notifications (automations only)
// context.streamEvent() - Push real-time events to the client
// context.getSessionMetadata() - Read persistent sandbox session metadata
// context.updateSessionMetadata() - Write persistent sandbox session metadata
// context.tasks - Create and manage scheduled tasks
// context.billing - Charge users custom amounts
return { /* result */ };
}
context.fetch()
Make HTTP requests from your agent. All requests are logged for auditing.
Signature
const response = await context.fetch(url, options);
Parameters
| Parameter | Type | Description |
|---|---|---|
url | string | The URL to request |
options | object | Fetch options (optional) |
Options
{
method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH',
headers: { 'Content-Type': 'application/json', ... },
body: 'string or stringified JSON',
}
Response
{
ok: boolean, // true if status 200-299
status: number, // HTTP status code
body: string, // Response body as string
}
Examples
GET request:
const response = await context.fetch('https://api.example.com/data');
const data = JSON.parse(response.body);
POST request with JSON:
const response = await context.fetch('https://api.example.com/items', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ name: 'New Item' }),
});
With authorization header:
const response = await context.fetch('https://api.example.com/protected', {
headers: {
'Authorization': `Bearer ${context.secrets.API_TOKEN}`,
},
});
Limits
- Maximum 10 requests per handler execution
- 4 minute timeout per HTTP request (240 seconds)
- Overall handler execution timeout of 120 seconds
- Response body truncated at 10MB
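Since `response.body` is always a string, a small wrapper can centralize the status check and JSON parsing. `fetchJson` below is a hypothetical helper (not part of the context API); it takes the fetch function as an argument so the same wrapper works with both `context.fetch` and `context.oauth.fetch`.

```javascript
// Hypothetical helper (your own code, not a context method): checks `ok`,
// then parses the string body as JSON, with explicit errors for both failures.
async function fetchJson(fetchFn, url, options) {
  const response = await fetchFn(url, options);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  try {
    return JSON.parse(response.body);
  } catch {
    throw new Error('Response body was not valid JSON');
  }
}
```

Call it as `await fetchJson((u, o) => context.fetch(u, o), url)` so the platform method stays correctly bound.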
context.secrets
Access your agent's developer secrets.
Usage
const apiKey = context.secrets.MY_API_KEY;
if (!apiKey) {
return { error: true, message: 'API key not configured' };
}
Notes
- Secrets are defined in your manifest's `developerSecrets`
- Values are set when you create/configure the agent
- Secrets are encrypted at rest (AES-256-GCM)
- Missing secrets return `undefined`
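Because missing secrets come back as `undefined` rather than throwing, a small guard lets a handler fail early with one clear message. `missingSecrets` is a hypothetical helper, not a context method.

```javascript
// Hypothetical guard (your own code): returns the names of any secrets
// that are undefined so the handler can report them all at once.
function missingSecrets(secrets, names) {
  return names.filter((name) => secrets[name] === undefined);
}
```

Usage in a handler: `const missing = missingSecrets(context.secrets, ['MY_API_KEY']); if (missing.length > 0) return { error: true, message: `Missing secrets: ${missing.join(', ')}` };`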
context.userConfig
Access user-configurable settings defined in your manifest's userFields. Each user can customize these values through the agent settings UI.
Usage
const { topics, maxResults, frequency } = context.userConfig;
if (!topics || topics.length === 0) {
return { error: true, message: 'Please configure your topics in settings' };
}
Notes
- Values are defined per-user via the agent settings modal
- Defaults to the `default` values in your `userFields` schema if not configured
- Returns an empty object `{}` if no `userFields` are defined
- Type-safe: returns the correct type for each field
Example
Given this manifest:
{
"userFields": {
"topics": { "type": "tags", "default": ["tech"] },
"maxResults": { "type": "number", "default": 5 }
}
}
Access in your handler:
async function handler(input, context) {
const { topics, maxResults } = context.userConfig;
// topics: ["tech"] (or user's custom array)
// maxResults: 5 (or user's custom number)
for (const topic of topics) {
// fetch news for each topic...
}
}
See Manifest Schema - User-Configurable Fields for field type documentation.
context.user
Information about the current user.
Properties
| Property | Type | Description |
|---|---|---|
id | string | User's unique ID (UUID) |
displayName | string | User's display name |
Example
console.log(`Running for user: ${context.user.displayName}`);
context.oauth
OAuth operations for connectors. Only available when your manifest includes oauth configuration.
context.oauth.isConnected()
Check if the user has connected OAuth for this agent.
const connected = await context.oauth.isConnected();
if (!connected) {
return {
error: true,
message: 'Please connect the service first',
requiresAuth: true
};
}
context.oauth.fetch()
Make authenticated requests. PIE automatically:
- Adds the `Authorization: Bearer {token}` header
- Refreshes the token if expired
- Returns the response
const response = await context.oauth.fetch(url, options);
Parameters: Same as context.fetch()
Response:
{
ok: boolean,
status: number,
body: string,
headers: { [key: string]: string },
}
Example:
const response = await context.oauth.fetch(
'https://api.github.com/user/repos',
{
headers: { 'Accept': 'application/vnd.github.v3+json' },
}
);
if (!response.ok) {
throw new Error(`GitHub API error: ${response.status}`);
}
const repos = JSON.parse(response.body);
context.oauth.getConnectionInfo()
Get information about the OAuth connection.
const info = await context.oauth.getConnectionInfo();
// {
// connected: true,
// email: '[email protected]',
// provider: 'google',
// connectedAt: '2024-01-15T10:30:00Z'
// }
context.db
Run read-only queries against external Postgres databases.
context.db.query()
Execute a SQL query with strict safety guardrails.
const result = await context.db.query({
connection: {
connectionString: context.secrets.POSTGRES_URL,
ssl: 'require',
},
sql: 'SELECT id, email FROM users ORDER BY created_at DESC LIMIT 25',
params: [],
timeoutMs: 20000, // optional
maxRows: 1000, // optional
maxResponseBytes: 2e6, // optional
});
Parameters
| Field | Type | Description |
|---|---|---|
connection | object | Connection info (connectionString or host/user/password/database fields) |
sql | string | SQL query text |
params | array | Positional query params (optional) |
timeoutMs | number | Query timeout in milliseconds (optional) |
maxRows | number | Max rows returned (optional) |
maxResponseBytes | number | Max serialized response size in bytes (optional) |
Safety model
- Only read-only SQL is allowed (`SELECT`, `WITH`, and `EXPLAIN` on read queries)
- Multi-statement SQL is blocked
- Mutating or admin statements are blocked (`INSERT`, `UPDATE`, `DELETE`, `ALTER`, `DROP`, etc.)
- Queries run in read-only transaction mode with server-side timeout and response limits
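The server enforces these guardrails, but mirroring the rule client-side gives earlier, friendlier error messages. `looksReadOnly` is an illustrative sketch, not the platform's actual validator:

```javascript
// Sketch of a client-side pre-check mirroring the read-only rule above.
// The server remains the source of truth; this only fails fast.
function looksReadOnly(sql) {
  const trimmed = sql.trim();
  // a semicolon anywhere but the very end suggests multi-statement SQL
  if (trimmed.includes(';') && trimmed.indexOf(';') < trimmed.length - 1) {
    return false;
  }
  // only SELECT / WITH / EXPLAIN are allowed
  return /^(select|with|explain)\b/i.test(trimmed);
}
```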
Response
{
rows: [{ id: 'u1', email: '[email protected]' }],
columns: ['id', 'email'],
rowCount: 1,
truncated: false,
executionTimeMs: 42,
statementType: 'select'
}
context.managedDb
Full CRUD access to the PIE-managed developer database. Unlike context.db (which requires you to provide connection credentials to an external database), the managed database is provisioned by PIE and credentials are handled automatically.
The database is automatically provisioned on first use — you don't need to initialize it manually. If your manifest has a database section, tables are also created automatically on save.
Every query has the end-user's ID injected as a PostgreSQL session variable (app.user_id), enabling Row-Level Security policies for per-user data isolation.
Declarative Schema
Define your tables in the manifest's database.tables section and they'll be created automatically on save. See the Manifest Schema reference.
TIP
See the Developer Database guide for setup instructions, RLS patterns, and examples.
context.managedDb.query()
Execute a SQL query against your managed developer database.
// With RLS-enabled tables (declared with rls: "pie_user_id" in manifest),
// rows are automatically filtered to the current user
const result = await context.managedDb.query(
'SELECT * FROM notes ORDER BY created_at DESC'
);
// Only returns notes belonging to the current user
// INSERT — pie_user_id is auto-populated via column default
await context.managedDb.query(
'INSERT INTO notes (title, content) VALUES ($1, $2)',
['My Note', 'Note content']
);
// For tables without RLS, filter manually
const result = await context.managedDb.query(
'SELECT * FROM shared_config WHERE key = $1',
['theme']
);
Parameters
| Field | Type | Description |
|---|---|---|
sql | string | SQL query text (supports full CRUD: SELECT, INSERT, UPDATE, DELETE, and DDL) |
params | array | Positional query params (optional) |
opts | object | Options object (optional) |
opts.timeoutMs | number | Query timeout (default 30000, max 30000) |
opts.maxRows | number | Max rows returned (default 1000, max 10000) |
Response
{
rows: [{ id: '...', user_id: '...', title: 'My Note' }],
columns: ['id', 'user_id', 'title'],
rowCount: 1,
truncated: false,
executionTimeMs: 12
}
Session Variables
These PostgreSQL session variables are set on every query:
| Variable | Description |
|---|---|
app.user_id | The end-user's PIE user ID |
app.plugin_id | Your plugin's ID |
Access them in SQL with current_setting('app.user_id', true). Use them in RLS policies for automatic per-user data isolation.
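As a sketch of how `app.user_id` feeds an RLS policy, the helper below builds the kind of statements the declarative `rls: "pie_user_id"` manifest option is described as setting up automatically. Table, column, and policy names are assumptions for illustration; prefer the declarative schema where possible.

```javascript
// Illustrative only: builds RLS statements keyed on the app.user_id
// session variable. The pie_user_id column name follows the manifest's
// rls option; the cast to text avoids type-mismatch assumptions.
function buildUserRlsPolicy(table) {
  return [
    `ALTER TABLE ${table} ENABLE ROW LEVEL SECURITY`,
    `CREATE POLICY ${table}_per_user ON ${table} ` +
      `USING (pie_user_id::text = current_setting('app.user_id', true))`,
  ];
}
```

You could run each statement separately via `context.managedDb.query()`, since multi-statement SQL is typically restricted.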
Differences from context.db.query()
| | context.db.query() | context.managedDb.query() |
|---|---|---|
| Database | External (you provide credentials) | PIE-managed (automatic) |
| Access | Read-only | Full CRUD |
| User context | None | app.user_id injected |
| Signature | query({ connection, sql, params }) | query(sql, params, opts) |
context.ai
AI capabilities for plugin code. Available in tools, connectors, and automations.
context.ai.chat()
Full multi-turn chat completion with tool calling and multimodal (image) support. This is the most powerful AI method — it gives your plugin direct access to any supported LLM with the full OpenAI Chat Completions message format, including function/tool calling and image inputs.
Use this when you need:
- Multi-step agent loops (the model calls tools, you execute them, and feed results back)
- Multimodal inputs (screenshots, images alongside text)
- Fine-grained control over system prompts, message history, and tool definitions
- Specific model selection (e.g., `openai/gpt-5.4` for computer use)
const result = await context.ai.chat({
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'What is the capital of France?' },
],
});
// result: { content: 'The capital of France is Paris.', toolCalls: [], usage: { ... } }
With tool calling:
const result = await context.ai.chat({
messages: [
{ role: 'system', content: 'You are a browser automation agent.' },
{ role: 'user', content: 'Navigate to example.com and get the page title.' },
],
tools: [
{
type: 'function',
function: {
name: 'exec_js',
description: 'Execute JavaScript code in the browser',
parameters: {
type: 'object',
properties: {
code: { type: 'string', description: 'JS code to run' },
},
required: ['code'],
},
},
},
],
model: 'openai/gpt-5.4',
reasoningEffort: 'low',
});
if (result.toolCalls.length > 0) {
const call = result.toolCalls[0];
// call: { id: 'call_abc123', name: 'exec_js', arguments: '{"code":"..."}' }
const args = JSON.parse(call.arguments);
// Execute the tool, then send results back in a follow-up chat call
}
With multimodal (image) input:
const result = await context.ai.chat({
messages: [
{ role: 'system', content: 'Describe what you see in screenshots.' },
{
role: 'user',
content: [
{ type: 'text', text: 'What is on this page?' },
{ type: 'image_url', image_url: { url: 'data:image/png;base64,...' } },
],
},
],
model: 'openai/gpt-5.4',
});
Parameters:
| Parameter | Type | Description |
|---|---|---|
messages | array | Array of chat messages in OpenAI format ({ role, content } or multimodal content arrays). Required. |
tools | array | OpenAI-format tool definitions ({ type: 'function', function: { name, description, parameters } }). Optional. |
model | string | Model ID to use (see Supported Models). Optional — defaults to openai/gpt-5.4. |
temperature | number | Sampling temperature (0–2). Optional. |
reasoningEffort | string | Reasoning effort level ('low', 'medium', 'high'). Optional — defaults to 'low'. |
Returns:
{
content: string | null, // The model's text response (null if only tool calls)
toolCalls: [ // Array of tool calls the model wants to make
{
id: string, // Unique call ID (use this in tool result messages)
name: string, // Function name
arguments: string, // JSON string of arguments
}
],
usage: { // Token usage data
promptTokens: number,
completionTokens: number,
totalTokens: number,
cost: number, // Estimated cost in USD
},
}
Agent loop pattern:
The most common pattern is an agent loop where you repeatedly call context.ai.chat(), execute any tool calls, and feed results back:
let messages = [
{ role: 'system', content: 'You are a helpful assistant with tools.' },
{ role: 'user', content: userPrompt },
];
for (let step = 0; step < 20; step++) {
const result = await context.ai.chat({ messages, tools, model: 'openai/gpt-5.4' });
if (result.toolCalls.length === 0) {
// Model is done — result.content has the final answer
return { answer: result.content };
}
// Append the assistant's response (with tool calls) to history
messages.push({
role: 'assistant',
content: result.content,
tool_calls: result.toolCalls.map(tc => ({
id: tc.id,
type: 'function',
function: { name: tc.name, arguments: tc.arguments },
})),
});
// Execute each tool call and append results
for (const tc of result.toolCalls) {
const args = JSON.parse(tc.arguments);
const output = await executeMyTool(tc.name, args);
messages.push({
role: 'tool',
tool_call_id: tc.id,
content: typeof output === 'string' ? output : JSON.stringify(output),
});
}
}
Notes:
- Messages follow the OpenAI Chat Completions format. Roles: `system`, `user`, `assistant`, `tool`.
- Tool result messages must have `role: 'tool'` and include the `tool_call_id` from the corresponding tool call.
- For multimodal messages, use content arrays with `{ type: 'text', text: '...' }` and `{ type: 'image_url', image_url: { url: '...' } }` objects. Image URLs can be `data:image/png;base64,...` or HTTPS URLs.
- All calls are routed through OpenRouter; no separate API key needed. Billing is handled automatically.
- The 4-minute timeout applies per call (same as `context.fetch`).
context.ai.analyze()
Use AI to analyze and classify data:
const result = await context.ai.analyze({
prompt: 'Classify this email as urgent or not urgent. Return JSON: {"label": "urgent|not_urgent"}',
data: { subject, from, snippet }
});
// result: { label: 'urgent', confidence: 0.95 }
With a specific model:
const result = await context.ai.analyze({
prompt: 'Generate a detailed product description',
data: { name, features, audience },
model: 'google/gemini-3.1-pro-preview'
});
Parameters:
| Parameter | Type | Description |
|---|---|---|
prompt | string | Instructions for the AI |
data | object | Data to analyze |
model | string | Optional. Model ID to use (see Supported Models below). Defaults to google/gemini-3-flash-preview. |
Returns: Parsed JSON from the AI response, or text if not JSON.
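The JSON-or-text fallback described above is easy to replicate when you post-process raw model output yourself. `parseJsonOrText` is a hypothetical helper modeling that behavior, not a context method:

```javascript
// Sketch of the documented fallback: return parsed JSON when the text
// is valid JSON, otherwise return the raw text unchanged.
function parseJsonOrText(text) {
  try {
    return JSON.parse(text);
  } catch {
    return text;
  }
}
```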
context.ai.summarize()
Get a text summary of content:
const summary = await context.ai.summarize(longArticleText);
// "Brief summary of the article highlighting key points..."With a specific model:
const summary = await context.ai.summarize(longArticleText, {
model: 'anthropic/claude-sonnet-4.6'
});
Parameters:
| Parameter | Type | Description |
|---|---|---|
content | string | Text to summarize |
options | object | Optional second argument |
options.model | string | Optional. Model ID to use (see Supported Models below). Defaults to google/gemini-3-flash-preview. |
Returns: String summary
context.ai.listModels()
Get the list of available models at runtime:
const models = await context.ai.listModels();
// [
// { id: 'google/gemini-3-flash-preview', name: 'Gemini 3 Flash', provider: 'Google', description: '...' },
// { id: 'anthropic/claude-sonnet-4.6', name: 'Claude Sonnet 4.6', provider: 'Anthropic', description: '...' },
// ...
// ]
Returns: Array of model objects with id, name, provider, and description.
Supported Models
These are the currently available models you can pass to context.ai.chat(), context.ai.analyze(), and context.ai.summarize():
| Model ID | Name | Provider |
|---|---|---|
openai/gpt-5.4 | GPT-5.4 | OpenAI |
google/gemini-3-flash-preview | Gemini 3 Flash | |
google/gemini-3-pro-preview | Gemini 3 Pro | |
google/gemini-3.1-pro-preview | Gemini 3.1 Pro | |
google/gemini-2.5-flash-lite | Gemini 2.5 Flash Lite | |
anthropic/claude-sonnet-4.6 | Claude Sonnet 4.6 | Anthropic |
anthropic/claude-opus-4.6 | Claude Opus 4.6 | Anthropic |
anthropic/claude-sonnet-4.5 | Claude Sonnet 4.5 | Anthropic |
anthropic/claude-haiku-4.5 | Claude Haiku 4.5 | Anthropic |
openai/gpt-5.2-chat | GPT-5.2 | OpenAI |
openai/gpt-5.2-pro | GPT-5.2 Pro | OpenAI |
openai/gpt-5.2-codex | GPT-5.2 Codex | OpenAI |
openai/gpt-5-mini | GPT-5 Mini | OpenAI |
openai/gpt-5-nano | GPT-5 Nano | OpenAI |
openai/gpt-4o | GPT-4o | OpenAI |
openai/gpt-4o-mini | GPT-4o Mini | OpenAI |
moonshotai/kimi-k2.5 | Kimi K2.5 | Moonshot |
x-ai/grok-4.1-fast | Grok 4.1 Fast | xAI |
deepseek/deepseek-v3.2 | DeepSeek V3.2 | DeepSeek |
minimax/minimax-m2.5 | MiniMax M2.5 | MiniMax |
openai/gpt-oss-120b | GPT OSS 120B | Ultra Fast |
openai/gpt-oss-safeguard-20b:nitro | GPT OSS Safeguard 20B | Ultra Fast |
Use context.ai.listModels() to get the most up-to-date list at runtime, as new models may be added.
Billing
AI calls are billed based on the actual token usage of the model you select. Different models have different per-token costs — more capable models (e.g., Claude Opus 4.6, GPT-5.2 Pro) cost more per token than lightweight models (e.g., GPT-5 Nano, Gemini 2.5 Flash Lite). The cost is determined by the upstream provider (via OpenRouter) and your subscription plan's margin multiplier is applied on top.
Note: AI calls count toward your token usage and balance.
context.widget
Control your plugin's widget iframe. Available in widget action handlers (isolated-vm) and tool action handlers (E2B). Widgets are sandboxed iframes that communicate with your plugin handler via PIE.sendAction() in the iframe and context.widget on the server side.
context.widget.show(data)
Open the widget and send initial data. If the widget is already open, updates its data.
await context.widget.show({
event: 'dashboard',
items: [{ id: 1, name: 'Item 1' }],
});
context.widget.update(data)
Push new data to an already-open widget. The widget receives this via PIE.onData(callback).
await context.widget.update({
event: 'progress',
completed: 3,
total: 10,
});
context.widget.hide()
Close the widget.
await context.widget.hide();
_triggerToolCall — Background E2B dispatch from widget actions
Widget actions run in isolated-vm (30s timeout, lightweight). When a widget action needs to kick off heavy work (image generation, long API calls, data processing), it can return a _triggerToolCall object. The platform will then dispatch a background E2B execution of the same plugin with the trigger object as the input — no LLM in the loop.
// Inside your widget action handler:
case 'start_processing': {
await context.managedDb.query('UPDATE jobs SET status=$1 WHERE id=$2', ['processing', payload.jobId]);
await context.widget.update({ event: 'processing_started', flash: 'Processing...' });
return {
success: true,
_triggerToolCall: {
action: 'run_heavy_job',
job_id: payload.jobId
}
};
}
The _triggerToolCall object is passed directly as the input to your plugin's handler() function in E2B. Your handler's switch(input.action) routing handles it like any other tool action.
How it works:
1. User clicks a button in the widget → `PIE.sendAction('start_processing', { jobId })`
2. Widget action runs in isolated-vm: does lightweight DB updates, sends widget update, returns `_triggerToolCall`
3. Response sent to client immediately (user sees feedback)
4. Platform dispatches `toolSandbox.execute()` in background → E2B runs the heavy action
5. E2B handler calls `context.widget.update()` when done → widget updates via SSE
Requirements:
- `_triggerToolCall` must be an object with an `action` string property
- The action value should match a case in your handler's `switch(input.action)` routing
- Additional parameters are passed through as-is (e.g., `slide_id`, `job_id`)
See the Widget Actions guide for full examples and best practices.
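Putting the pieces together, a single handler can serve both phases. This is a minimal sketch using the action names from the example above; `runHeavyJob` is a hypothetical stand-in for your own long-running work.

```javascript
// Hypothetical stand-in for the heavy work done in E2B
async function runHeavyJob(jobId) {
  return { jobId, done: true };
}

// One handler routes both the widget action (phase 1, isolated-vm)
// and the background dispatch it triggers (phase 2, E2B).
async function handler(input, context) {
  switch (input.action) {
    case 'start_processing':
      // Phase 1: return fast and hand off via _triggerToolCall
      return {
        success: true,
        _triggerToolCall: { action: 'run_heavy_job', job_id: input.jobId },
      };
    case 'run_heavy_job': {
      // Phase 2: the trigger object arrives here as `input`
      const output = await runHeavyJob(input.job_id);
      return { success: true, job_id: input.job_id, output };
    }
    default:
      return { error: true, message: `Unknown action: ${input.action}` };
  }
}
```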
context.publicApp
context.publicApp is available when your plugin uses PIE public apps. It has two modes depending on the runtime.
Mode 1: Widget Runtimes
When the current plugin already has a published public app instance for the current developer/user, PIE injects deployment metadata into widget actions:
context.publicApp
// {
// instanceId: '...',
// instanceSlug: 'pie-forms-e8e807f3',
// pluginSlug: 'pie-forms',
// baseUrl: 'https://your-pie-domain.com'
// }
Use this to build share links from widget actions:
const shareUrl =
context.publicApp.baseUrl +
'/apps/' +
context.publicApp.pluginSlug +
'/' +
context.publicApp.instanceSlug;
If the plugin has no deployed public app yet, context.publicApp is null in these runtimes.
Mode 2: Public Action Runtime
When the browser calls:
POST /api/public-actions/{instanceIdOrSlug}/{actionId}
your handler receives:
{
_publicAction: true,
actionId: 'load_form',
payload: { ... },
instanceId: '...',
visitorIp: '...'
}
In that runtime, context.publicApp looks like this:
context.publicApp
// {
// instanceId: '...',
// visitorIp: '...',
// respond(data) { ... }
// }
context.publicApp.respond(data)
Use respond() to set the JSON payload returned to the browser:
async function handler(input, context) {
if (input._publicAction && input.actionId === 'load_form') {
const result = await context.managedDb.query(
'SELECT id, title FROM forms WHERE id = $1',
[input.payload.formId]
);
context.publicApp.respond({
form: result.rows[0] || null,
});
return { success: true };
}
}
If you do not call respond(), PIE returns your handler result as data.
Public Action Data Model
Public actions are anonymous browser requests:
- `context.user.id` is the plugin owner's user ID in a public action runtime
- `context.managedDb.query()` runs with the plugin owner's managed database access
- Anonymous visitors are not automatically isolated by RLS
That means your public action handler must validate what a visitor can access using your own public IDs, slugs, tokens, or row filters.
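One way to implement that validation is to key public lookups on an unguessable token column rather than raw row IDs. The sketch below assumes hypothetical `formToken` / `public_token` names and a `forms` table like the earlier example; adapt it to your own schema.

```javascript
// Hypothetical validator: public tokens are URL-safe and long enough
// to be unguessable. Reject anything else before touching the database.
function isValidPublicToken(token) {
  return typeof token === 'string' && /^[A-Za-z0-9_-]{16,64}$/.test(token);
}

// Sketch of a public action handler branch: the query filters on the
// public token and an explicit is_public flag, not on RLS.
async function loadPublicForm(input, context) {
  const token = input.payload && input.payload.formToken;
  if (!isValidPublicToken(token)) {
    context.publicApp.respond({ error: 'invalid token' });
    return { success: false };
  }
  const result = await context.managedDb.query(
    'SELECT id, title FROM forms WHERE public_token = $1 AND is_public = true',
    [token]
  );
  context.publicApp.respond({ form: result.rows[0] || null });
  return { success: true };
}
```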
See the Public Apps and Routes guide for .pie packaging, hosted URL structure, SPA routing, uploads, and custom domains.
context.files
Upload, list, and manage files in the user's PIE storage. Files are scoped to your agent by default -- other agents cannot see them unless the user explicitly grants access.
Uploaded documents (PDF, text, markdown, etc.) are automatically indexed in the user's knowledge base, making them searchable by the AI in chat.
context.files.upload(filename, mimeType, base64Data)
Upload a file. PIE automatically generates a descriptive filename, tags, and description using AI.
const result = await context.files.upload(
'report.pdf',
'application/pdf',
base64EncodedData
);
// result: {
// id: 'abc-123',
// filename: 'quarterly-sales-report-q1-2026.pdf', // AI-generated
// originalFilename: 'report.pdf',
// mimeType: 'application/pdf',
// size: 125000,
// tags: ['report', 'sales', 'quarterly'],
// description: 'Q1 2026 quarterly sales report with revenue breakdown',
// url: 'https://...' // short-lived signed URL
// }
Parameters:
| Parameter | Type | Description |
|---|---|---|
filename | string | Original filename (with extension) |
mimeType | string | MIME type (e.g., image/png, application/pdf) |
base64Data | string | File content as base64-encoded string |
Returns: Object with id, filename, mimeType, size, tags, description, and a short-lived url.
Quota: 250 MB per agent per user. An error is thrown if the quota is exceeded.
context.files.list(mimeTypeFilter?)
List files your agent has access to (own files + files granted by the user).
// All files
const files = await context.files.list();
// Only images
const images = await context.files.list('image/*');
Parameters:
| Parameter | Type | Description |
|---|---|---|
mimeTypeFilter | string | Optional. Filter by MIME type prefix (e.g., image/*) |
Returns: Array of file objects with id, filename, mimeType, size, tags, description, createdAt.
context.files.get(fileId)
Get metadata for a specific file.
const file = await context.files.get('abc-123');
context.files.getUrl(fileId)
Get a short-lived download/preview URL (valid for ~15 minutes).
const url = await context.files.getUrl('abc-123');
context.files.delete(fileId)
Delete a file your agent created.
const success = await context.files.delete('abc-123');
Note: You can only delete files your agent created. The user can delete any file from the Files page.
Returning files in tool results
To display a file inline in the chat (especially images), return a file object in your handler result:
async function handler(input, context) {
// ... generate or fetch a file ...
const uploaded = await context.files.upload(
'chart.png',
'image/png',
base64ImageData
);
return {
success: true,
file: uploaded, // PIE will display this inline in chat
description: 'Monthly revenue chart',
};
}
PIE detects the file property and renders images inline in the chat message.
File Search integration
When you upload a document type (PDF, text, markdown, JSON, etc.), PIE automatically indexes it in the user's knowledge base. This means:
- The AI can find and cite the document when answering questions
- The user can ask "What does my report say about..." and get answers from agent-uploaded files
- No extra code needed -- indexing happens automatically on upload
context.pdf
Extract text content from PDF files. The PDF is downloaded from a URL and parsed server-side, so your plugin gets clean, readable text — no binary data to deal with.
context.pdf.extractText(url, options?)
Downloads a PDF from the given URL and returns the extracted text content.
const result = await context.pdf.extractText('https://example.com/document.pdf');
console.log(result.text); // Full text content of the PDF
console.log(result.pages); // Number of pages
console.log(result.info); // PDF metadata (title, author, etc.)
Parameters:
| Parameter | Type | Description |
|---|---|---|
url | string | URL to download the PDF from (must be publicly accessible or a signed/temporary URL) |
options.maxPages | number | Maximum number of pages to extract (default: all pages, max: 200) |
Returns:
| Field | Type | Description |
|---|---|---|
success | boolean | Whether extraction succeeded |
text | string | The extracted text content |
pages | number | Total number of pages in the PDF |
info | object? | PDF metadata: title, author, subject, creator, creationDate, modDate |
Example: Dropbox PDF reading
// Get a temporary download link from Dropbox
const linkData = await context.oauth.fetch(
'https://api.dropboxapi.com/2/files/get_temporary_link',
{ method: 'POST', headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ path: '/Documents/report.pdf' }) }
);
const link = JSON.parse(linkData.body).link;
// Extract text from the PDF
const pdf = await context.pdf.extractText(link);
return { success: true, content: pdf.text, pages: pdf.pages };
Limits:
- Maximum PDF size: 20 MB
- Maximum pages: 200 (use `options.maxPages` to limit)
- Download timeout: 30 seconds
context.notify() (Automations Only)
Post messages to the user's PIE Assistant session:
await context.notify('Your notification message', {
title: 'Optional Title',
urgent: true, // Highlights the notification
});
Parameters:
| Parameter | Type | Description |
|---|---|---|
message | string | The notification message (supports markdown) |
options.title | string | Optional title/heading |
options.urgent | boolean | If true, notification is highlighted |
Example with formatting:
await context.notify(
`## Daily Report\n\n` +
`- **Emails**: ${emailCount} new\n` +
`- **Tasks**: ${taskCount} due today\n\n` +
`---\n` +
`*Generated at ${new Date().toLocaleString()}*`,
{ title: 'Morning Summary' }
);
context.streamEvent()
Push a real-time event to the client during tool execution. Events are delivered immediately through the chat stream — the user sees them while your handler is still running. This is useful for long-running tools that need to provide live feedback or hand off interactive URLs.
Signature
await context.streamEvent(type, data);
Parameters
| Parameter | Type | Description |
|---|---|---|
type | string | Event type identifier (e.g., 'browser_live_view') |
data | object | Arbitrary data payload delivered to the client (optional, defaults to {}) |
Returns
true on success.
Built-in event types
PIE's client recognizes these event types out of the box:
| Event type | Data fields | Behavior |
|---|---|---|
browser_live_view | url (string), sessionId (string, optional) | Shows a live browser view card in chat with "Watch Live" link and optional iframe preview |
browser_needs_input | url (string), message (string, optional) | Promotes the live view card to an alert state prompting the user to take control |
You can also emit custom event types — they'll be delivered to the client as generic stream events.
Example: Browser live view
async function handler({ task }, context) {
// Create a remote browser session
const session = await createBrowserSession();
const liveViewUrl = await getLiveViewUrl(session.id);
// Push the live view URL to the user immediately
await context.streamEvent('browser_live_view', {
url: liveViewUrl,
sessionId: session.id,
});
// Run the browsing task (this takes 30–120 seconds)
const result = await runBrowserAgent(session.id, task);
return { success: true, summary: result.summary };
}
The user sees the "Watch Live" card within seconds, then the agent's final summary once the task completes.
Example: Requesting user input
// If the agent gets stuck on a login page:
await context.streamEvent('browser_needs_input', {
url: liveViewUrl,
message: 'Please log in with your credentials',
});
// Wait for the user to act, then retry
await waitForPageChange(session.id);
Limits
- Maximum 20 stream events per handler execution
- `type` must be 1–100 characters
- `data` is serialized as JSON; keep payloads small
context.getSessionMetadata()
Retrieve the current persistent sandbox session's metadata. Only available for plugins with runtime.persistent: true.
Signature
const metadata = await context.getSessionMetadata();
Returns
An object (Record<string, unknown>) containing whatever metadata the plugin has previously stored. Returns {} if no metadata has been set.
Example
const meta = await context.getSessionMetadata();
if (meta.initialized) {
console.log(`Resuming session for repo: ${meta.repoUrl}`);
}
Notes
- Metadata persists across sandbox pause/resume cycles.
- Metadata is stored per user per plugin — each user has their own session state.
- Returns `{}` for non-persistent plugins (no error thrown).
context.updateSessionMetadata()
Merge new key-value pairs into the persistent sandbox session's metadata. Only available for plugins with runtime.persistent: true.
Signature
await context.updateSessionMetadata(metadata);
Parameters
| Parameter | Type | Description |
|---|---|---|
metadata | object | Key-value pairs to merge into the existing metadata. Existing keys not present in the update are preserved. |
Returns
true on success.
Example
const meta = await context.getSessionMetadata();
await context.updateSessionMetadata({
initialized: true,
repoUrl: 'https://github.com/org/repo',
totalRuns: (meta.totalRuns || 0) + 1,
});
Special metadata keys
| Key | Behavior |
|---|---|
requestKill | If set to true, the platform will kill (not pause) the sandbox after the current execution completes. Use this when the user wants to end their session. |
Example: Ending a session
if (action === 'end_session') {
await context.updateSessionMetadata({ requestKill: true });
return { result: 'Session ended.' };
}
Notes
- Updates are merged (shallow), not replaced. To remove a key, set it to `null`.
- Keep metadata small; it's stored as JSONB in the database.
- Metadata survives session kills for context restoration purposes.
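The merge semantics in the notes can be illustrated with a pure function. `applyMetadataUpdate` is an assumption-based model of the described behavior (shallow merge, `null` removes a key), not platform code:

```javascript
// Model of the documented update semantics: keys absent from the update
// are preserved, new keys are added, and a null value removes a key
// (an assumption based on the notes above).
function applyMetadataUpdate(existing, update) {
  const merged = { ...existing };
  for (const [key, value] of Object.entries(update)) {
    if (value === null) {
      delete merged[key];
    } else {
      merged[key] = value;
    }
  }
  return merged;
}
```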
context.tasks
Create, list, update, and delete scheduled tasks for the user. Use this to build reminders, recurring heartbeats, cron jobs, and other scheduled actions.
context.tasks.create(options)
Create a new scheduled task.
const task = await context.tasks.create({
name: 'Daily standup reminder',
taskType: 'heartbeat',
schedule: { kind: 'cron', expr: '0 9 * * 1-5' },
payload: { kind: 'heartbeat', message: 'Time for your daily standup!' },
delivery: { target: 'pie_assistant' },
deleteAfterRun: false,
});
Parameters:
| Field | Type | Description |
|---|---|---|
| name | string | Human-readable task name |
| taskType | string | 'heartbeat', 'cron', or 'webhook' |
| schedule | object | Schedule config (see below) |
| payload | object | What to do when the task runs |
| delivery | object | Where to deliver ({ target: 'pie_assistant' }, 'email', or 'telegram') |
| description | string | Optional description |
| deleteAfterRun | boolean | If true, the task is deleted after its first run (for one-time reminders) |
| activeHours | object | Optional { start: 'HH:MM', end: 'HH:MM' } window |
| enabled | boolean | Whether the task is active (default: true) |
Schedule types:
| Kind | Fields | Example |
|---|---|---|
| once | atMs (unix ms) | { kind: 'once', atMs: 1735689600000 } |
| cron | expr (cron expression) | { kind: 'cron', expr: '0 9 * * *' } |
| interval | everyMs (milliseconds) | { kind: 'interval', everyMs: 3600000 } |
Returns: The created task object with id, name, schedule, nextRunAt, etc.
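The schedule shapes above are plain objects, so small helpers can keep call sites readable. These helpers are not part of the API, just a hypothetical convenience:

```javascript
// Hypothetical helpers that build the three documented schedule shapes.
const onceAt = (date) => ({ kind: 'once', atMs: date.getTime() });
const cronAt = (expr) => ({ kind: 'cron', expr });
const everyMs = (ms) => ({ kind: 'interval', everyMs: ms });

// e.g. context.tasks.create({ ..., schedule: cronAt('0 9 * * 1-5') })
```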
context.tasks.list()
List all of the user's scheduled tasks.
const tasks = await context.tasks.list();
for (const task of tasks) {
console.log(`${task.name} - next run: ${task.nextRunAt}`);
}
Returns: Array of task objects.
context.tasks.update(taskId, updates)
Update an existing task.
const updated = await context.tasks.update('task-uuid', {
schedule: { kind: 'cron', expr: '0 10 * * 1-5' },
enabled: true,
});
Parameters:
| Field | Type | Description |
|---|---|---|
| taskId | string | The task ID to update |
| updates | object | Fields to update (same fields as create, all optional) |
Returns: The updated task object.
context.tasks.delete(taskId)
Delete a scheduled task.
const success = await context.tasks.delete('task-uuid');
Returns: true on success.
Example: One-time reminder
const tomorrow9am = new Date();
tomorrow9am.setDate(tomorrow9am.getDate() + 1);
tomorrow9am.setHours(9, 0, 0, 0);
await context.tasks.create({
name: 'Call dentist',
taskType: 'heartbeat',
schedule: { kind: 'once', atMs: tomorrow9am.getTime() },
payload: { kind: 'heartbeat', message: 'Reminder: Call the dentist!' },
deleteAfterRun: true,
});
context.input (Automations Only)
Automation-specific input data. In your handler function, input contains:
Properties
| Property | Type | Description |
|---|---|---|
| lastRunAt | number \| null | Timestamp (ms) of the last successful run |
| triggeredBy | string | 'cron', 'webhook', or 'manual' |
| triggerData | any | Webhook payload or lifecycle event data. For webhook triggers, includes _headers with selected HTTP headers. |
Example
async function handler(input, context) {
const { lastRunAt, triggeredBy, triggerData } = input;
// Process only data since last run
const sinceTime = lastRunAt || (Date.now() - 24 * 60 * 60 * 1000);
if (triggeredBy === 'webhook') {
// Handle webhook payload
const { eventType, data } = triggerData;
}
// ... rest of automation logic
}
Lifecycle Hooks
Agents can export additional handlers for lifecycle events:
onConnect(input, context)
Called immediately after a user connects OAuth. Use this to:
- Set up external subscriptions (webhooks, watches)
- Send a welcome notification
- Fetch initial data
async function onConnect(input, context) {
// input.triggerData.event === 'onConnect'
await context.notify('Welcome! Setting up your account...', {
title: 'Connected'
});
// Set up Gmail push notifications
await context.oauth.fetch('https://gmail.googleapis.com/gmail/v1/users/me/watch', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
topicName: context.secrets.PUBSUB_TOPIC,
labelIds: ['INBOX'],
}),
});
return { success: true };
}
module.exports = { handler, onConnect };
onWebhook(input, context)
Called when an external service POSTs to your agent's webhook URL (/api/webhooks/plugin/{pluginId}).
The input.triggerData object contains the webhook request body, plus a _headers object with selected HTTP headers from the request. This is useful for services like GitHub that send the event type in a header:
async function onWebhook(input, context) {
const payload = input.triggerData;
// Access HTTP headers injected by the webhook route
const githubEvent = payload._headers?.['x-github-event'];
// 'push', 'pull_request', 'issues', etc.
// The rest of the payload is the request body
const action = payload.action; // 'opened', 'closed', etc.
// Decode base64 Pub/Sub message (Gmail, etc.)
if (payload.message?.data) {
const decoded = JSON.parse(atob(payload.message.data));
// Process decoded notification
}
return { success: true, processed: 5 };
}
module.exports = { handler, onWebhook };
Available headers in _headers:
| Header | Description |
|---|---|
| x-github-event | GitHub webhook event type |
| x-hub-signature-256 | GitHub HMAC signature |
| content-type | Request content type |
Event-Triggered Heartbeats
If your agent declares webhook events in its manifest (via eventTypeField and events[]), users can create heartbeats that trigger automatically when specific events arrive. The webhook payload is injected into the AI prompt as context. See Declaring Webhook Events for details.
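The exact manifest shape for declaring events is documented under Declaring Webhook Events; as a rough, hypothetical sketch only (the nesting and both values are assumptions, not confirmed by this section):

```json
{
  "eventTypeField": "_headers.x-github-event",
  "events": ["push", "pull_request", "issues"]
}
```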
onDisconnect(input, context) (Optional)
Called when a user disconnects OAuth. Use this to clean up external subscriptions.
async function onDisconnect(input, context) {
// Clean up external resources
await context.oauth.fetch('https://api.example.com/unsubscribe', {
method: 'DELETE'
});
return { success: true };
}
module.exports = { handler, onConnect, onDisconnect };
onInstall(input, context) (Optional)
Called when a user installs the agent. Use this to perform initial setup, provision resources, or send a welcome message.
async function onInstall(input, context) {
// input.triggerData.event === 'onInstall'
await context.notify('Thanks for installing! Let\'s get you set up.', {
title: 'Welcome'
});
return { success: true };
}
module.exports = { handler, onInstall };
onUninstall(input, context) (Optional)
Called when a user uninstalls the agent. Use this to clean up any resources or external registrations.
async function onUninstall(input, context) {
// input.triggerData.event === 'onUninstall'
// Remove external webhook registrations, clean up resources, etc.
await context.fetch('https://api.example.com/hooks/remove', {
method: 'DELETE',
headers: { 'Authorization': `Bearer ${context.secrets.API_TOKEN}` },
});
return { success: true };
}
module.exports = { handler, onUninstall };
Handler Priority
When an agent is invoked, PIE checks handlers in this order:
1. onInstall - if triggerData.event === 'onInstall'
2. onUninstall - if triggerData.event === 'onUninstall'
3. onConnect - if triggerData.event === 'onConnect'
4. onDisconnect - if triggerData.event === 'onDisconnect'
5. onWebhook - if triggeredBy === 'webhook'
6. handler - for cron/manual triggers
If the specific handler doesn't exist, PIE falls back to handler.
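The dispatch order can be mirrored in a small mock (this is not PIE's actual implementation, only an illustration of the priority and fallback behavior):

```javascript
// Mock dispatcher illustrating the documented priority and fallback.
function pickHandler(handlers, input) {
  const event = input.triggerData && input.triggerData.event;
  const lifecycle = ['onInstall', 'onUninstall', 'onConnect', 'onDisconnect'];
  if (lifecycle.includes(event) && handlers[event]) return handlers[event];
  if (input.triggeredBy === 'webhook' && handlers.onWebhook) return handlers.onWebhook;
  return handlers.handler; // cron/manual, or fallback when no specific handler exists
}
```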
Error Handling
Always handle errors gracefully:
async function handler(input, context) {
try {
const response = await context.fetch('https://api.example.com/data');
if (!response.ok) {
return {
error: true,
message: `API error: ${response.status}`
};
}
return JSON.parse(response.body);
} catch (error) {
return {
error: true,
message: error.message || 'Unknown error'
};
}
}
Return Values
Your handler should return:
Success:
return {
// Structured data for the AI to use
temperature: 72,
condition: 'Sunny',
};
Error:
return {
error: true,
message: 'Something went wrong',
};
Requires Authentication (connectors):
return {
error: true,
message: 'Please connect the service',
requiresAuth: true,
};
Sandbox Templates
By default, your agent code runs in a lightweight Node.js sandbox with no extra packages. If your agent needs heavy dependencies (Playwright, Puppeteer, native binaries, ML libraries, etc.), you can create a sandbox template that pre-installs everything during a one-time build step.
When you need a template
- Your agent imports npm packages that aren't available in the default sandbox (e.g., playwright, @browserbasehq/sdk, sharp)
- You need system-level packages installed via apt-get (e.g., ffmpeg, chromium)
- You want faster cold starts by avoiding runtime installs
Creating a template
- Go to the Developer Portal and open the Templates tab in the right panel
- Enter a name, display name, and a setup script (bash)
- Click Create & Build Template
The setup script runs once during the template build (not on every execution). Example:
npm install playwright @browserbasehq/sdk
apt-get update && apt-get install -y ffmpeg
PIE builds the template in the background (1-3 minutes). You'll see the status update to "Ready" when it's done. If the build fails, you'll see the error and build logs.
Assigning a template to your agent
- Select your agent in the Developer Portal
- In the Settings tab, find the Sandbox Template dropdown
- Pick your template (or a PIE-provided system template like "Browser")
System templates
PIE provides pre-built templates for common use cases:
| Template | Includes |
|---|---|
| Browser (Playwright + Browserbase) | playwright, @browserbasehq/sdk |
| Claude Code Agent Environment | @anthropic-ai/claude-code, gh CLI, git, curl, jq |
System templates are available to all developers and cannot be modified.
Notes
- Templates are reusable — one template can be assigned to multiple agents
- Template builds are cached — rebuilds only run when you change the setup script
- The setup script runs as root in a Debian-based container
- Keep setup scripts minimal for faster builds
context.billing
The context.billing API lets your agent charge users custom amounts during execution. This is for variable-cost operations like phone calls, API pass-through charges, or data processing where the cost isn't known until execution time.
Prerequisite: You must enable customUsageEnabled in your plugin's pricing configuration via PUT /api/plugins/:id/pricing.
context.billing.chargeUsage({ amount, description })
Charge the user a custom amount. The charge is deducted from the user's prepaid balance immediately.
const result = await context.billing.chargeUsage({
amount: 150000, // microdollars (required)
description: '3-min call to +1-555-1234', // human-readable (required)
});
// result: { success: true, charged: 150000 }
Parameters:
| Parameter | Type | Description |
|---|---|---|
| amount | number | Amount in microdollars (1,000,000 = $1.00). Minimum 100. Must be an integer. |
| description | string | Human-readable description shown in the user's billing history. 1-200 characters. |
Returns:
{
success: true,
charged: 150000, // amount actually charged (microdollars)
}
Errors:
The call throws an error if:
- Custom usage charging is not enabled for the plugin (403)
- Amount is below the minimum (100 microdollars)
- Amount exceeds maxCustomChargeMicrodollars (or the $5.00 default cap)
- More than 10 charge calls in a single handler execution
Example: Metered phone call
async function handler(input, context) {
const call = await startPhoneCall(input.number);
const result = await waitForCallEnd(call.id);
const costPerMinute = 50000; // $0.05/min
const cost = Math.ceil(result.durationMinutes * costPerMinute);
await context.billing.chargeUsage({
amount: cost,
description: `${result.durationMinutes}-min call to ${input.number}`,
});
return { transcript: result.transcript, duration: result.durationMinutes };
}
Example: API cost pass-through
async function handler(input, context) {
const response = await callExternalApi(input.query);
if (response.cost > 0) {
const costMicrodollars = Math.round(response.cost * 1_000_000);
await context.billing.chargeUsage({
amount: costMicrodollars,
description: `API query: ${input.query.substring(0, 50)}`,
});
}
return response.data;
}
Limits
- Minimum charge: 100 microdollars ($0.0001)
- Maximum charge per call: Configurable via maxCustomChargeMicrodollars in pricing (default: 5,000,000 / $5.00)
- Maximum calls per execution: 10 chargeUsage calls per handler invocation
- Description length: 1-200 characters
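These limits are easy to enforce before calling chargeUsage. A hypothetical guard helper (the function name and error messages are ours, not part of the API):

```javascript
// Hypothetical guard: convert dollars to integer microdollars and enforce
// the documented per-call limits before charging.
function toChargeAmount(dollars, maxMicrodollars = 5_000_000) {
  const micro = Math.round(dollars * 1_000_000);
  if (micro < 100) throw new Error('below 100-microdollar minimum');
  if (micro > maxMicrodollars) throw new Error('exceeds per-call maximum');
  return micro;
}

// e.g. await context.billing.chargeUsage({ amount: toChargeAmount(0.15), description: '...' });
```

Rounding before the checks guarantees the amount is an integer, as the table above requires.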
Revenue Share
Custom usage charges follow the same revenue share as other pricing models (70% developer / 30% platform by default). Earnings appear in your developer dashboard alongside per-usage and subscription revenue.
Configuring Custom Usage
Enable and configure via the pricing API:
PUT /api/plugins/:id/pricing
{
"customUsageEnabled": true,
"maxCustomChargeMicrodollars": 10000000
}
| Field | Type | Description |
|---|---|---|
| customUsageEnabled | boolean | Set to true to allow context.billing.chargeUsage() calls |
| maxCustomChargeMicrodollars | number \| null | Max microdollars per charge call. null uses the server default ($5.00). |
You can also enable this from the Pricing section of the Developer Portal.
context.machine
The context.machine API allows plugins to interact with the user's connected Mac through PIE Connect. Only available if the plugin declares machineCapabilities in its manifest.
context.machine.isOnline()
Check if the user has an online machine.
const online = await context.machine.isOnline();
context.machine.list()
List all connected machines.
const machines = await context.machine.list();
// [{ machineId, capabilities, connectedAt }]
context.machine.execute(capability, params)
Execute a command on the user's machine.
const info = await context.machine.execute('machine.info', {});
const clip = await context.machine.execute('clipboard.read', {});
await context.machine.execute('notifications.send', { title: 'Hello', message: 'World' });
const msgs = await context.machine.execute('messages.read', { limit: 5 });
See the full Machine API Reference for details on each capability.