
Context API

The context object is passed as the second argument to your handler function. It provides access to PIE's platform capabilities.

Overview

js
async function handler(input, context) {
  // context.fetch() - Make HTTP requests
  // context.db.query() - Run read-only Postgres queries
  // context.secrets - Access developer secrets
  // context.userConfig - User-configurable settings
  // context.user - User information
  // context.oauth - OAuth operations (connectors only)
  // context.ai - AI capabilities (chat, analyze, summarize, listModels)
  // context.widget - Control plugin widget (show, update, hide)
  // context.publicApp - Public app metadata / public action response API
  // context.notify - Post notifications (automations only)
  // context.streamEvent() - Push real-time events to the client
  // context.getSessionMetadata() - Read persistent sandbox session metadata
  // context.updateSessionMetadata() - Write persistent sandbox session metadata
  // context.tasks - Create and manage scheduled tasks
  // context.billing - Charge users custom amounts
  
  return { /* result */ };
}

context.fetch()

Make HTTP requests from your agent. All requests are logged for auditing.

Signature

js
const response = await context.fetch(url, options);

Parameters

Parameter | Type | Description
url | string | The URL to request
options | object | Fetch options (optional)

Options

js
{
  method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH',
  headers: { 'Content-Type': 'application/json', ... },
  body: 'string or stringified JSON',
}

Response

js
{
  ok: boolean,      // true if status 200-299
  status: number,   // HTTP status code
  body: string,     // Response body as string
}

Examples

GET request:

js
const response = await context.fetch('https://api.example.com/data');
const data = JSON.parse(response.body);

POST request with JSON:

js
const response = await context.fetch('https://api.example.com/items', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'New Item' }),
});

With authorization header:

js
const response = await context.fetch('https://api.example.com/protected', {
  headers: { 
    'Authorization': `Bearer ${context.secrets.API_TOKEN}`,
  },
});

Limits

  • Maximum 10 requests per handler execution
  • 4 minute timeout per HTTP request (240 seconds)
  • Overall handler execution timeout of 120 seconds
  • Response body truncated at 10MB
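
Because response.body is always a string, most handlers end up repeating the same ok-check and JSON.parse. A small wrapper keeps that in one place. This is a sketch, not part of the API; it only assumes the response shape documented above:

```javascript
// Hypothetical helper around context.fetch() — assumes the documented
// { ok, status, body } response shape.
async function fetchJson(context, url, options) {
  const response = await context.fetch(url, options);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  try {
    return JSON.parse(response.body); // body is always a string
  } catch {
    throw new Error('Response body was not valid JSON');
  }
}
```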

context.secrets

Access your agent's developer secrets.

Usage

js
const apiKey = context.secrets.MY_API_KEY;

if (!apiKey) {
  return { error: true, message: 'API key not configured' };
}

Notes

  • Secrets are defined in your manifest's developerSecrets
  • Values are set when you create/configure the agent
  • Secrets are encrypted at rest (AES-256-GCM)
  • Missing secrets return undefined

context.userConfig

Access user-configurable settings defined in your manifest's userFields. Each user can customize these values through the agent settings UI.

Usage

js
const { topics, maxResults, frequency } = context.userConfig;

if (!topics || topics.length === 0) {
  return { error: true, message: 'Please configure your topics in settings' };
}

Notes

  • Values are defined per-user via the agent settings modal
  • Defaults to the default values in your userFields schema if not configured
  • Returns an empty object {} if no userFields are defined
  • Type-safe: returns the correct type for each field

Example

Given this manifest:

json
{
  "userFields": {
    "topics": { "type": "tags", "default": ["tech"] },
    "maxResults": { "type": "number", "default": 5 }
  }
}

Access in your handler:

js
async function handler(input, context) {
  const { topics, maxResults } = context.userConfig;
  
  // topics: ["tech"] (or user's custom array)
  // maxResults: 5 (or user's custom number)
  
  for (const topic of topics) {
    // fetch news for each topic...
  }
}

See Manifest Schema - User-Configurable Fields for field type documentation.

context.user

Information about the current user.

Properties

Property | Type | Description
id | string | User's unique ID (UUID)
displayName | string | User's display name

Example

js
console.log(`Running for user: ${context.user.displayName}`);

context.oauth

OAuth operations for connectors. Only available when your manifest includes oauth configuration.

context.oauth.isConnected()

Check if the user has connected OAuth for this agent.

js
const connected = await context.oauth.isConnected();

if (!connected) {
  return { 
    error: true, 
    message: 'Please connect the service first',
    requiresAuth: true 
  };
}

context.oauth.fetch()

Make authenticated requests. PIE automatically:

  • Adds the Authorization: Bearer {token} header
  • Refreshes the token if expired
  • Returns the response

js
const response = await context.oauth.fetch(url, options);

Parameters: Same as context.fetch()

Response:

js
{
  ok: boolean,
  status: number,
  body: string,
  headers: { [key: string]: string },
}

Example:

js
const response = await context.oauth.fetch(
  'https://api.github.com/user/repos',
  {
    headers: { 'Accept': 'application/vnd.github.v3+json' },
  }
);

if (!response.ok) {
  throw new Error(`GitHub API error: ${response.status}`);
}

const repos = JSON.parse(response.body);

context.oauth.getConnectionInfo()

Get information about the OAuth connection.

js
const info = await context.oauth.getConnectionInfo();
// {
//   connected: true,
//   email: '[email protected]',
//   provider: 'google',
//   connectedAt: '2024-01-15T10:30:00Z'
// }

context.db

Run read-only queries against external Postgres databases.

context.db.query()

Execute a SQL query with strict safety guardrails.

js
const result = await context.db.query({
  connection: {
    connectionString: context.secrets.POSTGRES_URL,
    ssl: 'require',
  },
  sql: 'SELECT id, email FROM users ORDER BY created_at DESC LIMIT 25',
  params: [],
  timeoutMs: 20000,       // optional
  maxRows: 1000,          // optional
  maxResponseBytes: 2e6,  // optional
});

Parameters

Field | Type | Description
connection | object | Connection info (connectionString or host/user/password/database fields)
sql | string | SQL query text
params | array | Positional query params (optional)
timeoutMs | number | Query timeout in milliseconds (optional)
maxRows | number | Max rows returned (optional)
maxResponseBytes | number | Max serialized response size in bytes (optional)

Safety model

  • Only read-only SQL is allowed (SELECT, WITH, and EXPLAIN on read queries)
  • Multi-statement SQL is blocked
  • Mutating or admin statements are blocked (INSERT, UPDATE, DELETE, ALTER, DROP, etc.)
  • Queries run in read-only transaction mode with server-side timeout and response limits

Response

js
{
  rows: [{ id: 'u1', email: '[email protected]' }],
  columns: ['id', 'email'],
  rowCount: 1,
  truncated: false,
  executionTimeMs: 42,
  statementType: 'select'
}
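
When truncated is true, the rows you got are only a partial result. A defensive sketch that refuses to proceed on truncation rather than silently working with partial data; POSTGRES_URL is an illustrative secret name and queryRows is not part of the API:

```javascript
// Hypothetical helper: run a read-only query and fail loudly on truncation.
async function queryRows(context, sql, params = []) {
  const result = await context.db.query({
    connection: {
      connectionString: context.secrets.POSTGRES_URL,
      ssl: 'require',
    },
    sql,
    params,
  });
  if (result.truncated) {
    // Tighten the query (add a LIMIT, select fewer columns) or raise maxRows
    throw new Error(`Result truncated after ${result.rowCount} rows`);
  }
  return result.rows;
}
```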

context.managedDb

Full CRUD access to the PIE-managed developer database. Unlike context.db (which requires you to provide connection credentials to an external database), the managed database is provisioned by PIE and credentials are handled automatically.

The database is automatically provisioned on first use — you don't need to initialize it manually. If your manifest has a database section, tables are also created automatically on save.

Every query has the end-user's ID injected as a PostgreSQL session variable (app.user_id), enabling Row-Level Security policies for per-user data isolation.

Declarative Schema

Define your tables in the manifest's database.tables section and they'll be created automatically on save. See the Manifest Schema reference.

TIP

See the Developer Database guide for setup instructions, RLS patterns, and examples.

context.managedDb.query()

Execute a SQL query against your managed developer database.

js
// With RLS-enabled tables (declared with rls: "pie_user_id" in manifest),
// rows are automatically filtered to the current user
const result = await context.managedDb.query(
  'SELECT * FROM notes ORDER BY created_at DESC'
);
// Only returns notes belonging to the current user

// INSERT — pie_user_id is auto-populated via column default
await context.managedDb.query(
  'INSERT INTO notes (title, content) VALUES ($1, $2)',
  ['My Note', 'Note content']
);

// For tables without RLS, filter manually
const result = await context.managedDb.query(
  'SELECT * FROM shared_config WHERE key = $1',
  ['theme']
);

Parameters

Field | Type | Description
sql | string | SQL query text (supports full CRUD: SELECT, INSERT, UPDATE, DELETE, and DDL)
params | array | Positional query params (optional)
opts | object | Options object (optional)
opts.timeoutMs | number | Query timeout (default 30000, max 30000)
opts.maxRows | number | Max rows returned (default 1000, max 10000)

Response

js
{
  rows: [{ id: '...', user_id: '...', title: 'My Note' }],
  columns: ['id', 'user_id', 'title'],
  rowCount: 1,
  truncated: false,
  executionTimeMs: 12
}

Session Variables

These PostgreSQL session variables are set on every query:

Variable | Description
app.user_id | The end-user's PIE user ID
app.plugin_id | Your plugin's ID

Access them in SQL with current_setting('app.user_id', true). Use them in RLS policies for automatic per-user data isolation.
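
If you prefer hand-written policies over the declarative rls: "pie_user_id" option, a policy keyed on the session variable might look like the sketch below. The notes table and pie_user_id column are illustrative names, and this assumes DDL is accepted through managedDb.query() as documented above:

```javascript
// Hypothetical: hand-roll the per-user RLS that the declarative manifest
// option would otherwise create for you.
async function enableNotesRls(context) {
  await context.managedDb.query('ALTER TABLE notes ENABLE ROW LEVEL SECURITY');
  await context.managedDb.query(
    `CREATE POLICY notes_per_user ON notes
       USING (pie_user_id = current_setting('app.user_id', true))`
  );
}
```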

Differences from context.db.query()

 | context.db.query() | context.managedDb.query()
Database | External (you provide credentials) | PIE-managed (automatic)
Access | Read-only | Full CRUD
User context | None | app.user_id injected
Signature | query({ connection, sql, params }) | query(sql, params, opts)

context.ai

AI capabilities for plugin code. Available in tools, connectors, and automations.

context.ai.chat()

Full multi-turn chat completion with tool calling and multimodal (image) support. This is the most powerful AI method — it gives your plugin direct access to any supported LLM with the full OpenAI Chat Completions message format, including function/tool calling and image inputs.

Use this when you need:

  • Multi-step agent loops (the model calls tools, you execute them, and feed results back)
  • Multimodal inputs (screenshots, images alongside text)
  • Fine-grained control over system prompts, message history, and tool definitions
  • Specific model selection (e.g., openai/gpt-5.4 for computer use)

js
const result = await context.ai.chat({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' },
  ],
});

// result: { content: 'The capital of France is Paris.', toolCalls: [], usage: { ... } }

With tool calling:

js
const result = await context.ai.chat({
  messages: [
    { role: 'system', content: 'You are a browser automation agent.' },
    { role: 'user', content: 'Navigate to example.com and get the page title.' },
  ],
  tools: [
    {
      type: 'function',
      function: {
        name: 'exec_js',
        description: 'Execute JavaScript code in the browser',
        parameters: {
          type: 'object',
          properties: {
            code: { type: 'string', description: 'JS code to run' },
          },
          required: ['code'],
        },
      },
    },
  ],
  model: 'openai/gpt-5.4',
  reasoningEffort: 'low',
});

if (result.toolCalls.length > 0) {
  const call = result.toolCalls[0];
  // call: { id: 'call_abc123', name: 'exec_js', arguments: '{"code":"..."}' }
  const args = JSON.parse(call.arguments);
  // Execute the tool, then send results back in a follow-up chat call
}

With multimodal (image) input:

js
const result = await context.ai.chat({
  messages: [
    { role: 'system', content: 'Describe what you see in screenshots.' },
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is on this page?' },
        { type: 'image_url', image_url: { url: 'data:image/png;base64,...' } },
      ],
    },
  ],
  model: 'openai/gpt-5.4',
});

Parameters:

Parameter | Type | Description
messages | array | Array of chat messages in OpenAI format ({ role, content } or multimodal content arrays). Required.
tools | array | OpenAI-format tool definitions ({ type: 'function', function: { name, description, parameters } }). Optional.
model | string | Model ID to use (see Supported Models). Optional — defaults to openai/gpt-5.4.
temperature | number | Sampling temperature (0–2). Optional.
reasoningEffort | string | Reasoning effort level ('low', 'medium', 'high'). Optional — defaults to 'low'.

Returns:

js
{
  content: string | null,   // The model's text response (null if only tool calls)
  toolCalls: [              // Array of tool calls the model wants to make
    {
      id: string,           // Unique call ID (use this in tool result messages)
      name: string,         // Function name
      arguments: string,    // JSON string of arguments
    }
  ],
  usage: {                  // Token usage data
    promptTokens: number,
    completionTokens: number,
    totalTokens: number,
    cost: number,           // Estimated cost in USD
  },
}

Agent loop pattern:

The most common pattern is an agent loop where you repeatedly call context.ai.chat(), execute any tool calls, and feed results back:

js
let messages = [
  { role: 'system', content: 'You are a helpful assistant with tools.' },
  { role: 'user', content: userPrompt },
];

for (let step = 0; step < 20; step++) {
  const result = await context.ai.chat({ messages, tools, model: 'openai/gpt-5.4' });

  if (result.toolCalls.length === 0) {
    // Model is done — result.content has the final answer
    return { answer: result.content };
  }

  // Append the assistant's response (with tool calls) to history
  messages.push({
    role: 'assistant',
    content: result.content,
    tool_calls: result.toolCalls.map(tc => ({
      id: tc.id,
      type: 'function',
      function: { name: tc.name, arguments: tc.arguments },
    })),
  });

  // Execute each tool call and append results
  for (const tc of result.toolCalls) {
    const args = JSON.parse(tc.arguments);
    const output = await executeMyTool(tc.name, args);
    messages.push({
      role: 'tool',
      tool_call_id: tc.id,
      content: typeof output === 'string' ? output : JSON.stringify(output),
    });
  }
}

Notes:

  • Messages follow the OpenAI Chat Completions format. Roles: system, user, assistant, tool.
  • Tool result messages must have role: 'tool' and include the tool_call_id from the corresponding tool call.
  • For multimodal messages, use content arrays with { type: 'text', text: '...' } and { type: 'image_url', image_url: { url: '...' } } objects. Image URLs can be data:image/png;base64,... or HTTPS URLs.
  • All calls are routed through OpenRouter — no separate API key needed. Billing is handled automatically.
  • The 4-minute timeout applies per call (same as context.fetch).

context.ai.analyze()

Use AI to analyze and classify data:

js
const result = await context.ai.analyze({
  prompt: 'Classify this email as urgent or not urgent. Return JSON: {"label": "urgent|not_urgent"}',
  data: { subject, from, snippet }
});

// result: { label: 'urgent', confidence: 0.95 }

With a specific model:

js
const result = await context.ai.analyze({
  prompt: 'Generate a detailed product description',
  data: { name, features, audience },
  model: 'google/gemini-3.1-pro-preview'
});

Parameters:

Parameter | Type | Description
prompt | string | Instructions for the AI
data | object | Data to analyze
model | string | Optional. Model ID to use (see Supported Models below). Defaults to google/gemini-3-flash-preview.

Returns: Parsed JSON from the AI response, or text if not JSON.
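
Since the return value can be either an object or a plain string, it helps to normalize before use. A defensive sketch; normalizeAnalyzeResult is not part of the API:

```javascript
// Hypothetical helper: coerce an analyze() result into an object either way.
function normalizeAnalyzeResult(result) {
  if (typeof result !== 'string') return result; // already parsed JSON
  try {
    return JSON.parse(result); // the model returned JSON as text
  } catch {
    return { text: result }; // wrap free-form text
  }
}
```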

context.ai.summarize()

Get a text summary of content:

js
const summary = await context.ai.summarize(longArticleText);
// "Brief summary of the article highlighting key points..."

With a specific model:

js
const summary = await context.ai.summarize(longArticleText, {
  model: 'anthropic/claude-sonnet-4.6'
});

Parameters:

Parameter | Type | Description
content | string | Text to summarize
options | object | Optional second argument
options.model | string | Optional. Model ID to use (see Supported Models below). Defaults to google/gemini-3-flash-preview.

Returns: String summary

context.ai.listModels()

Get the list of available models at runtime:

js
const models = await context.ai.listModels();
// [
//   { id: 'google/gemini-3-flash-preview', name: 'Gemini 3 Flash', provider: 'Google', description: '...' },
//   { id: 'anthropic/claude-sonnet-4.6', name: 'Claude Sonnet 4.6', provider: 'Anthropic', description: '...' },
//   ...
// ]

Returns: Array of model objects with id, name, provider, and description.

Supported Models

These are the currently available models you can pass to context.ai.chat(), context.ai.analyze(), and context.ai.summarize():

Model ID | Name | Provider
openai/gpt-5.4 | GPT-5.4 | OpenAI
google/gemini-3-flash-preview | Gemini 3 Flash | Google
google/gemini-3-pro-preview | Gemini 3 Pro | Google
google/gemini-3.1-pro-preview | Gemini 3.1 Pro | Google
google/gemini-2.5-flash-lite | Gemini 2.5 Flash Lite | Google
anthropic/claude-sonnet-4.6 | Claude Sonnet 4.6 | Anthropic
anthropic/claude-opus-4.6 | Claude Opus 4.6 | Anthropic
anthropic/claude-sonnet-4.5 | Claude Sonnet 4.5 | Anthropic
anthropic/claude-haiku-4.5 | Claude Haiku 4.5 | Anthropic
openai/gpt-5.2-chat | GPT-5.2 | OpenAI
openai/gpt-5.2-pro | GPT-5.2 Pro | OpenAI
openai/gpt-5.2-codex | GPT-5.2 Codex | OpenAI
openai/gpt-5-mini | GPT-5 Mini | OpenAI
openai/gpt-5-nano | GPT-5 Nano | OpenAI
openai/gpt-4o | GPT-4o | OpenAI
openai/gpt-4o-mini | GPT-4o Mini | OpenAI
moonshotai/kimi-k2.5 | Kimi K2.5 | Moonshot
x-ai/grok-4.1-fast | Grok 4.1 Fast | xAI
deepseek/deepseek-v3.2 | DeepSeek V3.2 | DeepSeek
minimax/minimax-m2.5 | MiniMax M2.5 | MiniMax
openai/gpt-oss-120b | GPT OSS 120B | Ultra Fast
openai/gpt-oss-safeguard-20b:nitro | GPT OSS Safeguard 20B | Ultra Fast

Use context.ai.listModels() to get the most up-to-date list at runtime, as new models may be added.

Billing

AI calls are billed based on the actual token usage of the model you select. Different models have different per-token costs — more capable models (e.g., Claude Opus 4.6, GPT-5.2 Pro) cost more per token than lightweight models (e.g., GPT-5 Nano, Gemini 2.5 Flash Lite). The cost is determined by the upstream provider (via OpenRouter) and your subscription plan's margin multiplier is applied on top.

Note: AI calls count toward your token usage and balance.

context.widget

Control your plugin's widget iframe. Available in widget action handlers (isolated-vm) and tool action handlers (E2B). Widgets are sandboxed iframes that communicate with your plugin handler via PIE.sendAction() in the iframe and context.widget on the server side.

context.widget.show(data)

Open the widget and send initial data. If the widget is already open, updates its data.

js
await context.widget.show({
  event: 'dashboard',
  items: [{ id: 1, name: 'Item 1' }],
});

context.widget.update(data)

Push new data to an already-open widget. The widget receives this via PIE.onData(callback).

js
await context.widget.update({
  event: 'progress',
  completed: 3,
  total: 10,
});
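
On the iframe side, the widget receives these payloads via PIE.onData(callback), mentioned above. A browser-side sketch that renders the progress update; it's wrapped in a function so the PIE client object and target element are explicit, and the #progress element is an illustrative choice:

```javascript
// Hypothetical widget-iframe wiring: react to context.widget.update() pushes.
function wireProgress(PIE, progressEl) {
  PIE.onData((data) => {
    if (data.event !== 'progress') return; // ignore unrelated updates
    const pct = Math.round((data.completed / data.total) * 100);
    progressEl.textContent = `${pct}% (${data.completed}/${data.total})`;
  });
}
```

In a real widget you would call it as wireProgress(PIE, document.querySelector('#progress')) once at load.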

context.widget.hide()

Close the widget.

js
await context.widget.hide();

_triggerToolCall — Background E2B dispatch from widget actions

Widget actions run in isolated-vm (30s timeout, lightweight). When a widget action needs to kick off heavy work (image generation, long API calls, data processing), it can return a _triggerToolCall object. The platform will then dispatch a background E2B execution of the same plugin with the trigger object as the input — no LLM in the loop.

js
// Inside your widget action handler:
case 'start_processing': {
  await context.managedDb.query('UPDATE jobs SET status=$1 WHERE id=$2', ['processing', payload.jobId]);
  await context.widget.update({ event: 'processing_started', flash: 'Processing...' });

  return {
    success: true,
    _triggerToolCall: {
      action: 'run_heavy_job',
      job_id: payload.jobId
    }
  };
}

The _triggerToolCall object is passed directly as the input to your plugin's handler() function in E2B. Your handler's switch(input.action) routing handles it like any other tool action.

How it works:

  1. User clicks a button in the widget → PIE.sendAction('start_processing', { jobId })
  2. Widget action runs in isolated-vm — does lightweight DB updates, sends widget update, returns _triggerToolCall
  3. Response sent to client immediately (user sees feedback)
  4. Platform dispatches toolSandbox.execute() in background → E2B runs the heavy action
  5. E2B handler calls context.widget.update() when done → widget updates via SSE

Requirements:

  • _triggerToolCall must be an object with an action string property
  • The action value should match a case in your handler's switch(input.action) routing
  • Additional parameters are passed through as-is (e.g., slide_id, job_id)
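
The requirements above are easy to check before returning. A small validator sketch (not part of the API) that a widget action can run on its trigger object:

```javascript
// Hypothetical guard: does this object satisfy the _triggerToolCall shape?
function isValidTriggerToolCall(trigger) {
  return (
    typeof trigger === 'object' &&
    trigger !== null &&
    typeof trigger.action === 'string' && // action string is required
    trigger.action.length > 0
  );
}
```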

See the Widget Actions guide for full examples and best practices.

context.publicApp

context.publicApp is available when your plugin uses PIE public apps. It has two modes depending on the runtime.

Mode 1: Widget Runtimes

When the current plugin already has a published public app instance for the current developer/user, PIE injects deployment metadata into widget actions:

js
context.publicApp
// {
//   instanceId: '...',
//   instanceSlug: 'pie-forms-e8e807f3',
//   pluginSlug: 'pie-forms',
//   baseUrl: 'https://your-pie-domain.com'
// }

Use this to build share links from widget actions:

js
const shareUrl =
  context.publicApp.baseUrl +
  '/apps/' +
  context.publicApp.pluginSlug +
  '/' +
  context.publicApp.instanceSlug;

If the plugin has no deployed public app yet, context.publicApp is null in these runtimes.

Mode 2: Public Action Runtime

When the browser calls:

text
POST /api/public-actions/{instanceIdOrSlug}/{actionId}

your handler receives:

js
{
  _publicAction: true,
  actionId: 'load_form',
  payload: { ... },
  instanceId: '...',
  visitorIp: '...'
}

In that runtime, context.publicApp looks like this:

js
context.publicApp
// {
//   instanceId: '...',
//   visitorIp: '...',
//   respond(data) { ... }
// }

context.publicApp.respond(data)

Use respond() to set the JSON payload returned to the browser:

js
async function handler(input, context) {
  if (input._publicAction && input.actionId === 'load_form') {
    const result = await context.managedDb.query(
      'SELECT id, title FROM forms WHERE id = $1',
      [input.payload.formId]
    );

    context.publicApp.respond({
      form: result.rows[0] || null,
    });

    return { success: true };
  }
}

If you do not call respond(), PIE returns your handler result as data.

Public Action Data Model

Public actions are anonymous browser requests:

  • context.user.id is the plugin owner's user ID in a public action runtime
  • context.managedDb.query() runs against the plugin owner's managed database access
  • Anonymous visitors are not automatically isolated by RLS

That means your public action handler must validate what a visitor can access using your own public IDs, slugs, tokens, or row filters.
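
One common pattern for this is resolving rows by an unguessable public slug plus a published flag, rather than trusting any ID the browser sends. A sketch under those assumptions; the forms table and its public_slug and published columns are illustrative names:

```javascript
// Hypothetical public-action handler body: visitor access is gated by a
// public slug, since RLS does not isolate anonymous visitors here.
async function loadPublicForm(context, input) {
  const slug = String(input.payload.slug || '');
  const result = await context.managedDb.query(
    'SELECT id, title, fields FROM forms WHERE public_slug = $1 AND published = true',
    [slug]
  );
  // Unknown or unpublished slugs resolve to null, never an error that leaks info
  context.publicApp.respond({ form: result.rows[0] || null });
  return { success: true };
}
```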

See the Public Apps and Routes guide for .pie packaging, hosted URL structure, SPA routing, uploads, and custom domains.

context.files

Upload, list, and manage files in the user's PIE storage. Files are scoped to your agent by default -- other agents cannot see them unless the user explicitly grants access.

Uploaded documents (PDF, text, markdown, etc.) are automatically indexed in the user's knowledge base, making them searchable by the AI in chat.

context.files.upload(filename, mimeType, base64Data)

Upload a file. PIE automatically generates a descriptive filename, tags, and description using AI.

js
const result = await context.files.upload(
  'report.pdf',
  'application/pdf',
  base64EncodedData
);

// result: {
//   id: 'abc-123',
//   filename: 'quarterly-sales-report-q1-2026.pdf',  // AI-generated
//   originalFilename: 'report.pdf',
//   mimeType: 'application/pdf',
//   size: 125000,
//   tags: ['report', 'sales', 'quarterly'],
//   description: 'Q1 2026 quarterly sales report with revenue breakdown',
//   url: 'https://...'  // short-lived signed URL
// }

Parameters:

Parameter | Type | Description
filename | string | Original filename (with extension)
mimeType | string | MIME type (e.g., image/png, application/pdf)
base64Data | string | File content as base64-encoded string

Returns: Object with id, filename, mimeType, size, tags, description, and a short-lived url.

Quota: 250 MB per agent per user. An error is thrown if the quota is exceeded.

context.files.list(mimeTypeFilter?)

List files your agent has access to (own files + files granted by the user).

js
// All files
const files = await context.files.list();

// Only images
const images = await context.files.list('image/*');

Parameters:

Parameter | Type | Description
mimeTypeFilter | string | Optional. Filter by MIME type prefix (e.g., image/*)

Returns: Array of file objects with id, filename, mimeType, size, tags, description, createdAt.

context.files.get(fileId)

Get metadata for a specific file.

js
const file = await context.files.get('abc-123');

context.files.getUrl(fileId)

Get a short-lived download/preview URL (valid for ~15 minutes).

js
const url = await context.files.getUrl('abc-123');

context.files.delete(fileId)

Delete a file your agent created.

js
const success = await context.files.delete('abc-123');

Note: You can only delete files your agent created. The user can delete any file from the Files page.

Returning files in tool results

To display a file inline in the chat (especially images), return a file object in your handler result:

js
async function handler(input, context) {
  // ... generate or fetch a file ...

  const uploaded = await context.files.upload(
    'chart.png',
    'image/png',
    base64ImageData
  );

  return {
    success: true,
    file: uploaded,  // PIE will display this inline in chat
    description: 'Monthly revenue chart',
  };
}

PIE detects the file property and renders images inline in the chat message.

File Search integration

When you upload a document type (PDF, text, markdown, JSON, etc.), PIE automatically indexes it in the user's knowledge base. This means:

  • The AI can find and cite the document when answering questions
  • The user can ask "What does my report say about..." and get answers from agent-uploaded files
  • No extra code needed -- indexing happens automatically on upload

context.pdf

Extract text content from PDF files. The PDF is downloaded from a URL and parsed server-side, so your plugin gets clean, readable text — no binary data to deal with.

context.pdf.extractText(url, options?)

Downloads a PDF from the given URL and returns the extracted text content.

js
const result = await context.pdf.extractText('https://example.com/document.pdf');
console.log(result.text);    // Full text content of the PDF
console.log(result.pages);   // Number of pages
console.log(result.info);    // PDF metadata (title, author, etc.)

Parameters:

Parameter | Type | Description
url | string | URL to download the PDF from (must be publicly accessible or a signed/temporary URL)
options.maxPages | number | Maximum number of pages to extract (default: all pages, max: 200)

Returns:

Field | Type | Description
success | boolean | Whether extraction succeeded
text | string | The extracted text content
pages | number | Total number of pages in the PDF
info | object? | PDF metadata: title, author, subject, creator, creationDate, modDate

Example: Dropbox PDF reading

js
// Get a temporary download link from Dropbox
const linkData = await context.oauth.fetch(
  'https://api.dropboxapi.com/2/files/get_temporary_link',
  { method: 'POST', headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ path: '/Documents/report.pdf' }) }
);
const link = JSON.parse(linkData.body).link;

// Extract text from the PDF
const pdf = await context.pdf.extractText(link);
return { success: true, content: pdf.text, pages: pdf.pages };

Limits:

  • Maximum PDF size: 20 MB
  • Maximum pages: 200 (use options.maxPages to limit)
  • Download timeout: 30 seconds

context.notify() (Automations Only)

Post messages to the user's PIE Assistant session:

js
await context.notify('Your notification message', {
  title: 'Optional Title',
  urgent: true, // Highlights the notification
});

Parameters:

Parameter | Type | Description
message | string | The notification message (supports markdown)
options.title | string | Optional title/heading
options.urgent | boolean | If true, notification is highlighted

Example with formatting:

js
await context.notify(
  `## Daily Report\n\n` +
  `- **Emails**: ${emailCount} new\n` +
  `- **Tasks**: ${taskCount} due today\n\n` +
  `---\n` +
  `*Generated at ${new Date().toLocaleString()}*`,
  { title: 'Morning Summary' }
);

context.streamEvent()

Push a real-time event to the client during tool execution. Events are delivered immediately through the chat stream — the user sees them while your handler is still running. This is useful for long-running tools that need to provide live feedback or hand off interactive URLs.

Signature

js
await context.streamEvent(type, data);

Parameters

Parameter | Type | Description
type | string | Event type identifier (e.g., 'browser_live_view')
data | object | Arbitrary data payload delivered to the client (optional, defaults to {})

Returns

true on success.

Built-in event types

PIE's client recognizes these event types out of the box:

Event type | Data fields | Behavior
browser_live_view | url (string), sessionId (string, optional) | Shows a live browser view card in chat with "Watch Live" link and optional iframe preview
browser_needs_input | url (string), message (string, optional) | Promotes the live view card to an alert state prompting the user to take control

You can also emit custom event types — they'll be delivered to the client as generic stream events.

Example: Browser live view

js
async function handler({ task }, context) {
  // Create a remote browser session
  const session = await createBrowserSession();
  const liveViewUrl = await getLiveViewUrl(session.id);

  // Push the live view URL to the user immediately
  await context.streamEvent('browser_live_view', {
    url: liveViewUrl,
    sessionId: session.id,
  });

  // Run the browsing task (this takes 30–120 seconds)
  const result = await runBrowserAgent(session.id, task);

  return { success: true, summary: result.summary };
}

The user sees the "Watch Live" card within seconds, then the agent's final summary once the task completes.

Example: Requesting user input

js
// If the agent gets stuck on a login page:
await context.streamEvent('browser_needs_input', {
  url: liveViewUrl,
  message: 'Please log in with your credentials',
});

// Wait for the user to act, then retry
await waitForPageChange(session.id);

Limits

  • Maximum 20 stream events per handler execution
  • type must be 1–100 characters
  • data is serialized as JSON — keep payloads small

context.getSessionMetadata()

Retrieve the current persistent sandbox session's metadata. Only available for plugins with runtime.persistent: true.

Signature

js
const metadata = await context.getSessionMetadata();

Returns

An object (Record<string, unknown>) containing whatever metadata the plugin has previously stored. Returns {} if no metadata has been set.

Example

js
const meta = await context.getSessionMetadata();
if (meta.initialized) {
  console.log(`Resuming session for repo: ${meta.repoUrl}`);
}

Notes

  • Metadata persists across sandbox pause/resume cycles.
  • Metadata is stored per user per plugin — each user has their own session state.
  • Returns {} for non-persistent plugins (no error thrown).

context.updateSessionMetadata()

Merge new key-value pairs into the persistent sandbox session's metadata. Only available for plugins with runtime.persistent: true.

Signature

js
await context.updateSessionMetadata(metadata);

Parameters

Parameter | Type | Description
metadata | object | Key-value pairs to merge into the existing metadata. Existing keys not present in the update are preserved.

Returns

true on success.

Example

js
const meta = await context.getSessionMetadata();

await context.updateSessionMetadata({
  initialized: true,
  repoUrl: 'https://github.com/org/repo',
  totalRuns: (meta.totalRuns || 0) + 1,
});

Special metadata keys

| Key | Behavior |
|---|---|
| requestKill | If set to true, the platform will kill (not pause) the sandbox after the current execution completes. Use this when the user wants to end their session. |

Example: Ending a session

js
if (action === 'end_session') {
  await context.updateSessionMetadata({ requestKill: true });
  return { result: 'Session ended.' };
}

Notes

  • Updates are merged (shallow), not replaced. To remove a key, set it to null.
  • Keep metadata small — it's stored as JSONB in the database.
  • Metadata survives session kills for context restoration purposes.
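The merge-and-remove semantics described above can be sketched locally. This is an illustrative model of the behavior, not the platform implementation:

```js
// Sketch of the shallow-merge semantics of updateSessionMetadata()
// (illustrative only — the real merge happens server-side).
function applyMetadataUpdate(existing, update) {
  const next = { ...existing };
  for (const [key, value] of Object.entries(update)) {
    if (value === null) {
      delete next[key]; // setting a key to null removes it
    } else {
      next[key] = value; // other keys are overwritten or added
    }
  }
  return next;
}

// applyMetadataUpdate({ a: 1, b: 2 }, { b: null, c: 3 }) → { a: 1, c: 3 }
```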

context.tasks

Create, list, update, and delete scheduled tasks for the user. Use this to build reminders, recurring heartbeats, cron jobs, and other scheduled actions.

context.tasks.create(options)

Create a new scheduled task.

js
const task = await context.tasks.create({
  name: 'Daily standup reminder',
  taskType: 'heartbeat',
  schedule: { kind: 'cron', expr: '0 9 * * 1-5' },
  payload: { kind: 'heartbeat', message: 'Time for your daily standup!' },
  delivery: { target: 'pie_assistant' },
  deleteAfterRun: false,
});

Parameters:

| Field | Type | Description |
|---|---|---|
| name | string | Human-readable task name |
| taskType | string | 'heartbeat', 'cron', or 'webhook' |
| schedule | object | Schedule config (see below) |
| payload | object | What to do when the task runs |
| delivery | object | Where to deliver ({ target: 'pie_assistant' }, 'email', or 'telegram') |
| description | string | Optional description |
| deleteAfterRun | boolean | If true, task is deleted after first run (for one-time reminders) |
| activeHours | object | Optional { start: 'HH:MM', end: 'HH:MM' } window |
| enabled | boolean | Whether the task is active (default: true) |

Schedule types:

| Kind | Fields | Example |
|---|---|---|
| once | atMs (unix ms) | { kind: 'once', atMs: 1735689600000 } |
| cron | expr (cron expression) | { kind: 'cron', expr: '0 9 * * *' } |
| interval | everyMs (milliseconds) | { kind: 'interval', everyMs: 3600000 } |

Returns: The created task object with id, name, schedule, nextRunAt, etc.

context.tasks.list()

List all of the user's scheduled tasks.

js
const tasks = await context.tasks.list();
for (const task of tasks) {
  console.log(`${task.name} - next run: ${task.nextRunAt}`);
}

Returns: Array of task objects.

context.tasks.update(taskId, updates)

Update an existing task.

js
const updated = await context.tasks.update('task-uuid', {
  schedule: { kind: 'cron', expr: '0 10 * * 1-5' },
  enabled: true,
});

Parameters:

| Field | Type | Description |
|---|---|---|
| taskId | string | The task ID to update |
| updates | object | Fields to update (same fields as create, all optional) |

Returns: The updated task object.

context.tasks.delete(taskId)

Delete a scheduled task.

js
const success = await context.tasks.delete('task-uuid');

Returns: true on success.
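list() and delete() combine naturally for cleanup, e.g. removing every task your agent created under a naming convention. The helper names below are illustrative, not part of the PIE API:

```js
// Pure helper: find tasks whose name starts with a given prefix.
function matchTasksByPrefix(tasks, prefix) {
  return tasks.filter((task) => task.name.startsWith(prefix));
}

// Delete all of this agent's tasks matching the prefix.
async function deleteMatchingTasks(context, prefix) {
  const tasks = await context.tasks.list();
  for (const task of matchTasksByPrefix(tasks, prefix)) {
    await context.tasks.delete(task.id);
  }
}
```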

Example: One-time reminder

js
const tomorrow9am = new Date();
tomorrow9am.setDate(tomorrow9am.getDate() + 1);
tomorrow9am.setHours(9, 0, 0, 0);

await context.tasks.create({
  name: 'Call dentist',
  taskType: 'heartbeat',
  schedule: { kind: 'once', atMs: tomorrow9am.getTime() },
  payload: { kind: 'heartbeat', message: 'Reminder: Call the dentist!' },
  deleteAfterRun: true,
});
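The interval schedule and activeHours window (neither shown above) can be combined for a recurring task that only fires during working hours. The name and message here are illustrative:

```js
// Build options for a recurring heartbeat limited to a 9-to-5 window.
function buildWorkHoursHeartbeat() {
  const HOUR_MS = 60 * 60 * 1000;
  return {
    name: 'Hourly inbox check',
    taskType: 'heartbeat',
    schedule: { kind: 'interval', everyMs: 2 * HOUR_MS }, // every 2 hours
    payload: { kind: 'heartbeat', message: 'Check for new messages.' },
    delivery: { target: 'pie_assistant' },
    activeHours: { start: '09:00', end: '17:00' }, // only fire during work hours
  };
}

// await context.tasks.create(buildWorkHoursHeartbeat());
```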

context.input (Automations Only)

Automation-specific input data. In your handler function, input contains:

Properties

| Property | Type | Description |
|---|---|---|
| lastRunAt | number \| null | Timestamp (ms) of last successful run |
| triggeredBy | string | 'cron', 'webhook', or 'manual' |
| triggerData | any | Webhook payload or lifecycle event data. For webhook triggers, includes _headers with selected HTTP headers. |

Example

js
async function handler(input, context) {
  const { lastRunAt, triggeredBy, triggerData } = input;
  
  // Process only data since last run
  const sinceTime = lastRunAt || (Date.now() - 24 * 60 * 60 * 1000);
  
  if (triggeredBy === 'webhook') {
    // Handle webhook payload
    const { eventType, data } = triggerData;
  }
  
  // ... rest of automation logic
}

Lifecycle Hooks

Agents can export additional handlers for lifecycle events:

onConnect(input, context)

Called immediately after a user connects OAuth. Use this to:

  • Set up external subscriptions (webhooks, watches)
  • Send a welcome notification
  • Fetch initial data
js
async function onConnect(input, context) {
  // input.triggerData.event === 'onConnect'
  
  await context.notify('Welcome! Setting up your account...', {
    title: 'Connected'
  });
  
  // Set up Gmail push notifications
  await context.oauth.fetch('https://gmail.googleapis.com/gmail/v1/users/me/watch', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      topicName: context.secrets.PUBSUB_TOPIC,
      labelIds: ['INBOX'],
    }),
  });
  
  return { success: true };
}

module.exports = { handler, onConnect };

onWebhook(input, context)

Called when an external service POSTs to your agent's webhook URL (/api/webhooks/plugin/{pluginId}).

The input.triggerData object contains the webhook request body, plus a _headers object with selected HTTP headers from the request. This is useful for services like GitHub that send the event type in a header:

js
async function onWebhook(input, context) {
  const payload = input.triggerData;
  
  // Access HTTP headers injected by the webhook route
  const githubEvent = payload._headers?.['x-github-event'];
  // 'push', 'pull_request', 'issues', etc.
  
  // The rest of the payload is the request body
  const action = payload.action; // 'opened', 'closed', etc.
  
  // Decode base64 Pub/Sub message (Gmail, etc.)
  if (payload.message?.data) {
    const decoded = JSON.parse(atob(payload.message.data));
    // Process decoded notification
  }
  
  return { success: true, processed: 5 };
}

module.exports = { handler, onWebhook };

Available headers in _headers:

| Header | Description |
|---|---|
| x-github-event | GitHub webhook event type |
| x-hub-signature-256 | GitHub HMAC signature |
| content-type | Request content type |

Event-Triggered Heartbeats

If your agent declares webhook events in its manifest (via eventTypeField and events[]), users can create heartbeats that trigger automatically when specific events arrive. The webhook payload is injected into the AI prompt as context. See Declaring Webhook Events for details.

onDisconnect(input, context) (Optional)

Called when a user disconnects OAuth. Use this to clean up external subscriptions.

js
async function onDisconnect(input, context) {
  // Clean up external resources
  await context.oauth.fetch('https://api.example.com/unsubscribe', {
    method: 'DELETE'
  });
  
  return { success: true };
}

module.exports = { handler, onConnect, onDisconnect };

onInstall(input, context) (Optional)

Called when a user installs the agent. Use this to perform initial setup, provision resources, or send a welcome message.

js
async function onInstall(input, context) {
  // input.triggerData.event === 'onInstall'
  
  await context.notify('Thanks for installing! Let\'s get you set up.', {
    title: 'Welcome'
  });
  
  return { success: true };
}

module.exports = { handler, onInstall };

onUninstall(input, context) (Optional)

Called when a user uninstalls the agent. Use this to clean up any resources or external registrations.

js
async function onUninstall(input, context) {
  // input.triggerData.event === 'onUninstall'
  
  // Remove external webhook registrations, clean up resources, etc.
  await context.fetch('https://api.example.com/hooks/remove', {
    method: 'DELETE',
    headers: { 'Authorization': `Bearer ${context.secrets.API_TOKEN}` },
  });
  
  return { success: true };
}

module.exports = { handler, onUninstall };

Handler Priority

When an agent is invoked, PIE checks handlers in this order:

  1. onInstall - if triggerData.event === 'onInstall'
  2. onUninstall - if triggerData.event === 'onUninstall'
  3. onConnect - if triggerData.event === 'onConnect'
  4. onDisconnect - if triggerData.event === 'onDisconnect'
  5. onWebhook - if triggeredBy === 'webhook'
  6. handler - for cron/manual triggers

If the specific handler doesn't exist, PIE falls back to handler.
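The dispatch order above can be sketched as a small selection function. This is illustrative only; the real dispatcher is platform-internal:

```js
// Pick which exported handler PIE would invoke (illustrative sketch).
function pickHandler(exports, input) {
  const event = input.triggerData && input.triggerData.event;
  const lifecycle = ['onInstall', 'onUninstall', 'onConnect', 'onDisconnect'];
  // Lifecycle events route to their matching hook if exported
  if (lifecycle.includes(event) && typeof exports[event] === 'function') {
    return event;
  }
  // Webhook triggers route to onWebhook if exported
  if (input.triggeredBy === 'webhook' && typeof exports.onWebhook === 'function') {
    return 'onWebhook';
  }
  // Everything else (and any missing hook) falls back to handler
  return 'handler';
}
```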

Error Handling

Always handle errors gracefully:

js
async function handler(input, context) {
  try {
    const response = await context.fetch('https://api.example.com/data');
    
    if (!response.ok) {
      return { 
        error: true, 
        message: `API error: ${response.status}` 
      };
    }
    
    return JSON.parse(response.body);
  } catch (error) {
    return { 
      error: true, 
      message: error.message || 'Unknown error' 
    };
  }
}

Return Values

Your handler should return:

Success:

js
return {
  // Structured data for the AI to use
  temperature: 72,
  condition: 'Sunny',
};

Error:

js
return {
  error: true,
  message: 'Something went wrong',
};

Requires Authentication (connectors):

js
return {
  error: true,
  message: 'Please connect the service',
  requiresAuth: true,
};

Sandbox Templates

By default, your agent code runs in a lightweight Node.js sandbox with no extra packages. If your agent needs heavy dependencies (Playwright, Puppeteer, native binaries, ML libraries, etc.), you can create a sandbox template that pre-installs everything during a one-time build step.

When you need a template

  • Your agent imports npm packages that aren't available in the default sandbox (e.g., playwright, @browserbasehq/sdk, sharp)
  • You need system-level packages installed via apt-get (e.g., ffmpeg, chromium)
  • You want faster cold starts by avoiding runtime installs

Creating a template

  1. Go to the Developer Portal and open the Templates tab in the right panel
  2. Enter a name, display name, and a setup script (bash)
  3. Click Create & Build Template

The setup script runs once during the template build (not on every execution). Example:

bash
npm install playwright @browserbasehq/sdk
apt-get update && apt-get install -y ffmpeg

PIE builds the template in the background (1-3 minutes). You'll see the status update to "Ready" when it's done. If the build fails, you'll see the error and build logs.

Assigning a template to your agent

  1. Select your agent in the Developer Portal
  2. In the Settings tab, find the Sandbox Template dropdown
  3. Pick your template (or a PIE-provided system template like "Browser")

System templates

PIE provides pre-built templates for common use cases:

| Template | Includes |
|---|---|
| Browser (Playwright + Browserbase) | playwright, @browserbasehq/sdk |
| Claude Code Agent Environment | @anthropic-ai/claude-code, gh CLI, git, curl, jq |

System templates are available to all developers and cannot be modified.

Notes

  • Templates are reusable — one template can be assigned to multiple agents
  • Template builds are cached — rebuilds only run when you change the setup script
  • The setup script runs as root in a Debian-based container
  • Keep setup scripts minimal for faster builds

context.billing

The context.billing API lets your agent charge users custom amounts during execution. This is for variable-cost operations like phone calls, API pass-through charges, or data processing where the cost isn't known until execution time.

Prerequisite: You must enable customUsageEnabled in your plugin's pricing configuration via PUT /api/plugins/:id/pricing.

context.billing.chargeUsage({ amount, description })

Charge the user a custom amount. The charge is deducted from the user's prepaid balance immediately.

js
const result = await context.billing.chargeUsage({
  amount: 150000,                          // microdollars (required)
  description: '3-min call to +1-555-1234', // human-readable (required)
});
// result: { success: true, charged: 150000 }

Parameters:

| Parameter | Type | Description |
|---|---|---|
| amount | number | Amount in microdollars (1,000,000 = $1.00). Minimum 100. Must be an integer. |
| description | string | Human-readable description shown in the user's billing history. 1-200 characters. |

Returns:

js
{
  success: true,
  charged: 150000,  // amount actually charged (microdollars)
}

Errors:

The call throws an error if:

  • Custom usage charging is not enabled for the plugin (403)
  • Amount is below the minimum (100 microdollars)
  • Amount exceeds maxCustomChargeMicrodollars (or the $5.00 default cap)
  • More than 10 charge calls in a single handler execution

Example: Metered phone call

js
async function handler(input, context) {
  const call = await startPhoneCall(input.number);
  const result = await waitForCallEnd(call.id);

  const costPerMinute = 50000; // $0.05/min
  const cost = Math.ceil(result.durationMinutes * costPerMinute);

  await context.billing.chargeUsage({
    amount: cost,
    description: `${result.durationMinutes}-min call to ${input.number}`,
  });

  return { transcript: result.transcript, duration: result.durationMinutes };
}

Example: API cost pass-through

js
async function handler(input, context) {
  const response = await callExternalApi(input.query);

  if (response.cost > 0) {
    const costMicrodollars = Math.round(response.cost * 1_000_000);
    await context.billing.chargeUsage({
      amount: costMicrodollars,
      description: `API query: ${input.query.substring(0, 50)}`,
    });
  }

  return response.data;
}

Limits

  • Minimum charge: 100 microdollars ($0.0001)
  • Maximum charge per call: Configurable via maxCustomChargeMicrodollars in pricing (default: 5,000,000 / $5.00)
  • Maximum calls per execution: 10 chargeUsage calls per handler invocation
  • Description length: 1-200 characters
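Since chargeUsage throws on out-of-range amounts, it can help to convert and validate before calling it. The helper names below are illustrative, not part of the PIE API:

```js
// Illustrative helpers for working in microdollars (1,000,000 = $1.00).
const MIN_CHARGE_MICRODOLLARS = 100;        // $0.0001 platform minimum
const DEFAULT_MAX_MICRODOLLARS = 5_000_000; // $5.00 default cap

function dollarsToMicrodollars(dollars) {
  return Math.round(dollars * 1_000_000);
}

function validateCharge(amount, max = DEFAULT_MAX_MICRODOLLARS) {
  if (!Number.isInteger(amount)) throw new Error('amount must be an integer');
  if (amount < MIN_CHARGE_MICRODOLLARS) throw new Error('amount below platform minimum');
  if (amount > max) throw new Error('amount exceeds per-call cap');
  return amount;
}

// await context.billing.chargeUsage({
//   amount: validateCharge(dollarsToMicrodollars(0.15)),
//   description: 'Example metered operation',
// });
```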

Revenue Share

Custom usage charges follow the same revenue share as other pricing models (70% developer / 30% platform by default). Earnings appear in your developer dashboard alongside per-usage and subscription revenue.

Configuring Custom Usage

Enable and configure via the pricing API:

PUT /api/plugins/:id/pricing
json
{
  "customUsageEnabled": true,
  "maxCustomChargeMicrodollars": 10000000
}

| Field | Type | Description |
|---|---|---|
| customUsageEnabled | boolean | Set to true to allow context.billing.chargeUsage() calls |
| maxCustomChargeMicrodollars | number \| null | Max microdollars per charge call. null uses server default ($5.00). |

You can also enable this from the Pricing section of the Developer Portal.


context.machine

The context.machine API allows plugins to interact with the user's connected Mac through PIE Connect. Only available if the plugin declares machineCapabilities in its manifest.

context.machine.isOnline()

Check if the user has an online machine.

js
const online = await context.machine.isOnline();

context.machine.list()

List all connected machines.

js
const machines = await context.machine.list();
// [{ machineId, capabilities, connectedAt }]

context.machine.execute(capability, params)

Execute a command on the user's machine.

js
const info = await context.machine.execute('machine.info', {});
const clip = await context.machine.execute('clipboard.read', {});
await context.machine.execute('notifications.send', { title: 'Hello', message: 'World' });
const msgs = await context.machine.execute('messages.read', { limit: 5 });

See the full Machine API Reference for details on each capability.
