Give your AI
superpowers

The tool execution platform for AI agents. One integration gives your LLM access to 100+ tools, code execution, and OAuth management.

SOC2 Compliant
Encrypted
99.9% SLA
quickstart.ts
import { RunTools } from "@runtools/sdk"

// Initialize with your API key
const rt = new RunTools({ 
  apiKey: process.env.RUNTOOLS_API_KEY 
})

// Get tool definitions (including tool_search) for your LLM
const tools = await rt.getToolDefinitions()

// Execute any tool your LLM discovers
const result = await rt.execute({
  tool: "gmail_send",
  params: {
    to: "[email protected]",
    subject: "Hello from AI!",
    body: "This email was sent by my AI agent."
  }
})

console.log(result)
// { success: true, messageId: "..." }
sandbox.ts
import { RunTools } from "@runtools/sdk"

const rt = new RunTools({ apiKey: "rt_..." })

// Create an isolated sandbox
const sandbox = await rt.sandbox.create({
  runtime: "python:3.12",
  memory: "512mb",
  timeout: 300
})

// Execute code in isolation
const result = await sandbox.exec(`
import pandas as pd
df = pd.read_csv("data.csv")
print(df.describe())
`)

// Sandbox persists state between calls!
await sandbox.exec("print(df.head(10))")

// Pause to save costs, resume anytime
await sandbox.pause()  // Snapshot saved
await sandbox.resume() // Restored in <500ms
Isolated Sandboxes

Secure, stateful
execution environments

Spin up isolated Firecracker microVMs in milliseconds. Each sandbox has its own filesystem, network, and memory — completely isolated from others. Perfect for untrusted code execution.

Sub-500ms cold starts with Firecracker
Hardware-level isolation, not containers
Pause & resume — save costs, keep state
Persistent filesystem across executions
Explore Sandboxes
code_exec Tool

Let your LLM
write & run code

Give your AI the code_exec tool and watch it solve complex problems by writing code. Massive token savings — instead of returning data to the LLM for processing, let it process data directly in the sandbox.

Token comparison:

  • Without code_exec: 50K tokens
  • With code_exec: 2K tokens

96% savings
Process large datasets without context bloat
Stateful — variables persist between calls
LLM iterates on code until it works
Configure Code Execution
llm-with-code-exec.ts
// Your LLM discovers the code_exec tool
const response = await llm.chat({
  messages: [{ 
    role: "user", 
    content: "Analyze sales.csv and find top products" 
  }],
  tools: await rt.getToolDefinitions()
})

// LLM decides to use code_exec
// → Writes Python code to analyze CSV
// → Runs in isolated sandbox
// → Returns just the insights, not raw data!

// Result from LLM:
// "Based on my analysis of 50,000 rows:
//  1. Product A: $2.4M revenue (↑23%)
//  2. Product B: $1.8M revenue (↑12%)
//  3. Product C: $1.2M revenue (↓5%)"

// Only 847 tokens used instead of 50,000! 🎉

100+ pre-built tools

Gmail
Outlook
Slack
Discord
Teams
Telegram
WhatsApp
Twilio
Code Exec
GitHub
GitLab
SSH
AWS
Vercel
PostgreSQL
MongoDB
Redis
Supabase
Firebase
Pinecone
OpenAI
Anthropic
DALL-E
Midjourney
Stability
Exa
Perplexity
Notion
Confluence
Google Docs
Airtable
Google Calendar
Calendly
Asana
Linear
Jira
Monday
Stripe
Shopify
Uber
DoorDash
Instacart
Postmates
Zapier
Make
n8n
Web Scrape
Browserless
Firecrawl
Salesforce
HubSpot
Mailchimp
SendGrid
Segment
Mixpanel
Amplitude
Twitter/X
LinkedIn
Instagram
TikTok
Reddit
YouTube
Plaid
QuickBooks
DocuSign
Simple Integration

How it works

Three simple steps to give your AI agent the ability to execute real-world actions

01

Integrate tool_search

Add our tool_search function to your LLM. One function that lets your AI discover every tool.

tools: ["tool_search"]
02

AI discovers tools

User says "send an email" → LLM calls tool_search → Gets gmail_send, outlook_send, etc.

tool_search("send email")
03

Execute & return

LLM calls the tool. We handle OAuth, rate limits, retries. Results return instantly.

gmail_send({ to: "..." })

Example flow

"Book an Uber" → tool_search → uber_request → Ride booked
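The three steps above can be sketched end to end. This is a minimal, self-contained illustration of the discover-then-execute pattern using an in-memory registry in place of the RunTools API; `toolSearch` and `execute` here are local stand-ins, not the SDK's actual functions.

```typescript
// Hypothetical in-memory stand-in for the hosted tool registry.
type Tool = { name: string; description: string };

const registry: Tool[] = [
  { name: "gmail_send", description: "send an email via gmail" },
  { name: "uber_request", description: "book an uber ride" },
];

// Step 2: the LLM calls tool_search with the user's intent.
function toolSearch(query: string): Tool[] {
  const words = query.toLowerCase().split(/\s+/);
  return registry.filter((t) =>
    words.some((w) => t.description.includes(w))
  );
}

// Step 3: execute the tool the LLM picked (stubbed locally here;
// the real platform would handle OAuth, rate limits, and retries).
function execute(tool: string, params: Record<string, unknown>) {
  return { success: true, tool, params };
}

const matches = toolSearch("send email");
const result = execute(matches[0].name, { to: "[email protected]" });
console.log(matches.map((t) => t.name), result.success);
```

The key property: the model's context only ever holds `tool_search` plus the handful of schemas it asked for, not the whole catalog.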

Built for AI engineers

Everything you need to build powerful AI agents that can take action in the real world

Stateful Code Execution

Variables persist between calls. Firecracker snapshots save entire VM state.

Sub-500ms Cold Start

Firecracker microVMs boot instantly. Pre-warmed pools for high traffic.

OAuth Token Management

We store, refresh, and inject tokens. Users connect once, tools work forever.

Tool Marketplace

100+ pre-built tools for Gmail, Slack, databases, and more. Or bring your own.

Hardware Isolation

Each sandbox runs in its own microVM. Stronger isolation than containers.

93% Token Savings

Return tool schemas on-demand instead of stuffing them all in context.
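The on-demand schema idea behind that last point can be shown in a few lines. This is a sketch with hypothetical names (`fullSchemas`, `loadSchema`): only a single `tool_search` schema goes into the model's context up front, and full tool schemas are fetched when requested.

```typescript
// Hypothetical full catalog, kept server-side (imagine ~100 entries).
const fullSchemas: Record<string, object> = {
  gmail_send: { name: "gmail_send", parameters: { to: "string", subject: "string", body: "string" } },
  slack_post: { name: "slack_post", parameters: { channel: "string", text: "string" } },
};

// What actually goes into the model's context up front: one tiny schema.
const contextTools = [
  { name: "tool_search", parameters: { query: "string" } },
];

// Fetched only when the model asks for a specific tool.
function loadSchema(name: string): object | undefined {
  return fullSchemas[name];
}

// Rough proxy for context cost: serialized size of what the model sees.
const upFront = JSON.stringify(contextTools).length;
const everything = JSON.stringify(Object.values(fullSchemas)).length;
console.log(`context cost: ${upFront} chars vs ${everything} chars`);
```

Even with only two tools in the catalog, the up-front payload is a fraction of the full set; with 100+ tools the gap is where the quoted savings come from.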

Trusted by teams at

ACME
STARTUP
TECHCORP
AI LABS
DEVCO

Simple, transparent pricing

Start free, scale as you grow. No hidden fees.

Free

$0/month

Perfect for side projects and experimentation

  • 1,000 tool executions/month
  • 10 code execution minutes
  • Community support
  • All 100+ tools
Get Started Free
Most Popular

Pro

$49/month

For production AI applications

  • 50,000 tool executions/month
  • 100 code execution minutes
  • Priority support
  • OAuth token management
  • Custom tool creation
  • Team collaboration
Start Pro Trial

Enterprise

Custom

For large-scale deployments

  • Unlimited executions
  • Unlimited code execution
  • Dedicated support
  • SLA guarantees
  • On-premise deployment
  • Custom integrations
Contact Sales

Ready to give your AI
superpowers?

Join thousands of developers building the next generation of AI applications.

Start Building Free