The tool execution platform for AI agents. One integration gives your LLM access to 100+ tools, code execution, and OAuth management.
```ts
import { RunTools } from "@runtools/sdk"

// Initialize with your API key
const rt = new RunTools({
  apiKey: process.env.RUNTOOLS_API_KEY
})

// Get tool_search for your LLM
const tools = await rt.getToolDefinitions()

// Execute any tool your LLM discovers
const result = await rt.execute({
  tool: "gmail_send",
  params: {
    to: "[email protected]",
    subject: "Hello from AI!",
    body: "This email was sent by my AI agent."
  }
})

console.log(result)
// { success: true, messageId: "..." }
```

```ts
import { RunTools } from "@runtools/sdk"

const rt = new RunTools({ apiKey: "rt_..." })

// Create an isolated sandbox
const sandbox = await rt.sandbox.create({
  runtime: "python:3.12",
  memory: "512mb",
  timeout: 300
})

// Execute code in isolation
const result = await sandbox.exec(`
import pandas as pd
df = pd.read_csv("data.csv")
print(df.describe())
`)

// Sandbox persists state between calls!
await sandbox.exec("df.head(10)")

// Pause to save costs, resume anytime
await sandbox.pause()  // Snapshot saved
await sandbox.resume() // Restored in <500ms
```

Spin up isolated Firecracker microVMs in milliseconds. Each sandbox has its own filesystem, network, and memory — completely isolated from others. Perfect for untrusted code execution.
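A tool call surfaced by the model has to be routed through the executor and its result handed back as a message. A minimal sketch of that loop, with the executor mocked so it runs standalone (the `ToolCall` and message shapes below are assumptions for illustration, not the SDK's real types):

```typescript
type ToolCall = { tool: string; params: Record<string, unknown> }
type ToolMessage = { role: "tool"; tool: string; content: string }

// Illustrative stand-in for rt.execute() so the sketch runs offline.
async function executeTool(call: ToolCall): Promise<unknown> {
  if (call.tool === "gmail_send") {
    return { success: true, messageId: "msg_123" } // mocked result
  }
  throw new Error(`unknown tool: ${call.tool}`)
}

// One turn of the loop: run each tool call the model emitted and
// package the results as messages to send back to the model.
async function runToolCalls(calls: ToolCall[]): Promise<ToolMessage[]> {
  const messages: ToolMessage[] = []
  for (const call of calls) {
    const result = await executeTool(call)
    messages.push({
      role: "tool",
      tool: call.tool,
      content: JSON.stringify(result)
    })
  }
  return messages
}
```

In production, `executeTool` would be a call to `rt.execute`, and the returned messages would be appended to the conversation before the next model turn.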
Give your AI the code_exec tool and watch it solve complex problems by writing code. The token savings are massive: instead of returning raw data to the LLM for processing, let it process the data directly in the sandbox.
Token comparison: ~50K tokens without code_exec vs. ~2K tokens with code_exec.
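A back-of-the-envelope sketch of where the savings come from, using the common ~4 characters per token rule of thumb (an approximation, not a real tokenizer; the synthetic rows stand in for sales.csv):

```typescript
// 50,000 synthetic sales rows standing in for sales.csv.
const rows = Array.from({ length: 50_000 }, (_, i) => ({
  product: `P${i % 100}`,
  revenue: (i * 37) % 1000, // deterministic stand-in values
}))

const estimateTokens = (s: string) => Math.ceil(s.length / 4)

// Without code_exec: the raw dataset goes into the model's context.
const rawTokens = estimateTokens(JSON.stringify(rows))

// With code_exec: the aggregation runs in the sandbox and only the
// top-products summary goes into context.
const totals = new Map<string, number>()
for (const r of rows) {
  totals.set(r.product, (totals.get(r.product) ?? 0) + r.revenue)
}
const top3 = Array.from(totals.entries())
  .sort((a, b) => b[1] - a[1])
  .slice(0, 3)
const summaryTokens = estimateTokens(JSON.stringify(top3))

console.log({ rawTokens, summaryTokens }) // raw is orders of magnitude larger
```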
```ts
// Your LLM discovers the code_exec tool
const response = await llm.chat({
  messages: [{
    role: "user",
    content: "Analyze sales.csv and find top products"
  }],
  tools: await rt.getToolDefinitions()
})

// LLM decides to use code_exec
// → Writes Python code to analyze the CSV
// → Runs it in an isolated sandbox
// → Returns just the insights, not raw data!

// Result from LLM:
// "Based on my analysis of 50,000 rows:
//  1. Product A: $2.4M revenue (↑23%)
//  2. Product B: $1.8M revenue (↑12%)
//  3. Product C: $1.2M revenue (↓5%)"

// Only 847 tokens used instead of 50,000! 🎉
```

100+ pre-built tools
Three simple steps to give your AI agent the ability to execute real-world actions:

1. Add our tool_search function to your LLM. One function that lets your AI discover every tool.
   `tools: ["tool_search"]`
2. The user says "send an email" → the LLM calls tool_search → it gets back gmail_send, outlook_send, etc.
   `tool_search("send email")`
3. The LLM calls the tool. We handle OAuth, rate limits, and retries. Results return instantly.
   `gmail_send({ to: "..." })`

Example flow
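The example flow can be traced end to end in a small runnable sketch, with both discovery and execution mocked (`discoverTools` and `callTool` are illustrative stand-ins, not the real SDK):

```typescript
// Step 1: only tool_search is registered with the LLM, so the
// context never holds all 100+ tool definitions.

// Step 2: the model calls tool_search; discovery is mocked here.
function discoverTools(query: string): string[] {
  return query.includes("email") ? ["gmail_send", "outlook_send"] : []
}

// Step 3: the model calls the discovered tool. In production the
// platform handles OAuth, rate limits, and retries; mocked here.
async function callTool(name: string, params: object) {
  return { success: true, tool: name, params }
}

async function exampleFlow() {
  const candidates = discoverTools("send email") // ["gmail_send", "outlook_send"]
  return callTool(candidates[0], { to: "[email protected]", subject: "Hello" })
}
```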
Everything you need to build powerful AI agents that can take action in the real world
Variables persist between calls. Firecracker snapshots save entire VM state.
Firecracker microVMs boot instantly. Pre-warmed pools for high traffic.
We store, refresh, and inject tokens. Users connect once, tools work forever.
100+ pre-built tools for Gmail, Slack, databases, and more. Or bring your own.
Each sandbox runs in its own microVM. Stronger isolation than containers.
Return tool schemas on-demand instead of stuffing them all in context.
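Demand-loaded schemas can be sketched as a tiny registry: only tool_search lives in context, and full definitions are returned when they match a query. The registry contents and schema shape below are illustrative, not the platform's actual format:

```typescript
type ToolSchema = {
  name: string
  description: string
  parameters: Record<string, string>
}

// Illustrative registry -- the platform hosts the real schemas.
const registry: ToolSchema[] = [
  { name: "gmail_send", description: "Send an email via Gmail",
    parameters: { to: "string", subject: "string", body: "string" } },
  { name: "outlook_send", description: "Send an email via Outlook",
    parameters: { to: "string", subject: "string", body: "string" } },
  { name: "slack_post", description: "Post a message to a Slack channel",
    parameters: { channel: "string", text: "string" } },
]

// tool_search: return only matching schemas instead of stuffing
// every definition into the model's context up front.
function toolSearch(query: string): ToolSchema[] {
  const words = query.toLowerCase().split(/\s+/)
  return registry.filter(t =>
    words.some(w =>
      t.name.includes(w) || t.description.toLowerCase().includes(w)
    )
  )
}
```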
Trusted by teams at
Start free, scale as you grow. No hidden fees.
Perfect for side projects and experimentation
For production AI applications
For large-scale deployments
Join thousands of developers building the next generation of AI applications.