You shouldn't have to build streaming infrastructure, tool orchestration, API layers, and client SDKs just to ship an agent. Reminix handles all of it.
serve() gives you
One function call turns your agent into a production system. Here's everything you get out of the box.
import { serve } from "reminix"

// Your agent — Vercel AI, LangChain, or plain code
const agent = createSupportBot({
  model: "gpt-4o",
  tools: [lookupOrder, createTicket],
  system: "You are a support agent for Acme Corp.",
})

// One line to production.
serve(agent, {
  name: "support-bot",
  tools: ["memory", "knowledge_base"],
})

/execute, /chat, /stream, /health — ready to use with auth, validation, and error handling.
Token-by-token streaming with backpressure and client reconnection. No WebSocket plumbing.
Memory, web search, knowledge base, KV storage — wired up and ready. No infra to provision.
Type-safe clients for all your agents. Import, connect, and call.
API keys, environment variables, per-key rate limits. Configured, not coded.
Request tracing, error rates, latency percentiles, token usage. Built in.
Each of these would take weeks to build, deploy, and maintain. Add them to any agent with a single string.
tools=["memory"]
User-scoped persistent memory across conversations. Your agent remembers what each user told it — without you building embedding pipelines, vector databases, or retrieval logic.
Saves: Pinecone/Weaviate setup, embedding pipeline, user isolation logic
tools=["knowledge_base"]
Upload documents; we handle chunking, embedding, indexing, and retrieval. Project-scoped RAG that works out of the box.
Saves: Document processing pipeline, chunking strategy, vector index management
tools=["web_search"]
Your agent can search the web and fetch page content. No API keys to rotate, no rate limits to handle, no HTML parsing to maintain.
Saves: Search API integration, content extraction, rate limit handling
tools=["kv_storage"]
Persistent key-value store for agent state, caches, user preferences, and structured data. No database to provision or maintain.
Saves: Database provisioning, schema management, connection pooling
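To make the kv_storage idea concrete, here is a minimal in-memory stand-in for the kind of get/set/delete interface such a tool exposes. The interface below is an assumption for illustration — the real tool persists server-side, with no store for you to run:

```typescript
// Hypothetical interface sketch — not the actual Reminix kv_storage API.
interface KVStore {
  get(key: string): Promise<string | undefined>
  set(key: string, value: string): Promise<void>
  delete(key: string): Promise<void>
}

// Local stand-in backed by a Map, useful for testing agent logic offline.
class InMemoryKV implements KVStore {
  private data = new Map<string, string>()
  async get(key: string) { return this.data.get(key) }
  async set(key: string, value: string) { this.data.set(key, value) }
  async delete(key: string) { this.data.delete(key) }
}

// An agent might cache a user preference between turns:
const kv = new InMemoryKV()
await kv.set("user:42:tone", "formal")
```

Swapping the in-memory stand-in for the hosted tool changes nothing in the agent's logic — only where the data lives.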
Write tools in Python or TypeScript. Deploy them alongside your agent. They work exactly like built-in tools — any agent can call them.
Mix built-in and your own tools freely. A single agent can use memory, web search, and your custom CRM tool in the same conversation.
import { tool } from "reminix"

export const lookupCustomer = tool({
  name: "lookup_customer",
  description: "Find customer by email",
  run: async ({ email }: { email: string }) => {
    const customer = await crm.find(email)
    return customer
  },
})

export const createTicket = tool({
  name: "create_ticket",
  description: "Create a support ticket",
  run: async ({ subject, body }) => {
    return await helpdesk.create({ subject, body })
  },
})

Already have an agent? Wrap it with our adapter. No rewrites needed.
Use any framework that produces a callable agent. Reminix handles the rest — APIs, streaming, tools, and deployment.
Pick the pattern that fits your problem. The platform handles tools, streaming, state, and infra either way.
Multi-turn conversations with managed state, message history, and user-scoped memory.
const agent = new ChatAgent({
  model: "claude-sonnet-4-5",
  tools: ["memory"],
})

serve(agent, { name: "support-bot" })

Support bots, assistants, research agents, onboarding
Stateless single-shot execution. Send input, get output. No conversation state to manage.
const agent = new TaskAgent({
  model: "claude-sonnet-4-5",
  tools: ["web_search"],
})

serve(agent, { name: "analyzer" })

Data processing, extraction, reports, code analysis
Multi-step orchestration with branching, approvals, and parallel execution.
const agent = new WorkflowAgent({
  steps: [enrich, score, route],
  tools: ["kv_storage"],
})

serve(agent, { name: "lead-router" })

ETL pipelines, multi-agent, approval flows, enrichment
Bring your own OAuth apps. We handle auth flows, token exchange, secure storage, and automatic refresh. You get a valid token and use it with any SDK.
1. Create an OAuth app in Google Console, Slack API, Notion, etc. Add the client ID and secret to Reminix.
2. Reminix generates the auth URL, handles the callback, exchanges the code for tokens, and stores them encrypted.
3. Call ctx.connections.token("google") and get a valid access token. Reminix refreshes expired tokens automatically.
// Get a fresh token — auto-refreshed
const token = await ctx.connections.token("google")

// Use Google's own SDK directly
const calendar = google.calendar({
  version: "v3",
  auth: token,
})

const events = await calendar.events.list({
  calendarId: "primary",
  timeMin: new Date().toISOString(),
})

Any OAuth 2.0 provider works. Add your client ID and secret — we handle the protocol.
The things you'd spend weeks building on Railway or Render.
Type-safe clients with streaming, error handling, and retries. One SDK for all your agents.
Execute, chat, stream, and health endpoints. Auth, validation, CORS, rate limiting — configured, not coded.
Secrets, deployments, versions, rollbacks, logs. The ops layer you don't want to build.
Every project runs in its own isolated microVM. Your agents, your container, your server.
The Reminix Runtime adapters are open source (Apache 2.0). Read the code, understand what runs your agents, contribute if you want.
Inspect, fork, contribute
Open source adapters for both
See exactly how your code connects to the platform