Agent runtime for building AI agents that integrate with the Reminix API.
```bash
npm install @reminix/runtime
# or
pnpm add @reminix/runtime
```
```ts
import { Agent, serve } from '@reminix/runtime';

// Create an agent with optional metadata for dashboard display
const agent = new Agent('my-agent', {
  metadata: {
    framework: 'custom',
    model: 'gpt-4',
    description: 'My first agent',
  },
});

agent.onInvoke(async (input) => {
  return { output: `Processed: ${input.message}` };
});

agent.onChat(async (messages) => {
  const lastMessage = messages[messages.length - 1];
  return {
    message: { role: 'assistant', content: `Reply: ${lastMessage.content}` },
  };
});

serve(agent, { port: 8080 });
```
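To verify the server is up, you can call the invoke endpoint from another process. A minimal sketch, assuming Node 18+ (for the global `fetch`) and the quickstart server above running on port 8080:

```ts
// Smoke-test the quickstart agent over HTTP.
const res = await fetch('http://localhost:8080/agent/my-agent/invoke', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ input: { message: 'Hello' } }),
});

console.log(await res.json()); // expected: { output: 'Processed: Hello' }
```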
Create an agent with a unique name and optional metadata:
```ts
import { Agent } from '@reminix/runtime';

const agent = new Agent('my-agent');

// With metadata (optional)
const agentWithMeta = new Agent('my-agent', {
  metadata: {
    framework: 'langchain',
    model: 'gpt-4',
    description: 'Customer support agent',
  },
});
```
The metadata is available via the `/_discover` endpoint and displayed in the Reminix dashboard.
### `agent.onInvoke(handler)`

Register an invoke handler:

```ts
agent.onInvoke(async (input, ctx) => {
  // input is Record<string, unknown>
  // ctx has: conversationId, userId, custom
  return { output: 'result' };
});
```
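Since `input` arrives as `Record<string, unknown>`, handlers typically narrow it before use. A sketch, assuming a hypothetical `query` field in the request payload:

```ts
agent.onInvoke(async (input, ctx) => {
  // Narrow the untyped input before using it ("query" is an assumed field).
  const query = typeof input.query === 'string' ? input.query : null;
  if (!query) {
    return { output: { error: 'expected a string "query" field' } };
  }
  return { output: { answer: `You asked: ${query}`, user: ctx?.userId } };
});
```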
### `agent.onChat(handler)`

Register a chat handler:

```ts
agent.onChat(async (messages, ctx) => {
  // messages is ChatMessage[]
  // ctx has: conversationId, userId, custom
  return {
    message: { role: 'assistant', content: `Hello, user ${ctx?.userId}!` },
  };
});
```
The `ctx` parameter is optional and contains request context:

- `conversationId`: Track multi-turn conversations
- `userId`: Identify the user
- `custom`: Any additional custom fields
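The `conversationId` makes it straightforward to keep per-conversation state. A minimal sketch, assuming an in-memory `Map` (a real agent would likely use a persistent store):

```ts
// Count turns per conversation, keyed by ctx.conversationId.
const turnCounts = new Map<string, number>();

agent.onChat(async (messages, ctx) => {
  const id = ctx?.conversationId ?? 'anonymous';
  const turns = (turnCounts.get(id) ?? 0) + 1;
  turnCounts.set(id, turns);
  return {
    message: { role: 'assistant', content: `This is turn ${turns}.` },
  };
});
```

Use separate handlers for streaming responses: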
```ts
agent.onInvokeStream(async function* (input, ctx) {
  yield { chunk: 'Hello ' };
  yield { chunk: 'World!' };
});

agent.onChatStream(async function* (messages, ctx) {
  yield { chunk: 'Streaming ' };
  yield { chunk: 'response...' };
});
```
The server routes to the appropriate handler based on the client's `stream` flag:

- `stream: false` → uses `onInvoke` / `onChat`
- `stream: true` → uses `onInvokeStream` / `onChatStream`

Start a server hosting your agents:
```ts
import { serve } from '@reminix/runtime';

// Single agent
serve(agent, { host: '0.0.0.0', port: 8080 });

// Multiple agents
serve([agent1, agent2], { port: 8080 });
```
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Runtime health status |
| GET | `/_discover` | Runtime and agent discovery |
| GET | `/agent/{name}/health` | Agent capabilities |
| POST | `/agent/{name}/invoke` | Invoke agent |
| POST | `/agent/{name}/chat` | Chat with agent |
The `/_discover` endpoint returns runtime and agent information:
```bash
curl http://localhost:8080/_discover
```
```json
{
  "runtime": {
    "version": "0.6.0",
    "language": "typescript",
    "framework": "hono"
  },
  "agents": [
    {
      "name": "my-agent",
      "invoke": true,
      "chat": true,
      "metadata": {
        "framework": "langchain",
        "model": "gpt-4"
      }
    }
  ]
}
```
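The discovery payload can drive simple tooling. For example, a sketch that lists each agent's capabilities (field names taken from the response above):

```ts
// List agents exposed by a local runtime.
const discovery = await fetch('http://localhost:8080/_discover').then((r) => r.json());

for (const a of discovery.agents) {
  console.log(`${a.name}: invoke=${a.invoke}, chat=${a.chat}`);
}
```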
Invoke request body:

```ts
{
  input: Record<string, unknown>;
  stream?: boolean; // default: false
}
```

Invoke response:

```ts
{
  output: unknown;
}
```

Chat request body:

```ts
{
  messages: Array<{
    role: 'system' | 'user' | 'assistant' | 'tool';
    content: string | unknown[] | null;
    name?: string;
    tool_calls?: ToolCall[];
    tool_call_id?: string;
  }>;
  stream?: boolean; // default: false
}
```

Chat response:

```ts
{
  message: {
    role: 'assistant';
    content: string | unknown[] | null;
    tool_calls?: ToolCall[];
  };
}
```
When `stream: true`, responses are Server-Sent Events:
```
data: {"chunk": "Hello "}
data: {"chunk": "World!"}
data: [DONE]
```
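Clients can consume the stream with any SSE-capable reader. A minimal sketch using the standard `fetch` streaming API, assuming an agent that has registered an `onChatStream` handler (like the streaming example above):

```ts
// POST a chat request with stream: true and print chunks as they arrive.
const res = await fetch('http://localhost:8080/agent/my-agent/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true,
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();

outer: while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  for (const line of decoder.decode(value, { stream: true }).split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') break outer; // end-of-stream sentinel
    process.stdout.write(JSON.parse(payload).chunk);
  }
}
```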
For wrapping existing AI agents from popular frameworks (OpenAI, Anthropic, LangChain, LangGraph, Vercel AI), use the `@reminix/adapters` package:

```bash
npm install @reminix/adapters
```

See the `@reminix/adapters` documentation for details.
```ts
import OpenAI from 'openai';
import { fromOpenAI } from '@reminix/adapters/openai';
import { serve } from '@reminix/runtime';

const client = new OpenAI();

const agent = fromOpenAI(client, {
  name: 'gpt4-agent',
  model: 'gpt-4o',
  system: 'You are a helpful assistant.',
});

serve(agent);
```
The runtime maps failures to standard HTTP status codes:

- `404` - Agent not found
- `400` - Invalid request (missing input/messages, validation errors)
- `501` - Handler not implemented
- `502` - Agent handler returned an invalid response
- `500` - Internal server error

## License

MIT