Introduction
Ship AI agents to production. We handle the routing, tools, and auth so you can focus on building.
What is this?
AgenticWork is infrastructure for AI agents. Instead of wiring up LLM providers, building tool integrations, and figuring out auth flows yourself, you point your code at our API and get all of it out of the box.
The short version:
Connect to any LLM (Claude, GPT-4, Gemini, Llama, etc.) through one API
Use 150+ pre-built tools for web search, cloud ops, code execution, diagrams
Let the Intelligence Slider pick the right model for each request
Enterprise auth with Azure AD, RBAC, audit logs
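The sketch below shows what that looks like from application code, assuming a TypeScript SDK. The package name, client class, and method signature are placeholders for illustration, not the actual SDK surface.

```typescript
// Minimal sketch: one chat call through the platform's unified API instead of a
// provider-specific SDK. The package name, client class, and options shown here
// are assumptions for illustration.
import { AgenticWorkClient } from "@agenticwork/sdk"; // hypothetical package name

const client = new AgenticWorkClient({
  apiKey: process.env.AGENTICWORK_API_KEY!,
});

// The platform decides which provider and model handle the request; the calling
// code only deals with messages and the reply.
const reply = await client.chat({
  messages: [{ role: "user", content: "Summarize today's deploy logs." }],
});

console.log(reply.content);
```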
Architecture
Get started
Using the SDK
Using the CLI
Direct API calls
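Whichever entry point you choose, the first request is a plain HTTPS call under the hood. The sketch below shows the direct-API route using fetch; the base URL, endpoint path, request body, and response shape are assumptions, so treat it as the shape of the call rather than the documented contract.

```typescript
// Rough sketch of a direct API call. The base URL, path, payload fields, and
// response shape are assumptions, not the documented contract.
const response = await fetch("https://api.agenticwork.example/v1/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.AGENTICWORK_API_KEY}`,
  },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Which tools can you call right now?" }],
  }),
});

const data = await response.json();
console.log(data);
```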
The Intelligence Slider
Most AI platforms make you pick a model upfront. We do something different.
The Intelligence Slider (0-100%) routes requests to the right model based on complexity. Simple formatting task? It goes to Haiku. Complex architecture question? Opus handles it. You set a budget preference, and the system optimizes for cost vs. capability automatically.
Read more about the Intelligence Slider
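In client code, the slider is just another request parameter. The sketch below reuses the hypothetical client from the earlier SDK example; treating `intelligence` as a per-request numeric option is an assumption, and the field name may differ.

```typescript
// Sketch: per-request intelligence settings (0-100). Low values favor cheap,
// fast models; high values favor the most capable ones. The field name is an
// assumption.
const quickFix = await client.chat({
  intelligence: 20, // simple formatting task, routed to a small model
  messages: [{ role: "user", content: "Convert this JSON config to YAML." }],
});

const designReview = await client.chat({
  intelligence: 90, // complex architecture question, routed to a frontier model
  messages: [
    { role: "user", content: "Critique this event-driven design for a payments system." },
  ],
});
```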
What's included
AgenticWork API - Routes requests, runs the chat pipeline, orchestrates tools
Web UI - Chat interface with admin controls
AgentiCode CLI - Terminal AI assistant (think Claude Code for your infra)
SDK - TypeScript library for building on top of the platform
MCP Proxy - Manages 13 tool servers with 150+ tools
LLM Providers
We support direct connections to:
Anthropic - Claude 3.5 Sonnet, Opus, Haiku
OpenAI - GPT-4o, GPT-4 Turbo, o1
Google - Gemini 2.0 Flash, 1.5 Pro
AWS Bedrock - Claude, Llama, Titan
Ollama - Any local model
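If a request has to go to a specific provider rather than through slider-based routing, you would override the routing per request. The sketch below assumes `provider` and `model` fields on the same hypothetical client; the real option names and model IDs may differ.

```typescript
// Sketch: pinning a provider and model explicitly instead of letting the
// Intelligence Slider choose. Field names and the model ID are assumptions.
const pinned = await client.chat({
  provider: "anthropic",
  model: "claude-3-5-sonnet",
  messages: [{ role: "user", content: "Draft a runbook for rotating API keys." }],
});
```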
Enterprise stuff
If you're running this in prod, you probably care about:
Azure AD SSO - Your team logs in with their existing Microsoft accounts
Role-Based Access - Control who can do what
Audit Logs - Track everything for compliance
Rate Limiting - Prevent runaway costs
Multi-Tenant - Isolated workspaces per team
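For the Azure AD piece, a service or script would typically exchange its identity for an access token and send it as a bearer token. The sketch below uses the real @azure/identity library; the token scope and the way this platform consumes the Authorization header are assumptions.

```typescript
// Sketch: authenticating to the API with an Azure AD token via @azure/identity.
// DefaultAzureCredential and getToken are real library APIs; the scope value and
// the bearer-token usage against this platform are assumptions.
import { DefaultAzureCredential } from "@azure/identity";

const credential = new DefaultAzureCredential();
const accessToken = await credential.getToken("api://agenticwork/.default"); // hypothetical scope

const res = await fetch("https://api.agenticwork.example/v1/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${accessToken.token}`,
  },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Who has admin access in this workspace?" }],
  }),
});

console.log(await res.json());
```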