Atla Insights supports a wide range of LLM providers and AI agent frameworks. All instrumentation methods share a common interface for easy integration.
## LLM Providers
We currently support the following LLM providers:
| Provider | Instrumentation Function | Notes |
| --- | --- | --- |
| Anthropic | `instrument_anthropic` | Also supports the `AnthropicBedrock` client from Anthropic |
| Bedrock | `instrument_bedrock` | |
| Google GenAI | `instrument_google_genai` | E.g., Gemini |
| LiteLLM | `instrument_litellm` | Supports all models available in the LiteLLM framework |
| OpenAI | `instrument_openai` | Includes Azure OpenAI |
### Basic Provider Usage
```python
from atla_insights import configure, instrument_openai
from openai import OpenAI

configure(token="<MY_ATLA_INSIGHTS_TOKEN>")
instrument_openai()

client = OpenAI()
# All OpenAI calls are now instrumented.
```
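Once instrumented, you use the client exactly as before; for example (the model name below is illustrative):

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```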
By default, instrumented LLM calls are treated independently of one another. To logically group LLM calls into a single trace, use the `@instrument` decorator.
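A minimal sketch of trace grouping, assuming `instrument` is exported from the top-level `atla_insights` package like the other helpers shown here and accepts an optional name:

```python
from atla_insights import configure, instrument, instrument_openai
from openai import OpenAI

configure(token="<MY_ATLA_INSIGHTS_TOKEN>")
instrument_openai()
client = OpenAI()

@instrument("answer-question")  # both calls below are grouped into one trace
def answer_question(question: str) -> str:
    draft = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    review = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": f"Improve this answer: {draft.choices[0].message.content}",
            }
        ],
    )
    return review.choices[0].message.content
```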
## Agent Frameworks
We currently support the following frameworks:
| Framework | Instrumentation Function | Notes |
| --- | --- | --- |
| Agno | `instrument_agno` | Supported with `openai`, `google-genai`, `litellm`, and/or `anthropic` models |
| BAML | `instrument_baml` | Supported with `openai`, `anthropic`, or `bedrock` models |
| Claude Agent SDK | `instrument_claude_agent_sdk` | |
| Claude Code SDK | `instrument_claude_code_sdk` | |
| CrewAI | `instrument_crewai` | |
| Google ADK | `instrument_google_adk` | |
| LangChain | `instrument_langchain` | Also covers LangChain-based libraries such as LangGraph |
| MCP | `instrument_mcp` | Only includes context propagation; you will need to instrument the model calling the MCP server separately (see the sketch after this table) |
| OpenAI Agents | `instrument_openai_agents` | Supported with `openai`, `google-genai`, `litellm`, and/or `anthropic` models |
| Pydantic AI | `instrument_pydantic_ai` | |
| Smolagents | `instrument_smolagents` | Supported with `openai`, `google-genai`, `litellm`, and/or `anthropic` models |
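For MCP specifically, a minimal sketch of the dual setup described in the table above, with the provider choice (`instrument_openai`) purely illustrative:

```python
from atla_insights import configure, instrument_mcp, instrument_openai

configure(token="<MY_ATLA_INSIGHTS_TOKEN>")

# Propagate trace context across the MCP client/server boundary.
instrument_mcp()

# Separately instrument the model that drives the MCP client.
instrument_openai()
```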
### Framework + Provider Instrumentation
For Agno, BAML, OpenAI Agents and Smolagents in Python, you will need to instrument both the framework and the underlying LLM provider(s).
```python
from atla_insights import configure, instrument_agno

configure(token="<MY_ATLA_INSIGHTS_TOKEN>")

# If you are using a single LLM provider (e.g., via `OpenAIChat`):
instrument_agno("openai")

# If you are using multiple LLM providers (e.g., `OpenAIChat` and `Claude`):
instrument_agno(["anthropic", "openai"])
```
## Can’t find your framework or LLM provider?
If you are using a framework or LLM provider without native support, you can manually record LLM generations via our lower-level SDK.
```python
from atla_insights.span import start_as_current_span

with start_as_current_span("my-llm-generation") as span:
    # Run the LLM generation via an unsupported framework.
    input_messages = [{"role": "user", "content": "What is the capital of France?"}]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_capital",
                "parameters": {
                    "type": "object",
                    "properties": {"country": {"type": "string"}},
                },
            },
        }
    ]
    result = my_client.chat.completions.create(messages=input_messages, tools=tools)

    # Manually record the LLM generation on the span.
    span.record_generation(
        input_messages=input_messages,
        output_messages=[choice.message for choice in result.choices],
        tools=tools,
    )
```
Note that all arguments are expected to be in an OpenAI-compatible format. See the relevant OpenAI documentation for more details.
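For illustration, an OpenAI-compatible assistant message that requests a tool call has this shape (the id and arguments below are hypothetical):

```python
# OpenAI-compatible assistant message containing a tool call.
output_message = {
    "role": "assistant",
    "content": None,  # content is null when the model responds with a tool call
    "tool_calls": [
        {
            "id": "call_abc123",  # hypothetical tool-call id
            "type": "function",
            "function": {
                "name": "get_capital",
                "arguments": '{"country": "France"}',  # JSON-encoded string
            },
        }
    ],
}
```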
Feel free to let us know which frameworks and LLM providers you would like to see supported!
Schedule a call with the Atla team