<p align="center">
<h1 align="center">AgentLens</h1>
<p align="center">Agent observability that traces <strong>decisions</strong>, not just API calls.</p>
<p align="center">See <em>why</em> your AI agents chose what they chose.</p>
</p>
<p align="center">
<a href="https://pypi.org/project/vectry-agentlens/"><img src="https://img.shields.io/pypi/v/vectry-agentlens?color=blue" alt="PyPI"></a>
<a href="https://gitea.repi.fun/repi/agentlens/src/branch/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="License"></a>
<a href="https://agentlens.vectry.tech"><img src="https://img.shields.io/badge/demo-live-brightgreen" alt="Demo"></a>
</p>
---
## The Problem
Existing observability tools show you _what_ LLM calls were made. AgentLens shows you _why_ your agent made each decision along the way -- which tool it picked, what alternatives it rejected, and the reasoning behind every choice.
## Quick Start
```bash
pip install vectry-agentlens
```
```python
import agentlens

agentlens.init(api_key="your-key", endpoint="https://agentlens.vectry.tech")

with agentlens.trace("my-agent-task", tags=["production"]):
    # Your agent logic here...
    agentlens.log_decision(
        type="TOOL_SELECTION",
        chosen={"name": "search_web", "confidence": 0.92},
        alternatives=[{"name": "search_docs", "reason_rejected": "query too broad"}],
        reasoning="User query requires real-time data not in local docs",
    )

agentlens.shutdown()
```
Open `https://agentlens.vectry.tech/dashboard` to see your traces.
## Features
- **Decision Tracing** -- Log every decision point with reasoning, alternatives, and confidence scores
- **OpenAI Integration** -- Auto-instrument OpenAI calls with one line: `wrap_openai(client)`
- **LangChain Integration** -- Drop-in callback handler for LangChain agents
- **Nested Traces** -- Multi-agent workflows with parent-child span relationships
- **Real-time Dashboard** -- SSE-powered live trace streaming with filtering and search
- **Decision Tree Viz** -- Interactive React Flow visualization of agent decision paths
- **Analytics** -- Token usage, cost tracking, duration timelines per trace
- **Self-Hostable** -- Docker Compose deployment, bring your own Postgres + Redis
## Architecture
```
SDK (Python)                  API (Next.js)             Dashboard (React)
------------                  -------------             -----------------
agentlens.trace()       --->  POST /api/traces    --->  Real-time SSE stream
agentlens.log_decision()      Prisma + Postgres         Decision tree viz
wrap_openai(client)           Redis pub/sub             Analytics & filters
```
## Integrations
### OpenAI
```python
import agentlens
import openai
from agentlens.integrations.openai import wrap_openai

client = openai.OpenAI()
wrap_openai(client)  # Auto-traces all completions

with agentlens.trace("openai-task"):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
```
### LangChain
```python
from agentlens.integrations.langchain import AgentLensCallbackHandler

handler = AgentLensCallbackHandler()

# `agent` is any existing LangChain agent; pass the handler via `callbacks`
agent.run("Do something", callbacks=[handler])
```
### Custom Agents
```python
with agentlens.trace("planner"):
    agentlens.log_decision(
        type="ROUTING",
        chosen={"name": "research_agent"},
        alternatives=[{"name": "writer_agent"}],
        reasoning="Task requires data gathering first",
    )

    with agentlens.trace("researcher"):
        # Nested trace creates child span automatically
        agentlens.log_decision(
            type="TOOL_SELECTION",
            chosen={"name": "web_search"},
            alternatives=[{"name": "database_query"}],
            reasoning="Need real-time information",
        )
```
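To see why nested `with` blocks can produce parent-child spans, here is a minimal, self-contained sketch of the pattern using `contextvars`. This is illustrative only, not AgentLens's actual implementation; the `trace` helper and `spans` list are hypothetical stand-ins.

```python
# Minimal sketch of nested span tracking via contextvars.
# Illustrative only -- not AgentLens's actual implementation.
import contextvars
import uuid
from contextlib import contextmanager

_current_span = contextvars.ContextVar("current_span", default=None)
spans = []  # collected span records

@contextmanager
def trace(name):
    parent = _current_span.get()
    span = {
        "id": str(uuid.uuid4()),
        "name": name,
        "parent_id": parent["id"] if parent else None,
    }
    spans.append(span)
    token = _current_span.set(span)
    try:
        yield span
    finally:
        _current_span.reset(token)  # restore parent on exit

with trace("planner") as planner:
    with trace("researcher") as researcher:
        pass

assert researcher["parent_id"] == planner["id"]  # child links to parent
```

Because the context variable is reset on exit, sibling traces opened after the inner block correctly attach to the outer span again.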
## Decision Types
| Type | Use Case |
|------|----------|
| `TOOL_SELECTION` | Agent chose which tool/function to call |
| `ROUTING` | Agent decided which sub-agent or path to take |
| `PLANNING` | Agent formulated a multi-step plan |
| `RETRY` | Agent decided to retry a failed operation |
| `ESCALATION` | Agent escalated to human or higher-level agent |
| `MEMORY_RETRIEVAL` | Agent chose what context to retrieve |
| `CUSTOM` | Any other decision type |
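As a sketch of how these types combine with the fields shown in Quick Start, a `RETRY` decision record might look like the plain dict below. The field names mirror `log_decision`'s keyword arguments from the snippets above; the `attempt` key and the specific values are hypothetical.

```python
# Sketch of a RETRY decision payload; "attempt" is a hypothetical field,
# the other keys mirror log_decision's keyword arguments.
retry_decision = {
    "type": "RETRY",
    "chosen": {"name": "retry_with_backoff", "attempt": 2},
    "alternatives": [{"name": "fail_fast", "reason_rejected": "error looks transient"}],
    "reasoning": "Upstream returned HTTP 503; a backoff retry is likely to succeed",
}

VALID_TYPES = {
    "TOOL_SELECTION", "ROUTING", "PLANNING", "RETRY",
    "ESCALATION", "MEMORY_RETRIEVAL", "CUSTOM",
}
assert retry_decision["type"] in VALID_TYPES
```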
## Self-Hosting
```bash
git clone https://gitea.repi.fun/repi/agentlens.git
cd agentlens
docker compose up -d
```
The dashboard will be available at `http://localhost:4200`.
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `DATABASE_URL` | `postgresql://agentlens:agentlens@postgres:5432/agentlens` | PostgreSQL connection string |
| `REDIS_URL` | `redis://redis:6379` | Redis connection string |
| `NODE_ENV` | `production` | Node environment |
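For running the web app outside Docker Compose, the same variables can live in a `.env` file. The sketch below assumes Postgres and Redis on `localhost` instead of the Compose service names:

```bash
# .env for a non-Docker setup (localhost hosts are an assumption)
DATABASE_URL=postgresql://agentlens:agentlens@localhost:5432/agentlens
REDIS_URL=redis://localhost:6379
NODE_ENV=development
```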
## Project Structure
```
agentlens/
├── apps/web/            # Next.js 15 dashboard + API
├── packages/database/   # Prisma schema + client
├── packages/sdk-python/ # Python SDK (PyPI: vectry-agentlens)
├── examples/            # Example agent scripts
└── docker-compose.yml   # Production deployment
```
## SDK Reference
See the full [Python SDK documentation](packages/sdk-python/README.md).
## Examples
See the [examples directory](examples/) for runnable agent scripts:
- `basic_agent.py` -- Minimal AgentLens usage with decision logging
- `openai_agent.py` -- OpenAI wrapper auto-instrumentation
- `multi_agent.py` -- Nested multi-agent workflows
- `customer_support_agent.py` -- Realistic support bot with routing and escalation
## Contributing
AgentLens is open source under the MIT license. Contributions welcome.
```bash
# Development setup
npm install
npx turbo dev # Start web app in dev mode
cd packages/sdk-python
pip install -e ".[dev]" # Install SDK in dev mode
pytest # Run SDK tests
```
## License
MIT