feat: Day 13 - root README, example agent scripts, and demo seed script

README.md (181 changed lines):

<div align="center">
  <h1>AgentLens</h1>
  <p>Agent observability that traces <strong>decisions</strong>, not just API calls.</p>
  <p>See <em>why</em> your AI agents chose what they chose.</p>
</div>

<p align="center">
  <a href="https://pypi.org/project/vectry-agentlens/"><img src="https://img.shields.io/pypi/v/vectry-agentlens?color=blue" alt="PyPI"></a>
  <a href="https://gitea.repi.fun/repi/agentlens/src/branch/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="License"></a>
  <a href="https://agentlens.vectry.tech"><img src="https://img.shields.io/badge/demo-live-brightgreen" alt="Demo"></a>
</p>

---

## The Problem

Existing observability tools show you _what_ LLM calls were made. AgentLens shows you _why_ your agent made each decision along the way -- which tool it picked, what alternatives it rejected, and the reasoning behind every choice.

## Quick Start

```bash
pip install vectry-agentlens
```

```python
import agentlens

agentlens.init(api_key="your-key", endpoint="https://agentlens.vectry.tech")

with agentlens.trace("my-agent-task", tags=["production"]):
    # Your agent logic here...
    agentlens.log_decision(
        type="TOOL_SELECTION",
        chosen={"name": "search_web", "confidence": 0.92},
        alternatives=[{"name": "search_docs", "reason_rejected": "query too broad"}],
        reasoning="User query requires real-time data not in local docs"
    )

agentlens.shutdown()
```

Open `https://agentlens.vectry.tech/dashboard` to see your traces.

## Features

- **Decision Tracing** -- Log every decision point with reasoning, alternatives, and confidence scores
- **OpenAI Integration** -- Auto-instrument OpenAI calls with one line: `wrap_openai(client)`
- **LangChain Integration** -- Drop-in callback handler for LangChain agents
- **Nested Traces** -- Multi-agent workflows with parent-child span relationships
- **Real-time Dashboard** -- SSE-powered live trace streaming with filtering and search
- **Decision Tree Viz** -- Interactive React Flow visualization of agent decision paths
- **Analytics** -- Token usage, cost tracking, duration timelines per trace
- **Self-Hostable** -- Docker Compose deployment, bring your own Postgres + Redis

## Architecture

```
SDK (Python)                API (Next.js)              Dashboard (React)

agentlens.trace()  ------>  POST /api/traces  ------>  Real-time SSE stream
agentlens.log_decision()    Prisma + Postgres          Decision tree viz
wrap_openai(client)         Redis pub/sub              Analytics & filters
```

## Integrations

### OpenAI

```python
import agentlens
import openai
from agentlens.integrations.openai import wrap_openai

client = openai.OpenAI()
wrap_openai(client)  # Auto-traces all completions

with agentlens.trace("openai-task"):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
```

### LangChain

```python
from agentlens.integrations.langchain import AgentLensCallbackHandler

handler = AgentLensCallbackHandler()
agent.run("Do something", callbacks=[handler])
```

### Custom Agents

```python
with agentlens.trace("planner"):
    agentlens.log_decision(
        type="ROUTING",
        chosen={"name": "research_agent"},
        alternatives=[{"name": "writer_agent"}],
        reasoning="Task requires data gathering first"
    )

    with agentlens.trace("researcher"):
        # Nested trace creates child span automatically
        agentlens.log_decision(
            type="TOOL_SELECTION",
            chosen={"name": "web_search"},
            alternatives=[{"name": "database_query"}],
            reasoning="Need real-time information"
        )
```

## Decision Types

| Type | Use Case |
|------|----------|
| `TOOL_SELECTION` | Agent chose which tool/function to call |
| `ROUTING` | Agent decided which sub-agent or path to take |
| `PLANNING` | Agent formulated a multi-step plan |
| `RETRY` | Agent decided to retry a failed operation |
| `ESCALATION` | Agent escalated to human or higher-level agent |
| `MEMORY_RETRIEVAL` | Agent chose what context to retrieve |
| `CUSTOM` | Any other decision type |
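
Each type is just a label on the same decision record. As an illustration (a sketch only: `make_decision` is a hypothetical helper, not part of the SDK; the field names mirror the `log_decision()` examples above), a `RETRY` decision might be assembled like this:

```python
# Hypothetical helper to assemble and sanity-check a decision record
# before handing it to agentlens.log_decision(**record).
DECISION_TYPES = {
    "TOOL_SELECTION", "ROUTING", "PLANNING", "RETRY",
    "ESCALATION", "MEMORY_RETRIEVAL", "CUSTOM",
}

def make_decision(type: str, chosen: dict, alternatives: list, reasoning: str) -> dict:
    """Build a decision record, rejecting unknown decision types."""
    if type not in DECISION_TYPES:
        raise ValueError(f"unknown decision type: {type}")
    return {
        "type": type,
        "chosen": chosen,
        "alternatives": alternatives,
        "reasoning": reasoning,
    }

retry = make_decision(
    type="RETRY",
    chosen={"name": "retry_with_backoff", "confidence": 0.8, "params": {"attempt": 2}},
    alternatives=[
        {"name": "give_up", "confidence": 0.2, "reason_rejected": "Error looks transient"},
    ],
    reasoning="Upstream returned 503; a second attempt usually succeeds.",
)
```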

## Self-Hosting

```bash
git clone https://gitea.repi.fun/repi/agentlens.git
cd agentlens
docker compose up -d
```

The dashboard will be available at `http://localhost:4200`.

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `DATABASE_URL` | `postgresql://agentlens:agentlens@postgres:5432/agentlens` | PostgreSQL connection string |
| `REDIS_URL` | `redis://redis:6379` | Redis connection string |
| `NODE_ENV` | `production` | Node environment |
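
For example, to bring your own Postgres and Redis you could drop a `.env` file next to `docker-compose.yml` (a sketch: the hostnames and credentials below are placeholders, and it assumes your Compose file interpolates these variables):

```env
# .env -- example overrides for external services (placeholder values)
DATABASE_URL=postgresql://agentlens:s3cret@db.internal:5432/agentlens
REDIS_URL=redis://cache.internal:6379
NODE_ENV=production
```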

## Project Structure

```
agentlens/
  apps/web/             # Next.js 15 dashboard + API
  packages/database/    # Prisma schema + client
  packages/sdk-python/  # Python SDK (PyPI: vectry-agentlens)
  examples/             # Example agent scripts
  docker-compose.yml    # Production deployment
```

## SDK Reference

See the full [Python SDK documentation](packages/sdk-python/README.md).

## Examples

See the [examples directory](examples/) for runnable agent scripts:

- `basic_agent.py` -- Minimal AgentLens usage with decision logging
- `openai_agent.py` -- OpenAI wrapper auto-instrumentation
- `multi_agent.py` -- Nested multi-agent workflows
- `customer_support_agent.py` -- Realistic support bot with routing and escalation

## Contributing

AgentLens is open source under the MIT license. Contributions welcome.

```bash
# Development setup
npm install
npx turbo dev            # Start web app in dev mode

cd packages/sdk-python
pip install -e ".[dev]"  # Install SDK in dev mode
pytest                   # Run SDK tests
```

## License

MIT

---

examples/README.md (new file, 48 lines):

# AgentLens Examples

Example scripts demonstrating the AgentLens SDK for tracing and observing AI agent behavior.

## Setup

```bash
pip install vectry-agentlens
```

## Examples

| Script | Description |
|--------|-------------|
| `basic_agent.py` | Simplest usage — init, trace, log decisions, shutdown |
| `openai_agent.py` | OpenAI integration — wrap the client for automatic LLM call tracing |
| `multi_agent.py` | Nested traces — planner delegates to researcher, writer, and editor sub-agents |
| `customer_support_agent.py` | Realistic support workflow — classification, routing, escalation, error handling |
| `seed_demo_traces.py` | Seeds the live dashboard with 11 realistic traces via direct HTTP POST (no SDK) |

## Running

Each SDK example follows the same pattern:

```bash
# Set your API key and endpoint
export AGENTLENS_API_KEY="your-key"

# Run any example
python examples/basic_agent.py
```

For the OpenAI example, you also need:

```bash
pip install openai
export OPENAI_API_KEY="sk-..."
```

### Seed Script

The seed script sends pre-built traces directly to the API — no SDK or OpenAI key needed:

```bash
python examples/seed_demo_traces.py
```

This populates the dashboard with varied traces (COMPLETED, ERROR, RUNNING) across multiple agent types.
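
Under the hood, "direct HTTP POST" just means a JSON request against the endpoint shown in the root README's architecture overview (`POST /api/traces`). A minimal sketch of building such a request in Python follows; the payload fields are illustrative only (the real schema is defined by the API, not by this example), and actually sending it requires a running AgentLens instance:

```python
import json
import urllib.request

# Illustrative trace payload -- field names are assumptions,
# not the API's actual schema.
trace = {
    "name": "seeded-demo-trace",
    "status": "COMPLETED",
    "tags": ["demo"],
}

req = urllib.request.Request(
    "http://localhost:4200/api/traces",  # endpoint from the architecture diagram
    data=json.dumps(trace).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer your-key"},
    method="POST",
)
# urllib.request.urlopen(req) would send it -- needs a running AgentLens API.
```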

---

examples/basic_agent.py (new file, 92 lines):

"""
AgentLens Basic Example — Simplest possible usage.

Demonstrates:
- Initializing the SDK
- Creating a trace with tags
- Logging decision points (TOOL_SELECTION, PLANNING)
- Graceful shutdown

Usage:
    pip install vectry-agentlens
    python basic_agent.py
"""

import agentlens
import time

# 1. Initialize AgentLens
agentlens.init(
    api_key="your-api-key-here",
    endpoint="http://localhost:4200",
)

# 2. Run an agent task inside a trace context
with agentlens.trace("research-task", tags=["demo", "basic"]):
    # Simulate: agent decides which tool to use for research
    agentlens.log_decision(
        type="TOOL_SELECTION",
        chosen={
            "name": "search_web",
            "confidence": 0.85,
            "params": {"query": "latest AI research papers 2025"},
        },
        alternatives=[
            {
                "name": "search_docs",
                "confidence": 0.6,
                "reason_rejected": "Internal docs unlikely to have latest papers",
            },
            {
                "name": "search_arxiv",
                "confidence": 0.78,
                "reason_rejected": "Web search covers arXiv plus other sources",
            },
        ],
        reasoning="Web search gives the broadest coverage for recent AI papers.",
    )

    time.sleep(0.3)  # Simulate tool execution time

    # Simulate: agent plans next steps after getting search results
    agentlens.log_decision(
        type="PLANNING",
        chosen={
            "name": "summarize_top_3",
            "confidence": 0.92,
            "params": {"max_papers": 3, "format": "bullet_points"},
        },
        alternatives=[
            {
                "name": "summarize_all",
                "confidence": 0.5,
                "reason_rejected": "Too many results, would dilute quality",
            },
        ],
        reasoning="Focusing on top 3 papers gives concise, high-value summary.",
    )

    time.sleep(0.2)  # Simulate summarization

    # Simulate: decide whether to retry with refined query
    agentlens.log_decision(
        type="CUSTOM",
        chosen={
            "name": "return_results",
            "confidence": 0.95,
            "params": {"result_count": 3},
        },
        alternatives=[
            {
                "name": "refine_and_retry",
                "confidence": 0.3,
                "reason_rejected": "Current results are already high quality",
            },
        ],
        reasoning="Results are comprehensive enough; no need to retry.",
    )

# 3. Shutdown — flush any pending data
agentlens.shutdown()

print("Done! Check your AgentLens dashboard for the 'research-task' trace.")

---

examples/customer_support_agent.py (new file, 211 lines):

"""
AgentLens Customer Support Example — Realistic support ticket workflow.

Demonstrates:
- Ticket classification with ROUTING decisions
- Specialist routing with TOOL_SELECTION
- Escalation decisions with ESCALATION type
- Error handling — traces capture exceptions automatically
- Multiple real-world decision patterns

Usage:
    pip install vectry-agentlens
    python customer_support_agent.py
"""

import agentlens
import time
import random

# Initialize
agentlens.init(
    api_key="your-api-key-here",
    endpoint="http://localhost:4200",
)

# Simulated ticket data
TICKETS = [
    {
        "id": "TKT-4021",
        "subject": "Cannot access billing portal after password reset",
        "priority": "high",
        "customer_tier": "enterprise",
        "body": "After resetting my password, I get a 403 error on the billing page. "
        "I need to update our payment method before end of month.",
    },
    {
        "id": "TKT-4022",
        "subject": "Feature request: dark mode for dashboard",
        "priority": "low",
        "customer_tier": "free",
        "body": "Would love to have a dark mode option. My eyes hurt during late-night sessions.",
    },
    {
        "id": "TKT-4023",
        "subject": "API returning 500 errors intermittently",
        "priority": "critical",
        "customer_tier": "enterprise",
        "body": "Our production integration is failing ~20% of requests with 500 errors. "
        "Started about 2 hours ago. This is blocking our release.",
    },
]


def simulate_llm(prompt: str, delay: float = 0.15) -> str:
    """Fake LLM — replace with real calls."""
    time.sleep(delay)
    return f"[Response to: {prompt[:60]}]"


def process_ticket(ticket: dict) -> None:
    """Process a single support ticket through the agent pipeline."""

    with agentlens.trace(
        "customer-support-bot",
        tags=["support", ticket["priority"], ticket["customer_tier"]],
    ):
        # Step 1: Classify the ticket
        agentlens.log_decision(
            type="ROUTING",
            chosen={
                "name": "classify_ticket",
                "confidence": 0.91,
                "params": {
                    "ticket_id": ticket["id"],
                    "predicted_category": (
                        "billing"
                        if "billing" in ticket["subject"].lower()
                        else "bug"
                        if "error" in ticket["body"].lower() or "500" in ticket["body"]
                        else "feature_request"
                    ),
                },
            },
            alternatives=[
                {
                    "name": "ask_customer_for_clarification",
                    "confidence": 0.2,
                    "reason_rejected": "Ticket subject and body are clear enough",
                },
            ],
            reasoning=f"Ticket '{ticket['subject']}' clearly maps to a known category.",
        )

        classification = simulate_llm(f"Classify: {ticket['subject']}")

        # Step 2: Route to specialist
        is_critical = ticket["priority"] in ("critical", "high")
        is_enterprise = ticket["customer_tier"] == "enterprise"

        if is_critical and is_enterprise:
            specialist = "senior_engineer"
        elif is_critical:
            specialist = "engineer"
        elif "billing" in ticket["subject"].lower():
            specialist = "billing_team"
        else:
            specialist = "general_support"

        agentlens.log_decision(
            type="ROUTING",
            chosen={
                "name": specialist,
                "confidence": 0.87,
                "params": {
                    "ticket_id": ticket["id"],
                    "priority": ticket["priority"],
                    "sla_minutes": 30 if is_enterprise else 240,
                },
            },
            alternatives=[
                {
                    "name": "general_support",
                    "confidence": 0.4,
                    "reason_rejected": "Ticket requires specialized handling"
                    if specialist != "general_support"
                    else "This is general support already",
                },
            ],
            reasoning=f"Priority={ticket['priority']}, Tier={ticket['customer_tier']} -> route to {specialist}.",
        )

        # Step 3: Specialist handles ticket (nested trace)
        with agentlens.trace(f"specialist-{specialist}", tags=[specialist]):
            # Tool selection for the specialist
            agentlens.log_decision(
                type="TOOL_SELECTION",
                chosen={
                    "name": "search_knowledge_base",
                    "confidence": 0.82,
                    "params": {"query": ticket["subject"], "limit": 5},
                },
                alternatives=[
                    {
                        "name": "search_past_tickets",
                        "confidence": 0.7,
                        "reason_rejected": "KB is more authoritative for known issues",
                    },
                    {
                        "name": "check_status_page",
                        "confidence": 0.6,
                        "reason_rejected": "Already checked — no ongoing incidents posted",
                    },
                ],
                reasoning="Knowledge base has resolution guides for common issues.",
            )

            kb_result = simulate_llm(f"Search KB for: {ticket['subject']}")

            # Step 4: Escalation decision for critical tickets
            if ticket["priority"] == "critical":
                agentlens.log_decision(
                    type="ESCALATION",
                    chosen={
                        "name": "escalate_to_engineering",
                        "confidence": 0.94,
                        "params": {
                            "severity": "P1",
                            "team": "platform-reliability",
                            "ticket_id": ticket["id"],
                        },
                    },
                    alternatives=[
                        {
                            "name": "resolve_at_support_level",
                            "confidence": 0.15,
                            "reason_rejected": "500 errors suggest infrastructure issue beyond support scope",
                        },
                    ],
                    reasoning="Intermittent 500s on enterprise account = immediate P1 escalation.",
                )

                # Simulate escalation failure for the critical ticket (shows error handling)
                if random.random() < 0.3:
                    raise RuntimeError(
                        f"Escalation service unavailable for {ticket['id']}"
                    )

            # Generate response
            response = simulate_llm(
                f"Draft response for {ticket['id']}: {ticket['subject']}",
                delay=0.3,
            )

        print(f"  [{ticket['id']}] Processed -> routed to {specialist}")


# Process all tickets
print("Processing support tickets...\n")

for ticket in TICKETS:
    try:
        process_ticket(ticket)
    except Exception as e:
        # The trace context manager captures the error automatically
        print(f"  [{ticket['id']}] Error during processing: {e}")

# Shutdown
agentlens.shutdown()

print("\nDone! Check AgentLens dashboard for 'customer-support-bot' traces.")
print("Look for the ERROR trace — it shows how failures are captured.")

---

examples/multi_agent.py (new file, 183 lines):

"""
AgentLens Multi-Agent Example — Nested traces for orchestrated agent workflows.

Demonstrates:
- A "planner" agent that delegates to sub-agents
- Nested trace contexts that create parent-child span relationships automatically
- Multiple decision types: ROUTING, PLANNING, TOOL_SELECTION
- How the dashboard shows the full agent call tree

Usage:
    pip install vectry-agentlens
    python multi_agent.py
"""

import agentlens
import time

# Initialize
agentlens.init(
    api_key="your-api-key-here",
    endpoint="http://localhost:4200",
)


def simulate_llm_call(prompt: str, delay: float = 0.2) -> str:
    """Fake LLM call — replace with real model calls in production."""
    time.sleep(delay)
    return f"[LLM response to: {prompt[:50]}...]"


# Top-level planner agent trace
with agentlens.trace("planner-agent", tags=["multi-agent", "blog-pipeline"]):
    # Planner decides the workflow
    agentlens.log_decision(
        type="PLANNING",
        chosen={
            "name": "research_then_write",
            "confidence": 0.93,
            "params": {
                "steps": ["research", "outline", "draft", "review"],
                "topic": "AI agents in production",
            },
        },
        alternatives=[
            {
                "name": "write_directly",
                "confidence": 0.4,
                "reason_rejected": "Topic requires research for factual accuracy",
            },
        ],
        reasoning="Complex topic — research phase needed before writing.",
    )

    # Planner routes to researcher agent
    agentlens.log_decision(
        type="ROUTING",
        chosen={
            "name": "researcher-agent",
            "confidence": 0.95,
            "params": {"query": "AI agents in production best practices 2025"},
        },
        alternatives=[
            {
                "name": "writer-agent",
                "confidence": 0.3,
                "reason_rejected": "Need facts before drafting",
            },
        ],
        reasoning="Researcher goes first to gather source material.",
    )

    # --- Nested: Researcher Agent ---
    with agentlens.trace("researcher-agent", tags=["research"]):
        agentlens.log_decision(
            type="TOOL_SELECTION",
            chosen={
                "name": "web_search",
                "confidence": 0.88,
                "params": {
                    "query": "AI agents production deployment 2025",
                    "limit": 10,
                },
            },
            alternatives=[
                {
                    "name": "arxiv_search",
                    "confidence": 0.72,
                    "reason_rejected": "Need industry examples, not just papers",
                },
            ],
            reasoning="Web search covers blog posts, case studies, and papers.",
        )

        research_results = simulate_llm_call(
            "Summarize findings about AI agents in production"
        )

        agentlens.log_decision(
            type="MEMORY_RETRIEVAL",
            chosen={
                "name": "store_research_context",
                "confidence": 0.9,
                "params": {"key": "research_findings", "chunks": 5},
            },
            alternatives=[],
            reasoning="Store condensed findings for the writer agent to consume.",
        )

    # Planner routes to writer agent
    agentlens.log_decision(
        type="ROUTING",
        chosen={
            "name": "writer-agent",
            "confidence": 0.97,
            "params": {"style": "technical-blog", "word_count": 1500},
        },
        alternatives=[
            {
                "name": "researcher-agent",
                "confidence": 0.15,
                "reason_rejected": "Research phase complete, enough material gathered",
            },
        ],
        reasoning="Research complete — hand off to writer with gathered material.",
    )

    # --- Nested: Writer Agent ---
    with agentlens.trace("writer-agent", tags=["writing"]):
        agentlens.log_decision(
            type="PLANNING",
            chosen={
                "name": "structured_outline_first",
                "confidence": 0.91,
                "params": {
                    "sections": ["intro", "challenges", "solutions", "conclusion"]
                },
            },
            alternatives=[
                {
                    "name": "stream_of_consciousness",
                    "confidence": 0.3,
                    "reason_rejected": "Technical blog needs clear structure",
                },
            ],
            reasoning="Outline-first approach produces better organized blog posts.",
        )

        outline = simulate_llm_call("Create blog outline for AI agents in production")
        draft = simulate_llm_call("Write full blog draft from outline", delay=0.5)

        # --- Nested deeper: Editor sub-agent within writer ---
        with agentlens.trace("editor-agent", tags=["editing"]):
            agentlens.log_decision(
                type="TOOL_SELECTION",
                chosen={
                    "name": "grammar_check",
                    "confidence": 0.85,
                    "params": {"text_length": 1500, "style_guide": "technical"},
                },
                alternatives=[
                    {
                        "name": "skip_editing",
                        "confidence": 0.1,
                        "reason_rejected": "Always edit before publishing",
                    },
                ],
                reasoning="Run grammar and style check on the draft.",
            )

            edited = simulate_llm_call("Edit and polish the blog draft", delay=0.3)

print("Blog pipeline complete!")
print(f"Research: {research_results}")
print(f"Final draft: {edited}")

# Shutdown
agentlens.shutdown()

print("\nDone! Check AgentLens dashboard — you'll see nested spans:")
print("  planner-agent")
print("    -> researcher-agent")
print("    -> writer-agent")
print("       -> editor-agent")

---

examples/openai_agent.py (new file, 113 lines):

"""
AgentLens OpenAI Integration Example — Wrap the OpenAI client for automatic tracing.

Demonstrates:
- Wrapping openai.OpenAI() so all LLM calls are traced as spans
- Combining automatic LLM tracing with manual decision logging
- Using trace tags and metadata

Usage:
    pip install vectry-agentlens openai
    export OPENAI_API_KEY="sk-..."
    python openai_agent.py
"""

import agentlens
from agentlens.integrations.openai import wrap_openai

import openai  # pip install openai

# 1. Initialize AgentLens
agentlens.init(
    api_key="your-api-key-here",
    endpoint="http://localhost:4200",
)

# 2. Create and wrap the OpenAI client — all completions are now auto-traced
client = openai.OpenAI()
wrap_openai(client)

# 3. Use the wrapped client inside a trace
with agentlens.trace("email-drafting-agent", tags=["openai", "email", "demo"]):
    # Decision: which model to use for this task
    agentlens.log_decision(
        type="TOOL_SELECTION",
        chosen={
            "name": "gpt-4o",
            "confidence": 0.9,
            "params": {"temperature": 0.7, "max_tokens": 512},
        },
        alternatives=[
            {
                "name": "gpt-4o-mini",
                "confidence": 0.7,
                "reason_rejected": "Task needs higher quality reasoning for tone",
            },
        ],
        reasoning="Email drafting requires nuanced tone — use the larger model.",
    )

    # This call is automatically captured as an LLM_CALL span
    classification = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Classify the intent of this email request."},
            {
                "role": "user",
                "content": "Write a professional follow-up email to a client "
                "who hasn't responded to our proposal in 2 weeks.",
            },
        ],
        temperature=0.3,
        max_tokens=100,
    )

    intent = classification.choices[0].message.content
    print(f"Classified intent: {intent}")

    # Decision: choose email style based on classification
    agentlens.log_decision(
        type="ROUTING",
        chosen={
            "name": "polite_follow_up",
            "confidence": 0.88,
            "params": {"tone": "professional-warm", "urgency": "medium"},
        },
        alternatives=[
            {
                "name": "formal_reminder",
                "confidence": 0.65,
                "reason_rejected": "Too stiff for a 2-week follow-up",
            },
            {
                "name": "casual_check_in",
                "confidence": 0.4,
                "reason_rejected": "Client relationship is still formal",
            },
        ],
        reasoning="Professional-warm tone balances urgency with courtesy.",
    )

    # Second LLM call — also auto-captured
    draft = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "You draft professional emails. Tone: warm but professional.",
            },
            {
                "role": "user",
                "content": f"Draft a polite follow-up email. Context: {intent}",
            },
        ],
        temperature=0.7,
        max_tokens=512,
    )

    email_body = draft.choices[0].message.content
    print(f"\nDrafted email:\n{email_body}")

# 4. Shutdown
agentlens.shutdown()

print("\nDone! Check AgentLens dashboard for the 'email-drafting-agent' trace.")

---

examples/seed_demo_traces.py (new file, 1363 lines): diff omitted because of its size; see the file in the repository.