agentlens/apps/web/public/llms.txt
Commit 145b1669e7 by Vectry (2026-02-10): feat: comprehensive SEO — meta tags, OG, Twitter cards, JSON-LD, sitemap, robots, llms.txt. Adds metadataBase, full OpenGraph + Twitter card tags, keywords, JSON-LD structured data (SoftwareApplication + Organization), sitemap.ts, robots.ts with AI crawler directives, and llms.txt for AI agent discoverability.

# AgentLens
> AgentLens is an open-source agent observability platform that traces AI agent decisions, not just API calls. It captures why agents choose specific tools, routes, or strategies — providing visibility into the reasoning behind every action.

AgentLens helps engineering teams debug, monitor, and improve AI agent applications in production. Unlike traditional LLM observability tools that only trace API calls, AgentLens captures the decision-making process: tool selection rationale, routing logic, retry strategies, and planning steps. It includes a real-time dashboard with decision tree visualization, cost analytics, and token tracking.
## Getting Started
- [GitHub Repository](https://gitea.repi.fun/repi/agentlens): Source code, issues, and contribution guide
- [PyPI Package](https://pypi.org/project/vectry-agentlens/): Install with `pip install vectry-agentlens`
- [Dashboard](https://agentlens.vectry.tech/dashboard): Live demo dashboard with sample traces
## Python SDK
- [Basic Usage](https://gitea.repi.fun/repi/agentlens/src/branch/main/examples/basic_agent.py): Minimal SDK usage with trace context and decision logging
- [OpenAI Integration](https://gitea.repi.fun/repi/agentlens/src/branch/main/examples/openai_agent.py): Wrap OpenAI client for automatic LLM call tracing
- [Multi-Agent Example](https://gitea.repi.fun/repi/agentlens/src/branch/main/examples/multi_agent.py): Nested multi-agent workflow tracing
- [Function Calling](https://gitea.repi.fun/repi/agentlens/src/branch/main/examples/moonshot_real_test.py): Real LLM test with tool/function calling
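The linked examples are the authoritative reference; for orientation, here is a minimal sketch of how the SDK pieces named in this file could fit together. Only `@trace`, `log_decision()`, and the decision types listed under Key Concepts appear in this document; the `agentlens` import path, the `init()` call, and every parameter name below are assumptions.
```python
# Minimal sketch, not the verified SDK surface: only `@trace` and
# `log_decision()` are named in this file. The import path, `init()`
# signature, and keyword arguments are assumptions; see
# examples/basic_agent.py in the repository for the real API.
import agentlens
from agentlens import trace, log_decision


def search_kb(query: str) -> str:
    """Stand-in for the agent's own tool layer."""
    return f"[kb] results for: {query}"


# Assumed initialization: point the SDK at an AgentLens instance.
agentlens.init(api_key="al_...", endpoint="https://agentlens.vectry.tech")


@trace(name="support-agent")  # wraps the run in a top-level trace
def handle_ticket(ticket: str) -> str:
    # Record why the agent picked one tool over the alternatives.
    log_decision(
        decision_type="TOOL_SELECTION",  # string form of a type from Key Concepts
        chosen="search_kb",
        alternatives=["ask_user", "escalate"],
        reasoning="Ticket references an existing KB article.",
    )
    return search_kb(ticket)


print(handle_ticket("How do I rotate my API key?"))
```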
## Key Concepts
- **Traces**: Top-level containers for agent execution sessions, with tags and metadata
- **Spans**: Individual operations within a trace (LLM calls, tool calls, chain steps)
- **Decision Points**: The core differentiator — captures what was chosen, what alternatives existed, and why
- **Decision Types**: TOOL_SELECTION, ROUTING, RETRY, ESCALATION, MEMORY_RETRIEVAL, PLANNING, CUSTOM
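To make the nesting concrete, here is a hypothetical sketch of how a trace, its spans, and a decision point relate. The terminology comes from this file; the field names and structure are guesses, not the actual SDK or wire format.
```python
# Illustrative data only: field names are guesses based on the terminology
# above (traces, spans, decision points), not AgentLens's real schema.
example_trace = {
    "trace_id": "tr_123",
    "name": "support-agent",
    "tags": ["prod"],
    "metadata": {"customer_tier": "free"},
    "spans": [
        {
            "span_id": "sp_1",
            "kind": "llm_call",  # spans cover LLM calls, tool calls, chain steps
            "decisions": [
                {
                    "type": "ROUTING",  # one of the decision types listed above
                    "chosen": "billing_flow",
                    "alternatives": ["tech_support_flow", "escalate_to_human"],
                    "reasoning": "User mentioned an unexpected charge.",
                }
            ],
        }
    ],
}
```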
## API
- POST /api/traces: Batch ingest traces from SDK (Bearer token auth)
- GET /api/traces: List traces with pagination, search, filters, and sorting
- GET /api/traces/:id: Get single trace with all spans, decisions, and events
- GET /api/traces/stream: Server-Sent Events for real-time trace updates
- GET /api/health: Health check endpoint
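A sketch of the batch-ingest call follows. The `/api/traces` path and Bearer token auth come from this file; the request envelope and key format are placeholders, and the real payload schema is whatever the SDK sends.
```python
# Hypothetical batch ingest: path and Bearer auth are from this file; the
# {"traces": [...]} envelope and the key format are assumptions.
import requests

BASE_URL = "https://agentlens.vectry.tech"
API_KEY = "al_live_..."  # placeholder

resp = requests.post(
    f"{BASE_URL}/api/traces",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"traces": [{"trace_id": "tr_123", "name": "support-agent", "spans": []}]},
    timeout=10,
)
resp.raise_for_status()

# Listing and health check share the same base URL; pagination/filter
# parameter names are not documented here, so none are guessed.
print(requests.get(f"{BASE_URL}/api/health", timeout=5).status_code)
```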
## Integrations
- **OpenAI**: `wrap_openai(client)` auto-instruments all chat completions, streaming, and tool calls
- **LangChain**: `AgentLensCallbackHandler` captures chains, agents, tools, and LLM calls
- **Any Python Code**: `@trace` decorator and `log_decision()` for custom instrumentation
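A sketch of how the integrations above are typically wired. `wrap_openai(client)` and `AgentLensCallbackHandler` are named in this file; the import paths and constructor arguments are assumptions.
```python
# `wrap_openai` and `AgentLensCallbackHandler` are named in this file; the
# module paths below are assumptions and may differ in the real package.
from openai import OpenAI
from agentlens.integrations.openai import wrap_openai  # assumed path
from agentlens.integrations.langchain import AgentLensCallbackHandler  # assumed path

# OpenAI: chat completions, streaming, and tool calls made through `client`
# are traced automatically once the client is wrapped.
client = wrap_openai(OpenAI())

# LangChain: pass the handler wherever LangChain accepts callbacks, e.g.
#   chain.invoke(inputs, config={"callbacks": [AgentLensCallbackHandler()]})
handler = AgentLensCallbackHandler()
```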
## Self-Hosting
- Docker Compose deployment with PostgreSQL and Redis
- Runs with a single `docker compose up -d`
- Environment variables: DATABASE_URL, REDIS_URL, AGENTLENS_API_KEY
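As a quick check after `docker compose up -d`, the sketch below verifies the three variables and hits the health endpoint. The variable names and `GET /api/health` come from this file; the local port is an assumption, and the check presumes the same variables are exported in the shell running it.
```python
# Post-deploy smoke test. DATABASE_URL, REDIS_URL, AGENTLENS_API_KEY and
# GET /api/health come from this file; localhost:3000 is an assumed default.
import os
import requests

REQUIRED = ("DATABASE_URL", "REDIS_URL", "AGENTLENS_API_KEY")
missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")

resp = requests.get("http://localhost:3000/api/health", timeout=5)
print("AgentLens health:", resp.status_code)
```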
## Optional
- [Company Website](https://vectry.tech): Built by Vectry, an engineering-first AI consultancy
- [CodeBoard](https://codeboard.vectry.tech): Sister product — understand any codebase in 5 minutes