Langfuse¶
Introduction¶
Langfuse is an observability and monitoring platform for Large Language Model (LLM) applications. It provides comprehensive tracing capabilities that help you understand, debug, and optimize your AI applications by capturing detailed execution traces, tracking token usage, monitoring latency, and analyzing model performance.
The Langfuse integration in ELITEA enables you to:
- Track LLM interactions - Monitor all API calls, prompts, and responses
- Analyze performance - Measure latency, token consumption, and costs
- Debug issues - Trace execution flows and identify bottlenecks
- Monitor quality - Evaluate model outputs and track accuracy over time
Prerequisites¶
Before configuring Langfuse credentials in ELITEA, you need:
- Langfuse Account - Sign up at Langfuse Cloud or deploy a self-hosted instance
- API Credentials - Generate API keys from your Langfuse project dashboard (see detailed instructions below in the Obtaining Langfuse API Keys section):
- Public Key - Used for client identification
- Secret Key - Used for authentication
Obtaining Langfuse API Keys¶
This section provides detailed, step-by-step instructions for generating the API credentials required to integrate Langfuse with ELITEA.
Langfuse Cloud¶
If you're using Langfuse Cloud, follow these steps to obtain your API keys:
Step 1: Log In to Langfuse Cloud¶
- Visit Langfuse Cloud: Open your web browser and navigate to https://cloud.langfuse.com
- Log In or Create Account: If you don't have an account, create one. If you already have an account, log in using your credentials.
Step 2: Create or Select a Project¶
Once logged in to Langfuse Cloud:
Organization Selection
When you first log in, you may need to select or create an organization. Organizations are the top-level entity that contains projects. If prompted, select an existing organization or create a new one before proceeding to project creation.
- Access Projects: You'll be directed to the projects overview page
- Create New Project (if needed):
- Click the "New Project" button
- Enter a Project Name (e.g., "ELITEA Production")
- Optionally add a description
- Click "Create" to create the project
- Select Existing Project: If you already have a project, click on it to open the project dashboard
Project Organization
Projects in Langfuse organize your traces and analytics. You can create separate projects for different environments (development, staging, production) or different applications.
Step 3: Generate API Keys¶
- Open Project Settings: In your project dashboard, locate and click on the "Settings" icon or menu (usually in the left sidebar or top navigation)
- Access API Keys: In the settings menu, click on "API Keys" to view the API keys management page
- Create New Keys: Click the "Create New API Key" or "+ New API Key" button
- Name Your Key (Optional): Some versions allow you to provide a descriptive name for the key set (e.g., "ELITEA Integration")
- Generate Keys: Click "Generate" or "Create"
- Copy Both Keys Immediately:
    - Public Key: Copy the public key (typically starts with `pk-lf-`)
    - Secret Key: Copy the secret key (typically starts with `sk-lf-`)

Important: Secret Key Security

The secret key is displayed only once, at creation time. You will not be able to view it again after you leave this page. Make sure to:

- Copy and store the secret key immediately in a secure location
- Use ELITEA's Secrets feature
- Never commit keys to version control systems
- If you lose the secret key, you must generate a new key pair

Next Steps

After obtaining your keys, verify they work correctly:

- Note the Base URL: For Langfuse Cloud, the base URL is `https://cloud.langfuse.com`
- Test in ELITEA: Use these credentials when creating your Langfuse credential in ELITEA (as described in the Creating Langfuse Credentials section)
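Before pasting keys into ELITEA, a quick sanity check of their format can catch copy-paste mistakes such as stray whitespace. A minimal sketch — note that the `pk-lf-`/`sk-lf-` prefixes are the typical defaults, not guaranteed for every deployment, and the function name here is illustrative:

```python
# Sanity-check Langfuse API key formats before storing them.
# The pk-lf-/sk-lf- prefixes are typical defaults; treat a failed
# check as a warning rather than a hard error.

def looks_like_langfuse_key(key: str, kind: str) -> bool:
    """Return True if the key has the expected prefix and no stray whitespace."""
    prefix = {"public": "pk-lf-", "secret": "sk-lf-"}[kind]
    return key == key.strip() and key.startswith(prefix) and len(key) > len(prefix)

print(looks_like_langfuse_key("pk-lf-1234abcd", "public"))  # True
print(looks_like_langfuse_key(" sk-lf-1234 ", "secret"))    # False (whitespace)
```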
Self-Hosted Langfuse¶
For self-hosted Langfuse deployments, the API key generation process is identical to Langfuse Cloud. Follow these steps:
- Deploy Langfuse: Follow the Langfuse Self-Hosting Guide using Docker Compose, Kubernetes, or your preferred cloud provider
- Access Your Instance: Navigate to your deployment URL (e.g., `https://langfuse.yourcompany.com`) and log in
- Create/Select Organization and Project: Follow the same organization and project creation steps as Langfuse Cloud (Step 2 above)
- Generate API Keys: Follow Step 3 above to generate your public and secret keys
Self-Hosted Configuration
When creating your Langfuse credential in ELITEA:
- Use your custom deployment URL as the Base URL (e.g., `https://langfuse.yourcompany.com`)
- Ensure SSL/TLS certificates are properly configured
- Verify firewall rules allow ELITEA to access your instance
- Test connectivity before proceeding with credential configuration
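The Base URL rules above (HTTPS only, no trailing slash) can be expressed as a small normalizer. This is a sketch, not ELITEA's actual validation logic, and the function name is illustrative:

```python
# Normalize a Langfuse Base URL per the guidance above:
# require an explicit scheme, enforce HTTPS, strip trailing slashes.

from urllib.parse import urlparse

def normalize_base_url(url: str) -> str:
    url = url.strip().rstrip("/")
    parsed = urlparse(url)
    if not parsed.scheme:
        raise ValueError(f"URL must include a scheme: {url!r}")
    if parsed.scheme != "https":
        raise ValueError(f"use HTTPS for Langfuse connections: {url!r}")
    return url

print(normalize_base_url("https://cloud.langfuse.com/"))  # https://cloud.langfuse.com
```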
Creating Langfuse Credentials in ELITEA¶
To integrate Langfuse with ELITEA for LLM tracing and observability:
Access Credentials Configuration
- Navigate to Settings → Credentials from the main navigation sidebar
- Click the + (Create New) button
- Select Langfuse from the credential type list

Configure and Save Langfuse Credential

- Fill in the required connection details:

| Field | Description |
|---|---|
| Display Name | Enter a descriptive name for this configuration (e.g., "Production Langfuse Tracing") |
| ID | Auto-generated from the Display Name |
| Base URL | Langfuse server endpoint URL. For Langfuse Cloud: `https://cloud.langfuse.com`. For Self-Hosted: your custom deployment URL (e.g., `https://langfuse.yourcompany.com`). Do not include trailing slashes. |
| Public Key | Your Langfuse public API key (see the Generate API Keys section for details). Used to identify your project/organization. Typically starts with `pk-lf-` (e.g., `pk-lf-...`) |
| Secret Key | Your Langfuse secret API key (see the Generate API Keys section for details). Provides authentication for API access. Stored securely and masked in the UI. Typically starts with `sk-lf-` (e.g., `sk-lf-...`) |

- Test Connection: Click Test Connection to verify that your credentials are valid and ELITEA can successfully connect to Langfuse
- Save Credential: Click Save to create the credential. After saving, your Langfuse credential will be added to the credentials dashboard and will be ready to use in agent configurations, pipeline configurations, and toolkit integrations requiring LLM tracing. You can view, edit, or delete it from the Credentials menu at any time.
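Under the hood, the key pair authenticates against the Langfuse API via HTTP Basic auth, with the public key as the username and the secret key as the password. A minimal sketch of how such a header is built (the key values here are hypothetical, and ELITEA's Test Connection handles this for you):

```python
# Build an HTTP Basic auth header from a Langfuse key pair.
# Sketch only — ELITEA performs authentication internally.

import base64

def basic_auth_header(public_key: str, secret_key: str) -> dict:
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Hypothetical key values for illustration:
headers = basic_auth_header("pk-lf-example", "sk-lf-example")
print(headers["Authorization"].startswith("Basic "))  # True
```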
Usage in ELITEA¶
Once your Langfuse credential is configured in ELITEA (via Settings → Credentials), it enables LLM observability and tracing capabilities for your AI applications. Langfuse automatically captures execution traces, providing deep insights into your agent's behavior, performance, and costs.
How Langfuse Tracing Works¶
When Langfuse credentials are configured in your ELITEA project:
Automatic Trace Capture:
- LLM API Calls: Every interaction with language models (GPT-4, Claude, etc.) is logged with full request/response details
- Token Usage: Input and output token counts are tracked for each model call
- Latency Metrics: Response times are measured for performance analysis
- Tool Invocations: External toolkit calls (GitHub, Jira, etc.) are captured in the execution trace
- Conversation Context: Full conversation history and context windows are preserved
- Error Tracking: Failed requests and exceptions are logged with stack traces
Trace Hierarchy:
Langfuse organizes execution data in a hierarchical structure:
```
Trace (Conversation Session)
├── Generation (LLM Call 1)
│   ├── Input: User prompt + system instructions
│   ├── Output: Model response
│   ├── Metadata: Model name, temperature, max tokens
│   └── Metrics: Tokens (input/output), latency, cost
├── Span (Tool Execution)
│   ├── Tool: github_create_issue
│   ├── Input: Repository, title, description
│   ├── Output: Issue URL and ID
│   └── Duration: Execution time
└── Generation (LLM Call 2)
    ├── Input: Tool result + follow-up prompt
    └── Output: Final response to user
```
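The hierarchy above can be sketched in code. These dataclasses are illustrative only — they are not the Langfuse SDK's actual types — but they show how generations and spans roll up into a trace's totals:

```python
# Illustrative model of the trace hierarchy above. NOT the Langfuse SDK's
# types — only demonstrates how child generations/spans aggregate.

from dataclasses import dataclass, field

@dataclass
class Generation:          # one LLM call
    model: str
    input_tokens: int
    output_tokens: int
    latency_s: float

@dataclass
class Span:                # one tool execution
    tool: str
    duration_s: float

@dataclass
class Trace:               # one conversation session
    name: str
    children: list = field(default_factory=list)

    def total_tokens(self) -> int:
        # Sum tokens across all LLM calls in the trace
        return sum(g.input_tokens + g.output_tokens
                   for g in self.children if isinstance(g, Generation))

trace = Trace("Agent Execution", [
    Generation("gpt-4o", 1200, 300, 1.2),
    Span("github_create_issue", 0.4),
    Generation("gpt-4o", 1600, 250, 1.0),
])
print(trace.total_tokens())  # 3350
```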
Viewing Traces in Langfuse¶
After agents or pipelines execute with Langfuse credentials configured, access detailed traces in your Langfuse dashboard:
Access Langfuse Traces
- Navigate to your Langfuse instance:
- Langfuse Cloud: https://cloud.langfuse.com
- Self-Hosted: Your custom deployment URL
- Log in with your Langfuse account credentials
- Select the project matching your ELITEA Langfuse credential configuration
- Click Traces in the left sidebar to open the Trace Explorer
- View the list of all captured execution traces with:
- Trace ID and timestamp
- User information (if available)
- Execution status (success/error)
- Total token count and estimated cost
- Execution duration
Analyze Individual Traces
Click on any trace to view detailed execution breakdown:
Trace Overview Panel:
| Field | Description | Example |
|---|---|---|
| Trace ID | Unique identifier for the execution | trace_abc123xyz |
| Name | Trace name or conversation identifier | "Agent Execution: Deploy Feature" |
| Timestamp | When the execution started | 2026-02-11 14:30:22 UTC |
| Duration | Total execution time | 3.4 seconds |
| User ID | User who triggered the execution | user@company.com |
| Tags | Custom metadata tags | environment:production, agent:deployment |
Generation Details (LLM Calls):
For each LLM API call in the trace:
- Prompt: Complete input including system instructions, user message, and conversation history
- Completion: Full model response
- Model: Specific model used (e.g., `gpt-4o`, `claude-3-5-sonnet-20241022`)
- Token Metrics:
    - Input tokens: Number of tokens in the prompt
    - Output tokens: Number of tokens in the response
    - Total tokens: Sum of input and output
- Cost: Calculated based on model pricing (e.g., `$0.0045`)
- Latency: Time taken to receive the response (e.g., `1.2s`)
Span Details (Tool Executions):
For each tool or external service call:
- Tool Name: Identifier of the tool used (e.g., `github_create_pull_request`)
- Input Parameters: Arguments passed to the tool
- Output Result: Data returned by the tool
- Duration: Execution time for the tool call
- Status: Success or error indication
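A per-call cost figure like the `$0.0045` example above is derived from the token counts and the model's per-token pricing. A minimal sketch — the prices used here are illustrative placeholders, not current provider rates:

```python
# How a per-call cost is derived from token counts.
# Prices below are illustrative, not real provider rates.

def call_cost(input_tokens: int, output_tokens: int,
              in_price_per_1k: float, out_price_per_1k: float) -> float:
    """Cost in dollars given token counts and per-1K-token prices."""
    return (input_tokens / 1000) * in_price_per_1k \
         + (output_tokens / 1000) * out_price_per_1k

# e.g. 900 input + 300 output tokens at $0.0025 / $0.0075 per 1K tokens:
print(round(call_cost(900, 300, 0.0025, 0.0075), 4))  # 0.0045
```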
Example Use Cases¶
Use Case 1: Debugging Agent Failures
When an agent execution fails or produces unexpected results:
- Find the Trace: Search Langfuse by trace ID, user, or timestamp
- Review LLM Prompts: Examine the exact prompts sent to the model
- Check if system instructions are correct
- Verify context window contains relevant information
- Identify missing or incorrect data in the prompt
- Analyze Tool Calls: Inspect tool execution results
- Verify tool was called with correct parameters
- Check for tool execution errors
- Validate tool output matches expectations
- Identify Root Cause: Pinpoint where the execution diverged from expected behavior
Example Scenario:
```
Problem: Agent fails to create GitHub issue
Trace Analysis:
├── Generation 1: User asks to create issue ✓
│   └── Model decides to call github_create_issue tool ✓
├── Span: github_create_issue
│   ├── Input: repository="wrong-repo", title="Bug Report"
│   └── Error: "Repository not found" ✗
└── Root Cause: Agent selected wrong repository name
Solution: Update agent instructions to verify repo names
```
Use Case 2: Optimizing Token Usage and Costs
Monitor and reduce LLM API costs:
- Filter by Model: View traces for expensive models (GPT-4, Claude)
- Sort by Token Count: Identify conversations with high token usage
- Analyze Prompt Efficiency:
- Review prompts with excessive token counts
- Identify redundant information in system instructions
- Find opportunities to compress context
- Calculate ROI: Compare costs across different models and configurations
Example Analysis:
Before Optimization:
- Average tokens per conversation: 8,500
- Average cost per conversation: $0.12
- Monthly cost (10,000 conversations): $1,200
After Optimization:
- Compressed system instructions (saved 1,200 tokens)
- Removed redundant examples (saved 800 tokens)
- Average tokens per conversation: 6,500
- Average cost per conversation: $0.09
- Monthly cost: $900 (25% reduction)
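The savings arithmetic above can be reproduced directly:

```python
# Reproduce the optimization arithmetic above:
# monthly cost before/after and the percentage reduction.

def monthly_cost(cost_per_conversation: float, conversations: int) -> float:
    return cost_per_conversation * conversations

before = monthly_cost(0.12, 10_000)   # $1,200/month
after = monthly_cost(0.09, 10_000)    # $900/month
reduction = (before - after) / before * 100

print(f"${after:.0f}/month ({reduction:.0f}% reduction)")  # $900/month (25% reduction)
```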
Use Case 3: Performance Monitoring
Track agent response times and identify bottlenecks:
- Dashboard Overview: View average latency trends over time
- Slow Trace Analysis: Filter traces by duration to find slowest executions
- Bottleneck Identification:
- Compare LLM response times across models
- Measure tool execution durations
- Identify network latency issues
- Performance Optimization: Switch to faster models for time-sensitive tasks
Filtering and Searching Traces¶
Langfuse provides powerful filtering capabilities:
- Filter by Time Range
- Filter by Status
- Search by Metadata
- Filter by Cost
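The same filters can be applied client-side to exported trace records. A sketch in which the record fields (`status`, `cost`, `tags`) mirror the Trace Explorer columns listed earlier — the field names here are illustrative, not a Langfuse export schema:

```python
# Client-side filtering over trace records, mirroring the Trace Explorer
# filters above. Record shape is illustrative, not a Langfuse schema.

traces = [
    {"id": "t1", "status": "success", "cost": 0.02, "tags": ["production"]},
    {"id": "t2", "status": "error",   "cost": 0.15, "tags": ["staging"]},
    {"id": "t3", "status": "success", "cost": 0.30, "tags": ["production"]},
]

errors = [t for t in traces if t["status"] == "error"]        # filter by status
expensive = [t for t in traces if t["cost"] > 0.10]           # filter by cost
prod = [t for t in traces if "production" in t["tags"]]       # search by metadata

print([t["id"] for t in expensive])  # ['t2', 't3']
```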
Best Practices¶
API Key Management
- Separate Keys by Environment - Use different API keys for development, staging, and production
- Rotate Keys Regularly - Periodically regenerate keys for enhanced security
- Limit Key Permissions - Use project-specific keys rather than organization-wide keys
- Monitor Key Usage - Track API key activity in Langfuse dashboard
Base URL Configuration
- Use HTTPS - Always use encrypted connections
- Verify URL - Ensure the Base URL is accessible from your ELITEA deployment
- No Trailing Slashes - Remove any trailing `/` from the URL
- Check Firewall Rules - Ensure network policies allow outbound connections to Langfuse
Integration Testing
- Test Before Production - Validate configuration in development environment first
- Monitor First Traces - Verify traces appear correctly in Langfuse after setup
- Check Trace Completeness - Ensure all expected data (tokens, latency, metadata) is captured
- Validate Cost Tracking - Confirm cost calculations align with provider billing
Organize Traces with Tags
Add custom metadata tags to traces for better organization:
- Environment tags: `production`, `staging`, `development`
- Feature tags: `deployment`, `code-review`, `testing`
- User tags: Team names, departments, or user roles
- Version tags: Agent version numbers for A/B testing
Set Up Alerts and Monitoring
Configure Langfuse alerts for:
- High Cost Executions: Alert when single trace exceeds cost threshold
- Error Rate Spikes: Notification when error rate increases
- Latency Issues: Alert on slow execution times
- Token Limit Warnings: Notification when approaching model token limits
Regular Trace Analysis
Establish a routine for trace review:
- Daily: Check for new errors and failures
- Weekly: Review cost trends and optimization opportunities
- Monthly: Analyze performance metrics and model effectiveness
- Quarterly: Comprehensive audit of agent behavior and improvements
Use Trace Data for Continuous Improvement
Leverage captured traces to:
- Refine Prompts: Improve system instructions based on observed behavior
- Update Training Data: Export successful interactions for fine-tuning
- Validate Changes: Compare traces before and after agent updates
- Document Edge Cases: Identify and document unusual execution patterns
Troubleshooting¶
Authentication Failed: Invalid public_key or secret_key
Cause: The provided API keys are incorrect or have been revoked.
Solution:
- Log in to your Langfuse dashboard
- Navigate to Project Settings → API Keys
- Verify the keys match exactly (no extra spaces)
- If keys are expired or revoked, generate new ones
- Update the configuration in ELITEA with the new keys
Access Forbidden: Check your API key permissions
Cause: The API keys lack necessary permissions for the project.
Solution:
- Verify the keys are from the correct Langfuse project
- Check key permissions in Langfuse dashboard
- Ensure the keys have read/write access to tracing data
- Generate new keys with appropriate permissions if needed
Connection Error: Unable to reach Langfuse at [URL]
Cause: The Base URL is incorrect or the Langfuse server is unreachable.
Solution:
- Verify Base URL:
    - For Langfuse Cloud: `https://cloud.langfuse.com`
    - For Self-Hosted: Check your deployment URL
    - Remove any trailing slashes
- Check Network Connectivity:
    - Verify ELITEA can access the URL (firewall rules, network policies)
    - Test the URL in a browser from the ELITEA server
    - Ensure DNS resolution works correctly
- Self-Hosted Issues:
    - Verify the Langfuse instance is running
    - Check SSL certificate validity
    - Ensure the deployment is accessible externally (if ELITEA is remote)
Connection Timeout: Langfuse did not respond in time
Cause: The Langfuse server is slow to respond or experiencing issues.
Solution:
- Check Server Status:
    - For Langfuse Cloud: Check the status page
    - For Self-Hosted: Verify server health and resources
- Network Latency:
    - Test connection speed to the server
    - Check for network congestion
    - Consider adjusting timeout settings if on a slow connection
- Server Performance:
    - Monitor Langfuse server resource usage (CPU, memory)
    - Check for high load or ongoing maintenance
    - Retry the connection after a few minutes
No Traces Appearing in Langfuse
Cause: Configuration is saved but traces are not being sent to Langfuse.
Solution:
- Verify Agent/Pipeline Configuration:
    - Ensure the Langfuse credential is selected in the agent or pipeline settings
    - Check that observability is enabled for the execution
- Check the Langfuse Project:
    - Verify you're looking at the correct project in the Langfuse dashboard
    - Check that the project matches your API keys
- Test with a Simple Execution:
    - Run a simple test agent or pipeline
    - Check the Langfuse dashboard within a few minutes
    - Look for error messages in the ELITEA logs
Incorrect Cost Calculations
Cause: Token counts or pricing information may not be accurate.
Solution:
- Verify Model Pricing:
    - Check that model pricing is configured correctly in Langfuse
    - Update pricing information if provider costs change
- Token Counting:
    - Ensure token counts are being captured accurately
    - Compare with provider billing statements
- Currency Settings:
    - Verify the currency configuration in Langfuse matches your provider's billing currency