AI Providers
The AI Providers tab allows you to configure connections to different Large Language Models (LLMs). This page explains how to set up and manage AI provider integrations within Opik.
Overview
Connecting AI providers enables you to:
- Send prompts and receive responses from different LLMs
- Set up a provider in one place and use it across projects
- Automatically record model metadata in the Playground
- Track and analyze traces using online evaluation rules
Managing AI Providers
Viewing Existing Providers

The AI Providers page displays a table of all configured connections with the following columns:
- Name: The name or identifier of the API key
- Created: The date and time when the provider was configured
- Provider: The type of AI provider (e.g., OpenAI)
Adding a New Provider Configuration
To add a new AI provider:
- Click the Add configuration button in the top-right corner

- In the Provider Configuration dialog that appears:
- Select a provider from the dropdown menu
- Enter your API key for that provider
- Click Save to store the configuration
Supported Providers
Opik supports integration with various AI providers, including:
- OpenAI
- Anthropic
- OpenRouter
- Gemini
- VertexAI
- Azure OpenAI
- Amazon Bedrock
- LM Studio (coming soon)
- vLLM / Ollama / any other OpenAI API-compliant provider
If you would like us to support additional LLM providers, please let us know by opening an issue on GitHub.
Provider-Specific Setup
Below are instructions for obtaining API keys and other required information for each supported provider:
OpenAI
- Create or log in to your OpenAI account
- Navigate to the API keys page
- Click “Create new secret key”
- Copy your API key (it will only be shown once)
- In Opik, select “OpenAI” as the provider and paste your key
Anthropic
- Sign up for or log in to Anthropic’s platform
- Navigate to the API Keys page
- Click “Create Key” and select the appropriate access level
- Copy your API key (it will only be shown once)
- In Opik, select “Anthropic” as the provider and paste your key
OpenRouter
- Create or log in to your OpenRouter account
- Navigate to the API Keys page
- Create a new API key
- Copy your API key
- In Opik, select “OpenRouter” as the provider and paste your key
Gemini
- Sign up for or log in to Google AI Studio
- Go to the API keys page
- Create a new API key for one of your existing Google Cloud projects
- Copy your API key (it will only be shown once)
- In Opik, select “Gemini” as the provider and paste your key
Azure OpenAI
Azure OpenAI provides enterprise-grade access to OpenAI models through Microsoft Azure. To use Azure OpenAI with Opik:
- Get your Azure OpenAI endpoint URL
  - Go to portal.azure.com
  - Navigate to your Azure OpenAI resource
  - Copy your endpoint URL (it looks like https://your-company.openai.azure.com)
- Construct the complete API URL
  - Add /openai/v1 to the end of your endpoint URL
  - Your complete URL should look like: https://your-company.openai.azure.com/openai/v1
- Configure in Opik
  - In Opik, go to Configuration → AI Providers
  - Click “Add Configuration”
  - Select “vLLM / Custom provider” from the dropdown
  - Enter your complete URL in the URL field: https://your-company.openai.azure.com/openai/v1
  - Add your Azure OpenAI API key in the API Key field
  - In the Models section, list the models you have deployed in Azure (e.g., gpt-4o)
  - Click Save to store the configuration
Once saved, you can use your Azure OpenAI models directly from Online Scores and the Playground.
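When scripting this setup, the URL-construction step above amounts to appending the OpenAI-compatible path to your resource endpoint. A minimal helper (hypothetical, not part of Opik) illustrates it:

```python
def azure_openai_compatible_url(endpoint: str) -> str:
    """Append the OpenAI-compatible path to an Azure OpenAI endpoint URL."""
    # Strip a trailing slash so the path is not doubled
    return endpoint.rstrip("/") + "/openai/v1"

print(azure_openai_compatible_url("https://your-company.openai.azure.com"))
# → https://your-company.openai.azure.com/openai/v1
```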
Vertex AI
Option A: Setup via gcloud CLI
- Create a Custom IAM Role
- Create a Service Account
- Assign the Role to the Service Account
- Generate the Service Account Key File
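The four steps above can be sketched with the gcloud CLI. This is an illustrative sketch, assuming an authenticated gcloud session; my-project is a placeholder for your project ID, and the role/account names mirror the UI walkthrough below:

```shell
# Assumption: gcloud is installed and authenticated; replace my-project.
PROJECT_ID=my-project

# 1. Create a custom IAM role with the two permissions Opik needs
gcloud iam roles create opik --project=$PROJECT_ID \
  --title="Opik" --description="Custom IAM role for Opik" --stage=ALPHA \
  --permissions=aiplatform.endpoints.predict,resourcemanager.projects.get

# 2. Create a service account
gcloud iam service-accounts create opik-sa \
  --display-name="Opik Service Account" --project=$PROJECT_ID

# 3. Assign the custom role to the service account
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:opik-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="projects/$PROJECT_ID/roles/opik"

# 4. Generate the service account key file
gcloud iam service-accounts keys create opik-key.json \
  --iam-account="opik-sa@$PROJECT_ID.iam.gserviceaccount.com"
```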
The file opik-key.json contains your credentials. Open it in a text editor and copy the entire contents.
Option B: Setup via Google Cloud Console (UI)
Step 1: Create the Custom Role
- Go to IAM → Roles
- Click Create Role
- Fill in the form:
  - Title: Opik
  - ID: opik
  - Description: Custom IAM role for Opik
  - Stage: Alpha
- Click Add Permissions, then search for and add:
  - aiplatform.endpoints.predict
  - resourcemanager.projects.get
- Click Create
Step 2: Create the Service Account
- Go to IAM → Service Accounts
- Click Create Service Account
- Fill in:
  - Service account name: Opik Service Account
  - ID: opik-sa
  - Description: Service account for the Opik role
- Click Done
Step 3: Assign the Role to the Service Account
- Go to IAM
- Find the service account opik-sa@<my-project>.iam.gserviceaccount.com
- Click the edit icon
- Click Add Another Role → Select your custom role: Opik
- Click Save
Step 4: Create and Download the Key
- Go to Service Accounts
- Click on the opik-sa account
- Open the Keys tab
- Click Add Key → Create new key
- Choose JSON, click Create, and download the file
Open the downloaded JSON file, and copy its entire content to be used in the next step.
Final Step: Connect Opik to Vertex AI
- In Opik, go to Configuration → AI Providers
- Click “Add Configuration”
- Set:
  - Provider: Vertex AI
  - Location: Your model region (e.g., us-central1)
  - Vertex AI API Key: Paste the full contents of the opik-key.json file here
- Click Add configuration
Amazon Bedrock
Amazon Bedrock provides access to foundation models from leading AI companies through AWS. You can use Bedrock models in the Opik Playground by configuring it as a custom provider.
Getting AWS Credentials
- Log into the AWS Bedrock Console
- Go to the API Keys section
- Generate or copy your API Key
- Keep this API Key secure - you’ll need it for Opik configuration
Configuring Bedrock in Opik
- In Opik, go to Configuration → AI Providers
- Click “Add Configuration”
- Select “vLLM / Custom provider” from the dropdown
- Enter your Bedrock endpoint URL in the URL field
- Add your AWS Bedrock API Key in the API Key field
- In the Models section, write the Bedrock models you want to use
- Click Save to store the configuration
Bedrock URL Format by Region
Bedrock endpoints follow this pattern: https://bedrock-runtime.<region>.amazonaws.com/openai/v1
Examples by Region:
- US East 1: https://bedrock-runtime.us-east-1.amazonaws.com/openai/v1
- US West 2: https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1
- Europe West 1 (Ireland): https://bedrock-runtime.eu-west-1.amazonaws.com/openai/v1
- Europe Central 1 (Frankfurt): https://bedrock-runtime.eu-central-1.amazonaws.com/openai/v1
- Asia Pacific (Tokyo): https://bedrock-runtime.ap-northeast-1.amazonaws.com/openai/v1
- Asia Pacific (Singapore): https://bedrock-runtime.ap-southeast-1.amazonaws.com/openai/v1
Not all Bedrock models are available in all regions. Check the AWS Bedrock model availability documentation to verify model availability in your chosen region before configuring.
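The region-to-endpoint mapping above follows a single template, so if you script your configuration, a small helper (illustrative only, not an AWS or Opik API) can build the URL from a region code:

```python
def bedrock_openai_url(region: str) -> str:
    """Build the OpenAI-compatible Bedrock runtime endpoint for an AWS region."""
    return f"https://bedrock-runtime.{region}.amazonaws.com/openai/v1"

print(bedrock_openai_url("eu-west-1"))
# → https://bedrock-runtime.eu-west-1.amazonaws.com/openai/v1
```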
vLLM / Custom Provider
Use this option to add any OpenAI API-compliant provider such as vLLM, Ollama, etc. You can configure multiple custom providers, each with its own unique name, URL, and models.

Configuration Steps
- Provider Name: Enter a unique name to identify this custom provider (e.g., “vLLM Production”, “Ollama Local”, “Azure OpenAI Dev”)
- URL: Enter your server URL, for example: http://host.docker.internal:8000/v1
- API Key (optional): If your model access requires authentication, enter the API key. Otherwise, leave this field blank.
- Models: List all models available on your server. You’ll be able to select one of them for use later.
- Custom Headers (optional): Add any additional HTTP headers required by your custom endpoint as key-value pairs.
If you’re running Opik locally, you would need to use http://host.docker.internal:<PORT>/v1 for Mac and Windows or http://172.17.0.1:<PORT>/v1 for Linux, and not http://localhost.
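The host-vs-container distinction above can be captured in a small helper if you generate this URL in a script. This is a sketch under the assumption that the script runs on the same machine that hosts the Docker daemon, with the default docker0 bridge on Linux:

```python
import platform

def opik_local_provider_url(port: int) -> str:
    """Base URL a Dockerized Opik can use to reach a model server on the host."""
    # Inside a container, "localhost" refers to the container itself, not your
    # machine. Docker on Mac/Windows exposes the host as host.docker.internal;
    # on Linux the default bridge gateway is 172.17.0.1 (assumes docker0).
    if platform.system() in ("Darwin", "Windows"):
        host = "host.docker.internal"
    else:
        host = "172.17.0.1"
    return f"http://{host}:{port}/v1"
```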
Custom Headers
Some custom providers may require additional HTTP headers beyond the API key for authentication or routing purposes. You can configure these headers using the “Custom headers” section:
- Click “+ Add header” to add a new header
- Enter the header name (e.g., X-Custom-Auth, X-Request-ID)
- Enter the header value
- Add multiple headers as needed
- Use the trash icon to remove headers
Common use cases for custom headers:
- Custom authentication: Additional authentication tokens or headers required by your infrastructure
- Request routing: Headers for routing requests to specific model versions or deployments
- Metadata tracking: Custom headers for tracking or logging purposes
- Enterprise features: Headers required for enterprise proxy configurations
Custom headers are sent with every request to your custom provider endpoint. Ensure header values are kept secure and not exposed in logs or error messages.
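Conceptually, each request's headers are built by layering your custom headers on top of the standard ones. The sketch below is a rough illustration of how an OpenAI-compatible client might assemble them, not Opik's actual implementation:

```python
def build_request_headers(api_key, custom_headers):
    """Merge standard request headers with user-configured custom headers."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # the API key field is optional for custom providers
        headers["Authorization"] = f"Bearer {api_key}"
    headers.update(custom_headers)  # custom headers ride along on every request
    return headers

build_request_headers("sk-placeholder", {"X-Custom-Auth": "token123"})
```

Note that a custom header with the same name as a standard one would override it in this sketch, which is one reason to keep header names distinct from Authorization and Content-Type.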
Managing Multiple Custom Providers
Once you’ve configured multiple custom providers, you can:
- Edit any custom provider by selecting it from the provider dropdown in the configuration dialog
- Delete custom providers that are no longer needed
- Switch between different custom providers in the Playground and Automation Rules
Each custom provider appears as a separate option in the provider dropdown, making it easy to work with multiple self-hosted or custom LLM deployments.
API Key Security
API keys are securely stored and encrypted in the Opik system. Only the name and provider type are visible in the interface after configuration. The actual key values are not displayed after initial setup.
Using AI Providers
Once configured, AI providers can be used in:
- Playground: For interactive experimentation with different models
- Online Evaluation: For systematic evaluation of model performance
Best Practices
- Use descriptive names for your API keys to easily identify their purpose
- Regularly rotate API keys according to your organization’s security policies
- Create separate keys for development and production environments
- Monitor usage to manage costs effectively
Troubleshooting
Common Issues
- Authentication Errors: Ensure your API key is valid and hasn’t expired
- Access Denied: Check that your API key has the required permissions for the models you’re trying to use
- Rate Limiting: If you’re encountering rate limits, consider adjusting your request frequency or contacting your provider to increase your limits
Additional Resources
- For programmatic configuration of AI providers, see the API Reference
- To learn about using different models in your application, see the SDK Documentation