Models
The TypeScript SDK provides flexible model configuration through direct integration with the Vercel AI SDK. You can use models from multiple providers with a simple, unified interface.
Overview
The TypeScript SDK supports three ways to configure models for evaluation and prompt generation:
- Model ID strings - Simple string identifiers (e.g., "gpt-4o", "claude-3-5-sonnet-latest")
- LanguageModel instances - Pre-configured Vercel AI SDK models with custom settings
- OpikBaseModel implementations - Custom model integrations for unsupported providers
Quick Start
Using Model ID Strings
The simplest approach is to pass a model ID string directly:
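For example (a minimal sketch; the Hallucination metric and its options-object constructor are shown as illustrations, so adjust to the metric you actually use):

```typescript
import { Hallucination } from "opik";

// The SDK auto-detects the provider from the model ID string
// ("gpt-4o" resolves to OpenAI) and configures the model internally.
const metric = new Hallucination({ model: "gpt-4o" });
```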
Using LanguageModel Instances
For advanced scenarios, use LanguageModel instances from Vercel AI SDK:
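A sketch of the same metric configured with an explicit Vercel AI SDK model (assuming the metric's model option accepts a LanguageModel):

```typescript
import { openai } from "@ai-sdk/openai";
import { Hallucination } from "opik";

// Create a pre-configured Vercel AI SDK LanguageModel instance...
const model = openai("gpt-4o");

// ...and pass it anywhere Opik accepts a model.
const metric = new Hallucination({ model });
```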
Generation Parameters
Parameters for Metrics
All LLM Judge metrics support these generation parameters directly in the constructor:
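For instance (temperature and seed shown; treat the exact parameter set as defined by your SDK version):

```typescript
import { AnswerRelevance } from "opik";

const metric = new AnswerRelevance({
  model: "gpt-4o",
  temperature: 0.0, // deterministic judgments
  seed: 42,         // reproducible sampling where the provider supports it
});
```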
For advanced generation parameters, use modelSettings:
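A sketch using modelSettings to forward Vercel AI SDK call settings (the field names follow the AI SDK; exactly which ones Opik forwards is an assumption here):

```typescript
import { Hallucination } from "opik";

const metric = new Hallucination({
  model: "gpt-4o",
  modelSettings: {
    topP: 0.9,
    frequencyPenalty: 0.2,
    presencePenalty: 0.1,
  },
});
```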
Parameters for evaluatePrompt
The evaluatePrompt function supports only temperature and seed:
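For example (the getOrCreateDataset helper and the mustache-style message template are assumptions patterned on the broader SDK API):

```typescript
import { Opik, evaluatePrompt } from "opik";

const client = new Opik();
const dataset = await client.getOrCreateDataset("my-dataset");

await evaluatePrompt({
  dataset,
  messages: [{ role: "user", content: "Summarize: {{input}}" }],
  model: "gpt-4o",
  temperature: 0.2, // only temperature...
  seed: 42,         // ...and seed are supported here
});
```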
Note: For full control over all Vercel AI SDK parameters, create a LanguageModel instance with your desired configuration and pass it to the model parameter. See Using LanguageModel Instances below.
Supported Providers
OpenAI
OpenAI models are supported through the @ai-sdk/openai package.
Example model IDs:
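A few commonly used identifiers (non-exhaustive):

```
gpt-4o
gpt-4o-mini
gpt-4-turbo
gpt-3.5-turbo
```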
Usage:
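Either form works (the metric shown is illustrative):

```typescript
import { openai } from "@ai-sdk/openai";
import { Hallucination } from "opik";

// By model ID string (provider inferred from the ID)...
const byId = new Hallucination({ model: "gpt-4o-mini" });

// ...or with an explicit @ai-sdk/openai instance.
const byInstance = new Hallucination({ model: openai("gpt-4o-mini") });
```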
For a complete list of available models, see the Vercel AI SDK OpenAI provider documentation.
Anthropic
Anthropic’s Claude models are supported through the @ai-sdk/anthropic package.
Example model IDs:
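For instance (non-exhaustive):

```
claude-3-5-sonnet-latest
claude-3-5-haiku-latest
claude-3-opus-latest
```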
Usage:
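Along the same lines as the OpenAI example (AnswerRelevance is illustrative):

```typescript
import { anthropic } from "@ai-sdk/anthropic";
import { AnswerRelevance } from "opik";

// String IDs work too; the explicit instance form is shown here.
const metric = new AnswerRelevance({
  model: anthropic("claude-3-5-sonnet-latest"),
});
```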
For a complete list of available models, see the Vercel AI SDK Anthropic provider documentation.
Google Gemini
Google’s Gemini models are supported through the @ai-sdk/google package.
Example model IDs:
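For instance (non-exhaustive):

```
gemini-1.5-pro
gemini-1.5-flash
gemini-2.0-flash
```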
Usage:
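For example (Moderation is illustrative):

```typescript
import { google } from "@ai-sdk/google";
import { Moderation } from "opik";

const metric = new Moderation({
  model: google("gemini-1.5-flash"),
});
```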
For a complete list of available models, see the Vercel AI SDK Google provider documentation.
Using Models in Opik
Using LanguageModel Instances
For advanced scenarios requiring full Vercel AI SDK features (such as structured outputs, custom headers, or provider-specific parameters), create LanguageModel instances directly:
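A sketch using createOpenAI to build a customized provider; the baseURL and header values are placeholders:

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { Hallucination } from "opik";

// A provider instance with non-default configuration,
// e.g. routing requests through an internal gateway.
const customOpenAI = createOpenAI({
  baseURL: "https://my-gateway.example.com/v1", // placeholder
  headers: { "x-team": "evals" },               // placeholder
});

const metric = new Hallucination({ model: customOpenAI("gpt-4o") });
```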
This approach gives you full control over Vercel AI SDK parameters that aren’t exposed through Opik’s simple interface.
Using Models with Metrics
LLM Judge metrics accept model configuration:
With Model ID String
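For example:

```typescript
import { Hallucination } from "opik";

const metric = new Hallucination({
  model: "claude-3-5-sonnet-latest",
  temperature: 0.0,
});
```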
With LanguageModel Instance
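For example:

```typescript
import { anthropic } from "@ai-sdk/anthropic";
import { Hallucination } from "opik";

const metric = new Hallucination({
  model: anthropic("claude-3-5-sonnet-latest"),
});
```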
Custom Model Implementation
For unsupported providers, implement the OpikBaseModel interface:
OpikBaseModel Interface
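As a rough sketch of its shape (the member names below are assumptions patterned on Opik's Python base model; consult the SDK's exported types for the authoritative definition):

```typescript
// Assumed shape, for illustration only.
interface OpikBaseModel {
  // Identifier reported in traces and experiment metadata.
  modelName: string;

  // Generate a plain-text completion for a prompt.
  generateString(input: string): Promise<string>;

  // Return the provider's raw response for metrics that need it.
  generateProviderResponse(messages: unknown[]): Promise<unknown>;
}
```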
Example Implementation
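An illustrative implementation wrapping a hypothetical HTTP endpoint (the URL and response fields are placeholders):

```typescript
class MyCustomModel implements OpikBaseModel {
  modelName = "my-custom-model";

  async generateString(input: string): Promise<string> {
    // Placeholder endpoint; substitute your provider's API.
    const response = await fetch("https://api.example.com/v1/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: input }),
    });
    if (!response.ok) {
      throw new Error(`Generation failed: ${response.status}`);
    }
    const data = await response.json();
    return data.text; // placeholder response field
  }

  async generateProviderResponse(messages: unknown[]): Promise<unknown> {
    // Simplest option: reuse generateString and wrap the result.
    const text = await this.generateString(JSON.stringify(messages));
    return { text };
  }
}
```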
Model Resolution
The SDK automatically resolves models in this order:
- If a string is provided: the SDK auto-detects the provider and creates the appropriate model
- If a LanguageModel is provided: the instance is used directly
- If an OpikBaseModel is provided: the custom implementation is used
- If undefined: defaults to "gpt-4o"
Best Practices
1. Use Model ID Strings for Simplicity
For most use cases, use model ID strings directly:
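For example (again using the illustrative Hallucination metric):

```typescript
import { Hallucination } from "opik";

// One line; the SDK resolves the provider and sensible defaults for you.
const metric = new Hallucination({ model: "gpt-4o-mini" });
```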
The Opik SDK handles model configuration internally for optimal evaluation performance.
2. Match Model Capabilities to Task
Choose models based on task requirements:
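For instance (the model choices here are indicative, not prescriptive):

```typescript
import { AnswerRelevance, Moderation } from "opik";

// Nuanced, multi-step judging: prefer a stronger model.
const nuancedJudge = new AnswerRelevance({ model: "gpt-4o" });

// High-volume, simpler checks: a faster, cheaper model is usually enough.
const bulkJudge = new Moderation({ model: "gpt-4o-mini" });
```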
3. Use Different Models for Tasks and Metrics
Optimize costs by using different models:
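A sketch, assuming evaluatePrompt accepts scoring metrics the way evaluate does (the scoringMetrics name is an assumption):

```typescript
import { Opik, evaluatePrompt, Hallucination } from "opik";

const client = new Opik();
const dataset = await client.getOrCreateDataset("my-dataset");

await evaluatePrompt({
  dataset,
  messages: [{ role: "user", content: "{{input}}" }],
  model: "gpt-4o-mini", // a cheap model generates candidate outputs...
  scoringMetrics: [
    new Hallucination({ model: "gpt-4o" }), // ...a stronger model judges them
  ],
});
```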
4. Configure API Keys
Set up environment variables for each provider:
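The variable names below are the defaults read by the Vercel AI SDK providers:

- OPENAI_API_KEY - OpenAI models
- ANTHROPIC_API_KEY - Anthropic models
- GOOGLE_GENERATIVE_AI_API_KEY - Google Gemini models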
5. Handle Rate Limits
Use appropriate worker counts to avoid rate limits:
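Illustrative only; the concurrency option name below is hypothetical, so check the evaluation options exposed by your SDK version:

```typescript
import { Opik, evaluatePrompt } from "opik";

const client = new Opik();
const dataset = await client.getOrCreateDataset("my-dataset");

await evaluatePrompt({
  dataset,
  messages: [{ role: "user", content: "{{input}}" }],
  model: "gpt-4o-mini",
  nbWorkers: 2, // hypothetical option name: limits concurrent LLM calls
});
```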
Troubleshooting
API Key Not Found
Make sure the provider's environment variable (e.g., OPENAI_API_KEY) is set in the process running the evaluation; the Vercel AI SDK providers read their keys from the environment by default.
Model Not Supported
Check the model ID against the provider's current model list, or pass a LanguageModel instance (or an OpikBaseModel implementation for unsupported providers) instead of a string.
Rate Limit Errors
Lower the evaluation concurrency, switch to a model with higher rate limits, or request a quota increase from your provider.
See Also
- evaluatePrompt Function - Using models with prompt evaluation
- Metrics - Using models with LLM Judge metrics
- Vercel AI SDK Documentation - Full LanguageModel documentation