Manage datasets
Datasets can be used to track test cases you would like to evaluate your LLM on. Each dataset item is a dictionary with arbitrary key-value pairs. When getting started, we recommend including an `input` field and an optional `expected_output` field, for example. These datasets can be created from:
- Python SDK: You can use the Python SDK to create a dataset and add items to it.
- TypeScript SDK: You can use the TypeScript SDK to create a dataset and add items to it.
- Traces table: You can add existing logged traces (from a production application for example) to a dataset.
- The Opik UI: You can manually create a dataset and add items to it.
Once a dataset has been created, you can run Experiments on it. Each Experiment will evaluate an LLM application based on the test cases in the dataset using an evaluation metric and report the results back to the dataset.
Creating a dataset via the UI
The simplest and fastest way to create a dataset is directly in the Opik UI. This is ideal for quickly bootstrapping datasets from CSV files without needing to write any code.
Steps:
- Navigate to Evaluation > Datasets in the Opik UI.
- Click Create new dataset.
- In the pop-up modal:
- Provide a name and an optional description
- Optionally, upload a CSV file with your data
- Click Create dataset.

If you need to create a dataset with more than 1,000 rows, you can use the SDK.
The UI dataset creation has some limitations:
- CSV uploads are limited to 1,000 rows via the UI.
- No support for nested JSON structures in the CSV itself.
For datasets requiring rich metadata, complex schemas, or programmatic control, use the SDK instead (see the next section).
Creating a dataset using the SDK
You can create a dataset and log items to it using the `get_or_create_dataset` method:
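A minimal sketch (the dataset name and description are illustrative, and the client assumes your Opik credentials are already configured):

```python
import opik

# Assumes Opik credentials/workspace are already configured (e.g. via `opik configure`)
client = opik.Opik()

# Returns the existing dataset if one with this name already exists
dataset = client.get_or_create_dataset(
    name="my-evaluation-dataset",
    description="Test cases for my LLM application",
)
```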
If a dataset with the given name already exists, the existing dataset will be returned.
Insert items
Inserting dictionary items
You can insert items into a dataset using the `insert` method:
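For example, using the `dataset` object returned above (the fields shown are illustrative; items can contain any key-value pairs):

```python
dataset.insert([
    {"input": "What is the capital of France?", "expected_output": "Paris"},
    {"input": "What is the capital of Germany?", "expected_output": "Berlin"},
])
```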
Opik automatically deduplicates items that are inserted into a dataset when using the Python SDK. This means that you can insert the same item multiple times without duplicating it in the dataset. Combined with the `get_or_create_dataset` method, this means you can use the SDK to manage your datasets in a “fire and forget” manner.
Once the items have been inserted, you can view them in the Opik UI:

Inserting items from a JSONL file
You can also insert items from a JSONL file:
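A sketch assuming the SDK's `read_jsonl_from_file` helper and a local `items.jsonl` file with one JSON object per line (the file name is illustrative):

```python
import json

# Using the SDK helper: each line of the JSONL file becomes one dataset item
dataset.read_jsonl_from_file("items.jsonl")

# Equivalent approach using only the generic insert method
with open("items.jsonl") as f:
    dataset.insert([json.loads(line) for line in f if line.strip()])
```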
Inserting items from a Pandas DataFrame
You can also insert items from a Pandas DataFrame:
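A sketch assuming the SDK's `insert_from_pandas` method; the DataFrame contents are illustrative:

```python
import pandas as pd

df = pd.DataFrame([
    {"input": "What is the capital of France?", "expected_output": "Paris"},
    {"input": "What is the capital of Italy?", "expected_output": "Rome"},
])

# Each DataFrame row becomes one dataset item
dataset.insert_from_pandas(dataframe=df)
```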
Deleting items
You can delete items in a dataset using the `delete` method:
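A sketch; the item IDs below are placeholders you would look up in the Opik UI or from the downloaded dataset items:

```python
# Delete specific items by their IDs (placeholder values shown)
dataset.delete(items_ids=["<dataset-item-id-1>", "<dataset-item-id-2>"])
```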
Downloading a dataset from Opik
You can download a dataset from Opik using the `get_dataset` method:
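A minimal sketch (the dataset name is illustrative; the `to_pandas` and `to_json` conversions are assumptions about the SDK surface used only for local inspection):

```python
import opik

client = opik.Opik()
dataset = client.get_dataset(name="my-evaluation-dataset")

# Convert the downloaded items for local inspection
df = dataset.to_pandas()
items_json = dataset.to_json()
```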
Expanding a dataset with AI
Dataset expansion allows you to use AI to generate additional synthetic samples based on your existing dataset. This is particularly useful when you have a small dataset and want to create more diverse test cases to improve your evaluation coverage.
The AI analyzes the patterns in your existing data and generates new samples that follow similar structures while introducing variations. This helps you:
- Increase dataset size for more comprehensive evaluation
- Create edge cases and variations you might not have considered
- Improve model robustness by testing against diverse inputs
- Scale your evaluation without manual data creation
How to expand a dataset
To expand a dataset with AI:
- Navigate to your dataset in the Opik UI (Evaluation > Datasets > [Your Dataset])
- Click the “Expand with AI” button in the dataset view
- Configure the expansion settings:
- Model: Choose the LLM model to use for generation (supports GPT-4, GPT-5, Claude, and other models)
- Sample Count: Specify how many new samples to generate (1-100)
- Preserve Fields: Select which fields from your original data to keep unchanged
- Variation Instructions: Provide specific guidance on how to vary the data (e.g., “Create variations that test edge cases” or “Generate examples with different complexity levels”)
- Custom Prompt: Optionally provide a custom prompt template instead of the auto-generated one
- Start the expansion - The AI will analyze your data and generate new samples
- Review the results - New samples will be added to your dataset and can be reviewed, edited, or removed as needed

Configuration options
Sample Count: Start with a smaller number (10-20) to review the quality before generating larger batches.
Preserve Fields: Use this to maintain consistency in certain fields while allowing variation in others. For example, preserve the `category` field while varying `input` and `expected_output`.
Variation Instructions: Provide specific guidance such as:
- “Create variations with different difficulty levels”
- “Generate edge cases and error scenarios”
- “Add examples with different input formats”
- “Include multilingual variations”
Best practices
- Start small: Generate 10-20 samples first to evaluate quality before scaling up
- Review generated content: Always review AI-generated samples for accuracy and relevance
- Use variation instructions: Provide clear guidance on the type of variations you want
- Preserve key fields: Use field preservation to maintain important categorizations or metadata
- Iterate and refine: Use the custom prompt option to fine-tune generation for your specific needs
Dataset expansion works best when you have at least 5-10 high-quality examples in your original dataset. The AI uses these examples to understand the patterns and generate similar but varied content.