Contributing to the Agent Optimizer SDK

The Agent Optimizer SDK is currently in public preview. Some features, including tests and benchmarks, may be broken or may not work as expected. We’re actively working on improving stability and reliability.

This guide will help you get started with contributing to the Agent Optimizer SDK, our tool for optimizing prompts and improving model performance.

Before you start, please review our general Contribution Overview and the Contributor License Agreement (CLA).

Project Structure

The Agent Optimizer is located in the sdks/opik_optimizer directory. Here’s an overview of the key components:

  • src/: Main source code
  • benchmarks/: Benchmarking tools and results
  • notebooks/: Example notebooks and tutorials
  • tests/: Test files
  • docs/: Additional documentation
  • scripts/: Utility scripts
  • setup.py: Package configuration
  • requirements.txt: Python dependencies

Setup

1. Create a virtual environment

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

2. Install dependencies

   ```bash
   cd sdks/opik_optimizer
   pip install -r requirements.txt
   pip install -e .
   ```

3. Run the tests

   ```bash
   pytest tests/
   ```

Note: The tests are still in development and therefore likely to be unstable.

Development Workflow

1. Create a new branch for your changes
2. Make your changes
3. Add tests for new functionality
4. Run the test suite
5. Run benchmarks if applicable
6. Submit a pull request
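The branch-to-PR loop above can be sketched in the shell. The branch name and commit message below are illustrative, and the commands are shown in a throwaway repository so the sketch is self-contained; in practice you would run them inside your clone of the repository:

```shell
# Demo in a throwaway repo; replace with your clone of the repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "Your Name"

# 1. Create a branch named after your change (name is illustrative)
git checkout -q -b feat/my-optimizer-change

# 2-3. Make your changes and add tests for them
echo "change" > example.txt

# 4. Run the test suite before committing: pytest tests/

git add -A
git commit -q -m "Describe the change"
git branch --show-current   # feat/my-optimizer-change
```

From a real clone, the final steps would be `git push origin feat/my-optimizer-change` followed by opening a pull request on GitHub.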

Testing

We use pytest for testing. When adding new features:

1. Write unit tests in the tests/ directory
2. Ensure all tests pass with `pytest tests/`
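A minimal unit test for this layout might look like the following. `normalize_prompt` is a hypothetical helper standing in for real SDK code, and the file path is illustrative; pytest discovers any `test_*` function in the tests/ directory and runs its assertions:

```python
# tests/test_prompt_utils.py (illustrative path)

def normalize_prompt(text: str) -> str:
    """Collapse runs of whitespace in a prompt (stand-in for real SDK code)."""
    return " ".join(text.split())


def test_normalize_prompt_collapses_whitespace():
    # pytest collects this function automatically and reports the assert
    assert normalize_prompt("  answer \n briefly ") == "answer briefly"


if __name__ == "__main__":
    # Allows a quick sanity run without pytest installed
    test_normalize_prompt_collapses_whitespace()
    print("ok")
```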

Benchmarking

The benchmarking tools are still in development. Results may vary and some features might not work as expected.

The optimizer includes benchmarking tools to measure performance improvements:

1. Run the benchmarks

   ```bash
   cd benchmarks
   python run_benchmark.py
   ```

2. View the results in the benchmark_results/ directory
3. Add new benchmarks for new optimization strategies

Documentation

When adding new features or making changes:

1. Update the README.md
2. Add docstrings for new functions and classes
3. Include examples in the notebooks/ directory
4. Update the main documentation if necessary. See the Documentation Guide for details.
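For docstrings, Google-style `Args`/`Returns` sections are one common convention; the function below is purely illustrative and not part of the SDK's API:

```python
def score_prompt(prompt: str, weight: float = 1.0) -> float:
    """Return a toy quality score for a prompt.

    Args:
        prompt: The prompt text to score.
        weight: Multiplier applied to the raw score.

    Returns:
        The weighted score (here, simply the word count times ``weight``).
    """
    return len(prompt.split()) * weight
```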

Code Style

We follow PEP 8 guidelines. Before submitting a PR:

1. Run flake8 to check for style issues
2. Fix any linting errors
3. Ensure your code follows Python best practices
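flake8 reads its settings from files such as setup.cfg or .flake8. If you want to experiment locally, a minimal configuration might look like the following; the values are illustrative defaults, not the project's actual settings:

```ini
[flake8]
max-line-length = 88
extend-ignore = E203
exclude = .git,venv,build
```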

Pull Request Process

1. Fork the repository
2. Create your feature branch
3. Make your changes
4. Run tests and benchmarks
5. Submit a pull request

Your PR should:

  • Have a clear description of the changes
  • Include tests for new functionality
  • Pass all CI checks
  • Follow the project’s coding standards
  • Include benchmark results if applicable

Notebooks and Examples

The notebooks/ directory contains examples and tutorials. When adding new features:

1. Create a new notebook demonstrating the feature
2. Include clear explanations and comments
3. Show both basic and advanced usage
4. Add performance comparisons if relevant

Need Help?

If you need help or have questions:

  • Open an issue on GitHub
  • Join our Comet Chat community
  • Check the existing documentation and notebooks

Remember to review our Contributor License Agreement (CLA) before contributing.