MetaPrompt Optimization

The MetaPromptOptimizer specializes in meta-prompt optimization: it improves the structure and effectiveness of prompts by systematically analyzing and refining prompt templates, instructions, and examples.

How It Works

  1. Template Analysis

    • Deconstructs prompts into components
    • Identifies key structural elements
    • Analyzes component relationships
  2. Instruction Optimization

    • Refines task instructions
    • Improves clarity and specificity
    • Enhances task understanding
  3. Example Selection

    • Evaluates example relevance
    • Optimizes example ordering
    • Balances diversity and relevance
  4. Structural Refinement

    • Improves prompt organization
    • Enhances readability
    • Optimizes information flow
  5. Validation and Testing

    • Multi-metric evaluation
    • A/B testing of variations
    • Performance tracking
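
The workflow above boils down to a generate-and-score loop: produce candidate prompt variations, evaluate each, and keep the best. The sketch below is a toy illustration of that loop, not the MetaPromptOptimizer internals — `generate_variations` and `evaluate` are hypothetical stand-ins for LLM-driven rewriting and metric-based scoring against a dataset.

```python
# Toy sketch of the variation-testing loop described above.
# generate_variations and evaluate are hypothetical stand-ins:
# a real run rewrites prompts with an LLM and scores model
# outputs against a dataset metric.

def generate_variations(prompt: str, max_variations: int) -> list[str]:
    """Stand-in for LLM-driven rewriting: prepend simple refinements."""
    prefixes = ["", "Be concise. ", "Think step by step. ", "Answer precisely. "]
    return [p + prompt for p in prefixes[:max_variations]]

def evaluate(prompt: str) -> float:
    """Toy metric favoring shorter prompts; a real metric would
    compare model outputs to reference answers."""
    return 1.0 / (1 + len(prompt.split()))

def optimize(prompt: str, max_variations: int = 3) -> str:
    """Generate candidate prompts, score each, keep the best."""
    candidates = generate_variations(prompt, max_variations)
    return max(candidates, key=evaluate)

best = optimize("Provide an answer to the question.")
```

In the real optimizer, the equivalents of `max_variations` and the metric are controlled through the configuration options shown below.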

Configuration Options

Basic Configuration

from opik_optimizer import MetaPromptOptimizer

prompter = MetaPromptOptimizer(
    model="openai/gpt-4",  # or "azure/gpt-4"
    project_name="my-project",
    temperature=0.1,
    max_tokens=5000,
    num_threads=8,
    seed=42
)

Advanced Configuration

prompter = MetaPromptOptimizer(
    model="openai/gpt-4",
    project_name="my-project",
    temperature=0.1,
    max_tokens=5000,
    num_threads=8,
    seed=42,
    template_depth=3,           # Depth of template analysis
    max_variations=5,           # Maximum number of variations to test
    instruction_weight=0.6,     # Weight of instruction optimization
    example_weight=0.4,         # Weight of example optimization
    readability_threshold=0.8   # Minimum readability score
)

Example Usage

from opik_optimizer import MetaPromptOptimizer
from opik.evaluation.metrics import LevenshteinRatio
from opik_optimizer import (
    MetricConfig,
    TaskConfig,
    from_llm_response_text,
    from_dataset_field,
)
from opik_optimizer.demo import get_or_create_dataset

# Initialize optimizer
optimizer = MetaPromptOptimizer(
    model="openai/gpt-4",  # or "azure/gpt-4"
    temperature=0.1,
    max_tokens=5000,
    num_threads=8,
    seed=42
)

# Prepare dataset
dataset = get_or_create_dataset("hotpot-300")

# Define metric and task configuration (see docs for more options)
metric_config = MetricConfig(
    metric=LevenshteinRatio(),
    inputs={
        "output": from_llm_response_text(),  # Model's output
        "reference": from_dataset_field(name="answer"),  # Ground truth
    }
)
task_config = TaskConfig(
    type="text_generation",
    instruction_prompt="Provide an answer to the question.",
    input_dataset_fields=["question"],
    output_dataset_field="answer",
    use_chat_prompt=True
)

# Run optimization
results = optimizer.optimize_prompt(
    dataset=dataset,
    metric_config=metric_config,
    task_config=task_config
)

# Access results
results.display()

Model Support

The MetaPromptOptimizer supports all models available through LiteLLM. For a complete list of supported models and providers, see the LiteLLM Integration documentation.

Common Providers

  • OpenAI (gpt-4, gpt-3.5-turbo, etc.)
  • Azure OpenAI
  • Anthropic (Claude)
  • Google (Gemini)
  • Mistral
  • Cohere

Configuration Example

optimizer = MetaPromptOptimizer(
    model="google/gemini-pro",  # or any LiteLLM supported model
    project_name="my-project",
    temperature=0.1,
    max_tokens=5000
)

Best Practices

  1. Template Design

    • Start with clear structure
    • Use consistent formatting
    • Include placeholders for variables
  2. Instruction Writing

    • Be specific and clear
    • Use active voice
    • Include success criteria
  3. Example Selection

    • Choose diverse examples
    • Ensure relevance to task
    • Balance complexity levels
  4. Optimization Strategy

    • Focus on one component at a time
    • Track changes systematically
    • Validate improvements
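
The template-design practices above can be made concrete with a small example: a prompt with a clear structure, consistent example formatting, and explicit placeholders. The field names and the `render` helper below are illustrative, not part of the library.

```python
# Illustrative prompt template following the practices above:
# clear structure, consistent formatting, explicit placeholders.
# The field names and render() helper are hypothetical examples.

TEMPLATE = (
    "Task: {instruction}\n"
    "Success criteria: answer in a single, specific sentence.\n"
    "\n"
    "Examples:\n"
    "{examples}\n"
    "\n"
    "Question: {question}\n"
    "Answer:"
)

def render(instruction: str, examples: list[tuple[str, str]], question: str) -> str:
    """Fill the template, formatting few-shot examples consistently."""
    example_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return TEMPLATE.format(
        instruction=instruction, examples=example_block, question=question
    )

prompt = render(
    "Provide an answer to the question.",
    [("What is 2 + 2?", "4")],
    "What is 3 + 3?",
)
```

Keeping the instruction, examples, and question in fixed, labeled slots makes it easy for the optimizer to refine one component at a time and to track which change produced an improvement.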

Research and References