start_as_current_span

opik.start_as_current_span(name: str, type: Literal['general', 'tool', 'llm', 'guardrail'] = 'general', input: Dict[str, Any] | None = None, output: Dict[str, Any] | None = None, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, project_name: str | None = None, model: str | None = None, provider: str | None = None, flush: bool = False, **kwargs: Dict[str, Any]) → Generator[SpanData, Any, None]

A context manager for starting and managing a span and its parent trace.

This function creates a span and, if one does not already exist, a parent trace with the given input parameters; it processes outputs, captures errors, and ensures the span/trace data is saved and flushed at the end of its lifecycle. It also supports distributed tracing headers and lets additional metadata, tags, and other contextual information be attached.

Parameters:
  • name – The name of the span to create.

  • type – The type of the span. Defaults to 'general'.

  • input – A dictionary representing the input data associated with the span.

  • output – A dictionary representing the output data associated with the span.

  • tags – A list of tags to associate with the span.

  • metadata – A dictionary of additional metadata to attach to the span or trace.

  • project_name – The name of the project associated with this span.

  • model – The model name related to the span or trace.

  • provider – The name of the provider associated with the span or trace (for example, an LLM provider).

  • flush – Whether to flush the client data after the span is created and processed.

  • **kwargs (Dict[str, Any]) – Additional parameters passed to the context manager, such as opik_distributed_trace_headers for distributed tracing.

Yields:

The SpanData object for the created span, available for the duration of the context manager's lifecycle.

Examples

Basic usage

import opik

with opik.start_as_current_span("my_operation") as span:
    # Your code here
    # metadata may be None if it was not passed in, so assign a dict
    # rather than indexing into it
    span.metadata = {"custom_key": "custom_value"}
    print("Executing operation...")

With input and output data

import opik

with opik.start_as_current_span(
    name="llm_completion",
    type="llm",
    input={"prompt": "What is the capital of France?"},
    output={"response": "The capital of France is Paris."},
    tags=["llm", "completion"],
    metadata={"model": "gpt-3.5-turbo"}
) as span:
    # Your LLM code here
    pass
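
With dedicated model and provider parameters

The signature also exposes dedicated model and provider parameters for LLM spans, which can be used instead of placing the model name in metadata. The provider value below is only an illustrative example:

import opik

with opik.start_as_current_span(
    name="llm_completion",
    type="llm",
    input={"prompt": "What is the capital of France?"},
    model="gpt-3.5-turbo",
    provider="openai"
) as span:
    # Your LLM call here
    pass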

With distributed tracing

import json
import os

import opik

# os.environ.get returns a string, so the headers dict is transported
# here as JSON; the upstream service produces it (see the sketch after
# this example)
raw_headers = os.environ.get("OPIK_DISTRIBUTED_TRACE_HEADERS")
distributed_trace_headers = json.loads(raw_headers) if raw_headers else None

with opik.start_as_current_span(
    "process_request",
    opik_distributed_trace_headers=distributed_trace_headers
) as span:
    # Your code here
    pass
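
Producing distributed trace headers

On the upstream side, the headers can be captured from the active trace context. A minimal sketch, assuming opik_context.get_distributed_trace_headers() returns a JSON-serializable dict when called inside an active span:

import json

import opik
from opik import opik_context

with opik.start_as_current_span("upstream_operation") as span:
    # Capture the current trace context as a headers dict
    headers = opik_context.get_distributed_trace_headers()
    # Serialize and pass it to the downstream service, e.g. via an HTTP
    # header or an environment variable
    serialized = json.dumps(headers)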

Error handling

import opik

try:
    with opik.start_as_current_span("risky_operation") as span:
        # Code that might fail
        raise ValueError("Something went wrong")
except ValueError as e:
    # The span will automatically capture error information
    print(f"Operation failed: {e}")
    # Error details are stored in span.error_info
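
Flushing in short-lived processes

For scripts and batch jobs that exit right after the traced work finishes, flush=True flushes the client once the span is processed, so buffered data is sent before the process ends:

import opik

# flush=True flushes the client after the span is created and processed
with opik.start_as_current_span("batch_job", flush=True) as span:
    # Your code here
    pass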