Autogen

Autogen is a framework for building AI agents and applications, built and maintained by Microsoft.

Opik’s integration with Autogen relies on Autogen’s built-in logging framework, which is based on OpenTelemetry. You can learn more about Opik’s OpenTelemetry features in our Get Started guide.

Getting started

To use the Autogen integration with Opik, you will need to have the following packages installed:

$pip install -U "autogen-agentchat" "autogen-ext[openai]" opik opentelemetry-sdk opentelemetry-exporter-otlp opentelemetry-instrumentation-openai
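
To check that everything is installed, you can print each package's version (a minimal sketch using only the standard library):

# Print the installed version of each package used in this guide.
from importlib.metadata import version

for pkg in [
    "autogen-agentchat",
    "autogen-ext",
    "opik",
    "opentelemetry-sdk",
    "opentelemetry-instrumentation-openai",
]:
    print(pkg, version(pkg))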

In addition, you will need to set some environment variables to configure the OpenTelemetry integration. If you are using Opik Cloud, set the following:

$export OTEL_EXPORTER_OTLP_ENDPOINT=https://www.comet.com/opik/api/v1/private/otel
$export OTEL_EXPORTER_OTLP_HEADERS='Authorization=<your-api-key>,Comet-Workspace=default'

To log traces to a specific project, you can add the projectName parameter to the OTEL_EXPORTER_OTLP_HEADERS environment variable:

$export OTEL_EXPORTER_OTLP_HEADERS='Authorization=<your-api-key>,Comet-Workspace=default,projectName=<your-project-name>'

You can also update the Comet-Workspace parameter to a different value if you would like to log the data to a different workspace.
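
If you prefer to configure the exporter from Python rather than the shell, you can set the same variables with os.environ before the exporter is created (a minimal sketch; the placeholder values are the same as above):

import os

# Must run before OTLPSpanExporter() is instantiated.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://www.comet.com/opik/api/v1/private/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    "Authorization=<your-api-key>,Comet-Workspace=default,projectName=<your-project-name>"
)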

Using Opik with Autogen

The Autogen library includes examples of how to integrate with OpenTelemetry-compatible tools; you can learn more here:

  1. If you are using autogen-core
  2. If you are using autogen_agentchat

In the example below, we will focus on the autogen_agentchat library, which is a little easier to use:

# First we will configure OpenTelemetry
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor


def setup_telemetry():
    """Configure OpenTelemetry with an HTTP exporter."""
    # Create a resource with the service name and other metadata
    resource = Resource.create({
        "service.name": "autogen-demo",
        "service.version": "1.0.0",
        "deployment.environment": "development",
    })

    # Create a TracerProvider with the resource
    provider = TracerProvider(resource=resource)

    # Create a BatchSpanProcessor with an OTLPSpanExporter
    processor = BatchSpanProcessor(OTLPSpanExporter())
    provider.add_span_processor(processor)

    # Set the global TracerProvider
    trace.set_tracer_provider(provider)

    # Instrument OpenAI calls
    OpenAIInstrumentor().instrument()


# Now we can define and call the agent
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Define a model client. You can use another model client that implements
# the `ChatCompletionClient` interface.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    # api_key="YOUR_API_KEY",
)


# Define a simple function tool that the agent can use.
# For this example, we use a fake weather tool for demonstration purposes.
async def get_weather(city: str) -> str:
    """Get the weather for a given city."""
    return f"The weather in {city} is 73 degrees and Sunny."


# Define an AssistantAgent with the model, tool, system message, and reflection
# enabled. The system message instructs the agent via natural language.
agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather],
    system_message="You are a helpful assistant.",
    reflect_on_tool_use=True,
    model_client_stream=True,  # Enable streaming tokens from the model client.
)


# Run the agent and stream the messages to the console.
async def main() -> None:
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("agent_conversation") as span:
        task = "What is the weather in New York?"

        span.set_attribute("input", task)  # Manually log the question
        res = await Console(agent.run_stream(task=task))

        # Manually log the response
        span.set_attribute("output", res.messages[-1].content)

        # Close the connection to the model client.
        await model_client.close()


if __name__ == "__main__":
    setup_telemetry()
    asyncio.run(main())
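
Note that BatchSpanProcessor exports spans in batches on a background thread, so a script that exits immediately can drop the final trace. If traces seem to be missing, a minimal sketch of an explicit flush (assuming the SDK TracerProvider configured in setup_telemetry above) is:

from opentelemetry import trace

# Flush any queued spans before the process exits.
provider = trace.get_tracer_provider()
provider.force_flush()  # blocks until pending spans are exported
provider.shutdown()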

Once the integration is set up, you will see the trace in Opik: