Batching and update operations
By default, the Opik Python SDK batches create requests for traces and spans to improve ingestion throughput. While this is the recommended setting for most use cases, it can cause issues when you update a span or trace shortly after creating it.
What can go wrong
When batching is enabled, create requests are queued and sent to the server in batches. If you call .end() or .update() on a span or trace before its batched create request has been flushed, the update may arrive at the server first. Since the server has no record of that span or trace yet, the update is silently dropped and the data is lost.
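A minimal sketch of the failure mode using the low-level client. The exact keyword arguments (name, input, output) are assumptions based on the SDK's public API, and the import is guarded so the sketch stays self-contained:

```python
try:
    from opik import Opik
except ImportError:  # keep the sketch importable without the SDK installed
    Opik = None

def racy_update():
    # Requires `pip install opik` and configured credentials to actually run.
    client = Opik()  # batching is enabled by default
    trace = client.trace(name="llm-call", input={"prompt": "hi"})
    # The create request above is queued in a batch, not sent immediately.
    trace.end(output={"response": "hello"})
    # If the batch has not been flushed yet, this update can reach the
    # server before the create does and will be silently dropped.
```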
This only affects the low-level client API: client.trace(), client.span(), client.update_trace(), client.update_span(), and the .end() and .update() methods on the returned objects. The @track decorator and the start_as_current_span context manager manage the lifecycle automatically and are not affected.
Recommended approaches
Use decorators or context managers (recommended)
The safest way to create and manage spans and traces is through the @track decorator or start_as_current_span context manager. These handle the full lifecycle — creation, data capture, and finalization — in a single operation with no separate update step.
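For example, with the decorator the SDK creates the span, records the function's input and output, and finalizes it when the function returns, so no separate update request is ever sent. The import is guarded with a no-op stand-in so the sketch runs anywhere:

```python
try:
    from opik import track
except ImportError:  # no-op stand-in so the sketch runs without the SDK
    def track(func):
        return func

@track  # creation, input/output capture, and finalization in one step
def summarize(text: str) -> str:
    return text[:50]
```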
Re-send the full payload to overwrite
If you need to use the low-level client API, you can call client.trace() or client.span() again with the same ID and the complete data. The backend treats each create request as an upsert — if a trace or span with that ID already exists, the new payload overwrites the previous one entirely. This avoids the race condition because you are not relying on a separate update arriving after the original create.
The same pattern works for spans: call client.span() again with the same id and trace_id to overwrite the earlier payload.
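A sketch of the upsert pattern for a trace. Passing an explicit id to client.trace() and the id format shown here are assumptions; check the client reference for the id type your SDK version expects:

```python
import uuid

try:
    from opik import Opik
except ImportError:  # keep the sketch importable without the SDK installed
    Opik = None

def create_then_overwrite():
    client = Opik()
    trace_id = str(uuid.uuid4())  # assumed id format; verify for your version

    # Initial create (queued in a batch).
    client.trace(id=trace_id, name="llm-call", input={"prompt": "hi"})

    # Instead of update_trace(), re-send the FULL payload with the same id.
    # The backend upserts: the new payload replaces the earlier one entirely,
    # so the ordering of the two requests no longer matters.
    client.trace(
        id=trace_id,
        name="llm-call",
        input={"prompt": "hi"},
        output={"response": "hello"},
    )
```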
Disable batching
If your workflow requires creating spans and then updating them shortly after (for example, to add output data that is only available after an async operation completes), you can disable batching. This ensures each create request is sent immediately, so subsequent updates will always find the span on the server.
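A sketch of constructing a client with batching disabled. The `_use_batching` constructor flag is an assumption; check your SDK version's client reference for the exact option name:

```python
try:
    from opik import Opik
except ImportError:  # keep the sketch importable without the SDK installed
    Opik = None

def unbatched_client():
    # With batching off, each create request is sent immediately, so a
    # subsequent .end() or .update() always finds the record on the server.
    return Opik(_use_batching=False)  # flag name is an assumption
```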
Disabling batching reduces ingestion throughput. Only use this option if you need to call .end() or .update() shortly after creation.
Suppress the warning
The SDK logs a warning when it detects an update issued shortly after a batched create. If you are calling .end() or .update() well after creation (for example, after a long-running operation completes), the batched create will have already been flushed and there is no risk of data loss. In this case, you can suppress the warning by setting an environment variable:
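The variable name below is a placeholder, shown only to illustrate the shape of the setting; check the Opik configuration reference for the exact name:

```shell
# Placeholder name -- consult the Opik configuration reference for the
# exact environment variable that controls this warning.
export OPIK_SUPPRESS_BATCHING_UPDATE_WARNING=true
```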
Or in your Opik configuration file (~/.opik.config):
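The key name below is a placeholder for illustration; only the INI layout of ~/.opik.config (an [opik] section with key = value pairs) is assumed here:

```
[opik]
# Placeholder key -- consult the Opik configuration reference for the
# exact setting that controls this warning.
suppress_batching_update_warning = true
```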
Use integrations
All Opik integrations (LangChain, LlamaIndex, DSPy, OpenAI, Anthropic, and others) handle span and trace lifecycle internally and are not affected by this issue. If you are using an integration, no changes are needed.