feat(tracing): enhance tracing for synchronous generator functions #500
## Summary

Fixed generator tracing to export concatenated chunks, rather than generator objects, to Openlayer. Previously, when the `@trace` or `@trace_async` decorators were applied to functions that return generators (such as OpenAI streaming calls), Openlayer would receive a useless `<generator object>` instead of the actual concatenated content. This PR implements proper generator tracing that preserves streaming behavior while capturing meaningful trace data.

## Changes
**Added synchronous generator support to the `@trace` decorator**
- `TracedSyncGenerator`: class-based wrapper that delays trace creation until the first iteration
- `inspect.isgeneratorfunction()` detection to handle generator functions separately from regular functions
- `_finalize_sync_generator_step()`: helper function for proper cleanup and trace completion

**Enhanced async generator tracing in the `@trace_async` decorator**
- `TracedAsyncGenerator` infrastructure (already working correctly)

**Improved chunk concatenation and output handling**
- `_join_output_chunks()` to properly concatenate generator outputs

**Comprehensive testing and examples**
- `trace_openai()` function
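The sync-generator wrapper described above might look roughly like the following sketch. The class, decorator, and the delay-until-first-iteration behavior are named in this PR; the bodies below are illustrative assumptions, and the actual Openlayer step bookkeeping (e.g. `_finalize_sync_generator_step()`) is reduced to attribute assignments:

```python
import functools
import inspect


class TracedSyncGenerator:
    """Wraps a generator so the trace starts on first iteration and the
    concatenated chunks (not the generator object) become the trace output."""

    def __init__(self, gen, name):
        self._gen = gen
        self._name = name
        self._chunks = []
        self.trace_started = False
        self.traced_output = None

    def __iter__(self):
        return self

    def __next__(self):
        if not self.trace_started:
            # Delaying trace creation until the first next() call avoids
            # recording traces for generators that are never consumed.
            self.trace_started = True
        try:
            chunk = next(self._gen)
            self._chunks.append(chunk)
            return chunk
        except StopIteration:
            # On exhaustion, record the concatenated content; in the real
            # implementation this is where the trace step would be finalized.
            self.traced_output = "".join(map(str, self._chunks))
            raise


def trace(func):
    """Sketch of the decorator branch: generator functions get the wrapper,
    regular functions pass through unchanged."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        if inspect.isgeneratorfunction(func):
            return TracedSyncGenerator(result, func.__name__)
        return result
    return wrapper
```

Because the wrapper only forwards `next()` calls, the caller still receives chunks one at a time, so streaming behavior is preserved while the concatenated output is captured as a side effect.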
## Context

Users reported that when using `@trace` decorators on OpenAI streaming functions, Openlayer was receiving generator objects instead of the actual AI responses. This made the traces useless for monitoring and analysis. The issue affected both sync and async generators, creating duplicate traces with incomplete data.

- **Before:** Openlayer received `<generator object at 0x...>` as output
- **After:** Openlayer receives a concatenated string like `"Hello world! How can I help you today?"`
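The before/after difference can be reproduced with a plain generator (the streaming function below is a hypothetical stand-in, not the real OpenAI client):

```python
def fake_stream():
    # Hypothetical stand-in for an OpenAI streaming response.
    yield "Hello world! "
    yield "How can I help you today?"


# Before the fix: the trace recorded the generator object itself.
before = repr(fake_stream())  # '<generator object fake_stream at 0x...>'

# After the fix: the consumed chunks are concatenated into the real response.
after = "".join(fake_stream())

assert before.startswith("<generator object")
assert after == "Hello world! How can I help you today?"
```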
## Testing

- `trace_openai()` function

**Key validation results:**

**Real-world testing:**