15 changes: 3 additions & 12 deletions src/strands/telemetry/tracer.py
@@ -667,18 +667,9 @@ def end_agent_span(
         )

         if hasattr(response, "metrics") and hasattr(response.metrics, "accumulated_usage"):
-            accumulated_usage = response.metrics.accumulated_usage
-            attributes.update(
-                {
-                    "gen_ai.usage.prompt_tokens": accumulated_usage["inputTokens"],
-                    "gen_ai.usage.completion_tokens": accumulated_usage["outputTokens"],
-                    "gen_ai.usage.input_tokens": accumulated_usage["inputTokens"],
-                    "gen_ai.usage.output_tokens": accumulated_usage["outputTokens"],
-                    "gen_ai.usage.total_tokens": accumulated_usage["totalTokens"],
-                    "gen_ai.usage.cache_read_input_tokens": accumulated_usage.get("cacheReadInputTokens", 0),
-                    "gen_ai.usage.cache_write_input_tokens": accumulated_usage.get("cacheWriteInputTokens", 0),
-                }
-            )
+            # Attributes removed to prevent double counting in OpenTelemetry backends
+            # Usage metrics are already reported on the child model invocation spans
+            pass
Comment on lines 669 to +672
Member
Have to take a closer look, but if we are removing this then we should just remove the whole `if` condition. And no need for inline comments; this info is more for the PR, I would say.
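If the whole `if` block goes, the tail of the method would reduce to something like the following. This is a minimal, hypothetical sketch, not the real `strands` code: `Tracer`, `_end_span`, and the span object are simplified stand-ins (the span is modeled as a plain dict so the example is self-contained).

```python
class Tracer:
    def end_agent_span(self, span, response=None, error=None):
        attributes = {}
        if response is not None:
            attributes["gen_ai.agent.response"] = str(response)
        # No gen_ai.usage.* attributes here: token counts are reported only
        # on the child model-invocation spans.
        self._end_span(span, attributes, error)

    def _end_span(self, span, attributes, error):
        # Stand-in for setting attributes and ending an OTel span.
        span.update(attributes)


tracer = Tracer()
span = {}
tracer.end_agent_span(span, response="final answer")
```

With the `if` gone entirely, no dead `pass` branch or explanatory inline comment is left behind in the method body.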

@fenil210-cactus Dec 12, 2025
Yes, we can get rid of the `if` and remove the inline comment too!

@rajib76 please check.

Contributor

@poshinchen Dec 12, 2025

Users will not be able to view the accumulated tokens at the span / trace level after removing these attributes; they would have to aggregate the child spans themselves, which I personally don't want to happen. I think there could be a better way to prevent double counting.

Will get back after some investigation.
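One possible middle ground (an assumption on my part, not something decided in this thread) is to keep the aggregate visible on the agent span but outside the `gen_ai.usage.*` semantic-convention namespace, so backends that sum `gen_ai.usage.*` across spans for cost calculation do not count it twice. The `gen_ai.agent.accumulated_*` attribute names below are invented purely for illustration:

```python
def agent_usage_attributes(accumulated_usage: dict) -> dict:
    """Report aggregate token counts on the agent span under hypothetical
    non-semconv names, so usage-summing backends skip them."""
    return {
        "gen_ai.agent.accumulated_input_tokens": accumulated_usage["inputTokens"],
        "gen_ai.agent.accumulated_output_tokens": accumulated_usage["outputTokens"],
        "gen_ai.agent.accumulated_total_tokens": accumulated_usage["totalTokens"],
    }


attrs = agent_usage_attributes(
    {"inputTokens": 100, "outputTokens": 40, "totalTokens": 140}
)
```

This keeps the trace-level view intact while leaving per-call `gen_ai.usage.*` attributes solely on the model-invocation spans.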

@fenil210-cactus Dec 13, 2025

Yes, the cost is actually being calculated twice, and the same data is flowing to the Langfuse dashboard as well.

PFA screenshot. You can see how 0.048 is calculated twice and then shown as 0.096 as the final value. It effectively doubles the cost of every operation. This is a serious concern: people might assume the issue is in their own flow, but in reality the problem is in how the data is sent to Langfuse via OTEL and then counted twice.

[screenshot: Langfuse trace view showing 0.048 counted twice for a total of 0.096]
self._end_span(span, attributes, error)