Exposes average batch metrics at 1, 5 and 15 minutes time window. #18460
Conversation
This pull request does not have a backport label. Could you fix it @andsel? 🙏
run exhaustive test
…can't be created (1 minute and more intervals)
💛 Build succeeded, but was flaky
cc @andsel
current: 78
average:
  lifetime: 115
  1_minute: 120
Any reason why in the api spec we are dropping the last_ prefix? Everywhere in the code these are last_n_minute(s). Seems like consistency would be nice unless there is a good reason not to?
# Enrich byte_size and event_count averages with the last 1, 5, 15 minutes averages if available
Instead of helper methods and multiple calls we could do something like:

[:last_1_minute, :last_5_minutes, :last_15_minutes].each do |window|
  key = window.to_s
  result[:event_count][:average][window] = event_count_average_flow_metric[key].round if event_count_average_flow_metric[key]
  result[:byte_size][:average][window] = byte_size_average_flow_metric[key].round if byte_size_average_flow_metric[key]
end
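As a self-contained illustration of this pattern (the two flow-metric hashes below are hypothetical stand-ins for the real string-keyed metric maps, and the values are made up):

```ruby
# Hypothetical stand-ins for the string-keyed flow-metric maps;
# a window key is absent until that window's average is available.
event_count_average_flow_metric = { "last_1_minute" => 120.4, "last_5_minutes" => 118.9 }
byte_size_average_flow_metric   = { "last_1_minute" => 2048.6 }

result = { event_count: { average: {} }, byte_size: { average: {} } }

# Copy each available windowed average into the API result map, rounded.
[:last_1_minute, :last_5_minutes, :last_15_minutes].each do |window|
  key = window.to_s
  result[:event_count][:average][window] = event_count_average_flow_metric[key].round if event_count_average_flow_metric[key]
  result[:byte_size][:average][window]   = byte_size_average_flow_metric[key].round if byte_size_average_flow_metric[key]
end

p result
```

Note that the `&.` in the original suggestion is redundant once the `if` guard is present, so this sketch drops it; absent windows (here `last_15_minutes`) simply never appear in the result.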
donoghuc left a comment
Should we add a few cases to the integration test? Seems like we have a place where it would be easy to add:
logstash/qa/integration/specs/monitoring_api_spec.rb
Lines 311 to 339 in 3659b6f
Stud.try(max_retry.times, [StandardError, RSpec::Expectations::ExpectationNotMetError]) do
  # node_stats can fail if the stats subsystem isn't ready
  result = logstash_service.monitoring_api.node_stats rescue nil
  expect(result).not_to be_nil
  # we use fetch here since we want failed fetches to raise an exception
  # and trigger the retry block
  batch_stats = result.fetch("pipelines").fetch(pipeline_id).fetch("batch")
  expect(batch_stats).not_to be_nil
  expect(batch_stats["event_count"]).not_to be_nil
  expect(batch_stats["event_count"]["average"]).not_to be_nil
  expect(batch_stats["event_count"]["average"]["lifetime"]).not_to be_nil
  expect(batch_stats["event_count"]["average"]["lifetime"]).to be_a_kind_of(Numeric)
  expect(batch_stats["event_count"]["average"]["lifetime"]).to be > 0
  expect(batch_stats["event_count"]["current"]).not_to be_nil
  expect(batch_stats["event_count"]["current"]).to be >= 0
  expect(batch_stats["byte_size"]).not_to be_nil
  expect(batch_stats["byte_size"]["average"]).not_to be_nil
  expect(batch_stats["byte_size"]["average"]["lifetime"]).not_to be_nil
  expect(batch_stats["byte_size"]["average"]["lifetime"]).to be_a_kind_of(Numeric)
  expect(batch_stats["byte_size"]["average"]["lifetime"]).to be > 0
  expect(batch_stats["byte_size"]["current"]).not_to be_nil
  expect(batch_stats["byte_size"]["current"]).to be >= 0
end
end
end
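One possible shape for the extra window cases, sketched here as plain Ruby rather than RSpec and run against a made-up sample response. It assumes the windowed keys use the un-prefixed names (`1_minute`, `5_minutes`, `15_minutes`) and may be absent until enough data has accrued, so each check is conditional on the key being present:

```ruby
# Hypothetical sample of the "batch" section after this PR (values made up).
batch_stats = {
  "event_count" => {
    "current" => 78,
    "average" => { "lifetime" => 115, "1_minute" => 120 }
  },
  "byte_size" => {
    "current" => 9_216,
    "average" => { "lifetime" => 8_300, "1_minute" => 8_950 }
  }
}

# Windowed averages appear only once enough data has been collected,
# so validate type and range only when the key is present.
%w[1_minute 5_minutes 15_minutes].each do |window|
  %w[event_count byte_size].each do |metric|
    value = batch_stats.fetch(metric).fetch("average")[window]
    next if value.nil? # window not yet available
    raise "#{metric} #{window} not numeric" unless value.is_a?(Numeric)
    raise "#{metric} #{window} negative" if value.negative?
  end
end
puts "window assertions passed"
```

In a real spec these checks would live inside the `Stud.try` retry block above, expressed with `expect(...)` matchers instead of `raise`.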
Release notes
Exposes batch size metrics for last 1, 5 and 15 minutes.
What does this PR do?
Updates the stats API response to also expose 1m, 5m and 15m average batch metrics.
Changed the response map returned by the `refine_batch_metrics` method as the result of the API query to `_node/stats`, so that it contains the average values of the last 1, 5 and 15 minutes for `event_count` and `byte_size`. These data are published once they become available from the metric collector.

Why is it important/What is the impact to the user?

This feature lets the user of Logstash meter average batch values over recent time windows.
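As an illustration of the enriched response shape, here is a hypothetical `batch` section built and printed as JSON (values are made up; windowed keys appear only once the metric collector has enough data):

```ruby
require 'json'

# Hypothetical enriched "batch" section of the _node/stats response.
batch = {
  "event_count" => {
    "current" => 78,
    "average" => { "lifetime" => 115, "1_minute" => 120 }
  },
  "byte_size" => {
    "current" => 16_384,
    "average" => { "lifetime" => 4_096, "1_minute" => 4_210 }
  }
}
puts JSON.pretty_generate(batch)
```

After 5 and 15 minutes of uptime, `5_minutes` and `15_minutes` keys would appear alongside `1_minute` in each `average` map.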
Checklist
- [ ] I have made corresponding changes to the default configuration files (and/or docker env variables)
- [ ] I have added tests that prove my fix is effective or that my feature works

This feature relies on `ExtendedFlowMetric`, which is extensively tested for its time-window management. To create a test at the API level we would have to generate load for at least the time-window duration and then check the API response; tests that run for minutes are not feasible.

Author's Checklist
How to test this PR locally
Use the same test harness proposed in #18000, switch `pipeline.batch.metrics.sampling_mode` to `full`, and monitor for 1, 5, and 15 minutes the result of `_node/stats` with:

curl http://localhost:9600/_node/stats | jq .pipelines.main.batch

Related issues
Use cases
Screenshots
Logs