[O11y][Apache Spark] Remove unnecessary filter from the visualizations #7467
Conversation
🌐 Coverage report
Looks good to me!
@harnish-elastic - Can you please add the context on why we changed the aggregation from maximum to last value?
Co-authored-by: muthu-mps <101238137+muthu-mps@users.noreply.github.com>
Updated the description here. Thank you!
Looks good!
Package apache_spark - 0.6.1 containing this change is available at https://epr.elastic.co/search?package=apache_spark
What does this PR do?
The visualization for apache_spark.node.worker.memory.used, which represents used memory, applied a maximum aggregation. This is inappropriate because the used-memory value constantly fluctuates, so a historical peak does not reflect the current state. The aggregation is therefore updated from maximum to last value.

Note
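For context, a toy sketch (with synthetic numbers, not data from this PR) of why a maximum aggregation misrepresents a fluctuating gauge such as used memory, while the last value tracks the current state:

```python
# Synthetic used-memory samples (bytes) over time, newest last.
# The metric fluctuates, so the maximum stays pinned at a past spike
# while the last value reflects what the worker is using right now.
samples = [512_000_000, 910_000_000, 430_000_000, 350_000_000]

max_used = max(samples)    # pinned at the old 910 MB spike
last_used = samples[-1]    # current usage: 350 MB

print(max_used, last_used)
```

In Kibana Lens, the "Last value" function is what the updated visualizations use; it returns the most recent document's value rather than the series peak, which is the behavior wanted for a gauge-style metric.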
Checklist
- [x] I have added an entry to my package's changelog.yml file.

Related issues