
[O11y][Apache Spark] Remove unnecessary filter from the visualizations #7467


Conversation

@harnish-crest-data (Contributor) commented Aug 21, 2023

  • Bug

What does this PR do?

  • Remove incorrect filters from the visualizations
  • The fields used for the maximum aggregation in these visualizations are gauge fields, for which a maximum aggregation is not appropriate. For example, the field apache_spark.node.worker.memory.used represents the memory currently in use; aggregating it with maximum is misleading because used memory constantly fluctuates, so the peak value does not reflect the current state. Hence the aggregation is updated from maximum to last_value.
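
For context, here is a rough sketch of the difference at the Elasticsearch query level. This is an illustration, not the exact request generated by Lens: the aggregation names are made up, and the mapping of the Lens "Last value" function onto a top_metrics aggregation sorted by @timestamp is an assumption based on how Lens typically implements it.

```json
{
  "size": 0,
  "aggs": {
    "memory_used_max": {
      "max": { "field": "apache_spark.node.worker.memory.used" }
    },
    "memory_used_last": {
      "top_metrics": {
        "metrics": { "field": "apache_spark.node.worker.memory.used" },
        "sort": { "@timestamp": "desc" }
      }
    }
  }
}
```

The first aggregation returns the peak of a fluctuating gauge over the whole time range, while the second returns the value from the most recent document, which is what the dashboard should display for a gauge field.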

Note

  • Since the appearance of the dashboard has not changed, the dashboard screenshots do not need to be updated.

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have added an entry to my package's changelog.yml file.

Related issues

@harnish-crest-data harnish-crest-data added Integration:apache_spark Apache Spark bugfix Pull request that fixes a bug issue labels Aug 21, 2023
@harnish-crest-data harnish-crest-data self-assigned this Aug 21, 2023
@harnish-crest-data harnish-crest-data requested a review from a team as a code owner August 21, 2023 06:25
@elasticmachine

elasticmachine commented Aug 21, 2023

💚 Build Succeeded


Build stats

  • Start Time: 2023-09-04T08:44:31.591+0000

  • Duration: 19 min 50 sec

Test stats 🧪

Test Results
  Failed:  0
  Passed:  15
  Skipped: 0
  Total:   15

🤖 GitHub comments


To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

@elasticmachine

elasticmachine commented Aug 21, 2023

🌐 Coverage report

Name          Metrics % (covered/total)  Diff
Packages      100.0% (0/0)               💚
Files         100.0% (0/0)               💚
Classes       100.0% (0/0)               💚
Methods       75.0% (12/16)              👎 -12.826
Lines         100.0% (0/0)               💚 4.0
Conditionals  100.0% (0/0)               💚

@rajvi-patel-22 (Contributor) left a comment

Looks good to me!

@muthu-mps
Contributor

@harnish-elastic - Can you please add the context on why we changed the aggregation from maximum to last value?

Co-authored-by: muthu-mps <101238137+muthu-mps@users.noreply.github.com>
@harnish-crest-data
Contributor Author

@harnish-elastic - Can you please add the context on why we changed the aggregation from maximum to last value?

Updated the description here. Thank you!

@muthu-mps (Contributor) left a comment

Looks good!

@harnish-crest-data harnish-crest-data merged commit 6f024d3 into elastic:main Sep 4, 2023
@elasticmachine

Package apache_spark - 0.6.1 containing this change is available at https://epr.elastic.co/search?package=apache_spark

Labels
bugfix Pull request that fixes a bug issue Integration:apache_spark Apache Spark
Development

Successfully merging this pull request may close these issues.

[O11y][Apache Spark] Remove unnecessary filter from the dashboard and update aggregation for visualizations
4 participants