
ESQL: Fix a bug in TOP #121552


Merged: 10 commits, Feb 5, 2025

Conversation

@nik9000 (Member) commented Feb 3, 2025

Fix a bug in TOP which surfaces when merging results from ordinals. We weren't always accounting for oversized arrays when checking if we'd ever seen a field.

The test for this required random `bucketSize` - without that the oversizing frequently wouldn't cause trouble.

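The failure mode is easier to see in a tiny model. The sketch below is a simplified, hypothetical stand-in for the real `BucketedSort` code (names like `rootIndex` and `endIndex` mirror the PR, the bucket bounds are end-exclusive): when oversizing grows the backing array to a length that falls mid-bucket, the old root-only check reports a never-filled bucket as seen.

```java
// Simplified, hypothetical model of the bug; not the real BucketedSort class.
public class OversizeBugDemo {
    static final int BUCKET_SIZE = 4;

    // Index of the first slot of a bucket.
    static long rootIndex(int bucket) {
        return (long) bucket * BUCKET_SIZE;
    }

    // One past the last slot of the bucket (exclusive end).
    static long endIndex(long rootIndex) {
        return rootIndex + BUCKET_SIZE;
    }

    // Old check: the bucket counts as "seen" if its root fits in the array.
    static boolean seenOldCheck(int bucket, long valuesSize) {
        return rootIndex(bucket) < valuesSize;
    }

    // Fixed check: the whole bucket has to fit, not just its root.
    static boolean seenFixedCheck(int bucket, long valuesSize) {
        return endIndex(rootIndex(bucket)) <= valuesSize;
    }

    public static void main(String[] args) {
        // Oversizing grew the array to 10 slots, which is mid-bucket:
        // bucket 2 starts at index 8 (< 10) but would end at 12 (> 10).
        long valuesSize = 10;
        System.out.println(seenOldCheck(2, valuesSize));   // true: wrongly "seen"
        System.out.println(seenFixedCheck(2, valuesSize)); // false
    }
}
```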
@elasticsearchmachine added the Team:Analytics label (Meta label for analytical engine team (ESQL/Aggs/Geo)) on Feb 3, 2025
@elasticsearchmachine (Collaborator):

Pinging @elastic/es-analytical-engine (Team:Analytics)

@elasticsearchmachine (Collaborator):

Hi @nik9000, I've created a changelog YAML for you.

@alex-spies (Contributor) left a comment:

LGTM as this definitely weeds out some failures and the added test helps, too.

Although I wonder if the condition "bucket was actually non-existent" is still sometimes not determined correctly - I left a comment.

@@ -143,13 +143,13 @@ public void collect(BytesRef value, int bucket) {
      */
     public void merge(int bucket, BytesRefBucketedSort other, int otherBucket) {
         long otherRootIndex = other.common.rootIndex(otherBucket);
-        if (otherRootIndex >= other.values.size()) {
+        long otherEnd = other.common.endIndex(otherRootIndex);
+        if (otherEnd >= other.values.size()) {
@alex-spies (Contributor) commented Feb 4, 2025:

It's not obvious to me how this guards against oversizing of `other.values`. Without being too familiar with this code, couldn't it happen in theory that the oversizing is by more than a whole `bucketSize`, and then `otherEnd` may still be < `other.values.size()` while the value may still not have been collected?

Would it make sense to, alternatively, check if the value at the `otherRootIndex` is not null? Per the invariant check, if any value was ever collected into the bucket, then the root value must not be null, right? So the condition would be `otherRootIndex >= other.values.size() || other.values.get(otherRootIndex) == null`.

If I'm reading this correctly, the present code would still trigger a failed assertion at other.checkInvariant(otherBucket) if the other.values array was oversized by bucketSize or more, and no value was ever collected into that bucket.
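To make the proposed condition concrete, here is a minimal sketch (hypothetical names; a plain `Object[]` stands in for the real BigArrays-backed values array): slots of an oversized array stay null until something is collected into their bucket, so checking the root slot itself catches the in-range-but-never-collected case that the length check alone misses.

```java
// Hypothetical sketch of the reviewer's proposed emptiness check;
// not the real BytesRefBucketedSort code.
public class NullRootCheckDemo {
    static final int BUCKET_SIZE = 4;

    static boolean bucketEmpty(Object[] values, int bucket) {
        int rootIndex = bucket * BUCKET_SIZE;
        // In-range but never-collected buckets still have a null root,
        // so check the slot itself, not just the array length.
        return rootIndex >= values.length || values[rootIndex] == null;
    }

    public static void main(String[] args) {
        // Array oversized to 16 slots (4 whole buckets), but only bucket 0
        // ever collected a value.
        Object[] values = new Object[16];
        values[0] = "seen";
        System.out.println(bucketEmpty(values, 0)); // false
        System.out.println(bucketEmpty(values, 2)); // true, despite root 8 < 16
        System.out.println(bucketEmpty(values, 9)); // true, root 36 out of range
    }
}
```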

@nik9000 (Member, Author) replied:

We always fill the newly allocated gather offsets with the appropriate encoded value. We wanted to make sure we didn't have to do that test.

I suppose ideally we would size in units of whole buckets. I think I could do that, actually. That'd be better. No reason to have this half-grown array.

@alex-spies (Contributor) replied:

Wouldn't resizing in units of whole buckets still have the problem that you may have sized the array such that there's enough room for one more bucket, but the bucket itself is not yet in use/empty? Or do we only ever resize when we immediately put a value into the bucket AND never go over the size of a single bucket when resizing? If the latter is true, then I think we need to put a comment here, as it's non-obvious IMHO.

Put differently, I understand that the current condition `otherRootIndex >= other.values.size()` is insufficient to determine all cases when the other bucket is actually empty. And it'd be great if the condition that we end up using is somewhat reasonably easy to prove correct; because when it's not correct, we get AssertionErrors, which cause whole test suites to fail and be muted.

@nik9000 (Member, Author) replied:

> I suppose ideally we would size in units of whole buckets.

I've pushed a patch that does that. It's a little bigger but feels nicer.

@alex-spies (Contributor) left a comment:

I think we're misusing `BigArrays.overSize` now. If I read this correctly, maybe it'd be more correct to just oversize in units of bytes and then manually round up to the next multiple of `bucketSize`.

@@ -143,13 +150,13 @@ public void collect(BytesRef value, int bucket) {
      */
     public void merge(int bucket, BytesRefBucketedSort other, int otherBucket) {
         long otherRootIndex = other.common.rootIndex(otherBucket);
-        long otherEnd = other.common.endIndex(otherRootIndex);
-        if (otherEnd >= other.values.size()) {
+        if (otherRootIndex >= other.values.size()) {
@alex-spies (Contributor) replied:

I see we reverted the check, assuming that when there is enough space for the other root index to be contained in the other values, we're gonna say that there is a bucket at otherRootIndex.

I took a look at how grow currently works, and it seems that this expectation is reasonable, because we resize to multiples of the bucket size and call `fillGatherOffsets` to make sure there's an object at each new bucket root.
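The invariant described above (every resize lands on a bucket boundary, and the new bucket roots get seeded) is what makes the root-only check safe again. A minimal sketch of the rounding, with a hypothetical helper name:

```java
public class BucketAlignedGrowDemo {
    // Hypothetical helper: round a requested slot count up to the next
    // multiple of bucketSize, so the array always ends on a bucket boundary.
    static long roundUpToBucketBoundary(long minSize, int bucketSize) {
        return (minSize + bucketSize - 1) / bucketSize * bucketSize;
    }

    public static void main(String[] args) {
        System.out.println(roundUpToBucketBoundary(17, 8)); // 24
        System.out.println(roundUpToBucketBoundary(16, 8)); // 16: already aligned
    }
}
```

With every resize bucket-aligned, `rootIndex < values.size()` implies the whole bucket exists, which is why the reverted condition is sound.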

Comment on lines 268 to 269:

    long newSizeInBuckets = BigArrays.overSize(bucket + 1, PageCacheRecycler.INT_PAGE_SIZE, Integer.BYTES * bucketSize);
    values = bigArrays.resize(values, newSizeInBuckets * bucketSize);
@alex-spies (Contributor) replied:

I may not be familiar enough with these methods, but the call to `BigArrays.overSize` looks weird.

Inside of `overSize`, I found checks like

        if (minTargetSize < pageSize) {

which seems to indicate that this method expects the minTargetSize to be in the same unit as the page size. Since the given pageSize is `PageCacheRecycler.INT_PAGE_SIZE`, this indicates that the minTargetSize should be in multiples of ints that we want to store. That'd be `(bucket + 1) * bucketSize`, not `bucket + 1`, no? And then we'd also have to adjust the bytesPerElement to be just `Integer.BYTES`.

Also, in the code for `overSize`, it seems like the bytesPerElement can only be 1-8, which would make our `Integer.BYTES * bucketSize` completely off the charts.

@nik9000 (Member, Author) replied:

Fun. I looked at it and it seemed to be working, but I do think it'd be clearer if it just did it in bytes always. I'll switch it.

@nik9000 (Member, Author) replied:

I'm going to do what Alex asked for, and he's asked me to merge once CI is happy with it - I'm using primitive sizes and rounding up.
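A rough sketch of that final approach (`overSize` below is a simplified stand-in for growth-with-slack, not Elasticsearch's `BigArrays.overSize`, and `newSize` is a hypothetical helper): oversize in primitive units, then round the result up to a whole number of buckets.

```java
public class OversizeInPrimitivesDemo {
    // Simplified stand-in for oversizing: grow ~12.5% past the minimum,
    // in primitive (int slot) units. Not Elasticsearch's BigArrays.overSize.
    static long overSize(long minTargetSize) {
        return minTargetSize + (minTargetSize >>> 3);
    }

    // Hypothetical helper: oversize in primitive units, then round up to a
    // whole number of buckets so the array ends on a bucket boundary.
    static long newSize(int bucket, int bucketSize) {
        long minSlots = (bucket + 1L) * bucketSize; // slots needed for the bucket
        long oversized = overSize(minSlots);        // grow with slack
        return (oversized + bucketSize - 1) / bucketSize * bucketSize;
    }

    public static void main(String[] args) {
        int bucketSize = 7;
        long size = newSize(10, bucketSize);
        System.out.println(size);                     // 91
        System.out.println(size % bucketSize == 0);   // true: bucket-aligned
        System.out.println(size >= 11L * bucketSize); // true: bucket 10 fits
    }
}
```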

@nik9000 nik9000 requested a review from alex-spies February 5, 2025 19:18
@nik9000 nik9000 enabled auto-merge (squash) February 5, 2025 19:18
@nik9000 nik9000 disabled auto-merge February 5, 2025 21:27
@nik9000 nik9000 merged commit 1e12b54 into elastic:main Feb 5, 2025
17 checks passed
nik9000 added a commit to nik9000/elasticsearch that referenced this pull request Feb 5, 2025
Fix a bug in TOP which surfaces when merging results from ordinals. We
weren't always accounting for oversized arrays when checking if we'd
ever seen a field. This changes the oversize itself to always size on a bucket boundary.

The test for this required random `bucketSize` - without that the
oversizing frequently wouldn't cause trouble.
@nik9000 (Member, Author) commented Feb 5, 2025:

Backports:
9.0: #121825
8.19: #121826
8.18: #121828

nik9000 added a commit that referenced this pull request Feb 5, 2025
Fix a bug in TOP which surfaces when merging results from ordinals. We
weren't always accounting for oversized arrays when checking if we'd
ever seen a field. This changes the oversize itself to always size on a bucket boundary.

The test for this required random `bucketSize` - without that the
oversizing frequently wouldn't cause trouble.
@alex-spies (Contributor) left a comment:

Thank you Nik, looks all good to me now!

Labels
:Analytics/ES|QL (AKA ESQL), >bug, Team:Analytics (Meta label for analytical engine team (ESQL/Aggs/Geo)), v8.18.0, v8.19.0, v9.0.0, v9.1.0
4 participants