ESQL: Fix a bug in TOP #121552
Conversation
Fix a bug in TOP which surfaces when merging results from ordinals. We weren't always accounting for oversized arrays when checking if we'd ever seen a field. The test for this required random `bucketSize` - without that the oversizing frequently wouldn't cause trouble.
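For illustration, here is a minimal, self-contained sketch (plain JDK, made-up names, not the real `BytesRefBucketedSort`) of how an oversized backing array can fool a "have we ever seen this bucket" check that only compares the bucket's root index against the array length:

```java
// Hypothetical stand-in for the bucketed layout: each bucket owns `bucketSize`
// consecutive slots in one flat backing array.
public class OversizeCheckSketch {
    public static void main(String[] args) {
        int bucketSize = 3;
        // Buckets 0 and 1 have actually collected values, so 6 slots are in use,
        // but the backing array was oversized to 8 slots.
        int oversizedLength = 8;

        int untouchedBucket = 2;                               // never collected into
        long rootIndex = (long) untouchedBucket * bucketSize;  // = 6

        // Old check: "root index past the end of the array" means "never seen".
        // With the oversized array, 6 < 8, so the untouched bucket looks seen.
        boolean looksSeen = rootIndex < oversizedLength;
        System.out.println("untouched bucket looks seen: " + looksSeen); // true -> bug
    }
}
```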
Pinging @elastic/es-analytical-engine (Team:Analytics)
Hi @nik9000, I've created a changelog YAML for you.
LGTM as this definitely weeds out some failures and the added test helps, too.
Although I wonder if the condition "bucket was actually non-existent" is still sometimes not determined correctly - I left a comment.
@@ -143,13 +143,13 @@ public void collect(BytesRef value, int bucket) {
     */
    public void merge(int bucket, BytesRefBucketedSort other, int otherBucket) {
        long otherRootIndex = other.common.rootIndex(otherBucket);
-       if (otherRootIndex >= other.values.size()) {
+       long otherEnd = other.common.endIndex(otherRootIndex);
+       if (otherEnd >= other.values.size()) {
It's not obvious to me how this guards against oversizing of `other.values`. Without being too familiar with this code, couldn't it happen in theory that the oversizing is by more than a whole `bucketSize`, and then `otherEnd` may still be `< other.values.size()` while the value may still not have been collected?
Would it make sense to, alternatively, check if the value at the `otherRootIndex` is not null? Per the invariant check, if any value was ever collected into the bucket, then the root value must not be null, right? So the condition would be `otherRootIndex >= other.values.size() || other.values.get(otherRootIndex) == null`.
If I'm reading this correctly, the present code would still trigger a failed assertion at `other.checkInvariant(otherBucket)` if the `other.values` array was oversized by `bucketSize` or more and no value was ever collected into that bucket.
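As an illustration only, the shape of that suggested guard over a plain array stand-in (hypothetical names; `values` is a plain `Object[]` rather than the real BigArrays-backed storage):

```java
public class EmptyBucketGuardSketch {
    // Suggested condition: the other bucket was never seen if its root index is
    // past the end of the values array OR its root slot is still null.
    static boolean otherBucketNeverSeen(Object[] values, long rootIndex) {
        return rootIndex >= values.length || values[(int) rootIndex] == null;
    }

    public static void main(String[] args) {
        Object[] values = new Object[8]; // oversized: only slots 0..5 belong to used buckets
        values[0] = "a";                 // bucket 0 collected something
        System.out.println(otherBucketNeverSeen(values, 0)); // false
        System.out.println(otherBucketNeverSeen(values, 6)); // true, even though 6 < 8
    }
}
```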
We always fill the newly allocated gather offsets with the appropriate encoded value. We wanted to make sure we didn't have to do that test.
I suppose ideally we would size in units of whole buckets. I think I could do that, actually. That'd be better. No reason to have this half-grown array.
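Roughly what "size in units of whole buckets" means, as a hedged sketch with made-up helper names and a toy oversize function (the actual change uses `grow` and `fillGatherOffsets`, discussed further down):

```java
public class WholeBucketSizingSketch {
    // Grow the backing storage so its length is always a multiple of bucketSize:
    // oversize the bucket count, then convert back to slots. Any newly exposed
    // bucket root can then be initialized up front, so "rootIndex < length"
    // really does mean "this bucket exists".
    static long grownLengthInSlots(long requiredBuckets, int bucketSize) {
        long oversizedBuckets = requiredBuckets + (requiredBuckets >>> 3) + 1; // toy oversize, not BigArrays.overSize
        return oversizedBuckets * bucketSize;
    }

    public static void main(String[] args) {
        System.out.println(grownLengthInSlots(3, 5)); // 20 slots = 4 whole buckets of 5
    }
}
```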
Wouldn't resizing in units of whole buckets still have the problem that you may have sized the array such that there's enough room for one more bucket, but the bucket itself is not yet in use/empty? Or do we only ever resize when we immediately put a value into the bucket AND never go over the size of a single bucket when resizing? If the latter is true, then I think we need to put a comment here, as it's non-obvious IMHO.
Put differently, I understand that the current condition `otherRootIndex >= other.values.size()` is insufficient to determine all cases when the other bucket is actually empty. And it'd be great if the condition that we end up using is reasonably easy to prove correct; when it's not correct, we get AssertionErrors, which cause whole test suites to fail and be muted.
> I suppose ideally we would size in units of whole buckets.

I've pushed a patch that does that. It's a little bigger but feels nicer.
I think we're misusing `BigArrays.overSize` now. If I read this correctly, maybe it'd be more correct to just oversize in units of bytes and then manually round up to the next multiple of `bucketSize`.
@@ -143,13 +150,13 @@ public void collect(BytesRef value, int bucket) {
     */
    public void merge(int bucket, BytesRefBucketedSort other, int otherBucket) {
        long otherRootIndex = other.common.rootIndex(otherBucket);
-       long otherEnd = other.common.endIndex(otherRootIndex);
-       if (otherEnd >= other.values.size()) {
+       if (otherRootIndex >= other.values.size()) {
I see we reverted the check, assuming that when there is enough space for the other root index to be contained in the other values, we're gonna say that there is a bucket at `otherRootIndex`.
I took a look at how `grow` currently works, and it seems that this expectation is reasonable, because we resize to multiples of the bucket size and call `fillGatherOffsets` to make sure that there's an object at each new bucket root.
long newSizeInBuckets = BigArrays.overSize(bucket + 1, PageCacheRecycler.INT_PAGE_SIZE, Integer.BYTES * bucketSize);
values = bigArrays.resize(values, newSizeInBuckets * bucketSize);
I may not be familiar enough with these methods, but the call to `BigArrays.overSize` looks weird.
Inside of `overSize`, I found checks like
if (minTargetSize < pageSize) {
which seem to indicate that this method expects the `minTargetSize` to be in the same unit as the page size. Since the given page size is `PageCacheRecycler.INT_PAGE_SIZE`, this indicates that the `minTargetSize` should be in multiples of the ints we want to store. That'd be `(bucket + 1) * bucketSize`, not `bucket + 1`, no? And then we'd also have to adjust the `bytesPerElement` to be just `Integer.BYTES`.
Also, in the code for `overSize`, it seems like the `bytesPerElement` can only be 1-8, which would make our `Integer.BYTES * bucketSize` completely off the charts.
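To make the unit mismatch concrete, a small illustrative sketch with arbitrary numbers (plain JDK, not a patch; it only restates the argument above):

```java
public class OverSizeUnitsSketch {
    public static void main(String[] args) {
        int bucket = 9;        // highest bucket we need room for
        int bucketSize = 100;  // slots per bucket

        // What the quoted call passes as minTargetSize: a bucket count.
        long asBuckets = bucket + 1;                     // 10
        // What the comment argues overSize expects: a count of ints,
        // the same unit as INT_PAGE_SIZE.
        long asInts = (long) (bucket + 1) * bucketSize;  // 1000

        // Passing 10 where ~1000 is meant, with bytesPerElement of 4 * 100 = 400
        // (far outside the 1-8 byte range described above), is why the call looks off.
        System.out.println(asBuckets + " buckets vs " + asInts + " ints");
    }
}
```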
Fun. I looked at it and it seemed to be working, but I do think it'd be clearer if it just did it in bytes always. I'll switch it.
I'm going to do what Alex asked for, and he's asked me to merge once CI is happy with it: I'm using primitive sizes and rounding up.
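A sketch of what "primitive sizes and rounding up" can look like: oversize in units of the primitive element (ints here), then round the result up to the next bucket boundary. The oversize step below is a toy stand-in for `BigArrays.overSize`, and the names are made up:

```java
public class RoundUpToBucketSketch {
    // Oversize in primitive slots, then round up so the final length is always a
    // whole number of buckets.
    static long grow(long minSlots, int bucketSize) {
        long oversized = minSlots + (minSlots >>> 3) + 16;        // toy oversize in int slots
        long buckets = (oversized + bucketSize - 1) / bucketSize; // round up to whole buckets
        return buckets * bucketSize;
    }

    public static void main(String[] args) {
        int bucketSize = 7;
        long needed = 3L * bucketSize;               // slots needed for buckets 0..2
        long newLength = grow(needed, bucketSize);
        System.out.println(newLength);               // 42: >= 21 and a multiple of 7
        System.out.println(newLength % bucketSize);  // 0 -> always ends on a bucket boundary
    }
}
```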
Fix a bug in TOP which surfaces when merging results from ordinals. We weren't always accounting for oversized arrays when checking if we'd ever seen a field. This changes the oversize itself to always size on a bucket boundary. The test for this required random `bucketSize` - without that the oversizing frequently wouldn't cause trouble.
Thank you Nik, looks all good to me now!