[refactor] condense group offloading #11990
Conversation
Looks super clean!
I just ran some benchmarks to confirm that the changes don't have any detrimental effect on the speed-memory trade-off (code), and they look alright to me.
I would maybe also run all the group offloading tests on the GPU, too.
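For reference, a rough sketch of that kind of benchmark (the toy model, argument values, and iteration count here are assumptions, not the exact benchmark code): apply group offloading, run some forwards, and record wall-clock time and peak VRAM.

```python
import time

import torch
from diffusers.hooks import apply_group_offloading


# Toy stand-in for a transformer; the real benchmark used a diffusers model.
class ToyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = torch.nn.ModuleList(torch.nn.Linear(4096, 4096) for _ in range(32))

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x


model = ToyModel()
apply_group_offloading(
    model,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=4,
    use_stream=True,
)

x = torch.randn(8, 4096, device="cuda")
torch.cuda.reset_peak_memory_stats()
start = time.perf_counter()
with torch.no_grad():
    for _ in range(10):
        model(x)
torch.cuda.synchronize()
print(f"latency: {time.perf_counter() - start:.3f}s")
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```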
```python
        finally:
            pinned_dict = None

    def _transfer_tensor_to_device(self, tensor, source_tensor, current_stream=None):
```
Very clean!
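As a hedged sketch of what a helper like this plausibly does (the body below is inferred from the surrounding diff, not copied from the PR): move the source tensor to the onload device, using a non-blocking copy when a stream is active, and tie the result's lifetime to that stream.

```python
import torch


def _transfer_tensor_to_device(tensor, source_tensor, current_stream=None):
    # Sketch: rebind `tensor`'s storage to a copy of `source_tensor` on the
    # onload device. When called under a CUDA stream, the copy is issued
    # asynchronously and record_stream() keeps the memory alive until the
    # stream has finished consuming it.
    onload_device = torch.device("cuda")  # assumption: would be self.onload_device
    tensor.data = source_tensor.to(onload_device, non_blocking=current_stream is not None)
    if current_stream is not None:
        tensor.data.record_stream(current_stream)
```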
```python
        if self.offload_to_disk_path:
```
Perfect!
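For context, this branch presumably persists each group's tensors as a safetensors file that the load path below reads back; a minimal round-trip sketch (the file name and tensor dict are hypothetical):

```python
import safetensors.torch
import torch

# Hypothetical mirror of the disk-offload branch: write the group's tensors
# to a safetensors file so the in-memory copy can be dropped, then read them
# back on onload.
tensors = {"weight": torch.randn(64, 64), "bias": torch.randn(64)}
safetensors.torch.save_file(tensors, "group_0.safetensors")
loaded = safetensors.torch.load_file("group_0.safetensors", device="cpu")
```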
```python
        with context:
            # Load to CPU (if using streams) or directly to target device, pin, and async copy to device
            device = self.onload_device if self.stream is None else "cpu"
            loaded_tensors = safetensors.torch.load_file(self.safetensors_file_path, device=device)
```
When `onload_device` is supplied as `torch.device("cuda")`, this line fails with safetensors complaining about an invalid device `cuda`. Simply wrapping `device` in `str()` solves the issue.
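A minimal repro of the failure mode described above (the file path is hypothetical):

```python
import safetensors.torch
import torch

path = "group_0.safetensors"  # hypothetical file produced by the offload path
device = torch.device("cuda")

# Some safetensors versions reject a torch.device here and raise
# "invalid device cuda"; passing the string form avoids this.
# loaded = safetensors.torch.load_file(path, device=device)     # may fail
loaded = safetensors.torch.load_file(path, device=str(device))  # works
```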
Nice 👍🏽
The current implementation, after many of the recent updates, is hard to understand, debug, and reason through. There is also one code path that seems completely unused (see the removed `self.offload_to_disk_path`-related changes and the call into `self._onload_from_disk`). This PR tries to refactor and clean up some of the implementation so that implementing new changes is easier in the future.
Related to a comment which I'm trying to debug through.