
WIP - do not merge - Vllm v1 hidden states #1


Open: wants to merge 23 commits into main

Conversation

@kyle-pena-kuzco commented Jun 5, 2025

No description provided.


github-actions bot commented Jun 5, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@kyle-pena-kuzco (Author) commented Jun 6, 2025

Here is a diagram of our implementation.

```mermaid
sequenceDiagram
    participant async_llm
    participant EngineCore
    participant GpuModelRunner/TpuModelRunner
    EngineCore->>async_llm: EngineCoreOutput
    note over async_llm: Is the sequence complete? If yes...
    note over async_llm: Is enable_return_hidden_states set on the server? If yes...
    note over async_llm: Are hidden states requested? If yes...
    async_llm->>EngineCore: send HiddenStatesExtractionRequest
    note over EngineCore: create prefill-only EngineCoreRequest (prompt_token_ids=prompt+response)
    EngineCore->>GpuModelRunner/TpuModelRunner: EngineCoreRequest
    note over GpuModelRunner/TpuModelRunner: Slice out hidden states for only the last token
    note over GpuModelRunner/TpuModelRunner: Move slice (1,D) to CPU
    note over GpuModelRunner/TpuModelRunner: Include in ModelRunnerOutput as List[float] (~77 kB)
    GpuModelRunner/TpuModelRunner->>EngineCore: ModelRunnerOutput
    EngineCore->>async_llm: EngineCoreOutput
```
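The runner-side steps in the diagram (slice out only the last token's hidden state, then move that small slice to the CPU) can be sketched as follows. This is a minimal illustration, not the PR's actual code: NumPy stands in for the runner's GPU tensors, and `extract_last_token_hidden_state` is a hypothetical helper name.

```python
import numpy as np


def extract_last_token_hidden_state(hidden_states: np.ndarray,
                                    last_token_index: int) -> np.ndarray:
    """Slice out only the last token's hidden state as a (1, D) array.

    Slicing before the device transfer keeps the copy at kilobytes
    instead of the megabytes of the full last-layer tensor.
    """
    sliced = hidden_states[last_token_index:last_token_index + 1, :]
    # In the real runner, the GPU-to-CPU transfer would happen here
    # (e.g. tensor.cpu()); with NumPy we just force a contiguous copy.
    return np.ascontiguousarray(sliced)


# Hypothetical shapes: a 500-token sequence, hidden dimension D = 4096.
hidden_states = np.zeros((500, 4096), dtype=np.float32)
last = extract_last_token_hidden_state(hidden_states, 499)
print(last.shape)  # (1, 4096)
print(hidden_states.nbytes // 1_000_000, "MB full tensor vs",
      last.nbytes // 1000, "kB slice")
```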

Here is some analysis of internal serialization costs.

  • After a sequence is completed, if hidden states are requested, a single List[float] is serialized over zmq for the sequence in response to a HiddenStatesExtractionRequest. This corresponds to the selected token's hidden states.

  • The length of the List[float] is D, where D is the hidden dimension of the model. For example, for 3.1-8b-instruct this is 4,096.

  • This comes out to about 77 kB in raw float bytes for 3.1-8b-instruct, with similar sizes for other models (roughly 50 kB to 110 kB of raw float bytes).

  • The payload size for the List[float] is comparable to that of other currently supported features like top_logprobs. For example, for 3.1-8b-instruct it is smaller than the payload for returning top-2 logprobs on a typical 500-token response (per v1/logprobs.py).

  • We minimize the GPU-to-CPU cost by slicing out only the requested token's hidden states from the full hidden-states tensor before moving it to the CPU (kilobytes), instead of transferring the entire last-layer hidden-states tensor (megabytes).
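As a rough sanity check of the payload figures above, one can measure the serialized size of a D-length list of Python floats. JSON is used here only as a stand-in encoding (the actual wire format over zmq may differ), so this gives the order of magnitude rather than the exact byte count.

```python
import json
import random

# Hidden dimension for 3.1-8b-instruct, per the analysis above.
D = 4096

# A stand-in hidden-state vector of D floats.
hidden_state = [random.random() for _ in range(D)]

# Serialize as a plain List[float]; each float's text form is roughly
# 18-20 characters, so the total lands in the tens of kilobytes.
payload = json.dumps(hidden_state).encode("utf-8")

print(f"{len(payload) / 1024:.1f} kB")
```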
