Conversation

Contributor

@nvpohanh nvpohanh commented Dec 8, 2025

Purpose

The vLLM config is set only during the initialization stage, not during the runtime stage. Therefore, we should not call get_current_vllm_config() during the runtime stage. Instead, cache the config values we need during initialization and reuse them at runtime.
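
For illustration, a minimal sketch of this caching pattern (the class and attribute names below are simplified, assumed stand-ins, not the actual vLLM code):

# Hypothetical sketch: cache a config value at init time, reuse it at runtime.
class DummyAttentionConfig:
    force_use_trtllm_attention = True

class DummyVllmConfig:
    attention_config = DummyAttentionConfig()

class MetadataBuilderSketch:
    def __init__(self, vllm_config):
        # Initialization stage: the vLLM config is available here, so read and
        # cache the value we need once.
        self.force_use_trtllm_attention = (
            vllm_config.attention_config.force_use_trtllm_attention)

    def build(self):
        # Runtime stage: reuse the cached value instead of calling
        # get_current_vllm_config(), which would emit the
        # "Current vLLM config is not set." warning at this point.
        return {"use_trtllm": self.force_use_trtllm_attention}

print(MetadataBuilderSketch(DummyVllmConfig()).build())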

This was caused by #26315 by @MatthewBonanni.

This fixes #30240

Test Plan

On H200:

python3 -m vllm.entrypoints.openai.api_server --model Qwen/Qwen3-32B-FP8 --kv-cache-dtype auto --attention-config.backend FLASHINFER &
# wait until server is ready
lm_eval --model local-completions --tasks gsm8k --model_args base_url=http://127.0.0.1:8000/v1/completions,model=Qwen/Qwen3-32B-FP8,tokenized_requests=False,tokenizer_backend=None,num_concurrent=128,timeout=120,max_retries=1

Test Result

local-completions (base_url=http://127.0.0.1:8000/v1/completions,model=Qwen/Qwen3-32B-FP8,tokenized_requests=False,tokenizer_backend=None,num_concurrent=128,timeout=120,max_retries=1), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6679|±  |0.0130|
|     |       |strict-match    |     5|exact_match|↑  |0.7672|±  |0.0116|

The run completed without any of the "Current vLLM config is not set." warnings.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@nvpohanh nvpohanh marked this pull request as ready for review December 8, 2025 06:30
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request correctly addresses a bug causing "Current vLLM config is not set." warnings by ensuring that the vLLM configuration is accessed only during the initialization stage. The approach of caching the force_use_trtllm_attention setting in FlashInferMetadataBuilder during initialization and then passing it down during the runtime stage is sound and effectively resolves the issue. The changes are well-contained and logical. I have no further comments as the implementation is solid.

Member

@hmellor hmellor left a comment

vllm_config already exists in the scope where you're using force_use_trtllm_attention, so it should just be accessed directly
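
Roughly, that suggestion amounts to something like the following sketch (hypothetical names, assumed for illustration, not the actual vLLM code or diff):

# Hypothetical sketch: read the flag from the config the builder already holds.
class DummyAttentionConfig:
    force_use_trtllm_attention = False

class DummyVllmConfig:
    attention_config = DummyAttentionConfig()

class FlashInferMetadataBuilderSketch:
    def __init__(self, vllm_config):
        # The builder already keeps the config object from initialization...
        self.vllm_config = vllm_config

    def build(self):
        # ...so at runtime the flag can be read directly from self.vllm_config,
        # with no separate cached attribute and no global lookup.
        return self.vllm_config.attention_config.force_use_trtllm_attention

print(FlashInferMetadataBuilderSketch(DummyVllmConfig()).build())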

@MatthewBonanni
Contributor

Thanks, sorry I missed this

…attention is used

VLLM config is set only during initialization stage, not during runtime
stage. Therefore, we should not call get_current_vllm_config() during
runtime stage. Instead, cache the config we want during initialization
stage and reuse it during runtime stage.

Signed-off-by: Po-Han Huang <[email protected]>
@nvpohanh nvpohanh force-pushed the dev-nvpohanh-vllm-config-warning branch from 446ee64 to 0501d9d on December 9, 2025 04:40
@nvpohanh nvpohanh requested a review from hmellor December 9, 2025 04:43
@nvpohanh
Contributor Author

@hmellor could you review again? thanks

@mgoin mgoin added the bug (Something isn't working) and ready (ONLY add when PR is ready to merge/full CI is needed) labels on Dec 10, 2025
@vllm-bot vllm-bot merged commit eea4180 into vllm-project:main Dec 10, 2025
49 of 55 checks passed
@github-project-automation github-project-automation bot moved this to Done in NVIDIA Dec 10, 2025
shaharmor98 pushed a commit to shaharmor98/smor-vllm that referenced this pull request Dec 11, 2025

Labels

bug (Something isn't working), nvidia, ready (ONLY add when PR is ready to merge/full CI is needed), v1

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

[Bug]: Lots of "Current vLLM config is not set." warnings when FlashInfer attention is used

5 participants