Single Batch Overlap (SBO): Overlapping of Down GEMM with Combine Send #483
Conversation
Co-authored-by: Sulfur6 <[email protected]>
Co-authored-by: AniZpZ <[email protected]>
Co-authored-by: sky <[email protected]>
for (int token_idx = offset + sub_warp_id; token_idx < offset + num_tokens_to_send; token_idx += num_warps_per_group) {
    if (overlap or (not is_rank_masked<true>(mask_buffer_ptr, dst_rank))) {
        auto token_start_idx = overlap ? local_expert_signal_idx * block_m : offset;
        auto token_end_idx = overlap ? min((local_expert_signal_idx + 1) * block_m, num_tokens_per_expert) : (offset + num_tokens_to_send);
Hi~ I have tried this great feature and found that combine_send was slower than in the non-overlap case. Maybe block_m is too big for one SM in each iteration?
You may consider increasing num_sms from the default value of 3 to 4-6. When block_m is set to 64 and num_warps is 32: if num_token <= 32, one SM sends a single round; if 32 < num_token <= 64, one SM needs to send two rounds. Two rounds may take slightly longer than the original combine send, but you can increase parallelism by raising num_sms.
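As a rough illustration of the round arithmetic in the reply above, here is a minimal sketch; send_rounds_per_sm and the one-token-per-warp-per-round assumption are illustrative only, not DeepEP's actual kernel logic:

    #include <algorithm>
    #include <cstdio>

    // Minimal sketch: how many send rounds one SM needs for one expert,
    // assuming each of num_warps warps sends one token per round and each SM
    // handles at most one block_m-sized block per iteration (an assumption
    // taken from the review comment above, not from the actual kernel).
    int send_rounds_per_sm(int num_tokens_per_expert, int block_m, int num_warps) {
        int tokens_in_block = std::min(num_tokens_per_expert, block_m);
        return (tokens_in_block + num_warps - 1) / num_warps;  // ceiling division
    }

    int main() {
        const int block_m = 64, num_warps = 32;
        printf("%d\n", send_rounds_per_sm(32, block_m, num_warps));  // 1 round
        printf("%d\n", send_rounds_per_sm(48, block_m, num_warps));  // 2 rounds
        return 0;
    }

Raising num_sms does not reduce the per-SM round count; it only lets more blocks be sent in parallel, which is why the suggestion is to tune num_sms rather than block_m.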
wangfakang left a comment:
LGTM. Thanks.
The DeepEP implementation for SBO (DeepEP #390) will be merged into the antgroup-opt branch.