Conversation

autoantwort (Contributor) commented Jun 26, 2025

This results in lower total time needed to decompress all zip files.

For example from 4.4 min to 3.8 min @ daily job

BillyONeal (Member):
... what? Do we know why that order is faster?

autoantwort (Contributor, Author):

If you have 2 cores and the tasks aaa, bb, and ccccc, a random ordering might give:

1: aaa
2: bbccccc

using 7 time slots. If you start with the largest:

1: ccccc
2: aaabb

you only need 5 time slots.

BillyONeal (Member):
I see: because we are parallelizing, it's the bin packing problem, and we're using file size as a proxy for how big each task is in the bin.

BillyONeal (Member) left a comment:


Thanks for the improvement!

@BillyONeal BillyONeal merged commit b6a74be into microsoft:main Jun 30, 2025
7 checks passed
m_fs.file_size(zip_paths[i].get()->path, VCPKG_LINE_INFO));
action_idxs.push_back(i);
}
std::sort(jobs_with_size.begin(), jobs_with_size.end(), [](const auto& l, const auto& r) {
Member:

I just realized this is completely broken because it broke the invariant that action_idxs[N] is talking about jobs[N] for all N, so we are now recording which packages got restored completely incorrectly :(

Contributor (Author):

Hm, then we also have to save the original index for every job.

Member:

Yep, that's how I fixed it in #1805
