pallet-broker: Fix force_reserve
#10792
Conversation
When issuing a `force_reserve`, we put the reservation into the `Workplan` for the current and the next region. The problem is that at the next sale rotation we overwrite all unused cores. Since the sale rotation isn't aware of the forcefully registered core, the force-reserved core is overwritten as well, and the parachain loses its coretime for one region (it comes back in the following region). To fix this, we now keep track of forcefully registered reservations. We feed them into the workplan alongside the other reservations, but for the current region we use any free cores left over from the previous sale.
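For intuition, here is a minimal standalone sketch of the approach, with simplified names and types; this is not the pallet's actual code, just the shape of the fix:

```rust
// Simplified stand-ins for the pallet's `ScheduleOf<T>` and sale info.
struct Schedule;

struct OldSale {
    first_core: u16,
    cores_sold: u16,
}

/// At sale rotation, drain the tracked forced reservations and place each
/// one on the first core left free by the previous sale, bounded by the
/// current core count.
fn rotate_forced_reservations(
    old_sale: &OldSale,
    core_count: u16,
    force_reservations: &mut Vec<Schedule>,
    workplan: &mut Vec<(u16, Schedule)>, // (core, schedule) for the current region
) {
    // The first core not sold in the previous sale is free.
    let mut force_core = old_sale.first_core + old_sale.cores_sold;
    for schedule in force_reservations.drain(..) {
        if force_core >= core_count {
            break; // simplified: reservations that don't fit are dropped here
        }
        workplan.push((force_core, schedule));
        force_core += 1;
    }
}

fn main() {
    let old_sale = OldSale { first_core: 0, cores_sold: 3 };
    let mut forced = vec![Schedule];
    let mut workplan = Vec::new();
    rotate_forced_reservations(&old_sale, 5, &mut forced, &mut workplan);
    assert_eq!(workplan.len(), 1); // placed on core 3, the first free core
}
```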
/cmd prdoc --audience runtime_dev --bump minor
```rust
// Insert ForceReservations at the first free core from the old sale.
let mut force_core = old_sale.first_core + old_sale.cores_sold;
for schedule in ForceReservations::<T>::take() {
	if force_core >= status.core_count {
		// … (remainder of the loop body is elided in this diff excerpt)
```
I wonder if there are some edge cases where it makes a difference whether we use `core_count`, like here, or `cores_offered`, like a few lines above, to bound the cores used.
How do changes in the core count come into play? Here, for example, we check that no core entered into the workplan is above `core_count`, which makes sense to me. But in the other lines mentioned, we don't check the core count. Can it happen that the core count changes and messes with these storage items?
I did not use `cores_offered` because this number doesn't change. However, `core_count` may change within a region. Something I could imagine is that someone sends `batch(request_core_count(current + 1), force_reserve())` because there aren't enough free cores. This way a fresh core is added that would not be tracked by `cores_offered`, which is why we use `core_count`.
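For illustration, that scenario would look roughly like this, assuming a runtime that includes `pallet-utility` and `pallet-broker`; the `RuntimeCall` variant names, `current_core_count`, `workload`, and `core` are assumptions about the concrete runtime, not verified API:

```rust
// Hypothetical batch: raise the core count, then force-reserve onto the
// fresh core, all within one atomic `batch_all`.
let calls = vec![
    RuntimeCall::Broker(pallet_broker::Call::request_core_count {
        core_count: current_core_count + 1,
    }),
    RuntimeCall::Broker(pallet_broker::Call::force_reserve { workload, core }),
];
let batch = RuntimeCall::Utility(pallet_utility::Call::batch_all { calls });
```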
skunert left a comment:
Left one comment for my understanding, but otherwise looks good.
/cmd fmt
```rust
assert_eq!(
	Workplan::<T>::get((sale_info.region_begin, status.core_count)),
	Some(schedule.clone())
);
assert_eq!(ForceReservations::<T>::get(), vec![schedule.clone()]);
```
IIRC `ReservationRecordOf` is bounded. That should fix the benchmarks. On my phone, so I can't check.
Yes, it is bounded, but I don't get your comment.
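For context on the bounding point, here is a hedged sketch of how the new storage item could be declared so that benchmarks can rely on a bound, mirroring how the pallet's existing `Reservations` item is bounded by `T::MaxReservedCores`; the PR's actual declaration may differ:

```rust
// Assumption: reuse the pallet's existing `MaxReservedCores` bound for the
// forcefully registered reservations as well.
#[pallet::storage]
pub type ForceReservations<T: Config> =
	StorageValue<_, BoundedVec<ScheduleOf<T>, T::MaxReservedCores>, ValueQuery>;
```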
Co-authored-by: Dónal Murray <[email protected]>
All GitHub workflows were cancelled due to the failure of one of the required jobs.
With different timeslice periods, we may not always insert the schedule into the workplan. This is also verified by the tests.
Successfully created backport PR for …