Bugfix nodelock release terminated #1398
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: xrwang8. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Summary of Changes
Hello @xrwang8, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request resolves a critical bug in the node locking mechanism that could lead to scheduler stalls. Previously, if a pod holding a node lock terminated, the lock might not be released immediately, causing subsequent workloads to wait for an extended timeout. The changes introduce logic to detect and release these stale locks proactively, significantly improving resource utilization and scheduler responsiveness.
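As a minimal sketch of the stale-lock check described above (the package name, the shouldReleaseStaleLock helper, and its signature are invented for illustration; the actual change is shown in the diff quoted later in this review):

package nodelock

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// shouldReleaseStaleLock reports whether the pod that currently owns the node
// lock can no longer make progress, in which case the lock can be released
// immediately instead of waiting for the timeout. Hypothetical sketch only.
func shouldReleaseStaleLock(ctx context.Context, client kubernetes.Interface, ns, previousPodName string) (bool, error) {
	previousPod, err := client.CoreV1().Pods(ns).Get(ctx, previousPodName, metav1.GetOptions{})
	if err != nil {
		if apierrors.IsNotFound(err) {
			// The owning pod is gone entirely, so the lock is stale.
			return true, nil
		}
		return false, err
	}
	// A pod that has finished (Succeeded/Failed) or is being deleted will
	// never release the lock itself; waiting for it only stalls the scheduler.
	if previousPod.DeletionTimestamp != nil ||
		previousPod.Status.Phase == corev1.PodSucceeded ||
		previousPod.Status.Phase == corev1.PodFailed {
		return true, nil
	}
	return false, nil
}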
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Code Review
This pull request correctly addresses a bug where the scheduler could stall waiting for a node lock held by a terminated pod. The change to LockNode eagerly releases the lock if the owning pod is found to be in a 'Succeeded', 'Failed', or 'Deleting' state, which prevents the five-minute timeout. The logic is sound and directly fixes the described issue. I've added one suggestion to improve maintainability by using a helper function for checking the pod's terminated state.
} else if previousPod.DeletionTimestamp != nil || previousPod.Status.Phase == corev1.PodSucceeded || previousPod.Status.Phase == corev1.PodFailed {
	klog.InfoS("Previous pod of NodeLock has terminated, releasing lock", "podName", previousPodName, "namespace", ns)
	skipOwnerCheck = true
}
This condition to check for a terminated pod is correct. For better maintainability and code reuse, consider encapsulating this logic in a helper function. The pkg/scheduler/scheduler.go file seems to use a util.IsPodInTerminatedState function for a similar purpose. If that function is accessible from this package, using it would be ideal. The code would become:
} else if util.IsPodInTerminatedState(previousPod) {
klog.InfoS("Previous pod of NodeLock has terminated, releasing lock", "podName", previousPodName, "namespace", ns)
skipOwnerCheck = true
}
This would improve readability and centralize the logic for checking pod termination status. You would need to add the corresponding import for the util package.
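For reference, one plausible shape for such a helper, as a sketch only (assuming corev1 is k8s.io/api/core/v1; the real util.IsPodInTerminatedState in the repository may check different conditions):

// IsPodInTerminatedState is a sketch of the helper referenced above.
// It mirrors the condition in the diff quoted earlier; the actual helper may differ.
func IsPodInTerminatedState(pod *corev1.Pod) bool {
	return pod.DeletionTimestamp != nil ||
		pod.Status.Phase == corev1.PodSucceeded ||
		pod.Status.Phase == corev1.PodFailed
}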
Codecov Report
❌ Patch coverage is
Flags with carried forward coverage won't be shown.
... and 1 file with indirect coverage changes
Signed-off-by: xrwang8 <[email protected]>
Force-pushed from d752828 to a2d66a5
…e error logging Signed-off-by: xrwang8 <[email protected]>
Force-pushed from 38595ee to 439da6c
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #1368
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Fix HAMi scheduler stalls in which a workload could wait about five minutes after a previous pod terminated: stale node locks are now released eagerly once the owning pod has finished.