
Conversation

Contributor

@xrwang8 commented Oct 11, 2025

What type of PR is this?

/kind bug

What this PR does / why we need it:

  • Release stale per-node locks when the previously owning pod has already terminated (Succeeded/Failed/Deleting), so follow-up workloads no longer wait for the five-minute timeout; a sketch of this check follows the list below.
  • Harden TestConcurrentNodeLocks to verify cross-node concurrency without relying on wall-clock heuristics.
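
A minimal sketch of the idea behind the first bullet, under simplifying assumptions: the client handle, the "namespace/name" owner encoding, and the helper name releaseIfOwnerTerminated are illustrative and are not the actual HAMi nodelock code.

package nodelock

import (
	"context"
	"strings"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// releaseIfOwnerTerminated looks up the pod recorded as the lock owner and
// reports whether the lock can be reclaimed immediately: the owner is gone,
// has Succeeded or Failed, or is being deleted.
func releaseIfOwnerTerminated(ctx context.Context, client kubernetes.Interface, owner string) (bool, error) {
	parts := strings.SplitN(owner, "/", 2)
	if len(parts) != 2 {
		return false, nil
	}
	pod, err := client.CoreV1().Pods(parts[0]).Get(ctx, parts[1], metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // owner no longer exists, so the lock is stale
	}
	if err != nil {
		return false, err
	}
	return pod.DeletionTimestamp != nil ||
		pod.Status.Phase == corev1.PodSucceeded ||
		pod.Status.Phase == corev1.PodFailed, nil
}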

Which issue(s) this PR fixes:
Fixes #1368

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Fix HAMi scheduler stalls in which a workload could wait about five minutes after a previous pod terminated; stale node locks are now released eagerly once the owning pod has finished.

Contributor

@hami-robot bot commented Oct 11, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: xrwang8
Once this PR has been reviewed and has the lgtm label, please assign archlitchi for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

Summary of Changes

Hello @xrwang8, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug in the node locking mechanism that could lead to scheduler stalls. Previously, if a pod holding a node lock terminated, the lock might not be released immediately, causing subsequent workloads to wait for an extended timeout. The changes introduce logic to detect and release these stale locks proactively, significantly improving resource utilization and scheduler responsiveness.

Highlights

  • Bug Fix: Stale Node Locks: This pull request addresses a bug where per-node locks were not being released promptly when the owning pod had already terminated (Succeeded, Failed, or Deleting). This caused follow-up workloads to experience unnecessary five-minute timeouts.
  • Improved Scheduler Performance: By eagerly releasing these stale locks, the PR prevents HAMi scheduler stalls, ensuring that new workloads can acquire necessary resources without delay once previous pods have finished their execution.

@hami-robot bot added the size/XS label Oct 11, 2025
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request correctly addresses a bug where the scheduler could stall waiting for a node lock held by a terminated pod. The change to LockNode eagerly releases the lock if the owning pod is found to be in a 'Succeeded', 'Failed', or 'Deleting' state, which prevents the five-minute timeout. The logic is sound and directly fixes the described issue. I've added one suggestion to improve maintainability by using a helper function for checking the pod's terminated state.

Comment on lines +182 to 185
} else if previousPod.DeletionTimestamp != nil || previousPod.Status.Phase == corev1.PodSucceeded || previousPod.Status.Phase == corev1.PodFailed {
klog.InfoS("Previous pod of NodeLock has terminated, releasing lock", "podName", previousPodName, "namespace", ns)
skipOwnerCheck = true
}
Contributor

medium

This check for a terminated pod is correct. For better maintainability and code reuse, consider encapsulating this logic in a helper function. The pkg/scheduler/scheduler.go file appears to use a util.IsPodInTerminatedState function for a similar purpose.

If that function is accessible from this package, using it would be ideal. The code would become:

} else if util.IsPodInTerminatedState(previousPod) {
    klog.InfoS("Previous pod of NodeLock has terminated, releasing lock", "podName", previousPodName, "namespace", ns)
    skipOwnerCheck = true
}

This would improve readability and centralize the logic for checking pod termination status. You would need to add the corresponding import for the util package.
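
If that helper is adopted here, its body would presumably mirror the condition in the current diff. A sketch under that assumption follows; the actual util.IsPodInTerminatedState in HAMi may be named or shaped differently.

package util

import (
	corev1 "k8s.io/api/core/v1"
)

// IsPodInTerminatedState reports whether a pod has reached a terminal phase
// (Succeeded or Failed) or is being deleted (DeletionTimestamp set).
func IsPodInTerminatedState(pod *corev1.Pod) bool {
	if pod == nil {
		return false
	}
	return pod.DeletionTimestamp != nil ||
		pod.Status.Phase == corev1.PodSucceeded ||
		pod.Status.Phase == corev1.PodFailed
}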


codecov bot commented Oct 11, 2025

Codecov Report

❌ Patch coverage is 50.00000% with 6 lines in your changes missing coverage. Please review.

Files with missing lines | Patch % | Lines
pkg/util/nodelock/nodelock.go | 40.00% | 2 Missing and 1 partial ⚠️
pkg/util/util.go | 57.14% | 2 Missing and 1 partial ⚠️

Flag | Coverage Δ
unittests | 63.81% <50.00%> (+0.05%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Files with missing lines | Coverage Δ
pkg/util/nodelock/nodelock.go | 51.79% <40.00%> (-1.19%) ⬇️
pkg/util/util.go | 70.12% <57.14%> (-0.82%) ⬇️

... and 1 file with indirect coverage changes


@xrwang8 force-pushed the bugfix-nodelock-release-terminated branch from 38595ee to 439da6c on October 11, 2025 04:02
@xrwang8 closed this Oct 11, 2025
@xrwang8 deleted the bugfix-nodelock-release-terminated branch October 11, 2025 06:33

Development

Successfully merging this pull request may close these issues.

HAMi Scheduler Not Trying to Schedule Previously Pending Workload for 5mins
