Serving-aware partial preemption of workloads #3762

@mimowo

Description

What would you like to be added:

Serving workloads are different from training workloads: they can be easily trimmed, since a Deployment can keep running at 70% or 50% of its Pods. Most AI training workloads, by contrast, need all of their Pods running. We want to leverage this fact to optimize preemptions.

In particular, when a new high-priority workload comes in and we have multiple serving workloads, we want to distribute the preemptions across the serving workloads rather than preempting one of them completely, as sketched below.
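To make the idea concrete, here is a minimal Go sketch of one way the trimming could be planned: the demand is spread across the serving workloads in proportion to how far each can shrink before hitting a floor. Everything here (servingWorkload, planTrims, minReplicas) is hypothetical and not part of the Kueue API; a real design would also have to account for resource flavors, priorities, and the existing preemption machinery.

```go
// Hypothetical sketch only: these types and functions are illustrative
// and are not part of the Kueue API or its preemption implementation.
package main

import (
	"fmt"
	"sort"
)

// servingWorkload stands in for a preemptible serving workload
// (e.g. one backed by a Deployment).
type servingWorkload struct {
	name        string
	replicas    int // currently running replicas
	minReplicas int // assumed floor below which the workload must not be trimmed
}

// planTrims spreads a demand of `needed` replicas across the serving
// workloads, trimming each roughly in proportion to its headroom instead
// of fully preempting a single workload. It returns the replicas to
// remove per workload, or false if the demand cannot be met.
func planTrims(workloads []servingWorkload, needed int) (map[string]int, bool) {
	if needed <= 0 {
		return map[string]int{}, true
	}
	trimmable := 0
	for _, w := range workloads {
		trimmable += w.replicas - w.minReplicas
	}
	if trimmable < needed {
		return nil, false // trimming everything to its floor is still not enough
	}

	// Visit workloads largest-headroom-first so rounding leftovers land
	// where there is the most room to absorb them.
	sort.Slice(workloads, func(i, j int) bool {
		return workloads[i].replicas-workloads[i].minReplicas >
			workloads[j].replicas-workloads[j].minReplicas
	})

	trims := make(map[string]int, len(workloads))
	remaining := needed
	for _, w := range workloads {
		headroom := w.replicas - w.minReplicas
		// Proportional share of the total demand, rounded up.
		share := (needed*headroom + trimmable - 1) / trimmable
		if share > headroom {
			share = headroom
		}
		if share > remaining {
			share = remaining
		}
		trims[w.name] = share
		remaining -= share
	}
	// Cover any rounding shortfall wherever headroom remains.
	for _, w := range workloads {
		for remaining > 0 && trims[w.name] < w.replicas-w.minReplicas {
			trims[w.name]++
			remaining--
		}
	}
	return trims, true
}

func main() {
	workloads := []servingWorkload{
		{name: "chat-frontend", replicas: 10, minReplicas: 3},
		{name: "embeddings", replicas: 6, minReplicas: 2},
		{name: "reranker", replicas: 4, minReplicas: 2},
	}
	trims, ok := planTrims(workloads, 6) // free 6 replicas' worth of capacity
	fmt.Println(ok, trims)               // true map[chat-frontend:4 embeddings:2 reranker:0]
}
```

Visiting largest-headroom-first keeps the rounding leftovers on the workloads with the most room, so the relative cuts stay roughly even across services.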

Note that this is also related to partial preemption for batch workloads: #975. We may consider a solution that solves both problems, but for now it seems reasonable to keep this dedicated issue, emphasizing that serving workloads are special in this regard.

Why is this needed:

To improve the experience of hosting a mix of training and inference workloads. When a high-priority workload arrives, we can make room for it by trimming multiple serving workloads rather than completely preempting one of them.

Completion requirements:

This enhancement requires the following artifacts:

  • Design doc
  • API change
  • Docs update

The artifacts should be linked in subsequent comments.

Metadata

Labels

kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
