Conversation

@kaushikmitr (Contributor) commented Dec 16, 2025

This pull request refactors and simplifies the prediction-based routing logic in the inference pool scheduler. It introduces the PrepareRequestData plugin method to obtain the prefix cache score, and updates the prediction-based routing Helm chart config.

Core logic and data structure simplification:

  • The sloRequestContext struct now stores all prediction results in a single predictionsForScheduling slice, replacing the previous separate maps for TTFT and TPOT values. All code paths and tests have been updated to use this unified structure.
  • The generatePredictions and scoreWithoutPredictions functions now rely exclusively on precomputed prefix cache scores from the SLO context, removing the need to pass and use the CycleState object for these calculations.
  • The PrepareRequestData method is introduced to precompute and populate prefix cache scores in the SLO context, further decoupling data preparation from scoring and prediction logic.
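The data-structure changes above can be sketched with hypothetical, simplified types — only the names predictionsForScheduling, PrepareRequestData, and generatePredictions come from the PR description; the fields and signatures are illustrative stand-ins, not the repository's actual API:

```go
package main

import "fmt"

// podPrediction bundles the per-pod prediction results that previously
// lived in separate TTFT and TPOT maps.
type podPrediction struct {
	Pod              string
	TTFT             float64 // predicted time to first token
	TPOT             float64 // predicted time per output token
	PrefixCacheScore float64
}

// sloRequestContext holds everything the scorer needs. Prefix cache
// scores are precomputed here, so scoring and prediction no longer
// need to read the CycleState object.
type sloRequestContext struct {
	predictionsForScheduling []podPrediction
	prefixCacheScores        map[string]float64
}

// PrepareRequestData precomputes and populates prefix cache scores in
// the SLO context, decoupling data preparation from scoring.
func (c *sloRequestContext) PrepareRequestData(scores map[string]float64) {
	c.prefixCacheScores = scores
}

// generatePredictions relies exclusively on the precomputed scores and
// appends into the single unified slice.
func (c *sloRequestContext) generatePredictions() {
	for pod, score := range c.prefixCacheScores {
		c.predictionsForScheduling = append(c.predictionsForScheduling,
			podPrediction{Pod: pod, PrefixCacheScore: score})
	}
}

func main() {
	ctx := &sloRequestContext{}
	ctx.PrepareRequestData(map[string]float64{"pod-a": 0.8, "pod-b": 0.2})
	ctx.generatePredictions()
	fmt.Println(len(ctx.predictionsForScheduling))
}
```

The point of the sketch is the ordering: PrepareRequestData runs first and fills the context, so every later consumer reads from one place instead of separate TTFT/TPOT maps.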

Prediction-based scheduling flow and configuration:

  • The "prediction-based scheduling off" feature and related code paths (including the NoLatencyRoutingProfileName and associated logic) have been removed, consolidating the routing flow and simplifying the profile handler's logic.
  • The SLOAwareProfileHandler.Pick method is simplified to always return all profiles unless all have already been executed, removing conditional execution based on headers.
  • The default value for samplingMean in the latency scorer configuration is increased from 100.0 to 1000.0.
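The simplified Pick behavior in the second bullet can be sketched as follows; the function shape and names here are simplified stand-ins for SLOAwareProfileHandler.Pick, not its real signature:

```go
package main

import "fmt"

// pick always returns all profiles unless every profile has already
// been executed, in which case it returns nothing. The previous
// header-based conditional paths are gone.
func pick(all []string, executed map[string]bool) []string {
	for _, p := range all {
		if !executed[p] {
			return all // at least one profile still pending
		}
	}
	return nil // every profile already executed: stop scheduling cycles
}

func main() {
	profiles := []string{"prefix", "predicted-latency"}
	fmt.Println(len(pick(profiles, map[string]bool{})))
	fmt.Println(pick(profiles, map[string]bool{"prefix": true, "predicted-latency": true}) == nil)
}
```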

Helm Chart:

  • epp-config.yaml is simplified to select prediction-based routing when it is enabled, and to fall back to the default configuration otherwise.
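A hypothetical sketch of that chart logic — the value key and profile names below are illustrative, not the chart's actual keys:

```yaml
# values.yaml (illustrative):
#   latencyAwareRouting:
#     enabled: true
{{- if .Values.latencyAwareRouting.enabled }}
# Render the prediction-based routing EPP config.
schedulingProfiles:
  - name: predicted-latency
{{- else }}
# Fall back to the default scheduling profile.
schedulingProfiles:
  - name: default
{{- end }}
```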

@netlify netlify bot commented Dec 16, 2025

Deploy Preview for gateway-api-inference-extension ready!

Latest commit: 283a4e3
Latest deploy log: https://app.netlify.com/projects/gateway-api-inference-extension/deploys/69496e4b59132c00083afc44
Deploy Preview: https://deploy-preview-2005--gateway-api-inference-extension.netlify.app

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Dec 16, 2025
@kaushikmitr (Contributor, Author) commented Dec 16, 2025

@ahg-g this PR simplifies the Helm chart config to pick latency-aware routing if enabled, or fall back to the default if not.

@ahg-g (Contributor) commented Dec 19, 2025

@kaushikmitr the naming is a bit all over the place; we use slo / predictor / router, etc. I thought we agreed on predicted-latency as the name of the plugin. We can change that in a follow-up PR, but I think we should address it soon, since the plugin name will be part of the "config api" and is user facing.

@kaushikmitr (Contributor, Author) replied:

> @kaushikmitr the naming is a bit all over the place, we use slo / predictor / router etc. I thought we agreed on predicted-latency as the name of the plugin. We can change that in a follow up PR, but I think we should address it soon since the plugin name will be part of the "config api" and it is user facing.

Yes, we need to clean up two things: the naming of the plugin (predicted-latency), and renaming TPOT everywhere (including docs) to ITL.

```go
	matchLen := state.PrefixCacheServers[ServerID(pod.GetPod().NamespacedName)]
	pod.Put(approximateprefix.PrefixCacheMatchInfoKey, approximateprefix.NewPrefixCacheMatchInfo(matchLen, total))
}
// Store the state in plugin state for later use.
```
Contributor: why do we need to change anything in the prefix plugin?

Contributor: The prefix scorer does not consume the prefix state in the same way as the predicted-latency one?

Contributor (Author): It's a bug: we are writing to the plugin state of the prefix-cache-scorer plugin only during scoring, not during PrepareRequestData. This change fixes that. Also talked to @rahulgurnani about this; we can track it in a separate PR too.

Contributor: This is not addressed yet.

Contributor (Author): My comment was in a pending state; see my response above.

@ahg-g (Contributor) commented Dec 22, 2025

> @kaushikmitr the naming is a bit all over the place, we use slo / predictor / router etc. I thought we agreed on predicted-latency as the name of the plugin. We can change that in a follow up PR, but I think we should address it soon since the plugin name will be part of the "config api" and it is user facing.

> yes we need to clean up two things. The naming of the plugin (predicted latency) and renaming TPOT everywhere (including docs) to ITL

can we open an issue to track this work please?

```go
	pod.Put(approximateprefix.PrefixCacheMatchInfoKey, approximateprefix.NewPrefixCacheMatchInfo(matchLen, total))
}
// Store the state in plugin state for later use.
p.pluginState.Write(request.RequestId, plugins.StateKey(p.TypedName().String()), state)
```
Contributor (Author): @ahg-g @kfswain after talking to @rahulgurnani, we figured we need to write to the plugin state in the PrepareRequestData step, to ensure that running the scorer is not needed for the plugin state to be populated. I tested both with and without the feature gate.
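The fix discussed in this thread can be sketched as below. Aside from the Write call mirroring the quoted diff line, the types and method names are simplified stand-ins for the real plugin interfaces:

```go
package main

import "fmt"

// pluginState is a stand-in for the scheduler's per-request state store.
type pluginState struct{ data map[string]any }

func (s *pluginState) Write(requestID, key string, v any) {
	if s.data == nil {
		s.data = map[string]any{}
	}
	s.data[requestID+"/"+key] = v
}

func (s *pluginState) Read(requestID, key string) (any, bool) {
	v, ok := s.data[requestID+"/"+key]
	return v, ok
}

type prefixPlugin struct{ state pluginState }

// prepareRequestData now persists the computed prefix-cache state, so a
// downstream consumer (e.g. the predicted-latency scorer) can read it
// even when the prefix scorer itself never runs. Previously the state
// was only written during scoring.
func (p *prefixPlugin) prepareRequestData(requestID string, prefixState map[string]int) {
	p.state.Write(requestID, "prefix-cache-scorer", prefixState)
}

func main() {
	p := &prefixPlugin{}
	p.prepareRequestData("req-1", map[string]int{"pod-a": 3})
	_, ok := p.state.Read("req-1", "prefix-cache-scorer")
	fmt.Println(ok)
}
```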

Contributor: The PreRequest extension for the prefix plugin is executed even if the prefix scorer is not configured to run, right?

Contributor (Author): Yes, I validated that.


@kaushikmitr (Contributor, Author) replied:

> @kaushikmitr the naming is a bit all over the place, we use slo / predictor / router etc. I thought we agreed on predicted-latency as the name of the plugin. We can change that in a follow up PR, but I think we should address it soon since the plugin name will be part of the "config api" and it is user facing.

> yes we need to clean up two things. The naming of the plugin (predicted latency) and renaming TPOT everywhere (including docs) to ITL

> can we open an issue to track this work pls?

done: #2032

@ahg-g (Contributor) commented Dec 22, 2025

/lgtm
/approve

Thanks!

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Dec 22, 2025
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahg-g, kaushikmitr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 22, 2025
@k8s-ci-robot k8s-ci-robot merged commit 4c1b1ed into kubernetes-sigs:main Dec 22, 2025
12 checks passed
