CORS-4264: Update the GCP provider to allow users to skip firewall actions #910
Conversation
barbacbd commented Oct 20, 2025
We need to compile the whole module instead of a single file to make GCP CCM work.
Conflicts: pkg/controller/nodeipam/OWNERS
….10-ose-gcp-cloud-controller-manager Updating ose-gcp-cloud-controller-manager images to be consistent with ART
This commit changes owners of the project to people from OpenShift.
We need to compile the whole module instead of a single file to make GCP CCM work.
…m rebase. Conflicts: vendor/github.com/google/go-tpm/tpmutil/BUILD
This commit changes owners of the project to people from OpenShift.
We need to compile the whole module instead of a single file to make GCP CCM work.
…m rebase. Conflicts: vendor/github.com/google/go-tpm/tpmutil/BUILD. Conflicts: vendor/github.com/googleapis/gax-go/v2/BUILD, vendor/golang.org/x/oauth2/google/BUILD, vendor/golang.org/x/oauth2/google/internal/externalaccount/BUILD, vendor/golang.org/x/sys/unix/BUILD, vendor/golang.org/x/sys/windows/BUILD, vendor/google.golang.org/api/internal/gensupport/BUILD, vendor/google.golang.org/api/option/internaloption/BUILD, vendor/google.golang.org/protobuf/types/descriptorpb/BUILD
Conflicts: crd/client/gcpfirewall/informers/externalversions/BUILD, crd/client/gcpfirewall/informers/externalversions/gcpfirewall/v1alpha1/BUILD, crd/client/gcpfirewall/informers/externalversions/internalinterfaces/BUILD, crd/client/gcpfirewall/listers/gcpfirewall/v1alpha1/BUILD, vendor/github.com/googleapis/gax-go/v2/apierror/internal/proto/BUILD, vendor/golang.org/x/oauth2/authhandler/BUILD, vendor/google.golang.org/genproto/googleapis/rpc/code/BUILD, vendor/google.golang.org/genproto/googleapis/rpc/errdetails/BUILD
…ed by Ingress-GCE
Ran: go mod tidy && ./tools/update_vendor.sh && ./tools/update_bazel.sh
Bug 2041509: Rebase CCM onto latest changes with K8s 1.23 updates
Based on the docs for internal load balancers [1], backend services [2], and instances in instance groups [3], the following restrictions apply:
- An internal LB can load balance to VMs in the same region but in different subnets.
- Instance groups for the backend service must contain instances from the same subnet.
- An instance can only belong to one load-balanced instance group.
It is a useful use case for cluster nodes to belong to more than one subnet, and the current setup fails to create an internal load balancer when nodes span multiple subnets. This change finds pre-existing instance groups that contain ONLY instances belonging to the cluster and uses them for the backend service, and only ensures instance groups for the remaining nodes.
[1] https://cloud.google.com/load-balancing/docs/internal
[2] https://cloud.google.com/load-balancing/docs/backend-service#restrictions_and_guidance
[3] https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances#addinstances
Co-authored-by: Abhinav Dahiya <[email protected]>
OCPCLOUD-2926: Merge https://github.com/kubernetes/cloud-provider-gcp:master (c0af057) into main
This commit rewrites 49f5389: work around GCP internal load balancer restrictions for multi-subnet clusters. GCP internal load balancers have specific restrictions that prevent straightforward load balancing across multiple subnets:
1. "Don't put a VM in more than one load-balanced instance group."
2. Instance groups can "only select VMs that are in the same zone, VPC network, and subnet."
3. "All VMs in an instance group must have their primary network interface in the same VPC network."
4. Internal LBs can load balance to VMs in the same region but in different subnets.
For clusters with nodes across multiple subnets, the previous implementation would fail to create internal load balancers. This change implements a two-pass approach:
1. Find existing external instance groups (matching externalInstanceGroupsPrefix) that contain ONLY cluster nodes and reuse them for the backend service.
2. Create internal instance groups only for the remaining nodes not covered by external groups.
This ensures compliance with GCP restrictions while enabling multi-subnet load balancing for Kubernetes clusters. References:
- Internal LB docs: https://cloud.google.com/load-balancing/docs/internal
- Backend service restrictions: https://cloud.google.com/load-balancing/docs/backend-service#restrictions_and_guidance
- Instance group constraints: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances#addinstances
🤖 Commit message & comments generated with [Claude Code](https://claude.ai/code)
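A minimal sketch of that two-pass selection, assuming simplified stand-in types: the real controller works with GCE compute API types and v1.Node, and the names below (InstanceGroup, pickBackendGroups, the "ext-ig-" prefix) are illustrative, not the actual implementation.

```go
// Hypothetical sketch of the two-pass instance group selection described above.
package main

import (
	"fmt"
	"strings"
)

type InstanceGroup struct {
	Name      string
	Instances []string // names of the VM instances in the group
}

type Node struct {
	Name string
}

// pickBackendGroups reuses pre-existing groups (matching extPrefix) that
// contain ONLY cluster nodes, and reports which nodes still need a
// controller-managed internal instance group.
func pickBackendGroups(existing []InstanceGroup, nodes []Node, extPrefix string) (reused []InstanceGroup, remaining []Node) {
	clusterNodes := make(map[string]bool, len(nodes))
	for _, n := range nodes {
		clusterNodes[n.Name] = true
	}

	covered := make(map[string]bool)
	for _, ig := range existing {
		if !strings.HasPrefix(ig.Name, extPrefix) {
			continue
		}
		// Pass 1: a group is reusable only if every member is a cluster
		// node, since a VM may belong to at most one load-balanced group.
		reusable := true
		for _, inst := range ig.Instances {
			if !clusterNodes[inst] {
				reusable = false
				break
			}
		}
		if !reusable {
			continue
		}
		reused = append(reused, ig)
		for _, inst := range ig.Instances {
			covered[inst] = true
		}
	}

	// Pass 2: any node not covered by a reused group still needs the
	// internal instance group ensured by the controller.
	for _, n := range nodes {
		if !covered[n.Name] {
			remaining = append(remaining, n)
		}
	}
	return reused, remaining
}

func main() {
	groups := []InstanceGroup{
		{Name: "ext-ig-subnet-a", Instances: []string{"node-1", "node-2"}},
		{Name: "ext-ig-subnet-b", Instances: []string{"node-3", "unrelated-vm"}},
	}
	nodes := []Node{{Name: "node-1"}, {Name: "node-2"}, {Name: "node-3"}}

	reused, remaining := pickBackendGroups(groups, nodes, "ext-ig-")
	fmt.Println("reused groups:", len(reused), "nodes still needing groups:", len(remaining))
}
```

Reusing only groups whose members are all cluster nodes is what keeps the change within GCP's one-load-balanced-group-per-VM restriction.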
NO-JIRA: UPSTREAM: 894: Adding SyncMutex to nodeipam unit tests
OCPBUGS-60772: Reuse instance groups
…r image to be consistent with ART for 4.21. Reconciling with https://github.com/openshift/ocp-build-data/tree/3fdad9b43ac7aa4e2ed5db0c6f5266809a9ebbc0/images/ose-gcp-cloud-controller-manager.yml
OCPBUGS-61006: Adjust vendoring to use go.work to get rid of the symlink
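As an illustration of what "use go.work" could look like here, a workspace file along these lines lets the Go toolchain build the nested module without the old symlink; the `./providers` path and the Go version below are assumptions about the repository layout, not taken from the PR.

```
// Hypothetical go.work sketch; module paths and Go version are assumptions.
go 1.22

use (
	.
	./providers
)
```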
NO-JIRA: Update OWNERS
….21-ose-gcp-cloud-controller-manager OCPBUGS-62572: Updating ose-gcp-cloud-controller-manager-container image to be consistent with ART for 4.21
…tions
cluster: Update the scripts to include the new variables.
providers/gce: Update the config to include the new `ManageFirewallRules` boolean setting. This variable allows users to skip the creation, deletion, and updating of firewall rules when it is set to false. Users may not want, or may not have the ability, to add the permissions needed for these actions to their service account. When this is the case, the firewall rules should be pre-created and managed by someone with the permissions to achieve the same goal.
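A minimal sketch of how such a setting could be consulted, assuming an illustrative field name, gcfg tag, and ensureFirewall helper; the actual names and call sites in providers/gce may differ.

```go
// Hypothetical sketch only: the field name, gcfg tag, and ensureFirewall
// helper are assumptions for illustration, not the exact code in this PR.
package main

import "fmt"

// ConfigGlobal mirrors the idea of a [global] gce.conf section carrying the
// new boolean; a real config loader would presumably default it to true so
// existing clusters keep their current behavior.
type ConfigGlobal struct {
	ManageFirewallRules bool `gcfg:"manage-firewall-rules"`
}

// ensureFirewall shows where the flag would be consulted: when management is
// disabled, the provider assumes an administrator pre-created the rules.
func ensureFirewall(cfg ConfigGlobal, ruleName string) error {
	if !cfg.ManageFirewallRules {
		fmt.Printf("skipping firewall rule %q: rule management disabled in config\n", ruleName)
		return nil
	}
	// A real implementation would call the GCE compute API here to create
	// or patch the rule; this sketch only records the decision.
	fmt.Printf("ensuring firewall rule %q\n", ruleName)
	return nil
}

func main() {
	cfg := ConfigGlobal{ManageFirewallRules: false}
	_ = ensureFirewall(cfg, "k8s-fw-my-service")
}
```

Guarding every create, update, and delete path behind one flag keeps the behavior predictable: either the provider fully manages firewall rules, or it never touches them.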
This issue is currently awaiting triage. If the repository maintainers determine this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Welcome @barbacbd!
Hi @barbacbd. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: barbacbd. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
This needs to branch from the upstream ( |