Adopt resources into release with correct instance and managed-by labels #7649
Conversation
Signed-off-by: Jacob LeGrone <[email protected]>
Force-pushed from 444f9f9 to b6f1029
pkg/action/validate.go
Outdated
// Allow adoption of the resource if it already has app.kubernetes.io/instance and app.kubernetes.io/managed-by labels.
anno, err := accessor.Annotations(existing)
if err == nil && anno != nil && anno[helmReleaseAnnotation] == release {
	requireUpdate = append(requireUpdate, info)
Here we return the resource.Info for the latest version of the chart, not the resource that we've retrieved from the cluster. This might cause changes to not be applied when computing the 3-way diff; this still needs to be verified.
Signed-off-by: Jacob LeGrone <[email protected]>
Force-pushed from b6f1029 to 271cb22
pho-enix
left a comment
Nice solution!
Personally I would prefer to re-use the app.kubernetes.io/managed-by and app.kubernetes.io/instance labels and auto-inject those.
A lot of people already use them in their charts, or at least know them, meaning less confusion from double labelling. Additionally, this would push the adoption of a best practice.
See also discussion on labels in #6365 (comment)
Thanks @jlegrone. I'm going to give this a spin today. Looks like there are some significant behavioural changes under the hood.
Is this still a work-in-progress, or can we remove that label?
@bacongobbler yes, see the TODOs in the description. I think the main outstanding issue is to validate 3-way diff behavior, which is why I've kept the wip label for now.
bacongobbler
left a comment
A few quick comments as I'm reading the code. Testing this out now to verify behaviour.
pkg/action/validate.go
Outdated
}

return fmt.Errorf("existing resource conflict: namespace: %s, name: %s, existing_kind: %s, new_kind: %s", info.Namespace, info.Name, existing.GetObjectKind().GroupVersionKind(), info.Mapping.GroupVersionKind)
// Allow adoption of the resource if it already has app.kubernetes.io/instance and app.kubernetes.io/managed-by labels.
The annotations listed above don't match up to the comment provided here. Does the comment here need to be updated?
Ah, yep, that's a holdover from the initial implementation.
A basic test looks promising! I created two pods, then installed the attached chart (testchart.zip) over them. Now going to test how this handles 3-way merges as you mentioned earlier.
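The exact pod specs and commands from testchart.zip aren't reproduced above; a rough sketch of the adoption workflow being exercised, using placeholder names, might look like this:

```shell
# Create a pod outside of Helm, then mark it for adoption by the release "adopt-test".
kubectl run nginx-a --image=nginx
kubectl annotate pod nginx-a meta.helm.sh/release-name=adopt-test meta.helm.sh/release-namespace=default
kubectl label pod nginx-a app.kubernetes.io/managed-by=Helm

# Installing a chart that renders the same pod should now adopt it instead of
# failing with an "existing resource conflict" error.
helm install adopt-test ./testchart
```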
3-way merge seems to check out. I modified the template a little to add junk labels to the manifest when requested (testchart.zip). Another test was to attempt to take ownership over an object residing in the cluster during an upgrade, with the ownership annotations added to the resource beforehand. Finally, I ran that same workflow but did not add any ownership annotations to the resource. Looks like everything checks out here!
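The specific manifests used above aren't included; as an illustration (names are placeholders), taking ownership of a pre-existing object during an upgrade would look roughly like this:

```shell
# An object created outside the release that the next chart version also renders.
kubectl create configmap extra-config --from-literal=foo=bar

# Mark it as owned by the existing release so the upgrade can adopt it; without
# this metadata the upgrade should fail with a resource conflict instead.
kubectl annotate configmap extra-config meta.helm.sh/release-name=adopt-test meta.helm.sh/release-namespace=default
kubectl label configmap extra-config app.kubernetes.io/managed-by=Helm

helm upgrade adopt-test ./testchart
```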
pkg/action/install.go
Outdated
return nil, errors.Wrap(err, "unable to build kubernetes objects from release manifest")
}

err = resources.Visit(setMetadataVisitor(rel.Name, rel.Namespace))
So we're attaching this annotation to any objects going forward, both during an install AND an upgrade. NICE!
Signed-off-by: Jacob LeGrone <[email protected]>
Force-pushed from 8b2ff72 to d829343
bacongobbler
left a comment
Code LGTM. Given the usefulness of this feature, would you mind writing up some documentation for this? I know a lot of users would love to know how to adopt existing resources into a Helm release. This enables a lot of use cases that previously weren't possible.
…adata visitor Signed-off-by: Jacob LeGrone <[email protected]>
Signed-off-by: Jacob LeGrone <[email protected]>
Due to complexities with how Helm manages namespaces, as well as how we need to handle them for things like secret injection, this commit uses different strategies depending on the situation. Each of these conditions only runs on detected namespaces that are not already in the cluster:
- When a namespace manifest is in the chart, pre-create it with the necessary labels/annotations so it can be adopted on helm install.
- When a namespace is not in the chart but is used by resources in the chart, it is created and labeled as managed by zarf.
The edge case will be two charts in the same namespace where one or both define the namespace in their manifests; this actually violates intended Helm usage anyway: helm/helm#7649
Signed-off-by: Jeff McCoy <[email protected]>
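As a rough sketch of the first strategy (the zarf code itself isn't shown here, and the names below are placeholders), pre-creating a namespace so a chart that also defines it can adopt it might look like:

```shell
# Pre-create the namespace with the adoption metadata before running helm install,
# so the namespace manifest in the chart can take it over rather than conflict.
kubectl create namespace my-app
kubectl annotate namespace my-app meta.helm.sh/release-name=my-app meta.helm.sh/release-namespace=my-app
kubectl label namespace my-app app.kubernetes.io/managed-by=Helm

helm install my-app ./chart --namespace my-app
```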
Hi guys. What we are doing is:
I'd suggest opening that as a new ticket, with more details, e.g., the chart you're trying to deploy or a minimal reproduction of the problem. That way the issue/question is likely to get more attention from people who know whether it's a bug, or if it's working as intended, e.g., what you're trying to do is not supported.
Does it work the other way around?
I don't believe so, no. I believe that the next time Helm touches such a resource (one that it believes it owns per its own records, but which has had the labels removed), it will re-add those labels, as it would for any resource that never had the labels in the first place, e.g., resources deployed by an earlier version of Helm, before the labels were standardised.
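A quick way to sanity-check that belief (placeholder names; hedged on Helm re-injecting the metadata during the next apply):

```shell
# Strip the ownership metadata from a resource that Helm created.
kubectl label deployment my-app app.kubernetes.io/managed-by-
kubectl annotate deployment my-app meta.helm.sh/release-name-

# On the next upgrade the metadata should be re-applied from the rendered
# manifests, since Helm tracks ownership in its own release records.
helm upgrade my-release ./chart
kubectl get deployment my-app --show-labels
```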
Alternative implementation to #7627, #7625, and #7575.
Help with validation would be appreciated, but in theory this closes #6850, closes #4824, closes #2947, and closes #7418. This implementation would also help make #2730 very approachable.
How it Works
Option A: Standard Labels a29365b
Helm will assume that it has the right to "adopt" any existing Kubernetes resource containing the following labels:
- `app.kubernetes.io/managed-by: Helm`
- `app.kubernetes.io/instance: <RELEASE_NAME>`
The major downside is that in order to get this functionality, setting these labels is required. Despite this having been a documented best practice since well before Helm 3, that might be hard to accomplish for chart authors who have historically been using non-standard values for these labels. A workaround for this would be to enable the same behavior via custom annotations.
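For illustration, under this option an existing resource could be marked for adoption with something like the following (resource and release names are placeholders):

```shell
kubectl label deployment my-app app.kubernetes.io/managed-by=Helm app.kubernetes.io/instance=my-release
```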
Option B: New Helm Annotations 271cb22
Helm will assume that it has the right to "adopt" any existing Kubernetes resource containing the following annotations:
- `meta.helm.sh/release-name: <RELEASE_NAME>`
- `meta.helm.sh/release-namespace: <RELEASE_NAMESPACE>`
For consistency and to make things easier for chart authors, we will automatically inject or override existing values for these annotations. I think this ends up being the simplest solution, and probably something we can safely expose without a feature flag.
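Because the annotations are injected automatically, they can be inspected on any resource after an install or upgrade; for example (resource name is a placeholder):

```shell
# Show the injected release metadata on a deployed resource.
kubectl get deployment my-app -o jsonpath='{.metadata.annotations}'
kubectl get deployment my-app --show-labels
```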
To Do
- Add tests
Release Note
Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct `meta.helm.sh/release-name` and `meta.helm.sh/release-namespace` annotations, and matches the label selector `app.kubernetes.io/managed-by=Helm`. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.

In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:
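For example (kind, name, release, and namespace below are placeholders to adapt):

```shell
KIND=deployment
NAME=my-app
RELEASE=my-release
NAMESPACE=default

kubectl annotate "$KIND" "$NAME" meta.helm.sh/release-name="$RELEASE" --namespace "$NAMESPACE"
kubectl annotate "$KIND" "$NAME" meta.helm.sh/release-namespace="$NAMESPACE" --namespace "$NAMESPACE"
kubectl label "$KIND" "$NAME" app.kubernetes.io/managed-by=Helm --namespace "$NAMESPACE"
```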