
Conversation

@jlegrone
Member

@jlegrone jlegrone commented Feb 21, 2020

Alternative implementation to #7627, #7625, and #7575.

Help with validation would be appreciated, but in theory this closes #6850, closes #4824, closes #2947, and closes #7418. This implementation would also help make #2730 very approachable.

How it Works

Option A: Standard Labels a29365b

Helm will assume that it has the right to "adopt" any existing Kubernetes resource containing the following labels:

  • app.kubernetes.io/managed-by: Helm
  • app.kubernetes.io/instance: <RELEASE_NAME>

The major downside is that this functionality requires these labels to be set. Although this has been a documented best practice since well before Helm 3, it may be hard to accomplish for chart authors who have historically used non-standard values for these labels. A workaround would be to enable the same behavior via custom annotations.
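For illustration, here is a minimal sketch (resource and release names are placeholders) of an existing resource that would be considered adoptable under this option:

# Hypothetical pre-existing resource; Option A treats it as adoptable
# because of the two standard labels below.
apiVersion: v1
kind: Pod
metadata:
  name: my-app                              # placeholder resource name
  labels:
    app.kubernetes.io/managed-by: Helm      # must be exactly "Helm"
    app.kubernetes.io/instance: my-release  # must match the Helm release name
spec:
  containers:
    - name: my-app
      image: busybox
      command: ['sh', '-c', 'sleep 3600']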

Option B: New Helm Annotations 271cb22

Helm will assume that it has the right to "adopt" any existing Kubernetes resource containing the following annotations:

  • meta.helm.sh/release-name: <RELEASE_NAME>
  • meta.helm.sh/release-namespace: <RELEASE_NAMESPACE>

For consistency and to make things easier for chart authors, we will automatically inject or override existing values for these annotations. I think this ends up being the simplest solution, and probably something we can safely expose without a feature flag.
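As a sketch (names are placeholders), a pre-existing resource carrying these annotations would look like the following; note that the release note below additionally matches on the app.kubernetes.io/managed-by=Helm label:

# Hypothetical pre-existing resource; Option B treats it as adoptable
# because of the two meta.helm.sh annotations below.
apiVersion: v1
kind: Pod
metadata:
  name: my-app                                # placeholder resource name
  annotations:
    meta.helm.sh/release-name: my-release     # must match the Helm release name
    meta.helm.sh/release-namespace: default   # must match the release namespace
spec:
  containers:
    - name: my-app
      image: busybox
      command: ['sh', '-c', 'sleep 3600']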

To Do

  1. Validate that 3-way diffs work correctly. See PR comment below.
  2. Decide if this should be opt-in/behind an experimental feature flag
  3. Error out if annotations list a different release as owner of resource (fix for Resource ownership model does not guarantee exclusivity #7699 / Resource ownership between charts #2388)
    • This will be enabled in a followup PR, but is implemented in 8d1de39
  4. Add tests

Release Note

Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.

In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:

KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm

@helm-bot helm-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Feb 21, 2020
@jlegrone jlegrone added the wip work in progress label Feb 21, 2020
@jlegrone jlegrone force-pushed the jlegrone/adopt-resources branch 2 times, most recently from 444f9f9 to b6f1029 Compare February 21, 2020 02:01
// Allow adoption of the resource if it already has app.kubernetes.io/instance and app.kubernetes.io/managed-by labels.
anno, err := accessor.Annotations(existing)
if err == nil && anno != nil && anno[helmReleaseAnnotation] == release {
requireUpdate = append(requireUpdate, info)
Member Author


Here we return the resource.Info for the latest version of the chart, not the resource that we've retrieved from the cluster. This might cause changes to not be applied when computing the 3-way diff; this still needs to be verified.

Contributor

@pho-enix pho-enix left a comment


Nice solution!

Personally I would prefer to re-use the
app.kubernetes.io/managed-by
app.kubernetes.io/instance
labels and auto-inject those.

A lot of people already use them in their charts, or at least know them, so there would be less confusion from double labelling. It would also push adoption of a best practice.

@jlegrone
Member Author

See also discussion on labels in #6365 (comment)

@bacongobbler
Member

thanks @jlegrone. I'm going to give this a spin today. Looks like there are some significant behavioural changes under the hood to helm install with the change to call KubeClient.Update, so I want to make sure we're not introducing any regressions here.

@bacongobbler
Member

is this still a work-in-progress, or can we remove that label?

@jlegrone
Member Author

jlegrone commented Mar 5, 2020

is this still a work-in-progress, or can we remove that label?

@bacongobbler yes, see TODOs in the description. I think the main outstanding issue is to validate 3-way diff behavior, which is why I've kept the wip label for now.

Member

@bacongobbler bacongobbler left a comment


A few quick comments as I'm reading the code. Testing this out now to verify behaviour

}

return fmt.Errorf("existing resource conflict: namespace: %s, name: %s, existing_kind: %s, new_kind: %s", info.Namespace, info.Name, existing.GetObjectKind().GroupVersionKind(), info.Mapping.GroupVersionKind)
// Allow adoption of the resource if it already has app.kubernetes.io/instance and app.kubernetes.io/managed-by labels.
Member


the annotations listed above don't match up to the comment provided here. Does the comment here need to be updated?

Member Author


ah, yep that's a holdover from the initial implementation

@bacongobbler
Member

bacongobbler commented Mar 5, 2020

A basic test looks promising!

I created two pods: foo and bar. One had the annotations, the other didn't.

Here's what I used for testing: testchart.zip

To create the pods, I used:

><> unzip testchart
><> helm template foo testchart | kubectl create -f -
><> helm template bar testchart --set addAnnotations=true | kubectl create -f -

Then I tested with the following:

><> helm install car testchart
NAME: car
LAST DEPLOYED: Thu Mar  5 10:44:28 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
><> helm install foo testchart
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: default, name: foo, existing_kind: /v1, Kind=Pod, new_kind: /v1, Kind=Pod
><> helm install bar testchart
NAME: bar
LAST DEPLOYED: Thu Mar  5 10:44:34 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Now going to test how this handles 3-way merges as you mentioned earlier.

@bacongobbler
Member

bacongobbler commented Mar 5, 2020

3-way merge seems to check out.

I modified the template a little to add junk labels to the manifest when requested: testchart.zip

$ helm template foo . --set addAnnotations=true | kubectl create -f -
$ helm template bar . --set addAnnotations=true,addLabels=true | kubectl create -f -
$ helm install foo . --set addLabels=true
$ helm install bar .

Both foo and bar had the labels and annotations attached as expected.

Another test was to attempt to take ownership over an object residing in the cluster during an upgrade:

$ helm template foo . --set addAnnotations=true | kubectl create -f -
$ helm template bar . --set addAnnotations=true | kubectl create -f -
$ helm install foo .
$ kubectl edit pod bar # change meta.helm.sh/release-name to "foo"

I added bar to the template, intending to take ownership of the object in the next upgrade:

apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}
  {{ if .Values.addLabels -}}
  labels:
    meta.helm.sh/release-name: "{{ .Release.Name }}"
    meta.helm.sh/release-namespace: "{{ .Release.Namespace }}"
  {{- end }}
  {{ if .Values.addAnnotations -}}
  annotations:
    meta.helm.sh/release-name: "{{ .Release.Name }}"
    meta.helm.sh/release-namespace: "{{ .Release.Namespace }}"
  {{- end }}
spec:
  containers:
  - name: {{ .Release.Name }}
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
---
apiVersion: v1
kind: Pod
metadata:
  name: bar
  {{ if .Values.addLabels -}}
  labels:
    meta.helm.sh/release-name: "{{ .Release.Name }}"
    meta.helm.sh/release-namespace: "{{ .Release.Namespace }}"
  {{- end }}
  {{ if .Values.addAnnotations -}}
  annotations:
    meta.helm.sh/release-name: "{{ .Release.Name }}"
    meta.helm.sh/release-namespace: "{{ .Release.Namespace }}"
  {{- end }}
spec:
  containers:
  - name: bar
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
$ helm upgrade foo .

Finally, I ran the same workflow, but did not add any ownership annotations to the resource.

$ helm template foo . --set addAnnotations=true | kubectl create -f -
$ helm template bar . | kubectl create -f -
$ helm install foo .
$ # edit the template as described above to take ownership of "bar"
$ helm upgrade foo .
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: existing resource conflict: namespace: default, name: bar, existing_kind: /v1, Kind=Pod, new_kind: /v1, Kind=Pod

Looks like everything checks out here!

return nil, errors.Wrap(err, "unable to build kubernetes objects from release manifest")
}

err = resources.Visit(setMetadataVisitor(rel.Name, rel.Namespace))
Member


So we're attaching this annotation to any objects going forward, both during an install AND an upgrade. NICE!

@jlegrone jlegrone force-pushed the jlegrone/adopt-resources branch from 8b2ff72 to d829343 Compare March 6, 2020 02:50
Member

@bacongobbler bacongobbler left a comment


Code LGTM. Given the usefulness of this feature, would you mind writing up some documentation for this? I know a lot of users would love to know how to adopt existing resources into a helm release. This enables a lot of use cases that previously weren't possible.

Signed-off-by: Jacob LeGrone <[email protected]>
@helm-bot helm-bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Mar 11, 2020
@jlegrone jlegrone removed the wip work in progress label Mar 11, 2020
jeff-mccoy added a commit to zarf-dev/zarf that referenced this pull request Jan 30, 2022
Due to complexities with how helm manages namespaces, as well
as how we need to handle them for things like secret injection,
this commit uses different strategies depending on the situation.

Each of these conditions only runs on detected namespaces that are not already in the cluster:

- When a namespace manifest is in the chart, pre-create it with the necessary labels/annotations to be adopted on helm install
- When a namespace is not in the chart but is used by resources in the chart, it is created and labeled as managed by zarf

The edge case will be two charts in the same namespace where one or both define the namespace in their manifests, which actually violates helm usage anyway: helm/helm#7649
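For reference, a minimal sketch (names are placeholders) of a namespace pre-created so that a later helm install can adopt it, using the metadata from this PR's release note:

# Hypothetical pre-created namespace carrying the Helm adoption metadata.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace                          # placeholder namespace name
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: my-release     # placeholder release name
    meta.helm.sh/release-namespace: my-namespace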
jeff-mccoy added a commit to zarf-dev/zarf that referenced this pull request Feb 2, 2022
RothAndrew pushed a commit to zarf-dev/zarf that referenced this pull request Feb 7, 2022
jeff-mccoy added a commit to zarf-dev/zarf that referenced this pull request Feb 8, 2022
jeff-mccoy added a commit to zarf-dev/zarf that referenced this pull request Feb 8, 2022
jeff-mccoy added a commit to zarf-dev/zarf that referenced this pull request Feb 8, 2022
@Shiva19908

Shiva19908 commented Oct 24, 2022

Hi, does Helm adopt resources created by a Helm hook, for example by a pre-upgrade hook? We are getting the following error during the upgrade:
upgrade.go:434: [debug] warning: Upgrade "abc" failed: no Service with the name "abc-headless" found
Error: UPGRADE FAILED: no Service with the name "abc-headless" found
helm.go:84: [debug] no Service with the name "abc-headless" found
helm.sh/helm/v3/pkg/kube.(*Client).Update.func1
	helm.sh/helm/v3/pkg/kube/client.go:257
helm.sh/helm/v3/pkg/kube.ResourceList.Visit
	helm.sh/helm/v3/pkg/kube/resource.go:32
helm.sh/helm/v3/pkg/kube.(*Client).Update
	helm.sh/helm/v3/pkg/kube/client.go:230
helm.sh/helm/v3/pkg/action.(*Upgrade).releasingUpgrade
	helm.sh/helm/v3/pkg/action/upgrade.go:376
runtime.goexit
	runtime/asm_amd64.s:1571
UPGRADE FAILED
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:201
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:990
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:918
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_amd64.s:1571

What we are doing: we create a Kubernetes Service in a pre-upgrade hook and add the relevant labels and annotations so that Helm will adopt the Service. But this is not working as expected, and Helm throws the error above. However, if we create the same resource manually before the upgrade, Helm adopts it and the upgrade works as expected. Furthermore, if we re-run helm upgrade after it has failed the first time with the above error, it passes and works as expected. So we would like to know whether Helm supports adopting resources created during the upgrade process, for example by a hook.
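Roughly, the hook-created Service looks like this (a simplified sketch; the namespace and port are assumptions):

# Simplified sketch of the Service created by the pre-upgrade hook,
# with the adoption metadata described in this PR.
apiVersion: v1
kind: Service
metadata:
  name: abc-headless
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    helm.sh/hook: pre-upgrade                # created by the hook during upgrade
    meta.helm.sh/release-name: abc
    meta.helm.sh/release-namespace: default  # assumed namespace
spec:
  clusterIP: None                            # headless service
  ports:
    - port: 80                               # assumed port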

@TBBle
Contributor

TBBle commented Oct 24, 2022

I'd suggest opening that as a new ticket, with more details, e.g., the chart you're trying to deploy or a minimal replication of the problem. That way the issue/question is likely to get more attention from people who know whether it's a bug, or if it's working-as-intended, e.g., what you're trying to do is not supported.

Noxsios pushed a commit to zarf-dev/zarf that referenced this pull request Mar 8, 2023
@lkoniecz

lkoniecz commented Mar 9, 2023

Does it work the other way around? I mean, if I remove those two annotations and the label, will the resource be unpinned from the chart?

@TBBle
Contributor

TBBle commented Mar 9, 2023

I don't believe so, no. I believe that next time Helm touches such a resource (that it believes it owns per its own records, but which has had the labels removed), it will re-add those labels, as it would do for any resources that never had the labels in the first place, e.g., resources deployed by an earlier version of Helm, before the labels were standardised.

