Error: rendered manifests contain a resource that already exists

CRD breaks install #33

Comments

stobias123 commented Feb 2, 2020

vietwow commented Feb 3, 2020

I got the same issue and posted it here: Kong/kong#5520

hbagdi commented Feb 3, 2020

Can you try the following?

We have to keep compatibility with Helm v2, and hence the above flag is necessary if your CRDs already exist in the cluster.
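A minimal sketch of the kind of install being suggested, assuming the Kong chart exposes an ingressController.installCRDs value (the value name is an assumption, not confirmed in this thread):

```shell
# Sketch only: tell the chart not to template the CRDs when they already exist
# in the cluster. ingressController.installCRDs is an assumed value name.
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong kong/kong --namespace kong \
  --set ingressController.installCRDs=false
```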

vietwow commented Feb 3, 2020

It works now, thank you so much.

hbagdi commented Feb 3, 2020

hbagdi commented Feb 3, 2020

yspotts commented Feb 3, 2020

hbagdi commented Feb 3, 2020

We should. PR welcome!

yspotts commented Feb 3, 2020

So I believe the reason this is not an issue in other projects is that they use the crd-install hook for their CRDs. This hook was removed in Helm 3, so I am guessing the CRD will never get installed by Helm 3 since that hook will never run. Not sure we want to add that hook at this point.
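For context, the crd-install hook under discussion is a chart annotation placed on CRD manifests under templates/; Helm 2 runs such manifests before the rest of the chart, while Helm 3 skips them with a warning. A minimal sketch, using a hypothetical CRD borrowed from the Kong ingress controller:

```yaml
# templates/crds.yaml -- sketch only; the CRD name and spec are illustrative.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kongplugins.configuration.konghq.com
  annotations:
    "helm.sh/hook": crd-install   # Helm 2 installs this before other resources; Helm 3 ignores the hook
spec:
  group: configuration.konghq.com
  version: v1
  scope: Namespaced
  names:
    plural: kongplugins
    singular: kongplugin
    kind: KongPlugin
```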

yspotts commented Feb 4, 2020

lindhe commented Feb 7, 2020

lindhe commented Feb 7, 2020

It seems to me that the issue is that the template file should not try to install the CRDs when using Helm 3, since Helm 3 will do that already.

lindhe commented Feb 10, 2020

I think that all of this stems from having a crds directory but still supporting Helm 2. Could we drop Helm 2 support?

hbagdi commented Feb 10, 2020 •

Docs are missing for installation for Helm3. #34

As much as I wish to drop support for Helm 2, I don’t think it is reasonable at this point given that Helm 3 came out relatively recently and most users are still on Helm 2.
On the contrary, I’d consider dropping support for Helm 3 until there is some tooling (from the Helm community) to manage charts for both versions of Helm.

lindhe commented Feb 10, 2020 •

As much as I wish to drop support for Helm 3

I guess you mean Helm 2

On the contrary, I’d consider dropping support for Helm 3 until there is some tooling (from the Helm community) to manage charts for both versions of Helm.

Helm 3 has done a fairly good job of staying backwards compatible. A valid Helm 2 chart (i.e. with apiVersion: v1) should be installable with Helm 3. I guess we’ve found an edge case here, but was there a problem before the introduction of the crds directory?

hbagdi commented Feb 10, 2020

I corrected my comment above. Sorry for the typo.

I guess we’ve found an edge case here, but was there a problem before the introduction of the crds directory?

Probably not. That’s the only change we introduced to work towards Helm 3 compatibility.

The more I think about it, the more I feel that we should revert the crds directory change.

yspotts commented Feb 10, 2020 •

That would be really unfortunate. It would certainly break our workflow and require two separate stages for helm install.

Barring that, do you believe the Helm 3 docs (in my PR) would be insufficient to provide Helm 3 support for those that want it?

lindhe commented Feb 11, 2020 •

One strategy that would for sure work is to have one file in crds/ and a template file with the crd-install hook annotation in the templates/ directory. The template file will never be installed by Helm 3 since Helm 3 ignores the crd-install hook, and the files in crds/ will never be installed by Helm 2 since it does not look there. (see https://kubernetes.slack.com/archives/C0NH30761/p1581373188073800)

But that leads to repeated code, which is pretty unfavorable.
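Concretely, the dual layout described above might look like this (file names are hypothetical):

```
mychart/
├── crds/
│   └── crds.yaml       # plain CRD manifests: installed by Helm 3, invisible to Helm 2
└── templates/
    └── crds.yaml       # the same CRDs annotated with "helm.sh/hook": crd-install,
                        # installed by Helm 2 and skipped by Helm 3
```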

hbagdi commented Feb 12, 2020

Alright, I believed that we had a reason for not using the crd-install hook, but after doing some git spelunking, I couldn’t find one.

It is worth trying to solve this problem by introducing the crd-install hook; as folks here suggest, it should solve the problem for both Helm 2 and Helm 3.

Can someone in this thread open a quick PR to add those hook annotations to the chart?

yspotts commented Feb 14, 2020

I hope to be able to get to it in the next few days.

Upgrade of Converted Releases Fails #147

Comments

gsexton commented Apr 30, 2020

Using Helm 3.2.0 and the 0.51 version of the 2to3 converter, I converted some releases from helm 2 to helm 3.

So it looks like the upgrade is failing because helm 3 expects some annotations to be present that the converter isn’t creating. Is that a good analysis?

Is there any way to get the converter to add these keys?

hickeyma commented May 1, 2020

@gsexton Thanks for raising the issue.

To help me reproduce this issue, would you have the chart that was used to create the last version of the release in Helm v2? This will help me troubleshoot and identify the issue.

rimusz commented May 2, 2020

@gsexton can you try to install that chart with helm v3 and then do the upgrade?

gsexton commented May 8, 2020 •

@rimusz The base chart version won’t install using helm 3. I’m getting errors

gsexton commented May 8, 2020

@hickeyma Thanks for your response. I apologize for the delay in coming back to this.

I tried creating a simple test case, and it works as expected. Here’s some additional information. I dumped the PodSecurityPolicy that’s erroring out during the upgrade. Here’s the version of the original chart AFTER upgrade to Helm 3:

Here’s what it looked like on another cluster where it was installed via helm 3

And here’s the error message when the helm upgrade command is executed.

I looked through the source code for the converter, and it looks like it’s just copying the existing helm 2 annotations.

I diffed the grafana chart, and can see where the PodSecurityPolicy APIVersion has updated.

So, it looks to me like the issue here is that the API version has changed, and Helm 3 is trying to reconcile ownership using the additional metadata annotations, but they’re not present.

Does that sound right to you?

Do you see a way forward?

kiyutink commented May 19, 2020

We’re encountering the same issue with a release of nginx-ingress. We tried to manually add the missing labels/annotations, but encountered the following problem:

I’m quite new to Helm and k8s in general; I would be grateful if someone could decipher it for me.

gsexton commented May 19, 2020

@kiyutink We ran into something similar. In our case, there was a StatefulSet that didn’t have a selector defined in the template, so Kubernetes/Helm supplied a default selector that matched all labels. What I did was write a little script to patch the set of selector labels to be the release and namespace, and I then set the StatefulSet’s template selector to match. Once I did that, I could upgrade.

hickeyma commented May 19, 2020

There are a few things at play here and I will try to answer them one by one.

The chart would need to be updated to a supported Kubernetes API for the Kubernetes version that you are installing on. See Deprecated Kubernetes APIs for more details.

hickeyma commented May 19, 2020

I looked through the source code for the converter, and it looks like it’s just copying the existing helm 2 annotations.

That should be ok, as it is dependent on the annotations defined in the chart. If specific annotations are now required by a Kubernetes version for a kind, then the Helm v2 release should be upgraded first with the modified annotations before conversion.

So, it looks to me like the issue here is that the API version has changed, and Helm 3 is trying to reconcile ownership using the additional metadata annotations, but they’re not present.

This is probably the Kubernetes cluster and not Helm.

I diffed the grafana chart, and can see where the PodSecurityPolicy APIVersion has updated.

This diff signifies that PodSecurityPolicy should use the supported API policy/v1beta1, as extensions/v1beta1 is now deprecated/removed for this kind. This is similar to #147 (comment), where you would need to update your release to a supported API prior to migrating the release to v3 or upgrading the cluster.
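As an illustration of the apiVersion change being described, only the header fields matter; the resource name is hypothetical:

```yaml
# Before (deprecated/removed for this kind on newer clusters):
#   apiVersion: extensions/v1beta1
# After (supported API group for PodSecurityPolicy):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: grafana   # hypothetical name, echoing the grafana chart mentioned above
```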

hickeyma commented May 19, 2020

@gsexton @kiyutink I hope the above comments provide some feedback to your questions and issues.

To be able to identify an issue with the conversion and Helm upgrade, I would need a way to reproduce it. One good way to do this would be to:

This would help me better troubleshoot this.

hickeyma commented May 20, 2020 •

With regard to annotations:

They can be used for adopting existing, already-created resources and are not mandatory in Helm templates (helm/helm#7649).

gsexton commented May 26, 2020

There are a few things at play here and I will try to answer them one by one.

The issue here was actually CRDs. The reason the chart couldn’t install with helm 2 is because they had been created in the CRDs directory.

hickeyma commented May 26, 2020

The issue here was actually CRDs. The reason the chart couldn’t install with helm 2 is because they had been created in the CRDs directory.

Ok, good to know @gsexton. It is very interesting to try and troubleshoot different users’ environments and build up an understanding of what happens. Thanks for sharing.

dza89 commented May 27, 2020 •

Anyone experiencing the original issue:

hickeyma commented May 27, 2020

@dza89 Do you mind raising a new issue and adding the steps to reproduce? This will help us better troubleshoot the issue.

dza89 commented May 28, 2020

@hickeyma
If you want, I’m fine with it, but this is exactly the original issue.
The problem lies in making changes to your Helm chart and changing the Helm version at the same time.

hickeyma commented May 28, 2020

@dza89 It would be better to raise a new issue and provide more details. I would need to know the steps, using a chart as an example: when it was first deployed using v2, the migration of that release, and then the upgrade in v3 and where/how it fails.

Hashfyre commented Jun 24, 2020 •

I have the same issue trying to test out stable/prometheus-operator locally (testing a fork for a future PR, no code committed yet). The issue occurs on a fresh install of the chart too, no upgrades involved.

@hickeyma I can report this as a fresh issue if you want. Just putting it here for posterity. I think many charts that got converted using helm-2to3 might have this issue. I’ll try and collect more data points.

hickeyma commented Jun 24, 2020

@Hashfyre A new issue would be great, thanks

AbhinayGupta741 commented Jun 30, 2020

I hope it helps someone!

hickeyma commented Jul 1, 2020

When you say you have the exact same issue, it would be better to follow the feedback in the comments of this issue. The symptom showing here is "cannot upgrade". However, this is related to a number of underlying issues which needed to be resolved before the upgrade could continue.

bcollard commented Jul 16, 2020

This helped me with a broken upgrade process on the prometheus-operator helm chart, after migration with helm-2to3:

abdennour commented Jul 19, 2020

vcorr commented Aug 12, 2020

Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "xyz" in namespace "xxx" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "xxx"

I added the missing annotations and fixed the label, and sure enough, that error was gone, but then I got the same error on the next object. After fixing each and every Kubernetes object in the release with the same annotations and label, the upgrade finally worked. Is there any way to automate this? I still have a pile of releases to convert and upgrade.

vcorr commented Aug 13, 2020

I found one way to reduce the work at least. You can set the label and annotations with kubectl:

This could be further improved of course by making a script that fetches all the objects in the release and fixes them all, but it’s still a hack.
I’m still interested to know if there is a better way for this!
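A sketch of the per-object commands this approach boils down to, using the placeholder names from the error message above:

```shell
# Make an existing object adoptable by the Helm 3 release "abc" in namespace "xxx".
# The label and annotation keys come straight from the error message; only the
# object names are placeholders.
kubectl -n xxx label deployment xyz app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n xxx annotate deployment xyz \
  meta.helm.sh/release-name=abc \
  meta.helm.sh/release-namespace=xxx \
  --overwrite
```

Wrapping those two commands in a loop over all the objects in the release is the script-shaped improvement mentioned above.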

[Helm] Trying to install a second release #950

Comments

katlimruiz commented May 15, 2020

Describe the bug
I have a cluster where I want to set up many different websites, each with its own nginx ingress.
I already have one that is running correctly. I used this command:

where the ingressClass matches an ingress resource. This runs fine.

So I want to add another website. I want to use this command:

However, this gives me the following error:

Expected behavior
The second installation should succeed, since it is under a different namespace and a different release name.

Your environment

Dean-Coakley commented May 15, 2020

@katlimruiz Thanks for the bug report. We will fix it!

katlimruiz commented May 15, 2020

cool, I thought I was doing something wrong 👍

when is the next release?

Dean-Coakley commented May 15, 2020

Next release is planned for June/July.

The fix should be in master shortly, which leaves you with some options for workarounds in the meantime:

Sorry for the hassle

Dean-Coakley commented May 15, 2020

Additionally, if you are not using our CRDs (if you are using Ingress resources only), the following workaround should work:
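A sketch of what that workaround likely amounts to, assuming the chart exposes a controller.enableCustomResources value (the value name is an assumption here):

```shell
# Sketch only: install a second release without its custom resource definitions,
# so the cluster-scoped CRDs owned by the first release do not conflict.
# controller.enableCustomResources is an assumed value name.
helm install second-ingress nginx-stable/nginx-ingress \
  --namespace website2 \
  --set controller.enableCustomResources=false
```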

Dean-Coakley commented May 16, 2020

If you are using CRDs, you will route requests using our custom VirtualServer/VirtualServerRoute resources, which are an alternative to Ingress resources:
https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/#virtualserver-specification

If you have no VirtualServer (vs) resources in any namespace, you can disable them as described above.
You should then be able to install the Ingress Controller in multiple namespaces via helm without issues.

Hope that clears things up.

katlimruiz commented May 18, 2020 •

It worked. I was not using the new CRDs, so I set the flag to false, and it worked as expected.
Hope the bug gets fixed eventually.

Comments

yoichiwo7 commented Oct 14, 2019

Output of helm version : v3.0.0-beta.4

Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE

Problem

The error message is like following:

Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: ServiceAccount, namespace: default, name: hello-world

Expected behavior

Reproduce the problem

You can run the following scripts to reproduce the problem: helm v3.0.0-beta.4 fails and helm v3.0.0-beta.3 succeeds.
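A hypothetical sketch of that kind of reproduction, built only from the error message above (not the reporter’s original script):

```shell
# Hypothetical sketch: pre-create a ServiceAccount that the chart will also render,
# then dry-run an install of the chart.
kubectl create serviceaccount hello-world --namespace default

helm create hello-world
cat > hello-world/templates/serviceaccount.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hello-world
EOF

helm install hello-world ./hello-world --namespace default --dry-run
# v3.0.0-beta.3: prints the rendered manifests
# v3.0.0-beta.4: Error: rendered manifests contain a resource that already exists. ...
```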

The result with helm v3.0.0-beta.4:

The result with helm v3.0.0-beta.3:

yoichiwo7 commented Oct 14, 2019

I have checked the related code.

Lines 233 to 249 in 1e20eba

Note: I’m not sure whether the resource conflict check can be skipped for all other helm sub-commands in dry-run mode. Just skipping the resource conflict check for all sub-commands may cause regression bugs.

hickeyma commented Oct 14, 2019

@yoichiwo7 This may be related to issue #6664

Dean-Coakley commented Oct 17, 2019

@yoichiwo7 Are you working on this or can I give it a try?

yoichiwo7 commented Oct 17, 2019

@Dean-Coakley
Not yet.
If you can give it a try, that would be great.

thomastaylor312 commented Oct 29, 2019 •

bacongobbler commented Oct 30, 2019 •

Upgrade fails but status is “deployed”? #8078

Comments

IdanAdar commented May 8, 2020 •

Output of helm version : 3.1.3

Output of kubectl version : 1.16.9

Cloud Provider/Platform (AKS, GKE, Minikube etc.): IBM Cloud

Given these commands:

How could it be that helmStatus gets the value “deployed” if the upgrade failed?

In helm 2 it was possible to use info.status.code to get the status code whether it failed or not, but in helm 3 this is missing.

• no code property
• maybe it’s looking at the previous (current?) release?

How can I find out the status of the current deployment attempt?
Am I missing something obvious?
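For what it’s worth, the numeric info.status.code from Helm 2 is gone in Helm 3, but the status string itself is still exposed; a sketch of two ways to read it (the release name is a placeholder):

```shell
# Helm 3 reports the status as a string ("deployed", "failed", "pending-upgrade", ...).
helm status xyz -o json | jq -r '.info.status'

# The per-revision view makes it clear whether the latest revision failed or a
# previous one is still the deployed release.
helm history xyz --max 5
```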

hickeyma commented May 8, 2020

@IdanAdar I am trying to reproduce this at the moment but have not been able to yet. Would you have an example chart or scenario showing how this occurred?

At the moment, the status is showing correctly.

technosophos commented May 12, 2020

Also, including the output of helm history would help. It is entirely possible that you didn’t get far enough into the deployment for the upgrade’s release record to even be created. Thus the "deployed" release would be your last release, not the current failed release. Hard to say, though. We’d need more information.

tfemson commented Jun 13, 2020

@IdanAdar I had a similar issue: I had a previous deployment in the namespace that wasn’t deployed with helm. When I tried to deploy the same app with the same name with helm, I got that error because it didn’t have the labels helm would have added if it had originally been deployed with helm, e.g. app.kubernetes.io/managed-by

I just deleted the old deployment and redeployed the app using helm, and it was successful. That’s probably why nobody is able to reproduce it.

hickeyma commented Jun 15, 2020

Thanks for the feedback @tfemson.

@IdanAdar Do you mind taking a look at the comments from @technosophos and @tfemson? If you could provide some more details with the following commands, then it might help to better troubleshoot your issue:

Hashfyre commented Jun 24, 2020 •

I have the same issue trying to test out stable/prometheus-operator locally (testing a fork for a future PR, no code committed yet).

This could be a related issue: helm/helm-2to3#147

madAndroid commented Jul 17, 2020 •

We can potentially update the secret labels and metadata in our upgrade script, but this is definitely a bug, since the other resources are properly updated.

Should I open a separate issue since it appears to be specific to secrets?

steinbachr commented Jul 28, 2020

technosophos commented Jul 28, 2020

bitsofinfo commented Aug 13, 2020

experiencing the same

github-actions bot commented Nov 12, 2020

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

OfirYaron commented Mar 14, 2021

@IdanAdar I had a similar issue: I had a previous deployment in the namespace that wasn’t deployed with helm. When I tried to deploy the same app with the same name with helm, I got that error because it didn’t have the labels helm would have added if it had originally been deployed with helm, e.g. app.kubernetes.io/managed-by

I just deleted the old deployment and redeployed the app using helm, and it was successful. That’s probably why nobody is able to reproduce it.

Even though the issue was closed for no activity, in case someone with a similar issue ends up here:
I’ve had a similar issue; indeed, the reason was an already-deployed state from an origin other than helm, and in my case it was caused by switching from helm 2 to helm 3.
