What happened:
After enabling Hubble UI in the Cilium configuration via Helm values (as in the documentation), the Pods from the hubble-relay Deployment reference a non-existent container image, public.ecr.aws/eks/cilium/hubble-relay:v1.17.12-0, and report ImagePullBackOff status:
$ kubectl -n kube-system get pod hubble-relay-684d54f68d-lpv8z -o=jsonpath='{$.spec.containers[:1].image}'
public.ecr.aws/eks/cilium/hubble-relay:v1.17.12-0
$ kubectl -n kube-system get pod hubble-relay-684d54f68d-lpv8z -o=jsonpath='{$.status.containerStatuses[0].state.waiting.message}'
Back-off pulling image "public.ecr.aws/eks/cilium/hubble-relay:v1.17.12-0": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image "public.ecr.aws/eks/cilium/hubble-relay:v1.17.12-0": failed to resolve image: public.ecr.aws/eks/cilium/hubble-relay:v1.17.12-0: not found
$ docker pull public.ecr.aws/eks/cilium/hubble-relay:v1.17.12-0
Error response from daemon: repository public.ecr.aws/eks/cilium/hubble-relay not found: name unknown: The repository with name 'cilium/hubble-relay' does not exist in the registry with id 'eks'
What you expected to happen:
After enabling Hubble UI in the Cilium configuration via Helm values (as in the documentation), the Pods from the hubble-relay Deployment start correctly.
How to reproduce it (as minimally and precisely as possible):
Create a new EKSA cluster with the Cilium configuration as described in the documentation.
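For reference, a minimal sketch of the Cluster spec fragment involved (the Cluster.spec.clusterNetwork.cniConfig.cilium.helmValues path is from the EKSA Cluster API mentioned below; hubble.relay.enabled and hubble.ui.enabled are standard Cilium chart values; the cluster name and any omitted fields are placeholders):

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster          # hypothetical cluster name
spec:
  clusterNetwork:
    cniConfig:
      cilium:
        helmValues:
          hubble:
            relay:
              enabled: true # triggers the hubble-relay Deployment
            ui:
              enabled: true # triggers the hubble-ui Deployment
```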
Anything else we need to know?:
Pods from the hubble-ui Deployment use container images from quay.io (quay.io/cilium/hubble-ui:v0.13.3) instead of public.ecr.aws/eks/...
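In case it helps triage: since hubble-ui already pulls from quay.io, a possible (unverified) workaround might be to pin the relay image to the upstream registry through the same helmValues section. hubble.relay.image.repository/tag are standard Cilium chart values; the tag below is an assumption derived from the failing v1.17.12-0 tag with the EKS build suffix dropped, not a confirmed fix:

```yaml
helmValues:
  hubble:
    relay:
      enabled: true
      image:
        repository: quay.io/cilium/hubble-relay
        tag: v1.17.12   # assumed upstream equivalent of v1.17.12-0
```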
By the way, I have a question: is it correct that changes to the Cilium configuration (Cluster.spec.clusterNetwork.cniConfig.cilium), for example adding the helmValues section, are ignored by the eksctl anywhere upgrade operation after the cluster has been created?
Environment:
- EKS Anywhere Release: 0.24.4
- EKS Distro Release: v1-34-eks-13