Conversation
📝 Walkthrough

This PR introduces Kubernetes Helm deployment support for circuit-python driver applications with E2E testing infrastructure, updates devcontainer and CI tooling versions, adds NodePort configuration options, refines Kiota datetime array detection, and bumps dependency versions.

Changes

- DevContainer & CI Tooling Updates
- Kubernetes Helm Deployment with E2E Testing
- Kiota DateTime Array Support
Sequence Diagram

```mermaid
sequenceDiagram
participant CI as CI/Artifact Build
participant Job as e2e-test-k3d Job
participant Docker as Docker Registry
participant K3D as k3d Cluster
participant Helm as Helm Chart
participant FE as Frontend Service
participant BE as Backend Service
CI->>Job: Download build artifacts
Job->>Docker: Load *.tar images
Docker->>Docker: Retag images locally
Job->>K3D: Create k3d cluster
K3D->>Docker: Import local images
Job->>Helm: Install chart with NodePort config<br/>(FRONTEND_PORT, BACKEND_PORT)
Helm->>K3D: Deploy frontend + backend containers
K3D->>FE: Provision frontend service (NodePort)
K3D->>BE: Provision backend service (NodePort)
Job->>K3D: Wait for rollout completion
Job->>BE: Smoke test backend health<br/>(runner IP:nodePort)
Job->>FE: Smoke test frontend health<br/>(runner IP:nodePort)
BE-->>Job: Health response
FE-->>Job: Health response
    Job->>CI: Report e2e test results
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@extensions/context.py`:
- Line 111: The hardcoded kubectl version context["kubectl_version"] is set to
an outdated v1.15.0; update it to a modern compatible version such as "v1.32.5"
(or make it configurable) so kubectl is within one minor version of the cluster;
locate the assignment to context["kubectl_version"] in extensions/context.py and
replace "v1.15.0" with "v1.32.5" (or read from an environment/config var) to
ensure compatibility with the CI k3d-provisioned cluster.
In `@template/.github/workflows/ci.yaml.jinja`:
- Around line 343-351: Replace the single-shot curl checks with a short retry
loop that waits until each endpoint returns success or a timeout is reached;
specifically modify the steps that run "curl --fail-with-body
http://localhost:{{ backend_nodeport_number }}/api/healthcheck" and "curl
--fail-with-body http://localhost:{{ frontend_nodeport_number
}}/api/healthcheck" so they retry (sleep between attempts) for a configurable
total timeout (e.g., 60s) and fail the job if the endpoint never becomes
healthy, ensuring the job is resilient to slow startup.
In `@template/`{% if has_backend %}backend{% endif %}/kiota_nullable_fixer.py:
- Around line 254-258: inject_missing_python_fields fails to mark datetime
import needed for list[datetime.datetime]; update the list-type handling in
inject_missing_python_fields (the elif branch that checks
field_type.startswith("list[")) so that when the inner type is datetime.datetime
(e.g. field_type == "list[datetime.datetime]" or inner contains
"datetime.datetime") it sets needs_datetime_import = True before emitting
Optional[list[...]] and using
n.get_collection_of_primitive_values(datetime.datetime); this ensures import
datetime is injected for list-of-datetime fields.
In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/deployment.yaml.jinja:
- Around line 80-81: The BACKEND_HOST value logic is inverted: instead of
switching to an external address when hostNetwork is enabled, always use
localhost when the backend sidecar is present; update the template that sets
BACKEND_HOST (the block referencing .Values.image.backend and
.Values.env.frontend.BACKEND_HOST) so it checks for the presence of the backend
sidecar (.Values.image.backend) first and sets value to "localhost" when true,
otherwise fall back to .Values.env.frontend.BACKEND_HOST; keep the hostNetwork
flag irrelevant for this decision because sidecar and frontend share the same
network namespace.
- Around line 116-121: Move the top-level "volumes:" key so it is only rendered
when .Values.persistence.backend.enabled is true; currently "volumes:" is always
emitted which creates an empty list when the conditional block is skipped and
yields invalid YAML. Modify the template so that the "volumes:" line and the
inner block that creates the "backend-logs" volume are both wrapped by the same
conditional (.Values.persistence.backend.enabled) that already guards the
corresponding volumeMounts, using the existing include "{{ repo_name
}}.fullname" symbol to build the claimName.
In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/pdb.yaml.jinja:
- Around line 1-11: The PodDisruptionBudget template can block drains when
replicaCount == 1 because .Values.pdb.minAvailable defaults to 1; update the
pdb.yaml.jinja template to make minAvailable conditional on .Values.replicaCount
(use 0 when .Values.replicaCount == 1, otherwise use .Values.pdb.minAvailable)
so the PDB does not prevent voluntary node drains for single-replica
deployments; touch the PodDisruptionBudget template (pdb.yaml.jinja) and adjust
the spec.minAvailable logic referencing .Values.pdb.minAvailable and
.Values.replicaCount.
In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/service-backend.yaml.jinja:
- Line 10: Add an explicit comment in values.yaml next to the
env.frontend.BACKEND_PORT entry indicating it is reused by the backend Service
(e.g., "# also used as the backend Service port") so readers know
.Values.env.frontend.BACKEND_PORT drives both the frontend container env var and
the backend Service port referenced in the service-backend.yaml.jinja template;
update the required-field message if present to avoid confusion.
In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/values.yaml.jinja:
- Around line 22-27: The values.yaml.jinja currently hardcodes
frontend_nodeport_number and backend_nodeport_number which makes
.Values.service.frontend.nodePort truthy even when type=ClusterIP; change the
defaults so nodePort is null (or omitted) by default in the rendered
values.yaml.jinja (use null instead of the numeric template variables for
frontend_nodeport_number/backend_nodeport_number), add a short comment telling
operators to set nodePort only for NodePort services, and also update the
service template guard (the template referencing
.Values.service.frontend.nodePort) to only emit nodePort when it exists and the
service type is not "ClusterIP" (use a conditional that checks both
.Values.service.frontend.nodePort and .Values.service.frontend.type).
In `@tests/copier_data/data4.yaml`:
- Around line 49-50: Remove the dead NodePort entries frontend_nodeport_number
and backend_nodeport_number from the data4 fixture since
is_circuit_python_driver: false and deploy_as_executable: true cause both
NodePort when conditions to be false (the Helm chart isn't generated); update
the YAML by deleting those two keys so future readers aren't misled.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: d64a0217-33b6-40f8-b86b-149fd954d553
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (29)
- .copier-answers.yml
- .devcontainer/Dockerfile
- .devcontainer/devcontainer-lock.json
- .devcontainer/devcontainer.json
- .devcontainer/install-ci-tooling.py
- .github/workflows/ci.yaml
- .github/workflows/hash_git_files.py
- .github/workflows/pre-commit.yaml
- copier.yml
- extensions/context.py
- pyproject.toml
- template/.devcontainer/Dockerfile
- template/.devcontainer/devcontainer.json.jinja
- template/.github/workflows/ci.yaml.jinja
- template/.github/workflows/hash_git_files.py
- template/.github/workflows/pre-commit.yaml
- template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/Chart.yaml.jinja
- template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/_helpers.tpl.jinja
- template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/deployment.yaml.jinja
- template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/pdb.yaml.jinja
- template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/pvc-backend.yaml.jinja
- template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/service-backend.yaml.jinja
- template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/service-frontend.yaml.jinja
- template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/values.yaml.jinja
- template/{% if has_backend %}backend{% endif %}/kiota_nullable_fixer.py
- tests/copier_data/data1.yaml
- tests/copier_data/data2.yaml
- tests/copier_data/data3.yaml
- tests/copier_data/data4.yaml
```python
item_format = items.get("format")
if item_type == "string" and item_format == "date-time":
    field_types[field_name] = "list[datetime.datetime]"
    has_primitive_array = True
elif item_type == "string":
```
inject_missing_python_fields never sets needs_datetime_import for list[datetime.datetime], causing a NameError in generated code.
The new "list[datetime.datetime]" type is correctly produced here, but its only consumer — inject_missing_python_fields — dispatches it through the elif field_type.startswith("list["): branch (line 327) without setting needs_datetime_import = True. As a result:
- `field_declarations` emits `Optional[list[datetime.datetime]]`
- `deser_entries` emits `n.get_collection_of_primitive_values(datetime.datetime)`
…but import datetime is never injected (line 363 guard stays False). The generated file will raise a NameError at runtime.
🐛 Proposed fix in inject_missing_python_fields (lines 327–331)
```diff
 elif field_type.startswith("list["):
     inner = field_type[5:-1]
     getter = f"get_collection_of_primitive_values({inner})"
     writer_call = f'writer.write_collection_of_primitive_values("{field_name}", self.{field_snake})'
     python_type = field_type
+    if inner == "datetime.datetime":
+        needs_datetime_import = True
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@template/`{% if has_backend %}backend{% endif %}/kiota_nullable_fixer.py
around lines 254 - 258, inject_missing_python_fields fails to mark datetime
import needed for list[datetime.datetime]; update the list-type handling in
inject_missing_python_fields (the elif branch that checks
field_type.startswith("list[")) so that when the inner type is datetime.datetime
(e.g. field_type == "list[datetime.datetime]" or inner contains
"datetime.datetime") it sets needs_datetime_import = True before emitting
Optional[list[...]] and using
n.get_collection_of_primitive_values(datetime.datetime); this ensures import
datetime is injected for list-of-datetime fields.
```yaml
- name: BACKEND_HOST
  value: "{{ if and (not .Values.hostNetwork.enabled) .Values.image.backend }}localhost{{ else }}{{ .Values.env.frontend.BACKEND_HOST }}{{ end }}"
```
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "*.jinja" | grep -i deployment

Repository: LabAutomationAndScreening/copier-nuxt-python-intranet-app
Length of output: 985
🏁 Script executed:
git ls-files | grep -E "(deployment|helm)" | head -20

Repository: LabAutomationAndScreening/copier-nuxt-python-intranet-app
Length of output: 965
🌐 Web query:
Kubernetes hostNetwork true pod containers network namespace sharing
💡 Result:
When you set hostNetwork: true on a Kubernetes Pod, the Pod uses the node's network namespace rather than a dedicated per-Pod network namespace [1].

Implications for "pod containers network namespace sharing":
1) Containers within the same Pod still share a network namespace with each other (the normal Pod behavior). However, with hostNetwork: true that shared namespace is the host/node network namespace [2][1].
2) Therefore, all containers in that Pod effectively "share the host network namespace" as a group; they are not isolated behind a per-Pod network namespace [1].

What Kubernetes expects you to configure:
- When using hostNetwork, you should specify ports so the scheduler can make the correct placement decisions [1].
- Specifically, when hostNetwork is true, hostPort must match containerPort, and unspecified hostPort fields default to match containerPort [1].

DNS note (common gotcha):
- To have DNS options set along with hostNetwork, you must explicitly use dnsPolicy = ClusterFirstWithHostNet (dnsPolicy otherwise defaults to ClusterFirst) [3].

Security / policy note:
- Many security/compliance guides recommend not generally permitting hostNetwork: true because it removes an isolation boundary (i.e., the container can access host networking resources) [4].

Related but different: process namespace sharing. hostNetwork is about the network namespace; process namespace sharing between containers in a Pod is controlled by shareProcessNamespace, not hostNetwork [5].
Citations:
- 1: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/
- 2: https://kubernetes.io/docs/concepts/workloads/pods/pod/
- 3: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1
- 4: https://docs.datadoghq.com/security/default_rules/qet-z9y-rwq/
- 5: https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/
🏁 Script executed:
cat -n "template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/deployment.yaml.jinja" | sed -n '70,90p'Repository: LabAutomationAndScreening/copier-nuxt-python-intranet-app
Length of output: 1167
🏁 Script executed:
cat -n "template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/deployment.yaml.jinja" | wc -lRepository: LabAutomationAndScreening/copier-nuxt-python-intranet-app
Length of output: 105
Always use localhost when the backend sidecar is present.
Containers in the same Pod share a network namespace regardless of the hostNetwork setting. When hostNetwork: true, they share the host network namespace; when hostNetwork: false, they share a per-Pod network namespace. In both cases, the backend sidecar is reachable via localhost within that shared namespace. The current condition incorrectly forces an external address when hostNetwork: true, breaking frontend→backend calls.
Suggested fix
```diff
- - name: BACKEND_HOST
-   value: "{{ if and (not .Values.hostNetwork.enabled) .Values.image.backend }}localhost{{ else }}{{ .Values.env.frontend.BACKEND_HOST }}{{ end }}"
+ - name: BACKEND_HOST
+   value: "{{ if .Values.image.backend }}localhost{{ else }}{{ .Values.env.frontend.BACKEND_HOST }}{{ end }}"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: BACKEND_HOST
  value: "{{ if .Values.image.backend }}localhost{{ else }}{{ .Values.env.frontend.BACKEND_HOST }}{{ end }}"
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/deployment.yaml.jinja around lines 80 - 81, The BACKEND_HOST value
logic is inverted: instead of switching to an external address when hostNetwork
is enabled, always use localhost when the backend sidecar is present; update the
template that sets BACKEND_HOST (the block referencing .Values.image.backend and
.Values.env.frontend.BACKEND_HOST) so it checks for the presence of the backend
sidecar (.Values.image.backend) first and sets value to "localhost" when true,
otherwise fall back to .Values.env.frontend.BACKEND_HOST; keep the hostNetwork
flag irrelevant for this decision because sidecar and frontend share the same
network namespace.
```yaml
{% raw %}{{- if .Values.pdb.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ include "{% endraw %}{{ repo_name }}{% raw %}.fullname" . }}
spec:
  minAvailable: {{ .Values.pdb.minAvailable }}
  selector:
    matchLabels:
      app: '{{ include "{% endraw %}{{ repo_name }}{% raw %}.fullname" . }}'
{{- end }}{% endraw %}
```
🧹 Nitpick | 🔵 Trivial
LGTM — one operational note on minAvailable vs replicaCount.
The template is correct. Worth noting that the default minAvailable: 1 paired with the default replicaCount: 1 would block all voluntary node drains when pdb.enabled is set to true. Since pdb.enabled: false by default and the comment already recommends enabling it only for cloud deployment (where replicas > 1 are expected), this is a heads-up rather than a blocking concern.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/pdb.yaml.jinja around lines 1 - 11, The PodDisruptionBudget
template can block drains when replicaCount == 1 because
.Values.pdb.minAvailable defaults to 1; update the pdb.yaml.jinja template to
make minAvailable conditional on .Values.replicaCount (use 0 when
.Values.replicaCount == 1, otherwise use .Values.pdb.minAvailable) so the PDB
does not prevent voluntary node drains for single-replica deployments; touch the
PodDisruptionBudget template (pdb.yaml.jinja) and adjust the spec.minAvailable
logic referencing .Values.pdb.minAvailable and .Values.replicaCount.
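A minimal sketch of the conditional described above, assuming `replicaCount` and `pdb.minAvailable` are plain integers in values.yaml and omitting the copier `{% raw %}` markers; this is illustrative only, not the chart's actual template:

```yaml
spec:
  # Illustrative: a single-replica install should not block voluntary node drains
  {{- if gt (int .Values.replicaCount) 1 }}
  minAvailable: {{ .Values.pdb.minAvailable }}
  {{- else }}
  minAvailable: 0
  {{- end }}
```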
```yaml
spec:
  type: '{{ .Values.service.backend.type }}'
  ports:
    - port: {{ required "env.frontend.BACKEND_PORT is required" .Values.env.frontend.BACKEND_PORT | int }}
```
🧹 Nitpick | 🔵 Trivial | 💤 Low value
Backend service port is sourced from .Values.env.frontend.BACKEND_PORT.
This works as a single source of truth — one value drives both the frontend container's BACKEND_PORT env var and the backend Service port. However, it is non-obvious: anyone reading or editing the backend service template will be surprised to find the port under the frontend env path, and the required-field error message reinforces the confusion. Consider adding a brief comment in values.yaml alongside env.frontend.BACKEND_PORT (e.g., # also used as the backend Service port) to make the dual-purpose explicit.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/service-backend.yaml.jinja at line 10, Add an explicit comment in
values.yaml next to the env.frontend.BACKEND_PORT entry indicating it is reused
by the backend Service (e.g., "# also used as the backend Service port") so
readers know .Values.env.frontend.BACKEND_PORT drives both the frontend
container env var and the backend Service port referenced in the
service-backend.yaml.jinja template; update the required-field message if
present to avoid confusion.
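For illustration, the annotated values.yaml entry could look roughly like this (the port value is a made-up placeholder, not the chart's actual default):

```yaml
env:
  frontend:
    # also used as the backend Service port (see templates/service-backend.yaml)
    BACKEND_PORT: 8000  # placeholder value for illustration
```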
```yaml
  frontend:
    type: NodePort
    nodePort: '{% endraw %}{{ frontend_nodeport_number }}{% raw %}' # pulumi will override this for cloud deployment
  backend:
    type: NodePort
    nodePort: '{% endraw %}{{ backend_nodeport_number }}{% raw %}' # pulumi will override this for cloud deployment
```
nodePort hardcoded in values.yaml will break any ClusterIP / LoadBalancer override.
Because nodePort is always rendered as a non-null, non-zero value (e.g., '30060'), the {{- if .Values.service.frontend.nodePort }} guard in the service templates will always be truthy. Kubernetes rejects a Service with a nodePort field set when its type is ClusterIP. So if Pulumi (or any operator) overrides only service.frontend.type to ClusterIP without also nulling out service.frontend.nodePort, the apply will be rejected.
The standard Helm pattern is to default nodePort to null (or omit it) and only populate it when the type requires it:
🛡️ Proposed fix
```diff
 service:
   frontend:
     type: NodePort
-    nodePort: '{% endraw %}{{ frontend_nodeport_number }}{% raw %}' # pulumi will override this for cloud deployment
+    nodePort: {% endraw %}{{ frontend_nodeport_number }}{% raw %} # pulumi must also set nodePort: null when changing type to ClusterIP/LoadBalancer
   backend:
     type: NodePort
-    nodePort: '{% endraw %}{{ backend_nodeport_number }}{% raw %}' # pulumi will override this for cloud deployment
+    nodePort: {% endraw %}{{ backend_nodeport_number }}{% raw %} # pulumi must also set nodePort: null when changing type to ClusterIP/LoadBalancer
```

Alternatively, add nodePort: null override guidance prominently in a comment, and consider guarding the service template with a type check:
```yaml
{{- if and .Values.service.frontend.nodePort (ne .Values.service.frontend.type "ClusterIP") }}
nodePort: {{ .Values.service.frontend.nodePort }}
{{- end }}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/values.yaml.jinja around lines 22 - 27, The values.yaml.jinja currently
hardcodes frontend_nodeport_number and backend_nodeport_number which makes
.Values.service.frontend.nodePort truthy even when type=ClusterIP; change the
defaults so nodePort is null (or omitted) by default in the rendered
values.yaml.jinja (use null instead of the numeric template variables for
frontend_nodeport_number/backend_nodeport_number), add a short comment telling
operators to set nodePort only for NodePort services, and also update the
service template guard (the template referencing
.Values.service.frontend.nodePort) to only emit nodePort when it exists and the
service type is not "ClusterIP" (use a conditional that checks both
.Values.service.frontend.nodePort and .Values.service.frontend.type).
```yaml
frontend_nodeport_number: 30500
backend_nodeport_number: 30505
```
Dead configuration — NodePort values are irrelevant for this fixture.
data4 sets is_circuit_python_driver: false and deploy_as_executable: true. Both NodePort when conditions evaluate to false, so Copier will never read these values; the Helm chart directory isn't generated at all for this combination. Remove these two entries to avoid misleading future readers into thinking NodePorts are used in executable deployments.
🗑️ Proposed fix
```diff
 auth_to_ecr_for_backend_e2e_in_ci: false
-frontend_nodeport_number: 30500
-backend_nodeport_number: 30505
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@tests/copier_data/data4.yaml` around lines 49 - 50, Remove the dead NodePort
entries frontend_nodeport_number and backend_nodeport_number from the data4
fixture since is_circuit_python_driver: false and deploy_as_executable: true
cause both NodePort when conditions to be false (the Helm chart isn't
generated); update the YAML by deleting those two keys so future readers aren't
misled.
Why is this change necessary?
Templatize the Helm chart, and create a smoke test for k3s in CI
How does this change address the issue?
Uses k3d in CI to deploy the helm chart and then run some quick curl commands
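Condensed, the e2e job sketched in the sequence diagram above amounts to steps roughly like the following; the cluster name, chart path, image tags, release and deployment names, and ports are all placeholders rather than the workflow's actual values:

```yaml
# Hypothetical condensed sketch of the e2e-test-k3d job, not the actual ci.yaml.jinja contents.
- name: Create k3d cluster and import images
  run: |
    k3d cluster create e2e --wait
    docker load --input frontend.tar
    docker load --input backend.tar
    k3d image import frontend:e2e backend:e2e --cluster e2e
- name: Install Helm chart with NodePort services
  run: |
    helm install app ./deployment/helm \
      --set service.frontend.nodePort=30500 \
      --set service.backend.nodePort=30505
    kubectl rollout status deployment/app --timeout=120s
- name: Smoke test health endpoints
  run: |
    curl --fail-with-body "http://localhost:30500/api/healthcheck"
    curl --fail-with-body "http://localhost:30505/api/healthcheck"
```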
What side effects does this change have?
N/A
How is this change tested?
Two downstream repos
Other
Pulls in some upstream template changes, and backports some changes to the Python Kiota fixer
Summary by CodeRabbit
New Features
Improvements
Chores