
Circuit Python Helm chart #153

Merged

ejfine merged 26 commits into main from backend-fixer on May 5, 2026

Conversation

@ejfine ejfine (Contributor) commented May 5, 2026

Why is this change necessary?

Templatize the Helm chart, and create a smoke test for k3s in CI

How does this change address the issue?

Uses k3d in CI to deploy the Helm chart and then runs some quick curl commands against the health endpoints

What side effects does this change have?

N/A

How is this change tested?

Two downstream repos

Other

Pulls in some upstream template changes, and backports some changes to the Python Kiota fixer

Summary by CodeRabbit

  • New Features

    • Added Kubernetes Helm deployment support with configurable NodePort numbers for frontend and backend services
    • Introduced k3d-based E2E testing for containerized application deployment
    • Added Azure/AKS setup capabilities in CI/CD pipeline
  • Improvements

    • Updated development container base image to latest version
    • Enhanced CI/CD workflow with extended timeout support for E2E tests
  • Chores

    • Updated dependencies and GitHub Actions versions

@ejfine ejfine self-assigned this May 5, 2026
@coderabbitai coderabbitai (bot) commented May 5, 2026

Warning

Rate limit exceeded

@ejfine has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 13 minutes and 35 seconds before requesting another review.

To keep reviews running without waiting, you can enable usage-based add-on for your organization. This allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 1d92580f-77b5-43c3-a0b4-ccf59c7fa233

📥 Commits

Reviewing files that changed from the base of the PR and between 14e1d75 and 0bb8319.

📒 Files selected for processing (3)
  • .copier-answers.yml
  • extensions/context.py
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/deployment.yaml.jinja
📝 Walkthrough

This PR introduces Kubernetes Helm deployment support for circuit-python driver applications with E2E testing infrastructure, updates devcontainer and CI tooling versions, adds NodePort configuration options, refines Kiota datetime array detection, and bumps dependency versions.

Changes

DevContainer & CI Tooling Updates

  • Base Image & Dependencies (.devcontainer/Dockerfile, template/.devcontainer/Dockerfile): Devcontainer base image upgraded from 5.1.5-noble to 6.0.3-noble; removed the temporary Yarn GPG key workaround.
  • Feature Locks & Config (.devcontainer/devcontainer-lock.json, .devcontainer/devcontainer.json, template/.devcontainer/devcontainer.json.jinja): Locked the Claude Code and Python devcontainer features; updated the ruff configuration comment to reference the non-src setup.
  • Node/Package Management (.devcontainer/install-ci-tooling.py): Bumped PNPM_VERSION from 11.0.4 to 11.0.5.
  • CI Workflow Cache & Hash (.github/workflows/ci.yaml, .github/workflows/pre-commit.yaml, template/.github/workflows/pre-commit.yaml, .github/workflows/hash_git_files.py, template/.github/workflows/hash_git_files.py): Updated the GitHub Actions cache action from v5.0.4 to v5.0.5 across workflows; included devcontainer-lock.json in the devcontainer context hash computation.
  • Dependency & Context Versions (pyproject.toml, extensions/context.py, .copier-answers.yml): Bumped ty from >=0.0.32 to >=0.0.34; updated gha_cache to v5.0.5; added Azure/AKS tooling context entries (gha_absaoss_k3d, gha_azure_setup_helm, gha_azure_setup_kubectl) with corresponding helm_version and kubectl_version; updated the copier template commit to v0.0.118-6-g0122f36.

Kubernetes Helm Deployment with E2E Testing

  • Configuration Schema (copier.yml): Added frontend_nodeport_number and backend_nodeport_number configuration questions, gated to circuit-python driver deployments, with NodePort range validation (30000–32767).
  • Test Data (tests/copier_data/data[1-4].yaml): Populated the test data files with frontend and backend NodePort values for validation.
  • Helm Chart Metadata & Helpers (template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/Chart.yaml.jinja, template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/_helpers.tpl.jinja): Added Helm chart metadata (apiVersion v2, version 0.1.0); defined reusable Helm helpers for the chart name and fully qualified name.
  • Helm Deployment & Persistence (template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/deployment.yaml.jinja, template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/pvc-backend.yaml.jinja, template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/pdb.yaml.jinja): Defined a multi-container deployment (frontend/backend) with conditional host networking, affinity/topology spread presets, backend log persistence, and pod disruption budgets.
  • Service Exposition (template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/service-frontend.yaml.jinja, template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/service-backend.yaml.jinja): Created NodePort services for the frontend and backend with configurable ports and node port numbers sourced from values.
  • Helm Values & Configuration (template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/values.yaml.jinja): Defined a comprehensive values template including replicaCount, image repositories, frontend/backend environment variables (ports, backend host, mDNS settings), NodePort configurations, persistence settings, scheduling affinity, and pod disruption budgets.
  • E2E Testing Integration (template/.github/workflows/ci.yaml.jinja): Added a new e2e-test-k3d job that downloads artifacts, loads Docker images into an ephemeral k3d cluster, deploys via Helm with the configured NodePorts, waits for rollout, and runs smoke-test healthchecks (see the sketch below); extended the timeout for the main e2e-test job; updated the test-compiled-frontend condition to gate on is_circuit_python_driver; integrated e2e-test-k3d into the workflow-summary required checks.
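To make the job flow concrete, here is a minimal sketch of what such a CI job could look like. This is not the template's actual ci.yaml.jinja: the job and step names, image names, chart path, release/deployment names, and the 30500/30505 NodePorts are illustrative placeholders; only the k3d, docker, helm, kubectl, and curl invocations themselves are standard usage.

# Hypothetical sketch — names, paths, and ports are assumptions, not the real workflow.
e2e-test-k3d:
  runs-on: ubuntu-24.04
  timeout-minutes: 30
  steps:
    - uses: actions/checkout@v4
    - uses: actions/download-artifact@v4
      with:
        path: images                      # assumed layout: one *.tar per built image
    - name: Install k3d and create an ephemeral cluster
      # Map the NodePorts onto the runner so curl can reach them via localhost.
      run: |
        curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
        k3d cluster create smoke -p "30500:30500@server:0" -p "30505:30505@server:0" --wait
    - name: Load the built images into the cluster
      run: |
        for tarball in images/*/*.tar; do docker load -i "$tarball"; done
        k3d image import example-frontend:ci example-backend:ci -c smoke
    - name: Deploy the Helm chart with the templated NodePorts
      run: |
        helm upgrade --install smoke ./deployment/helm \
          --set service.frontend.nodePort=30500 \
          --set service.backend.nodePort=30505
        kubectl rollout status deployment/smoke --timeout=300s
    - name: Smoke-test the health endpoints (retrying to tolerate slow startup)
      run: |
        for port in 30505 30500; do
          for attempt in $(seq 1 12); do
            curl --fail-with-body "http://localhost:${port}/api/healthcheck" && break
            [ "$attempt" -eq 12 ] && exit 1
            sleep 5
          done
        done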

Kiota DateTime Array Support

  • Array Item Detection (template/{% if has_backend %}backend{% endif %}/kiota_nullable_fixer.py): Enhanced get_models_with_primitive_array_fields to detect date-time string arrays (items.type: "string", items.format: "date-time") and map them to list[datetime.datetime] instead of list[str]; a minimal schema example follows below.
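As an illustration only (the schema, model, and field names below are invented, not taken from this repo), this is the shape of OpenAPI property the enhanced detection targets:

# Hypothetical OpenAPI fragment — names are illustrative.
components:
  schemas:
    ExampleRecord:
      type: object
      properties:
        sampled_at:            # string array with format date-time -> list[datetime.datetime]
          type: array
          items:
            type: string
            format: date-time
        labels:                # plain string array -> still list[str]
          type: array
          items:
            type: string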

Sequence Diagram

sequenceDiagram
    participant CI as CI/Artifact Build
    participant Job as e2e-test-k3d Job
    participant Docker as Docker Registry
    participant K3D as k3d Cluster
    participant Helm as Helm Chart
    participant FE as Frontend Service
    participant BE as Backend Service
    
    CI->>Job: Download build artifacts
    Job->>Docker: Load *.tar images
    Docker->>Docker: Retag images locally
    Job->>K3D: Create k3d cluster
    K3D->>Docker: Import local images
    Job->>Helm: Install chart with NodePort config<br/>(FRONTEND_PORT, BACKEND_PORT)
    Helm->>K3D: Deploy frontend + backend containers
    K3D->>FE: Provision frontend service (NodePort)
    K3D->>BE: Provision backend service (NodePort)
    Job->>K3D: Wait for rollout completion
    Job->>BE: Smoke test backend health<br/>(runner IP:nodePort)
    Job->>FE: Smoke test frontend health<br/>(runner IP:nodePort)
    BE-->>Job: Health response
    FE-->>Job: Health response
    Job->>CI: Report e2e test results

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • zendern
  • idonaldson
🚥 Pre-merge checks | ✅ 3 passed | ❌ 2 flagged

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive: the title 'Circuit Python Helm chart' refers to a key component but does not capture the main objective of templatizing the Helm chart and adding k3d smoke tests in CI. Resolution: consider a more specific title such as 'Add Helm chart templating and k3d smoke test CI'.

✅ Passed checks (3 passed)

  • Description check — ✅ Passed: the pull request description follows the template structure with all major sections filled out, providing clear context on why the changes are necessary, how they address the need, side effects, and the testing approach.
  • Linked Issues check — ✅ Passed: skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai (bot) left a comment

Actionable comments posted: 9

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@extensions/context.py`:
- Line 111: The hardcoded kubectl version context["kubectl_version"] is set to
an outdated v1.15.0; update it to a modern compatible version such as "v1.32.5"
(or make it configurable) so kubectl is within one minor version of the cluster;
locate the assignment to context["kubectl_version"] in extensions/context.py and
replace "v1.15.0" with "v1.32.5" (or read from an environment/config var) to
ensure compatibility with the CI k3d-provisioned cluster.

In `@template/.github/workflows/ci.yaml.jinja`:
- Around line 343-351: Replace the single-shot curl checks with a short retry
loop that waits until each endpoint returns success or a timeout is reached;
specifically modify the steps that run "curl --fail-with-body
http://localhost:{{ backend_nodeport_number }}/api/healthcheck" and "curl
--fail-with-body http://localhost:{{ frontend_nodeport_number
}}/api/healthcheck" so they retry (sleep between attempts) for a configurable
total timeout (e.g., 60s) and fail the job if the endpoint never becomes
healthy, ensuring the job is resilient to slow startup.

In `@template/`{% if has_backend %}backend{% endif %}/kiota_nullable_fixer.py:
- Around line 254-258: inject_missing_python_fields fails to mark datetime
import needed for list[datetime.datetime]; update the list-type handling in
inject_missing_python_fields (the elif branch that checks
field_type.startswith("list[")) so that when the inner type is datetime.datetime
(e.g. field_type == "list[datetime.datetime]" or inner contains
"datetime.datetime") it sets needs_datetime_import = True before emitting
Optional[list[...]] and using
n.get_collection_of_primitive_values(datetime.datetime); this ensures import
datetime is injected for list-of-datetime fields.

In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/deployment.yaml.jinja:
- Around line 80-81: The BACKEND_HOST value logic is inverted: instead of
switching to an external address when hostNetwork is enabled, always use
localhost when the backend sidecar is present; update the template that sets
BACKEND_HOST (the block referencing .Values.image.backend and
.Values.env.frontend.BACKEND_HOST) so it checks for the presence of the backend
sidecar (.Values.image.backend) first and sets value to "localhost" when true,
otherwise fall back to .Values.env.frontend.BACKEND_HOST; keep the hostNetwork
flag irrelevant for this decision because sidecar and frontend share the same
network namespace.
- Around line 116-121: Move the top-level "volumes:" key so it is only rendered
when .Values.persistence.backend.enabled is true; currently "volumes:" is always
emitted which creates an empty list when the conditional block is skipped and
yields invalid YAML. Modify the template so that the "volumes:" line and the
inner block that creates the "backend-logs" volume are both wrapped by the same
conditional (.Values.persistence.backend.enabled) that already guards the
corresponding volumeMounts, using the existing include "{{ repo_name
}}.fullname" symbol to build the claimName.

In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/pdb.yaml.jinja:
- Around line 1-11: The PodDisruptionBudget template can block drains when
replicaCount == 1 because .Values.pdb.minAvailable defaults to 1; update the
pdb.yaml.jinja template to make minAvailable conditional on .Values.replicaCount
(use 0 when .Values.replicaCount == 1, otherwise use .Values.pdb.minAvailable)
so the PDB does not prevent voluntary node drains for single-replica
deployments; touch the PodDisruptionBudget template (pdb.yaml.jinja) and adjust
the spec.minAvailable logic referencing .Values.pdb.minAvailable and
.Values.replicaCount.

In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/templates/service-backend.yaml.jinja:
- Line 10: Add an explicit comment in values.yaml next to the
env.frontend.BACKEND_PORT entry indicating it is reused by the backend Service
(e.g., "# also used as the backend Service port") so readers know
.Values.env.frontend.BACKEND_PORT drives both the frontend container env var and
the backend Service port referenced in the service-backend.yaml.jinja template;
update the required-field message if present to avoid confusion.

In `@template/deployment/`{% if is_circuit_python_driver %}helm{% endif
%}/values.yaml.jinja:
- Around line 22-27: The values.yaml.jinja currently hardcodes
frontend_nodeport_number and backend_nodeport_number which makes
.Values.service.frontend.nodePort truthy even when type=ClusterIP; change the
defaults so nodePort is null (or omitted) by default in the rendered
values.yaml.jinja (use null instead of the numeric template variables for
frontend_nodeport_number/backend_nodeport_number), add a short comment telling
operators to set nodePort only for NodePort services, and also update the
service template guard (the template referencing
.Values.service.frontend.nodePort) to only emit nodePort when it exists and the
service type is not "ClusterIP" (use a conditional that checks both
.Values.service.frontend.nodePort and .Values.service.frontend.type).

In `@tests/copier_data/data4.yaml`:
- Around line 49-50: Remove the dead NodePort entries frontend_nodeport_number
and backend_nodeport_number from the data4 fixture since
is_circuit_python_driver: false and deploy_as_executable: true cause both
NodePort when conditions to be false (the Helm chart isn't generated); update
the YAML by deleting those two keys so future readers aren't misled.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: d64a0217-33b6-40f8-b86b-149fd954d553

📥 Commits

Reviewing files that changed from the base of the PR and between bd9c1de and 14e1d75.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (29)
  • .copier-answers.yml
  • .devcontainer/Dockerfile
  • .devcontainer/devcontainer-lock.json
  • .devcontainer/devcontainer.json
  • .devcontainer/install-ci-tooling.py
  • .github/workflows/ci.yaml
  • .github/workflows/hash_git_files.py
  • .github/workflows/pre-commit.yaml
  • copier.yml
  • extensions/context.py
  • pyproject.toml
  • template/.devcontainer/Dockerfile
  • template/.devcontainer/devcontainer.json.jinja
  • template/.github/workflows/ci.yaml.jinja
  • template/.github/workflows/hash_git_files.py
  • template/.github/workflows/pre-commit.yaml
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/Chart.yaml.jinja
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/_helpers.tpl.jinja
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/deployment.yaml.jinja
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/pdb.yaml.jinja
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/pvc-backend.yaml.jinja
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/service-backend.yaml.jinja
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/service-frontend.yaml.jinja
  • template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/values.yaml.jinja
  • template/{% if has_backend %}backend{% endif %}/kiota_nullable_fixer.py
  • tests/copier_data/data1.yaml
  • tests/copier_data/data2.yaml
  • tests/copier_data/data3.yaml
  • tests/copier_data/data4.yaml

Comment thread: extensions/context.py (outdated)
Comment thread: template/.github/workflows/ci.yaml.jinja
Comment on lines +254 to +258
item_format = items.get("format")
if item_type == "string" and item_format == "date-time":
    field_types[field_name] = "list[datetime.datetime]"
    has_primitive_array = True
elif item_type == "string":

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

inject_missing_python_fields never sets needs_datetime_import for list[datetime.datetime], causing a NameError in generated code.

The new "list[datetime.datetime]" type is correctly produced here, but its only consumer — inject_missing_python_fields — dispatches it through the elif field_type.startswith("list["): branch (line 327) without setting needs_datetime_import = True. As a result:

  • field_declarations emits Optional[list[datetime.datetime]]
  • deser_entries emits n.get_collection_of_primitive_values(datetime.datetime)

…but import datetime is never injected (line 363 guard stays False). The generated file will raise a NameError at runtime.

🐛 Proposed fix in inject_missing_python_fields (lines 327–331)
 elif field_type.startswith("list["):
     inner = field_type[5:-1]
     getter = f"get_collection_of_primitive_values({inner})"
     writer_call = f'writer.write_collection_of_primitive_values("{field_name}", self.{field_snake})'
     python_type = field_type
+    if inner == "datetime.datetime":
+        needs_datetime_import = True

Comment on lines +80 to +81
            - name: BACKEND_HOST
              value: "{{ if and (not .Values.hostNetwork.enabled) .Values.image.backend }}localhost{{ else }}{{ .Values.env.frontend.BACKEND_HOST }}{{ end }}"

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "*.jinja" | grep -i deployment

Repository: LabAutomationAndScreening/copier-nuxt-python-intranet-app

Length of output: 985


🏁 Script executed:

git ls-files | grep -E "(deployment|helm)" | head -20

Repository: LabAutomationAndScreening/copier-nuxt-python-intranet-app

Length of output: 965


🌐 Web query:

Kubernetes hostNetwork true pod containers network namespace sharing

💡 Result:

When you set hostNetwork: true on a Kubernetes Pod, the Pod uses the node's network namespace rather than a dedicated per-Pod network namespace [1].

Implications for container network-namespace sharing within a Pod:
1) Containers within the same Pod still share a network namespace with each other (the normal Pod behavior). With hostNetwork: true, that shared namespace is the host/node network namespace [2][1].
2) Therefore, all containers in that Pod effectively share the host network namespace as a group; they are not isolated behind a per-Pod network namespace [1].

What Kubernetes expects you to configure:
  • When using hostNetwork, you should specify ports so the scheduler can make correct placement decisions [1].
  • Specifically, when hostNetwork is true, hostPort must match containerPort, and unspecified hostPort fields default to match containerPort [1].

DNS note (common gotcha): to have DNS options set along with hostNetwork, you must explicitly use dnsPolicy: ClusterFirstWithHostNet (dnsPolicy otherwise defaults to ClusterFirst) [3].

Security/policy note: many security and compliance guides recommend not generally permitting hostNetwork: true because it removes an isolation boundary (the container can access host networking resources) [4].

Related but different: hostNetwork is about the network namespace; process-namespace sharing between containers in a Pod is controlled by shareProcessNamespace, not hostNetwork [5].



🏁 Script executed:

cat -n "template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/deployment.yaml.jinja" | sed -n '70,90p'

Repository: LabAutomationAndScreening/copier-nuxt-python-intranet-app

Length of output: 1167


🏁 Script executed:

cat -n "template/deployment/{% if is_circuit_python_driver %}helm{% endif %}/templates/deployment.yaml.jinja" | wc -l

Repository: LabAutomationAndScreening/copier-nuxt-python-intranet-app

Length of output: 105


Always use localhost when the backend sidecar is present.

Containers in the same Pod share a network namespace regardless of the hostNetwork setting. When hostNetwork: true, they share the host network namespace; when hostNetwork: false, they share a per-Pod network namespace. In both cases, the backend sidecar is reachable via localhost within that shared namespace. The current condition incorrectly forces an external address when hostNetwork: true, breaking frontend→backend calls.

Suggested fix
-            - name: BACKEND_HOST
-              value: "{{ if and (not .Values.hostNetwork.enabled) .Values.image.backend }}localhost{{ else }}{{ .Values.env.frontend.BACKEND_HOST }}{{ end }}"
+            - name: BACKEND_HOST
+              value: "{{ if .Values.image.backend }}localhost{{ else }}{{ .Values.env.frontend.BACKEND_HOST }}{{ end }}"

Comment on lines +1 to +11
{% raw %}{{- if .Values.pdb.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ include "{% endraw %}{{ repo_name }}{% raw %}.fullname" . }}
spec:
  minAvailable: {{ .Values.pdb.minAvailable }}
  selector:
    matchLabels:
      app: '{{ include "{% endraw %}{{ repo_name }}{% raw %}.fullname" . }}'
{{- end }}{% endraw %}

🧹 Nitpick | 🔵 Trivial

LGTM — one operational note on minAvailable vs replicaCount.

The template is correct. Worth noting that the default minAvailable: 1 paired with the default replicaCount: 1 would block all voluntary node drains when pdb.enabled is set to true. Since pdb.enabled: false by default and the comment already recommends enabling it only for cloud deployment (where replicas > 1 are expected), this is a heads-up rather than a blocking concern.
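If a guard were wanted anyway, a minimal sketch (assuming the .Values keys discussed in this thread, omitting the copier {% raw %} wrapping and the fullname helper; not the chart's actual template) could relax minAvailable for single-replica installs:

# Sketch only — value keys are assumed from this review thread.
{{- if .Values.pdb.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-fullname      # the real template derives this from the chart's fullname helper
spec:
  # A single-replica install can never satisfy minAvailable: 1 during a drain, so relax it to 0 there:
  minAvailable: {{ if le (int .Values.replicaCount) 1 }}0{{ else }}{{ .Values.pdb.minAvailable }}{{ end }}
  selector:
    matchLabels:
      app: example-fullname
{{- end }}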


spec:
  type: '{{ .Values.service.backend.type }}'
  ports:
    - port: {{ required "env.frontend.BACKEND_PORT is required" .Values.env.frontend.BACKEND_PORT | int }}

🧹 Nitpick | 🔵 Trivial | 💤 Low value

Backend service port is sourced from .Values.env.frontend.BACKEND_PORT.

This works as a single source of truth — one value drives both the frontend container's BACKEND_PORT env var and the backend Service port. However, it is non-obvious: anyone reading or editing the backend service template will be surprised to find the port under the frontend env path, and the required-field error message reinforces the confusion. Consider adding a brief comment in values.yaml alongside env.frontend.BACKEND_PORT (e.g., # also used as the backend Service port) to make the dual-purpose explicit.
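A hedged sketch of the suggested values.yaml annotation (the key names and port value here are illustrative, not copied from the template):

env:
  frontend:
    BACKEND_HOST: backend.example.local
    BACKEND_PORT: 8000        # also used as the backend Service port (service-backend.yaml.jinja)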


Comment on lines +22 to +27
  frontend:
    type: NodePort
    nodePort: '{% endraw %}{{ frontend_nodeport_number }}{% raw %}' # pulumi will override this for cloud deployment
  backend:
    type: NodePort
    nodePort: '{% endraw %}{{ backend_nodeport_number }}{% raw %}' # pulumi will override this for cloud deployment

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

nodePort hardcoded in values.yaml will break any ClusterIP / LoadBalancer override.

Because nodePort is always rendered as a non-null, non-zero value (e.g., '30060'), the {{- if .Values.service.frontend.nodePort }} guard in the service templates will always be truthy. Kubernetes rejects a Service with a nodePort field set when its type is ClusterIP. So if Pulumi (or any operator) overrides only service.frontend.type to ClusterIP without also nulling out service.frontend.nodePort, the apply will be rejected.

The standard Helm pattern is to default nodePort to null (or omit it) and only populate it when the type requires it:

🛡️ Proposed fix
 service:
   frontend:
     type: NodePort
-    nodePort: '{% endraw %}{{ frontend_nodeport_number }}{% raw %}' # pulumi will override this for cloud deployment
+    nodePort: {% endraw %}{{ frontend_nodeport_number }}{% raw %}  # pulumi must also set nodePort: null when changing type to ClusterIP/LoadBalancer
   backend:
     type: NodePort
-    nodePort: '{% endraw %}{{ backend_nodeport_number }}{% raw %}' # pulumi will override this for cloud deployment
+    nodePort: {% endraw %}{{ backend_nodeport_number }}{% raw %}  # pulumi must also set nodePort: null when changing type to ClusterIP/LoadBalancer

Alternatively, add nodePort: null override guidance prominently in a comment, and consider guarding the service template with a type check:

{{- if and .Values.service.frontend.nodePort (ne .Values.service.frontend.type "ClusterIP") }}
nodePort: {{ .Values.service.frontend.nodePort }}
{{- end }}

Comment on lines +49 to +50
frontend_nodeport_number: 30500
backend_nodeport_number: 30505

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Dead configuration — NodePort values are irrelevant for this fixture.

data4 sets is_circuit_python_driver: false and deploy_as_executable: true. Both NodePort when conditions evaluate to false, so Copier will never read these values; the Helm chart directory isn't generated at all for this combination. Remove these two entries to avoid misleading future readers into thinking NodePorts are used in executable deployments.

🗑️ Proposed fix
 auth_to_ecr_for_backend_e2e_in_ci: false
-frontend_nodeport_number: 30500
-backend_nodeport_number: 30505

@ejfine ejfine marked this pull request as ready for review May 5, 2026 18:19
@ejfine ejfine requested a review from zendern May 5, 2026 18:19
@ejfine ejfine merged commit 87bba86 into main May 5, 2026
23 checks passed
@ejfine ejfine deleted the backend-fixer branch May 5, 2026 18:25
