
Commit 644854b

Add GCP CCM quickstart for kops and local
Signed-off-by: LogicalShark <maralder@google.com>
1 parent fa9f875 commit 644854b

File tree

5 files changed: +426 −1 lines changed

.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -9,3 +9,4 @@ _rundir/
 _tmp/
 /bin/
 __pycache__/
+/clusters/
```

README.md

Lines changed: 3 additions & 1 deletion
```diff
@@ -11,7 +11,9 @@
 This repository implements the [cloud provider](https://github.com/kubernetes/cloud-provider) interface for [Google Cloud Platform (GCP)](https://cloud.google.com/).
 It provides components for Kubernetes clusters running on GCP and is maintained primarily by the Kubernetes team at Google.
 
-To see all available commands in this repository, run `make help`.
+To get started with the GCP CCM, see the **[kOps Quickstart](docs/kops-quickstart.md)** (automated setup) or the **[Manual CCM Setup Guide](docs/ccm-manual.md)**.
+
+For local development, use `make help` to see all available commands.
 
 ## Components
 
```

docs/ccm-manual.md

Lines changed: 155 additions & 0 deletions
# GCP Cloud Controller Manager (CCM) Manual Setup Guide

This guide provides instructions for building and deploying the GCP Cloud Controller Manager (CCM) to a self-managed Kubernetes cluster.

## Prerequisites

1. **Kubernetes Cluster**: A Kubernetes cluster running on Google Cloud Platform.
   * The cluster's components (`kube-apiserver`, `kube-controller-manager`, and `kubelet`) must run with the `--cloud-provider=external` flag.
   * For an example of how to create GCE instances and initialize such a cluster manually using `kubeadm`, see **[Manual Kubernetes Cluster on GCE](manual-cluster-gce.md)**.
2. **GCP Service Account**: The nodes (or the CCM pod itself) must have access to a GCP IAM service account with sufficient permissions to manage compute resources (e.g. instances, load balancers, and routes).
3. **Docker & gcloud CLI**: Authorized and configured for pushing images to GCP Artifact Registry.
## Step 1: Build and Push the CCM Image (Manual Clusters)

If you are using a manually provisioned cluster (e.g. `kubeadm`), build the `cloud-controller-manager` Docker image and push it to your registry:

```sh
# Google Cloud project ID, registry location, and repository name.
GCP_PROJECT=$(gcloud config get-value project)
GCP_LOCATION=us-central1
REPO=my-repo

# Create an Artifact Registry repository (if it doesn't already exist).
gcloud artifacts repositories create ${REPO} \
  --project=${GCP_PROJECT} \
  --repository-format=docker \
  --location=${GCP_LOCATION} \
  --description="Docker repository for CCM"

# Grant the cluster nodes permission to read from the newly created Artifact Registry.
# This automatically extracts your GCE node's service account using kubectl and gcloud.
NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
NODE_ZONE=$(kubectl get node $NODE_NAME -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
NODE_SA=$(gcloud compute instances describe $NODE_NAME \
  --zone=$NODE_ZONE --project=${GCP_PROJECT} \
  --format="value(serviceAccounts[0].email)")

gcloud artifacts repositories add-iam-policy-binding ${REPO} \
  --project=${GCP_PROJECT} \
  --location=${GCP_LOCATION} \
  --member="serviceAccount:${NODE_SA}" \
  --role="roles/artifactregistry.reader"

# Configure Docker to authenticate with Artifact Registry.
gcloud auth configure-docker ${GCP_LOCATION}-docker.pkg.dev

# Build and push.
IMAGE_REPO=${GCP_LOCATION}-docker.pkg.dev/${GCP_PROJECT}/${REPO} IMAGE_TAG=v0 make publish
```

*Note: If `IMAGE_TAG` is omitted, the Makefile uses a combination of the current Git commit SHA and the build date.*
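For reference, the default tag has roughly this shape (an illustrative sketch only; the exact format is defined by the Makefile, and the SHA below is a placeholder):

```sh
# Illustrative only: a tag combining a short commit SHA and a build date.
GIT_SHA="abc1234"              # placeholder for: git rev-parse --short HEAD
BUILD_DATE="$(date +%Y%m%d)"   # e.g. 20240101
IMAGE_TAG="${GIT_SHA}-${BUILD_DATE}"
echo "${IMAGE_TAG}"
```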
## Step 2: Deploy the CCM to your Cluster (Manual Clusters)

Once the image is pushed, you must deploy the necessary RBAC permissions and the CCM pod itself to the Kubernetes cluster.

For native Kubernetes clusters, avoid the legacy `deploy/cloud-controller-manager.manifest` (a SaltStack template used by the legacy `kube-up`). Instead, use the kustomize-ready DaemonSet, which correctly includes the RBAC roles and deployment:

1. Update the image to your newly pushed tag:
   ```sh
   (cd deploy/packages/default && kustomize edit set image k8scloudprovidergcp/cloud-controller-manager=$IMAGE_REPO:$IMAGE_TAG)
   ```
2. The `manifest.yaml` DaemonSet intentionally ships with no execution flags (`args: []`). You **must** provide the necessary command-line arguments to the `cloud-controller-manager` container. For a typical kOps or GCE cluster, you can supply these arguments by creating a kustomize patch.

> [!NOTE]
> If you skipped building your own image in Step 1 and chose to deploy the public upstream image (`k8scloudprovidergcp/cloud-controller-manager:latest`), you **must** also include `command: ["/cloud-controller-manager"]` in your patch's `containers` block. Locally built Dockerfile images set the correct `ENTRYPOINT` automatically, so they do not require this override.

> [!IMPORTANT]
> Be sure to update the `--cluster-cidr` and `--cluster-name` arguments below to match your cluster's configuration. GCP resource names cannot contain dots (`.`), so if your cluster name is `my.cluster.net`, you **must** use a sanitized form such as `my-cluster-net` here.
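The dot-to-dash sanitization described above can be done with standard shell tools, for example:

```sh
# Replace dots with dashes to produce a valid GCP resource name.
CLUSTER_NAME="my.cluster.net"
SAFE_CLUSTER_NAME="$(echo "${CLUSTER_NAME}" | tr '.' '-')"
echo "${SAFE_CLUSTER_NAME}"   # prints my-cluster-net
```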
```sh
cat << EOF > deploy/packages/default/args-patch.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-controller-manager
  namespace: kube-system
spec:
  template:
    spec:
      volumes:
      - name: host-kubeconfig
        hostPath:
          path: /etc/kubernetes/admin.conf
      containers:
      - name: cloud-controller-manager
        command: ["/usr/local/bin/cloud-controller-manager"]
        volumeMounts:
        - name: host-kubeconfig
          mountPath: /etc/kubernetes/admin.conf
          readOnly: true
        args:
        - --kubeconfig=/etc/kubernetes/admin.conf
        - --authentication-kubeconfig=/etc/kubernetes/admin.conf
        - --authorization-kubeconfig=/etc/kubernetes/admin.conf
        - --cloud-provider=gce
        - --allocate-node-cidrs=true
        - --cluster-cidr=10.4.0.0/14
        - --cluster-name=kops-k8s-local
        - --configure-cloud-routes=true
        - --leader-elect=true
        - --use-service-account-credentials=true
        - --v=2
EOF
(cd deploy/packages/default && kustomize edit add patch --path args-patch.yaml)

# Deploy the configured package (this applies the DaemonSet and its required roles):
kubectl apply -k deploy/packages/default
```
### Alternative: Apply Standalone RBAC Roles

If you prefer to deploy the RBAC rules independently of the base DaemonSet package, you can apply them directly:

```sh
kubectl apply -f deploy/cloud-node-controller-role.yaml
kubectl apply -f deploy/cloud-node-controller-binding.yaml
kubectl apply -f deploy/pvl-controller-role.yaml
```
## Step 3: Verification

To verify that the Cloud Controller Manager is running successfully:

1. **Check the Pod Status**: Verify the pod is `Running` in the `kube-system` namespace.
   ```sh
   kubectl get pods -n kube-system -l component=cloud-controller-manager
   ```
2. **Check Pod Logs**: Look for errors, including access or authentication issues with the GCP API.
   ```sh
   kubectl describe pod -n kube-system -l component=cloud-controller-manager
   kubectl logs -n kube-system -l component=cloud-controller-manager
   ```
3. **Check Node Initialization**: The `kubelet` initially applies a `node.cloudprovider.kubernetes.io/uninitialized` taint when bound to an external cloud provider. The CCM should remove this taint once it successfully fetches the node's properties from the GCP API.
   ```sh
   # Ensure no nodes have the uninitialized taint; the output should be empty.
   kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints | grep uninitialized
   ```
4. **Verify External IPs and ProviderID**: Check that your nodes are populated with GCP-specific data (e.g., a `ProviderID` of the form `gce://...`).
   ```sh
   kubectl describe nodes | grep "ProviderID:"
   ```
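For reference, a GCE `ProviderID` follows the pattern `gce://<project>/<zone>/<instance>`; its components can be extracted with standard tools (the values below are hypothetical):

```sh
# Hypothetical ProviderID, split into its project/zone/instance parts.
PROVIDER_ID="gce://my-project/us-central1-a/my-node"
PROJECT="$(echo "${PROVIDER_ID}" | cut -d/ -f3)"
ZONE="$(echo "${PROVIDER_ID}" | cut -d/ -f4)"
INSTANCE="$(echo "${PROVIDER_ID}" | cut -d/ -f5)"
echo "${PROJECT} ${ZONE} ${INSTANCE}"   # prints my-project us-central1-a my-node
```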
## Teardown

If you used the default CCM package, you can clean up the local patch file and reset all changes to `kustomization.yaml`:

```sh
rm deploy/packages/default/args-patch.yaml
git checkout deploy/packages/default/kustomization.yaml
```

If you followed the [manual cluster setup guide](manual-cluster-gce.md), you can use its [teardown steps](manual-cluster-gce.md#teardown) to clean up your GCP resources.

docs/kops-quickstart.md

Lines changed: 69 additions & 0 deletions
# GCP CCM with kOps Quickstart

This guide provides a quickstart for building and deploying the GCP Cloud Controller Manager (CCM) to a self-managed Kubernetes cluster provisioned with kOps.

## Prerequisites

A Google Cloud Platform project with billing enabled.

## Deployment

The `make kops-up` target is an end-to-end workflow that automatically:
- Provisions a Kubernetes cluster using kOps.
- Builds the CCM image locally.
- Pushes the image to your Artifact Registry.
- Deploys the CCM (along with the required RBAC) to the cluster.

Run the following commands to get started:

```sh
# Enable required GCP APIs.
gcloud services enable compute.googleapis.com
gcloud services enable artifactregistry.googleapis.com

# Set environment variables.
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_LOCATION=us-central1
export GCP_ZONES=${GCP_LOCATION}-a
export KOPS_CLUSTER_NAME=kops.k8s.local
export KOPS_STATE_STORE=gs://${GCP_PROJECT}-kops-state

# Create the state store bucket if it doesn't already exist.
gcloud storage buckets create ${KOPS_STATE_STORE} --location=${GCP_LOCATION} || true

# Run the cluster creation target (this may take several minutes).
make kops-up
```
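One detail worth noting: a kOps cluster name ending in `.k8s.local` (as above) selects gossip-based discovery, so no public DNS zone is required. A quick illustrative check:

```sh
# Illustrative: detect whether a kOps cluster name implies gossip-based DNS.
KOPS_CLUSTER_NAME="kops.k8s.local"
case "${KOPS_CLUSTER_NAME}" in
  *.k8s.local) echo "gossip-based cluster (no DNS zone required)" ;;
  *)           echo "a routable DNS zone is required for ${KOPS_CLUSTER_NAME}" ;;
esac
```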
## Verification

To verify that the Cloud Controller Manager is running successfully:

1. **Check the Pod Status**: Verify the pod is `Running` in the `kube-system` namespace.
   ```sh
   kubectl get pods -n kube-system -l component=cloud-controller-manager
   ```
2. **Check Pod Logs**: Look for errors, including access or authentication issues with the GCP API.
   ```sh
   kubectl logs -n kube-system -l component=cloud-controller-manager
   ```
3. **Check Node Initialization**: The CCM should remove the `node.cloudprovider.kubernetes.io/uninitialized` taint once it successfully fetches the node's properties from the GCP API.
   ```sh
   # Ensure no nodes have the uninitialized taint; the output should be empty.
   kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints | grep uninitialized
   ```
4. **Verify ProviderID**: Check that your nodes are populated with GCP-specific data (e.g., a `ProviderID` of the form `gce://...`).
   ```sh
   kubectl describe nodes | grep "ProviderID:"
   ```
## Teardown

To tear down the cluster and clean up resources:

```sh
make kops-down
```
