
Kubernetes The Hard Way Part 4

Ilham Surya · 4 min read
SRE Engineer - Fullstack Enthusiast - Go, Python, React, Typescript


This post continues from Kubernetes The Hard Way Part 3. In this fourth part I continue the bootstrapping process for the kubernetes-controller and kubernetes-worker instances, and also cover setting up kubectl on the instance.

What this part covers

  • Bootstrapping the control plane (kube-apiserver, controller-manager, scheduler).
  • RBAC for kubelet to talk to the API.
  • Bootstrapping worker nodes (containerd, kubelet, kube-proxy, CNI).
  • Configuring kubectl on admin host and doing smoke tests.

Pre-req: the certificates, kubeconfigs, and encryption-config.yaml were prepared in Parts 2 and 3.

1) Bootstrap the Control Plane

Run on each controller instance.

Install binaries

wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.30.2/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.30.2/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.30.2/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.30.2/bin/linux/amd64/kubectl"

chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
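A quick sanity check that the binaries landed on PATH with the expected version:

kube-apiserver --version
kubectl version --client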

Prepare directories and certs

sudo mkdir -p /var/lib/kubernetes
sudo cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/

Internal IP

INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
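This assumes the plain IMDSv1 metadata endpoint answers GET requests. If the instance enforces IMDSv2, a session token is needed first; a minimal sketch:

TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
INTERNAL_IP=$(curl -s -H "X-aws-ec2-metadata-token: ${TOKEN}" \
  http://169.254.169.254/latest/meta-data/local-ipv4)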

kube-apiserver unit

Note: the legacy --kubelet-https flag has been removed upstream, and a v1.30 kube-apiserver refuses to start when it is passed, so it is omitted from the unit below.

cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NodeRestriction \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --etcd-servers=https://10.0.1.10:2379,https://10.0.1.11:2379,https://10.0.1.12:2379 \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --runtime-config='api/all=true' \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Controller-manager unit

sudo mkdir -p /etc/kubernetes/config

cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes-the-hard-way \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Scheduler config & unit

cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --bind-address=0.0.0.0
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start services

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

Verify health (on controller):

kubectl get componentstatuses --kubeconfig /var/lib/kubernetes/admin.kubeconfig
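Note that kubectl get componentstatuses has been deprecated since v1.19; it still answers, with a warning. You can also probe the API server's health endpoint directly, assuming (as in Part 2) the server certificate includes 127.0.0.1 in its SANs:

curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz
# expected response body: ok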

RBAC: allow kubelet TLS bootstrap

cat <<EOF | kubectl apply --kubeconfig /var/lib/kubernetes/admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:node:bootstrapper
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
  - kind: Group
    name: system:bootstrappers
EOF
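The API server itself also needs permission to reach the kubelet API on every node, or kubectl exec/logs/port-forward will fail with authorization errors. Upstream Kubernetes The Hard Way grants this with a dedicated ClusterRole bound to the kubernetes user; a sketch, assuming the API server's kubelet client certificate from Part 2 carries CN=kubernetes:

cat <<EOF | kubectl apply --kubeconfig /var/lib/kubernetes/admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - kind: User
    name: kubernetes
EOF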

2) Bootstrap Worker Nodes

Run on each worker instance.

Install container runtime + CNI

sudo mkdir -p /etc/cni/net.d /opt/cni/bin /var/lib/kubelet /var/lib/kube-proxy /var/lib/kubernetes /var/run/kubernetes

wget -q --show-progress --https-only --timestamping \
https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64 \
https://github.com/containerd/containerd/releases/download/v1.7.18/containerd-1.7.18-linux-amd64.tar.gz \
https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-amd64-v1.4.1.tgz

sudo tar -xvf containerd-1.7.18-linux-amd64.tar.gz -C /usr/local
sudo install -m 755 runc.amd64 /usr/local/bin/runc
sudo tar -xvf cni-plugins-linux-amd64-v1.4.1.tgz -C /opt/cni/bin/

containerd config

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# The release tarball ships binaries only, no systemd unit; fetch the upstream unit file first
sudo curl -fsSL -o /etc/systemd/system/containerd.service \
  https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
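One thing worth checking in the generated config.toml is the cgroup driver. The kubelet defaults to cgroupfs, which matches containerd's default SystemdCgroup = false; if you prefer the systemd driver (common on cgroup v2 hosts), flip both sides so they stay in sync. A sketch:

# Optional: switch containerd to the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
# ...and add `cgroupDriver: systemd` to kubelet-config.yaml below to match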

Worker binaries

wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.30.2/bin/linux/amd64/kubelet" \
"https://storage.googleapis.com/kubernetes-release/release/v1.30.2/bin/linux/amd64/kube-proxy"

chmod +x kubelet kube-proxy
sudo mv kubelet kube-proxy /usr/local/bin/

Move certs + kubeconfigs

sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
sudo cp ca.pem /var/lib/kubernetes/

kubelet config

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.200.0.0/24"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
EOF
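Note that podCIDR is per node: each worker must claim a distinct /24 out of the 10.200.0.0/16 cluster CIDR (worker-0 gets 10.200.0.0/24, worker-1 gets 10.200.1.0/24, and so on), otherwise pod IPs collide across nodes. A minimal sketch, with a hypothetical WORKER_INDEX variable set per machine:

WORKER_INDEX=0   # hypothetical: 0 on worker-0, 1 on worker-1, ...
POD_CIDR="10.200.${WORKER_INDEX}.0/24"
sudo sed -i "s|podCIDR: \"10.200.0.0/24\"|podCIDR: \"${POD_CIDR}\"|" /var/lib/kubelet/kubelet-config.yaml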

kubelet unit

Note: the legacy --container-runtime and --network-plugin flags no longer exist in v1.30 (removed in v1.27 and v1.24 respectively) and would keep the kubelet from starting, so only the runtime endpoint is passed.

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --register-node=true \\
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

kube-proxy config + unit

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF

cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start worker services

sudo systemctl daemon-reload
sudo systemctl enable --now kubelet kube-proxy
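Quick checks that both services actually came up:

sudo systemctl status kubelet kube-proxy --no-pager
sudo journalctl -u kubelet --no-pager -n 50   # if the node fails to register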

3) Configure kubectl (admin host or controller-0)

mkdir -p ~/.kube
cp admin.kubeconfig ~/.kube/config
kubectl get componentstatuses
kubectl get nodes -o wide

If nodes show NotReady, a CNI plugin is still missing; flannel is one option (but see the CIDR caveat below):

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
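One caveat with the stock manifest: kube-flannel.yml defaults its Network to 10.244.0.0/16, which does not match this cluster's 10.200.0.0/16 pod CIDR, so edit the net-conf.json section before applying. A sketch:

# Align flannel's backing network with this cluster's pod CIDR
curl -fsSLo kube-flannel.yml \
  https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
sed -i 's|10.244.0.0/16|10.200.0.0/16|' kube-flannel.yml
kubectl apply -f kube-flannel.yml

Also note that flannel reads each node's spec.podCIDR, which is only populated automatically when the controller manager runs with --allocate-node-cidrs=true; with the manual per-node podCIDR approach used here, double-check which source your CNI actually consumes.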

4) Smoke tests

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port 80 --type NodePort
kubectl get pods -o wide

# From a worker node:
curl -I http://localhost:<nodeport>
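# Or look the port up programmatically (from a host with kubectl configured):
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://localhost:${NODE_PORT}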

If the response is 200 OK, the control plane, workers, and networking are wired together correctly.

Conclusion Part 4

  • Control plane services are up with TLS, RBAC, and encryption at rest.
  • Workers run containerd, kubelet, kube-proxy with CNI.
  • kubectl is configured and a sample workload runs.

Next (optional): add metrics-server, CoreDNS tuning, cluster-autoscaler, and automate this with Ansible/Terraform.