Series: Kubernetes at home
- Kubernetes at home - Part 1: The hardware - January 02, 2021
- Kubernetes at home - Part 2: The install - January 05, 2021
- Kubernetes at home - Part 3: HAProxy Ingress - January 05, 2021
- Kubernetes at home - Part 4: DNS and a certificate with HAProxy Ingress - January 07, 2021
- Kubernetes at home - Part 5: Keycloak for authentication - January 16, 2021
- Kubernetes at home - Part 6: Keycloak authentication and Azure Active Directory - January 17, 2021
- Kubernetes at home - Part 7: Grafana, Prometheus, and the beginnings of monitoring - January 26, 2021
- Kubernetes at home - Part 8: MinIO initialization - March 01, 2021
- Kubernetes at home - Part 9: Minecraft World0 - April 24, 2021
- Kubernetes at home - Part 10: Wiping the drives - May 09, 2021
- Kubernetes at home - Part 11: Trying Harvester and Rancher on the bare metal server - May 29, 2021
- Kubernetes at home - Part 12: Proxmox at home - December 23, 2021
Kubernetes at home - Part 2: The install
In the previous part of this series, I gave a brief layout of the hardware. In this part, I'm going to lay out the highlights of the Kubernetes install itself. I'm sticking with as many default settings as I can in order to reduce future mistakes, since I don't yet know why I'd change them.
Operating system
The server is running Ubuntu 20.04 LTS at the moment. I value stability. Debugging weird issues isn’t on my TODO list. Having said that, a wide variety of operating systems would work just as well and I might experiment with others if I add more nodes to the cluster.
Steps - kubeadm
I went ahead and chose containerd as the CRI runtime. The only change from the default installation was switching to the systemd cgroup driver.
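For reference, a rough sketch of the containerd side of that change, assuming the stock /etc/containerd/config.toml location:

# Regenerate containerd's full default config, then set
#   SystemdCgroup = true
# under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
# and restart containerd so the kubelet and runtime agree on cgroups.
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd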
The kubeadm config.yml below uses the systemd cgroup driver, sets the containerd socket, and also sets a podSubnet of 10.112.0.0/12, planning ahead for the Calico install.
Edit 2021-01-06: I changed podSubnet from 10.0.0.0/8 to 10.112.0.0/12 to avoid a conflict with the default service cluster IP range of 10.96.0.0/12, which sits inside 10.0.0.0/8; 10.112.0.0/12 is the adjacent /12 block and doesn't overlap.
config.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "10.112.0.0/12"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
> kubeadm init --config config.yml
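Once init completes, kubeadm's own output gives the usual steps for getting a kubeconfig into place for a non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config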
I went ahead and allowed scheduling on the control-plane node as well since it’s a single node cluster.
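In practice that amounts to removing the NoSchedule taint kubeadm places on the control-plane node; on v1.20 that's roughly:

# allow regular workloads to schedule on the (only) control-plane node
kubectl taint nodes --all node-role.kubernetes.io/master-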
For networking, I went ahead and installed Project Calico. I don't have a good reason for choosing it beyond having found it in this documentation list. I'll stick with Calico as long as it just works.
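The install itself is a single manifest apply; at the time, the Calico docs pointed to something like the below (the manifest URL may have moved since):

# apply the Calico manifest from the Project Calico quickstart docs
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml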
I gave this server a static IP address of 192.168.0.45 on my home network, just to avoid problems with the address changing.
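That can be a DHCP reservation on the router or a netplan stanza on the server itself; a rough sketch of the latter on Ubuntu 20.04, where the interface name, gateway, and DNS address are placeholders rather than my actual values:

# hypothetical /etc/netplan/01-static.yaml; enp4s0, the gateway, and DNS are placeholders
sudo tee /etc/netplan/01-static.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    enp4s0:
      addresses: [192.168.0.45/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1]
EOF
sudo netplan apply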
Single node information
Installation went smoothly. When I examine the cluster using kubectl, I see one node with a lot of solid info on it. Admittedly, the actual installation happened a few weeks before this post was written, so the ages in the output below reflect that.
daniel@bequiet:~$ kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
danielamd   Ready    control-plane,master   18d   v1.20.1
daniel@bequiet:~$ kubectl describe nodes/danielamd
Name: danielamd
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=danielamd
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.0.45/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.133.205.192
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 12 Dec 2020 01:03:34 -0500
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: danielamd
AcquireTime: <unset>
RenewTime: Wed, 30 Dec 2020 23:54:37 -0500
Conditions:
Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----                 ------  -----------------                 ------------------                ------                      -------
NetworkUnavailable   False   Wed, 30 Dec 2020 00:23:29 -0500   Wed, 30 Dec 2020 00:23:29 -0500   CalicoIsUp                  Calico is running on this node
MemoryPressure       False   Wed, 30 Dec 2020 23:53:27 -0500   Sat, 12 Dec 2020 01:03:32 -0500   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure         False   Wed, 30 Dec 2020 23:53:27 -0500   Sat, 12 Dec 2020 01:03:32 -0500   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure          False   Wed, 30 Dec 2020 23:53:27 -0500   Sat, 12 Dec 2020 01:03:32 -0500   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready                True    Wed, 30 Dec 2020 23:53:27 -0500   Sat, 19 Dec 2020 13:07:37 -0500   KubeletReady                kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 192.168.0.45
Hostname: danielamd
Capacity:
cpu: 12
ephemeral-storage: 479151816Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 65859444Ki
pods: 110
Allocatable:
cpu: 12
ephemeral-storage: 441586312895
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 65757044Ki
pods: 110
System Info:
Machine ID: <redacted>
System UUID: <redacted>
Boot ID: <redacted>
Kernel Version: 5.4.0-58-generic
OS Image: Ubuntu 20.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.3
Kubelet Version: v1.20.1
Kube-Proxy Version: v1.20.1
PodCIDR: 10.0.0.0/24
PodCIDRs: 10.0.0.0/24
Non-terminated Pods: (13 in total)
Namespace     Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------     ----                                      ------------  ----------  ---------------  -------------  ---
... <abbreviated> ...
kube-system   calico-kube-controllers-744cfdf676-2b9t7  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18d
kube-system   calico-node-sq46f                         250m (2%)     0 (0%)      0 (0%)           0 (0%)         18d
kube-system   coredns-74ff55c5b-5whx2                   100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     18d
kube-system   coredns-74ff55c5b-pdfqx                   100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     18d
kube-system   etcd-danielamd                            100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         18d
kube-system   kube-apiserver-danielamd                  250m (2%)     0 (0%)      0 (0%)           0 (0%)         18d
kube-system   kube-controller-manager-danielamd         200m (1%)     0 (0%)      0 (0%)           0 (0%)         18d
kube-system   kube-proxy-scbhr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         18d
kube-system   kube-scheduler-danielamd                  100m (0%)     0 (0%)      0 (0%)           0 (0%)         18d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                1100m (9%)   0 (0%)
memory             240Mi (0%)   340Mi (0%)
ephemeral-storage  100Mi (0%)   0 (0%)
hugepages-1Gi      0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events: <none>
Steps - kubectl
One quick aside: installing kubectl via arkade was easy. I found out about it via a Jamie Phillips blog post. Might as well try arkade out for a while.
daniel@bequiet:~$ arkade get kubectl --version v1.20.1
Downloading kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.20.1/bin/linux/amd64/kubectl
38.37 MiB / 38.37 MiB [-----------------------------------------------------------------------------------------] 100.00%
Tool written to: /home/daniel/.arkade/bin/kubectl
# Add (kubectl) to your PATH variable
export PATH=$PATH:$HOME/.arkade/bin/
# Test the binary:
/home/daniel/.arkade/bin/kubectl
# Or install with:
sudo mv /home/daniel/.arkade/bin/kubectl /usr/local/bin/
daniel@bequiet:~$ which kubectl
/home/daniel/.arkade/bin//kubectl
daniel@bequiet:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:09:25Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:51:19Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Summary
I have a perfectly fine (until told otherwise) single-node Kubernetes cluster running. It would be great to install some applications on it to justify its existence.