Setup Kubernetes 1.25.2 with containerd on openSUSE Leap 15.4
I don't want to work overtime on weekends, and I want to keep some free time for myself during this stressful period.
environment
OS: openSUSE Leap 15.4
kubernetes: 1.25.2
containerd: 1.6.6/1.6.8
virtualization: Parallels Desktop 18 for Mac Pro Edition
setup the virtual machines
create 3 (aarch64) openSUSE Leap 15.4 virtual machines: one for the master node, the other two for the worker nodes. The 3 virtual machines are connected in the 'Shared' network mode in Parallels Desktop. LZY-SUSE154-002 is the master node; the other two are the worker nodes.
172.18.0.3 LZY-SUSE154-001
172.18.0.6 LZY-SUSE154-002
172.18.0.7 LZY-SUSE154-003
during the installation of the virtual machines, ensure a swap partition is not created.
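If a swap partition did slip in, a minimal sketch for turning it off permanently (by default the kubelet refuses to run with swap enabled):

```shell
# turn off all swap devices for the current boot
sudo swapoff -a
# comment out any swap entries in fstab so it stays off after reboot
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
```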
add the IP, full hostname and short hostname into /etc/hosts on each node so the nodes can resolve each other by name, ==still not sure why this is actually needed==.
172.18.0.3 LZY-SUSE154-001 node1
172.18.0.6 LZY-SUSE154-002 master
172.18.0.7 LZY-SUSE154-003 node2
preflight in 3 virtual machines
sudo zypper addrepo https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64 aliyun-k8s
sudo zypper refresh
sudo zypper update
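The container-runtimes guide (see references) also expects the overlay/br_netfilter kernel modules and a few sysctls on all 3 nodes; a sketch (kubeadm preflight will complain if these are missing):

```shell
# load the required kernel modules now and on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```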
#ensure that swap is disabled; no output means swap is off
sudo swapon --show
disable the firewall in 3 virtual machines
Disabling the firewall is only to keep testing simple; you could instead keep the firewall enabled and add rules for the specific ports and services Kubernetes needs.
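For example, if you would rather keep firewalld running, a sketch of opening the standard ports (port numbers taken from the Kubernetes docs; adjust for your setup):

```shell
# control-plane node: API server, etcd, kubelet, controller-manager, scheduler
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
# worker nodes: kubelet and the NodePort range
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
# flannel VXLAN traffic
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload
```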
sudo systemctl stop firewalld
sudo systemctl disable firewalld
install kubelet, kubeadm and kubectl in 3 virtual machines
sudo zypper install kubelet=1.25.2-0
sudo zypper install kubeadm=1.25.2-0
sudo zypper install kubectl=1.25.2-0
#enable the kubelet service started on boot
sudo systemctl enable kubelet
install containerd
option 1 (1.6.8) from the official binaries in master node
use sudo su - if superuser privileges are required.
installing containerd
sudo wget https://github.com/containerd/containerd/releases/download/v1.6.8/containerd-1.6.8-linux-arm64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.6.8-linux-arm64.tar.gz
starting containerd via systemd, download the containerd.service unit file from https://raw.githubusercontent.com/containerd/containerd/main/containerd.service into /usr/local/lib/systemd/system/
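A sketch of that download step (the target directory usually has to be created first):

```shell
# create the unit directory and fetch the upstream service file into it
sudo mkdir -p /usr/local/lib/systemd/system
sudo wget -O /usr/local/lib/systemd/system/containerd.service \
  https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
```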
systemctl daemon-reload
systemctl enable --now containerd
installing runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.arm64
install -m 755 runc.arm64 /usr/local/sbin/runc
installing CNI plugins
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.1.1.tgz
option 2 (1.6.6) from docker in worker node 1
sudo zypper install docker
sudo docker ps
option 3 (1.6.6) from zypper repos in worker node 2
sudo zypper install containerd=1.6.6-150000.73.2
sudo zypper install containerd-ctr=1.6.6-150000.73.2
configure containerd
initialize containerd with a default config.toml
sudo mkdir -p /etc/containerd
sudo containerd config default > /etc/containerd/config.toml
sudo vi /etc/containerd/config.toml
update config.toml to configure the systemd cgroup driver, override the sandbox (pause) image, and configure a registry mirror (accelerator)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
endpoint = ["https://tgg8yfs8.mirror.aliyuncs.com"]restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
do in master node, initialize the cluster with kubeadm
initialize the cluster
sudo kubeadm init --kubernetes-version=1.25.2 \
--apiserver-advertise-address=172.18.0.6 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=172.19.0.0/16 \
--pod-network-cidr=172.20.0.0/16
make kubectl work for the non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
install a Pod network add-on
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
if https://raw.githubusercontent.com is blocked, just get the yaml file elsewhere and copy/paste it into vi
sudo vi kube-flannel.yml
sudo kubectl apply -f kube-flannel.yml
change the Network field in net-conf.json to match the --pod-network-cidr used at kubeadm init
net-conf.json: |
{
"Network": "172.20.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
check the status, all pods should be in Running status
sudo kubectl get pods --all-namespaces
do in worker nodes, add the worker nodes into the cluster with kubeadm
specify the container runtime with --cri-socket. Kubernetes uses the Container Runtime Interface (CRI) to talk to your chosen container runtime; for containerd on Linux the default CRI socket is /run/containerd/containerd.sock, so set --cri-socket=/run/containerd/containerd.sock.
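The token and CA-cert hash below come from the kubeadm init output; if that output was lost, a sketch for regenerating the whole join command on the master node:

```shell
# prints a ready-to-use 'kubeadm join ...' line with a fresh token
sudo kubeadm token create --print-join-command
```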
sudo kubeadm join 172.18.0.6:6443 \
--cri-socket=/run/containerd/containerd.sock \
--token <your token> \
--discovery-token-ca-cert-hash sha256:<your hash>
rebalance the CoreDNS Pods after at least one new node is joined
sudo kubectl -n kube-system rollout restart deployment coredns
check status in master node
sudo kubectl label node lzy-suse154-001 node-role.kubernetes.io/worker=worker
sudo kubectl label node lzy-suse154-003 node-role.kubernetes.io/worker=worker
sudo kubectl get nodes -o wide
sudo kubectl get pods --all-namespaces -o wide
references
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm
https://kubernetes.io/docs/setup/production-environment/container-runtimes
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver
https://github.com/containerd/containerd/blob/main/docs/getting-started.md
https://help.aliyun.com/document_detail/60750.html
https://www.cnblogs.com/centos-python/articles/14097330.html
https://juejin.cn/post/7053683649283096606