This is a simple walkthrough for installing Kubernetes on Ubuntu servers using kubeadm.
I have used kubespray before, which is easier, but building the cluster with kubeadm gives you a better grasp of the details. Pick whichever suits your needs.
The cluster runs on Ubuntu virtual machines and is intended for testing and learning. Downloading, installing, and configuring everything takes roughly one to two hours.
-- 2022-02-10 20:37:43, current version: Kubernetes 1.23.0
Ubuntu 20.04 virtual machines
Kubernetes 1.23.0
Run on all hosts:
sudo passwd root
sudo vim /etc/ssh/sshd_config
PermitRootLogin yes # add this line
sudo systemctl restart sshd.service
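To confirm the change took effect, you can validate the sshd config and then try logging in as root (the IP below assumes master01's address from the host plan further down; substitute your own):
sudo sshd -t # prints nothing if the config is valid
ssh root@172.16.106.38 # should now accept the root password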
Run on all hosts:
Set a static IP according to your plan.
Adjust the static IP for your own network, and look up the DNS servers your host machine uses.
sudo nano /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [172.16.106.11/24]  # static IP
      gateway4: 172.16.106.1         # gateway
      nameservers:
        addresses: [202.106.1.20, 202.106.111.120]  # DNS; adjust to your environment (e.g. the DNS of your Windows 10 host)
  version: 2
sudo netplan apply
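A quick check that the static IP and gateway were applied:
ip addr show enp0s3 # should list 172.16.106.11/24
ip route # the default route should point to 172.16.106.1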
Run on all hosts:
Set the hostname according to your plan.
master01 is used as the example here:
sudo hostnamectl set-hostname master01
sudo nano /etc/hosts
127.0.0.1 localhost
#127.0.1.1 master01
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.16.106.38 master01
172.16.106.39 node01
172.16.106.50 node02
172.16.106.51 master02
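With /etc/hosts filled in on every node, the machines should resolve each other by name; a quick sanity check from any host:
ping -c 2 master01
ping -c 2 node01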
Run on all hosts:
Note: Ubuntu does not ship with SELinux, so if you never installed it you can skip this step; CentOS and similar distributions need it disabled.
https://linuxconfig.org/how-to-disable-enable-selinux-on-ubuntu-20-04-focal-fossa-linux
The following steps are only needed if SELinux is installed:
sestatus # check the current status
sudo nano /etc/selinux/config # edit the config
SELINUX=disabled
sestatus # check again
reboot
sestatus # confirm it is disabled after the reboot
Run on all hosts:
sudo swapoff -a && sudo sed -i '/swap/d' /etc/fstab
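kubelet refuses to start while swap is on, so it is worth confirming it is really gone:
free -h # the Swap line should show 0B
swapon --show # should print nothing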
Run on all hosts:
In a learning environment you can simply disable the firewall.
In production, keep the firewall on and open only the ports Kubernetes needs on the master and worker nodes (see the ufw sketch after these commands).
sudo ufw disable # Ubuntu
sudo systemctl stop firewalld && sudo systemctl disable firewalld # CentOS
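For production, a minimal sketch of opening the standard Kubernetes ports with ufw instead of disabling it (the port list follows the official Kubernetes docs for 1.23; your CNI plugin may need more):
# control-plane (master) nodes
sudo ufw allow 6443/tcp # Kubernetes API server
sudo ufw allow 2379:2380/tcp # etcd server client API
sudo ufw allow 10250/tcp # kubelet API
sudo ufw allow 10257/tcp # kube-controller-manager
sudo ufw allow 10259/tcp # kube-scheduler
# worker nodes
sudo ufw allow 10250/tcp # kubelet API
sudo ufw allow 30000:32767/tcp # NodePort Services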
Run on all hosts:
# Enable kernel modules
sudo modprobe overlay && \
sudo modprobe br_netfilter
# Add some settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
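To verify the modules are loaded and the settings took effect:
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward # both should report 1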
Run on all hosts:
sudo apt update && \
sudo apt install apt-transport-https ca-certificates curl software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - && \
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
apt-cache policy docker-ce && \
sudo apt install -y containerd.io docker-ce docker-ce-cli && \
sudo systemctl status docker
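Optionally confirm Docker works end to end before moving on:
sudo docker run --rm hello-world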
Reference: https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/
Install kubelet, kubeadm, and kubectl
sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Note: https://packages.cloud.google.com/apt/doc/apt-key.gpg is not reachable from mainland China without a proxy. If you have one, download the key first,
copy it to master01 and node01, and then run:
sudo apt-key add apt-key.gpg
If you cannot use a proxy, switch to the Aliyun mirror inside China:
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
Run on all hosts:
sudo apt update && \
sudo apt -y install curl apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Run on all hosts:
sudo apt update && \
sudo apt -y install vim git curl wget kubelet kubeadm kubectl && \
sudo apt-mark hold kubelet kubeadm kubectl
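This installs the newest 1.23.x available at the time. If you would rather pin exactly 1.23.0, a sketch assuming the usual version naming in the kubernetes-xenial repo:
apt-cache madison kubeadm | head # list the versions the repo offers
sudo apt -y install kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00
sudo apt-mark hold kubelet kubeadm kubectl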
Run on all hosts:
kubectl version --client && kubeadm version
Run on all hosts:
# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d
# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
# Start and enable Services
sudo systemctl daemon-reload && \
sudo systemctl restart docker && \
sudo systemctl enable docker
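Docker should now report systemd as its cgroup driver, which matches what kubelet expects:
sudo docker info | grep -i 'cgroup driver' # expect: Cgroup Driver: systemd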
Run on all hosts:
sudo kubeadm config images list
root@master01:~# sudo kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
sudo kubeadm config images pull
Note: pulling from k8s.gcr.io requires a proxy.
Without one, you can pull from a domestic mirror instead: https://segmentfault.com/a/1190000038248999
# Pull the images from the Aliyun mirror
kubeadm config print init-defaults > kubeadm.conf
sed -i 's/k8s.gcr.io/registry.aliyuncs.com\/google_containers/g' kubeadm.conf
sudo kubeadm config images list --config kubeadm.conf
root@master01:~# sudo kubeadm config images list --config kubeadm.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6
sudo kubeadm config images pull --config kubeadm.conf
# Re-tag the Aliyun images with the k8s.gcr.io names that kubeadm expects.
# Source names: sudo kubeadm config images list --config kubeadm.conf
# Target names: sudo kubeadm config images list
# The versions differ (v1.23.0 vs v1.23.3) because `kubeadm config print init-defaults`
# writes a generic default kubernetesVersion, while the installed kubeadm resolves its
# own build version; re-tagging bridges the two.
sudo docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0 k8s.gcr.io/kube-apiserver:v1.23.3 &&
sudo docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0 k8s.gcr.io/kube-controller-manager:v1.23.3 &&
sudo docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0 k8s.gcr.io/kube-scheduler:v1.23.3 &&
sudo docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0 k8s.gcr.io/kube-proxy:v1.23.3 &&
sudo docker tag registry.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6 &&
sudo docker tag registry.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0 &&
sudo docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
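After re-tagging, the local image list should contain the k8s.gcr.io names kubeadm will look for:
sudo docker images | grep k8s.gcr.io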
Run on master01:
sudo kubeadm init \
--pod-network-cidr=192.168.0.0/16 \
--upload-certs \
--control-plane-endpoint=master01
The output looks like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
--discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cb623807e131118f6800867e9 \
--control-plane --certificate-key 8ca22421d961623b8958b89a628f2324da23e107622ab0f008de97af427b698b
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
--discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cb623807e131118f6800867e9
mkdir -p $HOME/.kube && \
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl cluster-info
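The join token printed by kubeadm init expires after 24 hours. If you add nodes later, a fresh worker join command can be generated on master01:
kubeadm token create --print-join-command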
Run on node01 and node02:
kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
--discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cb623807e131118f6800867e9
Run on master02:
kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
--discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cb623807e131118f6800867e9 \
--control-plane --certificate-key 8ca22421d961623b8958b89a628f2324da23e107622ab0f008de97af427b698b
Run on master01:
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
Note: this takes a few minutes.
watch kubectl get pods --all-namespaces
kubectl get nodes -o wide
Note: when every pod shows Running and every node shows Ready, the installation succeeded and the cluster is ready to use.
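As a final smoke test, you can deploy something small (nginx-test here is just an arbitrary example name and image) and watch it get scheduled onto a worker:
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide # should reach Running on one of the nodes
kubectl delete deployment nginx-test # clean up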
Reference: https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/
Date: 2022-02-10
Kubernetes version: 1.23.0
Questions are welcome.
If this helped, feel free to leave a comment.