Series articles

  1. Highly available K8S cluster deployment
  2. Deploying a k8s cluster with kubeadm

Machine list

IP address      Hostname                    Role
10.211.55.203   k8s-cluster-master1         Master1
10.211.55.204   k8s-cluster-master2         Master2
10.211.55.205   k8s-cluster-master3         Master3
10.211.55.206   k8s-cluster-node1           Node1
10.211.55.207   k8s-cluster-apiserver-vip   VIP

Parallels Desktop VM handling

How to fix identical product_uuid values after cloning VMs in Parallels Desktop (PD):

  1. First unregister the VMs. You can either delete them in the GUI while keeping the files, or use the command line:

    prlctl unregister ID|name

    ID|name: the virtual machine's name

    For example, the commands I used were:
    prlctl unregister k8s-cluster-master1
    prlctl unregister k8s-cluster-master2
    prlctl unregister k8s-cluster-master3
    prlctl unregister k8s-cluster-node1
  2. Re-register the VMs, adding the flag that regenerates the source UUID:

    prlctl register path --regenerate-src-uuid

    path: path to the VM's .pvm file.
    After re-registering and rebooting the VM, the product_uuid is changed (see the verification sketch after this list).

    Commands:
    prlctl register /Volumes/MY_DATA/Parallels/k8s-cluster-master1.pvm --regenerate-src-uuid
    prlctl register /Volumes/MY_DATA/Parallels/k8s-cluster-master2.pvm --regenerate-src-uuid
    prlctl register /Volumes/MY_DATA/Parallels/k8s-cluster-master3.pvm --regenerate-src-uuid
    prlctl register /Volumes/MY_DATA/Parallels/k8s-cluster-node1.pvm --regenerate-src-uuid
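
To confirm the clones no longer share a UUID, you can compare the value kubeadm checks during preflight on each VM; a minimal sketch:

# Run inside each Ubuntu VM; every machine should report a different value
cat /sys/class/dmi/id/product_uuid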

Ubuntu 22.04 VM configuration

/etc/hosts configuration on every machine:

10.211.55.203 k8s-cluster-master1
10.211.55.204 k8s-cluster-master2
10.211.55.205 k8s-cluster-master3
10.211.55.206 k8s-cluster-node1
10.211.55.207 k8s-cluster-apiserver-vip
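
A quick way to confirm the entries resolve on every machine; a small sketch:

# Each hostname should resolve and answer from its expected IP
# (the VIP only answers once keepalived is configured later)
for h in k8s-cluster-master1 k8s-cluster-master2 k8s-cluster-master3 k8s-cluster-node1; do
  ping -c 1 "$h"
done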

K8S highly available cluster installation

Official documentation: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/

High-availability topologies

  • Stacked etcd topology (the approach chosen here)
  • External etcd topology

Basic environment configuration

Perform these steps on all machines!

  • Disable swap:

    swapoff -a
    sed -i '/\/swap.img/ s/^\(.*\)$/#\1/g' /etc/fstab
  • Allow iptables to see bridged traffic (a verification sketch follows this list):

    modprobe br_netfilter

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system
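
To confirm both settings took effect on each machine, a quick check (sketch):

# swap should be empty/0B, and both bridge sysctls should report 1
swapon --show
free -h | grep -i swap
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables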

Installing the container runtime (docker)

Perform these steps on all machines!

  • Install docker:

    apt update
    apt install docker.io
    docker version
  • Configure docker:

    # Use the Aliyun registry mirror and the systemd cgroup driver
    cat > /etc/docker/daemon.json << EOF
    {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://tk6cqevn.mirror.aliyuncs.com"]
    }
    EOF
  • Start docker (a cgroup driver check follows this list):

    systemctl restart docker
    systemctl enable docker
    docker info
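
kubeadm expects the kubelet and the container runtime to agree on the systemd cgroup driver; a quick way to confirm docker picked up the daemon.json change (sketch):

# Should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"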

Installing kubeadm, kubelet, and kubectl

Perform these steps on all machines!

  • Install the prerequisite tools:

    apt-get install -y apt-transport-https ca-certificates curl
  • Install the K8S tools from the Aliyun mirror (a version-pinning note follows this list):

    # Download the apt signing key
    curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg

    # Add the Aliyun K8S apt source
    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

    # Refresh the package lists
    apt-get update

    # List the installable k8s versions
    apt-cache madison kubeadm

    # Install a specific k8s version
    apt-get install -y kubelet=1.23.8-00 kubeadm=1.23.8-00 kubectl=1.23.8-00

    # Alternatively, install the latest version
    apt-get install -y kubelet kubeadm kubectl

    systemctl enable kubelet
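
To keep routine apt upgrades from moving the cluster to an unplanned version, the kubeadm docs suggest pinning the packages; an optional follow-up:

# Prevent unintended upgrades of the k8s packages
apt-mark hold kubelet kubeadm kubectl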

High-availability configuration

Perform these steps on the 3 Master machines!

The HA design uses haproxy as the load balancer in front of the apiserver and keepalived to provide failover for the virtual IP.

# Install haproxy and keepalived
apt-get update
apt-get install haproxy keepalived

Configure haproxy:

# Edit the haproxy configuration file
vim /etc/haproxy/haproxy.cfg
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon

# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private

# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

# The section below is the k8s-specific configuration; adjust it to your environment
frontend monitor
bind *:33300
mode http
option httplog
monitor-uri /monitor

frontend k8s-apiserver-frontend
bind 0.0.0.0:16443
bind 127.0.0.1:16443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-apiserver-backend

backend k8s-apiserver-backend
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-cluster-master1 10.211.55.203:6443 check
server k8s-cluster-master2 10.211.55.204:6443 check
server k8s-cluster-master3 10.211.55.205:6443 check


# Enable haproxy at boot and restart it
systemctl enable haproxy
systemctl restart haproxy
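
Before relying on it, it is worth validating the configuration file and the monitor endpoint; a small check (sketch):

# Validate the configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg

# The monitor frontend should answer on port 33300 with HTTP 200
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:33300/monitor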

Configure keepalived:

# Copy the sample configuration file into /etc/keepalived
cp /usr/share/doc/keepalived/samples/keepalived.conf.sample /etc/keepalived/keepalived.conf

################################ k8s-cluster-master1 configuration ################################
# Edit the configuration file on the k8s-cluster-master1 machine
# Delete the LVS-related configuration, i.e. the sections starting with virtual_server
# Add the extra options to global_defs
# Add the vrrp_script chk_haproxy section
# Adjust the vrrp_instance section
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
# Run the check script as root and enable script security checks
script_user root
enable_script_security
}

# Health-check script definition
vrrp_script chk_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 5
weight -5
fall 2
rise 1
}

vrrp_instance VI_1 {
# Role of this node
state MASTER
interface enp0s5
virtual_router_id 50
# nopreempt
priority 100
advert_int 1
virtual_ipaddress {
# 192.168.200.11
# 192.168.200.12
# 192.168.200.13
10.211.55.207/24
}
# VRRP authentication
authentication {
auth_type PASS
auth_pass 123456
}
# Attach the check script
track_script {
chk_haproxy
}
}


################################ k8s-cluster-master2 configuration ################################
# k8s-cluster-master2 configuration
! Configuration File for keepalived

global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
# Run the check script as root and enable script security checks
script_user root
enable_script_security
}

# Health-check script definition
vrrp_script chk_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 5
weight -5
fall 2
rise 1
}

vrrp_instance VI_1 {
# Role of this node
state BACKUP
interface enp0s5
virtual_router_id 50
# nopreempt
priority 99
advert_int 1
virtual_ipaddress {
# 192.168.200.11
# 192.168.200.12
# 192.168.200.13
10.211.55.207/24
}
# VRRP authentication
authentication {
auth_type PASS
auth_pass 123456
}
# Attach the check script
track_script {
chk_haproxy
}
}


################################ k8s-cluster-master3 configuration ################################
# k8s-cluster-master3 configuration
! Configuration File for keepalived

global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
# Run the check script as root and enable script security checks
script_user root
enable_script_security
}

# Health-check script definition
vrrp_script chk_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 5
weight -5
fall 2
rise 1
}

vrrp_instance VI_1 {
# Role of this node
state BACKUP
interface enp0s5
virtual_router_id 50
# nopreempt
priority 98
advert_int 1
virtual_ipaddress {
# 192.168.200.11
# 192.168.200.12
# 192.168.200.13
10.211.55.207/24
}
# VRRP authentication
authentication {
auth_type PASS
auth_pass 123456
}
# Attach the check script
track_script {
chk_haproxy
}
}

Create the check script:

vim /etc/keepalived/check_haproxy.sh
#!/bin/bash
# Probe haproxy's monitor port (33300); if it is unreachable, stop keepalived
# so the VIP fails over to another master. Requires nc (netcat).

nc -z localhost 33300
err=$?

if [ $err -ne 0 ]; then
    echo "haproxy check failed, stopping keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Start keepalived:

# Make the check script executable
chmod +x /etc/keepalived/check_haproxy.sh

# Enable and start keepalived
systemctl enable keepalived
systemctl start keepalived

# Check that the VIP is bound on master1
ip addr

# Capture VRRP advertisements on the interface
tcpdump -i enp0s5 vrrp -n

# Ping the VIP to verify it is reachable
ping 10.211.55.207
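
It is also worth exercising a failover once before building the cluster on top; a minimal sketch, assuming the VIP currently sits on k8s-cluster-master1:

# On k8s-cluster-master1: stop haproxy; the check script should then stop keepalived
systemctl stop haproxy

# On k8s-cluster-master2: the VIP should appear here within a few seconds
ip addr show enp0s5 | grep 10.211.55.207

# Restore k8s-cluster-master1 afterwards
systemctl start haproxy
systemctl start keepalived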

Creating a highly available cluster with kubeadm

Downloading and preparing the K8S component images

  • Because image downloads are slow, first pull the K8S component images with kubeadm config (a distribution loop sketch follows this list):

    # Show the k8s image versions that will be used
    kubeadm config images list

    # Pull the images locally from the Aliyun registry
    kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version 1.23.8

    # Once the images are downloaded on one host, export all K8S-related images with docker
    docker save -o k8s_1.23.8_all_images.tar `docker images | grep 'registry.cn-hangzhou.aliyuncs.com/google_containers' | awk '{print $1 ":" $2}' | xargs`

    # Copy the image archive to all other machines (including the worker node) with scp, then import it
    scp k8s_1.23.8_all_images.tar root@10.211.55.204:~
    scp k8s_1.23.8_all_images.tar root@10.211.55.205:~
    scp k8s_1.23.8_all_images.tar root@10.211.55.206:~

    docker load -i k8s_1.23.8_all_images.tar
    docker images
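
The scp/load steps can also be done in one pass; a small sketch, assuming passwordless SSH as root to the other nodes:

# Hypothetical helper loop: copy the archive and load it on every other node
for ip in 10.211.55.204 10.211.55.205 10.211.55.206; do
  scp k8s_1.23.8_all_images.tar root@${ip}:~ && \
  ssh root@${ip} "docker load -i ~/k8s_1.23.8_all_images.tar"
done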

Initializing the first Master node

Perform the following on the first Master node (k8s-cluster-master1)!

Command-line-only approach

  • Initialize the Master node directly from the command line:

    kubeadm init \
    --control-plane-endpoint "10.211.55.207:16443" \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.23.8 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs

    # Flag explanations
    --image-repository # registry for the K8S component images; defaults to k8s.gcr.io, which is unreachable from mainland China, so it is switched to the Aliyun mirror
    --kubernetes-version # k8s version
    --service-cidr # network address range for services
    --pod-network-cidr # network address range for pods
    --control-plane-endpoint "10.211.55.207:16443" # a stable IP address or DNS name for the control plane
    --upload-certs # upload the control-plane certificates to the kubeadm-certs Secret

    # Output on successful initialization
    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

    You can now join any number of the control-plane node running the following command on each as root:

    kubeadm join 10.211.55.207:16443 --token fxl2st.3um4yxdvvtc1ztvr \
    --discovery-token-ca-cert-hash sha256:fbb9114fa97d3fe7abb995c416972280f2c8fea539fb81ece5ff7b48b45ed073 \
    --control-plane --certificate-key 44ddf6e7db4abaf116e9189ad6bf86008d5c8fda8ffc48f61b7b30997586a867

    Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
    As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
    "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 10.211.55.207:16443 --token fxl2st.3um4yxdvvtc1ztvr \
    --discovery-token-ca-cert-hash sha256:fbb9114fa97d3fe7abb995c416972280f2c8fea539fb81ece5ff7b48b45ed073


    # Configure kubeconfig
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Save the control-plane join command from the output; it is needed when the remaining master nodes are initialized
    kubeadm join 10.211.55.207:16443 --token fxl2st.3um4yxdvvtc1ztvr \
    --discovery-token-ca-cert-hash sha256:fbb9114fa97d3fe7abb995c416972280f2c8fea539fb81ece5ff7b48b45ed073 \
    --control-plane --certificate-key 44ddf6e7db4abaf116e9189ad6bf86008d5c8fda8ffc48f61b7b30997586a867

    # Save the worker node join command from the output; it is needed when the worker node is initialized
    kubeadm join 10.211.55.207:16443 --token fxl2st.3um4yxdvvtc1ztvr \
    --discovery-token-ca-cert-hash sha256:fbb9114fa97d3fe7abb995c416972280f2c8fea539fb81ece5ff7b48b45ed073
  • Handling token expiry (a VIP connectivity check follows this list):

    # If the control-plane join information has expired, re-upload the certificates and generate a new decryption key
    kubeadm init phase upload-certs --upload-certs

    # To supply your own --certificate-key to the command above, generate one with:
    kubeadm certs certificate-key

    # Generate a control-plane join command
    kubeadm token create --certificate-key d7d498f0e61fbde32300ebe6c2c784b4b143e3f71d6d8594b490a0b3bcf3d88d --print-join-command

    # Output
    kubeadm join 10.211.55.207:16443 --token j4r0wl.dzokez9hc3aovtkq --discovery-token-ca-cert-hash sha256:fbb9114fa97d3fe7abb995c416972280f2c8fea539fb81ece5ff7b48b45ed073 --control-plane --certificate-key d7d498f0e61fbde32300ebe6c2c784b4b143e3f71d6d8594b490a0b3bcf3d88d
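
Once the first master is up, it is worth confirming the apiserver is reachable through the haproxy/keepalived VIP; a quick sketch:

# Even a 401/403 response here proves the VIP:16443 -> apiserver path works
curl -k https://10.211.55.207:16443/healthz

# kubectl should answer via the kubeconfig configured after init
kubectl get nodes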

Config-file approach

  • Initialize the Master node using a configuration file:

    # Generate a default init configuration file
    kubeadm config print init-defaults > kubeadm.yaml

    # Edit the configuration file
    # kubeadm-config field reference: https://kubernetes.io/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/
    vim kubeadm.yaml
    # Change the api-server advertise address to the first Master node's IP
    localAPIEndpoint.advertiseAddress: 10.211.55.203
    # Change the Master node name
    nodeRegistration.name: k8s-cluster-master1
    # Add extra Subject Alternative Names for the API server's TLS certificate
    apiServer:
    certSANs:
    - 10.211.55.207

    # Add the control-plane endpoint (IP address or DNS name)
    controlPlaneEndpoint: 10.211.55.207:16443

    # Change the K8S component image registry
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    # Change the K8S version
    kubernetesVersion: 1.23.8
    # Add the pod network range
    podSubnet: 10.244.0.0/16

    # Add any other settings you need by consulting the official reference.


    # Sample kubeadm.yaml file:
    apiVersion: kubeadm.k8s.io/v1beta3
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 10.211.55.203
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      imagePullPolicy: IfNotPresent
      name: k8s-cluster-master1
      taints: null
    ---
    apiServer:
      certSANs:
      - 10.211.55.207
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: 10.211.55.207:16443
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: 1.23.8
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      podSubnet: 10.244.0.0/16
    scheduler: {}

    If you want kube-proxy to run in ipvs mode, add the following block to kubeadm.yaml:

    # kube-proxy field reference: https://kubernetes.io/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1/
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs

    Initialize the master:

    # List the images the initialization will use
    kubeadm config images list --config=kubeadm.yaml

    # Initialize
    kubeadm init --config kubeadm.yaml --upload-certs

    # Output
    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

    You can now join any number of the control-plane node running the following command on each as root:

    kubeadm join 10.211.55.207:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:74618ebaa025a063d9d87b8e6d94d5934cc2034d6d494d6d90deee56c7e8483b \
    --control-plane --certificate-key 5365e6db43a52b3a1608af749637410e5b37ce3fd2557d38b5ca3ebc558c40a1

    Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
    As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
    "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 10.211.55.207:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:74618ebaa025a063d9d87b8e6d94d5934cc2034d6d494d6d90deee56c7e8483b


    # Configure kubeconfig
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Save the control-plane join command from the output; it is needed when the remaining master nodes are initialized
    kubeadm join 10.211.55.207:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:74618ebaa025a063d9d87b8e6d94d5934cc2034d6d494d6d90deee56c7e8483b \
    --control-plane --certificate-key 5365e6db43a52b3a1608af749637410e5b37ce3fd2557d38b5ca3ebc558c40a1

    # Save the worker node join command from the output; it is needed when the worker node is initialized
    kubeadm join 10.211.55.207:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:74618ebaa025a063d9d87b8e6d94d5934cc2034d6d494d6d90deee56c7e8483b

    If the installation fails, you can completely remove K8S and retry (a post-init verification sketch follows this list):

    # Tear down the k8s master
    kubeadm reset -f
    rm -rf /etc/cni/net.d
    rm -rf $HOME/.kube/
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    ipvsadm -C
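
After a successful init (by either approach), a quick verification sketch before joining the other nodes:

# All control-plane pods should be Running (coredns stays Pending until the CNI plugin is installed)
kubectl get pods -n kube-system -o wide

# If ipvs mode was enabled, kube-proxy's logs should mention ipvs and ipvsadm should list virtual services
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
ipvsadm -Ln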

Initializing the remaining Master nodes

# Run on the remaining master nodes
kubeadm join 10.211.55.207:16443 --token ubj3gz.ynktijqifo6ywocs --discovery-token-ca-cert-hash sha256:fbb9114fa97d3fe7abb995c4169 --control-plane --certificate-key 1eacd88da3c4e943c0031417c5e4faf2ff1e826332054586517326050a44159f
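
The control-plane join output also suggests setting up kubectl access on each newly joined master; a short follow-up, mirroring the first master:

# Run on each additional master after the join completes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config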

Initializing the Worker node

# Verify that the K8S component images distributed during the master steps were imported successfully
docker images

# Initialize the Node with the worker kubeadm join command printed during master initialization
kubeadm join 10.211.55.207:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:74618ebaa025a063d9d87b8e6d94d5934cc2034d6d494d6d90deee56c7e8483b

# If you have lost the join token, regenerate the join command with:
kubeadm token create --print-join-command --ttl="24h0m0s"

Deploying the CNI network plugin

flannel is chosen as the CNI plugin here.

The flannel container images download slowly from mainland China; consider using a proxy or a domestically hosted mirror registry to fetch them (a deployment check follows the block below).

# flannel deployment manifest: https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Edit the manifest
vim kube-flannel.yml
# In the ConfigMap named kube-flannel-cfg, set Network inside net-conf.json to match pod-network-cidr
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

# The DaemonSet named kube-flannel-ds references the images below; ideally download and import them on every node beforehand (run on all nodes)
# Image names
# rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
# rancher/mirrored-flannelcni-flannel:v0.18.1
scp flannel-v0.18.1.tar root@10.211.55.204:~
scp flannel-v0.18.1.tar root@10.211.55.205:~
scp flannel-v0.18.1.tar root@10.211.55.206:~

docker load -i flannel-v0.18.1.tar

# Deploy the plugin (run on a Master node)
kubectl apply -f kube-flannel.yml
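
Once applied, a short check that flannel rolled out and the nodes became Ready; a minimal sketch:

# One flannel pod per node should reach Running
kubectl get pods -A -o wide | grep flannel

# Nodes should move from NotReady to Ready once the CNI is up
kubectl get nodes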

Checking cluster status

kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-cluster-master1 Ready control-plane,master 13m v1.23.8
k8s-cluster-master2 Ready control-plane,master 9m32s v1.23.8
k8s-cluster-master3 Ready control-plane,master 8m32s v1.23.8
k8s-cluster-node1 Ready <none> 6m30s v1.23.8
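
As a final sanity check, it can help to confirm that every system pod is healthy and that the control plane answers through the VIP; a brief sketch:

# All kube-system pods should be Running
kubectl get pods -n kube-system -o wide

# The endpoints printed here should point at the VIP, 10.211.55.207:16443
kubectl cluster-info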