Building a Three-Node K8s Cluster


Introduction

For work reasons I need to do secondary development on k8s, which calls for a consistent development environment. Minikube is not well suited to deep development work, so the environment is built as a minimal cluster instead. The host machine uses vagrant + virtualbox with a Vagrantfile to quickly produce a half-finished cluster of virtual machines, which a small amount of further configuration turns into a three-node k8s development environment.

Notes:

  • Steps marked [*] (for example, environment initialization [*]) must be executed on every host, while steps marked [KNode1] are executed only on KNode1.
  • The vagrant directory referenced throughout is the Vagrant shared directory; if it is not available, an NFS server can be used instead (see the sketch below).
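
A minimal NFS-based sketch for that case, assuming KNode1 exports the directory; the exported path and the network range are assumptions for this lab setup:

# On the NFS server (KNode1, for example)
yum install nfs-utils -y
mkdir -p /vagrant
echo "/vagrant 10.199.88.0/24(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable --now nfs-server
# On the other two nodes
yum install nfs-utils -y
mkdir -p /vagrant
mount -t nfs 10.199.88.201:/vagrant /vagrant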

Environment

Virtual machine configuration:

IP            HostName  CPU  Memory  NodeType
10.199.88.201 KNode1    4    8G      master
10.199.88.202 KNode2    4    8G      node
10.199.88.203 KNode3    4    8G      node

Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "KNode#{i}" do |node|
      # If vagrant-proxyconf is installed, optionally configure a proxy.
      # Note: 10.0.2.3 is the default gateway address inside the VM.
      if Vagrant.has_plugin?("vagrant-proxyconf")
        # node.proxy.http     = "http://10.0.2.3:7890/"
        # node.proxy.https    = "http://10.0.2.3:7890/"
        # node.proxy.no_proxy = "localhost,127.0.0.1,.example.com"
      end
      # Base box: Rocky Linux 9
      node.vm.box = "rockylinux/9"
      # Pin the box version
      node.vm.box_version = "1.0.0"
      # Hostname of the VM
      node.vm.hostname = "KNode#{i}"
      # Private-network IP of the VM
      node.vm.network "private_network", ip: "10.199.88.20#{i}"
      # VirtualBox provider settings
      node.vm.provider "virtualbox" do |v|
        # VM name
        v.name = "KNode#{i}"
        # Memory size (MB)
        v.memory = 8192
        # CPU count
        v.cpus = 4
      end
      node.vm.provision "shell", inline: <<-SHELL

      SHELL
    end
  end
end

Procedure

1. Environment initialization [*]

a). System update and tool installation

Update the operating system to the latest packages and install commonly used tools.

# Switch the package mirrors to Aliyun
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' -i.bak /etc/yum.repos.d/rocky*.repo
dnf makecache
# Update all packages except the kernel
yum --exclude=kernel* update -y
# Install commonly used tools
yum install bash-com* net-tools sysstat vim wget telnet strace psmisc yum-utils traceroute tar -y
cat >> /etc/hosts << EOF
10.199.88.201 knode1 KNode1 KNode1.example.com knode1.example.com 
10.199.88.202 knode2 KNode2 KNode2.example.com knode2.example.com 
10.199.88.203 knode3 KNode3 KNode3.example.com knode3.example.com 
EOF

b). Configure time synchronization

Time synchronization is essential for any distributed system. chrony is used here; ntpdate would also work.

yum install chrony -y
# Configure chrony: comment out the default pool servers and point each node at its own address,
# letting it serve time locally as a stratum-10 source on this isolated lab network
IP=$(ip addr | grep 'state UP' -A3 | grep "inet 10.199" | awk '{print $2}' | tr -d "addr:" | head -n 1 | cut -d / -f1)
sed -i '3,6s/^/#/g' /etc/chrony.conf
sed -i "7s|^|server $IP iburst|g" /etc/chrony.conf
echo "allow all" >> /etc/chrony.conf
echo "local stratum 10" >> /etc/chrony.conf
systemctl restart chronyd && systemctl enable chronyd && timedatectl set-ntp true && sleep 5 && systemctl restart chronyd && chronyc sources

c). Kubernetes system tuning

System-level adjustments required by Kubernetes.

# Disable swap, SELinux and the firewall
swapoff -a && sysctl -w vm.swappiness=0 && setenforce 0 && sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/selinux/config && sed -i '/swap/s/^/#/' /etc/fstab  && systemctl stop firewalld.service && systemctl disable firewalld.service
# Add the br_netfilter kernel module: it makes bridged traffic visible to iptables by forwarding it through the iptables chains (loaded modules can be listed with lsmod)
# Load the module at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# Load the module immediately
modprobe br_netfilter
# Set the required sysctl parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the sysctl parameters immediately
sysctl --system
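
To confirm the module and sysctl settings took effect, the values can be checked directly (optional verification, not part of the original procedure):

# br_netfilter should be listed
lsmod | grep br_netfilter
# all three values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward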

2. Create certificates [KNode1]

a). Install the cfssl tools

Install the cfssl tools and set up the working directory.

VERSION="1.6.4"
wget https://github.com/cloudflare/cfssl/releases/download/v${VERSION}/cfssl_${VERSION}_linux_amd64 -O /usr/bin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v${VERSION}/cfssl-certinfo_${VERSION}_linux_amd64 -O /usr/bin/cfssl-certinfo
wget https://github.com/cloudflare/cfssl/releases/download/v${VERSION}/cfssljson_${VERSION}_linux_amd64 -O /usr/bin/cfssljson
chmod +x /usr/bin/cfssl /usr/bin/cfssljson /usr/bin/cfssl-certinfo && mkdir -p /vagrant/ssl && cd /vagrant/ssl

b). Create the CA certificate

Three signing profiles are defined here: server, client, and peer; the peer profile covers both server and client authentication.

cat <<EOF > ca-config.json
{
    "signing": {
        "default": {
            "expiry": "43800h"
        },
        "profiles": {
            "server": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
cat <<EOF > ca-csr.json
{
    "CN": "Self Signed CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "O": "example Personal",
            "ST": "SH",            
            "OU": "admin",
            "CN": "admin Personal Tester CA",
            "emailAddress": "admin@example.com"
        }    ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

c). Create the etcd cluster certificate

For convenience, the peer profile is used here.

cat <<EOF > etcd-csr.json
{
    "CN": "etcd",
    "hosts":[
        "127.0.0.1","10.199.88.201","10.199.88.202","10.199.88.203","*.example.com"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "O": "example Personal",
            "ST": "SH",            
            "OU": "admin",
            "CN": "admin Personal Tester CA",
            "emailAddress": "admin@example.com"
        }
    ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-csr.json | cfssljson -bare etcd

d). Create the etcd client certificate

The client profile is used.

cat <<EOF > client-csr.json
{
    "CN": "client",
        "hosts":[
        "127.0.0.1","10.199.88.201","10.199.88.202","10.199.88.203","*.example.com"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "O": "example Personal",
            "ST": "SH",            
            "OU": "admin",
            "CN": "admin Personal Tester CA",
            "emailAddress": "admin@example.com"
        }
    ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssljson -bare client
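
The generated certificates can be sanity-checked in /vagrant/ssl before distributing them; an optional verification sketch using the tools installed above:

# Inspect a certificate's subject, SANs and expiry
cfssl-certinfo -cert etcd.pem
# Verify that the leaf certificates chain back to the CA
openssl verify -CAfile ca.pem etcd.pem client.pem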

3. etcd cluster configuration

a). Preparation [KNode1]

Download the etcd release tarball; version 3.5.7 is used here.

VERSION="v3.5.7"
wget  https://github.com/etcd-io/etcd/releases/download/${VERSION}/etcd-${VERSION}-linux-amd64.tar.gz -O /vagrant/etcd.tar.gz

b). Configure etcd on KNode1 [KNode1]

mkdir /etc/etcd/ssl/ -p && cd /etc/etcd/ssl/ && cp -rf /vagrant/ssl/{etcd.pem,etcd-key.pem,ca.pem} .
mkdir /usr/bin/etcd && tar xf /vagrant/etcd.tar.gz -C /usr/bin/etcd --strip-components 1 && chown -R root. /usr/bin/etcd
cat <<EOF >/etc/etcd/etcd.conf
# [member]
# etcd member name
ETCD_NAME=KNode1
# etcd data directory
ETCD_DATA_DIR=/opt/etcd
# peer (cluster) listen address
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
# client listen address
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
# disable proxy mode
ETCD_PROXY=off
# [cluster]
ETCD_ADVERTISE_CLIENT_URLS=https://10.199.88.201:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://10.199.88.201:2380
ETCD_INITIAL_CLUSTER="KNode1=https://10.199.88.201:2380,KNode2=https://10.199.88.202:2380,KNode3=https://10.199.88.203:2380"
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8s-cluster
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF
cat <<EOF > /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
After=network.target
[Service]
Environment=ETCD_DATA_DIR=/var/lib/etcd/default
EnvironmentFile=-/etc/etcd/etcd.conf
Type=notify
User=etcd
PermissionsStartOnly=true
ExecStart=/usr/bin/etcd/etcd
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
groupadd etcd && useradd -c "Etcd user" -g etcd -s /sbin/nologin -r etcd && mkdir -p /opt/etcd && chown etcd:etcd -R /opt/etcd /etc/etcd

c). Configure etcd on KNode2 [KNode2]

mkdir /etc/etcd/ssl/ -p && cd /etc/etcd/ssl/ && cp -rf /vagrant/ssl/{etcd.pem,etcd-key.pem,ca.pem} .
mkdir /usr/bin/etcd && tar xf /vagrant/etcd.tar.gz -C /usr/bin/etcd --strip-components 1 && chown -R root. /usr/bin/etcd
cat <<EOF >/etc/etcd/etcd.conf
# [member]
# etcd member name
ETCD_NAME=KNode2
# etcd data directory
ETCD_DATA_DIR=/opt/etcd
# peer (cluster) listen address
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
# client listen address
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
# disable proxy mode
ETCD_PROXY=off
# [cluster]
ETCD_ADVERTISE_CLIENT_URLS=https://10.199.88.202:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://10.199.88.202:2380
ETCD_INITIAL_CLUSTER="KNode1=https://10.199.88.201:2380,KNode2=https://10.199.88.202:2380,KNode3=https://10.199.88.203:2380"
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8s-cluster
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF
cat <<EOF > /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
After=network.target
[Service]
Environment=ETCD_DATA_DIR=/var/lib/etcd/default
EnvironmentFile=-/etc/etcd/etcd.conf
Type=notify
User=etcd
PermissionsStartOnly=true
ExecStart=/usr/bin/etcd/etcd
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
groupadd etcd && useradd -c "Etcd user" -g etcd -s /sbin/nologin -r etcd && mkdir -p /opt/etcd && chown etcd:etcd -R /opt/etcd /etc/etcd

d). Configure etcd on KNode3 [KNode3]

mkdir /etc/etcd/ssl/ -p && cd /etc/etcd/ssl/ && cp -rf /vagrant/ssl/{etcd.pem,etcd-key.pem,ca.pem} .
mkdir /usr/bin/etcd && tar xf /vagrant/etcd.tar.gz -C /usr/bin/etcd --strip-components 1 && chown -R root. /usr/bin/etcd
cat <<EOF >/etc/etcd/etcd.conf
# [member]
# etcd member name
ETCD_NAME=KNode3
# etcd data directory
ETCD_DATA_DIR=/opt/etcd
# peer (cluster) listen address
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
# client listen address
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
# disable proxy mode
ETCD_PROXY=off
# [cluster]
ETCD_ADVERTISE_CLIENT_URLS=https://10.199.88.203:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://10.199.88.203:2380
ETCD_INITIAL_CLUSTER="KNode1=https://10.199.88.201:2380,KNode2=https://10.199.88.202:2380,KNode3=https://10.199.88.203:2380"
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8s-cluster
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF
cat <<EOF > /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
After=network.target
[Service]
Environment=ETCD_DATA_DIR=/var/lib/etcd/default
EnvironmentFile=-/etc/etcd/etcd.conf
Type=notify
User=etcd
PermissionsStartOnly=true
ExecStart=/usr/bin/etcd/etcd
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
groupadd etcd && useradd -c "Etcd user" -g etcd -s /sbin/nologin -r etcd && mkdir -p /opt/etcd && chown etcd:etcd -R /opt/etcd /etc/etcd

e). Start the cluster [*]

systemctl enable etcd.service && systemctl start etcd.service
/usr/bin/etcd/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.199.88.201:2379,https://10.199.88.202:2379,https://10.199.88.203:2379 member list
# Expected output
6528649c209dba, started, KNode1, https://10.199.88.201:2380, https://10.199.88.201:2379, false
cc6d41f5efc11a77, started, KNode2, https://10.199.88.202:2380, https://10.199.88.202:2379, false
d0b257bed2b3d54c, started, KNode3, https://10.199.88.203:2380, https://10.199.88.203:2379, false
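
Optionally, per-endpoint health and status can also be checked with the same TLS flags:

/usr/bin/etcd/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.199.88.201:2379,https://10.199.88.202:2379,https://10.199.88.203:2379 endpoint health
/usr/bin/etcd/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.199.88.201:2379,https://10.199.88.202:2379,https://10.199.88.203:2379 endpoint status --write-out=table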

4. containerd [*]

a). Install containerd

Add the package repository. Docker does not publish a Rocky Linux repository, so the CentOS one is used.

yum-config-manager --add-repo  https://download.docker.com/linux/centos/docker-ce.repo
yum install containerd.io -y

b). Configure and start containerd

containerd config default | tee /etc/containerd/config.toml
# Make the container runtime manage cgroups through systemd
sed -i 's/            SystemdCgroup = false/            SystemdCgroup = true/' /etc/containerd/config.toml
# Switch sandbox_image to the Aliyun mirror
sed -i 's/sandbox_image\ =.*/sandbox_image\ =\ "registry.aliyuncs.com\/google_containers\/pause:3.8"/g' /etc/containerd/config.toml && grep sandbox_image /etc/containerd/config.toml
# Configure registry mirrors
# Reference: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
sed -i 's/      config_path = ""/      config_path = "\/etc\/containerd\/registry"/g' /etc/containerd/config.toml
mkdir -p /etc/containerd/registry/docker.io/
cat > /etc/containerd/registry/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://registry-1.docker.io"]
  capabilities = ["pull", "resolve"]
EOF
systemctl daemon-reload && systemctl start containerd.service && systemctl enable containerd.service

PS:

On Linux, control groups (cgroups) are used to constrain the resources allocated to processes. Both the kubelet and the underlying container runtime need to interface with cgroups to manage resources for Pods and containers and to enforce CPU and memory requests and limits. To do so, the kubelet and the container runtime each use a cgroup driver, and the key point is that they must use the same cgroup driver with the same configuration.
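
A quick way to confirm the containerd side (the kubelet side is set to cgroupDriver: systemd in the kubeadm configuration later in this document):

# Should print "SystemdCgroup = true"
grep SystemdCgroup /etc/containerd/config.toml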

c). Install crictl

crictl is a command-line interface for CRI-compatible container runtimes. It can be used to inspect and debug the container runtime and applications on a Kubernetes node, and here it takes the place of the corresponding docker commands.

# crictl version
VERSION="v1.26.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz  -O /vagrant/crictl.tar.gz
tar xf  /vagrant/crictl.tar.gz -C /usr/local/bin
cat >>  /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
EOF
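
With containerd running, crictl can be used much like the docker CLI; a few basic commands for illustration:

crictl info            # runtime status and configuration
crictl images          # list images
crictl ps -a           # list containers (none yet on a fresh node)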

5. Initialize the cluster

a). Prepare the environment [*]

Add the Kubernetes package repository, install kubeadm, kubectl and kubelet, then enable and start kubelet.

# Add the package repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.22.16 kubectl-1.22.16 kubeadm-1.22.16
systemctl enable kubelet && systemctl start kubelet

b). Install ipvsadm [*]

# Install ipset and ipvsadm
yum install ipset ipvsadm -y
# IPVS is part of the mainline kernel; to let kube-proxy use IPVS mode the following kernel modules must be loaded
# (modules-load.d files contain one module name per line and must end in .conf)
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
# Load the modules immediately
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
# Check that the modules loaded successfully
lsmod | grep -e ip_vs
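
Once kube-proxy is running later in this guide, the IPVS virtual servers it programs can be inspected (optional check; the list is empty until the cluster is initialized):

# List IPVS virtual servers and their backends
ipvsadm -Ln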

c). Configure the master node [KNode1]

# Copy the etcd client certificates; Kubernetes and Calico use them to access etcd
cp /vagrant/ssl/client*.pem /etc/etcd/ssl/
# Generate a kubeadm.yaml with the upstream defaults (for reference)
kubeadm config print init-defaults > /vagrant/kubeadm.yaml
# Generate a bootstrap token
kubeadm token generate
# Example output
    abcdef.0123456789abcdef
cat <<EOF > /vagrant/kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    # token generated by kubeadm token generate above
    token: abcdef.0123456789abcdef
    # token expiry
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
kind: InitConfiguration
localAPIEndpoint:
  # API server advertise address; on an HA cluster this would be the VIP
  advertiseAddress: 10.199.88.201
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: KNode1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
# External etcd connection; the client certificate is used here (a peer certificate would also work, but is less secure)
etcd:
  external:
    endpoints:
      - https://10.199.88.201:2379
      - https://10.199.88.202:2379
      - https://10.199.88.203:2379
    caFile: /etc/etcd/ssl/ca.pem
    certFile: /etc/etcd/ssl/client.pem
    keyFile: /etc/etcd/ssl/client-key.pem
# Use the Aliyun image registry mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Cluster version
kubernetesVersion: 1.22.0
networking:
  # cluster DNS domain
  dnsDomain: example.local
  # Service CIDR
  serviceSubnet: 10.96.0.0/16
  # Pod CIDR
  podSubnet: '10.244.0.0/16'
scheduler: {}
---
# kube-proxy configuration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# For Kubernetes v1.10 "SupportIPVSProxyMode" defaults to "true"; from v1.11 onward the feature gate has been removed entirely. Before v1.10 it had to be enabled explicitly with --feature-gates=SupportIPVSProxyMode=true.
# If the cluster is already initialized, edit the kube-proxy ConfigMap instead and remove the fields below
#featureGates:
#  SupportIPVSProxyMode: true
mode: ipvs
ipvs:
  minSyncPeriod: 5s
  syncPeriod: 5s
  # IPVS scheduling algorithm
  scheduler: 'wrr'
---
# kubelet configuration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
serializeImagePulls: false
evictionHard:
  memory.available: '1000Mi'
EOF
cd  /vagrant/ && kubeadm init --config=kubeadm.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

PS:

  • The kubeadm init output is shown below; record the kubeadm join command at the end, as it is needed when the worker nodes join. If it was not saved, nodes can also join with kubeadm join --discovery-file <kubeconfig file>.
[init] Using Kubernetes version: v1.22.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [knode1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 10.199.88.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.017765 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node knode1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node knode1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.199.88.201:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:b96420a18f8277e2d481481bcbd0db781da59cc2d3d236b32fa2a552a4a4c37c 

d). Configure the worker nodes [KNode2 KNode3]

kubeadm join 10.199.88.201:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:aade6417da5a7857db47b63d9ccb064846054d1660a6c8186485e6e24f63f566
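
If the original join command was not recorded, a new one can be printed on the master at any time (standard kubeadm command, run on KNode1):

kubeadm token create --print-join-command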

6. Install Calico [KNode1]

Calico is used as the network plugin.

curl  https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico-etcd.yaml -o /vagrant/calico.yaml
cd /vagrant/
# Point Calico at the external etcd cluster endpoints
sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://10.199.88.201:2379,https://10.199.88.202:2379,https://10.199.88.203:2379\"@gi' calico.yaml
export ETCD_CERT=`cat /etc/etcd/ssl/client.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/client-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/ca.pem | base64 | tr -d '\n'`
sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml
sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml
# CALICO_IPV4POOL_CIDR should match the pod-network-cidr; since podSubnet was set in the kubeadm config, the default can be left as-is
# sed -i 's@192.168.0.0/16@10.224.0.0/16@gi' calico.yaml
# Calico has two modes, IPIP and BGP, with BGP being the most widely used; changing CALICO_IPV4POOL_IPIP from Always to Never switches Calico to BGP mode
sed -i '/- name: CALICO_IPV4POOL_IPIP/{n;s/Always/Never/;}' calico.yaml
kubectl create -f calico.yaml
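
Rollout can then be followed until the Calico pods are Running and the nodes report Ready (simple checks, not part of the manifest):

kubectl get pods -n kube-system -o wide | grep calico
kubectl get nodes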

PS

  • If the calico-node container fails with restarting failed container=mount-bpffs, remove the "-best-effort" argument from calico.yaml; it is an artifact of how the upstream manifest is generated, see issue 6255 for details.

7. Verification [KNode1]

kubectl get node

kubectl get pod --all-namespaces
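
As an optional smoke test (assuming the nodes can pull public images), a small deployment exercises scheduling and the pod network:

kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl get pods -o wide
kubectl delete deployment nginx-test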

Conclusion

This completes the installation of the three-node Kubernetes cluster.