Host resources

  • 192.168.234.130 (master-01/etcd-01)
  • 192.168.234.131 (master-02/etcd-02)
  • 192.168.234.132 (master-03/etcd-03)
  • 192.168.234.133 (node-01)

Software resources

1. System initialization

Add a unified management SSH key

# Generate the key pair on 192.168.234.130
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:3t1y7eWd7ualxcDT1k00+hODz30OLv6wJpe3UIOcEbs root@master
The key's randomart image is:
+---[RSA 3072]----+
|            .  ..|
|             oo..|
|            oo o.|
|           . *+o*|
|        S   E B=B|
|       . . . + Xo|
|        . . *.+ B|
|          ..+B.*=|
|           +ooO*+|
+----[SHA256]-----+
# Copy the public key to the managed hosts
$ ssh-copy-id root@192.168.234.130
$ ssh-copy-id root@192.168.234.131
$ ssh-copy-id root@192.168.234.132
$ ssh-copy-id root@192.168.234.133
# Install ansible
$ apt install ansible -y
# Add the managed hosts to the inventory
$ cat > /etc/ansible/hosts << EOF
[masters]
192.168.234.130
192.168.234.131
192.168.234.132

[nodes]
192.168.234.133
EOF

Add hosts entries

$ ansible all -m shell -a "cat >> /etc/hosts << EOF
192.168.234.130 master-01 etcd-01
192.168.234.131 master-02 etcd-02
192.168.234.132 master-03 etcd-03
192.168.234.133 node-01
EOF"
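
As an optional sanity check (not part of the original steps), confirm the entries were appended on every host:

$ ansible all -m shell -a "tail -n 4 /etc/hosts"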

Create directories

$ mkdir -pv /opt/kubernetes/{packages,pki,bin,etc,tmp}
$ tree /opt/kubernetes
/opt/kubernetes
├── bin         # binaries
├── etc         # configuration files
├── packages    # installation packages/tools
├── pki         # ssl/tls material
└── tmp         # temporary files
5 directories, 0 files
$ echo "export PATH=\$PATH:/opt/kubernetes/bin" >> ~/.bashrc
$ source ~/.bashrc
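
The PATH change above only takes effect on the current host. If /opt/kubernetes/bin should also be on the PATH of every managed host, one possible approach (an addition, not in the original steps) is to append the same line with ansible:

$ ansible all -m lineinfile -a "path=/root/.bashrc line='export PATH=\$PATH:/opt/kubernetes/bin'"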

Add users

# Add the k8s user
$ ansible masters -m shell -a "useradd -r k8s"
# Add the etcd user
$ ansible masters -m shell -a "useradd -r etcd"

2. Create a private CA

Install the cfssl toolkit

Release v1.6.1 · cloudflare/cfssl (github.com)

$ cd /opt/kubernetes/packages
$ wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
$ wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
$ wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
$ install cfssljson_1.6.1_linux_amd64 /opt/kubernetes/bin/cfssljson
$ install cfssl-certinfo_1.6.1_linux_amd64 /opt/kubernetes/bin/cfssl-certinfo
$ install cfssl_1.6.1_linux_amd64 /opt/kubernetes/bin/cfssl
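
Optionally verify the tools are installed and on the PATH:

$ cfssl version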

Create the CA root certificate and key

$ cd /opt/kubernetes/pki
# Create the CA certificate signing request (CSR) file
$ cat > ca-csr.json << EOF
{
  "CN": "kubernetes-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
# Issue the CA certificate and key
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ll
total 24
drwxr-xr-x 2 root root 4096 Mar 22 08:08 ./
drwxr-xr-x 5 root root 4096 Mar 22 07:53 ../
-rw-r--r-- 1 root root 1054 Mar 22 08:08 ca.csr
-rw-r--r-- 1 root root  250 Mar 22 08:08 ca-csr.json
-rw------- 1 root root 1679 Mar 22 08:08 ca-key.pem
-rw-r--r-- 1 root root 1322 Mar 22 08:08 ca.pem
  • CN (Common Name): kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name); browsers use this field to verify whether a website is legitimate;
  • O (Organization): kube-apiserver extracts this field from the certificate and uses it as the group (Group) the requesting user belongs to;
  • kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

Note:

  1. The CN, C, ST, L, O, OU combination in each certificate's csr file must be unique; otherwise the error PEER'S CERTIFICATE HAS AN INVALID SIGNATURE may occur;
  2. In the csr files created later, the CN is always different (while C, ST, L, O, OU stay the same) so the certificates can be told apart; the embedded fields can be inspected as shown below.
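
To see which subject fields actually ended up in a certificate, it can be inspected with the cfssl-certinfo tool installed earlier; the output is JSON containing the subject, issuer, SANs and validity period:

$ cfssl-certinfo -cert ca.pem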

Create the CA config

This configuration file is used when issuing certificates; the profiles section defines the settings for different usage scenarios.

$ cd /opt/kubernetes/pki
$ cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
  • signing: indicates the certificate can be used to sign other certificates (the generated ca.pem has CA=TRUE);
  • server auth: indicates a client can use this certificate to verify certificates presented by servers;
  • client auth: indicates a server can use this certificate to verify certificates presented by clients;
  • "expiry": "876000h": sets the certificate validity period to 100 years;

Distribute the certificate files to the other nodes

$ ansible all -m shell -a "[ -d /opt ]||mkdir -pv /opt"
$ ansible all -m copy -a "src=/opt/kubernetes dest=/opt/"

3. Install and configure kubectl

Install kubectl

Releases · kubernetes/kubernetes (github.com)

$ cd /opt/kubernetes/packages
$ wget https://dl.k8s.io/v1.23.5/kubernetes-client-linux-amd64.tar.gz
$ tar zxvf kubernetes-client-linux-amd64.tar.gz
# Copy the executables to the bin directory (on all Master nodes)
$ ansible masters -m copy -a "src=kubernetes/client/bin dest=/opt/kubernetes/ mode=0755"
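
Optionally confirm the binary was distributed and is executable on every master:

$ ansible masters -m shell -a "/opt/kubernetes/bin/kubectl version --client"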

Configure kubectl

Create the admin certificate and key

$ mkdir /opt/kubernetes/pki/admin && cd /opt/kubernetes/pki/admin
$ cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "opsnull"
    }
  ]
}
EOF
  • O: system:masters: when kube-apiserver receives a request from a client using this certificate, it adds the group (Group) identity system:masters to the request;
  • The predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants the highest privileges needed to operate the cluster;
  • This certificate is only used by kubectl as a client certificate, so the hosts field is left empty;
# Generate the certificate and key
$ cfssl gencert -ca=../ca.pem \
-ca-key=../ca-key.pem \
-config=../ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ll
total 24
drwxr-xr-x 2 root root 4096 Mar 22 12:44 ./
drwxr-xr-x 3 root root 4096 Mar 22 12:43 ../
-rw-r--r-- 1 root root 1009 Mar 22 12:44 admin.csr
-rw-r--r-- 1 root root  230 Mar 22 12:41 admin-csr.json
-rw------- 1 root root 1679 Mar 22 12:44 admin-key.pem
-rw-r--r-- 1 root root 1411 Mar 22 12:44 admin.pem
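
Optionally verify the issued certificate; the subject should show O = system:masters and CN = admin, with the long validity period configured in the kubernetes profile:

$ openssl x509 -in admin.pem -noout -subject -dates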

Generate the kubectl configuration file

By default, kubectl looks for its configuration file at ${HOME}/.kube/config.

# Set the cluster entry (added if it does not exist, updated if it does)
$ kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.234.130:6443 \
--kubeconfig=${HOME}/.kube/config
# --certificate-authority    the CA root certificate
# --server                   the API server address
# --kubeconfig               path to the kubeconfig file
# --embed-certs              embed the certificate-authority certificate into the config file

# Add the credentials
$ kubectl config set-credentials admin \
--client-certificate=/opt/kubernetes/pki/admin/admin.pem \
--client-key=/opt/kubernetes/pki/admin/admin-key.pem \
--embed-certs=true \
--kubeconfig=${HOME}/.kube/config

# --client-certificate    the client (user) certificate
# --client-key             the client (user) key

# Set the context
$ kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=${HOME}/.kube/config

# --cluster    the cluster used by this context
# --user       the user used by this context

# Set the default context
$ kubectl config use-context kubernetes --kubeconfig=${HOME}/.kube/config

# Distribute the config file to the other master nodes
$ ansible masters -m copy -a "src=~/.kube dest=~/ mode=0600"

On the other master nodes, the server field in the config file can be changed to that node's own IP address, as shown below.
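
A minimal sketch of what that looks like, for example on master-02 (adjust the IP for each node):

$ kubectl config set-cluster kubernetes \
--server=https://192.168.234.131:6443 \
--kubeconfig=${HOME}/.kube/config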

Configure kubectl command completion

# Install bash-completion
$ apt install bash-completion -y
# Enable completion for the current user
$ cat >> ${HOME}/.bashrc << EOF

# kubectl autocompletion
source <(kubectl completion bash)

EOF

# Enable completion for all users (optional)
$ cat >> /etc/profile << EOF

# kubectl autocompletion
source <(kubectl completion bash)

EOF

4. Deploy the ETCD cluster

Standalone static deployment method

Download the ETCD binaries

Here the ETCD cluster is deployed on the three masters; of course it could also be deployed on three separate servers instead.

$ cd /opt/kubernetes/packages
$ wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
$ tar zxvf etcd-v3.5.2-linux-amd64.tar.gz

# Distribute the binaries to the other nodes
$ ansible masters -m copy -a "src=etcd-v3.5.2-linux-amd64/etcd dest=/opt/kubernetes/bin/ mode=0755"
$ ansible masters -m copy -a "src=etcd-v3.5.2-linux-amd64/etcdctl dest=/opt/kubernetes/bin/ mode=0755"
$ ansible masters -m copy -a "src=etcd-v3.5.2-linux-amd64/etcdutl dest=/opt/kubernetes/bin/ mode=0755"
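
Optionally verify the binaries on every node:

$ ansible masters -m shell -a "/opt/kubernetes/bin/etcd --version"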

Create the ETCD certificate and key

$ mkdir /opt/kubernetes/pki/etcd && cd /opt/kubernetes/pki/etcd
$ cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.234.130",
    "192.168.234.131",
    "192.168.234.132",
    "localhost",
    "etcd-01",
    "etcd-02",
    "etcd-03"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

# Generate the certificate and key
$ cfssl gencert -ca=../ca.pem \
-ca-key=../ca-key.pem \
-config=../ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

$ ll etcd*pem
-rw------- 1 root root 1679 Mar 22 13:36 etcd-key.pem
-rw-r--r-- 1 root root 1497 Mar 22 13:36 etcd.pem
# Distribute the key and certificate files to the other nodes
$ ansible masters -m shell -a "[ -d /opt/kubernetes/pki/etcd ] || mkdir /opt/kubernetes/pki/etcd"
$ ansible masters -m copy -a "src=./etcd.pem dest=/opt/kubernetes/pki/etcd/etcd.pem owner=etcd group=etcd mode=0644"
$ ansible masters -m copy -a "src=./etcd-key.pem dest=/opt/kubernetes/pki/etcd/etcd-key.pem owner=etcd group=etcd mode=0600"

The hosts field specifies the list of etcd node IPs authorized to use this certificate; the IPs of all nodes in the etcd cluster must be listed here.
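
The embedded SAN list can optionally be confirmed with openssl, for example:

$ openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"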

Create and distribute the systemd unit file

$ cd /opt/kubernetes/tmp

$ cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://etcd.io/docs/v3.5

[Service]
Type=notify
User=etcd
Group=etcd
EnvironmentFile=/opt/kubernetes/etc/etcd.conf
ExecStart=/opt/kubernetes/bin/etcd
ExecStartPre=+bash -c "[ -d \${ETCD_DATA_DIR} ] || mkdir -p \${ETCD_DATA_DIR}"
ExecStartPre=+bash -c "chown -R etcd:etcd \${ETCD_DATA_DIR}"
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Distribute the unit file
$ ansible masters -m copy -a "src=./etcd.service dest=/etc/systemd/system/etcd.service"
$ ansible masters -m shell -a "systemctl daemon-reload"

Create the configuration file

# Create the configuration file
$ cd /opt/kubernetes/tmp
$ cat > etcd.conf << EOF
# Member config
ETCD_NAME="#NAME#"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://#IP#:2380"
ETCD_LISTEN_CLIENT_URLS="https://#IP#:2379,http://127.0.0.1:2379"

# Clustering config
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://#IP#:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://#IP#:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.234.130:2380,etcd02=https://192.168.234.131:2380,etcd03=https://192.168.234.132:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Security config
ETCD_CERT_FILE="/opt/kubernetes/pki/etcd/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/pki/etcd/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/kubernetes/pki/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/kubernetes/pki/etcd/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/pki/etcd/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/kubernetes/pki/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF

# Distribute the config file to the etcd cluster nodes
$ ansible masters -m copy -a "src=etcd.conf dest=/opt/kubernetes/etc/etcd.conf"
# Substitute the per-node values in the config file
$ ansible 192.168.234.130 -m shell -a "sed -i -e s/#NAME#/etcd01/g -e s/#IP#/192.168.234.130/g /opt/kubernetes/etc/etcd.conf"
$ ansible 192.168.234.131 -m shell -a "sed -i -e s/#NAME#/etcd02/g -e s/#IP#/192.168.234.131/g /opt/kubernetes/etc/etcd.conf"
$ ansible 192.168.234.132 -m shell -a "sed -i -e s/#NAME#/etcd03/g -e s/#IP#/192.168.234.132/g /opt/kubernetes/etc/etcd.conf"
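
Optionally spot-check that the placeholders were substituted correctly on each node:

$ ansible masters -m shell -a "grep -E 'ETCD_NAME|ETCD_LISTEN_PEER_URLS' /opt/kubernetes/etc/etcd.conf"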

Start the service

$ ansible masters -m service -a "name=etcd state=started enabled=true"
# After the services have started successfully, check the cluster status
$ alias etcdctl='etcdctl --cacert=/opt/kubernetes/pki/ca.pem --cert=/opt/kubernetes/pki/etcd/etcd.pem --key=/opt/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.234.130:2379,https://192.168.234.131:2379,https://192.168.234.132:2379'
$ etcdctl endpoint health -w table
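
If all endpoints report healthy, membership and leader status can also be inspected with the same etcdctl alias defined above:

$ etcdctl member list -w table
$ etcdctl endpoint status -w table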

5. Deploy the Master nodes
