# kubeadm Deployment on CentOS
### 1. Disable the firewall
```sh
systemctl stop firewalld
systemctl disable firewalld
```
### 2. Disable SELinux
```sh
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
```
### 3. Disable swap
```sh
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
```
### 4. Be sure to reboot the VM after disabling swap
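The reboot itself, plus a quick check once the VM is back up that no swap is active (standard commands, shown here as a sketch):
```sh
reboot
# after the reboot, confirm swap is fully off
swapon --show   # should print nothing
free -h         # the Swap line should show 0B
```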
### 5. Set the hostname according to your plan
```sh
hostnamectl set-hostname c-m-01 # use the node's planned name: c-m-01 / c-w-01 / c-w-02
```
### 6. Add hosts entries on the master
```sh
cat >> /etc/hosts << EOF
192.168.0.30 c-m-01
192.168.0.35 c-w-01
192.168.0.36 c-w-02
EOF
```
### 7. Pass bridged IPv4 traffic to the iptables chains
```sh
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply the settings
```
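On a minimal install these bridge sysctls may not exist until the br_netfilter kernel module is loaded; a small sketch, assuming systemd's modules-load.d is available, to load it now and at every boot:
```sh
modprobe br_netfilter
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
sysctl --system
```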
### 8. Time synchronization
- CentOS 7
```sh
yum install ntpdate -y
ntpdate time.windows.com
```
- CentOS 8
```sh
yum install -y chrony
systemctl enable chronyd --now
mv /etc/chrony.conf /etc/chrony.conf.bak
cat >> /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
server cn.ntp.org.cn iburst
EOF
systemctl restart chronyd.service
chronyc sources -v
```
### 9. Add the Kubernetes package repository
```sh
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
### 10. Install kubeadm, kubelet, and kubectl, then enable kubelet
```sh
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
systemctl enable kubelet
```
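Until `kubeadm init` (or `kubeadm join`) runs, kubelet will keep restarting because it has no configuration yet; that is expected. The installed versions can be checked with standard commands:
```sh
kubeadm version
kubelet --version
kubectl version --client
```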
### 11. Set Docker's cgroup driver to systemd in /etc/docker/daemon.json
```sh
cat > /etc/docker/daemon.json << EOF
{"exec-opts": ["native.cgroupdriver=systemd"]}
EOF
```
### 12. Restart Docker
```sh
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
```
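After the restart, the cgroup driver change can be verified with `docker info`:
```sh
docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd
```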
### 13. Run kubeadm init on the master node
```sh
kubeadm init \
--apiserver-advertise-address=192.168.0.30 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
```
### 14. After a successful init, copy the configuration below and run it
```sh
# on the master: set up kubectl with the kubeconfig generated by kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

# on each worker: run the join command printed at the end of kubeadm init
# (the address is the master's API server, 192.168.0.30:6443 in this setup)
kubeadm join 192.168.0.30:6443 --token 1rqvtc.llsvwdbyynmsccnu \
    --discovery-token-ca-cert-hash sha256:b9c12195b80ef6b8997b8275fc650ed252f64053a6e8273cbf59ec703112f6cf
```
### 15. Deploy the CNI network plugin (run on the master node)
#### 15.1 Download the Calico manifest (the request may time out)
```sh
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O
```
#### 15.2 Edit CALICO_IPV4POOL_CIDR in calico.yaml to match the pod CIDR used at init (see the sketch below)
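The exact layout of this block varies between Calico versions; one way to locate it before editing by hand (the value shown matches the `--pod-network-cidr` used above):
```sh
grep -n -A1 CALICO_IPV4POOL_CIDR calico.yaml
# uncomment the pair of lines and set the value to the init pod CIDR, e.g.
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"
vi calico.yaml
```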
#### 15.3 Set the network interface name under IP_AUTODETECTION_METHOD (not present in newer versions)
#### 15.4 Remove the docker.io/ image prefix to keep slow pulls from failing
```sh
sed -i 's#docker.io/##g' calico.yaml
```
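#### 15.5 Apply the manifest
The edited manifest then needs to be applied; a typical sequence, waiting for the Calico pods and the nodes to become Ready:
```sh
kubectl apply -f calico.yaml
kubectl get pods -n kube-system   # wait until the calico-* pods are Running
kubectl get nodes                 # nodes turn Ready once the CNI is up
```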
### 16. Make kubectl usable from any node
#### 16.1 Copy /etc/kubernetes/admin.conf from the master node to the /etc/kubernetes directory on the target server
```sh
scp /etc/kubernetes/admin.conf root@vm-worker-01:/etc/kubernetes
```
#### 16.2 Configure the environment variable on that server
```sh
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```
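A quick check that the copied kubeconfig works on that node:
```sh
kubectl get nodes
```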