K8s Environment Setup

Table of Contents

- Cluster Types
- Installation Methods
- Environment Planning
- Cloning the Three Virtual Machines
- System Environment Configuration
- Building the Cluster
  - Initializing the Cluster (master node only)
  - Configuring Environment Variables (master node only)
  - Joining Worker Nodes to the Cluster (knode1 and knode2)
  - Installing the Calico Network (master node only)

Cluster Types

Kubernetes clusters fall into two broad categories: single-master and multi-master.

① Single master: one Master node and multiple Node machines. Simple to set up, but the master is a single point of failure, so it suits test environments.
② Multi-master (high availability): multiple Master nodes and multiple Node machines. More work to set up but more resilient, which suits production environments.

Installation Methods
Kubernetes can be deployed in several ways; the mainstream options today are kubeadm, minikube, and binary packages.
① minikube: a tool for quickly standing up a single-node Kubernetes environment.
② kubeadm: a tool for quickly bootstrapping a Kubernetes cluster; it can be used for production.
③ Binary packages: download each component's binaries from the official site and install them one by one; this is the recommended approach for production environments.
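For a quick local comparison, minikube can stand up a single-node test cluster in one command; a minimal sketch, assuming minikube and a Docker driver are already installed (neither is covered in this article):

minikube start --driver=docker --kubernetes-version=v1.27.0
minikube kubectl -- get nodes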
Environment Planning

Operating system: CentOS Stream 8 (installation media are available from the Alibaba Cloud open-source mirror site or any other mirror). The environment needs three virtual machines with outbound Internet access; the NIC type can be NAT or bridged. This article builds the k8s cluster quickly with a script.

Hostname   IP Address        Memory   Disk     CPUs
kmaster    192.168.129.200   8 GB     100 GB   2
knode1     192.168.129.201   8 GB     100 GB   2
knode2     192.168.129.202   8 GB     100 GB   2

You can size memory and disk to your own machine, but give each node at least two CPUs.

Cloning the Three Virtual Machines

Prepare three folders, and be sure to make full clones. Boot the three VMs and change each one's hostname and IP address. When changing the IP address, check what your NIC configuration file is called; mine is ifcfg-ens160. When you are done, shut down and take a snapshot.

# kmaster node
[root@tmp ~]# hostnamectl set-hostname kmaster
[root@kmaster ~]# cd /etc/sysconfig/network-scripts/
[root@kmaster network-scripts]# vi ifcfg-ens160
[root@kmaster network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=none
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.129.200
NETMASK=255.255.255.0
GATEWAY=192.168.129.2
DNS1=192.168.129.2
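# The edited ifcfg settings do not take effect until the connection is re-read.
# Assuming NetworkManager manages ens160 (the CentOS Stream 8 default), reload
# and re-activate it as below, or simply reboot (repeat on knode1 and knode2):
[root@kmaster ~]# nmcli connection reload
[root@kmaster ~]# nmcli connection up ens160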
# knode1 node
[root@tmp ~]# hostnamectl set-hostname knode1
[root@knode1 ~]# cd /etc/sysconfig/network-scripts/
[root@knode1 network-scripts]# vi ifcfg-ens160
[root@knode1 network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=none
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.129.201
NETMASK=255.255.255.0
GATEWAY=192.168.129.2
DNS1=192.168.129.2

# knode2 node
[root@tmp ~]# hostnamectl set-hostname knode2
[root@knode2 ~]# cd /etc/sysconfig/network-scripts/
[root@knode2 network-scripts]# vi ifcfg-ens160
[root@knode2 network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=none
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.129.202
NETMASK=255.255.255.0
GATEWAY=192.168.129.2
DNS1=192.168.129.2

System Environment Configuration

Get the script and its configuration files via the download link. The script assumes the NIC is named ens160; if yours is different, change the NIC referenced in the script. K8s version: 1.27.0.

[root@kmaster ~]# vim Stream8-k8s-v1.27.0.sh
#!/bin/bash
# CentOS Stream 8: install Kubernetes 1.27.0
# the number of available CPUs 1 is less than the required 2
# k8s requires at least 2 virtual CPUs per node
# Usage: run this script on every node; after all nodes are configured, copy the
# command from step 11 and run it on the master node only to initialize the cluster.
#1 rpm
echo "###00 Checking RPM###"
yum install -y yum-utils vim bash-completion net-tools wget
echo "00 configuration successful ^_^"

# Basic Information
echo "###01 Basic Information###"
hostname=$(hostname)
# NIC is ens160
hostip=$(ifconfig ens160 | grep -w inet | awk '{print $2}')
echo "The Hostname is: $hostname"
echo "The IPAddress is: $hostip"

#2 /etc/hosts
echo "###02 Checking File: /etc/hosts###"
hosts=$(cat /etc/hosts)
result01=$(echo "$hosts" | grep -w "${hostname}")
if [[ $result01 != "" ]]
then
    echo "Configuration passed ^_^"
else
    echo "hostname and ip not set, configuring......"
    echo "$hostip $hostname" >> /etc/hosts
    echo "configuration successful ^_^"
fi
echo "02 configuration successful ^_^"

#3 firewall selinux
echo "###03 Checking Firewall and SELinux###"
systemctl stop firewalld
systemctl disable firewalld
se01="SELINUX=disabled"
se02=$(cat /etc/selinux/config | grep -w ^SELINUX)
if [[ $se01 == $se02 ]]
then
    echo "Configuration passed ^_^"
else
    echo "SELinux Not Closed, configuring......"
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    echo "configuration successful ^_^"
fi
echo "03 configuration successful ^_^"

#4 swap
echo "###04 Checking swap###"
swapoff -a
sed -i 's/^.*swap/#&/g' /etc/fstab
echo "04 configuration successful ^_^"

#5 docker-ce
echo "###05 Checking docker###"
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
echo "list docker-ce versions"
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce
systemctl start docker
systemctl enable docker
cat <<EOF > /etc/docker/daemon.json
{
    "registry-mirrors": ["https://cc2d8woc.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
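# Note: with Kubernetes 1.27 the kubelet talks to containerd (installed here as
# a docker-ce dependency) over CRI; Docker itself is not the cluster's container
# runtime and is kept mainly for manually building and pulling images.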
echo "05 configuration successful ^_^"

#6 iptables
echo "###06 Checking iptables###"
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
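# The bridge sysctls above depend on the br_netfilter kernel module. If sysctl
# reports missing keys, load the module and make it persistent (it may already
# be loaded on your system):
# modprobe br_netfilter
# echo br_netfilter > /etc/modules-load.d/k8s.conf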
echo "06 configuration successful ^_^"

#7 cgroup (systemd/cgroupfs)
echo "###07 Checking cgroup###"
containerd config default > /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl restart containerd
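# SystemdCgroup = true switches containerd to the systemd cgroup driver, matching
# the driver kubeadm configures for the kubelet by default; mismatched cgroup
# drivers are a common cause of kubelet/node instability.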
echo "07 configuration successful ^_^"

#8 kubernetes.repo
echo "###08 Checking repo###"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
echo "08 configuration successful ^_^"

#9 crictl
echo "Checking crictl"
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 5
debug: false
EOF
echo "09 configuration successful ^_^"

#10 kube 1.27.0
echo "Checking kube"
yum install -y kubelet-1.27.0 kubeadm-1.27.0 kubectl-1.27.0 --disableexcludes=kubernetes
systemctl enable --now kubelet
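# kubelet will restart in a loop at this point; that is expected, since it has
# no configuration until "kubeadm init" (master) or "kubeadm join" (workers) runs.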
echo "10 configuration successful ^_^"
echo "Congratulations ! The basic configuration has been completed"

#11 Initialize the cluster
# Run the cluster initialization on the master host only:
# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.0 --pod-network-cidr=10.244.0.0/16

Run the script on each of the three nodes:

[root@kmaster ~]# sh Stream8-k8s-v1.27.0.sh
[root@knode1 ~]# sh Stream8-k8s-v1.27.0.sh
[root@knode2 ~]# sh Stream8-k8s-v1.27.0.sh

# *** kmaster output ***
###00 Checking RPM###
CentOS Stream 8 - AppStream
CentOS Stream 8 - BaseOS
CentOS Stream 8 - Extras
CentOS Stream 8 - Extras common packages
Dependencies resolved.
 Package                                  Architecture
Installing:
 bash-completion                          noarch
 net-tools                                x86_64
......(omitted)......
Installed:
  conntrack-tools-1.4.4-11.el8.x86_64    cri-tools-1.26.0-0.x86_64
  kubeadm-1.27.0-0.x86_64                kubectl-1.27.0-0.x86_64
  libnetfilter_queue-1.0.4-3.el8.x86_64  socat-1.7.4.1-1.el8.x86_64

Complete!
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
10 configuration successful ^_^
Congratulations ! The basic configuration has been completed

# *** knode1 and knode2 output is identical to kmaster ***
Building the Cluster

Initializing the Cluster (master node only)

Copy the last command from the script and run it to initialize the cluster:

[root@kmaster ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.27.0
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using kubeadm config images pull
W0719 10:48:35.823181 13745 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd vers
W0719 10:48:51.007564 13745 checks.go:835] detected that the sandbox image registry.aliyuncs.com/google_containers/pause:3.6 of the container runtime is
[certs] Using certificateDir folder /etc/kubernetes/pki
[certs] Generating ca certificate and key
[certs] Generating apiserver certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] an
[certs] Generating apiserver-kubelet-client certificate and key
[certs] Generating front-proxy-ca certificate and key
[certs] Generating front-proxy-client certificate and key
[certs] Generating etcd/ca certificate and key
[certs] Generating etcd/server certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.100.150 127.0.0.1 ::1]
[certs] Generating etcd/peer certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.100.150 127.0.0.1 ::1]
[certs] Generating etcd/healthcheck-client certificate and key
[certs] Generating apiserver-etcd-client certificate and key
[certs] Generating sa key and public key
[kubeconfig] Using kubeconfig folder /etc/kubernetes
[kubeconfig] Writing admin.conf kubeconfig file
[kubeconfig] Writing kubelet.conf kubeconfig file
[kubeconfig] Writing controller-manager.conf kubeconfig file
[kubeconfig] Writing scheduler.conf kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder /etc/kubernetes/manifests
[control-plane] Creating static Pod manifest for kube-apiserver
[control-plane] Creating static Pod manifest for kube-controller-manager
[control-plane] Creating static Pod manifest for kube-scheduler
[etcd] Creating static Pod manifest for local etcd in /etc/kubernetes/manifests
W0719 10:49:09.467378 13745 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd vers
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory /etc/kubernetes/manifests. This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.059875 seconds
[upload-config] Storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-e
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ddct8j.i2dloykyc0wpwdg3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the cluster-info ConfigMap in the kube-public namespace
[kubelet-finalize] Updating /etc/kubernetes/kubelet.conf to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.129.200:6443 --token ddct8j.i2dloykyc0wpwdg3 \
	--discovery-token-ca-cert-hash sha256:3bdd47846f02bcc9858d2946714341f22b37aaa07dbaa61594f2a0ecce80f4fb
Configuring Environment Variables (master node only)

# Run the commands suggested by the install output
[root@kmaster ~]# mkdir -p $HOME/.kube
[root@kmaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kmaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kmaster ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@kmaster ~]# source /etc/profile
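Optionally, enable kubectl tab completion to make the following commands easier to type (bash-completion was already installed by the script):

[root@kmaster ~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc
[root@kmaster ~]# source ~/.bashrc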
[root@kmaster ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   2d23h   v1.27.0

Joining Worker Nodes to the Cluster (knode1 and knode2)

Copy the kubeadm join command generated by the cluster initialization to each of the two worker nodes and run it there.
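The token embedded in that command expires after 24 hours by default. If it has lapsed, print a fresh join command on the master first:

[root@kmaster ~]# kubeadm token create --print-join-command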
# knode1 node
[root@knode1 ~]# kubeadm join 192.168.129.200:6443 --token ddct8j.i2dloykyc0wpwdg3 \
	--discovery-token-ca-cert-hash sha256:3bdd47846f02bcc9858d2946714341f22b37aaa07dbaa61594f2a0ecce80f4fb
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# knode2 node
[root@knode2 ~]# kubeadm join 192.168.129.200:6443 --token ddct8j.i2dloykyc0wpwdg3 \
	--discovery-token-ca-cert-hash sha256:3bdd47846f02bcc9858d2946714341f22b37aaa07dbaa61594f2a0ecce80f4fb
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@kmaster ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   2d23h   v1.27.0
knode1    NotReady   <none>          2d23h   v1.27.0
knode2    NotReady   <none>          2d23h   v1.27.0

Installing the Calico Network (master node only)

Before the network add-on is installed the cluster status is NotReady; shortly after installation it will change to Ready. Check the cluster status:

[root@kmaster ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   2d23h   v1.27.0
knode1    NotReady   <none>          2d23h   v1.27.0
knode2    NotReady   <none>          2d23h   v1.27.0

Install the Tigera Calico operator (version 3.26):
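If you would rather fetch the operator manifest from the official Calico release than from a netdisk copy, the following should be equivalent (assuming the node can reach raw.githubusercontent.com):

[root@kmaster ~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml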
# The required files can be downloaded from the Baidu Netdisk share
[root@kmaster ~]# kubectl create -f tigera-operator-3-26-1.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

Configure custom-resources.yaml:

[root@kmaster ~]# vim custom-resources-3-26-1.yaml
# Change the CIDR of the IP address pool to match the --pod-network-cidr value
# passed to kubeadm init (the provided file has already been updated):
# cidr: 10.244.0.0/16
# The required files can be downloaded from the Baidu Netdisk share
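For reference, the relevant portion of custom-resources-3-26-1.yaml should look roughly like the upstream Calico v3.26 custom-resources.yaml, with only the cidr line changed:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()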
[root@kmaster ~]# kubectl create -f custom-resources-3-26-1.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

# Watch the calico pod status; once all pods are Running, the cluster status returns to normal
[root@kmaster ~]# watch kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-5d6c98ff78-gcj2n   1/1     Running   3 (103m ago)   2d23h
calico-node-cc9ct                          1/1     Running   3 (103m ago)   2d23h
calico-node-v8459                          1/1     Running   3 (103m ago)   2d23h
calico-node-w524w                          1/1     Running   3 (103m ago)   2d23h
calico-typha-bbb96d56-46w2v                1/1     Running   3 (103m ago)   2d23h
calico-typha-bbb96d56-nrxkf                1/1     Running   3 (103m ago)   2d23h
csi-node-driver-4wm4h                      2/2     Running   6 (103m ago)   2d23h
csi-node-driver-dr7hq                      2/2     Running   6 (103m ago)   2d23h
csi-node-driver-fjr77                      2/2     Running   6 (103m ago)   2d23h

Check the cluster status again:

[root@kmaster ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
kmaster   Ready    control-plane   2d23h   v1.27.0
knode1    Ready    <none>          2d23h   v1.27.0
knode2    Ready    <none>          2d23h   v1.27.0
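Once all three nodes report Ready, a quick smoke test is to run a throwaway deployment and confirm its pod is scheduled on a worker and receives an address from the 10.244.0.0/16 pool (the deployment name "web" here is arbitrary):

[root@kmaster ~]# kubectl create deployment web --image=nginx
[root@kmaster ~]# kubectl get pods -o wide
[root@kmaster ~]# kubectl delete deployment web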