Kubernetes officially provides several installation methods: kind, minikube, kubeadm, and so on. minikube was covered in a previous article and is fairly simple to set up. This article walks through deploying a k8s cluster with kubeadm. Several high-availability schemes are offered for production; see the official k8s documentation. The version installed here is 1.28.0. It is worth reading the official documentation carefully — the steps below largely come from it.

1. Environment preparation

Three CentOS 7 virtual machines, 2 cores / 4 GB each (the official minimum is 2 cores / 2 GB). Check the kernel version with `uname -r`.

| Role   | IP             | Hostname        |
| ------ | -------------- | --------------- |
| master | 192.168.213.9  | k8s-kubeadmin-1 |
| node1  | 192.168.213.10 | k8s-kubeadmin-2 |
| node2  | 192.168.213.11 | k8s-kubeadmin-3 |

Edit the hosts file on all three VMs so that each host can ping the others by hostname.

Change the hostname:

```
# view
hostname
# modify
sudo hostnamectl set-hostname k8s-kubeadmin-1
```

2. Installation

2.1 (all nodes) Disable the firewall

Disabling the firewall avoids having to configure open ports. If you would rather keep it running, you can refer to my earlier post on opening the required firewall ports instead.

```
systemctl stop firewalld      # stop the firewall
systemctl disable firewalld   # do not start it on boot
```

2.2 (all nodes) Disable SELinux

```
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```

Alternatively, set `SELINUX=disabled`.

2.3 (all nodes) Disable the swap partition

```
# Permanently disable swap: delete or comment out the swap mount line in /etc/fstab
nano /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
```

Then reboot:

```
reboot
```

2.4 (all nodes) Set up time synchronization

```
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
```

2.5 (all nodes) Enable bridge-nf-call-iptables

In a Kubernetes environment, both iptables and IPVS are used to forward network traffic and balance load, but they differ in how they work and what they offer.

iptables is a tool built into Linux that can filter and forward traffic and supports NAT and other network functions. In Kubernetes, iptables is mainly used to implement the ClusterIP and NodePort Service types. For a ClusterIP Service, iptables adds a rule on each node for every Service IP that forwards traffic to the backend Pod IPs. For a NodePort Service, iptables adds a rule on each node that forwards traffic from the host's NodePort to the Service IP.

IPVS (IP Virtual Server), by contrast, is a high-performance load balancer implemented in the Linux kernel. It processes traffic in kernel space, supports multiple load-balancing algorithms, and can do session affinity. In Kubernetes, IPVS can implement Service load balancing; compared with iptables it offers higher performance and more algorithm choices, which suits high-traffic, high-concurrency scenarios better. The IPVS proxy still uses iptables for packet filtering, SNAT, and masquerading.

In short, both iptables and IPVS implement traffic forwarding and load balancing in Kubernetes: iptables is the common choice for Service-based forwarding, while IPVS fits high-performance, high-concurrency scenarios. Pick whichever matches your needs.

Run the following:

```
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
```
```
# Apply the sysctl parameters without rebooting
sudo sysctl --system
```

Confirm that the br_netfilter and overlay modules are loaded:

```
lsmod | grep br_netfilter
lsmod | grep overlay
```

Confirm that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward are set to 1 in your sysctl configuration:

```
[root@k8s-kubeadmin-1 /]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
```

2.6 (all nodes) Install the container runtime containerd

Install containerd:

```
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install containerd.io
```

Generate the config.toml configuration:

```
containerd config default > /etc/containerd/config.toml
```

Configure the systemd cgroup driver in /etc/containerd/config.toml:

```
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
```

Start containerd and enable it on boot:

```
systemctl restart containerd
systemctl enable containerd
```

2.7 (all nodes) Configure the Aliyun yum repository for k8s

The repository configured in the official docs is hosted abroad and is slow (or unreachable for various reasons), so configure the Aliyun mirror instead:

```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

```
[root@k8s-kubeadmin-1 ~]# cd /etc/yum.repos.d
[root@k8s-kubeadmin-1 yum.repos.d]# ll
total 48
-rw-r--r--. 1 root root 1664 Nov 23  2020 CentOS-Base.repo
-rw-r--r--. 1 root root 1309 Nov 23  2020 CentOS-CR.repo
-rw-r--r--. 1 root root  649 Nov 23  2020 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  314 Nov 23  2020 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Nov 23  2020 CentOS-Media.repo
-rw-r--r--. 1 root root 1331 Nov 23  2020 CentOS-Sources.repo
-rw-r--r--. 1 root root 8515 Nov 23  2020 CentOS-Vault.repo
-rw-r--r--. 1 root root  616 Nov 23  2020 CentOS-x86_64-kernel.repo
-rw-r--r--. 1 root root 1919 Nov 21 03:56 docker-ce.repo
-rw-r--r--  1 root root  287 Nov 29 00:54 kubernetes.repo
[root@k8s-kubeadmin-1 yum.repos.d]#
```
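Before moving on to the package installs, the host prep from sections 2.1–2.5 can be sanity-checked in one pass. This is only a sketch: the `check` helper is a name invented here, and each probe prints OK/FAIL rather than aborting, so it is safe to run on any node.

```shell
# Print OK/FAIL for each prerequisite instead of exiting on the first failure.
check() { if eval "$2" >/dev/null 2>&1; then echo "OK   $1"; else echo "FAIL $1"; fi; }

check "swap disabled"       '[ -z "$(swapon --show 2>/dev/null)" ]'
check "selinux permissive"  '[ "$(getenforce 2>/dev/null)" != "Enforcing" ]'
check "br_netfilter loaded" 'lsmod | grep -q br_netfilter'
check "overlay loaded"      'lsmod | grep -q overlay'
check "ip_forward enabled"  '[ "$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)" = 1 ]'
```

Run it on each of the three VMs; every line should report OK before continuing.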
Install Docker:

```
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

Start it, and enable it on boot:

```
sudo systemctl start docker
systemctl enable docker
```

Configure the Aliyun registry accelerator by editing the daemon configuration file /etc/docker/daemon.json:

```
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://e6sj15e9.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

2.8 (all nodes) Install kubeadm, kubelet, and kubectl with yum

This follows the official installation. If you have installed these packages before, remove the old versions first:

```
yum -y remove kubelet kubeadm kubectl
```

Browse the packages available on the Aliyun mirror:

https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

Version 1.28.0 is fairly recent (updated 2023-08-16), so that is the one used here. Pin the version when installing:

```
yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0 --disableexcludes=kubernetes
systemctl enable kubelet
```

2.9 Check the required images

The images have to be prepared in advance. You can build custom images, or, as done here, query what exists on the Aliyun accelerator and retag those images with the names kubeadm expects.

```
kubeadm config images list
```

```
[root@k8s-kubeadmin-1 yum.repos.d]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
```

These registry.k8s.io images are not available through the Aliyun mirror:

```
[root@k8s-kubeadmin-1 yum.repos.d]# docker search registry.k8s.io/kube-apiserver:v1.28.4
Error response from daemon: Unexpected status code 404
```

So the `kubeadm init` command below would most likely fail without them. Pull the corresponding images from the Aliyun registry, then tag them with the names kubeadm requires:

```
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0 registry.k8s.io/kube-apiserver:v1.28.4
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0 registry.k8s.io/kube-controller-manager:v1.28.4
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0 registry.k8s.io/kube-scheduler:v1.28.4
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0 registry.k8s.io/kube-proxy:v1.28.4
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
docker tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.6
```
```
[root@k8s-kubeadmin-1 yum.repos.d]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
flannel/flannel                                                   v0.22.3   e23f7ca36333   2 months ago    70.2MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.0   bb5e0dde9054   3 months ago    126MB
registry.k8s.io/kube-apiserver                                    v1.28.4   bb5e0dde9054   3 months ago    126MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.0   f6f496300a2a   3 months ago    60.1MB
registry.k8s.io/kube-scheduler                                    v1.28.4   f6f496300a2a   3 months ago    60.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.0   4be79c38a4ba   3 months ago    122MB
registry.k8s.io/kube-controller-manager                           v1.28.4   4be79c38a4ba   3 months ago    122MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.0   ea1030da44aa   3 months ago    73.1MB
registry.k8s.io/kube-proxy                                        v1.28.4   ea1030da44aa   3 months ago    73.1MB
flannel/flannel-cni-plugin                                        v1.2.0    a55d1bad692b   4 months ago    8.04MB
registry.aliyuncs.com/google_containers/etcd                      3.5.9-0   73deb9a3f702   6 months ago    294MB
registry.k8s.io/etcd                                              3.5.9-0   73deb9a3f702   6 months ago    294MB
registry.k8s.io/coredns/coredns                                   v1.10.1   ead0a4a53df8   9 months ago    53.6MB
registry.aliyuncs.com/google_containers/coredns                   v1.10.1   ead0a4a53df8   9 months ago    53.6MB
registry.aliyuncs.com/google_containers/pause                     3.9       e6f181688397   13 months ago   744kB
registry.k8s.io/pause                                             3.6       e6f181688397   13 months ago   744kB
registry.k8s.io/pause                                             3.9       e6f181688397   13 months ago   744kB
kubernetesui/dashboard                                            latest    07655ddf2eeb   14 months ago   246MB
kubernetesui/dashboard                                            v2.7.0    07655ddf2eeb   14 months ago   246MB
kubernetesui/metrics-scraper                                      latest    421615ce8dbd   2 years ago     34.4MB
kubernetesui/metrics-scraper                                      v1.0.8    421615ce8dbd   2 years ago     34.4MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.4   6dec7cfde1e5   3 years ago     116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.4   2e1ba57fe95a   3 years ago     171MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.4   7f997fcf3e94   3 years ago     161MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.4   5db16c1c7aff   3 years ago     94.4MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5     70f311871ae1   4 years ago     41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0   303ce5db0e90   4 years ago     288MB
registry.aliyuncs.com/google_containers/pause                     3.1       da86e6ba6ca1   5 years ago     742kB
kubernetes/pause                                                  latest    f9d5de079539   9 years ago     240kB
```
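The pull-and-retag steps above can also be scripted. This is only a sketch under assumptions: the `mirror_name` helper is a name invented here, and it presumes the Aliyun mirror carries the exact tags kubeadm asks for — the post actually pulls v1.28.0 and retags it as v1.28.4 by hand, so adjust tags as needed.

```shell
# Derive the Aliyun mirror name for an image kubeadm wants, then pull and retag it.
mirror_name() {
  local img="${1#registry.k8s.io/}"                           # drop the registry prefix
  echo "registry.aliyuncs.com/google_containers/${img##*/}"   # drop sub-paths like coredns/
}

# If kubeadm is not installed yet, the list is empty and the loop is a no-op.
for target in $(kubeadm config images list 2>/dev/null); do
  src="$(mirror_name "$target")"
  docker pull "$src" && docker tag "$src" "$target"
done
```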
2.10 (on k8s-kubeadmin-1) Install the master

```
kubeadm init \
  --apiserver-advertise-address=192.168.213.9 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --v=5
```

Because both containerd and Docker Engine (via cri-dockerd) were installed as container runtimes above, the --cri-socket parameter has to be specified.

If something goes wrong during installation and you need to reinstall, first reset the state kubeadm created:

```
kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
```

The reset process does not reset or clean iptables rules or IPVS tables. To reset iptables you must do it manually:

```
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X
```

To reset the IPVS tables, run:

```
ipvsadm -C
```

On success the output ends with:

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.213.9:6443 --token askdfkjsdfkljkldffj \
    --discovery-token-ca-cert-hash sha256:kjlksjdfkasdkjflksdfljdfkdf
```
Then follow the prompts. To let a non-root user run kubectl, run the following commands (they are also part of the kubeadm init output):

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Or, if you are the root user:

```
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Now run `kubectl get node`:

```
[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get node
NAME              STATUS     ROLES           AGE     VERSION
k8s-kubeadmin-1   NotReady   control-plane   4h31m   v1.28.0
```

Join the worker nodes to k8s-kubeadmin-1. The format is:

```
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
```

```
kubeadm join 192.168.213.9:6443 --token s5inwf.17rdxvhjalwyzj92 \
    --discovery-token-ca-cert-hash sha256:ce85d2ceaea7311ac3e58ee355d34ee9235702e3415d43b84f78da682210ee09 \
    --cri-socket=unix:///var/run/cri-dockerd.sock --v=5
```

The token may have expired. Create a new one on k8s-kubeadmin-1:

```
kubeadm token create
```

The output looks like:

```
5didvk.d09sbcov8ph2amjw
```

If you do not have the value of --discovery-token-ca-cert-hash, you can obtain it by running the following command chain on the control-plane node:

```
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'
```

The output is similar to:

```
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
```

Run `kubectl get node` again:

```
[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get node
NAME              STATUS     ROLES           AGE     VERSION
k8s-kubeadmin-1   NotReady   control-plane   4h31m   v1.28.0
k8s-kubeadmin-2   NotReady   <none>          4h7m    v1.28.0
k8s-kubeadmin-3   NotReady   <none>          4h7m    v1.28.0
```

A Pod network add-on still needs to be installed.

3.0 (on k8s-kubeadmin-1) Install a Pod network add-on — the Container Network Interface (CNI)

Download and install:

```
wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
mkdir -pv /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
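Instead of eyeballing the STATUS column while the CNI comes up, the Ready check can be scripted. A sketch: `not_ready_count` is a helper invented here that just parses `kubectl get node --no-headers` output.

```shell
# Count nodes whose STATUS column (field 2) is anything other than "Ready".
not_ready_count() { awk '$2 != "Ready" { n++ } END { print n + 0 }'; }

# Usage (on the control-plane node):
#   kubectl get node --no-headers | not_ready_count   # 0 means the CNI is working
```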
Run `kubectl get node` once more:

```
[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get node
NAME              STATUS   ROLES           AGE     VERSION
k8s-kubeadmin-1   Ready    control-plane   4h31m   v1.28.0
k8s-kubeadmin-2   Ready    <none>          4h7m    v1.28.0
k8s-kubeadmin-3   Ready    <none>          4h7m    v1.28.0
```

Check the status of the pods in the kube-system namespace:

```
[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get pods -n kube-system
NAME                                         READY   STATUS             RESTARTS         AGE
coredns-66f779496c-9tqbt                     1/1     Running            0                4h42m
coredns-66f779496c-wzvts                     1/1     Running            0                4h42m
dashboard-metrics-scraper-5657497c4c-v2dn4   1/1     Running            0                3h
etcd-k8s-kubeadmin-1                         1/1     Running            0                4h42m
kube-apiserver-k8s-kubeadmin-1               1/1     Running            0                4h42m
kube-controller-manager-k8s-kubeadmin-1      1/1     Running            0                4h42m
kube-proxy-bwksp                             1/1     Running            0                4h19m
kube-proxy-gdd49                             1/1     Running            0                4h42m
kube-proxy-svj87                             1/1     Running            0                4h18m
kube-scheduler-k8s-kubeadmin-1               1/1     Running            0                4h42m
kubernetes-dashboard-76f4b5bc7d-gjm79        0/1     CrashLoopBackOff   26 (4m14s ago)   124m
```

3.1 Install kubernetes-dashboard

Fetch the kubernetes-dashboard resource manifest (yaml):

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```

If you have no other way to reach the outside world, this may be slow or fail; the file I pulled down is reproduced below and can be copied directly. It uses two images, kubernetesui/dashboard:v2.7.0 and kubernetesui/metrics-scraper:v1.0.8, which the Aliyun accelerator does not carry — find equivalents that it does have and retag them with the required names. Pull the configuration on all three machines.

A few places in the file need to be modified. The Service becomes:

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      name: https       # not in the original file
      nodePort: 32001   # not in the original file
  type: NodePort        # not in the original file
  selector:
    k8s-app: kubernetes-dashboard
```

The original file:
```yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
```

Deploy it:

```
kubectl apply -f [your local path]/recommended.yaml
```
Create dashboard-adminuser.yaml locally:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

```
kubectl apply -f [your file path]/dashboard-adminuser.yaml
kubectl -n kubernetes-dashboard create token admin-user
```

Save the token from the output; it is used to log in later.

Get the pods in all namespaces:

```
[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS             RESTARTS         AGE
kube-flannel           kube-flannel-ds-5r52b                        1/1     Running            0                4h35m
kube-flannel           kube-flannel-ds-9jvk4                        1/1     Running            0                4h35m
kube-flannel           kube-flannel-ds-jbc85                        1/1     Running            0                4h35m
kube-system            coredns-66f779496c-9tqbt                     1/1     Running            0                5h8m
kube-system            coredns-66f779496c-wzvts                     1/1     Running            0                5h8m
kube-system            dashboard-metrics-scraper-5657497c4c-v2dn4   1/1     Running            0                3h27m
kube-system            etcd-k8s-kubeadmin-1                         1/1     Running            0                5h9m
kube-system            kube-apiserver-k8s-kubeadmin-1               1/1     Running            0                5h9m
kube-system            kube-controller-manager-k8s-kubeadmin-1      1/1     Running            0                5h9m
kube-system            kube-proxy-bwksp                             1/1     Running            0                4h45m
kube-system            kube-proxy-gdd49                             1/1     Running            0                5h8m
kube-system            kube-proxy-svj87                             1/1     Running            0                4h45m
kube-system            kube-scheduler-k8s-kubeadmin-1               1/1     Running            0                5h9m
kube-system            kubernetes-dashboard-76f4b5bc7d-gjm79        0/1     CrashLoopBackOff   30 (7m54s ago)   150m
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-mk9hk   1/1     Running            0                4h28m
kubernetes-dashboard   kubernetes-dashboard-78f87ddfc-v6l57         1/1     Running            0                4h28m
```

List the services in all namespaces (the NodePort one is exposed externally):

```
kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  5h10m
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   5h10m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.109.201.223   <none>        8000/TCP                 160m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.105.61.238    <none>        443:32001/TCP            157m
```
kubernetes-dashboard was deployed onto the k8s-kubeadmin-2 node. Access it at https://k8s-kubeadmin-2:32001/
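Rather than assuming the pod landed on k8s-kubeadmin-2, the hosting node and NodePort can be read back from the API. A sketch: `dashboard_url` is a name invented here, and it assumes the labels and Service names from the manifest above.

```shell
# Print the dashboard URL built from the scheduled node and the NodePort.
dashboard_url() {
  local node port
  node=$(kubectl -n kubernetes-dashboard get pod -l k8s-app=kubernetes-dashboard \
           -o jsonpath='{.items[0].spec.nodeName}')
  port=$(kubectl -n kubernetes-dashboard get svc kubernetes-dashboard \
           -o jsonpath='{.spec.ports[0].nodePort}')
  echo "https://${node}:${port}/"
}

# On this cluster, calling dashboard_url should print the URL used above.
```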
