This section covers:
building the image; exporting it; copying it to each node and importing it; running the project; configuring Prometheus and Grafana.

Build the image

Build locally:
docker build -t ink8s-pod-metrics:v1 .

Export the image:
docker save -o ink8s-pod-metrics.tar ink8s-pod-metrics:v1
Copy the tarball to each node:
scp ink8s-pod-metrics.tar k8s-node01:~

Import the image on each node
With docker:
docker load -i ink8s-pod-metrics.tar

With containerd:
ctr --namespace k8s.io images import ink8s-pod-metrics.tar

Run the project:
kubectl apply -f rbac.yaml
kubectl apply -f deployment.yaml
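The build, save, copy, and import steps above can be sketched as one dry-run script. The node list is an assumption for illustration; the script only prints the commands so you can review them before running anything:

```shell
# Dry-run sketch: print the image distribution commands instead of executing
# them. IMAGE, TARBALL, and NODES are assumptions -- adjust for your cluster.
IMAGE="ink8s-pod-metrics:v1"
TARBALL="ink8s-pod-metrics.tar"
NODES="k8s-node01"

print_distribution_commands() {
  echo "docker build -t $IMAGE ."
  echo "docker save -o $TARBALL $IMAGE"
  for node in $NODES; do
    echo "scp $TARBALL $node:~"
    # On containerd nodes; use "docker load -i" instead on docker nodes.
    echo "ssh $node ctr --namespace k8s.io images import $TARBALL"
  done
}

print_distribution_commands
```

Piping the output to `sh` (after review) would execute the real distribution.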
Check

[root@k8s-master01 ink8s-pod-metrics]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana-d5d85bcd6-f74ch 1/1 Running 0 3d9h 10.100.85.199 k8s-node01 <none> <none>
grafana-d5d85bcd6-l44mx 1/1 Running 0 3d9h 10.100.85.198 k8s-node01 <none> <none>
ink8s-pod-metrics-deployment-85d9795d6-95lsp 1/1 Running 0 13m 10.100.85.207 k8s-node01 <none> <none>

Logs

[root@k8s-master01 ink8s-pod-metrics]# kubectl logs -l app=ink8s-pod-metrics -f
2021-08-23 20:34:35.377256 INFO app/get_k8s_objs.go:128 [pod.label:map[component:etcd tier:control-plane]]
2021-08-23 20:34:35.377266 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-apiserver tier:control-plane]]
2021-08-23 20:34:35.377274 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-controller-manager tier:control-plane]]
2021-08-23 20:34:35.377292 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:85c698c6d4 k8s-app:kube-proxy pod-template-generation:1]]
2021-08-23 20:34:35.377299 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:85c698c6d4 k8s-app:kube-proxy pod-template-generation:1]]
2021-08-23 20:34:35.377317 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-scheduler tier:control-plane]]
2021-08-23 20:34:35.377324 INFO app/get_k8s_objs.go:128 [pod.label:map[app.kubernetes.io/name:kube-state-metrics app.kubernetes.io/version:v1.9.7 pod-template-hash:564668c858]]
2021-08-23 20:34:35.377331 INFO app/get_k8s_objs.go:128 [pod.label:map[k8s-app:metrics-server pod-template-hash:7dbf6c4558]]
2021-08-23 20:34:35.377336 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:prometheus-5b9cdcfd6c k8s-app:prometheus statefulset.kubernetes.io/pod-name:prometheus-0]]
2021-08-23 20:34:35.377358 INFO app/get_k8s_objs.go:143 [server_pod_ips_result][num_pod:11][time_took_seconds:6.189551999]
2021-08-23 20:34:39.197614 INFO app/get_k8s_objs.go:107 [server_node_ips_result][num_node:2][time_took_seconds:0.009575987]
2021-08-23 20:34:39.200824 INFO app/get_k8s_objs.go:128 [pod.label:map[k8s-app:kube-dns pod-template-hash:68b9d7b887]]
2021-08-23 20:34:39.200857 INFO app/get_k8s_objs.go:128 [pod.label:map[k8s-app:kube-dns pod-template-hash:68b9d7b887]]
2021-08-23 20:34:39.200871 INFO app/get_k8s_objs.go:128 [pod.label:map[component:etcd tier:control-plane]]
2021-08-23 20:34:39.200889 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-apiserver tier:control-plane]]
2021-08-23 20:34:39.200903 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-controller-manager tier:control-plane]]
2021-08-23 20:34:39.200920 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:85c698c6d4 k8s-app:kube-proxy pod-template-generation:1]]
2021-08-23 20:34:39.200934 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:85c698c6d4 k8s-app:kube-proxy pod-template-generation:1]]
2021-08-23 20:34:39.200947 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-scheduler tier:control-plane]]
2021-08-23 20:34:39.200961 INFO app/get_k8s_objs.go:128 [pod.label:map[app.kubernetes.io/name:kube-state-metrics app.kubernetes.io/version:v1.9.7 pod-template-hash:564668c858]]
2021-08-23 20:34:39.200981 INFO app/get_k8s_objs.go:128 [pod.label:map[k8s-app:metrics-server pod-template-hash:7dbf6c4558]]
2021-08-23 20:34:39.200992 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:prometheus-5b9cdcfd6c k8s-app:prometheus statefulset.kubernetes.io/pod-name:prometheus-0]]
2021-08-23 20:34:39.201022 INFO app/get_k8s_objs.go:143 [server_pod_ips_result][num_pod:11][time_took_seconds:0.013052527]

Request the pod's metrics from a node

curl <pod-ip>:8080/metrics

[root@k8s-master01 ink8s-pod-metrics]# curl -s 10.100.85.207:8080/metrics | grep ink8s
# HELP ink8s_pod_metrics_get_node_detail k8s node detail each
# TYPE ink8s_pod_metrics_get_node_detail gauge
ink8s_pod_metrics_get_node_detail{containerRuntimeVersion="containerd://1.4.4",hostname="k8s-master01",ip="172.20.70.205",kubeletVersion="v1.20.1"} 1
ink8s_pod_metrics_get_node_detail{containerRuntimeVersion="containerd://1.4.4",hostname="k8s-node01",ip="172.20.70.215",kubeletVersion="v1.20.1"} 1
# HELP ink8s_pod_metrics_get_node_last_duration_seconds get node last duration seconds
# TYPE ink8s_pod_metrics_get_node_last_duration_seconds gauge
ink8s_pod_metrics_get_node_last_duration_seconds 0.008066143
# HELP ink8s_pod_metrics_get_pod_control_plane_pod_detail k8s pod detail of control plane
# TYPE ink8s_pod_metrics_get_pod_control_plane_pod_detail gauge
ink8s_pod_metrics_get_pod_control_plane_pod_detail{component="etcd",ip="172.20.70.205",pod_name="etcd-k8s-master01"} 1
ink8s_pod_metrics_get_pod_control_plane_pod_detail{component="kube-apiserver",ip="172.20.70.205",pod_name="kube-apiserver-k8s-master01"} 1
ink8s_pod_metrics_get_pod_control_plane_pod_detail{component="kube-controller-manager",ip="172.20.70.205",pod_name="kube-controller-manager-k8s-master01"} 1
ink8s_pod_metrics_get_pod_control_plane_pod_detail{component="kube-scheduler",ip="172.20.70.205",pod_name="kube-scheduler-k8s-master01"} 1
# HELP ink8s_pod_metrics_get_pod_last_duration_seconds get pod last duration seconds
# TYPE ink8s_pod_metrics_get_pod_last_duration_seconds gauge
ink8s_pod_metrics_get_pod_last_duration_seconds 0.01159838

On the Prometheus targets page, the pod has been discovered, but scraping fails with an error: an HTTPS request is being sent to an HTTP server.
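The curl output above is in the Prometheus text exposition format: a metric name, an optional `{label="value",...}` set, and a sample value. A minimal sketch of pulling those pieces apart (not a full parser -- it ignores escapes and timestamps; the sample line is taken from the output above):

```python
import re

# One sample line from the exporter's /metrics output shown above.
SAMPLE = ('ink8s_pod_metrics_get_node_detail{hostname="k8s-node01",'
          'ip="172.20.70.215",kubeletVersion="v1.20.1"} 1')

# metric_name{labels} value  -- the labels block is optional.
LINE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
    r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')
LABEL_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_sample(line):
    """Split one exposition-format sample into (name, labels, value)."""
    m = LINE_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unparseable sample line: {line!r}")
    labels = dict(LABEL_RE.findall(m.group("labels") or ""))
    return m.group("name"), labels, float(m.group("value"))

name, labels, value = parse_sample(SAMPLE)
print(name, labels["hostname"], value)
# -> ink8s_pod_metrics_get_node_detail k8s-node01 1.0
```

This is what Prometheus itself does (much more carefully) when it scrapes the `/metrics` endpoint.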
Prometheus and Grafana configuration

Check the kubernetes-pods job.
If it was previously configured with scheme https, change it to http:
- job_name: kubernetes-pods
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    separator: ;
    regex: "true"
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: $1
    action: replace
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    separator: ;
    regex: ([^:]+)(?::\d+)?;(\d+)
    target_label: __address__
    replacement: $1:$2
    action: replace
  - separator: ;
    regex: __meta_kubernetes_pod_label_(.+)
    replacement: $1
    action: labelmap
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: kubernetes_namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: kubernetes_pod_name
    replacement: $1
    action: replace
  kubernetes_sd_configs:
  - role: pod
    kubeconfig_file: ""
    follow_redirects: true

Check the metric in Prometheus
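The `__address__` relabel rule above joins the discovered pod address with the port from the `prometheus.io/port` annotation. A small sketch of that one rule, using the same regex as the job config (the example addresses are hypothetical):

```python
import re

# Regex copied from the kubernetes-pods job's __address__ relabel rule.
ADDR_RE = re.compile(r'([^:]+)(?::\d+)?;(\d+)')

def relabel_address(address, port_annotation):
    """Mimic one relabel_config step: Prometheus joins the source labels
    with the ';' separator, matches the regex against the result, and
    substitutes the replacement "$1:$2" into target_label __address__."""
    joined = f"{address};{port_annotation}"
    m = ADDR_RE.fullmatch(joined)
    return f"{m.group(1)}:{m.group(2)}" if m else address

# Pod discovered without a port: the annotation's port is appended.
print(relabel_address("10.100.85.207", "8080"))
# Pod discovered with a port: it is replaced by the annotation's port.
print(relabel_address("10.100.85.207:9090", "8080"))
```

Both calls produce `10.100.85.207:8080`, which is exactly the scrape target seen as `instance` in the query result below.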
Query ink8s_pod_metrics_get_node_detail:

ink8s_pod_metrics_get_node_detail{app="ink8s-pod-metrics", containerRuntimeVersion="containerd://1.4.4", hostname="k8s-master01", instance="10.100.85.207:8080", ip="172.20.70.205", job="kubernetes-pods", kubeletVersion="v1.20.1", kubernetes_namespace="default", kubernetes_pod_name="ink8s-pod-metrics-deployment-85d9795d6-95lsp", pod_template_hash="85d9795d6"}
1
ink8s_pod_metrics_get_node_detail{app="ink8s-pod-metrics", containerRuntimeVersion="containerd://1.4.4", hostname="k8s-node01", instance="10.100.85.207:8080", ip="172.20.70.215", job="kubernetes-pods", kubeletVersion="v1.20.1", kubernetes_namespace="default", kubernetes_pod_name="ink8s-pod-metrics-deployment-85d9795d6-95lsp", pod_template_hash="85d9795d6"}
1

Configure Grafana
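The original shows the Grafana side only as a screenshot. As a rough sketch, a table panel over this metric could look like the following dashboard-JSON fragment (field set abbreviated; the datasource name is an assumption, pick your own Prometheus datasource):

```json
{
  "type": "table",
  "title": "k8s node detail",
  "datasource": "prometheus",
  "targets": [
    {
      "expr": "ink8s_pod_metrics_get_node_detail",
      "instant": true,
      "format": "table"
    }
  ]
}
```

An instant table query works well here because the metric is a constant-1 gauge whose information lives entirely in its labels (hostname, ip, kubeletVersion, containerRuntimeVersion).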
Example dashboard (screenshot omitted)
Key points of this section:
building the image; exporting it; copying it to each node and importing it; running the project; configuring Prometheus and Grafana.