In production, after decommissioning DataNodes the machines have to be shut down and taken offline, and before that you need to confirm that decommissioning has actually finished. Checking the NameNode's 50070 web UI by hand is obviously inefficient, so to get node information quickly I wrote a simple script. The /jmx endpoint on port 50070 exposes many more metrics as well; you can collect whichever ones you need and turn them into a Prometheus exporter or write them into a time-series database. This article is for learning and exchange only.
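The check itself boils down to parsing the LiveNodes field of the NameNodeInfo bean. As a minimal sketch before the full script below, a helper like this answers "has this host finished decommissioning?" (the function name and the sample JSON are my own illustration, not part of the original script):

```python
import json

def is_decommissioned(namenode_info_bean, host):
    """Return True if `host` appears in LiveNodes with adminState 'Decommissioned'.

    `namenode_info_bean` is the first element of the `beans` array returned by
    /jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo (a dict).
    """
    # LiveNodes is stored as a JSON string inside the bean, so parse it again.
    live = json.loads(namenode_info_bean["LiveNodes"])
    for node in live.values():
        ip = node["xferaddr"].split(":")[0]
        if ip == host:
            return node["adminState"] == "Decommissioned"
    return False  # host not present in LiveNodes at all

# Hypothetical sample of what the NameNode returns, for illustration only:
sample = {
    "LiveNodes": json.dumps({
        "dn1": {"xferaddr": "192.168.100.11:50010", "adminState": "In Service"},
        "dn2": {"xferaddr": "192.168.100.12:50010", "adminState": "Decommissioned"},
    })
}
print(is_decommissioned(sample, "192.168.100.12"))  # True
```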
# -*- coding: utf-8 -*-
__author__ = "machine"
# date: 20220720
import json

import requests

url_dict = {"cluster1": "http://192.168.100.1:50070", "cluster2": "http://192.168.14.1:50070"}

for k, v in url_dict.items():
    print()
    print("-" * 77)
    print("Cluster name:", k)
    url = v + "/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"
    print(url)
    req = requests.get(url)
    result_json = json.loads(req.text)
    # LiveNodes is itself a JSON string inside the bean, so it needs a second parse
    livenode = json.loads(result_json["beans"][0]["LiveNodes"])
    deadnode = result_json["beans"][0]["DeadNodes"]
    print("Service state of live nodes")
    list_inservernode = []
    list_decommissioned = []
    for lip in livenode.values():
        status = lip["adminState"].split(" ")[0]
        if status == "Decommissioned":
            list_decommissioned.append(lip["xferaddr"].split(":")[0])
        else:
            list_inservernode.append(lip["xferaddr"].split(":")[0])
    print()
    print("Decommissioned nodes")
    for i in list_decommissioned:
        print(i)
    print("In-service nodes")
    for i in list_inservernode:
        print(i)
    print()
    print("-" * 29, "HDFS capacity usage", "-" * 29)
    tb = 1024 * 1024 * 1024 * 1024
    print("HDFS total capacity (TB):", result_json["beans"][0]["Total"] // tb, "TB")
    print("HDFS used capacity (TB):", result_json["beans"][0]["Used"] // tb, "TB")
    print("HDFS free capacity (TB):", result_json["beans"][0]["Free"] // tb, "TB")
    print("HDFS usage (percent):", result_json["beans"][0]["PercentUsed"], "%")
    print("-" * 77)

Some useful Hadoop JMX parameters
curl http://192.168.10.2:50070/jmx?qry=<bean-name>

NameNode (port 50070):
Hadoop:service=NameNode,name=RpcActivityForPort8020
Hadoop:service=NameNode,name=JvmMetrics: MemHeapMaxM, MemMaxM
Hadoop:service=NameNode,name=FSNamesystem: CapacityTotal, CapacityTotalGB, CapacityRemaining, CapacityRemainingGB, TotalLoad, FilesTotal
Hadoop:service=NameNode,name=FSNamesystemState: NumLiveDataNodes
Hadoop:service=NameNode,name=NameNodeInfo: LiveNodes
java.lang:type=Runtime: StartTime
Hadoop:service=NameNode,name=FSNamesystemState: TopUserOpCounts (with timestamp)
Hadoop:service=NameNode,name=NameNodeActivity: CreateFileOps, FilesCreated, FilesAppended, FilesRenamed, GetListingOps, DeleteFileOps, FilesDeleted
Hadoop:service=NameNode,name=FSNamesystem: CapacityTotal, CapacityTotalGB, CapacityUsed, CapacityUsedGB, CapacityRemaining, CapacityRemainingGB, CapacityUsedNonDFS

DataNode (port 50075):
Hadoop:service=DataNode,name=DataNodeActivity-slave-50010: BytesWritten, BytesRead, BlocksWritten, BlocksRead, ReadsFromLocalClient, ReadsFromRemoteClient, WritesFromLocalClient, WritesFromRemoteClient, BlocksGetLocalPathInfo
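As the introduction mentions, these bean attributes can also be re-exposed for Prometheus. A minimal sketch of rendering a fetched bean as Prometheus text exposition format (the metric prefix and the sample values are my own illustration; in practice the bean comes from /jmx?qry=Hadoop:service=NameNode,name=FSNamesystem):

```python
def to_prometheus(bean, attrs, prefix="hadoop_namenode"):
    """Render selected numeric bean attributes as Prometheus text-format lines."""
    lines = []
    for attr in attrs:
        if attr in bean:
            # Metric names are lowercased here for simplicity; a real exporter
            # would map CamelCase attribute names to snake_case.
            lines.append("%s_%s %s" % (prefix, attr.lower(), bean[attr]))
    return "\n".join(lines)

# Illustrative values only, not real cluster numbers:
fs_bean = {"CapacityTotal": 219902325555200, "CapacityUsed": 109951162777600}
print(to_prometheus(fs_bean, ["CapacityTotal", "CapacityUsed"]))
# hadoop_namenode_capacitytotal 219902325555200
# hadoop_namenode_capacityused 109951162777600
```

Serving this text from an HTTP endpoint (or writing it via a pushgateway) is enough for Prometheus to scrape it.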