Based on Hadoop 3.1.4
I. Prepare the required files
1. The compiled hadoop-3.1.4 package — link: https://pan.baidu.com/s/1tKLDTRcwSnAptjhKZiwAKg  code: ekvc
2. A JDK — link: https://pan.baidu.com/s/18JtAWbVcamd2J_oIeSVzKw  code: bmny
3. The VMware installer — link: https://pan.baidu.com/s/1YxDntBWSCEnN9mTYlH0FUA  code: uhsj
4. A VMware license — link: https://pan.baidu.com/s/10CsLc-nJXnH5V9IMP-KZeg  code: r5y5
5. A Linux ISO — see the image download page
II. Setup
1. Install the virtual machines (search for a guide on your own).
2. Configure a static IP:
cd /etc/sysconfig/network-scripts/

Check your own machine for the correct IP, gateway, and netmask, then edit the interface file:

### static IP configuration
IPADDR=192.168.109.103   ## the IP we want to assign
NETMASK=255.255.255.0
GATEWAY=192.168.109.2
DNS1=8.8.8.8

3. Install the JDK on Linux. First remove the OpenJDK that ships with the distribution:
rpm -qa | grep jdk

Do not remove the packages with the .noarch suffix.

rpm -e --nodeps XXX
tar -zxvf xxx.tar.gz
vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.8.0_361
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

source /etc/profile

4. Disable the firewall:
systemctl stop firewalld.service
systemctl disable firewalld.service

Cluster plan:
hadoop0: namenode, datanode, resourcemanager, nodemanager
hadoop1: secondarynamenode, datanode, nodemanager
hadoop2: datanode, nodemanager
5. Configure the hostname and the hosts file:

hostnamectl set-hostname hadoop0
vim /etc/hosts

192.168.109.101 hadoop0
192.168.109.102 hadoop1
192.168.109.103 hadoop2

All three machines need the full set of hosts entries; otherwise starting the secondarynamenode later will fail.
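Since all three machines must carry identical entries, it helps to generate the block once instead of typing it three times. A minimal sketch (`hosts_block` is a hypothetical helper, not part of the guide):

```shell
#!/usr/bin/env bash
# Emit the three hosts entries used throughout this guide,
# so the same text can be appended on every node.
hosts_block() {
  cat <<'EOF'
192.168.109.101 hadoop0
192.168.109.102 hadoop1
192.168.109.103 hadoop2
EOF
}

# Append locally, then repeat on (or copy to) hadoop1 and hadoop2:
# hosts_block >> /etc/hosts
```

The append line is left commented out so the sketch is safe to run as-is.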
6. Passwordless SSH login:

cd /root/.ssh
If .ssh does not exist, run: mkdir -p /root/.ssh

Generate a key pair:

ssh-keygen -t dsa
cd /root/.ssh
cat id_dsa.pub >> authorized_keys

To explain: we first generate a key pair, then append the public key to this machine's own authorized keys. Repeat the steps above on the other two machines, and also copy hadoop0's id_dsa.pub into the authorized_keys of hadoop1 and hadoop2. At that point hadoop0 can log in to hadoop0, hadoop1, and hadoop2 without a password:
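The copy-the-public-key-around step can also be done with ssh-copy-id, which appends the key to the remote authorized_keys for you. A sketch under the assumption that the hosts file from step 5 is in place (`push_keys` and the HOSTS list are illustrative, not from the guide):

```shell
#!/usr/bin/env bash
# Push this machine's public key to every node; ssh-copy-id prompts for
# each root password once, after which logins are passwordless.
HOSTS="hadoop0 hadoop1 hadoop2"

push_keys() {
  for h in $HOSTS; do
    ssh-copy-id -i /root/.ssh/id_dsa.pub "root@$h"
  done
}

# Run on each of the three machines so every node can reach every other:
# push_keys
```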
ssh hadoop1

7. Unified working directories:

mkdir -p /export/server/
mkdir -p /export/data/
mkdir -p /export/software/

8. Hadoop environment variables:
vim /etc/profile

export HADOOP_HOME=/export/server/hadoop-3.1.4
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

source /etc/profile

9. Hadoop configuration files. Do everything on hadoop0; we will copy the whole directory to the other machines afterwards.
vim etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_361
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
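The five run-as-user lines can also be appended without opening an editor, which is handy if you script the whole setup. A sketch; `append_users` and the HADOOP_ENV default are assumptions based on this guide's directory layout:

```shell
#!/usr/bin/env bash
# Append the run-as-root variables to hadoop-env.sh, skipping any that are
# already set so the script is safe to run more than once.
HADOOP_ENV=${HADOOP_ENV:-/export/server/hadoop-3.1.4/etc/hadoop/hadoop-env.sh}

append_users() {
  for v in HDFS_NAMENODE_USER HDFS_DATANODE_USER HDFS_SECONDARYNAMENODE_USER \
           YARN_RESOURCEMANAGER_USER YARN_NODEMANAGER_USER; do
    grep -q "^export $v=" "$HADOOP_ENV" || echo "export $v=root" >> "$HADOOP_ENV"
  done
}
```

The `grep -q || echo` guard keeps the file from accumulating duplicate lines on repeated runs.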
vim etc/hadoop/core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop0:8020</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/export/data/hadoop-3.1.4</value>
</property>
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>root</value>
</property>

vim etc/hadoop/hdfs-site.xml

<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>hadoop1:9868</value>
</property>
vim etc/hadoop/mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>

vim etc/hadoop/workers

hadoop0
hadoop1
hadoop2
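Step 9 noted that everything is configured on hadoop0 and then copied to the other machines in full. Once all the files in this step are edited, that copy can be scripted; a sketch assuming the passwordless SSH from step 6 (`worker_hosts` and `sync_to_workers` are hypothetical names):

```shell
#!/usr/bin/env bash
# Copy the fully configured Hadoop installation and the profile from
# hadoop0 to the worker nodes.
worker_hosts() { echo "hadoop1 hadoop2"; }

sync_to_workers() {
  for h in $(worker_hosts); do
    scp -r /export/server/hadoop-3.1.4 "root@$h:/export/server/"
    scp /etc/profile "root@$h:/etc/profile"
  done
  # Log in again (or run `source /etc/profile`) on each node afterwards.
}

# sync_to_workers
```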
vim etc/hadoop/yarn-site.xml

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop0</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>

10. Initialize HDFS. Never format more than once:
hdfs namenode -format

2023-03-26 00:12:47,011 INFO common.Storage: Storage directory /export/data/hadoop-3.1.4/dfs/name has been successfully formatted.

total 16
-rw-r--r-- 1 root root 391 Mar 26 00:12 fsimage_0000000000000000000
-rw-r--r-- 1 root root  62 Mar 26 00:12 fsimage_0000000000000000000.md5
-rw-r--r-- 1 root root   2 Mar 26 00:12 seen_txid
-rw-r--r-- 1 root root 220 Mar 26 00:12 VERSION

If files like these appear, the format succeeded.
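The "never format more than once" warning exists because reformatting generates a new clusterID on the namenode, and datanodes that registered with the old one will refuse to join. A small guard can make the mistake harder; `safe_format` and the NAME_DIR default are assumptions based on the hadoop.tmp.dir set above:

```shell
#!/usr/bin/env bash
# Refuse to format if namenode metadata already exists under NAME_DIR,
# instead of silently generating a new clusterID.
NAME_DIR=${NAME_DIR:-/export/data/hadoop-3.1.4/dfs/name}

safe_format() {
  if [ -d "$NAME_DIR/current" ]; then
    echo "refusing to format: $NAME_DIR/current already exists" >&2
    return 1
  fi
  hdfs namenode -format
}
```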
11. Start the cluster. Start the daemons on each machine according to our plan:

hadoop0: namenode, datanode, resourcemanager, nodemanager
hadoop1: secondarynamenode, datanode, nodemanager
hadoop2: datanode, nodemanager

hdfs --daemon start namenode|datanode|secondarynamenode
hdfs --daemon stop namenode|datanode|secondarynamenode
yarn --daemon start resourcemanager|nodemanager
yarn --daemon stop resourcemanager|nodemanager
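Rather than running the per-daemon commands by hand on each box, the plan above can be encoded once and driven from hadoop0 over SSH. A sketch; `daemons_for` and `start_cluster` are hypothetical helpers, and the loop assumes the passwordless login from step 6:

```shell
#!/usr/bin/env bash
# Start every planned daemon across the cluster from hadoop0.
daemons_for() {
  case "$1" in
    hadoop0) echo "namenode datanode resourcemanager nodemanager" ;;
    hadoop1) echo "secondarynamenode datanode nodemanager" ;;
    hadoop2) echo "datanode nodemanager" ;;
  esac
}

start_cluster() {
  for host in hadoop0 hadoop1 hadoop2; do
    for d in $(daemons_for "$host"); do
      # HDFS daemons go through `hdfs`, YARN daemons through `yarn`.
      case "$d" in
        namenode|datanode|secondarynamenode) cmd=hdfs ;;
        *) cmd=yarn ;;
      esac
      ssh "root@$host" "$cmd --daemon start $d"
    done
  done
}

# start_cluster   # afterwards, run `jps` on each node to confirm the processes
```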