Published: 2025-06-24 20:02:52  Author: 北方职教升学中心  Views: 636


A is a number indicating which server this is;

In cluster mode, a file named myid is placed in the dataDir directory. It contains a single value: A. When Zookeeper starts, it reads this file and compares the value with the configuration in zoo.cfg to determine which server it is.

4) Configure the zoo.cfg file

(1) Rename zoo_sample.cfg in the /export/service/zookeeper/conf directory to zoo.cfg

mv zoo_sample.cfg zoo.cfg

(2) Open the zoo.cfg file

vim zoo.cfg

Modify the data storage path:

dataDir=/export/service/zookeeper/zkData

Add the following configuration:

#######################cluster##########################
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888

(3) Sync the contents of the /export/service/zookeeper directory to hadoop02 and hadoop03

Create a bin folder under the hadoop user's home directory /home/hadoop

sudo vim  /home/hadoop/bin/xsync
#!/bin/bash
# 1. Check the argument count
if [ $# -lt 1 ]
then
  echo Not Enough Arguments!
  exit;
fi
# 2. Loop over every machine in the cluster
for host in hadoop01 hadoop02 hadoop03
do
  echo ====================  $host  ====================
  # 3. Loop over every path given, sending them one by one
  for file in $@
  do
    # 4. Check that the file exists
    if [ -e $file ]
    then
      # 5. Get the parent directory (resolving symlinks)
      pdir=$(cd -P $(dirname $file); pwd)
      # 6. Get the file name itself
      fname=$(basename $file)
      ssh $host "mkdir -p $pdir"
      rsync -av $pdir/$fname $host:$pdir
    else
      echo $file does not exist!
    fi
  done
done

Grant execute permission

sudo chmod 777 xsync
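The core of xsync is the path resolution in steps 5-6: the file's parent directory is resolved to an absolute path so the same location can be recreated on the remote host. A standalone sketch of just that logic, run against a scratch directory rather than real cluster files:

```shell
# Sketch of xsync's path splitting: resolve a file's absolute parent
# directory (following symlinks with -P), then take its base name.
mkdir -p ./xsync-demo/sub
touch ./xsync-demo/sub/file.txt
file=./xsync-demo/sub/file.txt
pdir=$(cd -P "$(dirname "$file")"; pwd)   # absolute parent directory
fname=$(basename "$file")                 # bare file name
echo "$pdir/$fname"
```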

Write a script to run a command on every cluster node

sudo vim /home/hadoop/bin/xcall
#!/bin/bash
for i in hadoop01 hadoop02 hadoop03
do
    echo --------- $i ----------
    ssh $i "$*"
done

Grant execute permission

sudo chmod 777 xcall
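xcall relies on quoting `"$*"`, which joins all of the script's arguments into a single string so the remote shell receives one complete command line (e.g. `xcall jps -l`). A local sketch of that expansion, with a stand-in function instead of ssh:

```shell
# Sketch of the "$*" behavior xcall depends on: all arguments collapse
# into one space-joined string, handed to the remote shell as a single
# command. join_args is a local stand-in for the ssh call.
join_args() { printf '%s\n' "$*"; }
out=$(join_args jps -l)
echo "$out"
```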

Install and configure Java yourself (required on all three servers)

Zookeeper Installation

1) Cluster planning

Deploy Zookeeper on the hadoop01, hadoop02, and hadoop03 nodes.

5) Cluster operations

(1) Deploy Zookeeper on each of the three nodes: hadoop01, hadoop02, and hadoop03.

xsync /export/service/zookeeper/

(4) Explanation of the zoo.cfg configuration parameters

server.A=B:C:D, where:

Server hadoop01: Zookeeper
Server hadoop02: Zookeeper
Server hadoop03: Zookeeper

2) Extract and install

(1) Extract the Zookeeper archive into the /export/service/ directory

[hadoop@hadoop01 software]$ tar -zxvf apache-zookeeper-3.7.1-bin.tar.gz -C /export/service/

(2) Rename /export/service/apache-zookeeper-3.7.1-bin to zookeeper

[hadoop@hadoop01 service]$ mv apache-zookeeper-3.7.1-bin/ zookeeper

3) Configure the server ID

(1) Create zkData under the /export/service/zookeeper/ directory

mkdir zkData

(2) Create a file named myid in the /export/service/zookeeper/zkData directory

vim myid

Create the myid file inside Linux itself; creating it in Notepad++ is very likely to produce garbled encoding

In the file, write the ID matching this server's entry in zoo.cfg: 1 on hadoop01, 2 on hadoop02, and 3 on hadoop03.
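Since the hostnames in this guide end in the server number, the myid value can be derived mechanically. A sketch under that naming assumption, written against a local ./zkData directory rather than the real /export/service/zookeeper/zkData:

```shell
# Sketch, assuming the hadoop01/02/03 naming from this guide: strip the
# "hadoop0" prefix from the hostname to get the ZooKeeper server ID,
# then write it to myid.
host=hadoop02                  # in practice: host=$(hostname)
mkdir -p ./zkData
echo "${host#hadoop0}" > ./zkData/myid
cat ./zkData/myid
```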

(2) Copy hadoop01's public key to every machine that should allow passwordless login

ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
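The key pair that ssh-copy-id distributes can also be generated non-interactively, which is convenient when scripting the setup. A sketch that writes into a scratch directory instead of ~/.ssh so it is safe to try (-N '' sets an empty passphrase, -f the output path):

```shell
# Sketch: non-interactive RSA key generation, against a scratch directory
# (./ssh-demo) rather than the real ~/.ssh.
mkdir -p ./ssh-demo
ssh-keygen -t rsa -N '' -f ./ssh-demo/id_rsa -q
ls ./ssh-demo
```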

Write the cluster distribution script xsync

Note: scripts placed in the /home/hadoop/bin directory can be run by the hadoop user from anywhere on the system.

Initialize zookeeper failover state (on any node):

hdfs zkfc -formatZK

Verify that zkfc was formatted successfully:

# zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha
[mycluster]
(1) Start Zookeeper on hadoop01, hadoop02, and hadoop03

bin/zkServer.sh start

(2) Check the status

bin/zkServer.sh status

ZK cluster start/stop script zk.sh

vim /home/hadoop/bin/zk.sh
#!/bin/bash
case $1 in
"start"){
    for i in hadoop01 hadoop02 hadoop03
    do
        echo ---------- zookeeper $i start ------------
        ssh $i "/export/service/zookeeper/bin/zkServer.sh start"
    done
};;
"stop"){
    for i in hadoop01 hadoop02 hadoop03
    do
        echo ---------- zookeeper $i stop ------------
        ssh $i "/export/service/zookeeper/bin/zkServer.sh stop"
    done
};;
"status"){
    for i in hadoop01 hadoop02 hadoop03
    do
        echo ---------- zookeeper $i status ------------
        ssh $i "/export/service/zookeeper/bin/zkServer.sh status"
    done
};;
esac

Grant execute permission

sudo chmod 777 zk.sh

Configure Hadoop

Install the package and configure the environment variables yourself

core-site.xml configuration file

<configuration>
    <!-- Default file system: the address of the HDFS cluster -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>
    <!-- Hadoop temporary file directory -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/export/service/tmp</value>
    </property>
    <!-- File buffer size in bytes, controlling the cache used during I/O -->
    <property>
      <name>io.file.buffer.size</name>
      <value>4096</value>
    </property>
    <!-- HDFS high availability: the ZooKeeper quorum that monitors NameNode state -->
    <property>
      <name>ha.zookeeper.quorum</name>
      <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
    </property>
</configuration>
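After editing a Hadoop XML config it is easy to mangle a value, so reading one property back out is a useful check. A sketch with sed against a local stand-in file (the real file would be under /export/service/hadoop/etc/hadoop/):

```shell
# Sketch: pull a single property value out of a Hadoop-style XML config
# with sed; ./core-site.sample.xml is a local stand-in for the real file.
cat > ./core-site.sample.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
EOF
# Match the <name> line, advance to the following <value> line, extract it.
fsdefault=$(sed -n '/<name>fs.defaultFS<\/name>/{n;s:.*<value>\(.*\)</value>.*:\1:p;}' ./core-site.sample.xml)
echo "$fsdefault"
```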
mapred-site.xml configuration file

<configuration>
    <!-- Use YARN as the MapReduce framework -->
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
    <!-- Address of the MapReduce job history server -->
    <property>
      <name>mapreduce.jobhistory.address</name>
      <value>0.0.0.0:10020</value>
    </property>
    <!-- Web UI address of the MapReduce job history server -->
    <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>0.0.0.0:19888</value>
    </property>
</configuration>
hdfs-site.xml configuration file

<configuration>
    <!-- Name of the HDFS cluster (nameservice) -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <!-- The high-availability NameNode IDs in the cluster -->
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2,nn3</value>
    </property>
    <!-- RPC address of each NameNode, used by clients -->
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>hadoop01:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>hadoop02:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn3</name>
      <value>hadoop03:8020</value>
    </property>
    <!-- HTTP address of each NameNode, for the web UI -->
    <property>
      <name>dfs.namenode.http-address.mycluster.nn1</name>
      <value>hadoop01:9870</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn2</name>
      <value>hadoop02:9870</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn3</name>
      <value>hadoop03:9870</value>
    </property>
    <!-- Block replication factor -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
    <!-- Block size in bytes (128 MB here) -->
    <property>
      <name>dfs.blocksize</name>
      <value>134217728</value>
    </property>
    <!-- NameNode storage directory -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property>
    <!-- DataNode storage directory -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file://${hadoop.tmp.dir}/dfs/data</value>
    </property>
    <!-- Shared edits directory for NameNode high availability -->
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/mycluster</value>
    </property>
    <!-- JournalNode edit-log storage directory -->
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/export/service/hadoop/tmp/dfs/journal</value>
    </property>
    <!-- Client failover proxy provider for the HA NameNodes -->
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Enable automatic failover for HDFS -->
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>
    <!-- Fencing method used during failover; sshfence here -->
    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>
    <!-- Private key file used for SSH fencing (the hadoop user's key from this guide) -->
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- SSH connection timeout in milliseconds -->
    <property>
      <name>dfs.ha.fencing.ssh.connect-timeout</name>
      <value>30000</value>
    </property>
</configuration>
yarn-site.xml configuration file

<configuration>
    <!-- Enable ResourceManager high availability -->
    <property>
      <name>yarn.resourcemanager.ha.enabled</name>
      <value>true</value>
    </property>
    <!-- YARN cluster ID -->
    <property>
      <name>yarn.resourcemanager.cluster-id</name>
      <value>cluster1</value>
    </property>
    <!-- Enable ResourceManager recovery -->
    <property>
      <name>yarn.resourcemanager.recovery.enabled</name>
      <value>true</value>
    </property>
    <!-- ResourceManager state store; ZooKeeper is the backend here -->
    <property>
      <name>yarn.resourcemanager.store.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <!-- ResourceManager instance IDs in HA mode -->
    <property>
      <name>yarn.resourcemanager.ha.rm-ids</name>
      <value>rm1,rm2,rm3</value>
    </property>
    <!-- Hostname of each ResourceManager -->
    <property>
      <name>yarn.resourcemanager.hostname.rm1</name>
      <value>hadoop01</value>
    </property>
    <property>
      <name>yarn.resourcemanager.hostname.rm2</name>
      <value>hadoop02</value>
    </property>
    <property>
      <name>yarn.resourcemanager.hostname.rm3</name>
      <value>hadoop03</value>
    </property>
    <!-- Web UI address of each ResourceManager -->
    <property>
      <name>yarn.resourcemanager.webapp.address.rm1</name>
      <value>hadoop01:8088</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.address.rm2</name>
      <value>hadoop02:8088</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.address.rm3</name>
      <value>hadoop03:8088</value>
    </property>
    <!-- ZooKeeper addresses used for YARN HA and state storage -->
    <property>
      <name>hadoop.zk.address</name>
      <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
    </property>
    <!-- Auxiliary service for the NodeManager -->
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <!-- Class implementing the MapReduce shuffle auxiliary service -->
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <!-- Enable YARN log aggregation -->
    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
    <!-- Log retention time in seconds (7 days) -->
    <property>
      <name>yarn.log-aggregation.retain-seconds</name>
      <value>604800</value>
    </property>
</configuration>
workers configuration file

vim /export/service/hadoop/etc/hadoop/workers
hadoop01
hadoop02
hadoop03
Distribute to the other servers

scp /export/service/hadoop/etc/hadoop/* hadoop02:/export/service/hadoop/etc/hadoop/
scp /export/service/hadoop/etc/hadoop/* hadoop03:/export/service/hadoop/etc/hadoop/

Start the Hadoop services

1. Start journalnode on the hadoop01, hadoop02, and hadoop03 nodes

hdfs --daemon start journalnode

Note: if JAVA_HOME cannot be found while Hadoop starts up, configure JAVA_HOME manually in the /export/service/hadoop-3.3.4/etc/hadoop/hadoop-env.sh file

export JAVA_HOME='/export/service/jdk-1.8.0'
3. On the hadoop02 and hadoop03 nodes, run:

hdfs namenode -bootstrapStandby

Access the NameNodes in a browser; at this point every NameNode is in standby state:

http://192.168.174.201:9870/
http://192.168.174.202:9870/
http://192.168.174.203:9870/
and id_rsa.pub (the public key).

B is the address of this server;

C is the port this server uses, as a Follower, to exchange information with the cluster's Leader;

D is the election port: if the cluster's Leader goes down, a new election is needed, and this is the port the servers use to communicate with each other while electing a new Leader.
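The four fields of a server.A=B:C:D line can be pulled apart with plain shell parameter expansion, which is a handy way to double-check a zoo.cfg entry. A sketch using one of the lines from this guide:

```shell
# Sketch: split a server.A=B:C:D line into its fields with parameter
# expansion; the sample line is taken from this guide's zoo.cfg.
line="server.2=hadoop02:2888:3888"
lhs=${line%%=*}          # server.2
rhs=${line#*=}           # hadoop02:2888:3888
A=${lhs#server.}         # server ID (must match that host's myid)
B=${rhs%%:*}             # server address
C_D=${rhs#*:}
C=${C_D%%:*}             # Follower<->Leader data-exchange port
D=${C_D#*:}              # leader-election port
echo "$A $B $C $D"
```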

Change the server hostname

vim /etc/hostname

Configure the host mappings (use your own actual IPs)

sudo vim /etc/hosts

Append the following:

192.168.174.201 hadoop01
192.168.174.202 hadoop02
192.168.174.203 hadoop03
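Appending the same lines twice leaves duplicate entries in /etc/hosts, so an idempotent append is safer when the step may be re-run. A sketch against a local stand-in file rather than the real /etc/hosts (which would need sudo):

```shell
# Idempotent sketch of the append above: add each mapping only if it is
# not already present. ./hosts.demo stands in for /etc/hosts.
hosts_file=./hosts.demo
touch "$hosts_file"
for entry in "192.168.174.201 hadoop01" "192.168.174.202 hadoop02" "192.168.174.203 hadoop03"
do
    grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
cat "$hosts_file"
```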

SSH passwordless login configuration

(1) Generate the public and private keys on hadoop01 (repeat this on every machine):

ssh-keygen -t rsa

Then press Enter three times; two files are generated: id_rsa (the private key) and id_rsa.pub (the public key).

Start all the remaining services, including zkfc:

start-all.sh
Write a one-command Hadoop start/stop script
sudo vim /home/hadoop/bin/hdp.sh
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit ;
fi
case $1 in
"start")
        echo " =================== Starting the hadoop cluster ==================="
        echo " --------------- starting hdfs ---------------"
        ssh hadoop01 "/export/service/hadoop-3.3.4/sbin/start-dfs.sh"
        echo " --------------- starting yarn ---------------"
        ssh hadoop02 "/export/service/hadoop-3.3.4/sbin/start-yarn.sh"
        echo " --------------- starting historyserver ---------------"
        ssh hadoop03 "/export/service/hadoop-3.3.4/bin/mapred --daemon start historyserver"
;;
"stop")
        echo " =================== Stopping the hadoop cluster ==================="
        echo " --------------- stopping historyserver ---------------"
        ssh hadoop03 "/export/service/hadoop-3.3.4/bin/mapred --daemon stop historyserver"
        echo " --------------- stopping yarn ---------------"
        ssh hadoop02 "/export/service/hadoop-3.3.4/sbin/stop-yarn.sh"
        echo " --------------- stopping hdfs ---------------"
        ssh hadoop01 "/export/service/hadoop-3.3.4/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac
xcall jps

Grant execute permission

sudo chmod 777 hdp.sh

!! Note: Hadoop must be shut down before powering off the machines; otherwise the namenode and datanode will very likely be corrupted on the next startup

Format and start the namenode

Format the namenode on hadoop01:

hdfs namenode -format

If journalnode started normally but namenode formatting fails, it is most likely because the /export/service/hadoop-3.3.4/tmp directory lacks permissions; grant permissions on that directory:

sudo chmod -R 777 /export/service/hadoop-3.3.4/tmp

Start the namenode on hadoop01:

hdfs --daemon start namenode

Sync the namenode data on hadoop01 to the other NameNode nodes by running the bootstrapStandby command above on hadoop02 and hadoop03.