[Lab] Hadoop-2.7.2 + zookeeper-3.4.6 Fully Distributed Environment Setup (HDFS and YARN HA) (reprint)


    Hadoop-2.7.2 + Zookeeper-3.4.6 Fully Distributed Environment Setup

    I. Versions

    | Component | Version | Notes |
    | --- | --- | --- |
    | JRE | java version "1.7.0_67"; Java SE Runtime Environment (build 1.7.0_67-b01); Java HotSpot 64-Bit Server VM (build 24.65-b04, mixed mode) | |
    | Hadoop | hadoop-2.7.2.tar.gz | Main distribution package |
    | Zookeeper | zookeeper-3.4.6.tar.gz | Coordination service used for hot failover and for storing YARN state |

    II. Host Planning

    | IP | Host and installed software | Deployed modules | Processes |
    | --- | --- | --- | --- |
    | 172.16.101.55 | sht-sgmhadoopnn-01: hadoop | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager |
    | 172.16.101.56 | sht-sgmhadoopnn-02: hadoop | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager |
    | 172.16.101.58 | sht-sgmhadoopdn-01: hadoop, zookeeper | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
    | 172.16.101.59 | sht-sgmhadoopdn-02: hadoop, zookeeper | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
    | 172.16.101.60 | sht-sgmhadoopdn-03: hadoop, zookeeper | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |

    III. Directory Planning

    | Name | Path |
    | --- | --- |
    | $HADOOP_HOME | /hadoop/hadoop-2.7.2 |
    | Data | $HADOOP_HOME/data |
    | Log | $HADOOP_HOME/logs |

    IV. Common Scripts and Commands

    1. Start the cluster

    start-dfs.sh

    start-yarn.sh

    2. Stop the cluster

    stop-yarn.sh

    stop-dfs.sh

    3. Monitor the cluster

    hdfs dfsadmin -report

    4. Start/stop a single daemon

    hadoop-daemon.sh start|stop namenode|datanode|journalnode

    yarn-daemon.sh start|stop resourcemanager|nodemanager
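    A couple of extra commands that can help when checking the HA pair (a sketch, assuming the nameservice/ID names mycluster, nn1 and nn2 configured later in this guide):

    # show which NameNode is currently active
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2
    # basic health check of a single NameNode
    hdfs haadmin -checkHealth nn1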

    http://blog.chinaunix.net/uid-25723371-id-4943894.html

    V. Environment Preparation

    1. Set the IP address (on all 5 hosts)

    [root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE="eth0"
    BOOTPROTO="static"
    DNS1="172.16.101.63"
    DNS2="172.16.101.64"
    GATEWAY="172.16.101.1"
    HWADDR="00:50:56:82:50:1E"
    IPADDR="172.16.101.55"
    NETMASK="255.255.255.0"
    NM_CONTROLLED="yes"
    ONBOOT="yes"
    TYPE="Ethernet"
    UUID="257c075f-6c6a-47ef-a025-e625367cbd9c"

    Run: service network restart

    Verify: ifconfig

    2. Stop the firewall (on all 5 hosts)

    Run: service iptables stop

    Verify: service iptables status

    3. Disable the firewall at boot (on all 5 hosts)

    Run: chkconfig iptables off

    Verify: chkconfig --list | grep iptables

    4. Set the hostname (on all 5 hosts)

    Run:
    (1) hostname sht-sgmhadoopnn-01

    (2) vi /etc/sysconfig/network

    [root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=sht-sgmhadoopnn-01.telenav.cn
    GATEWAY=172.16.101.1

    5. Bind IPs to hostnames (on all 5 hosts)

    [root@sht-sgmhadoopnn-01 ~]# vi /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    172.16.101.55 sht-sgmhadoopnn-01.telenav.cn sht-sgmhadoopnn-01
    172.16.101.56 sht-sgmhadoopnn-02.telenav.cn sht-sgmhadoopnn-02
    172.16.101.58 sht-sgmhadoopdn-01.telenav.cn sht-sgmhadoopdn-01
    172.16.101.59 sht-sgmhadoopdn-02.telenav.cn sht-sgmhadoopdn-02
    172.16.101.60 sht-sgmhadoopdn-03.telenav.cn sht-sgmhadoopdn-03

    Verify: ping sht-sgmhadoopnn-01

    6. Set up passwordless SSH among the 5 machines
    http://blog.itpub.net/30089851/viewspace-1992210/
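    The linked post has the details; a minimal sketch of the idea, assuming the root account is used throughout as in the rest of this guide (run on each of the 5 hosts):

    ssh-keygen -t rsa                      # accept the defaults; creates /root/.ssh/id_rsa and id_rsa.pub
    ssh-copy-id root@sht-sgmhadoopnn-01
    ssh-copy-id root@sht-sgmhadoopnn-02
    ssh-copy-id root@sht-sgmhadoopdn-01
    ssh-copy-id root@sht-sgmhadoopdn-02
    ssh-copy-id root@sht-sgmhadoopdn-03
    # verify: should log in without a password prompt
    ssh sht-sgmhadoopnn-02 date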

    7. Install the JDK (on all 5 hosts)

    (1) Run:
    [root@sht-sgmhadoopnn-01 ~]# cd /usr/java
    [root@sht-sgmhadoopnn-01 java]# cp /tmp/jdk-7u67-linux-x64.gz ./
    [root@sht-sgmhadoopnn-01 java]# tar -xzvf jdk-7u67-linux-x64.gz
    (2) vi /etc/profile and append:
    export JAVA_HOME=/usr/java/jdk1.7.0_67
    export HADOOP_HOME=/hadoop/hadoop-2.7.2
    export ZOOKEEPER_HOME=/hadoop/zookeeper
    export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
    # HADOOP_HOME and ZOOKEEPER_HOME are configured here ahead of time
    # the lab machines already have jdk1.7.0_67-cloudera installed
    (3) Run: source /etc/profile
    (4) Verify: java -version

    8. Create the base directory (on all 5 hosts)

    mkdir /hadoop

    VI. Install ZooKeeper

    On sht-sgmhadoopdn-01/02/03:

    1. Download and unpack zookeeper-3.4.6.tar.gz

    [root@sht-sgmhadoopdn-01 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
    [root@sht-sgmhadoopdn-02 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
    [root@sht-sgmhadoopdn-03 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
    [root@sht-sgmhadoopdn-01 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
    [root@sht-sgmhadoopdn-02 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
    [root@sht-sgmhadoopdn-03 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
    [root@sht-sgmhadoopdn-01 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper
    [root@sht-sgmhadoopdn-02 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper
    [root@sht-sgmhadoopdn-03 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper

    2. Edit the configuration

    [root@sht-sgmhadoopdn-01 tmp]# cd /hadoop/zookeeper/conf
    [root@sht-sgmhadoopdn-01 conf]# cp zoo_sample.cfg zoo.cfg
    [root@sht-sgmhadoopdn-01 conf]# vi zoo.cfg
    # change dataDir:
    dataDir=/hadoop/zookeeper/data
    # add the following three lines:
    server.1=sht-sgmhadoopdn-01:2888:3888
    server.2=sht-sgmhadoopdn-02:2888:3888
    server.3=sht-sgmhadoopdn-03:2888:3888
    [root@sht-sgmhadoopdn-01 conf]# cd ../
    [root@sht-sgmhadoopdn-01 zookeeper]# mkdir data
    [root@sht-sgmhadoopdn-01 zookeeper]# touch data/myid
    [root@sht-sgmhadoopdn-01 zookeeper]# echo 1 > data/myid
    [root@sht-sgmhadoopdn-01 zookeeper]# more data/myid
    1
    ## on sht-sgmhadoopdn-02/03 make the same changes; only the myid value differs:
    [root@sht-sgmhadoopdn-02 zookeeper]# echo 2 > data/myid
    [root@sht-sgmhadoopdn-03 zookeeper]# echo 3 > data/myid
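    For reference, the resulting zoo.cfg should look roughly like the sketch below; the tickTime/initLimit/syncLimit/clientPort values are the zoo_sample.cfg defaults and are assumed to be left unchanged:

    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/hadoop/zookeeper/data
    clientPort=2181
    server.1=sht-sgmhadoopdn-01:2888:3888
    server.2=sht-sgmhadoopdn-02:2888:3888
    server.3=sht-sgmhadoopdn-03:2888:3888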

    VII. Install Hadoop (HDFS HA + YARN HA)

    # Note for steps 3-7: when SSHed into the Linux hosts with SecureCRT, content copied from Windows to Linux may end up with garbled Chinese; see http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html

    1. Download and unpack hadoop-2.7.2.tar.gz

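    A sketch of this step, assuming the same pattern as the ZooKeeper download above and the hadoop-2.7.2 mirror URL listed in the references at the end:

    [root@sht-sgmhadoopnn-01 tmp]# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
    [root@sht-sgmhadoopnn-01 tmp]# tar -xzvf hadoop-2.7.2.tar.gz
    [root@sht-sgmhadoopnn-01 tmp]# mv hadoop-2.7.2 /hadoop/
    # step 9 below copies the fully configured directory to the other four hosts with scp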

    2. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh

    export JAVA_HOME="/usr/java/jdk1.7.0_67-cloudera"

    3. Edit $HADOOP_HOME/etc/hadoop/core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://mycluster</value>
        </property>
        <property>
            <name>dfs.permissions.superusergroup</name>
            <value>root</value>
        </property>
        <property>
            <name>fs.trash.checkpoint.interval</name>
            <value>0</value>
        </property>
        <property>
            <name>fs.trash.interval</name>
            <value>1440</value>
        </property>
    </configuration>
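    With fs.defaultFS pointing at the logical nameservice, clients address HDFS by that name instead of by a specific NameNode. A quick check once HDFS is up (a sketch):

    hdfs getconf -confKey fs.defaultFS      # should print hdfs://mycluster
    hdfs dfs -ls hdfs://mycluster/          # same as: hdfs dfs -ls /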

    4. Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml

    <configuration>
        <property>
            <name>dfs.webhdfs.enabled</name>
            <value>true</value>
        </property>
        <!-- local directory where the NameNode stores the name table (fsimage); adjust as needed -->
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/hadoop/hadoop-2.7.2/data/dfs/name</value>
        </property>
        <!-- local directory where the NameNode stores the transaction files (edits); adjust as needed -->
        <property>
            <name>dfs.namenode.edits.dir</name>
            <value>${dfs.namenode.name.dir}</value>
        </property>
        <!-- local directory where DataNodes store blocks; adjust as needed -->
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/hadoop/hadoop-2.7.2/data/dfs/data</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>3</value>
        </property>
        <property>
            <name>dfs.blocksize</name>
            <value>268435456</value>
        </property>
        <property>
            <name>dfs.nameservices</name>
            <value>mycluster</value>
        </property>
        <property>
            <name>dfs.ha.namenodes.mycluster</name>
            <value>nn1,nn2</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn1</name>
            <value>sht-sgmhadoopnn-01:8020</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn2</name>
            <value>sht-sgmhadoopnn-02:8020</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.mycluster.nn1</name>
            <value>sht-sgmhadoopnn-01:50070</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.mycluster.nn2</name>
            <value>sht-sgmhadoopnn-02:50070</value>
        </property>
        <property>
            <name>dfs.journalnode.http-address</name>
            <value>0.0.0.0:8480</value>
        </property>
        <property>
            <name>dfs.journalnode.rpc-address</name>
            <value>0.0.0.0:8485</value>
        </property>
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://sht-sgmhadoopdn-01:8485;sht-sgmhadoopdn-02:8485;sht-sgmhadoopdn-03:8485/mycluster</value>
        </property>
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/hadoop/hadoop-2.7.2/data/dfs/jn</value>
        </property>
        <property>
            <name>dfs.client.failover.proxy.provider.mycluster</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>sshfence</value>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/root/.ssh/id_rsa</value>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
        </property>
        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
        </property>
        <property>
            <name>ha.zookeeper.session-timeout.ms</name>
            <value>2000</value>
        </property>
    </configuration>
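    Because fencing uses sshfence with /root/.ssh/id_rsa, each NameNode must be able to SSH to the other without a password; a quick sanity check (a sketch, run from both NameNode hosts):

    [root@sht-sgmhadoopnn-01 ~]# ssh -i /root/.ssh/id_rsa root@sht-sgmhadoopnn-02 hostname
    # should print the remote hostname without asking for a password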

    5. Edit $HADOOP_HOME/etc/hadoop/yarn-env.sh

    # Yarn Daemon Options
    #export YARN_RESOURCEMANAGER_OPTS
    #export YARN_NODEMANAGER_OPTS
    #export YARN_PROXYSERVER_OPTS
    #export HADOOP_JOB_HISTORYSERVER_OPTS

    # Yarn Logs
    export YARN_LOG_DIR="/hadoop/hadoop-2.7.2/logs"

    6. Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml

    [root@sht-sgmhadoopnn-01 hadoop]# cp mapred-site.xml.template mapred-site.xml
    [root@sht-sgmhadoopnn-01 hadoop]# vi mapred-site.xml

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>sht-sgmhadoopnn-01:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>sht-sgmhadoopnn-01:19888</value>
        </property>
    </configuration>
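    mapreduce.jobhistory.address points the JobHistory server at sht-sgmhadoopnn-01, but starting it is not covered later in this guide; a sketch of starting it there once the cluster is up:

    [root@sht-sgmhadoopnn-01 sbin]# mr-jobhistory-daemon.sh start historyserver
    [root@sht-sgmhadoopnn-01 sbin]# jps | grep JobHistoryServer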

    7. Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml

    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <!-- Address where the localizer IPC is. -->
        <property>
            <name>yarn.nodemanager.localizer.address</name>
            <value>0.0.0.0:23344</value>
        </property>
        <!-- NM Webapp address. -->
        <property>
            <name>yarn.nodemanager.webapp.address</name>
            <value>0.0.0.0:23999</value>
        </property>
        <property>
            <name>yarn.resourcemanager.connect.retry-interval.ms</name>
            <value>2000</value>
        </property>
        <property>
            <name>yarn.resourcemanager.ha.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.cluster-id</name>
            <value>yarn-cluster</value>
        </property>
        <property>
            <name>yarn.resourcemanager.ha.rm-ids</name>
            <value>rm1,rm2</value>
        </property>
        <!-- must name the local RM: rm1 on sht-sgmhadoopnn-01, rm2 on sht-sgmhadoopnn-02 -->
        <property>
            <name>yarn.resourcemanager.ha.id</name>
            <value>rm2</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
        </property>
        <property>
            <name>yarn.resourcemanager.recovery.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
            <value>5000</value>
        </property>
        <property>
            <name>yarn.resourcemanager.store.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
        <property>
            <name>yarn.resourcemanager.zk-address</name>
            <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
        </property>
        <property>
            <name>yarn.resourcemanager.zk.state-store.address</name>
            <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address.rm1</name>
            <value>sht-sgmhadoopnn-01:23140</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address.rm2</name>
            <value>sht-sgmhadoopnn-02:23140</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address.rm1</name>
            <value>sht-sgmhadoopnn-01:23130</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address.rm2</name>
            <value>sht-sgmhadoopnn-02:23130</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address.rm1</name>
            <value>sht-sgmhadoopnn-01:23141</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address.rm2</name>
            <value>sht-sgmhadoopnn-02:23141</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
            <value>sht-sgmhadoopnn-01:23125</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
            <value>sht-sgmhadoopnn-02:23125</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address.rm1</name>
            <value>sht-sgmhadoopnn-01:8088</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address.rm2</name>
            <value>sht-sgmhadoopnn-02:8088</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.https.address.rm1</name>
            <value>sht-sgmhadoopnn-01:23189</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.https.address.rm2</name>
            <value>sht-sgmhadoopnn-02:23189</value>
        </property>
    </configuration>

    8. Edit slaves

    [root@sht-sgmhadoopnn-01 hadoop]# vi slaves

    sht-sgmhadoopdn-01

    sht-sgmhadoopdn-02

    sht-sgmhadoopdn-03

    9. Distribute the directory to the other hosts

    [root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopnn-02:/hadoop

    [root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-01:/hadoop

    [root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-02:/hadoop

    [root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-03:/hadoop
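    One caveat when copying the whole directory: yarn-site.xml above pins yarn.resourcemanager.ha.id, and when that property is set it must name the local ResourceManager. The value shown earlier is rm2, so presumably it is adjusted to rm1 on sht-sgmhadoopnn-01 and left as rm2 on sht-sgmhadoopnn-02 after the copy; a sketch:

    # on sht-sgmhadoopnn-01 only
    [root@sht-sgmhadoopnn-01 hadoop]# vi /hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
    # set <value>rm1</value> for yarn.resourcemanager.ha.id; keep rm2 on sht-sgmhadoopnn-02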

    VIII. Start the Cluster

    An alternative startup walkthrough: http://www.micmiu.com/bigdata/hadoop/hadoop2-cluster-ha-setup/

    1. Start ZooKeeper

    Command: ./zkServer.sh start|stop|status

    [root@sht-sgmhadoopdn-01 bin]# ./zkServer.sh start
    JMX enabled by default
    Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@sht-sgmhadoopdn-01 bin]# jps
    2073 QuorumPeerMain
    2106 Jps
    [root@sht-sgmhadoopdn-02 bin]# ./zkServer.sh start
    JMX enabled by default
    Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@sht-sgmhadoopdn-02 bin]# jps
    2073 QuorumPeerMain
    2106 Jps
    [root@sht-sgmhadoopdn-03 bin]# ./zkServer.sh start
    JMX enabled by default
    Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@sht-sgmhadoopdn-03 bin]# jps
    2073 QuorumPeerMain
    2106 Jps
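    Before moving on, it may be worth confirming the ensemble has elected a leader; one host should report leader and the other two follower (a sketch, output will vary):

    [root@sht-sgmhadoopdn-01 bin]# ./zkServer.sh status
    JMX enabled by default
    Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
    Mode: follower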

    2. Start Hadoop (HDFS + YARN)

    a. Before formatting, start the JournalNode process on each JournalNode host

    [root@sht-sgmhadoopdn-01 ~]# cd /hadoop/hadoop-2.7.2/sbin
    [root@sht-sgmhadoopdn-01 sbin]# hadoop-daemon.sh start journalnode
    starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out
    [root@sht-sgmhadoopdn-03 sbin]# jps
    16722 JournalNode
    16775 Jps
    15519 QuorumPeerMain
    [root@sht-sgmhadoopdn-02 ~]# cd /hadoop/hadoop-2.7.2/sbin
    [root@sht-sgmhadoopdn-02 sbin]# hadoop-daemon.sh start journalnode
    starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out
    [root@sht-sgmhadoopdn-03 sbin]# jps
    16722 JournalNode
    16775 Jps
    15519 QuorumPeerMain
    [root@sht-sgmhadoopdn-03 ~]# cd /hadoop/hadoop-2.7.2/sbin
    [root@sht-sgmhadoopdn-03 sbin]# hadoop-daemon.sh start journalnode
    starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out
    [root@sht-sgmhadoopdn-03 sbin]# jps
    16722 JournalNode
    16775 Jps
    15519 QuorumPeerMain

    b. Format the NameNode

    [root@sht-sgmhadoopnn-01 bin]# hadoop namenode -format
    16/02/25 14:05:04 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = sht-sgmhadoopnn-01.telenav.cn/172.16.101.55
    STARTUP_MSG: args = [-format]
    STARTUP_MSG: version = 2.7.2
    STARTUP_MSG: classpath =
    ………………
    ………………
    16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
    16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
    16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
    16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
    16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
    16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
    16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
    16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
    16/02/25 14:05:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache
    16/02/25 14:05:07 INFO util.GSet: VM type = 64-bit
    16/02/25 14:05:07 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
    16/02/25 14:05:07 INFO util.GSet: capacity = 2^15 = 32768 entries
    16/02/25 14:05:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1182930464-172.16.101.55-1456380308394
    16/02/25 14:05:08 INFO common.Storage: Storage directory /hadoop/hadoop-2.7.2/data/dfs/name has been successfully formatted.
    16/02/25 14:05:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    16/02/25 14:05:08 INFO util.ExitUtil: Exiting with status 0
    16/02/25 14:05:08 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at sht-sgmhadoopnn-01.telenav.cn/172.16.101.55
    ************************************************************/

    c. Sync the NameNode metadata

    Sync the metadata from sht-sgmhadoopnn-01 to sht-sgmhadoopnn-02.
    This mainly covers dfs.namenode.name.dir and dfs.namenode.edits.dir; also make sure the shared edits directory (dfs.namenode.shared.edits.dir) contains all of the NameNode's metadata.
    [root@sht-sgmhadoopnn-01 hadoop-2.7.2]# pwd
    /hadoop/hadoop-2.7.2
    [root@sht-sgmhadoopnn-01 hadoop-2.7.2]# scp -r data/ root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2
    seen_txid 100% 2 0.0KB/s 00:00
    fsimage_0000000000000000000 100% 351 0.3KB/s 00:00
    fsimage_0000000000000000000.md5 100% 62 0.1KB/s 00:00
    VERSION 100% 205 0.2KB/s 00:00
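    Copying the data directory with scp works; the built-in alternative is to run the bootstrap command on the standby instead of the scp above (a sketch):

    [root@sht-sgmhadoopnn-02 bin]# hdfs namenode -bootstrapStandby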

    d. Initialize the ZKFC

    [root@sht-sgmhadoopnn-01 bin]# hdfs zkfc -formatZK
    ………………
    ………………
    16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
    16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.dir=/hadoop/hadoop-2.7.2/bin
    16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=2000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@5f4298a5
    16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Opening socket connection to server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181. Will not attempt to authenticate using SASL (unknown error)
    16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, initiating session
    16/02/25 14:14:42 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, sessionid=0x15316c965750000, negotiated timeout=4000
    16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Session connected.
    16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
    16/02/25 14:14:42 INFO zookeeper.ClientCnxn: EventThread shut down
    16/02/25 14:14:42 INFO zookeeper.ZooKeeper: Session: 0x15316c965750000 closed
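    To confirm the znode really exists, the ZooKeeper CLI on any of the ZooKeeper hosts can be used (a sketch):

    [root@sht-sgmhadoopdn-01 bin]# ./zkCli.sh -server sht-sgmhadoopdn-01:2181
    [zk: sht-sgmhadoopdn-01:2181(CONNECTED) 0] ls /hadoop-ha
    [mycluster]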

    e. Start HDFS

    To start the cluster, run start-dfs.sh on sht-sgmhadoopnn-01.

    To stop the cluster, run stop-dfs.sh on sht-sgmhadoopnn-01.

    ##### Cluster startup ############

    [root@sht-sgmhadoopnn-01 sbin]# start-dfs.sh
    16/02/25 14:21:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting namenodes on [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]
    sht-sgmhadoopnn-01: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-01.telenav.cn.out
    sht-sgmhadoopnn-02: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-02.telenav.cn.out
    sht-sgmhadoopdn-01: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-01.telenav.cn.out
    sht-sgmhadoopdn-02: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-02.telenav.cn.out
    sht-sgmhadoopdn-03: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-03.telenav.cn.out
    Starting journal nodes [sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03]
    sht-sgmhadoopdn-01: journalnode running as process 6348. Stop it first.
    sht-sgmhadoopdn-03: journalnode running as process 16722. Stop it first.
    sht-sgmhadoopdn-02: journalnode running as process 7197. Stop it first.
    16/02/25 14:21:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting ZK Failover Controllers on NN hosts [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]
    sht-sgmhadoopnn-01: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-01.telenav.cn.out
    sht-sgmhadoopnn-02: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-02.telenav.cn.out
    You have mail in /var/spool/mail/root

    #### Single-daemon startup ###########

    NameNode(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):

    hadoop-daemon.sh start namenode

    DataNode(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):

    hadoop-daemon.sh start datanode

    JournalNode (sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):

    hadoop-daemon.sh start journalnode

    ZKFC(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):

    hadoop-daemon.sh start zkfc

    f. Verify the namenode, datanode, and zkfc processes

    1) Processes

    [root@sht-sgmhadoopnn-01 sbin]# jps
    12712 Jps
    12593 DFSZKFailoverController
    12278 NameNode
    [root@sht-sgmhadoopnn-02 ~]# jps
    29714 NameNode
    29849 DFSZKFailoverController
    30229 Jps
    [root@sht-sgmhadoopdn-01 ~]# jps
    6348 JournalNode
    8775 Jps
    559 QuorumPeerMain
    8509 DataNode
    [root@sht-sgmhadoopdn-02 ~]# jps
    9430 Jps
    9160 DataNode
    7197 JournalNode
    2073 QuorumPeerMain
    [root@sht-sgmhadoopdn-03 ~]# jps
    16722 JournalNode
    17369 Jps
    15519 QuorumPeerMain
    17214 DataNode

    2) Web UI

    sht-sgmhadoopnn-01:

    http://172.16.101.55:50070/

    sht-sgmhadoopnn-02:

    http://172.16.101.56:50070/

    g. Start the YARN framework

    ##### Cluster startup ############

    1) On sht-sgmhadoopnn-01, start YARN; the script lives in $HADOOP_HOME/sbin

    [root@sht-sgmhadoopnn-01 sbin]# start-yarn.sh
    starting yarn daemons
    starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-01.telenav.cn.out
    sht-sgmhadoopdn-03: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-03.telenav.cn.out
    sht-sgmhadoopdn-02: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-02.telenav.cn.out
    sht-sgmhadoopdn-01: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-01.telenav.cn.out

    2) On the standby sht-sgmhadoopnn-02, start the ResourceManager

    [root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh start resourcemanager
    starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-02.telenav.cn.out

    #### Single-daemon startup ###########

    1) ResourceManager (sht-sgmhadoopnn-01, sht-sgmhadoopnn-02)

    yarn-daemon.sh start resourcemanager

    2) NodeManager (sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03)

    yarn-daemon.sh start nodemanager

    ###### Shutdown #############

    [root@sht-sgmhadoopnn-01 sbin]# stop-yarn.sh

    # this covers the resourcemanager process on the namenode host and the nodemanager processes on the datanodes

    [root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh stop resourcemanager

    h. Verify the resourcemanager and nodemanager processes

    1) Processes

    [root@sht-sgmhadoopnn-01 sbin]# jps
    13611 Jps
    12593 DFSZKFailoverController
    12278 NameNode
    13384 ResourceManager
    [root@sht-sgmhadoopnn-02 sbin]# jps
    32265 ResourceManager
    32304 Jps
    29714 NameNode
    29849 DFSZKFailoverController
    [root@sht-sgmhadoopdn-01 ~]# jps
    6348 JournalNode
    559 QuorumPeerMain
    8509 DataNode
    10286 NodeManager
    10423 Jps
    [root@sht-sgmhadoopdn-02 ~]# jps
    9160 DataNode
    10909 NodeManager
    11937 Jps
    7197 JournalNode
    2073 QuorumPeerMain
    [root@sht-sgmhadoopdn-03 ~]# jps
    18031 Jps
    16722 JournalNode
    17710 NodeManager
    15519 QuorumPeerMain
    17214 DataNode

    2) Web UI

    ResourceManager (Active): http://172.16.101.55:8088

    ResourceManager (Standby): http://172.16.101.56:8088/cluster/cluster
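    The active/standby roles can also be checked from the command line, assuming the rm1/rm2 IDs configured in yarn-site.xml (a sketch):

    [root@sht-sgmhadoopnn-01 ~]# yarn rmadmin -getServiceState rm1
    # prints active or standby
    [root@sht-sgmhadoopnn-01 ~]# yarn rmadmin -getServiceState rm2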

    IX. Monitor the Cluster

    [root@sht-sgmhadoopnn-01 ~]# hdfs dfsadmin -report
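    As a final smoke test, a sample MapReduce job can be submitted to the YARN cluster; the examples jar ships with the 2.7.2 distribution (path assumed under $HADOOP_HOME):

    [root@sht-sgmhadoopnn-01 ~]# hadoop jar /hadoop/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 2 10
    # the job should show up on the active ResourceManager UI at http://172.16.101.55:8088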

    X. Attachments and References

    #http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.5.2.tar.gz

    #http://archive-primary.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.5.2.tar.gz

    hadoop :http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

    zookeeper :http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

    References:

    Hadoop-2.3.0-cdh5.0.1 fully distributed setup (NameNode, ResourceManager HA):

    http://blog.itpub.net/30089851/viewspace-1987620/

    How to fix errors like "The string "--" is not permitted within comments":

    http://blog.csdn.net/free4294/article/details/38681095

    Fixing garbled Chinese in a SecureCRT-connected Linux terminal:

    http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html

    See also: http://blog.itpub.net/30089851/viewspace-1987620/

    Reposted from: http://blog.itpub.net/30089851/viewspace-1994585/

    Original article: https://blog.csdn.net/m0_67393039/article/details/126553368