• 07 hdfs cluster setup


    Preface

    Recently I have had a series of environment-setup tasks, so I am recording them here.

    hdfs uses three nodes: 192.168.110.150, 192.168.110.151, 192.168.110.152

    150 is the master, 151 is slave01, 152 is slave02

    Passwordless SSH (trusted shell) has been set up between all three machines.
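The trusted-shell setup itself is not shown in the original post; a minimal sketch of what it typically looks like, assuming root login and that the three hostnames above resolve (e.g. via /etc/hosts):

```shell
# Hypothetical sketch: passwordless SSH from master to every node (including
# itself), which is what start-dfs.sh relies on later.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # generate a key pair once on master
for host in master slave01 slave02; do
    ssh-copy-id "root@$host"                # push the public key to each node
done
```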
     

    hdfs cluster setup

    hdfs uses three nodes: 192.168.110.150, 192.168.110.151, 192.168.110.152

    1. Base environment preparation

    Install the JDK on 192.168.110.150, 192.168.110.151, and 192.168.110.152, and upload the Hadoop package to each node.

    The package comes from Apache Hadoop.
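A sketch of the per-node base setup, assuming the hadoop-2.10.1 tarball has been uploaded to each machine and using the /usr/local/ProgramFiles layout seen throughout this post:

```shell
# Hypothetical sketch: unpack the Apache Hadoop tarball and put its bin/sbin
# directories on PATH; JAVA_HOME matches the hadoop-env.sh value used later.
tar -zxf hadoop-2.10.1.tar.gz -C /usr/local/ProgramFiles
export JAVA_HOME=/usr/local/ProgramFiles/jdk1.8.0_291
export HADOOP_HOME=/usr/local/ProgramFiles/hadoop-2.10.1
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
```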

    2. hdfs configuration

    Update the following configuration files on the master, then scp the distribution to slave01 and slave02.
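Distributing the configured directory can be done with scp, as the post describes; roughly:

```shell
# Hypothetical sketch: copy the whole configured Hadoop directory from master
# to both slaves, keeping the same path on every node.
for host in slave01 slave02; do
    scp -r /usr/local/ProgramFiles/hadoop-2.10.1 "root@$host":/usr/local/ProgramFiles/
done
```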

    Update hdfs-site.xml

    <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>master:50090</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>
    </configuration>

    Update core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/var/hadoop</value>
        </property>
    </configuration>

    Update hadoop-env.sh

    # The java implementation to use.
    export JAVA_HOME=/usr/local/ProgramFiles/jdk1.8.0_291

    Edit the slaves file

    root@master:/usr/local/ProgramFiles/hadoop-2.10.1# cat etc/hadoop/slaves
    slave01
    slave02

    3. Start the cluster

    Run the start-dfs.sh script on the master node, and the hdfs cluster starts up.

    root@master:/usr/local/ProgramFiles/hadoop-2.10.1# ./sbin/start-dfs.sh
    Starting namenodes on [master]
    master: starting namenode, logging to /usr/local/ProgramFiles/hadoop-2.10.1/logs/hadoop-root-namenode-master.out
    slave01: starting datanode, logging to /usr/local/ProgramFiles/hadoop-2.10.1/logs/hadoop-root-datanode-slave01.out
    slave02: starting datanode, logging to /usr/local/ProgramFiles/hadoop-2.10.1/logs/hadoop-root-datanode-slave02.out
    Starting secondary namenodes [master]
    master: starting secondarynamenode, logging to /usr/local/ProgramFiles/hadoop-2.10.1/logs/hadoop-root-secondarynamenode-master.out
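A quick way to confirm the daemons actually came up is to check each node; the expected process names below are an assumption based on the roles configured above:

```shell
# On master, jps should list NameNode and SecondaryNameNode;
# on slave01/slave02 it should list DataNode.
jps
# Cluster-wide view from master: should report 2 live datanodes.
hdfs dfsadmin -report
```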

    Test the cluster

    Tests based on hadoop fs

    root@master:/usr/local/ProgramFiles/hadoop-2.10.1# hadoop fs -ls /
    root@master:/usr/local/ProgramFiles/hadoop-2.10.1# hadoop fs -mkdir /2022-05-21
    root@master:/usr/local/ProgramFiles/hadoop-2.10.1# hadoop fs -ls /
    Found 1 items
    drwxr-xr-x - root supergroup 0 2022-05-21 01:19 /2022-05-21
    root@master:/usr/local/ProgramFiles/hadoop-2.10.1# hadoop fs -put README.txt /2022-05-21/upload.txt
    root@master:/usr/local/ProgramFiles/hadoop-2.10.1# hadoop fs -ls /2022-05-21
    Found 1 items
    -rw-r--r-- 2 root supergroup 1366 2022-05-21 01:20 /2022-05-21/upload.txt
    root@master:/usr/local/ProgramFiles/hadoop-2.10.1# hadoop fs -cat /2022-05-21/upload.txt
    For the latest information about Hadoop, please visit our website at:
    http://hadoop.apache.org/core/
    and our wiki, at:
    http://wiki.apache.org/hadoop/
    This distribution includes cryptographic software. The country in
    which you currently reside may have restrictions on the import,
    possession, use, and/or re-export to another country, of
    encryption software. BEFORE using any encryption software, please
    check your country's laws, regulations and policies concerning the
    import, possession, or use, and re-export of encryption software, to
    see if this is permitted. See for more
    information.
    The U.S. Government Department of Commerce, Bureau of Industry and
    Security (BIS), has classified this software as Export Commodity
    Control Number (ECCN) 5D002.C.1, which includes information security
    software using or performing cryptographic functions with asymmetric
    algorithms. The form and manner of this Apache Software Foundation
    distribution makes it eligible for export under the License Exception
    ENC Technology Software Unrestricted (TSU) exception (see the BIS
    Export Administration Regulations, Section 740.13) for both object
    code and source code.
    The following provides more details on the included cryptographic
    software:
    Hadoop Core uses the SSL libraries from the Jetty project written
    by mortbay.org.
    root@master:/usr/local/ProgramFiles/hadoop-2.10.1#

    The namenode web UI
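The screenshot here did not survive extraction. In Hadoop 2.x the namenode web UI listens on port 50070 by default (dfs.namenode.http-address), so it should be reachable from any machine that resolves "master"; the exact page path below is an assumption:

```shell
# Hypothetical check: fetch the namenode web UI (Hadoop 2.x default port 50070).
curl -s http://master:50070/ | head
```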

  • Original post: https://blog.csdn.net/u011039332/article/details/124899816