• hadoop-2.6.4 cluster build from source on Alibaba Cloud and Tencent Cloud


    Building a Hadoop cluster from source on Tencent Cloud and Alibaba Cloud

    Environment preparation

    Alibaba Cloud instance:

    [hadoop@lizer_ali ~]$ uname -a
    Linux lizer_ali 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23 03:35:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    [hadoop@lizer_ali ~]$ head -n 1 /etc/issue
    CentOS release 6.5 (Final)
    [hadoop@lizer_ali ~]$ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
          1  Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
    [hadoop@lizer_ali ~]$ getconf LONG_BIT
    64
    [hadoop@lizer_ali ~]$ cat /proc/meminfo
    MemTotal:        1018508 kB
    MemFree:          353912 kB
    

    Tencent Cloud instance:

    [hadoop@lizer_tx ~]$ uname -a
    Linux lizer_tx 2.6.32-573.18.1.el6.x86_64 #1 SMP Tue Feb 9 22:46:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    [hadoop@lizer_tx ~]$ head -n 1 /etc/issue
    CentOS release 6.7 (Final)
    [hadoop@lizer_tx ~]$ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
          1  Intel(R) Xeon(R) CPU E5-26xx v3
    [hadoop@lizer_tx ~]$ getconf LONG_BIT
    64
    [hadoop@lizer_tx ~]$ cat /proc/meminfo
    MemTotal:        1020224 kB
    MemFree:          688488 kB
    

    Create a user

    useradd hadoop
    passwd hadoop

    Install JDK 1.7:

    Download: http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html#jdk-7u80-oth-JPR

    wget http://download.oracle.com/otn/java/jdk/7u80-b15/jdk-7u80-linux-x64.tar.gz?AuthParam=1469844164_7ce09e1f99570835183215c3510e95e0

    mv 'jdk-7u80-linux-x64.tar.gz?AuthParam=1469844164_7ce09e1f99570835183215c3510e95e0' jdk-7u80-linux-x64.tar.gz

    Unpack the JDK:
    tar zxf jdk-7u80-linux-x64.tar.gz -C /opt/

    Configure environment variables:

    vim /etc/profile

    export JAVA_HOME=/opt/jdk1.7.0_80
    export JRE_HOME=/opt/jdk1.7.0_80/jre
    export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
    export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
    

    Apply the changes:
    source /etc/profile
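
    A quick check that the JDK is now on the PATH (the version string is what 7u80 should print):

    java -version
    # java version "1.7.0_80"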

    Packages required to build hadoop 2.6.4:

    yum install gcc cmake gcc-c++

    Install Maven

    wget http://www-eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
    Maven installation reference: http://www.blogjava.net/caojianhua/archive/2011/04/02/347559.html

    tar zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/

    vim /etc/profile

    export MAVEN_HOME=/usr/local/apache-maven-3.3.9
    export PATH=$PATH:$MAVEN_HOME/bin
    

    source /etc/profile

    [root@lizer_ali hadoop]# mvn -v
    Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-11T00:41:47+08:00)
    Maven home: /usr/local/apache-maven-3.3.9
    Java version: 1.7.0_80, vendor: Oracle Corporation
    Java home: /opt/jdk1.7.0_80/jre
    Default locale: en_US, platform encoding: UTF-8
    OS name: "linux", version: "2.6.32-573.22.1.el6.x86_64", arch: "amd64", family: "unix"
    

    Install protobuf

    Version protobuf-2.5.0 is required:
    wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz

    cd protobuf-2.5.0/
    ./configure --prefix=/usr/local/protobuf-2.5.0
    make && make install

    vim /etc/profile

    export PROTOBUF=/usr/local/protobuf-2.5.0
    export PATH=$PROTOBUF/bin:$PATH
    

    protoc --version

    Install Ant

    wget http://www-eu.apache.org/dist//ant/binaries/apache-ant-1.9.7-bin.tar.gz

    tar zxf apache-ant-1.9.7-bin.tar.gz -C /usr/local/

    vim /etc/profile

    export ANT_HOME=/usr/local/apache-ant-1.9.7
    export PATH=$PATH:$ANT_HOME/bin
    

    source /etc/profile

    ant -version
    Apache Ant(TM) version 1.9.7 compiled on April 9 2016

    yum install autoconf automake libtool
    yum install openssl-devel
    

    Install FindBugs

    http://findbugs.sourceforge.net/downloads.html

    wget http://prdownloads.sourceforge.net/findbugs/findbugs-3.0.1.tar.gz?download
    mv findbugs-3.0.1.tar.gz?download findbugs-3.0.1.tar.gz

    tar zxf findbugs-3.0.1.tar.gz -C /usr/local/

    vim /etc/profile

    export FINDBUGS_HOME=/usr/local/findbugs-3.0.1
    export PATH=$FINDBUGS_HOME/bin:$PATH

    findbugs -version
    

    Build and install Hadoop:

    Download Hadoop: http://hadoop.apache.org/releases.html

    wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.6.4/hadoop-2.6.4-src.tar.gz

    tar zxf hadoop-2.6.4-src.tar.gz
    cd hadoop-2.6.4-src

    Read BUILDING.txt to see how to build and install:
    more BUILDING.txt

    mvn clean package -Pdist,native,docs -DskipTests -Dtar
    The build downloads a large number of dependencies, so expect a long wait. When every Hadoop module reports SUCCESS, the build has finished.

    Some downloads may stall. Re-run the command above, or follow the hint in the error output to https://repo.maven.apache.org/maven2, download the artifact by hand, and put it in the expected location, as sketched below.
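
    A minimal sketch of placing an artifact by hand (the avro jar below is only a hypothetical example; substitute whatever artifact the error names). Maven's local repository under ~/.m2/repository mirrors the groupId/artifactId/version layout of repo.maven.apache.org/maven2:

    # Hypothetical example: fetch a stalled artifact directly into the local repo
    mkdir -p ~/.m2/repository/org/apache/avro/avro/1.7.4
    wget -P ~/.m2/repository/org/apache/avro/avro/1.7.4 \
        https://repo.maven.apache.org/maven2/org/apache/avro/avro/1.7.4/avro-1.7.4.jar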

    Error 1:

    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-common: An Ant BuildException has occured: input file /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/findbugsXml.xml does not exist
    [ERROR] around Ant part ...... @ 44:256 in /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/antrun/build-main.xml

    Solution 1:
    Reference: http://www.itnose.net/detail/6143808.html
    Drop the docs profile from the command and run it again:
    mvn package -Pdist,native -DskipTests -Dtar

    Error 2:

    [INFO] Executing tasks
    main:
        [mkdir] Created dir: /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads
          [get] Getting: http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.41/bin/apache-tomcat-6.0.41.tar.gz
          [get] To: /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads/apache-tomcat-6.0.41.tar.gz

    Solution 2:
    The build hangs here because the Tomcat tarball cannot be downloaded from this network. Download it elsewhere and upload it to the path shown above, as sketched below.
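
    A sketch of the workaround, using the URL from the log above:

    # Fetch the Tomcat tarball by hand and drop it where the build expects it
    wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.41/bin/apache-tomcat-6.0.41.tar.gz
    cp apache-tomcat-6.0.41.tar.gz /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads/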

    Error 3:

    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-hdfs: MavenReportException: Error while creating archive:
    [ERROR] ExcludePrivateAnnotationsStandardDoclet
    [ERROR] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000f31a4000, 130400256, 0) failed; error='Cannot allocate memory' (errno=12)
    [ERROR] #
    [ERROR] # There is insufficient memory for the Java Runtime Environment to continue.
    [ERROR] # Native memory allocation (malloc) failed to allocate 130400256 bytes for committing reserved memory.
    [ERROR] # An error report file with more information is saved as:
    [ERROR] # /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs/target/hs_err_pid24729.log
    [ERROR]
    [ERROR] Error occurred during initialization of VM, try to reduce the Java heap size for the MAVEN_OPTS environnement variable using -Xms: and -Xmx:.
    [ERROR] Or, try to reduce the Java heap size for the Javadoc goal using -Dminmemory= and -Dmaxmemory=.

    Solution 3:
    The JVM ran out of memory; the instance has only 1 GB of RAM and no swap. Add a 2 GB swap file:

    dd if=/dev/zero of=/home/swap bs=512 count=4096000
    bs is the block size (bs=512 means 512-byte blocks) and count is the number of blocks, so this creates a zero-filled file of about 2 GB at /home/swap; the of= path can be changed as needed.

    Check the current memory and swap sizes:
    free -m

    Format and enable the swap file:
    mkswap /home/swap
    swapon /home/swap

    Check that the swap is active:
    swapon -s

    Mount it automatically at boot:
    vim /etc/fstab
    /home/swap swap swap defaults 0 0

    To disable the swap file later:
    swapoff /home/swap
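
    Alternatively, as the error message itself suggests, capping Maven's heap may avoid the allocation failure on a 1 GB instance (the sizes below are an assumption, not from the original run):

    # Cap the Maven JVM heap before re-running the build
    export MAVEN_OPTS="-Xms256m -Xmx512m"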

    Error 4:

    main:
        [mkdir] Created dir: /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads
          [get] Getting: http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.41/bin/apache-tomcat-6.0.41.tar.gz
          [get] To: /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/apache-tomcat-6.0.41.tar.gz

    Solution 4:
    The same network problem as above; reuse the tarball already fetched for hadoop-kms:
    cp /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads/apache-tomcat-6.0.41.tar.gz /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/

    The build now completes successfully:

    [INFO] Apache Hadoop Gridmix .............................. SUCCESS [  6.239 s]
    [INFO] Apache Hadoop Data Join ............................ SUCCESS [  4.070 s]
    [INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  3.304 s]
    [INFO] Apache Hadoop Extras ............................... SUCCESS [  4.653 s]
    [INFO] Apache Hadoop Pipes ................................ SUCCESS [  8.279 s]
    [INFO] Apache Hadoop OpenStack support .................... SUCCESS [  7.736 s]
    [INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [06:22 min]
    [INFO] Apache Hadoop Client ............................... SUCCESS [  9.608 s]
    [INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.258 s]
    [INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [  6.721 s]
    [INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 15.171 s]
    [INFO] Apache Hadoop Tools ................................ SUCCESS [  0.022 s]
    [INFO] Apache Hadoop Distribution ......................... SUCCESS [ 37.343 s]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 31:45 min
    [INFO] Finished at: 2016-07-30T23:59:50+08:00
    [INFO] Final Memory: 101M/241M
    [INFO] ------------------------------------------------------------------------
    

    hadoop-dist/target/ now contains the built distribution.

    Copy it to the hadoop user's home directory (run from the source root):
    cp -r hadoop-dist/target/hadoop-2.6.4 ~/

    Configuration

    Passwordless SSH between the two machines

    ssh-keygen
    Copy each machine's public key to the other machine, then append it:
    cat id_rsa_else.pub >> authorized_keys
    chmod 600 authorized_keys
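
    The same result with less typing, assuming ssh-copy-id is available (it ships with OpenSSH on CentOS):

    # On hadoop0: push the hadoop user's key to hadoop1
    ssh-copy-id hadoop@hadoop1
    # On hadoop1: push the key back to hadoop0
    ssh-copy-id hadoop@hadoop0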

    Network plan:

    hadoop0 114.215.92.77 master
    hadoop1 123.206.33.182 slave

    Configure hosts:

    vim /etc/hosts

    123.206.33.182 hadoop1 tx lizer_tx
    114.215.92.77 hadoop0 ali lizer_ali

    Configure environment variables:

    vim /etc/profile

    export HADOOP_HOME=/home/hadoop/hadoop-2.6.4
    export PATH=$HADOOP_HOME/bin:$PATH
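
    A quick sanity check after sourcing the profile (hadoop version is part of the stock CLI):

    source /etc/profile
    hadoop version    # should report Hadoop 2.6.4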

    Hadoop configuration

    The configuration files live in $HADOOP_HOME/etc/hadoop/.
    Modify the following files:

    vim hadoop-env.sh
    export JAVA_HOME=/opt/jdk1.7.0_80

    vim yarn-env.sh
    export JAVA_HOME=/opt/jdk1.7.0_80

    vim slaves (there is no longer a masters config file in this directory)
    hadoop1

    vim core-site.xml

    <configuration>
            <property>
                    <name>hadoop.tmp.dir</name>
                    <value>/home/hadoop/hadoop/tmp</value>
                    <description>A base for other temporary directories.</description>
            </property>
            <property>
                    <name>fs.defaultFS</name>
                    <value>hdfs://hadoop0:9000</value>
            </property>
            <property>
                    <name>io.file.buffer.size</name>
                    <value>4096</value>
            </property>
    </configuration>
    

    vim hdfs-site.xml

    <configuration>
            <property>
                    <name>dfs.http.address</name>
                    <value>hadoop0:50070</value>
            </property>
            <property>
                    <name>dfs.secondary.http.address</name>
                    <value>hadoop0:50090</value>
            </property>
            <property>
                    <name>dfs.namenode.name.dir</name>
                    <value>/home/hadoop/hadoop/name</value>
            </property>
            <property>
                    <name>dfs.datanode.data.dir</name>
                    <value>/home/hadoop/hadoop/data</value>
            </property>
            <property>
                    <name>dfs.replication</name>
                    <value>1</value>
            </property>
            <property>
                    <name>dfs.nameservices</name>
                    <value>hadoop0</value>
            </property>
            <property>
                    <name>dfs.namenode.secondary.http-address</name>
                    <value>hadoop0:50090</value>
            </property>
            <property>
                    <name>dfs.webhdfs.enabled</name>
                    <value>true</value>
            </property>
            <property>
                    <name>dfs.permissions</name>
                    <value>false</value>
            </property>
    </configuration>
    

    cp mapred-site.xml.template mapred-site.xml

    vim mapred-site.xml

    <configuration>
            <property>
                    <name>mapreduce.framework.name</name>
                    <value>yarn</value>
                    <final>true</final>
            </property>
            <property>
                    <name>mapreduce.jobtracker.http.address</name>
                    <value>hadoop0:50030</value>
            </property>
            <property>
                    <name>mapreduce.jobhistory.address</name>
                    <value>hadoop0:10020</value>
            </property>
            <property>
                    <name>mapreduce.jobhistory.webapp.address</name>
                    <value>hadoop0:19888</value>
            </property>
            <property>
                    <name>mapred.job.tracker</name>
                    <value>hadoop0:9001</value>
            </property>
    </configuration>
    

    vim yarn-site.xml

    <configuration>
            <property>
                    <name>yarn.resourcemanager.hostname</name>
                    <value>hadoop0</value>
            </property>
            <property>
                    <name>yarn.nodemanager.aux-services</name>
                    <value>mapreduce_shuffle</value>
            </property>
            <property>
                    <name>yarn.resourcemanager.address</name>
                    <value>hadoop0:8032</value>
            </property>
            <property>
                    <name>yarn.resourcemanager.scheduler.address</name>
                    <value>hadoop0:8030</value>
            </property>
            <property>
                    <name>yarn.resourcemanager.resource-tracker.address</name>
                    <value>hadoop0:8031</value>
            </property>
            <property>
                    <name>yarn.resourcemanager.admin.address</name>
                    <value>hadoop0:8033</value>
            </property>
            <property>
                    <name>yarn.resourcemanager.webapp.address</name>
                    <value>hadoop0:8088</value>
            </property>
    </configuration>
    

    vim master
    hadoop0

    scp -r /home/hadoop/hadoop-2.6.4/etc/hadoop/* tx:~/hadoop-2.6.4/etc/hadoop/

    Starting and stopping Hadoop

    Reference: http://my.oschina.net/penngo/blog/653049

    bin/hdfs namenode -format
    sbin/start-dfs.sh
    sbin/stop-dfs.sh
    sbin/start-yarn.sh
    sbin/stop-yarn.sh
    sbin/mr-jobhistory-daemon.sh start historyserver
    sbin/mr-jobhistory-daemon.sh stop historyserver
    sbin/hadoop-daemon.sh start secondarynamenode
    sbin/hadoop-daemon.sh stop secondarynamenode
    

    [hadoop@lizer_ali hadoop-2.6.4]$ jps
    3099 ResourceManager
    3430 SecondaryNameNode
    2879 NameNode
    3470 Jps
    3382 JobHistoryServer

    [hadoop@lizer_tx ~]$ jps
    9757 DataNode
    9853 NodeManager
    10064 Jps

    Check the status of the cluster nodes:
    bin/hadoop dfsadmin -report

    YARN ResourceManager web UI (cluster and applications):
    http://114.215.92.77:8088/cluster

    HDFS NameNode web UI:
    http://114.215.92.77:50070/dfshealth.html#tab-overview

    Create a directory in HDFS

    bin/hdfs dfs -mkdir -p input
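
    As an optional smoke test, the grep example from the SingleCluster guide referenced below can be run against that directory (a sketch, run from $HADOOP_HOME; the examples jar path matches the 2.6.4 distribution layout):

    # Copy some input into HDFS, run the example job, and read the result back
    bin/hdfs dfs -put etc/hadoop/*.xml input
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar grep input output 'dfs[a-z.]+'
    bin/hdfs dfs -cat output/*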

    Reference documentation:

    http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-common/ClusterSetup.html

    http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-common/SingleCluster.html

  • Original post: https://blog.csdn.net/m0_52789121/article/details/126366231