Docker Compose Tutorial (Installation, Usage, Quick Start)


    Contents

    I. Overview of Docker Compose

    II. Installing docker-compose
        1. Install the docker-compose binary from GitHub
        2. Install via pip

    III. Docker Compose in Practice
        1. MySQL example
            1.1 MySQL with docker run
            1.2 mysql-compose.yml
        2. Single-node CDH
            2.1 Base image Dockerfile
            2.2 CDH image Dockerfile
            2.3 Building the images
            2.4 Creating a bridge network
            2.5 CDH compose file
            2.6 Starting CDH

    I. Overview of Docker Compose


    The Compose project is an official open-source Docker project for quickly orchestrating groups of Docker containers. With a Dockerfile, as introduced earlier, it is easy to define a single application container. In day-to-day development, however, you often need several containers working together to get a job done. A web project, for example, usually needs a backend database container alongside the web service container; likewise, a distributed application typically consists of several services, each deployed as multiple instances. If every service had to be started and stopped by hand, the inefficiency and maintenance burden would be obvious. What is needed is a tool that manages a group of related application containers as a unit, and that tool is Docker Compose.

    Compose has two key concepts:

    • Project: a complete business unit made up of a group of associated application containers, defined in a docker-compose.yml file.
    • Service: a container for one application; in practice a service can include several container instances running the same image.

    All the YAML files in the directory where docker compose runs form one project; a project contains multiple services, and each service defines the image, parameters, and dependencies for the containers it runs. A single service can comprise multiple container instances. In short, docker-compose is an orchestration tool for Docker containers, built mainly to manage multiple containers that depend on one another.
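As a minimal illustration of these two concepts, the sketch below defines one project containing two hypothetical services, web and db. The image names, port, and password here are placeholders for illustration, not part of this article's setup:

```yaml
# docker-compose.yml: one project containing two services
version: "3"
services:
  web:
    image: nginx:1.21          # placeholder web image
    ports:
      - "8080:80"
    depends_on:
      - db                     # compose starts db before web
  db:
    image: mysql:5.7.19
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running `docker-compose up -d` in this directory brings up both services as one project; `docker-compose down` removes them together.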

    II. Installing docker-compose


    1. Install the docker-compose binary from GitHub

    • Download the latest docker-compose binary.

    Official documentation: Install Docker Compose | Docker Documentation

    https://github.com/docker/compose/releases/download/v2.5.0/docker-compose-linux-x86_64

    • Add executable permission and verify.

    [root@offline-client bin]# mv docker-compose-linux-x86_64 docker-compose
    [root@offline-client bin]# chmod +x docker-compose
    [root@offline-client bin]# docker-compose version
    Docker Compose version v2.5.0

    2. Install via pip

    pip install docker-compose

    III. Docker Compose in Practice


    1. MySQL example

    1.1 MySQL with docker run

    docker run -itd -p 3306:3306 -m 1024M --privileged=true \
    -v /data/software/mysql/conf/:/etc/mysql/conf.d \
    -v /data/software/mysql/data:/var/lib/mysql \
    -v /data/software/mysql/log/:/var/log/mysql/ \
    -e MYSQL_ROOT_PASSWORD=winner@001 --name=mysql mysql:5.7.19

    1.2 mysql-compose.yml

    version: "3"
    services:
      mysql:
        image: mysql:5.7.19
        restart: always
        container_name: mysql
        ports:
          - "3306:3306"
        volumes:
          - /data/software/mysql/conf/:/etc/mysql/conf.d
          - /data/software/mysql/data:/var/lib/mysql
          - /data/software/mysql/log/:/var/log/mysql
        environment:
          MYSQL_ROOT_PASSWORD: 123456
          MYSQL_DATABASE: kangll
          MYSQL_USER: kangll
          MYSQL_PASSWORD: 123456

    Starting and stopping the MySQL container:

    [root@offline-client software]# docker-compose -f docker-compose-mysql.yml up -d
    [root@offline-client software]# docker-compose -f docker-compose-mysql.yml down

    Use docker ps to check the MySQL container.

    2. Single-node CDH

    2.1 Base image Dockerfile

    FROM ubuntu:20.04
    # Maintainer
    LABEL maintainer="kangll <kangll@winnerinf.com>"
    # Install dependencies and configure sshd
    RUN apt-get clean && apt-get -y update \
        && apt-get install -y openssh-server lrzsz vim net-tools openssl gcc openssh-client inetutils-ping \
        && sed -ri 's/^#?PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
        && sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config \
        && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    # Create the winner user and set the root and winner passwords
        && useradd winner \
        && echo "root:kangll" | chpasswd \
        && echo "winner:kangll" | chpasswd \
        && echo "root ALL=(ALL) ALL" >> /etc/sudoers \
        && echo "winner ALL=(ALL) ALL" >> /etc/sudoers
    # Generate host keys (optional)
    #&& ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key \
    #&& ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
    # Prepare the sshd runtime directory and expose port 22
    RUN mkdir /var/run/sshd
    EXPOSE 22
    # Run sshd in the foreground
    CMD ["/usr/sbin/sshd", "-D"]

    2.2 CDH image Dockerfile

    # Built on the ssh-enabled base image
    FROM ubuntu20-ssh-base:v1.0
    LABEL maintainer="kangll <kangll@winnerinf.com>"
    # Set up passwordless ssh for root
    RUN mkdir -p /root/.ssh && \
        ssh-keygen -t rsa -f /root/.ssh/id_rsa -P '' && \
        cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && \
        sed -i 's/PermitEmptyPasswords yes/PermitEmptyPasswords no/' /etc/ssh/sshd_config && \
        sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config && \
        echo " StrictHostKeyChecking no" >> /etc/ssh/ssh_config && \
        echo " UserKnownHostsFile /dev/null" >> /etc/ssh/ssh_config
    # JDK
    ADD jdk-8u162-linux-x64.tar.gz /hadoop/software/
    ENV JAVA_HOME=/hadoop/software/jdk1.8.0_162
    ENV CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    ENV PATH=$JAVA_HOME/bin:$PATH
    # hadoop
    ADD hadoop-2.6.0-cdh5.15.0.tar.gz /hadoop/software/
    ENV HADOOP_HOME=/hadoop/software/hadoop
    ENV HADOOP_CONF_DIR=/hadoop/software/hadoop/etc/hadoop
    ENV PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    # spark
    ADD spark-1.6.0-cdh5.15.0.tar.gz /hadoop/software/
    ENV SPARK_HOME=/hadoop/software/spark
    ENV PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
    # hbase
    ADD hbase-1.2.0-cdh5.15.0.tar.gz /hadoop/software/
    ENV HBASE_HOME=/hadoop/software/hbase
    ENV PATH=$PATH:$HBASE_HOME/bin
    # hive
    ADD hive-1.1.0-cdh5.15.0.tar.gz /hadoop/software/
    ENV HIVE_HOME=/hadoop/software/hive
    ENV HIVE_CONF_DIR=/hadoop/software/hive/conf
    ENV PATH=$HIVE_HOME/bin:$PATH
    # scala
    ADD scala-2.10.5.tar.gz /hadoop/software/
    ENV SCALA_HOME=/hadoop/software/scala
    ENV PATH=$PATH:$SCALA_HOME/bin
    # azkaban
    ADD azkaban.3.30.1.tar.gz /hadoop/software/
    WORKDIR /hadoop/software/
    RUN ln -s jdk1.8.0_162 jdk \
        && ln -s spark-1.6.0-cdh5.15.0 spark \
        && ln -s scala-2.10.5 scala \
        && ln -s hadoop-2.6.0-cdh5.15.0 hadoop \
        && ln -s hbase-1.2.0-cdh5.15.0 hbase \
        && ln -s hive-1.1.0-cdh5.15.0 hive
    WORKDIR /hadoop/software/hadoop/share/hadoop
    RUN ln -s mapreduce2 mapreduce
    WORKDIR /hadoop/software/
    # Hand the installation over to the winner user
    RUN chown -R winner:winner /hadoop/software/ && \
        chmod -R +x /hadoop/software/
    ## Cluster startup script
    COPY start-cluster.sh /bin/
    RUN chmod +x /bin/start-cluster.sh
    CMD ["bash","-c","/bin/start-cluster.sh && tail -f /dev/null"]

    The start-cluster.sh script makes repeated startups easy and lets you follow the startup progress in the container log:

    #!/bin/bash
    ##########################################################################
    # Purpose:  winner-aio-docker, docker-compose install, env init         #
    # Author:   kangll                                                      #
    # Created:  2022-02-28                                                  #
    # Modified: 2022-04-07                                                  #
    # Version:  2.0                                                         #
    # Schedule: runs once at initialization                                 #
    # Args:     none                                                        #
    ##########################################################################
    source /etc/profile
    source ~/.bash_profile
    set -e
    rm -rf /tmp/*

    ### Format the NameNode only if its metadata directory does not yet exist
    NNPath=/hadoop/software/hadoop/data/dfs/name/current
    if [ -d "$NNPath" ]; then
        echo "--------> NameNode already formatted <--------"
    else
        echo "--------> Formatting NameNode <--------"
        /hadoop/software/hadoop/bin/hdfs namenode -format
    fi

    # NameNode, SecondaryNameNode monitor
    namenodeCount=$(ps -ef | grep NameNode | grep -v grep | wc -l)
    if [ "$namenodeCount" -le 1 ]; then
        /usr/sbin/sshd
        echo "--------> Starting NameNode <--------"
        start-dfs.sh
    else
        echo "--------> NameNode and SecondaryNameNode running <--------"
    fi

    # DataNode monitor
    DataNodeCount=$(ps -ef | grep DataNode | grep -v grep | wc -l)
    if [ "$DataNodeCount" -eq 0 ]; then
        /usr/sbin/sshd
        echo "--------> Starting DataNode <--------"
        start-dfs.sh
    else
        echo "--------> DataNode running <--------"
    fi

    # HBase HMaster monitor
    hmasterCount=$(ps -ef | grep HMaster | grep -v grep | wc -l)
    if [ "$hmasterCount" -eq 0 ]; then
        /usr/sbin/sshd
        echo "--------> Starting HMaster <--------"
        start-hbase.sh
    else
        echo "--------> HMaster running <--------"
    fi

    # HBase RegionServer monitor
    regionCount=$(ps -ef | grep HRegionServer | grep -v grep | wc -l)
    if [ "$regionCount" -eq 0 ]; then
        /usr/sbin/sshd
        echo "--------> Starting RegionServer <--------"
        start-hbase.sh
    else
        echo "--------> RegionServer running <--------"
    fi

    # YARN ResourceManager monitor
    ResourceManagerCount=$(ps -ef | grep ResourceManager | grep -v grep | wc -l)
    if [ "$ResourceManagerCount" -eq 0 ]; then
        /usr/sbin/sshd
        echo "--------> Starting ResourceManager <--------"
        start-yarn.sh
    else
        echo "--------> ResourceManager running <--------"
    fi

    # YARN NodeManager monitor
    NodeManagerCount=$(ps -ef | grep NodeManager | grep -v grep | wc -l)
    if [ "$NodeManagerCount" -eq 0 ]; then
        /usr/sbin/sshd
        echo "--------> Starting NodeManager <--------"
        start-yarn.sh
    else
        echo "--------> NodeManager running <--------"
    fi

    # Hive metastore (shows up as a RunJar process)
    runJar=$(ps -ef | grep RunJar | grep -v grep | wc -l)
    if [ "$runJar" -eq 0 ]; then
        /usr/sbin/sshd
        echo "--------> Starting Hive metastore <--------"
        hive --service metastore &
    else
        echo "--------> Hive metastore running <--------"
    fi
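The script above repeats one pattern seven times: count matching processes, and run the start command only when none are found. The same logic can be factored into a single function; the sketch below is illustrative (the function name and example invocations are not part of the original script):

```shell
#!/bin/bash
# Generalized sketch of the "check process count, start if absent" pattern
# from start-cluster.sh. Names here are illustrative, not from the script.
ensure_running() {
  local pattern="$1"   # process name to look for, e.g. NameNode
  local start_cmd="$2" # command to run when no matching process is found
  local count
  count=$(ps -ef | grep "$pattern" | grep -v grep | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "starting: $pattern"
    eval "$start_cmd"
  else
    echo "ok: $pattern"
  fi
}

# Hypothetical usage, mirroring the original checks:
# ensure_running NameNode start-dfs.sh
# ensure_running HMaster  start-hbase.sh
```

The `grep -v grep` step excludes the grep process itself from the count, which is why the original script uses it in every check.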

    2.3 Building the images

    [root@offline-client winner-aio]# ll
    total 1147020
    -rw-r--r-- 1 root root       906 May 21 17:24 dockerfile-centos
    -rw-r--r-- 1 root root      1700 May 20 18:44 dockerfile-hadoop
    -rw-r--r-- 1 root root        42 May 18 16:23 dockerfile-mysql5.7
    -rw-r--r-- 1 root root       971 May 19 15:00 dockerfile-ubuntu
    -rw-r--r-- 1 root root 433428169 May 20 10:15 hadoop-2.6.0-cdh5.15.0.tar.gz
    -rw-r--r-- 1 root root 266421606 May 20 10:15 hbase-1.2.0-cdh5.15.0.tar.gz
    -rw-r--r-- 1 root root 129551278 May 20 10:15 hive-1.1.0-cdh5.15.0.tar.gz
    -rw-r--r-- 1 root root 189815615 May 16 11:10 jdk-8u162-linux-x64.tar.gz
    ## Build the images
    docker build -f dockerfile-ubuntu -t ubuntu20-ssh-base:v1.0 .
    docker build -f dockerfile-hadoop -t hadoop-2.6.0-cdh5.15:v1.0 .

    Image build output (screenshot omitted).

    2.4 Creating a bridge network

    docker network create -d bridge hadoop-network

    2.5 CDH compose file

    version: "3"
    services:
      cdh:
        image: hadoop-cluster:v1.0
        container_name: hadoop-cluster
        hostname: hadoop
        networks:
          - hadoop-network
        ports:
          - "9000:9000"
          - "8088:8088"
          - "50070:50070"
          - "8188:8188"
          - "8042:8042"
          - "10000:10000"
          - "9083:9083"
          - "8080:8080"
          - "7077:7077"
          - "8444:8444"
        # Map data and log paths to the local data disk
        volumes:
          - /data/software/hadoop/data:/hadoop/software/hadoop/data
          - /data/software/hadoop/logs:/hadoop/software/hadoop/logs
          - /data/software/hadoop/conf:/hadoop/software/hadoop/etc/hadoop
          - /data/software/hbase/conf:/hadoop/software/hbase/conf
          - /data/software/hbase/logs:/hadoop/software/hbase/logs
          - /data/software/hive/conf:/hadoop/software/hive/conf
    networks:
      hadoop-network: {}
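One caveat: a bare entry under top-level `networks:` makes Compose create a new project-scoped network rather than reuse the hadoop-network bridge created in section 2.4. To attach the service to that pre-created network, the entry would need to be declared external, along these lines:

```yaml
networks:
  hadoop-network:
    external: true   # reuse the bridge created with `docker network create`
```

With `external: true`, Compose fails fast if the network does not already exist instead of silently creating its own.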

    2.6 Starting CDH

    docker-compose -f dockerfile-hadoop-cluster up -d

    Check the service processes with jps.

    Open the Hadoop web UI.

    Open the YARN web UI.


    Original article: https://blog.csdn.net/qq_35995514/article/details/125468792