Note: all configuration files in this article were written with echo or cat and are not complete; it is better to use docker cp.
For example, the command below copies the config file from the mysql-service container to the host,
and then mounts it when the container is run a second time.
docker cp mysql-service:/etc/mysql/my.cnf /root/docker/mysql/conf
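A minimal sketch of that second run, assuming the mysql-service container uses the official mysql image and the config was copied as above (names, password and paths are placeholders to adjust):
docker rm -f mysql-service
docker run -d --name mysql-service -p 3306:3306 \
-e MYSQL_ROOT_PASSWORD=root \
-v /root/docker/mysql/conf/my.cnf:/etc/mysql/my.cnf \
mysql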
My server is a local virtual machine, so I simply turned off the firewall.
If you keep the firewall on, the ports to open are (see the sketch after this list):
mongodb: 27017
redis: 6379
es + kibana: 9200 9300 5601
rabbitmq: 5672 15672
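A minimal sketch for opening those ports with firewalld instead of disabling it (assuming firewalld is the active firewall):
firewall-cmd --permanent --add-port=27017/tcp
firewall-cmd --permanent --add-port=6379/tcp
firewall-cmd --permanent --add-port=9200/tcp --add-port=9300/tcp --add-port=5601/tcp
firewall-cmd --permanent --add-port=5672/tcp --add-port=15672/tcp
firewall-cmd --reload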
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine \
docker-ce
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2 --skip-broken
yum-config-manager \
--add-repo \
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's/download.docker.com/mirrors.aliyun.com\/docker-ce/g' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum install -y docker-ce
systemctl start docker
systemctl enable docker
docker -v
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://ppztf0yr.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
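To check that the registry mirror is actually in effect, docker info lists it under Registry Mirrors:
docker info | grep -A 1 "Registry Mirrors"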
docker pull mongo
mkdir -p /myData/mongo/conf
mkdir -p /myData/mongo/data
mkdir -p /myData/mongo/log
cd /myData/mongo/conf
cat > mongodb.conf <<EOF
# port
port=27017
# database file directory
dbpath=/myData/mongo/data
# log file path
logpath=/myData/mongo/log
# append to the log file
logappend=true
# run as a daemon, creating the server process
fork=true
# maximum number of concurrent connections
maxConns=100
# disable authentication
#noauth=true
# journal every write operation
journal=true
# available storage engines: mmapv1, wiredTiger, mongorocks
storageEngine=wiredTiger
# bind address
bind_ip=0.0.0.0
# enable user authentication
auth=true
EOF

docker run \
--name mongo \
-p 27017:27017 \
-v /myData/mongo/data:/data/db -v /myData/mongo/conf:/data/conf -v /myData/mongo/log:/data/log \
-d mongo --auth
docker ps
docker exec -it mongo mongo admin
The role and permission configuration that follows can be found in the article on MongoDB role creation and configuration.
db.createUser({user:"root",pwd:"root",roles:["root"]});
db.auth('root','root')
exit
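A quick sanity check from the host, assuming the root/root account created above and that the image still ships the legacy mongo shell (newer images only include mongosh):
docker exec -it mongo mongo -u root -p root --authenticationDatabase admin --eval "db.runCommand({ ping: 1 })"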
docker pull redis
docker run -p 6379:6379 -d redis
This first run is only a quick test of the image; remove that container afterwards, otherwise port 6379 will already be taken when the configured container below is started.
mkdir -p /myData/redis/conf
mkdir -p /myData/redis/data
touch /myData/redis/conf/redis.conf
cat > /myData/redis/conf/redis.conf <<EOF
bind 0.0.0.0
protected-mode no
appendonly yes
requirepass root
daemonize no
EOF
chmod -R 777 /myData/redis/
-v /myData/redis/data:/data mounts the container's /data directory to /myData/redis/data on the host
-v /myData/redis/conf/redis.conf:/etc/redis/redis.conf mounts the host redis.conf to /etc/redis/redis.conf inside the container
--restart=always makes the container start automatically on boot
The TIME_ZONE/TZ variables fix the problem of the Redis container's timezone being out of sync with the host
docker run -p 6379:6379 --name redis \
--sysctl net.core.somaxconn=1024 --privileged=true --restart=always \
-v /myData/redis/data:/data \
-v /myData/redis/conf/redis.conf:/etc/redis/redis.conf \
-e TIME_ZONE="Asia/Shanghai" -e TZ="Asia/Shanghai" \
-d redis redis-server /etc/redis/redis.conf
docker exec -it redis redis-cli
auth root
ping
exit
If PONG is returned, it worked.
7. Further configuration changes
Based on the article about the Redis service shutting itself down when started in Docker.
Without these changes, warnings are produced that block Redis from restarting.
echo "net.core.somaxconn=1024" >> /etc/sysctl.conf
sysctl -p
echo "vm.overcommit_memory=1" >> /etc/sysctl.conf
sysctl vm.overcommit_memory=1
echo "ignore-warnings ARM64-COW-BUG" >> /myData/redis/conf/redis.conf
docker restart redis
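To confirm the kernel settings took effect and the startup warnings are gone:
cat /proc/sys/net/core/somaxconn
docker logs --tail 20 redis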
docker pull elasticsearch:7.6.2
docker pull kibana:7.6.2
mkdir -p /myData/elasticsearch/config
mkdir -p /myData/elasticsearch/plugins
mkdir -p /myData/kibana/config
mkdir -p /myData/kibana/data
cat > /myData/elasticsearch/config/elasticsearch.yml <<EOF
http.host: 0.0.0.0
# Uncomment the following lines for a production cluster deployment
#transport.host: 0.0.0.0
#discovery.zen.minimum_master_nodes: 1
#Password config
# this step enables the x-pack security plugin
xpack.security.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
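If Elasticsearch later refuses to start with a "max virtual memory areas vm.max_map_count [65530] is too low" error, the usual host-side fix is:
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -w vm.max_map_count=262144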
cat > /myData/kibana/config/kibana.yml<<EOF
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://<ES server IP>:9200" ]
elasticsearch.username: "elastic"
elasticsearch.password: "<the password you will set shortly>"
i18n.locale: "zh-CN"
EOF
docker run -d -it --restart=always --privileged=true \
--name=elasticsearch -p 9200:9200 -p 9300:9300 -p 5601:5601 \
-e "discovery.type=single-node" -e "cluster.name=elasticsearch" \
-v /myData/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /myData/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-e ES_JAVA_OPTS="-Xms4G -Xmx4G" elasticsearch:7.6.2
# -e ES_JAVA_OPTS="-Xms4G -Xmx4G" sets the JVM heap size. Don't make this too large: Elasticsearch also uses direct memory, i.e. system memory, so enough system memory has to be left free. This server has 8 cores and 16 GB of RAM, so I gave it 4 GB. If too little system memory is reserved, later searches will not reach the expected speed.
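Before setting passwords, it is worth confirming that the container came up cleanly:
docker logs -f elasticsearch
# Ctrl+C to stop following once the startup messages look healthy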
docker exec -it elasticsearch /bin/bash
cd bin
#开启密码设置
elasticsearch-setup-passwords interactive
#输出 如下
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
#输入Y
Please confirm that you would like to continue [y/N]Y
#依次设置密码
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
exit
docker restart elasticsearch
docker run -itd -e ELASTICSEARCH_URL=http://<ES server IP>:9200 --name kibana \
-v /myData/kibana/config:/usr/share/kibana/config --network=container:elasticsearch kibana:7.6.2
http://<ES server IP>:9200
http://<Kibana server IP>:5601/
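A quick check that Elasticsearch is reachable with the password set above (replace the placeholders with your own values):
curl -u elastic:<password> http://<ES server IP>:9200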
docker pull rabbitmq:management
mkdir -p /myData/rabbitmq/{data,log,conf}
chmod -R 777 /myData/rabbitmq
docker run -d -p 5672:5672 -p 15672:15672 \
-v /myData/rabbitmq/data:/var/lib/rabbitmq -v /myData/rabbitmq/conf:/etc/rabbitmq -v /myData/rabbitmq/log:/var/log/rabbitmq \
--name rabbitmq --hostname=rabbitmq-1 --restart=always \
rabbitmq:management
docker exec -it rabbitmq /bin/bash
rabbitmq-plugins enable rabbitmq_management
# add an account: rabbitmqctl add_user <user> <password>
# grant permissions: rabbitmqctl set_permissions -p / <user> ".*" ".*" ".*"
# assign a role: rabbitmqctl set_user_tags <user> administrator
rabbitmqctl add_user root root
rabbitmqctl set_permissions -p / root ".*" ".*" ".*"
rabbitmqctl set_user_tags root administrator
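Still inside the container shell, the new account and its permissions can be checked with:
rabbitmqctl list_users
rabbitmqctl list_permissions -p /
exit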
http://<RabbitMQ server IP>:15672/
Installation
Official website link
Reference blog: installing Anaconda3 on CentOS 7 (tested and working)
Note that the path shown during installation is the Anaconda installation directory, and it can be customized.
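A minimal, non-interactive sketch of that step, assuming the installer script has already been downloaded from the official site (the filename/version and the /myData/anaconda3 prefix are placeholders; -p sets the customizable installation path mentioned above):
bash Anaconda3-<version>-Linux-x86_64.sh -b -p /myData/anaconda3
/myData/anaconda3/bin/conda init bash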
