• Getting Started with Docker


    Introduction to Docker

     Docker is an open platform for developers and system administrators to build, ship, and run distributed applications. It is an application container (Application Container) platform: a platform that provides runnable containers for applications. Think of Docker as a ship, and every shipping container on board as a container (each container has its own Linux environment inside).

    Another important use of Docker is keeping development, testing, and production environments consistent.

    1. Docker Installation

    Install the yum helper tools:

    sudo yum install -y yum-utils

    Configure the yum repository. Two repository addresses are set up, so if one fails the other can be used:

    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

    Refresh the yum cache:

    sudo yum makecache fast

    Install the latest Docker packages:

    sudo yum install -y docker-ce docker-ce-cli containerd.io

    2. Docker Basics

    Start docker:

    sudo systemctl start docker

    Restart docker:

    sudo systemctl restart docker

    Enable docker to start at boot:

    sudo systemctl enable docker
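
    To verify the installation, you can check the service status and the client/server versions (standard systemd and docker commands):

    sudo systemctl status docker
    sudo docker version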

    Registry Mirrors

    Because of network issues in mainland China, a registry mirror (accelerator) needs to be configured.
    Edit the configuration file: vi /etc/docker/daemon.json

    {
      "registry-mirrors": [
        "https://docker.mirrors.ustc.edu.cn",
        "http://hub-mirror.c.163.com"
      ],
      "max-concurrent-downloads": 10,
      "log-driver": "json-file",
      "log-level": "warn",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
      },
      "data-root": "/var/lib/docker"
    }

    Reload the docker daemon configuration:

    sudo systemctl daemon-reload

    Restart the docker service:

    sudo systemctl restart docker

    Check the configuration:

    docker info

    You can see that the registry mirror configured above is now listed as https://docker.mirrors.ustc.edu.cn/.

    Run the hello-world image. When the hello-world image is not present locally, Docker downloads it from the registry mirror we configured:

    sudo docker run hello-world

    List docker images:

    docker images

     You will see a hello-world image: it was not available locally, so the run above pulled it from the registry.

    # List running containers
    docker ps

    To list all containers, including stopped ones, run:

    docker ps -a

    Stop a container:

    docker stop <container id>

     Delete a docker image:

    docker image rm hello-world

    An image that is still in use by a container cannot be deleted directly; first delete the container(s) that use it. For example:

    # List all containers
    docker ps -a
    # Delete a container
    docker container rm <container name or id>

    3. Concepts

    Image (think of it as a container's executable: it can be started with a docker command, and once started it is a container)

    A Docker image is a special filesystem: it is the file set a container runs from. Besides the programs, libraries, resources, and configuration files a container needs at runtime, it also contains some parameters prepared for runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its contents do not change after it is built.

    An image is a virtual concept; it is not made up of a single file but of a group of filesystems, or rather, a union of multiple filesystem layers.

    Images are built layer by layer, each layer building on the previous one. Once a layer is built it never changes again; any change in a later layer happens only within that layer. For example, deleting a file from a previous layer does not actually delete it; it only marks the file as deleted in the current layer. The file is not visible in the final running container, but it still travels with the image. So be careful when building images: each layer should contain only what that layer needs, and anything extra should be cleaned up before the layer's build step finishes.

    Layered storage also makes images easier to reuse and customize. You can even use a previously built image as a base layer and add new layers on top, building a new image tailored to your needs.
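
    For example, since each RUN instruction creates a new layer, package installation and cache cleanup should happen in the same RUN; a cleanup in a later instruction cannot shrink an earlier layer. A minimal Dockerfile sketch (the packages are only illustrative):

    FROM centos:7
    # Install and clean up within a single layer, so the yum cache never
    # becomes part of the image.
    RUN yum install -y gcc make && yum clean all
    # A separate 'RUN yum clean all' here would only add a new layer; the
    # cache would still be shipped inside the previous layer.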

    Pull an image from a registry:

    docker pull <image name>
    # or
    docker pull <image name>:<tag>

    List local images:

    docker images

    Delete an image:

    docker image rm <image id>

     If the image has already been started as a container, delete that container first.

    Run an image (once started, it is a container):

    docker run -it xxxx bash
    

     -d: run the container in the background and print the container ID.
    -it: run the container in interactive mode with a pseudo-TTY (-i interactive, -t terminal).
    -p: port mapping, in the form <host port>:<container port>.
    -v: mount a directory, in the form <host directory>:<container directory>.

    --restart always: start the container automatically when the host boots.
    -p 80:80: port mapping; accessing port 80 on the host reaches port 80 of the nginx container.
    --name: give the container a name; if omitted, Docker generates a random name.
    -v: mount, e.g. mount host directory /usr/local/nginx/ to container directory /etc/nginx/.
    -d: run in the background.
    nginx:1.22.0: the image to run, as <image name>:<tag>. These options assemble into the full command sketched below.
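
    Assembled into one command (a sketch built only from the options listed above; note that the host directory must already contain a valid nginx configuration, or the container will fail to start):

    docker run -d --restart always -p 80:80 \
    -v /usr/local/nginx/:/etc/nginx/ \
    --name nginx nginx:1.22.0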
     

    When running an image, you generally mount a host directory so data is not lost when the container stops:

    docker run -it -v /usr/app:/opt/app centos:7 bash
    

     This mounts the host directory /usr/app to the container's /opt/app directory.

    Where:
    1) /usr/app is the host directory.
    2) /opt/app is a directory inside the started container.
    3) -v specifies the mount; if the local (host) directory does not exist, Docker creates it for you.

    Export an image:

    docker save jdk:8 | gzip > /root/jdk8.tar.gz
    

     Import an image:

    docker load < /root/jdk8.tar.gz
    

    Container (each container has its own independent Linux environment)

    The relationship between an image (Image) and a container (Container) is like that between a disc and a disc drive: the container is the runtime instance of the image; an image, once run, is a container. Containers can be created, started, stopped, deleted, paused, and so on.

    In essence a container is a process, but unlike a process running directly on the host, a container process runs in its own isolated namespaces. A container can therefore have its own root filesystem, its own network configuration, its own process space, and even its own user ID space. Processes inside a container run in an isolated environment and behave as if they were operating on a system independent of the host. This property makes applications packaged in containers safer than running them directly on the host.
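
    A quick way to see the isolated process space in action (a small demo; any image with bash works):

    docker run --rm centos:7 bash -c 'echo $$'
    # prints 1 - inside the container, the shell runs as PID 1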

     Once an image is started, it is a container. The container contains an operating system and components; when the container starts, it also runs a default command that starts the component inside that OS.

    docker history <image id or name>

     For a Tomcat image, for instance, you can see that when the container starts, it executes the catalina.sh command inside the container's OS, and that command starts the Tomcat component.

    Commands of the form docker container xx can be shortened to docker xx.

    List running containers:

    docker ps
    

    List all containers, including those that are not running:

    docker ps -a
    

    Enter a given container (type exit to leave):

    sudo docker exec -it <container id or name> bash

     Stop a container:

    docker container stop <container id or name>
    # or
    docker stop <container id or name>

    Start a container:

    docker container start <container id or name>
    # or
    docker start <container id or name>

     Set a container to start automatically at boot:

    docker update <container name or id> --restart=always

    Restart a container:

    docker container restart <container name or id>
    # or
    docker restart <container name or id>

    Find where a container's data lives on the host (note: whereis only locates host binaries and does not work on container names, so inspect the container instead):

    docker inspect <container name or id>

    View a container's startup logs:

    docker container logs <container name or id>

    Delete a container:

    docker container rm <container name or id>
    

    Force-delete a container:

    docker rm -f <container name or id>

    Clean up containers that are no longer in use:

    docker container prune

     Run a jar inside a container in the background. Because the container cannot directly access host directories, a directory must also be mounted:

    docker run -dit -p <host port>:<container port> -v <host dir>:<container dir> <image name> java -jar <path to jar>
    # For example:
    docker run -dit -p 8081:8081 -v /root:/root jdk:8 java -jar /root/xx.jar

    Committing a container to an image

    docker commit -m="description" -a="author" <container id> <target image name>:[TAG]

    docker commit -m="description" -a="author" b6ef283d6a47 myjdk:88
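
    The committed image can then be run like any other image (using the tag from the example above):

    docker run -it myjdk:88 bash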

     

    Data Volumes

    Images use layered storage, and so do containers. When a container runs, the image is used as the base layer and a container-specific storage layer is created on top of it (think of it as the layer prepared for the container's runtime reads and writes).
    The container storage layer lives exactly as long as the container: when the container dies, its storage layer dies with it. Any data saved in the container storage layer is therefore lost when the container is deleted.
    Docker best practice says containers should not write any data into their storage layer; the storage layer should stay stateless. All file writes should go to data volumes (Volumes) or bind-mounted host directories. Reads and writes in those locations bypass the container storage layer and go directly to the host (or to network storage), with better performance and stability.
    A data volume's lifetime is independent of the container: when the container dies, the volume does not. With data volumes, data survives even if the container is deleted or re-created. A data volume is a special directory usable by one or more containers. By default it persists on the host even after the container is deleted.

    In other words, a container is an independent operating environment; when the container disappears, that environment disappears with it, including data files such as web logs. To avoid this, use a data volume or mount the container's data directory onto a host directory; then the data files outlive the container.

    # Create a data volume
    docker volume create <volume name>
    # List all data volumes
    docker volume ls
    # Show details of a volume
    docker volume inspect <volume name>
    # Start a container with the volume mounted
    docker run -it --mount source=<volume name>,target=/root centos:7 bash
    # or:
    docker run -it -v <volume name>:/root centos:7 bash
    # -v container-vol:/root mounts the volume container-vol at the container's /root directory
    # Delete a volume (a volume in use by a container cannot be deleted)
    docker volume rm <volume name>
    # Clean up dangling volumes
    docker volume prune

    Mounting a host directory (common)

    This serves much the same purpose as a data volume.

    docker run -it -v /usr/app:/opt/app centos:7 bash

    Where:
    1) /usr/app is the host directory.
    2) /opt/app is a directory inside the started container.
    3) -v specifies the mount; if the local (host) directory does not exist, Docker creates it for you.

    View mount information:

    docker inspect 容器名或容器id
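
    docker inspect prints a large JSON document. To see only the mounts, you can filter it with a Go template via the standard --format flag:

    docker inspect --format '{{ json .Mounts }}' <container name or id>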

    4. Docker Image Operations

    Download a Docker image; here that simply means downloading a CentOS image.

    If you build images yourself, you start by downloading a bare centos image; whenever we build our own images later, we will need such a bare base system image.

    The :7 suffix specifies the version (tag).

    docker pull centos:7

    View the centos:7 image:

    docker images

    Running the image

    Start the centos:7 image with docker:

    docker run -it xxxx bash

    xxxx is the image name or image ID.
    -it is two flags: -i for interactive operation and -t for a terminal (-d would run in the background). Here we want to enter bash, run a few commands, and see the results, so we need an interactive terminal.
    bash, placed after the image name, is the command to run; since we want an interactive shell, we use bash.

    For example, start by name: docker run -it centos:7 bash
    Or start by image ID: docker run -it eeb6ee3f44bd bash

     

     As you can see, once the container starts you are dropped directly into the container's operating system.

    From this point on you are operating inside the container's OS; typing exit leaves that OS and returns you to the host system.

    5. Data Volume Operations (good to know)

    Create a data volume:

    docker volume create container-vol

    List all data volumes:

    docker volume ls

    Show details of a specific volume:

    docker volume inspect container-vol

    Start a container with the volume mounted:

    docker run -it --mount source=container-vol,target=/root centos:7 bash

    Shorthand:

    docker run -it -v container-vol:/root centos:7 bash

     -v container-vol:/root mounts the volume container-vol at the container's /root directory.

     Delete a volume (a volume in use by a container cannot be deleted):

    docker volume rm container-vol

    Clean up dangling volumes (this deletes the volume's directory on the host, e.g. /var/lib/docker/volumes/container-vol/_data for the volume above):

    docker volume prune

    6. Mounting a Host Directory (common)

    docker run -it -v /usr/app:/opt/app centos:7 bash

    Where:
    1) /usr/app is the host directory.
    2) /opt/app is a directory inside the started container.
    3) -v specifies the mount; if the local (host) directory does not exist, Docker creates it for you.

    For example, mount the host directory /usr/local/docker/test to the container's /root directory:

    docker run -it -v /usr/local/docker/test:/root centos:7 bash

    View mount information:

    docker ps -a
    docker inspect <container id>

     

     7. Building Images with a Dockerfile

    A Dockerfile is a text file used to build an image; its content is a sequence of instructions and comments describing how the image is built. We usually create docker images from such a file.

    Preparation

    1) The centos:7 image (every image build starts from a base image, much like creating a bootable system drive from a blank disc or USB stick):

    docker pull centos:7

    2) The JDK archive jdk-8u311-linux-x64.tar.gz (downloadable from Oracle's official website); we will build a JDK image from this archive.

    The Dockerfile
    Every new image build needs a Dockerfile (mind the capitalization of the file name), kept in the same directory as your resources (for example, the JDK you downloaded).

    Edit the Dockerfile:

    vi Dockerfile

    FROM centos:7
    ADD jdk-8u311-linux-x64.tar.gz /usr/local/docker
    ENV JAVA_HOME=/usr/local/docker/jdk1.8.0_311 \
        PATH=/usr/local/docker/jdk1.8.0_311/bin:$PATH
    CMD ["bash"]

    Write the FROM line (keywords must be uppercase; FROM cannot be written in lowercase):
    FROM centos:7
    The ADD instruction copies the archive from the host into the specified directory of the image and unpacks it at the same time:
    ADD jdk-8u311-linux-x64.tar.gz /usr/local/docker
    Set environment variables with the ENV keyword (the directories are paths inside the container). Note: \ continues the line, and :$PATH preserves the existing PATH:
    ENV JAVA_HOME=/usr/local/docker/jdk1.8.0_311 \
        PATH=/usr/local/docker/jdk1.8.0_311/bin:$PATH
    Specify the startup command (every instruction needs a space before its arguments; the exec form requires double quotes):
    CMD ["bash"]

    Build the image from the Dockerfile (run this in the directory containing the Dockerfile):

    docker build -t jdk:8 .

    Note the trailing dot: it tells the build to look for files in the current directory. jdk:8 is the name of the image being created.

    The image we built depends on the bare centos:7 base image, so our jdk:8 image already contains centos:7.
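
    You can confirm this by listing the image's layers; docker history shows each build step, with the centos:7 base layers at the bottom:

    docker history jdk:8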

    8. Running the Image

    docker run -it jdk:8 bash

    java -version 

    You can also run the java -version command directly, without entering bash:

    docker run -it jdk:8 java -version

     

    9. Exporting and Importing Images

    Export an image (e.g., to share with others):

    docker save jdk:8 | gzip > /root/jdk8.tar.gz


    Import an image:

    docker load < /root/jdk8.tar.gz

     or

    docker load -i /root/jdk8.tar.gz

     

    Installing Docker Service Images

    Installing the MySQL database

    Search for the mysql image on https://hub.docker.com/.

     Pull a specific mysql version:

    docker pull mysql:8.0.23

     Start the mysql image:

    docker run -p 3307:3306 --name mysql \
    -v /usr/local/mysql/mysql-files:/var/lib/mysql-files \
    -v /usr/local/mysql/conf:/etc/mysql \
    -v /usr/local/mysql/logs:/var/log/mysql \
    -v /usr/local/mysql/data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=root \
    -d mysql:8.0.23

    -p 3307:3306: port mapping; accessing port 3307 on the host reaches port 3306 of the mysql container.
    --name: give the container a name; if omitted, Docker generates a random name.
    -v: mount, e.g. mount host directory /usr/local/mysql/mysql-files to container directory /var/lib/mysql-files.
    -e: set the mysql root password to root.
    -d: run in the background.
    mysql:8.0.23: the image to run, as <image name>:<tag>.

    # Stop the mysql service
    docker stop mysql
    # Start the mysql service
    docker start mysql
    # Make mysql start at boot
    docker update mysql --restart=always

    List running containers:

    docker ps

     Enter the container (use exit to leave):

    sudo docker exec -it mysql bash

    Log in to mysql (the password set above is root):

    mysql -uroot -proot

     Connecting to mysql from outside

    # Create a mysql account; both username and password are tony
    create user 'tony'@'%' identified by 'tony';
    # Grant privileges
    grant all on *.* to 'tony'@'%';

    If you hit the error "Plugin caching_sha2_password could not be loaded" (this is usually version-related: MySQL 8.0 changed the authentication method for remote clients), run the following three statements:

    ALTER USER 'tony'@'%' IDENTIFIED BY 'tony' PASSWORD EXPIRE NEVER;
    ALTER USER 'tony'@'%' IDENTIFIED WITH mysql_native_password BY 'tony';
    FLUSH PRIVILEGES;

     Open port 3307 on the host:

    # Open port 3307
    firewall-cmd --permanent --add-port=3307/tcp
    # Reload the firewall so the change takes effect
    firewall-cmd --reload
    # List open ports
    firewall-cmd --list-port
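
    With the port open, the remote connection can be tested from another machine (a sketch; 192.168.111.199 is the Docker host address used in the nacos section below):

    mysql -h 192.168.111.199 -P 3307 -utony -ptony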

    Installing Nacos

    Search for the nacos image on https://hub.docker.com/.

     Pull a specific nacos version:

    docker pull nacos/nacos-server:1.4.1

    Download nacos-server.zip from the nacos releases page at https://github.com/alibaba/nacos/releases and unzip it; the database and tables nacos needs must be created from conf/nacos-mysql.sql.
    If GitHub is too slow, it can also be downloaded here: https://download.csdn.net/download/u014644574/86730745

    Since the MySQL installation above mounted the host directory /usr/local/mysql/conf, upload the conf/nacos-mysql.sql file from the unzipped nacos-server.zip into /usr/local/mysql/conf on the host.

    Enter the MySQL container:

    docker exec -it mysql bash

    Log in to mysql:

    mysql -uroot -proot

     Import the database SQL that nacos depends on:

    source /etc/mysql/nacos-mysql.sql
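
    Note that nacos-mysql.sql may contain only table definitions, without a CREATE DATABASE statement. If the source command fails with "No database selected", create and select the database first (nacos_config matches the MYSQL_SERVICE_DB_NAME used below):

    create database if not exists nacos_config;
    use nacos_config;
    source /etc/mysql/nacos-mysql.sql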

     

     

     Create the nacos container:

    docker run \
    -e TZ="Asia/Shanghai" \
    -e MODE=standalone \
    -e SPRING_DATASOURCE_PLATFORM=mysql \
    -e MYSQL_DATABASE_NUM=1 \
    -e MYSQL_SERVICE_HOST=192.168.111.199 \
    -e MYSQL_SERVICE_PORT=3307 \
    -e MYSQL_SERVICE_USER=root \
    -e MYSQL_SERVICE_PASSWORD=root \
    -e MYSQL_SERVICE_DB_NAME=nacos_config \
    -p 8848:8848 \
    --name nacos \
    --restart=always \
    -d nacos/nacos-server:1.4.1

    -e sets a container environment variable:
    TZ: the time zone.
    MODE=standalone: single-node mode.
    SPRING_DATASOURCE_PLATFORM=mysql: the database type is mysql.
    MYSQL_SERVICE_HOST=192.168.111.199: the database address (here, the Docker host's address).
    MYSQL_SERVICE_PORT=3307: the database port. Note that the containers cannot reach each other directly here, so the host's mapped port is used.
    MYSQL_SERVICE_USER=root: the database user.
    MYSQL_SERVICE_PASSWORD=root: the database password.
    MYSQL_SERVICE_DB_NAME=nacos_config: the database name.
    -p 8848:8848: port mapping; accessing port 8848 on the host reaches port 8848 of the nacos container.
    --name: give the container a name; if omitted, Docker generates a random name.
    --restart=always: start the container automatically at boot.
    -d: run in the background.
    nacos/nacos-server:1.4.1: the image to run, as <image name>:<tag>.

    URL: http://192.168.111.199:8848/nacos
    Username/password: nacos/nacos

     

    If you want to keep nacos's configuration files and logs, follow the steps below. They are optional; usually there is no need to persist nacos.

    Save nacos's configuration files. This step is just a cp (copy) command: it copies the /home/nacos directory from the container to /usr/local/nacos on the host:

    docker cp -a nacos:/home/nacos /usr/local/nacos

     Stop and delete the nacos container:

    docker stop nacos
    docker rm nacos

     Start the nacos container again, this time mounting the configuration saved under /usr/local/nacos into the container:

    docker run \
    -e TZ="Asia/Shanghai" \
    -e MODE=standalone \
    -e SPRING_DATASOURCE_PLATFORM=mysql \
    -e MYSQL_DATABASE_NUM=1 \
    -e MYSQL_SERVICE_HOST=192.168.111.199 \
    -e MYSQL_SERVICE_PORT=3307 \
    -e MYSQL_SERVICE_USER=root \
    -e MYSQL_SERVICE_PASSWORD=root \
    -e MYSQL_SERVICE_DB_NAME=nacos_config \
    -v /usr/local/nacos:/home/nacos \
    -p 8848:8848 \
    --name nacos \
    --restart=always \
    -d nacos/nacos-server:1.4.1

    -v: mount a directory, <host directory>:<container directory>.

    Installing Redis

    Search for the redis image on https://hub.docker.com/.

      Pull a specific redis version:

    docker pull redis:5.0.14

    Create a directory for the redis configuration:

    mkdir -p /usr/local/redis/conf

    Create the redis.conf configuration file in that directory:

    touch /usr/local/redis/conf/redis.conf

     redis.conf can be downloaded from the official Redis website. The file used here is the stock Redis 5.0 configuration; its effective (non-commented) directives are listed below, and every other option keeps its upstream default:

    bind 127.0.0.1
    protected-mode yes
    port 6379
    tcp-backlog 511
    timeout 0
    tcp-keepalive 300
    daemonize no
    supervised no
    pidfile /var/run/redis_6379.pid
    loglevel notice
    logfile ""
    databases 16
    always-show-logo yes
    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir ./
    replica-serve-stale-data yes
    replica-read-only yes
    repl-diskless-sync no
    repl-diskless-sync-delay 5
    repl-disable-tcp-nodelay no
    replica-priority 100
    lazyfree-lazy-eviction no
    lazyfree-lazy-expire no
    lazyfree-lazy-server-del no
    replica-lazy-flush no
    appendonly no
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    aof-load-truncated yes
    aof-use-rdb-preamble yes
    lua-time-limit 5000

    Note: to accept connections from outside the container, change bind 127.0.0.1 (e.g. to bind 0.0.0.0) and review protected-mode; and daemonize must stay no, otherwise the container exits as soon as it starts.
    817. #
    818. # Replicas migrate to orphaned masters only if there are still at least a
    819. # given number of other working replicas for their old master. This number
    820. # is the "migration barrier". A migration barrier of 1 means that a replica
    821. # will migrate only if there is at least 1 other working replica for its master
    822. # and so forth. It usually reflects the number of replicas you want for every
    823. # master in your cluster.
    824. #
    825. # Default is 1 (replicas migrate only if their masters remain with at least
    826. # one replica). To disable migration just set it to a very large value.
    827. # A value of 0 can be set but is useful only for debugging and dangerous
    828. # in production.
    829. #
    830. # cluster-migration-barrier 1
    831. # By default Redis Cluster nodes stop accepting queries if they detect there
    832. # is at least an hash slot uncovered (no available node is serving it).
    833. # This way if the cluster is partially down (for example a range of hash slots
    834. # are no longer covered) all the cluster becomes, eventually, unavailable.
    835. # It automatically returns available as soon as all the slots are covered again.
    836. #
    837. # However sometimes you want the subset of the cluster which is working,
    838. # to continue to accept queries for the part of the key space that is still
    839. # covered. In order to do so, just set the cluster-require-full-coverage
    840. # option to no.
    841. #
    842. # cluster-require-full-coverage yes
    843. # This option, when set to yes, prevents replicas from trying to failover its
    844. # master during master failures. However the master can still perform a
    845. # manual failover, if forced to do so.
    846. #
    847. # This is useful in different scenarios, especially in the case of multiple
    848. # data center operations, where we want one side to never be promoted if not
    849. # in the case of a total DC failure.
    850. #
    851. # cluster-replica-no-failover no
    852. # In order to setup your cluster make sure to read the documentation
    853. # available at http://redis.io web site.
    854. ########################## CLUSTER DOCKER/NAT support ########################
    855. # In certain deployments, Redis Cluster nodes address discovery fails, because
    856. # addresses are NAT-ted or because ports are forwarded (the typical case is
    857. # Docker and other containers).
    858. #
    859. # In order to make Redis Cluster working in such environments, a static
    860. # configuration where each node knows its public address is needed. The
    861. # following two options are used for this scope, and are:
    862. #
    863. # * cluster-announce-ip
    864. # * cluster-announce-port
    865. # * cluster-announce-bus-port
    866. #
    867. # Each instruct the node about its address, client port, and cluster message
    868. # bus port. The information is then published in the header of the bus packets
    869. # so that other nodes will be able to correctly map the address of the node
    870. # publishing the information.
    871. #
    872. # If the above options are not used, the normal Redis Cluster auto-detection
    873. # will be used instead.
    874. #
    875. # Note that when remapped, the bus port may not be at the fixed offset of
    876. # clients port + 10000, so you can specify any port and bus-port depending
    877. # on how they get remapped. If the bus-port is not set, a fixed offset of
    878. # 10000 will be used as usually.
    879. #
    880. # Example:
    881. #
    882. # cluster-announce-ip 10.1.1.5
    883. # cluster-announce-port 6379
    884. # cluster-announce-bus-port 6380
    885. ################################## SLOW LOG ###################################
    886. # The Redis Slow Log is a system to log queries that exceeded a specified
    887. # execution time. The execution time does not include the I/O operations
    888. # like talking with the client, sending the reply and so forth,
    889. # but just the time needed to actually execute the command (this is the only
    890. # stage of command execution where the thread is blocked and can not serve
    891. # other requests in the meantime).
    892. #
    893. # You can configure the slow log with two parameters: one tells Redis
    894. # what is the execution time, in microseconds, to exceed in order for the
    895. # command to get logged, and the other parameter is the length of the
    896. # slow log. When a new command is logged the oldest one is removed from the
    897. # queue of logged commands.
    898. # The following time is expressed in microseconds, so 1000000 is equivalent
    899. # to one second. Note that a negative number disables the slow log, while
    900. # a value of zero forces the logging of every command.
    901. slowlog-log-slower-than 10000
    902. # There is no limit to this length. Just be aware that it will consume memory.
    903. # You can reclaim memory used by the slow log with SLOWLOG RESET.
    904. slowlog-max-len 128
    905. ################################ LATENCY MONITOR ##############################
    906. # The Redis latency monitoring subsystem samples different operations
    907. # at runtime in order to collect data related to possible sources of
    908. # latency of a Redis instance.
    909. #
    910. # Via the LATENCY command this information is available to the user that can
    911. # print graphs and obtain reports.
    912. #
    913. # The system only logs operations that were performed in a time equal or
    914. # greater than the amount of milliseconds specified via the
    915. # latency-monitor-threshold configuration directive. When its value is set
    916. # to zero, the latency monitor is turned off.
    917. #
    918. # By default latency monitoring is disabled since it is mostly not needed
    919. # if you don't have latency issues, and collecting data has a performance
    920. # impact, that while very small, can be measured under big load. Latency
    921. # monitoring can easily be enabled at runtime using the command
    922. # "CONFIG SET latency-monitor-threshold " if needed.
    923. latency-monitor-threshold 0
    924. ############################# EVENT NOTIFICATION ##############################
    925. # Redis can notify Pub/Sub clients about events happening in the key space.
    926. # This feature is documented at http://redis.io/topics/notifications
    927. #
    928. # For instance if keyspace events notification is enabled, and a client
    929. # performs a DEL operation on key "foo" stored in the Database 0, two
    930. # messages will be published via Pub/Sub:
    931. #
    932. # PUBLISH __keyspace@0__:foo del
    933. # PUBLISH __keyevent@0__:del foo
    934. #
    935. # It is possible to select the events that Redis will notify among a set
    936. # of classes. Every class is identified by a single character:
    937. #
    938. # K Keyspace events, published with __keyspace@<db>__ prefix.
    939. # E Keyevent events, published with __keyevent@<db>__ prefix.
    940. # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
    941. # $ String commands
    942. # l List commands
    943. # s Set commands
    944. # h Hash commands
    945. # z Sorted set commands
    946. # x Expired events (events generated every time a key expires)
    947. # e Evicted events (events generated when a key is evicted for maxmemory)
    948. # A Alias for g$lshzxe, so that the "AKE" string means all the events.
    949. #
    950. # The "notify-keyspace-events" takes as argument a string that is composed
    951. # of zero or multiple characters. The empty string means that notifications
    952. # are disabled.
    953. #
    954. # Example: to enable list and generic events, from the point of view of the
    955. # event name, use:
    956. #
    957. # notify-keyspace-events Elg
    958. #
    959. # Example 2: to get the stream of the expired keys subscribing to channel
    960. # name __keyevent@0__:expired use:
    961. #
    962. # notify-keyspace-events Ex
    963. #
    964. # By default all notifications are disabled because most users don't need
    965. # this feature and the feature has some overhead. Note that if you don't
    966. # specify at least one of K or E, no events will be delivered.
    967. notify-keyspace-events ""
    968. ############################### ADVANCED CONFIG ###############################
    969. # Hashes are encoded using a memory efficient data structure when they have a
    970. # small number of entries, and the biggest entry does not exceed a given
    971. # threshold. These thresholds can be configured using the following directives.
    972. hash-max-ziplist-entries 512
    973. hash-max-ziplist-value 64
    974. # Lists are also encoded in a special way to save a lot of space.
    975. # The number of entries allowed per internal list node can be specified
    976. # as a fixed maximum size or a maximum number of elements.
    977. # For a fixed maximum size, use -5 through -1, meaning:
    978. # -5: max size: 64 Kb <-- not recommended for normal workloads
    979. # -4: max size: 32 Kb <-- not recommended
    980. # -3: max size: 16 Kb <-- probably not recommended
    981. # -2: max size: 8 Kb <-- good
    982. # -1: max size: 4 Kb <-- good
    983. # Positive numbers mean store up to _exactly_ that number of elements
    984. # per list node.
    985. # The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
    986. # but if your use case is unique, adjust the settings as necessary.
    987. list-max-ziplist-size -2
    988. # Lists may also be compressed.
    989. # Compress depth is the number of quicklist ziplist nodes from *each* side of
    990. # the list to *exclude* from compression. The head and tail of the list
    991. # are always uncompressed for fast push/pop operations. Settings are:
    992. # 0: disable all list compression
    993. # 1: depth 1 means "don't start compressing until after 1 node into the list,
    994. # going from either the head or tail"
    995. # So: [head]->node->node->...->node->[tail]
    996. # [head], [tail] will always be uncompressed; inner nodes will compress.
    997. # 2: [head]->[next]->node->node->...->node->[prev]->[tail]
    998. # 2 here means: don't compress head or head->next or tail->prev or tail,
    999. # but compress all nodes between them.
    1000. # 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
    1001. # etc.
    1002. list-compress-depth 0
    1003. # Sets have a special encoding in just one case: when a set is composed
    1004. # of just strings that happen to be integers in radix 10 in the range
    1005. # of 64 bit signed integers.
    1006. # The following configuration setting sets the limit in the size of the
    1007. # set in order to use this special memory saving encoding.
    1008. set-max-intset-entries 512
    1009. # Similarly to hashes and lists, sorted sets are also specially encoded in
    1010. # order to save a lot of space. This encoding is only used when the length and
    1011. # elements of a sorted set are below the following limits:
    1012. zset-max-ziplist-entries 128
    1013. zset-max-ziplist-value 64
    1014. # HyperLogLog sparse representation bytes limit. The limit includes the
    1015. # 16 bytes header. When an HyperLogLog using the sparse representation crosses
    1016. # this limit, it is converted into the dense representation.
    1017. #
    1018. # A value greater than 16000 is totally useless, since at that point the
    1019. # dense representation is more memory efficient.
    1020. #
    1021. # The suggested value is ~ 3000 in order to have the benefits of
    1022. # the space efficient encoding without slowing down too much PFADD,
    1023. # which is O(N) with the sparse encoding. The value can be raised to
    1024. # ~ 10000 when CPU is not a concern, but space is, and the data set is
    1025. # composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
    1026. hll-sparse-max-bytes 3000
    1027. # Streams macro node max size / items. The stream data structure is a radix
    1028. # tree of big nodes that encode multiple items inside. Using this configuration
    1029. # it is possible to configure how big a single node can be in bytes, and the
    1030. # maximum number of items it may contain before switching to a new node when
    1031. # appending new stream entries. If any of the following settings are set to
    1032. # zero, the limit is ignored, so for instance it is possible to set just a
    1033. # max entries limit by setting max-bytes to 0 and max-entries to the desired
    1034. # value.
    1035. stream-node-max-bytes 4096
    1036. stream-node-max-entries 100
    1037. # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
    1038. # order to help rehashing the main Redis hash table (the one mapping top-level
    1039. # keys to values). The hash table implementation Redis uses (see dict.c)
    1040. # performs a lazy rehashing: the more operation you run into a hash table
    1041. # that is rehashing, the more rehashing "steps" are performed, so if the
    1042. # server is idle the rehashing is never complete and some more memory is used
    1043. # by the hash table.
    1044. #
    1045. # The default is to use this millisecond 10 times every second in order to
    1046. # actively rehash the main dictionaries, freeing memory when possible.
    1047. #
    1048. # If unsure:
    1049. # use "activerehashing no" if you have hard latency requirements and it is
    1050. # not a good thing in your environment that Redis can reply from time to time
    1051. # to queries with 2 milliseconds delay.
    1052. #
    1053. # use "activerehashing yes" if you don't have such hard requirements but
    1054. # want to free memory asap when possible.
    1055. activerehashing yes
    1056. # The client output buffer limits can be used to force disconnection of clients
    1057. # that are not reading data from the server fast enough for some reason (a
    1058. # common reason is that a Pub/Sub client can't consume messages as fast as the
    1059. # publisher can produce them).
    1060. #
    1061. # The limit can be set differently for the three different classes of clients:
    1062. #
    1063. # normal -> normal clients including MONITOR clients
    1064. # replica -> replica clients
    1065. # pubsub -> clients subscribed to at least one pubsub channel or pattern
    1066. #
    1067. # The syntax of every client-output-buffer-limit directive is the following:
    1068. #
    1069. # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
    1070. #
    1071. # A client is immediately disconnected once the hard limit is reached, or if
    1072. # the soft limit is reached and remains reached for the specified number of
    1073. # seconds (continuously).
    1074. # So for instance if the hard limit is 32 megabytes and the soft limit is
    1075. # 16 megabytes / 10 seconds, the client will get disconnected immediately
    1076. # if the size of the output buffers reach 32 megabytes, but will also get
    1077. # disconnected if the client reaches 16 megabytes and continuously overcomes
    1078. # the limit for 10 seconds.
    1079. #
    1080. # By default normal clients are not limited because they don't receive data
    1081. # without asking (in a push way), but just after a request, so only
    1082. # asynchronous clients may create a scenario where data is requested faster
    1083. # than it can read.
    1084. #
    1085. # Instead there is a default limit for pubsub and replica clients, since
    1086. # subscribers and replicas receive data in a push fashion.
    1087. #
    1088. # Both the hard or the soft limit can be disabled by setting them to zero.
    1089. client-output-buffer-limit normal 0 0 0
    1090. client-output-buffer-limit replica 256mb 64mb 60
    1091. client-output-buffer-limit pubsub 32mb 8mb 60
    1092. # Client query buffers accumulate new commands. They are limited to a fixed
    1093. # amount by default in order to avoid that a protocol desynchronization (for
    1094. # instance due to a bug in the client) will lead to unbound memory usage in
    1095. # the query buffer. However you can configure it here if you have very special
    1096. # needs, such us huge multi/exec requests or alike.
    1097. #
    1098. # client-query-buffer-limit 1gb
    1099. # In the Redis protocol, bulk requests, that are, elements representing single
    1100. # strings, are normally limited to 512 mb. However you can change this limit
    1101. # here.
    1102. #
    1103. # proto-max-bulk-len 512mb
    1104. # Redis calls an internal function to perform many background tasks, like
    1105. # closing connections of clients in timeout, purging expired keys that are
    1106. # never requested, and so forth.
    1107. #
    1108. # Not all tasks are performed with the same frequency, but Redis checks for
    1109. # tasks to perform according to the specified "hz" value.
    1110. #
    1111. # By default "hz" is set to 10. Raising the value will use more CPU when
    1112. # Redis is idle, but at the same time will make Redis more responsive when
    1113. # there are many keys expiring at the same time, and timeouts may be
    1114. # handled with more precision.
    1115. #
    1116. # The range is between 1 and 500, however a value over 100 is usually not
    1117. # a good idea. Most users should use the default of 10 and raise this up to
    1118. # 100 only in environments where very low latency is required.
    1119. hz 10
    1120. # Normally it is useful to have an HZ value which is proportional to the
    1121. # number of clients connected. This is useful in order, for instance, to
    1122. # avoid too many clients are processed for each background task invocation
    1123. # in order to avoid latency spikes.
    1124. #
    1125. # Since the default HZ value by default is conservatively set to 10, Redis
    1126. # offers, and enables by default, the ability to use an adaptive HZ value
    1127. # which will temporary raise when there are many connected clients.
    1128. #
    1129. # When dynamic HZ is enabled, the actual configured HZ will be used as
    1130. # as a baseline, but multiples of the configured HZ value will be actually
    1131. # used as needed once more clients are connected. In this way an idle
    1132. # instance will use very little CPU time while a busy instance will be
    1133. # more responsive.
    1134. dynamic-hz yes
    1135. # When a child rewrites the AOF file, if the following option is enabled
    1136. # the file will be fsync-ed every 32 MB of data generated. This is useful
    1137. # in order to commit the file to the disk more incrementally and avoid
    1138. # big latency spikes.
    1139. aof-rewrite-incremental-fsync yes
    1140. # When redis saves RDB file, if the following option is enabled
    1141. # the file will be fsync-ed every 32 MB of data generated. This is useful
    1142. # in order to commit the file to the disk more incrementally and avoid
    1143. # big latency spikes.
    1144. rdb-save-incremental-fsync yes
    1145. # Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
    1146. # idea to start with the default settings and only change them after investigating
    1147. # how to improve the performances and how the keys LFU change over time, which
    1148. # is possible to inspect via the OBJECT FREQ command.
    1149. #
    1150. # There are two tunable parameters in the Redis LFU implementation: the
    1151. # counter logarithm factor and the counter decay time. It is important to
    1152. # understand what the two parameters mean before changing them.
    1153. #
    1154. # The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis
    1155. # uses a probabilistic increment with logarithmic behavior. Given the value
    1156. # of the old counter, when a key is accessed, the counter is incremented in
    1157. # this way:
    1158. #
    1159. # 1. A random number R between 0 and 1 is extracted.
    1160. # 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
    1161. # 3. The counter is incremented only if R < P.
    1162. #
    1163. # The default lfu-log-factor is 10. This is a table of how the frequency
    1164. # counter changes with a different number of accesses with different
    1165. # logarithmic factors:
    1166. #
    1167. # +--------+------------+------------+------------+------------+------------+
    1168. # | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
    1169. # +--------+------------+------------+------------+------------+------------+
    1170. # | 0 | 104 | 255 | 255 | 255 | 255 |
    1171. # +--------+------------+------------+------------+------------+------------+
    1172. # | 1 | 18 | 49 | 255 | 255 | 255 |
    1173. # +--------+------------+------------+------------+------------+------------+
    1174. # | 10 | 10 | 18 | 142 | 255 | 255 |
    1175. # +--------+------------+------------+------------+------------+------------+
    1176. # | 100 | 8 | 11 | 49 | 143 | 255 |
    1177. # +--------+------------+------------+------------+------------+------------+
    1178. #
    1179. # NOTE: The above table was obtained by running the following commands:
    1180. #
    1181. # redis-benchmark -n 1000000 incr foo
    1182. # redis-cli object freq foo
    1183. #
    1184. # NOTE 2: The counter initial value is 5 in order to give new objects a chance
    1185. # to accumulate hits.
    1186. #
    1187. # The counter decay time is the time, in minutes, that must elapse in order
    1188. # for the key counter to be divided by two (or decremented if it has a value
    1189. # less <= 10).
    1190. #
    1191. # The default value for the lfu-decay-time is 1. A Special value of 0 means to
    1192. # decay the counter every time it happens to be scanned.
    1193. #
    1194. # lfu-log-factor 10
    1195. # lfu-decay-time 1
    1196. ########################### ACTIVE DEFRAGMENTATION #######################
    1197. #
    1198. # WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
    1199. # even in production and manually tested by multiple engineers for some
    1200. # time.
    1201. #
    1202. # What is active defragmentation?
    1203. # -------------------------------
    1204. #
    1205. # Active (online) defragmentation allows a Redis server to compact the
    1206. # spaces left between small allocations and deallocations of data in memory,
    1207. # thus allowing to reclaim back memory.
    1208. #
    1209. # Fragmentation is a natural process that happens with every allocator (but
    1210. # less so with Jemalloc, fortunately) and certain workloads. Normally a server
    1211. # restart is needed in order to lower the fragmentation, or at least to flush
    1212. # away all the data and create it again. However thanks to this feature
    1213. # implemented by Oran Agra for Redis 4.0 this process can happen at runtime
    1214. # in an "hot" way, while the server is running.
    1215. #
    1216. # Basically when the fragmentation is over a certain level (see the
    1217. # configuration options below) Redis will start to create new copies of the
    1218. # values in contiguous memory regions by exploiting certain specific Jemalloc
    1219. # features (in order to understand if an allocation is causing fragmentation
    1220. # and to allocate it in a better place), and at the same time, will release the
    1221. # old copies of the data. This process, repeated incrementally for all the keys
    1222. # will cause the fragmentation to drop back to normal values.
    1223. #
    1224. # Important things to understand:
    1225. #
    1226. # 1. This feature is disabled by default, and only works if you compiled Redis
    1227. # to use the copy of Jemalloc we ship with the source code of Redis.
    1228. # This is the default with Linux builds.
    1229. #
    1230. # 2. You never need to enable this feature if you don't have fragmentation
    1231. # issues.
    1232. #
    1233. # 3. Once you experience fragmentation, you can enable this feature when
    1234. # needed with the command "CONFIG SET activedefrag yes".
    1235. #
    1236. # The configuration parameters are able to fine tune the behavior of the
    1237. # defragmentation process. If you are not sure about what they mean it is
    1238. # a good idea to leave the defaults untouched.
    1239. # Enabled active defragmentation
    1240. # activedefrag yes
    1241. # Minimum amount of fragmentation waste to start active defrag
    1242. # active-defrag-ignore-bytes 100mb
    1243. # Minimum percentage of fragmentation to start active defrag
    1244. # active-defrag-threshold-lower 10
    1245. # Maximum percentage of fragmentation at which we use maximum effort
    1246. # active-defrag-threshold-upper 100
    1247. # Minimal effort for defrag in CPU percentage
    1248. # active-defrag-cycle-min 5
    1249. # Maximal effort for defrag in CPU percentage
    1250. # active-defrag-cycle-max 75
    1251. # Maximum number of set/hash/zset/list fields that will be processed from
    1252. # the main dictionary scan
    1253. # active-defrag-max-scan-fields 1000
    1254. # It is possible to pin different threads and processes of Redis to specific
    1255. # CPUs in your system, in order to maximize the performances of the server.
    1256. # This is useful both in order to pin different Redis threads in different
    1257. # CPUs, but also in order to make sure that multiple Redis instances running
    1258. # in the same host will be pinned to different CPUs.
    1259. #
    1260. # Normally you can do this using the "taskset" command, however it is also
    1261. # possible to this via Redis configuration directly, both in Linux and FreeBSD.
    1262. #
    1263. # You can pin the server/IO threads, bio threads, aof rewrite child process, and
    1264. # the bgsave child process. The syntax to specify the cpu list is the same as
    1265. # the taskset command:
    1266. #
    1267. # Set redis server/io threads to cpu affinity 0,2,4,6:
    1268. # server_cpulist 0-7:2
    1269. #
    1270. # Set bio threads to cpu affinity 1,3:
    1271. # bio_cpulist 1,3
    1272. #
    1273. # Set aof rewrite child process to cpu affinity 8,9,10,11:
    1274. # aof_rewrite_cpulist 8-11
    1275. #
    1276. # Set bgsave child process to cpu affinity 1,10,11
    1277. # bgsave_cpulist 1,10-11
    1278. # In some cases redis will emit warnings and even refuse to start if it detects
    1279. # that the system is in bad state, it is possible to suppress these warnings
    1280. # by setting the following config which takes a space delimited list of warnings
    1281. # to suppress
    1282. #
    1283. # ignore-warnings ARM64-COW-BUG

     Start a container from the redis image

    1. docker run -p 6379:6379 --name redis \
    2. -v /usr/local/redis/data:/data \
    3. -v /usr/local/redis/conf/redis.conf:/etc/redis/redis.conf \
    4. -d redis:5.0.14 redis-server /etc/redis/redis.conf

    -p 6379:6379 maps ports: accessing port 6379 on the host is equivalent to accessing port 6379 inside the redis container.
    --name gives the container a name; if omitted, Docker assigns a generated name.
    -v mounts a directory, e.g. host directory /usr/local/redis/data is mounted at container directory /data.
    -d runs the container in the background.
    redis:5.0.14 is the image name:tag to run.
    redis-server is the command executed inside the container to start redis.
    /etc/redis/redis.conf is the argument passed to redis-server, pointing at the mounted config file.
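
    One caveat, assuming the host paths used above: the redis.conf bind mount expects the file to already exist on the host. If it does not, Docker creates an empty directory at that path and redis-server fails to start. A minimal preparation sketch (the configuration shown earlier can be saved as the host file):

    # create the host directories used by the -v mounts above
    mkdir -p /usr/local/redis/data /usr/local/redis/conf
    # then place the redis.conf shown above (or a stock 5.0 config) at:
    #   /usr/local/redis/conf/redis.conf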

     Enter the container (type exit to leave it)

    docker exec -it redis bash
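
    As a quick sanity check (a sketch, assuming the container name redis used above), redis-cli inside the container should answer PONG:

    docker exec -it redis redis-cli ping
    # expected reply: PONG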

    Installing Nginx

    Search for the nginx image on https://hub.docker.com/

     Pull a specific version of nginx

    docker pull nginx:1.22.0
    Simple start (the purpose is to obtain nginx's default resource files, which are placed under /etc/nginx by default)
    docker run --name nginx -d nginx:1.22.0

    Copy the nginx configuration files from the container to the host

    1. docker cp -a container_name:container_dir host_dir
    2. # for example
    3. docker cp -a nginx:/etc/nginx /usr/local/nginx

    Force-remove the nginx container we just started

    docker rm -f nginx

     Start a container from the nginx image

    1. docker run -p 80:80 --restart always --name nginx \
    2. -v /usr/local/nginx/:/etc/nginx/ \
    3. -v /usr/local/nginx/conf.d:/etc/nginx/conf.d \
    4. -d nginx:1.22.0

    --restart always makes the container restart automatically (e.g. start with the Docker daemon after a host reboot).
    -p 80:80 maps ports: accessing port 80 on the host is equivalent to accessing port 80 inside the nginx container.
    --name gives the container a name; if omitted, Docker assigns a generated name.
    -v mounts a directory, e.g. host directory /usr/local/nginx/ is mounted at container directory /etc/nginx/.
    -d runs the container in the background.
    nginx:1.22.0 is the image name:tag to run.
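
    To confirm the container is serving (a quick check, assuming port 80 as mapped above):

    curl -I http://localhost/
    # an HTTP/1.1 200 OK response with a Server: nginx header indicates success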

    Nginx configuration

    The nginx.conf used by the Docker nginx image contains the directive include /etc/nginx/conf.d/*.conf; (this path is not wrong: it refers to the container path, which our mount maps to the host directory). In other words, the Docker nginx image relies on per-site configuration files, so from now on we put our nginx configuration under nginx/conf.d/. Here we use nginx/conf.d/default.conf.
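
    For orientation, a minimal conf.d/default.conf has roughly this shape (a trimmed sketch of the stock file shipped in the image):

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html;
        }
    }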

    Run three test jar packages

    1. docker run -p 8901:8901 --name tomcat8901 \
    2. -v /root/servers:/usr/local/tomcat \
    3. -d jdk:8 \
    4. java -jar /usr/local/tomcat/tomcat8901.jar
    1. docker run -p 8902:8902 --name tomcat8902 \
    2. -v /root/servers:/usr/local/tomcat \
    3. -d jdk:8 \
    4. java -jar /usr/local/tomcat/tomcat8902.jar
    1. docker run -p 8903:8903 --name tomcat8903 \
    2. -v /root/servers:/usr/local/tomcat \
    3. -d jdk:8 \
    4. java -jar /usr/local/tomcat/tomcat8903.jar
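
    All three containers should now be running; one quick way to check (assuming the names used above):

    docker ps --filter "name=tomcat89"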

    Reverse proxy and load balancing configuration: vi nginx/conf.d/default.conf

    1. # add this upstream block at the top of default.conf
    2. upstream myserver {
    3.     server 192.168.111.199:8901;
    4.     server 192.168.111.199:8902;
    5.     server 192.168.111.199:8903;
    6. }
    1. # then, inside the existing server { } block, point location / at the upstream
    2. location / {
    3.     proxy_pass http://myserver;
    4. }

     

     After modifying the configuration file, restart nginx:

    docker restart nginx

    Visit http://192.168.111.199/hello in a browser; nginx forwards the request to one of the three backend containers.
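
    To watch the round-robin distribution from a shell (a sketch; it assumes the /hello endpoint of the demo jars returns something that identifies which instance answered):

    for i in 1 2 3 4 5 6; do
      curl -s http://192.168.111.199/hello
      echo
    done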

    Running jar packages in containers

    Step 1: copy the zipkin and sentinel jars to a designated directory on the host, e.g. /root/servers (create the servers directory if it does not exist).
    Step 2: start a container from the image and launch the web service with java.

    Remove containers that are no longer in use: docker container prune
    View a container's logs: docker container logs container_id
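
    For services started with -d, following the log stream is often more useful; both flags below are standard docker options:

    docker container logs -f --tail 100 tomcat8901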

    Start the zipkin service from the jdk:8 image (once started, it can be reached from the host at localhost:9411).

    1. docker run -p 9411:9411 --name zipkin \
    2. -v /root/servers:/usr/local/zipkin \
    3. -d jdk:8 \
    4. java -jar /usr/local/zipkin/zipkin-server-2.23.16-exec.jar
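
    If the page does not come up right away, the startup log usually tells you why (a quick check, assuming the container name zipkin used above):

    docker logs -f zipkin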

    Request URL: http://192.168.111.199:9411/zipkin/

    Start the sentinel service from the jdk:8 image (once started, it can be reached from the host at localhost:8180).

    1. docker run -p 8180:8080 --name sentinel \
    2. -v /root/servers:/usr/local/sentinel \
    3. -d jdk:8 \
    4. java -jar /usr/local/sentinel/sentinel-dashboard-1.8.0.jar

    Request URL: http://192.168.111.199:8180/
    Username/password: sentinel/sentinel
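
    Note the asymmetric mapping -p 8180:8080: the sentinel dashboard listens on port 8080 inside the container, and we expose it on host port 8180 to avoid clashing with anything already using 8080 on the host. A quick reachability check:

    curl -I http://localhost:8180/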

    Virtual networks

    Containers can be interconnected through Docker virtual networks.

    Any number of virtual networks can be created in Docker, and containers on the same virtual network can reach one another. When a virtual network is created, the host is connected to it as well.

     

    1. # create a virtual network named my-net
    2. docker network create my-net
    3. # list virtual networks
    4. docker network ls
    5. # show the network's details
    6. docker inspect my-net
    7. # show the new virtual NIC created on the host
    8. ifconfig
    9. # remove all existing containers
    10. docker rm -f $(docker ps -aq)
    11. # create two containers, cat1 and cat2,
    12. # attached to the virtual network my-net
    13. docker run -d --name cat1 \
    14. --net my-net \
    15. tomcat
    16. docker run -d --name cat2 \
    17. --net my-net \
    18. tomcat
    19. # look up the two containers' virtual network IPs
    20. docker inspect cat1
    21. docker inspect cat2
    22. # test connectivity
    23. # ping both containers from the host
    24. ping 172.18.0.2
    25. ping 172.18.0.3
    26. # from inside cat1, ping the host and cat2
    27. docker exec -it cat1 ping 172.18.0.1
    28. docker exec -it cat1 ping 172.18.0.3
    29. # a container can reach another by its name: Docker's embedded DNS resolves container names on user-defined networks
    30. docker exec -it cat1 ping cat2
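
    When you are done experimenting, the test setup can be torn down with standard docker network commands (a cleanup sketch; note that the 172.18.0.x addresses above depend on the subnet Docker assigned, so check the docker inspect output on your machine):

    # remove the test containers, then the network
    docker rm -f cat1 cat2
    docker network rm my-net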

  • Original article: https://blog.csdn.net/u014644574/article/details/125730634