• InfluxDB getting-started tutorial and problems encountered along the way


    I. Installation and Usage

    1. Download
    GitHub: https://github.com/Muscleape/influxdb_demo
    64-bit build: https://dl.influxdata.com/influxdb/releases/influxdb-1.7.4_windows_amd64.zip
    2. Extract the archive

    3. Modify the configuration file
    InfluxDB stores its data in three main directories. By default these are meta, wal, and data, and they are created automatically once the server is running.
    meta
    Stores database metadata; the meta directory contains a meta.db file.
    wal
    Stores write-ahead log files, ending in .wal.
    data
    Stores the actual data files, ending in .tsm.


    If you do not use the influxdb.conf configuration file, you can simply double-click influxd.exe and start using influx. In that case the three directories above are placed under a .influxdb folder in the user's directory on the Windows C: drive, and the default port is 8086. The following shows how to change the directory locations and the port.

    (1) Modify the following path settings

    (2) To change the port, modify the following settings

    (3) How to start after changing the configuration

      To use InfluxDB you must first start influxd.exe. Launching it directly uses the default configuration; to use your own configuration file, start it with the conf file specified: influxd --config influxdb.conf
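      As a sketch, assuming the archive was extracted to D:\influxdb-1.7.4-1 (a hypothetical install path) and influxdb.conf sits in that same directory, the startup from a cmd window would be:

      cd /d D:\influxdb-1.7.4-1
      influxd.exe --config influxdb.conf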

     If the following occurs, startup has failed and influxdb.conf still needs to be modified.

       Edit the following line in influxdb.conf: change the address, uncomment it, and save.

         Run influxd --config influxdb.conf again; the following output indicates a successful start.

      (4) After starting influxd, keep that window open, open another cmd window, and run the command: influx

     InfluxDB ships with a command-line client, influx, which can be used to insert, delete, update, and query the database.

    The influx client can be pointed at a specific port; -port tells it which port to connect to:

    influx.exe -port 8083
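    For example, to connect to an influxd instance listening on the default HTTP port 8086 and verify the connection:

    influx -host 127.0.0.1 -port 8086
    > show databases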


    4. Starting InfluxDB

    You can either double-click influxd.exe directly or start it from the command line.

    When using the command line you can specify the configuration file and port. (While influx.exe is running, influxd.exe must not be closed.)

     Startup successful

    5. Other configuration
    Official documentation: https://archive.docs.influxdata.com/influxdb/v1.2/administration/config/
    Translated comments for some of the configuration options have been added to the file.


    6. Common operations (usage)
    With influxd.exe running normally, open influx.exe. A successful connection looks like the figure; if the connection fails, check that the address and port match.

    1. InfluxDB database operations
    show databases                   # list the databases
    create database shijiange        # create a database named shijiange
    drop database shijiange          # delete the database named shijiange
    2. Measurement operations (a measurement is similar to a table)
    use shijiange                    # select the database you want to work with
    show measurements                # list all measurements (returns nothing if there are none)
    insert cpuinfo,item=shijiange_47.105.99.75_cpu.idle value=90
    select * from cpuinfo            # query all data in cpuinfo
    drop measurement cpuinfo         # delete a measurement
    There is no UPDATE statement, although there is an ALTER command; in InfluxDB, deletes and updates are rarely needed. For data retention there is a special way of deleting data, covered later.
    Example: format of an insert
    insert cpuinfo(measurement: table name),item=shijiange_1.1.1.1_cpu.idle(tags: data identifiers) value=90(fields: data)
    The names item and value can both be changed.
    3. Common query and delete operations
    select * from cpuinfo
    select * from cpuinfo limit 2    # if there is too much data, use limit to cap how many rows are returned
    delete from cpuinfo where time=1531992939634316937   # delete a single point
    delete from cpuinfo              # delete all data
    4. Setting how long data is retained
    SHOW RETENTION POLICIES ON shijiange                # show the retention policies of database shijiange
    CREATE RETENTION POLICY rp_shijiange ON shijiange DURATION 30d REPLICATION 1 DEFAULT   # keep data for one month
    ALTER RETENTION POLICY rp_shijiange ON shijiange DURATION 90d REPLICATION 1 DEFAULT    # change the retention policy
    DROP RETENTION POLICY rp_shijiange ON shijiange     # drop the retention policy; the data stored under it is deleted as well, so this is generally not done
    5. Using a readable time format
    Display data with a standard time format so that the time column is easier to read: precision rfc3339
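    Putting these commands together, a minimal interactive session in the influx shell might look like this (the database and measurement names follow the examples above; query output is omitted):

    > create database shijiange
    > use shijiange
    > insert cpuinfo,item=shijiange_1.1.1.1_cpu.idle value=90
    > precision rfc3339
    > select * from cpuinfo limit 2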

    Data retention policies

    Time-series points are normally not deleted directly. In practice we usually only care about recent data, and historical data does not need to be kept forever, otherwise it takes up too much space. You can configure retention policies so that data older than the specified duration is deleted automatically.

    SHOW RETENTION POLICIES ON "testDB"   // show the retention policies of the current database
    CREATE RETENTION POLICY "rp_name" ON "db_name" DURATION 30d REPLICATION 1 DEFAULT   // create a new retention policy
    # Explanation:
    # rp_name: policy name
    # db_name: the database the policy applies to
    # 30d: keep data for 30 days; data older than 30 days is deleted
    #      other time units are available, e.g. h (hours) and w (weeks)
    # REPLICATION 1: number of replicas; 1 is fine here
    # DEFAULT: make this the default policy
    A policy can also be modified or dropped with:
    ALTER RETENTION POLICY "rp_name" ON "db_name" DURATION 3w DEFAULT
    DROP RETENTION POLICY "rp_name" ON "db_name"
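    As a sketch, a non-default policy can also be targeted explicitly when writing and querying from the influx shell (the policy name rp_7d and the measurement here are made up for illustration):

    CREATE RETENTION POLICY "rp_7d" ON "testDB" DURATION 7d REPLICATION 1
    INSERT INTO "rp_7d" cpuinfo,item=demo value=42
    SELECT * FROM "rp_7d"."cpuinfo"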


    name: the policy name; in this example it is default

    duration: how long data is kept in the database; 0 means unlimited

    shardGroupDuration: the time span covered by each shard group. A shard group is a basic storage unit in InfluxDB; queries spanning more than this duration may be somewhat less efficient.

    replicaN: short for REPLICATION, the number of replicas

    default: whether this is the default policy
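    These are the columns returned by SHOW RETENTION POLICIES. For a freshly created database in InfluxDB 1.x, where the auto-created policy is named autogen, the output would look roughly like this (illustrative only):

    > SHOW RETENTION POLICIES ON "testDB"
    name    duration shardGroupDuration replicaN default
    ----    -------- ------------------ -------- -------
    autogen 0s       168h0m0s           1        true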
     

     Creating a table and inserting data (there is no explicit syntax for creating a table; it is created automatically when the first point is inserted)
    Syntax:

    <measurement>[,<tag-key>=<tag-value>...] <field-key>=<field-value>[,<field-key>=<field-value>...] [unix-nano-timestamp]

    Here is a simple example:

    insert add_test,name=YiHui,phone=110 user_id=20,email="b**zewu@126.com"

    This inserts one point: the measurement is add_test, the tags are name and phone, and the fields are user_id and email.

    Everything before the space is tags; everything after the space is fields.

    insert sensor_data,sensor_type="风速",sensor_id="1" sensor_data=12.12
    insert Battery_Level,Change="处于充电状态",Device_ID="01" Battery_Level=1.0

    Note: when inserting data, a string field must be enclosed in double quotes (" "); leaving it unquoted or using single quotes (' ') is an error.

    > insert maintest,temperature=35.6 cputype=cpu001
    ERR: {"error":"unable to parse 'maintest,temperature=35.6 cputype=cpu001': invalid boolean"}
    > insert maintest,temperature=35.6 cputype="cpu001"
    >
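    Note that the quoting rule flips when filtering: in an InfluxQL WHERE clause, string values are compared using single quotes. For example, with the measurement above:

    > select * from maintest where cputype = 'cpu001'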

    A point in InfluxDB is a little special: a point (row) consists of a timestamp, tags (indexed), and fields (not indexed).

    Structure of InfluxDB data:

    A point consists of a timestamp (time), data (fields), and tags.

    Point attribute -> corresponding concept in a traditional database:

    time: the time of each record; it is the primary index in the database (generated automatically)

    fields: the recorded values (attributes without an index), i.e. the measured values such as temperature or humidity

    tags: indexed attributes, such as region or altitude

    One more term needs mentioning here: series. All data in the database is eventually displayed in charts, and a series represents the data in a measurement that would be drawn as one line on a chart; the set of series is determined by the combinations of tag values. They can be listed with:

    SHOW SERIES FROM "<measurement name>"
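    For instance, for the cpuinfo measurement used earlier, SHOW SERIES returns one key per distinct tag combination, roughly:

    > show series from cpuinfo
    key
    ---
    cpuinfo,item=shijiange_1.1.1.1_cpu.idle
    cpuinfo,item=shijiange_47.105.99.75_cpu.idle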

     Fields have four storage types: int, float, string, and boolean. Their usage is sketched below.
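    A minimal sketch of how each type is written in line protocol (the measurement name type_demo and its keys are made up): plain numbers are parsed as float, integers take a trailing i, booleans are written as true/false, and strings are double-quoted.

    insert type_demo,source=test f_float=12.5,f_int=3i,f_bool=true,f_string="hello"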

    List all users:

    show users

    Create a regular user:

    create user "..." with password '...'
    create user "ems" with password 'ems123'   # the password uses single quotes

    Create an administrator user:

    create user "..." with password '...' with all privileges
    create user "ems33" with password '1234' with all privileges

    Delete a user:

     drop user "..."

    InfluxDB's permission model is fairly simple: only READ, WRITE, and ALL. For more on user permissions see the official documentation: https://docs.influxdata.com/influxdb/v1.0/query_language/authentication_and_authorization/ .
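    As a sketch of granting and revoking those permissions (reusing the database and user names from the examples above):

    grant read on shijiange to "ems"
    grant write on shijiange to "ems"
    revoke all on shijiange from "ems"
    grant all privileges to "ems33"   # make an existing user an administrator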

    By default, InfluxDB, much like MongoDB, does not enable user authentication. You can enable it by editing the

    conf file and configuring the [http] block as follows:

    [http]
    enabled = true
    bind-address = ":8086"
    auth-enabled = true   # enable authentication
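    After enabling authentication and restarting influxd, the client must supply credentials, either on the command line or with the auth command inside the shell; a minimal sketch using the admin user created above:

    influx -username ems33 -password '1234'
    # or, inside an already open influx shell, run: auth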

    References:

    https://blog.csdn.net/vtnews/article/details/80197045

    https://www.cnblogs.com/stone-s/p/16457720.html

    https://www.likecs.com/show-306856081.html

    https://blog.csdn.net/weixin_43135178/article/details/108733373


     

    II. Problems Encountered

    1. InfluxDB cannot be connected to after changing the port

    When starting InfluxDB you may find that the port is already in use, in which case you need to edit the configuration file to change the port.

    However, after modifying the influxdb.conf file, running influx locally showed that it was not connecting to the port I had changed to; when I entered show databases, it returned a 401 error.

    After digging through a lot of material online, it turns out you only need to specify the port when starting the client:

    influx -host localhost -port 6086

    References:

    https://blog.csdn.net/weixin_39770653/article/details/119808573

     https://blog.csdn.net/ColorfulChen/article/details/125946615

    III. The InfluxDB configuration file in detail: influxdb.conf

    Official reference: Database Configuration | InfluxData Documentation Archive

    Global settings

    reporting-disabled = false   # whether to report InfluxDB usage data to InfluxData; default: false
    bind-address = ":8088"       # used for backup and restore; default: 8088

    1. [meta] settings

    [meta]
    dir = "/var/lib/influxdb/meta"   # directory where meta data is stored
    retention-autocreate = true      # controls the default retention policy; when a database is created an autogen policy is generated automatically; default: true
    logging-enabled = true           # whether meta logging is enabled; default: true

    2. [data] settings

    [data]
    dir = "/var/lib/influxdb/data"    # directory where the final data (TSM files) is stored
    wal-dir = "/var/lib/influxdb/wal" # directory for write-ahead log files
    query-log-enabled = true          # whether TSM engine query logging is enabled; default: true
    cache-max-memory-size = 1048576000   # maximum size of a shard's cache; writes are rejected above this; default: 1000MB, in bytes
    cache-snapshot-memory-size = 26214400   # snapshot size; above this the cache is flushed to a TSM file; default: 25MB, in bytes
    cache-snapshot-write-cold-duration = "10m"   # TSM engine snapshot write-to-disk delay; default: 10 minutes
    compact-full-write-cold-duration = "4h"      # maximum time before a cold shard's TSM files are fully compacted; default: 4 hours
    max-series-per-database = 1000000   # maximum number of series per database; 0 removes the limit; default: 1000000
    max-values-per-tag = 100000         # maximum number of values per tag; 0 removes the limit; default: 100000

    3. [coordinator] query-management settings

    [coordinator]
    write-timeout = "10s"        # write-operation timeout; default: 10s
    max-concurrent-queries = 0   # maximum number of concurrent queries; 0 means unlimited; default: 0
    query-timeout = "0s"         # query timeout; 0 means unlimited; default: 0s
    log-queries-after = "0s"     # slow-query threshold; 0 disables slow-query logging; default: 0s
    max-select-point = 0         # maximum number of points a SELECT can process; 0 means unlimited; default: 0
    max-select-series = 0        # maximum number of series a SELECT can process; 0 means unlimited; default: 0
    max-select-buckets = 0       # maximum number of "GROUP BY time()" buckets a SELECT can create; 0 means unlimited; default: 0

    4. [retention] enforcement of retention policies for old data

    [retention]
    enabled = true            # whether this module is enabled; default: true
    check-interval = "30m"    # check interval; default: "30m"

    5. [shard-precreation] shard pre-creation

    [shard-precreation]
    enabled = true            # whether this module is enabled; default: true
    check-interval = "10m"    # check interval; default: "10m"
    advance-period = "30m"    # maximum period ahead of a shard group's end time for which the next shard is pre-created; default: "30m"

    6. [monitor] controls InfluxDB's built-in self-monitoring. By default InfluxDB writes this data to the _internal database, creating it automatically if it does not exist. The default retention policy on _internal is 7 days; if you want a different retention policy you must create it yourself.

    [monitor]
    store-enabled = true            # whether this module is enabled; default: true
    store-database = "_internal"    # destination database; default: "_internal"
    store-interval = "10s"          # statistics interval; default: "10s"

    7. [admin] web administration page

    [admin]
    enabled = true   # whether this module is enabled; default: false
    bind-address = ":8083"   # bind address; default: ":8083"
    https-enabled = false    # whether HTTPS is enabled; default: false
    https-certificate = "/etc/ssl/influxdb.pem"   # HTTPS certificate path; default: "/etc/ssl/influxdb.pem"

    8. [http] HTTP API

    [http]
    enabled = true           # whether this module is enabled; default: true
    bind-address = ":8086"   # bind address; default: ":8086"
    auth-enabled = false     # whether authentication is enabled; default: false
    realm = "InfluxDB"       # JWT realm; default: "InfluxDB"
    log-enabled = true       # whether request logging is enabled; default: true
    write-tracing = false    # whether detailed write logging is enabled; if true, every write is logged; default: false
    pprof-enabled = true     # whether pprof is enabled; default: true
    https-enabled = false    # whether HTTPS is enabled; default: false
    https-certificate = "/etc/ssl/influxdb.pem"   # HTTPS certificate path; default: "/etc/ssl/influxdb.pem"
    https-private-key = ""   # HTTPS private key; no default
    shared-secret = ""       # shared secret used for JWT signing; no default
    max-row-limit = 0        # maximum number of rows a query may return; 0 means unlimited; default: 0
    max-connection-limit = 0 # maximum number of connections; 0 means unlimited; default: 0
    unix-socket-enabled = false   # whether to serve over a unix socket; default: false
    bind-socket = "/var/run/influxdb.sock"   # unix socket path; default: "/var/run/influxdb.sock"

    9. [subscriber] settings controlling how Kapacitor receives data

    [subscriber]
    enabled = true                # whether this module is enabled; default: true
    http-timeout = "30s"          # HTTP timeout; default: "30s"
    insecure-skip-verify = false  # whether insecure certificates are allowed
    ca-certs = ""                 # CA certificate path
    write-concurrency = 40        # write concurrency; default: 40
    write-buffer-size = 1000      # buffer size; default: 1000

    10. [[graphite]] settings

    [[graphite]]
    enabled = false            # whether this module is enabled; default: false
    database = "graphite"      # database name; default: "graphite"
    retention-policy = ""      # retention policy; no default
    bind-address = ":2003"     # bind address; default: ":2003"
    protocol = "tcp"           # protocol; default: "tcp"
    consistency-level = "one"  # consistency level; default: "one"
    batch-size = 5000          # batch size; default: 5000
    batch-pending = 10         # number of batches that may be pending in memory; default: 10
    batch-timeout = "1s"       # batch timeout; default: "1s"
    udp-read-buffer = 0        # UDP read-buffer size; 0 uses the OS default; values above the OS maximum cause an error; default: 0
    separator = "."            # separator joining multiple measurement parts; default: "."

    11. [[collectd]]

    [[collectd]]
    enabled = false            # whether this module is enabled; default: false
    bind-address = ":25826"    # bind address; default: ":25826"
    database = "collectd"      # database name; default: "collectd"
    retention-policy = ""      # retention policy; no default
    typesdb = "/usr/local/share/collectd"   # types.db path; default: "/usr/share/collectd/types.db"
    auth-file = "/etc/collectd/auth_file"
    batch-size = 5000
    batch-pending = 10
    batch-timeout = "10s"
    read-buffer = 0            # UDP read-buffer size; 0 uses the OS default; values above the OS maximum cause an error; default: 0

    12. [[opentsdb]]

    [[opentsdb]]
    enabled = false            # whether this module is enabled; default: false
    bind-address = ":4242"     # bind address; default: ":4242"
    database = "opentsdb"      # default database: "opentsdb"
    retention-policy = ""      # retention policy; no default
    consistency-level = "one"  # consistency level; default: "one"
    tls-enabled = false        # whether TLS is enabled; default: false
    certificate= "/etc/ssl/influxdb.pem"   # certificate path; default: "/etc/ssl/influxdb.pem"
    log-point-errors = true    # whether to log malformed points; default: true
    batch-size = 1000
    batch-pending = 5
    batch-timeout = "1s"

    13. [[udp]]

    [[udp]]
    enabled = false          # whether this module is enabled; default: false
    bind-address = ":8089"   # bind address; default: ":8089"
    database = "udp"         # database name; default: "udp"
    retention-policy = ""    # retention policy; no default
    batch-size = 5000
    batch-pending = 10
    batch-timeout = "1s"
    read-buffer = 0          # UDP read-buffer size; 0 uses the OS default; values above the OS maximum cause an error; default: 0

    14. [continuous_queries]

    [continuous_queries]
    enabled = true         # whether continuous queries (CQs) are enabled; default: true
    log-enabled = true     # whether logging is enabled; default: true
    run-interval = "1s"    # how often CQs are checked to see if they need to run; default: "1s"
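    For reference, continuous queries are written in InfluxQL and typically downsample data on a schedule; a minimal sketch against the shijiange database used earlier (the CQ name and target measurement are made up):

    CREATE CONTINUOUS QUERY cq_cpu_mean ON shijiange
    BEGIN
      SELECT mean(value) INTO cpuinfo_hourly FROM cpuinfo GROUP BY time(1h)
    END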

    My configuration file

    1. ### Welcome to the InfluxDB configuration file.
    2. # The values in this file override the default values used by the system if
    3. # a config option is not specified. The commented out lines are the configuration
    4. # field and the default value used. Uncommenting a line and changing the value
    5. # will change the value used at runtime when the process is restarted.
    6. # Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
    7. # The data includes a random ID, os, arch, version, the number of series and other
    8. # usage data. No data from user databases is ever transmitted.
    9. # Change this option to true to disable reporting.
    10. # reporting-disabled = false
    11. # Bind address to use for the RPC service for backup and restore.
    12. bind-address = "0.0.0.0:8088"
    13. ###
    14. ### [meta]
    15. ###
    16. ### Controls the parameters for the Raft consensus group that stores metadata
    17. ### about the InfluxDB cluster.
    18. ###
    19. [meta]
    20. # Where the metadata/raft database is stored
    21. dir = "D:/softwore/influxdb-1.7.4-1/meta"
    22. # Automatically create a default retention policy when creating a database.
    23. # retention-autocreate = true
    24. # If log messages are printed for the meta service
    25. # logging-enabled = true
    26. ###
    27. ### [data]
    28. ###
    29. ### Controls where the actual shard data for InfluxDB lives and how it is
    30. ### flushed from the WAL. "dir" may need to be changed to a suitable place
    31. ### for your system, but the WAL settings are an advanced configuration. The
    32. ### defaults should work for most systems.
    33. ###
    34. [data]
    35. # The directory where the TSM storage engine stores TSM files.
    36. dir = "D:/softwore/influxdb-1.7.4-1/data"
    37. # The directory where the TSM storage engine stores WAL files.
    38. wal-dir = "D:/softwore/influxdb-1.7.4-1/wal"
    39. # The amount of time that a write will wait before fsyncing. A duration
    40. # greater than 0 can be used to batch up multiple fsync calls. This is useful for slower
    41. # disks or when WAL write contention is seen. A value of 0s fsyncs every write to the WAL.
    42. # Values in the range of 0-100ms are recommended for non-SSD disks.
    43. # wal-fsync-delay = "0s"
    44. # The type of shard index to use for new shards. The default is an in-memory index that is
    45. # recreated at startup. A value of "tsi1" will use a disk based index that supports higher
    46. # cardinality datasets.
    47. # index-version = "inmem"
    48. # Trace logging provides more verbose output around the tsm engine. Turning
    49. # this on can provide more useful output for debugging tsm engine issues.
    50. # trace-logging-enabled = false
    51. # Whether queries should be logged before execution. Very useful for troubleshooting, but will
    52. # log any sensitive data contained within a query.
    53. # query-log-enabled = true
    54. # Validates incoming writes to ensure keys only have valid unicode characters.
    55. # This setting will incur a small overhead because every key must be checked.
    56. # validate-keys = false
    57. # Settings for the TSM engine
    58. # CacheMaxMemorySize is the maximum size a shard's cache can
    59. # reach before it starts rejecting writes.
    60. # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
    61. # Values without a size suffix are in bytes.
    62. # cache-max-memory-size = "1g"
    63. # CacheSnapshotMemorySize is the size at which the engine will
    64. # snapshot the cache and write it to a TSM file, freeing up memory
    65. # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
    66. # Values without a size suffix are in bytes.
    67. # cache-snapshot-memory-size = "25m"
    68. # CacheSnapshotWriteColdDuration is the length of time at
    69. # which the engine will snapshot the cache and write it to
    70. # a new TSM file if the shard hasn't received writes or deletes
    71. # cache-snapshot-write-cold-duration = "10m"
    72. # CompactFullWriteColdDuration is the duration at which the engine
    73. # will compact all TSM files in a shard if it hasn't received a
    74. # write or delete
    75. # compact-full-write-cold-duration = "4h"
    76. # The maximum number of concurrent full and level compactions that can run at one time. A
    77. # value of 0 results in 50% of runtime.GOMAXPROCS(0) used at runtime. Any number greater
    78. # than 0 limits compactions to that value. This setting does not apply
    79. # to cache snapshotting.
    80. # max-concurrent-compactions = 0
    81. # CompactThroughput is the rate limit in bytes per second that we
    82. # will allow TSM compactions to write to disk. Note that short bursts are allowed
    83. # to happen at a possibly larger value, set by CompactThroughputBurst
    84. # compact-throughput = "48m"
    85. # CompactThroughputBurst is the rate limit in bytes per second that we
    86. # will allow TSM compactions to write to disk.
    87. # compact-throughput-burst = "48m"
    88. # If true, then the mmap advise value MADV_WILLNEED will be provided to the kernel with respect to
    89. # TSM files. This setting has been found to be problematic on some kernels, and defaults to off.
    90. # It might help users who have slow disks in some cases.
    91. # tsm-use-madv-willneed = false
    92. # Settings for the inmem index
    93. # The maximum series allowed per database before writes are dropped. This limit can prevent
    94. # high cardinality issues at the database level. This limit can be disabled by setting it to
    95. # 0.
    96. # max-series-per-database = 1000000
    97. # The maximum number of tag values per tag that are allowed before writes are dropped. This limit
    98. # can prevent high cardinality tag values from being written to a measurement. This limit can be
    99. # disabled by setting it to 0.
    100. # max-values-per-tag = 100000
    101. # Settings for the tsi1 index
    102. # The threshold, in bytes, when an index write-ahead log file will compact
    103. # into an index file. Lower sizes will cause log files to be compacted more
    104. # quickly and result in lower heap usage at the expense of write throughput.
    105. # Higher sizes will be compacted less frequently, store more series in-memory,
    106. # and provide higher write throughput.
    107. # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
    108. # Values without a size suffix are in bytes.
    109. # max-index-log-file-size = "1m"
    110. # The size of the internal cache used in the TSI index to store previously
    111. # calculated series results. Cached results will be returned quickly from the cache rather
    112. # than needing to be recalculated when a subsequent query with a matching tag key/value
    113. # predicate is executed. Setting this value to 0 will disable the cache, which may
    114. # lead to query performance issues.
    115. # This value should only be increased if it is known that the set of regularly used
    116. # tag key/value predicates across all measurements for a database is larger than 100. An
    117. # increase in cache size may lead to an increase in heap usage.
    118. series-id-set-cache-size = 100
    119. ###
    120. ### [coordinator]
    121. ###
    122. ### Controls the clustering service configuration.
    123. ###
    124. [coordinator]
    125. # The default time a write request will wait until a "timeout" error is returned to the caller.
    126. # write-timeout = "10s"
    127. # The maximum number of concurrent queries allowed to be executing at one time. If a query is
    128. # executed and exceeds this limit, an error is returned to the caller. This limit can be disabled
    129. # by setting it to 0.
    130. # max-concurrent-queries = 0
    131. # The maximum time a query will is allowed to execute before being killed by the system. This limit
    132. # can help prevent run away queries. Setting the value to 0 disables the limit.
    133. # query-timeout = "0s"
    134. # The time threshold when a query will be logged as a slow query. This limit can be set to help
    135. # discover slow or resource intensive queries. Setting the value to 0 disables the slow query logging.
    136. # log-queries-after = "0s"
    137. # The maximum number of points a SELECT can process. A value of 0 will make
    138. # the maximum point count unlimited. This will only be checked every second so queries will not
    139. # be aborted immediately when hitting the limit.
    140. # max-select-point = 0
    141. # The maximum number of series a SELECT can run. A value of 0 will make the maximum series
    142. # count unlimited.
    143. # max-select-series = 0
    144. # The maxium number of group by time bucket a SELECT can create. A value of zero will max the maximum
    145. # number of buckets unlimited.
    146. # max-select-buckets = 0
    147. ###
    148. ### [retention]
    149. ###
    150. ### Controls the enforcement of retention policies for evicting old data.
    151. ###
    152. [retention]
    153. # Determines whether retention policy enforcement enabled.
    154. # enabled = true
    155. # The interval of time when retention policy enforcement checks run.
    156. # check-interval = "30m"
    157. ###
    158. ### [shard-precreation]
    159. ###
    160. ### Controls the precreation of shards, so they are available before data arrives.
    161. ### Only shards that, after creation, will have both a start- and end-time in the
    162. ### future, will ever be created. Shards are never precreated that would be wholly
    163. ### or partially in the past.
    164. [shard-precreation]
    165. # Determines whether shard pre-creation service is enabled.
    166. # enabled = true
    167. # The interval of time when the check to pre-create new shards runs.
    168. # check-interval = "10m"
    169. # The default period ahead of the endtime of a shard group that its successor
    170. # group is created.
    171. # advance-period = "30m"
    172. ###
    173. ### Controls the system self-monitoring, statistics and diagnostics.
    174. ###
    175. ### The internal database for monitoring data is created automatically if
    176. ### if it does not already exist. The target retention within this database
    177. ### is called 'monitor' and is also created with a retention period of 7 days
    178. ### and a replication factor of 1, if it does not exist. In all cases the
    179. ### this retention policy is configured as the default for the database.
    180. [monitor]
    181. # Whether to record statistics internally.
    182. # store-enabled = true
    183. # The destination database for recorded statistics
    184. # store-database = "_internal"
    185. # The interval at which to record statistics
    186. # store-interval = "10s"
    187. ###
    188. ### [http]
    189. ###
    190. ### Controls how the HTTP endpoints are configured. These are the primary
    191. ### mechanism for getting data into and out of InfluxDB.
    192. ###
    193. [http]
    194. # Determines whether HTTP endpoint is enabled.
    195. enabled = true
    196. # Determines whether the Flux query endpoint is enabled.
    197. # flux-enabled = false
    198. # Determines whether the Flux query logging is enabled.
    199. # flux-log-enabled = false
    200. # The bind address used by the HTTP service.
    201. bind-address = ":8083"
    202. # Determines whether user authentication is enabled over HTTP/HTTPS.
    203. # auth-enabled = false
    204. # The default realm sent back when issuing a basic auth challenge.
    205. # realm = "InfluxDB"
    206. # Determines whether HTTP request logging is enabled.
    207. # log-enabled = true
    208. # Determines whether the HTTP write request logs should be suppressed when the log is enabled.
    209. # suppress-write-log = false
    210. # When HTTP request logging is enabled, this option specifies the path where
    211. # log entries should be written. If unspecified, the default is to write to stderr, which
    212. # intermingles HTTP logs with internal InfluxDB logging.
    213. #
    214. # If influxd is unable to access the specified path, it will log an error and fall back to writing
    215. # the request log to stderr.
    216. # access-log-path = ""
    217. # Filters which requests should be logged. Each filter is of the pattern NNN, NNX, or NXX where N is
    218. # a number and X is a wildcard for any number. To filter all 5xx responses, use the string 5xx.
    219. # If multiple filters are used, then only one has to match. The default is to have no filters which
    220. # will cause every request to be printed.
    221. # access-log-status-filters = []
    222. # Determines whether detailed write logging is enabled.
    223. # write-tracing = false
    224. # Determines whether the pprof endpoint is enabled. This endpoint is used for
    225. # troubleshooting and monitoring.
    226. # pprof-enabled = true
    227. # Enables a pprof endpoint that binds to localhost:6060 immediately on startup.
    228. # This is only needed to debug startup issues.
    229. # debug-pprof-enabled = false
    230. # Determines whether HTTPS is enabled.
    231. # https-enabled = false
    232. # The SSL certificate to use when HTTPS is enabled.
    233. # https-certificate = "/etc/ssl/influxdb.pem"
    234. # Use a separate private key location.
    235. # https-private-key = ""
    236. # The JWT auth shared secret to validate requests using JSON web tokens.
    237. # shared-secret = ""
    238. # The default chunk size for result sets that should be chunked.
    239. # max-row-limit = 0
    240. # The maximum number of HTTP connections that may be open at once. New connections that
    241. # would exceed this limit are dropped. Setting this value to 0 disables the limit.
    242. # max-connection-limit = 0
    243. # Enable http service over unix domain socket
    244. # unix-socket-enabled = false
    245. # The path of the unix domain socket.
    246. # bind-socket = "/var/run/influxdb.sock"
    247. # The maximum size of a client request body, in bytes. Setting this value to 0 disables the limit.
    248. # max-body-size = 25000000
    249. # The maximum number of writes processed concurrently.
    250. # Setting this to 0 disables the limit.
    251. # max-concurrent-write-limit = 0
    252. # The maximum number of writes queued for processing.
    253. # Setting this to 0 disables the limit.
    254. # max-enqueued-write-limit = 0
    255. # The maximum duration for a write to wait in the queue to be processed.
    256. # Setting this to 0 or setting max-concurrent-write-limit to 0 disables the limit.
    257. # enqueued-write-timeout = 0
    258. ###
    259. ### [logging]
    260. ###
    261. ### Controls how the logger emits logs to the output.
    262. ###
    263. [logging]
    264. # Determines which log encoder to use for logs. Available options
    265. # are auto, logfmt, and json. auto will use a more a more user-friendly
    266. # output format if the output terminal is a TTY, but the format is not as
    267. # easily machine-readable. When the output is a non-TTY, auto will use
    268. # logfmt.
    269. # format = "auto"
    270. # Determines which level of logs will be emitted. The available levels
    271. # are error, warn, info, and debug. Logs that are equal to or above the
    272. # specified level will be emitted.
    273. # level = "info"
    274. # Suppresses the logo output that is printed when the program is started.
    275. # The logo is always suppressed if STDOUT is not a TTY.
    276. # suppress-logo = false
    277. ###
    278. ### [subscriber]
    279. ###
    280. ### Controls the subscriptions, which can be used to fork a copy of all data
    281. ### received by the InfluxDB host.
    282. ###
    283. [subscriber]
    284. # Determines whether the subscriber service is enabled.
    285. # enabled = true
    286. # The default timeout for HTTP writes to subscribers.
    287. # http-timeout = "30s"
    288. # Allows insecure HTTPS connections to subscribers. This is useful when testing with self-
    289. # signed certificates.
    290. # insecure-skip-verify = false
    291. # The path to the PEM encoded CA certs file. If the empty string, the default system certs will be used
    292. # ca-certs = ""
    293. # The number of writer goroutines processing the write channel.
    294. # write-concurrency = 40
    295. # The number of in-flight writes buffered in the write channel.
    296. # write-buffer-size = 1000
    297. ###
    298. ### [[graphite]]
    299. ###
    300. ### Controls one or many listeners for Graphite data.
    301. ###
    302. [[graphite]]
    303. # Determines whether the graphite endpoint is enabled.
    304. # enabled = false
    305. # database = "graphite"
    306. # retention-policy = ""
    307. # bind-address = ":2003"
    308. # protocol = "tcp"
    309. # consistency-level = "one"
    310. # These next lines control how batching works. You should have this enabled
    311. # otherwise you could get dropped metrics or poor performance. Batching
    312. # will buffer points in memory if you have many coming in.
    313. # Flush if this many points get buffered
    314. # batch-size = 5000
    315. # number of batches that may be pending in memory
    316. # batch-pending = 10
    317. # Flush at least this often even if we haven't hit buffer limit
    318. # batch-timeout = "1s"
    319. # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    320. # udp-read-buffer = 0
    321. ### This string joins multiple matching 'measurement' values providing more control over the final measurement name.
    322. # separator = "."
    323. ### Default tags that will be added to all metrics. These can be overridden at the template level
    324. ### or by tags extracted from metric
    325. # tags = ["region=us-east", "zone=1c"]
    326. ### Each template line requires a template pattern. It can have an optional
    327. ### filter before the template and separated by spaces. It can also have optional extra
    328. ### tags following the template. Multiple tags should be separated by commas and no spaces
    329. ### similar to the line protocol format. There can be only one default template.
    330. # templates = [
    331. # "*.app env.service.resource.measurement",
    332. # # Default template
    333. # "server.*",
    334. # ]
    335. ###
    336. ### [collectd]
    337. ###
    338. ### Controls one or many listeners for collectd data.
    339. ###
    340. [[collectd]]
    341. # enabled = false
    342. # bind-address = ":25826"
    343. # database = "collectd"
    344. # retention-policy = ""
    345. #
    346. # The collectd service supports either scanning a directory for multiple types
    347. # db files, or specifying a single db file.
    348. # typesdb = "/usr/local/share/collectd"
    349. #
    350. # security-level = "none"
    351. # auth-file = "/etc/collectd/auth_file"
    352. # These next lines control how batching works. You should have this enabled
    353. # otherwise you could get dropped metrics or poor performance. Batching
    354. # will buffer points in memory if you have many coming in.
    355. # Flush if this many points get buffered
    356. # batch-size = 5000
    357. # Number of batches that may be pending in memory
    358. # batch-pending = 10
    359. # Flush at least this often even if we haven't hit buffer limit
    360. # batch-timeout = "10s"
    361. # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    362. # read-buffer = 0
    363. # Multi-value plugins can be handled two ways.
    364. # "split" will parse and store the multi-value plugin data into separate measurements
    365. # "join" will parse and store the multi-value plugin as a single multi-value measurement.
    366. # "split" is the default behavior for backward compatability with previous versions of influxdb.
    367. # parse-multivalue-plugin = "split"
    368. ###
    369. ### [opentsdb]
    370. ###
    371. ### Controls one or many listeners for OpenTSDB data.
    372. ###
    373. [[opentsdb]]
    374. # enabled = false
    375. # bind-address = ":4242"
    376. # database = "opentsdb"
    377. # retention-policy = ""
    378. # consistency-level = "one"
    379. # tls-enabled = false
    380. # certificate= "/etc/ssl/influxdb.pem"
    381. # Log an error for every malformed point.
    382. # log-point-errors = true
    383. # These next lines control how batching works. You should have this enabled
    384. # otherwise you could get dropped metrics or poor performance. Only points
    385. # metrics received over the telnet protocol undergo batching.
    386. # Flush if this many points get buffered
    387. # batch-size = 1000
    388. # Number of batches that may be pending in memory
    389. # batch-pending = 5
    390. # Flush at least this often even if we haven't hit buffer limit
    391. # batch-timeout = "1s"
    392. ###
    393. ### [[udp]]
    394. ###
    395. ### Controls the listeners for InfluxDB line protocol data via UDP.
    396. ###
    397. [[udp]]
    398. # enabled = false
    399. # bind-address = ":8089"
    400. # database = "udp"
    401. # retention-policy = ""
    402. # InfluxDB precision for timestamps on received points ("" or "n", "u", "ms", "s", "m", "h")
    403. # precision = ""
    404. # These next lines control how batching works. You should have this enabled
    405. # otherwise you could get dropped metrics or poor performance. Batching
    406. # will buffer points in memory if you have many coming in.
    407. # Flush if this many points get buffered
    408. # batch-size = 5000
    409. # Number of batches that may be pending in memory
    410. # batch-pending = 10
    411. # Will flush at least this often even if we haven't hit buffer limit
    412. # batch-timeout = "1s"
    413. # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    414. # read-buffer = 0
    415. ###
    416. ### [continuous_queries]
    417. ###
    418. ### Controls how continuous queries are run within InfluxDB.
    419. ###
    420. [continuous_queries]
    421. # Determines whether the continuous query service is enabled.
    422. # enabled = true
    423. # Controls whether queries are logged when executed by the CQ service.
    424. # log-enabled = true
    425. # Controls whether queries are logged to the self-monitoring data store.
    426. # query-stats-enabled = false
    427. # interval for how often continuous queries will be checked if they need to run
    428. # run-interval = "1s"
    429. ###
    430. ### [tls]
    431. ###
    432. ### Global configuration settings for TLS in InfluxDB.
    433. ###
    434. [tls]
    435. # Determines the available set of cipher suites. See https://golang.org/pkg/crypto/tls/#pkg-constants
    436. # for a list of available ciphers, which depends on the version of Go (use the query
    437. # SHOW DIAGNOSTICS to see the version of Go used to build InfluxDB). If not specified, uses
    438. # the default settings from Go's crypto/tls package.
    439. # ciphers = [
    440. # "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
    441. # "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    442. # ]
    443. # Minimum version of the tls protocol that will be negotiated. If not specified, uses the
    444. # default settings from Go's crypto/tls package.
    445. # min-version = "tls1.2"
    446. # Maximum version of the tls protocol that will be negotiated. If not specified, uses the
    447. # default settings from Go's crypto/tls package.
    448. # max-version = "tls1.2"

    Reference: https://www.cnblogs.com/guyeshanrenshiwoshifu/p/9188368.html

  • Original article: https://blog.csdn.net/lxw1844912514/article/details/126462714