postgresql | database | Deploying and using pgpool-4.4 on CentOS 7 on top of a postgresql-12 primary/standby replication cluster


    Preface:

    Tuning PostgreSQL's own configuration only gets you so far; the database cannot reach optimal performance on its own. Reaching it usually requires external middleware and, often, changes to the overall architecture — adopting mature clustering techniques such as read/write splitting, load balancing, and query caching.

    [Comparison table of mature clustering solutions]

    As the table shows, pgpool is the most complete middleware of the lot: connection pooling, load balancing, plus a query cache the table does not even list. The load balancing and the cache are the features that are genuinely hard to refuse; the rest are nothing special.

    This article takes a detailed look at pgpool's load balancing and in-memory query cache.

    I.

    A brief introduction to pgpool

    Pgpool-II is middleware that sits between PostgreSQL servers and PostgreSQL database clients. It is distributed under the BSD license and provides the following features.

    Connection pooling

    Pgpool-II keeps connections to the PostgreSQL servers open and reuses them whenever a new connection with the same properties (user name, database, protocol version) comes in. This reduces connection overhead and improves the system's overall throughput.

    Replication

    Pgpool-II can manage multiple PostgreSQL servers. Its replication feature maintains a real-time backup on two or more physical disks, so that service can continue after a disk failure without stopping the servers.

    Load balancing

    If the database is replicated, a SELECT query returns the same result on any server. Pgpool-II takes advantage of this by distributing SELECT queries among the servers, reducing the load on each PostgreSQL instance and raising the system's overall throughput. At best, performance improves in proportion to the number of PostgreSQL servers. Load balancing is most effective when many users execute many queries at the same time.

    Limiting excess connections

    PostgreSQL caps the number of concurrent connections and rejects anything beyond the cap, but raising that cap increases resource consumption and hurts performance. Pgpool-II also has a limit on the maximum number of connections, but excess connections are queued instead of being rejected immediately.

    Watchdog

    The watchdog coordinates multiple Pgpool-II nodes into a robust cluster, avoiding single points of failure and split-brain. It runs life checks against the other Pgpool-II nodes to detect failures; if the active Pgpool-II goes down, a standby Pgpool-II is promoted to active and takes over the virtual IP.

    In-memory query cache

    The in-memory query cache stores pairs of SELECT statements and their results. When an identical SELECT comes in, Pgpool-II returns the result from the cache. Since neither SQL parsing nor access to PostgreSQL is involved, serving from the in-memory cache is extremely fast. On the other hand, it can sometimes be slower than the normal path, because storing the cache data adds some overhead.

    Pgpool-II speaks PostgreSQL's backend and frontend protocol and relays messages between the two. A database application (the frontend) therefore believes Pgpool-II is the actual PostgreSQL server, while the server (the backend) sees Pgpool-II as one of its clients. Because Pgpool-II is transparent to both server and client, existing database applications can be used with it without any changes to the existing business system.




    Architecturally, then, the watchdog fronts a cluster that is itself a primary/standby streaming-replication setup, using a virtual IP (the VIP). The VIP can be seen as the frontend and the databases as the backend. A streaming-replication cluster has this property: the primary is readable and writable, the standby read-only. Through pgpool's load balancing, the VIP becomes both readable and writable — reads are distributed between primary and standby according to the configured strategy, while writes are still handed to the primary.

    Cluster utilization therefore rises naturally, and once frequently used queries are kept in pgpool's cache, the query efficiency of the whole cluster improves as well.

    OK, let's get into building pgpool.

    II.

    pgpool's official site: pgpool Wiki

    Download and installation instructions are both there, and it also offers fairly recent rpm packages. The rpm repository is: Index of /yum/rpms/4.4/redhat/rhel-7-x86_64

    Simply configure that address as a yum repository.
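    A hedged sketch of such a repo file — the baseurl is my transcription of the index page above, so verify it against the official instructions before relying on it:

    cat > /etc/yum.repos.d/pgpool.repo <<'EOF'
    [pgpool44]
    name=Pgpool-II 4.4 for RHEL/CentOS 7
    baseurl=https://www.pgpool.net/yum/rpms/4.4/redhat/rhel-7-x86_64/
    enabled=1
    gpgcheck=0
    EOF
    yum makecache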

    The PostgreSQL version and general layout: the 192.168.123.11 server is the primary and 192.168.123.12 is the standby.

    For building the primary/standby replication itself, see my earlier post: postgresql|数据库|【postgresql-12的基于pg_basebackup的主从复制部署】_postgresql12 主备_晚风_END的博客-CSDN博客

    III.

    Deploying pgpool

    This deployment work is fairly tedious and fairly difficult: permissions have to be handled with care, the parameters are numerous and many of them must be adjusted to the actual environment, and pgpool has so many features that configuring them all well takes a good deal of patience.

    First, a rough overview of pgpool's components. The middleware has three management components: the pool tool used on the database side, installed into the PostgreSQL database as an extension; the pcp tools on the operating-system side, which must be configured in pgpool's main configuration; and pgpoolAdm, a web management tool written in PHP that lets you inspect and configure pgpool from a browser. The current version appears to require a recent PHP, so it is not used here.

    1.

    Installing the management tools

    In this walkthrough only the database-side management tool (pool) and pcp are installed; the pool tool ships in the source tarball.

    Upload pgpool-II-4.4.4.tar.gz to the server and extract it. Building it is no different from an ordinary PostgreSQL extension — make && make install — provided the PGHOME and PGDATA environment variables are defined.
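    A minimal build sketch, assuming PostgreSQL lives under /usr/local/pgsql and that the SQL extensions sit under src/sql in the tarball (check your source tree):

    export PGHOME=/usr/local/pgsql
    export PGDATA=/usr/local/pgsql/data
    export PATH=$PGHOME/bin:$PATH           # make install needs pg_config on the PATH
    tar zxf pgpool-II-4.4.4.tar.gz
    cd pgpool-II-4.4.4/src/sql              # the database-side SQL extensions live here
    make && make install
    # register the extension on the primary only; streaming replication carries it to the standby
    su - postgres -c "psql template1 -c 'CREATE EXTENSION pgpool_recovery;'"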

    2.

    Installing via yum

    With the local repository and the official repository mentioned above configured, the following commands perform the installation. A memcached service is installed as well; it will serve as the cache backend later.

    yum install pgpool-II-pg12-debuginfo-4.4.2 pgpool-II-pg12-4.4.2 pgpool-II-pg12-devel-4.4.2 pgpool-II-pg12-extensions-4.4.2 -y
    yum install memcached -y && systemctl enable memcached && systemctl enable pgpool && systemctl start pgpool memcached

    After installation you will find /etc/pgpool-II, which holds pgpool's configuration files plus some high-availability failover scripts. Those scripts are not used in this walkthrough; only the pgpool service itself is configured. Also note: pgpool must be installed on both servers, whereas memcached only needs to be installed on one.
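    A quick sanity check after installation — a sketch, nothing environment-specific:

    rpm -qa | grep pgpool
    pgpool --version
    systemctl status memcached --no-pager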

    3.

    Configuration files

    As the listing shows, these files are owned by the postgres user and group — be careful to keep it that way.

    [root@node1 pgpool-II]# ls -al
    total 144
    drwxr-xr-x. 3 root root 202 Sep 18 06:18 .
    drwxr-xr-x. 83 root root 8192 Sep 17 19:16 ..
    -rw------- 1 postgres postgres 900 Sep 17 11:15 pcp.conf
    -rw-------. 1 postgres postgres 858 Jan 22 2023 pcp.conf.sample
    -rw------- 1 postgres postgres 52960 Sep 18 02:01 pgpool.conf
    -rw-------. 1 postgres postgres 52964 Jan 22 2023 pgpool.conf.sample
    -rw------- 1 postgres postgres 2 Sep 17 10:21 pgpool_node_id
    -rw------- 1 postgres postgres 3537 Sep 17 11:54 pool_hba.conf
    -rw-------. 1 postgres postgres 3476 Jan 22 2023 pool_hba.conf.sample
    -rw-------. 1 postgres postgres 45 Sep 17 11:05 pool_passwd
    drwxr-xr-x. 2 root root 4096 Sep 17 10:02 sample_scripts

    Configuring pcp.conf

    This file stores pgpool's management password, which may differ from the PostgreSQL database password — define whatever you like. The format is simple: append username:password to the end of the file. The one caveat is that the password must be an md5 hash, not plaintext. Both approaches below work; if you want the least fuss, use the one-liner with the backticks. The user here is postgres and the password is 123456.

    [root@node1 pgpool-II]# pg_md5 123456
    e10adc3949ba59abbe56e057f20f883e
    [root@node1 pgpool-II]# echo "postgres:e10adc3949ba59abbe56e057f20f883e">>./pcp.conf
    [root@node1 pgpool-II]# echo "postgres:`pg_md5 123456`">>./pcp.conf

    [root@node1 pgpool-II]# cat pcp.conf
    # PCP Client Authentication Configuration File
    # ============================================
    #
    # This file contains user ID and his password for pgpool
    # communication manager authentication.
    #
    # Note that users defined here do not need to be PostgreSQL
    # users. These users are authorized ONLY for pgpool
    # communication manager.
    #
    # File Format
    # ===========
    #
    # List one UserID and password on a single line. They must
    # be concatenated together using ':' (colon) between them.
    # No spaces or tabs are allowed anywhere in the line.
    #
    # Example:
    # postgres:e8a48653851e28c69d0506508fb27fc5
    #
    # Be aware that there will be no spaces or tabs at the
    # beginning of the line! although the above example looks
    # like so.
    #
    # Lines beginning with '#' (pound) are comments and will
    # be ignored. Again, no spaces or tabs allowed before '#'.
    # USERID:MD5PASSWD
    postgres:e10adc3949ba59abbe56e057f20f883e

    Configuring pgpool.conf:

    This is pgpool's main configuration file. All commented lines have been stripped below; only the active settings remain.

    Note: the file paths defined in this configuration must be created by hand — /var/run/postgresql, owned by postgres.

    sr_check_user = 'nobody' — this nobody user must be created on the primary database, with: create role nobody login replication encrypted password 'replica';

    Why the primary? Because this is streaming replication: create it on the primary and the standby gets it automatically — the same goes for the extension mentioned above.
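    A sketch of creating that role on the primary and confirming it reached the standby (IPs per the topology above):

    # on the primary (192.168.123.11)
    su - postgres -c "psql -c \"CREATE ROLE nobody LOGIN REPLICATION ENCRYPTED PASSWORD 'replica';\""
    # check from here that the role now also exists on the standby
    su - postgres -c "psql -h 192.168.123.12 -p 5432 -c '\du nobody'"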

    [root@node1 pgpool-II]# sed -e '/^$/d' pgpool.conf |grep -v "\#"
    backend_clustering_mode = 'streaming_replication'
    listen_addresses = '*'
    port = 15433
    unix_socket_directories = '/var/run/postgresql'
    pcp_listen_addresses = '*'
    pcp_port = 19999
    pcp_socket_dir = '/var/run/postgresql'
    backend_hostname0 = '192.168.123.11'
    backend_port0 = 5432
    backend_weight0 = 1
    backend_data_directory0 = '/usr/local/pgsql/data'
    backend_flag0 = 'ALLOW_TO_FAILOVER'
    backend_application_name0 = 'node1'
    backend_hostname1 = '192.168.123.12'
    backend_port1 = 5432
    backend_weight1 = 1
    backend_data_directory1 = '/usr/local/pgsql/data'
    backend_flag1 = 'ALLOW_TO_FAILOVER'
    backend_application_name1 = 'node2'
    enable_pool_hba = on
    pool_passwd = 'pool_passwd'
    process_management_mode = dynamic
    num_init_children = 32
    min_spare_children = 5
    max_spare_children = 10
    max_pool = 4
    child_life_time = 5min
    log_destination = 'stderr'
    log_connections = on
    log_disconnections = on
    log_hostname = on
    log_statement = on
    log_per_node_statement = on
    log_client_messages = on
    logging_collector = on
    log_directory = '/var/log/pgpool_log'
    log_filename = 'pgpool-%a.log'
    log_file_mode = 0600
    log_truncate_on_rotation = on
    log_rotation_age = 1d
    log_rotation_size = 0
    pid_file_name = '/var/run/postgresql/pgpool.pid'
    logdir = '/tmp'
    connection_cache = on
    reset_query_list = 'ABORT; DISCARD ALL'
    load_balance_mode = on
    database_redirect_preference_list = 'postgres:1'
    sr_check_period = 10
    sr_check_user = 'nobody'
    sr_check_password = 'replica'
    sr_check_database = 'postgres'
    delay_threshold = 1
    delay_threshold_by_time = 1
    prefer_lower_delay_standby = on
    use_watchdog = on
    hostname0 = '192.168.123.11'
    wd_port0 = 9000
    pgpool_port0 = 15433
    hostname1 = '192.168.123.12'
    wd_port1 = 9000
    pgpool_port1 = 15433
    wd_ipc_socket_dir = '/var/run/postgresql'
    delegate_ip = '192.168.123.222'
    if_cmd_path = '/sbin'
    if_up_cmd = 'ip addr add $_IP_$/24 dev ens33 label ens33:0'
    if_down_cmd = 'ip addr del $_IP_$/24 dev ens33'
    arping_path = '/usr/sbin'
    arping_cmd = 'arping -U $_IP_$ -w 1 -I ens33'
    wd_monitoring_interfaces_list = ''
    wd_lifecheck_method = 'heartbeat'
    wd_interval = 10
    heartbeat_hostname0 = '192.168.123.11'
    heartbeat_port0 = 19694
    heartbeat_device0 = 'ens33'
    heartbeat_hostname1 = '192.168.123.12'
    heartbeat_port1 = 19694
    heartbeat_device1 = 'ens33'
    wd_life_point = 3
    wd_lifecheck_query = 'SELECT 1'
    memory_cache_enabled = off
    memqcache_method = 'memcached'
    memqcache_memcached_host = '192.168.123.11'
    memqcache_memcached_port = 11211
    memqcache_total_size = 64MB
    memqcache_max_num_cache = 1000000
    memqcache_expire = 0
    memqcache_cache_block_size = 1MB
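    One path in there is easy to miss: log_directory points at /var/log/pgpool_log, which does not exist by default. A sketch, to be run on both nodes (the 0700 mode is my assumption):

    mkdir -p /var/log/pgpool_log
    chown postgres: /var/log/pgpool_log
    chmod 700 /var/log/pgpool_log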

    Configuring pool_passwd:

    Important:

    su - postgres
    pg_md5 -m -p -u postgres pool_passwd
    # you will be prompted for a password — this is the postgres user's password on the PostgreSQL server, the same one you will later use to log in to the database

    [root@node1 pgpool-II]# su - postgres
    Last login: Mon Sep 18 06:34:54 CST 2023 on pts/1
    [postgres@node1 ~]$ pg_md5 -m -p -u postgres pool_passwd
    password:
    [postgres@node1 ~]$ logout
    [root@node1 pgpool-II]# cat pool_passwd
    postgres:md5a3556571e93b0d20722ba62be61e8c2d

    Configuring pool_hba.conf

    This file defines which pgpool users may access which backend PostgreSQL databases; it works much like PostgreSQL's own pg_hba.conf.

    If you don't want too much hassle (that is, you accept weaker security), the following is enough — otherwise the recommendation is simply to copy over the contents of the database's pg_hba.conf:

    # IPv4 local connections:
    host all all 127.0.0.1/32 trust
    host all all ::1/128 trust
    host all all 0.0.0.0/0 md5

    Configuring pgpool_node_id

    This file identifies which watchdog node — that is, which hostnameN entry in pgpool.conf — the local pgpool is. The .11 server's file therefore contains just a 0 and the .12 server's just a 1. Further nodes take the next number in sequence, up to what appears to be a maximum of 127 nodes.
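    A sketch of writing the file on each node, matching the hostname0/hostname1 entries above:

    # on node1 (192.168.123.11)
    echo 0 > /etc/pgpool-II/pgpool_node_id
    # on node2 (192.168.123.12)
    echo 1 > /etc/pgpool-II/pgpool_node_id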

    Creating the users

    The user and password in pcp.conf can be anything you choose; the user in pool_passwd must be a real database user with its real password. Only the postgres user is used for now.


    One more prerequisite: the ip and arping commands referenced in the configuration file need the setuid special permission so that the watchdog can bring the VIP up and down:

    chmod u+s /sbin/ip
    chmod u+s /usr/sbin/arping

    Starting and stopping the service

    Per the configuration above, the directories it references must be created by hand and handed to the postgres user:

    mkdir /var/run/postgresql
    chown -Rf postgres. /var/run/postgresql
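    A caveat worth adding: on CentOS 7, /var/run lives on tmpfs, so this directory vanishes at reboot. A hedged sketch of making it persistent with systemd-tmpfiles (the file name and mode are my assumptions):

    cat > /etc/tmpfiles.d/pgpool.conf <<'EOF'
    d /var/run/postgresql 0755 postgres postgres -
    EOF
    systemd-tmpfiles --create /etc/tmpfiles.d/pgpool.conf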
    Starting and stopping pgpool: this middleware is a little unusual in that it can be managed either with systemctl or by running the binary directly. For disciplined operations, stick with systemctl:

    systemctl enable pgpool && systemctl start pgpool

    A healthy, running service looks like this:
    [root@node1 pgpool-II]# systemctl status pgpool
    ● pgpool.service - Pgpool-II
       Loaded: loaded (/usr/lib/systemd/system/pgpool.service; enabled; vendor preset: disabled)
       Active: active (running) since Mon 2023-09-18 01:04:32 CST; 19h ago
      Process: 58354 ExecStop=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf $STOP_OPTS stop (code=exited, status=0/SUCCESS)
      Process: 45217 ExecReload=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf reload (code=exited, status=0/SUCCESS)
     Main PID: 58360 (pgpool)
        Tasks: 20
       Memory: 14.1M
       CGroup: /system.slice/pgpool.service
               ├─58360 /usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf -n
               ├─58361 pgpool: PgpoolLogger
               ├─58362 pgpool: watchdog
               ├─58371 pgpool: lifecheck
               ├─58372 pgpool: postgres postgres 192.168.123.1(50284) idle
               ├─58373 pgpool: heartbeat receiver
               ├─58374 pgpool: wait for connection request
               ├─58375 pgpool: heartbeat sender
               ├─58376 pgpool: wait for connection request
               ├─58377 pgpool: wait for connection request
               ├─58378 pgpool: postgres postgres 192.168.123.1(50279) idle
               ├─58379 pgpool: wait for connection request
               ├─58380 pgpool: wait for connection request
               ├─58381 pgpool: wait for connection request
               ├─58382 pgpool: wait for connection request
               ├─58383 pgpool: wait for connection request
               ├─58385 pgpool: PCP: wait for connection request
               ├─58386 pgpool: worker process
               ├─58387 pgpool: health check process(0)
               └─58388 pgpool: health check process(1)
    Sep 18 01:20:32 node1 systemd[1]: Reloading Pgpool-II.
    Sep 18 01:20:32 node1 systemd[1]: Reloaded Pgpool-II.
    Sep 18 01:25:42 node1 systemd[1]: Reloading Pgpool-II.
    Sep 18 01:25:42 node1 systemd[1]: Reloaded Pgpool-II.
    Sep 18 01:37:32 node1 systemd[1]: Reloading Pgpool-II.
    Sep 18 01:37:32 node1 systemd[1]: Reloaded Pgpool-II.
    Sep 18 01:47:36 node1 systemd[1]: Reloading Pgpool-II.
    Sep 18 01:47:36 node1 systemd[1]: Reloaded Pgpool-II.
    Sep 18 02:01:32 node1 systemd[1]: Reloading Pgpool-II.
    Sep 18 02:01:32 node1 systemd[1]: Reloaded Pgpool-II.
    Starting and stopping with the binary directly:

    [root@node2 ~]# pgpool                ### start
    [root@node2 ~]# pgpool -m fast stop
    2023-09-18 21:01:30.055: main pid 44987: LOG:  stop request sent to pgpool (pid: 39228). waiting for termination...
    2023-09-18 21:01:30.055: main pid 44987: LOCATION:  main.c:546
    .done.

    (One thing to watch out for: systemctl and the bare binary must not be mixed — whichever one you did not use to start the service will fail to act on it. The following shows what mixing them looks like.)
    [root@node2 ~]# pgpool
    [root@node2 ~]# systemctl status pgpool
    ● pgpool.service - Pgpool-II
       Loaded: loaded (/usr/lib/systemd/system/pgpool.service; disabled; vendor preset: disabled)
       Active: failed (Result: exit-code) since Mon 2023-09-18 21:01:30 CST; 22s ago
      Process: 44988 ExecStop=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf $STOP_OPTS stop (code=exited, status=3)
      Process: 44599 ExecReload=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf reload (code=exited, status=0/SUCCESS)
      Process: 39228 ExecStart=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf $OPTS (code=exited, status=0/SUCCESS)
     Main PID: 39228 (code=exited, status=0/SUCCESS)
    Sep 18 01:25:38 node2 systemd[1]: Reloaded Pgpool-II.
    Sep 18 01:37:24 node2 systemd[1]: Reloading Pgpool-II.
    Sep 18 01:37:24 node2 systemd[1]: Reloaded Pgpool-II.
    Sep 18 01:47:31 node2 systemd[1]: Reloading Pgpool-II.
    Sep 18 01:47:31 node2 systemd[1]: Reloaded Pgpool-II.
    Sep 18 02:01:29 node2 systemd[1]: Reloading Pgpool-II.
    Sep 18 02:01:29 node2 systemd[1]: Reloaded Pgpool-II.
    Sep 18 21:01:30 node2 systemd[1]: pgpool.service: control process exited, code=exited status=3
    Sep 18 21:01:30 node2 systemd[1]: Unit pgpool.service entered failed state.
    Sep 18 21:01:30 node2 systemd[1]: pgpool.service failed.
    [root@node2 ~]# systemctl start pgpool
    [root@node2 ~]# systemctl status pgpool
    ● pgpool.service - Pgpool-II
       Loaded: loaded (/usr/lib/systemd/system/pgpool.service; disabled; vendor preset: disabled)
       Active: failed (Result: exit-code) since Mon 2023-09-18 21:02:01 CST; 1s ago
      Process: 45637 ExecStop=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf $STOP_OPTS stop (code=exited, status=3)
      Process: 44599 ExecReload=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf reload (code=exited, status=0/SUCCESS)
      Process: 45635 ExecStart=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf $OPTS (code=exited, status=3)
     Main PID: 45635 (code=exited, status=3)
    Sep 18 21:02:01 node2 systemd[1]: Started Pgpool-II.
    Sep 18 21:02:01 node2 pgpool[45635]: 2023-09-18 21:02:01.310: main pid 45635: FATAL:  could not open pid file "/var/run/postgresql/pgpool.pid"
    Sep 18 21:02:01 node2 systemd[1]: pgpool.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
    Sep 18 21:02:01 node2 pgpool[45637]: 2023-09-18 21:02:01.316: main pid 45637: FATAL:  could not read pid file
    Sep 18 21:02:01 node2 pgpool[45637]: 2023-09-18 21:02:01.316: main pid 45637: LOCATION:  main.c:532
    Sep 18 21:02:01 node2 systemd[1]: pgpool.service: control process exited, code=exited status=3
    Sep 18 21:02:01 node2 systemd[1]: Unit pgpool.service entered failed state.
    Sep 18 21:02:01 node2 systemd[1]: pgpool.service failed.
    [root@node2 ~]# pgpool -m fast stop
    2023-09-18 21:02:13.716: main pid 45805: LOG:  stop request sent to pgpool (pid: 45161). waiting for termination...
    2023-09-18 21:02:13.716: main pid 45805: LOCATION:  main.c:546
    .done.
    [root@node2 ~]# systemctl start pgpool
    [root@node2 ~]# systemctl status pgpool
    ● pgpool.service - Pgpool-II
       Loaded: loaded (/usr/lib/systemd/system/pgpool.service; disabled; vendor preset: disabled)
       Active: active (running) since Mon 2023-09-18 21:02:21 CST; 2s ago
      Process: 45637 ExecStop=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf $STOP_OPTS stop (code=exited, status=3)
      Process: 44599 ExecReload=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf reload (code=exited, status=0/SUCCESS)
     Main PID: 46021 (pgpool)
        Tasks: 20
       Memory: 6.3M
       CGroup: /system.slice/pgpool.service
               ├─46021 /usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf -n
               ├─46022 pgpool: PgpoolLogger
               ├─46023 pgpool: watchdog
               ├─46025 pgpool: lifecheck
               ├─46026 pgpool: heartbeat receiver
               ├─46027 pgpool: heartbeat sender
               ├─46028 pgpool: wait for connection request
               ├─46029 pgpool: wait for connection request
               ├─46030 pgpool: wait for connection request
               ├─46031 pgpool: wait for connection request
               ├─46032 pgpool: wait for connection request

    Testing read/write splitting and load balancing

    A quick tour of pgpool's management tools

    pcp is a suite of management tools — a whole family of commands whose names begin with pcp_:

    [root@node1 pgpool-II]# pcp_
    pcp_attach_node         pcp_health_check_stats  pcp_node_info           pcp_proc_count          pcp_promote_node        pcp_reload_config       pcp_watchdog_info
    pcp_detach_node         pcp_node_count          pcp_pool_status         pcp_proc_info           pcp_recovery_node       pcp_stop_pgpool
    Counting the pgpool nodes: note that the pcp port must be passed if you changed it — this article uses 19999. The credentials can be whatever you like, defined in pcp.conf (md5-hashed, remember); here the user is postgres and the password is 123456.

    [root@node2 pgpool-II]# echo  "postgres:`pg_md5 123456`" >>pcp.conf
    [root@node2 pgpool-II]# pcp_node_count -U postgres -p 19999
    Password:
    2
    Querying the pgpool cluster's node information:

    [root@node2 pgpool-II]# pcp_node_info -Upostgres -p19999
    Password:
    192.168.123.11 5432 1 0.500000 waiting unknown primary unknown 0 none none 2023-09-18 21:06:40
    192.168.123.12 5432 1 0.500000 waiting unknown standby unknown 0.000000 none none 2023-09-18 21:06:40
    Reloading all configuration:

    [root@node2 pgpool-II]# pcp_reload_config -Upostgres -p19999
    Password:
    pcp_reload_config -- Command Successful
    Checking the watchdog status:

    [root@node2 pgpool-II]# pcp_watchdog_info -U postgres -p19999
    Password:
    2 2 NO 192.168.123.11:15433 Linux node1 192.168.123.11
    192.168.123.12:15433 Linux node2 192.168.123.12 15433 9000 7 STANDBY 0 MEMBER
    192.168.123.11:15433 Linux node1 192.168.123.11 15433 9000 4 LEADER 0 MEMBER
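    If the interactive Password: prompts get tiresome, the pcp commands can read a ~/.pcppass file (format host:port:user:password) together with -w. A sketch, assuming the credentials used above:

    echo "localhost:19999:postgres:123456" > ~/.pcppass
    chmod 600 ~/.pcppass
    pcp_node_info -h localhost -U postgres -p 19999 -w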
    Managing pgpool from inside the database, via the extension — checking the pool status:

    [root@node2 pgpool-II]# su - postgres -c "psql -Upostgres -p 15433 -h 192.168.123.222"
    Password for user postgres:
    psql (12.5)
    Type "help" for help.
    postgres=# show pool_nodes;
     node_id |    hostname    | port | status | pg_status | lb_weight |  role   | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change 
    ---------+----------------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
     0       | 192.168.123.11 | 5432 | up     | unknown   | 0.500000  | primary | unknown | 0          | false             | 0                 |                   |                        | 2023-09-18 21:53:08
     1       | 192.168.123.12 | 5432 | up     | unknown   | 0.500000  | standby | unknown | 0          | true              | 0.000000 second   |                   |                        | 2023-09-18 21:53:08
    (2 rows)
    Checking the cache hit ratio (the cache is still disabled at this point):

    postgres=# show pool_cache;
     num_cache_hits | num_selects | cache_hit_ratio | num_hash_entries | used_hash_entries | num_cache_entries | used_cache_entries_size | free_cache_entries_size | fragment_cache_entries_size
    ----------------+-------------+-----------------+------------------+-------------------+-------------------+-------------------------+-------------------------+-----------------------------
     0              | 0           | 0.00            | 0                | 0                 | 0                 | 0                       | 0                       | 0
    (1 row)
    Tests, and a few things to watch out for

    OK, time to test. One point deserves emphasis: the VIP is very, very important — all the tests log in through the VIP.
    Logging in to the database via the VIP:

    [root@node1 pgpool-II]# su - postgres -c "psql -Upostgres -p 15433 -h 192.168.123.222"
    Password for user postgres:
    psql (12.5)
    Type "help" for help.
    Create a throwaway test table, write one test row into it, and run a few SELECTs — simple enough that I won't belabor it, but a sketch follows — then check the pool status:
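    A sketch of such a smoke test through the VIP — the table name matches the one in the log excerpts below, but the columns are my own invention:

    su - postgres -c "psql -Upostgres -h 192.168.123.222 -p 15433 -c 'create table test1131 (id int, note text);'"
    su - postgres -c "psql -Upostgres -h 192.168.123.222 -p 15433 -c \"insert into test1131 values (1, 'hello');\""
    su - postgres -c "psql -Upostgres -h 192.168.123.222 -p 15433 -c 'select * from test1131;'"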
    postgres=# show pool_nodes;
     node_id |    hostname    | port | status | pg_status | lb_weight |  role   | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change 
    ---------+----------------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
     0       | 192.168.123.11 | 5432 | up     | unknown   | 0.500000  | primary | unknown | 11         | false             | 0                 |                   |                        | 2023-09-18 21:53:08
     1       | 192.168.123.12 | 5432 | up     | unknown   | 0.500000  | standby | unknown | 12         | true              | 0.000000 second   |                   |                        | 2023-09-18 21:53:08
    (2 rows)

    As you can see, load balancing is in effect: select_cnt is the number of SELECTs each node has served — the primary has answered 11 and the standby 12.
    Query a few more times with navicat, or with the pgbench benchmarking tool — a sketch follows — then look again:
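    A hedged pgbench sketch against the VIP — the client count and duration are arbitrary; -S keeps the load SELECT-only so it exercises the load balancer:

    su - postgres -c "pgbench -i -h 192.168.123.222 -p 15433 postgres"               # initialize the pgbench tables (writes go to the primary)
    su - postgres -c "pgbench -S -c 10 -T 30 -h 192.168.123.222 -p 15433 postgres"   # 10 clients, 30 seconds of SELECT-only load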
    postgres=# show pool_nodes;
     node_id |    hostname    | port | status | pg_status | lb_weight |  role   | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change 
    ---------+----------------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
     0       | 192.168.123.11 | 5432 | up     | unknown   | 0.500000  | primary | unknown | 32         | false             | 0                 |                   |                        | 2023-09-18 21:53:08
     1       | 192.168.123.12 | 5432 | up     | unknown   | 0.500000  | standby | unknown | 33         | true              | 0.000000 second   |                   |                        | 2023-09-18 21:53:08
    (2 rows)
    Check pgpool's log (the relevant excerpt follows):
    2023-09-18 22:41:50.701: Navicat pid 39838: LOCATION:  pool_proto_modules.c:2712
    2023-09-18 22:41:50.701: Navicat pid 39838: LOG:  statement: select * from test1131
    2023-09-18 22:41:50.701: Navicat pid 39838: LOCATION:  pool_proto_modules.c:211
    2023-09-18 22:41:50.701: Navicat pid 39838: LOG:  DB node id: 1 backend pid: 8540 statement: select * from test1131
    2023-09-18 22:41:50.701: Navicat pid 39838: LOCATION:  pool_proto_modules.c:3569
    2023-09-18 22:41:50.702: Navicat pid 39845: LOG:  Query message from frontend.
    2023-09-18 22:41:50.702: Navicat pid 39845: DETAIL:  query: "SELECT c.conkey FROM pg_constraint c WHERE c.contype = 'p' and c.conrelid = 16398"
    2023-09-18 22:41:50.702: Navicat pid 39845: LOCATION:  pool_proto_modules.c:2712
    2023-09-18 22:41:50.702: Navicat pid 39845: LOG:  statement: SELECT c.conkey FROM pg_constraint c WHERE c.contype = 'p' and c.conrelid = 16398
    2023-09-18 22:41:50.702: Navicat pid 39845: LOCATION:  pool_proto_modules.c:211
    2023-09-18 22:41:50.702: Navicat pid 39845: LOG:  DB node id: 0 backend pid: 61668 statement: SELECT c.conkey FROM pg_constraint c WHERE c.contype = 'p' and c.conrelid = 16398
    2023-09-18 22:41:50.702: Navicat pid 39845: LOCATION:  pool_proto_modules.c:3569
    2023-09-18 22:41:52.695: Navicat pid 39838: LOG:  Query message from frontend.
    2023-09-18 22:41:52.695: Navicat pid 39838: DETAIL:  query: "show pool_nodes"
    2023-09-18 22:41:52.695: Navicat pid 39838: LOCATION:  pool_proto_modules.c:2712
    2023-09-18 22:41:52.695: Navicat pid 39838: LOG:  statement: show pool_nodes
    2023-09-18 22:41:52.695: Navicat pid 39838: LOCATION:  pool_proto_modules.c:211
    2023-09-18 22:42:28.368: psql pid 39844: LOG:  Query message from frontend.
    2023-09-18 22:42:28.368: psql pid 39844: DETAIL:  query: "show pool_nodes;"
    2023-09-18 22:42:28.368: psql pid 39844: LOCATION:  pool_proto_modules.c:2712
    2023-09-18 22:42:28.368: psql pid 39844: LOG:  statement: show pool_nodes;
    2023-09-18 22:42:28.368: psql pid 39844: LOCATION:  pool_proto_modules.c:211
    As the log shows, load balancing is working perfectly.

    Testing the cache service: first confirm that the memcached service is healthy; once it is, edit the main configuration file to enable the cache:
    [root@node1 pgpool-II]# systemctl status memcached
    ● memcached.service - Memcached
       Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
       Active: active (running) since Sun 2023-09-17 19:16:26 CST; 1 day 3h ago
     Main PID: 1443 (memcached)
        Tasks: 6
       Memory: 4.4M
       CGroup: /system.slice/memcached.service
               └─1443 /usr/bin/memcached -u memcached -p 11211 -m 64 -c 1024
    Sep 17 19:16:26 node1 systemd[1]: Started Memcached.
    Both servers must be modified, and pgpool reloaded afterwards — a sketch of the change follows:
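    A hedged sketch of the edit; note that memory_cache_enabled may need a full restart rather than a reload to take effect, so the sketch restarts — standbys first, then the primary, per the summary at the end:

    # run on both pgpool nodes
    sed -i "s/^memory_cache_enabled = off/memory_cache_enabled = on/" /etc/pgpool-II/pgpool.conf
    systemctl restart pgpool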
    Afterwards you can see that some queries land on the primary and are not cached, while the repeated SELECTs are served straight from the memory cache:
    2023-09-18 22:50:54.481: Navicat pid 47139: LOG:  fetch from memory cache
    2023-09-18 22:50:54.342: Navicat pid 47139: DETAIL:  query result fetched from cache. statement: select * from test1131
    2023-09-18 22:50:54.185: Navicat pid 47138: LOG:  DB node id: 0 backend pid: 25501 statement: SELECT c.conkey FROM pg_constraint c WHERE c.contype = 'p' and c.conrelid = 16398
    2023-09-18 22:50:54.185: Navicat pid 47138: LOCATION:  pool_proto_modules.c:3569
    2023-09-18 22:50:54.342: Navicat pid 47139: LOG:  Query message from frontend.
    2023-09-18 22:50:54.342: Navicat pid 47139: DETAIL:  query: "select * from test1131"
    2023-09-18 22:50:54.342: Navicat pid 47139: LOCATION:  pool_proto_modules.c:2712
    2023-09-18 22:50:54.342: Navicat pid 47139: LOG:  statement: select * from test1131
    2023-09-18 22:50:54.342: Navicat pid 47139: LOCATION:  pool_proto_modules.c:211
    2023-09-18 22:50:54.342: Navicat pid 47139: LOG:  fetch from memory cache
    2023-09-18 22:50:54.342: Navicat pid 47139: DETAIL:  query result fetched from cache. statement: select * from test1131
    2023-09-18 22:50:54.342: Navicat pid 47139: LOCATION:  pool_memqcache.c:821
    2023-09-18 22:50:54.343: Navicat pid 47138: LOG:  Query message from frontend.
    2023-09-18 22:50:54.343: Navicat pid 47138: DETAIL:  query: "SELECT c.conkey FROM pg_constraint c WHERE c.contype = 'p' and c.conrelid = 16398"
    2023-09-18 22:50:54.343: Navicat pid 47138: LOCATION:  pool_proto_modules.c:2712
    2023-09-18 22:50:54.343: Navicat pid 47138: LOG:  statement: SELECT c.conkey FROM pg_constraint c WHERE c.contype = 'p' and c.conrelid = 16398
    2023-09-18 22:50:54.343: Navicat pid 47138: LOCATION:  pool_proto_modules.c:211
    2023-09-18 22:50:54.343: Navicat pid 47138: LOG:  DB node id: 0 backend pid: 25501 statement: SELECT c.conkey FROM pg_constraint c WHERE c.contype = 'p' and c.conrelid = 16398
    2023-09-18 22:50:54.343: Navicat pid 47138: LOCATION:  pool_proto_modules.c:3569
    2023-09-18 22:50:54.480: Navicat pid 47139: LOG:  Query message from frontend.
    2023-09-18 22:50:54.481: Navicat pid 47139: DETAIL:  query: "select * from test1131"
    2023-09-18 22:50:54.481: Navicat pid 47139: LOCATION:  pool_proto_modules.c:2712
    2023-09-18 22:50:54.481: Navicat pid 47139: LOG:  statement: select * from test1131
    2023-09-18 22:50:54.481: Navicat pid 47139: LOCATION:  pool_proto_modules.c:211
    2023-09-18 22:50:54.481: Navicat pid 47139: LOG:  fetch from memory cache
    2023-09-18 22:50:54.481: Navicat pid 47139: DETAIL:  query result fetched from cache. statement: select * from test1131
    2023-09-18 22:50:54.481: Navicat pid 47139: LOCATION:  pool_memqcache.c:821
    Check the cache hit ratio: num_cache_hits is 53 against 52 SELECTs sent to the backends, a cache_hit_ratio of 0.50 (cache_hit_ratio is the cache hit rate) — the cluster's query performance clearly stands to gain a great deal:
    postgres=# show pool_cache;
     num_cache_hits | num_selects | cache_hit_ratio | num_hash_entries | used_hash_entries | num_cache_entries | used_cache_entries_size | free_cache_entries_size | fragment_cache_entries_size
    ----------------+-------------+-----------------+------------------+-------------------+-------------------+-------------------------+-------------------------+-----------------------------
     53             | 52          | 0.50            | 0                | 0                 | 0                 | 0                       | 0                       | 0
    (1 row)
    postgres=# show pool_nodes;
     node_id |    hostname    | port | status | pg_status | lb_weight |  role   | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change 
    ---------+----------------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
     0       | 192.168.123.11 | 5432 | up     | unknown   | 0.500000  | primary | unknown | 52         | false             | 0                 |                   |                        | 2023-09-18 22:48:37
     1       | 192.168.123.12 | 5432 | up     | unknown   | 0.500000  | standby | unknown | 0          | true              | 0.000000 second   |                   |                        | 2023-09-18 22:48:37

    Primary/standby switchover (this feature is not used here)

    Installation and usage summary

    First, understand that pgpool is not an easy service to configure: expect plenty of errors during setup and debugging, and don't be afraid of them — make good use of search engines. Second, many changes to the main configuration file only take effect after a service restart, and the restart order is standbys first, then the primary. The reason is that when failover or recovery scripts are configured for high availability, restarting in the wrong order can let the primary role wander, causing avoidable trouble.

    Finally, pool_hba.conf is best filled with the full contents of the database's pg_hba.conf — partly for security, partly to avoid the trouble caused when the two files disagree (analyzing those errors afterwards is painful enough).
