• When the active log fills up, will DB2 and MySQL automatically roll back the oldest uncommitted transaction?


    Consider the following scenario: transaction 1 inserts or updates a small amount of data but never commits, and active log usage slowly climbs to 90% or even 99%, until some transaction N finally fills the log completely. What does the database do at that point? Does it roll back transaction 1, so that log usage drops immediately and transaction N commits successfully? Or does transaction N fail with a "log full" error and get rolled back, while transaction 1 still is not rolled back?

    Let's run a test:

    Current active log file size and count settings:
     [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 get db cfg for test |grep -i log
     Log file size (4KB)                         (LOGFILSIZ) = 1024
     Number of primary log files                (LOGPRIMARY) = 13
     Number of secondary log files               (LOGSECOND) = 12
     
    Then set LOGPRIMARY=6 and LOGSECOND=0, giving 6 * 1024 * 4 KB = 24 MB of active log space.
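    A minimal sketch of the update command that would apply this change (not shown in the original transcript; it assumes the database is named test, and the LOGPRIMARY change only takes effect after all applications disconnect and the database is reactivated):

    db2 update db cfg for test using LOGPRIMARY 6 LOGSECOND 0

    The show detail output confirms the new values: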
    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 get db cfg for test show detail |grep -i log
     Log retain for recovery status                          = NO
     User exit for logging status                            = YES
     Catalog cache size (4KB)              (CATALOGCACHE_SZ) = 300                        300                       
     Log buffer size (4KB)                        (LOGBUFSZ) = 2150                       2150                      
     Log file size (4KB)                         (LOGFILSIZ) = 1024                       1024                      
     Number of primary log files                (LOGPRIMARY) = 6                          6                         
     Number of secondary log files               (LOGSECOND) = 0                          0 
     


    Transaction 1 (the +c option turns off CLP autocommit, so this insert stays uncommitted):
    db2 +c "insert into log_test values(88888888,'未提交的最早的事务')"

    Transaction 2:

    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "get snapshot for database on test" |grep 'Log space'
    Log space available to the database (Bytes)= 25042659
    Log space used by the database (Bytes)     = 285

    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "begin atomic declare i int default 0;while(i <10000) do insert into log_test values (i,'中间提交的事务');set i=i+1;end while;end"
    DB20000I  The SQL command completed successfully.
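
    For readability, here is the same compound statement as above, reformatted across lines (functionally identical to the one-liner); each run inserts and commits 10,000 rows, steadily adding committed log records around the one uncommitted transaction:

    begin atomic
      declare i int default 0;
      while (i < 10000) do
        insert into log_test values (i, '中间提交的事务');
        set i = i + 1;
      end while;
    end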
    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "get snapshot for database on test" |grep 'Log space'
    Log space available to the database (Bytes)= 24046227
    Log space used by the database (Bytes)     = 996717

    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "begin atomic declare i int default 0;while(i <10000) do insert into log_test values (i,'中间提交的事务');set i=i+1;end while;end"
    DB20000I  The SQL command completed successfully.
    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "get snapshot for database on test" |grep 'Log space'
    Log space available to the database (Bytes)= 23049795
    Log space used by the database (Bytes)     = 1993149

    ....

    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "get snapshot for database on test" |grep 'Log space'
    Log space available to the database (Bytes)= 8102587
    Log space used by the database (Bytes)     = 16940357

    ...

    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "get snapshot for database on test" |grep 'Log space'
    Log space available to the database (Bytes)= 3120147
    Log space used by the database (Bytes)     = 21922797

    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "get snapshot for database on test" |grep 'Log space'
    Log space available to the database (Bytes)= 254787
    Log space used by the database (Bytes)     = 24788157
    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "begin atomic declare i int default 0;while(i <10000) do insert into log_test values (i,'中间提交的事务');set i=i+1;end while;end"
    DB21034E  The command was processed as an SQL statement because it was not a 
    valid Command Line Processor command.  During SQL processing it returned:
    SQL0964C  The transaction log for the database is full.  SQLSTATE=57011

    As more data is inserted, "Log space available to the database (Bytes)" keeps shrinking while "Log space used by the database (Bytes)" keeps growing, until the final insert fails with SQL0964C The transaction log for the database is full. SQLSTATE=57011. Note that the oldest uncommitted transaction (transaction 1) was not rolled back by the database; its row can still be found:
    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "select * from log_test where id=88888888 with ur"

    ID          NAME                                                                                                                                                                 
       88888888 未提交的最早的事务 

    If we force off the application holding this oldest transaction, the available active log space is released:

    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "get snapshot for database on test" |grep -i 'oldest'
    Appl id holding the oldest transaction     = 16360
    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "force applications(16360)"
    DB20000I  The FORCE APPLICATION command completed successfully.
    DB21024I  This command is asynchronous and may not be effective immediately.

    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ db2 "get snapshot for database on test" |grep 'Log space'
    Log space available to the database (Bytes)= 21123760
    Log space used by the database (Bytes)     = 3919184
    [db2inst1@t3-dtpoc-dtpoc-web04 ~]$ 
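
    So DB2 never rolled transaction 1 back on its own; the log space was only released because we forced the application off manually. As an aside (not part of the original test), DB2 does provide the MAX_LOG and NUM_LOG_SPAN database configuration parameters to cap how much active log a single transaction may hold; when a transaction exceeds them it is rolled back (or its application forced off, depending on the DB2_FORCE_APP_ON_MAX_LOG registry variable). A hedged sketch:

    # cap a single transaction at 50% of the primary log space and at spanning 4 log files
    db2 update db cfg for test using MAX_LOG 50 NUM_LOG_SPAN 4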
       

    MySQL:
    Check the redo log settings: innodb_log_file_size is 50331648 bytes (48 MB) with two files in the log group, i.e. 96 MB of redo space in total, whereas the DB2 test above had 4 MB * 6 = 24 MB:
    mysql> SHOW VARIABLES LIKE 'innodb_log_file%';
    +---------------------------+----------+
    | Variable_name             | Value    |
    +---------------------------+----------+
    | innodb_log_file_size      | 50331648 |
    | innodb_log_files_in_group | 2        |
    +---------------------------+----------+
    2 rows in set (0.00 sec)

    mysql>
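
    As a quick arithmetic check (not in the original post), the total redo capacity implied by these settings can be computed directly from the variables:

    -- 50331648 * 2 = 100663296 bytes, about 96 MB
    SELECT @@innodb_log_file_size * @@innodb_log_files_in_group AS redo_capacity_bytes;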
     
    Transaction 1:
    mysql> create table log_test(id int,name varchar(1000));
    Query OK, 0 rows affected (0.02 sec)

    mysql> start transaction;
    Query OK, 0 rows affected (0.00 sec)

    mysql> insert into log_test values(88888888,'未提交的最早的事务');
    Query OK, 1 row affected (0.00 sec)

    Transaction 2 (define a procedure that inserts rows in batches):

    drop procedure if exists insert_log_test;

    delimiter //
    -- Each call inserts 4 rows per iteration for 1,000,000 iterations,
    -- i.e. 4,000,000 rows committed as a single transaction.
    create procedure insert_log_test()
    begin
        declare i int;
        set i = 0;
        start transaction;
        while i < 1000000 do
            insert into log_test values (i,'中间提交的事务+++++++++**********++++++++');
            insert into log_test values (i,'中间提交的事务+++++++++**********++++++++');
            insert into log_test values (i,'中间提交的事务+++++++++**********++++++++');
            insert into log_test values (i,'中间提交的事务+++++++++**********++++++++');
            set i = i + 1;
        end while;
        commit;
    end//
    delimiter ;

    Transaction 2:
    First, confirm that transaction 1 has not committed:
    mysql> select * from log_test;
    Empty set (0.00 sec)

    mysql> set transaction_isolation='READ-unCOMMITTED';
    Query OK, 0 rows affected (0.00 sec)

    mysql> select * from log_test;
    +----------+-----------------------------+
    | id       | name                        |
    +----------+-----------------------------+
    | 88888888 | 未提交的最早的事务          |
    +----------+-----------------------------+
    1 row in set (0.00 sec)

    Then start inserting data:
    mysql> call insert_log_test();
    Query OK, 0 rows affected (1 min 14.57 sec)

    mysql> call insert_log_test();
    Query OK, 0 rows affected (1 min 15.63 sec)

    mysql> call insert_log_test();
    Query OK, 0 rows affected (1 min 16.85 sec)

    mysql> call insert_log_test();
    Query OK, 0 rows affected (1 min 15.07 sec)

    mysql> call insert_log_test();
    Query OK, 0 rows affected (1 min 15.42 sec)

    mysql> call insert_log_test();
    Query OK, 0 rows affected (1 min 14.68 sec)

    mysql> call insert_log_test();
    Query OK, 0 rows affected (1 min 14.99 sec)

    mysql> select count(*) from log_test;
    +----------+
    | count(*) |
    +----------+
    | 28000001 |
    +----------+
    1 row in set (13.32 sec)

    mysql> set transaction_isolation='READ-COMMITTED';    
    Query OK, 0 rows affected (0.02 sec)

    mysql> select count(*) from log_test;
    +----------+
    | count(*) |
    +----------+
    | 28000000 |
    +----------+
    1 row in set (12.27 sec)

    After 28 million rows have been inserted, the log-full condition still has not been triggered (note that the READ UNCOMMITTED count of 28,000,001 includes the one uncommitted row, while READ COMMITTED shows 28,000,000). So redo log records belonging to an uncommitted transaction can evidently be overwritten. But will the buffer pool then flush that uncommitted row's page to disk? And if it never flushes it, will the buffer pool eventually fill up?
    mysql> show engine innodb status;

    BUFFER POOL AND MEMORY
    ----------------------
    Total large memory allocated 137428992
    Dictionary memory allocated 186438
    Buffer pool size   8192
    Free buffers       1
    Database pages     8189
    Old database pages 3041

    Let's insert another 4,000,000 rows and see how these numbers change:
    mysql> call insert_log_test();
    Query OK, 0 rows affected (1 min 15.13 sec)

    ----------------------
    BUFFER POOL AND MEMORY
    ----------------------
    Total large memory allocated 137428992
    Dictionary memory allocated 186438
    Buffer pool size   8192
    Free buffers       1024
    Database pages     7166
    Old database pages 2625
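
    The snapshot above only shows page totals; for a more direct view of how many pages are dirty (modified in memory but not yet flushed), the standard InnoDB status counters can be queried as well (a small addition, not in the original post):

    -- Innodb_buffer_pool_pages_dirty versus Innodb_buffer_pool_pages_total
    -- shows how much of the buffer pool is still waiting to be flushed
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';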

    So the buffer pool looks safe as well. Clearly I did not fully understand MySQL's redo logging and data-flushing logic; this explanation I found makes it clear:

      The redo log file group has a fixed capacity, so its files have to be reused in a circular fashion. That requires deciding whether the disk space taken up by certain redo records can be overwritten, i.e. whether the dirty pages they describe have already been flushed to disk. A global variable, checkpoint_lsn, represents how much of the redo log can currently be overwritten (its initial value is 8704). Once a page has been flushed to disk, its redo records are no longer needed and may be overwritten, and checkpoint_lsn can be advanced accordingly; advancing it is called performing a checkpoint.
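
    To relate this back to the test: the LOG section of SHOW ENGINE INNODB STATUS prints the current "Log sequence number" and the "Last checkpoint at" LSN; their difference (the checkpoint age, i.e. redo written but not yet covered by a checkpoint) is what has to stay within the roughly 96 MB of redo capacity configured here. A minimal sketch of checking it (field names as printed by InnoDB; exact layout varies by version):

    -- In the LOG section of the output, compare:
    --   Log sequence number  (current LSN)
    --   Last checkpoint at   (checkpoint LSN)
    -- checkpoint age = current LSN - checkpoint LSN, which InnoDB keeps below
    -- innodb_log_files_in_group * innodb_log_file_size (about 96 MB here)
    SHOW ENGINE INNODB STATUS\G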

    Because MySQL keeps the undo log and the redo log in separate files, the redo log can be overwritten even when a transaction stays uncommitted for a long time; if that transaction eventually has to roll back, the undo log alone is enough to do it. From the standpoint of this problem, MySQL's design is more sophisticated than DB2's: with DB2 it is common to hit the situation where one long-uncommitted transaction fills up the active log.

  • Original post: https://blog.csdn.net/liys0811/article/details/132706541