Introduction
This post shares an interesting case: while running a DDL with pt-osc, the execution appeared to pause. We decided to force-stop the DDL, but because replication had stopped on the replica, the tool kept reconnecting and KILL had no lasting effect; in the end the connection was broken by locking the account.
Symptoms
Ticket execution anomaly
Time: 2023-01-06 10:00
Domain: percona-cn-north-1-rds.jdcloud.com
Symptom: the ticket stayed in the "executing" state
Version: 5.7.21-20-log
Replication had stopped because the replica's disk was full. During the investigation we also found that the instance was Percona Server and that the table named in the error used the TokuDB storage engine.
While killing the connection we found it kept reconnecting, so KILL had no effect. According to the official documentation, pt-osc waits indefinitely whenever replication lags or stops — yet this behavior could not be reproduced: in our own tests the tool did not wait.
Primary
COMMAND: Sleep indicated that execution was paused, presumably waiting on replica lag.
mysql> select * from information_schema.processlist where user='db_auto';
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| 85274516 | db_auto | x.x.x.x:48116 | las_cloud_his | Sleep | 0 | | NULL | 153 | 1 | 0 |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
1 row in set (0.00 sec)
Replica
SHOW SLAVE STATUS showed that replication had stopped with an error.
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: x.x.x.x
Master_User: replicater
Master_Port: 3358
Connect_Retry: 60
Master_Log_File: mysql-bin.303904
Read_Master_Log_Pos: 194
Relay_Log_File: relay-log.153071
Relay_Log_Pos: 353562363
Relay_Master_Log_File: mysql-bin.303853
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1030
Last_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '13339e05-643e-11ea-b105-fa163e231687:402057589' at master log mysql-bin.303853, end_log_pos 354180060. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
Skip_Counter: 0
Exec_Master_Log_Pos: 353562150
Relay_Log_Space: 370592967
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1030
Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '13339e05-643e-11ea-b105-fa163e231687:402057589' at master log mysql-bin.303853, end_log_pos 354180060. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1682449630
Master_UUID: 13339e05-643e-11ea-b105-fa163e231687
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp: 230106 05:30:11
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: 13339e05-643e-11ea-b105-fa163e231687:340434562-402057594
Executed_Gtid_Set: 13339e05-643e-11ea-b105-fa163e231687:1-402057588
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.01 sec)
Key fields:
Slave_IO_Running: Yes
Slave_SQL_Running: No
Seconds_Behind_Master: NULL
So the IO thread was running while the SQL thread had stopped.
The three pairs of binlog coordinates when replication stopped are shown below; note that the SQL thread (Relay_Master_Log_File: mysql-bin.303853) was roughly 50 binlog files behind the IO thread (Master_Log_File: mysql-bin.303904):
Master_Log_File: mysql-bin.303904
Read_Master_Log_Pos: 194
Relay_Log_File: relay-log.153071
Relay_Log_Pos: 353562363
Relay_Master_Log_File: mysql-bin.303853
Exec_Master_Log_Pos: 353562150
Analysis
Waiting after replication stops
The pt-osc documentation states that when any replica lags or stops, the DDL on the primary waits indefinitely.
The tool waits forever for replicas to stop lagging. If any replica is stopped, the tool waits forever until the replica is started. The data copy continues when all replicas are running and not lagging too much.
Replication can stop in two ways — the IO thread stops or the SQL thread stops — which needs to be verified in later tests.
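The two stop modes can be told apart from a couple of SHOW SLAVE STATUS fields. A minimal sketch (my own illustration; field names follow the 5.7 output above):

```python
def replication_state(status: dict) -> str:
    """Classify replication health from SHOW SLAVE STATUS fields."""
    io_ok = status.get("Slave_IO_Running") == "Yes"
    sql_ok = status.get("Slave_SQL_Running") == "Yes"
    if io_ok and sql_ok:
        return "running"
    if io_ok and not sql_ok:
        return "sql_thread_stopped"   # this incident: Last_SQL_Errno = 1030
    if sql_ok:
        return "io_thread_stopped"
    return "both_stopped"

# The state captured in this incident:
incident = {"Slave_IO_Running": "Yes", "Slave_SQL_Running": "No"}
print(replication_state(incident))  # sql_thread_stopped
```

When only the SQL thread is stopped, as here, Seconds_Behind_Master reads NULL even though the IO thread keeps pulling binlogs.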
KILL has no effect
Repairing the replica would take time, so we first tried to kill the connection to end the DDL, but the tool kept reconnecting and KILL had no effect.
mysql> kill 85274516;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from information_schema.processlist where user='db_auto';
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| 85274598 | db_auto | x.x.x.x:52208 | las_cloud_his | Sleep | 0 | | NULL | 123 | 1 | 0 |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
1 row in set (0.00 sec)
mysql> kill 85274598;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from information_schema.processlist where user='db_auto';
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| 85274605 | db_auto | x.x.x.x:52482 | las_cloud_his | Sleep | 1 | | NULL | 717 | 1 | 0 |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
1 row in set (0.00 sec)
mysql> kill 85274605;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from information_schema.processlist where user='db_auto';
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| 85290449 | db_auto | x.x.x.x:47254 | las_cloud_his | Sleep | 3 | | NULL | 2528 | 1 | 0 |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
1 row in set (0.00 sec)
We had run into something similar before when force-killing pt-osc mid-run — for instance when the primary's disk filled up during the data copy and the DDL had to be stopped — where killing repeatedly until the retry count was exceeded did the job.
The retry table shows that no retry count can be configured for the replication-stopped scenario:
OPERATION TRIES WAIT
=================== ===== ====
create_triggers 10 1
drop_triggers 10 1
copy_rows 10 0.25
swap_tables 10 1
update_foreign_keys 10 1
analyze_table 10 1
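The TRIES/WAIT semantics above can be sketched as a simple retry wrapper (my illustration of the table, not pt-osc's actual code): each operation is attempted up to its tries count, sleeping wait seconds between attempts.

```python
import time

# Retry budget per operation, mirroring the table above.
RETRY = {
    "create_triggers": (10, 1),
    "drop_triggers": (10, 1),
    "copy_rows": (10, 0.25),
    "swap_tables": (10, 1),
    "update_foreign_keys": (10, 1),
    "analyze_table": (10, 1),
}

def with_retries(op_name, func):
    """Run func(), retrying per the operation's (tries, wait) budget."""
    tries, wait = RETRY[op_name]
    for attempt in range(1, tries + 1):
        try:
            return func()
        except Exception:
            if attempt == tries:
                raise              # budget exhausted: give up
            time.sleep(wait)

# An operation that fails twice, then succeeds:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("deadlock")
    return "ok"

print(with_retries("copy_rows", flaky))  # ok
```

Crucially, there is no entry for "replica stopped" — that path waits rather than retries, which is the behavior this case ran into.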
So the suspicion was that under replication lag pt-osc keeps waiting, re-checking once per second, which is why the connection could not be killed.
Since the connection could not be killed, the plan became to repair the replica and let the DDL continue.
Replication error
The detailed error shows that a write failed.
mysql> select * from performance_schema.replication_applier_status_by_worker limit 3\G
*************************** 1. row ***************************
CHANNEL_NAME:
WORKER_ID: 1
THREAD_ID: NULL
SERVICE_STATE: OFF
LAST_SEEN_TRANSACTION: 13339e05-643e-11ea-b105-fa163e231687:402057589
LAST_ERROR_NUMBER: 1030
LAST_ERROR_MESSAGE: Worker 1 failed executing transaction '13339e05-643e-11ea-b105-fa163e231687:402057589' at master log mysql-bin.303853, end_log_pos 354180060; Could not execute Write_rows event on table las_cloud_his._order_master_new; Got error 28 from storage engine, Error_code: 1030; handler error No Error!; the event's master log mysql-bin.303853, end_log_pos 354180060
LAST_ERROR_TIMESTAMP: 2023-01-06 05:30:11
...
The replication error is 1030 "Got error 28 from storage engine" — the replica's disk was full.
perror confirms the error codes:
mysql> \! perror 1030
MySQL error code 1030 (ER_GET_ERRNO): Got error %d from storage engine
mysql>
mysql> \! perror 28
OS error code 28: No space left on device
The instance's disk is 1 TB.
Yet the table sizes (total_mb) add up to far more than 1 TB.
Monitoring showed disk usage peaking at 1 TB and then dropping around 09:20, when a DROP TABLE was executed.
So the disk is 1 TB, but total_mb far exceeds 1 TB — why?
The system tables are known to be approximate, but an error this large is implausible.
Listing the database files at the OS level shows that the tables use the TokuDB storage engine:
93G ../las_cloud_his/order_master_main_3e7df851_5_1d.tokudb
72G ../las_cloud_his/reserve_task_main_1b9f9abb_1_1d_B_0.tokudb
42G ../las_cloud_his/order_reserve_history_main_1b8e39fb_1_1d_B_0.tokudb
37G ../las_cloud_his/sale_order_main_1ba46925_1_1d_B_0.tokudb
37G ../las_cloud_his/order_items_main_1b7fc219_1_1d_B_0.tokudb
35G ../las_cloud_his/order_status_task_main_1b932b0d_1_1d_B_0.tokudb
33G ../las_cloud_his/order_operation_log_main_1b8791a3_1_1d_B_0.tokudb
28G ../las_cloud_his/production_plan_order.ibd
23G ../las_cloud_his/sale_order_goods_main_1ba75668_1_1d_B_0.tokudb
20G ../las_cloud_his/production_plan_order_goods.ibd
18G ../las_cloud_his/order_master_extend.ibd
14G ../las_cloud_his/order_master_reserve_main_1b856e7e_1_1d_B_0.tokudb
13G ../las_cloud_his/history_order_master_report_main_1b787d90_1_1d_B_0.tokudb
10G ../las_cloud_his/sale_order_task_main_1ba8ccde_1_1d_B_0.tokudb
9.4G ../las_cloud_his/order_status_task_key_idx_order_sysno_1b932b0d_1_1d_B_1.tokudb
7.8G ../las_cloud_his/wms_orders_monitor_main_1bacc7e9_1_1d_B_0.tokudb
7.6G ../las_cloud_his/order_operation_log_key_idx_order_id_1b8791a3_1_1d_B_1.tokudb
7.1G ../las_cloud_his/order_operation_log_key_idx_order_sysno_1b8791a3_1_1d_B_2.tokudb
6.5G ../las_cloud_his/mads_orders_monitor_main_1b7a848e_1_1d_B_0.tokudb
6.4G ../las_cloud_his/reserve_task_20180320_main_1ba306fe_1_1d_B_0.tokudb
6.4G ../las_cloud_his/order_fill_task_main_1b7d881a_1_1d_B_0.tokudb
6.3G ../las_cloud_his/tms_orders_monitor_main_1ba96290_1_1d_B_0.tokudb
5.0G ../las_cloud_his/dispatch_orders_main_1b77dfd5_1_1d_B_0.tokudb
4.4G ../las_cloud_his/trans_haier_task_main_1bab9435_1_1d_B_0.tokudb
3.9G ../las_cloud_his/message_sending_task_main_1b7cc54d_1_1d_B_0.tokudb
3.9G ../las_cloud_his/dispatch_master_main_2f1734e6_3_1d.tokudb
3.7G ../las_cloud_his/reserve_task_key_order_sysno_index_1b9f9abb_1_1d_B_3.tokudb
3.7G ../las_cloud_his/order_items_key_idx_order_id_1b7fc219_1_1d_B_1.tokudb
3.6G ../las_cloud_his/order_status_task_key_idx_create_time_1b932b0d_1_1d_B_2.tokudb
3.1G ../las_cloud_his/order_operation_log_key_index_create_time_1b8791a3_1_1d_B_3.tokudb
3.1G ../las_cloud_his/order_items_key_idx_order_sysno_1b7fc219_1_1d_B_2.tokudb
2.7G ../las_cloud_his/dispatch_orders_key_idx_order_id_1b77dfd5_1_1d_B_1.tokudb
2.3G ../las_cloud_his/order_master_key_idx_order_id_3e7df851_2_1d.tokudb
2.2G ../las_cloud_his/production_plan_dispatch_order.ibd
2.2G ../las_cloud_his/dispatch_orders_key_idx_order_sysno_1b77dfd5_1_1d_B_2.tokudb
2.1G ../las_cloud_his/reserve_task_key_ts_index_1b9f9abb_1_1d_B_4.tokudb
2.1G ../las_cloud_his/reserve_task_key_create_time_index_1b9f9abb_1_1d_B_2.tokudb
2.1G ../las_cloud_his/order_reserve_history_key_idx_order_id_1b8e39fb_1_1d_B_1.tokudb
2.0G ../las_cloud_his/reserve_task_key_idx_status_execute_count_1b9f9abb_1_1d_B_1.tokudb
1.6G ../las_cloud_his/dispatch_task_main_1b7864f8_1_1d_B_0.tokudb
1.5G ../las_cloud_his/order_master_key_idx_receive_time_3e7df851_3_1d.tokudb
1.3G ../las_cloud_his/order_cancel_task_main_1b7d68cd_1_1d_B_0.tokudb
1.2G ../las_cloud_his/message_reply_timeout_task_main_1b7ca4a8_1_1d_B_0.tokudb
SHOW CREATE TABLE confirms ENGINE=TokuDB with ROW_FORMAT=TOKUDB_LZMA:
mysql> show create table las_cloud_his.order_master\G
*************************** 1. row ***************************
Table: order_master
Create Table: CREATE TABLE `order_master` (
`sysno` bigint(20) NOT NULL AUTO_INCREMENT,
`order_id` varchar(20) NOT NULL,
`receive_time` datetime DEFAULT NULL,
`waybill_code` varchar(20) DEFAULT NULL COMMENT '运单号(无运单号同order_id)',
...
PRIMARY KEY (`sysno`),
KEY `idx_order_id` (`order_id`) USING BTREE,
KEY `idx_receive_time` (`receive_time`) USING BTREE,
KEY `idx_waybillcode` (`waybill_code`)
) ENGINE=TokuDB AUTO_INCREMENT=10460647360 DEFAULT CHARSET=utf8 ROW_FORMAT=TOKUDB_LZMA
1 row in set (0.03 sec)
Resolution
Rebuilding the replica
Since the replica's disk was already full, a new replica was built to replace it.
Comparing server_id and Slave_UUID before and after proves that a new replica was substituted.
Before the rebuild:
mysql> show slave hosts;
+------------+------+------+------------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID |
+------------+------+------+------------+--------------------------------------+
| 1683086877 | | 3358 | 1682449630 | 91948252-bfb1-11ec-9c01-fa163e6ce615 |
+------------+------+------+------------+--------------------------------------+
1 row in set (0.01 sec)
After the rebuild:
mysql> show slave hosts;
+------------+------+------+------------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID |
+------------+------+------+------------+--------------------------------------+
| 1683059315 | | 3358 | 1682449630 | cd3c8fbb-8d76-11ed-a659-fa163e1c1deb |
+------------+------+------+------------+--------------------------------------+
1 row in set (0.00 sec)
KILL
In theory the DDL should resume once the replica is repaired, but in practice it still showed COMMAND: Sleep and still could not be killed.
mysql> select * from information_schema.processlist where user='db_auto';
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| 85290449 | db_auto | x.x.x.x:47254 | las_cloud_his | Sleep | 3 | | NULL | 2528 | 1 | 0 |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
1 row in set (0.00 sec)
mysql> select * from information_schema.processlist where user='db_auto';
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
| 85290449 | db_auto | x.x.x.x:47254 | las_cloud_his | Sleep | 2 | | NULL | 1966 | 1 | 0 |
+----------+---------+---------------------+---------------+---------+------+-------+------+---------+-----------+---------------+
1 row in set (0.00 sec)
The SQL corresponding to processlist_id = 85290449:
mysql> select
-> *
-> from
-> performance_schema.threads t1
-> join performance_schema.events_statements_current t2 on t1.THREAD_ID = t2.THREAD_ID
-> join information_schema.processlist t3 on t3.ID = t1.PROCESSLIST_ID
-> where
-> t3.ID = 85290449\G
*************************** 1. row ***************************
THREAD_ID: 85291565
NAME: thread/sql/one_connection
TYPE: FOREGROUND
PROCESSLIST_ID: 85290449
PROCESSLIST_USER: db_auto
PROCESSLIST_HOST: x.x.x.x
PROCESSLIST_DB: las_cloud_his
PROCESSLIST_COMMAND: Sleep
PROCESSLIST_TIME: 4
PROCESSLIST_STATE: NULL
PROCESSLIST_INFO: NULL
PARENT_THREAD_ID: NULL
ROLE: NULL
INSTRUMENTED: YES
HISTORY: YES
CONNECTION_TYPE: TCP/IP
THREAD_OS_ID: 20622
THREAD_ID: 85291565
EVENT_ID: 27
END_EVENT_ID: 27
EVENT_NAME: statement/sql/select
SOURCE: socket_connection.cc:101
TIMER_START: 15202992244257537536
TIMER_END: 15202992244366080536
TIMER_WAIT: 108543000
LOCK_TIME: 0
SQL_TEXT: SELECT 'pt-online-schema-change keepalive'
DIGEST: 512e888765927f4393e9f938df43b1ec
DIGEST_TEXT: SELECT ?
CURRENT_SCHEMA: las_cloud_his
OBJECT_TYPE: NULL
OBJECT_SCHEMA: NULL
OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: NULL
MYSQL_ERRNO: 0
RETURNED_SQLSTATE: NULL
MESSAGE_TEXT: NULL
ERRORS: 0
WARNINGS: 0
ROWS_AFFECTED: 0
ROWS_SENT: 1
ROWS_EXAMINED: 0
CREATED_TMP_DISK_TABLES: 0
CREATED_TMP_TABLES: 0
SELECT_FULL_JOIN: 0
SELECT_FULL_RANGE_JOIN: 0
SELECT_RANGE: 0
SELECT_RANGE_CHECK: 0
SELECT_SCAN: 0
SORT_MERGE_PASSES: 0
SORT_RANGE: 0
SORT_ROWS: 0
SORT_SCAN: 0
NO_INDEX_USED: 0
NO_GOOD_INDEX_USED: 0
NESTING_EVENT_ID: NULL
NESTING_EVENT_TYPE: NULL
NESTING_EVENT_LEVEL: 0
ID: 85290449
USER: db_auto
HOST: 11.41.119.163:47254
DB: las_cloud_his
COMMAND: Sleep
TIME: 4
STATE:
INFO: NULL
TIME_MS: 4513
ROWS_SENT: 1
ROWS_EXAMINED: 0
1 row in set (0.08 sec)
SQL_TEXT shows SELECT 'pt-online-schema-change keepalive' — presumably a liveness probe.
The replica had been repaired, yet the connection still could not be killed. Why?
First check replication lag: replication was healthy with no delay.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: x.x.x.x
Master_User: replicater
Master_Port: 3358
Connect_Retry: 60
Master_Log_File: mysql-bin.303978
Read_Master_Log_Pos: 194
Relay_Log_File: relay-log.000104
Relay_Log_Pos: 407
Relay_Master_Log_File: mysql-bin.303978
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 194
Relay_Log_Space: 695
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1682449630
Master_UUID: 13339e05-643e-11ea-b105-fa163e231687
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: 13339e05-643e-11ea-b105-fa163e231687:402057596-402057598
Executed_Gtid_Set: 13339e05-643e-11ea-b105-fa163e231687:1-402057598
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.01 sec)
But PROCESSLIST_COMMAND still showed Sleep — why?
The cause was not found at the time, and the connection could not be killed. Lacking access to the application server, we could not kill the client process either, so the same effect was finally achieved by locking the account. Note that in MySQL, ACCOUNT LOCK is checked only at login time — it blocks new connections but does not terminate existing sessions — so an existing session still needs a final KILL; once locked, pt-osc's reconnect is denied.
mysql> alter user db_auto account lock;
Query OK, 0 rows affected (0.00 sec)
Checking connections: no reconnection.
mysql> select * from information_schema.processlist where user='db_auto';
Empty set (0.01 sec)
The pt-osc log:
EXECUTE START AT 2023-01-05 23:17:51
Copying `las_cloud_his`.`order_master`: 20% 06:29:11 remain
Copying `las_cloud_his`.`order_master`: 40% 04:54:05 remain
Copying `las_cloud_his`.`order_master`: 60% 03:18:36 remain
Replica MSS-je74pe529t is stopped. Waiting.
2023-01-06T18:33:41 Error copying rows from `las_cloud_his`.`order_master` to `las_cloud_his`.`_order_master_new`: Lost connection to MSS-z1w3992ikq while waiting for replica lag (DBI connect('las_cloud_his;host=percona-cn-north-1-008d31fae83a43f7.rds.jdcloud.com;port=3358;charset=utf8;mysql_read_default_group=client','db_auto',...) failed: Access denied for user 'db_auto'@'x.x.x.x'. Account is locked. at /usr/bin/pt-online-schema-change line 2345.)
EXECUTE FAIL AT 2023-01-06 18:33:41
Key lines:
- Replica MSS-je74pe529t is stopped. Waiting. — the replica's replication had stopped;
- Access denied for user 'db_auto'@'x.x.x.x'. Account is locked. — the reconnect attempt failed to log in.
Afterwards the account was unlocked.
mysql> alter user db_auto account unlock;
Query OK, 0 rows affected (0.00 sec)
Online DDL
In fact, the TokuDB storage engine itself supports adding columns in seconds, so it is a good fit for native online DDL.
As shown below, adding columns to a table of 300 million rows took 0.14 s.
EXECUTE START AT 2023-01-06 14:15:08
--------------
alter table order_master add column `dispatch_type` tinyint(4) default null COMMENT '派车方式', add column `is_manual_intervene` tinyint(4) default null COMMENT '是否人工干预', add column `dispatch_mode` tinyint(4) default null COMMENT '分派模式', add column `is_reserve_first_day` tinyint(4) default null COMMENT '是否第一天预约'
--------------
Query OK, 0 rows affected (0.14 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql: [Warning] Using a password on the command line interface can be insecure.
EXECUTE DONE AT 2023-01-06 14:15:08
So if the SQL review stage could identify the storage engine and recommend the appropriate execution method, execution efficiency would improve noticeably.
SQL review relies on the open-source project goinception, whose results currently carry no storage-engine information. The detailed review result looks like this:
{'id': 2, 'stage': 'CHECKED', 'errlevel': 0, 'stagestatus': 'Audit Completed', 'errormessage': '', 'sql': "alter table order_master add column `dispatch_type` tinyint(4) default null COMMENT '派车方式'", 'affected_rows': 333373008, 'sequence': '0_0_00000001', 'backup_dbname': '', 'execute_time': '0', 'sqlsha1': '', 'backup_time': '0', 'actual_affected_rows': ''}
Principles
TokuDB
TokuDB is a storage engine that was popular in the spinning-disk era: it offers good compression while retaining decent read and write performance, and large companies used it to solve capacity-plus-performance problems.
Its main features include transactions, MVCC, and a high data compression ratio.
Compared with InnoDB, TokuDB's biggest advantage is compression. According to the official documentation, the ratio can reach up to 25x.
TokuDB achieves up to 25x space savings on data and indexes over InnoDB due to high compression.
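This compression also explains the earlier puzzle of total_mb far exceeding the 1 TB disk: information_schema reports logical (uncompressed) sizes, while the .tokudb files on disk are compressed. A back-of-the-envelope check — the numbers below are hypothetical, not the incident's actual figures:

```python
# Illustrative only: hypothetical sizes and ratio.
logical_tb = 5.0   # size as reported by information_schema (uncompressed)
ratio = 5.0        # effective compression ratio (TokuDB claims up to 25x)
on_disk_tb = logical_tb / ratio
print(on_disk_tb)  # 1.0 -> 5 TB of logical data fits on a 1 TB disk
```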
The index data structure in TokuDB also differs from InnoDB's.
InnoDB indexes are B+ trees. The core idea is to widen each level of the tree to lower its height and match disk pages, reducing the number of disk IOs. Read/write characteristics:
- every lookup walks one path from the root to a leaf, so query performance is stable;
- write and update performance is mediocre, for two reasons: the row must first be located on disk and modified in place, which costs random IO; and since data pages have a fixed size, heavy inserts and deletes can trigger page splits and merges, so IO drags down the CPU.
B+ trees therefore suit read-heavy, write-light workloads.
TokuDB indexes are Fractal trees, whose core idea is turning random IO into sequential IO. Read/write characteristics:
- write performance is good and random inserts are a natural fit: each internal node carries, besides pointers to its parent and children, a buffer area. Incoming writes first land in this FIFO buffer, which converts inserts into sequential writes;
- query performance is mediocre: when the data to access is on disk, TokuDB must decompress it before reading it in memory, so there is overhead, making it a poor fit for latency-sensitive workloads.
Fractal trees therefore suit write-heavy, read-light workloads.
If the data volume is large and performance requirements are modest, TokuDB can lower disk usage — for example archival workloads with lots of random inserts.
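The buffering idea can be sketched in a few lines — a toy node that absorbs writes into a FIFO buffer and pushes them down in one batch (a simplification: real Fractal trees cascade such buffers down every level of the tree):

```python
from collections import deque

class BufferedNode:
    """Toy model of a Fractal-tree internal node: writes go into a FIFO
    buffer and are pushed down in one sequential batch when it fills."""
    def __init__(self, capacity=4):
        self.buffer = deque()
        self.capacity = capacity
        self.child = []          # stand-in for the child subtree

    def insert(self, key):
        self.buffer.append(key)  # cheap: append only, no disk seek
        if len(self.buffer) >= self.capacity:
            self.flush()

    def flush(self):
        # One sequential batch write replaces many random writes.
        while self.buffer:
            self.child.append(self.buffer.popleft())

node = BufferedNode(capacity=3)
for k in [42, 7, 19, 3, 88]:
    node.insert(k)
print(node.child)         # [42, 7, 19]  (first full buffer flushed)
print(list(node.buffer))  # [3, 88]      (still buffered)
```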
MySQL's pluggable storage-engine design allows converting a table from InnoDB to TokuDB online.
The following statement changes a table's engine to TokuDB, provided the server has the TokuDB engine installed:
alter table table_name engine=tokudb;
pt-osc creates a new table when executing the DDL — does that new table use TokuDB?
Testing shows that during execution the tool reads the table definition with SHOW CREATE TABLE, so the storage engine is unchanged before and after.
pt-osc
Retry
pt-osc supports retrying on failure in certain scenarios, specifically:
The tool retries each operation if these errors occur:
Lock wait timeout (innodb_lock_wait_timeout and lock_wait_timeout)
Deadlock found
Query is killed (KILL QUERY <thread_id>)
Connection is killed (KILL CONNECTION <thread_id>)
Lost connection to MySQL
Note that replication lag and stopped replication are not on the list.
Native MySQL DDL inherently causes replication lag, because the primary and its replicas execute the statement one after another; a large-table DDL can therefore lag replicas badly, even to the point of affecting the business.
One advantage of pt-osc is low replication lag — primary and replicas effectively apply the change at the same time — which can also produce seemingly odd side effects, such as a DDL on the primary affecting the replica, e.g. trigger creation locking tables there.
In fact, pt-osc has several options for the lag scenario: during the data copy it continuously checks replication lag and pauses the copy whenever lag exceeds the threshold. Note that this is continuous monitoring — there is no notion of a retry limit.
--max-lag
The main options:
- --max-lag controls the lag threshold; beyond it the tool keeps re-checking;
- --check-interval controls how often to re-check;
- --recursion-method controls how replicas are discovered;
- --recurse controls whether cascading replicas are also checked.
Among these, --max-lag defaults to 1s and judges lag by the Seconds_Behind_Master field.
--max-lag
type: time; default: 1s
Pause the data copy until all replicas' lag is less than this value. After each data-copy query (each chunk), the tool looks at the replication lag of all replicas to which it connects, using Seconds_Behind_Master. If any replica is lagging more than the value of this option, then the tool will sleep for --check-interval seconds, then check all replicas again. If you specify --check-slave-lag, then the tool only examines that server for lag, not all servers. If you want to control exactly which servers the tool monitors, use the DSN value to --recursion-method.
The current configuration:
# replica-related #
max-lag: =1 # pause the data copy when any replica lags beyond this value, until all replicas are below it
recursion-method: =processlist,hosts # how replicas are discovered: hosts = show slave hosts, processlist = show processlist
recurse: =1 # recursion depth when discovering cascading replicas
# check-related #
check-interval: =1 # how often to re-check --max-lag when any replica exceeds it; default 1
slavelag check
First, the pt-osc version running in production:
zhangkai321@sql-inception-45c94ea9:~$ pt-online-schema-change --version
pt-online-schema-change 3.0.6
As mentioned above, the SQL_TEXT column of performance_schema.events_statements_current showed SELECT 'pt-online-schema-change keepalive'.
Searching the source for this string shows that before checking replica lag the tool verifies connectivity to the primary, and then waits check-interval before checking again.
# #####################################################################
# Make a ReplicaLagWaiter to help wait for slaves after each chunk.
# Note: the "sleep" function is also used by MySQLStatusWaiter and
# FlowControlWaiter
# #####################################################################
my $sleep = sub {
# Don't let the master dbh die while waiting for slaves because we
# may wait a very long time for slaves.
my $dbh = $cxn->dbh();
if ( !$dbh || !$dbh->ping() ) {
eval { $dbh = $cxn->connect() }; # connect or die trying
if ( $EVAL_ERROR ) {
$oktorun = 0; # flag for cleanup tasks
chomp $EVAL_ERROR;
die "Lost connection to " . $cxn->name() . " while waiting for "
. "replica lag ($EVAL_ERROR)\n";
}
}
$dbh->do("SELECT 'pt-online-schema-change keepalive'");
sleep $o->get('check-interval');
return;
};
The replica lag check tests each replica's connectivity and then its lag; even if it finds that replication has stopped, it does not abort the run.
my $get_lag;
# The plugin is able to override the slavelag check so tools like
# pt-heartbeat or other replicators (Tungsten...) can be used to
# measure replication lag
if ( $plugin && $plugin->can('get_slave_lag') ) {
$get_lag = $plugin->get_slave_lag(oktorun => $oktorun);
}
else {
$get_lag = sub {
my ($cxn) = @_;
my $dbh = $cxn->dbh();
if ( !$dbh || !$dbh->ping() ) {
eval { $dbh = $cxn->connect() }; # connect or die trying
if ( $EVAL_ERROR ) {
# As the docs say: "The tool waits forever for replicas
# to stop lagging. If any replica is stopped, the tool
# waits forever until the replica is started."
# https://bugs.launchpad.net/percona-toolkit/+bug/1402051
PTDEBUG && _d('Cannot connect to', $cxn->name(), ':',
$EVAL_ERROR);
# Make ReplicaLagWaiter::wait() report slave is stopped.
return undef;
}
}
my $lag;
eval {
$lag = $ms->get_slave_lag($dbh);
};
if ( $EVAL_ERROR ) {
PTDEBUG && _d('Cannot get lag for', $cxn->name(), ':',
$EVAL_ERROR);
}
return $lag; # undef if error
};
}
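The Perl above can be paraphrased as a small loop (a sketch of the semantics, not pt-osc's actual code): get_lag returns None when a replica is stopped or unreachable, and the waiter keeps sleeping for check-interval as long as any replica is stopped or lagging beyond max-lag. The max_checks escape hatch is my addition for testability — pt-osc itself waits forever.

```python
import time

def wait_for_replicas(get_lag, replicas, max_lag=1.0, check_interval=1.0,
                      keepalive=lambda: None, max_checks=None):
    """Pause until every replica reports lag <= max_lag.
    get_lag(replica) returns seconds of lag, or None if stopped/unreachable."""
    checks = 0
    while True:
        lags = [get_lag(r) for r in replicas]
        if all(lag is not None and lag <= max_lag for lag in lags):
            return checks                  # all caught up: resume the copy
        keepalive()                        # e.g. SELECT '... keepalive' on the primary
        checks += 1
        if max_checks is not None and checks >= max_checks:
            raise TimeoutError("replicas still stopped or lagging")
        time.sleep(check_interval)

# Simulate a replica that is stopped for 3 checks, then catches up.
samples = iter([None, None, None, 0.0])
checks = wait_for_replicas(lambda r: next(samples), ["replica1"],
                           check_interval=0.01)
print(checks)  # 3
```

This matches the incident's symptoms: a Sleep-state keepalive connection on the primary, re-established after every KILL, for as long as the replica stayed broken.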
pt-osc was also run with the MySQL general log enabled to trace the statements it issues.
However, the problem did not reproduce: whether the replica's replication was stopped or the replica host was shut down, the DDL on the primary did not wait and ran normally. To be analyzed further.
The test command:
pt-online-schema-change --host=127.0.0.1 -uadmin -p3341 --alter "engine=innodb" P=3341,D=test_zk,t=t3_bak --print --charset=utf8 --no-check-alter --no-check-replication-filters --critical-load=Threads_running=500,Threads_connected=800 --recursion-method dsn=D=percona,t=dsns --max-lag=1 --recurse=1 --check-interval=1 --progress percentage,20 --execute
Note that the replica is specified via the table percona.dsns:
mysql> select * from percona.dsns;
+----+-----------+----------------------------------------------------+
| id | parent_id | dsn |
+----+-----------+----------------------------------------------------+
| 1 | NULL | D=percona,t=dsns,h=127.0.0.1,P=3342,u=admin,p=3341 |
+----+-----------+----------------------------------------------------+
1 row in set (0.00 sec)
The reason is that this test instance could not discover the replica when --recursion-method=processlist,hosts was specified.
mysql> show slave hosts;
+-----------+------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID |
+-----------+------+------+-----------+--------------------------------------+
| 3342 | | 3342 | 3341 | f248888e-2feb-11ec-a7b0-fa163e19c3b7 |
+-----------+------+------+-----------+--------------------------------------+
1 row in set (0.00 sec)
mysql> SHOW PROCESSLIST;
+------+------------+-----------------+------+-------------+------+---------------------------------------------------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+------+------------+-----------------+------+-------------+------+---------------------------------------------------------------+------------------+
| 1248 | admin | 127.0.0.1:49371 | NULL | Query | 0 | starting | SHOW PROCESSLIST |
| 1262 | replicater | 127.0.0.1:45850 | NULL | Binlog Dump | 12 | Master has sent all binlog to slave; waiting for more updates | NULL |
+------+------------+-----------------+------+-------------+------+---------------------------------------------------------------+------------------+
2 rows in set (0.00 sec)
Account locking
Syntax for locking and unlocking an account in MySQL:
alter user db_auto account lock;
alter user db_auto account unlock;
The lock state can be read from the account_locked column of the mysql.user system table: Y means the account is locked, N means it is not.
Checking the state after locking and after unlocking:
mysql> alter user db_auto account lock;
Query OK, 0 rows affected (0.00 sec)
mysql> select user, host, account_locked from mysql.user where user='db_auto'\G
*************************** 1. row ***************************
Host: %
User: db_auto
account_locked: Y
1 row in set (0.00 sec)
mysql> alter user db_auto account unlock;
Query OK, 0 rows affected (0.00 sec)
mysql> select user, host, account_locked from mysql.user where user='db_auto'\G
*************************** 1. row ***************************
Host: %
User: db_auto
account_locked: N
1 row in set (0.00 sec)
So when might locking an account be useful?
For example:
- create the account locked, and unlock it once grants are complete;
- the account is no longer in use;
- the account has been compromised;
- the account is only temporary — lock it when done.
Conclusion
In this production case, a DDL on a TokuDB table in a Percona instance kept showing COMMAND: Sleep, meaning execution was paused. The cause: the replica's disk filled up and stopped the SQL thread, and pt-osc's continuous lag checking made KILL ineffective — even after the replica was repaired.
Per the official documentation, when lag checking is configured, pt-osc keeps monitoring and waiting whenever it sees replication lag or stopped replication, so the connection cannot be killed; it was finally broken by locking the account. The documented behavior matches this case, so replica state deserves attention during a pt-osc run: the tool only proceeds while every replica is replicating normally without lag.
However, the behavior did not reproduce in testing: with replication stopped or the replica shut down, the DDL on the primary did not wait but executed normally, contrary to the documentation. To be analyzed further.
A brief comparison of the two engines:
InnoDB indexes are B+ trees — widen each level to lower the tree and match disk pages, reducing disk IO.
TokuDB indexes are Fractal trees — turn random IO into sequential IO by writing into each node's FIFO buffer first.
So B+ trees suit read-heavy, write-light workloads, and Fractal trees write-heavy, read-light ones.
For large data volumes with modest performance requirements, TokuDB's high compression ratio can reduce disk usage.
TODO
- The Fractal tree data structure behind TokuDB indexes
- Does pt-osc wait indefinitely when replication stops or the replica is shut down?
- Why pt-osc shows COMMAND: Sleep
References
- pt-online-schema-change Documentation
- 3.0.6/bin/pt-online-schema-change
- How to lock user accounts in MySQL
- An approach to reducing MySQL disk usage
- Percona Product Documentation: TokuDB introduction
- 虎哥与TokuDB的故事 (a story about TokuDB)
This article originally appeared on the WeChat public account 丹柿小院, under the title "pt-osc 从库 SQL 线程停止导致 DDL 无法 kill".