MySQL Replication and HA at Facebook - Part 2
Our suite of MySQL replication and HA tools and techniques ensures efficient, safe replication at Facebook scale. Despite many kinds of process and system failures, MySQL safely replicates trillions of transactions every year. The MySQL replication/HA stack at Facebook delivers this scale with Facebook's enhanced semisync replication plugin, binlog servers, and a suite of high-availability tools called DBStatus, Logtailer, and FastFailover. Come learn about these technologies and how we meet these scale challenges.
In this second part, we focus on the automation built on top of FB MySQL replication technology. The automation maintains MySQL high availability by handling day-to-day incidents. The areas covered include automated master failover, data-consistency invariants, replication failure domains, recovery from power loss, and continuous disaster drills.
2. MySQL Replication and HA at Facebook, Part II. Jeff Jiang, Production Engineer, Facebook, Inc. jjj@fb.com
3. Agenda
❖ MySQL HA: theory and Facebook solutions
❖ Facebook MySQL HA automations
  • MySQL replication management at Facebook
  • FB MySQL Semisync and strongly consistent failovers
❖ Disaster recovery practices
  • Enforcement of Semisync failure domains
  • Maintaining availability during power loss and network cuts
  • Practicing disasters: large-scale testbeds and drills
4. MySQL HA: theory and Facebook solutions
5. MySQL HA: the theory
❖ Master-slave replication + master failover = MySQL HA
❑ A single MySQL instance is not reliable
  • In contrast, a group of MySQL instances is more reliable
  • MySQL master-slave replication spins up a group of instances
❑ A single MySQL master is not reliable
  • If a group of instances is available, we can fail over
6. MySQL HA: the Facebook solution
❖ Master-slave replication + master failover = MySQL HA
❑ Master-slave asynchronous replication to achieve read HA
❑ Master failover to achieve write HA
❑ Lossless MySQL Semisync to achieve data consistency
❖ At Facebook, we develop automations to manage replication and master failovers
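Facebook's plugin backports lossless semisync into MySQL 5.6 (covered on later slides); as a rough reference point only, here is a minimal sketch of enabling the equivalent in stock MySQL 5.7+, where "lossless" corresponds to rpl_semi_sync_master_wait_point = AFTER_SYNC. Hostnames, credentials, and the run_sql() helper are illustrative, not Facebook's tooling.

```python
import mysql.connector

def run_sql(host, stmt):
    # Thin helper reused in later sketches; credentials are placeholders.
    conn = mysql.connector.connect(host=host, user="admin", password="secret")
    try:
        conn.cursor().execute(stmt)
    finally:
        conn.close()

# Master side: with AFTER_SYNC, the engine commit is not made visible until
# a semisync ack has come back, which is the "lossless" property.
run_sql("master.example", "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'")
run_sql("master.example", "SET GLOBAL rpl_semi_sync_master_enabled = ON")
run_sql("master.example", "SET GLOBAL rpl_semi_sync_master_wait_point = 'AFTER_SYNC'")

# Each slave (or binlog server) that is allowed to ack:
run_sql("slave1.example", "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so'")
run_sql("slave1.example", "SET GLOBAL rpl_semi_sync_slave_enabled = ON")
```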
7. MySQL HA automations at Facebook
8. MySQL HA automation: an overview
❖ Facebook HA automation is production driven
  • Discovery: automatic discovery of the replication topology
  • Monitoring: actively polling the state of master and slaves, triggering remediations and alerts when failures happen
  • Remediation: automatically fixing issues
9. MySQL HA automation: discovery (1)
❖ To achieve high availability, we create a master-slave replication topology
❖ The "model" of the replication topology is defined in the config manager service
  • Where is the master? Where are the slaves?
  • How many slaves are in location X?
❖ The materialized topology is stored in the discovery service
10. MySQL HA automation: discovery (2)
Discovery of the master and slaves is critical for both clients and automations.
[Diagram: the Config Manager Service holds the model (preferred master: California; fallbacks: Iowa, Oregon; read-only: Sweden); the materialized master-slave topology is published to the Discovery Service, which serves clients and automations.]
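To make the model/materialized split concrete, here is a hypothetical sketch of what the two services might store for one replicaset. All field names, ids, and hostnames are invented for illustration; the talk only shows the California/Iowa/Oregon/Sweden example.

```python
# Model held by the config manager: the *desired* topology.
REPLICASET_MODEL = {
    "replicaset": "rs-0042",                    # invented id
    "preferred_master_region": "California",
    "fallback_regions": ["Iowa", "Oregon"],
    "read_only_regions": ["Sweden"],
    "slaves_per_region": 3,
}

# Materialized topology published to the discovery service: the *actual*
# placement, which clients and automations resolve against.
MATERIALIZED_TOPOLOGY = {
    "master": "db1234.california",              # invented hostnames
    "slaves": ["db5678.iowa", "db9012.oregon", "db3456.sweden"],
}
```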
11. MySQL HA automation: monitoring (1)
❖ Planet-scale materialized replication topologies have to be monitored
  • Many master-slave replication topologies: the replicasets
  • Failures are frequent and normal
❖ DBStatus: Facebook's distributed MySQL replication monitoring
  • Monitors replication behavior on a single node
  • Quorum-based voting decides the topology's healthiness
12. MySQL HA automation: monitoring (2)
Once a replication topology is discovered, we need to monitor it.
[Diagram: a dbstatus instance runs beside the master and each slave, polling via SHOW SLAVE STATUS, SHOW BINARY LOGS, etc., and raising alerts.]
13. MySQL HA automation: monitoring (3)
❖ Different roles of DBStatus on master and slaves
  • DBStatus on a slave is responsible for monitoring the replication status of that slave itself
  • DBStatus on the master is responsible for monitoring that a quorum of the slaves are online and healthy
  • DBStatus on slaves also sends heartbeat writes to the master
  • All DBStatus instances poll the master's status from each other and vote on the master being offline
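A minimal sketch of the quorum vote described above, under stated assumptions: poll_peer() stands in for the real DBStatus peer RPC, which the talk does not detail, and the "master_unreachable" verdict string is invented.

```python
def master_is_offline(peers, poll_peer, quorum):
    """True only if at least `quorum` DBStatus peers see the master down."""
    votes = sum(1 for p in peers if poll_peer(p) == "master_unreachable")
    return votes >= quorum

# Toy example: five watchers, majority quorum of three.
peers = ["dbstatus1", "dbstatus2", "dbstatus3", "dbstatus4", "dbstatus5"]
sees_down = lambda p: "master_unreachable"      # pretend all peers agree
assert master_is_offline(peers, sees_down, quorum=3)
```

The quorum is what keeps a single flaky watcher (or a network blip near one watcher) from triggering a needless failover.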
14. MySQL HA automation: remediation (1)
Large-scale auto-alarming naturally leads to large-scale auto-remediation.
❖ Human DBAs cannot effectively deal with the regular failures and disasters of a planet-scale fleet
❖ At Facebook, we automate the traditional DBA routines into DBStatus to automatically remediate most failures
  • Disable/replace bad slaves
  • Master failover
  • Repoint slaves
15. MySQL HA automation: remediation (2)
Handling of a broken slave.
[Diagram: dbstatus detects the broken slave and it is disabled and dropped from the Discovery Service, so clients no longer route reads to it.]
16. MySQL HA automation: remediation (3)
But what if the master dies? Automation does failovers: FastFailover.
❖ DBStatus instances talk with each other and vote that the master is offline
❖ One DBStatus gets the coordinator lock and elects the new master
❖ The coordinator DBStatus continues to finish the rest of the master failover
  • Does replication catch-up on the candidate new master
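A sketch of that coordinator flow, with loud assumptions: acquire_lock(), promote(), and the GTID comparison are stand-ins for interfaces the talk does not show (real GTID sets are compared by containment, not by a scalar).

```python
def fast_failover(acquire_lock, candidates, gtid_count_of, promote):
    """One DBStatus coordinates the failover; the rest back off."""
    if not acquire_lock("replicaset-coordinator"):
        return None                       # another DBStatus won the lock
    # Elect the most caught-up candidate. Real GTID sets are compared by
    # containment (GTID_SUBSET semantics); a transaction count is used here
    # only to keep the sketch short.
    new_master = max(candidates, key=gtid_count_of)
    promote(new_master)                   # catch up from BLS, lift read-only
    return new_master
```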
17. MySQL HA automation: remediation (4)
Semisync is deployed to assist replication catch-up in FastFailover.
❖ Catch up the candidate master with the offline master
  • Lossless Semisync is deployed by developing the Binlog Server (BLS)
[Diagram: the master's dump thread streams binlog events to Binlog Servers; a BLS writer thread persists the binlog, updates the binlog position, and ACKs; the master's engine commit waits for the BLS ACK; slaves replicate via the BLS tier.]
18. MySQL HA automation: remediation (5)
Node-fence: another way of stopping writes on the master.
❖ Lossless Semisync in FB MySQL 5.6 waits for the Semisync ack to come back to the master before the engine commit
❖ Node-fence automation: stopping Semisync acking effectively disables writes on the master
  • Especially effective when the master itself is inaccessible or cannot respond to 'SET SUPER_READ_ONLY = 1'
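A sketch of node-fencing as it follows from the ack-before-commit wait; stop_acking() is an assumed control RPC on the ack sources (the binlog servers), not a documented interface.

```python
def node_fence(ack_sources, stop_acking):
    """Freeze writes on a master we may not be able to reach directly."""
    # The master's engine commit blocks until a semisync ack arrives; once
    # every ack source is muted, no new write on the old master can complete,
    # even though we never talked to the old master itself.
    for bls in ack_sources:
        stop_acking(bls)   # assumed control RPC on each binlog server
```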
19. MySQL HA automation: remediation (6)
Case study: failing over away from a broken master by node-fencing.
[Diagram: the BLS tier stops acking the broken master, a slave is promoted to master, and the Discovery Service is updated so clients follow the new master.]
20. MySQL HA automation: remediation (7)
Repointing of slaves is needed when a network partition happens.
❖ A network partition can leave a slave pointing at a previous master; repointing it back to the current master is the fastest remediation
  • GTID auto-positioning makes repointing straightforward
[Diagram: after a network partition, a slave still replicating from the old master is repointed, through its BLS, to the current master.]
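A sketch of a GTID-based repoint using standard MySQL 5.6 syntax, reusing the run_sql() idea from the earlier sketch; hostnames are placeholders. With MASTER_AUTO_POSITION = 1 the slave negotiates its own starting point, which is what makes repointing straightforward.

```python
def repoint(run_sql, slave, new_master):
    """Point a stray slave at the current master using GTID auto-positioning."""
    run_sql(slave, "STOP SLAVE")
    # AUTO_POSITION lets the slave ask the new master for exactly the
    # transactions it is missing: no binlog file/offset bookkeeping needed.
    run_sql(slave, "CHANGE MASTER TO MASTER_HOST = '%s', "
                   "MASTER_AUTO_POSITION = 1" % new_master)
    run_sql(slave, "START SLAVE")
```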
21. FastFailover and Semisync enhancements
Failover is easy; data consistency is not.
❖ Async slaves can get ahead of Semisync slaves
  • Sacrifice failover availability by enforcing a check on all slaves?
❖ Semisync might be turned off accidentally
  • rpl_semi_sync_master_enabled
  • rpl_semi_sync_master_timeout
❖ A BLS no longer in the topology might still be acking the master
22. FB Semisync: Async Behind Semisync (1)
FastFailover only needs to check the BLS during a failover.
❖ Vanilla MySQL 5.6/5.7/8.0 does not guarantee that Semisync slaves are ahead of async slaves
  • The master prepares TX1 then dies; an async slave may have TX1 while a Semisync slave does not
  • Failover would have to check ALL slaves to protect against phantom reads
❖ FB MySQL can enforce that async slaves are always behind Semisync slaves
23. FB Semisync: Async Behind Semisync (2)
FastFailover only needs to check the BLS during a failover.
[Diagram, two panels. Vanilla MySQL 5.6/5.7/8.0: the master prepares and binlog-commits M:123, and an async slave engine-commits M:123 even though the BLS never received it, so what should failover do? Async Behind Semisync: slaves can only commit M:123 after the BLS has it, so catch-up from the BLS is enough.]
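To make the invariant concrete, a small sketch of the failover-time check it enables; gtid_subset() mirrors the semantics of MySQL's GTID_SUBSET() function and is a stand-in here, as are the set arguments.

```python
def safe_to_catch_up_from_bls(bls_gtids, async_slave_gtids, gtid_subset):
    # Under the FB "async behind semisync" enforcement, every async slave's
    # executed set is contained in what the BLS tier has acked, so the
    # candidate master only needs to catch up from the BLS rather than
    # scanning every slave in the replicaset.
    return all(gtid_subset(s, bls_gtids) for s in async_slave_gtids)
```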
24. FB Semisync: "Safe Turnoff" of Semisync
No need to worry about Semisync being accidentally turned off.
❖ Accidentally turning off Semisync leads to data drift
  • On slaves, we turn off Semisync for replication performance
  • On masters, rpl_semi_sync_master_timeout may be set to too short a duration
❖ FB Semisync feature: the server automatically exits when Semisync is turned off while there are pending transactions
  • Dynamic variable rpl_semi_sync_master_crash_if_active_trxs
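A sketch of arming that safety net; the variable comes from Facebook's MySQL branch, not stock MySQL, and run_sql() is the placeholder helper from the earlier sketch.

```python
def arm_semisync_safety(run_sql, master):
    # If semisync gets disabled (for example by a too-short
    # rpl_semi_sync_master_timeout) while transactions still await acks,
    # crash the server instead of letting them drift to async commit.
    # FB-MySQL-only variable, named on the slide above.
    run_sql(master, "SET GLOBAL rpl_semi_sync_master_crash_if_active_trxs = ON")
```

Crashing sounds drastic, but it converts a silent consistency violation into an ordinary master failure, which FastFailover already knows how to handle.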
25. FB Semisync: Semisync Whitelist (1)
A BLS can become strayed and stealthily send acks to the master.
❖ At Facebook scale, BLS replacements are regular events
  • An unhealthy BLS is removed from the Discovery Service
❖ Automations might not be able to force a strayed BLS to stop
  • A strayed BLS might come back to life afterwards
❖ FB MySQL enforces that only acks from whitelisted Semisync slaves are respected by the master
  • Dynamic variable rpl_semi_sync_master_whitelist
26. FB Semisync: Semisync Whitelist (2)
Safe replacement of a temporarily unresponsive Binlog Server.
❖ BLS_B becomes unresponsive
❖ Replacement happens by updating the Semisync whitelist first
❖ Node-fence happens
❖ BLS_B reconnects and is rejected (the master's dump thread exits)
[Diagram: the master's whitelist changes from [BLS_A, BLS_B] to [BLS_A, BLS_C]; BLS_C takes BLS_B's place in the Discovery Service; a reconnecting BLS_B is refused.]
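A sketch of that whitelist-first ordering: swapping the entry before fencing guarantees a strayed BLS_B that reconnects later is rejected. rpl_semi_sync_master_whitelist is FB-specific, and the comma-separated value format shown is an assumption.

```python
def replace_bls(run_sql, master, old_bls, new_bls, whitelist):
    """Swap an unresponsive BLS out of the ack whitelist before fencing."""
    whitelist = [b for b in whitelist if b != old_bls] + [new_bls]
    # Assumed comma-separated format; acks from senders not on the list are
    # ignored, so a strayed old_bls coming back to life changes nothing.
    run_sql(master, "SET GLOBAL rpl_semi_sync_master_whitelist = '%s'"
                    % ",".join(whitelist))
    return whitelist

# e.g. replace_bls(run_sql, "master", "BLS_B", "BLS_C", ["BLS_A", "BLS_B"])
```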
27. FB Semisync: Trim Binlog To Recover (1)
Cleaning up the leftovers of FastFailover is non-trivial.
❖ After FastFailover, a node-fenced instance cannot rejoin replication
  • A node-fenced instance cannot take replication writes
  • Executed_Gtid is ahead of the storage engine on the instance
❖ FB MySQL truncates uncommitted transactions in the binlog during crash recovery
  • Static flag trim-binlog-to-recover
  • Automation can then rejoin the instance into the replication topology
28. FB Semisync: Trim Binlog To Recover (2)
Lightweight recovery of a node-fenced instance.
❖ FastFailover happens
❖ New writes reach the original master but are never acked
❖ The Semisync master times out and the server restarts
❖ Crash recovery happens and the prepared binlog tail is truncated
❖ The original master is repointed to the new master
[Diagram: the fenced original master shows Executed_Gtid 101 while the promoted master is at 100; after truncation both read 100 and replication resumes.]
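A sketch of that rejoin flow under stated assumptions: restart_mysqld() is a stand-in for whatever restarts the server, trim-binlog-to-recover is the static FB flag named on the previous slide (its exact spelling as a command-line option is assumed), and run_sql() is the placeholder helper from earlier.

```python
def rejoin_fenced_master(restart_mysqld, run_sql, old_master, new_master):
    # Restarting with the flag makes crash recovery truncate binlog
    # transactions that were never acked/engine-committed (101 -> 100 in
    # the slide's example), realigning the binlog with the storage engine.
    restart_mysqld(old_master, extra_args=["--trim-binlog-to-recover"])
    # With binlog and engine consistent again, a plain GTID repoint suffices.
    run_sql(old_master, "CHANGE MASTER TO MASTER_HOST = '%s', "
                        "MASTER_AUTO_POSITION = 1" % new_master)
    run_sql(old_master, "START SLAVE")
```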
29. MySQL Disaster Recovery at Facebook