sleto-it 0ba532b16a
Doc - Replication Refactor - Part 1 (#4555)
Next steps after DC2DC and Cluster doc improvements:

- We refactor the replication sections and create a more intuitive separation between Master/Slave and the new Active Failover in 3.3
- We create corresponding sections for Master/Slave and Active Failover in the Administration and Deployment chapters, as well as in the Scalability chapter, where these "modes" are introduced
- We touch and improve the "Architecture" chapter as well, where some architecture info has to be placed
- We reorganize the TOC into a more "logical" order:
-- Deployment
-- Administration
-- Security
-- Monitoring
-- Troubleshooting
- We add parts to the TOC
- We add a TOC per page, using the page-toc plugin
- We also put the "Scalability" and "Architecture" chapters close together, a preliminary step for further improvements / aggregation
- We improve the Swagger documentation

Internal Ref:
- https://github.com/arangodb/planning/issues/1692
- https://github.com/arangodb/planning/issues/1655
- https://github.com/arangodb/planning/issues/1858
- https://github.com/arangodb/planning/issues/973 (partial fix)
- https://github.com/arangodb/planning/issues/1498 (partial fix)
2018-02-28 12:23:19 +01:00

Active Failover

This chapter introduces ArangoDB's Active Failover environment.

An Active Failover setup is defined as:

  • One ArangoDB single-server instance that is readable and writable by clients, called the Leader
  • An ArangoDB single-server instance that is passive and neither readable nor writable, called the Follower
  • At least one Agency node, acting as a "witness" to determine which server becomes the Leader in a failure situation

Simple Leader / Follower setup, with a single node agency
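As a sketch, such a setup can be bootstrapped with the ArangoDB Starter. The hostnames and data directories below are placeholders, and the exact mode name depends on the Starter version (older Starters call this mode `resilientsingle`), so treat this as an illustration rather than a canonical command:

```shell
# On the first machine (hostname "hostA"), start a Starter in
# active-failover mode; it launches an agent and a single server:
arangodb --starter.mode=activefailover --starter.data-dir=./db

# On each additional machine, start a Starter that joins the first one:
arangodb --starter.mode=activefailover --starter.data-dir=./db \
  --starter.join hostA
```

Once all Starters have joined, the Agency elects one of the single servers as the Leader and the others become passive Followers.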

The advantage compared to a traditional Master/Slave setup is that there is an active third party which observes and supervises all involved server processes: Follower instances can rely on the Agency to determine the correct Leader server. This setup is made resilient by the fact that all official ArangoDB drivers can automatically determine the correct Leader server and redirect requests appropriately.

Furthermore, Foxx services also perform a failover automatically: should your Leader instance (which is also the Foxxmaster) fail, the newly elected Leader will reinstall all Foxx services and resume executing queued Foxx tasks. Database users which were created on the Leader will also be valid on the newly elected Leader, provided that they had already been synced.
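The driver-side leader discovery described above can be sketched as follows. The snippet assumes the JSON shape returned by ArangoDB's `GET /_api/cluster/endpoints` call, which lists the Leader's endpoint first; the hostnames in the sample response are made up for illustration:

```python
def pick_leader(endpoints_response):
    """Given the parsed JSON body of GET /_api/cluster/endpoints,
    return the Leader's endpoint (listed first by convention)."""
    endpoints = endpoints_response.get("endpoints", [])
    if not endpoints:
        raise RuntimeError("no endpoints reported by the server")
    return endpoints[0]["endpoint"]

# Sample response shape with hypothetical hostnames:
sample = {
    "error": False,
    "code": 200,
    "endpoints": [
        {"endpoint": "tcp://leader.example:8529"},
        {"endpoint": "tcp://follower.example:8529"},
    ],
}
print(pick_leader(sample))  # → tcp://leader.example:8529
```

A driver would re-run this lookup (or retry against the next endpoint) whenever a request fails, so that after a failover it transparently redirects traffic to the newly elected Leader.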

For further information about Active Failover in ArangoDB, please refer to the following sections:

Note: Asynchronous Failover, Resilient Single, Active-Passive and Hot Standby are other terms that have been used to describe the Active Failover environment. Starting from version 3.3, Active Failover is the preferred term to identify such an environment.