* Do nothing in phaseTwo if leader has not been touched.
* Drop follower if it refuses to cooperate.
This is important since a DB-Server that is a follower for a shard will,
after a reboot, think that it is a leader, at least for a short amount
of time. If it came back quickly enough, the leader might not have
noticed that it was away.
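A minimal sketch of the follower-dropping idea described above, not the actual ArangoDB code: the leader drops any follower that refuses a replication request, for example because it just rebooted and briefly believes it is the leader itself. Follower, replicateToFollower and dropFollower are hypothetical names used only for illustration.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Follower {
  std::string serverId;
  bool refusesReplication;  // e.g. answers "I am the leader" after a reboot
};

// Stand-in for the real replication request to the follower.
static bool replicateToFollower(Follower const& f) {
  return !f.refusesReplication;
}

static void dropFollower(std::vector<Follower>& followers,
                         std::string const& id) {
  followers.erase(std::remove_if(followers.begin(), followers.end(),
                                 [&](Follower const& f) {
                                   return f.serverId == id;
                                 }),
                  followers.end());
  std::cout << "dropped follower " << id << "\n";
}

int main() {
  std::vector<Follower> followers = {{"DBServer0001", false},
                                     {"DBServer0002", true}};
  // Any follower that refuses to apply the replication request is dropped;
  // it has to get back in sync before it can be re-added as a follower.
  for (auto const& f : std::vector<Follower>(followers)) {
    if (!replicateToFollower(f)) {
      dropFollower(followers, f.serverId);
    }
  }
  return 0;
}
```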
* Initialize theLeader non-empty, thus not assuming leadership.
* Correct ClusterInfo to look into Target/CleanedServers.
* Prevent usage of to-be-cleaned-out servers in new collections.
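As a hedged illustration of the two entries above, not the real ClusterInfo code: servers listed in Target/CleanedServers are filtered out of the candidate set for a new collection. The function below is hypothetical and takes the server lists as plain arguments; the real system would read them from the agency.

```cpp
#include <algorithm>
#include <string>
#include <vector>

std::vector<std::string> candidatesForNewCollection(
    std::vector<std::string> servers,
    std::vector<std::string> const& cleanedServers) {
  servers.erase(std::remove_if(servers.begin(), servers.end(),
                               [&](std::string const& s) {
                                 return std::find(cleanedServers.begin(),
                                                  cleanedServers.end(),
                                                  s) != cleanedServers.end();
                               }),
                servers.end());
  return servers;  // only servers that are not being cleaned out remain
}
```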
* After a restart, do not assume leadership of a shard.
* Agents' configuration is obtained from the leader's configuration.
* corrections in Supervision for advertised endpoints
* Updated CHANGELOG.
* Updated Documentation for cluster/health.
* Unified naming convention.
* Fixed missing update of volatile fields.
* Set version in the right order.
* Removed debug output.
* Fixed jslint warning: missing ';'.
* Fix index creation in cluster.
Simplify and correct error handling logic in ensureIndexCoordinator.
* After index creation, wait until index appears.
We wait until the Supervision has removed the isBuilding flag and
the coordinator has reloaded the Plan.
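Roughly, that waiting step can be pictured like this; a hypothetical sketch rather than the actual ensureIndexCoordinator code, with isBuilding and reloadPlan passed in as callbacks purely for illustration.

```cpp
#include <chrono>
#include <functional>
#include <stdexcept>
#include <thread>

void waitForIndexToAppear(std::function<bool()> isBuilding,
                          std::function<void()> reloadPlan,
                          std::chrono::seconds timeout) {
  auto const deadline = std::chrono::steady_clock::now() + timeout;
  // Poll until the isBuilding flag has disappeared from the Plan.
  while (isBuilding()) {
    if (std::chrono::steady_clock::now() > deadline) {
      throw std::runtime_error("index did not finish building in time");
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    reloadPlan();  // pick up the Supervision's removal of the isBuilding flag
  }
}
```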
* More index handling fixes.
* Explicitly remove isBuilding flag in coordinator (again).
* Fix order of arguments in REPLACE call.
* Take out debugging output again.
* Fix catch tests by holding mutex shorter.
* Better mutex handling in ClusterInfo.
* issue 506.3: backport 3.4: use camel-case configuration parameter names consistently, add a configuration version property to IResearch view meta
* backport: ensure meta version is supported
* backport: hide 'version' property from non-persistence json
* issue 506.2: backport 3.4: add optimization to not reexecute a primary-key filter if a match was already found
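The optimization above can be sketched as follows, purely for illustration (PrimaryKeyFilter and execute() are hypothetical names, not the ArangoSearch classes): a primary key matches at most one document, so once a match has been produced the filter can answer "no match" without running again.

```cpp
#include <cstdint>
#include <optional>

class PrimaryKeyFilter {
  std::uint64_t _key;
  bool _matchFound = false;

 public:
  explicit PrimaryKeyFilter(std::uint64_t key) : _key(key) {}

  // Returns the matching document id for the candidate key, if any.
  std::optional<std::uint64_t> execute(std::uint64_t candidateKey) {
    if (_matchFound) {
      return std::nullopt;  // the single possible match was already found
    }
    if (candidateKey == _key) {
      _matchFound = true;
      return candidateKey;
    }
    return std::nullopt;
  }
};
```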
* backport: explicitly check type of instance of the primary-key filter
* backport: return non-null prepared filter and convert check to assert
* Ungreylist move shard test.
* Move leader shard: wait until all but the old leader are in sync.
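A hedged sketch of that condition (a hypothetical helper, not the real moveShard code): leadership is only handed over once every planned server except the old leader is already in sync.

```cpp
#include <algorithm>
#include <string>
#include <vector>

bool allButOldLeaderInSync(std::vector<std::string> const& planned,
                           std::vector<std::string> const& inSync,
                           std::string const& oldLeader) {
  return std::all_of(planned.begin(), planned.end(),
                     [&](std::string const& server) {
                       return server == oldLeader ||
                              std::find(inSync.begin(), inSync.end(),
                                        server) != inSync.end();
                     });
}
```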
* Increase moveShard timeout to 10000 seconds.
* Add CHANGELOG.
* Fix compilation.
* Fix a misleading comment.
* issue 153: ensure views are dropped in Agency when database is dropped in cluster, minor fixes
* backport: add test to ensure views are dropped when database is dropped from plan, fix some issues in ClusterInfo
* optimize primary key lookups in ArangoSearch
* fix test
* Add JS tests
* temporarily comment out optimizations
* Added some DEBUG output for replication rest handler
* Some more debug logging.
* Increased the priority of the ReplicationHandler, so that we do not get stuck with locks that cannot be canceled. Also cancel the lock on the correct database.
* Added extensive log output for replication things
* Added tombstones to RestReplicationHandler. In the very unlikely case that the cancel of a lock is executed BEFORE the code that actually registers the lock, we now write a tombstone and do not lock.
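The tombstone idea can be sketched like this, as a hypothetical illustration rather than the actual RestReplicationHandler code: a cancel that overtakes the registration of its lock records a tombstone, and the late registration then sees the tombstone and refuses to take the lock.

```cpp
#include <mutex>
#include <string>
#include <unordered_set>

class LockRegistry {
  std::mutex _mutex;
  std::unordered_set<std::string> _activeLocks;
  std::unordered_set<std::string> _tombstones;

 public:
  // Called by the cancel request. Returns true if an active lock was removed,
  // false if only a tombstone was written.
  bool cancel(std::string const& id) {
    std::lock_guard<std::mutex> guard(_mutex);
    if (_activeLocks.erase(id) > 0) {
      return true;
    }
    _tombstones.insert(id);  // the cancel raced ahead of the registration
    return false;
  }

  // Called by the code that actually registers the lock. Returns false if the
  // lock was already cancelled via a tombstone, i.e. we must not lock.
  bool registerLock(std::string const& id) {
    std::lock_guard<std::mutex> guard(_mutex);
    if (_tombstones.erase(id) > 0) {
      return false;
    }
    _activeLocks.insert(id);
    return true;
  }
};
```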
* Revert "Added extensive log output for replication thins"
This reverts commit 6d4e37ea1e59e3b3457336019cc7dbc4c979504d.
* Added extensive log output for replication things, now at ERR level instead of MAINTAINER only
* Now actually use hours for synchronization
* React to errors under soft lock if they show up.
* Added a retry loop to increase the read-lock timer.
* Added more timing output in RocksDB collection internals to figure out why the followers are dropped
* Tweaked RocksDB options
* Revert "Tweaked RocksDB options"
This reverts commit 2bf9c43280beda4792c47d079387fe5154cdd896.
* Removed debug output
* Applied all changes requested by goedderz
* Deleted unused variable