* Consolidated _servers and _serverAdvertisedEndpoints, added rebootId, prepared change notifications
* Cleanup
* Added a RebootId type
* Began implementing RebootTracker (still WIP)
* Moved RebootId operators into the class
* Removed RebootId operator<< again
* Added tests, added CallbackGuard (sketched after this group), removed/commented old RebootTracker code
* Fix: do not try to call unset callbacks
* Split one test, added another
* Added more tests
* Renamed tests, added more tests
* Fixed missing variable declarations
* Let MockServer appear to be started
* Reordered test, fixed naming
* Implemented callMeOnChange()
* Re-implemented RebootTracker (not yet working)
* Resolved a TODO, updated a test, added comments
* Call old callbacks immediately
* Fixed tests
* Use EXPECT_* instead of ASSERT_*
* Suppress a log message
* Resolved TODOs
* Reverted changes on reading ServersRegistered
* Update RebootTracker
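The CallbackGuard mentioned above ties the lifetime of a registered callback to a guard object. A minimal sketch of the idea, assuming an RAII design in which destroying the guard revokes the callback; all names and details here are illustrative, not the actual implementation:

```cpp
#include <functional>
#include <iostream>
#include <utility>

// Illustrative RAII guard: while the guard is alive, the registered
// callback stays registered; when the guard is destroyed, it runs the
// revocation callback exactly once.
class CallbackGuard {
 public:
  CallbackGuard() = default;
  explicit CallbackGuard(std::function<void()> revoke)
      : _revoke(std::move(revoke)) {}

  // Movable but not copyable, so exactly one owner performs the revocation.
  CallbackGuard(CallbackGuard&& other) noexcept
      : _revoke(std::exchange(other._revoke, nullptr)) {}
  CallbackGuard& operator=(CallbackGuard&& other) noexcept {
    if (this != &other) {
      call();
      _revoke = std::exchange(other._revoke, nullptr);
    }
    return *this;
  }
  CallbackGuard(CallbackGuard const&) = delete;
  CallbackGuard& operator=(CallbackGuard const&) = delete;

  ~CallbackGuard() { call(); }

 private:
  void call() {
    // Never call an unset callback (cf. the fix above).
    if (_revoke) {
      _revoke();
      _revoke = nullptr;
    }
  }

  std::function<void()> _revoke;
};

int main() {
  {
    CallbackGuard guard([] { std::cout << "callback revoked\n"; });
  }  // guard goes out of scope; the revocation runs here
}
```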
* Introduce `rebootId` into ServerState for Cluster
* A server *boots* if it is started on a previously non-existing data
directory and hence does not have a UUID yet.
* A server *reboots* if it is started on a pre-existing data directory.
We keep the rebootId in the cluster's agency under
Current/ServersKnown/$uuid/rebootId.
When rebooting (and subsequently re-joining a cluster), the server increments
its rebootId in Phase 2 of registration. This way it can be detected within the
cluster whether a server was restarted.
This information will later be used to handle cases where server restarts can
lead to problems, for example with transactions or in-progress queries.
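For illustration, a strongly typed wrapper like the `RebootId` mentioned above could look roughly like the following. This is a sketch of the idea, not the actual class from the sources:

```cpp
#include <cstdint>
#include <iostream>

// Illustrative sketch of a strongly typed RebootId wrapping the counter
// stored at Current/ServersKnown/$uuid/rebootId, with the comparison
// operators as class members.
class RebootId {
 public:
  explicit constexpr RebootId(std::uint64_t value) noexcept : _value(value) {}

  constexpr std::uint64_t value() const noexcept { return _value; }

  // A reboot increments the id, so "newer" simply means "greater".
  constexpr bool operator==(RebootId other) const noexcept {
    return _value == other._value;
  }
  constexpr bool operator!=(RebootId other) const noexcept {
    return _value != other._value;
  }
  constexpr bool operator<(RebootId other) const noexcept {
    return _value < other._value;
  }

 private:
  std::uint64_t _value;
};

int main() {
  RebootId before{1};
  RebootId after{2};  // incremented in Phase 2 of re-registration
  if (before < after) {
    std::cout << "server was restarted\n";
  }
}
```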
* Move rebootId into Current/ServersKnown/
* Fixed typo
* Fixed log ids
* Add deletion of ServersKnown/UUID from agency
* Add deletion of Current/ServersKnown/UUID to removeServer
* Clean up readRebootIdFromAgency and add retry loop around it
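The retry loop around `readRebootIdFromAgency` presumably follows the usual pattern of retrying a transient agency read with a short pause between attempts. A generic sketch of that pattern; the signature, return type, and 500 ms pause are assumptions:

```cpp
#include <chrono>
#include <cstdint>
#include <optional>
#include <thread>

// Hypothetical stand-in for the actual agency read; assume it returns
// std::nullopt on a transient failure (e.g. no agency leader elected yet).
std::optional<std::uint64_t> readRebootIdFromAgency();

// Sketch of a retry loop: keep re-reading until the value arrives or the
// attempt budget is exhausted.
std::optional<std::uint64_t> readRebootIdWithRetry(int maxAttempts) {
  for (int attempt = 0; attempt < maxAttempts; ++attempt) {
    if (auto rebootId = readRebootIdFromAgency()) {
      return rebootId;
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
  }
  return std::nullopt;
}
```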
* Bugfix
* Added nolint comments
* Fixed initialization order
* Fixed ClusterInfo-test
* Added log messages
* Revert "Fixed ClusterInfo-test"
This reverts commit d983596979.
* Disabled assertion for googletest
* Ignore windows compile warning
* Always call loadServers in loadCurrent
* Fix a really subtle bug caused by not returning a value
* Fixed compile error due to forbidden implicit cast
* Fixed compile error on windows
* Fixed compile error due to devel merge
* Removed dead comment
* Removed TODO note
* Extended comment
* Removed TODO note
* Fixed using an invalidated iterator (the general pitfall is sketched below)
* Copy string only if necessary
* Fixed compile error
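The invalidated-iterator fix above addresses a classic C++ pitfall. A generic illustration of the bug class and its remedy, not the actual code from the commit:

```cpp
#include <map>
#include <string>

// Generic illustration of the pitfall: erasing through an iterator
// invalidates it, so the loop must advance via erase()'s return value
// instead of incrementing the now-dangling iterator.
void removeExpired(std::map<std::string, int>& entries) {
  for (auto it = entries.begin(); it != entries.end(); /* no ++it here */) {
    if (it->second <= 0) {
      it = entries.erase(it);  // erase returns the next valid iterator
    } else {
      ++it;
    }
  }
}
```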
* First version of ResignLeadership Job.
* Port some performance optimizations from CleanOutServerJob.
* Draft of resigning leadership on shutdown.
* Moved code into Maintenance Feature. Fixed beginShutdown.
* Agents' list is obtained from the leader's configuration
* corrections in Supervision for advertised endpoints
* change log
* Updated Documentation for cluster/health.
* Unified naming convention.
* Fixed missing update of volatile fields.
* Set version in right order.
* Removed debug output.
* Fixed jslint - missing ;
- Schmutz now called "Maintenance" and completely implemented in C++
- Fix index locking bug in mmfiles
- Fix a bug in mmfiles with silent option and repsert
- Slightly increase supervision okperiod and graceperiod
* Supervision goes to Maintenance mode when /arango/Supervision/Maintenance exists (see the sketch after this group)
* coordinator route stays in place
* stop updates in transient store when supervision is off
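Conceptually, the supervision loop checks a maintenance flag before doing any work. A minimal sketch, assuming an in-memory mirror of the agency key; all names are illustrative:

```cpp
#include <atomic>

// Illustrative only: an in-memory flag mirroring the existence of the
// agency key /arango/Supervision/Maintenance.
std::atomic<bool> maintenanceMode{false};

void supervisionStep() {
  if (maintenanceMode.load()) {
    // Supervision is switched off: run no health checks and do not
    // update the transient store.
    return;
  }
  // ... run health checks and update transient state here ...
}
```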
* UI: document/edge editor now remembering their modes (e.g. code or tree)
* changed shardDistribution API behaviour, added a PUT route to fetch only collection-based shard distribution
* ui: optimized shards view, added missing cleanup function in nodes view
* broken test
* Adjusted shard distribution tests to match the new API behaviour
* Pass variables by reference
* CHANGELOG
* Added a more sophisticated test for shardDistribution format
* Updated shard distribution test to use request instead of download
* Added a cxx reporter for the shard distribution. WIP
* Added some virtual functions/classes for Mocking
* Added a unittest for the new CXX ShardDistribution Reporter.
* The ShardDistributionReporter now reports Plan and Current correctly. However, it does not dare to find a good total/current value and just returns a default; hence these tests are still red
* Shard distribution now uses the cxx variant
* The ShardDistribution reporter now tries to execute count on the shards
* Updated changelog
* Added error case tests. If the servers time out, the mechanism stops bothering after two seconds and just reports default values.
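The two-second fallback described above maps naturally onto a future with a timeout. A sketch of the idea, assuming futures are used; the real reporter's plumbing differs:

```cpp
#include <chrono>
#include <cstdint>
#include <future>
#include <iostream>
#include <thread>

// Sketch of the timeout behaviour: ask a shard for its document count,
// but fall back to a default value if no answer arrives within two seconds.
std::uint64_t countWithTimeout(std::future<std::uint64_t>&& pendingCount,
                               std::uint64_t defaultValue) {
  using namespace std::chrono_literals;
  if (pendingCount.wait_for(2s) == std::future_status::ready) {
    return pendingCount.get();
  }
  return defaultValue;  // shard timed out: stop bothering, report default
}

int main() {
  auto slowShard = std::async(std::launch::async, [] {
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return std::uint64_t{42};
  });
  std::cout << countWithTimeout(std::move(slowShard), 0) << "\n";  // prints 0
}
```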
* remove some now-unused V8 persistents
* do not raise that many bogus assertions
* do not rely on server role being defined
* slightly better debug output for V8 context debugging
* fix collection ids in inventory response
* simplify bootstrap a bit
* slightly better error handling
* make elapsed time a queryable value
* use less memory for stub collections
* added assertions that will always make sense
* added assertions
* do not garbage-collect while waiting
* less copying of parameters
* do not show "load indexes into memory" buttons for mmfiles engine
as all indexes are in memory anyway
* when a collection is truncated via the web interface, flush the WAL and rotate all active journals
this will close all open journals on leader and followers and make them subject to compaction opportunities
* fix invalid server id values being passed from web interface to backend
* introduce afterTruncate method for indexes (sketched after this group)
* added test case for issue #3447
* updated CHANGELOG
* don't warn about replicationFactor for system collections
* check that the queries actually use the geo index and not some other index
* properly report error in web interface
* fix some internal checks that made truncate fail for bigger collections in maintainer mode
* also run a compact() operation after a serious truncate
in order to make iteration over the truncated range much faster
when the collection is next accessed
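Taken together, the truncate-related commits suggest a flow along these lines. A hedged sketch of the shape of the `afterTruncate` hook; all class names are illustrative, not the actual ArangoDB classes:

```cpp
#include <memory>
#include <vector>

// Illustrative shape of the hook: after a truncate, every index gets a
// chance to reset itself wholesale instead of removing entries one by one.
class Index {
 public:
  virtual ~Index() = default;
  virtual void afterTruncate() = 0;
};

class Collection {
 public:
  void truncate() {
    // ... drop all documents, flush the WAL, rotate active journals ...
    for (auto& index : _indexes) {
      index->afterTruncate();  // let each index react to the truncate
    }
    // ... then optionally compact() so iteration over the truncated
    //     range is fast when the collection is next accessed ...
  }

 private:
  std::vector<std::unique_ptr<Index>> _indexes;
};
```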
* increase default maximum number of V8 contexts to at least 16
* fixed a bug that occurred when servers failed while agency leadership also changed
* redid entire design of checkDBServers/checkCoordinators.
* comparison in supervision must be between oldPersisted and newHealth
* UI stuff
* UI stuff
* FailedServer test needed adjustment
* Hopefully final round
* fixed supervision failure detection
* FailedServer tests back to origin devel
* Documented the `oldNot` precondition in the Agency HTTP API docs
* Changed to only look for status updates
* Removed a non-action line in api-cluster
* added unique id to cluster, added access to Health
* added agents to health api
* added agents to health api
* added agents to health api
* transaction information for api
* agents listed like other servers
* Restored a line lost in a merge conflict
* fixed git merge glitch