* Use int type for server id
Change serverId to an int
Pass syncerId only for synchronous replication
Added UrlBuilder
structs to classes, reordering
Added Location class, cleanup
Fixed initialization order
Use Location class
Use string for large ints
Documentation
Added clientInfo to ReplicationClientProgressTracker and corresponding rest handlers
Pass clientInfo string in sync replication
Pass clientInfo in addFollower, too
Updated docu
Renamed UrlBuilder to UrlHelper
Updated docu
Try to fix compile error on Windows
Fixed a bug and a test
* Implemented @jsteeman's comments
* potential bugfix for planning/#2865
* speed up dump test setup
* enable authentication for backup tests
* make arangodump provide a "serverId" to the server
this allows the server to track arangodump as an active dump client
so any data required for the dumping may be retained while the
dump is ongoing
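A minimal sketch of that tracking idea, assuming the tracker keeps a map from server id to an expiry entry plus the free-form clientInfo string; this is illustrative, not ArangoDB's actual ReplicationClientProgressTracker:

```cpp
// Illustrative tracker: remembers active dump clients by serverId so the
// server can retain data needed by an ongoing dump until the TTL expires.
#include <chrono>
#include <cstdint>
#include <iterator>
#include <mutex>
#include <string>
#include <unordered_map>

class ClientProgressTracker {
 public:
  // called on every request that carries a serverId (e.g. from arangodump)
  void track(uint64_t serverId, std::string const& clientInfo,
             std::chrono::seconds ttl) {
    std::lock_guard<std::mutex> guard(_lock);
    auto& entry = _clients[serverId];
    entry.clientInfo = clientInfo;
    entry.expires = std::chrono::steady_clock::now() + ttl;
  }

  // periodically drop clients whose TTL has expired
  void removeExpired() {
    std::lock_guard<std::mutex> guard(_lock);
    auto now = std::chrono::steady_clock::now();
    for (auto it = _clients.begin(); it != _clients.end();) {
      it = (it->second.expires < now) ? _clients.erase(it) : std::next(it);
    }
  }

 private:
  struct Entry {
    std::string clientInfo;  // free-form description, see the clientInfo commits above
    std::chrono::steady_clock::time_point expires;
  };
  std::mutex _lock;
  std::unordered_map<uint64_t, Entry> _clients;
};
```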
* don't log binary stuff into the logfile
* fix some deadlocks found by evil lock manager (tm)
* fix duplicate lock
* fix indentation
* ensure proper lock dependencies
* fix lock acquisition
* removed useless comment
* do not lock twice
* create either a V8 transaction context or a standalone transaction context, depending on whether we are called from within V8 or not
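A self-contained sketch of that dispatch; the stand-in context types here are assumptions, and the real ArangoDB transaction context classes differ in detail:

```cpp
// If the current thread runs inside a V8 isolate, use a V8-aware transaction
// context; otherwise fall back to a standalone one.
#include <memory>
#include <v8.h>

struct TransactionContext { virtual ~TransactionContext() = default; };
struct V8TransactionContext : TransactionContext {};          // tied to the V8 thread
struct StandaloneTransactionContext : TransactionContext {};  // independent of V8

std::shared_ptr<TransactionContext> createTransactionContext() {
  // v8::Isolate::GetCurrent() is non-null only on threads that have entered
  // a V8 isolate, i.e. when we were called from within V8/JavaScript
  if (v8::Isolate::GetCurrent() != nullptr) {
    return std::make_shared<V8TransactionContext>();
  }
  return std::make_shared<StandaloneTransactionContext>();
}
```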
* AQL micro optimizations
* use explicit constructor
* only use V8DealerFeature's ConditionLocker for acquiring a free V8 context
entering and exiting the selected context is then done later on without having to hold the ConditionLocker
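A sketch of the narrowed lock scope (illustrative names, not V8DealerFeature's real interface): the condition-variable lock is held only while selecting a free context, and entering the context happens afterwards without it.

```cpp
#include <condition_variable>
#include <mutex>
#include <vector>

struct V8Context { /* wraps a v8::Isolate + context */ };

class V8Dealer {
 public:
  V8Context* acquireContext() {
    V8Context* ctx = nullptr;
    {
      std::unique_lock<std::mutex> guard(_lock);  // the "ConditionLocker"
      _condition.wait(guard, [this] { return !_free.empty(); });
      ctx = _free.back();
      _free.pop_back();
    }                   // lock released here
    enterContext(ctx);  // potentially slow; done without holding the lock
    return ctx;
  }

 private:
  void enterContext(V8Context*) { /* isolate->Enter(), context scope, ... */ }
  std::mutex _lock;
  std::condition_variable _condition;
  std::vector<V8Context*> _free;
};
```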
* remove some recursive locks
* Disable custom deadlock detection when Thread Sanitizer is enabled
* Changing ifdefs
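The usual detection idiom looks roughly like this: GCC defines `__SANITIZE_THREAD__`, while Clang provides `__has_feature(thread_sanitizer)`, and guarding `__has_feature` is exactly the kind of ifdef reshuffling needed to keep GCC (which lacks it) compiling. The macro names below are illustrative.

```cpp
#if defined(__SANITIZE_THREAD__)
// GCC's way of signalling -fsanitize=thread
#define USING_THREAD_SANITIZER 1
#elif defined(__has_feature)
#if __has_feature(thread_sanitizer)
// Clang's way of signalling -fsanitize=thread
#define USING_THREAD_SANITIZER 1
#endif
#endif

#ifndef USING_THREAD_SANITIZER
// only enable the custom deadlock detector when TSan is not active
#define USE_CUSTOM_DEADLOCK_DETECTION 1
#endif
```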
* grr
* broke gcc
* Using atomic for ApplicationServer::_server
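Roughly what that change amounts to; the accessor names are assumptions:

```cpp
// Make the global server pointer a std::atomic so concurrent readers and
// writers are race-free (which also keeps TSan quiet about this global).
#include <atomic>

class ApplicationServer {
 public:
  static ApplicationServer* server() {
    return _server.load(std::memory_order_acquire);
  }
  static void setServer(ApplicationServer* server) {
    _server.store(server, std::memory_order_release);
  }

 private:
  static std::atomic<ApplicationServer*> _server;
};

std::atomic<ApplicationServer*> ApplicationServer::_server{nullptr};
```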
* fix premature unlock
* add some asserts
* honor collection locking in cluster
* yet one more lock fix
* removed assertion
* some more bugfixes
* Fixing assert
(cherry picked from commit 1155df173bfb67303077fbe04ee8d909517bfd21)
If enabled, the server may dynamically adapt the size of the response, in order to ensure that
HTTP responses do not get out of hand size-wise. This is a new feature in devel, and this
commit now makes it optional so that older clients do not need to be changed.
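The size adaptation could work along these lines on the server side, bounding a batch by bytes instead of only by document count; this is a sketch with illustrative names, and the client-side opt-in is not shown:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Build a dump batch, stopping once the byte budget would be exceeded.
// The client continues from `position` with its next request.
std::string buildBatch(std::vector<std::string> const& docs, size_t& position,
                       size_t maxBytes) {
  std::string result;
  while (position < docs.size()) {
    std::string const& doc = docs[position];
    if (!result.empty() && result.size() + doc.size() > maxBytes) {
      break;  // response would get too large; defer the rest
    }
    result.append(doc);
    result.push_back('\n');
    ++position;
  }
  return result;
}
```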
* Make isRestore work in the cluster.
This covers sharded collections with default sharding and non-default
sharding.
* always use locally generated revision ids for storing and looking up documents
* do not use V8 variant of AQL functions in early optimization stage when a C++ variant is available
* additionally, simplify AQL function definitions and aliases
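Conceptually, each AQL function entry then only needs an optional native implementation, and the early optimizer checks for it before evaluating; a sketch with illustrative names:

```cpp
#include <functional>
#include <string>
#include <vector>

struct Value {};  // stand-in for an AQL value

struct AqlFunction {
  std::string name;
  // native implementation; empty for JS-only functions
  std::function<Value(std::vector<Value> const&)> cxxImplementation;

  bool canRunInEarlyOptimization() const {
    // V8 is unavailable (and expensive) during early optimization,
    // so only functions with a C++ variant qualify for constant folding
    return static_cast<bool>(cxxImplementation);
  }
};
```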
* warn when more than 90% of max mappings are in use
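On Linux, such a check can compare /proc/self/maps against /proc/sys/vm/max_map_count; a minimal sketch (the actual implementation may differ):

```cpp
#include <fstream>
#include <iostream>
#include <string>

int main() {
  // kernel limit on the number of memory mappings per process
  long maxMappings = 0;
  std::ifstream("/proc/sys/vm/max_map_count") >> maxMappings;

  // count the mappings currently in use by this process
  long current = 0;
  std::ifstream maps("/proc/self/maps");
  for (std::string line; std::getline(maps, line);) {
    ++current;
  }

  if (maxMappings > 0 && current > (maxMappings * 9) / 10) {
    std::cerr << "WARNING: " << current << " of " << maxMappings
              << " memory mappings in use\n";
  }
}
```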
* added C++ variant of replication catchup
* added `--log.role` option
* updated CHANGELOG
* removed non-existent scheduler.threads option from config
* removed useless __FILE__, __LINE__ invocations
* updated CHANGELOG
* allow a priority V8 context
* remove TRI_CORE_MEM_ZONE
* try to fix Windows errors & warnings
* cleanup
* removed memory zones altogether
* exclude system collections from collection tests
* Count as checksum
* Make readLockId optional as well so upgrades are possible
* fix option name in startup script
* fix some replication issues with RocksDB engine
* Added a backup test suite. This suite is supposed to entirely drop an ArangoDB _system database and restore it into a fresh one. This also includes system collections.
* Added more test cases for the backup suite. It now tests several authorization/rights scenarios.
* Fixed the RestReplication handlers to restore the _users collection properly.
* Updated Changelog
* Added special handling of _users in Restore for MMFiles as well.
* Added JWT secret for cluster execution of this test, also added JWT secret to shutdown call
* Take out 503 timeouts altogether.
* Overhaul of AgencyComm::sendWithFailover loop.
* Let performRequests optionally ignore 404 (collection not found).
* Fix error message "database not found" when AgencyComm failed.
* Add log entries in Agency if locks are acquired too slowly.
* Re-execute the JavaScript cluster sync even if there was no Plan/Current change, so that failed sync jobs can retry later.
* Cover callbacks in Communicator by lock. This fixes https://github.com/arangodb/planning/issues/370
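The pattern behind such a fix: all accesses to the shared callback table go through one mutex, and the callback itself runs outside the lock; names here are illustrative, not Communicator's actual interface.

```cpp
#include <cstdint>
#include <functional>
#include <mutex>
#include <unordered_map>

class Communicator {
 public:
  void addCallback(uint64_t ticket, std::function<void()> cb) {
    std::lock_guard<std::mutex> guard(_callbacksLock);
    _callbacks.emplace(ticket, std::move(cb));
  }

  void complete(uint64_t ticket) {
    std::function<void()> cb;
    {
      std::lock_guard<std::mutex> guard(_callbacksLock);
      auto it = _callbacks.find(ticket);
      if (it == _callbacks.end()) {
        return;
      }
      cb = std::move(it->second);
      _callbacks.erase(it);
    }
    cb();  // run outside the lock to avoid deadlocks / re-entrancy issues
  }

 private:
  std::mutex _callbacksLock;
  std::unordered_map<uint64_t, std::function<void()>> _callbacks;
};
```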
* Put in a delay when waiting for the leader in the agency test.
* Schmutz logging to heartbeat topic.
* Add more lock time diagnostic in agent.
* Switch on AgencyComm tracing in the coordinator.