* Improve logging on coordinator when doing `arangorestore`.
* Return more error information in `mergeResults`.
* Longer timeout for coordinator -> leader communication for writes.
This takes into account possible write stops required for followers to get in sync.
* Fix compilation.
* Get rid of numbers in exception log messages.
* Fix compilation.
* Fix indentation.
* Feature/arangosearch speedup removals (#7134)
* speed up document removals and optimize data model
* fix invalid constexpr
* reduce number of heap allocations for removals (#7157)
* open up connect limit to allow one DNS retry
* add CHANGELOG entry for dns retry fix
* update warning message wording
* Add lock line that was missed in recent scheduler merge
* backport: switch scope of responsibility between a TRI_vocbase_t and a LogicalView with respect to view creation/deletion
* backport: ensure arangosearch links get exported in the dump
* backport: ensure view is created during restore on the coordinator
* Updates for ArangoSearch DDL tests, IResearchView unregistration and known issues
* Add fix for internal issue 483
* Stop libcurl from trying to POST stdin
* Stop relocking every iteration in wait
* Remove unimplemented function
* Restrict setting of empty POSTFIELDS to POST requests (see the libcurl sketch after this list)
* Revert locking change
* Implement `syncCollectionCatchup` in DatabaseTailingSyncer.
First stab, might not even compile.
* Fixed a typo.
* Fix a typo and a compilation problem.
* Further compilation fix.
* Implement two stage catchup.
* Two small corrections.
* Unified error messages in Synchronize shard job.
* Improved a code comment.
* Fixed autocasting bool->double and double->bool issue. That is truly one of the best features ever invented... </irony>
* Renamed doHardLock => toSoftLockOnly and inverted default value
* Merged soft/hard lock logic with Transaction splits
* Use scope guards to cancel read locks (see the scope-guard sketch after this list)
* Removed incorrect skipping of batches in the RocksDB tailing syncer. This caused issues whenever one transaction was split.
* Added a test for splitting a large transaction in RocksDB
* Reactivated skipping in RocksDB WAL tailing (reverts initial fix)
* Actually include lastScannedTick in CollectionFinalize. Proper fix, kudos to @jsteemann.
* Fixed healFollower task in split-large-transaction test
* First attempt to not block the thread that requires the EXCLUSIVE sync-up lock
* Fixed insertion of query into registry in rest replication handler.
* Removed unnecessary / false asserts as suggested in review. Fixed code comments.
* Replaced auto with a correct type as suggested in review
* Added a helper function to validate if a query is in use in the registry
* Fixed logic bug in usage of query registry
* Fixed compile issue
* Implemented optional 'doHardLock' parameter in the replication acquire read-lock call. A hard lock guarantees to stop all writes; a soft lock may not.
* Fixed compile issue
* Automatic transformation of int -> bool in initializer lists sucks... (see the conversion pitfall sketch after this list)
* Inverted boolean logic bug hidden due to int -> bool being logically inverted.
* Today it seems that bools are too complicated for my brain.
* Removed failure point; didn't write a test for it, and it is hard to write one in the current test environment. Need to find a better solution in the future.
* Applied changes required by @goedderz in review
* Start ClusterComm threads in `ClusterFeature::start`. Stop ClusterComm threads in `ClusterFeature::stop`.
* Do not free objects in `Scheduler::shutdown`. Let the `unique_ptr`s do their job. Stop ClusterComm threads in `ClusterFeature::stop`, but free the instance in `ClusterFeature::unprepare`.
* The `io_context` may contain lambdas that hold `shared_ptr`s to `Tasks` that require a functional `VocBase` in their destructor (see the shutdown-ordering sketch after this list).
* Clean up.
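
The empty-POSTFIELDS entries above relate to libcurl defaults: a POST with no body configured falls back to the read callback, which reads from stdin unless told otherwise. A minimal sketch of the workaround, assuming a bare libcurl easy handle; the URL is a placeholder, not an actual endpoint touched by the fix:

```cpp
#include <cstdio>

#include <curl/curl.h>

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* handle = curl_easy_init();
  if (handle != nullptr) {
    // Placeholder URL, for illustration only.
    curl_easy_setopt(handle, CURLOPT_URL, "http://localhost:8529/_api/version");
    curl_easy_setopt(handle, CURLOPT_POST, 1L);
    // Without a body, libcurl falls back to its read callback, which reads
    // from stdin by default. An explicit empty POSTFIELDS prevents that, but
    // since CURLOPT_POSTFIELDS implies the POST method, it must only be set
    // for requests that are POSTs anyway.
    curl_easy_setopt(handle, CURLOPT_POSTFIELDS, "");
    CURLcode res = curl_easy_perform(handle);
    if (res != CURLE_OK) {
      std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
    }
    curl_easy_cleanup(handle);
  }
  curl_global_cleanup();
  return 0;
}
```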
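
The scope-guard entry above refers to the usual RAII pattern: register the cancellation right after acquiring the read lock so it runs on every exit path, including exceptions. A minimal sketch under that assumption; `acquireReadLock`/`cancelReadLock` are hypothetical stand-ins, not the replication API:

```cpp
#include <cstdio>
#include <stdexcept>
#include <utility>

// Generic scope guard: runs a callback on every exit path (return or throw).
template <typename F>
class ScopeGuard {
 public:
  explicit ScopeGuard(F func) : _func(std::move(func)) {}
  ~ScopeGuard() {
    if (_active) {
      _func();
    }
  }
  void dismiss() noexcept { _active = false; }  // keep the lock after all
  ScopeGuard(ScopeGuard const&) = delete;
  ScopeGuard& operator=(ScopeGuard const&) = delete;

 private:
  F _func;
  bool _active = true;
};

// Hypothetical stand-ins for the real lock calls, for illustration only.
void acquireReadLock() { std::puts("read lock acquired"); }
void cancelReadLock() { std::puts("read lock cancelled"); }

void syncShard(bool fail) {
  acquireReadLock();
  ScopeGuard guard([] { cancelReadLock(); });  // cancellation now guaranteed
  if (fail) {
    throw std::runtime_error("sync failed");  // guard still cancels the lock
  }
  // success path: guard cancels the lock when syncShard returns as well
}

int main() {
  try {
    syncShard(true);
  } catch (std::exception const& e) {
    std::printf("caught: %s\n", e.what());
  }
  syncShard(false);
  return 0;
}
```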
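
The int -> bool and bool <-> double entries above describe a general C++ pitfall: arithmetic types convert implicitly, so swapped or mistyped arguments still compile. A small illustration with a hypothetical `SyncJob` class, not the actual code touched by these commits:

```cpp
#include <iostream>

// Hypothetical job configuration class, for illustration only.
class SyncJob {
 public:
  // parameters: (softLockOnly, timeoutSeconds)
  SyncJob(bool softLockOnly, double timeoutSeconds)
      : _softLockOnly(softLockOnly), _timeout(timeoutSeconds) {}

  void print() const {
    std::cout << std::boolalpha << "softLockOnly=" << _softLockOnly
              << " timeout=" << _timeout << '\n';
  }

 private:
  bool _softLockOnly;
  double _timeout;
};

int main() {
  // Arguments accidentally swapped: 300.0 silently converts to `true` and
  // `false` silently converts to 0.0; the code compiles and the bug only
  // shows up at run time.
  SyncJob job(300.0, false);
  job.print();  // prints: softLockOnly=true timeout=0
  return 0;
}
```

Braced initialization would at least reject the double -> bool narrowing; strongly typed parameters or an explicit options struct avoid the silent swap entirely.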
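
The final entries concern shutdown ordering: handlers still queued in the scheduler's `io_context` can hold the last `shared_ptr` to a `Task`, and the `Task` destructor still needs a functional `VocBase`. A minimal sketch of that lifetime dependency, using a plain `std::queue` of handlers as a stand-in for the `io_context` and hypothetical `Task`/`VocBase` types:

```cpp
#include <functional>
#include <iostream>
#include <memory>
#include <queue>

// Hypothetical stand-ins for the real types, for illustration only.
struct VocBase {
  bool alive = true;
};

struct Task {
  explicit Task(VocBase* vb) : vocbase(vb) {}
  ~Task() {
    // The destructor touches the vocbase, so it must run while the vocbase
    // is still functional.
    std::cout << "Task destroyed, vocbase alive: " << std::boolalpha
              << vocbase->alive << '\n';
  }
  void run() { std::cout << "task running\n"; }
  VocBase* vocbase;
};

int main() {
  VocBase vocbase;

  // Models the scheduler's io_context: queued handlers capture shared_ptrs
  // and thereby keep Tasks alive until the handlers themselves are destroyed.
  std::queue<std::function<void()>> ioQueue;

  auto task = std::make_shared<Task>(&vocbase);
  ioQueue.push([task] { task->run(); });  // never executed in this sketch
  task.reset();  // the queued lambda still owns the Task

  // Correct shutdown order: drop the queued handlers (and with them the last
  // shared_ptrs to the Tasks) before tearing down the vocbase.
  while (!ioQueue.empty()) {
    ioQueue.pop();  // Task destructor runs here, vocbase is still alive
  }
  vocbase.alive = false;  // only now tear down the vocbase
  return 0;
}
```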