* Bug fix/issue #9612 (#9764)
* Fixed ViewExecutionNode retrieval with deleted documents present in view
* Ported solution from 3.4 branch
* Changed index store in collection from vector to set, so that reversible indexes are always last to execute
* Fixed re-enter hang
* Index storage fix
* Made index order deterministic
* Fix Mac build
* Added tests for index reversal
* Fixed Mac build
* Code cleanup
* Some cleanup
* Removed some redundant copy constructor calls
* Applied review comments
* Applied review comments
* Update CHANGELOG
* Update CHANGELOG
* make TTL indexes behave like other indexes on creation
If a TTL index is already present on a collection, the previous behavior
was to make subsequent calls to `ensureIndex` fail unconditionally with
the error "there can only be one ttl index per collection".
Now the attributes of the to-be-created index are compared with the
attributes of the existing TTL index, and the call only fails when those
attributes differ. If the attributes are identical, the `ensureIndex`
call succeeds and returns the existing index (see the example below).
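A minimal arangosh sketch of the new `ensureIndex` behavior (the collection and attribute names are made up for illustration):

```js
db._create("sessions");

// first call creates the TTL index
var idx1 = db.sessions.ensureIndex({ type: "ttl", fields: ["expiresAt"], expireAfter: 3600 });

// an identical call now succeeds and returns the existing index
// instead of failing with "there can only be one ttl index per collection"
var idx2 = db.sessions.ensureIndex({ type: "ttl", fields: ["expiresAt"], expireAfter: 3600 });
// idx1.id === idx2.id

// a call with different attributes (e.g. a different expireAfter) still fails
```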
* added tests
* updated CHANGELOG
* don't run compact() on a collection after a truncate() was done in the same transaction
Running compact() in the same transaction would only increase the data size on disk, because RocksDB cannot physically
remove any documents while the snapshot taken at transaction start is still in place.
Decoupling the truncate transaction from the compact operation allows finishing the truncate transaction first, so we can
get rid of the snapshot. Running compact afterwards is then free to physically remove all the data.
As a nice side effect, this change also speeds up the truncation of larger collections, because the compact will run
faster.
This change also exposes db.<collection>.compact() in arangosh, so a compaction of a collection's data range can be run
manually should it be needed for maintenance (see the example below).
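A short arangosh sketch of the newly exposed manual call (the collection name "demo" is made up for illustration):

```js
// truncate() now runs and commits in its own transaction first,
// which releases the snapshot taken at transaction start
db.demo.truncate();

// the compaction that follows can then physically remove the data on disk;
// it can also be invoked manually for maintenance
db.demo.compact();
```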
* fix documentation anchors
* Added some DEBUG output for replication rest handler
* Some more debug logging.
* Increased the priority of the ReplicationHandler so that we do not get stuck with locks that cannot be canceled. Also, the lock is now canceled on the correct database.
* Added extensive log output for replication things
* Added tombstones to RestReplicationHandler. In a very unlikely case the cancel of a lock can be executed BEFORE the code that actually registers the lock; in this case we now write a tombstone and do not take the lock (a conceptual sketch follows below).
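The idea behind the tombstone, as a minimal sketch (the function names and single-map bookkeeping below are hypothetical and only illustrate the ordering problem; they are not the actual RestReplicationHandler code):

```js
const locks = new Map(); // lock id -> "LOCKED" or "TOMBSTONE"

function cancelLock(id) {
  if (locks.get(id) === "LOCKED") {
    locks.delete(id);           // normal case: the lock exists and is released
  } else {
    locks.set(id, "TOMBSTONE"); // cancel arrived before register: leave a tombstone
  }
}

function registerLock(id) {
  if (locks.get(id) === "TOMBSTONE") {
    locks.delete(id);           // the lock was already cancelled, so do not take it
    return false;
  }
  locks.set(id, "LOCKED");
  return true;
}
```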
* Revert "Added extensive log output for replication thins"
This reverts commit 6d4e37ea1e59e3b3457336019cc7dbc4c979504d.
* Added extensive log output for replication things, now at ERR level instead of MAINTAINER only
* Now actually use hours for synchronization
* React to errors under soft lock if they show up.
* Added a retry loop to increase the read-lock timer.
* Added more timing output in RocksDB collection internals to figure out why the followers are dropped
* Tweaked RocksDB options
* Revert "Tweaked RocksDB options"
This reverts commit 2bf9c43280beda4792c47d079387fe5154cdd896.
* Removed debug output
* Applied all changes requested by goedderz
* Deleted unused variable