running compact() in the same transaction will only increase the data size on disk, as
RocksDB cannot physically remove any documents because of the snapshot that is
taken at transaction start.
This change also exposes db.<collection>.compact() in arangosh, so that a compaction
of a collection's data range can be run manually should it be needed for maintenance.
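For example, a manual compaction run from arangosh could look like the following sketch (the collection name "example" is made up for illustration):

    // arangosh sketch: reclaim disk space after removing many documents
    db._create("example");          // hypothetical collection
    // ... insert and later remove a large number of documents ...
    db.example.compact();           // compact this collection's data range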
* Fix resign order
* Fixed a typo
* Get followers later, add TODOs
* Added a callback parameter to collection insert methods
* Get followers under the lock if necessary
* Extracted the replication of inserts into a separate method
* Move shortcut into replicate method
* Added callbacks for remove, replace and update
* Added missing overrides
* Extracted replication code from modifyLocal and removeLocal
* Update followers under lock also during replace, update, remove
* Fix changes from the last commit for update/replace
* Update comments, add asserts
* Remove changes for document-level locks that will be done in another PR
* Unify replication
* Adapt log messages to the devel ones
* Move common methods from descendant classes up into TransactionCollection, fix Mock on the way
* More IResearch test / mock fixes
* Relax asserts for nested transactions
* Reformat
* Fix non-babies remove and modify replication
* integrate ArangoSearch into GatherNode/GatherBlock logic
* fix inconsistency after merge
* add serialization/deserialization logic for IResearchViewNode sorting
* fix optimizer rule order
* fix copy/paste typo
* fix tests
* ensure `GatherNode` is properly set up for ArangoSearch views
* adjust optimizer rule order
* issue 373.2: move toVelocyPack into LogicalDataSource
* backport: move static DataSource related strings into StaticStrings, add support for registering snapshots on DBServer views
* Added a more sophisticated test for shardDistribution format
* Updated shard distribution test to use request instead of download
* Added a cxx reporter for the shard distribution. WIP
* Added some virtual functions/classes for Mocking
* Added a unittest for the new CXX ShardDistribution Reporter.
* The ShardDistributionReporter now reports Plan and Current correctly. However, it does not yet attempt to compute a good total/current value and just returns a default, hence these tests are still red
* Shard distribution now uses the cxx variant
* The ShardDistribution reporter now tries to execute count on the shards
* Updated changelog
* Added error case tests. If the servers time out, the mechanism will give up after two seconds and just report default values.
* remove some now-unused V8 persistents
* do not throw that many bogus assertions
* do not rely on server role being defined
* slightly better debug output for V8 context debugging
* fix collection ids in inventory response
* simplify bootstrap a bit
* slightly better error handling
* make elapsed time a queryable value
* use less memory for stub collections
* added assertions that will always make sense
* added assertions
* do not garbage-collect while waiting
* less copying of parameters
* do not show "load indexes into memory" buttons for the MMFiles engine,
as all indexes are in memory anyway
* when a collection is truncated via the web interface, flush the WAL and rotate all active journals
this will close all open journals on leader and followers and make them eligible for compaction
* fix invalid server id values being passed from web interface to backend
* introduce afterTruncate method for indexes
* added test case for issue #3447
* updated CHANGELOG
* don't warn about replicationFactor for system collections
* check that the queries actually use the geo index and not some other index
* properly report error in web interface
* fix some internal checks that made truncate fail for bigger collections in maintainer mode
* also run a compact() operation after a larger truncate operation,
in order to make iteration over the truncated range much faster
when the collection is next accessed (see the sketch below)
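The intended effect, sketched in arangosh (again with a made-up collection name):

    // arangosh sketch: after this change, truncating a big collection also
    // triggers a compaction, so later scans do not have to step over the
    // tombstones left behind by the removed documents
    db.example.truncate();
    db.example.count();             // iterating the now-empty range is fast again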
* increase default maximum number of V8 contexts to at least 16
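The limit can still be set explicitly via the existing --javascript.v8-contexts startup option; a minimal configuration sketch (the value shown is illustrative):

    # arangod.conf sketch: pin the maximum number of V8 contexts
    [javascript]
    v8-contexts = 32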