* Make isRestore work in the cluster.
This covers sharded collections with default sharding and non-default
sharding.
* always use locally generated revision ids for storing and looking up documents
* make the various values that influence the compaction run configurable
* Compaction statistics handling
- do not carry the number of dead objects over into the compacted datafile's statistics, otherwise it would be compacted again
- keep statistics of the compaction runs on the DatafileStatistics object
- add the new statistics on DatafileStatistics to the figures api
- implement a test that ensures only one compaction is run and that the statistic values are maintained (a sketch of the statistics idea follows below)
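A minimal sketch of the compaction-statistics idea described above, assuming a hypothetical `DatafileStatistics` layout (member names, locking, and update logic here are illustrative assumptions, not the actual ArangoDB implementation):

```cpp
#include <cstdint>
#include <mutex>

// Hypothetical per-collection statistics object (names are assumptions).
struct DatafileStatistics {
  std::mutex lock;              // keeps concurrent updates consistent
  uint64_t numberDead = 0;      // dead objects currently present in datafiles
  uint64_t compactionCount = 0; // number of compaction runs performed
  uint64_t filesCombined = 0;   // number of datafiles combined by those runs

  // Called after a compaction run has rewritten a set of datafiles.
  void noteCompaction(uint64_t combinedFiles, uint64_t removedDead) {
    std::lock_guard<std::mutex> guard(lock);
    ++compactionCount;
    filesCombined += combinedFiles;
    // Key point from the list above: the removed dead objects must not be
    // attributed to the freshly compacted datafile again, otherwise the
    // compactor would select it for compaction once more.
    numberDead -= removedDead;
  }
};
```

Counters such as `compactionCount` and `filesCombined` would then be exposed alongside the existing values in the figures API.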
* don't mention the version number
* Implement review
- fix documentation
- allow a maxfiles value of 0 so users can disable the combining of datafiles
- add statistic element that counts the number of combined datafiles
* Implement review
- fix documentation
- use locks to make statistic values consistent.
- fix typo in variable name
* remove an unnecessary temporary variable
* update CHANGELOG
* add "cluster selectivity estimates" to CHANGELOG
* add some documentation to RocksDBRestReplicationHandler
* fix building with relative paths
* add some more documentation
* add some tests for the replication api
* fix RocksDBRestReplicationHandler and add tests
* update documentation
* remove obsolete parameter
* fix error message
* Implement logger-first-tick and logger-tick-ranges; fix the dump `chunkSize` documentation
* we must now accept that datafiles may not be sealed:
an unsealed datafile may have been produced by
renaming multiple journals to datafiles at server start (a sketch follows below)
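A hedged sketch of the relaxed check; the types and names below are invented purely for illustration:

```cpp
// Hypothetical datafile descriptor; only the sealed flag matters here.
struct Datafile {
  bool sealed;
};

// During the startup scan, an unsealed datafile is no longer an error,
// because several journals may have been renamed to datafiles at server
// start without being sealed first.
bool datafileAcceptable(Datafile const& df, bool duringStartup) {
  if (df.sealed) {
    return true;
  }
  return duringStartup;  // tolerate unsealed datafiles only at startup
}
```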
* acquire collection count after we have acquired the lock
* count the null byte as well
* fix count value acquisition
* send query fragments to the correct servers, even after a failover or when a follower drops
the problem with the previous shard-based approach was that responsibility for shards can change at runtime;
however, an AQL query must send all of its requests to the servers it was initially instantiated on.
if a failover happens while the query is executing, all subsequent requests must still go to those same servers, and not to the newly responsible servers;
otherwise we could end up requesting query data from server B although the query was only
instantiated on server A (a sketch of the approach follows below)
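A sketch of that idea with illustrative types and names (not the actual AQL engine code): the shard-to-server mapping is resolved once when the query is instantiated and is then reused for every later request of the same query, regardless of failovers.

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical per-query snapshot of shard responsibilities. It is filled
// once at query instantiation and never refreshed, so a failover during
// execution cannot redirect requests to a server on which the query was
// never instantiated.
class QueryServerSnapshot {
 public:
  // remember the server on which a shard's query fragment was instantiated
  void registerShard(std::string const& shard, std::string const& server) {
    _serverByShard.emplace(shard, server);
  }

  // all later requests of this query must go to the recorded server,
  // not to the server that is currently responsible for the shard
  std::string const& serverForShard(std::string const& shard) const {
    auto it = _serverByShard.find(shard);
    if (it == _serverByShard.end()) {
      throw std::runtime_error("query was not instantiated for this shard");
    }
    return it->second;
  }

 private:
  std::map<std::string, std::string> _serverByShard;
};
```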
* the database feature now comes first. this ensures that the mmfiles collector thread
(owned by the mmfiles logfile manager) can always access the list of databases,
and that this list is not destroyed while the collector thread is still running (see the sketch below)
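A generic sketch of why the ordering matters (this is not the actual ArangoDB feature framework, just an illustration of start/stop ordering): features are started in dependency order and stopped in reverse, so the database list is created before the collector thread starts and is torn down only after that thread has stopped.

```cpp
#include <memory>
#include <vector>

// Illustrative feature interface; the real framework differs.
struct Feature {
  virtual ~Feature() = default;
  virtual void start() = 0;
  virtual void stop() = 0;
};

void runFeatures(std::vector<std::unique_ptr<Feature>>& features) {
  // the database feature is placed before the logfile manager in `features`
  for (auto& feature : features) {
    feature->start();
  }

  // ... server runs; the collector thread may access the database list ...

  // stop in reverse order: the collector thread is shut down before the
  // database list is destroyed
  for (auto it = features.rbegin(); it != features.rend(); ++it) {
    (*it)->stop();
  }
}
```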
* Added recovery tests for views and fixed a few related bugs.
* Added more view recovery tests.
* Modified view recovery tests to add a waitForSync operation afterwards.
* fixed usage of wrong view type
* fixed recovery of view change markers
* 'devel' of https://github.com/arangodb/arangodb:
Fixing index markers
exclusive locks for indexes
better incremental sync
grunt build
Avoid log spam.
Removed code paths that wrote objectIds into the Agency, as those had broken replication.
WAL: honor tick end value
WAL filter after collection
Reactivated client-side filtering of unnecessary markers
Added an assertion that an index's objectId must not be 0 when persisting it
The RestReplicationHandler now inserts new ObjectIds when replicating collections
moved files
add dependencies for TransactionManager
fix include
move engine-specific test into engine test