* Added feature phases
* Added BasicsPhase and DatabasePhase to the required features. The server now has feature cycles and does not boot; this will be sorted out later.
* Added ClusterPhase to features
* Added V8Phase to the required features
* Added AQLPhase to the affected features
* Added ServerPhase to Features
* Added FoxxPhase to the relevant features
* Added AgencyPhase to the relevant features
* Moved the registration of the local variable SYS_SYSTEM_REPLICATION_FACTOR from cluster to V8, as their ordering is now reversed
* Moved the Bootstrap feature into FoxxPhase. It could easily be moved to ServerPhase if the FoxxQueue dependency were removed
* Final rearrangement of startup phases. All cycles are now resolved.
* Resolved merge conflict
* Moved ReplicationTimeout into cluster phase and fixed cross-phase requirements
* Added GreetingsPhase. This phase is split off from the BasicsPhase and is the first to run. It includes the Logger and the Hello/Goodbye messages
* Added the GreetingsPhase to the corresponding features. All BasicsPhase features now start after the GreetingsPhase. There is an issue in this branch that currently prevents the Agency from gossiping; it will be fixed next
* Moved creation of the Agent into the feature's prepare phase. This guarantees that the Agent exists before the GeneralServer activates its endpoints
* Recovery needs to be started after the ServerID is available
* Moved log output of FeaturePhases to DEBUG instead of ERROR.
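The resulting phase mechanism is essentially a dependency graph: every feature declares what it starts after, and the server orders features topologically before booting, failing on a cycle. Below is a minimal, self-contained sketch of that idea; the `Feature`/`startsAfter`/`orderFeatures` names and the exact phase order shown are illustrative assumptions, not ArangoDB's real ApplicationFeatures API.

```cpp
// Minimal sketch of phase-ordered feature startup (illustrative only).
#include <cstdio>
#include <functional>
#include <map>
#include <set>
#include <stdexcept>
#include <string>
#include <vector>

struct Feature {
  std::string name;
  std::vector<std::string> startsAfter;  // phases/features that must start first
};

// Depth-first topological sort; throws on a dependency cycle, which is
// the "feature cycles prevent the server from booting" situation above.
std::vector<std::string> orderFeatures(std::map<std::string, Feature> const& all) {
  std::vector<std::string> order;
  std::set<std::string> done;
  std::set<std::string> inProgress;
  std::function<void(std::string const&)> visit = [&](std::string const& n) {
    if (done.count(n) > 0) return;
    if (!inProgress.insert(n).second) {
      throw std::runtime_error("startup cycle detected at " + n);
    }
    for (auto const& dep : all.at(n).startsAfter) visit(dep);
    inProgress.erase(n);
    done.insert(n);
    order.push_back(n);
  };
  for (auto const& entry : all) visit(entry.first);
  return order;
}

int main() {
  std::map<std::string, Feature> features = {
      {"GreetingsPhase", {"GreetingsPhase", {}}},
      {"BasicsPhase", {"BasicsPhase", {"GreetingsPhase"}}},
      {"DatabasePhase", {"DatabasePhase", {"BasicsPhase"}}},
      {"ClusterPhase", {"ClusterPhase", {"DatabasePhase"}}},
      {"V8Phase", {"V8Phase", {"ClusterPhase"}}},   // V8 now ordered after cluster
      {"FoxxPhase", {"FoxxPhase", {"V8Phase"}}},
      {"Bootstrap", {"Bootstrap", {"FoxxPhase"}}},  // Bootstrap lives in FoxxPhase
  };
  for (auto const& name : orderFeatures(features)) {
    std::printf("%s\n", name.c_str());
  }
  return 0;
}
```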
* Added feature phases for clients
* ClusterFeature no longer directly requires AgencyFeature
* Added a requirement on TravEngineRegistryFeature to the AQL feature; otherwise the shutdown order may be undefined
* The ApplicationServer can now hand out the list of ordered features. Used for testing purposes
* Fixed IResearchView test setup to honor the new feature ordering
* Fixed IResearchViewDBServer test setup to honor the new feature ordering
* Started fixing IResearchView Coordinator tests for the new startup ordering; not finished yet
* Added startup phases to ViewCoordinator test
* Disabled expected log output in ClusterRepairsTest
* Fixed indentation in test code
* LinkCoordinator now honors startup ordering
* Link meta now honors startup ordering
* Suppress expected cluster logs in ViewTest
* Removed a '#' that was accidentally added
* Changes since last PR: use LogicalDataSource for Methods::StateRegistrationCallback instead of TRI_voc_cid_t to avoid unnecessary lookups
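The point of passing a LogicalDataSource instead of a TRI_voc_cid_t is that the callback no longer has to resolve the id back into an object on every invocation. A hypothetical before/after sketch; the registry, callback names, and member layout are made up for illustration and are not the real ArangoDB signatures.

```cpp
#include <cstdint>
#include <map>
#include <string>

using TRI_voc_cid_t = std::uint64_t;  // collection/data-source id

struct LogicalDataSource {
  TRI_voc_cid_t id;
  std::string name;
};

// Hypothetical registry, only for this illustration.
static std::map<TRI_voc_cid_t, LogicalDataSource> registry = {
    {1, {1, "users"}}, {2, {2, "orders"}}};

// Before: the callback receives only an id, so every invocation must
// resolve the id back to the object it was registered for.
void onStateChangeById(TRI_voc_cid_t cid) {
  auto it = registry.find(cid);  // unnecessary lookup per invocation
  if (it == registry.end()) return;
  // ... use it->second ...
}

// After: the callback receives the already-resolved object directly.
void onStateChangeByRef(LogicalDataSource& ds) {
  // ... use ds, no lookup needed ...
}

int main() {
  onStateChangeById(1);
  onStateChangeByRef(registry.at(2));
  return 0;
}
```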
* backport: address cluster LogicalDataSource resolution failure
* remove some now-unused V8 persistents
* do not trigger so many bogus assertions
* do not rely on server role being defined
* slightly better debug output for V8 context debugging
* fix collection ids in inventory response
* simplify bootstrap a bit
* slightly better error handling
* make elapsed time a queryable value
* use less memory for stub collections
* added assertions that will always make sense
* added assertions
* do not garbage-collect while waiting
* less copying of parameters
* do not show "load indexes into memory" buttons for mmfiles engine
as all indexes are in memory anyway
* when a collection is truncated via the web interface, flush the WAL and rotate all active journals
this will close all open journals on the leader and followers and make them eligible for compaction
* fix invalid server id values being passed from web interface to backend
* introduce afterTruncate method for indexes
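The idea behind the hook: after a full truncate, an index can discard its data wholesale instead of removing entries document by document. A minimal sketch under a simplified `Index` interface; this is an assumption for illustration, not the real ArangoDB class.

```cpp
#include <memory>
#include <vector>

// Simplified stand-in for an index interface; the real ArangoDB Index
// class is considerably larger.
class Index {
 public:
  virtual ~Index() = default;
  // Called once after the whole collection was truncated, so the index
  // can discard its data in one cheap operation instead of performing
  // per-document removals.
  virtual void afterTruncate() = 0;
};

class HashIndexSketch : public Index {
 public:
  void afterTruncate() override {
    buckets_.clear();  // drop everything at once
  }

 private:
  std::vector<int> buckets_;  // placeholder for the real bucket storage
};

// Collection-side truncate: wipe the documents, then notify every index.
void truncateCollection(std::vector<std::unique_ptr<Index>>& indexes) {
  // ... remove all documents, flush the WAL, rotate journals ...
  for (auto& idx : indexes) {
    idx->afterTruncate();
  }
}

int main() {
  std::vector<std::unique_ptr<Index>> indexes;
  indexes.emplace_back(std::make_unique<HashIndexSketch>());
  truncateCollection(indexes);
  return 0;
}
```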
* added test case for issue #3447
* updated CHANGELOG
* don't warn about replicationFactor for system collections
* check that the queries actually use the geo index and not some other index
* properly report error in web interface
* fix some internal checks that made truncate fail for bigger collections in maintainer mode
* also run a compact() operation after a serious truncate
in order to make iteration over the truncated range much faster
when the collection is next accessed
* increase default maximum number of V8 contexts to at least 16
* Make isRestore work in the cluster.
This covers sharded collections with default and non-default sharding.
* always use locally generated revision ids for storing and looking up documents
* make the different values influencing the compaction run configurable
* Compaction statistics handling
- we must not keep the number of dead objects on the compacted datafile's statistics, else it would be compacted again
- keep statistics of the compaction runs on the DatafileStatistics object
- add the new statistics on DatafileStatistics to the figures API
- implement a test that ensures only one compaction is run and that the statistics values are maintained
* don't mention the version number
* Implement review
- fix documentation
- allow 0 maxfiles to enable users to disable combining of datafiles
- add statistic element that counts the number of combined datafiles
* Implement review
- fix documentation
- use locks to make statistic values consistent.
- fix typo in variable name
* remove an unnecessary temporary variable
* update CHANGELOG
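A small sketch of the locking approach from the review items above: the compaction counters live on a single statistics object and are read and updated only under a mutex, so readers (e.g. the figures API) always see a consistent pair of values. Names are illustrative, not the actual DatafileStatistics layout.

```cpp
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <utility>

// Illustrative statistics holder; the real DatafileStatistics object
// tracks more values than shown here.
class DatafileStatisticsSketch {
 public:
  void compactionFinished(std::uint64_t combinedFiles) {
    std::lock_guard<std::mutex> guard(mutex_);
    ++compactionRuns_;
    combinedDatafiles_ += combinedFiles;
  }

  // Read both counters under the same lock so the pair is consistent.
  std::pair<std::uint64_t, std::uint64_t> figures() const {
    std::lock_guard<std::mutex> guard(mutex_);
    return {compactionRuns_, combinedDatafiles_};
  }

 private:
  mutable std::mutex mutex_;
  std::uint64_t compactionRuns_ = 0;
  std::uint64_t combinedDatafiles_ = 0;
};

int main() {
  DatafileStatisticsSketch stats;
  stats.compactionFinished(3);
  auto f = stats.figures();
  std::printf("runs=%llu combined=%llu\n",
              static_cast<unsigned long long>(f.first),
              static_cast<unsigned long long>(f.second));
  return 0;
}
```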
* add "cluster selectivity estimates" to CHANGELOG
* add some documentation to RocksDBRestReplicationHandler
* fix building with relative paths
* add some more doc
* add some tests for the replication api
* fix RocksDBRestReplicationHandler and add tests
* update documentation
* remove obsolete parameter
* fix error message
* Implementing logger-first-tick and logger-tick-ranges. Fixing the dump `chunkSize` documentation
* we must now ignore that datafiles are not sealed
this is because an unsealed datafile may have been produced by
renaming multiple journals to datafiles at server start
* acquire collection count after we have acquired the lock
* count the null byte as well
* fix count value acquisition
* send query fragments to the correct servers, even after failover or when a follower drops
the problem with using the previous shard-based approach is that responsibilities for shards may change at runtime
however, an AQL query must send all requests for the query to the initially used servers.
if there is a failover while the query is executing, we must still send all following requests to the same servers, and not the newly responsible servers
otherwise we would potentially try to get data for a query from server B while the query was only instantiated on server A.
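Put differently, the shard-to-server responsibility must be resolved once at query instantiation and then frozen for the query's lifetime. A condensed sketch of that pinning idea; the class and function names here are invented for illustration.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

using ShardId = std::string;
using ServerId = std::string;

// Hypothetical cluster-info lookup that resolves the *current* server
// responsible for a shard. Responsibility can change at runtime
// (failover, follower drop), which is why we must not call this once
// per request.
ServerId currentResponsibleServer(ShardId const& shard) {
  // stand-in: a real cluster would consult the agency/cluster info here
  return "server-for-" + shard;
}

class QuerySnippetRouter {
 public:
  // Resolve responsibilities once, when the query is instantiated.
  explicit QuerySnippetRouter(std::vector<ShardId> const& shards) {
    for (auto const& s : shards) {
      pinned_[s] = currentResponsibleServer(s);
    }
  }

  // All later requests for this query go to the pinned server, even if
  // the cluster meanwhile made another server responsible for the shard.
  ServerId const& serverFor(ShardId const& shard) const {
    return pinned_.at(shard);
  }

 private:
  std::map<ShardId, ServerId> pinned_;
};

int main() {
  QuerySnippetRouter router({"s1", "s2"});
  // even after a failover, requests for s1 still go to the pinned server
  std::printf("%s\n", router.serverFor("s1").c_str());
  return 0;
}
```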
* The database feature now comes first. This ensures that the MMFiles collector thread (owned by the MMFiles logfile manager) can always access the list of databases, and that this list is not destroyed while the collector thread is still running.
* Added recovery tests for views and fixed a few related bugs.
* Added more view recovery tests.
* Modified view recovery tests to perform a waitForSync operation afterwards.
* fixed usage of wrong view type
* fixed recovery of view change markers