* spawn an arangosh that checks cluster nodes are responsive
* attempting to kill 0 may end badly for us, prohibit it
* don't trip over this optional stuff.
* fix killing of spectator
* only write one line per minute
* fix function name
* print hitlist of tests, add resource usage
* fix lint
* start refactoring result processing into its own library
* /proc reading only works on Linux
* new result processing library
* clean up test loading flow, remove unused blacklist implementation
* measure SUT start/stop + test time
* lint
* more non-test variables
* improve test runner naming
* finish gathering statistics about start/run/stop
* use internal stats code for external processes too
* fix status in sample testcase
* report that procdump is gone - it seems this happens in reality without coredumps being written
* refactor test result analyzing, add utility to work with analyzers of json dumps from CI systems
* fix testcase error handling
* also run watcher for agency
* lint
* fix default options for tests; add default options
* if arguments occur multiple times, update the value to an array
* start implementing some test analyzers
* fix color
* write json report unconditionally
* add analyzer that searches for long setup/teardown tests
* enable thread dump; log error if we fail to acquire the threads
* disable procdump for the agency
* output error if we fail to get process stats
* fix json invocation
* add debug logging
* only add buildType if that directory actually exists
* trap sleepers
* trap sleepers
* trap sleepers
* trap sleepers
* trap sleepers
* disable thread counting on Windows
* disable thread counting on Windows
* more measurements
* more measurements
* one more place
* one more place
* remove debugging code
* Update js/client/modules/@arangodb/process-utils.js
Co-Authored-By: Dan Larkin-York <danielhlarkin@users.noreply.github.com>
* Update js/client/modules/@arangodb/process-utils.js
Co-Authored-By: Dan Larkin-York <danielhlarkin@users.noreply.github.com>
* Apply suggestions from code review
Co-Authored-By: Dan Larkin-York <danielhlarkin@users.noreply.github.com>
* rename as suggested by @dan
* undo debug changes
* lint, make cluster health monitor optional by default
* fix spawning of active failover SUT, fix cluster health monitor shutdown
* use std::find_if (as @dan suggested), fix log ids
* fix scope of before-time
* port of feature-3.4/mv-gzip-export to devel branch
* add explicit namespaces so gcc 6.3.0 would successfully compile
* add conditional cleanup of ENCRYPTION file from another branch
* use _lseek() for Windows build to avoid deprecated warning.
* change from ifdef for lseek variants to TRI_LSEEK.
* force Windows lseek to return Linux expected type.
* Added a minReplicationFactor field in Collections. It is not possible to modify it yet and nothing uses it yet (a usage sketch follows at the end of this list)
* Added some assertions on minReplicationFactor
* Transaction API will now reject writes as soon as the minimal replication factor is NOT fulfilled
* added minReplicationFactor to the user interface, preparation for the collection api changes
* added minReplicationFactor to VocBaseCollection, RestReplicationHandler, RestCollectionHandler, ClusterMethods, ClusterInfo and ClusterCollectionCreationInfo
* added minReplicationFactor usage to tests
* TODO TEMPORARY COMMIT FOR TESTING PLEASE REVERT ME
* minReplicationFactor now able to change via collection properties route
* fixed wrong assert
* added minReplicationFactor to the graph management ui
* added minReplicationFactor to the gharial api
* Fixed off-by-one error in minReplicationFactor. We actually enforced one more.
* adjusted description of minReplicationFactor
* FollowerInfo Refactoring
* added gharial api graph creation tests with minimal replication factor
* proper cleanup of shell collection tests, removed lots of duplicate code, preparation for some new tests
* added collection create tests using invalid/valid names, replicationFactor and minReplicationFactor
* Debug logging
* MORE Debug logging
* Included replication fast lane
* Use correct minReplicationFactor
* modified debug logging
* Fixed compile issues
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* Revert "MORE Debug logging"
This reverts commit dab5af28c0.
* Revert "MORE Debug logging"
This reverts commit 6134b664bd.
* Revert "MORE Debug logging"
This reverts commit 80160bdf3b.
* Revert "MORE Debug logging"
This reverts commit 06aabcdfe1.
* Removed debug output
* Added replication fast lane. Also refactored the commands as I cannot take it any more...
* Put some requests of RocksDBReplication onto CATCHUP Lane.
* Put some requests of MMFilesReplication onto CATCHUP Lane.
* Adjusted Fast and MED lane usage in Supervised scheduler
* Added changelog entry
* Added new features entry
* A new leader will now keep old followers in case of failover
* Update arangod/Cluster/ClusterCollectionCreationInfo.cpp
Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>
* Fixed JSLINT
* Unified lane handling of replication handlers
* Sorry forgotten in last commit
* replaced strings with static strings
* more use of static strings
* optimized min repl description in the ui
* decr initial loop variable
* clean up of the createWithId test
* more use of static strings
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js
Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>
* Added some comments on condition, renamed variable as suggested in review
* Added check for minReplicationFactor to be non-zero
* Added assertion
* Added function to modify min and max replication factor in one go
* added missing semicolon
* rm log devel
* Added a second piece of information to follower info that keeps track of followers that have been in sync before a failover has taken place
* Maintenance now reports the previous version to follower info instead of lying by itself. The Follower Info now gets a failover-safe mode to report in-sync followers
* check replicationFactor against the number of DB servers
* Add lie reporting in CURRENT
* Reverted most of my recent commits about Failover situation. The intended plan simply does not work out
* move replication checks from logical collection to rest collection handler
* added more replication tests
* Include assert only if we are not in gtest
* jslint
* set min repl factor to zero if satellite collection
* check replication attributes in v8 collection
* fixed IResearch tests
* fixed wrong assert
* properly check uint
* repl factor attr check
* adjusted test to be more precise now
* Fixed race on atomics comparison
* Fixed invalid number type
* Update tests/js/common/shell/shell-cluster-collection.js
Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>
* Review fixes
* More review fixes
* migrate mochaGrep into --testCase parameterization
* fix filter forwarding for spawned arangoshs - add mocha version
* sometimes we have to ignore the string 'undefined'
* fix location - while they work with the server, these tests are run in arangosh, so they need to be in client/
* rename testsuite
* rename file to match its testsuite
* Rename permissions_server to server_permissions
* apply filters before starting the server, so we can detect whether no test would be executed
* skipTimeCritical and skipNondeterministic also qualify for a non-erroneous empty testcase
* simplify code: if a filter is issued, having no matching testcase is an error
* Update test-utils.js
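
A minimal arangosh sketch of how the minReplicationFactor attribute introduced in the commits above is meant to be used. The collection name and concrete values are made up for illustration; only the attribute names follow the commits in this list:

```js
// hypothetical arangosh session, assuming a cluster with at least 3 DB servers
// create a collection with 3 replicas per shard; writes are rejected
// as soon as fewer than 2 replicas are in sync (minReplicationFactor)
db._create("demo", { numberOfShards: 2, replicationFactor: 3, minReplicationFactor: 2 });

// per the commits above, the minimum can later be changed via the
// collection properties route
db.demo.properties({ minReplicationFactor: 1 });

// note: the review fixes above require minReplicationFactor to be non-zero
// and not larger than replicationFactor
```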