* Remove unused function mkdir()
* Remove some outdated comments
* Add include guards to AgencyStrings.h
* Move check for initialized agency out of registerAtAgencyPhase1
* Whitespace cleanup
* Address two minor comments from review
* Make request forwarding (load-balancing) non-blocking.
* Apply suggestions from code review
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Address review comments.
* Use more appropriate method.
* allow agency operations in active failover too
* Added regression test
* Allowed more calls in active failover for the health endpoint to work
* Updated CHANGELOG
* Added a skeleton framework for agency paths
* Added some basic tests
* Added missing header
* Move to shared_ptrs to parents
* Added a virtual base class
* Sprinkle some final specifiers
* Moved some code into class Path and simplified tests
* Added root() function
* Added assertions
* Added /arango/Supervision
* Replaced PathComponent by StaticComponent and added DynamicComponent
* Added /arango/Target
* Added /arango/Current
* Added /arango/Plan
* Added a TODO note
* Added the last missing top-level paths in /arango/
* Added aliases, cleaned up comment
* Fixed some specifiers
* s/typeof/decltype/
* Made usage of named variables for targets a little more consistent
* Bumped minimum cmake version
- removed policies superseded by minimum cmake version
- fixed static linking due to new policy CMP0060
* Fixed library suffixes for windows
* Extracted libs arango_mmfiles and _rocksdb from arangoserver
* Replaced target variables by constants
* Extracted arango_cluster_engine from arangoserver
* Extracted llhttp from arangoserver
* First successful split of arangoserver
* Moved enterprise files to enterprise
* Again only optionally include RestTestHandler and AcceptorUnixDomain
* Cleaned source files from other libraries
* Removed old commented sources
* Split off a small third library
* Fixed boost dependency for cluster engine
* Added some missing dependencies
* Added arango_geo dep, and -J on windows
* Began to split off an arango_graph lib
* Moved more files to arango_graph
* Do not set /J globally for ATL
* Moved more files to arango_graph
* Moved graph-RestHandlers to arango_graph
* Added arango_geo dependency to _mmfiles and _rocksdb
* Updated graph dependencies
* Split off arango_pregel
* Split off arango_aql
* Added missing boost_system dependency to pregel
* Split off arango_vocbase
* Cleanup
* Added missing boost_system dependency for arango_vocbase
* Split off arango_v8server
* Split off arango_utils
* Minor cleanup
* Split off arango_storage_engine
* Split off arango_indexes and arango_cache
* Fixed some dependencies
* Split off arango_replication
* Resolved two todos
* Split off arango_agency
* Reordered some statements
* Ordered dependency definitions alphabetically
* Cleaned some deps
* Break one cycle, comment on another
* Merge the remaining arangoserver_part[123] sources
* Moved some utils to vocbase to break cycles
* Added missing backtrace dependency to iresearch-s
* Added missing boost dependency
* Added dependency arango_indexes -> arango_geo
* Added deps to arango_cluster_engine, cleaned duplicate deps
* Broke remaining dependency cycles
* Actually, missed one cycle...
* Re-added include for Mac
* arango_cache needs SharedPRNG
* Added a stop to the Network feature. There is a race condition on the garbage collection post
* Added cleanup for analyzer test and view test
* Update tests/js/client/shell/shell-analyzer-rest-api.js
Fixed usage of wrong command
* spawn an arangosh that checks cluster nodes are responsive
* attempting to kill 0 may end badly for us, prohibit it
* don't trip over this optional stuff.
* fix killing of spectator
* only write one line per minute
* fix function name
* print sHitlist of tests, add resource usage
* fix lint
* start refactoring result processing into its own library
* /proc reading only works on linux
* new result processing library
* clean up test loading flow, remove unused blacklist implementation
* measure SUT start/stop + test time
* lint
* more non-test variables
* improve test runner naming
* finish gathering statistics about start/run/stop
* use internal stats code for external processes too
* fix status in sample testcase
* tell that procdump is gone - it seems this happens in reality without coredumps being written
* refactor test result analyzing, add utility to work with analyzers of json dumps from CI systems
* fix testcase error handling
* also run watcher for agency
* lint
* fix default options for tests; added default options
* if arguments occur multiple times, update value to an array
* start implementing some test analyzers
* fix color
* write json report unconditional
* add analyzer that searches for long setup/teardown tests
* enable thread dump; log error if we fail to acquire the threads
* disable procdump for the agency
* output error if we fail to get process stats
* fix json invocation
* add debug logging
* only add buildType if that directory actually exists
* trap sleepers
* trap sleepers
* trap sleepers
* trap sleepers
* trap sleepers
* disable thread counting on the wintendo
* disable thread counting on the wintendo
* more measurements
* more measurements
* one more place
* one more place
* remove debugging code
* Update js/client/modules/@arangodb/process-utils.js
Co-Authored-By: Dan Larkin-York <danielhlarkin@users.noreply.github.com>
* Update js/client/modules/@arangodb/process-utils.js
Co-Authored-By: Dan Larkin-York <danielhlarkin@users.noreply.github.com>
* Apply suggestions from code review
Co-Authored-By: Dan Larkin-York <danielhlarkin@users.noreply.github.com>
* rename as suggested by @dan
* undo debug changes
* lint, make cluster health monitor optional per default
* fix spawning of active failover SUT, fix cluster health monitor shutdown
* use std::find_if (as @dan suggested), fix log ids
* fix scope of before-time
* remove 404-ed callbacks from agency
* revert callback documents to published api :)
* array needs to be inside so that multiple unobserves to the same key are possible
* Fixed ViewExecutionNode retrieval with deleted documents present in view
* Ported solution from 3.4 branch
* Changed index store in collection from vector to set, to make reversible indexes always last to execute
* Fixed re-enter hang
* Index storage fix
* Made index order deterministic
* Fix Mac build
* Added tests for index reversal
* Fixed Mac build
* Code cleanup
* Some cleanup
* Removed some redundant copy constructor calls
* Applied review comments
* Applied review comments
* make index selection more deterministic
* serialize indexes used by traversal with their estimates
* serialize selectivity estimates for shortest path nodes too
* fix assertion that doesn't hold true in unit tests
* fix test
* Obtain more ids via a background thread.
* Wait for thread to stop on shutdown.
* Added scope guard.
* Atomic weapons.
* Fix log level.
* One big lock!
* Added mutex for cleanup.
* Fixed unused variable.
* Fixed analyzer properties equality check.
* Added re-normalization of stored analyzer properties
* Fixed analyzer name rules in rest and v8 handlers
* Fixed linux build
* Added tests for implicit system db name. Code cleanup.
* Analyzers access logic moved to separate function
* Consolidated _servers and _serverAdvertisedEndpoints, added rebootId, prepared change notifications
* Cleanup
* Added a RebootId type
* Began implementing RebootTracker (still WIP)
* Moved RebootId operators into the class
* Removed RebootId operator<< again
* Added tests, added CallbackGuard, removed/commented old RebootTracker code
* Fix: do not try to call unset callbacks
* Split one test, added another
* Added more tests
* Renamed tests, added more tests
* Fixed missing variable declarations
* Let MockServer appear to be started
* Reordered test, fixed naming
* Implemented callMeOnChange()
* Re-implemented RebootTracker (not yet working)
* Resolved a TODO, updated a test, added comments
* Call old callbacks immediately
* Fixed tests
* Use EXPECT_* instead of ASSERT_*
* Suppress a log message
* Resolved TODOs
* Reverted changes on reading ServersRegistered
* Update RebootTracker
* Introduce `rebootId` into ServerState for Cluster
* A server *boots* if it is started on a previously non-existing data
directory and hence does not have a UUID yet.
* A server *reboots* if it is started on a pre-existing data directory.
We keep the rebootId in the cluster's agency under
Current/ServersKnown/$uuid/rebootId.
When rebooting (and subsequently re-joining a cluster), the server increments
its rebootId in Phase 2 of registration. This way it can be detected within the
cluster whether a server was restarted.
This information will later be used to handle cases where server restarts can
lead to problems, for example with transactions or in-progress queries.
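A minimal sketch of the bookkeeping described above (illustrative only, not the actual ServerState/ClusterInfo code; the agency is mocked as a plain map whose keys mirror the Current/ServersKnown/$uuid/rebootId layout):
```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

using RebootId = std::uint64_t;

// Stand-in for the agency KV store; the real code goes through agency transactions.
std::map<std::string, RebootId> agency;

// Phase 2 of registration: a freshly booted server starts at rebootId 1, a
// rebooted server (pre-existing data directory, UUID already known) increments
// its rebootId so the rest of the cluster can detect the restart.
RebootId registerPhase2(std::string const& uuid) {
  std::string key = "Current/ServersKnown/" + uuid + "/rebootId";
  auto it = agency.find(key);
  RebootId next = (it == agency.end()) ? 1 : it->second + 1;
  agency[key] = next;
  return next;
}

int main() {
  std::string uuid = "PRMR-1234";             // illustrative server UUID
  std::cout << registerPhase2(uuid) << "\n";  // 1: first boot
  std::cout << registerPhase2(uuid) << "\n";  // 2: reboot, detectable via the increment
}
```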
* Move rebootId into Current/ServersKnown/
* Fixed typo
* Fixed log ids
* Add deletion of ServersKnown/UUID from agency
* Add deletion of Current/ServersKnown/UUID to removeServer
* Clean up readRebootIdFromAgency and add retry loop around it
* Bugfix
* Added nolint comments
* Fixed initialization order
* Fixed ClusterInfo-test
* Added log messages
* Revert "Fixed ClusterInfo-test"
This reverts commit d983596979.
* Disabled assertion for google tests
* Ignore windows compile warning
* Always call loadServers in loadCurrent
* Fix really subtle bug when not returning a value
* Fixed compile error due to forbidden implicit cast
* Fixed compile error on windows
* Fixed compile error due to devel merge
* Removed dead comment
* Removed TODO note
* Extended comment
* Removed TODO note
* Fixed using an invalidated iterator
* Copy string only if necessary
* Fixed compile error
* First version of ResignLeadership Job.
* Port some performance optimizations from CleanOutServerJob.
* Draft of resigning leadership on shutdown.
* Moved code into Maintenance Feature. Fixed beginShutdown.
* First draft of keeping in sync during controlled leader change.
* Test if server is actually the leader in plan.
* Update changelog.
* Added oldLeader check for set-the-leader request.
* Small fixes.
* Added tests for analyzer removal and get with wrong db context
* Added Link ddl tests
* Added tests for cross-database access to analyzers
* Fixed v8 analyzer remove/get operation. Removed redundant sorting.
* Fixed link creation with analyzer from other database
* Added check for cross-use analyzer from system db
* Fixed tests
* Fix typo in test
* Fixed comments
* Fixed indentation
* Link validation moved to LinkHelper
* Code cleanup
* Applied review comments
* Applied review comments
* Small reformatting
* added assert
* port of feature-3.4/mv-gzip-export to devel branch
* add explicit namespaces so gcc 6.3.0 would successfully compile
* add conditional cleanup of ENCRYPTION file from another branch
* use _lseek() for Windows build to avoid deprecation warning.
* change from ifdef for lseek variants to TRI_LSEEK.
* force Windows lseek to return the type expected on Linux.
* Added startup error for bad temporary directory setting.
If the temporary directory (--temp.path) setting is identical to the database
directory (`--database.directory`) this can eventually lead to data loss, as
temporary files may be created inside the temporary directory, causing overwrites of
existing database files/directories with the same names.
Additionally the temporary directory may be cleaned at some point, and this would lead
to an unintended cleanup of the database files/directories as well.
Now, if the database directory and temporary directory are set to the same path, there
will be a startup warning about potential data loss (in ArangoDB 3.4 the startup is
still allowed to continue; in 3.5 and higher the startup is aborted).
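A hedged sketch of the described check (the function name and option handling are illustrative, not the actual feature code): compare the two configured paths and refuse to start when they resolve to the same directory.
```cpp
#include <filesystem>
#include <iostream>
#include <string>

// Returns true when --temp.path may safely be used alongside --database.directory.
bool tempPathIsSafe(std::string const& tempPath, std::string const& databaseDirectory) {
  namespace fs = std::filesystem;
  std::error_code ec;
  // equivalent() resolves symlinks, so an aliased path is caught as well.
  bool same = fs::equivalent(fs::path(tempPath), fs::path(databaseDirectory), ec);
  return ec ? true : !same;  // unresolvable paths are left to later validation
}

int main() {
  if (!tempPathIsSafe("/var/lib/arangodb3", "/var/lib/arangodb3")) {
    std::cerr << "temporary directory must not be identical to the database directory\n";
    return 1;  // 3.5 and higher: abort startup; 3.4: warn only
  }
  return 0;
}
```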
* make TTL indexes behave like other indexes on creation
if a TTL index is already present on a collection, the previous behavior
was to make subsequent calls to `ensureIndex` fail unconditionally with
the error "there can only be one ttl index per collection".
now, we compare the attributes of the to-be-created index with the
attributes of the existing TTL index and only fail when the
attributes differ. if the attributes are identical, the `ensureIndex`
call succeeds and returns the existing index.
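A minimal sketch of that comparison (the struct and helper are illustrative, not the actual Index classes): an identical definition yields the existing index, a differing one still fails.
```cpp
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct TtlIndexDefinition {
  std::vector<std::string> fields;
  double expireAfter;
  bool operator==(TtlIndexDefinition const& other) const {
    return fields == other.fields && expireAfter == other.expireAfter;
  }
};

// existing: the TTL index already present on the collection, if any.
// Returns the index to report, or nullopt for the
// "there can only be one ttl index per collection" error.
std::optional<TtlIndexDefinition> ensureTtlIndex(
    std::optional<TtlIndexDefinition> const& existing,
    TtlIndexDefinition const& requested) {
  if (!existing) {
    return requested;   // no TTL index yet: create the requested one
  }
  if (*existing == requested) {
    return existing;    // identical attributes: return the existing index
  }
  return std::nullopt;  // differing attributes: reject as before
}

int main() {
  TtlIndexDefinition existing{{"stamp"}, 600.0};
  std::cout << std::boolalpha
            << ensureTtlIndex(existing, existing).has_value() << "\n"                              // true
            << ensureTtlIndex(existing, TtlIndexDefinition{{"stamp"}, 300.0}).has_value() << "\n"; // false
}
```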
* Bug fix 3.5/min replication factor (#9524)
* Cherry-pick minReplicationFactor
* Bug fix/failover with min replication factor (#9486)
* Improve collection time of IResearchQueryOptimizationTest
* Added a minReplicationFactor field in Collections. It is not possible to modify it yet and no one cares for it
* Added some assertions on minReplicationFactor
* Transaction API will now reject writes as soon as the minimal replication factor is NOT fulfilled
* added minReplicationFactor to the user interface, preparation for the collection api changes
* added minReplicationFactor to VocBaseCollection, RestReplicationHandler, RestCollectionHandler, ClusterMethods, ClusterInfo and ClusterCollectionCreationInfo
* added minReplicationFactor usage to tests
* TODO TEMPORARY COMMIT FOR TESTING PLEASE REVERT ME
* minReplicationFactor now able to change via collection properties route
* fixed wrong assert
* added minReplicationFactor to the graph management ui
* added minReplicationFactor to the gharial api
* Fixed off-by-one error in minReplicationFactor. We actually enforced one more.
* adjusted description of minReplicationFactor
* FollowerInfo Refactoring
* added gharial api graph creation tests with minimal replication factor
* proper cleanup of shell collection tests, removed lots of duplicate code, preparation for some new tests
* added collection create tests using invalid/valid names, replicationFactor and minReplicationFactor
* Debug logging
* MORE Debug logging
* Included replication fast lane
* Use correct minreplicationfactor
* modified debug logging
* Fixed compile issues
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* MORE Debug logging
* Revert "MORE Debug logging"
This reverts commit dab5af28c0.
* Revert "MORE Debug logging"
This reverts commit 6134b664bd.
* Revert "MORE Debug logging"
This reverts commit 80160bdf3b.
* Revert "MORE Debug logging"
This reverts commit 06aabcdfe1.
* Removed debug output
* Added replication fast lane. Also refactored the commands, as I cannot take it any more...
* Put some requests of RocksDBReplication onto CATCHUP Lane.
* Put some requests of MMFilesReplication onto CATCHUP Lane.
* Adjusted Fast and MED lane usage in Supervised scheduler
* Added changelog entry
* Added new features entry
* A new leader will now keep old followers in case of failover
* Update arangod/Cluster/ClusterCollectionCreationInfo.cpp
Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>
* Fixed JSLINT
* Unified lane handling of replication handlers
* Sorry forgotten in last commit
* replaced strings with static strings
* more use of static strings
* optimized min repl description in the ui
* decr initial loop variable
* clean up of the createWithId test
* more use of static strings
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js
Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>
* Added some comments on condition, renamed variable as suggested in review
* Added check for min replicationFactor to be non-zero
* Added assertion
* Added function to modify min and max replication factor in one go
* added missing semicolon
* rm log devel
* Added a second piece of information to follower info that keeps track of followers that have been in sync before a failover has taken place
* Maintenance now reports the previous version to follower info instead of lying by itself. The FollowerInfo now gets a failover-safe mode to report in-sync followers
* check replFactor against nr dbservers
* Add lie reporting in CURRENT
* Reverted most of my recent commits about Failover situation. The intended plan simply does not work out
* move replication checks from logical collection to rest collection handler
* added more replication tests
* Include assert only if we are not in gtest
* jslint
* set min repl factor to zero if satellite collection
* check replication attributes in v8 collection
* Initial commit, old plan, does not yet work
* fixed ires tests
* Included FailoverCandidates key. Not fully implemented
* fixed wrong assert
* unified in sync follower reporting
* fixed compiler errors
* Cleanup locking, and fixed potential deadlocks
* Comments about locking order in FollowerInfo.
* properly check uint
* Keep old leader as potential failover candidate
* Transaction methods now use followerInfo to check if the leader can write; this might have the side effect that 'failoverCandidates' are updated
* Let agency check failoverCandidates if possible
* Initialize member variables
* Use unified follower reporting in DBServerAgencySync
* Removed obsolete variable, collecting it somewhere else
* repl factor attr check
* Reimplemented previous followers, second attempt now. PhaseOne and PhaseTwo can now synchronize on current.
* Fixed assertion, forgot an off-by-one
* adjusted test to be more precise now
* Fixed failover candidates list
* Disable write on dropping too many followers
* Allow running updateFailoerCandidates multiple times with the same leader.
* Final fixes, resilience tests now green, crossing fingers for jenkins
* Fixed race on atomics comparison
* Fixed invalid number type
* added nullptr handling
* added nullptr handling
* Removed invalid assert
* Make takeover of leadership an atomic operation
* Update tests/js/common/shell/shell-cluster-collection.js
Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>
* Review fixes
* Fixed creation code to use takeoverLeadership
* Update arangod/Cluster/FollowerInfo.h
Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>
* Applied review fixes
* There is no timeout
* Moved AQL + Pregel to INTERNAL_AQL lane, which is medium priority, to avoid deadlocks with Sync replication
* More review fixes
* Use difference if you want to compare two vectors...
* Use std::string ...
* Now check if we are in recovery mode
* Added documentation for minReplicationFactor
* Added README update in the documentation as well
* Removed merge conflict leftovers 0o, I should not trust the IDE
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update Documentation/Books/Manual/Architecture/Replication/README.md
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update CHANGELOG
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update Documentation/Books/Manual/DataModeling/Collections/DatabaseMethods.md
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update Documentation/Books/Manual/ReleaseNotes/NewFeatures35.md
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update Documentation/DocuBlocks/Rest/Collections/1_structs.md
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/graphManagementView.js
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/graphManagementView.js
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Update Documentation/DocuBlocks/Rest/Graph/1_structs.md
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Apply suggestions from code review
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Addressed review requests, thanks for finding!
* Removed unnecessary const
* Apply suggestions from code review
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Moved initialization of variable further down
* Apply lock before notify_all()
* Remove documentation except DocuBlocks, covered by PR in docs repo
* Remove accidental indent
* Removed leftover merge conflict in documentation block
* allow to access last operation tick of the transaction
* modify IResearchLink to rely on last committed tick
* get rid of writing arangosearch markers to WAL
* further implementation
* local changes in iresearch
* add recovery states
* properly handle link creation during recovery
* adjust test cases
* properly handle nested transactions
* ungreylist recovery tests
* add more recovery tests
* do not use transaction to pass recovery tick
* do not store recoveryTick in MMFilesRecoveryState
* adjust tests
* fix mmfiles
* cleanup
* add context validity check
* fix tests
* fix more tests
* ensure subscription is not being released during commit
* address review comment
* final cleanup
* fix crash
* fix tests
* address test failures
* address review comments
* address review comments
* properly set recovery tick even if no recovery happened
* some error results have messages that are not being reported
* update CHANGELOG for rocksdb reporting fix
* Add mutex protection to errMsg usage per Jan's code review
* start maintenance arangosearch threads after recovery is done
* ensure flush subscription is not being deallocated while in use
* add some tests
* properly determine storage engine
* adjust default view options
* stick to old index meta format in 3.5
* address test failures
* and cluster tests
* Let sync replication go through the FAST lane instead of the SLOW lane. This should reduce the number of dropped followers under high load.
* First draft of a generic MockServer to test RestHandlers. Needs adaptation later on
* Added a test case for the RestDocumentHandler lane decision; synchronous replication should be fast-lane.
* Added CHANGELOG entry
* Applied review fixes
* Update CHANGELOG
Thanks for finding!
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Test with absurd timeout
* Removed debug timeout
* Fixed stem, norm, text analyzer creation without properties. Added tests.
* Removed non-deterministic test. Same functionality is tested by gtest (IResearchAnalyzerFeatureTest.test_remove)
* Added check for analyzer type existence.
* Made error messages for analyzer creation more specific and readable
* Fixed test
* Applied review suggestions
* Better logging and error reporting.
* Preconditions for FollowerInfo.
* Preconditions when updating Current as leader.
* Change a log level.
* Fix unit tests.
* CHANGELOG.
* LOG_TOPIC ids.
* Fix a log id.
* Fix Windows compilation.
* Removed lazy creation of analyzers collection. Test fixes (added explicit creation of analyzers collection for test environments)
* Fixed test runs. Removed cluster tests for analyzer DDL
* Analyzers collection name moved to tests common
* Disabled load_library in normalize calls
* Legacy code cleanup
* added db analyzers collection checks in analyzer creation test to cover functions removed from gtest
* Reverted analyzer properties comparison to non-utf8 as that must be binary equal
* Fixed analyzer definition in test (definition should be an object). Fixed reporting of invalid parameter error.
* Restored ability to parse string-encoded JSON object in analyzers REST handler. Test was reverted to pass string again.
* Fixed test run
* Get rid of shared ptr to simplify code.
* ensure flush subscriptions are being unsubscribed
* update tick even if no changes happened
* remove debug output
* fix test
* address review comments
* address test failures
* validate analyzer properties, do not return `null` for identity analyzer properties
* fix some tests
* Fixed tests for new analyzer parameters validation. Added explicit test for analyzer parameter validation
* update iresearch, fix analyzer tests
* fix more tests
* fix tests
* store analyzer properties as vpack
* more compilation fixes
* Updated iresearch
* Tests compilation fixed
* Test run fixes
* Test run fixes
* Fix test run
* Fixed all IresearchTests
* Fixed V8Analyzers tests
* Added ngram and delimiter analyzers vpack config
* Added ngram and stem analyzers vpack parsers. Added tests
* Fixed internal issue #593. Fixed build issue
* Fixed tests
* Fixed Gtest tests
* Changed test run to fix Mac failure
* Mac tests debugging
* Fixed jslint errors
* test tracing added
* Took into account VPackSlice operator== false negatives
* Fix agency election lockstep bug.
Reset the base point for the random election timeout to now whenever we have
cast a vote, be it for us or for some other server.
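A hedged sketch of that fix (names and timeout bounds are illustrative, not the actual Constituent code): every cast vote moves the base point of the randomized election timeout to "now", so agents that just voted for another server no longer time out in lockstep and trigger competing elections.
```cpp
#include <chrono>
#include <iostream>
#include <random>

using Clock = std::chrono::steady_clock;

struct ElectionTimer {
  Clock::time_point base = Clock::now();    // base point of the timeout window
  std::chrono::milliseconds timeout{1500};  // current randomized timeout

  // Call whenever a vote is cast, for ourselves or for another server.
  void resetAfterVote(std::mt19937& rng) {
    std::uniform_int_distribution<int> jitter(1000, 2000);  // illustrative bounds
    base = Clock::now();                                    // the fix: re-anchor to "now"
    timeout = std::chrono::milliseconds(jitter(rng));
  }

  // An election is only started once the randomized window has elapsed
  // without any vote having been cast in the meantime.
  bool expired() const { return Clock::now() - base > timeout; }
};

int main() {
  std::mt19937 rng(std::random_device{}());
  ElectionTimer timer;
  timer.resetAfterVote(rng);  // e.g. we just voted for some other server
  std::cout << std::boolalpha << timer.expired() << "\n";  // false right after the vote
}
```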
* CHANGELOG.
* Fix compilation.