* Let sync replication go through the FAST lane instead of the SLOW lane. This should reduce the number of dropped followers under high load.
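A minimal sketch of the lane decision, with a hypothetical `RequestLane` enum and request marker standing in for the actual scheduler types (names modeled on, but not guaranteed to match, the real API):

```cpp
// Hypothetical lane enum; the real scheduler distinguishes more lanes.
enum class RequestLane { CLIENT_FAST, CLIENT_SLOW };

struct Request {
  // True when the request carries the synchronous-replication marker
  // set by the leader (exact detection mechanism is an assumption here).
  bool isSynchronousReplication;
};

// The lane decision covered by the new RestDocumentHandler test case:
// replication traffic must not queue behind slow client work, otherwise
// followers time out and get dropped under high load.
inline RequestLane laneFor(Request const& req) {
  return req.isSynchronousReplication ? RequestLane::CLIENT_FAST
                                      : RequestLane::CLIENT_SLOW;
}
```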
* First draft of a generic MockServer to test RestHandlers. Needs adaptation later on
* Added a test case for the RestDocumentHandler lane decision: synchronous replication should use the fast lane.
* Added CHANGELOG entry
* Applied review fixes
* Update CHANGELOG
Thanks for finding!
Co-Authored-By: Jan <jsteemann@users.noreply.github.com>
* Test with absurd timeout
* Removed debug timeout
* Fixed analyzer definition in test (definition should be object). Fixed proper reporting of invalid parameter error.
* Restored ability to parse a string-encoded JSON object in the analyzers REST handler. Test was reverted to pass a string again.
* Fixed test run
* Get rid of shared_ptr to simplify code.
* validate analyzer properties, do not return `null` for identity analyzer properties
* fix some tests
* Fixed tests for new analyzer parameters validation. Added explicit test for analyzer parameter validation
* update iresearch, fix analyzer tests
* fix more tests
* fix tests
* store analyzer properties as vpack
* more compilation fixes
* Updated iresearch
* Tests compilation fixed
* Test run fixes
* Test run fixes
* Fix test run
* Fixed all IresearchTests
* Fixed V8Analyzers tests
* Added ngram and delimiter analyzers vpack config
* Added ngram and stem analyzers vpack parsers. Added tests
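For illustration, building such an n-gram definition as VPack with the velocypack library; the property names follow the documented arangosearch n-gram options (`min`, `max`, `preserveOriginal`), but treat the exact shape the parsers expect as an assumption:

```cpp
#include <iostream>
#include <velocypack/Builder.h>
#include <velocypack/Slice.h>
#include <velocypack/Value.h>

namespace vp = arangodb::velocypack;

int main() {
  vp::Builder props;
  props.openObject();
  props.add("min", vp::Value(2));                  // shortest n-gram length
  props.add("max", vp::Value(3));                  // longest n-gram length
  props.add("preserveOriginal", vp::Value(true));  // also keep the untouched token
  props.close();

  // The new parsers consume a slice like this instead of a
  // string-encoded JSON object.
  std::cout << props.slice().toJson() << "\n";
  return 0;
}
```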
* Fixed internal issue #593. Fixed build issue.
* Fixed tests
* Fixed Gtest tests
* Changed test run to fix Mac failure
* Mac tests debugging
* Fixed jslint errors
* test tracing added
* Took into account VPackSlice operator== false negatives
* Bug fix 3.4/collection babies (#9033)
* Prepare API to create multiple collections in a single request to ClusterMethods to improve speed
* Added a counter for how many collections are successfully created
* Allow multi collection creation one level higher
* CollectionMethods now allow batch creation of collections
* Improved array size assertions
* A graph is now created within a single roundtrip to the agency.
* Added new header files
* Insert collections in the agency with a TTL and an isBuilding flag; collections with this flag should not be visible on the coordinator
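A simplified sketch of the visibility rule, with hypothetical types standing in for the actual plan structures:

```cpp
#include <string>
#include <vector>

// Hypothetical, simplified view of a plan entry as stored in the agency.
struct PlanCollection {
  std::string name;
  bool isBuilding;  // set on creation, removed once all shards are ready
  // A TTL on the agency entry makes half-built collections disappear
  // automatically if the creating coordinator dies mid-way.
};

// Coordinators must not expose collections that are still being built.
std::vector<PlanCollection> visibleCollections(
    std::vector<PlanCollection> const& plan) {
  std::vector<PlanCollection> result;
  for (auto const& c : plan) {
    if (!c.isBuilding) {
      result.push_back(c);
    }
  }
  return result;
}
```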
* Added forgotten C++ file
* Fixed a rare race condition, and the failing IResearch Tests
* Re-added callback on DONE, otherwise lists are out of sync
* Fixed assertions to let mocked tests pass...
* Fixed community cluster
* Started fixing IResearch analyzer test; catch-tests are failing ;(
* Resolved a missed merge conflict
* Added helper functions in AnalyzerFeature-test
* Refactoring AnalyzerTest Section-Auth
* Refactoring AnalyzerTest Section-Emplace-Duplicates
* Refactoring AnalyzerTest Section-Emplace-Error-Cases. Recovery-Test is now red; it seemed to be green because of an invalid test case before.
* Refactoring AnalyzerTest, split GET test into multiple parts, still left 'cluster simulation'.
* Attempt to extract Coordinator / DBServer tests a little bit. This commit starts to break all Coordinator tests. However, I am convinced that the earlier version did NOT test a cluster situation at all, but some hybrid of a SingleServer with full local storage that was told to be a Coordinator from then on, without any Coordinator setup...
* Temporarily disabled some tests in AnalyzerFeature, as discussed with @gnusi.
* Fixed include guard.
* Temporarily deactivated failing tests
* You shall save your files before you commit...
* Fixed test asserting on plan version, which is now higher than before
* issue 526.9.1: implement swagger interface, add documentation
* address review comments
* add ngram
* Formatting
* Move REST description to new Analyzers top chapter in HTTP book
* Missed a DocuBlock
* Add Analyzers chapter to Manual SUMMARY.md
* Move REST API description back to Manual
Headlines were broken
* Add n-gram example
* issue 526.6: implement REST and V8 handlers for the iresearch analyzer feature
* address typo
* remove excess comments
* temporarily comment out tests failing on MacOS
* temporarily comment out more MacOS-only test failures
* don't run compact() on a collection after a truncate() was done in the same transaction
Running compact() in the same transaction only increases the data size on disk, because RocksDB cannot physically remove any documents due to the snapshot we take at transaction start.
Decoupling the truncate transaction from the compact operation allows finishing the truncate transaction first, so we can get rid of the snapshot. Running compact afterwards is then free to physically remove all the data.
As a nice side effect, this change also speeds up the truncation of larger collections, because the compact will run faster.
This change also exposes db.<collection>.compact() in the arangosh, in order to manually run a compaction on the data range of a collection should that be needed for maintenance.
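The underlying RocksDB pattern, sketched directly against the public RocksDB API rather than ArangoDB's wrappers (path and key layout are made up for the example):

```cpp
#include <cassert>
#include <rocksdb/db.h>

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/compact-demo", &db);
  assert(s.ok());

  rocksdb::Slice begin("collection/0");
  rocksdb::Slice end("collection/\xff");

  // Step 1: the "truncate" - a range delete inside a normal write.
  s = db->DeleteRange(rocksdb::WriteOptions(), db->DefaultColumnFamily(),
                      begin, end);
  assert(s.ok());

  // Step 2: only once the truncating transaction (and its snapshot) is
  // finished can compaction physically drop the tombstoned data.
  s = db->CompactRange(rocksdb::CompactRangeOptions(), &begin, &end);
  assert(s.ok());

  delete db;
  return 0;
}
```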
* fix documentation anchors
* Ignore satellite collections in shrinkCluster in agency.
* Abort RemoveFollower job if not enough in-sync followers or leader failure.
* Break quick wait loop in supervision if leadership is lost.
* In case of resigned leader, set isReady=false in clusterInventory.
* Fix catch tests.
* added missing return statements
* only spend up to 10 seconds for initially fetching the list of collections in arangosh
Fetching the list of collections is a blocking operation, and the default timeout for it is very high.
If the server is blocked for whatever reason, the shell is unusable until the collections list request returns.
To avoid this, the initial request is limited to 10 seconds, so the shell can be used afterwards.
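Illustratively, the effect can be sketched with a bounded wait on a blocking call (the real client presumably sets a timeout on the request itself; the stub below is an assumption):

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <vector>

// Stand-in for the blocking "list collections" request; stubbed here.
std::vector<std::string> fetchCollectionList() {
  // Imagine a blocking HTTP call to the server at this point.
  return {"_users", "test"};
}

int main() {
  auto fut = std::async(std::launch::async, fetchCollectionList);
  if (fut.wait_for(std::chrono::seconds(10)) == std::future_status::ready) {
    for (auto const& name : fut.get()) std::cout << name << "\n";
  } else {
    // Give up on the collection list for now; the shell stays usable.
    std::cout << "collection list not available yet\n";
  }
  return 0;
}
```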
* If an index cannot be used for sorting, its sort cost was previously reported as 0. This in fact favors indexes that can be used for filtering but not for sorting over indexes that can be used for both.
This change reports the sort cost for indexes that cannot be used for sorting as n * log(n), where n is the number of documents the optimizer expects to come out of the index after filtering.
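As a sketch of the new rule (the log base is not specified in the change; base 2 is assumed here):

```cpp
#include <algorithm>
#include <cmath>

// n: estimated number of documents coming out of the index after filtering.
// Previously an index that could not sort reported a sort cost of 0, which
// made it look better than an index that delivers results already sorted.
double sortCost(bool indexSupportsSort, double n) {
  if (indexSupportsSort) {
    return 0.0;  // results already come out in order
  }
  return n * std::log2(std::max(n, 2.0));  // cost of explicitly sorting n items
}
```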
* Decoupled IO handling from Scheduler.
* Fixed SSL start up bug.
* Replaced Scheduler with new worker farm implementation.
* Added minimal statistics and info string for Scheduler.
* Added support for timed submissions.
* Updated delayed submission api. Updated code that used timers.
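A minimal sketch of what timed-submission support can look like, assuming a plain std::thread-based worker; names and structure here are illustrative, not the actual Scheduler API:

```cpp
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;

struct TimedTask {
  Clock::time_point deadline;
  std::function<void()> fn;
  bool operator>(TimedTask const& o) const { return deadline > o.deadline; }
};

class DelayedQueue {
 public:
  void submit(std::function<void()> fn, Clock::duration delay) {
    std::lock_guard<std::mutex> guard(_mutex);
    _tasks.push({Clock::now() + delay, std::move(fn)});
    _cv.notify_one();  // a nearer deadline may have arrived
  }

  // Runs one task; the real scheduler multiplexes many lanes and threads.
  void runOne() {
    std::unique_lock<std::mutex> lock(_mutex);
    while (_tasks.empty() || Clock::now() < _tasks.top().deadline) {
      if (_tasks.empty()) {
        _cv.wait(lock);
      } else {
        _cv.wait_until(lock, _tasks.top().deadline);
      }
    }
    auto task = _tasks.top();
    _tasks.pop();
    lock.unlock();  // never run user code while holding the queue lock
    task.fn();
  }

 private:
  std::mutex _mutex;
  std::condition_variable _cv;
  // Min-heap ordered by deadline, so the next-due task is always on top.
  std::priority_queue<TimedTask, std::vector<TimedTask>, std::greater<>> _tasks;
};
```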
* Extracted new Scheduler into a virtual parent class. The implementation can now depend on the use case.
* Signal handler now working.
* Changed thread names; `_stop` is atomic; check for failure during thread start, plus exception handling like the old scheduler did.
* Commented on source code and added TODOs.
* Played around with start-stop-conditions
* Play around with start stop condition.
* start stop cond
* Start Stop Conditions
* Removed bad cv_status check.
* Bug fix: now compare the actual objects instead of pointer values. Set up t1 and t2 depending on the thread id.
* Moved most of the stuff now unrelated to the Scheduler to GeneralServer. Got rid of JobGuard.
* Instead of waiting for a thread to terminate, put it on a clean-up list and check for its termination in each supervisor run.
* Allow detaching long running threads.
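The clean-up-list idea in isolation, as a standalone sketch (types and names are hypothetical, not the Scheduler's actual bookkeeping):

```cpp
#include <atomic>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

class ThreadReaper {
 public:
  // Register a worker; 'done' is flipped by the thread as its last action.
  void add(std::thread t, std::shared_ptr<std::atomic<bool>> done) {
    std::lock_guard<std::mutex> guard(_mutex);
    _workers.push_back({std::move(t), std::move(done)});
  }

  // Called from each supervisor run: join whatever has finished,
  // without ever blocking on a still-running thread.
  void reap() {
    std::lock_guard<std::mutex> guard(_mutex);
    for (auto it = _workers.begin(); it != _workers.end();) {
      if (it->done->load()) {
        it->thread.join();  // thread is exiting; join returns quickly
        it = _workers.erase(it);
      } else {
        ++it;
      }
    }
  }

 private:
  struct Worker {
    std::thread thread;
    std::shared_ptr<std::atomic<bool>> done;
  };
  std::mutex _mutex;
  std::vector<Worker> _workers;
};
```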
* Fixed test mock.
* Updated the WorkHandle logic. Removed post functions.
* Fixed crash when obtaining shared_ptr from this in destructor.
* Added lost mutex.
* Fixed memory leak.
* Fixed merge bug.
* Changed a lot of code to optimize the scheduler.
* Fixed bug of invalidated iterator. Don't remove task on shutdown at different places. Let scheduler threads run until the queue is empty.
* Only by-value calls to the queue.
* Added options again.
* Clean up of code.
* UI Request Lane added.
* Bug fixes in Scheduler.
* Applied reformat.
* Use sigaction.
* Added some DEBUG output for replication rest handler
* Some more debug logging.
* Increased the priority of the ReplicationHandler. This way we will not get stuck with locks that cannot be canceled. Also cancel the lock on the correct database.
* Added extensive log output for replication things
* Added tombstones to RestReplicationHandler. In a very unlikely case, the cancel of a lock can be executed BEFORE the code that actually registers the lock; in this case we now write a tombstone and do not lock.
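The tombstone idea in isolation, as a standalone sketch (names are hypothetical):

```cpp
#include <mutex>
#include <string>
#include <unordered_map>

// States a lock id can be in. CANCELED acts as the tombstone: if the
// cancel arrives before the register, the later register must not lock.
enum class LockState { LOCKED, CANCELED };

class LockRegistry {
 public:
  // Returns true if the lock may actually be taken.
  bool registerLock(std::string const& id) {
    std::lock_guard<std::mutex> guard(_mutex);
    // try_emplace does nothing if a tombstone is already present,
    // so a cancel that won the race suppresses the lock.
    return _locks.try_emplace(id, LockState::LOCKED).second;
  }

  void cancelLock(std::string const& id) {
    std::lock_guard<std::mutex> guard(_mutex);
    // Overwrites LOCKED, or plants a tombstone for a not-yet-registered lock.
    _locks[id] = LockState::CANCELED;
  }

 private:
  std::mutex _mutex;
  std::unordered_map<std::string, LockState> _locks;
};
```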
* Revert "Added extensive log output for replication thins"
This reverts commit 6d4e37ea1e59e3b3457336019cc7dbc4c979504d.
* Added extensive log output for replication things, now in ERR level instead of MAINTAINER only
* Now actually use hours for synchronization
* React to errors under soft lock if they show up.
* Added a retry loop to increase the read-lock timer.
* Added more timing output in RocksDB collection internals to figure out why the followers are dropped
* Tweaked RocksDB options
* Revert "Tweaked RocksDB options"
This reverts commit 2bf9c43280beda4792c47d079387fe5154cdd896.
* Removed debug output
* Applied all changes requested by goedderz
* Deleted unused variable
* merged fixes from 3.4
* odd fix
* Bug fix 3.4/sync repl release thread (#6784)
* First attempt to not block the thread that requires the EXCLUSIVE sync-up lock
* Fixed insertion of query into registry in rest replication handler.
* Removed unnecessary / false asserts as suggested in review. Fixed code comments.
* Replaced auto with a correct type as suggested in review
* Added a helper function to validate if a query is in use in the registry
* Fixed logic bug in usage of query registry
* Fixed compile issue
* Automatic transform of int -> bool in initializer lists sucks...
* Inverted boolean logic bug hidden due to int -> bool being logically inverted.
* Today it seems that bools are too complicated for my brain.
* Removed failure point, didn't write a test for it, and it is hard to write it in the current test environment. Need to find a better solution in future
* Applied changes required by @goedderz in review
* Bug fix 3.4/shorter foot in door (#7084)
* Implement `syncCollectionCatchup` in DatabaseTailingSyncer.
First stab, might not even compile.
* Fixed a typo.
* Fix a typo and a compilation problem.
* Further compilation fix.
* Implement two stage catchup.
* Two small corrections.
* Unified error messages in the SynchronizeShard job.
* Improved a code comment.
* Fixed autocasting bool->double and double->bool issue. That is truely one of the best features ever invented... </irony>
* Renamed doHardLock => toSoftLockOnly and inverted default value
* Merged soft/hard foot logic with Transaction splits
* Use scopeguards to cancel readlocks
* Bug fix 3.4/sync replication allow soft and hard lock (#6864)
* First attempt to not block the thread that requires the EXCLUSIVE sync-up lock
* Fixed insertion of query into registry in rest replication handler.
* Removed unnecessary / false asserts as suggested in review. Fixed code comments.
* Replaced auto with a correct type as suggested in review
* Added a helper function to validate if a query is in use in the registry
* Fixed logic bug in usage of query registry
* Fixed compile issue
* Implemented optional 'doHardLock' parameter in the replication acquire-read-lock call. A hard lock guarantees to stop all writes; a soft lock may not (see the sketch after this change set).
* Fixed compile issue
* Automatic transform of int -> bool in initializer lists sucks...
* Inverted boolean logic bug hidden due to int -> bool being logically inverted.
* Today it seems that bools are too complicated for my brain.
* Removed failure point, didn't write a test for it, and it is hard to write it in the current test environment. Need to find a better solution in future
* Applied changes required by @goedderz in review
* Renamed doHardLock => toSoftLockOnly and inverted default value
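The soft/hard distinction (and the two-stage catchup above) roughly works like this; a minimal standalone sketch with hypothetical names, not ArangoDB's actual interfaces:

```cpp
#include <functional>

// Hypothetical lock interface on the leader: a soft lock may still admit
// writes, a hard lock guarantees to stop them (cf. doHardLock/toSoftLockOnly).
struct ReadLock {
  std::function<void(bool hard)> acquire;
  std::function<void()> release;
};

// Two-stage catchup ("shorter foot in the door"): replicate the bulk of
// the data under the soft lock, then stop writes only for the final delta.
void twoStageCatchup(ReadLock& lock,
                     std::function<void()> replicateBulk,
                     std::function<void()> replicateFinalDelta) {
  lock.acquire(/*hard=*/false);  // writers may proceed during the long part
  replicateBulk();
  lock.release();

  lock.acquire(/*hard=*/true);   // writers stopped; keep this window short
  replicateFinalDelta();
  lock.release();
}
```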
* issue 496.5: minor API cleanup and error reporting enhancements, update iresearch to commit d69f7bd184e009da7bf0a478efd34a0c85b74291
* add workaround for shell-collection-rocksdb-noncluster.js::testSystemSpecial test failure
* fix typo
* issue 496.3: move more coordinator-related logic out of TRI_vocbase_t, rename some arangosearch view configuration parameters, remove some consolidation policies, update iresearch to revision 6fd9760d81b136f769e277ea5b8f53996ed7a1ca
* address potential deadlock between link creation and FlushThread
* remove code causing nullptr access
* add back lock around reader reopen
* revert: address potential deadlock between link creation and FlushThread
* invalidate payload for each field in FieldIterator before setting a value
* Improve logging on coordinator when doing `arangorestore`.
* Return more error information in `mergeResults`.
* Longer timeout for communication coordinator -> leader for writes.
This takes into account possible write stops needed for followers to get in sync.
* Fix compilation.
* Get rid of numbers in exception log messages.
* Fix a typo.
* Fix compilation.
* issue 496.1: switch scope of responsibility between a TRI_vocbase_t and a LogicalView in respect to view creation/deletion
* backport: address test failures
* backport: ensure arangosearch links get exported in the dump
* backport: ensure view is created during restore on the coordinator
* Updates for ArangoSearch DDL tests, IResearchView unregistration and known issues
* Add fix for internal issue 483
* issue 458.2.1: ensure LogicalCollection presence is checked before granting/revoking permissions
* try to address test failures
* backport: support wildcard for database too
* create collection before granting
* adjust Ruby tests to expect behaviour as defined by issue #458
* adjust expected Ruby test result
* create required collection in Ruby test
* revert back to previous test code since Ruby refuses to create required collection
* missed revert
* potential bugfix for planning/#2865
* speed up dump tests setup
* enable authentication for backup tests
* make arangodump provide a "serverId" to the server
this allows the server to track arangodump as an active dump client, so any data required for the dump may be retained while the dump is ongoing
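One way such tracking can look, as a hedged standalone sketch (the actual server-side bookkeeping may differ):

```cpp
#include <chrono>
#include <mutex>
#include <string>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

// Hypothetical tracker: arangodump sends a "serverId" with its requests,
// and the server refreshes that client's entry on every batch.
class DumpClientTracker {
 public:
  explicit DumpClientTracker(Clock::duration ttl) : _ttl(ttl) {}

  void seen(std::string const& serverId) {
    std::lock_guard<std::mutex> guard(_mutex);
    _lastSeen[serverId] = Clock::now();
  }

  // While any client is still active, data it may need is retained.
  bool anyActiveClient() {
    std::lock_guard<std::mutex> guard(_mutex);
    auto now = Clock::now();
    for (auto it = _lastSeen.begin(); it != _lastSeen.end();) {
      if (now - it->second > _ttl) {
        it = _lastSeen.erase(it);  // client went away: stop retaining for it
      } else {
        ++it;
      }
    }
    return !_lastSeen.empty();
  }

 private:
  Clock::duration _ttl;
  std::mutex _mutex;
  std::unordered_map<std::string, Clock::time_point> _lastSeen;
};
```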
* don't log binary stuff into the logfile
* issue 459.3: ensure collection permissions are checked before updating/dropping an IResearch view
* backport: ensure collection permissions are checked before updating/dropping an IResearch view on cluster
* backport: address test failures
* backport: address more test failures
* reuse existing classes for scoping ExecContext