* Refactor ConstantWeightShortestPathFinder tests
* Move the functions used by both ConstantWeightShortestPathFinder
and KShortestPathsFinder into GraphTestTools
* No longer display spurious error messages when there are multiple candidates
for found paths
* Use factored out code for Graph testing in KShortestPathsFinder
* Removed useless parameters of buildCallback
* Renamed produceRow to produceRows and adapted a comment
* Renamed BlockFetcher to DependencyProxy
* Applied git-clang-format
* K_SHORTEST_PATHS queries only support one variable in FOR
* catch the case of more than one variable being given in grammar.y
* also emit a correct error message
* Added test code for the check that rejects too many variables for K_SHORTEST_PATHS (see the sketch below)
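The restriction above can be illustrated with a small self-contained sketch. This is not the actual grammar.y/Ast code; ForClause and validateForClause are hypothetical stand-ins that only show the shape of the check: a K_SHORTEST_PATHS FOR clause must bind exactly one (path) variable, otherwise a clear error is raised.

```cpp
// Hypothetical sketch of the parser-side check described above; ForClause and
// validateForClause are illustrative stand-ins, not the actual grammar.y/Ast code.
#include <stdexcept>
#include <string>
#include <vector>

struct ForClause {
  std::vector<std::string> variables;  // variables bound by FOR v[, w, e] IN ...
  bool isKShortestPaths;               // true for FOR ... IN ... K_SHORTEST_PATHS ...
};

void validateForClause(ForClause const& clause) {
  // a K_SHORTEST_PATHS traversal yields a single path variable, so anything
  // other than exactly one variable is rejected with a descriptive error
  if (clause.isKShortestPaths && clause.variables.size() != 1) {
    throw std::invalid_argument(
        "K_SHORTEST_PATHS queries support exactly one variable in FOR, got " +
        std::to_string(clause.variables.size()));
  }
}

int main() {
  validateForClause({{"p"}, true});         // ok: a single path variable
  try {
    validateForClause({{"v", "e"}, true});  // rejected: two variables
  } catch (std::invalid_argument const&) {
    // "...support exactly one variable in FOR, got 2"
  }
}
```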
* issue 535.1: ensure recovery success if link recovery starts right at the previous marker
* backport: initialize members
* backport: use string_ref instead of string copies
* issue 526.9.1: implement swagger interface, add documentation
* address review comments
* add ngram
* Formatting
* Move REST description to new Analyzers top chapter in HTTP book
* Missed a DocuBlock
* Add Analyzers chapter to Manual SUMMARY.md
* Move REST API description back to the Manual, since the headlines were broken
* Add n-gram example
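For readers unfamiliar with the analyzer the documentation entries above describe, here is a rough, self-contained illustration (not ArangoDB code) of what an n-gram analyzer emits: every character substring of the input whose length lies between a configured minimum and maximum.

```cpp
// Self-contained illustration (not ArangoDB code) of what an n-gram analyzer
// emits: all character substrings with a length between 'min' and 'max'.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> ngrams(std::string const& input, std::size_t min, std::size_t max) {
  std::vector<std::string> result;
  for (std::size_t start = 0; start < input.size(); ++start) {
    for (std::size_t len = min; len <= max && start + len <= input.size(); ++len) {
      result.push_back(input.substr(start, len));
    }
  }
  return result;
}

int main() {
  for (auto const& gram : ngrams("foo", 2, 3)) {
    std::cout << gram << "\n";  // prints: fo, foo, oo
  }
}
```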
* Added RemoteExecutor skeleton
* Moved RemoteBlock implementations to ExecutionBlockImpl<RemoteExecutor>
* Remove unnecessary include to avoid unused function warnings
* Fixed gcc compile error
* Moved Scatter/Distribute block implementations to their new Executor versions
* Applied clang-format
* Added factory, infos and a skeleton for the unordered view executor
* Removed assert based on wrong assumption
* Added members from IResearchViewBlockBase to IResearchViewExecutor
* Moved more code into the ViewExecutor, hopefully enough to produce a working version now
* Added missing reset code, made produceRow work mostly correctly
* Removed superfluous parentheses to get more useful output from Catch
* Ported fix 923b6e81ac723d1fe37f8e7bf1ab81149f3a08ef
Original commit message was:
Fixed a race condition in RemoteBlock which was triggered during
shutdown overtaking getSome.
* Applied review comments
* Inject the input row instead of an item block plus position into the expression context, and fixed some tests
* Adapted test. Search tests are now green.
* Do not ask upstream when already DONE
* Removed `limit` from next()
* Simplified code that could handle producing more than one document
* Minor readability change
* Solved two TODOs noted in the review
* Removed leftover references to DistributeNode members in the DistributeBlock
* Reverted removal of "exhausted"
* WIP: Implemented variant with scorers
* Fixed compile errors of the last commit
* Fixed some asserts and calculations
* Fixed violated assertions
* Moved files from IResearch/ to Aql/
* Replaced recursive call with a loop
* Worked on a few TODOs
* Removed IResearchViewBlock
* Set input registers correctly
* Eliminated the dependency on the Node in the Executor
* Don't misuse the volatility variables for initialization
* Extended a TODO note
* Removed obsolete includes
* Removed an obsolete include from the tests
* Added missing include
* Read PKs in batches
* Fixed merge conflict
* Fixed merge conflict
* Restrict prefetching of PKs to the number of rows in the current output block
* Fixed merge
* Fix IResearch ASan errors
* Revert "Restrict prefetching of PKs to the number of rows in the current output block"
This reverts commit e0fd8698a3.
* Revert "Read PKs in batches"
This reverts commit c06c4d7a36.
* Began some small step refactoring to introduce batch-reading correctly
* Extracted method fillBuffer
* Extracted method evaluateScores
* Minor changes
* Read data from iresearch index in batches
* Replaced std::deque<IndexResult> buffer by a new class
* Solved minor TODOs
* Fixed last commit
* Fixed merge conflict
* Removed accidentally re-added view blocks
* Implemented SharedAqlItemBlockPtr
* Replaced all uses of AqlItemBlockShell, shared_ptr<AqlItemBlock>, and unique_ptr<AqlItemBlock> with SharedAqlItemBlockPtr
* Removed AqlItemBlockShell
* Bugfixes
* Added missing noexcept (used in returnBlock())
* Added nullptr constructor/operator= and noexcept specs
* Removed references to the shell
* Implemented review comments
* Fixed a compile error clang somehow ignored
* Two bugfixes and additional asserts
* Fixed ASan error
* Made the AqlItemBlock destructor protected, and some cleanup
* added SharedAqlItemBlockPtr include in QueryCursor.h
* Added some asserts
* Fixed merge conflicts
* Made returnBlock in AqlItemBlockManager protected
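The SharedAqlItemBlockPtr entries above boil down to an intrusive reference-counted handle that hands the block back to its manager (compare the returnBlock() and noexcept notes) once the last owner releases it. The following is a simplified, hypothetical sketch of that idea; ItemBlock, ItemBlockManager and SharedItemBlockPtr are illustrative stand-ins, not the actual ArangoDB classes.

```cpp
// Simplified sketch of an intrusive ref-counted block handle; names are illustrative.
#include <cstddef>
#include <utility>

class ItemBlockManager;

class ItemBlock {
 public:
  explicit ItemBlock(ItemBlockManager& manager) : _manager(manager) {}
  void incrRefCount() noexcept { ++_refCount; }
  std::size_t decrRefCount() noexcept { return --_refCount; }
  ItemBlockManager& blockManager() noexcept { return _manager; }

 private:
  ItemBlockManager& _manager;
  std::size_t _refCount = 0;
};

class ItemBlockManager {
 public:
  // a real manager would recycle blocks into a pool; deleting is enough here.
  // noexcept so the smart pointer's destructor can call it safely.
  void returnBlock(ItemBlock* block) noexcept { delete block; }
};

class SharedItemBlockPtr {
 public:
  SharedItemBlockPtr() noexcept = default;
  explicit SharedItemBlockPtr(ItemBlock* block) noexcept : _block(block) {
    if (_block != nullptr) { _block->incrRefCount(); }
  }
  SharedItemBlockPtr(SharedItemBlockPtr const& other) noexcept : _block(other._block) {
    if (_block != nullptr) { _block->incrRefCount(); }
  }
  SharedItemBlockPtr(SharedItemBlockPtr&& other) noexcept
      : _block(std::exchange(other._block, nullptr)) {}
  SharedItemBlockPtr& operator=(SharedItemBlockPtr other) noexcept {
    std::swap(_block, other._block);  // copy-and-swap; old block released by 'other'
    return *this;
  }
  ~SharedItemBlockPtr() { reset(); }

  ItemBlock* get() const noexcept { return _block; }

 private:
  void reset() noexcept {
    if (_block != nullptr && _block->decrRefCount() == 0) {
      _block->blockManager().returnBlock(_block);
    }
    _block = nullptr;
  }
  ItemBlock* _block = nullptr;
};

int main() {
  ItemBlockManager manager;
  SharedItemBlockPtr p1(new ItemBlock(manager));
  SharedItemBlockPtr p2 = p1;  // refcount 2, both handles share the block
  p1 = SharedItemBlockPtr{};   // refcount 1
}  // p2 goes out of scope, refcount 0: block is handed back to the manager
```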
* optimize away the SortNode in case it is covered by an ArangoSearch view (see the sketch below)
this implementation is a stub with hard-coded attribute names
* extend IResearchViewMeta with sorting order definition
* make 'IResearchViewMeta::Sort' compatible with 'SortCondition' API
* ensure the ArangoSearch sort is immutable after creation
* address review comments
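A self-contained sketch of the coverage check hinted at above, under the assumption that the optimization compares the query's SORT specification against the sort order stored with the view: the SortNode can only be dropped when the requested sort is a prefix of the view's sort with matching attributes and directions. All types and names below are illustrative, not the actual SortCondition / IResearchViewMeta::Sort API.

```cpp
// Illustrative types only; not the actual SortCondition / IResearchViewMeta::Sort API.
#include <cstddef>
#include <string>
#include <vector>

struct SortField {
  std::string attribute;  // e.g. "doc.name"
  bool ascending;
};

using ViewSort  = std::vector<SortField>;  // sort order the view maintains on disk
using QuerySort = std::vector<SortField>;  // sort requested by the query's SORT clause

// The SortNode can be optimized away only if the requested sort is a prefix of
// the view's stored sort, with matching attributes and directions.
bool viewCoversSort(ViewSort const& viewSort, QuerySort const& querySort) {
  if (querySort.size() > viewSort.size()) {
    return false;
  }
  for (std::size_t i = 0; i < querySort.size(); ++i) {
    if (viewSort[i].attribute != querySort[i].attribute ||
        viewSort[i].ascending != querySort[i].ascending) {
      return false;
    }
  }
  return true;
}

int main() {
  ViewSort viewSort{{"doc.name", true}, {"doc.age", false}};
  QuerySort querySort{{"doc.name", true}};
  bool removable = viewCoversSort(viewSort, querySort);  // true: the SortNode can go
  (void)removable;
}
```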
* remove unused functions
* more cleanup, virtual functions
* removed old functions
* more cleanup
* more cleanup, removed unused code
* removed inheritRegisters and clearRegisters
* more cleanup, move functions into IMPL and cluster blocks
* fixed use of the wrong shutdown function
* removed unreachable code
* moved lots of ExecutionBlock stuff to Impl and ClusterBlocks
* moved trace functions back to ExecutionBlock
* removed trace
* removed an empty protected section
* this getBlock addition might be unnecessary
* more fixes
* fixes for the distribute executor, hopefully almost done now
* removed obsolete todos
* fixed the order of shutdown variables
* applied requested changes
* re-added const
* suppress a warning
* added forgotten test changes
* defaulted the destructor, removed an unneeded function
* refactored a name
* Update tests/CMakeLists.txt
Co-Authored-By: hkernbach <hkernbach@users.noreply.github.com>
* Port agency performance tuning for many shards to devel.
* Add more IDs to LOG_TOPIC calls.
* Even more IDs for LOG_TOPIC.
* Fix a duplicate LOG_TOPIC ID.
* Fix an old merging bug in devel.
* Don't hesitate between phases one and two for small clusters.
* issue 526.6: implement REST and V8 handlers for the iresearch analyzer feature
* address typo
* remove excess comments
* temporarily comment out tests failing on MacOS
* temporarily comment out more MacOS-only test failures
* precondition on Plan/Version in compaction; make store TTL removal independent of the local _ttl set
* Agency init loops break when shutting down.
* Fixed assertion failures in the store when restarting following agents
* Minor porting fixes from 3.4
* Fixed maybe-uninitialized warnings by removing unnecessary boost::optionals
* Fixed use after free
* Update arangod/Aql/SingleRemoteModificationExecutor.cpp
* Fixed another automatic number=>bool cast, thanks C!
* Fixed wrong usage of almost identically named variables
* issue 526.3: update analyzer feature to store analyzer definitions in per-vocbase system collections
* address merge issues
* address another merge issue
* don't run compact() on a collection after a truncate() was done in the same transaction
Running compact() in the same transaction will only increase the data size on disk, because RocksDB cannot physically remove
any documents while the snapshot taken at transaction start is still around.
Decoupling the truncate transaction from the compact operation allows finishing the truncate transaction first, so we can
get rid of the snapshot. Running compact afterwards is then free to physically remove all the data.
As a nice side effect, this change also speeds up the truncation of larger collections, because the compact will run faster.
This change also exposes db.<collection>.compact() in the arangosh, in order to manually run a compaction on the data
range of a collection should that be needed for maintenance.
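A minimal standalone sketch of the ordering described above, written against plain RocksDB rather than ArangoDB's storage engine code (the key bounds and database path are made up): delete the collection's key range first, finish that write so no snapshot pins the old versions, and only then compact the range so RocksDB can physically drop the deleted documents.

```cpp
// Standalone RocksDB sketch, not ArangoDB engine code; key bounds and the
// database path are made up for illustration.
#include <cassert>
#include <rocksdb/db.h>

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/truncate-compact-demo", &db);
  assert(status.ok());

  // Step 1: the "truncate": delete the collection's whole key range in its own
  // write. Once this write (and any snapshot held for its transaction) is
  // finished, nothing pins the old document versions anymore.
  rocksdb::Slice begin("collection/42/");
  rocksdb::Slice end("collection/42/\xff");
  status = db->DeleteRange(rocksdb::WriteOptions(), db->DefaultColumnFamily(), begin, end);
  assert(status.ok());

  // Step 2: only afterwards compact the same range; with no snapshot in the
  // way, RocksDB can physically drop the deleted documents and reclaim disk space.
  rocksdb::CompactRangeOptions compactOptions;
  status = db->CompactRange(compactOptions, &begin, &end);
  assert(status.ok());

  delete db;
  return 0;
}
```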
* fix documentation anchors
* Ignore satellite collections in shrinkCluster in agency.
* Abort RemoveFollower job if not enough in-sync followers or leader failure.
* Break quick wait loop in supervision if leadership is lost.
* In case of resigned leader, set isReady=false in clusterInventory.
* Fix catch tests.
* issue 523.1: address build issues, ensure FlushFeature subscriptions are cleared during stop(), assert that they are deallocated
* backport: account for Flush subscriptions validly surviving past FlushFeature::stop()
* fix comment typo
* initial commit
applied requested changes
added tests
removed old distinct collect block code
added a test for the distinct executor
* merged devel and added tests including input data
* added test
* added one more input variable, added another waiting test