* issue 526.9.1: implement swagger interface, add documentation
* address review comments
* add ngram
* Formatting
* Move REST description to new Analyzers top chapter in HTTP book
* Missed a DocuBlock
* Add Analyzers chapter to Manual SUMMARY.md
* Move REST API description back to Manual
Headlines were broken
* Add n-gram example
* issue 526.6: implement REST and V8 handlers for the iresearch analyzer feature
* fix typo
* remove excess comments
* temporarily comment out tests failing on MacOS
* temporarily comment out more MacOS-only test failures
* don't run compact() on a collection after a truncate() was done in the same transaction
Running compact() in the same transaction would only increase the data size on disk, because RocksDB cannot
physically remove any documents while the snapshot taken at transaction start is still in use.
Decoupling the compact operation from the truncate transaction allows the truncate transaction to finish first,
so the snapshot can be released. A compaction run afterwards is then free to physically remove all the data.
As a nice side effect, this change also speeds up the truncation of larger collections, because the subsequent
compaction runs faster.
This change also exposes db.<collection>.compact() in arangosh, so that a compaction can be run manually on
the data range of a collection should it be needed for maintenance.
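A minimal arangosh sketch of the intended usage; the collection name "demo" is illustrative and assumed to
already contain data:

  // truncate runs and commits in its own transaction, so the snapshot
  // it needs is released once it has finished
  db.demo.truncate();

  // afterwards, a manual compaction is free to physically reclaim the
  // disk space of the truncated data range
  db.demo.compact();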
* fix documentation anchors
* Ignore satellite collections in shrinkCluster in agency.
* Abort RemoveFollower job if not enough in-sync followers or leader failure.
* Break quick wait loop in supervision if leadership is lost.
* In case of a resigned leader, set isReady=false in clusterInventory.
* Fix catch tests.
* added missing return statements
* only spend up to 10 seconds for initially fetching the list of collections in arangosh
Fetching the list of collections is a blocking operation, and the default timeout for it is very high.
If the server is blocked for whatever reason, the shell is unusable until the collection list request returns.
To avoid this, the initial request is now limited to 10 seconds, so the shell can be used afterwards.
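The pattern, sketched as standalone JavaScript rather than the actual arangosh client code (the base URL and
function name are illustrative; GET /_api/collection is the regular collection listing endpoint):

  // illustration only: cap the initial collection list fetch at 10 seconds,
  // so a stalled server does not block shell startup indefinitely
  async function fetchCollectionList(baseUrl) {
    try {
      const response = await fetch(baseUrl + "/_api/collection", {
        signal: AbortSignal.timeout(10000), // 10 second cap
      });
      return (await response.json()).result;
    } catch (err) {
      // on timeout (or any other error), continue without the list
      return [];
    }
  }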
* if an index cannot be used for sorting, its sort cost was previously returned as 0. This in fact favored
indexes that can be used for filtering but not for sorting over indexes that can be used for both.
This change reports the sort cost of indexes that cannot be used for sorting as n * log(n), where n is the
number of documents the optimizer expects to come out of the index after filtering.
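A rough sketch of the new estimate (function and variable names are illustrative, not the actual optimizer
code):

  // illustrative only: sort cost estimate for an index that cannot
  // provide the requested sort order
  function estimatedSortCost(expectedItems) {
    if (expectedItems <= 1) {
      return 0; // nothing to sort
    }
    // n * log(n): cost of sorting the documents the index is expected
    // to produce after filtering
    return expectedItems * Math.log(expectedItems);
  }

  // an index expected to yield 1000 documents after filtering now
  // carries a non-zero sort cost instead of 0
  estimatedSortCost(1000); // ~6907.8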