* always import system collections first, so if importing some other collection fails later, the system collections are already in place
* properly disable enterprise features when restoring an enterprise dump into a community cluster
* fix handling of "duplicate name" issues when restoring system collections
* Prepare an API in ClusterMethods to create multiple collections in a single request, to speed up collection creation
* Added a counter for how many collections were successfully created
* Allow multi-collection creation one level higher
* CollectionMethods now allows batch creation of collections
* Improved array size assertions
* A graph is now created within a single roundtrip in the agency.
* Added new header files
* Insert collections into the agency with a TTL and an isBuilding flag; collections with this flag should not be visible in the coordinator
* Added forgotten C++ file
* Fixed a rare race condition and the failing IResearch tests
* re-added callback on DONE, otherwise lists get out of sync
* Fixed assertions to let mocked tests pass
* Fixed community cluster
* running compact() in the same transaction will only increase the data size on
disk, because the snapshot taken at transaction start prevents RocksDB from
physically removing any documents.
This change also exposes db.<collection>.compact() in the arangosh, in order to manually
run a compaction on the data range of a collection should it be needed for maintenance.
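To illustrate why a live snapshot blocks space reclamation, here is a minimal
sketch using the plain RocksDB API (not ArangoDB's wrapper):

    #include <rocksdb/db.h>

    void compactExample(rocksdb::DB* db) {
      const rocksdb::Snapshot* snap = db->GetSnapshot();
      db->Delete(rocksdb::WriteOptions(), "some-key");
      // while `snap` is alive, the old version of "some-key" must be kept,
      // so this compaction cannot reclaim its space:
      db->CompactRange(rocksdb::CompactRangeOptions(), nullptr, nullptr);
      db->ReleaseSnapshot(snap);
      // only now can a compaction physically drop the deleted version:
      db->CompactRange(rocksdb::CompactRangeOptions(), nullptr, nullptr);
    }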
* Ignore satellite collections in shrinkCluster in agency.
* Abort RemoveFollower job if not enough in-sync followers or leader failure.
* Break quick wait loop in supervision if leadership is lost.
* In case of resigned leader, set isReady=false in clusterInventory.
* Fix catch tests.
* added missing return statements
* only spend up to 10 seconds for initially fetching the list of collections in arangosh
fetching the list of collections is a blocking operation, and the default timeout
for it is very high. If the server is blocked for whatever reason, the shell is
unusable until the collection list request returns. To avoid this, the initial
request is limited to 10 seconds, so the shell remains usable afterwards.
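A minimal sketch of the idea (the client type and its timeout field are
hypothetical, not the real arangosh client API):

    // lower the request timeout only for the initial collection listing
    struct ShellClient {
      double requestTimeout = 300.0;  // default: very generous
    };

    void fetchCollectionList(ShellClient&) { /* issues the blocking request */ }

    void initialCollectionFetch(ShellClient& client) {
      double saved = client.requestTimeout;
      client.requestTimeout = 10.0;   // bound the blocking initial request
      fetchCollectionList(client);    // may time out; the shell stays usable
      client.requestTimeout = saved;  // restore the default afterwards
    }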
* if an index cannot be used for sorting, its sort
cost was previously returned as 0. This would in fact favor
indexes that can be used for filtering but not for sorting
over indexes that can be used for both.
This change reports the sort cost of indexes that
cannot be used for sorting as n * log(n), where n is the
number of documents the optimizer expects to come out of the
index after filtering.
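A minimal sketch of the revised cost model (the function name is illustrative,
and the base-2 logarithm is an assumption; the entry only says log(n)):

    #include <cmath>
    #include <cstddef>

    // sort cost charged for an index, given the number of documents the
    // optimizer expects the index to produce after filtering
    double sortCost(bool coversSort, std::size_t n) {
      if (coversSort) {
        return 0.0;  // index already delivers output in sort order
      }
      // charge n * log(n) for an explicit sort of the filtered result
      return n > 1 ? static_cast<double>(n) * std::log2(static_cast<double>(n))
                   : 0.0;
    }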
* added check for empty scheduler
* removed log, old is 1 not 0
* require running in this thread
* test
* added isDirect to callback
* signature fixed
* added drain
* added allowDirectHandling
* disabled for testing
* Add ExecContextScope object to direct call.
* try alternate initialization of ExecContextScope
* remove ExecContextScope, no help. try _fifoSize as part of direct decision.
* strand management to minimize reuse of same strand per listen socket
* blind attempt to address Jenkins shutdown lock up. may remove quickly.
* add filename and line to existing error log message
* Adjust queueOperation() to stop accepting items once isStopping() becomes true.
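A sketch of the adjusted admission check (the queue type shown here is
assumed, not the actual scheduler code):

    #include <atomic>
    #include <functional>
    #include <mutex>
    #include <queue>

    class OperationQueue {
      std::atomic<bool> _stopping{false};
      std::mutex _mutex;
      std::queue<std::function<void()>> _queue;

     public:
      bool isStopping() const { return _stopping.load(); }
      void beginShutdown() { _stopping.store(true); }

      // rejects new items once shutdown has started
      bool queueOperation(std::function<void()> op) {
        if (isStopping()) {
          return false;  // shutting down: do not accept further work
        }
        std::lock_guard<std::mutex> guard(_mutex);
        _queue.push(std::move(op));
        return true;
      }
    };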
* revert previous check-in to MMFilesCollectorThread.cpp
* big reformat
* fixed merge conflicts
* Add CHANGELOG entry.
* Added some DEBUG output for replication rest handler
* Some more debug logging.
* Increased the priority of the ReplicationHandler. This way we will not get stuck with locks that cannot be canceled. Also cancel the lock on the correct database.
* Added extensive log output for replication things
* Added tombstones to RestReplicationHandler. In a very unlikely case the cancel of a lock can be executed BEFORE the code that actually registers the lock, in this case we will now write a tombstone and do not lock.
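A minimal sketch of the tombstone idea (all names hypothetical, not the actual
RestReplicationHandler code):

    #include <mutex>
    #include <string>
    #include <unordered_set>

    class LockRegistry {
      std::mutex _mutex;
      std::unordered_set<std::string> _locks;       // registered read locks
      std::unordered_set<std::string> _tombstones;  // cancels seen early

     public:
      // returns false if a cancel already arrived for this lock id
      bool registerLock(std::string const& id) {
        std::lock_guard<std::mutex> guard(_mutex);
        if (_tombstones.erase(id) > 0) {
          return false;  // cancel raced ahead of us: do not lock
        }
        _locks.insert(id);
        return true;
      }

      void cancelLock(std::string const& id) {
        std::lock_guard<std::mutex> guard(_mutex);
        if (_locks.erase(id) == 0) {
          _tombstones.insert(id);  // registration not seen yet: leave tombstone
        }
      }
    };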
* Revert "Added extensive log output for replication thins"
This reverts commit 6d4e37ea1e59e3b3457336019cc7dbc4c979504d.
* Added extensive log output for replication things, now in ERR level instead of MAINTAINER only
* Now actually use hours for synchronization
* React to errors under soft lock if they show up.
* Added a retry loop to increase the read-lock timer.
* Added more timing output in RocksDB collection internals to figure out why followers are dropped
* Tweaked RocksDB options
* Revert "Tweaked RocksDB options"
This reverts commit 2bf9c43280beda4792c47d079387fe5154cdd896.
* Removed debug output
* Applied all changes requested by goedderz
* Deleted unused variable
* issue 496.5: backport 3.4: minor API cleanup and error reporting enhancements, update iresearch to commit d69f7bd184e009da7bf0a478efd34a0c85b74291
* add workaround for shell-collection-rocksdb-noncluster.js::testSystemSpecial test failure
* fix typo
* issue 496.3: backport 3.4: move more coordinator-related logic out of TRI_vocbase_t, rename some arangosearch view configuration parameters, remove some consolidation policies, update iresearch to revision 6fd9760d81b136f769e277ea5b8f53996ed7a1ca
* address merge issue
* backport: remove code causing nullptr access
* invalidate payload for each field in FieldIterator before setting a value
* address compilation issues
* Improve logging on coordinator when doing `arangorestore`.
* Return more error information in `mergeResults`.
* Longer timeout for communication coordinator -> leader for writes.
This takes into account possible write stops caused by followers that
need to get in sync.
* Fix compilation.
* Get rid of numbers in exception log messages.
* Fix compilation.
* Fix indentation.
* backport: switch scope of responsibility between a TRI_vocbase_t and a LogicalView in respect to view creation/deletion
* backport: ensure arangosearch links get exported in the dump
* backport: ensure view is created during restore on the coordinator
* Updates for ArangoSearch DDL tests, IResearchView unregistration and known issues
* Add fix for internal issue 483
* Implement `syncCollectionCatchup` in DatabaseTailingSyncer.
First stab, might not even compile.
* Fixed a typo.
* Fix a typo and a compilation problem.
* Further compilation fix.
* Implement two-stage catchup.
* Two small corrections.
* Unified error messages in Synchronize shard job.
* Improved a code comment.
* Fixed autocasting bool->double and double->bool issue. That is truly one of the best features ever invented... </irony>
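The pitfall behind this fix, reduced to a standalone example (C++ performs
these conversions implicitly and without complaint by default):

    #include <iostream>

    int main() {
      double threshold = true;  // compiles: threshold silently becomes 1.0
      bool enabled = 0.25;      // compiles: any non-zero double becomes true
      std::cout << threshold << " " << enabled << "\n";  // prints "1 1"
    }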
* Renamed doHardLock => toSoftLockOnly and inverted default value
* Merged soft/hard lock logic with Transaction splits
* Use scope guards to cancel read locks
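A minimal sketch of the scope-guard pattern (the lock functions are
placeholders for the real replication calls; C++17 for class template
argument deduction):

    #include <utility>

    void acquireReadLock();  // stands in for the real lock acquisition
    void cancelReadLock();   // stands in for the real lock cancellation

    template <typename F>
    class ScopeGuard {
      F _fn;
      bool _active = true;

     public:
      explicit ScopeGuard(F fn) : _fn(std::move(fn)) {}
      ~ScopeGuard() {
        if (_active) {
          _fn();  // runs on every exit path: return, throw, fall-through
        }
      }
      void dismiss() { _active = false; }  // skip cancellation if handed over
    };

    void doProtectedWork() {
      acquireReadLock();
      ScopeGuard guard([] { cancelReadLock(); });
      // ... work that may throw or return early; the guard cancels the
      // read lock no matter how this scope is left ...
    }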