* issue 459.3: ensure collection permissions are checked before updating/dropping an IResearch view
* backport: ensure collection permissions are checked before updating/dropping an IResearch view on cluster
* backport: address test failures
* backport: address more test failures
* reuse existing classes for scoping ExecContext
* Schmutz is now called "Maintenance" and is implemented completely in C++
* Fix an index locking bug in MMFiles
* Fix a bug in MMFiles with the silent option and repsert
* Slightly increase supervision okperiod and graceperiod
* initial check-in of isRetryOK(); includes fixes to known code that previously hung shutdowns by retrying indefinitely
* slight help in getting out of a loop faster during shutdown; not essential
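A minimal sketch of the retry-guard idea behind isRetryOK(), with all names illustrative rather than taken from the actual ArangoDB code:

```cpp
// Sketch only: a shutdown-aware retry guard. The real isRetryOK()
// consults the application's shutdown state; everything here is a
// hypothetical stand-in.
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> shuttingDown{false};

// true while retrying is still acceptable
bool isRetryOK() { return !shuttingDown.load(std::memory_order_relaxed); }

template <typename Op>
bool retryUntilShutdown(Op&& op) {
  while (isRetryOK()) {   // bail out instead of retrying forever
    if (op()) {
      return true;        // operation succeeded
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
  }
  return false;           // shutdown requested: give up cleanly
}
```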
* Only update Plan and Current from Agency if not already done.
* Add read protection for getPlanVersion and getCurrentVersion.
* Add a further check to loadPlan and loadCurrent.
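A sketch of the guarded reload described in the three items above, assuming a simple version-number cache (illustrative names, not the actual ClusterInfo code):

```cpp
// Sketch only: reload Plan from the agency just once per version,
// with read protection on the getter.
#include <cstdint>
#include <mutex>
#include <shared_mutex>

class PlanCache {
  mutable std::shared_mutex _lock;  // read protection for getPlanVersion
  uint64_t _planVersion = 0;

 public:
  uint64_t getPlanVersion() const {
    std::shared_lock<std::shared_mutex> guard(_lock);
    return _planVersion;
  }

  // only update from the agency if not already done for this version
  void maybeLoadPlan(uint64_t agencyVersion) {
    std::unique_lock<std::shared_mutex> guard(_lock);
    if (agencyVersion <= _planVersion) {
      return;  // another thread already loaded this version
    }
    // ... fetch Plan from the agency and rebuild local caches ...
    _planVersion = agencyVersion;
  }
};
```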
* Fix tests to new behaviour.
* Try to increase Plan/Version and Current/Version with every change.
* Add two more increments of Plan/Version
* Add missing increments in tests for Plan/Version.
* Add changelog entry.
* start implementing arangosearch cluster tests.
* backport: ensure view lookup is done via CollectionNameResolver, ensure updateProperties returns the current view properties
* first attempt to fix failing tests
* refactor cluster wide view creation logic
* if view is not found in the new plan then check the old plan too
* ensure the cluster-wide view is looked up in vocbase as well on startup/recovery
* do not store cluster-wide IResearchView in vocbase
* move stale view cleanup to the shared pointer deleter, address test failures
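The deleter-based cleanup can be pictured like this; the types and registry hook are hypothetical stand-ins for the real per-shard view machinery:

```cpp
// Sketch only: when the last shared_ptr to a view is released, the
// custom deleter unregisters the (now stale) view before destroying it.
#include <iostream>
#include <memory>
#include <string>
#include <utility>

struct View { std::string name; };

// hypothetical registry hook standing in for the server-wide view map
void unregisterView(std::string const& name) {
  std::cout << "unregister " << name << "\n";
}

std::shared_ptr<View> makeRegisteredView(std::string name) {
  // the cleanup runs exactly once, on last release of the view
  return std::shared_ptr<View>(new View{std::move(name)}, [](View* v) {
    unregisterView(v->name);  // stale-view cleanup in the deleter
    delete v;
  });
}
```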
* do not print warning
* enable arangosearch tests by default
* fix catch tests
* address incorrect return value for cluster-wide links
* address some issues with test failures due to cluster-view allocated within TRI_vocbase_t
* simplify per-cid view name, address 'catch' test failures
* ensure IResearchViewNode volatility is properly calculated in cluster
* invoke callbacks directly in AgencyMock instead of waiting for timeout
* ensure view updates via JavaScript always use the latest view definition
* pass a list of shards to `IResearchViewDBServer::snapshot`
* extend cluster aql tests
* fixes after merge
* fix class/struct inconsistencies
* comment failing tests
* remove debug logging
* add debug function
* tests cleanup
* simplify upcoming merge: pass the resolver in from the outside
* backport: move all transaction status callback logic to Methods
* add changes missed from previous commit
* fix js and ruby tests
* more tests for IResearchViewNode
* pass transaction to IResearchViewDBServer::snapshot, address IResearchViewDBServer tests segfault
* pass transaction to IResearchView::snapshot instead of transaction state
* temporarily add trace log output to tests to try to find the cause of the core dump on Jenkins
* add more temporary debug output to trace down the segfault on Jenkins
* add even more temporary debug output to trace down the segfault on Jenkins
* ensure View-related maps are cleared during shutdown
* reset ClusterInfo::instance() before DatabaseFeature::unprepare()
* remove extraneous debug output
* missed line from previous commit
* uncomment required line
* add nullptr checks to RocksDBIndexFactory::prepareIndexes(...) similar to the ones in MMFilesIndexFactory::prepareIndexes(...)
* attempt to fix deadlock in tests
* add comment as per reviewer request
* fix aql test suite name
* add some debug logging
* address deadlock between ClusterInfo::loadPlan() and CollectionNameResolver::localNameLookup(...)
* explicitly state which index definition failed in the log message
* use the vocbase from the shard-view instead, just in case
* explicitly state which index definition failed in the log message
* do not create shard-view instances from cluster-link instances (only register existing ones)
* add some tests
* add initial implementation of scatter view rule and node
* add tests for `IResearchViewNode` and `IResearchViewScatterNode`
* add missing check
* modify IResearch execution nodes to use references instead of pointers
* use the view id in the serialized `ExecutionNode` representation instead of the name
* add cluster mode stubs and checks
* very first attempt to distribute IResearchViewNode
* further implementation of cluster-wide arangosearch views
* fix invalid json format
* add tests for coordinator iresearch view
* allow to retrieve a list of existing views on a coordinator
* more tests for coordinator iresearch view
* some fixes to enable query explanation
* remove Collection dependency from RemoteNode
* remove unnecessary remote ArangoSearch view scatter
* fix explanation appearance
* add some assertions
* minor fixes
* implement IResearchViewCoordinator::updateProperties
* fix view DDL issues
* handle link modifications in DDL operations
* add coordinator implementation of iresearch view links
* fix tests
* further coordinator based view DDL implementation
* further IResearchViewCoordinator implementation
* add initial implementation of AgencyMock
* fix some tests
* code cleanup
* extend test + some fixes
* more tests for IResearchViewCoordinator
* fix tests for IResearchLinkCoordinator
* some fixes after merge
* fix tests
* remove declaration of nonexistent (previously removed) method
* some fixes after review
* remove string duplication
* more tests and fixes
* more fixes and tests
* more tests
* one more test
* fix 'use-after-free' ASan error
* fix non-enterprise tests issues
* fix missed changes to Plan after the agency callback is registered for collection creation
* Force check in timeout case.
* Sort out RestAgencyHandler behaviour for inquire.
* Take "ongoing" stuff out of AgencyComm.
* remove some now-unused V8 persistents
* do not trigger so many bogus assertions
* do not rely on server role being defined
* slightly better debug output for V8 context debugging
* fix collection ids in inventory response
* simplify bootstrap a bit
* slightly better error handling
* make elapsed time a queryable value
* use less memory for stub collections
* added assertions that will always make sense
* added assertions
* do not garbage-collect while waiting
* less copying of parameters
* do not show "load indexes into memory" buttons for the MMFiles engine,
as all indexes are in memory anyway
* when a collection is truncated via the web interface, flush the WAL and rotate all active journals;
this will close all open journals on the leader and followers and make them subject to compaction
* fix invalid server id values being passed from web interface to backend
* introduce afterTruncate method for indexes
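The shape of such a hook, heavily simplified; the real method lives on arangodb::Index and takes engine-specific arguments:

```cpp
// Sketch only: a post-truncate hook lets each index wipe its data
// wholesale instead of removing entries document by document.
class Index {
 public:
  virtual ~Index() = default;
  // called once after all documents of the collection were removed
  virtual void afterTruncate() {}
};

class InMemoryIndex : public Index {
 public:
  void afterTruncate() override {
    // drop all in-memory index data in one sweep
  }
};
```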
* added test case for issue #3447
* updated CHANGELOG
* don't warn about replicationFactor for system collections
* check that the queries actually use the geo index and not some other index
* properly report error in web interface
* fix some internals checks that made truncate fail for bigger collections in maintainer mode
* also run a compact() operation after a serious truncate
in order to make iteration over the truncated range much faster
when the collection is next accessed
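Assuming the RocksDB engine, the compaction step amounts to a CompactRange over the collection's key bounds, which rewrites the range and drops the deletion tombstones left behind by the truncate:

```cpp
// Sketch only: compact the truncated key range so that later
// iteration does not have to skip over masses of tombstones.
#include <rocksdb/db.h>

void compactAfterTruncate(rocksdb::DB* db, rocksdb::Slice lower,
                          rocksdb::Slice upper) {
  rocksdb::CompactRangeOptions options;
  rocksdb::Status s = db->CompactRange(options, &lower, &upper);
  (void)s;  // error handling omitted in this sketch
}
```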
* increase default maximum number of V8 contexts to at least 16
* Get rid of a compiler warning in community edition.
* Teach /_api/replication/clusterInventory to report Plan/Version and readiness.
This is first implemented in the ClusterInfo library.
Then the clusterInventory code uses it and checks readiness.
Readiness of a collection means that it has been created and all shards
and all replicas have been created and are in sync.
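A sketch of that readiness predicate over Plan and Current shard maps (illustrative data model, not the actual ClusterInfo code):

```cpp
// Sketch only: a collection is "ready" when every planned shard exists
// in Current and every planned replica is listed there as in sync.
#include <map>
#include <set>
#include <string>
#include <vector>

// shard id -> list of servers (first one is the leader)
using ShardMap = std::map<std::string, std::vector<std::string>>;

bool isReady(ShardMap const& plan, ShardMap const& current) {
  for (auto const& [shard, plannedServers] : plan) {
    auto it = current.find(shard);
    if (it == current.end()) {
      return false;  // shard has not been created yet
    }
    std::set<std::string> inSync(it->second.begin(), it->second.end());
    for (auto const& server : plannedServers) {
      if (inSync.count(server) == 0) {
        return false;  // a planned replica is not in sync yet
      }
    }
  }
  return true;
}
```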
* added debugging methods
* try to fix invalid access in case of error
* remove unused members
* bugfixes and comments
* all agency fixes in
* fix a merge bug
* fix partially unguarded Agent::lead()
* all agency fixes in
* added nrBlocked to thread startup eval
* added nrBlocked to thread startup eval
* recombination of cases in State::get
* some maps replaced with unordered_maps
* optimized some maps
* added unique id to cluster, added access to Health
* added agents to health api
* added agents to health api
* added agents to health api
* add transaction information to the API
* agents listed like other servers
* restore a line lost through a merge conflict
* Take out 503 timeouts altogether.
* Overhaul of AgencyComm::sendWithFailover loop.
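The general shape of such a failover loop, sketched with a stub transport (the real AgencyComm::sendWithFailover additionally handles redirects and agency leader changes):

```cpp
// Sketch only: rotate through the known endpoints until one answers
// successfully or an overall deadline expires.
#include <chrono>
#include <string>
#include <vector>

struct Response { int code = 0; };

// stub transport standing in for the real HTTP client
Response sendTo(std::string const& /*endpoint*/) { return Response{503}; }

Response sendWithFailover(std::vector<std::string> const& endpoints,
                          std::chrono::steady_clock::duration timeout) {
  Response res;
  if (endpoints.empty()) {
    return res;
  }
  auto deadline = std::chrono::steady_clock::now() + timeout;
  size_t i = 0;
  while (std::chrono::steady_clock::now() < deadline) {
    res = sendTo(endpoints[i++ % endpoints.size()]);
    if (res.code >= 200 && res.code < 300) {
      return res;  // success
    }
    // otherwise rotate to the next endpoint and try again
  }
  return res;  // give up after the overall deadline
}
```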
* Let performRequests optionally ignore 404 "collection not found".
* Fix error message "database not found" when AgencyComm failed.
* Add log entries in Agency if locks are acquired too slowly.
* Re-execute the JavaScript cluster sync even if there was no Plan/Current change, so failed sync jobs can retry later.
* Cover callbacks in Communicator by lock. This fixes https://github.com/arangodb/planning/issues/370
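The pattern behind such a fix, with hypothetical types: mutate the callback table only while holding a mutex, and invoke the callback outside of it:

```cpp
// Sketch only: lock-protected callback registry.
#include <cstdint>
#include <functional>
#include <map>
#include <mutex>
#include <utility>

class Communicator {
  std::mutex _lock;
  std::map<uint64_t, std::function<void(int)>> _callbacks;
  uint64_t _nextId = 0;

 public:
  uint64_t registerCallback(std::function<void(int)> cb) {
    std::lock_guard<std::mutex> guard(_lock);
    uint64_t id = _nextId++;
    _callbacks.emplace(id, std::move(cb));
    return id;
  }

  void complete(uint64_t id, int result) {
    std::function<void(int)> cb;
    {
      std::lock_guard<std::mutex> guard(_lock);
      auto it = _callbacks.find(id);
      if (it == _callbacks.end()) {
        return;  // unknown or already completed request
      }
      cb = std::move(it->second);
      _callbacks.erase(it);  // remove under the lock ...
    }
    cb(result);  // ... but invoke outside of it
  }
};
```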
* Put in delay in waiting for leader in agency test.
* Send Schmutz logging to the heartbeat topic.
* Add more lock time diagnostic in agent.
* Switch on agencycomm tracing in coordinator.
* Improve Foxx cluster resilience
Fixes #2083, #2384, #2408
Addresses #1892
* Port old Foxx API
* Implement single-file services
* Add console.errorStack/warnStack/infoStack helpers
* Simplify serviceInfo validation
* Extract GitHub/upload logic into Aardvark and the old FM API
* Move generator logic into Aardvark
* Move zip/js buffer logic into FM core
* Add Foxxmanager tests
* Send an empty response when there is no README
* Disambiguate script arg format
Historically we allowed passing either an array of positional arguments or an arbitrary first argument.
This is surprising behaviour, so we should just always treat the value as the first argument.
* Rebuild bundle in development mode
* Nicer HTTP docs formatting
* Create Foxx HTTP docs
* Simplify service upload handling
* Remove inline swagger docs
* Implement public download route
* Consistency
* Rebuild aardvark
* Move bundle route into /_api/foxx/_local
* Rebuild Swagger API docs
* Add changes to CHANGELOG
* More docs