* we must now tolerate datafiles that are not sealed
this is because an unsealed datafile may have been produced by
renaming multiple journals to datafiles at server start (see the sketch below)
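
A minimal sketch of the relaxed validation, assuming a hypothetical
Datafile type with sealed/journal flags (illustrative names, not
ArangoDB's actual MMFiles API):

    // sketch: tolerate unsealed datafiles during validation
    #include <vector>

    struct Datafile {
      bool isSealed;   // set once a datafile has been sealed
      bool isJournal;  // true while the file is still a journal
    };

    bool validateDatafiles(std::vector<Datafile> const& files) {
      for (auto const& f : files) {
        if (!f.isJournal && !f.isSealed) {
          // previously: an unsealed datafile was an error here.
          // now: tolerated, because the file may be a journal that was
          // renamed to a datafile at server start and never sealed
        }
      }
      return true;
    }
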
* acquire collection count after we have acquired the lock
* count the null byte as well
* fix count value acquisition
* send query fragments to the correct servers, even after failover or when a follower drops
the problem with the previous shard-based approach was that responsibility for shards may change at runtime.
an AQL query, however, must send all of its requests to the servers it was initially instantiated on:
if there is a failover while the query is executing, all subsequent requests must still go to the same
servers, not to the newly responsible servers. otherwise we could end up requesting data for a query
from server B while the query was only instantiated on server A (see the sketch below)
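
A minimal sketch of the routing idea, assuming a hypothetical
QueryServerMap (illustrative, not ArangoDB's actual implementation):

    // sketch: pin each query to the server it was instantiated on
    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <utility>

    using QueryId = uint64_t;

    class QueryServerMap {
      std::unordered_map<QueryId, std::string> _servers;  // query id -> server

     public:
      // remember the server a query fragment was instantiated on
      void registerQuery(QueryId id, std::string server) {
        _servers.emplace(id, std::move(server));
      }

      // all follow-up requests for the query go to the registered server,
      // even if shard responsibility has moved to another server meanwhile
      std::string serverForQuery(QueryId id,
                                 std::string const& currentlyResponsible) const {
        auto it = _servers.find(id);
        return it != _servers.end() ? it->second : currentlyResponsible;
      }
    };

Keying the lookup on the query id instead of the shard makes the routing
immune to shard responsibility moving at runtime.
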
* Count as checksum
* Make readLockId optional as well so upgrades are possible
* fix option name in startup script
* fix some replication issues with RocksDB engine
* simplify index API a bit
* fix fulltext index removal performance, simplified code
* updated CHANGELOG
* fix hanging test
* try to fix shutdown problem
* improve fulltext query performance
* fixed duplicate var
* removed obsolete code
* fix some shutdown races
* do not call ensureIndex that often
* Added a backup test suite. This suite is supposed to entirely drop an ArangoDB _system database and restore it into a fresh one. This also includes system collections.
* Added more test cases for backup suite. Now tests several authorization/rights scenarios.
* Fixed RestReplication handlers to restore the _users collection properly.
* Updated Changelog
* Added special handling of _users in Restore for MMFiles as well.
* Added JWT secret for cluster execution of this test, also added JWT secret to shutdown call
* reduce extractions to projections
* recycle string buffers in SocketTask
* micro optimizations for mmfiles indexes
* added special lookup function for _key
* moved function into the correct file
* speed up key buffer allocations a bit
* added noexcept specifier
* correctly name variable
* explicitly move bounds
* fix and speedup from/toPersistent functions
* reuse string from ManagedDocumentResult for multiple lookups
* use move-assign
* a bit less work for single server
* speedup AQL function HASH
* single fetch optimization
* performance optimization for the case when no documents need to be returned
* make reduce-extraction-to-projection a RocksDB-only optimizer rule
* cppcheck
* try to fix compile error on MacOS
* bug fix for MacOSX
* missing namespace (in Windows compile)
* Take out 503 timeouts altogether.
* Overhaul of AgencyComm::sendWithFailover loop.
* Let performRequests optionally ignore 404 "collection not found" errors.
* Fix error message "database not found" when AgencyComm failed.
* Add log entries in Agency if locks are acquired too slowly.
* Re-execute the JavaScript cluster sync even if there was no Plan/Current change, so that failed sync jobs can retry later.
* Protect callbacks in Communicator with a lock. This fixes https://github.com/arangodb/planning/issues/370
* Add a delay when waiting for the leader in the agency test.
* Send Schmutz logging to the heartbeat log topic.
* Add more lock-time diagnostics in the agent.
* Switch on AgencyComm tracing in the coordinator.
* the database feature now comes first in the startup order. this ensures that the mmfiles collector thread
(owned by the mmfiles logfile manager) can always access the list of databases,
and that this list is not destroyed while the collector thread is still running (see the sketch below)
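
A minimal sketch of the ordering guarantee, with hypothetical
Feature/runFeatures names (not the actual application features framework):

    // sketch: start features in list order, stop them in reverse order
    #include <functional>
    #include <string>
    #include <vector>

    struct Feature {
      std::string name;
      std::function<void()> start;
      std::function<void()> stop;
    };

    void runFeatures(std::vector<Feature>& ordered) {
      // "Database" is placed first; the MMFiles logfile manager (which
      // owns the collector thread) comes later in the list
      for (auto& f : ordered) {
        f.start();
      }

      // ... server runs; the collector thread may access the databases ...

      // stopping in reverse order shuts the collector thread down before
      // the list of databases is destroyed
      for (auto it = ordered.rbegin(); it != ordered.rend(); ++it) {
        it->stop();
      }
    }
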
* remove unused code
* print values of all options
* don't print "warning(s):" if there are none
* remove remainders of old file deletion functionality
* remove unused function
* Added recovery tests for views and fixed a few related bugs.
* Added more view recovery tests.
* Modified view recovery tests to add a waitForSync operation afterwards.
* fixed usage of wrong view type
* fixed recovery of view change markers
* fixed the "lastLogTick" value returned by /_api/replication/logger-state:
arangod could report a "lastLogTick" value of "0" when a request to
/_api/replication/logger-state was made before the first WAL write
operation was synced to disk. After the first sync operation, the
"lastLogTick" values were correct.
Now a "lastLogTick" value of "0" should not be returned anymore, even when
the logger state API is queried directly after startup (see the sketch below)
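
A minimal sketch of the effect of the fix, with hypothetical names; the
idea is to fall back to the current WAL tick instead of reporting 0
before the first sync:

    // sketch: never report a lastLogTick of 0 from the logger-state API
    #include <atomic>
    #include <cstdint>

    std::atomic<uint64_t> lastSyncedTick{0};  // 0 until the first WAL sync

    uint64_t currentTick() {
      // stand-in: the real implementation would return the WAL's
      // current tick, which is > 0 once the server has started
      return 1;
    }

    uint64_t lastLogTickForLoggerState() {
      uint64_t synced = lastSyncedTick.load();
      // previously the raw value was returned, which could be 0 directly
      // after startup; now we fall back to the current tick
      return synced != 0 ? synced : currentTick();
    }
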