* Port agency performance tuning for many shards to devel.
* Add more IDs to LOG_TOPIC calls.
* Even more IDs for LOG_TOPIC.
* Fix a duplicate LOG_TOPIC ID.
* Fix an old merging bug in devel.
* Don't hesitate between phases one and two for small clusters.
* Merged fixes from 3.4.
* Odd fix.
* Bug fix 3.4/sync repl release thread (#6784)
* First attempt to not block the thread that requires the EXCLUSIVE sync-up lock
* Fixed insertion of query into registry in rest replication handler.
* Removed unnecessary / false asserts as suggested in review. Fixed code comments.
* Replaced auto with a correct type as suggested in review
* Added a helper function to validate if a query is in use in the registry
* Fixed logic bug in usage of query registry
* Fixed compile issue
* Automatic int -> bool conversion in initializer lists is error-prone.
* Fixed an inverted boolean logic bug that was hidden because the int -> bool conversion was itself logically inverted.
* Today it seems that bools are too complicated for my brain.
* Removed the failure point; no test was written for it, since that is hard to do in the current test environment. A better solution is needed in the future.
* Applied changes requested by @goedderz in review.
* Bug fix 3.4/shorter foot in door (#7084)
* Implement `syncCollectionCatchup` in DatabaseTailingSyncer.
First stab, might not even compile.
* Fixed a typo.
* Fix a typo and a compilation problem.
* Further compilation fix.
* Implement two stage catchup.
* Two small corrections.
* Unified error messages in Synchronize shard job.
* Improved a code comment.
* Fixed the auto-casting bool -> double and double -> bool issue. That is truly one of the best features ever invented... </irony>
* Renamed doHardLock => toSoftLockOnly and inverted default value
* Merged soft/hard foot logic with Transaction splits
* Use scope guards to cancel read locks (see the sketch below).
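As an illustration of the scope-guard idiom used here (a minimal generic
sketch; this ScopeGuard and the two replication calls it wraps are
hypothetical stand-ins, not ArangoDB's actual types):

    #include <utility>

    // Hypothetical stand-ins for the real replication calls:
    void acquireReadLockOnLeader() { /* ... */ }
    void cancelReadLockOnLeader() { /* ... */ }

    // Minimal scope guard: runs its callback when the enclosing scope is
    // left, unless dismiss() was called first.
    template <typename F>
    class ScopeGuard {
      F _fn;
      bool _active = true;

     public:
      explicit ScopeGuard(F fn) : _fn(std::move(fn)) {}
      ~ScopeGuard() {
        if (_active) {
          _fn();
        }
      }
      void dismiss() noexcept { _active = false; }
      ScopeGuard(ScopeGuard const&) = delete;
      ScopeGuard& operator=(ScopeGuard const&) = delete;
    };

    void syncShardOnce() {
      acquireReadLockOnLeader();
      // The guard guarantees the read lock is cancelled on every exit
      // path, including early returns and exceptions.
      ScopeGuard guard([] { cancelReadLockOnLeader(); });
      // ... catch-up steps that may return early or throw ...
    }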
* Bug fix 3.4/sync replication allow soft and hard lock (#6864)
* First attempt to not block the thread that requires the EXCLUSIVE sync-up lock
* Fixed insertion of query into registry in rest replication handler.
* Removed unnecessary / false asserts as suggested in review. Fixed code comments.
* Replaced auto with a correct type as suggested in review
* Added a helper function to validate if a query is in use in the registry
* Fixed logic bug in usage of query registry
* Fixed compile issue
* Implemented an optional `doHardLock` parameter in the replication acquire-read-lock call. A hard lock guarantees to stop all writes; a soft lock may not (see the sketch after this commit list).
* Fixed compile issue
* Automatic int -> bool conversion in initializer lists is error-prone.
* Fixed an inverted boolean logic bug that was hidden because the int -> bool conversion was itself logically inverted.
* Today it seems that bools are too complicated for my brain.
* Removed the failure point; no test was written for it, since that is hard to do in the current test environment. A better solution is needed in the future.
* Applied changes requested by @goedderz in review.
* Renamed doHardLock => toSoftLockOnly and inverted default value
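To make the hard/soft distinction concrete, here is a minimal sketch
(hypothetical class and member names, assuming writers hold a shared mutex
and a hard lock takes it exclusively; this is not ArangoDB's actual
implementation):

    #include <atomic>
    #include <shared_mutex>

    class ShardLock {
      std::shared_mutex _rw;                 // writers acquire this shared
      std::atomic<bool> _softLocked{false};

     public:
      void acquireReadLock(bool toSoftLockOnly) {
        if (toSoftLockOnly) {
          // Soft lock: only set a flag; writes that are already running
          // may still commit, so stopping all writes is not guaranteed.
          _softLocked.store(true, std::memory_order_release);
        } else {
          // Hard lock: exclusive acquisition waits for all in-flight
          // writers to finish, so no write can proceed while it is held.
          _rw.lock();
        }
      }

      void releaseReadLock(bool toSoftLockOnly) {
        if (toSoftLockOnly) {
          _softLocked.store(false, std::memory_order_release);
        } else {
          _rw.unlock();
        }
      }
    };

The point of the soft variant is that a follower can do the bulk of its
catch-up while the leader keeps accepting writes; only the short final
synchronization step needs the hard lock.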
This was added (and is currently not being used) to find a problem where a server in shutdown would hammer the agency with new connections. It was decided to implement the corresponding monitoring directly in the monitoring system (ganglia) rather than in the context of `testing.js`. Nevertheless, the script is retained here for posterity.
- Schmutz is now called "Maintenance" and is completely implemented in C++
- Fix an index locking bug in MMFiles
- Fix a bug in MMFiles with the silent option and repsert
- Slightly increase supervision okperiod and graceperiod
* add "DBSERVER" as an alias for "PRIMARY"
This allows specifying the value "DBSERVER" for `--cluster.my-role`.
"DBSERVER" is only treated as an alias for "PRIMARY", because several
other parts of the code and APIs use the string "PRIMARY".
Changing these from "PRIMARY" to "DBSERVER" would make the change
downwards-incompatible, which we do not want.
The downside of this alias-only solution is that even when specifying
a role value of "DBSERVER", the server will still report its role as
"PRIMARY", which may be a bit confusing. The server will also generate
its id as "PRMR-XXXX" as before:
2018-08-03T15:23:09Z [9584] INFO {cluster} Starting up with role PRIMARY
2018-08-03T15:23:09Z [9584] INFO {cluster} Cluster feature is turned on. Agency version: {"server":"arango","version":"3.4.devel","license":"enterprise"}, Agency endpoints: http+tcp://[::]:4001, server id: 'PRMR-f655b728-4cea-44ac-88e9-8b34baa80958', internal address: tcp://[::1]:8629, role: PRIMARY
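For example, the following two (abbreviated) invocations select the same
role; other required cluster options are omitted:

    arangod --cluster.my-role PRIMARY ...
    arangod --cluster.my-role DBSERVER ...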
* Adjusted the documentation to use "DBSERVER" instead of "PRIMARY"
* API documentation:
  - The secondary role is not used anymore; this is now stated.
  - "Primary database" is unclear; replaced it with "dbserver".
  - The brief description referenced only dbserver and coordinator; a wider
    description is better, in line with what is described below, since other
    roles can be returned as well.
* Fixed a typo.
* Fixed another typo.
* Added a "starting from 3.4" note.
* Added an additional warning.
* Mentioned the change in the release notes.
* Really enforce the hidden option --server.maximal-threads if given.
* Switch off --log.force-direct in scripts/startStandAloneAgency.sh
* Lower the timeout for sending AppendEntriesRPC to 150s.
* Erase _earliestPackage when becoming a leader.
* Challenge leadership in agent main loop.
* Use steady_clock for _earliestPackage.
* Change _lastAcked and _leaderSince to steady_clock as well.
* Moved time difference calculations from the old readSystemClock to steadyClockToDouble.
* Transitioned all system_clock usage in Agent to steady_clock; the remaining system_clock uses are for user input/output and timestamps.
* Transitioned Inception from system_clock to steady_clock.
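The rationale behind these clock changes: system_clock can jump when the
wall clock is adjusted (NTP, manual changes), while steady_clock is
monotonic, so only the latter is safe for measuring intervals. A minimal
sketch of the pattern (illustrative only, not the internal
steadyClockToDouble helper itself):

    #include <chrono>

    // Measure an elapsed interval with the monotonic clock; wall-clock
    // adjustments cannot make the result negative or wildly wrong.
    double elapsedSeconds(std::chrono::steady_clock::time_point start) {
      auto now = std::chrono::steady_clock::now();
      return std::chrono::duration<double>(now - start).count();
    }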
Store the parameters of startLocalCluster.sh.
This should help you bring the cluster up again when you continue
to work after some time, without having to investigate the
port numbers again.
Restore the parameters if no arguments are given.