Commit Graph

556 Commits

Author SHA1 Message Date
Jan df02bcd505 test attempt to increase max collection name length from 64 chars to 256 (#9890) 2019-10-25 18:00:10 +02:00
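
The limit change in the commit above is the simplest kind of validation tweak: a maximum-length constant raised from 64 to 256 characters. The following is a hedged sketch of such a check with invented identifiers, not ArangoDB's actual code:

    #include <cstddef>
    #include <string>

    // Hypothetical limit; the commit raises the maximum from 64 to 256 characters.
    constexpr std::size_t kMaxCollectionNameLength = 256;  // previously 64

    // Illustrative name-length check; the identifiers are made up for this sketch.
    bool collectionNameLengthOk(std::string const& name) {
      return !name.empty() && name.size() <= kMaxCollectionNameLength;
    }
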
Jan 09134ee8d4 micro optimizations for case conversion (#10291) 2019-10-22 09:39:16 +02:00
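
Case-conversion micro-optimizations like the one above typically avoid locale-aware std::tolower on hot paths in favor of a plain ASCII range check. A generic, hedged sketch (an assumption about the kind of change, not the actual patch):

    #include <string>

    // ASCII-only lowercase conversion that skips locale lookups entirely.
    inline char asciiToLower(char c) noexcept {
      return (c >= 'A' && c <= 'Z') ? static_cast<char>(c + ('a' - 'A')) : c;
    }

    inline void asciiToLowerInPlace(std::string& s) noexcept {
      for (char& c : s) {
        c = asciiToLower(c);
      }
    }
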
Jan Christoph Uhde ea20f7aeb2 use C++17 [[fallthrough]] (#10280) 2019-10-19 18:14:26 +02:00
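
The C++17 [[fallthrough]] attribute adopted in the commit above marks intentional fall-through between switch cases and silences compiler fall-through warnings. A minimal, generic example:

    #include <iostream>

    void describe(int level) {
      switch (level) {
        case 2:
          std::cout << "verbose\n";
          [[fallthrough]];  // intentional: level 2 also prints the level-1 output
        case 1:
          std::cout << "basic\n";
          break;
        default:
          std::cout << "silent\n";
          break;
      }
    }
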
Simon 4d8c909666 A mixed bag of goods (#10220) 2019-10-14 17:02:36 +02:00
Wilfried Goesgens 7a40fcef89 if we answer a head request, we mustn't create a body for errors (#10227) 2019-10-14 11:05:03 +02:00
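
The HEAD fix above follows from RFC 7231: a response to a HEAD request must not carry a message body, even when it reports an error. A framework-free sketch with illustrative types (not ArangoDB's actual request/response classes):

    #include <string>
    #include <utility>

    // Illustrative request/response types, not the real ones.
    struct Request  { std::string method; };
    struct Response { int status = 200; std::string body; };

    Response makeErrorResponse(Request const& req, int status, std::string message) {
      Response res;
      res.status = status;
      // For HEAD, only the status line and headers may be sent back;
      // attaching an error body here would violate the HTTP spec.
      if (req.method != "HEAD") {
        res.body = std::move(message);
      }
      return res;
    }
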
Dan Larkin-York 13e24b2db9 Convert many uses of ClusterComm to Fuerte (#10154) 2019-10-10 14:03:33 +02:00
Jan 35786aa517 decrease payload size of some lambdas (#10167) 2019-10-08 10:12:22 +02:00
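
A lambda's footprint is the sum of its captures, so closures that capture large objects by value inflate every queue slot or std::function that stores them. A generic illustration of the kind of shrinkage the commit above aims for (not the actual lambdas that were changed):

    #include <array>
    #include <cstdio>
    #include <memory>

    int main() {
      std::array<char, 256> bigBuffer{};

      // Capturing the whole buffer by value makes the closure at least 256 bytes.
      auto fat = [bigBuffer]() { return bigBuffer[0]; };

      // Capturing a shared_ptr (or only the fields actually needed)
      // keeps the closure small, which matters when many of them are queued.
      auto data = std::make_shared<std::array<char, 256>>();
      auto slim = [data]() { return (*data)[0]; };

      std::printf("fat: %zu bytes, slim: %zu bytes\n", sizeof(fat), sizeof(slim));
      return 0;
    }
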
Tobias Gödderz e2c84acfaf Use explicit default destructors where possible (#10166) 2019-10-04 15:58:30 +02:00
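
Explicitly defaulted destructors, as in the commit above, matter because a user-provided empty destructor body makes a type non-trivially destructible, while "= default" on the first declaration preserves triviality. A small self-contained example:

    #include <type_traits>

    struct WithEmptyBody {
      ~WithEmptyBody() {}  // user-provided: no longer trivially destructible
    };

    struct WithDefault {
      ~WithDefault() = default;  // explicitly defaulted: triviality is preserved
    };

    static_assert(!std::is_trivially_destructible<WithEmptyBody>::value,
                  "empty body counts as user-provided");
    static_assert(std::is_trivially_destructible<WithDefault>::value,
                  "= default keeps the destructor trivial");

    int main() { return 0; }
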
Kaveh Vahedipour dc6dba27a2 add repository address normalisation (#10113)
* add repository address normalisation
2019-10-02 11:38:38 +02:00
Michael Hackstein ccb2a3c62e Fixed invalid log ids 2019-10-02 10:26:38 +02:00
Simon 9040f1d18a Fuerte + Pregel + Agency = 🥑 (#10110) 2019-10-01 11:19:18 +02:00
Dan Larkin-York 1d7225b289 Pass connection pool directly to network methods. (#10096) 2019-09-30 12:44:47 +02:00
Dan Larkin-York a83c2323c9 Refactor ApplicationServer stack (#9965) 2019-09-25 17:31:59 +02:00
Jan 9f51c03dd7 re-open the PR, queue length counter still underflows (#10065) 2019-09-24 15:39:37 +02:00
Simon ac2158ee22 Async el cheapo (#10061) 2019-09-24 12:00:13 +02:00
jsteemann 63b4df5138 Revert "Make scheduler enforce queue limits (#10027)"
This reverts commit aa532d6cad.
2019-09-23 19:57:01 +02:00
Jan aa532d6cad Make scheduler enforce queue limits (#10027) 2019-09-23 18:21:04 +02:00
Jan 41b0717bc9 remove unused code (#10043) 2019-09-19 12:25:25 +02:00
Simon 002bb47264 Catch unhandled exceptions (#10009) 2019-09-13 16:03:58 +02:00
Simon 3d2952b23a Non block modify (#9963) 2019-09-11 15:37:02 +02:00
Dan Larkin-York 976a88c723 Make request forwarding (load-balancing) non-blocking (#9948)
* Make request forwarding (load-balancing) non-blocking.

* Apply suggestions from code review

Co-Authored-By: Jan <jsteemann@users.noreply.github.com>

* Address review comments.

* Use more appropriate method.
2019-09-09 20:32:37 +02:00
Simon 6b7fb0994e Non-Blocking reads (#9883) 2019-09-05 19:26:01 +02:00
Tobias Gödderz b7f8f01a22 Feature/split libarangoserver (#9670)
* Made usage of named variables for targets a little more consistent

* Bumped minimum cmake version

- removed policies superseded by minimum cmake version
- fixed static linking due to new policy CMP0060

* Fixed library suffixes for windows

* Extracted libs arango_mmfiles and _rocksdb from arangoserver

* Replaced target variables by constants

* Extracted arango_cluster_engine from arangoserver

* Extracted llhttp from arangoserver

* First successful split of arangoserver

* Moved enterprise files to enterprise

* Again only optionally include RestTestHandler and AcceptorUnixDomain

* Cleaned source files from other libraries

* Removed old commented sources

* Split off a small third library

* Fixed boost dependency for cluster engine

* Added some missing dependencies

* Added arango_geo dep, and -J on windows

* Began to split off an arango_graph lib

* Moved more files to arango_graph

* Do not set /J globally for ATL

* Moved more files to arango_graph

* Moved graph-RestHandlers to arango_graph

* Added arango_geo dependency to _mmfiles and _rocksdb

* Updated graph dependencies

* Split off arango_pregel

* Split off arango_aql

* Added missing boost_system dependency to pregel

* Split off arango_vocbase

* Cleanup

* Added missing boost_system dependency for arango_vocbase

* Split off arango_v8server

* Split off arango_utils

* Minor cleanup

* Split off arango_storage_engine

* Split off arango_indexes and arango_cache

* Fixed some dependencies

* Split off arango_replication

* Resolved two todos

* Split off arango_agency

* Reordered some statements

* Ordered dependency definitions alphabetically

* Cleaned some deps

* Break one cycle, comment on another

* Merge the remaining arangoserver_part[123] sources

* Moved some utils to vocbase to break cycles

* Added missing backtrace dependency to iresearch-s

* Added missing boost dependency

* Added dependency arango_indexes -> arango_geo

* Added deps to arango_cluster_engine, cleaned duplicate deps

* Broke remaining dependency cycles

* Actually, missed one cycle...

* Re-added include for Mac

* arango_cache needs SharedPRNG
2019-09-05 09:37:12 +02:00
Jan 5572675106 Bug fix/remove base directory from include path (#9885) 2019-09-04 17:39:01 +02:00
Simon 0ee0cebb11 Non-Blocking inserts (#9823) 2019-08-30 09:17:58 +02:00
Simon 9a43b28f8f Improve ExecContext usability (#9806) 2019-08-28 19:05:23 +02:00
Frank Celler aa3d3f8e40 Feature/cleanup ccpcheck (#9665) 2019-08-12 11:11:49 +02:00
Simon 1e8234ad38 Bug fix/simon 19 08 07 (#9655) 2019-08-09 14:13:12 +02:00
Dan Larkin-York 3d0246cb18 Decentralize includes (#9623) 2019-08-06 15:32:09 +02:00
Lars Maier ed496fe5dd Feature/hotbackup devel (#9495)
Hotbackup
2019-08-02 11:39:46 +02:00
Simon efa0cf4034 Add missing shared_from_this (#9594) 2019-07-29 16:53:26 +02:00
Matthew Von-Maszewski 91b56a50a3 Feature: Add gzip and encryption to import/export (#9560)
* port of feature-3.4/mv-gzip-export to devel branch

* add explicit namespaces so gcc 6.3.0 would successfully compile

* add conditional cleanup of ENCRYPTION file from another branch

* use _lseek() for Windows build to avoid deprecated warning.

* change from ifdef for lseek variants to TRI_LSEEK.

* force Windows lseek to return Linux expected type.
2019-07-26 07:53:39 -04:00
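
The lseek-related items in the commit above hide the platform difference behind a single TRI_LSEEK macro. The exact definition in the codebase is not shown here; the following is a hedged sketch of what such a wrapper could look like, with the Windows variant chosen as an assumption:

    #include <cstdint>

    #ifdef _WIN32
    #include <io.h>
    // _lseeki64 avoids the deprecation warning for lseek on Windows; the cast
    // forces the return type to match what the POSIX call sites expect.
    #define TRI_LSEEK(fd, offset, whence) \
      static_cast<int64_t>(_lseeki64((fd), (offset), (whence)))
    #else
    #include <unistd.h>
    #define TRI_LSEEK(fd, offset, whence) \
      static_cast<int64_t>(::lseek((fd), (offset), (whence)))
    #endif
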
Simon f7411627af Bug fix/http comm tasks (#9543) 2019-07-24 14:38:11 +02:00
Michael Hackstein 36b1d290a9 Bug fix/failover with min replication factor (#9486)
* Improve collection time of IResearchQueryOptimizationTest

* Added a minReplicationFactor field in Collections. It is not possible to modify it yet and no one cares for it

* Added some assertions on minReplicationFactor

* Transaction API will now reject writes as soon as minimal replication factor is NOT fulfilled

* added minReplicationFactor to the user interface, preparation for the collection api changes

* added minReplicationFactor to VocBaseCollection, RestReplicationHandler, RestCollectionHandler, ClusterMethods, ClusterInfo and ClusterCollectionCreationInfo

* added minReplicationFactor usage to tests

* TODO TEMPORARY COMMIT FOR TESTING PLEASE REVERT ME

* minReplicationFactor can now be changed via the collection properties route

* fixed wrong assert

* added minReplicationFactor to the graph management ui

* added minReplicationFactor to the gharial api

* Fixed off-by-one error in minReplicationFactor. We actually enforced one more.

* adjusted description of minReplicationFactor

* FollowerInfo Refactoring

* added gharial api graph creation tests with minimal replication factor

* proper cleanup of shell collection tests, removed lots of duplicate code, preparation for some new tests

* added collection create tests using invalid/valid names, replicationFactor and minReplicationFactor

* Debug logging

* MORE Debug logging

* Included replication fast lane

* Use correct minreplicationfactor

* modified debug logging

* Fixed compile issues

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* Revert "MORE Debug logging"

This reverts commit dab5af28c0.

* Revert "MORE Debug logging"

This reverts commit 6134b664bd.

* Revert "MORE Debug logging"

This reverts commit 80160bdf3b.

* Revert "MORE Debug logging"

This reverts commit 06aabcdfe1.

* Removed debug output

* Added replication fast lane. Also refactored the commands as I cannot take it any more...

* Put some requests of RocksDBReplication onto CATCHUP Lane.

* Put some requests of MMFilesReplication onto CATCHUP Lane.

* Adjusted Fast and MED lane usage in Supervised scheduler

* Added changelog entry

* Added new features entry

* A new leader will now keep old followers in case of failover

* Update arangod/Cluster/ClusterCollectionCreationInfo.cpp

Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>

* Fixed JSLINT

* Unified lane handling of replication handlers

* Sorry forgotten in last commit

* replaced strings with static strings

* more use of static strings

* optimized min repl description in the ui

* decr initial loop variable

* clean up of the createWithId test

* more use of static strings

* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js

Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>

* Added some comments on condition, renamed variable as suggested in review

* Added check for min replicationFactor to be non-zero

* Added assertion

* Added function to modify min and max replication factor in one go

* added missing semicolon

* rm log devel

* Added a second piece of information to follower info that can keep track of followers that have been in sync before a failover has taken place

* Maintenance now reports the previous version to follower info instead of lying by itself. The Follower Info now gets a failover-safe mode to report in-sync followers

* check replFactor against nr dbservers

* Add lie reporting in CURRENT

* Reverted most of my recent commits about Failover situation. The intended plan simply does not work out

* move replication checks from logical collection to rest collection handler

* added more replication tests

* Include assert only if we are not in gtest

* jslint

* set min repl factor to zero if satellite collection

* check replication attributes in v8 collection

* Initial commit, old plan, does not yet work

* fixed ires tests

* Included FailoverCandidates key. Not fully implemented

* fixed wrong assert

* unified in sync follower reporting

* fixed compiler errors

* Cleanup locking, and fixed potential deadlocks

* Comments about locking order in FollowerInfo.

* properly check uint

* Keep old leader as potential failover candidate

* Transaction methods now use followerInfo to check if the leader can write; this might have the side effect that 'failoverCandidates' are updated

* Let agency check failoverCandidates if possible

* Initialize member variables

* Use unified follower reporting in DBServerAgencySync

* Removed obsolete variable, collecting it somewhere else

* repl factor attr check

* Reimplemented previous followers, second attempt now. PhaseOne and PhaseTwo can now synchronize on current.

* Fixed assertion, forgot an off-by-one

* adjusted test to be more precise now

* Fixed failover candidates list

* Disable write on dropping too many followers

* Allow running updateFailoverCandidates multiple times with the same leader.

* Final fixes, resilience tests now green, crossing fingers for jenkins

* Fixed race on atomics comparison

* Fixed invalid number type

* added nullptr handling

* added nullptr handling

* Removed invalid assert

* Make takeover of leadership an atomic operation

* Update tests/js/common/shell/shell-cluster-collection.js

Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>

* Review fixes

* Fixed creation code to use takeoverLeadership

* Update arangod/Cluster/FollowerInfo.h

Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>

* Applied review fixes

* There is no timeout

* Moved AQL + Pregel to INTERNAL_AQL lane, which is medium priority, to avoid deadlocks with Sync replication

* More review fixes

* Use difference if you want to compare two vectors...

* Use std::string ...

* Now check if we are in recovery mode

* Added documentation for minReplicationFactor

* Added readme update as well in documentation
2019-07-19 15:00:30 +02:00
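
The central rule described in the commit body above, rejecting writes once too few in-sync replicas remain, reduces to a single comparison against minReplicationFactor. The sketch below uses invented names and is illustrative only, not the actual Transaction API code:

    #include <cstddef>

    // Illustrative only: a write is allowed while the leader plus its in-sync
    // followers still satisfy the collection's minReplicationFactor.
    bool writeAllowed(std::size_t inSyncFollowers, std::size_t minReplicationFactor) {
      std::size_t inSyncReplicas = inSyncFollowers + 1;  // +1 for the leader itself
      return inSyncReplicas >= minReplicationFactor;
    }
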
Michael Hackstein cbcf561450 Feature/min replication factor (#9433)
* Added a minReplicationFactor field in Collections. It is not possible to modify it yet and no one cares for it

* Added some assertions on minReplicationFactor

* Transaction API will now reject writes as soon as minimal replication factor is NOT fulfilled

* added minReplicationFactor to the user interface, preparation for the collection api changes

* added minReplicationFactor to VocBaseCollection, RestReplicationHandler, RestCollectionHandler, ClusterMethods, ClusterInfo and ClusterCollectionCreationInfo

* added minReplicationFactor usage to tests

* TODO TEMPORARY COMMIT FOR TESTING PLEASE REVERT ME

* minReplicationFactor can now be changed via the collection properties route

* fixed wrong assert

* added minReplicationFactor to the graph management ui

* added minReplicationFactor to the gharial api

* Fixed off-by-one error in minReplicationFactor. We actually enforced one more.

* adjusted description of minReplicationFactor

* FollowerInfo Refactoring

* added gharial api graph creation tests with minimal replication factor

* proper cleanup of shell collection tests, removed lots of duplicate code, preparation for some new tests

* added collection create tests using invalid/valid names, replicationFactor and minReplicationFactor

* Debug logging

* MORE Debug logging

* Included replication fast lane

* Use correct minreplicationfactor

* modified debug logging

* Fixed compile issues

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* MORE Debug logging

* Revert "MORE Debug logging"

This reverts commit dab5af28c0.

* Revert "MORE Debug logging"

This reverts commit 6134b664bd.

* Revert "MORE Debug logging"

This reverts commit 80160bdf3b.

* Revert "MORE Debug logging"

This reverts commit 06aabcdfe1.

* Removed debug output

* Added replication fast lane. Also refactored the commands as I cannot take it any more...

* Put some requests of RocksDBReplication onto CATCHUP Lane.

* Put some requests of MMFilesReplication onto CATCHUP Lane.

* Adjusted Fast and MED lane usage in Supervised scheduler

* Added changelog entry

* Added new features entry

* A new leader will now keep old followers in case of failover

* Update arangod/Cluster/ClusterCollectionCreationInfo.cpp

Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>

* Fixed JSLINT

* Unified lane handling of replication handlers

* Sorry forgotten in last commit

* replaced strings with static strings

* more use of static strings

* optimized min repl description in the ui

* decr initial loop variable

* clean up of the createWithId test

* more use of static strings

* Update js/apps/system/_admin/aardvark/APP/frontend/js/views/collectionsView.js

Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>

* Added some comments on condition, renamed variable as suggested in review

* Added check for min replicationFactor to be non-zero

* Added assertion

* Added function to modify min and max replication factor in one go

* added missing semicolon

* rm log devel

* Added a second piece of information to follower info that can keep track of followers that have been in sync before a failover has taken place

* Maintenance now reports the previous version to follower info instead of lying by itself. The Follower Info now gets a failover-safe mode to report in-sync followers

* check replFactor against nr dbservers

* Add lie reporting in CURRENT

* Reverted most of my recent commits about Failover situation. The intended plan simply does not work out

* move replication checks from logical collection to rest collection handler

* added more replication tests

* Include assert only if we are not in gtest

* jslint

* set min repl factor to zero if satellite collection

* check replication attributes in v8 collection

* fixed ires tests

* fixed wrong assert

* properly check uint

* repl factor attr check

* adjusted test to be more precise now

* Fixed race on atomics comparison

* Fixed invalid number type

* Update tests/js/common/shell/shell-cluster-collection.js

Co-Authored-By: Tobias Gödderz <tobias@arangodb.com>

* Review fixes

* More review fixes
2019-07-19 13:02:28 +02:00
Simon d79ecd3870 Bug fix/comm task vst10 (#9516) 2019-07-19 11:00:49 +02:00
Jan 6d39a48435 Bug fix/fix uninitialized value (#9517) 2019-07-19 09:16:37 +02:00
Simon e5507d840f Feature/comm task refactor (#9426) 2019-07-16 09:43:25 +02:00
Lars Maier eb1aa6e024 Response compression (#9300)
* First draft of response compression.

* Cleanup.

* Removed compression from /_api/version.

* Added ruby test for response compression.
2019-06-28 10:02:48 +02:00
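
Response compression of the kind introduced above usually hinges on two checks before gzip is applied: the client's Accept-Encoding header and a minimum body size. A hedged sketch with a hypothetical gzipCompress helper (not the actual ArangoDB implementation):

    #include <cstddef>
    #include <string>

    // Hypothetical gzip helper, assumed to exist for this sketch.
    std::string gzipCompress(std::string const& input);

    struct Response {
      std::string body;
      std::string contentEncoding;  // value for the Content-Encoding header
    };

    // Compress only if the client advertises gzip support and the body is large
    // enough for compression to pay off (the threshold is illustrative).
    void maybeCompress(Response& res, std::string const& acceptEncoding) {
      constexpr std::size_t kMinSize = 512;
      if (res.body.size() >= kMinSize &&
          acceptEncoding.find("gzip") != std::string::npos) {
        res.body = gzipCompress(res.body);
        res.contentEncoding = "gzip";
      }
    }
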
Tobias Gödderz da6e9da820 Measure time a request takes to get executed by the scheduler (#9224) 2019-06-18 17:34:33 +02:00
Frank Celler 2d732be590 wait until task is shut down (#9239) 2019-06-10 10:07:31 +02:00
Simon 1402395f1c Port fuerte changes from network pool branch (#9125) 2019-06-04 16:35:15 +02:00
jsteemann a0676dcf20 add commtask debugging again 2019-06-03 11:10:31 +02:00
Jan 208f22c842 only execute requests directly if there is one client on the IoContext (#9090) 2019-05-24 14:29:58 +02:00
Jan 3d79491f36 exclude VST requests from direct execution (#9075) 2019-05-24 09:53:15 +02:00
Jan 79258e072a Bug fix/remove io task (#9056) 2019-05-22 14:34:49 +02:00
jsteemann 0e24c18253 re-add optimization for single servers 2019-05-22 14:29:12 +02:00
Lars Maier 4fc2790863 [devel] Direct Exec Scheduler (#9004) 2019-05-20 11:38:57 +02:00
jsteemann 0f6af587ad remove debug output 2019-05-16 19:49:22 +02:00
jsteemann 01dad862a4 move stopping further down 2019-05-16 13:52:46 +02:00