1
0
Fork 0

Merge branch 'devel' of https://github.com/arangodb/arangodb into devel

This commit is contained in:
Kaveh Vahedipour 2016-06-06 09:30:42 +02:00
commit a496d6bea8
32 changed files with 514 additions and 229 deletions

View File

@ -17,35 +17,64 @@ Let's start with the basics: `INSERT`, `UPDATE` and `REMOVE` operations on singl
Here is an example that insert a document in an existing collection *users*:
```js
INSERT { firstName: "Anna", name: "Pavlova", profession: "artist" } IN users
INSERT {
firstName: "Anna",
name: "Pavlova",
profession: "artist"
} IN users
```
You may provide a key for the new document; if not provided, ArangoDB will create one for you.
```js
INSERT { _key: "GilbertoGil", firstName: "Gilberto", name: "Gil", city: "Fortalezza" } IN users
INSERT {
_key: "GilbertoGil",
firstName: "Gilberto",
name: "Gil",
city: "Fortalezza"
} IN users
```
As Arango is schema-free, attributes of the documents may vary:
As ArangoDB is schema-free, attributes of the documents may vary:
```js
INSERT { _key: "PhilCarpenter", firstName: "Phil", name: "Carpenter", middleName: "G.", status: "inactive" } IN users
INSERT {
_key: "PhilCarpenter",
firstName: "Phil",
name: "Carpenter",
middleName: "G.",
status: "inactive"
} IN users
```
```js
INSERT { _key: "NatachaDeclerck", firstName: "Natacha", name: "Declerck", location: "Antwerp" } IN users
INSERT {
_key: "NatachaDeclerck",
firstName: "Natacha",
name: "Declerck",
location: "Antwerp"
} IN users
```
Update is quite simple. The following AQL statement will add or change the attributes status and location
```js
UPDATE "PhilCarpenter" WITH { status: "active", location: "Beijing" } IN users
UPDATE "PhilCarpenter" WITH {
status: "active",
location: "Beijing"
} IN users
```
Replace is an alternative to update where all attributes of the document are replaced.
```js
REPLACE { _key: "NatachaDeclerck", firstName: "Natacha", name: "Leclerc", status: "active", level: "premium" } IN users
REPLACE {
_key: "NatachaDeclerck",
firstName: "Natacha",
name: "Leclerc",
status: "active",
level: "premium"
} IN users
```
Removing a document if you know its key is simple as well :

View File

@ -35,6 +35,10 @@ APPEND([ 1, 2, 3 ], [ 3, 4, 5, 2, 9 ], true)
// [ 1, 2, 3, 4, 5, 9 ]
```
!SUBSECTION COUNT()
This is an alias for [LENGTH()](#length).
!SUBSECTION FIRST()
`FIRST(anyArray) → firstElement`

View File

@ -6,25 +6,28 @@ additional language constructs.
!SUBSECTION ATTRIBUTES()
`ATTRIBUTES(document, removeInternal, sort) → attributes`
`ATTRIBUTES(document, removeInternal, sort) → strArray`
Return the attribute keys of the *document* as an array. Optionally omit
system attributes.
- **document** (object): an arbitrary document / object
- **removeInternal** (bool, *optional*): whether all system attributes (*_key*, *_id* etc.,
every attribute key that starts with an underscore) shall be omitted in the result.
The default is *false*.
- **sort** (bool, *optional*): optionally sort the resulting array alphabetically.
The default is *false* and will return the attribute names in a random order.
- returns **attributes** (array): the attribute keys of the input *document* as an
The default is *false* and will return the attribute names in any order.
- returns **strArray** (array): the attribute keys of the input *document* as an
array of strings
```js
ATTRIBUTES( {"foo": "bar", "_key": "123", "_custom": "yes" } )
ATTRIBUTES( { "foo": "bar", "_key": "123", "_custom": "yes" } )
// [ "foo", "_key", "_custom" ]
ATTRIBUTES( {"foo": "bar", "_key": "123", "_custom": "yes" }, true )
ATTRIBUTES( { "foo": "bar", "_key": "123", "_custom": "yes" }, true )
// [ "foo" ]
ATTRIBUTES( {"foo": "bar", "_key": "123", "_custom": "yes" }, false, true )
ATTRIBUTES( { "foo": "bar", "_key": "123", "_custom": "yes" }, false, true )
// [ "_custom", "_key", "foo" ]
```
@ -42,6 +45,10 @@ FOR attributeArray IN attributesPerDocument
RETURN {attr, count}
```
!SUBSECTION COUNT()
This is an alias for [LENGTH()](#length).
!SUBSECTION HAS()
`HAS(document, attributeName) → isPresent`

View File

@ -53,6 +53,10 @@ Return an array of collections.
- returns **docArray** (array): each collection as a document with attributes
*name* and *_id* in an array
!SUBSECTION COUNT()
This is an alias for [LENGTH()](#length).
!SUBSECTION CURRENT_USER()
`CURRENT_USER() → userName`
@ -196,14 +200,87 @@ CALL( "SUBSTRING", "this is a test", 0, 4 )
!SECTION Internal functions
The following functions are used during development of ArangoDB as a database
system, primarily for unit testing. They are not intended to be used by end
users, especially not in production environments.
!SUBSECTION FAIL()
`FAIL(reason)`
!SUBSECTION NOOP()
Let a query fail on purpose. Can be used in a conditional branch, or to verify
if lazy evaluation / short circuiting is used for instance.
`NOOP(value) → retVal`
- **reason** (string): an error message
- returns nothing, because the query is aborted
```js
RETURN 1 == 1 ? "okay" : FAIL("error") // "okay"
RETURN 1 == 1 || FAIL("error") ? true : false // true
RETURN 1 == 2 && FAIL("error") ? true : false // false
RETURN 1 == 1 && FAIL("error") ? true : false // aborted with error
```
!SUBSECTION NOOPT()
`NOOPT(expression) → retVal`
No-operation that prevents query compile-time optimizations. Constant expressions
can be forced to be evaluated at runtime with this.
If there is a C++ implementation as well as a JavaScript implementation of an
AQL function, then it will enforce the use of the C++ version.
- **expression** (any): arbitray expression
- returns **retVal** (any): the return value of the *expression*
```js
// differences in execution plan (explain)
FOR i IN 1..3 RETURN (1 + 1) // const assignment
FOR i IN 1..3 RETURN NOOPT(1 + 1) // simple expression
NOOPT( RAND() ) // C++ implementation
V8( RAND() ) // JavaScript implementation
```
!SUBSECTION PASSTHRU()
`PASSTHRU(value) → retVal`
This function is marked as non-deterministic so its argument withstands
query optimization.
- **value** (any): a value of arbitrary type
- returns **retVal** (any): *value*, without optimizations
!SUBSECTION SLEEP()
`SLEEP(seconds) → null`
Wait for a certain amount of time before continuing the query.
- **seconds** (number): amount of time to wait
- returns a *null* value
```js
SLEEP(1) // wait 1 second
SLEEP(0.02) // wait 20 milliseconds
```
!SUBSECTION V8()
`V8(value) → retVal`
`V8(expression) → retVal`
No-operation that enforces the usage of the V8 JavaScript engine. If there is a
JavaScript implementation of an AQL function, for which there is also a C++
implementation, the JavaScript version will be used.
- **expression** (any): arbitray expression
- returns **retVal** (any): the return value of the *expression*
```js
// differences in execution plan (explain)
FOR i IN 1..3 RETURN (1 + 1) // const assignment
FOR i IN 1..3 RETURN V8(1 + 1) // const assignment
FOR i IN 1..3 RETURN NOOPT(V8(1 + 1)) // v8 expression
```

View File

@ -390,6 +390,31 @@ Result:
]
```
!SUBSECTION RANGE()
`RANGE(start, stop, step) → numArray`
Return an array of numbers in the specified range, optionally with increments
other than 1.
For integer ranges, use the [range operator](../Operators.md#range-operator)
instead for better performance.
- **start** (number): the value to start the range at (inclusive)
- **stop** (number): the value to end the range with (inclusive)
- **step** (number, *optional*): how much to increment in every step,
the default is *1.0*
- returns **numArray** (array): all numbers in the range as array
```js
RANGE(1, 4) // [ 1, 2, 3, 4 ]
RANGE(1, 4, 2) // [ 1, 3 ]
RANGE(1, 4, 3) // [ 1, 4 ]
RANGE(1.5, 2.5) // [ 1.5, 2.5 ]
RANGE(1.5, 2.5, 0.5) // [ 1.5, 2, 2.5 ]
RANGE(-0.75, 1.1, 0.5) // [ -0.75, -0.25, 0.25, 0.75 ]
```
!SUBSECTION ROUND()
`ROUND(value) → roundedValue`

View File

@ -92,6 +92,10 @@ CONTAINS("foobarbaz", "ba", true) // 3
CONTAINS("foobarbaz", "horse", true) // -1
```
!SUBSECTION COUNT()
This is an alias for [LENGTH()](#length).
!SUBSECTION FIND_FIRST()
`FIND_FIRST(text, search, start, end) → position`
@ -180,7 +184,7 @@ using wildcard matching.
- **text** (string): the string to search in
- **search** (string): a search pattern that can contain the wildcard characters
*%* (meaning any sequence of characters, including none) and *_* (any single
`%` (meaning any sequence of characters, including none) and `_` (any single
character). Literal *%* and *:* must be escaped with two backslashes.
*search* cannot be a variable or a document attribute. The actual value must
be present at query parse time already.
@ -189,6 +193,19 @@ using wildcard matching.
- returns **bool** (bool): *true* if the pattern is contained in *text*,
and *false* otherwise
```js
LIKE("cart", "ca_t") // true
LIKE("carrot", "ca_t") // false
LIKE("carrot", "ca%t") // true
LIKE("foo bar baz", "bar") // false
LIKE("foo bar baz", "%bar%") // true
LIKE("bar", "%bar%") // true
LIKE("FoO bAr BaZ", "fOo%bAz") // false
LIKE("FoO bAr BaZ", "fOo%bAz", true) // true
```
!SUBSECTION LOWER()
`LOWER(value) → lowerCaseString`
@ -255,7 +272,7 @@ RANDOM_TOKEN(8) // "m9w50Ft9"
`REGEX(text, search, caseInsensitive) → bool`
Check whether the pattern *search* is contained in the string *text*,
using regular expression matching.
using regular expression matching.
- **text** (string): the string to search in
- **search** (string): a regular expression search pattern
@ -265,38 +282,46 @@ using regular expression matching.
The regular expression may consist of literal characters and the following
characters and sequences:
- *.*: the dot matches any single character except line terminators
- *\d*: matches a single digit, equivalent to [0-9]
- *\s*: matches a single whitespace character
- *\t*: matches a tab character
- *\r*: matches a carriage return
- *\n*: matches a line-feed character
- *[xyz]*: set of characters. matches any of the enclosed characters (i.e.
- `.` the dot matches any single character except line terminators.
To include line terminators, use `[\s\S]` instead to simulate `.` with *DOTALL* flag.
- `\d` matches a single digit, equivalent to `[0-9]`
- `\s` matches a single whitespace character
- `\S` matches a single non-whitespace character
- `\t` matches a tab character
- `\r` matches a carriage return
- `\n` matches a line-feed character
- `[xyz]` set of characters. matches any of the enclosed characters (i.e.
*x*, *y* or *z* in this case
- *[^xyz]*: negated set of characters. matches any other character than the
- `[^xyz]` negated set of characters. matches any other character than the
enclosed ones (i.e. anything but *x*, *y* or *z* in this case)
- *[x-z]*: range of characters. matches any of the characters in the
specified range
- *[^x-z]*: negated range of characters. matches any other character than the
- `[x-z]` range of characters. Matches any of the characters in the
specified range, e.g. `[0-9A-F]` to match any character in
*0123456789ABCDEF*
- `[^x-z]` negated range of characters. Matches any other character than the
ones specified in the range
- *(x|y)*: matches either *x* or *y*
- *^*: matches the beginning of the string
- *$*: matches the end of the string
- `(xyz)` defines and matches a pattern group
- `(x|y)` matches either *x* or *y*
- `^` matches the beginning of the string (e.g. `^xyz`)
- `$` matches the end of the string (e.g. `xyz$`)
Note that the characters *.*, <i>\*</i>, *?*, *[*, *]*, *(*, *)*, *{*, *}*, *^*,
and *$* have a special meaning in regular expressions and may need to be
escaped using a backslash (*\\*). A literal backslash should also be escaped
using another backslash, i.e. *\\\\*.
Note that the characters `.`, `*`, `?`, `[`, `]`, `(`, `)`, `{`, `}`, `^`,
and `$` have a special meaning in regular expressions and may need to be
escaped using a backslash (`\\`). A literal backslash should also be escaped
using another backslash, i.e. `\\\\`.
Characters and sequences may optionally be repeated using the following
quantifiers:
- *x\**: matches zero or more occurrences of *x*
- *x+*: matches one or more occurrences of *x*
- *x?*: matches one or zero occurrences of *x*
- *x{y}*: matches exactly *y* occurrences of *x*
- *x{y,z}*: matches between *y* and *z* occurrences of *x*
- *x{y,}*: matches at least *y* occurences of *x*
- `x*` matches zero or more occurrences of *x*
- `x+` matches one or more occurrences of *x*
- `x?` matches one or zero occurrences of *x*
- `x{y}` matches exactly *y* occurrences of *x*
- `x{y,z}` matches between *y* and *z* occurrences of *x*
- `x{y,}` matches at least *y* occurences of *x*
Note that `xyz+` matches *xyzzz*, but if you want to match *xyzxyz* instead,
you need to define a pattern group by wrapping the subexpression in parentheses
and place the quantifier right behind it: `(xyz)+`.
If the regular expression in *search* is invalid, a warning will be raised
and the function will return *false*.

View File

@ -161,8 +161,8 @@ The following type check functions are available:
- `IS_DOCUMENT(value) → bool`: This is an alias for *IS_OBJECT()*
- `IS_DATESTRING(value) → bool`: Check whether *value* is a string that can be used
in a date function. This includes partial dates such as *2015* or *2015-10* and
strings containing invalid dates such as *2015-02-31*. The function will return
in a date function. This includes partial dates such as *"2015"* or *"2015-10"* and
strings containing invalid dates such as *"2015-02-31"*. The function will return
false for all non-string values, even if some of them may be usable in date functions.
- `TYPENAME(value) → typeName`: Return the data type name of *value*. The data type

View File

@ -59,7 +59,7 @@ or underscore.
```
"abc" LIKE "a%" // true
"abc" LIKE "_bc" // true
"a_b_foo" LIKE "a\\_b\\_f%" // true
"a_b_foo" LIKE "a\\_b\\_foo" // true
```
The pattern matching performed by the *LIKE* operator is case-sensitive.
@ -182,7 +182,14 @@ AQL supports the following arithmetic operators:
- */* division
- *%* modulus
The unary plus and unary minus are supported as well.
Unary plus and unary minus are supported as well:
```js
LET x = -5
LET y = 1
RETURN [-x, +y]
// [5, 1]
```
For exponentiation, there is a [numeric function](Functions/Numeric.md#pow) *POW()*.
@ -202,7 +209,7 @@ Some example arithmetic operations:
The arithmetic operators accept operands of any type. Passing non-numeric values to an
arithmetic operator will cast the operands to numbers using the type casting rules
applied by the `TO_NUMBER` function:
applied by the [TO_NUMBER()](Functions/TypeCast.md#tonumber) function:
- `null` will be converted to `0`
- `false` will be converted to `0`, true will be converted to `1`
@ -215,8 +222,8 @@ applied by the `TO_NUMBER` function:
`0`.
- objects / documents are converted to the number `0`.
An arithmetic operation that produces an invalid value, such as `1 / 0` will also produce
a result value of `0`.
An arithmetic operation that produces an invalid value, such as `1 / 0` (division by zero)
will also produce a result value of `0`.
Here are a few examples:

View File

@ -813,6 +813,8 @@ in 3.0:
is unchanged.
- `--server.foxx-queues-poll-interval` was renamed to `--foxx.queues-poll-interval`.
The meaning of the option is unchanged.
- `--no-server` was renamed to `--server.rest-server`. Note that the meaning of the
option `--server.rest-server` is the opposite of the previous `--no-server`.
- `--database.query-cache-mode` was renamed to `--query.cache-mode`. The meaning of
the option is unchanged.
- `--database.query-cache-max-results` was renamed to `--query.cache-entries`. The

View File

@ -5,10 +5,9 @@ This file contains documentation about the build process, documentation generati
CMake
=====
* *--enable-relative* - relative mode so you can run without make install
* *--enable-maintainer-mode* - generate lex/yacc files
* *--with-backtrace* - add backtraces to native code asserts & exceptions
* *--enable-failure-tests* - adds javascript hook to crash the server for data integrity tests
* *-DUSE_MAINTAINER_MODE* - generate lex/yacc files
* *-DUSE_BACKTRACE=1* - add backtraces to native code asserts & exceptions
* *-DUSE_FAILURE_TESTS=1* - adds javascript hook to crash the server for data integrity tests
CFLAGS
------
@ -33,7 +32,7 @@ If the compile goes wrong for no particular reason, appending 'verbose=' adds mo
Runtime
-------
* start arangod with `--console` to get a debug console
* Cheapen startup for valgrind: `--no-server --javascript.gc-frequency 1000000 --javascript.gc-interval 65536 --scheduler.threads=1 --javascript.v8-contexts=1`
* Cheapen startup for valgrind: `--server.rest-server false --javascript.gc-frequency 1000000 --javascript.gc-interval 65536 --scheduler.threads=1 --javascript.v8-contexts=1`
* to have backtraces output set this on the prompt: `ENABLE_NATIVE_BACKTRACES(true)`
Startup
@ -60,8 +59,8 @@ JSLint
======
(we switched to jshint a while back - this is still named jslint for historical reasons)
Make target
-----------
checker Script
--------------
use
./utils/gitjslint.sh
@ -187,17 +186,12 @@ jsUnity via arangosh
--------------------
arangosh is similar, however, you can only run tests which are intended to be ran via arangosh:
require("jsunity").runTest("js/client/tests/shell-client.js");
require("jsunity").runTest("js/client/tests/shell/shell-client.js");
mocha tests
-----------
All tests with -spec in their names are using the [mochajs.org](https://mochajs.org) framework.
jasmine tests
-------------
Jasmine tests cover testing the UI components of aardvark
Javascript framework
--------------------
(used in our local Jenkins and TravisCI integration; required for running cluster tests)
@ -232,7 +226,7 @@ A commandline for running a single test (-> with the facility 'single_server') u
valgrind could look like this. Options are passed as regular long values in the
syntax --option value --sub:option value. Using Valgrind could look like this:
./scripts/unittest single_server --test js/server/tests/aql-escaping.js \
./scripts/unittest single_server --test js/server/tests/aql/aql-escaping.js \
--extraargs:server.threads 1 \
--extraargs:scheduler.threads 1 \
--extraargs:javascript.gc-frequency 1000000 \
@ -249,11 +243,11 @@ Running a single unittestsuite
------------------------------
Testing a single test with the framework directly on a server:
scripts/unittest single_server --test js/server/tests/aql-escaping.js
scripts/unittest single_server --test js/server/tests/aql/aql-escaping.js
Testing a single test with the framework via arangosh:
scripts/unittest single_client --test js/server/tests/aql-escaping.js
scripts/unittest single_client --test js/server/tests/aql/aql-escaping.js
Testing a single rspec test:
@ -268,23 +262,23 @@ Since downloading fox apps from github can be cumbersome with shaky DSL
and DOS'ed github, we can fake it like this:
export FOXX_BASE_URL="http://germany/fakegit/"
./scripts/unittest single_server --test 'js/server/tests/shell-foxx-manager-spec.js'
./scripts/unittest single_server --test 'js/server/tests/shell/shell-foxx-manager-spec.js'
arangod Emergency console
-------------------------
require("jsunity").runTest("js/server/tests/aql-escaping.js");
require("jsunity").runTest("js/server/tests/aql/aql-escaping.js");
arangosh client
---------------
require("jsunity").runTest("js/server/tests/aql-escaping.js");
require("jsunity").runTest("js/server/tests/aql/aql-escaping.js");
arangod commandline arguments
-----------------------------
bin/arangod /tmp/dataUT --javascript.unit-tests="js/server/tests/aql-escaping.js" --no-server
bin/arangod /tmp/dataUT --javascript.unit-tests="js/server/tests/aql/aql-escaping.js" --no-server
js/common/modules/loadtestrunner.js
@ -351,7 +345,7 @@ Dependencies to build documentation:
- MarkdownPP
https://github.com/triAGENS/markdown-pp/
https://github.com/arangodb-helper/markdown-pp/
Checkout the code with Git, use your system python to install:
@ -366,6 +360,7 @@ Dependencies to build documentation:
- `npm`
If not, add the installation path to your environment variable PATH.
Gitbook requires more recent node versions.
- [Gitbook](https://github.com/GitbookIO/gitbook)
@ -381,7 +376,7 @@ Dependencies to build documentation:
Generate users documentation
============================
If you've edited examples, see below how to regenerate them.
If you've edited examples, see below how to regenerate them with `./utils/generateExamples.sh`.
If you've edited REST documentation, first invoke `./utils/generateSwagger.sh`.
Run the `make` command in `arangodb/Documentation/Books` to generate it.
The documentation will be generated in subfolders in `arangodb/Documentation/Books/books` -
@ -431,6 +426,8 @@ Generate an ePub:
gitbook epub ./ppbooks/Manual ./target/path/filename.epub
Examples
========
Where to add new...
-------------------
- Documentation/DocuBlocks/* - markdown comments with execution section
@ -609,6 +606,8 @@ Attributes:
can be either a swaggertype, or a *RESTRUCT*
- format: if type is a native swagger type, some support a format to specify them
--------------------------------------------------------------------------------
Local cluster startup
=====================
@ -630,6 +629,7 @@ up in the GNU debugger in separate windows (using `xterm`s). In that
case one has to hit ENTER in the original terminal where the script runs
to continue, once all processes have been start up in the debugger.
--------------------------------------------------------------------------------
Front-End (WebUI)
=========

View File

@ -289,7 +289,7 @@ void Constituent::callElection() {
}
std::string body;
std::vector<ClusterCommResult> results(config().endpoints.size());
std::vector<OperationID> operationIDs(config().endpoints.size());
std::stringstream path;
path << "/_api/agency_priv/requestVote?term=" << _term
@ -301,7 +301,7 @@ void Constituent::callElection() {
if (i != _id && endpoint(i) != "") {
auto headerFields =
std::make_unique<std::unordered_map<std::string, std::string>>();
results[i] = arangodb::ClusterComm::instance()->asyncRequest(
operationIDs[i] = arangodb::ClusterComm::instance()->asyncRequest(
"1", 1, config().endpoints[i], GeneralRequest::RequestType::GET,
path.str(), std::make_shared<std::string>(body), headerFields,
nullptr, config().minPing, true);
@ -313,14 +313,16 @@ void Constituent::callElection() {
sleepFor(.5 * config().minPing, .8 * config().minPing));
// Collect votes
// FIXME: This code can be improved: One can wait for an arbitrary
// result by creating a coordinatorID and waiting for a pattern.
for (arangodb::consensus::id_t i = 0; i < config().endpoints.size(); ++i) {
if (i != _id && endpoint(i) != "") {
ClusterCommResult res =
arangodb::ClusterComm::instance()->enquire(results[i].operationID);
arangodb::ClusterComm::instance()->enquire(operationIDs[i]);
if (res.status == CL_COMM_SENT) { // Request successfully sent
res = arangodb::ClusterComm::instance()->wait(
"1", 1, results[i].operationID, "1");
"1", 1, operationIDs[i], "1");
std::shared_ptr<Builder> body = res.result->getBodyVelocyPack();
if (body->isEmpty()) { // body empty
continue;

View File

@ -31,17 +31,11 @@ void NotifierThread::scheduleNotification(const std::string& endpoint) {
auto headerFields =
std::make_unique<std::unordered_map<std::string, std::string>>();
while (true) {
auto res = arangodb::ClusterComm::instance()->asyncRequest(
"", TRI_NewTickServer(), endpoint, GeneralRequest::RequestType::POST,
_path, std::make_shared<std::string>(_body->toJson()), headerFields,
std::make_shared<NotifyCallback>(cb), 5.0, true);
if (res.status == CL_COMM_SUBMITTED) {
break;
}
usleep(500000);
}
// This is best effort: We do not guarantee at least once delivery!
arangodb::ClusterComm::instance()->asyncRequest(
"", TRI_NewTickServer(), endpoint, GeneralRequest::RequestType::POST,
_path, std::make_shared<std::string>(_body->toJson()), headerFields,
std::make_shared<NotifyCallback>(cb), 5.0, true);
}
bool NotifierThread::start() { return Thread::start(); }

View File

@ -249,7 +249,7 @@ std::vector<bool> Store::apply(std::vector<VPackSlice> const& queries,
auto headerFields =
std::make_unique<std::unordered_map<std::string, std::string>>();
ClusterCommResult res = arangodb::ClusterComm::instance()->asyncRequest(
arangodb::ClusterComm::instance()->asyncRequest(
"1", 1, endpoint, GeneralRequest::RequestType::POST, path,
std::make_shared<std::string>(body.toString()), headerFields,
std::make_shared<StoreCallback>(), 0.0, true);

View File

@ -1091,18 +1091,19 @@ static bool throwExceptionAfterBadSyncRequest(ClusterCommResult* res,
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_CLUSTER_TIMEOUT, errorMessage);
}
if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
// there is no result
std::string errorMessage =
std::string("Empty result in communication with shard '") +
std::string(res->shardID) + std::string("' on cluster node '") +
std::string(res->serverID) + std::string("'");
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_CLUSTER_CONNECTION_LOST,
errorMessage);
}
if (res->status == CL_COMM_ERROR) {
std::string errorMessage;
// This could be a broken connection or an Http error:
if (res->result == nullptr || !res->result->isComplete()) {
// there is no result
errorMessage +=
std::string("Empty result in communication with shard '") +
std::string(res->shardID) + std::string("' on cluster node '") +
std::string(res->serverID) + std::string("'");
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_CLUSTER_CONNECTION_LOST,
errorMessage);
}
TRI_ASSERT(nullptr != res->result);
StringBuffer const& responseBodyBuf(res->result->getBody());

View File

@ -506,9 +506,9 @@ struct CoordinatorInstanciator : public WalkerWorker<ExecutionNode> {
auto headers = std::make_unique<std::unordered_map<std::string, std::string>>();
(*headers)["X-Arango-Nolock"] = shardId; // Prevent locking
auto res = cc->asyncRequest("", coordTransactionID, "shard:" + shardId,
arangodb::GeneralRequest::RequestType::POST,
url, body, headers, nullptr, 30.0);
cc->asyncRequest("", coordTransactionID, "shard:" + shardId,
arangodb::GeneralRequest::RequestType::POST,
url, body, headers, nullptr, 30.0);
}
/// @brief aggregateQueryIds, get answers for all shards in a Scatter/Gather

View File

@ -65,7 +65,7 @@ void ClusterCommResult::setDestination(std::string const& dest,
serverID = (*resp)[0];
} else {
serverID = "";
status = CL_COMM_ERROR;
status = CL_COMM_BACKEND_UNAVAILABLE;
if (logConnectionErrors) {
LOG(ERR) << "cannot find responsible server for shard '"
<< shardID << "'";
@ -89,7 +89,7 @@ void ClusterCommResult::setDestination(std::string const& dest,
shardID = "";
serverID = "";
endpoint = "";
status = CL_COMM_ERROR;
status = CL_COMM_BACKEND_UNAVAILABLE;
errorMessage = "did not understand destination'" + dest + "'";
if (logConnectionErrors) {
LOG(ERR) << "did not understand destination '" << dest << "'";
@ -102,7 +102,7 @@ void ClusterCommResult::setDestination(std::string const& dest,
auto ci = ClusterInfo::instance();
endpoint = ci->getServerEndpoint(serverID);
if (endpoint.empty()) {
status = CL_COMM_ERROR;
status = CL_COMM_BACKEND_UNAVAILABLE;
errorMessage = "did not find endpoint of server '" + serverID + "'";
if (logConnectionErrors) {
LOG(ERR) << "did not find endpoint of server '" << serverID
@ -193,16 +193,17 @@ OperationID ClusterComm::getOperationID() { return TRI_NewTickServer(); }
/// DBServer back to us. Therefore ClusterComm also creates an entry in
/// a list of expected answers. One either has to use a callback for
/// the answer, or poll for it, or drop it to prevent memory leaks.
/// The result of this call is just a record that the initial HTTP
/// request has been queued (`status` is CL_COMM_SUBMITTED). Use @ref
/// enquire below to get information about the progress. The actual
/// answer is then delivered either in the callback or via poll. The
/// ClusterCommResult is returned by value.
/// If `singleRequest` is set to `true`, then the destination can be
/// an arbitrary server, the functionality can also be used in single-Server
/// mode, and the operation is complete when the single request is sent
/// and the corresponding answer has been received. We use this functionality
/// for the agency mode of ArangoDB.
/// This call never returns a result directly, rather, it returns an
/// operation ID under which one can query the outcome with a wait() or
/// enquire() call (see below).
///
/// Use @ref enquire below to get information about the progress. The
/// actual answer is then delivered either in the callback or via
/// poll. If `singleRequest` is set to `true`, then the destination
/// can be an arbitrary server, the functionality can also be used in
/// single-Server mode, and the operation is complete when the single
/// request is sent and the corresponding answer has been received. We
/// use this functionality for the agency mode of ArangoDB.
/// The library takes ownerships of the pointer `headerFields` by moving
/// the unique_ptr to its own storage, this is necessary since this
/// method sometimes has to add its own headers. The library retains shared
@ -221,7 +222,7 @@ OperationID ClusterComm::getOperationID() { return TRI_NewTickServer(); }
/// "tcp://..." or "ssl://..." endpoints, if `singleRequest` is true.
////////////////////////////////////////////////////////////////////////////////
ClusterCommResult const ClusterComm::asyncRequest(
OperationID ClusterComm::asyncRequest(
ClientTransactionID const clientTransactionID,
CoordTransactionID const coordTransactionID, std::string const& destination,
arangodb::GeneralRequest::RequestType reqtype,
@ -235,9 +236,11 @@ ClusterCommResult const ClusterComm::asyncRequest(
auto op = std::make_unique<ClusterCommOperation>();
op->result.clientTransactionID = clientTransactionID;
op->result.coordTransactionID = coordTransactionID;
OperationID opId = 0;
do {
op->result.operationID = getOperationID();
} while (op->result.operationID == 0); // just to make sure
opId = getOperationID();
} while (opId == 0); // just to make sure
op->result.operationID = opId;
op->result.status = CL_COMM_SUBMITTED;
op->result.single = singleRequest;
op->reqtype = reqtype;
@ -250,20 +253,27 @@ ClusterCommResult const ClusterComm::asyncRequest(
op->result.setDestination(destination, logConnectionErrors());
if (op->result.status == CL_COMM_ERROR) {
// In the non-singleRequest mode we want to put it into the received
// queue right away for backward compatibility:
// We put it into the received queue right away for error reporting:
ClusterCommResult const resCopy(op->result);
if (!singleRequest) {
LOG(DEBUG) << "In asyncRequest, putting failed request "
<< resCopy.operationID << " directly into received queue.";
CONDITION_LOCKER(locker, somethingReceived);
received.push_back(op.get());
op.release();
auto q = received.end();
receivedByOpID[resCopy.operationID] = --q;
somethingReceived.broadcast();
LOG(DEBUG) << "In asyncRequest, putting failed request "
<< resCopy.operationID << " directly into received queue.";
CONDITION_LOCKER(locker, somethingReceived);
received.push_back(op.get());
op.release();
auto q = received.end();
receivedByOpID[opId] = --q;
if (nullptr != callback) {
op.reset(*q);
if ( (*callback.get())(&(op->result)) ) {
auto i = receivedByOpID.find(opId);
receivedByOpID.erase(i);
received.erase(q);
} else {
op.release();
}
}
return resCopy;
somethingReceived.broadcast();
return opId;
}
if (destination.substr(0, 6) == "shard:") {
@ -312,20 +322,18 @@ ClusterCommResult const ClusterComm::asyncRequest(
// }
// std::cout << std::endl;
ClusterCommResult const res(op->result);
{
CONDITION_LOCKER(locker, somethingToSend);
toSend.push_back(op.get());
TRI_ASSERT(nullptr != op.get());
op.release();
std::list<ClusterCommOperation*>::iterator i = toSend.end();
toSendByOpID[res.operationID] = --i;
toSendByOpID[opId] = --i;
}
LOG(DEBUG) << "In asyncRequest, put into queue " << res.operationID;
LOG(DEBUG) << "In asyncRequest, put into queue " << opId;
somethingToSend.signal();
return res;
return opId;
}
////////////////////////////////////////////////////////////////////////////////
@ -371,7 +379,7 @@ std::unique_ptr<ClusterCommResult> ClusterComm::syncRequest(
res->setDestination(destination, logConnectionErrors());
if (res->status == CL_COMM_ERROR) {
if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
return res;
}
@ -887,9 +895,10 @@ std::string ClusterComm::processAnswer(std::string& coordinatorHeader,
if ((*op->callback.get())(&op->result)) {
// This is fully processed, so let's remove it from the queue:
QueueIterator q = i->second;
std::unique_ptr<ClusterCommOperation> o(op);
receivedByOpID.erase(i);
received.erase(q);
delete op;
return std::string("");
}
}
} else {
@ -910,9 +919,10 @@ std::string ClusterComm::processAnswer(std::string& coordinatorHeader,
if ((*op->callback)(&op->result)) {
// This is fully processed, so let's remove it from the queue:
QueueIterator q = i->second;
std::unique_ptr<ClusterCommOperation> o(op);
toSendByOpID.erase(i);
toSend.erase(q);
delete op;
return std::string("");
}
}
} else {
@ -1077,21 +1087,17 @@ size_t ClusterComm::performRequests(std::vector<ClusterCommRequest>& requests,
localTimeOut = endTime - now;
dueTime[i] = endTime + 10;
}
auto res = asyncRequest("", coordinatorTransactionID,
requests[i].destination,
requests[i].requestType,
requests[i].path,
requests[i].body,
requests[i].headerFields,
nullptr, localTimeOut,
false);
if (res.status == CL_COMM_ERROR) {
// We did not find the destination, this could change in the
// future, therefore we will retry at some stage:
drop("", 0, res.operationID, ""); // forget about it
} else {
opIDtoIndex.insert(std::make_pair(res.operationID, i));
}
OperationID opId = asyncRequest("", coordinatorTransactionID,
requests[i].destination,
requests[i].requestType,
requests[i].path,
requests[i].body,
requests[i].headerFields,
nullptr, localTimeOut,
false);
opIDtoIndex.insert(std::make_pair(opId, i));
      // It is possible that an error occurs right away; we will notice
// below after wait(), though, and retry in due course.
}
}
@ -1145,8 +1151,7 @@ size_t ClusterComm::performRequests(std::vector<ClusterCommRequest>& requests,
LOG_TOPIC(TRACE, logTopic) << "ClusterComm::performRequests: "
<< "got BACKEND_UNAVAILABLE or TIMEOUT from "
<< requests[index].destination << ":"
<< requests[index].path << " with return code "
<< (int) res.answer_code;
<< requests[index].path;
// In this case we will retry at the dueTime
} else { // a "proper error"
requests[index].result = res;
@ -1334,13 +1339,29 @@ void ClusterCommThread::run() {
CONDITION_LOCKER(locker, cc->somethingReceived);
ClusterComm::QueueIterator q;
for (q = cc->received.begin(); q != cc->received.end(); ++q) {
for (q = cc->received.begin(); q != cc->received.end(); ) {
bool deleted = false;
op = *q;
if (op->result.status == CL_COMM_SENT) {
if (op->endTime < currentTime) {
op->result.status = CL_COMM_TIMEOUT;
if (nullptr != op->callback.get()) {
if ( (*op->callback.get())(&op->result) ) {
// This is fully processed, so let's remove it from the queue:
auto i = cc->receivedByOpID.find(op->result.operationID);
TRI_ASSERT(i != cc->receivedByOpID.end());
cc->receivedByOpID.erase(i);
std::unique_ptr<ClusterCommOperation> o(op);
auto qq = q++;
cc->received.erase(qq);
deleted = true;
}
}
}
}
if (!deleted) {
++q;
}
}
}
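The rewritten loop above uses the standard erase-while-iterating idiom for `std::list`: save the iterator, advance it, then erase through the saved copy, and only advance normally when the element is kept. A minimal stand-alone sketch of the same pattern (the function name is illustrative, not part of the ArangoDB code):

```cpp
#include <list>

// Erase-while-iterating on a std::list: erase() invalidates only the
// erased iterator, so we save-then-advance exactly as the loop above does.
int dropEvens(std::list<int>& ops) {
  int erased = 0;
  for (auto q = ops.begin(); q != ops.end(); ) {
    if (*q % 2 == 0) {
      auto qq = q++;      // save current position, advance first
      ops.erase(qq);      // then erase through the saved iterator
      ++erased;
    } else {
      ++q;                // element kept: advance normally
    }
  }
  return erased;
}
```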

View File

@ -70,7 +70,7 @@ enum ClusterCommOpStatus {
CL_COMM_SENT = 3, // initial request sent, response available
CL_COMM_TIMEOUT = 4, // no answer received until timeout
CL_COMM_RECEIVED = 5, // answer received
CL_COMM_ERROR = 6, // original request could not be sent
CL_COMM_ERROR = 6, // original request could not be sent or HTTP error
CL_COMM_DROPPED = 7, // operation was dropped, not known
// this is only used to report an error
// in the wait or enquire methods
@ -82,7 +82,87 @@ enum ClusterCommOpStatus {
////////////////////////////////////////////////////////////////////////////////
/// @brief used to report the status, progress and possibly result of
/// an operation
/// an operation. This is used for asyncRequest (with singleRequest
/// equal to true or false) and for syncRequest.
///
/// Here is a complete overview of how the request can happen and how this
/// is reflected in the ClusterCommResult. We first cover the asyncRequest
/// case and then describe the differences for syncRequest:
///
/// First, the actual destination is determined. If the responsible server
/// for a shard is not found or the endpoint for a named server is not found,
/// or if the given endpoint uses no known protocol (currently "tcp://" or
/// "ssl://"), then `status` is set to CL_COMM_BACKEND_UNAVAILABLE,
/// `errorMessage` is set but `result` and `answer` are both set
/// to nullptr. The flag `sendWasComplete` remains false and the
/// `answer_code` remains GeneralResponse::ResponseCode::PROCESSING.
/// A potentially given ClusterCommCallback is called.
///
/// If no error occurs so far, the status is set to CL_COMM_SUBMITTED.
/// Still, `result`, `answer` and `answer_code` are not yet set.
/// A call to ClusterComm::enquire can return a result with this status.
/// A call to ClusterComm::wait cannot return a result with this status.
/// The request is queued for sending.
///
/// As soon as the sending thread discovers the submitted request, it
/// sets its status to CL_COMM_SENDING and tries to open a connection
/// or reuse an existing connection. If opening a connection fails
/// the status is set to CL_COMM_BACKEND_UNAVAILABLE. If the given timeout
/// is already reached, the status is set to CL_COMM_TIMEOUT. In both
/// error cases `result`, `answer` and `answer_code` are still unset.
///
/// If the connection was successfully created the request is sent.
/// If the request ended with a timeout, `status` is set to
/// CL_COMM_TIMEOUT as above. If another communication error (broken
/// connection) happens, `status` is set to CL_COMM_BACKEND_UNAVAILABLE.
/// In both cases, `result` can be set or can still be a nullptr.
/// `answer` and `answer_code` are still unset.
///
/// If the request is completed, but an HTTP status code >= 400 occurred,
/// the status is set to CL_COMM_ERROR, but `result` is set correctly
/// to indicate the error. If all is well, `status` is set to CL_COMM_SENT.
///
/// In the `singleRequest==true` mode, the operation is finished at this
/// stage. The callback is called, and the result is either left in the
/// receiving queue or dropped. A call to ClusterComm::enquire or
/// ClusterComm::wait can return a result in this state. Note that
/// `answer` and `answer_code` are still not set. The flag
/// `sendWasComplete` is correctly set, though.
///
/// In the `singleRequest==false` mode, an asynchronous operation happens
/// at the server side and eventually, an HTTP request in the opposite
/// direction is issued. During that time, `status` remains CL_COMM_SENT.
/// A call to ClusterComm::enquire can return a result in this state.
/// A call to ClusterComm::wait does not.
///
/// If the answer does not arrive in the specified timeout, `status`
/// is set to CL_COMM_TIMEOUT and a potential callback is called.
/// From then on, ClusterComm::wait will return it (unless deleted
/// by the callback returning true).
///
/// If an answer arrives in time, then `answer` and `answer_code`
/// are finally set, and `status` is set to CL_COMM_RECEIVED. The callback
/// is called, and the result is either left in the received queue for
/// pickup by ClusterComm::wait or deleted. Note that if we get this
/// far, `status` is set to CL_COMM_RECEIVED, even if the status code
/// of the answer is >= 400.
///
/// Summing up, we have the following outcomes:
/// `status` `result` set `answer` set wait() returns
/// CL_COMM_SUBMITTED no no no
/// CL_COMM_SENDING no no no
/// CL_COMM_SENT yes no yes if single
/// CL_COMM_BACKEND_UN... yes or no no yes
/// CL_COMM_TIMEOUT yes or no no yes
/// CL_COMM_ERROR yes no yes
/// CL_COMM_RECEIVED yes yes yes
/// CL_COMM_DROPPED no no yes
///
/// The syncRequest behaves essentially in the same way, except that
/// no callback is ever called, the outcome cannot be CL_COMM_RECEIVED
/// or CL_COMM_DROPPED, and CL_COMM_SENT indicates a successful completion.
/// CL_COMM_ERROR means that the request was complete, but an HTTP error
/// occurred.
////////////////////////////////////////////////////////////////////////////////
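The outcome table in the comment above can be condensed into a small predicate. A minimal stand-alone sketch (the enum here is a stand-in for illustration, not the real `ClusterCommOpStatus`):

```cpp
// Stand-in for ClusterCommOpStatus, used only for this sketch.
enum class Status {
  Submitted, Sending, Sent, BackendUnavailable,
  Timeout, Error, Received, Dropped
};

// Per the summary table: states in which ClusterComm::wait() would hand
// the result back to the caller (CL_COMM_SENT only in singleRequest mode).
bool waitReturns(Status s, bool singleRequest) {
  switch (s) {
    case Status::Submitted:
    case Status::Sending:
      return false;          // wait() does not return these
    case Status::Sent:
      return singleRequest;  // finished only in singleRequest mode
    default:
      return true;           // all error/terminal states are returned
  }
}
```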
struct ClusterCommResult {
@ -286,7 +366,7 @@ class ClusterComm {
/// @brief submit an HTTP request to a shard asynchronously.
//////////////////////////////////////////////////////////////////////////////
ClusterCommResult const asyncRequest(
OperationID asyncRequest(
ClientTransactionID const clientTransactionID,
CoordTransactionID const coordTransactionID,
std::string const& destination,

View File

@ -1536,6 +1536,10 @@ int ClusterInfo::ensureIndexCoordinator(
AgencyCommResult previous = ac.getValues(key);
if (!previous.successful()) {
return TRI_ERROR_CLUSTER_READING_PLAN_AGENCY;
}
velocypack::Slice collection =
previous.slice()[0].get(std::vector<std::string>(
{ AgencyComm::prefix(), "Plan", "Collections",
@ -1719,10 +1723,18 @@ int ClusterInfo::dropIndexCoordinator(std::string const& databaseName,
AgencyCommResult res = ac.getValues(key);
if (!res.successful()) {
return TRI_ERROR_CLUSTER_READING_PLAN_AGENCY;
}
velocypack::Slice previous =
res.slice()[0].get(std::vector<std::string>(
{ AgencyComm::prefix(), "Plan", "Collections", databaseName, collectionID }
));
  if (!previous.isObject()) {
return TRI_ERROR_ARANGO_COLLECTION_NOT_FOUND;
}
TRI_ASSERT(VPackObjectIterator(previous).size()>0);
std::string where =

View File

@ -47,16 +47,22 @@ static double const CL_DEFAULT_TIMEOUT = 60.0;
namespace arangodb {
static int handleGeneralCommErrors(ClusterCommResult const* res) {
// This function creates an error code from a ClusterCommResult.
// If TRI_ERROR_NO_ERROR is returned, then the result was CL_COMM_RECEIVED
// and .answer can safely be inspected.
if (res->status == CL_COMM_TIMEOUT) {
// No reply, we give up:
return TRI_ERROR_CLUSTER_TIMEOUT;
} else if (res->status == CL_COMM_ERROR) {
// This could be a broken connection or an Http error:
if (res->result == nullptr || !res->result->isComplete()) {
      // there is no result
return TRI_ERROR_CLUSTER_CONNECTION_LOST;
} else if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
if (res->result == nullptr) {
return TRI_ERROR_CLUSTER_CONNECTION_LOST;
}
if (!res->result->isComplete()) {
// there is no result
return TRI_ERROR_CLUSTER_CONNECTION_LOST;
}
} else if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
return TRI_ERROR_CLUSTER_BACKEND_UNAVAILABLE;
}
@ -337,27 +343,26 @@ static void collectResultsFromAllShards(
responseCode = GeneralResponse::ResponseCode::SERVER_ERROR;
for (auto const& req : requests) {
auto res = req.result;
if (res.status == CL_COMM_RECEIVED) {
int commError = handleGeneralCommErrors(&res);
if (commError != TRI_ERROR_NO_ERROR) {
auto tmpBuilder = std::make_shared<VPackBuilder>();
auto weSend = shardMap.find(res.shardID);
        TRI_ASSERT(weSend != shardMap.end());  // We sent something there earlier.
size_t count = weSend->second.size();
for (size_t i = 0; i < count; ++i) {
tmpBuilder->openObject();
tmpBuilder->add("error", VPackValue(true));
tmpBuilder->add("errorNum", VPackValue(commError));
tmpBuilder->close();
}
resultMap.emplace(res.shardID, tmpBuilder);
} else {
TRI_ASSERT(res.answer != nullptr);
resultMap.emplace(res.shardID,
res.answer->toVelocyPack(&VPackOptions::Defaults));
extractErrorCodes(res, errorCounter, true);
responseCode = res.answer_code;
int commError = handleGeneralCommErrors(&res);
if (commError != TRI_ERROR_NO_ERROR) {
auto tmpBuilder = std::make_shared<VPackBuilder>();
auto weSend = shardMap.find(res.shardID);
      TRI_ASSERT(weSend != shardMap.end());  // We sent something there earlier.
size_t count = weSend->second.size();
for (size_t i = 0; i < count; ++i) {
tmpBuilder->openObject();
tmpBuilder->add("error", VPackValue(true));
tmpBuilder->add("errorNum", VPackValue(commError));
tmpBuilder->close();
}
resultMap.emplace(res.shardID, tmpBuilder);
} else {
TRI_ASSERT(res.answer != nullptr);
resultMap.emplace(res.shardID,
res.answer->toVelocyPack(&VPackOptions::Defaults));
extractErrorCodes(res, errorCounter, true);
responseCode = res.answer_code;
}
}
}
@ -791,8 +796,8 @@ int createDocumentOnCoordinator(
if (!useMultiple) {
TRI_ASSERT(requests.size() == 1);
auto const& req = requests[0];
auto res = req.result;
if (nrDone == 0) {
auto& res = req.result;
if (nrDone == 0 || res.status != CL_COMM_RECEIVED) {
      // There has been a communication error. Handle and return it.
return handleGeneralCommErrors(&res);
}
@ -949,8 +954,8 @@ int deleteDocumentOnCoordinator(
if (!useMultiple) {
TRI_ASSERT(requests.size() == 1);
auto const& req = requests[0];
auto res = req.result;
if (nrDone == 0) {
auto& res = req.result;
if (nrDone == 0 || res.status != CL_COMM_RECEIVED) {
return handleGeneralCommErrors(&res);
}
responseCode = res.answer_code;
@ -1030,7 +1035,7 @@ int deleteDocumentOnCoordinator(
auto res = req.result;
int error = handleGeneralCommErrors(&res);
if (error != TRI_ERROR_NO_ERROR) {
// Local data structores are automatically freed
// Local data structures are automatically freed
return error;
}
if (res.answer_code == GeneralResponse::ResponseCode::OK ||
@ -1562,12 +1567,11 @@ int getFilteredEdgesOnCoordinator(
int error = handleGeneralCommErrors(&res);
if (error != TRI_ERROR_NO_ERROR) {
cc->drop("", coordTransactionID, 0, "");
if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED) {
return TRI_ERROR_INTERNAL;
}
return error;
}
if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED) {
cc->drop("", coordTransactionID, 0, "");
return TRI_ERROR_INTERNAL;
}
std::shared_ptr<VPackBuilder> shardResult = res.answer->toVelocyPack(&VPackOptions::Defaults);
if (shardResult == nullptr) {

View File

@ -1667,11 +1667,11 @@ static void JS_AsyncRequest(v8::FunctionCallbackInfo<v8::Value> const& args) {
*headerFields, clientTransactionID,
coordTransactionID, timeout, singleRequest);
ClusterCommResult const res = cc->asyncRequest(
OperationID opId = cc->asyncRequest(
clientTransactionID, coordTransactionID, destination, reqType, path, body,
headerFields, 0, timeout, singleRequest);
if (res.status == CL_COMM_ERROR) {
ClusterCommResult res = cc->enquire(opId);
if (res.status == CL_COMM_BACKEND_UNAVAILABLE) {
TRI_V8_THROW_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL,
"couldn't queue async request");
}

View File

@ -799,15 +799,16 @@ void RestReplicationHandler::handleTrampolineCoordinator() {
"timeout within cluster");
return;
}
if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
// there is no result
generateError(GeneralResponse::ResponseCode::BAD,
TRI_ERROR_CLUSTER_CONNECTION_LOST,
"lost connection within cluster");
return;
}
if (res->status == CL_COMM_ERROR) {
// This could be a broken connection or an Http error:
if (res->result == nullptr || !res->result->isComplete()) {
// there is no result
generateError(GeneralResponse::ResponseCode::BAD,
TRI_ERROR_CLUSTER_CONNECTION_LOST,
"lost connection within cluster");
return;
}
TRI_ASSERT(nullptr != res->result && res->result->isComplete());
// In this case a proper HTTP error was reported by the DBserver,
// we simply forward the result.
// We intentionally fall through here.

View File

@ -123,6 +123,7 @@ void RestServerFeature::collectOptions(
"http.hide-product-header");
options->addOldOption("server.keep-alive-timeout", "http.keep-alive-timeout");
options->addOldOption("server.default-api-compatibility", "");
options->addOldOption("no-server", "server.rest-server");
options->addOption("--server.authentication",
"enable or disable authentication for ALL client requests",

View File

@ -1187,7 +1187,8 @@ static bool clusterSendToAllServers(
cc->drop("", coordTransactionID, 0, "");
return TRI_ERROR_CLUSTER_TIMEOUT;
}
if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED) {
if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED ||
res.status == CL_COMM_BACKEND_UNAVAILABLE) {
cc->drop("", coordTransactionID, 0, "");
return TRI_ERROR_INTERNAL;
}

View File

@ -69,18 +69,6 @@ class Slice;
#define TRI_COL_VERSION TRI_COL_VERSION_20
////////////////////////////////////////////////////////////////////////////////
/// @brief predefined system collection name for transactions
////////////////////////////////////////////////////////////////////////////////
#define TRI_COL_NAME_TRANSACTION "_trx"
////////////////////////////////////////////////////////////////////////////////
/// @brief predefined system collection name for replication
////////////////////////////////////////////////////////////////////////////////
#define TRI_COL_NAME_REPLICATION "_replication"
////////////////////////////////////////////////////////////////////////////////
/// @brief predefined collection name for users
////////////////////////////////////////////////////////////////////////////////

View File

@ -840,7 +840,7 @@ int TRI_SaveConfigurationReplicationApplier(
std::shared_ptr<VPackBuilder> builder;
try {
builder = config->toVelocyPack(false);
builder = config->toVelocyPack(true);
} catch (...) {
return TRI_ERROR_OUT_OF_MEMORY;
}

View File

@ -73,9 +73,7 @@ bool TRI_ExcludeCollectionReplication(char const* name, bool includeSystem) {
return true;
}
if (TRI_EqualString(name, TRI_COL_NAME_REPLICATION) ||
TRI_EqualString(name, TRI_COL_NAME_TRANSACTION) ||
TRI_IsPrefixString(name, TRI_COL_NAME_STATISTICS) ||
if (TRI_IsPrefixString(name, TRI_COL_NAME_STATISTICS) ||
TRI_EqualString(name, "_apps") ||
TRI_EqualString(name, "_configuration") ||
TRI_EqualString(name, "_cluster_kickstarter_plans") ||

View File

@ -469,11 +469,13 @@ ConsoleFeature::Prompt ConsoleFeature::buildPrompt(ClientFeature* client) {
if (c == 'E') {
// replace protocol
if (ep.find("tcp://") == 0) {
ep = ep.substr(6);
ep = ep.substr(strlen("tcp://"));
} else if (ep.find("http+tcp://") == 0) {
ep = ep.substr(strlen("http+tcp://"));
} else if (ep.find("ssl://") == 0) {
ep = ep.substr(6);
ep = ep.substr(strlen("ssl://"));
} else if (ep.find("unix://") == 0) {
ep = ep.substr(7);
ep = ep.substr(strlen("unix://"));
}
}
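The `strlen("scheme://")` form above keeps each prefix string and the number of characters stripped in one place, instead of a hard-coded count. A stand-alone sketch of the same scheme-stripping logic (the function name is illustrative, not the real ConsoleFeature API):

```cpp
#include <cstring>
#include <string>

// Strip a known scheme prefix from an endpoint, mirroring the
// strlen("...") pattern used in buildPrompt above.
std::string stripScheme(std::string ep) {
  for (const char* scheme : {"http+tcp://", "tcp://", "ssl://", "unix://"}) {
    if (ep.compare(0, std::strlen(scheme), scheme) == 0) {
      return ep.substr(std::strlen(scheme));  // drop exactly the prefix
    }
  }
  return ep;  // no known scheme: leave unchanged
}
```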

View File

@ -491,7 +491,7 @@ function analyzeServerCrash(arangod, options, checkStr)
statusExternal(arangod.monitor, true);
analyzeCoreDumpWindows(arangod);
} else {
fs.copyFile("bin/arangod", storeArangodPath);
fs.copyFile(ARANGOD_BIN, storeArangodPath);
analyzeCoreDump(arangod, options, storeArangodPath, arangod.pid);
}

View File

@ -557,9 +557,9 @@
}
});
// updates the users model
// updates the users models
addTask({
name: "updateUserModel",
name: "updateUserModels",
description: "convert documents in _users collection to new format",
system: DATABASE_SYSTEM,

View File

@ -451,7 +451,7 @@ void WindowsServiceFeature::shutDownFailure () {
/// @brief service control handler
////////////////////////////////////////////////////////////////////////////////
static void WINAPI ServiceCtrl(DWORD dwCtrlCode) {
void WINAPI ServiceCtrl(DWORD dwCtrlCode) {
DWORD dwState = SERVICE_RUNNING;
switch (dwCtrlCode) {

View File

@ -25,10 +25,12 @@
#include "ApplicationFeatures/ApplicationFeature.h"
extern SERVICE_STATUS_HANDLE ServiceStatus;
void SetServiceStatus(DWORD dwCurrentState, DWORD dwWin32ExitCode,
DWORD dwCheckPoint, DWORD dwWaitHint);
extern SERVICE_STATUS_HANDLE ServiceStatus;
void WINAPI ServiceCtrl(DWORD dwCtrlCode);
namespace arangodb {
class WindowsServiceFeature final : public application_features::ApplicationFeature {

View File

@ -343,6 +343,7 @@ bool copyRecursive(std::string const& source, std::string const& target,
bool copyDirectoryRecursive(std::string const& source,
std::string const& target, std::string& error) {
bool rc = true;
#ifdef TRI_HAVE_WIN32_LIST_FILES
auto isSubDirectory = [](struct _finddata_t item) -> bool {
@ -362,8 +363,8 @@ bool copyDirectoryRecursive(std::string const& source,
do {
#else
auto isSubDirectory = [](struct dirent* item) -> bool {
return isDirectory(item->d_name);
auto isSubDirectory = [](std::string const& name) -> bool {
return isDirectory(name);
};
struct dirent* d = (struct dirent*)TRI_Allocate(
@ -397,7 +398,7 @@ bool copyDirectoryRecursive(std::string const& source,
std::string src = source + TRI_DIR_SEPARATOR_STR + TRI_DIR_FN(oneItem);
// Handle subdirectories:
if (isSubDirectory(oneItem)) {
if (isSubDirectory(src)) {
long systemError;
int rc = TRI_CreateDirectory(dst.c_str(), systemError, error);
if (rc != TRI_ERROR_NO_ERROR) {
@ -410,7 +411,7 @@ bool copyDirectoryRecursive(std::string const& source,
break;
}
#ifndef _WIN32
} else if (isSymbolicLink(oneItem->d_name)) {
} else if (isSymbolicLink(src)) {
if (!TRI_CopySymlink(src, dst, error)) {
break;
}
@ -485,6 +486,7 @@ std::vector<std::string> listFiles(std::string const& directory) {
bool isDirectory(std::string const& path) {
TRI_stat_t stbuf;
int res = TRI_STAT(path.c_str(), &stbuf);
#ifdef _WIN32
return (res == 0) && ((stbuf.st_mode & S_IFMT) == S_IFDIR);
#else