
Merge branch 'devel' of https://github.com/arangodb/arangodb into devel

Commit a496d6bea8 by Kaveh Vahedipour, 2016-06-06 09:30:42 +02:00
32 changed files with 514 additions and 229 deletions


@ -17,35 +17,64 @@ Let's start with the basics: `INSERT`, `UPDATE` and `REMOVE` operations on singl
Here is an example that inserts a document in an existing collection *users*:

```js
INSERT {
    firstName: "Anna",
    name: "Pavlova",
    profession: "artist"
} IN users
```

You may provide a key for the new document; if not provided, ArangoDB will create one for you.

```js
INSERT {
    _key: "GilbertoGil",
    firstName: "Gilberto",
    name: "Gil",
    city: "Fortalezza"
} IN users
```

As ArangoDB is schema-free, attributes of the documents may vary:

```js
INSERT {
    _key: "PhilCarpenter",
    firstName: "Phil",
    name: "Carpenter",
    middleName: "G.",
    status: "inactive"
} IN users
```

```js
INSERT {
    _key: "NatachaDeclerck",
    firstName: "Natacha",
    name: "Declerck",
    location: "Antwerp"
} IN users
```

Updating is quite simple. The following AQL statement will add or change the attributes *status* and *location*:

```js
UPDATE "PhilCarpenter" WITH {
    status: "active",
    location: "Beijing"
} IN users
```

Replace is an alternative to update, where all attributes of the document are replaced.

```js
REPLACE {
    _key: "NatachaDeclerck",
    firstName: "Natacha",
    name: "Leclerc",
    status: "active",
    level: "premium"
} IN users
```

Removing a document if you know its key is simple as well:
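The hunk is cut off before the removal example; a minimal sketch consistent with the keys used above (the exact statement on the updated page may differ):

```js
REMOVE "GilbertoGil" IN users
```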


@ -35,6 +35,10 @@ APPEND([ 1, 2, 3 ], [ 3, 4, 5, 2, 9 ], true)
// [ 1, 2, 3, 4, 5, 9 ]
```

!SUBSECTION COUNT()

This is an alias for [LENGTH()](#length).

!SUBSECTION FIRST()

`FIRST(anyArray) → firstElement`


@ -6,25 +6,28 @@ additional language constructs.
!SUBSECTION ATTRIBUTES()

`ATTRIBUTES(document, removeInternal, sort) → strArray`

Return the attribute keys of the *document* as an array. Optionally omit
system attributes.

- **document** (object): an arbitrary document / object
- **removeInternal** (bool, *optional*): whether all system attributes (*_key*, *_id* etc.,
  every attribute key that starts with an underscore) shall be omitted in the result.
  The default is *false*.
- **sort** (bool, *optional*): optionally sort the resulting array alphabetically.
  The default is *false* and will return the attribute names in any order.
- returns **strArray** (array): the attribute keys of the input *document* as an
  array of strings

```js
ATTRIBUTES( { "foo": "bar", "_key": "123", "_custom": "yes" } )
// [ "foo", "_key", "_custom" ]

ATTRIBUTES( { "foo": "bar", "_key": "123", "_custom": "yes" }, true )
// [ "foo" ]

ATTRIBUTES( { "foo": "bar", "_key": "123", "_custom": "yes" }, false, true )
// [ "_custom", "_key", "foo" ]
```
@ -42,6 +45,10 @@ FOR attributeArray IN attributesPerDocument
  RETURN {attr, count}
```

!SUBSECTION COUNT()

This is an alias for [LENGTH()](#length).

!SUBSECTION HAS()

`HAS(document, attributeName) → isPresent`


@ -53,6 +53,10 @@ Return an array of collections.
- returns **docArray** (array): each collection as a document with attributes
  *name* and *_id* in an array

!SUBSECTION COUNT()

This is an alias for [LENGTH()](#length).

!SUBSECTION CURRENT_USER()

`CURRENT_USER() → userName`
@ -196,14 +200,87 @@ CALL( "SUBSTRING", "this is a test", 0, 4 )
!SECTION Internal functions

The following functions are used during development of ArangoDB as a database
system, primarily for unit testing. They are not intended to be used by end
users, especially not in production environments.

!SUBSECTION FAIL()

`FAIL(reason)`
Let a query fail on purpose. Can be used in a conditional branch, or to verify
if lazy evaluation / short circuiting is used for instance.

- **reason** (string): an error message
- returns nothing, because the query is aborted

```js
RETURN 1 == 1 ? "okay" : FAIL("error") // "okay"
RETURN 1 == 1 || FAIL("error") ? true : false // true
RETURN 1 == 2 && FAIL("error") ? true : false // false
RETURN 1 == 1 && FAIL("error") ? true : false // aborted with error
```
!SUBSECTION NOOPT()
`NOOPT(expression) → retVal`
No-operation that prevents query compile-time optimizations. Constant expressions
can be forced to be evaluated at runtime with this.
If there is a C++ implementation as well as a JavaScript implementation of an
AQL function, then it will enforce the use of the C++ version.
- **expression** (any): arbitrary expression
- returns **retVal** (any): the return value of the *expression*
```js
// differences in execution plan (explain)
FOR i IN 1..3 RETURN (1 + 1) // const assignment
FOR i IN 1..3 RETURN NOOPT(1 + 1) // simple expression
NOOPT( RAND() ) // C++ implementation
V8( RAND() ) // JavaScript implementation
```
!SUBSECTION PASSTHRU()
`PASSTHRU(value) → retVal`
This function is marked as non-deterministic so its argument withstands
query optimization.
- **value** (any): a value of arbitrary type
- returns **retVal** (any): *value*, without optimizations
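PASSTHRU() is described here without an example call; a minimal sketch consistent with the description above (the values are illustrative, not from the original page):

```js
RETURN PASSTHRU(42)                   // 42, but the argument is not optimized away
FOR i IN 1..3 RETURN PASSTHRU(1 + 1)  // kept as a runtime expression
```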
!SUBSECTION SLEEP()
`SLEEP(seconds) → null`
Wait for a certain amount of time before continuing the query.
- **seconds** (number): amount of time to wait
- returns a *null* value
```js
SLEEP(1) // wait 1 second
SLEEP(0.02) // wait 20 milliseconds
```
!SUBSECTION V8()

`V8(expression) → retVal`
No-operation that enforces the usage of the V8 JavaScript engine. If there is a
JavaScript implementation of an AQL function, for which there is also a C++
implementation, the JavaScript version will be used.
- **expression** (any): arbitrary expression
- returns **retVal** (any): the return value of the *expression*
```js
// differences in execution plan (explain)
FOR i IN 1..3 RETURN (1 + 1) // const assignment
FOR i IN 1..3 RETURN V8(1 + 1) // const assignment
FOR i IN 1..3 RETURN NOOPT(V8(1 + 1)) // v8 expression
```


@ -390,6 +390,31 @@ Result:
]
```
!SUBSECTION RANGE()
`RANGE(start, stop, step) → numArray`
Return an array of numbers in the specified range, optionally with increments
other than 1.
For integer ranges, use the [range operator](../Operators.md#range-operator)
instead for better performance.
- **start** (number): the value to start the range at (inclusive)
- **stop** (number): the value to end the range with (inclusive)
- **step** (number, *optional*): how much to increment in every step,
the default is *1.0*
- returns **numArray** (array): all numbers in the range as an array
```js
RANGE(1, 4) // [ 1, 2, 3, 4 ]
RANGE(1, 4, 2) // [ 1, 3 ]
RANGE(1, 4, 3) // [ 1, 4 ]
RANGE(1.5, 2.5) // [ 1.5, 2.5 ]
RANGE(1.5, 2.5, 0.5) // [ 1.5, 2, 2.5 ]
RANGE(-0.75, 1.1, 0.5) // [ -0.75, -0.25, 0.25, 0.75 ]
```
!SUBSECTION ROUND()

`ROUND(value) → roundedValue`


@ -92,6 +92,10 @@ CONTAINS("foobarbaz", "ba", true) // 3
CONTAINS("foobarbaz", "horse", true) // -1
```

!SUBSECTION COUNT()

This is an alias for [LENGTH()](#length).

!SUBSECTION FIND_FIRST()

`FIND_FIRST(text, search, start, end) → position`
@ -180,7 +184,7 @@ using wildcard matching.
- **text** (string): the string to search in
- **search** (string): a search pattern that can contain the wildcard characters
  `%` (meaning any sequence of characters, including none) and `_` (any single
  character). Literal `%` and `_` must be escaped with two backslashes.
  *search* cannot be a variable or a document attribute. The actual value must
  be present at query parse time already.
@ -189,6 +193,19 @@ using wildcard matching.
- returns **bool** (bool): *true* if the pattern is contained in *text*,
  and *false* otherwise

```js
LIKE("cart", "ca_t")   // true
LIKE("carrot", "ca_t") // false
LIKE("carrot", "ca%t") // true

LIKE("foo bar baz", "bar")   // false
LIKE("foo bar baz", "%bar%") // true
LIKE("bar", "%bar%")         // true

LIKE("FoO bAr BaZ", "fOo%bAz")       // false
LIKE("FoO bAr BaZ", "fOo%bAz", true) // true
```

!SUBSECTION LOWER()

`LOWER(value) → lowerCaseString`
@ -255,7 +272,7 @@ RANDOM_TOKEN(8) // "m9w50Ft9"
`REGEX(text, search, caseInsensitive) → bool`

Check whether the pattern *search* is contained in the string *text*,
using regular expression matching.
- **text** (string): the string to search in - **text** (string): the string to search in
- **search** (string): a regular expression search pattern - **search** (string): a regular expression search pattern
@ -265,38 +282,46 @@ using regular expression matching.
The regular expression may consist of literal characters and the following
characters and sequences:

- `.`: the dot matches any single character except line terminators.
  To include line terminators, use `[\s\S]` instead to simulate `.` with *DOTALL* flag.
- `\d`: matches a single digit, equivalent to `[0-9]`
- `\s`: matches a single whitespace character
- `\S`: matches a single non-whitespace character
- `\t`: matches a tab character
- `\r`: matches a carriage return
- `\n`: matches a line-feed character
- `[xyz]`: set of characters. Matches any of the enclosed characters (i.e.
  *x*, *y* or *z* in this case)
- `[^xyz]`: negated set of characters. Matches any other character than the
  enclosed ones (i.e. anything but *x*, *y* or *z* in this case)
- `[x-z]`: range of characters. Matches any of the characters in the
  specified range, e.g. `[0-9A-F]` to match any character in
  *0123456789ABCDEF*
- `[^x-z]`: negated range of characters. Matches any other character than the
  ones specified in the range
- `(xyz)`: defines and matches a pattern group
- `(x|y)`: matches either *x* or *y*
- `^`: matches the beginning of the string (e.g. `^xyz`)
- `$`: matches the end of the string (e.g. `xyz$`)

Note that the characters `.`, `*`, `?`, `[`, `]`, `(`, `)`, `{`, `}`, `^`,
and `$` have a special meaning in regular expressions and may need to be
escaped using a backslash (`\\`). A literal backslash should also be escaped
using another backslash, i.e. `\\\\`.

Characters and sequences may optionally be repeated using the following
quantifiers:

- `x*`: matches zero or more occurrences of *x*
- `x+`: matches one or more occurrences of *x*
- `x?`: matches one or zero occurrences of *x*
- `x{y}`: matches exactly *y* occurrences of *x*
- `x{y,z}`: matches between *y* and *z* occurrences of *x*
- `x{y,}`: matches at least *y* occurrences of *x*

Note that `xyz+` matches *xyzzz*, but if you want to match *xyzxyz* instead,
you need to define a pattern group by wrapping the subexpression in parentheses
and place the quantifier right behind it: `(xyz)+`.

If the regular expression in *search* is invalid, a warning will be raised
and the function will return *false*.
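The visible hunk contains no example calls for REGEX(); a few illustrative calls whose results follow from the pattern rules above (not copied from the original page):

```js
REGEX("the quick brown fox", "the.*fox") // true
REGEX("the quick brown fox", "^(a|the)") // true
REGEX("foobarbaz", "bar$")               // false
REGEX("FOOBAR", "foo.*", true)           // true (case-insensitive)
```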


@ -161,8 +161,8 @@ The following type check functions are available:
- `IS_DOCUMENT(value) → bool`: This is an alias for *IS_OBJECT()*
- `IS_DATESTRING(value) → bool`: Check whether *value* is a string that can be used
  in a date function. This includes partial dates such as *"2015"* or *"2015-10"* and
  strings containing invalid dates such as *"2015-02-31"*. The function will return
  false for all non-string values, even if some of them may be usable in date functions.
- `TYPENAME(value) → typeName`: Return the data type name of *value*. The data type


@ -59,7 +59,7 @@ or underscore.
```
"abc" LIKE "a%"              // true
"abc" LIKE "_bc"             // true
"a_b_foo" LIKE "a\\_b\\_foo" // true
```

The pattern matching performed by the *LIKE* operator is case-sensitive.
@ -182,7 +182,14 @@ AQL supports the following arithmetic operators:
- */* division
- *%* modulus

Unary plus and unary minus are supported as well:

```js
LET x = -5
LET y = 1
RETURN [-x, +y]
// [5, 1]
```

For exponentiation, there is a [numeric function](Functions/Numeric.md#pow) *POW()*.
@ -202,7 +209,7 @@ Some example arithmetic operations:
The arithmetic operators accept operands of any type. Passing non-numeric values to an
arithmetic operator will cast the operands to numbers using the type casting rules
applied by the [TO_NUMBER()](Functions/TypeCast.md#tonumber) function:

- `null` will be converted to `0`
- `false` will be converted to `0`, `true` will be converted to `1`
@ -215,8 +222,8 @@ applied by the `TO_NUMBER` function:
  `0`.
- objects / documents are converted to the number `0`.

An arithmetic operation that produces an invalid value, such as `1 / 0` (division by zero)
will also produce a result value of `0`.

Here are a few examples:
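The example list itself falls outside this hunk; a short sketch of what such examples look like, derived from the casting rules just stated (not copied from the original page):

```js
1 + "2"  // 3 ("2" is cast to the number 2)
1 + null // 1 (null is cast to 0)
1 + true // 2 (true is cast to 1)
1 / 0    // 0 (invalid operation yields 0)
```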


@ -813,6 +813,8 @@ in 3.0:
  is unchanged.
- `--server.foxx-queues-poll-interval` was renamed to `--foxx.queues-poll-interval`.
  The meaning of the option is unchanged.
- `--no-server` was renamed to `--server.rest-server`. Note that the meaning of the
  option `--server.rest-server` is the opposite of the previous `--no-server`.
- `--database.query-cache-mode` was renamed to `--query.cache-mode`. The meaning of
  the option is unchanged.
- `--database.query-cache-max-results` was renamed to `--query.cache-entries`. The


@ -5,10 +5,9 @@ This file contains documentation about the build process, documentation generati
CMake
=====

* *-DUSE_MAINTAINER_MODE* - generate lex/yacc files
* *-DUSE_BACKTRACE=1* - add backtraces to native code asserts & exceptions
* *-DUSE_FAILURE_TESTS=1* - adds JavaScript hook to crash the server for data integrity tests
CFLAGS
------
@ -33,7 +32,7 @@ If the compile goes wrong for no particular reason, appending 'verbose=' adds mo
Runtime
-------

* start arangod with `--console` to get a debug console
* Cheapen startup for valgrind: `--server.rest-server false --javascript.gc-frequency 1000000 --javascript.gc-interval 65536 --scheduler.threads=1 --javascript.v8-contexts=1`
* to have backtraces output set this on the prompt: `ENABLE_NATIVE_BACKTRACES(true)`
Startup Startup
@ -60,8 +59,8 @@ JSLint
======

(we switched to jshint a while back - this is still named jslint for historical reasons)

Script
------

use

    ./utils/gitjslint.sh
@ -187,17 +186,12 @@ jsUnity via arangosh
--------------------

arangosh is similar, however you can only run tests which are intended to be run via arangosh:

    require("jsunity").runTest("js/client/tests/shell/shell-client.js");

mocha tests
-----------

All tests with -spec in their names are using the [mochajs.org](https://mochajs.org) framework.

Javascript framework
--------------------

(used in our local Jenkins and TravisCI integration; required for running cluster tests)
@ -232,7 +226,7 @@ A commandline for running a single test (-> with the facility 'single_server') u
valgrind could look like this. Options are passed as regular long values in the
syntax --option value --sub:option value. Using Valgrind could look like this:

    ./scripts/unittest single_server --test js/server/tests/aql/aql-escaping.js \
      --extraargs:server.threads 1 \
      --extraargs:scheduler.threads 1 \
      --extraargs:javascript.gc-frequency 1000000 \
@ -249,11 +243,11 @@ Running a single unittestsuite
------------------------------

Testing a single test with the framework directly on a server:

    scripts/unittest single_server --test js/server/tests/aql/aql-escaping.js

Testing a single test with the framework via arangosh:

    scripts/unittest single_client --test js/server/tests/aql/aql-escaping.js
@ -268,23 +262,23 @@ Since downloading fox apps from github can be cumbersome with shaky DSL
and DOS'ed github, we can fake it like this:

    export FOXX_BASE_URL="http://germany/fakegit/"
    ./scripts/unittest single_server --test 'js/server/tests/shell/shell-foxx-manager-spec.js'
arangod Emergency console
-------------------------

    require("jsunity").runTest("js/server/tests/aql/aql-escaping.js");

arangosh client
---------------

    require("jsunity").runTest("js/server/tests/aql/aql-escaping.js");

arangod commandline arguments
-----------------------------

    bin/arangod /tmp/dataUT --javascript.unit-tests="js/server/tests/aql/aql-escaping.js" --no-server
js/common/modules/loadtestrunner.js js/common/modules/loadtestrunner.js
@ -351,7 +345,7 @@ Dependencies to build documentation:
- MarkdownPP - MarkdownPP
  https://github.com/arangodb-helper/markdown-pp/

  Check out the code with Git, use your system python to install:
@ -366,6 +360,7 @@ Dependencies to build documentation:
- `npm`

  If not, add the installation path to your environment variable PATH.
  Gitbook requires more recent node versions.

- [Gitbook](https://github.com/GitbookIO/gitbook)
@ -381,7 +376,7 @@ Dependencies to build documentation:
Generate users documentation
============================

If you've edited examples, see below how to regenerate them with `./utils/generateExamples.sh`.

If you've edited REST documentation, first invoke `./utils/generateSwagger.sh`.

Run the `make` command in `arangodb/Documentation/Books` to generate it.
The documentation will be generated in subfolders in `arangodb/Documentation/Books/books` -
@ -431,6 +426,8 @@ Generate an ePub:
    gitbook epub ./ppbooks/Manual ./target/path/filename.epub

Examples
========

Where to add new...
-------------------

- Documentation/DocuBlocks/* - markdown comments with execution section
@ -609,6 +606,8 @@ Attributes:
  can be either a swaggertype, or a *RESTRUCT*
- format: if type is a native swagger type, some support a format to specify them

--------------------------------------------------------------------------------

Local cluster startup
=====================
@ -630,6 +629,7 @@ up in the GNU debugger in separate windows (using `xterm`s). In that
case one has to hit ENTER in the original terminal where the script runs
to continue, once all processes have been started up in the debugger.

--------------------------------------------------------------------------------

Front-End (WebUI)
=================


@ -289,7 +289,7 @@ void Constituent::callElection() {
  }

  std::string body;
  std::vector<OperationID> operationIDs(config().endpoints.size());

  std::stringstream path;
  path << "/_api/agency_priv/requestVote?term=" << _term
@ -301,7 +301,7 @@ void Constituent::callElection() {
    if (i != _id && endpoint(i) != "") {
      auto headerFields =
          std::make_unique<std::unordered_map<std::string, std::string>>();
      operationIDs[i] = arangodb::ClusterComm::instance()->asyncRequest(
          "1", 1, config().endpoints[i], GeneralRequest::RequestType::GET,
          path.str(), std::make_shared<std::string>(body), headerFields,
          nullptr, config().minPing, true);
@ -313,14 +313,16 @@ void Constituent::callElection() {
      sleepFor(.5 * config().minPing, .8 * config().minPing));

  // Collect votes
  // FIXME: This code can be improved: One can wait for an arbitrary
  // result by creating a coordinatorID and waiting for a pattern.
  for (arangodb::consensus::id_t i = 0; i < config().endpoints.size(); ++i) {
    if (i != _id && endpoint(i) != "") {
      ClusterCommResult res =
          arangodb::ClusterComm::instance()->enquire(operationIDs[i]);
      if (res.status == CL_COMM_SENT) {  // Request successfully sent
        res = arangodb::ClusterComm::instance()->wait(
            "1", 1, operationIDs[i], "1");
        std::shared_ptr<Builder> body = res.result->getBodyVelocyPack();
        if (body->isEmpty()) {  // body empty
          continue;


@ -31,17 +31,11 @@ void NotifierThread::scheduleNotification(const std::string& endpoint) {
  auto headerFields =
      std::make_unique<std::unordered_map<std::string, std::string>>();

  // This is best effort: We do not guarantee at least once delivery!
  arangodb::ClusterComm::instance()->asyncRequest(
      "", TRI_NewTickServer(), endpoint, GeneralRequest::RequestType::POST,
      _path, std::make_shared<std::string>(_body->toJson()), headerFields,
      std::make_shared<NotifyCallback>(cb), 5.0, true);
}

bool NotifierThread::start() { return Thread::start(); }


@ -249,7 +249,7 @@ std::vector<bool> Store::apply(std::vector<VPackSlice> const& queries,
      auto headerFields =
          std::make_unique<std::unordered_map<std::string, std::string>>();

      arangodb::ClusterComm::instance()->asyncRequest(
          "1", 1, endpoint, GeneralRequest::RequestType::POST, path,
          std::make_shared<std::string>(body.toString()), headerFields,
          std::make_shared<StoreCallback>(), 0.0, true);


@ -1091,18 +1091,19 @@ static bool throwExceptionAfterBadSyncRequest(ClusterCommResult* res,
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_CLUSTER_TIMEOUT, errorMessage); THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_CLUSTER_TIMEOUT, errorMessage);
} }
if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
// there is no result
std::string errorMessage =
std::string("Empty result in communication with shard '") +
std::string(res->shardID) + std::string("' on cluster node '") +
std::string(res->serverID) + std::string("'");
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_CLUSTER_CONNECTION_LOST,
errorMessage);
}
if (res->status == CL_COMM_ERROR) { if (res->status == CL_COMM_ERROR) {
std::string errorMessage; std::string errorMessage;
// This could be a broken connection or an Http error: TRI_ASSERT(nullptr != res->result);
if (res->result == nullptr || !res->result->isComplete()) {
// there is no result
errorMessage +=
std::string("Empty result in communication with shard '") +
std::string(res->shardID) + std::string("' on cluster node '") +
std::string(res->serverID) + std::string("'");
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_CLUSTER_CONNECTION_LOST,
errorMessage);
}
StringBuffer const& responseBodyBuf(res->result->getBody()); StringBuffer const& responseBodyBuf(res->result->getBody());

View File

@@ -506,9 +506,9 @@ struct CoordinatorInstanciator : public WalkerWorker<ExecutionNode> {
auto headers = std::make_unique<std::unordered_map<std::string, std::string>>(); auto headers = std::make_unique<std::unordered_map<std::string, std::string>>();
(*headers)["X-Arango-Nolock"] = shardId; // Prevent locking (*headers)["X-Arango-Nolock"] = shardId; // Prevent locking
auto res = cc->asyncRequest("", coordTransactionID, "shard:" + shardId, cc->asyncRequest("", coordTransactionID, "shard:" + shardId,
arangodb::GeneralRequest::RequestType::POST, arangodb::GeneralRequest::RequestType::POST,
url, body, headers, nullptr, 30.0); url, body, headers, nullptr, 30.0);
} }
/// @brief aggregateQueryIds, get answers for all shards in a Scatter/Gather /// @brief aggregateQueryIds, get answers for all shards in a Scatter/Gather

View File

@@ -65,7 +65,7 @@ void ClusterCommResult::setDestination(std::string const& dest,
serverID = (*resp)[0]; serverID = (*resp)[0];
} else { } else {
serverID = ""; serverID = "";
status = CL_COMM_ERROR; status = CL_COMM_BACKEND_UNAVAILABLE;
if (logConnectionErrors) { if (logConnectionErrors) {
LOG(ERR) << "cannot find responsible server for shard '" LOG(ERR) << "cannot find responsible server for shard '"
<< shardID << "'"; << shardID << "'";
@@ -89,7 +89,7 @@ void ClusterCommResult::setDestination(std::string const& dest,
shardID = ""; shardID = "";
serverID = ""; serverID = "";
endpoint = ""; endpoint = "";
status = CL_COMM_ERROR; status = CL_COMM_BACKEND_UNAVAILABLE;
errorMessage = "did not understand destination '" + dest + "'"; errorMessage = "did not understand destination '" + dest + "'";
if (logConnectionErrors) { if (logConnectionErrors) {
LOG(ERR) << "did not understand destination '" << dest << "'"; LOG(ERR) << "did not understand destination '" << dest << "'";
@@ -102,7 +102,7 @@ void ClusterCommResult::setDestination(std::string const& dest,
auto ci = ClusterInfo::instance(); auto ci = ClusterInfo::instance();
endpoint = ci->getServerEndpoint(serverID); endpoint = ci->getServerEndpoint(serverID);
if (endpoint.empty()) { if (endpoint.empty()) {
status = CL_COMM_ERROR; status = CL_COMM_BACKEND_UNAVAILABLE;
errorMessage = "did not find endpoint of server '" + serverID + "'"; errorMessage = "did not find endpoint of server '" + serverID + "'";
if (logConnectionErrors) { if (logConnectionErrors) {
LOG(ERR) << "did not find endpoint of server '" << serverID LOG(ERR) << "did not find endpoint of server '" << serverID
@@ -193,16 +193,17 @@ OperationID ClusterComm::getOperationID() { return TRI_NewTickServer(); }
/// DBServer back to us. Therefore ClusterComm also creates an entry in /// DBServer back to us. Therefore ClusterComm also creates an entry in
/// a list of expected answers. One either has to use a callback for /// a list of expected answers. One either has to use a callback for
/// the answer, or poll for it, or drop it to prevent memory leaks. /// the answer, or poll for it, or drop it to prevent memory leaks.
/// The result of this call is just a record that the initial HTTP /// This call never returns a result directly, rather, it returns an
/// request has been queued (`status` is CL_COMM_SUBMITTED). Use @ref /// operation ID under which one can query the outcome with a wait() or
/// enquire below to get information about the progress. The actual /// enquire() call (see below).
/// answer is then delivered either in the callback or via poll. The ///
/// ClusterCommResult is returned by value. /// Use @ref enquire below to get information about the progress. The
/// If `singleRequest` is set to `true`, then the destination can be /// actual answer is then delivered either in the callback or via
/// an arbitrary server, the functionality can also be used in single-Server /// poll. If `singleRequest` is set to `true`, then the destination
/// mode, and the operation is complete when the single request is sent /// can be an arbitrary server, the functionality can also be used in
/// and the corresponding answer has been received. We use this functionality /// single-Server mode, and the operation is complete when the single
/// for the agency mode of ArangoDB. /// request is sent and the corresponding answer has been received. We
/// use this functionality for the agency mode of ArangoDB.
/// The library takes ownership of the pointer `headerFields` by moving /// The library takes ownership of the pointer `headerFields` by moving
/// the unique_ptr to its own storage, this is necessary since this /// the unique_ptr to its own storage, this is necessary since this
/// method sometimes has to add its own headers. The library retains shared /// method sometimes has to add its own headers. The library retains shared
@@ -221,7 +222,7 @@ OperationID ClusterComm::getOperationID() { return TRI_NewTickServer(); }
/// "tcp://..." or "ssl://..." endpoints, if `singleRequest` is true. /// "tcp://..." or "ssl://..." endpoints, if `singleRequest` is true.
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////
ClusterCommResult const ClusterComm::asyncRequest( OperationID ClusterComm::asyncRequest(
ClientTransactionID const clientTransactionID, ClientTransactionID const clientTransactionID,
CoordTransactionID const coordTransactionID, std::string const& destination, CoordTransactionID const coordTransactionID, std::string const& destination,
arangodb::GeneralRequest::RequestType reqtype, arangodb::GeneralRequest::RequestType reqtype,
@@ -235,9 +236,11 @@ ClusterCommResult const ClusterComm::asyncRequest(
auto op = std::make_unique<ClusterCommOperation>(); auto op = std::make_unique<ClusterCommOperation>();
op->result.clientTransactionID = clientTransactionID; op->result.clientTransactionID = clientTransactionID;
op->result.coordTransactionID = coordTransactionID; op->result.coordTransactionID = coordTransactionID;
OperationID opId = 0;
do { do {
op->result.operationID = getOperationID(); opId = getOperationID();
} while (op->result.operationID == 0); // just to make sure } while (opId == 0); // just to make sure
op->result.operationID = opId;
op->result.status = CL_COMM_SUBMITTED; op->result.status = CL_COMM_SUBMITTED;
op->result.single = singleRequest; op->result.single = singleRequest;
op->reqtype = reqtype; op->reqtype = reqtype;
@@ -250,20 +253,27 @@ ClusterCommResult const ClusterComm::asyncRequest(
op->result.setDestination(destination, logConnectionErrors()); op->result.setDestination(destination, logConnectionErrors());
if (op->result.status == CL_COMM_ERROR) { if (op->result.status == CL_COMM_ERROR) {
// In the non-singleRequest mode we want to put it into the received // We put it into the received queue right away for error reporting:
// queue right away for backward compatibility:
ClusterCommResult const resCopy(op->result); ClusterCommResult const resCopy(op->result);
if (!singleRequest) { LOG(DEBUG) << "In asyncRequest, putting failed request "
LOG(DEBUG) << "In asyncRequest, putting failed request " << resCopy.operationID << " directly into received queue.";
<< resCopy.operationID << " directly into received queue."; CONDITION_LOCKER(locker, somethingReceived);
CONDITION_LOCKER(locker, somethingReceived); received.push_back(op.get());
received.push_back(op.get()); op.release();
op.release(); auto q = received.end();
auto q = received.end(); receivedByOpID[opId] = --q;
receivedByOpID[resCopy.operationID] = --q; if (nullptr != callback) {
somethingReceived.broadcast(); op.reset(*q);
if ( (*callback.get())(&(op->result)) ) {
auto i = receivedByOpID.find(opId);
receivedByOpID.erase(i);
received.erase(q);
} else {
op.release();
}
} }
return resCopy; somethingReceived.broadcast();
return opId;
} }
if (destination.substr(0, 6) == "shard:") { if (destination.substr(0, 6) == "shard:") {
@@ -312,20 +322,18 @@ ClusterCommResult const ClusterComm::asyncRequest(
// } // }
// std::cout << std::endl; // std::cout << std::endl;
ClusterCommResult const res(op->result);
{ {
CONDITION_LOCKER(locker, somethingToSend); CONDITION_LOCKER(locker, somethingToSend);
toSend.push_back(op.get()); toSend.push_back(op.get());
TRI_ASSERT(nullptr != op.get()); TRI_ASSERT(nullptr != op.get());
op.release(); op.release();
std::list<ClusterCommOperation*>::iterator i = toSend.end(); std::list<ClusterCommOperation*>::iterator i = toSend.end();
toSendByOpID[res.operationID] = --i; toSendByOpID[opId] = --i;
} }
LOG(DEBUG) << "In asyncRequest, put into queue " << res.operationID; LOG(DEBUG) << "In asyncRequest, put into queue " << opId;
somethingToSend.signal(); somethingToSend.signal();
return res; return opId;
} }
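With this change, `asyncRequest` hands back only an operation ID; the caller later correlates the outcome via `wait()` or `enquire()`. A toy, self-contained sketch of that ID-based request pattern (the class and method names here are illustrative, not the real ClusterComm API):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

using OperationID = uint64_t;

// Toy queue: submit() records the request and hands back an ID;
// the result is fetched later by ID, as ClusterComm::wait() does.
class ToyComm {
  OperationID next_ = 0;
  std::map<OperationID, std::string> results_;

 public:
  OperationID submit(std::string const& body) {
    OperationID id = ++next_;       // never 0, like getOperationID()
    results_[id] = "echo:" + body;  // pretend the backend answered
    return id;
  }
  std::string wait(OperationID id) {
    auto it = results_.find(id);
    return it == results_.end() ? "" : it->second;
  }
};
```

The real code additionally keeps the pending operation in a queue so that errors detected at submit time are reported through the same `wait()`/callback path as late errors.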
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////
@@ -371,7 +379,7 @@ std::unique_ptr<ClusterCommResult> ClusterComm::syncRequest(
res->setDestination(destination, logConnectionErrors()); res->setDestination(destination, logConnectionErrors());
if (res->status == CL_COMM_ERROR) { if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
return res; return res;
} }
@@ -887,9 +895,10 @@ std::string ClusterComm::processAnswer(std::string& coordinatorHeader,
if ((*op->callback.get())(&op->result)) { if ((*op->callback.get())(&op->result)) {
// This is fully processed, so let's remove it from the queue: // This is fully processed, so let's remove it from the queue:
QueueIterator q = i->second; QueueIterator q = i->second;
std::unique_ptr<ClusterCommOperation> o(op);
receivedByOpID.erase(i); receivedByOpID.erase(i);
received.erase(q); received.erase(q);
delete op; return std::string("");
} }
} }
} else { } else {
@@ -910,9 +919,10 @@ std::string ClusterComm::processAnswer(std::string& coordinatorHeader,
if ((*op->callback)(&op->result)) { if ((*op->callback)(&op->result)) {
// This is fully processed, so let's remove it from the queue: // This is fully processed, so let's remove it from the queue:
QueueIterator q = i->second; QueueIterator q = i->second;
std::unique_ptr<ClusterCommOperation> o(op);
toSendByOpID.erase(i); toSendByOpID.erase(i);
toSend.erase(q); toSend.erase(q);
delete op; return std::string("");
} }
} }
} else { } else {
@@ -1077,21 +1087,17 @@ size_t ClusterComm::performRequests(std::vector<ClusterCommRequest>& requests,
localTimeOut = endTime - now; localTimeOut = endTime - now;
dueTime[i] = endTime + 10; dueTime[i] = endTime + 10;
} }
auto res = asyncRequest("", coordinatorTransactionID, OperationID opId = asyncRequest("", coordinatorTransactionID,
requests[i].destination, requests[i].destination,
requests[i].requestType, requests[i].requestType,
requests[i].path, requests[i].path,
requests[i].body, requests[i].body,
requests[i].headerFields, requests[i].headerFields,
nullptr, localTimeOut, nullptr, localTimeOut,
false); false);
if (res.status == CL_COMM_ERROR) { opIDtoIndex.insert(std::make_pair(opId, i));
// We did not find the destination, this could change in the // It is possible that an error occurs right away, we will notice
// future, therefore we will retry at some stage: // below after wait(), though, and retry in due course.
drop("", 0, res.operationID, ""); // forget about it
} else {
opIDtoIndex.insert(std::make_pair(res.operationID, i));
}
} }
} }
@@ -1145,8 +1151,7 @@ size_t ClusterComm::performRequests(std::vector<ClusterCommRequest>& requests,
LOG_TOPIC(TRACE, logTopic) << "ClusterComm::performRequests: " LOG_TOPIC(TRACE, logTopic) << "ClusterComm::performRequests: "
<< "got BACKEND_UNAVAILABLE or TIMEOUT from " << "got BACKEND_UNAVAILABLE or TIMEOUT from "
<< requests[index].destination << ":" << requests[index].destination << ":"
<< requests[index].path << " with return code " << requests[index].path;
<< (int) res.answer_code;
// In this case we will retry at the dueTime // In this case we will retry at the dueTime
} else { // a "proper error" } else { // a "proper error"
requests[index].result = res; requests[index].result = res;
@@ -1334,13 +1339,29 @@ void ClusterCommThread::run() {
CONDITION_LOCKER(locker, cc->somethingReceived); CONDITION_LOCKER(locker, cc->somethingReceived);
ClusterComm::QueueIterator q; ClusterComm::QueueIterator q;
for (q = cc->received.begin(); q != cc->received.end(); ++q) { for (q = cc->received.begin(); q != cc->received.end(); ) {
bool deleted = false;
op = *q; op = *q;
if (op->result.status == CL_COMM_SENT) { if (op->result.status == CL_COMM_SENT) {
if (op->endTime < currentTime) { if (op->endTime < currentTime) {
op->result.status = CL_COMM_TIMEOUT; op->result.status = CL_COMM_TIMEOUT;
if (nullptr != op->callback.get()) {
if ( (*op->callback.get())(&op->result) ) {
// This is fully processed, so let's remove it from the queue:
auto i = cc->receivedByOpID.find(op->result.operationID);
TRI_ASSERT(i != cc->receivedByOpID.end());
cc->receivedByOpID.erase(i);
std::unique_ptr<ClusterCommOperation> o(op);
auto qq = q++;
cc->received.erase(qq);
deleted = true;
}
}
} }
} }
if (!deleted) {
++q;
}
} }
} }
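The loop above advances the iterator only when no element was erased, keeping a copy (`qq = q++`) before erasing so the live iterator stays valid. A self-contained sketch of this erase-while-iterating pattern on a `std::list` (names here are illustrative, not ArangoDB's):

```cpp
#include <cassert>
#include <list>

// Remove "timed out" entries while walking the list, advancing the
// iterator only when nothing was erased -- the same pattern as the
// timeout sweep in ClusterCommThread::run().
static int sweepTimedOut(std::list<int>& ops, int deadline) {
  int erased = 0;
  for (auto q = ops.begin(); q != ops.end(); ) {
    bool deleted = false;
    if (*q < deadline) {   // entry counts as timed out
      auto qq = q++;       // keep a copy, advance first
      ops.erase(qq);       // erasing qq leaves q valid
      deleted = true;
      ++erased;
    }
    if (!deleted) {
      ++q;                 // only advance when nothing was erased
    }
  }
  return erased;
}
```

This is the standard safe idiom for `std::list`, where `erase` invalidates only the iterator pointing at the erased node.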

View File

@@ -70,7 +70,7 @@ enum ClusterCommOpStatus {
CL_COMM_SENT = 3, // initial request sent, response available CL_COMM_SENT = 3, // initial request sent, response available
CL_COMM_TIMEOUT = 4, // no answer received until timeout CL_COMM_TIMEOUT = 4, // no answer received until timeout
CL_COMM_RECEIVED = 5, // answer received CL_COMM_RECEIVED = 5, // answer received
CL_COMM_ERROR = 6, // original request could not be sent CL_COMM_ERROR = 6, // original request could not be sent or HTTP error
CL_COMM_DROPPED = 7, // operation was dropped, not known CL_COMM_DROPPED = 7, // operation was dropped, not known
// this is only used to report an error // this is only used to report an error
// in the wait or enquire methods // in the wait or enquire methods
@@ -82,7 +82,87 @@ enum ClusterCommOpStatus {
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////
/// @brief used to report the status, progress and possibly result of /// @brief used to report the status, progress and possibly result of
/// an operation /// an operation; this is used for asyncRequest (with singleRequest
/// equal to true or false) and for syncRequest.
///
/// Here is a complete overview of how the request can happen and how this
/// is reflected in the ClusterCommResult. We first cover the asyncRequest
/// case and then describe the differences for syncRequest:
///
/// First, the actual destination is determined. If the responsible server
/// for a shard is not found or the endpoint for a named server is not found,
/// or if the given endpoint uses no known protocol (currently "tcp://" or
/// "ssl://"), then `status` is set to CL_COMM_BACKEND_UNAVAILABLE,
/// `errorMessage` is set but `result` and `answer` are both set
/// to nullptr. The flag `sendWasComplete` remains false and the
/// `answer_code` remains GeneralResponse::ResponseCode::PROCESSING.
/// A potentially given ClusterCommCallback is called.
///
/// If no error occurs so far, the status is set to CL_COMM_SUBMITTED.
/// Still, `result`, `answer` and `answer_code` are not yet set.
/// A call to ClusterComm::enquire can return a result with this status.
/// A call to ClusterComm::wait cannot return a result with this status.
/// The request is queued for sending.
///
/// As soon as the sending thread discovers the submitted request, it
/// sets its status to CL_COMM_SENDING and tries to open a connection
/// or reuse an existing connection. If opening a connection fails
/// the status is set to CL_COMM_BACKEND_UNAVAILABLE. If the given timeout
/// is already reached, the status is set to CL_COMM_TIMEOUT. In both
/// error cases `result`, `answer` and `answer_code` are still unset.
///
/// If the connection was successfully created the request is sent.
/// If the request ended with a timeout, `status` is set to
/// CL_COMM_TIMEOUT as above. If another communication error (broken
/// connection) happens, `status` is set to CL_COMM_BACKEND_UNAVAILABLE.
/// In both cases, `result` can be set or can still be a nullptr.
/// `answer` and `answer_code` are still unset.
///
/// If the request is completed, but an HTTP status code >= 400 occurred,
/// the status is set to CL_COMM_ERROR, but `result` is set correctly
/// to indicate the error. If all is well, `status` is set to CL_COMM_SENT.
///
/// In the `singleRequest==true` mode, the operation is finished at this
/// stage. The callback is called, and the result is either left in the
/// receiving queue or dropped. A call to ClusterComm::enquire or
/// ClusterComm::wait can return a result in this state. Note that
/// `answer` and `answer_code` are still not set. The flag
/// `sendWasComplete` is correctly set, though.
///
/// In the `singleRequest==false` mode, an asynchronous operation happens
/// at the server side and eventually, an HTTP request in the opposite
/// direction is issued. During that time, `status` remains CL_COMM_SENT.
/// A call to ClusterComm::enquire can return a result in this state.
/// A call to ClusterComm::wait does not.
///
/// If the answer does not arrive in the specified timeout, `status`
/// is set to CL_COMM_TIMEOUT and a potential callback is called.
/// From then on, ClusterComm::wait will return it (unless deleted
/// by the callback returning true).
///
/// If an answer arrives in time, then `answer` and `answer_code`
/// are finally set, and `status` is set to CL_COMM_RECEIVED. The callback
/// is called, and the result is either left in the received queue for
/// pickup by ClusterComm::wait or deleted. Note that if we get this
/// far, `status` is set to CL_COMM_RECEIVED, even if the status code
/// of the answer is >= 400.
///
/// Summing up, we have the following outcomes:
/// `status` `result` set `answer` set wait() returns
/// CL_COMM_SUBMITTED no no no
/// CL_COMM_SENDING no no no
/// CL_COMM_SENT yes no yes if single
/// CL_COMM_BACKEND_UN... yes or no no yes
/// CL_COMM_TIMEOUT yes or no no yes
/// CL_COMM_ERROR yes no yes
/// CL_COMM_RECEIVED yes yes yes
/// CL_COMM_DROPPED no no yes
///
/// The syncRequest behaves essentially in the same way, except that
/// no callback is ever called, the outcome cannot be CL_COMM_RECEIVED
/// or CL_COMM_DROPPED, and CL_COMM_SENT indicates a successful completion.
/// CL_COMM_ERROR means that the request was complete, but an HTTP error
/// occurred.
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////
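The "wait() returns" column of the outcome table above can be captured in code. The following self-contained sketch (hypothetical helper, not part of ClusterComm) encodes which statuses `wait()` will report for an asyncRequest:

```cpp
#include <cassert>

// Mirrors the documented ClusterCommOpStatus values.
enum Status {
  CL_COMM_SUBMITTED, CL_COMM_SENDING, CL_COMM_SENT,
  CL_COMM_BACKEND_UNAVAILABLE, CL_COMM_TIMEOUT,
  CL_COMM_ERROR, CL_COMM_RECEIVED, CL_COMM_DROPPED
};

// Encodes the "wait() returns" column of the overview above for the
// asyncRequest case; `single` is the singleRequest flag.
static bool waitReturns(Status s, bool single) {
  switch (s) {
    case CL_COMM_SUBMITTED:
    case CL_COMM_SENDING:
      return false;   // only enquire() can observe these states
    case CL_COMM_SENT:
      return single;  // a finished operation only in single mode
    default:
      return true;    // terminal states are delivered by wait()
  }
}
```

This is only a restatement of the documented table under the stated assumptions, useful as a sanity check when reading the state machine.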
struct ClusterCommResult { struct ClusterCommResult {
@@ -286,7 +366,7 @@ class ClusterComm {
/// @brief submit an HTTP request to a shard asynchronously. /// @brief submit an HTTP request to a shard asynchronously.
////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////
ClusterCommResult const asyncRequest( OperationID asyncRequest(
ClientTransactionID const clientTransactionID, ClientTransactionID const clientTransactionID,
CoordTransactionID const coordTransactionID, CoordTransactionID const coordTransactionID,
std::string const& destination, std::string const& destination,

View File

@@ -1536,6 +1536,10 @@ int ClusterInfo::ensureIndexCoordinator(
AgencyCommResult previous = ac.getValues(key); AgencyCommResult previous = ac.getValues(key);
if (!previous.successful()) {
return TRI_ERROR_CLUSTER_READING_PLAN_AGENCY;
}
velocypack::Slice collection = velocypack::Slice collection =
previous.slice()[0].get(std::vector<std::string>( previous.slice()[0].get(std::vector<std::string>(
{ AgencyComm::prefix(), "Plan", "Collections", { AgencyComm::prefix(), "Plan", "Collections",
@@ -1719,10 +1723,18 @@ int ClusterInfo::dropIndexCoordinator(std::string const& databaseName,
AgencyCommResult res = ac.getValues(key); AgencyCommResult res = ac.getValues(key);
if (!res.successful()) {
return TRI_ERROR_CLUSTER_READING_PLAN_AGENCY;
}
velocypack::Slice previous = velocypack::Slice previous =
res.slice()[0].get(std::vector<std::string>( res.slice()[0].get(std::vector<std::string>(
{ AgencyComm::prefix(), "Plan", "Collections", databaseName, collectionID } { AgencyComm::prefix(), "Plan", "Collections", databaseName, collectionID }
)); ));
if (!previous.isObject()) {
return TRI_ERROR_ARANGO_COLLECTION_NOT_FOUND;
}
TRI_ASSERT(VPackObjectIterator(previous).size()>0); TRI_ASSERT(VPackObjectIterator(previous).size()>0);
std::string where = std::string where =

View File

@@ -47,16 +47,22 @@ static double const CL_DEFAULT_TIMEOUT = 60.0;
namespace arangodb { namespace arangodb {
static int handleGeneralCommErrors(ClusterCommResult const* res) { static int handleGeneralCommErrors(ClusterCommResult const* res) {
// This function creates an error code from a ClusterCommResult.
// If TRI_ERROR_NO_ERROR is returned, then the result was CL_COMM_RECEIVED
// and .answer can safely be inspected.
if (res->status == CL_COMM_TIMEOUT) { if (res->status == CL_COMM_TIMEOUT) {
// No reply, we give up: // No reply, we give up:
return TRI_ERROR_CLUSTER_TIMEOUT; return TRI_ERROR_CLUSTER_TIMEOUT;
} else if (res->status == CL_COMM_ERROR) { } else if (res->status == CL_COMM_ERROR) {
// This could be a broken connection or an Http error: return TRI_ERROR_CLUSTER_CONNECTION_LOST;
if (res->result == nullptr || !res->result->isComplete()) { } else if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
// there is not result if (res->result == nullptr) {
return TRI_ERROR_CLUSTER_CONNECTION_LOST;
}
if (!res->result->isComplete()) {
// there is no result
return TRI_ERROR_CLUSTER_CONNECTION_LOST; return TRI_ERROR_CLUSTER_CONNECTION_LOST;
} }
} else if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
return TRI_ERROR_CLUSTER_BACKEND_UNAVAILABLE; return TRI_ERROR_CLUSTER_BACKEND_UNAVAILABLE;
} }
@@ -337,27 +343,26 @@ static void collectResultsFromAllShards(
responseCode = GeneralResponse::ResponseCode::SERVER_ERROR; responseCode = GeneralResponse::ResponseCode::SERVER_ERROR;
for (auto const& req : requests) { for (auto const& req : requests) {
auto res = req.result; auto res = req.result;
if (res.status == CL_COMM_RECEIVED) {
int commError = handleGeneralCommErrors(&res); int commError = handleGeneralCommErrors(&res);
if (commError != TRI_ERROR_NO_ERROR) { if (commError != TRI_ERROR_NO_ERROR) {
auto tmpBuilder = std::make_shared<VPackBuilder>(); auto tmpBuilder = std::make_shared<VPackBuilder>();
auto weSend = shardMap.find(res.shardID); auto weSend = shardMap.find(res.shardID);
TRI_ASSERT(weSend != shardMap.end()); // We sent something there earlier. TRI_ASSERT(weSend != shardMap.end()); // We sent something there earlier.
size_t count = weSend->second.size(); size_t count = weSend->second.size();
for (size_t i = 0; i < count; ++i) { for (size_t i = 0; i < count; ++i) {
tmpBuilder->openObject(); tmpBuilder->openObject();
tmpBuilder->add("error", VPackValue(true)); tmpBuilder->add("error", VPackValue(true));
tmpBuilder->add("errorNum", VPackValue(commError)); tmpBuilder->add("errorNum", VPackValue(commError));
tmpBuilder->close(); tmpBuilder->close();
}
resultMap.emplace(res.shardID, tmpBuilder);
} else {
TRI_ASSERT(res.answer != nullptr);
resultMap.emplace(res.shardID,
res.answer->toVelocyPack(&VPackOptions::Defaults));
extractErrorCodes(res, errorCounter, true);
responseCode = res.answer_code;
} }
resultMap.emplace(res.shardID, tmpBuilder);
} else {
TRI_ASSERT(res.answer != nullptr);
resultMap.emplace(res.shardID,
res.answer->toVelocyPack(&VPackOptions::Defaults));
extractErrorCodes(res, errorCounter, true);
responseCode = res.answer_code;
} }
} }
} }
@@ -791,8 +796,8 @@ int createDocumentOnCoordinator(
if (!useMultiple) { if (!useMultiple) {
TRI_ASSERT(requests.size() == 1); TRI_ASSERT(requests.size() == 1);
auto const& req = requests[0]; auto const& req = requests[0];
auto res = req.result; auto& res = req.result;
if (nrDone == 0) { if (nrDone == 0 || res.status != CL_COMM_RECEIVED) {
// There has been a communication error. Handle and return it. // There has been a communication error. Handle and return it.
return handleGeneralCommErrors(&res); return handleGeneralCommErrors(&res);
} }
@@ -949,8 +954,8 @@ int deleteDocumentOnCoordinator(
if (!useMultiple) { if (!useMultiple) {
TRI_ASSERT(requests.size() == 1); TRI_ASSERT(requests.size() == 1);
auto const& req = requests[0]; auto const& req = requests[0];
auto res = req.result; auto& res = req.result;
if (nrDone == 0) { if (nrDone == 0 || res.status != CL_COMM_RECEIVED) {
return handleGeneralCommErrors(&res); return handleGeneralCommErrors(&res);
} }
responseCode = res.answer_code; responseCode = res.answer_code;
@@ -1030,7 +1035,7 @@ int deleteDocumentOnCoordinator(
auto res = req.result; auto res = req.result;
int error = handleGeneralCommErrors(&res); int error = handleGeneralCommErrors(&res);
if (error != TRI_ERROR_NO_ERROR) { if (error != TRI_ERROR_NO_ERROR) {
// Local data structores are automatically freed // Local data structures are automatically freed
return error; return error;
} }
if (res.answer_code == GeneralResponse::ResponseCode::OK || if (res.answer_code == GeneralResponse::ResponseCode::OK ||
@@ -1562,12 +1567,11 @@ int getFilteredEdgesOnCoordinator(
int error = handleGeneralCommErrors(&res); int error = handleGeneralCommErrors(&res);
if (error != TRI_ERROR_NO_ERROR) { if (error != TRI_ERROR_NO_ERROR) {
cc->drop("", coordTransactionID, 0, ""); cc->drop("", coordTransactionID, 0, "");
if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED) {
return TRI_ERROR_INTERNAL;
}
return error; return error;
} }
if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED) {
cc->drop("", coordTransactionID, 0, "");
return TRI_ERROR_INTERNAL;
}
std::shared_ptr<VPackBuilder> shardResult = res.answer->toVelocyPack(&VPackOptions::Defaults); std::shared_ptr<VPackBuilder> shardResult = res.answer->toVelocyPack(&VPackOptions::Defaults);
if (shardResult == nullptr) { if (shardResult == nullptr) {

View File

@@ -1667,11 +1667,11 @@ static void JS_AsyncRequest(v8::FunctionCallbackInfo<v8::Value> const& args) {
*headerFields, clientTransactionID, *headerFields, clientTransactionID,
coordTransactionID, timeout, singleRequest); coordTransactionID, timeout, singleRequest);
ClusterCommResult const res = cc->asyncRequest( OperationID opId = cc->asyncRequest(
clientTransactionID, coordTransactionID, destination, reqType, path, body, clientTransactionID, coordTransactionID, destination, reqType, path, body,
headerFields, 0, timeout, singleRequest); headerFields, 0, timeout, singleRequest);
ClusterCommResult res = cc->enquire(opId);
if (res.status == CL_COMM_ERROR) { if (res.status == CL_COMM_BACKEND_UNAVAILABLE) {
TRI_V8_THROW_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL, TRI_V8_THROW_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL,
"couldn't queue async request"); "couldn't queue async request");
} }

View File

@@ -799,15 +799,16 @@ void RestReplicationHandler::handleTrampolineCoordinator() {
"timeout within cluster"); "timeout within cluster");
return; return;
} }
if (res->status == CL_COMM_BACKEND_UNAVAILABLE) {
// there is no result
generateError(GeneralResponse::ResponseCode::BAD,
TRI_ERROR_CLUSTER_CONNECTION_LOST,
"lost connection within cluster");
return;
}
if (res->status == CL_COMM_ERROR) { if (res->status == CL_COMM_ERROR) {
// This could be a broken connection or an Http error: // This could be a broken connection or an Http error:
if (res->result == nullptr || !res->result->isComplete()) { TRI_ASSERT(nullptr != res->result && res->result->isComplete());
// there is no result
generateError(GeneralResponse::ResponseCode::BAD,
TRI_ERROR_CLUSTER_CONNECTION_LOST,
"lost connection within cluster");
return;
}
// In this case a proper HTTP error was reported by the DBserver, // In this case a proper HTTP error was reported by the DBserver,
// we simply forward the result. // we simply forward the result.
// We intentionally fall through here. // We intentionally fall through here.

View File

@@ -123,6 +123,7 @@ void RestServerFeature::collectOptions(
"http.hide-product-header"); "http.hide-product-header");
options->addOldOption("server.keep-alive-timeout", "http.keep-alive-timeout"); options->addOldOption("server.keep-alive-timeout", "http.keep-alive-timeout");
options->addOldOption("server.default-api-compatibility", ""); options->addOldOption("server.default-api-compatibility", "");
options->addOldOption("no-server", "server.rest-server");
options->addOption("--server.authentication", options->addOption("--server.authentication",
"enable or disable authentication for ALL client requests", "enable or disable authentication for ALL client requests",

View File

@@ -1187,7 +1187,8 @@ static bool clusterSendToAllServers(
cc->drop("", coordTransactionID, 0, ""); cc->drop("", coordTransactionID, 0, "");
return TRI_ERROR_CLUSTER_TIMEOUT; return TRI_ERROR_CLUSTER_TIMEOUT;
} }
if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED) { if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED ||
res.status == CL_COMM_BACKEND_UNAVAILABLE) {
cc->drop("", coordTransactionID, 0, ""); cc->drop("", coordTransactionID, 0, "");
return TRI_ERROR_INTERNAL; return TRI_ERROR_INTERNAL;
} }

View File

@@ -69,18 +69,6 @@ class Slice;
#define TRI_COL_VERSION TRI_COL_VERSION_20 #define TRI_COL_VERSION TRI_COL_VERSION_20
////////////////////////////////////////////////////////////////////////////////
/// @brief predefined system collection name for transactions
////////////////////////////////////////////////////////////////////////////////
#define TRI_COL_NAME_TRANSACTION "_trx"
////////////////////////////////////////////////////////////////////////////////
/// @brief predefined system collection name for replication
////////////////////////////////////////////////////////////////////////////////
#define TRI_COL_NAME_REPLICATION "_replication"
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////
/// @brief predefined collection name for users /// @brief predefined collection name for users
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////

View File

@@ -840,7 +840,7 @@ int TRI_SaveConfigurationReplicationApplier(
std::shared_ptr<VPackBuilder> builder; std::shared_ptr<VPackBuilder> builder;
try { try {
builder = config->toVelocyPack(false); builder = config->toVelocyPack(true);
} catch (...) { } catch (...) {
return TRI_ERROR_OUT_OF_MEMORY; return TRI_ERROR_OUT_OF_MEMORY;
} }

View File

@@ -73,9 +73,7 @@ bool TRI_ExcludeCollectionReplication(char const* name, bool includeSystem) {
return true; return true;
} }
if (TRI_EqualString(name, TRI_COL_NAME_REPLICATION) || if (TRI_IsPrefixString(name, TRI_COL_NAME_STATISTICS) ||
TRI_EqualString(name, TRI_COL_NAME_TRANSACTION) ||
TRI_IsPrefixString(name, TRI_COL_NAME_STATISTICS) ||
TRI_EqualString(name, "_apps") || TRI_EqualString(name, "_apps") ||
TRI_EqualString(name, "_configuration") || TRI_EqualString(name, "_configuration") ||
TRI_EqualString(name, "_cluster_kickstarter_plans") || TRI_EqualString(name, "_cluster_kickstarter_plans") ||

View File

@@ -469,11 +469,13 @@ ConsoleFeature::Prompt ConsoleFeature::buildPrompt(ClientFeature* client) {
       if (c == 'E') {
         // replace protocol
         if (ep.find("tcp://") == 0) {
-          ep = ep.substr(6);
+          ep = ep.substr(strlen("tcp://"));
+        } else if (ep.find("http+tcp://") == 0) {
+          ep = ep.substr(strlen("http+tcp://"));
         } else if (ep.find("ssl://") == 0) {
-          ep = ep.substr(6);
+          ep = ep.substr(strlen("ssl://"));
         } else if (ep.find("unix://") == 0) {
-          ep = ep.substr(7);
+          ep = ep.substr(strlen("unix://"));
         }
       }

View File

@@ -491,7 +491,7 @@ function analyzeServerCrash(arangod, options, checkStr)
     statusExternal(arangod.monitor, true);
     analyzeCoreDumpWindows(arangod);
   } else {
-    fs.copyFile("bin/arangod", storeArangodPath);
+    fs.copyFile(ARANGOD_BIN, storeArangodPath);
     analyzeCoreDump(arangod, options, storeArangodPath, arangod.pid);
   }

View File

@@ -557,9 +557,9 @@
     }
   });
-  // updates the users model
+  // updates the users models
   addTask({
-    name: "updateUserModel",
+    name: "updateUserModels",
     description: "convert documents in _users collection to new format",
     system: DATABASE_SYSTEM,

View File

@@ -451,7 +451,7 @@ void WindowsServiceFeature::shutDownFailure () {
 /// @brief service control handler
 ////////////////////////////////////////////////////////////////////////////////
-static void WINAPI ServiceCtrl(DWORD dwCtrlCode) {
+void WINAPI ServiceCtrl(DWORD dwCtrlCode) {
   DWORD dwState = SERVICE_RUNNING;
   switch (dwCtrlCode) {

View File

@@ -25,10 +25,12 @@
 #include "ApplicationFeatures/ApplicationFeature.h"

+extern SERVICE_STATUS_HANDLE ServiceStatus;
+
 void SetServiceStatus(DWORD dwCurrentState, DWORD dwWin32ExitCode,
                       DWORD dwCheckPoint, DWORD dwWaitHint);

-extern SERVICE_STATUS_HANDLE ServiceStatus;
+void WINAPI ServiceCtrl(DWORD dwCtrlCode);

 namespace arangodb {
 class WindowsServiceFeature final : public application_features::ApplicationFeature {

View File

@@ -343,6 +343,7 @@ bool copyRecursive(std::string const& source, std::string const& target,
 bool copyDirectoryRecursive(std::string const& source,
                             std::string const& target, std::string& error) {
   bool rc = true;
+
 #ifdef TRI_HAVE_WIN32_LIST_FILES
   auto isSubDirectory = [](struct _finddata_t item) -> bool {
@@ -362,8 +363,8 @@ bool copyDirectoryRecursive(std::string const& source,
   do {
 #else
-  auto isSubDirectory = [](struct dirent* item) -> bool {
-    return isDirectory(item->d_name);
+  auto isSubDirectory = [](std::string const& name) -> bool {
+    return isDirectory(name);
   };
   struct dirent* d = (struct dirent*)TRI_Allocate(
@@ -397,7 +398,7 @@ bool copyDirectoryRecursive(std::string const& source,
     std::string src = source + TRI_DIR_SEPARATOR_STR + TRI_DIR_FN(oneItem);

     // Handle subdirectories:
-    if (isSubDirectory(oneItem)) {
+    if (isSubDirectory(src)) {
       long systemError;
       int rc = TRI_CreateDirectory(dst.c_str(), systemError, error);
       if (rc != TRI_ERROR_NO_ERROR) {
@@ -410,7 +411,7 @@ bool copyDirectoryRecursive(std::string const& source,
         break;
       }
 #ifndef _WIN32
-    } else if (isSymbolicLink(oneItem->d_name)) {
+    } else if (isSymbolicLink(src)) {
       if (!TRI_CopySymlink(src, dst, error)) {
         break;
       }
@@ -485,6 +486,7 @@ std::vector<std::string> listFiles(std::string const& directory) {
 bool isDirectory(std::string const& path) {
   TRI_stat_t stbuf;
   int res = TRI_STAT(path.c_str(), &stbuf);
+
 #ifdef _WIN32
   return (res == 0) && ((stbuf.st_mode & S_IFMT) == S_IFDIR);
 #else