
Merge branch 'devel' of github.com:triAGENS/ArangoDB into devel

Esteban Lombeyda 2014-06-06 10:55:11 +02:00
commit b848ff00b5
72 changed files with 2621 additions and 1233 deletions

View File

@@ -3,6 +3,11 @@ v2.2.0 (XXXX-XX-XX)
* added mountedApp function for foxx-manager
* fixed issue #883: arango 2.1 - when starting multi-machine cluster, UI web
does not change to cluster overview
* fixed dfdb: should not start any other V8 threads
* cleanup of version-check, added module org/arangodb/database-version,
added --check-version option

View File

@@ -6,7 +6,7 @@ Wherever an expression is allowed in AQL, a subquery can be placed. A subquery
is a query part that can introduce its own local variables without affecting
variables and values in its outer scope(s).
-It is required that subqueries be put inside parentheses `(` and `)` to
+It is required that subqueries be put inside parentheses *(* and *)* to
explicitly mark their start and end points:
FOR u IN users
@@ -32,16 +32,16 @@ Subqueries might also include other subqueries themselves.
!SUBSECTION Variable expansion
In order to access a named attribute from all elements in a list easily, AQL
-offers the shortcut operator `[*]` for variable expansion.
+offers the shortcut operator *[*]* for variable expansion.
-Using the `[*]` operator with a variable will iterate over all elements in the
+Using the *[*]* operator with a variable will iterate over all elements in the
variable thus allowing to access a particular attribute of each element. It is
-required that the expanded variable is a list. The result of the `[*]`
+required that the expanded variable is a list. The result of the *[*]*
operator is again a list.
FOR u IN users
RETURN { "user" : u, "friendNames" : u.friends[*].name }
-In the above example, the attribute `name` is accessed for each element in the
-list `u.friends`. The result is a flat list of friend names, made available as
-the attribute `friendNames`.
+In the above example, the attribute *name* is accessed for each element in the
+list *u.friends*. The result is a flat list of friend names, made available as
+the attribute *friendNames*.

View File

@@ -50,7 +50,7 @@ An example AQL query might look like this:
FILTER u.type == "newbie" && u.active == true
RETURN u.name
-In this example query, the terms `FOR`, `FILTER`, and `RETURN` initiate the
+In this example query, the terms *FOR*, *FILTER*, and *RETURN* initiate the
higher-level operation according to their name. These terms are also keywords,
meaning that they have a special meaning in the language.
@@ -102,7 +102,7 @@ allows using otherwise-reserved keywords as names. An example for this is:
FOR f IN `filter`
RETURN f.`sort`
-Due to the backticks, `filter` and `sort` are interpreted as names and not as
+Due to the backticks, *filter* and *sort* are interpreted as names and not as
keywords here.
!SUBSUBSECTION Collection names
@@ -128,8 +128,8 @@ attribute naming conventions.
FILTER u.active == true && f.active == true && u.id == f.userId
RETURN u.name
-In the above example, the attribute names `active`, `name`, `id`, and `userId`
-are qualified using the collection names they belong to (`u` and `f`
+In the above example, the attribute names *active*, *name*, *id*, and *userId*
+are qualified using the collection names they belong to (*u* and *f*
respectively).
!SUBSUBSECTION Variable names
@@ -143,12 +143,12 @@ collection name used in the same query.
LET friends = u.friends
RETURN { "name" : u.name, "friends" : friends }
-In the above query, `users` is a collection name, and both `u` and `friends` are
-variable names. This is because the `FOR` and `LET` operations need target
+In the above query, *users* is a collection name, and both *u* and *friends* are
+variable names. This is because the *FOR* and *LET* operations need target
variables to store their intermediate results.
-Allowed characters in variable names are the letters `a` to `z` (both in lower
-and upper case), the numbers `0` to `9` and the underscore (`_`) symbol. A
+Allowed characters in variable names are the letters *a* to *z* (both in lower
+and upper case), the numbers *0* to *9* and the underscore (*_*) symbol. A
variable name must not start with a number. If a variable name starts with the
underscore character, it must also contain at least one letter (a-z or A-Z).
@@ -159,7 +159,7 @@ available:
- Primitive types: Consisting of exactly one value
- null: An empty value, also: The absence of a value
-- bool: Boolean truth value with possible values `false` and `true`
+- bool: Boolean truth value with possible values *false* and *true*
- number: Signed (real) number
- string: UTF-8 encoded text value
- Compound types: Consisting of multiple values
@@ -169,7 +169,7 @@ available:
!SUBSUBSECTION Numeric literals
Numeric literals can be integers or real values. They can optionally be signed
-using the `+` or `-` symbols. The scientific notation is also supported.
+using the *+* or *-* symbols. The scientific notation is also supported.
1
42
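Further forms permitted by this syntax (illustrative values, showing signs, decimals, and scientific notation):

    -1
    1.23
    -99.99
    0.5e10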
@@ -217,22 +217,22 @@ The first supported compound type is the list type. Lists are effectively
sequences of (unnamed/anonymous) values. Individual list elements can be
accessed by their positions. The order of elements in a list is important.
-A `list-declaration` starts with the `[` symbol and ends with the `]` symbol. A
-`list-declaration` contains zero or many `expression`s, separated from each
-other with the `,` symbol.
+A *list-declaration* starts with the *[* symbol and ends with the *]* symbol. A
+*list-declaration* contains zero or many *expression*s, separated from each
+other with the *,* symbol.
In the easiest case, a list is empty and thus looks like:
[ ]
-List elements can be any legal `expression` values. Nesting of lists is
+List elements can be any legal *expression* values. Nesting of lists is
supported.
[ 1, 2, 3 ]
[ -99, "yikes!", [ true, [ "no"], [ ] ], 1 ]
[ [ "fox", "marshal" ] ]
-Individual list values can later be accesses by their positions using the `[]`
+Individual list values can later be accessed by their positions using the *[]*
accessor. The position of the accessed element must be a numeric
value. Positions start at 0. It is also possible to use negative index values
to access list values starting from the end of the list. This is convenient if
@@ -257,15 +257,15 @@ The other supported compound type is the document type. Documents are a
composition of zero to many attributes. Each attribute is a name/value pair.
Document attributes can be accessed individually by their names.
-Document declarations start with the `{` symbol and end with the `}` symbol. A
+Document declarations start with the *{* symbol and end with the *}* symbol. A
document contains zero to many attribute declarations, separated from each other
-with the `,` symbol. In the simplest case, a document is empty. Its
+with the *,* symbol. In the simplest case, a document is empty. Its
declaration would then be:
{ }
Each attribute in a document is a name/value pair. Name and value of an
-attribute are separated using the `:` symbol.
+attribute are separated using the *:* symbol.
The attribute name is mandatory and must be specified as a quoted or unquoted
string. If a keyword is to be used as an attribute name, the name must be
@@ -279,7 +279,7 @@ documents can be used as attribute values
{ "name" : "John", likes : [ "Swimming", "Skiing" ], "address" : { "street" : "Cucumber lane", "zip" : "94242" } }
Individual document attributes can later be accessed by their names using the
-`.` accessor. If a non-existing attribute is accessed, the result is `null`.
+*.* accessor. If a non-existing attribute is accessed, the result is *null*.
u.address.city.name
u.friends[0].name.first
@@ -296,7 +296,7 @@ query.
Using bind parameters, the meaning of an existing query cannot be changed. Bind
parameters can be used everywhere in a query where literals can be used.
-The syntax for bind parameters is `@nameparameter` where `nameparameter` is the
+The syntax for bind parameters is *@nameparameter* where *nameparameter* is the
actual parameter name. The bind parameter values need to be passed along with
the query when it is executed, but not as part of the query text itself. Please
refer to the @ref HttpCursorHttp manual section for information about how to
@@ -306,13 +306,13 @@ pass the bind parameter values to the server.
FILTER u.id == @id && u.name == @nameparameter
RETURN u
-Bind parameter names must start with any of the letters `a` to `z` (both in
-lower and upper case) or a digit (`0` to `9`), and can be followed by any
+Bind parameter names must start with any of the letters *a* to *z* (both in
+lower and upper case) or a digit (*0* to *9*), and can be followed by any
letter, digit or the underscore symbol.
A special type of bind parameter exists for injecting collection names. This
-type of bind parameter has a name prefixed with an additional `@` symbol (thus
-when using the bind parameter in a query, two `@` symbols must be used).
+type of bind parameter has a name prefixed with an additional *@* symbol (thus
+when using the bind parameter in a query, two *@* symbols must be used).
FOR u IN @@collection
FILTER u.active == true
@@ -331,14 +331,14 @@ The following type order is used when comparing data types:
null < bool < number < string < list < document
-This means `null` is the smallest type in AQL and `document` is the type with
+This means *null* is the smallest type in AQL and *document* is the type with
the highest order. If the compared operands have a different type, then the
comparison result is determined and the comparison is finished.
-For example, the boolean `true` value will always be less than any numeric or
+For example, the boolean *true* value will always be less than any numeric or
string value, any list (even an empty list) or any document. Additionally, any
string value (even an empty string) will always be greater than any numeric
-value, a boolean value, `true` or `false`.
+value, a boolean value, *true* or *false*.
null < false
null < true
@@ -386,14 +386,14 @@ If the two compared operands have the same data types, then the operands values
are compared. For the primitive types (null, boolean, number, and string), the
result is defined as follows:
-- null: `null` is equal to `null`
-- boolean: `false` is less than `true`
+- null: *null* is equal to *null*
+- boolean: *false* is less than *true*
- number: numeric values are ordered by their cardinal value
- string: string values are ordered using a localized comparison,
see @ref CommandLineDefaultLanguage "--default-language"
-Note: unlike in SQL, `null` can be compared to any value, including `null`
-itself, without the result being converted into `null` automatically.
+Note: unlike in SQL, *null* can be compared to any value, including *null*
+itself, without the result being converted into *null* automatically.
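For instance, a few illustrative comparisons that follow from the rules above:

    null == null    /* true */
    0 == null       /* false */
    0 > null        /* true, because null is less than any number */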
For compound types, the following special rules are applied:
@@ -402,7 +402,7 @@ position, starting at the first element. For each position, the element types
are compared first. If the types are not equal, the comparison result is
determined, and the comparison is finished. If the types are equal, then the
values of the two elements are compared. If one of the lists is finished and
-the other list still has an element at a compared position, then `null` will be
+the other list still has an element at a compared position, then *null* will be
used as the element value of the fully traversed list.
If a list element is itself a compound value (a list or a document), then the
@@ -425,7 +425,7 @@ in a document is not relevant when comparing two documents.
The combined and sorted list of attribute names is then traversed, and the
respective attributes from the two compared operands are then looked up. If one
of the documents does not have an attribute with the sought name, its attribute
-value is considered to be `null`. Finally, the attribute value of both
+value is considered to be *null*. Finally, the attribute value of both
documents is compared using the aforementioned data type and value comparison.
The comparisons are performed for all document attributes until there is an
unambiguous comparison result. If an unambiguous comparison result is found, the
@@ -446,9 +446,9 @@ compared documents are considered equal.
Collection data can be accessed by specifying a collection name in a query. A
collection can be understood as a list of documents, and that is how they are
treated in AQL. Documents from collections are normally accessed using the
-`FOR` keyword. Note that when iterating over documents from a collection, the
+*FOR* keyword. Note that when iterating over documents from a collection, the
order of documents is undefined. To traverse documents in an explicit and
-deterministic order, the `SORT` keyword should be used in addition.
+deterministic order, the *SORT* keyword should be used in addition.
Data in collections is stored in documents, with each document potentially
having different attributes than other documents. This is true even for
@@ -457,26 +457,26 @@ documents of the same collection.
It is therefore quite normal to encounter documents that do not have some or all
of the attributes that are queried in an AQL query. In this case, the
non-existing attributes in the document will be treated as if they would exist
-with a value of `null`. That means that comparing a document attribute to
-`null` will return true if the document has the particular attribute and the
-attribute has a value of `null`, or that the document does not have the
+with a value of *null*. That means that comparing a document attribute to
+*null* will return true if the document has the particular attribute and the
+attribute has a value of *null*, or that the document does not have the
particular attribute at all.
For example, the following query will return all documents from the collection
-`users` that have a value of `null` in the attribute `name`, plus all documents
-from `users` that do not have the `name` attribute at all:
+*users* that have a value of *null* in the attribute *name*, plus all documents
+from *users* that do not have the *name* attribute at all:
FOR u IN users
FILTER u.name == null
RETURN u
-Furthermore, `null` is less than any other value (excluding `null` itself). That
+Furthermore, *null* is less than any other value (excluding *null* itself). That
means documents with non-existing attributes might be included in the result
when comparing attribute values with the less than or less equal operators.
For example, the following query will return all documents from the collection
-`users` that have an attribute `age` with a value less than `39`, but also all
-documents from the collection that do not have the attribute `age` at all.
+*users* that have an attribute *age* with a value less than *39*, but also all
+documents from the collection that do not have the attribute *age* at all.
FOR u IN users
FILTER u.age < 39

View File

@@ -9,13 +9,13 @@ ArangoStatement object as follows:
arangosh> stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
[object ArangoStatement]
-To execute the query, use the `execute` method:
+To execute the query, use the *execute* method:
arangosh> c = stmt.execute();
[object ArangoQueryCursor]
This has executed the query. The query results are available in a cursor
-now. The cursor can return all its results at once using the `toArray` method.
+now. The cursor can return all its results at once using the *toArray* method.
This is a short-cut that you can use if you want to access the full result
set without iterating over it yourself.
@@ -23,7 +23,7 @@ set without iterating over it yourself.
[2, 4]
Cursors can also be used to iterate over the result set document-by-document.
-To do so, use the `hasNext` and `next` methods of the cursor:
+To do so, use the *hasNext* and *next* methods of the cursor:
arangosh> while (c.hasNext()) { require("internal").print(c.next()); }
2
@@ -57,7 +57,7 @@ or
2
4
-Please note that bind variables can also be passed into the `_createStatement` method directly,
+Please note that bind variables can also be passed into the *_createStatement* method directly,
making it a bit more convenient:
arangosh> stmt = db._createStatement( {
@@ -69,12 +69,12 @@ making it a bit more convenient:
} );
Cursors also optionally provide the total number of results. By default, they do not.
-To make the server return the total number of results, you may set the `count` attribute to
-`true` when creating a statement:
+To make the server return the total number of results, you may set the *count* attribute to
+*true* when creating a statement:
arangosh> stmt = db._createStatement( { "query": "FOR i IN [ 1, 2, 3, 4 ] RETURN i", "count": true } );
-After executing this query, you can use the `count` method of the cursor to get the
+After executing this query, you can use the *count* method of the cursor to get the
number of total results from the result set:
arangosh> c = stmt.execute();
@@ -82,18 +82,18 @@ number of total results from the result set:
arangosh> c.count();
4
-Please note that the `count` method returns nothing if you did not specify the `count`
+Please note that the *count* method returns nothing if you did not specify the *count*
attribute when creating the query.
This is intentional so that the server may apply optimizations when executing the query and
construct the result set incrementally. Incremental creation of the result sets would not be possible
if the total number of results needs to be shipped to the client anyway. Therefore, the client
-has the choice to specify `count` and retrieve the total number of results for a query (and
+has the choice to specify *count* and retrieve the total number of results for a query (and
disable potential incremental result set creation on the server), or to not retrieve the total
number of results and allow the server to apply optimizations.
Please note that at the moment the server will always create the full result set for each query so
-specifying or omitting the `count` attribute currently does not have any impact on query execution.
+specifying or omitting the *count* attribute currently does not have any impact on query execution.
This might change in the future. Future versions of ArangoDB might create result sets incrementally
on the server-side and might be able to apply optimizations if a result set is not fully fetched by
a client.

View File

@@ -3,29 +3,29 @@
!SUBSECTION FOR
-The `FOR` keyword can be to iterate over all elements of a list.
+The *FOR* keyword can be used to iterate over all elements of a list.
The general syntax is:
FOR variable-name IN expression
-Each list element returned by `expression` is visited exactly once. It is
-required that `expression` returns a list in all cases. The empty list is
+Each list element returned by *expression* is visited exactly once. It is
+required that *expression* returns a list in all cases. The empty list is
allowed, too. The current list element is made available for further processing
-in the variable specified by `variable-name`.
+in the variable specified by *variable-name*.
FOR u IN users
RETURN u
-This will iterate over all elements from the list `users` (note: this list
+This will iterate over all elements from the list *users* (note: this list
consists of all documents from the collection named "users" in this case) and
-make the current list element available in variable `u`. `u` is not modified in
-this example but simply pushed into the result using the `RETURN` keyword.
+make the current list element available in variable *u*. *u* is not modified in
+this example but simply pushed into the result using the *RETURN* keyword.
Note: When iterating over collection-based lists as shown here, the order of
-documents is undefined unless an explicit sort order is defined using a `SORT`
+documents is undefined unless an explicit sort order is defined using a *SORT*
statement.
-The variable introduced by `FOR` is available until the scope the `FOR` is
+The variable introduced by *FOR* is available until the scope the *FOR* is
placed in is closed.
Another example that uses a statically declared list of values to iterate over:
@@ -33,8 +33,8 @@ Another example that uses a statically declared list of values to iterate over:
FOR year IN [ 2011, 2012, 2013 ]
RETURN { "year" : year, "isLeapYear" : year % 4 == 0 && (year % 100 != 0 || year % 400 == 0) }
-Nesting of multiple `FOR` statements is allowed, too. When `FOR` statements are
-nested, a cross product of the list elements returned by the individual `FOR`
+Nesting of multiple *FOR* statements is allowed, too. When *FOR* statements are
+nested, a cross product of the list elements returned by the individual *FOR*
statements will be created.
FOR u IN users
@@ -42,23 +42,23 @@ statements will be created.
RETURN { "user" : u, "location" : l }
In this example, there are two list iterations: an outer iteration over the list
-`users` plus an inner iteration over the list `locations`. The inner list is
+*users* plus an inner iteration over the list *locations*. The inner list is
traversed as many times as there are elements in the outer list. For each
-iteration, the current values of `users` and `locations` are made available for
-further processing in the variable `u` and `l`.
+iteration, the current values of *users* and *locations* are made available for
+further processing in the variables *u* and *l*.
!SUBSECTION RETURN
-The `RETURN` statement can (and must) be used to produce the result of a query.
-It is mandatory to specify a `RETURN` statement at the end of each block in a
+The *RETURN* statement can (and must) be used to produce the result of a query.
+It is mandatory to specify a *RETURN* statement at the end of each block in a
query, otherwise the query result would be undefined.
-The general syntax for `return` is:
+The general syntax for *return* is:
RETURN expression
-The `expression` returned by `RETURN` is produced for each iteration the
-`RETURN` statement is placed in. That means the result of a `RETURN` statement
+The *expression* returned by *RETURN* is produced for each iteration the
+*RETURN* statement is placed in. That means the result of a *RETURN* statement
is always a list (this includes the empty list). To return all elements from
the currently iterated list without modification, the following simple form can
be used:
@@ -66,21 +66,21 @@ be used:
FOR variable-name IN expression
RETURN variable-name
-As `RETURN` allows specifying an expression, arbitrary computations can be
+As *RETURN* allows specifying an expression, arbitrary computations can be
performed to calculate the result elements. Any of the variables valid in the
-scope the `RETURN` is placed in can be used for the computations.
+scope the *RETURN* is placed in can be used for the computations.
Note: Return will close the current scope and eliminate all local variables in
it.
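As a sketch, a *RETURN* expression may combine any variables in scope (the collection and attribute names here are illustrative):

    FOR u IN users
      RETURN { "name" : u.name, "isAdult" : u.age >= 18 }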
!SUBSECTION FILTER
-The `FILTER` statement can be used to restrict the results to elements that
+The *FILTER* statement can be used to restrict the results to elements that
match an arbitrary logical condition. The general syntax is:
FILTER condition
-`condition` must be a condition that evaluates to either `false` or `true`. If
+*condition* must be a condition that evaluates to either *false* or *true*. If
the condition result is false, the current element is skipped, so it will not be
processed further and not be part of the result. If the condition is true, the
current element is not skipped and can be further processed.
@@ -89,13 +89,13 @@ current element is not skipped and can be further processed.
FILTER u.active == true && u.age < 39
RETURN u
-In the above example, all list elements from `users` will be included that have
-an attribute `active` with value `true` and that have an attribute `age` with a
-value less than `39`. All other elements from `users` will be skipped and not be
-included the result produced by `RETURN`.
+In the above example, all list elements from *users* will be included that have
+an attribute *active* with value *true* and that have an attribute *age* with a
+value less than *39*. All other elements from *users* will be skipped and not be
+included in the result produced by *RETURN*.
-It is allowed to specify multiple `FILTER` statements in a query, and even in
-the same block. If multiple `FILTER` statements are used, their results will be
+It is allowed to specify multiple *FILTER* statements in a query, and even in
+the same block. If multiple *FILTER* statements are used, their results will be
combined with a logical and, meaning all filter conditions must be true to
include an element.
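For instance, the following two *FILTER* statements behave like a single combined condition (an illustrative query):

    FOR u IN users
      FILTER u.active == true
      FILTER u.age < 39
      RETURN u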
@@ -106,19 +106,19 @@ include an element.
!SUBSECTION SORT
-The `SORT` statement will force a sort of the list of already produced
-intermediate results in the current block. `SORT` allows specifying one or
+The *SORT* statement will force a sort of the list of already produced
+intermediate results in the current block. *SORT* allows specifying one or
multiple sort criteria and directions. The general syntax is:
SORT expression direction
-Specifying the `direction` is optional. The default (implicit) direction for a
+Specifying the *direction* is optional. The default (implicit) direction for a
sort is the ascending order. To explicitly specify the sort direction, the
-keywords `ASC` (ascending) and `DESC` can be used. Multiple sort criteria can be
+keywords *ASC* (ascending) and *DESC* (descending) can be used. Multiple sort criteria can be
separated using commas.
Note: when iterating over collection-based lists, the order of documents is
-always undefined unless an explicit sort order is defined using `SORT`.
+always undefined unless an explicit sort order is defined using *SORT*.
FOR u IN users
SORT u.lastName, u.firstName, u.id DESC
@@ -126,19 +126,19 @@ always undefined unless an explicit sort order is defined using `SORT`.
!SUBSECTION LIMIT
-The `LIMIT` statement allows slicing the list of result documents using an
+The *LIMIT* statement allows slicing the list of result documents using an
offset and a count. It reduces the number of elements in the result to at most
-the specified number. Two general forms of `LIMIT` are followed:
+the specified number. Two general forms of *LIMIT* are allowed:
LIMIT count
LIMIT offset, count
-The first form allows specifying only the `count` value whereas the second form
-allows specifying both `offset` and `count`. The first form is identical using
-the second form with an `offset` value of `0`.
+The first form allows specifying only the *count* value whereas the second form
+allows specifying both *offset* and *count*. The first form is identical to using
+the second form with an *offset* value of *0*.
-The `offset` value specifies how many elements from the result shall be
-discarded. It must be 0 or greater. The `count` value specifies how many
+The *offset* value specifies how many elements from the result shall be
+discarded. It must be 0 or greater. The *count* value specifies how many
elements should be at most included in the result.
FOR u IN users
@@ -148,13 +148,13 @@ elements should be at most included in the result.
!SUBSECTION LET
-The `LET` statement can be used to assign an arbitrary value to a variable. The
-variable is then introduced in the scope the `LET` statement is placed in. The
+The *LET* statement can be used to assign an arbitrary value to a variable. The
+variable is then introduced in the scope the *LET* statement is placed in. The
general syntax is:
LET variable-name = expression
-`LET` statements are mostly used to declare complex computations and to avoid
+*LET* statements are mostly used to declare complex computations and to avoid
repeated computations of the same value at multiple parts of a query.
FOR u IN users
@@ -162,10 +162,10 @@ repeated computations of the same value at multiple parts of a query.
RETURN { "user" : u, "numRecommendations" : numRecommendations, "isPowerUser" : numRecommendations >= 10 }
In the above example, the computation of the number of recommendations is
-factored out using a `LET` statement, thus avoiding computing the value twice in
-the `RETURN` statement.
+factored out using a *LET* statement, thus avoiding computing the value twice in
+the *RETURN* statement.
-Another use case for `LET` is to declare a complex computation in a subquery,
+Another use case for *LET* is to declare a complex computation in a subquery,
making the whole query more readable.
FOR u IN users
@@ -183,42 +183,42 @@ making the whole query more readable.
!SUBSECTION COLLECT
-The `COLLECT` keyword can be used to group a list by one or multiple group
-criteria. The two general syntaxes for `COLLECT` are:
+The *COLLECT* keyword can be used to group a list by one or multiple group
+criteria. The two general syntaxes for *COLLECT* are:
COLLECT variable-name = expression
COLLECT variable-name = expression INTO groups
The first form only groups the result by the group criteria defined by
-`expression`. In order to further process the results produced by `COLLECT`, a
-new variable (specified by `variable-name`) is introduced. This variable
+*expression*. In order to further process the results produced by *COLLECT*, a
+new variable (specified by *variable-name*) is introduced. This variable
contains the group value.
The second form does the same as the first form, but additionally introduces a
-variable (specified by `groups`) that contains all elements that fell into the
-group. Specifying the `INTO` clause is optional-
+variable (specified by *groups*) that contains all elements that fell into the
+group. Specifying the *INTO* clause is optional.
FOR u IN users
COLLECT city = u.city INTO g
RETURN { "city" : city, "users" : g }
-In the above example, the list of `users` will be grouped by the attribute
-`city`. The result is a new list of documents, with one element per distinct
-`city` value. The elements from the original list (here: `users`) per city are
-made available in the variable `g`. This is due to the `INTO` clause.
+In the above example, the list of *users* will be grouped by the attribute
+*city*. The result is a new list of documents, with one element per distinct
+*city* value. The elements from the original list (here: *users*) per city are
+made available in the variable *g*. This is due to the *INTO* clause.
-`COLLECT` also allows specifying multiple group criteria. Individual group
+*COLLECT* also allows specifying multiple group criteria. Individual group
criteria can be separated by commas.
FOR u IN users
COLLECT first = u.firstName, age = u.age INTO g
RETURN { "first" : first, "age" : age, "numUsers" : LENGTH(g) }
-In the above example, the list of `users` is grouped by first names and ages
+In the above example, the list of *users* is grouped by first names and ages
first, and for each distinct combination of first name and age, the number of
users found is returned.
-Note: The `COLLECT` statement eliminates all local variables in the current
-scope. After `COLLECT` only the variables introduced by `COLLECT` itself are
+Note: The *COLLECT* statement eliminates all local variables in the current
+scope. After *COLLECT* only the variables introduced by *COLLECT* itself are
available.
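A sketch of this scoping rule (illustrative; note that *u* is no longer visible after the *COLLECT*):

    FOR u IN users
      COLLECT city = u.city INTO g
      /* only city and g are in scope here; u has been eliminated */
      RETURN { "city" : city, "numUsers" : LENGTH(g) }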

View File

@@ -10,19 +10,19 @@ any input data types, and will return a boolean result value.
The following comparison operators are supported:
-- `==` equality
-- `!=` inequality
-- `<` less than
-- `<=` less or equal
-- `>` greater than
-- `>=` greater or equal
-- `in` test if a value is contained in a list
+- *==* equality
+- *!=* inequality
+- *<* less than
+- *<=* less or equal
+- *>* greater than
+- *>=* greater or equal
+- *in* test if a value is contained in a list
-The `in` operator expects the second operand to be of type list. All other
+The *in* operator expects the second operand to be of type list. All other
operators accept any data types for the first and second operands.
Each of the comparison operators returns a boolean value if the comparison can
-be evaluated and returns `true` if the comparison evaluates to true, and `false`
+be evaluated and returns *true* if the comparison evaluates to true, and *false*
otherwise.
Some examples for comparison operations in AQL:
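A few illustrative comparisons (each expression evaluates to *true*):

    1 < 2
    "abc" == "abc"
    3 in [ 1, 2, 3 ]
    [ 1, 2 ] != [ 2, 1 ]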
@@ -42,9 +42,9 @@ a boolean result value.
The following logical operators are supported:
-- `&&` logical and operator
-- `||` logical or operator
-- `!` logical not/negation operator
+- *&&* logical and operator
+- *||* logical or operator
+- *!* logical not/negation operator
Some examples for logical operations in AQL:
@@ -52,12 +52,12 @@ Some examples for logical operations in AQL:
true || false
!u.isInvalid
-The `&&`, `||`, and `!` operators expect their input operands to be boolean
+The *&&*, *||*, and *!* operators expect their input operands to be boolean
values each. If a non-boolean operand is used, the operation will fail with an
error. In case all operands are valid, the result of each logical operator is a
boolean value.
-Both the `&&` and `||` operators use short-circuit evaluation and only evaluate
+Both the *&&* and *||* operators use short-circuit evaluation and only evaluate
the second operand if the result of the operation cannot be determined by
checking the first operand alone.
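A sketch of the short-circuit behavior, using AQL's *FAIL()* function (which raises an error when evaluated):

    RETURN true || FAIL("never evaluated")   /* returns true; FAIL() is not called */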
@@ -69,11 +69,11 @@ Operators are supported.
AQL supports the following arithmetic operators:
-- `+` addition
-- `-` subtraction
-- `*` multiplication
-- `/` division
-- `%` modulus
+- *+* addition
+- *-* subtraction
+- *** multiplication
+- */* division
+- *%* modulus
These operators work with numeric operands only. Invoking any of the operators
with non-numeric operands will result in an error. An error will also be raised
@@ -106,11 +106,11 @@ Example:
!SUBSUBSECTION Range operator
-AQL supports expressing simple numeric ranges with the `..` operator.
+AQL supports expressing simple numeric ranges with the *..* operator.
This operator can be used to easily iterate over a sequence of numeric
values.
-The `..` operator will produce a list of values in the defined range, with
+The *..* operator will produce a list of values in the defined range, with
both bounding values included.
Example:
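An illustrative range expression:

    2010..2013
    /* produces [ 2010, 2011, 2012, 2013 ] */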
@@ -125,22 +125,22 @@ will produce the following result:
The operator precedence in AQL is as follows (lowest precedence first):
-- `? :` ternary operator
-- `||` logical or
-- `&&` logical and
-- `==`, `!=` equality and inequality
-- `in` in operator
-- `<`, `<=`, `>=`, `>` less than, less equal,
+- *? :* ternary operator
+- *||* logical or
+- *&&* logical and
+- *==*, *!=* equality and inequality
+- *in* in operator
+- *<*, *<=*, *>=*, *>* less than, less equal,
greater equal, greater than
-- `+`, `-` addition, subtraction
-- `*`, `/`, `%` multiplication, division, modulus
-- `!`, `+`, `-` logical negation, unary plus, unary minus
-- `[*]` expansion
-- `()` function call
-- `.` member access
-- `[]` indexed value access
+- *+*, *-* addition, subtraction
+- ***, */*, *%* multiplication, division, modulus
+- *!*, *+*, *-* logical negation, unary plus, unary minus
+- *[*]* expansion
+- *()* function call
+- *.* member access
+- *[]* indexed value access
-The parentheses `(` and `)` can be used to enforce a different operator
+The parentheses *(* and *)* can be used to enforce a different operator
evaluation order.
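For instance, multiplication binds more tightly than addition, and parentheses can override this (two illustrative standalone queries):

    RETURN 2 + 3 * 4     /* 14 */
    RETURN (2 + 3) * 4   /* 20 */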
!SUBSECTION Functions
@@ -151,7 +151,7 @@ function call syntax is:
FUNCTIONNAME(arguments)
-where `FUNCTIONNAME` is the name of the function to be called, and `arguments`
+where *FUNCTIONNAME* is the name of the function to be called, and *arguments*
is a comma-separated list of function arguments. If a function does not need any
arguments, the argument list can be left empty. However, even if the argument
list is empty the parentheses around it are still mandatory to make function
@@ -164,7 +164,7 @@ Some example function calls:
COLLECTIONS()
In contrast to collection and variable names, function names are case-insensitive,
-i.e. `LENGTH(foo)` and `length(foo)` are equivalent.
+i.e. *LENGTH(foo)* and *length(foo)* are equivalent.
!SUBSUBSECTION Extending AQL
@@ -175,10 +175,10 @@ in a query.
Please refer to [Extending AQL](../ExtendingAql/README.md) for more details on this.
By default, any function used in an AQL query will be sought in the built-in
-function namespace `_aql`. This is the default namespace that contains all AQL
+function namespace *_aql*. This is the default namespace that contains all AQL
functions that are shipped with ArangoDB.
To refer to a user-defined AQL function, the function name must be fully qualified
-to also include the user-defined namespace. The `::` symbol is used as the namespace
+to also include the user-defined namespace. The *::* symbol is used as the namespace
separator:
MYGROUP::MYFUNC()
@@ -201,33 +201,33 @@ In an AQL query, type casts are performed only upon request and not implicitly.
This helps avoid unexpected results. All type casts have to be performed by
invoking a type cast function. AQL offers several type cast functions for this
task. Each of these functions takes an operand of any data type and returns
-a result value of type corresponding to the function name (e.g. `TO_NUMBER()`
+a result value of type corresponding to the function name (e.g. *TO_NUMBER()*
will return a number value):
- *TO_BOOL(value)*: Takes an input *value* of any type and converts it
into the appropriate boolean value as follows:
-- `null` is converted to `false`.
-- Numbers are converted to `true` if they are unequal to 0, and to `false` otherwise.
-- Strings are converted to `true` if they are non-empty, and to `false` otherwise.
-- Lists are converted to `true` if they are non-empty, and to `false` otherwise.
-- Documents are converted to `true` if they are non-empty, and to `false` otherwise.
+- *null* is converted to *false*.
+- Numbers are converted to *true* if they are unequal to 0, and to *false* otherwise.
+- Strings are converted to *true* if they are non-empty, and to *false* otherwise.
+- Lists are converted to *true* if they are non-empty, and to *false* otherwise.
+- Documents are converted to *true* if they are non-empty, and to *false* otherwise.
- *TO_NUMBER(value)*: Takes an input *value* of any type and converts it
into a numeric value as follows:
-- `null`, `false`, lists, and documents are converted to the value `0`.
-- `true` is converted to `1`.
+- *null*, *false*, lists, and documents are converted to the value *0*.
+- *true* is converted to *1*.
- Strings are converted to their numeric equivalent if the full string content
-is a valid number, and to `0` otherwise.
+is a valid number, and to *0* otherwise.
- *TO_STRING(value)*: Takes an input *value* of any type and converts it
into a string value as follows:
-- `null` is converted to the string `"null"`
-- `false` is converted to the string `"false"`, `true` to the string `"true"`
+- *null* is converted to the string *"null"*
+- *false* is converted to the string *"false"*, *true* to the string *"true"*
- Numbers, lists and documents are converted to their string equivalents.
- *TO_LIST(value)*: Takes an input *value* of any type and converts it
into a list value as follows:
-- `null` is converted to an empty list
+- *null* is converted to an empty list
- Boolean values, numbers and strings are converted to a list containing the original
value as its single element
- Documents are converted to a list containing their attribute values as list elements
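A few illustrative casts following the conversion rules above:

    TO_BOOL(0)         /* false */
    TO_BOOL("abc")     /* true */
    TO_NUMBER("42")    /* 42 */
    TO_NUMBER("42x")   /* 0 */
    TO_STRING(null)    /* "null" */
    TO_LIST("fox")     /* [ "fox" ] */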
@@ -241,28 +241,28 @@ checked for, and false otherwise.
The following type check functions are available:
-- *IS_NULL(value)*: Checks whether *value* is a `null` value
+- *IS_NULL(value)*: Checks whether *value* is a *null* value
-- *IS_BOOL(value)*: Checks whether *value* is a `boolean` value
+- *IS_BOOL(value)*: Checks whether *value* is a *boolean* value
-- *IS_NUMBER(value)*: Checks whether *value* is a `numeric` value
+- *IS_NUMBER(value)*: Checks whether *value* is a *numeric* value
-- *IS_STRING(value)*: Checks whether *value* is a `string` value
+- *IS_STRING(value)*: Checks whether *value* is a *string* value
-- *IS_LIST(value)*: Checks whether *value* is a `list` value
+- *IS_LIST(value)*: Checks whether *value* is a *list* value
-- *IS_DOCUMENT(value)*: Checks whether *value* is a `document` value
+- *IS_DOCUMENT(value)*: Checks whether *value* is a *document* value
!SUBSUBSECTION String functions
For string processing, AQL offers the following functions:
- *CONCAT(value1, value2, ... valuen)*: Concatenate the strings
-passed as in *value1* to *valuen*. `null` values are ignored
+passed in *value1* to *valuen*. *null* values are ignored
- *CONCAT_SEPARATOR(separator, value1, value2, ... valuen)*:
Concatenate the strings passed as arguments *value1* to *valuen* using the
-*separator* string. `null` values are ignored
+*separator* string. *null* values are ignored
- *CHAR_LENGTH(value)*: Return the number of characters in *value*. This is
a synonym for *LENGTH(value)*
@@ -292,21 +292,21 @@ For string processing, AQL offers the following functions:
- *CONTAINS(text, search, return-index)*: Checks whether the string
*search* is contained in the string *text*. By default, this function returns
-`true` if *search* is contained in *text*, and `false` otherwise. By
-passing `true` as the third function parameter *return-index*, the function
+*true* if *search* is contained in *text*, and *false* otherwise. By
+passing *true* as the third function parameter *return-index*, the function
will return the position of the first occurrence of *search* within *text*,
-starting at offset 0, or `-1` if *search* is not contained in *text*.
+starting at offset 0, or *-1* if *search* is not contained in *text*.
The string matching performed by *CONTAINS* is case-sensitive.
- *LIKE(text, search, case-insensitive)*: Checks whether the pattern
*search* is contained in the string *text*, using wildcard matching.
-Returns `true` if the pattern is contained in *text*, and `false` otherwise.
-The *pattern* string can contain the wildcard characters `%` (meaning any
-sequence of characters) and `_` (any single character).
+Returns *true* if the pattern is contained in *text*, and *false* otherwise.
+The *pattern* string can contain the wildcard characters *%* (meaning any
+sequence of characters) and *_* (any single character).
The string matching performed by *LIKE* is case-sensitive by default, but by
-passing `true` as the third parameter, the matching will be case-insensitive.
+passing *true* as the third parameter, the matching will be case-insensitive.
The value for *search* cannot be a variable or a document attribute. The actual
value must be present at query parse time already.
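Some illustrative calls and their results:

    CONTAINS("foobarbaz", "bar")         /* true */
    CONTAINS("foobarbaz", "bar", true)   /* 3 */
    LIKE("cart", "ca_t")                 /* true */
    LIKE("foo bar baz", "%bar%")         /* true */
    LIKE("FoO", "foo", true)             /* true (case-insensitive) */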
@@ -345,7 +345,7 @@ There are two date functions in AQL to create dates for further use:
All parameters after *day* are optional.
- *DATE_ISO8601(date)*: Returns an ISO8601 date time string from *date*.
-The date time string will always use UTC time, indicated by the `Z` at its end.
+The date time string will always use UTC time, indicated by the *Z* at its end.
- *DATE_ISO8601(year, month, day, hour, minute, second, millisecond)*:
same as before, but allows specifying the individual date components separately.
@@ -355,21 +355,21 @@ These two above date functions accept the following input values:
- numeric timestamps, indicating the number of milliseconds elapsed since the UNIX
epoch (i.e. January 1st 1970 00:00:00 UTC).
-An example timestamp value is `1399472349522`, which translates to
-`2014-05-07T14:19:09.522Z`.
+An example timestamp value is *1399472349522*, which translates to
+*2014-05-07T14:19:09.522Z*.
- date time strings in formats *YYYY-MM-DDTHH:MM:SS.MMM*,
*YYYY-MM-DD HH:MM:SS.MMM*, or *YYYY-MM-DD*. Milliseconds are always optional.
A timezone difference may optionally be added at the end of the string, with the
hours and minutes that need to be added or subtracted to the date time value.
-For example, `2014-05-07T14:19:09+01:00` can be used to specify a one hour offset,
-and `2014-05-07T14:19:09+07:30` can be specified for seven and half hours offset.
-Negative offsets are also possible. Alternatively to an offset, a `Z` can be used
+For example, *2014-05-07T14:19:09+01:00* can be used to specify a one hour offset,
+and *2014-05-07T14:19:09+07:30* can be specified for a seven and a half hour offset.
+Negative offsets are also possible. Alternatively to an offset, a *Z* can be used
to indicate UTC / Zulu time.
-An example value is `2014-05-07T14:19:09.522Z` meaning May 7th 2014, 14:19:09 and
+An example value is *2014-05-07T14:19:09.522Z* meaning May 7th 2014, 14:19:09 and
522 milliseconds, UTC / Zulu time. Another example value without time component is
-`2014-05-07Z`.
+*2014-05-07Z*.
Please note that if no timezone offset is specified in a datestring, ArangoDB will
assume UTC time automatically. This is done to ensure portability of queries across
@@ -385,12 +385,12 @@ These two above date functions accept the following input values:
- second
- millisecond
-All components following `day` are optional and can be omitted. Note that no
+All components following *day* are optional and can be omitted. Note that no
timezone offsets can be specified when using separate date components, and UTC /
Zulu time will be used.
-The following calls to `DATE_TIMESTAMP` are equivalent and will all return
-`1399472349522`:
+The following calls to *DATE_TIMESTAMP* are equivalent and will all return
+*1399472349522*:
DATE_TIMESTAMP("2014-05-07T14:19:09.522")
DATE_TIMESTAMP("2014-05-07T14:19:09.522Z")
@@ -399,7 +399,7 @@ The following calls to `DATE_TIMESTAMP` are equivalent and will all return
DATE_TIMESTAMP(2014, 5, 7, 14, 19, 9, 522)
DATE_TIMESTAMP(1399472349522)
-The same is true for calls to `DATE_ISO8601` that also accepts variable input
+The same is true for calls to *DATE_ISO8601* that also accepts variable input
formats:
DATE_ISO8601("2014-05-07T14:19:09.522Z")
@@ -407,10 +407,10 @@ formats:
DATE_ISO8601(2014, 5, 7, 14, 19, 9, 522)
DATE_ISO8601(1399472349522)
-The above functions are all equivalent and will return `"2014-05-07T14:19:09.522Z"`.
+The above functions are all equivalent and will return *"2014-05-07T14:19:09.522Z"*.
-The following date functions can be used with dates created by `DATE_TIMESTAMP` and
-`DATE_ISO8601`:
+The following date functions can be used with dates created by *DATE_TIMESTAMP* and
+*DATE_ISO8601*:
- *DATE_DAYOFWEEK(date)*: Returns the weekday number of *date*. The
return values have the following meanings:
@@ -472,63 +472,63 @@ AQL supports the following functions to operate on list values:
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
-- *MIN(list)*: Returns the smallest element of *list*. `null` values
-are ignored. If the list is empty or only `null` values are contained in the list, the
-function will return `null`.
+- *MIN(list)*: Returns the smallest element of *list*. *null* values
+are ignored. If the list is empty or only *null* values are contained in the list, the
+function will return *null*.
-- *MAX(list)*: Returns the greatest element of *list*. `null` values
-are ignored. If the list is empty or only `null` values are contained in the list, the
-function will return `null`.
+- *MAX(list)*: Returns the greatest element of *list*. *null* values
+are ignored. If the list is empty or only *null* values are contained in the list, the
+function will return *null*.
- *AVERAGE(list)*: Returns the average (arithmetic mean) of the values in *list*.
-This requires the elements in *list* to be numbers. `null` values are ignored.
-If the list is empty or only `null` values are contained in the list, the function
-will return `null`.
+This requires the elements in *list* to be numbers. *null* values are ignored.
+If the list is empty or only *null* values are contained in the list, the function
+will return *null*.
- *SUM(list)*: Returns the sum of the values in *list*. This
-requires the elements in *list* to be numbers. `null` values are ignored.
+requires the elements in *list* to be numbers. *null* values are ignored.
- *MEDIAN(list)*: Returns the median value of the values in *list*. This
-requires the elements in *list* to be numbers. `null` values are ignored. If the
-list is empty or only `null` values are contained in the list, the function will return
-`null`.
+requires the elements in *list* to be numbers. *null* values are ignored. If the
+list is empty or only *null* values are contained in the list, the function will return
+*null*.
- *VARIANCE_POPULATION(list)*: Returns the population variance of the values in
-*list*. This requires the elements in *list* to be numbers. `null` values
-are ignored. If the list is empty or only `null` values are contained in the list,
-the function will return `null`.
+*list*. This requires the elements in *list* to be numbers. *null* values
+are ignored. If the list is empty or only *null* values are contained in the list,
+the function will return *null*.
- *VARIANCE_SAMPLE(list)*: Returns the sample variance of the values in
-*list*. This requires the elements in *list* to be numbers. `null` values
-are ignored. If the list is empty or only `null` values are contained in the list,
-the function will return `null`.
+*list*. This requires the elements in *list* to be numbers. *null* values
+are ignored. If the list is empty or only *null* values are contained in the list,
+the function will return *null*.
- *STDDEV_POPULATION(list)*: Returns the population standard deviation of the
-values in *list*. This requires the elements in *list* to be numbers. `null`
-values are ignored. If the list is empty or only `null` values are contained in the list,
-the function will return `null`.
+values in *list*. This requires the elements in *list* to be numbers. *null*
+values are ignored. If the list is empty or only *null* values are contained in the list,
+the function will return *null*.
- *STDDEV_SAMPLE(list)*: Returns the sample standard deviation of the values in
-*list*. This requires the elements in *list* to be numbers. `null` values
-are ignored. If the list is empty or only `null` values are contained in the list,
-the function will return `null`.
+*list*. This requires the elements in *list* to be numbers. *null* values
+are ignored. If the list is empty or only *null* values are contained in the list,
+the function will return *null*.
- *REVERSE(list)*: Returns the elements in *list* in reversed order.
-- *FIRST(list)*: Returns the first element in *list* or `null` if the
+- *FIRST(list)*: Returns the first element in *list* or *null* if the
list is empty.
-- *LAST(list)*: Returns the last element in *list* or `null` if the
+- *LAST(list)*: Returns the last element in *list* or *null* if the
list is empty.
- *NTH(list, position)*: Returns the list element at position *position*.
Positions start at 0. If *position* is negative or beyond the upper bound of the list
-specified by *list*, then `null` will be returned.
+specified by *list*, then *null* will be returned.
- *POSITION(list, search, return-index)*: Returns the position of the
element *search* in list *list*. Positions start at 0. If the element is not
-found, then `-1` is returned. If *return-index* is `false`, then instead of the
-position only `true` or `false` are returned, depending on whether the sought element
+found, then *-1* is returned. If *return-index* is *false*, then instead of the
+position only *true* or *false* are returned, depending on whether the sought element
is contained in the list.
- *SLICE(list, start, length)*: Extracts a slice of the list specified
@@ -542,23 +542,23 @@ AQL supports the following functions to operate on list values:
SLICE([ 1, 2, 3, 4, 5 ], 0, 1)
-will return `[ 1 ]`
+will return *[ 1 ]*
SLICE([ 1, 2, 3, 4, 5 ], 1, 2)
-will return `[ 2, 3 ]`
+will return *[ 2, 3 ]*
SLICE([ 1, 2, 3, 4, 5 ], 3)
-will return `[ 4, 5 ]`
+will return *[ 4, 5 ]*
SLICE([ 1, 2, 3, 4, 5 ], 1, -1)
-will return `[ 2, 3, 4 ]`
+will return *[ 2, 3, 4 ]*
SLICE([ 1, 2, 3, 4, 5 ], 0, -2)
-will return `[ 1, 2, 3 ]`
+will return *[ 1, 2, 3 ]*
- *UNIQUE(list)*: Returns all unique elements in *list*. To determine
uniqueness, the function will use the comparison order defined in @ref AqlTypeOrder.
@@ -614,7 +614,7 @@ AQL supports the following functions to operate on list values:
Apart from these functions, AQL also offers several language constructs (e.g.
-`FOR`, `SORT`, `LIMIT`, `COLLECT`) to operate on lists.
+*FOR*, *SORT*, *LIMIT*, *COLLECT*) to operate on lists.
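A few illustrative calls based on the rules above (the explicit third argument to *POSITION* requests the index):

    MIN([ 3, null, 1 ])              /* 1, null values are ignored */
    MIN([ ])                         /* null */
    FIRST([ "a", "b" ])              /* "a" */
    POSITION([ 1, 2, 3 ], 2, true)   /* 1 */
    POSITION([ 1, 2, 3 ], 4, true)   /* -1 */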
!SUBSUBSECTION Document functions
@@ -622,11 +622,11 @@ AQL supports the following functions to operate on document values:
- *MATCHES(document, examples, return-index)*: Compares the document
*document* against each example document provided in the list *examples*.
-If *document* matches one of the examples, `true` is returned, and if there is
-no match `false` will be returned. The default return value type can be changed by
-passing `true` as the third function parameter *return-index*. Setting this
+If *document* matches one of the examples, *true* is returned, and if there is
+no match *false* will be returned. The default return value type can be changed by
+passing *true* as the third function parameter *return-index*. Setting this
flag will return the index of the example that matched (starting at offset 0), or
-`-1` if there was no match.
+*-1* if there was no match.
The comparisons will be started with the first example. All attributes of the example
will be compared against the attributes of *document*. If all attributes match, the
@@ -645,8 +645,8 @@ AQL supports the following functions to operate on document values:
{ "test" : 1 }
], true)
-This will return `2`, because the third example matches, and because the
-`return-index` flag is set to `true`.
+This will return *2*, because the third example matches, and because the
+*return-index* flag is set to *true*.
- *MERGE(document1, document2, ... documentn)*: Merges the documents
in *document1* to *documentn* into a single document. If document attribute
@@ -678,7 +678,7 @@ AQL supports the following functions to operate on document values:
]
Please note that merging will only be done for top-level attributes. If you wish to
-merge sub-attributes, you should consider using `MERGE_RECURSIVE` instead.
+merge sub-attributes, you should consider using *MERGE_RECURSIVE* instead.
- *MERGE_RECURSIVE(document1, document2, ... documentn)*: Recursively
merges the documents in *document1* to *documentn* into a single document. If
@@ -697,13 +697,13 @@ AQL supports the following functions to operate on document values:
]
-- *HAS(document, attributename)*: Returns `true` if *document* has an
-attribute named *attributename*, and `false` otherwise.
+- *HAS(document, attributename)*: Returns *true* if *document* has an
+attribute named *attributename*, and *false* otherwise.
- *ATTRIBUTES(document, removeInternal, sort)*: Returns the attribute
names of the document *document* as a list.
-If *removeInternal* is set to `true`, then all internal attributes (such as `_id`,
-`_key` etc.) are removed from the result. If *sort* is set to `true`, then the
+If *removeInternal* is set to *true*, then all internal attributes (such as *_id*,
+*_key* etc.) are removed from the result. If *sort* is set to *true*, then the
attribute names in the result will be sorted. Otherwise they will be returned in any order.
- *UNSET(document, attributename, ...)*: Removes the attributes *attributename*
- *UNSET(document, attributename, ...)*: Removes the attributes *attributename*
@@ -724,8 +724,8 @@ AQL supports the following functions to operate on document values:
*document-handle* and returns the handle's individual parts as separate attributes.
This function can be used to easily determine the collection name and key from a given document.
The *document-handle* can either be a regular document from a collection, or a document
-identifier string (e.g. `_users/1234`). Passing either a non-string or a non-document or a
-document without an `_id` attribute will result in an error.
+identifier string (e.g. *_users/1234*). Passing either a non-string or a non-document or a
+document without an *_id* attribute will result in an error.
RETURN PARSE_IDENTIFIER('_users/my-user')
@ -777,50 +777,50 @@ AQL offers the following functions to filter data based on fulltext indexes:
matches the fulltext query *query*.
*query* is a comma-separated list of sought words (or prefixes of sought words). To
distinguish between prefix searches and complete-match searches, each word can optionally be
prefixed with either the `prefix:` or `complete:` qualifier. Different qualifiers can
prefixed with either the *prefix:* or *complete:* qualifier. Different qualifiers can
be mixed in the same query. Not specifying a qualifier for a search word will implicitly
execute a complete-match search for the given word:
- `FULLTEXT(emails, "body", "banana")` Will look for the word `banana` in the
attribute `body` of the collection `collection`.
- *FULLTEXT(emails, "body", "banana")* Will look for the word *banana* in the
attribute *body* of the collection *collection*.
- `FULLTEXT(emails, "body", "banana,orange")` Will look for boths the words
`banana` and `orange` in the mentioned attribute. Only those documents will be
- *FULLTEXT(emails, "body", "banana,orange")* Will look for boths the words
*banana* and *orange* in the mentioned attribute. Only those documents will be
returned that contain both words.
- `FULLTEXT(emails, "body", "prefix:head")` Will look for documents that contain any
words starting with the prefix `head`.
- *FULLTEXT(emails, "body", "prefix:head")* Will look for documents that contain any
words starting with the prefix *head*.
- `FULLTEXT(emails, "body", "prefix:head,complete:aspirin")` Will look for all
documents that contain a word starting with the prefix `head` and that also contain
the (complete) word `aspirin`. Note: specifying `complete` is optional here.
- *FULLTEXT(emails, "body", "prefix:head,complete:aspirin")* Will look for all
documents that contain a word starting with the prefix *head* and that also contain
the (complete) word *aspirin*. Note: specifying *complete* is optional here.
- `FULLTEXT(emails, "body", "prefix:cent,prefix:subst")` Will look for all documents
that contain a word starting with the prefix `cent` and that also contain a word
starting with the prefix `subst`.
- *FULLTEXT(emails, "body", "prefix:cent,prefix:subst")* Will look for all documents
that contain a word starting with the prefix *cent* and that also contain a word
starting with the prefix *subst*.
If multiple search words (or prefixes) are given, then by default the results will be
AND-combined, meaning only the logical intersection of all searches will be returned.
It is also possible to combine partial results with a logical OR, and with a logical NOT:
- `FULLTEXT(emails, "body", "+this,+text,+document")` Will return all documents that
contain all the mentioned words. Note: specifying the `+` symbols is optional here.
- *FULLTEXT(emails, "body", "+this,+text,+document")* Will return all documents that
contain all the mentioned words. Note: specifying the *+* symbols is optional here.
- `FULLTEXT(emails, "body", "banana,|apple")` Will return all documents that contain
either (or both) words `banana` or `apple`.
- *FULLTEXT(emails, "body", "banana,|apple")* Will return all documents that contain
either (or both) words *banana* or *apple*.
- `FULLTEXT(emails, "body", "banana,-apple")` Will return all documents that contain
the word `banana` but do not contain the word `apple`.
- *FULLTEXT(emails, "body", "banana,-apple")* Will return all documents that contain
the word *banana* but do not contain the word *apple*.
- `FULLTEXT(emails, "body", "banana,pear,-cranberry")` Will return all documents that
contain both the words `banana` and `pear` but do not contain the word
`cranberry`.
- *FULLTEXT(emails, "body", "banana,pear,-cranberry")* Will return all documents that
contain both the words *banana* and *pear* but do not contain the word
*cranberry*.
No precedence of logical operators will be honored in a fulltext query. The query will simply
be evaluated from left to right.
Note: the `FULLTEXT` function requires the collection *collection* to have a
fulltext index on `attribute`. If no fulltext index is available, this function
Note: the *FULLTEXT* function requires the collection *collection* to have a
fulltext index on *attribute*. If no fulltext index is available, this function
will fail with an error.
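As a sketch, a fulltext call is typically used as the source of a *FOR* loop, assuming the collection *emails* has a fulltext index on the *body* attribute:

    FOR doc IN FULLTEXT(emails, "body", "banana,-apple")
      RETURN doc._id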
!SUBSUBSECTION Graph functions
@ -832,19 +832,19 @@ AQL has the following functions to traverse graphs:
*vertexcollection* and edges in the collection *edgecollection*. For each vertex
in *vertexcollection*, it will determine the paths through the graph depending on the
value of *direction*:
- `"outbound"`: Follow all paths that start at the current vertex and lead to another vertex
- `"inbound"`: Follow all paths that lead from another vertex to the current vertex
- `"any"`: Combination of `"outbound"` and `"inbound"`
The default value for *direction* is `"outbound"`.
- *"outbound"*: Follow all paths that start at the current vertex and lead to another vertex
- *"inbound"*: Follow all paths that lead from another vertex to the current vertex
- *"any"*: Combination of *"outbound"* and *"inbound"*
The default value for *direction* is *"outbound"*.
If *followcycles* is true, cyclic paths will be followed as well. This is turned off by
default.
The result of the function is a list of paths. Paths of length 0 will also be returned. Each
path is a document consisting of the following attributes:
- `vertices`: list of vertices visited along the path
- `edges`: list of edges visited along the path (might be empty)
- `source`: start vertex of path
- `destination`: destination vertex of path
- *vertices*: list of vertices visited along the path
- *edges*: list of edges visited along the path (might be empty)
- *source*: start vertex of path
- *destination*: destination vertex of path
Example calls:
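A sketch of a typical call (the collections *users* and *relations* are illustrative):

    FOR p IN PATHS(users, relations, "outbound", false)
      FILTER LENGTH(p.edges) <= 2
      RETURN p.vertices[*].name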
@ -858,87 +858,87 @@ AQL has the following functions to traverse graphs:
Traverses the graph described by *vertexcollection* and *edgecollection*,
starting at the vertex identified by id *startVertex*. Vertex connectivity is
specified by the *direction* parameter:
- `"outbound"`: Vertices are connected in `_from` to `_to` order
- `"inbound"`: Vertices are connected in `_to` to `_from` order
- `"any"`: Vertices are connected in both `_to` to `_from` and in
`_from` to `_to` order
- *"outbound"*: Vertices are connected in *_from* to *_to* order
- *"inbound"*: Vertices are connected in *_to* to *_from* order
- *"any"*: Vertices are connected in both *_to* to *_from* and in
*_from* to *_to* order
Additional options for the traversal can be provided via the *options* document:
- `strategy`: Defines the traversal strategy. Possible values are `depthfirst`
and `breadthfirst`. Defaults to `depthfirst`
- `order`: Defines the traversal order: Possible values are `preorder` and
`postorder`. Defaults to `preorder`
- `itemOrder`: Defines the level item order. Can be `forward` or
`backward`. Defaults to `forward`
- `minDepth`: Minimum path depths for vertices to be included. This can be used to
- *strategy*: Defines the traversal strategy. Possible values are *depthfirst*
and *breadthfirst*. Defaults to *depthfirst*
- *order*: Defines the traversal order: Possible values are *preorder* and
*postorder*. Defaults to *preorder*
- *itemOrder*: Defines the level item order. Can be *forward* or
*backward*. Defaults to *forward*
- *minDepth*: Minimum path depths for vertices to be included. This can be used to
include only vertices in the result that are found after a certain minimum depth.
Defaults to 0
- `maxIterations`: Maximum number of iterations in each traversal. This number can be
- *maxIterations*: Maximum number of iterations in each traversal. This number can be
set to prevent endless loops in traversal of cyclic graphs. When a traversal performs
as many iterations as the `maxIterations` value, the traversal will abort with an
error. If `maxIterations` is not set, a server-defined value may be used
- `maxDepth`: Maximum path depth for sub-edges expansion. This can be used to
as many iterations as the *maxIterations* value, the traversal will abort with an
error. If *maxIterations* is not set, a server-defined value may be used
- *maxDepth*: Maximum path depth for sub-edges expansion. This can be used to
limit the depth of the traversal to a sensible amount. This should especially be used
for big graphs to limit the traversal to some sensible amount, and for graphs
containing cycles to prevent infinite traversals. The maximum depth defaults to 256,
with the chance of this value being non-sensical. For several graphs, a much lower
maximum depth is sensible, whereas for other, more list-oriented graphs a higher
depth should be used
- `paths`: If `true`, the paths encountered during the traversal will
also be returned along with each traversed vertex. If `false`, only the
- *paths*: If *true*, the paths encountered during the traversal will
also be returned along with each traversed vertex. If *false*, only the
encountered vertices will be returned.
- `uniqueness`: An optional document with the following attributes:
- `vertices`:
- `none`: No vertex uniqueness is enforced
- `global`: A vertex may be visited at most once. This is the default.
- `path`: A vertex is visited only if not already contained in the current
- *uniqueness*: An optional document with the following attributes:
- *vertices*:
- *none*: No vertex uniqueness is enforced
- *global*: A vertex may be visited at most once. This is the default.
- *path*: A vertex is visited only if not already contained in the current
traversal path
- `edges`:
- `none`: No edge uniqueness is enforced
- `global`: An edge may be visited at most once. This is the default
- `path`: An edge is visited only if not already contained in the current
- *edges*:
- *none*: No edge uniqueness is enforced
- *global*: An edge may be visited at most once. This is the default
- *path*: An edge is visited only if not already contained in the current
traversal path
- `followEdges`: An optional list of example edge documents that the traversal will
- *followEdges*: An optional list of example edge documents that the traversal will
expand into. If no examples are given, the traversal will follow all edges. If one
or many edge examples are given, the traversal will only follow an edge if it matches
at least one of the specified examples. `followEdges` can also be a string with the
at least one of the specified examples. *followEdges* can also be a string with the
name of an AQL user-defined function that should be responsible for checking if an
edge should be followed. In this case, the AQL function is expected to have the
following signature:
function (config, vertex, edge, path)
The function is expected to return a boolean value. If it returns `true`, the edge
will be followed. If `false` is returned, the edge will be ignored.
The function is expected to return a boolean value. If it returns *true*, the edge
will be followed. If *false* is returned, the edge will be ignored.
- `filterVertices`: An optional list of example vertex documents that the traversal will
- *filterVertices*: An optional list of example vertex documents that the traversal will
treat specially. If no examples are given, the traversal will handle all encountered
vertices equally. If one or many vertex examples are given, the traversal will exclude
any non-matching vertex from the result and/or not descend into it. Optionally,
`filterVertices` can contain the name of a user-defined AQL function that should be responsible
*filterVertices* can contain the name of a user-defined AQL function that should be responsible
for filtering. If so, the AQL function is expected to have the following signature:
function (config, vertex, path)
If a custom AQL function is used, it is expected to return one of the following values:
- `[ ]`: Include the vertex in the result and descend into its connected edges
- `[ "prune" ]`: Will include the vertex in the result but not descend into its connected edges
- `[ "exclude" ]`: Will not include the vertex in the result but descend into its connected edges
- `[ "prune", "exclude" ]`: Will completely ignore the vertex and its connected edges
- *[ ]*: Include the vertex in the result and descend into its connected edges
- *[ "prune" ]*: Will include the vertex in the result but not descend into its connected edges
- *[ "exclude" ]*: Will not include the vertex in the result but descend into its connected edges
- *[ "prune", "exclude" ]*: Will completely ignore the vertex and its connected edges
- `vertexFilterMethod`: Only useful in conjunction with `filterVertices` and if no user-defined
- *vertexFilterMethod*: Only useful in conjunction with *filterVertices* and if no user-defined
AQL function is used. If specified, it will influence how vertices are handled that don't match
the examples in `filterVertices`:
- `[ "prune" ]`: Will include non-matching vertices in the result but not descend into them
- `[ "exclude" ]`: Will not include non-matching vertices in the result but descend into them
- `[ "prune", "exclude" ]`: Will neither include non-matching vertices in the result nor descend into them
the examples in *filterVertices*:
- *[ "prune" ]*: Will include non-matching vertices in the result but not descend into them
- *[ "exclude" ]*: Will not include non-matching vertices in the result but descend into them
- *[ "prune", "exclude" ]*: Will neither include non-matching vertices in the result nor descend into them
The result of the TRAVERSAL function is a list of traversed points. Each point is a
document consisting of the following attributes:
- `vertex`: The vertex at the traversal point
- `path`: The path history for the traversal point. The path is a document with the
attributes `vertices` and `edges`, which are both lists. Note that `path` is only present
in the result if the `paths` attribute is set in the @FA{options}
- *vertex*: The vertex at the traversal point
- *path*: The path history for the traversal point. The path is a document with the
attributes *vertices* and *edges*, which are both lists. Note that *path* is only present
in the result if the *paths* attribute is set in the *options*
Example calls:
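A sketch combining a few of the options described above (collection and vertex names are illustrative):

    TRAVERSAL(friends, friendrelations, "friends/john", "outbound", {
      strategy : "depthfirst",
      order : "postorder",
      paths : true
    })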
@ -997,9 +997,9 @@ AQL has the following functions to traverse graphs:
the *connectName* parameter. Connected vertices will be placed in this attribute as a
list.
The *options* are the same as for the `TRAVERSAL` function, except that the result will
The *options* are the same as for the *TRAVERSAL* function, except that the result will
be set up in a way that resembles a depth-first, pre-order visitation result. Thus, the
`strategy` and `order` attributes of the *options* attribute will be ignored.
*strategy* and *order* attributes of the *options* attribute will be ignored.
Example calls:
@ -1019,22 +1019,22 @@ time and memory for the result set.
Both vertices must be present in the vertex collection specified in *vertexcollection*,
and any connecting edges must be present in the collection specified by *edgecollection*.
Vertex connectivity is specified by the *direction* parameter:
- `"outbound"`: Vertices are connected in `_from` to `_to` order
- `"inbound"`: Vertices are connected in `_to` to `_from` order
- `"any"`: Vertices are connected in both `_to` to `_from` and in
`_from` to `_to` order
- *"outbound"*: Vertices are connected in *_from* to *_to* order
- *"inbound"*: Vertices are connected in *_to* to *_from* order
- *"any"*: Vertices are connected in both *_to* to *_from* and in
*_from* to *_to* order
The search is aborted when a shortest path is found. Only the first shortest path will be
returned. Any vertex will be visited at most once by the search.
Additional options for the traversal can be provided via the *options* document:
- `maxIterations`: Maximum number of iterations in the search. This number can be
- *maxIterations*: Maximum number of iterations in the search. This number can be
set to bound long-running searches. When a search performs as many iterations as the
`maxIterations` value, the search will abort with an error. If `maxIterations` is not
*maxIterations* value, the search will abort with an error. If *maxIterations* is not
set, a server-defined value may be used.
- `paths`: If `true`, the result will not only contain the vertices along the shortest
path, but also the connecting edges. If `false`, only the encountered vertices will
- *paths*: If *true*, the result will not only contain the vertices along the shortest
path, but also the connecting edges. If *false*, only the encountered vertices will
be returned.
- `distance`: An optional custom function to be used when calculating the distance
- *distance*: An optional custom function to be used when calculating the distance
between a vertex and a neighboring vertex. The expected function signature is:
function (config, vertex1, vertex2, edge)
@ -1047,40 +1047,40 @@ time and memory for the result set.
same distance (1) to each other. If a function name is specified, it must have been
registered as a regular user-defined AQL function.
- `followEdges`: An optional list of example edge documents that the search will
- *followEdges*: An optional list of example edge documents that the search will
expand into. If no examples are given, the search will follow all edges. If one
or many edge examples are given, the search will only follow an edge if it matches
at least one of the specified examples. `followEdges` can also be a string with the
at least one of the specified examples. *followEdges* can also be a string with the
name of an AQL user-defined function that should be responsible for checking if an
edge should be followed. In this case, the AQL function is expected to have the
following signature:
function (config, vertex, edge, path)
The function is expected to return a boolean value. If it returns `true`, the edge
will be followed. If `false` is returned, the edge will be ignored.
The function is expected to return a boolean value. If it returns *true*, the edge
will be followed. If *false* is returned, the edge will be ignored.
- `filterVertices`: An optional list of example vertex documents that the search will
- *filterVertices*: An optional list of example vertex documents that the search will
treat specially. If no examples are given, the search will handle all encountered
vertices equally. If one or many vertex examples are given, the search will exclude
the vertex from the result and/or not descend into it. Optionally, `filterVertices` can
the vertex from the result and/or not descend into it. Optionally, *filterVertices* can
contain the name of a user-defined AQL function that should be responsible for filtering.
If so, the AQL function is expected to have the following signature:
function (config, vertex, path)
If a custom AQL function is used, it is expected to return one of the following values:
- `[ ]`: Include the vertex in the result and descend into its connected edges
- `[ "prune" ]`: Will include the vertex in the result but not descend into its connected edges
- `[ "exclude" ]`: Will not include the vertex in the result but descend into its connected edges
- `[ "prune", "exclude" ]`: Will completely ignore the vertex and its connected edges
- *[ ]*: Include the vertex in the result and descend into its connected edges
- *[ "prune" ]*: Will include the vertex in the result but not descend into its connected edges
- *[ "exclude" ]*: Will not include the vertex in the result but descend into its connected edges
- *[ "prune", "exclude" ]*: Will completely ignore the vertex and its connected edges
The result of the SHORTEST_PATH function is a list with the components of the shortest
path. Each component is a document consisting of the following attributes:
- `vertex`: The vertex at the traversal point
- `path`: The path history for the traversal point. The path is a document with the
attributes `vertices` and `edges`, which are both lists. Note that `path` is only present
in the result if the `paths` attribute is set in the @FA{options}.
- *vertex*: The vertex at the traversal point
- *path*: The path history for the traversal point. The path is a document with the
attributes *vertices* and *edges*, which are both lists. Note that *path* is only present
in the result if the *paths* attribute is set in the *options*.
Example calls:
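A sketch (collection and vertex names are illustrative):

    SHORTEST_PATH(cities, motorways, "cities/CGN", "cities/MUC", "outbound", {
      paths : true
    })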
@ -1117,9 +1117,9 @@ time and memory for the result set.
- *EDGES(edgecollection, startvertex, direction, edgeexamples)*:
Return all edges connected to the vertex *startvertex* as a list. The possible values for
*direction* are:
- `outbound`: Return all outbound edges
- `inbound`: Return all inbound edges
- `any`: Return outbound and inbound edges
- *outbound*: Return all outbound edges
- *inbound*: Return all inbound edges
- *any*: Return outbound and inbound edges
The *edgeexamples* parameter can optionally be used to restrict the results to specific
edge connections only. The matching is then done via the *MATCHES* function.
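A sketch of a plain call and one restricted by an edge example (names and the *$label* attribute are illustrative):

    EDGES(friendrelations, "friends/john", "outbound")
    EDGES(friendrelations, "friends/john", "any", [ { "$label" : "knows" } ])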
@ -1134,9 +1134,9 @@ time and memory for the result set.
- *NEIGHBORS(vertexcollection, edgecollection, startvertex, direction, edgeexamples)*:
Return all neighbors that are directly connected to the vertex *startvertex* as a list.
The possible values for *direction* are:
- `outbound`: Return all outbound edges
- `inbound`: Return all inbound edges
- `any`: Return outbound and inbound edges
- *outbound*: Return all outbound edges
- *inbound*: Return all inbound edges
- *any*: Return outbound and inbound edges
The *edgeexamples* parameter can optionally be used to restrict the results to specific
edge connections only. The matching is then done via the *MATCHES* function.
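A sketch (collection and vertex names are illustrative):

    NEIGHBORS(friends, friendrelations, "friends/john", "outbound")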
@ -1152,14 +1152,14 @@ time and memory for the result set.
AQL offers the following functions to let the user control the flow of operations:
- *NOT_NULL(alternative, ...)*: Returns the first alternative that is not `null`,
and `null` if all alternatives are `null` themselves
- *NOT_NULL(alternative, ...)*: Returns the first alternative that is not *null*,
and *null* if all alternatives are *null* themselves
- *FIRST_LIST(alternative, ...)*: Returns the first alternative that is a list, and
`null` if none of the alternatives is a list
*null* if none of the alternatives is a list
- *FIRST_DOCUMENT(alternative, ...)*: Returns the first alternative that is a document,
and `null` if none of the alternatives is a document
and *null* if none of the alternatives is a document
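A sketch exercising all three fallbacks at once:

    RETURN [ NOT_NULL(null, "fallback"), FIRST_LIST(null, [ 1, 2 ]), FIRST_DOCUMENT([ ], { "a" : 1 }) ]

    [
      [ "fallback", [ 1, 2 ], { "a" : 1 } ]
    ]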
!SUBSUBSECTION Miscellaneous functions
@ -1167,19 +1167,19 @@ Finally, AQL supports the following functions that do not belong to any of the o
function categories:
- *COLLECTIONS()*: Returns a list of collections. Each collection is returned as a document
with attributes `name` and `_id`
with attributes *name* and *_id*
- *CURRENT_USER()*: Returns the name of the current user. The current user is the user
account name that was specified in the `Authorization` HTTP header of the request. It will
account name that was specified in the *Authorization* HTTP header of the request. It will
only be populated if authentication on the server is turned on, and if the query was executed
inside a request context. Otherwise, the return value of this function will be `null`.
inside a request context. Otherwise, the return value of this function will be *null*.
- *DOCUMENT(collection, id)*: Returns the document which is uniquely identified by
the *id*. ArangoDB will try to find the document using the `_id` value of the document
the *id*. ArangoDB will try to find the document using the *_id* value of the document
in the specified collection. If there is a mismatch between the *collection* passed and
the collection specified in *id*, then `null` will be returned. Additionally, if the
the collection specified in *id*, then *null* will be returned. Additionally, if the
*collection* matches the collection value specified in *id* but the document cannot be
found, `null` will be returned. This function also allows *id* to be a list of ids.
found, *null* will be returned. This function also allows *id* to be a list of ids.
In this case, the function will return a list of all documents that could be found.
Examples:
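For instance, assuming a collection *users* containing documents with the keys *john* and *amy* (both illustrative):

    RETURN DOCUMENT(users, "users/john")

    RETURN DOCUMENT(users, [ "users/john", "users/amy" ])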
View File
@ -60,7 +60,7 @@ queries might use data from collections that might also be inhomogeneous. Some
examples that will cause run-time errors are:
- Division by zero: Will be triggered when an attempt is made to use the value
`0` as the divisor in an arithmetic division or modulus operation
*0* as the divisor in an arithmetic division or modulus operation
- Invalid operands for arithmetic operations: Will be triggered when an attempt
is made to use any non-numeric values as operands in arithmetic operations.
This includes unary (unary minus, unary plus) and binary operations (plus,
View File
@ -1,6 +1,6 @@
!CHAPTER Conventions
The `::` symbol is used inside AQL as the namespace separator. Using
The *::* symbol is used inside AQL as the namespace separator. Using
the namespace separator, users can create a multi-level hierarchy of
function groups if required.
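For example, a fully-qualified call to a (hypothetical) user-defined function in a two-level namespace could look like this:

    RETURN myfunctions::temperature::celsiustofahrenheit(20)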
@ -13,9 +13,9 @@ Examples:
Note: As with all function names in AQL, user function names are also
case-insensitive.
Built-in AQL functions reside in the namespace `_aql`, which is also
Built-in AQL functions reside in the namespace *_aql*, which is also
the default namespace to look in if an unqualified function name is
found. Adding user functions to the `_aql` namespace is disallowed and
found. Adding user functions to the *_aql* namespace is disallowed and
will fail.
User functions can take any number of input arguments and should
@ -30,13 +30,13 @@ that existed at the time of declaration. If user function code requires
access to any external data, it must take care to set up the data by
itself.
User functions must only return primitive types (i.e. `null`, boolean
User functions must only return primitive types (i.e. *null*, boolean
values, numeric values, string values) or aggregate types (lists or
documents) composed of these types.
Returning any other JavaScript object type from a user function may lead
to undefined behavior and should be avoided.
Internally, user functions are stored in a system collection named
`_aqlfunctions`. That means that by default they are excluded from dumps
*_aqlfunctions*. That means that by default they are excluded from dumps
created with arangodump. To include AQL user functions in a dump, the
dump should be started with the option `--include-system-collections true`.
dump should be started with the option *--include-system-collections true*.
View File
@ -1,6 +1,6 @@
!CHAPTER Registering and Unregistering User Functions
AQL user functions can be registered using the `aqlfunctions` object as
AQL user functions can be registered using the *aqlfunctions* object as
follows:
var aqlfunctions = require("org/arangodb/aql/functions");
View File
@ -1,32 +0,0 @@
!CHAPTER Address of a Collection
All collections in ArangoDB have an unique identifier and an unique
name. ArangoDB internally uses the collection's unique identifier to look up
collections. This identifier, however, is managed by ArangoDB and the user has
no control over it. In order to allow users to use their own names, each collection
also has an unique name which is specified by the user. To access a collection
from the user perspective, the collection name should be used, i.e.:
*db._collection(collection-name)*
A collection is created by a ["db._create"](../Collections/DatabaseMethods.md) call.
For example: Assume that the collection identifier is `7254820` and the name is
`demo`, then the collection can be accessed as:
db._collection("demo")
If no collection with such a name exists, then *null* is returned.
There is a short-cut that can be used for non-system collections:
*db.collection-name*
This call will either return the collection named *db.collection-name* or create
a new one with that name and a set of default properties.
Note: Creating a collection on the fly using *db.collection-name* is
not recommend and does not work in _arangosh_. To create a new collection, please
use
*db._create(collection-name)*
View File
@ -1,6 +1,7 @@
!CHAPTER Collection Methods
`collection.drop()`
Drops a collection and all its indexes.
*Examples*
View File
@ -7,4 +7,40 @@ corresponding language API.
The most important call is the call to create a new collection
<!--
, see @ref HandlingCollectionsCreate "db._create".
-->
-->
!SECTION Address of a Collection
All collections in ArangoDB have a unique identifier and a unique
name. ArangoDB internally uses the collection's unique identifier to look up
collections. This identifier, however, is managed by ArangoDB and the user has
no control over it. In order to allow users to use their own names, each collection
also has a unique name which is specified by the user. To access a collection
from the user perspective, the collection name should be used, i.e.:
`db._collection(collection-name)`
A collection is created by a ["db._create"](../Collections/DatabaseMethods.md) call.
For example: Assume that the collection identifier is *7254820* and the name is
*demo*, then the collection can be accessed as:
db._collection("demo")
If no collection with such a name exists, then *null* is returned.
There is a short-cut that can be used for non-system collections:
*db.collection-name*
This call will either return the collection named *collection-name* or create
a new one with that name and a set of default properties.
Note: Creating a collection on the fly using *db.collection-name* is
not recommended and does not work in _arangosh_. To create a new collection, please
use
`db._create(collection-name)`
This call will create a new collection called *collection-name*.
View File
@ -1,10 +1,12 @@
!CHAPTER Notes about Databases
Please keep in mind that each database contains its own system collections,
which need to set up when a database is created. This will make the creation
of a database take a while. Replication is configured on a per-database level,
meaning that any replication logging or applying for a new database must
be configured explicitly after a new database has been created. Foxx applications
which need to be set up when a database is created. This will make the creation of a database take a while.
Replication is configured on a per-database level, meaning that any replication logging or applying for a new database must
be configured explicitly after a new database has been created.
Foxx applications
are also available only in the context of the database they have been installed
in. A new database will only provide access to the system applications shipped
with ArangoDB (that is the web interface at the moment) and no other Foxx
View File
@ -4,7 +4,7 @@ This is an introduction to managing databases in ArangoDB from within
JavaScript.
While being in an established connection to ArangoDB, the current
database can be changed explicitly by using the `db._useDatabase()`
database can be changed explicitly by using the *db._useDatabase()*
method. This will switch to the specified database (provided it
exists and the user can connect to it). From this point on, any
following actions in the same shell or connection will use the
@ -18,7 +18,7 @@ is contained in the HTTP request/response data.
Connecting to a specific database from arangosh is possible with
the above command after arangosh has been started, but it is also
possible to specify a database name when invoking arangosh.
For this purpose, use the command-line parameter `--server.database`,
For this purpose, use the command-line parameter *--server.database*,
e.g.
> arangosh --server.database test
View File
@ -7,16 +7,16 @@ documents of a collection as:
db.collection.document("document-handle")
For example: Assume that the document handle, which is stored in the `_id` field
of the document, is `demo/362549` and the document lives in a collection
named @FA{demo}, then that document can be accessed as:
of the document, is *demo/362549736* and the document lives in a collection
named *demo*, then that document can be accessed as:
db.demo.document("demo/362549736")
Because the document handle is unique within the database, you
can leave out the @FA{collection} and use the shortcut:
can leave out the *collection* and use the shortcut:
db._document("demo/362549736")
Each document also has a document revision or ETag which is returned in the
`_rev` field when requesting a document. The document's key is returned in the
`_key` attribute.
*_rev* field when requesting a document. The document's key is returned in the
*_key* attribute.
View File
@ -33,11 +33,11 @@ For example:
@END_EXAMPLE_ARANGOSH_OUTPUT
-->
All documents contain special attributes: the document handle in `_id`, the
document's unique key in `_key` and and the ETag aka document revision in
`_rev`. The value of the `_key` attribute can be specified by the user when
creating a document. `_id` and `_key` values are immutable once the document
has been created. The `_rev` value is maintained by ArangoDB autonomously.
All documents contain special attributes: the document handle in *_id*, the
document's unique key in *_key* and the ETag aka document revision in
*_rev*. The value of the *_key* attribute can be specified by the user when
creating a document. *_id* and *_key* values are immutable once the document
has been created. The *_rev* value is maintained by ArangoDB autonomously.
A document handle uniquely identifies a document in the database. It is a string and
consists of the collection's name and the document key (_key attribute) separated by /.
View File
@ -14,7 +14,7 @@ ArangoDB is a database that serves documents to clients.
["joins"](../AqlExamples/Join.md) using many collections or graph structures
- *Cursors* are used to iterate over the result of a query
- *Indexes* are used to speed up searches. There are various
types of indexes like @ref IndexHash, @ref IndexGeo and @ref IndexBitArray
types of indexes like [Index Hash](../IndexHandling/Hash.md), [Index Geo](../IndexHandling/Geo.md) and [Index BitArray](../IndexHandling/BitArray.md)
If you are familiar with RDBMS then it is safe to compare collections
to tables and documents to rows. However, bringing structure to the
@ -76,13 +76,13 @@ this.
unix> arangosh --server.endpoint tcp://127.0.0.1:8529 --server.username root
A default configuration is normally installed under
`/etc/arangodb/arangosh.conf`. It contains a default endpoint and an
*/etc/arangodb/arangosh.conf*. It contains a default endpoint and an
empty password.
!SUBSECTION Troubleshooting
If the ArangoDB server does not start or if you cannot connect to it
using `arangosh` or other clients, you can try to find the problem cause by
using *arangosh* or other clients, you can try to find the problem cause by
executing the following steps. If the server starts up without problems
you can skip this section.
@ -90,13 +90,13 @@ you can skip this section.
check it because it might contain relevant error context information.
* *Check the configuration*: The server looks for a configuration file
named `arangod.conf` on startup. The contents of this file will be used
named *arangod.conf* on startup. The contents of this file will be used
as a base configuration that can optionally be overridden with command-line
configuration parameters. You should check the config file for the most
relevant parameters such as:
* `server.endpoint`: What IP address and port to bind to
* `log parameters`: If and where to log
* `database.directory`: Path the database files are stored in
* *server.endpoint*: What IP address and port to bind to
* *log parameters*: If and where to log
* *database.directory*: Path the database files are stored in
If the configuration reveals that something is not configured right the config
file should be adjusted and the server be restarted.
@ -123,9 +123,9 @@ you can skip this section.
It is generally good advice to not use DNS when specifying the endpoints
and connection addresses. Using IP addresses instead will rule out DNS as
a source of errors. Another alternative is to use a hostname specified
in the local `/etc/hosts` file, which will then bypass DNS.
in the local */etc/hosts* file, which will then bypass DNS.
* *Test if `curl` can connect*: Once the server is started, you can quickly
* *Test if curl can connect*: Once the server is started, you can quickly
verify if it responds to requests at all. This check allows you to
determine whether connection errors are client-specific or not. If at
least one client can connect, it is likely that connection problems of
@ -134,28 +134,28 @@ you can skip this section.
You can test connectivity using a simple command such as:
`curl --dump - -X GET http://127.0.0.1:8529/_api/version && echo`
**curl --dump - -X GET http://127.0.0.1:8529/_api/version && echo**
This should return a response with an `HTTP 200` status code when the
This should return a response with an *HTTP 200* status code when the
server is running. If it does it also means the server is generally
accepting connections. Alternative tools to check connectivity are `lynx`
or `ab`.
accepting connections. Alternative tools to check connectivity are *lynx*
or *ab*.
!SECTIONS Querying for Documents
!SECTION Querying for Documents
All documents are stored in collections. All collections are stored in a
database. The database object is accessible via the variable `db`.
database. The database object is accessible via the variable *db*.
Creating a collection is simple. You can use the `_create` method
of the `db` variable.
Creating a collection is simple. You can use the *_create* method
of the *db* variable.
arangosh> db._create("example");
[ArangoCollection 70628, "example" (status loaded)]
After the collection has been created you can easily access it using
the path `db.example`. The collection currently shows as `loaded`,
the path *db.example*. The collection currently shows as *loaded*,
meaning that it's loaded into memory. If you restart the server and
access the collection again it will now show as `unloaded`. You can
access the collection again it will now show as *unloaded*. You can
also manually unload a collection.
arangosh> db.example.unload();
@ -165,7 +165,7 @@ also manually unload a collection.
Whenever you use a collection ArangoDB will automatically load it
into memory for you.
In order to create new documents in a collection use the `save`
In order to create new documents in a collection use the *save*
operation.
arangosh> db.example.save({ Hello : "World" });
@ -177,13 +177,13 @@ operation.
Just storing documents would be no fun. We now want to select some of
the stored documents again. In order to select all elements of a
collection, one can use the `all` operator. Because this might return
collection, one can use the *all* operator. Because this might return
a lot of data, we switch on pretty printing before.
arangosh> start_pretty_print();
use pretty printing
The command `stop_pretty_print()` will switch off pretty printing again.
The command *stop_pretty_print()* will switch off pretty printing again.
Now extract all elements:
arangosh> db.example.all().toArray()
@ -233,7 +233,7 @@ The last document was a mistake so let's delete it:
]
Now we want to look for a person with a given name. We can use
`byExample` for this. The method returns a list of documents
*byExample* for this. The method returns a list of documents
matching a given example.
arangosh> db.example.byExample({ name: "Jane Smith" }).toArray()
@ -247,9 +247,9 @@ matching a given example.
}
]
While the `byExample` works very well for simple queries where you
combine the conditions with an `and`. The syntax above becomes messy for joins
and `or` conditions. Therefore ArangoDB also supports a full-blown
While *byExample* works very well for simple queries where you
combine the conditions with an *and*, the syntax above becomes messy for *joins*
and *or* conditions. Therefore ArangoDB also supports a full-blown
query language.
arangosh> db._query('FOR user IN example FILTER user.name == "Jane Smith" RETURN user').toArray()
@ -276,8 +276,8 @@ Search for all persons over 30:
}
]
You can learn all about the query language @ref Aql "here". Note that
`_query` is a short-cut for `_createStatement` and `execute`. We will
You can learn all about the query language [Aql](../Aql/README.md). Note that
*_query* is a short-cut for *_createStatement* and *execute*. We will
come back to these functions when we talk about cursors.
!SECTION ArangoDB's Front-End
View File
@ -12,13 +12,13 @@ installing ArangoDB locally.
and download the correct package for your Linux distribution
- Install the package using your favorite package manager
- Start up the database server, normally this is done by
executing `/etc/init.d/arangod start`. The exact command
executing */etc/init.d/arangod start*. The exact command
depends on your Linux distribution
!SUBSECTION For MacOS X
- Execute `brew install arangodb`
- And start the server using `/usr/local/sbin/arangod &`
- Execute *brew install arangodb*
- And start the server using */usr/local/sbin/arangod &*
!SUBSECTION For Microsoft Windows
@ -26,20 +26,18 @@ installing ArangoDB locally.
and download the installer for Windows
- Start up the database server
After these steps there should be a running instance of _arangod_ -
After these steps there should be a running instance of *arangod* -
the ArangoDB database server.
unix> ps auxw | fgrep arangod
arangodb 14536 0.1 0.6 5307264 23464 s002 S 1:21pm 0:00.18 /usr/local/sbin/arangod
If there is no such process, check the log file
`/var/log/arangodb/arangod.log` for errors. If you see a log message
*/var/log/arangodb/arangod.log* for errors. If you see a log message
like
2012-12-03T11:35:29Z [12882] ERROR Database directory version (1) is lower than server version (1.2).
2012-12-03T11:35:29Z [12882] ERROR It seems like you have upgraded the ArangoDB binary. If this is what you wanted to do, please restart with the --upgrade option to upgrade the data in the database directory.
2012-12-03T11:35:29Z [12882] FATAL Database version check failed. Please start the server with the --upgrade option
make sure to start the server once with the `--upgrade` option.
make sure to start the server once with the *--upgrade* option.
View File
@ -1,15 +1,18 @@
!CHAPTER Fluent AQL Interface
This chapter describes a fluent interface to query your graph.
The philosophy of this interface is to at first select a group of starting elements (vertices or edges) and from there on explore the graph with your query by selecting connected elements.
The philosophy of this interface is to select a group of starting elements (vertices or edges) at first and from there on explore the graph with your query by selecting connected elements.
As an example you can start with a set of vertices, select their direct neighbors and finally their outgoing edges.
The result of this query will be the set of outgoing edges.
For each part of the query it is possible to further refine the resulting set of elements by giving examples for them.
!SECTION Starting Points
This section describes the entry points for the fluent interface.
In the philosophy of this module you have to start with a specific subset of vertices or edges and from there on iterate over the graph.
The philosophy of this module is to start with a specific subset of vertices or edges and from there on iterate over the graph.
Therefore you get exactly these two entry points:
* Select a set of edges
@ -101,15 +104,13 @@ g._vertices([{name: "Alice"}, {name: "Bob"}]).toArray();
@END_EXAMPLE_ARANGOSH_OUTPUT
<!-- @endDocuBlock -->
!SECTION Fluent query options
!SECTION Working with the query cursor
After the selection of the entry point you can now query your graph in
a fluent way, meaning each of the functions on your query returns the query again.
Hence it is possible to chain arbitrarily many executions one after the other.
The query object itself handles cursor creation and maintenance for you.
The fluent query object handles cursor creation and maintenance for you.
A cursor will be created as soon as you request the first result.
If you are unhappy with the current result and want to refine it further, you can execute another step in the query, which cleans up the cursor for you.
In this interface you get the complete functionality available for general AQL cursors directly on your query.
The cursor functionality is described in this section.
!SUBSECTION ToArray
@ -236,6 +237,13 @@ query.count();
@END_EXAMPLE_ARANGOSH_OUTPUT
<!-- @endDocuBlock -->
!SECTION Fluent queries
After the selection of the entry point you can now query your graph in
a fluent way, meaning each of the functions on your query returns the query again.
Hence it is possible to chain arbitrarily many executions one after the other.
In this section all available query statements are described.
!SUBSECTION Edges
<!-- @startDocuBlock JSF_general_graph_fluent_aql_edges -->
View File
@ -1,69 +0,0 @@
!CHAPTER BitArray Indexes
!SUBSECTION Introduction to Bit-Array Indexes
It is possible to define a bit-array index on one or more attributes (or paths)
of a documents.
!SUBSECTION Accessing BitArray Indexes from the Shell
`collection.ensureBitarray( field1, value1, ..., fieldn, valuen)`
Creates a bitarray index on documents using attributes as paths to the fields ( field1,..., fieldn). A value ( value1,..., valuen) consists of an array of possible values that the field can take. At least one field and one set of possible values must be given.
All documents, which do not have all of the attribute paths are ignored (that is, are not part of the bitarray index, they are however stored within the collection). A document which contains all of the attribute paths yet has one or more values which are not part of the defined range of values will be rejected and the document will not inserted within the collection. Note that, if a bitarray index is created subsequent to any documents inserted in the given collection, then the creation of the index will fail if one or more documents are rejected (due to attribute values being outside the designated range).
In case that the index was successfully created, the index identifier is returned.
In the example below we create a bitarray index with one field and that field can have the values of either 0 or 1. Any document which has the attribute x defined and does not have a value of 0 or 1 will be rejected and therefore not inserted within the collection. Documents without the attribute x defined will not take part in the index.
arango> arangod> db.example.ensureBitarray("x", [0,1]);
{
"id" : "2755894/3607862",
"unique" : false,
"type" : "bitarray",
"fields" : [["x", [0, 1]]],
"undefined" : false,
"isNewlyCreated" : true
}
In the example below we create a bitarray index with one field and that field can have the values of either 0, 1 or other (indicated by []). Any document which has the attribute x defined will take part in the index. Documents without the attribute x defined will not take part in the index.
arangod> db.example.ensureBitarray("x", [0,1,[]]);
{
"id" : "2755894/4263222",
"unique" : false,
"type" : "bitarray",
"fields" : [["x", [0, 1, [ ]]]],
"undefined" : false,
"isNewlyCreated" : true
}
In the example below we create a bitarray index with two fields. Field x can have the values of either 0 or 1; while field y can have the values of 2 or "a". A document which does not have both attributes x and y will not take part within the index. A document which does have both attributes x and y defined must have the values 0 or 1 for attribute x and 2 or a for attribute y, otherwise the document will not be inserted within the collection.
arangod> db.example.ensureBitarray("x", [0,1], "y", [2,"a"]);
{
"id" : "2755894/5246262",
"unique" : false,
"type" : "bitarray",
"fields" : [["x", [0, 1]], ["y", [0, 1]]],
"undefined" : false,
"isNewlyCreated" : false
}
In the example below we create a bitarray index with two fields. Field x can have the values of either 0 or 1; while field y can have the values of 2, "a" or other . A document which does not have both attributes x and y will not take part within the index. A document which does have both attributes x and y defined must have the values 0 or 1 for attribute x and any value for attribute y will be acceptable, otherwise the document will not be inserted within the collection.
arangod> db.example.ensureBitarray("x", [0,1], "y", [2,"a",[]]);
{
"id" : "2755894/5770550",
"unique" : false,
"type" : "bitarray",
"fields" : [["x", [0, 1]], ["y", [2, "a", [ ]]]],
"undefined" : false,
"isNewlyCreated" : true
}
<!--
@anchor IndexBitArrayShellEnsureBitarray
@copydetails JSF_ArangoCollection_prototype_ensureBitarray
-->
View File
@ -67,8 +67,8 @@ Download the latest source using GIT:
git clone git://github.com/triAGENS/ArangoDB.git
Note: if you only plan to compile ArangoDB locally and do not want to modify or push
any changes, you can speed up cloning substantially by using the `--single-branch` and
`--depth` parameters for the clone command as follws:
any changes, you can speed up cloning substantially by using the *--single-branch* and
*--depth* parameters for the clone command as follows:
git clone --single-branch --depth 1 git://github.com/triAGENS/ArangoDB.git
@ -152,7 +152,7 @@ The ArangoShell will be installed in
/usr/local/bin/arangosh
When upgrading from a previous version of ArangoDB, please make sure you inspect ArangoDB's
log file after an upgrade. It may also be necessary to start ArangoDB with the `--upgrade`
log file after an upgrade. It may also be necessary to start ArangoDB with the *--upgrade*
parameter once to perform required upgrade or initialisation tasks.
!SECTION Devel Version
@ -161,25 +161,25 @@ parameter once to perform required upgrade or initialisation tasks.
Verify that your system contains
- the GNU C/C++ compilers "gcc" and "g++" and the standard C/C++ libraries. You will
compiler and library support for C++11. To be on the safe side with gcc/g++, you will
need version number 4.8 or higher. For "clang" and "clang++", you will need at least
version 3.4.
- the GNU autotools (autoconf, automake)
- the GNU make
- the GNU scanner generator FLEX, at least version 2.3.35
- the GNU parser generator BISON, at least version 2.4
- Python, version 2 or 3
- Go, version 1.2 or higher
* the GNU C/C++ compilers "gcc" and "g++" and the standard C/C++ libraries. You will
  need compiler and library support for C++11. To be on the safe side with gcc/g++, you
  will need version number 4.8 or higher. For "clang" and "clang++", you will need at
  least version 3.4.
* the GNU autotools (autoconf, automake)
* the GNU make
* the GNU scanner generator FLEX, at least version 2.3.35
* the GNU parser generator BISON, at least version 2.4
* Python, version 2 or 3
* Go, version 1.2 or higher
In addition you will need the following libraries
- libev in version 3 or 4 (only when configured with `--disable-all-in-one-libev`)
- Google's V8 engine (only when configured with `--disable-all-in-one-v8`)
- the ICU library (only when not configured with `--enable-all-in-one-icu`)
- the GNU readline library
- the OpenSSL library
- the Boost test framework library (boost_unit_test_framework)
* libev in version 3 or 4 (only when configured with `--disable-all-in-one-libev`)
* Google's V8 engine (only when configured with `--disable-all-in-one-v8`)
* the ICU library (only when not configured with `--enable-all-in-one-icu`)
* the GNU readline library
* the OpenSSL library
* the Boost test framework library (boost_unit_test_framework)
To compile Google V8 yourself, you will also need Python 2 and SCons.
@ -246,47 +246,69 @@ correct versions on your system.
The following configuration options exist:
`--enable-relative` will make relative paths be used in the compiled binaries and
`--enable-relative`
This will make relative paths be used in the compiled binaries and
scripts. It allows running ArangoDB from the compile directory directly, without the
need for a `make install` command and specifying much configuration parameters.
need for a *make install* command or many configuration parameters.
When used, you can start ArangoDB using this command:
bin/arangod /tmp/database-dir
ArangoDB will then automatically use the configuration from file `etc/relative/arangod.conf`.
ArangoDB will then automatically use the configuration from file *etc/relative/arangod.conf*.
`--enable-all-in-one-libev` tells the build system to use the bundled version
`--enable-all-in-one-libev`
This tells the build system to use the bundled version
of libev instead of using the system version.
`--disable-all-in-one-libev` tells the build system to use the installed
`--disable-all-in-one-libev`
This tells the build system to use the installed
system version of libev instead of compiling the supplied version from the
3rdParty directory in the make run.
`--enable-all-in-one-v8` tells the build system to use the bundled version of
`--enable-all-in-one-v8`
This tells the build system to use the bundled version of
V8 instead of using the system version.
`--disable-all-in-one-v8` tells the build system to use the installed system
`--disable-all-in-one-v8`
This tells the build system to use the installed system
version of V8 instead of compiling the supplied version from the 3rdParty
directory in the make run.
`--enable-all-in-one-icu` tells the build system to use the bundled version of
`--enable-all-in-one-icu`
This tells the build system to use the bundled version of
ICU instead of using the system version.
`--disable-all-in-one-icu` tells the build system to use the installed system
`--disable-all-in-one-icu`
This tells the build system to use the installed system
version of ICU instead of compiling the supplied version from the 3rdParty
directory in the make run.
`--enable-all-in-one-boost` tells the build system to use the bundled version of
`--enable-all-in-one-boost`
This tells the build system to use the bundled version of
Boost header files. This is the default and recommended.
`--enable-all-in-one-etcd` tells the build system to use the bundled version
`--enable-all-in-one-etcd`
This tells the build system to use the bundled version
of ETCD. This is the default and recommended.
`--enable-internal-go` tells the build system to use Go binaries located in the
`--enable-internal-go`
This tells the build system to use Go binaries located in the
3rdParty directory. Note that ArangoDB does not ship with Go binaries, and that
the Go binaries must be copied into this directory manually.
`--enable-maintainer-mode` tells the build system to use BISON and FLEX to
`--enable-maintainer-mode`
This tells the build system to use BISON and FLEX to
regenerate the parser and scanner files. If disabled, the supplied files will be
used so you cannot make changes to the parser and scanner files. You need at
least BISON 2.4.1 and FLEX 2.5.35. This option also allows you to make changes
View File
@ -10,7 +10,7 @@ graphical user interface to start and stop the server.
!SECTION Homebrew
If you are using [homebrew](http://brew.sh/),
then you can install the ArangoDB using `brew` as follows:
then you can install ArangoDB using *brew* as follows:
brew install arangodb
@ -51,8 +51,8 @@ days or weeks until the latest versions are available.
In case you are not using homebrew, we also provide a command-line app. You can
download it from [here](http://www.arangodb.org/download).
Choose `Mac OS X` and go to `Grab binary packages directly`. This allows you to
install the application `ArangoDB-CLI` in your application folder.
Choose *Mac OS X* and go to *Grab binary packages directly*. This allows you to
install the application *ArangoDB-CLI* in your application folder.
Starting the application will start the server and open a terminal window
showing you the log-file.
View File
@ -2,9 +2,9 @@
!SECTION Choices
The default installation directory is `c:\Program Files\ArangoDB-1.x.y`. During the
The default installation directory is *c:\Program Files\ArangoDB-1.x.y*. During the
installation process you may change this. In the following description we will assume
that ArangoDB has been installed in the location `<ROOTDIR>`.
that ArangoDB has been installed in the location *<ROOTDIR>*.
You have to be careful when choosing an installation directory. You need either
write permission to this directory or you need to modify the config file for the
@ -12,11 +12,11 @@ server process. In the latter case the database directory and the Foxx directory
have to be writable by the user.
Installing for a single user: Select a different directory during
installation. For example `C:/Users/<username>/arangodb` or `C:/ArangoDB`.
installation. For example *C:/Users/<username>/arangodb* or *C:/ArangoDB*.
Installing for multiple users: Keep the default directory. After the
installation edit the file `<ROOTDIR>/etc/arangodb/arangod.conf`. Adjust the
`directory` and `app-path` so that these paths point into your home directory.
installation edit the file *<ROOTDIR>/etc/arangodb/arangod.conf*. Adjust the
*directory* and *app-path* so that these paths point into your home directory.
[database]
directory = @HOMEDRIVE@/@HOMEPATH@/arangodb/databases
@ -27,8 +27,8 @@ installation edit the file `<ROOTDIR>/etc/arangodb/arangod.conf`. Adjust the
Create the directories for each user that wants to use ArangoDB.
Installing as Service: Keep the default directory. After the installation open
a command line as administrator (search for `cmd` and right click `run as
administrator`).
a command line as administrator (search for *cmd* and right click *run as
administrator*).
cmd> arangod --install-service
INFO: adding service 'ArangoDB - the multi-purpose database' (internal 'ArangoDB')
@ -53,25 +53,25 @@ not proceed correctly or if the server terminated unexpectedly.
!SUBSECTION Starting
To start an ArangoDB server instance with networking enabled, use the executable
`arangod.exe` located in `<ROOTDIR>/bin`. This will use the configuration
file `arangod.conf` located in `<ROOTDIR>/etc/arangodb`, which you can adjust
to your needs and use the data directory `<ROOTDIR>/var/lib/arangodb`. This
*arangod.exe* located in *<ROOTDIR>/bin*. This will use the configuration
file *arangod.conf* located in *<ROOTDIR>/etc/arangodb*, which you can adjust
to your needs and use the data directory *<ROOTDIR>/var/lib/arangodb*. This
is the place where all your data (databases and collections) will be stored
by default.
Please check the output of the `arangod.exe` executable before going on. If the
Please check the output of the *arangod.exe* executable before going on. If the
server started successfully, you should see a line `ArangoDB is ready for
business. Have fun!` at the end of its output.
We now wish to check that the installation is working correctly and to do this
we will be using the administration web interface. Execute `arangod.exe` if you
we will be using the administration web interface. Execute *arangod.exe* if you
have not already done so, then open up your web browser and point it to the
page:
http://127.0.0.1:8529/
To check if your installation was successful, click the `Collection` tab and
open the configuration. Select the `System` type. If the installation was
To check if your installation was successful, click the *Collection* tab and
open the configuration. Select the *System* type. If the installation was
successful, then the page should display a few system collections.
Try to add a new collection and then add some documents to this new collection.
@ -81,22 +81,22 @@ documents, then your installation is working correctly.
!SUBSECTION Advanced Starting
If you want to provide your own start scripts, you can set the environment
variable `ARANGODB_CONFIG_PATH`. This variable should point to a directory
variable *ARANGODB_CONFIG_PATH*. This variable should point to a directory
containing the configuration files.
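For example, a custom start script could set the variable before launching the
server. A minimal sketch, assuming the default directory layout:
    cmd> set ARANGODB_CONFIG_PATH=<ROOTDIR>\etc\arangodb
    cmd> <ROOTDIR>\bin\arangod.exe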
!SUBSECTION Using the Client
To connect to an already running ArangoDB server instance, there is a shell
`arangosh.exe` located in `<ROOTDIR>/bin`. This starts a shell which can be
*arangosh.exe* located in *<ROOTDIR>/bin*. This starts a shell which can be
used amongst other things to administer and query a local or remote
ArangoDB server.
Note that `arangosh.exe` does NOT start a separate server, it only starts the
Note that *arangosh.exe* does NOT start a separate server, it only starts the
shell. To use it you must have a server running somewhere, e.g. by using
the `arangod.exe` executable.
the *arangod.exe* executable.
`arangosh.exe` uses configuration from the file `arangosh.conf` located in
`<ROOTDIR>/etc/arangodb/`. Please adjust this to your needs if you want to
*arangosh.exe* uses configuration from the file *arangosh.conf* located in
*<ROOTDIR>/etc/arangodb/*. Please adjust this to your needs if you want to
use different connection settings etc.
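For example, to connect to a server listening on a non-default endpoint without
editing the file, the endpoint can be passed on the command line. A sketch,
where host and port are assumptions:
    cmd> arangosh.exe --server.endpoint tcp://127.0.0.1:8529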
!SUBSECTION 32bit
@ -110,8 +110,8 @@ database, and vice versa.
!SUBSECTION Upgrading
To upgrade an EXISTING database created with a previous version of ArangoDB,
please execute the server `arangod.exe` with the option
`--upgrade`. Otherwise starting ArangoDB may fail with errors.
please execute the server *arangod.exe* with the option
*--upgrade*. Otherwise starting ArangoDB may fail with errors.
Note that there is no harm in running the upgrade. So you should run this
batch file if you are unsure of the database version you are using.
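A minimal upgrade invocation looks like this:
    cmd> arangod.exe --upgrade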
@ -123,17 +123,17 @@ completed successfully.
To uninstall the Arango server application you can use the Windows control panel
(as you would normally uninstall an application). Note however, that any data
files created by the Arango server will remain as well as the `<ROOTDIR>`
files created by the Arango server will remain as well as the *<ROOTDIR>*
directory. To complete the uninstallation process, remove the data files and
the `<ROOTDIR>` directory manually.
the *<ROOTDIR>* directory manually.
!SUBSECTION Limitations for Cygwin
Please note some important limitations when running ArangoDB under Cygwin:
ArangoDB can be started from within a Cygwin terminal, but pressing
`CTRL-C` will forcefully kill the server process without giving it a chance to
*CTRL-C* will forcefully kill the server process without giving it a chance to
handle the kill signal. In this case, a regular server shutdown is not possible,
which may leave a file `LOCK` around in the server's data directory. This file
which may leave a file *LOCK* around in the server's data directory. This file
needs to be removed manually to make ArangoDB start again. Additionally, as
ArangoDB does not have a chance to handle the kill signal, the server cannot
forcefully flush any data to disk on shutdown, leading to potential data loss.

View File

@ -36,7 +36,6 @@ If you have any questions don't hesitate to ask on:
- [github](https://github.com/triAGENS/ArangoDB/issues)
- [google groups](https://groups.google.com/forum/?hl=de#!forum/arangodb)
or
- [stackoverflow](http://stackoverflow.com/questions/tagged/arangodb)
We will respond as soon as possible.

View File

@ -7,34 +7,32 @@
* [Compiling](Installing/Compiling.md)
<!-- 2 -->
* [First Steps](FirstSteps/README.md)
* [Getting Familiar](FirstSteps/GettingFamiliar.md)
* [Collections and Documents](FirstSteps/CollectionsAndDocuments.md)
* [The ArangoDB Server](FirstSteps/Arangod.md)
* [ArangoDB Shell](FirstSteps/Arangosh.md)
* [Getting Familiar](FirstSteps/GettingFamiliar.md)
* [The ArangoDB Server](FirstSteps/Arangod.md)
* [The ArangoDB Shell](FirstSteps/README.md)
* [Shell Output](Arangosh/Output.md)
* [Configuration](Arangosh/Configuration.md)
* [Details](FirstSteps/Arangosh.md)
* [Collections](FirstSteps/CollectionsAndDocuments.md)
<!-- 3 -->
* [The ArangoDB Shell](Arangosh/README.md)
* [Shell Output](Arangosh/Output.md)
* [Shell Configuration](Arangosh/Configuration.md)
<!-- 4 -->
* [ArangoDB Web Interface](WebInterface/README.md)
* [Some Features](WebInterface/Features.md)
<!-- 5 -->
<!-- 4 -->
* [Handling Databases](Databases/README.md)
* [Working with Databases](Databases/WorkingWith.md)
* [Notes about Databases](Databases/Notes.md)
<!-- 6 -->
<!-- 5 -->
* [Handling Collections](Collections/README.md)
* [Address of a Collection](Collections/CollectionAddress.md)
* [Collection Methods](Collections/CollectionMethods.md)
* [Database Methods](Collections/DatabaseMethods.md)
<!-- 7 -->
<!-- 6 -->
* [Handling Documents](Documents/README.md)
* [Address and ETag](Documents/DocumentAddress.md)
* [Collection Methods](Documents/DocumentMethods.md)
* [Database Methods](Documents/DatabaseMethods.md)
<!-- 8 -->
<!-- 7 -->
* [Handling Edges](Edges/README.md)
<!-- 9 -->
<!-- 8 -->
* [Simple Queries](SimpleQueries/README.md)
* [Queries](SimpleQueries/Queries.md)
* [Geo Queries](SimpleQueries/GeoQueries.md)
@ -42,6 +40,13 @@
* [Pagination](SimpleQueries/Pagination.md)
* [Sequential Access](SimpleQueries/Access.md)
* [Modification Queries](SimpleQueries/ModificationQueries.md)
<!-- 9 -->
* [Transactions](Transactions/README.md)
* [Transaction invocation](Transactions/TransactionInvocation.md)
* [Passing parameters](Transactions/Passing.md)
* [Locking and isolation](Transactions/LockingAndIsolation.md)
* [Durability](Transactions/Durability.md)
* [Limitations](Transactions/Limitations.md)
<!-- 10 -->
* [AQL](Aql/README.md)
* [How to invoke AQL](Aql/Invoke.md)
@ -55,8 +60,8 @@
* [Conventions](AqlExtending/Conventions.md)
* [Registering Functions](AqlExtending/Functions.md)
<!-- 12 -->
* [AQL Examples](AqlExamples/README.md)
* [Simple queries](AqlExamples/SimpleQueries.md)
* [AQL](AqlExamples/README.md)
* [AqlExamples](AqlExamples/Examples.md)
* [Collection based queries](AqlExamples/CollectionQueries.md)
* [Projections and filters](AqlExamples/ProjectionsAndFilters.md)
* [Joins](AqlExamples/Join.md)
@ -66,28 +71,22 @@
* [Graph Constructor](Blueprint-Graphs/GraphConstructor.md)
* [Vertex Methods](Blueprint-Graphs/VertexMethods.md)
* [Edge Methods](Blueprint-Graphs/EdgeMethods.md)
<!-- 13.5 -->
<!-- 14 -->
* [General-Graphs](General-Graphs/README.md)
* [Graph Functions](General-Graphs/GeneralGraphFunctions.md)
* [Fluent AQL Interface](General-Graphs/FluentAQLInterface.md)
<!-- 14 -->
<!-- 15 -->
* [Traversals](Traversals/README.md)
* [Starting from Scratch](Traversals/StartingFromScratch.md)
* [Using Traversal Objects](Traversals/UsingTraversalObjects.md)
* [Example Data](Traversals/ExampleData.md)
<!-- 15 -->
<!-- 16 -->
* [Transactions](Transactions/README.md)
* [Transaction invocation](Transactions/TransactionInvocation.md)
* [Passing parameters](Transactions/Passing.md)
* [Locking and isolation](Transactions/LockingAndIsolation.md)
* [Durability](Transactions/Durability.md)
* [Limitations](Transactions/Limitations.md)
<!-- 16 -->
* [Replication](Replication/README.md)
* [Components](Replication/Components.md)
* [Example Setup](Replication/ExampleSetup.md)
* [Replication Limitations](Replication/Limitations.md)
* [Replication Overhead](Replication/Overhead.md)
<!-- 17 -->
* [Foxx](Foxx/README.md)
* [Handling Request](Foxx/HandlingRequest.md)
@ -105,150 +104,120 @@
* [Manager Commands](FoxxManager/ManagerCommands.md)
* [Frequently Used Options](FoxxManager/FrequentlyUsedOptions.md)
<!-- 19 -->
* [ArangoDB's Actions](ArangoActions/README.md)
<!-- 20 -->
* [Replication](Replication/README.md)
* [Components](Replication/Components.md)
* [Example Setup](Replication/ExampleSetup.md)
* [Replication Limitations](Replication/Limitations.md)
* [Replication Overhead](Replication/Overhead.md)
* [Replication Events](Replication/Events.md)
<!-- 21 -->
* [Sharding](Sharding/README.md)
* [How to try it out](Sharding/HowTo.md)
* [Implementation](Sharding/StatusOfImplementation.md)
* [Authentication](Sharding/Authentication.md)
* [Firewall setup](Sharding/FirewallSetup.md)
<!-- 20 -->
* [Managing Endpoints](ManagingEndpoints/README.md)
<!-- 21 -->
* [Command-line Options](CommandLineOptions/README.md)
* [General options](CommandLineOptions/GeneralOptions.md)
* [Arangod options](CommandLineOptions/Arangod.md)
* [Development options](CommandLineOptions/Development.md)
* [Cluster options](CommandLineOptions/Cluster.md)
* [Logging options](CommandLineOptions/Logging.md)
* [Communication options](CommandLineOptions/Communication.md)
* [Random numbers](CommandLineOptions/RandomNumbers.md)
<!-- 22 -->
* [Arangoimp](Arangoimp/README.md)
* [Configure ArangoDB](ConfigureArango/README.md)
* [General options](ConfigureArango/GeneralOptions.md)
* [Arangod options](ConfigureArango/Arangod.md)
* [Endpoints options](ConfigureArango/Endpoint.md)
* [Development options](ConfigureArango/Development.md)
* [Cluster options](ConfigureArango/Cluster.md)
* [Logging options](ConfigureArango/Logging.md)
* [Communication options](ConfigureArango/Communication.md)
* [Random numbers](ConfigureArango/RandomNumbers.md)
* [Authentication](ConfigureArango/Authentication.md)
* [Emergency Console](ConfigureArango/EmergencyConsole.md)
<!-- 23 -->
* [Arangodump](Arangodump/README.md)
* [Arangoimp](Arangoimp/README.md)
<!-- 24 -->
* [Arangorestore](Arangorestore/README.md)
* [Arangodump](Arangodump/README.md)
<!-- 25 -->
* [HTTP Databases](HttpDatabase/README.md)
* [Database-to-Endpoint](HttpDatabase/DatabaseEndpoint.md)
* [Database Management](HttpDatabase/DatabaseManagement.md)
* [Managing Databases (http)](HttpDatabase/ManagingDatabasesUsingHttp.md)
* [Note on Databases](HttpDatabase/NotesOnDatabases.md)
* [Arangorestore](Arangorestore/README.md)
<!-- 26 -->
* [HTTP Documents](HttpDocuments/README.md)
* [Address and ETag](HttpDocuments/AddressAndEtag.md)
* [Working with Documents](HttpDocuments/WorkingWithDocuments.md)
* [HTTP API](HttpApi/README.md)
* [Databases](HttpDatabase/README.md)
* [To-Endpoint](HttpDatabase/DatabaseEndpoint.md)
* [Management](HttpDatabase/DatabaseManagement.md)
* [Managing (http)](HttpDatabase/ManagingDatabasesUsingHttp.md)
* [Note on Databases](HttpDatabase/NotesOnDatabases.md)
* [Documents](HttpDocuments/README.md)
* [Address and ETag](HttpDocuments/AddressAndEtag.md)
* [Working with](HttpDocuments/WorkingWithDocuments.md)
* [Edges](HttpEdges/README.md)
* [Documents](HttpEdges/Documents.md)
* [Address and ETag](HttpEdges/AddressAndEtag.md)
* [Working with Edges](HttpEdges/WorkingWithEdges.md)
* [AQL Query Cursors](HttpAqlQueryCursor/README.md)
* [Query Results](HttpAqlQueryCursor/QueryResults.md)
* [Accessing Cursors](HttpAqlQueryCursor/AccessingCursors.md)
* [AQL Queries](HttpAqlQueries/README.md)
* [AQL User Functions Management](HttpAqlUserFunctions/README.md)
* [Simple Queries](HttpSimpleQueries/README.md)
* [Collections](HttpCollections/README.md)
* [Address](HttpCollections/Address.md)
* [Creating](HttpCollections/Creating.md)
* [Getting Information](HttpCollections/Getting.md)
* [Modifying](HttpCollections/Modifying.md)
* [Indexes](HttpIndexes/README.md)
* [Address of an Index](HttpIndexes/Address.md)
* [Working with Indexes](HttpIndexes/WorkingWith.md)
* [Index Type](HttpIndexes/SpecializedIndex.md)
* [Transactions](HttpTransactions/README.md)
* [Graphs](HttpGraphs/README.md)
* [Vertex](HttpGraphs/Vertex.md)
* [Edges](HttpGraphs/Edge.md)
* [Traversals](HttpTraversal/README.md)
* [Replication](HttpReplications/README.md)
* [Replication Dump](HttpReplications/ReplicationDump.md)
* [Replication Logger](HttpReplications/ReplicationLogger.md)
* [Replication Applier](HttpReplications/ReplicationApplier.md)
* [Other Replications](HttpReplications/OtherReplication.md)
* [Bulk Imports](HttpBulkImports/README.md)
* [JSON Documents](HttpBulkImports/ImportingSelfContained.md)
* [Headers and Values](HttpBulkImports/ImportingHeadersAndValues.md)
* [Edge Collections](HttpBulkImports/ImportingIntoEdges.md)
* [Batch Requests](HttpBatchRequest/README.md)
* [Monitoring](HttpAdministrationAndMonitoring/README.md)
* [User Management](HttpUserManagement/README.md)
* [Async Result](HttpAsyncResultsManagement/README.md)
* [Management](HttpAsyncResultsManagement/ManagingAsyncResults.md)
* [Endpoints](HttpEndpoints/README.md)
* [Sharding](HttpSharding/README.md)
* [Miscellaneous functions](HttpMiscellaneousFunctions/README.md)
* [General Handling](GeneralHttp/README.md)
<!-- 27 -->
* [HTTP Edges](HttpEdges/README.md)
* [Documents, Identifiers, Handles](HttpEdges/Documents.md)
* [Address and ETag](HttpEdges/AddressAndEtag.md)
* [Working with Edges](HttpEdges/WorkingWithEdges.md)
<!-- 28 -->
* [HTTP AQL Query Cursors](HttpAqlQueryCursor/README.md)
* [Retrieving query results](HttpAqlQueryCursor/QueryResults.md)
* [Accessing Cursors](HttpAqlQueryCursor/AccessingCursors.md)
<!-- 29 -->
* [HTTP AQL Queries](HttpAqlQueries/README.md)
<!-- 30 -->
* [HTTP AQL User Functions Management](HttpAqlUserFunctions/README.md)
<!-- 31 -->
* [HTTP Simple Queries](HttpSimpleQueries/README.md)
<!-- 32 -->
* [HTTP Collections](HttpCollections/README.md)
* [Address of a Collection](HttpCollections/Address.md)
* [Creating Collections](HttpCollections/Creating.md)
* [Getting Information](HttpCollections/Getting.md)
* [Modifying a Collection](HttpCollections/Modifying.md)
<!-- 33 -->
* [HTTP Indexes](HttpIndexes/README.md)
* [HTTP Address of an Index](HttpIndexes/Address.md)
* [HTTP Working with Indexes](HttpIndexes/WorkingWith.md)
* [HTTP Specialized Index Type Methods](HttpIndexes/SpecializedIndex.md)
<!-- 34 -->
* [HTTP Transactions](HttpTransactions/README.md)
<!-- 35 -->
* [HTTP Graphs](HttpGraphs/README.md)
* [Vertex](HttpGraphs/Vertex.md)
* [Edges](HttpGraphs/Edge.md)
<!-- 36 -->
* [HTTP Traversals](HttpTraversal/README.md)
<!-- 37 -->
* [HTTP Replication](HttpReplications/README.md)
* [Replication Dump](HttpReplications/ReplicationDump.md)
* [Replication Logger](HttpReplications/ReplicationLogger.md)
* [Replication Applier](HttpReplications/ReplicationApplier.md)
* [Other Replications](HttpReplications/OtherReplication.md)
<!-- 38 -->
* [HTTP Bulk Imports](HttpBulkImports/README.md)
* [Self-Contained JSON Documents](HttpBulkImports/ImportingSelfContained.md)
* [Headers and Values](HttpBulkImports/ImportingHeadersAndValues.md)
* [Edge Collections](HttpBulkImports/ImportingIntoEdges.md)
<!-- 39 -->
* [HTTP Batch Requests](HttpBatchRequest/README.md)
<!-- 40 -->
* [HTTP Administration and Monitoring](HttpAdministrationAndMonitoring/README.md)
<!-- 41 -->
* [HTTP User Management](HttpUserManagement/README.md)
<!-- 42 -->
* [HTTP Async Results Management](HttpAsyncResultsManagement/README.md)
* [Async Results Management](HttpAsyncResultsManagement/ManagingAsyncResults.md)
<!-- 43 -->
* [HTTP Endpoints](HttpEndpoints/README.md)
<!-- 44 -->
* [HTTP Sharding](HttpSharding/README.md)
<!-- 45 -->
* [HTTP Miscellaneous functions](HttpMiscellaneousFunctions/README.md)
<!-- 46 -->
* [General HTTP Handling](GeneralHttp/README.md)
<!-- 47 -->
* [Javascript Modules](ModuleJavaScript/README.md)
* [Common JSModules](ModuleJavaScript/JSModules.md)
* [Modules Path](ModuleJavaScript/ModulesPath.md)
<!-- 48 -->
* [Module "console"](ModuleConsole/README.md)
<!-- 49 -->
* [Module "fs"](ModuleFs/README.md)
<!-- 50 -->
* [Module "graph"](ModuleGraph/README.md)
* [Graph Constructors](ModuleGraph/GraphConstructor.md)
* [Vertex Methods](ModuleGraph/VertexMethods.md)
* [Edge Methods](ModuleGraph/EdgeMethods.md)
<!-- 51 -->
* [Module "actions"](ModuleActions/README.md)
<!-- 52 -->
* [Module "planner"](ModulePlanner/README.md)
<!-- 53 -->
* [Using jsUnity](UsingJsUnity/README.md)
<!-- 54 -->
* [ArangoDB's Actions](ArangoActions/README.md)
<!-- 55 -->
* [Replication Events](ReplicationEvents/README.md)
<!-- 56 -->
* [Common JSModules](ModuleJavaScript/JSModules.md)
* [Path](ModuleJavaScript/ModulesPath.md)
* ["console"](ModuleConsole/README.md)
* ["fs"](ModuleFs/README.md)
* ["graph"](ModuleGraph/README.md)
* [Graph Constructors](ModuleGraph/GraphConstructor.md)
* [Vertex Methods](ModuleGraph/VertexMethods.md)
* [Edge Methods](ModuleGraph/EdgeMethods.md)
* ["actions"](ModuleActions/README.md)
* ["planner"](ModulePlanner/README.md)
* [Using jsUnity](UsingJsUnity/README.md)
<!-- 28 -->
* [Administrating ArangoDB](AdministratingArango/README.md)
<!-- 57 -->
<!-- 29 -->
* [Handling Indexes](IndexHandling/README.md)
<!-- 58 -->
* [Cap Constraint](IndexCap/README.md)
<!-- 59 -->
* [Geo Indexes](IndexGeo/README.md)
<!-- 60 -->
* [Fulltext Indexes](IndexFulltext/README.md)
<!-- 61 -->
* [Hash Indexes](IndexHash/README.md)
<!-- 62 -->
* [Skip-Lists](IndexSkiplist/README.md)
<!-- 63 -->
* [BitArray Indexes](IndexBitArray/README.md)
<!-- 64 -->
* [Authentication](Authentication/README.md)
<!-- 65 -->
* [Cap Constraint](IndexHandling/Cap.md)
* [Geo Indexes](IndexHandling/Geo.md)
* [Fulltext Indexes](IndexHandling/Fulltext.md)
* [Hash Indexes](IndexHandling/Hash.md)
* [Skip-Lists](IndexHandling/Skiplist.md)
* [BitArray Indexes](IndexHandling/BitArray.md)
<!-- 30 -->
* [Datafile Debugger](DatafileDebugger/README.md)
<!-- 66 -->
* [Emergency Console](EmergencyConsole/README.md)
<!-- 67 -->
<!-- 31 -->
* [Naming Conventions](NamingConventions/README.md)
* [Database Names](NamingConventions/DatabaseNames.md)
* [Collection Names](NamingConventions/CollectionNames.md)
* [Document Keys](NamingConventions/DocumentKeys.md)
* [Attribute Names](NamingConventions/AttributeNames.md)
<!-- 68 -->
<!-- 32 -->
* [Error codes and meanings](ErrorCodes/README.md)

View File

@ -39,30 +39,30 @@ documents will be returned that contain all search words. This default behavior
can be changed by providing the extra control characters in the fulltext query,
which are:
- `+`: logical AND (intersection)
- `|`: logical OR (union)
- `-`: negation (exclusion)
- *+*: logical AND (intersection)
- *|*: logical OR (union)
- *-*: negation (exclusion)
*Examples:*
- `"banana"`: searches for documents containing "banana"
- `"banana,apple"`: searches for documents containing both "banana" AND "apple"
- `"banana,|orange"`: searches for documents containing eihter "banana" OR "orange"
- *"banana"*: searches for documents containing "banana"
- *"banana,apple"*: searches for documents containing both "banana" AND "apple"
- *"banana,|orange"*: searches for documents containing either "banana" OR "orange"
(or both)
- `"banana,-apple"`: searches for documents that contain "banana" but NOT "apple".
- *"banana,-apple"*: searches for documents that contain "banana" but NOT "apple".
Logical operators are evaluated from left to right.
Each search word can optionally be prefixed with `complete:` or `prefix:`, with
`complete:` being the default. This allows searching for complete words or for
Each search word can optionally be prefixed with *complete:* or *prefix:*, with
*complete:* being the default. This allows searching for complete words or for
word prefixes. Suffix searches or any other forms of partial-word matching are
currently not supported.
Examples:
- `"complete:banana"`: searches for documents containing the exact word "banana"
- `"prefix:head"`: searches for documents with words that start with prefix "head"
- `"prefix:head,banana"`: searches for documents contain words starting with prefix
- *"complete:banana"*: searches for documents containing the exact word "banana"
- *"prefix:head"*: searches for documents with words that start with prefix "head"
- *"prefix:head,banana"*: searches for documents contain words starting with prefix
"head" and that also contain the exact word "banana".
Complete match and prefix search options can be combined with the logical
@ -71,7 +71,7 @@ operators.
Please note that only words with a minimum length will get indexed. This minimum
length can be defined when creating the fulltext index. For words tokenisation,
the libicu text boundary analysis is used, which takes into account the default
as defined at server startup (`--server.default-language` startup
as defined at server startup (*--server.default-language* startup
option). Generally, the word boundary analysis will filter out punctuation but
will not do much more.
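As a closing illustration, here is a short arangosh sketch that combines the
query options above. It assumes a collection *emails* with a fulltext index on
its *content* attribute; both names are made up for the example:
    db.emails.ensureFulltextIndex("content");
    db.emails.fulltext("content", "prefix:head,-banana").toArray();
The query returns all documents containing a word starting with "head" but not
containing the exact word "banana".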

View File

@ -6,8 +6,8 @@ use a very elaborate algorithm to lookup neighbors that is a magnitude faster
than a simple R* index.
In general a geo coordinate is a pair of latitude and longitude. This can
either be a list with two elements like `[-10, +30]` (latitude first, followed
by longitude) or an object like `{lon: -10, lat: +30`}. In order to find all
either be a list with two elements like *[-10, +30]* (latitude first, followed
by longitude) or an object like *{lon: -10, lat: +30}*. In order to find all
documents within a given radius around a coordinate use the *within*
operator. In order to find all documents near a given document use the *near*
operator.
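A short arangosh sketch, assuming a collection *places* whose documents store
their coordinate in a *location* attribute (both names are assumptions):
    db.places.ensureGeoIndex("location");
    db.places.near(-10, 30).limit(5).toArray();   // the 5 documents closest to [-10, +30]
    db.places.within(-10, 30, 5000).toArray();    // all documents within a 5000 meter radius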

View File

@ -8,20 +8,20 @@ queries, which you can use within the ArangoDB shell and within actions and
transactions. For other languages see the corresponding language API
documentation.
If a query returns a cursor, then you can use `hasNext` and `next` to
iterate over the result set or `toArray` to convert it to an array.
If a query returns a cursor, then you can use *hasNext* and *next* to
iterate over the result set or *toArray* to convert it to an array.
If the number of query results is expected to be large, it is possible to
limit the number of documents transferred between the server and the client
to a specific value. This value is called `batchSize`. The `batchSize`
to a specific value. This value is called *batchSize*. The *batchSize*
can optionally be set before or when a simple query is executed.
If the server has more documents than should be returned in a single batch,
the server will set the `hasMore` attribute in the result. It will also
return the id of the server-side cursor in the `id` attribute in the result.
the server will set the *hasMore* attribute in the result. It will also
return the id of the server-side cursor in the *id* attribute in the result.
This id can be used with the cursor API to fetch any outstanding results from
the server and dispose the server-side cursor afterwards.
The initial `batchSize` value can be set using the `setBatchSize`
The initial *batchSize* value can be set using the *setBatchSize*
method that is available for each type of simple query, or when the simple
query is executed using its `execute` method. If no `batchSize` value
query is executed using its *execute* method. If no *batchSize* value
is specified, the server will pick a reasonable default value.
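Putting these pieces together, here is a sketch of iterating over a simple
query with an explicit batch size (the collection name is an assumption):
    var query = db.users.all();
    query.setBatchSize(100);   // transfer at most 100 documents per round-trip
    query.execute();
    while (query.hasNext()) {
      print(query.next());
    }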

View File

@ -5,30 +5,30 @@ or a commit. On rollback, no data will be written to disk, but the operations
from the transaction will be reversed in memory.
On commit, all modifications done in the transaction will be written to the
collection datafiles. These writes will be synchronised to disk if any of the
modified collections has the `waitForSync` property set to `true`, or if any
individual operation in the transaction was executed with the `waitForSync`
collection datafiles. These writes will be synchronized to disk if any of the
modified collections has the *waitForSync* property set to *true*, or if any
individual operation in the transaction was executed with the *waitForSync*
attribute.
Additionally, transactions that modify data in more than one collection are
automatically synchronised to disk. This synchronisation is done to not only
automatically synchronized to disk. This synchronization is done to not only
ensure durability, but to also ensure consistency in case of a server crash.
That means if you only modify data in a single collection, and that collection
has its `waitForSync` property set to `false`, the whole transaction will not
be synchronised to disk instantly, but with a small delay.
has its *waitForSync* property set to *false*, the whole transaction will not
be synchronized to disk instantly, but with a small delay.
There is thus the potential risk of losing data between the commit of the
transaction and the actual (delayed) disk synchronisation. This is the same as
writing into collections that have the `waitForSync` property set to `false`
transaction and the actual (delayed) disk synchronization. This is the same as
writing into collections that have the *waitForSync* property set to *false*
outside of a transaction.
In case of a crash with `waitForSync` set to false, the operations performed in
In case of a crash with *waitForSync* set to *false*, the operations performed in
the transaction will either be visible completely or not at all, depending on
whether the delayed synchronisation had kicked in or not.
whether the delayed synchronization had kicked in or not.
To ensure durability of transactions on a collection that have the `waitForSync`
property set to `false`, you can set the `waitForSync` attribute of the object
that is passed to `executeTransaction`. This will force a synchronisation of the
transaction to disk even for collections that have `waitForSync´ set to `false`:
To ensure durability of transactions on a collection that has the *waitForSync*
property set to *false*, you can set the *waitForSync* attribute of the object
that is passed to *executeTransaction*. This will force a synchronization of the
transaction to disk even for collections that have *waitForSync* set to *false*:
db._executeTransaction({
collections: {
@ -39,24 +39,24 @@ transaction to disk even for collections that have `waitForSync´ set to `false`
});
An alternative is to perform an operation with an explicit `sync` request in
An alternative is to perform an operation with an explicit *sync* request in
a transaction, e.g.
db.users.save({ _key: "1234" }, true);
In this case, the `true` value will make the whole transaction be synchronised
In this case, the *true* value will make the whole transaction be synchronized
to disk at the commit.
In any case, ArangoDB will give users the choice of whether or not they want
full durability for single collection transactions. Using the delayed synchronisation
(i.e. `waitForSync` with a value of `false`) will potentially increase throughput
full durability for single collection transactions. Using the delayed synchronization
(i.e. *waitForSync* with a value of *false*) will potentially increase throughput
and performance of transactions, but will introduce the risk of losing the last
committed transactions in the case of a crash.
In contrast, transactions that modify data in more than one collection are
automatically synchronised to disk. This comes at the cost of several disk sync
For a multi-collection transaction, the call to the `_executeTransaction` function
will only return only after the data of all modified collections has been synchronised
automatically synchronized to disk. This comes at the cost of several disk sync
operations. For a multi-collection transaction, the call to the *_executeTransaction* function
will only return after the data of all modified collections has been synchronized
to disk and the transaction has been made fully durable. This not only reduces the
risk of losing data in case of a crash but also ensures consistency after a
restart.
@ -67,7 +67,7 @@ committed or in preparation to be committed will be rolled back on server restar
For multi-collection transactions, there will be at least one disk sync operation
per modified collection. Multi-collection transactions thus have a potentially higher
cost than single collection transactions. There is no configuration to turn off disk
synchronisation for multi-collection transactions in ArangoDB.
synchronization for multi-collection transactions in ArangoDB.
The disk sync speed of the system will thus be the most important factor for the
performance of multi-collection transactions.
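To illustrate the choice for single-collection transactions, the *waitForSync*
collection property can be toggled from arangosh. A sketch, with an assumed
collection *users*:
    db.users.properties({ waitForSync: true });   // commits on this collection are synchronized to disk
    db.users.properties({ waitForSync: false });  // commits may be synchronized with a small delay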

View File

@ -17,8 +17,8 @@ into one transaction.
Additionally, transactions in ArangoDB cannot be nested, i.e. a transaction
must not call any other transaction. If an attempt is made to call a transaction
from inside a running transaction, the server will throw error `1651 (nested
transactions detected)`.
from inside a running transaction, the server will throw error *1651 (nested
transactions detected)*.
It is also disallowed to execute user transactions on some of ArangoDB's own system
collections. This shouldn't be a problem for regular usage as system collections will
@ -26,11 +26,11 @@ not contain user data and there is no need to access them from within a user
transaction.
Finally, all collections that may be modified during a transaction must be
declared beforehand, i.e. using the `collections` attribute of the object passed
to the `_executeTransaction` function. If any attempt is made to carry out a data
modification operation on a collection that was not declared in the `collections`
attribute, the transaction will be aborted and ArangoDB will throw error `1652
unregistered collection used in transaction`.
declared beforehand, i.e. using the *collections* attribute of the object passed
to the *_executeTransaction* function. If any attempt is made to carry out a data
modification operation on a collection that was not declared in the *collections*
attribute, the transaction will be aborted and ArangoDB will throw error *1652
unregistered collection used in transaction*.
It is legal to not declare read-only collections, but this should be avoided if
possible to reduce the probability of deadlocks and non-repeatable reads.
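A minimal sketch of a transaction violating this rule; the collection names
are assumptions:
    db._executeTransaction({
      collections: {
        write: "c1"
      },
      action: function () {
        var db = require("internal").db;
        db.c1.save({ value: 1 });   // allowed, c1 was declared
        db.c2.save({ value: 2 });   // aborts the transaction with error 1652
      }
    });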

View File

@ -1,6 +1,6 @@
!CHAPTER Locking and Isolation
All collections specified in the `collections` attribute are locked in the
All collections specified in the *collections* attribute are locked in the
requested mode (read or write) at transaction start. Locking of multiple collections
is performed in alphabetical order.
When a transaction commits or rolls back, all locks are released in reverse order.
@ -16,7 +16,7 @@ isolation. A transaction should never see uncommitted or rolled back modificatio
other transactions. Additionally, reads inside a transaction are repeatable.
Note that the above is true only for all collections that are declared in the
`collections` attribute of the transaction.
*collections* attribute of the transaction.
There might be situations when declaring all collections a priori is not possible,
for example, because further collections are determined by a dynamic AQL query
@ -53,15 +53,15 @@ that try to acquire locks on the same collections lazily.
To recover from a deadlock state, ArangoDB will give up waiting for a collection
after a configurable amount of time. The wait time can be specified per transaction
using the optional`lockTimeout`attribute. If no value is specified, some default
using the optional *lockTimeout* attribute. If no value is specified, some default
value will be applied.
If ArangoDB was waited at least `lockTimeout` seconds during lock acquisition, it
will give up and rollback the transaction. Note that the `lockTimeout` is used per
If ArangoDB has waited at least *lockTimeout* seconds during lock acquisition, it
will give up and roll back the transaction. Note that the *lockTimeout* is used per
lock acquisition in a transaction, and not just once per transaction. There will be
at least as many lock acquisition attempts as there are collections used in the
transaction. The total lock wait time may thus be much higher than the value of
`lockTimeout`.
*lockTimeout*.
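A sketch of a transaction that caps each lock acquisition wait at five seconds
(the collection name is an assumption):
    db._executeTransaction({
      collections: {
        write: "users"
      },
      lockTimeout: 5,   // maximum wait in seconds per lock acquisition
      action: function () {
        // ...
      }
    });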
To avoid both deadlocks and non-repeatable reads, all collections used in a

View File

@ -1,6 +1,6 @@
!CHAPTER Passing parameters to transactions
Arbitrary parameters can be passed to transactions by setting the `params`
Arbitrary parameters can be passed to transactions by setting the *params*
attribute when declaring the transaction. This feature is handy to re-use the
same transaction code for multiple calls but with different parameters.
@ -12,7 +12,7 @@ A basic example:
params: [ 1, 2, 3 ]
});
The above example will return `1`.
The above example will return *1*.
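A complete version of such a basic example might look like the following
sketch; the *action* function receives the value of *params* as its argument:
    db._executeTransaction({
      collections: { },
      action: function (params) {
        return params[0];   // returns 1
      },
      params: [ 1, 2, 3 ]
    });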
An example that uses collections:
@ -31,10 +31,10 @@ Some example that uses collections:
!SUBSECTION Disallowed operations
Some operations are not allowed inside ArangoDB transactions:
- creation and deletion of collections (`db._create()`, `db._drop()`, `db._rename()`)
- creation and deletion of indexes (`db.ensure...Index()`, `db.dropIndex()`)
- creation and deletion of collections (*db._create()*, *db._drop()*, *db._rename()*)
- creation and deletion of indexes (*db.ensure...Index()*, *db.dropIndex()*)
If an attempt is made to carry out any of these operations during a transaction,
ArangoDB will abort the transaction with error code `1653 (disallowed operation inside
transaction)`.
ArangoDB will abort the transaction with error code *1653 (disallowed operation inside
transaction)*.
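For instance, the following sketch would be aborted with error 1653:
    db._executeTransaction({
      collections: { },
      action: function () {
        var db = require("internal").db;
        db._create("temp");   // creating a collection is disallowed inside a transaction
      }
    });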

View File

@ -2,10 +2,10 @@
ArangoDB transactions are different from transactions in SQL.
In SQL, transactions are started with explicit `BEGIN` or `START TRANSACTION`
In SQL, transactions are started with explicit *BEGIN* or *START TRANSACTION*
command. Following any series of data retrieval or modification operations, an
SQL transaction is finished with a `COMMIT` command, or rolled back with a
`ROLLBACK` command. There may be client/server communication between the start
SQL transaction is finished with a *COMMIT* command, or rolled back with a
*ROLLBACK* command. There may be client/server communication between the start
and the commit/rollback of an SQL transaction.
In ArangoDB, a transaction is always a server-side operation, and is executed
@ -13,9 +13,9 @@ on the server in one go, without any client interaction. All operations to be
executed inside a transaction need to be known by the server when the transaction
is started.
There are no individual `BEGIN`, `COMMIT` or `ROLLBACK` transaction commands
There are no individual *BEGIN*, *COMMIT* or *ROLLBACK* transaction commands
in ArangoDB. Instead, a transaction in ArangoDB is started by providing a
description of the transaction to the `db._executeTransaction` Javascript
description of the transaction to the *db._executeTransaction* Javascript
function:
db._executeTransaction(description);
@ -41,9 +41,9 @@ Contrary, using a collection in read-only mode will only allow performing
read operations on a collection. Any attempt to write into a collection used
in read-only mode will make the transaction fail.
Collections for a transaction are declared by providing them in the `collections`
attribute of the object passed to the `_executeTransaction` function. The
`collections` attribute has the sub-attributes `read` and `write`:
Collections for a transaction are declared by providing them in the *collections*
attribute of the object passed to the *_executeTransaction* function. The
*collections* attribute has the sub-attributes *read* and *write*:
db._executeTransaction({
collections: {
@ -53,10 +53,10 @@ attribute of the object passed to the `_executeTransaction` function. The
...
});
`read` and `write` are optional attributes, and only need to be specified if
*read* and *write* are optional attributes, and only need to be specified if
the operations inside the transaction demand it.
The contents of `read` or `write` can each be lists with collection names or a
The contents of *read* or *write* can each be lists with collection names or a
single collection name (as a string):
db._executeTransaction({
@ -76,7 +76,7 @@ from within a transaction, but with relaxed isolation. Please refer to
!SUBSECTION Declaration of data modification and retrieval operations
All data modification and retrieval operations that are to be executed inside
the transaction need to be specified in a Javascript function, using the `action`
the transaction need to be specified in a Javascript function, using the *action*
attribute:
db._executeTransaction({
@ -88,9 +88,9 @@ attribute:
}
});
Any valid Javascript code is allowed inside `action` but the code may only
access the collections declared in `collections`.
`action` may be a Javascript function as shown above, or a string representation
Any valid Javascript code is allowed inside *action* but the code may only
access the collections declared in *collections*.
*action* may be a Javascript function as shown above, or a string representation
of a Javascript function:
db._executeTransaction({
@ -100,11 +100,11 @@ of a Javascript function:
action: "function () { doSomething(); }"
});
Please note that any operations specified in `action` will be executed on the
Please note that any operations specified in *action* will be executed on the
server, in a separate scope. Variables will be bound late. Accessing any Javascript
variables defined on the client-side or in some other server context from inside
a transaction may not work.
Instead, any variables used inside `action` should be defined inside `action` itself:
Instead, any variables used inside *action* should be defined inside *action* itself:
db._executeTransaction({
collections: {
@ -116,9 +116,9 @@ Instead, any variables used inside `action` should be defined inside `action` it
}
});
When the code inside the `action` attribute is executed, the transaction is
When the code inside the *action* attribute is executed, the transaction is
already started and all required locks have been acquired. When the code inside
the `action` attribute finishes, the transaction will automatically commit.
the *action* attribute finishes, the transaction will automatically commit.
There is no explicit commit command.
To make a transaction abort and roll back all changes, an exception needs to
@ -140,7 +140,7 @@ be thrown and not caught inside the transaction:
There is no explicit abort or roll back command.
As mentioned earlier, a transaction will commit automatically when the end of
the `action` function is reached and no exception has been thrown. In this
the *action* function is reached and no exception has been thrown. In this
case, the user can return any legal Javascript value from the function:
db._executeTransaction({
@ -158,11 +158,11 @@ case, the user can return any legal Javascript value from the function:
!SUBSECTION Examples
The first example will write 3 documents into a collection named `c1`.
The `c1` collection needs to be declared in the `write` attribute of the
`collections` attribute passed to the `executeTransaction` function.
The first example will write 3 documents into a collection named *c1*.
The *c1* collection needs to be declared in the *write* attribute of the
*collections* attribute passed to the *executeTransaction* function.
The `action` attribute contains the actual transaction code to be executed.
The *action* attribute contains the actual transaction code to be executed.
This code contains all data modification operations (3 in this example).
// setup
@ -183,7 +183,7 @@ This code contains all data modification operations (3 in this example).
db.c1.count(); // 3
Aborting the transaction by throwing an exception in the `action` function
Aborting the transaction by throwing an exception in the *action* function
will revert all changes, so as if the transaction never happened:
// setup
@ -271,7 +271,7 @@ start. The following example using a cap constraint should illustrate that:
!SUBSECTION Cross-collection transactions
There's also the possibility to run a transaction across multiple collections.
In this case, multiple collections need to be declared in the `collections`
In this case, multiple collections need to be declared in the *collections*
attribute, e.g.:
// setup
@ -293,7 +293,7 @@ attribute, e.g.:
db.c2.count(); // 1
Again, throwing an exception from inside the `action` function will make the
Again, throwing an exception from inside the *action* function will make the
transaction abort and roll back all changes in all collections:
// setup

View File

@ -54,7 +54,7 @@ application can be installed multiple times using different mount points.
!SECTION Graphs Tab
The *Graphs* tab provides a viewer facility for graph data stored in ArangoDB. It
allows browsing ArangoDB graphs stored in the `_graphs` system collection or a
allows browsing ArangoDB graphs stored in the *_graphs* system collection or a
graph consisting of an arbitrary vertex and edge collection.
Please note that the graph viewer requires client-side SVG and that you need a
@ -80,7 +80,7 @@ database server.
Any valid JavaScript code can be executed inside the shell. The code will be
executed inside your browser. To contact the ArangoDB server you can use the
`db` object, for example as follows:
*db* object, for example as follows:
JSH> db._create("mycollection");
JSH> db.mycollection.save({ _key: "test", value: "something" });

View File

@ -14,13 +14,13 @@ application instead. In this case use
(note: _aardvark_ is the web interface's internal name).
If no database name is specified in the URL, you will in most cases get
routed to the web interface for the `_system` database. To access the web
routed to the web interface for the *_system* database. To access the web
interface for any other ArangoDB database, put the database name into the
request URI path as follows:
http://localhost:8529/_db/mydb/
The above will load the web interface for the database `mydb`.
The above will load the web interface for the database *mydb*.
To restrict access to the web interface, use
[ArangoDB's authentication feature](../GeneralHttp/README.html#Authentication).

View File

@ -38,7 +38,7 @@ def fetch_comments(dirpath):
for comment in file_comments:
fh.write("\n<!-- filename: %s -->\n" % filename)
for _com in comment:
_text = _com.replace("/", "")
_text = _com.replace("///", "")
if len(_text.strip()) == 0:
_text = _text.replace("\n", "<br />")
_text = _text.strip()

View File

@ -170,7 +170,8 @@ noinst_LIBRARIES = \
lib/libarango.a \
lib/libarango_v8.a \
lib/libarango_fe.a \
lib/libarango_client.a
lib/libarango_client.a \
arangod/libarangod.a
if ENABLE_MRUBY
noinst_LIBRARIES += lib/libarango_mruby.a

View File

@ -721,6 +721,9 @@ TRI_associative_pointer_t* TRI_CreateFunctionsAql (void) {
REGISTER_FUNCTION("GRAPH_ECCENTRICITY", "GENERAL_GRAPH_ECCENTRICITY", false, false, "s|a", NULL);
REGISTER_FUNCTION("GRAPH_BETWEENNESS", "GENERAL_GRAPH_BETWEENNESS", false, false, "s|a", NULL);
REGISTER_FUNCTION("GRAPH_CLOSENESS", "GENERAL_GRAPH_CLOSENESS", false, false, "s|a", NULL);
REGISTER_FUNCTION("GRAPH_ABSOLUTE_ECCENTRICITY", "GENERAL_GRAPH_ABSOLUTE_ECCENTRICITY", false, false, "s,als|a", NULL);
REGISTER_FUNCTION("GRAPH_ABSOLUTE_BETWEENNESS", "GENERAL_GRAPH_ABSOLUTE_BETWEENNESS", false, false, "s,als|a", NULL);
REGISTER_FUNCTION("GRAPH_ABSOLUTE_CLOSENESS", "GENERAL_GRAPH_ABSOLUTE_CLOSENESS", false, false, "s,als|a", NULL);
REGISTER_FUNCTION("GRAPH_DIAMETER", "GENERAL_GRAPH_DIAMETER", false, false, "s|a", NULL);
REGISTER_FUNCTION("GRAPH_RADIUS", "GENERAL_GRAPH_RADIUS", false, false, "s|a", NULL);

View File

@ -157,7 +157,7 @@ static int HashIndexHelper (TRI_hash_index_t const* hashIndex,
acc = TRI_FindAccessorVocShaper(shaper, shapedJson._sid, path);
// field not part of the object
if (acc == NULL || acc->_shape == NULL) {
if (acc == NULL || acc->_resultSid == 0) {
shapedSub._sid = TRI_LookupBasicSidShaper(TRI_SHAPE_NULL);
shapedSub._length = 0;
shapedSub._offset = 0;

View File

@ -1,25 +1,18 @@
# -*- mode: Makefile; -*-
################################################################################
## --SECTION-- PROGRAM
################################################################################
## -----------------------------------------------------------------------------
## --SECTION-- LIBRARY
## -----------------------------------------------------------------------------
################################################################################
### @brief program "arangod"
### @brief library "libarangod.a"
################################################################################
bin_arangod_CPPFLAGS = \
arangod_libarangod_a_CPPFLAGS = \
-I@top_srcdir@/arangod \
$(AM_CPPFLAGS)
bin_arangod_LDADD = \
lib/libarango_fe.a \
lib/libarango_v8.a \
lib/libarango.a \
$(LIBS) \
@V8_LIBS@
bin_arangod_SOURCES = \
arangod_libarangod_a_SOURCES = \
arangod/Actions/actions.cpp \
arangod/Actions/RestActionHandler.cpp \
arangod/Ahuacatl/ahuacatl-access-optimiser.c \
@ -127,7 +120,7 @@ bin_arangod_SOURCES = \
if ENABLE_CLUSTER
bin_arangod_SOURCES += \
arangod_libarangod_a_SOURCES += \
arangod/Cluster/AgencyComm.cpp \
arangod/Cluster/ApplicationCluster.cpp \
arangod/Cluster/ClusterComm.cpp \
@ -141,15 +134,43 @@ bin_arangod_SOURCES += \
endif
if ENABLE_MRUBY
arangod_libarangod_a_SOURCES += \
arangod/MRServer/ApplicationMR.cpp \
arangod/MRServer/mr-actions.cpp
endif
## -----------------------------------------------------------------------------
## --SECTION-- PROGRAM
## -----------------------------------------------------------------------------
################################################################################
### @brief program "arangod"
################################################################################
bin_arangod_CPPFLAGS = \
-I@top_srcdir@/arangod \
$(AM_CPPFLAGS)
bin_arangod_LDADD = \
arangod/libarangod.a \
lib/libarango_fe.a \
lib/libarango_v8.a \
lib/libarango.a \
$(LIBS) \
@V8_LIBS@
bin_arangod_SOURCES = \
arangod/RestServer/arango.cpp
if ENABLE_MRUBY
bin_arangod_LDADD += \
lib/libarango_mruby.a \
@MRUBY_LIBS@
bin_arangod_SOURCES += \
arangod/MRServer/ApplicationMR.cpp \
arangod/MRServer/mr-actions.cpp
endif
################################################################################

View File

@ -1361,6 +1361,8 @@ bool RestDocumentHandler::modifyDocument (bool isPatch) {
const string cidString = StringUtils::itoa(primary->base._info._planId);
#endif
Barrier barrier(primary);
if (isPatch) {
// patching an existing document
bool nullMeansRemove;

View File

@ -33,6 +33,7 @@
#include "Rest/HttpRequest.h"
#include "VocBase/document-collection.h"
#include "VocBase/edge-collection.h"
#include "Utils/Barrier.h"
#ifdef TRI_ENABLE_CLUSTER
#include "Cluster/ServerState.h"
@ -252,8 +253,10 @@ bool RestEdgeHandler::createDocument () {
TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, json);
return false;
}
if (trx.primaryCollection()->base._info._type != TRI_COL_TYPE_EDGE) {
TRI_primary_collection_t* primary = trx.primaryCollection();
if (primary->base._info._type != TRI_COL_TYPE_EDGE) {
// check if we are inserting with the EDGE handler into a non-EDGE collection
generateError(HttpResponse::BAD, TRI_ERROR_ARANGO_COLLECTION_TYPE_INVALID);
TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, json);
@ -306,6 +309,8 @@ bool RestEdgeHandler::createDocument () {
// .............................................................................
// inside write transaction
// .............................................................................
Barrier barrier(primary);
// will hold the result
TRI_doc_mptr_t document;

View File

@ -2081,6 +2081,8 @@ static v8::Handle<v8::Value> ExistsVocbaseCol (bool useCollection,
TRI_V8_EXCEPTION(scope, res);
}
Barrier barrier(trx.primaryCollection());
v8::Handle<v8::Value> result;
TRI_doc_mptr_t document;
res = trx.read(&document, key);
@ -2304,6 +2306,7 @@ static v8::Handle<v8::Value> ReplaceVocbaseCol (bool useCollection,
TRI_memory_zone_t* zone = primary->_shaper->_memoryZone;
TRI_doc_mptr_t document;
Barrier barrier(primary);
// we must lock here, because below we are
// - reading the old document in coordinator case
@ -2359,8 +2362,6 @@ static v8::Handle<v8::Value> ReplaceVocbaseCol (bool useCollection,
TRI_V8_EXCEPTION_MESSAGE(scope, TRI_errno(), "<data> cannot be converted into JSON shape");
}
Barrier barrier(primary);
res = trx.updateDocument(key, &document, shaped, policy, options.waitForSync, rid, &actualRevision);
res = trx.finish(res);
@ -2551,11 +2552,11 @@ static v8::Handle<v8::Value> SaveEdgeCol (
TRI_V8_EXCEPTION_MESSAGE(scope, TRI_errno(), "<data> cannot be converted into JSON shape");
}
Barrier barrier(primary);
TRI_doc_mptr_t document;
res = trx->createEdge(key, &document, shaped, forceSync, &edge);
Barrier barrier(primary);
res = trx->finish(res);
TRI_FreeShapedJson(zone, shaped);
@ -2710,7 +2711,7 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (bool useCollection,
// we must use a write-lock that spans both the initial read and the update.
// otherwise the operation is not atomic
trx.lockWrite();
TRI_doc_mptr_t document;
res = trx.read(&document, key);
@ -2721,6 +2722,7 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (bool useCollection,
}
TRI_primary_collection_t* primary = trx.primaryCollection();
Barrier barrier(primary);
TRI_memory_zone_t* zone = primary->_shaper->_memoryZone;
TRI_shaped_json_t shaped;
@ -2759,7 +2761,6 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (bool useCollection,
res = trx.updateDocument(key, &document, patchedJson, policy, options.waitForSync, rid, &actualRevision);
Barrier barrier(primary);
res = trx.finish(res);
TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, patchedJson);
@ -10071,7 +10072,7 @@ static v8::Handle<v8::Integer> PropertyQueryShapedJson (v8::Local<v8::String> na
TRI_shape_access_t const* acc = TRI_FindAccessorVocShaper(shaper, sid, pid);
// key not found
if (acc == 0 || acc->_shape == 0) {
if (acc == 0 || acc->_resultSid == 0) {
return scope.Close(v8::Handle<v8::Integer>());
}

View File

@ -1195,7 +1195,7 @@ static int SkiplistIndexHelper (const TRI_skiplist_index_t* skiplistIndex,
TRI_shape_access_t const* acc = TRI_FindAccessorVocShaper(skiplistIndex->base._collection->_shaper, shapedJson._sid, shape);
if (acc == NULL || acc->_shape == NULL) {
if (acc == NULL || acc->_resultSid == 0) {
return TRI_ERROR_ARANGO_INDEX_DOCUMENT_ATTRIBUTE_MISSING;
}
@ -2035,7 +2035,7 @@ static int BitarrayIndexHelper(const TRI_bitarray_index_t* baIndex,
acc = TRI_FindAccessorVocShaper(baIndex->base._collection->_shaper, shapedDoc->_sid, shape);
if (acc == NULL || acc->_shape == NULL) {
if (acc == NULL || acc->_resultSid == 0) {
return TRI_ERROR_ARANGO_INDEX_BITARRAY_UPDATE_ATTRIBUTE_MISSING;
}
@ -2078,7 +2078,7 @@ static int BitarrayIndexHelper(const TRI_bitarray_index_t* baIndex,
acc = TRI_FindAccessorVocShaper(baIndex->base._collection->_shaper, shapedJson._sid, shape);
if (acc == NULL || acc->_shape == NULL) {
if (acc == NULL || acc->_resultSid == 0) {
return TRI_ERROR_ARANGO_INDEX_DOCUMENT_ATTRIBUTE_MISSING;
}

View File

@ -908,24 +908,34 @@ bool TRI_ExtractShapedJsonVocShaper (TRI_shaper_t* shaper,
return false;
}
*shape = accessor->_shape;
if (accessor->_shape == NULL) {
LOG_TRACE("expecting any object for path %lu, got nothing",
(unsigned long) pid);
return sid == 0;
}
if (sid != 0 && sid != accessor->_shape->_sid) {
if (sid != 0 && sid != accessor->_resultSid) {
LOG_TRACE("expecting sid %lu for path %lu, got sid %lu",
(unsigned long) sid,
(unsigned long) pid,
(unsigned long) accessor->_shape->_sid);
(unsigned long) accessor->_resultSid);
return false;
}
if (accessor->_resultSid == 0) {
LOG_TRACE("expecting any object for path %lu, got nothing",
(unsigned long) pid);
return false;
}
*shape = shaper->lookupShapeId(shaper, accessor->_resultSid);
if (*shape == nullptr) {
LOG_TRACE("expecting any object for path %lu, got unknown shape id %lu",
(unsigned long) pid,
(unsigned long) accessor->_resultSid);
return sid == 0;
return false;
}
ok = TRI_ExecuteShapeAccessor(accessor, document, result);
if (! ok) {

View File

@ -1066,6 +1066,14 @@ function dijkstraSearch () {
var weight = 1;
if (config.distance) {
weight = config.distance(config, currentNode.vertex, neighbor.vertex, edge);
} else if (config.weight) {
if (typeof edge[config.weight] === "number") {
weight = edge[config.weight];
} else if (config.defaultWeight) {
weight = config.defaultWeight;
} else {
weight = Infinity;
}
}
var alt = dist + weight;
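The new branch reads the edge weight from a configurable edge attribute. A
hypothetical fragment of a traversal configuration using it (the attribute
name is an assumption):
    var config = {
      weight: "distance",   // read edge.distance as the edge weight
      defaultWeight: 1      // used when the attribute is missing or not a number
    };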

View File

@ -368,14 +368,16 @@ AQLGenerator.prototype._edges = function(edgeExample, options) {
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_general_graph_fluent_aql_edges
/// Select all edges for the vertices selected before
/// Select all edges for the vertices selected before.
///
/// `graph-query.edges(examples)`
///
/// Creates an AQL statement to select all edges for each of the vertices selected
/// in the step before.
/// This will include `inbound` as well as `outbound` edges.
/// The resulting set of edges can be filtered by defining one or more `examples`.
/// This will include *inbound* as well as *outbound* edges.
/// The resulting set of edges can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all edges are valid.
/// * A string, only the edge having this value as its id is returned.
@ -423,15 +425,15 @@ AQLGenerator.prototype.edges = function(example) {
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_general_graph_fluent_aql_outEdges
/// Select all outbound edges for the vertices selected before
/// Select all outbound edges for the vertices selected before.
///
/// `graph-query.outEdges(examples)`
///
/// Creates an AQL statement to select all `outbound` edges for each of the vertices selected
/// Creates an AQL statement to select all *outbound* edges for each of the vertices selected
/// in the step before.
/// The resulting set of edges can be filtered by defining one or more `examples`.
/// The resulting set of edges can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all edges are valid.
/// * A string, only the edge having this value as its id is returned.
@ -479,15 +481,15 @@ AQLGenerator.prototype.outEdges = function(example) {
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_general_graph_fluent_aql_inEdges
/// Select all inbound edges for the vertices selected before
/// Select all inbound edges for the vertices selected before.
///
/// `graph-query.inEdges(examples)`
///
/// Creates an AQL statement to select all `inbound` edges for each of the vertices selected
/// Creates an AQL statement to select all *inbound* edges for each of the vertices selected
/// in the step before.
/// The resulting set of edges can be filtered by defining one or more `examples`.
/// The resulting set of edges can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all edges are valid.
/// * A string, only the edge having this value as its id is returned.
@ -564,16 +566,16 @@ AQLGenerator.prototype._vertices = function(example, options) {
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_general_graph_fluent_aql_vertices
/// Select all vertices connected to the edges selected before
/// Select all vertices connected to the edges selected before.
///
/// `graph-query.vertices(examples)`
///
/// Creates an AQL statement to select all vertices for each of the edges selected
/// in the step before.
/// This includes all vertices contained in `_from` as well as `_to` attribute of the edges.
/// The resulting set of vertices can be filtered by defining one or more `examples`.
/// This includes all vertices contained in the *_from* as well as the *_to* attribute of the edges.
/// The resulting set of vertices can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all vertices are valid.
/// * A string, only the vertex having this value as its id is returned.
@ -634,16 +636,16 @@ AQLGenerator.prototype.vertices = function(example) {
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_general_graph_fluent_aql_fromVertices
/// Select all vertices where the edges selected before start
/// Select all vertices where the edges selected before start.
///
/// `graph-query.vertices(examples)`}
/// `graph-query.fromVertices(examples)`
///
/// Creates an AQL statement to select the set of vertices where the edges selected
/// in the step before start at.
/// This includes all vertices contained in `_from` attribute of the edges.
/// The resulting set of vertices can be filtered by defining one or more `examples`.
/// This includes all vertices contained in the *_from* attribute of the edges.
/// The resulting set of vertices can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all vertices are valid.
/// * A string, only the vertex having this value as its id is returned.
@ -702,16 +704,16 @@ AQLGenerator.prototype.fromVertices = function(example) {
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_general_graph_fluent_aql_toVertices
/// Select all vertices targeted by the edges selected before
/// Select all vertices targeted by the edges selected before.
///
/// `graph-query.toVertices(examples)`
///
/// Creates an AQL statement to select the set of vertices where the edges selected
/// in the step before end in.
/// This includes all vertices contained in `_to` attribute of the edges.
/// The resulting set of vertices can be filtered by defining one or more `examples`.
/// This includes all vertices contained in the *_to* attribute of the edges.
/// The resulting set of vertices can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all vertices are valid.
/// * A string, only the vertex having this value as its id is returned.
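A hedged sketch contrasting the *fromVertices* and *toVertices* steps; graph name and example values are illustrative:

    var graph = require("org/arangodb/general-graph")._graph("social");  // assumed to exist
    // start vertices (_from) of all "married" edges
    graph._edges({ type: "married" }).fromVertices().toArray();
    // target vertices (_to) of the same edge selection
    graph._edges({ type: "married" }).toVertices().toArray();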
@ -832,9 +834,9 @@ AQLGenerator.prototype.path = function() {
///
/// Creates an AQL statement to select all neighbors for each of the vertices selected
/// in the step before.
/// The resulting set of vertices can be filtered by defining one or more `examples`.
/// The resulting set of vertices can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all vertices are valid.
/// * A string, only the vertex having this value as its id is returned.
@ -925,14 +927,16 @@ AQLGenerator.prototype._getLastRestrictableStatementInfo = function() {
/// Restricts the last statement in the chain to return
/// only elements of a specified set of collections
///
/// `graph-query.restrict(restrictions)`
///
/// By default all collections in the graph are searched for matching elements
/// whenever vertices and edges are requested.
/// Using `restrict` after such a statement allows to restrict the search
/// Using *restrict* after such a statement allows restricting the search
/// to a specific set of collections within the graph.
/// Restriction is only applied to this one part of the query.
/// It does not affect earlier or later statements.
///
/// `restrictions` can have the following values:
/// *restrictions* can have the following values:
///
/// * A string defining the name of one specific collection in the graph.
/// Only elements from this collection are used for matching
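A small sketch of *restrict*, assuming the graph contains an edge collection named "relation":

    var graph = require("org/arangodb/general-graph")._graph("social");  // assumed to exist
    // only search the "relation" collection for Alice's edges;
    // earlier and later steps of the chain are unaffected
    graph._vertices({ name: "Alice" }).edges().restrict("relation").toArray();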
@ -1003,10 +1007,12 @@ AQLGenerator.prototype.restrict = function(restrictions) {
/// @startDocuBlock JSF_general_graph_fluent_aql_filter
/// Filter the result of the query
///
/// `graph-query.filter(examples)`
///
/// This can be used to further specify the expected result of the query.
/// The result set is reduced to the set of elements that matches the given `examples`.
/// The result set is reduced to the set of elements that match the given *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * A string, only the element having this value as its id is returned.
/// * An example object, defining a set of attributes.
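A minimal *filter* sketch; the attribute values are illustrative:

    var graph = require("org/arangodb/general-graph")._graph("social");  // assumed to exist
    // reduce the previously selected edges to those matching the example
    graph._vertices({ name: "Alice" }).edges().filter({ type: "married" }).toArray();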
@ -1094,6 +1100,8 @@ AQLGenerator.prototype.execute = function() {
/// @startDocuBlock JSF_general_graph_fluent_aql_toArray
/// Returns an array containing the complete result.
///
/// `graph-query.toArray()`
///
/// This function executes the generated query and returns the
/// entire result as one array.
/// *toArray* does not return the generated query anymore and
@ -1124,11 +1132,13 @@ AQLGenerator.prototype.toArray = function() {
/// @startDocuBlock JSF_general_graph_fluent_aql_count
/// Returns the number of returned elements if the query is executed.
///
/// `graph-query.count()`
///
/// This function determines the number of elements to be expected in the result of the query.
/// It can be used at the beginning of execution of the query
/// before using `next()` or in between `next()` calls.
/// before using *next()* or in between *next()* calls.
/// The query object maintains a cursor of the query for you.
/// `count()` does not change the cursor position.
/// *count()* does not change the cursor position.
///
/// @EXAMPLES
///
@ -1153,11 +1163,13 @@ AQLGenerator.prototype.count = function() {
/// @startDocuBlock JSF_general_graph_fluent_aql_hasNext
/// Checks if the query has further results.
///
/// `graph-query.hasNext()`
///
/// The generated statement maintains a cursor for you.
/// If this cursor is already present `hasNext()` will
/// If this cursor is already present *hasNext()* will
/// use this cursor's position to determine if there are
/// further results available.
/// If the query has not yet been executed `hasNext()`
/// If the query has not yet been executed *hasNext()*
/// will execute it and create the cursor for you.
///
/// @EXAMPLES
@ -1192,13 +1204,15 @@ AQLGenerator.prototype.hasNext = function() {
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_general_graph_fluent_aql_next
/// Request the next element in the result
/// Request the next element in the result.
///
/// `graph-query.next()`
///
/// The generated statement maintains a cursor for you.
/// If this cursor is already present `next()` will
/// If this cursor is already present *next()* will
/// use this cursor's position to deliver the next result.
/// The cursor position will also be moved forward by one.
/// If the query has not yet been executed `next()`
/// If the query has not yet been executed *next()*
/// will execute it and create the cursor for you.
/// It will throw an error if your query has no further results.
///
@ -1216,7 +1230,7 @@ AQLGenerator.prototype.hasNext = function() {
/// query.next();
/// @END_EXAMPLE_ARANGOSH_OUTPUT
///
/// The cursor is recreated if the query is changed.
/// The cursor is recreated if the query is changed:
///
/// @EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLToArray}
/// var examples = require("org/arangodb/graph-examples/example-graph.js");
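A hedged sketch of the cursor behavior described above (*count* leaves the cursor in place, *next* advances it); the graph name is illustrative:

    var graph = require("org/arangodb/general-graph")._graph("social");  // assumed to exist
    var query = graph._vertices({});
    query.count();                 // total number of results, cursor unchanged
    while (query.hasNext()) {
      var vertex = query.next();   // delivers one result and moves the cursor by one
    }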
@ -1244,10 +1258,10 @@ AQLGenerator.prototype.next = function() {
///
/// `general-graph._undirectedRelationDefinition(relationName, vertexCollections)`
///
/// Defines an undirected relation with the name `relationName` using the
/// list of `vertexCollections`. This relation allows the user to store
/// Defines an undirected relation with the name *relationName* using the
/// list of *vertexCollections*. This relation allows the user to store
/// edges in any direction between any pair of vertices within the
/// `vertexCollections`.
/// *vertexCollections*.
///
/// @EXAMPLES
///
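A minimal sketch of the call, assuming a vertex collection named "persons" exists:

    var graph_module = require("org/arangodb/general-graph");
    // edges of "friendship" may point either way between any two "persons" vertices
    var rel = graph_module._undirectedRelationDefinition("friendship", ["persons"]);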
@ -1850,13 +1864,15 @@ Graph.prototype._OUTEDGES = function(vertexId) {
/// @startDocuBlock JSF_general_graph_edges
/// Select some edges from the graph.
///
/// `graph.edges(examples)`
///
/// Creates an AQL statement to select a subset of the edges stored in the graph.
/// This is one of the entry points for the fluent AQL interface.
/// It will return a mutable AQL statement which can be further refined, using the
/// functions described below.
/// The resulting set of edges can be filtered by defining one or more `examples`.
/// The resulting set of edges can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all edges are valid.
/// * A string, only the edge having this value as its id is returned.
@ -1867,7 +1883,7 @@ Graph.prototype._OUTEDGES = function(vertexId) {
///
/// @EXAMPLES
///
/// In the examples the `toArray` function is used to print the result.
/// In the examples the *toArray* function is used to print the result.
/// The description of this module can be found below.
///
/// To request unfiltered edges:
@ -1900,13 +1916,15 @@ Graph.prototype._edges = function(edgeExample) {
/// @startDocuBlock JSF_general_graph_vertices
/// Select some vertices from the graph.
///
/// `graph.vertices(examples)`
///
/// Creates an AQL statement to select a subset of the vertices stored in the graph.
/// This is one of the entry points for the fluent AQL interface.
/// It will return a mutable AQL statement which can be further refined, using the
/// functions described below.
/// The resulting set of edges can be filtered by defining one or more `examples`.
/// The resulting set of vertices can be filtered by defining one or more *examples*.
///
/// `examples` can have the following values:
/// *examples* can have the following values:
///
/// * Empty, no matching is executed and all vertices are valid.
/// * A string, only the vertex having this value as its id is returned.
@ -1917,7 +1935,7 @@ Graph.prototype._edges = function(edgeExample) {
///
/// @EXAMPLES
///
/// In the examples the `toArray` function is used to print the result.
/// In the examples the *toArray* function is used to print the result.
/// The description of this module can be found below.
///
/// To request unfiltered vertices:

View File

@ -1065,6 +1065,14 @@ function dijkstraSearch () {
var weight = 1;
if (config.distance) {
weight = config.distance(config, currentNode.vertex, neighbor.vertex, edge);
} else if (config.weight) {
if (typeof edge[config.weight] === "number") {
weight = edge[config.weight];
} else if (config.defaultWeight) {
weight = config.defaultWeight;
} else {
weight = Infinity;
}
}
var alt = dist + weight;
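The weight lookup added above, isolated as a standalone sketch; the attribute names are illustrative:

    var config = { weight: "distance", defaultWeight: 1 };
    var edge = { _from: "cities/a", _to: "cities/b", distance: 42 };
    var weight;
    if (typeof edge[config.weight] === "number") {
      weight = edge[config.weight];      // taken from the edge attribute: 42
    } else if (config.defaultWeight) {
      weight = config.defaultWeight;     // fallback for edges lacking the attribute
    } else {
      weight = Infinity;                 // edge is effectively unreachable
    }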

View File

@ -4590,6 +4590,46 @@ function DETERMINE_WEIGHT (edge, weight, defaultWeight) {
return Infinity;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief visitor callback function for traversal
////////////////////////////////////////////////////////////////////////////////
function TRAVERSAL_SHORTEST_PATH_VISITOR (config, result, vertex, path) {
"use strict";
if (config.endVertex && config.endVertex === vertex._id) {
result.push(CLONE({ vertex: vertex, path: path , startVertex : config.startVertex}));
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief visitor callback function for traversal
////////////////////////////////////////////////////////////////////////////////
function TRAVERSAL_DISTANCE_VISITOR (config, result, vertex, path) {
"use strict";
if (config.endVertex && config.endVertex === vertex._id) {
var dist = 0;
if (config.weight) {
path.edges.forEach(function (e) {
if (typeof e[config.weight] === "number") {
dist = dist + e[config.weight];
} else if (config.defaultWeight) {
dist = dist + config.defaultWeight;
}
});
} else {
dist = path.edges.length;
}
result.push(
CLONE({ vertex: vertex, distance: dist , path: path , startVertex : config.startVertex})
);
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief helper function to determine parameters for SHORTEST_PATH and
@ -4647,6 +4687,235 @@ function GRAPH_SHORTEST_PATH (vertexCollection,
/// @brief shortest path algorithm
////////////////////////////////////////////////////////////////////////////////
function CALCULATE_SHORTEST_PATHES_WITH_FLOYD_WARSHALL (graphData, options) {
"use strict";
var graph = graphData, result = [];
graph.fromVerticesIDs = {};
graph.fromVertices.forEach(function (a) {
graph.fromVerticesIDs[a._id] = a;
});
graph.toVerticesIDs = {};
graph.toVertices.forEach(function (a) {
graph.toVerticesIDs[a._id] = a;
});
var paths = {};
var vertices = {};
graph.edges.forEach(function(e) {
if (options.direction === "outbound") {
if (!paths[e._from]) {
paths[e._from] = {};
}
paths[e._from][e._to] = {distance : DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight)
, paths : [{edges : [e], vertices : [e._from, e._to]}]};
} else if (options.direction === "inbound") {
if (!paths[e._to]) {
paths[e._to] = {};
}
paths[e._to][e._from] = {distance : DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight)
, paths : [{edges : [e], vertices : [e._from, e._to]}]};
} else {
if (!paths[e._from]) {
paths[e._from] = {};
}
if (!paths[e._to]) {
paths[e._to] = {};
}
if (paths[e._from][e._to]) {
paths[e._from][e._to].distance =
Math.min(paths[e._from][e._to].distance, DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight));
} else {
paths[e._from][e._to] = {distance : DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight)
, paths : [{edges : [e], vertices : [e._from, e._to]}]};
}
if (paths[e._to][e._from]) {
paths[e._to][e._from].distance =
Math.min(paths[e._to][e._from].distance, DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight));
} else {
paths[e._to][e._from] = {distance : DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight)
, paths : [{edges : [e], vertices : [e._from, e._to]}]};
}
}
vertices[e._to] = 1;
vertices[e._from] = 1;
});
var removeDuplicates = function(elem, pos, self) {
return self.indexOf(elem) === pos;
};
Object.keys(graph.fromVerticesIDs).forEach(function (v) {
vertices[v] = 1;
});
var allVertices = Object.keys(vertices);
allVertices.forEach(function (k) {
allVertices.forEach(function (i) {
allVertices.forEach(function (j) {
if (i === j ) {
if (!paths[i]) {
paths[i] = {};
}
paths[i][j] = null;
return;
}
if (paths[i] && paths[i][k] && paths[i][k].distance >=0
&& paths[i][k].distance < Infinity &&
paths[k] && paths[k][j] && paths[k][j].distance >=0
&& paths[k][j].distance < Infinity &&
( !paths[i][j] ||
paths[i][k].distance + paths[k][j].distance <= paths[i][j].distance
)
) {
if (!paths[i][j]) {
paths[i][j] = {paths : [], distance : paths[i][k].distance + paths[k][j].distance};
}
if (paths[i][k].distance + paths[k][j].distance < paths[i][j].distance) {
paths[i][j].distance = paths[i][k].distance+paths[k][j].distance;
paths[i][j].paths = [];
}
paths[i][k].paths.forEach(function (p1) {
paths[k][j].paths.forEach(function (p2) {
paths[i][j].paths.push({
edges : p1.edges.concat(p2.edges),
vertices: p1.vertices.concat(p2.vertices).filter(removeDuplicates)
});
});
});
}
});
});
});
Object.keys(paths).forEach(function (from) {
if (!graph.fromVerticesIDs[from]) {
return;
}
Object.keys(paths[from]).forEach(function (to) {
if (!graph.toVerticesIDs[to]) {
return;
}
if (from === to) {
result.push({
startVertex : from,
vertex : graph.toVerticesIDs[to],
paths : [{edges : [], vertices : []}],
distance : 0
});
return;
}
result.push({
startVertex : from,
vertex : graph.toVerticesIDs[to],
paths : paths[from][to].paths,
distance : paths[from][to].distance
});
});
});
return result;
}
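How the two algorithms are selected further down in GENERAL_GRAPH_SHORTEST_PATH, sketched with a hypothetical graph name:

    var options = { weight: "distance", defaultWeight: 1 };
    // no start/end examples: defaults to Floyd-Warshall, which computes all pairs at once
    GENERAL_GRAPH_SHORTEST_PATH("routeplanner", {}, {}, options);
    // concrete examples: one Dijkstra-based traversal per start/end pair
    GENERAL_GRAPH_SHORTEST_PATH("routeplanner", { _id: "cities/a" }, { _id: "cities/b" }, options);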
////////////////////////////////////////////////////////////////////////////////
/// @brief helper function to determine parameters for TRAVERSAL and
/// GRAPH_TRAVERSAL
////////////////////////////////////////////////////////////////////////////////
function TRAVERSAL_PARAMS (params) {
"use strict";
if (params === undefined) {
params = { };
}
params.visitor = TRAVERSAL_VISITOR;
return params;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief merge list of edges with list of examples
////////////////////////////////////////////////////////////////////////////////
function MERGE_EXAMPLES_WITH_EDGES (examples, edges) {
  var result = [], filter;
if (examples.length === 0) {
return edges;
}
edges.forEach(function(edge) {
examples.forEach(function(example) {
filter = CLONE(example);
if (!(filter._id || filter._key)) {
filter._id = edge._id;
}
result.push(filter);
});
});
return result;
}
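A worked example of the merge helper above: examples without *_id* or *_key* are cloned once per edge and pinned to that edge's id:

    MERGE_EXAMPLES_WITH_EDGES([ { type: "married" } ],
                              [ { _id: "e/1" }, { _id: "e/2" } ]);
    // => [ { type: "married", _id: "e/1" }, { type: "married", _id: "e/2" } ]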
////////////////////////////////////////////////////////////////////////////////
/// @brief calculate shortest paths by dijkstra
////////////////////////////////////////////////////////////////////////////////
function CALCULATE_SHORTEST_PATHES_WITH_DIJKSTRA (graphName, graphData, options) {
var params = TRAVERSAL_PARAMS(), factory = TRAVERSAL.generalGraphDatasourceFactory(graphName);
params.paths = true;
params.followEdges = MERGE_EXAMPLES_WITH_EDGES(options.edgeExamples, graphData.edges);
params.weight = options.weight;
params.defaultWeight = options.defaultWeight;
params = SHORTEST_PATH_PARAMS(params);
params.visitor = TRAVERSAL_DISTANCE_VISITOR;
var result = [];
graphData.fromVertices.forEach(function (v) {
graphData.toVertices.forEach(function (t) {
var e = TRAVERSAL_FUNC("GENERAL_GRAPH_SHORTEST_PATH",
factory,
TO_ID(v),
TO_ID(t),
options.direction,
params);
result = result.concat(e);
});
});
result.forEach(function (r) {
r.paths = [r.path];
});
return result;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief checks if an example is set
////////////////////////////////////////////////////////////////////////////////
function IS_EXAMPLE_SET (example) {
return (
example && (
(Array.isArray(example) && example.length > 0) ||
    (typeof example === "object" && Object.keys(example).length > 0)
)
);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief shortest path algorithm
////////////////////////////////////////////////////////////////////////////////
function GENERAL_GRAPH_SHORTEST_PATH (graphName,
startVertexExample,
endVertexExample,
@ -4662,141 +4931,27 @@ function GENERAL_GRAPH_SHORTEST_PATH (graphName,
options.direction = 'any';
}
var result = [];
options.edgeExamples = options.edgeExamples || [];
var graph = RESOLVE_GRAPH_TO_DOCUMENTS(graphName, options);
graph.fromVerticesIDs = {};
graph.fromVertices.forEach(function (a) {
graph.fromVerticesIDs[a._id] = a;
});
graph.toVerticesIDs = {};
graph.toVertices.forEach(function (a) {
graph.toVerticesIDs[a._id] = a;
});
var paths = {};
var fromVertices = [];
var toVertices = [];
graph.edges.forEach(function(e) {
if (options.direction === "outbound") {
if (!paths[e._from]) {
paths[e._from] = {};
}
paths[e._from][e._to] = {distance : DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight)
, edges : [e], vertices : [e._from, e._to]};
fromVertices.push(e._from);
toVertices.push(e._to);
} else if (options.direction === "inbound") {
      if (!paths[e._to]) {
paths[e._to] = {};
}
paths[e._to][e._from] = {distance : DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight)
, edges : [e], vertices : [e._from, e._to]};
fromVertices.push(e._to);
toVertices.push(e._from);
} else {
if (!paths[e._from]) {
paths[e._from] = {};
}
if (!paths[e._to]) {
paths[e._to] = {};
}
paths[e._from][e._to] = {distance : DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight)
, edges : [e], vertices : [e._from, e._to]};
paths[e._to][e._from] = {distance : DETERMINE_WEIGHT(e, options.weight,
options.defaultWeight)
, edges : [e], vertices : [e._from, e._to]};
fromVertices.push(e._to);
toVertices.push(e._from);
fromVertices.push(e._from);
toVertices.push(e._to);
if (!options.algorithm) {
if (!IS_EXAMPLE_SET(startVertexExample) && !IS_EXAMPLE_SET(endVertexExample)) {
options.algorithm = "Floyd-Warshall";
}
});
var removeDuplicates = function(elem, pos, self) {
return self.indexOf(elem) === pos;
};
fromVertices.filter(removeDuplicates);
toVertices.filter(removeDuplicates);
var allVertices = fromVertices.concat(toVertices).filter(removeDuplicates);
allVertices.forEach(function (k) {
allVertices.forEach(function (i) {
allVertices.forEach(function (j) {
if (i === j ) {
if (!paths[i]) {
paths[i] = {};
}
paths[i] [j] = null;
return;
}
if (paths[i] && paths[i][k] && paths[i][k].distance < Infinity &&
paths[k] && paths[k][j] && paths[k][j].distance < Infinity &&
( !paths[i][j] ||
paths[i][k].distance + paths[k][j].distance < paths[i][j].distance
)
) {
if (!paths[i][j]) {
paths[i][j] = {};
}
paths[i][j].distance = paths[i][k].distance+paths[k][j].distance;
paths[i][j].edges = paths[i][k].edges.concat(paths[k][j].edges);
paths[i][j].vertices =
paths[i][k].vertices.concat(paths[k][j].vertices).filter(removeDuplicates);
}
});
});
});
Object.keys(paths).forEach(function (from) {
if (!graph.fromVerticesIDs[from]) {
return;
}
Object.keys(paths[from]).forEach(function (to) {
if (!graph.toVerticesIDs[to]) {
return;
}
if (from === to) {
result.push({
startVertex : from,
vertex : graph.toVerticesIDs[to],
path : {
edges : [],
vertices : []
},
distance : 0
});
return;
}
result.push({
startVertex : from,
vertex : graph.toVerticesIDs[to],
path : {
edges : paths[from][to].edges,
vertices : paths[from][to].vertices
},
distance : paths[from][to].distance
});
});
});
return result;
}
if (options.algorithm === "Floyd-Warshall") {
return CALCULATE_SHORTEST_PATHES_WITH_FLOYD_WARSHALL(graph, options);
}
return CALCULATE_SHORTEST_PATHES_WITH_DIJKSTRA(
graphName, graph , options
);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief distance to
////////////////////////////////////////////////////////////////////////////////
@ -4816,22 +4971,6 @@ function GENERAL_GRAPH_DISTANCE_TO (graphName,
}
////////////////////////////////////////////////////////////////////////////////
/// @brief helper function to determine parameters for TRAVERSAL and
/// GRAPH_TRAVERSAL
////////////////////////////////////////////////////////////////////////////////
function TRAVERSAL_PARAMS (params) {
"use strict";
if (params === undefined) {
params = { };
}
params.visitor = TRAVERSAL_VISITOR;
return params;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief traverse a graph
////////////////////////////////////////////////////////////////////////////////
@ -5038,26 +5177,6 @@ function GRAPH_NEIGHBORS (vertexCollection,
}
////////////////////////////////////////////////////////////////////////////////
/// @brief merge list of edges with list of examples
////////////////////////////////////////////////////////////////////////////////
function MERGE_EXAMPLES_WITH_EDGES (examples, edges) {
var result = [],filter;
if (examples.length === 0) {
return edges;
}
edges.forEach(function(edge) {
examples.forEach(function(example) {
filter = CLONE(example);
if (!(filter._id || filter._key)) {
filter._id = edge._id;
}
result.push(filter);
});
});
return result;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return connected neighbors
@ -5238,6 +5357,7 @@ function GENERAL_GRAPH_COMMON_PROPERTIES (
options = { };
}
options.fromVertexExample = vertex1Examples;
options.toVertexExample = vertex2Examples;
options.direction = 'any';
options.ignoreProperties = TO_LIST(options.ignoreProperties, true);
@ -5246,58 +5366,64 @@ function GENERAL_GRAPH_COMMON_PROPERTIES (
var removeDuplicates = function(elem, pos, self) {
return self.indexOf(elem) === pos;
};
var c = 0 ;
var t = {};
g.fromVertices.forEach(function (n1) {
options.fromVertexExample = vertex2Examples;
vertex2Examples = TO_LIST(vertex2Examples);
var searchOptions = [];
Object.keys(n1).forEach(function (key) {
if (key.indexOf("_") === 0 || options.ignoreProperties.indexOf(key) !== -1) {
if (key.indexOf("_") === 0 || options.ignoreProperties.indexOf(key) !== -1) {
return;
}
if (!t[key + "|" + JSON.stringify(n1[key])]) {
t[key + "|" + JSON.stringify(n1[key])] = {from : [], to : []};
}
t[key + "|" + JSON.stringify(n1[key])].from.push(n1);
});
});
g.toVertices.forEach(function (n1) {
Object.keys(n1).forEach(function (key) {
if (key.indexOf("_") === 0) {
return;
}
if (!t[key + "|" + JSON.stringify(n1[key])]) {
return;
}
t[key + "|" + JSON.stringify(n1[key])].to.push(n1);
});
});
var tmp = {};
Object.keys(t).forEach(function (r) {
t[r].from.forEach(function (f) {
if (!tmp[f._id]) {
tmp[f._id] = [];
}
t[r].to.forEach(function (t) {
if (t._id === f._id) {
return;
}
if (vertex2Examples.length === 0) {
var con = {};
con[key] = n1[key];
searchOptions.push(con);
}
vertex2Examples.forEach(function (example) {
var con = CLONE(example);
con[key] = n1[key];
searchOptions.push(con);
tmp[f._id].push(t);
});
});
if (searchOptions.length > 0) {
options.fromVertexExample = searchOptions;
var commons = DOCUMENTS_BY_EXAMPLE(
g.fromCollections.filter(removeDuplicates), options.fromVertexExample
);
result[n1._id] = [];
commons.forEach(function (c) {
if (c._id !== n1._id) {
result[n1._id].push(c);
}
});
if (result[n1._id].length === 0) {
delete result[n1._id];
}
}
});
Object.keys(result).forEach(function (r) {
var tmp = {};
tmp[r] = result[r];
if (Object.keys(result[r]).length > 0) {
res.push(tmp);
Object.keys(tmp).forEach(function (r) {
if (tmp[r].length === 0) {
return;
}
var a = {};
a[r] = tmp[r];
res.push(a);
});
return res;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the eccentricity of all vertices in the graph
/// @brief return the absolute eccentricity of vertices in the graph
////////////////////////////////////////////////////////////////////////////////
function VERTICES_ECCENTRICITY (graphName, options) {
function GENERAL_GRAPH_ABSOLUTE_ECCENTRICITY (graphName, vertexExample, options) {
"use strict";
@ -5308,7 +5434,8 @@ function VERTICES_ECCENTRICITY (graphName, options) {
options.direction = 'any';
}
var distanceMap = GENERAL_GRAPH_DISTANCE_TO(graphName, {} , {}, options), result = {}, max = 0;
var distanceMap = GENERAL_GRAPH_DISTANCE_TO(
graphName, vertexExample , {}, options), result = {}, max = 0;
distanceMap.forEach(function(d) {
if (!result[d.startVertex]) {
result[d.startVertex] = d.distance;
@ -5317,9 +5444,10 @@ function VERTICES_ECCENTRICITY (graphName, options) {
}
});
return result;
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the normalized eccentricity of all vertices in the graph
@ -5331,12 +5459,13 @@ function GENERAL_GRAPH_ECCENTRICITY (graphName, options) {
if (! options) {
options = { };
}
if (! options.direction) {
options.direction = 'any';
if (! options.algorithm) {
options.algorithm = "Floyd-Warshall";
}
var result = VERTICES_ECCENTRICITY(graphName, options), max = 0;
var result = GENERAL_GRAPH_ABSOLUTE_ECCENTRICITY(graphName, {}, options), max = 0;
Object.keys(result).forEach(function (r) {
result[r] = 1 / result[r];
result[r] = result[r] === 0 ? 0 : 1 / result[r];
if (result[r] > max) {
max = result[r];
}
@ -5347,6 +5476,36 @@ function GENERAL_GRAPH_ECCENTRICITY (graphName, options) {
return result;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the absolute closeness of vertices in the graph
////////////////////////////////////////////////////////////////////////////////
function GENERAL_GRAPH_ABSOLUTE_CLOSENESS (graphName, vertexExample, options) {
"use strict";
if (! options) {
options = { };
}
if (! options.direction) {
options.direction = 'any';
}
var distanceMap = GENERAL_GRAPH_DISTANCE_TO(graphName, vertexExample , {}, options), result = {};
distanceMap.forEach(function(d) {
if (options.direction !== 'any' && options.calcNormalized) {
d.distance = d.distance === 0 ? 0 : 1 / d.distance;
}
if (!result[d.startVertex]) {
result[d.startVertex] = d.distance;
} else {
result[d.startVertex] = d.distance + result[d.startVertex];
}
});
return result;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the normalized closeness of all vertices in the graph
////////////////////////////////////////////////////////////////////////////////
@ -5357,24 +5516,16 @@ function GENERAL_GRAPH_CLOSENESS (graphName, options) {
if (! options) {
options = { };
}
if (! options.direction) {
options.direction = 'any';
}
var distanceMap = GENERAL_GRAPH_DISTANCE_TO(graphName, {} , {}, options), result = {}, max = 0;
distanceMap.forEach(function(d) {
if (options.direction !== 'any') {
d.distance = d.distance === 0 ? 0 : 1 / d.distance;
}
if (!result[d.startVertex]) {
result[d.startVertex] = d.distance;
} else {
result[d.startVertex] = d.distance + result[d.startVertex];
}
});
options.calcNormalized = true;
if (! options.algorithm) {
options.algorithm = "Floyd-Warshall";
}
var result = GENERAL_GRAPH_ABSOLUTE_CLOSENESS(graphName, {}, options), max = 0;
Object.keys(result).forEach(function (r) {
if (options.direction === 'any') {
result[r] = 1 / result[r];
result[r] = result[r] === 0 ? 0 : 1 / result[r];
}
if (result[r] > max) {
max = result[r];
@ -5388,6 +5539,56 @@ function GENERAL_GRAPH_CLOSENESS (graphName, options) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the absolute betweenness of all vertices in the graph
////////////////////////////////////////////////////////////////////////////////
function GENERAL_GRAPH_ABSOLUTE_BETWEENNESS (graphName, options) {
"use strict";
if (! options) {
options = { };
}
if (! options.direction) {
options.direction = 'any';
}
options.algorithm = "Floyd-Warshall";
var distanceMap = GENERAL_GRAPH_DISTANCE_TO(graphName, {} , {}, options),
result = {};
distanceMap.forEach(function(d) {
var tmp = {};
if (!result[d.startVertex]) {
result[d.startVertex] = 0;
}
if (!result[d.vertex._id]) {
result[d.vertex._id] = 0;
}
d.paths.forEach(function (p) {
p.vertices.forEach(function (v) {
if (v === d.startVertex || v === d.vertex._id) {
return;
}
if (!tmp[v]) {
tmp[v] = 1;
} else {
tmp[v]++;
}
});
});
Object.keys(tmp).forEach(function (t) {
if (!result[t]) {
result[t] = 0;
}
result[t] = result[t] + tmp[t] / d.paths.length;
});
});
return result;
}
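A hedged sketch relating the absolute and normalized measures exported below; the graph name is illustrative:

    // absolute betweenness per vertex id
    var abs = GENERAL_GRAPH_ABSOLUTE_BETWEENNESS("routeplanner", { direction: "outbound" });
    // normalized variant: the same values scaled by their maximum into [0, 1]
    var norm = GENERAL_GRAPH_BETWEENNESS("routeplanner", { direction: "outbound" });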
////////////////////////////////////////////////////////////////////////////////
/// @brief return the normalized betweenness of all vertices in the graph
////////////////////////////////////////////////////////////////////////////////
@ -5398,37 +5599,8 @@ function GENERAL_GRAPH_BETWEENNESS (graphName, options) {
if (! options) {
options = { };
}
if (! options.direction) {
options.direction = 'any';
}
var distanceMap = GENERAL_GRAPH_SHORTEST_PATH(graphName, {} , {}, options),
result = {}, max = 0, hits = {};
distanceMap.forEach(function(d) {
if (hits[d.startVertex + d.vertex._id] ||
hits[d.vertex._id + d.startVertex]
) {
return;
}
hits[d.startVertex + d.vertex._id] = true;
hits[d.vertex._id + d.startVertex] = true;
d.path.vertices.forEach(function (v) {
if (v === d.vertex._id || v === d.startVertex) {
if (!result[d.vertex._id]) {
result[d.vertex._id] = 0;
}
if (!result[d.startVertex]) {
result[d.startVertex] = 0;
}
return;
}
if (!result[v]) {
result[v] = 1;
} else {
result[v] = result[v] + 1;
}
});
});
var result = GENERAL_GRAPH_ABSOLUTE_BETWEENNESS(graphName, options), max = 0;
Object.keys(result).forEach(function (r) {
if (result[r] > max) {
max = result[r];
@ -5442,6 +5614,7 @@ function GENERAL_GRAPH_BETWEENNESS (graphName, options) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the radius of the graph
////////////////////////////////////////////////////////////////////////////////
@ -5455,8 +5628,15 @@ function GENERAL_GRAPH_RADIUS (graphName, options) {
if (! options.direction) {
options.direction = 'any';
}
var result = VERTICES_ECCENTRICITY(graphName, options), min = Infinity;
if (! options.algorithm) {
options.algorithm = "Floyd-Warshall";
}
var result = GENERAL_GRAPH_ABSOLUTE_ECCENTRICITY(graphName, {}, options), min = Infinity;
Object.keys(result).forEach(function (r) {
if (result[r] === 0) {
return;
}
if (result[r] < min) {
min = result[r];
}
@ -5480,7 +5660,11 @@ function GENERAL_GRAPH_DIAMETER (graphName, options) {
if (! options.direction) {
options.direction = 'any';
}
var result = VERTICES_ECCENTRICITY(graphName, options), max = 0;
if (! options.algorithm) {
options.algorithm = "Floyd-Warshall";
}
var result = GENERAL_GRAPH_ABSOLUTE_ECCENTRICITY(graphName, {}, options), max = 0;
Object.keys(result).forEach(function (r) {
if (result[r] > max) {
max = result[r];
@ -5614,6 +5798,9 @@ exports.GENERAL_GRAPH_COMMON_PROPERTIES = GENERAL_GRAPH_COMMON_PROPERTIES;
exports.GENERAL_GRAPH_ECCENTRICITY = GENERAL_GRAPH_ECCENTRICITY;
exports.GENERAL_GRAPH_BETWEENNESS = GENERAL_GRAPH_BETWEENNESS;
exports.GENERAL_GRAPH_CLOSENESS = GENERAL_GRAPH_CLOSENESS;
exports.GENERAL_GRAPH_ABSOLUTE_ECCENTRICITY = GENERAL_GRAPH_ABSOLUTE_ECCENTRICITY;
exports.GENERAL_GRAPH_ABSOLUTE_BETWEENNESS = GENERAL_GRAPH_ABSOLUTE_BETWEENNESS;
exports.GENERAL_GRAPH_ABSOLUTE_CLOSENESS = GENERAL_GRAPH_ABSOLUTE_CLOSENESS;
exports.GENERAL_GRAPH_DIAMETER = GENERAL_GRAPH_DIAMETER;
exports.GENERAL_GRAPH_RADIUS = GENERAL_GRAPH_RADIUS;
exports.NOT_NULL = NOT_NULL;

File diff suppressed because it is too large

View File

@ -39,6 +39,88 @@ var internal = require("internal");
function CompactionSuite () {
return {
////////////////////////////////////////////////////////////////////////////////
/// @brief test movement of shapes
////////////////////////////////////////////////////////////////////////////////
testShapesMovement : function () {
var cn = "example";
internal.db._drop(cn);
    cn = internal.db._create(cn, { "journalSize" : 1048576 });
var i, j;
for (i = 0; i < 1000; ++i) {
var doc = { _key: "old" + i, a: i, b: "test" + i, values: [ ], atts: { } }, x = { };
for (j = 0; j < 10; ++j) {
doc.values.push(j);
doc.atts["test" + i + j] = "test";
x["foo" + i] = [ "1" ];
}
doc.atts.foo = x;
cn.save(doc);
}
// now access the documents once, to build the shape accessors
for (i = 0; i < 1000; ++i) {
var doc = cn.document("old" + i);
var keys = Object.keys(doc);
assertTrue(doc.hasOwnProperty("a"));
assertEqual(i, doc.a);
assertTrue(doc.hasOwnProperty("b"));
assertEqual("test" + i, doc.b);
assertTrue(doc.hasOwnProperty("values"));
assertEqual(10, doc.values.length);
assertTrue(doc.hasOwnProperty("atts"));
assertEqual(11, Object.keys(doc.atts).length);
for (j = 0; j < 10; ++j) {
assertEqual("test", doc.atts["test" + i + j]);
}
for (j = 0; j < 10; ++j) {
assertEqual([ "1" ], doc.atts.foo["foo" + i]);
}
}
// fill the datafile with rubbish
for (i = 0; i < 10000; ++i) {
cn.save({ _key: "test" + i, value: "thequickbrownfox" });
}
for (i = 0; i < 10000; ++i) {
cn.remove("test" + i);
}
internal.wait(7);
assertEqual(1000, cn.count());
// now access the "old" documents, which were probably moved
for (i = 0; i < 1000; ++i) {
var doc = cn.document("old" + i);
var keys = Object.keys(doc);
assertTrue(doc.hasOwnProperty("a"));
assertEqual(i, doc.a);
assertTrue(doc.hasOwnProperty("b"));
assertEqual("test" + i, doc.b);
assertTrue(doc.hasOwnProperty("values"));
assertEqual(10, doc.values.length);
assertTrue(doc.hasOwnProperty("atts"));
assertEqual(11, Object.keys(doc.atts).length);
for (j = 0; j < 10; ++j) {
assertEqual("test", doc.atts["test" + i + j]);
}
for (j = 0; j < 10; ++j) {
assertEqual([ "1" ], doc.atts.foo["foo" + i]);
}
}
internal.db._drop(cn);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test shapes
////////////////////////////////////////////////////////////////////////////////

View File

@ -70,6 +70,7 @@ static bool BytecodeShapeAccessor (TRI_shaper_t* shaper, TRI_shape_access_t* acc
paids = (TRI_shape_aid_t*) (((char const*) path) + sizeof(TRI_shape_path_t));
// collect the bytecode
// we need at least 2 entries in the vector to store an accessor
TRI_InitVectorPointer2(&ops, shaper->_memoryZone, 2);
@ -214,7 +215,7 @@ static bool BytecodeShapeAccessor (TRI_shaper_t* shaper, TRI_shape_access_t* acc
TRI_DestroyVectorPointer(&ops);
accessor->_shape = NULL;
accessor->_resultSid = 0;
accessor->_code = NULL;
return true;
@ -222,7 +223,7 @@ static bool BytecodeShapeAccessor (TRI_shaper_t* shaper, TRI_shape_access_t* acc
else {
TRI_DestroyVectorPointer(&ops);
accessor->_shape = NULL;
accessor->_resultSid = 0;
accessor->_code = NULL;
return true;
@ -238,7 +239,8 @@ static bool BytecodeShapeAccessor (TRI_shaper_t* shaper, TRI_shape_access_t* acc
return false;
}
accessor->_shape = shape;
// remember resulting sid
accessor->_resultSid = shape->_sid;
// steal buffer from ops vector so we don't need to copy it
accessor->_code = const_cast<void const**>(ops._buffer);
@ -262,7 +264,7 @@ static bool ExecuteBytecodeShapeAccessor (TRI_shape_access_t const* accessor,
TRI_shape_size_t pos;
TRI_shape_size_t* offsetsV;
if (accessor->_shape == NULL) {
if (accessor->_resultSid == 0) {
return false;
}
@ -364,7 +366,7 @@ bool TRI_ExecuteShapeAccessor (TRI_shape_access_t const* accessor,
return false;
}
result->_sid = accessor->_shape->_sid;
result->_sid = accessor->_resultSid;
result->_data.data = (char*) begin;
result->_data.length = (uint32_t) (((char const*) end) - ((char const*) begin));
@ -384,12 +386,12 @@ void TRI_PrintShapeAccessor (TRI_shape_access_t* accessor) {
(unsigned long) accessor->_sid,
(unsigned long) accessor->_pid);
if (accessor->_shape == NULL) {
if (accessor->_resultSid == 0) {
printf(" result shape: -\n");
return;
}
printf(" result shape: %lu\n", (unsigned long) accessor->_shape->_sid);
printf(" result shape: %lu\n", (unsigned long) accessor->_resultSid);
void const** ops = static_cast<void const**>(accessor->_code);

View File

@ -54,7 +54,7 @@ typedef struct TRI_shape_access_s {
TRI_shape_sid_t _sid; // shaped identifier of the shape we are looking at
TRI_shape_pid_t _pid; // path identifier of the attribute path
TRI_shape_t const* _shape; // resulting shape
TRI_shape_sid_t _resultSid; // resulting shape
void const** _code; // bytecode
TRI_memory_zone_t* _memoryZone;