
Documentation: corrected typos and case, prefer American over British English

Author: CoDEmanX
Date:   2015-09-01 14:45:55 +02:00
Parent: 4088f186ce
Commit: a39b712efe
83 changed files with 249 additions and 250 deletions


@@ -7,7 +7,7 @@ modified documents. This is even the case when a document gets deleted. The
 two benefits are:
 * Objects can be stored coherently and compactly in the main memory.
-* Objects are preserved, wo isolated writing and reading transactions allow
+* Objects are preserved, isolated writing and reading transactions allow
   accessing these objects for parallel operations.
 The system collects obsolete versions as garbage, recognizing them as


@@ -1,6 +1,6 @@
 !CHAPTER Date functions
-AQL offers functionality to work with dates. Dates are no datatypes of their own in
+AQL offers functionality to work with dates. Dates are no data types of their own in
 AQL (neither are they in JSON, which is often used as a format to ship data into and
 out of ArangoDB). Instead, dates in AQL are internally represented by either numbers
 (timestamps) or strings. The date functions in AQL provide mechanisms to convert from
@@ -43,7 +43,7 @@ These two above date functions accept the following input values:
 522 milliseconds, UTC / Zulu time. Another example value without time component is
 *2014-05-07Z*.
-Please note that if no timezone offset is specified in a datestring, ArangoDB will
+Please note that if no timezone offset is specified in a date string, ArangoDB will
 assume UTC time automatically. This is done to ensure portability of queries across
 servers with different timezone settings, and because timestamps will always be
 UTC-based.
@@ -124,8 +124,8 @@ There are two recommended ways to store timestamps in ArangoDB:
 - as string with ISO 8601 UTC timestamp
 - as [Epoch number](https://en.wikipedia.org/wiki/Epoch_%28reference_date%29)
-This way you can work with [skiplist incices](../IndexHandling/Skiplist.html) and use
-string comparisons (less than, greater than, in, equality) to express timeranges in your queries:
+This way you can work with [skiplist indices](../IndexHandling/Skiplist.html) and use
+string comparisons (less than, greater than, in, equality) to express time ranges in your queries:
 @startDocuBlockInline working_with_date_time
 @EXAMPLE_ARANGOSH_OUTPUT{working_with_date_time}
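The point of the hunk above is that ISO 8601 UTC strings sort lexicographically in chronological order, so plain string comparisons can express a time range. A minimal plain-JavaScript sketch of that idea (the event data is made up; ArangoDB itself is not involved):

```javascript
// Hypothetical events stored with ISO 8601 UTC timestamp strings.
const events = [
  { name: "a", ts: "2014-05-07T14:19:09.522Z" },
  { name: "b", ts: "2013-01-15T08:00:00.000Z" },
  { name: "c", ts: "2015-09-01T12:45:55.000Z" },
];

// Because ISO 8601 UTC strings sort lexicographically in chronological
// order, plain string comparisons express a time range, just like the
// range conditions a skiplist index can serve.
const from = "2014-01-01T00:00:00.000Z";
const to   = "2015-01-01T00:00:00.000Z";
const inRange = events.filter(e => e.ts >= from && e.ts < to);

console.log(inRange.map(e => e.name)); // [ "a" ]
```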


@@ -30,8 +30,8 @@ AQL supports the following functions to operate on document values:
 This will return *2*, because the third example matches, and because the
 *return-index* flag is set to *true*.
-- *MERGE(document1, document2, ... documentn)*: Merges the documents
-  in *document1* to *documentn* into a single document. If document attribute
+- *MERGE(document1, document2, ... documentN)*: Merges the documents
+  in *document1* to *documentN* into a single document. If document attribute
   keys are ambiguous, the merged result will contain the values of the documents
   contained later in the argument list.
@@ -56,8 +56,8 @@ AQL supports the following functions to operate on document values:
 Please note that merging will only be done for top-level attributes. If you wish to
 merge sub-attributes, you should consider using *MERGE_RECURSIVE* instead.
-- *MERGE_RECURSIVE(document1, document2, ... documentn)*: Recursively
-  merges the documents in *document1* to *documentn* into a single document. If
+- *MERGE_RECURSIVE(document1, document2, ... documentN)*: Recursively
+  merges the documents in *document1* to *documentN* into a single document. If
   document attribute keys are ambiguous, the merged result will contain the values of the
   documents contained later in the argument list.
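The described difference between the two merge flavors can be sketched in plain JavaScript (a sketch of the documented semantics, not ArangoDB's implementation): documents later in the argument list win on ambiguous keys, and *MERGE* works on top-level attributes only.

```javascript
// Top-level merge: later documents win wholesale on ambiguous keys.
function merge(...docs) {
  return Object.assign({}, ...docs);
}

// Recursive merge: nested objects are merged key by key instead.
function mergeRecursive(...docs) {
  const isDoc = v => v !== null && typeof v === "object" && !Array.isArray(v);
  return docs.reduce((acc, doc) => {
    for (const [key, value] of Object.entries(doc)) {
      acc[key] = isDoc(acc[key]) && isDoc(value)
        ? mergeRecursive(acc[key], value)
        : value;
    }
    return acc;
  }, {});
}

const a = { user: "john", settings: { theme: "dark", lang: "en" } };
const b = { settings: { theme: "light" } };

// Top-level merge: b's whole `settings` object replaces a's.
const flat = merge(a, b); // { user: "john", settings: { theme: "light" } }
// Recursive merge: only `theme` is overridden, `lang` survives.
const deep = mergeRecursive(a, b);
```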


@@ -15,7 +15,7 @@ AQL offers the following functions to filter data based on [fulltext indexes](..
 - *FULLTEXT(emails, "body", "banana")* Will look for the word *banana* in the
   attribute *body* of the collection *collection*.
-- *FULLTEXT(emails, "body", "banana,orange")* Will look for boths the words
+- *FULLTEXT(emails, "body", "banana,orange")* Will look for both words
   *banana* and *orange* in the mentioned attribute. Only those documents will be
   returned that contain both words.


@@ -24,7 +24,7 @@ i.e. *LENGTH(foo)* and *length(foo)* are equivalent.
 !SUBSUBSECTION Extending AQL
 Since ArangoDB 1.3, it is possible to extend AQL with user-defined functions.
-These functions need to be written in Javascript, and be registered before usage
+These functions need to be written in JavaScript, and be registered before usage
 in a query.
 Please refer to [Extending AQL](../AqlExtending/README.md) for more details on this.


@@ -55,7 +55,7 @@ This section describes various AQL functions which can be used to receive inform
 !SUBSECTION Shortest Paths, distances and traversals.
 <!-- js/server/modules/org/arangodb/ahuacatl.js -->
-This section describes AQL functions, that calculate pathes from a subset of vertices in a graph to another subset of vertices.
+This section describes AQL functions, that calculate paths from a subset of vertices in a graph to another subset of vertices.
 !SUBSECTION GRAPH_PATHS
 <!-- js/server/modules/org/arangodb/ahuacatl.js -->


@@ -3,7 +3,7 @@
 !SECTION Executing queries
 You can run AQL queries from your application via the HTTP REST API. The full
-API description is available at [Http Interface for AQL Query Cursors](../HttpAqlQueryCursor/README.md).
+API description is available at [HTTP Interface for AQL Query Cursors](../HttpAqlQueryCursor/README.md).
 You can also run AQL queries from arangosh. To do so, you can use the *_query* method
 of the *db* object. This will run the specified query in the context of the currently
@@ -216,10 +216,10 @@ The meaning of the statistics attributes is as follows:
 * *writesIgnored*: the total number of data-modification operations that were unsuccessful,
   but have been ignored because of query option `ignoreErrors`.
 * *scannedFull*: the total number of documents iterated over when scanning a collection
-  without an index. Documents scanned by sub-queries will be included in the result, but not
+  without an index. Documents scanned by subqueries will be included in the result, but not
   no operations triggered by built-in or user-defined AQL functions.
 * *scannedIndex*: the total number of documents iterated over when scanning a collection using
-  an index. Documents scanned by sub-queries will be included in the result, but not
+  an index. Documents scanned by subqueries will be included in the result, but not
   no operations triggered by built-in or user-defined AQL functions.
 * *filtered*: the total number of documents that were removed after executing a filter condition
   in a `FilterNode`. Note that `IndexRangeNode`s can also filter documents by selecting only
@@ -333,7 +333,7 @@ The following example disables all optimizer rules but `remove-redundant-calcula
 The contents of an execution plan are meant to be machine-readable. To get a human-readable
-version of a query's execution plan, the following commnands can be used:
+version of a query's execution plan, the following commands can be used:
 @startDocuBlockInline 10_workWithAQL_statementsPlansOptimizer3
 @EXAMPLE_ARANGOSH_OUTPUT{10_workWithAQL_statementsPlansOptimizer3}
@@ -357,8 +357,8 @@ return the some information about the query.
 The return value is an object with the collection names used in the query listed in the
 `collections` attribute, and all bind parameters listed in the `bindVars` attribute.
-Addtionally, the internal representation of the query, the query's abstract syntax tree, will
-be returned in the `ast` attribute of the result. Please note that the abstract syntax tree
+Additionally, the internal representation of the query, the query's abstract syntax tree, will
+be returned in the `AST` attribute of the result. Please note that the abstract syntax tree
 will be returned without any optimizations applied to it.
 @startDocuBlockInline 11_workWithAQL_parseQueries


@@ -49,16 +49,16 @@ function categories:
   DOCUMENT("users/john")
   DOCUMENT([ "users/john", "users/amy" ])
-- *CALL(function, arg1, ..., argn)*: Dynamically calls the function with name *function*
+- *CALL(function, arg1, ..., argN)*: Dynamically calls the function with name *function*
   with the arguments specified. Both built-in and user-defined functions can be called.
-  Arguments are passed as seperate parameters to the called function.
+  Arguments are passed as separate parameters to the called function.
   /* "this" */
   CALL('SUBSTRING', 'this is a test', 0, 4)
 - *APPLY(function, arguments)*: Dynamically calls the function with name *function*
   with the arguments specified. Both built-in and user-defined functions can be called.
-  Arguments are passed as seperate parameters to the called function.
+  Arguments are passed as separate parameters to the called function.
   /* "this is" */
   APPLY('SUBSTRING', [ 'this is a test', 0, 7 ])
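The CALL/APPLY distinction described in this hunk can be sketched in plain JavaScript (the function registry here is a made-up stand-in, not ArangoDB's): a function is looked up by name, with CALL receiving positional arguments and APPLY an argument array.

```javascript
// Hypothetical registry of named functions, mimicking AQL's SUBSTRING.
const registry = {
  SUBSTRING: (str, offset, length) => str.slice(offset, offset + length),
};

// CALL: arguments are passed as separate parameters.
const call = (name, ...args) => registry[name](...args);
// APPLY: arguments arrive as a single array and are spread out.
const apply = (name, args) => registry[name](...args);

const r1 = call("SUBSTRING", "this is a test", 0, 4);    // "this"
const r2 = apply("SUBSTRING", ["this is a test", 0, 7]); // "this is"
```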


@@ -49,7 +49,7 @@ FOR u IN users
 ```
 In this example, there are two array iterations: an outer iteration over the array
-*users* plus an inner iteration over the arry *locations*. The inner array is
+*users* plus an inner iteration over the array *locations*. The inner array is
 traversed as many times as there are elements in the outer array. For each
 iteration, the current values of *users* and *locations* are made available for
 further processing in the variable *u* and *l*.
@@ -1084,7 +1084,7 @@ query option.
 by a `RETURN` statement (intermediate `LET` statements are allowed, too). These statements
 can optionally perform calculations and refer to the pseudo-values `OLD` and `NEW`.
 In case the upsert performed an insert operation, `OLD` will have a value of *null*.
-In case the upsert performed an update or replace opertion, `OLD` will contain the
+In case the upsert performed an update or replace operation, `OLD` will contain the
 previous version of the document, before update/replace.
 `NEW` will always be populated. It will contain the inserted document in case the
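The `OLD`/`NEW` behavior described in this hunk can be sketched in plain JavaScript (the in-memory store and `upsert` helper are made up for illustration, not ArangoDB's API):

```javascript
// OLD is null when the upsert inserts; otherwise it holds the previous
// version. NEW is always populated.
function upsert(store, key, insertDoc, updateDoc) {
  const OLD = store.has(key) ? store.get(key) : null; // null if we insert
  const NEW = OLD === null ? insertDoc : { ...OLD, ...updateDoc };
  store.set(key, NEW);
  return { OLD, NEW };
}

const users = new Map();
const first = upsert(users, "john", { name: "john", logins: 1 }, { logins: 2 });
// first.OLD === null: the upsert performed an insert
const second = upsert(users, "john", { name: "john", logins: 1 }, { logins: 2 });
// second.OLD is the previous document; second.NEW reflects the update
```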


@@ -23,7 +23,7 @@ These operators accept any data types for the first and second operands.
 Each of the comparison operators returns a boolean value if the comparison can
 be evaluated and returns *true* if the comparison evaluates to true, and *false*
-otherwise. Please note that the comparsion operators will not perform any
+otherwise. Please note that the comparison operators will not perform any
 implicit type casts if the compared operands have different types.
 Some examples for comparison operations in AQL:
@@ -83,7 +83,7 @@ boolean value.
 This behavior has changed in ArangoDB 2.3. Passing non-boolean values to a
 logical operator is now allowed. Any-non boolean operands will be casted
-to boolean implicity by the operator, without making the query abort.
+to boolean implicitly by the operator, without making the query abort.
 The *conversion to a boolean value* works as follows:
 - `null` will be converted to `false`
@@ -137,7 +137,7 @@ Some example arithmetic operations:
 The arithmetic operators accept operands of any type. This behavior has changed in
 ArangoDB 2.3. Passing non-numeric values to an arithmetic operator is now allow.
-Any-non numeric operands will be casted to numbers implicity by the operator,
+Any-non numeric operands will be casted to numbers implicitly by the operator,
 without making the query abort.
 The *conversion to a numeric value* works as follows:


@@ -113,7 +113,7 @@ Here is the meaning of these rules in context of this query:
   is calculated multiple times, but each calculation inside a loop iteration would
   produce the same value. Therefore, the expression result is shared by several nodes.
 * `remove-unnecessary-calculations`: removes *CalculationNode*s whose result values are
-  not used in the query. In the example this happenes due to the `remove-redundant-calculations`
+  not used in the query. In the example this happens due to the `remove-redundant-calculations`
   rule having made some calculations unnecessary.
 * `use-index-range`: use an index to iterate over a collection instead of performing a
   full collection scan. In the example case this makes sense, as the index can be
@@ -145,7 +145,7 @@ access type, which can be either `read` or `write`.
 !SUBSUBSECTION Variables used in a query
 The optimizer will also return a list of variables used in a plan (and query). This
-list will contain auxilliary variables created by the optimizer itself. This list
+list will contain auxiliary variables created by the optimizer itself. This list
 can be ignored by end users in most cases.
@@ -188,7 +188,7 @@ This will return an unoptimized plan in the `plan`:
 @END_EXAMPLE_ARANGOSH_OUTPUT
 @endDocuBlock AQLEXP_06_explainUnoptimizedPlans
-Note that some optimisations are already done at parse time (i.e. evaluate simple constant
+Note that some optimizations are already done at parse time (i.e. evaluate simple constant
 calculation as `1 + 1`)
@@ -277,12 +277,12 @@ The following execution node types will appear in the output of `explain`:
   appear once per *LIMIT* statement.
 * *CalculationNode*: evaluates an expression. The expression result may be used by
   other nodes, e.g. *FilterNode*, *EnumerateListNode*, *SortNode* etc.
-* *SubqueryNode*: executes a sub-query.
+* *SubqueryNode*: executes a subquery.
 * *SortNode*: performs a sort of its input values.
 * *AggregateNode*: aggregates its input and produces new output variables. This will
   appear once per *COLLECT* statement.
 * *ReturnNode*: returns data to the caller. Will appear in each read-only query at
-  least once. Sub-queries will also contain *ReturnNode*s.
+  least once. Subqueries will also contain *ReturnNode*s.
 * *InsertNode*: inserts documents into a collection (given in its *collection*
   attribute). Will appear exactly once in a query that contains an *INSERT* statement.
 * *RemoveNode*: removes documents from a collection (given in its *collection*
@@ -292,7 +292,7 @@ The following execution node types will appear in the output of `explain`:
 * *UpdateNode*: updates documents in a collection (given in its *collection*
   attribute). Will appear exactly once in a query that contains an *UPDATE* statement.
 * *NoResultsNode*: will be inserted if *FILTER* statements turn out to be never
-  satisfyable. The *NoResultsNode* will pass an empty result set into the processing
+  satisfiable. The *NoResultsNode* will pass an empty result set into the processing
   pipeline.
 For queries in the cluster, the following nodes may appear in execution plans:
@@ -302,7 +302,7 @@ For queries in the cluster, the following nodes may appear in execution plans:
   into a combined stream of results.
 * *DistributeNode*: used on a coordinator to fan-out data to one or multiple shards,
   taking into account a collection's shard key.
-* *RemoteNode*: a *RemoteNode* will perfom communication with another ArangoDB
+* *RemoteNode*: a *RemoteNode* will perform communication with another ArangoDB
   instances in the cluster. For example, the cluster coordinator will need to
   communicate with other servers to fetch the actual data from the shards. It
   will do so via *RemoteNode*s. The data servers themselves might again pull
@@ -369,7 +369,7 @@ The following optimizer rules may appear in the `rules` attribute of cluster pla
 * `distribute-in-cluster`: will appear when query parts get distributed in a cluster.
   This is not an optimization rule, and it cannot be turned off.
-* `scatter-in-cluster`: will appear when scatter, gatter, and remote nodes are inserted
+* `scatter-in-cluster`: will appear when scatter, gather, and remote nodes are inserted
   into a distributed query. This is not an optimization rule, and it cannot be turned off.
 * `distribute-filtercalc-to-cluster`: will appear when filters are moved up in a
   distributed execution plan. Filters are moved as far up in the plan as possible to


@@ -2,8 +2,8 @@
 For string processing, AQL offers the following functions:
-- *CONCAT(value1, value2, ... valuen)*: Concatenate the strings
-  passed as in *value1* to *valuen*. *null* values are ignored. Array value arguments
+- *CONCAT(value1, value2, ... valueN)*: Concatenate the strings
+  passed as in *value1* to *valueN*. *null* values are ignored. Array value arguments
   are expanded automatically, and their individual members will be concatenated.
   /* "foobarbaz" */
@@ -12,8 +12,8 @@ For string processing, AQL offers the following functions:
   /* "foobarbaz" */
   CONCAT([ 'foo', 'bar', 'baz' ])
-- *CONCAT_SEPARATOR(separator, value1, value2, ... valuen)*:
-  Concatenate the strings passed as arguments *value1* to *valuen* using the
+- *CONCAT_SEPARATOR(separator, value1, value2, ... valueN)*:
+  Concatenate the strings passed as arguments *value1* to *valueN* using the
   *separator* string. *null* values are ignored. Array value arguments
   are expanded automatically, and their individual members will be concatenated.
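The two rules this hunk documents (*null* values are ignored, array arguments are expanded into their members) can be sketched in plain JavaScript, not ArangoDB's implementation:

```javascript
// Expand array arguments into their members and drop nulls.
const expand = values => values.flat(Infinity).filter(v => v !== null);

const concat = (...values) => expand(values).join("");
const concatSeparator = (separator, ...values) => expand(values).join(separator);

const joined = concat("foo", null, "bar", ["baz"]);          // "foobarbaz"
const listed = concatSeparator(", ", ["foo", "bar"], "baz"); // "foo, bar, baz"
```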


@@ -14,7 +14,7 @@ task. Each of the these functions takes an operand of any data type and returns
 a result value of type corresponding to the function name (e.g. *TO_NUMBER()*
 will return a number value):
-- *TO_BOOL(value)*: Takes an input *valu*e of any type and converts it
+- *TO_BOOL(value)*: Takes an input *value* of any type and converts it
   into the appropriate boolean value as follows:
   - *null* is converted to *false*.
   - Numbers are converted to *true* if they are unequal to 0, and to *false* otherwise.
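A plain-JavaScript sketch covering only the *TO_BOOL* rules visible in this excerpt (the *null* and number cases); the remaining cases are cut off above, so they are deliberately not guessed at here:

```javascript
function toBool(value) {
  if (value === null) return false;                  // null → false
  if (typeof value === "number") return value !== 0; // unequal to 0 → true
  // Rules for other types are not shown in this excerpt.
  throw new Error("conversion rule not covered by this sketch");
}

const results = [toBool(null), toBool(0), toBool(-0.5)]; // [false, false, true]
```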


@@ -273,4 +273,4 @@ FOR u IN users
 ```
 To increase readability, the repeated expression *LENGTH(group)* was put into a variable
-*numUsers*. The *FILTER* on *numUsers* is the equivalent an SQL *HAVING* caluse.
+*numUsers*. The *FILTER* on *numUsers* is the equivalent an SQL *HAVING* clause.


@@ -122,7 +122,7 @@ FOR u IN users
 ```
 In this query we are still iterating over the users in the *users* collection
-and for each matching user we are executing a sub-query to create the matching
+and for each matching user we are executing a subquery to create the matching
 list of related users.
 !SUBSECTION Self joins
@@ -208,12 +208,12 @@ FOR user IN users
 ```
 So, for each user we pick the list of her friends and count them. The ones where
-count equals zero are the lonely people. Using *RETURN 1* in the sub-query
+count equals zero are the lonely people. Using *RETURN 1* in the subquery
 saves even more precious CPU cycles and gives the optimizer more alternatives.
 !SUBSECTION Pitfalls
-Since we're free of schematas, there is by default no way to tell the format of the
+Since we're free of schemata, there is by default no way to tell the format of the
 documents. So, if your documents don't contain an attribute, it defaults to
 null. We can however check our data for accuracy like this:
@@ -230,13 +230,13 @@ RETURN LENGTH(FOR f IN relations FILTER f.friendOf == null RETURN 1)
 So that the above queries return 10k matches each, the result of i.e. the Join
 tuples query will become 100.000.000 items large and will use much memory plus
-compution time. So it is generaly a good idea to revalidate that the criteria
+computation time. So it is generally a good idea to revalidate that the criteria
 for your join conditions exist.
 Using indices on the properties can speed up the operation significantly.
-You can use the explain helper to revalidate your query actualy uses them.
-If you work with joins on edge collections you would typicaly aggregate over
+You can use the explain helper to revalidate your query actually uses them.
+If you work with joins on edge collections you would typically aggregate over
 the internal fields *_id*, *_from* and *_to* (where *_id* equals *userId*,
-*_from* *friendOf* and *_to* would be *thisUser* in our examples). Arangodb
-implicitely creates indices on them.
+*_from* *friendOf* and *_to* would be *thisUser* in our examples). ArangoDB
+implicitly creates indices on them.


@@ -37,7 +37,7 @@ access to any external data, it must take care to set up the data by
 itself.
 All AQL user function-specific variables should be introduced with the `var`
-keyword in order to not accidently access already defined variables from
+keyword in order to not accidentally access already defined variables from
 outer scopes. Not using the `var` keyword for own variables may cause side
 effects when executing the function.
@@ -74,7 +74,7 @@ function (values) {
 User functions must only return primitive types (i.e. *null*, boolean
 values, numeric values, string values) or aggregate types (lists or
 documents) composed of these types.
-Returning any other Javascript object type from a user function may lead
+Returning any other JavaScript object type from a user function may lead
 to undefined behavior and should be avoided.
 !SECTION Miscellaneous


@@ -5,7 +5,7 @@ fully-featured programming language.
 To add missing functionality or to simplify queries, users
 may add their own functions to AQL. These functions can be written
-in Javascript, and must be registered via the API;
+in JavaScript, and must be registered via the API;
 see [Registering Functions](../AqlExtending/Functions.html).
 In order to avoid conflicts with existing or future built-in


@@ -51,7 +51,7 @@ encoded body and still let ArangoDB send the non-encoded version, for example:
 ```js
 res.body = 'VGhpcyBpcyBhIHRlc3Q=';
-res.transformations = res.transformations || [ ]; // initialise
+res.transformations = res.transformations || [ ]; // initialize
 res.transformations.push('base64decode'); // will base64 decode the response body
 ```
@@ -355,7 +355,7 @@ Then we send some curl requests to these sample routes:
 @endDocuBlock MOD_08d_routingCurlToOwnConsoleLog
 and the console (and / or the logfile) will show requests and replies.
-*Note that logging doesn't warant the sequence in which these lines
+*Note that logging doesn't warrant the sequence in which these lines
 will appear.*
 !SECTION Application Deployment

View File

@@ -80,7 +80,7 @@ that any existing index definitions for the collection will be preserved even if
As the import file already contains the data in JSON format, attribute names and
data types are fully preserved. As can be seen in the example data, there is no
need for all data records to have the same attribute names or types. Records can
be inhomogeneous.
Please note that by default, _arangoimp_ will import data into the specified
collection in the default database (*_system*). To specify a different database,
@@ -353,4 +353,3 @@ If you import values into *_key*, you should make sure they are valid and unique
When importing data into an edge collection, you should make sure that all import
documents contain *_from* and *_to* and that their values point to existing documents.

View File

@@ -3,7 +3,7 @@
The ArangoDB shell (_arangosh_) is a command-line tool that can be used for
administration of ArangoDB, including running ad-hoc queries.
The _arangosh_ binary is shipped with ArangoDB. It offers a JavaScript shell
environment providing access to the ArangoDB server.
Arangosh can be invoked like this:
@@ -78,9 +78,9 @@ you can paste multiple lines into arangosh, given the first line ends with an op
@endDocuBlock shellPaste
To load your own JavaScript code into the current JavaScript interpreter context, use the load command:
require("internal").load("/tmp/test.js") // <- Linux / macOS
require("internal").load("c:\\tmp\\test.js") // <- Windows
Exiting arangosh can be done using the key combination ```<CTRL> + D``` or by typing ```quit<CR>```

View File

@@ -28,7 +28,7 @@
@startDocuBlock keep_alive_timeout
!SUBSECTION Default API compatibility
@startDocuBlock serverDefaultApi

View File

@@ -87,7 +87,7 @@
!SUBSECTION Collection type
@startDocuBlock collectionType
!SUBSECTION Get the Version of ArangoDB
@startDocuBlock databaseVersion
!SUBSECTION Misc

View File

@@ -34,9 +34,9 @@ STANDARD options:
--use-pager                             use pager
JAVASCRIPT options:
--javascript.check <string>             syntax check JavaScript code from file
--javascript.execute <string>           execute JavaScript code from file
--javascript.execute-string <string>    execute JavaScript code from string
--javascript.startup-directory <string> startup paths containing the JavaScript files
--javascript.unit-tests <string>        do not start as shell, run unit tests instead
--jslint <string>                       do not start as shell, run jslint instead

View File

@@ -80,7 +80,7 @@ If you want more control over the object other apps receive when they load your
}
```
To replicate the same behavior as in the earlier example, the file **exports.js** could look like this:
```js
exports.doodads = require('./doodads');

View File

@@ -3,7 +3,7 @@
Now we are almost ready to write some code.
Hence it is time to introduce the folder structure created by Foxx.
We still follow the example of the app installed at `/example`.
The route to reach this application via HTTP(S) is constructed with the following parts:
* The ArangoDB endpoint `<arangodb>`: (e.g. `http://localhost:8529`)
* The selected database `<db>`: (e.g. `_system`)

View File

@@ -136,7 +136,7 @@ Returns the job id.
Note that if you pass a function for the **backOff** calculation, **success** callback or **failure** callback options, the function will be serialized to the database as a string and therefore must not rely on any external scope or external variables.
When the job is set to automatically repeat, the **failure** callback will only be executed when a run of the job has failed more than **maxFailures** times. Note that if the job fails and **maxFailures** is set, it will be rescheduled according to the **backOff** until it has either failed too many times or completed successfully, before being scheduled according to the **repeatDelay** again. Recovery attempts by **maxFailures** do not count towards **repeatTimes**.
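The rescheduling rules just described can be sketched as plain logic. The function name `computeNextRun` and the exponential back-off formula are assumptions made for illustration; they do not mirror the actual Foxx queue internals:

```javascript
// Illustrative sketch of the rescheduling rules (not the actual queue code).
// Assumes an exponential back-off: the retry delay doubles per failed attempt.
function computeNextRun(job, runFailed) {
  if (runFailed) {
    job.failures += 1;
    if (job.failures > job.maxFailures) {
      return { action: "failed" }; // failure callback fires now
    }
    // recovery attempts do not count towards repeatTimes
    return { action: "retry", delay: job.backOff * Math.pow(2, job.failures - 1) };
  }
  job.failures = 0; // a successful run resets the failure counter
  if (job.repeatTimes === 0) {
    return { action: "done" };
  }
  job.repeatTimes -= 1;
  return { action: "repeat", delay: job.repeatDelay };
}
```

For a job with `maxFailures: 2` and `backOff: 100`, two failed runs yield retry delays of 100 and 200 before the third failure marks the job as failed.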
The **success** and **failure** callbacks receive the following arguments:

View File

@@ -35,4 +35,4 @@ The tools contain:
* [Console API](../Develop/Console.md)
Finally we want to apply some meta information to the Foxx.
How this is done is described in the [Meta information](../Develop/Manifest.md) chapter.

View File

@@ -44,7 +44,7 @@ Any errors raised by the script will be handled depending on how the script was
* if the script was invoked with the **foxx-manager** CLI, it will exit with a non-zero exit status and print the error message.
* if the script was invoked from the HTTP API (e.g. using the web admin frontend), it will return an error response using the exception's `statusCode` property if specified or 500.
* if the script was invoked from the Foxx job queue, the job's failure counter will be incremented and the job will be rescheduled or marked as failed if no attempts remain.
**Examples**
@@ -69,7 +69,7 @@ The following scripts are currently recognized as life-cycle scripts:
!SUBSECTION Setup Script
The **setup** script will be executed without arguments during the installation of your Foxx app:
```sh
unix>foxx-manager install hello-foxx /example

View File

@@ -1,4 +1,4 @@
!CHAPTER Install Applications from GitHub
In this chapter we will make use of the Foxx manager as described [before](README.md).
This time we want to install an app out of our version control hosted on [github.com](https://www.github.com).

View File

@@ -44,7 +44,7 @@ There are currently several applications installed, all of them are system applications
You can safely ignore system applications.
We are now going to install the _hello world_ application. It is called
"hello-foxx" - no surprise there.
unix> foxx-manager install hello-foxx /example
Application hello-foxx version 1.5.0 installed successfully at mount point /example
@@ -177,7 +177,7 @@ But in most cases you will install your own application that is probably not publ
The Application identifier supports several input formats:
* `appname:version` Install an App from the ArangoDB store [Read More](Store.md)
* `git:user/repository:tag` Install an App from GitHub [Read More](Github.md)
* `http(s)://example.com/app.zip` Install an App from a URL [Read More](Remote.md)
* `/usr/tmp/app.zip` Install an App from local file system [Read More](Local.md)
* `EMPTY` Generate a new Application [Read More](Generate.md)

View File

@@ -2,7 +2,7 @@
In this chapter we will make use of the Foxx manager as described [before](README.md).
This time we want to install an app hosted on a server.
Currently the Foxx-manager supports downloads of applications via the HTTP and HTTPS protocols.
!SECTION Remote file format
The file on the remote server has to be a valid Foxx application packed in a zip archive.
@@ -57,7 +57,7 @@ Length Date Time Name
614800 37 files
```
Next you have to make this file publicly available over HTTP or HTTPS on a webserver.
Assume we can download the app at **http://www.example.com/hello.zip**.
!SECTION Install from remote server
@@ -71,7 +71,7 @@ Application hello-foxx version 1.5.0 installed successfully at mount point /exam
ArangoDB will try to download and extract the file stored at the remote location.
This HTTP or HTTPS link can be used in all functions of the Foxx-manager that allow installing Foxx applications:
**install**

View File

@@ -1,4 +1,4 @@
!CHAPTER Install Applications from GitHub
In this chapter we will make use of the Foxx manager as described [before](README.md).
This time we want to install an app out of our version control hosted on github.com.

View File

@@ -82,7 +82,7 @@ The following table lists a mapping of microservice concepts and how they can be
<tr>
<td>Different Databases</td>
<td>Multi-Model</td>
<td>In most setups you need different database technologies because you have several data formats. ArangoDB is a multi-model database that serves many formats. However if you still need another database you can connect to it via HTTP.</td>
</tr>
<tr>
<td>Asynchronous Calls</td>
@@ -107,7 +107,7 @@ The following table lists a mapping of microservice concepts and how they can be
<tr>
<td>Maintenance UI</td>
<td>Built-in Web server</td>
<td>ArangoDB has a built-in Web server, allowing Foxxes to ship a Webpage as their UI. So you can attach a maintenance UI directly to your microservice and have it deployed with it in the same step.</td>
</tr>
<tr>
<td>Easy Setup</td>

View File

@@ -95,7 +95,7 @@ Using keyOptions it is possible to disallow user-specified keys completely, or t
As ArangoDB supports MVCC, documents can exist in more than one revision. The document revision is the MVCC token used to identify a particular revision of a document. It is a string value currently containing an integer number and is unique within the list of document revisions for a single document. Document revisions can be used to conditionally update, replace or delete documents in the database. In order to find a particular revision of a document, you need the document handle and the document revision.
ArangoDB currently uses 64bit unsigned integer values to maintain document revisions internally. When returning document revisions to clients, ArangoDB will put them into a string to ensure the revision id is not clipped by clients that do not support big integers. Clients should treat the revision id returned by ArangoDB as an opaque string when they store or use it locally. This will allow ArangoDB to change the format of revision ids later if this should be required. Clients can use revision ids to perform simple equality/non-equality comparisons (e.g. to check whether a document has changed or not), but they should not use revision ids to perform greater/less than comparisons to check if one document revision is older than another, even if this might work in some cases.
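The warning about greater/less-than comparisons is easy to demonstrate: once revision ids are strings, relational operators compare them lexicographically, not numerically. A minimal JavaScript illustration (the revision values are made up):

```javascript
// Revision ids arrive as strings. Equality checks are safe:
const revA = "9514299";
const revB = "9514299";
console.log(revA === revB); // document unchanged if the ids are equal

// Relational comparisons are not safe: string comparison is lexicographic,
// so "9" compares as greater than "10" even though 9 < 10 numerically.
console.log("9" > "10");
```

This is why revision ids should only ever be compared for equality or inequality.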
!SUBSECTION Edge
@@ -131,9 +131,9 @@ For example, given a fulltext index on the `translations` attribute and the foll
{ translations: "Fox is the English translation of the German word Fuchs" }
{ translations: [ "ArangoDB", "document", "database", "Foxx" ] }
If the index attribute is neither a string, an object nor an array, its contents will not be indexed. When indexing the contents of an array attribute, an array member will only be included in the index if it is a string. When indexing the contents of an object attribute, an object member value will only be included in the index if it is a string. Other data types are ignored and not indexed.
Only words with a (specifiable) minimum length are indexed. Word tokenization is done using the word boundary analysis provided by libicu, which takes into account the selected language provided at server start. Words are indexed in their lower-cased form. The index supports complete match queries (full words) and prefix queries.
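The word-selection rule can be approximated as follows. The real tokenizer uses libicu word-boundary analysis, so the whitespace/punctuation split below is only a rough, hypothetical stand-in:

```javascript
// Rough sketch of fulltext index word selection (the real implementation
// uses libicu word-boundary analysis, not a regex split).
function indexableWords(text, minLength) {
  return text
    .split(/[^A-Za-z0-9]+/)              // crude tokenization stand-in
    .filter(function (w) { return w.length >= minLength; }) // minimum word length
    .map(function (w) { return w.toLowerCase(); });          // indexed lower-cased
}
```

For the first example document above and a minimum length of 3, this would keep `fox`, `the`, `english`, `translation`, `german`, `word` and `fuchs`, but drop `is` and `of`.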
!SUBSECTION Geo Index

View File

@@ -1,7 +1,7 @@
!CHAPTER HTTP Interface for Administration and Monitoring
This is an introduction to ArangoDB's HTTP interface for administration and
monitoring of the server.
<!-- lib/Admin/RestAdminLogHandler.cpp -->

View File

@@ -1,5 +1,5 @@
!CHAPTER HTTP Interface
The following sections describe ArangoDB's HTTP interface for Documents, Databases, Edges and more.
There are also some examples provided for every API action.

View File

@@ -2,8 +2,8 @@
!SUBSECTION Explaining and parsing queries
ArangoDB has an HTTP interface to syntactically validate AQL queries.
Furthermore, it offers an HTTP interface to retrieve the execution plan for any
valid AQL query.
Both functionalities do not actually execute the supplied AQL query, but only
@@ -17,7 +17,7 @@ inspect it and return meta information about it.
!SUBSECTION Query tracking
ArangoDB has an HTTP interface for retrieving the lists of currently
executing AQL queries and the list of slow AQL queries. In order to make meaningful
use of these APIs, query tracking needs to be enabled in the database the HTTP
request is executed for.
@@ -40,7 +40,7 @@ request is executed for.
!SUBSECTION Killing queries
Running AQL queries can also be killed on the server. ArangoDB provides a kill facility
via an HTTP interface. To kill a running query, its id (as returned for the query in the
list of currently running queries) must be specified. The kill flag of the query will
then be set, and the query will be aborted as soon as it reaches a cancellation point.

View File

@@ -25,7 +25,7 @@ result set from the server. In this case no server side cursor will be created.
{ "query" : "FOR u IN users LIMIT 2 RETURN u", "count" : true, "batchSize" : 2 }
HTTP/1.1 201 Created
Content-Type: application/json
{
"hasMore" : false,
@@ -68,7 +68,7 @@ Create and extract first batch:
{ "query" : "FOR u IN users LIMIT 5 RETURN u", "count" : true, "batchSize" : 2 }
HTTP/1.1 201 Created
Content-Type: application/json
{
"hasMore" : true,
@@ -99,7 +99,7 @@ Extract next batch, still have more:
> curl -X PUT --dump - http://localhost:8529/_api/cursor/26011191
HTTP/1.1 200 OK
Content-Type: application/json
{
"hasMore" : true,
@@ -130,7 +130,7 @@ Extract next batch, done:
> curl -X PUT --dump - http://localhost:8529/_api/cursor/26011191
HTTP/1.1 200 OK
Content-Type: application/json
{
"hasMore" : false,
@@ -154,7 +154,7 @@ Do not do this because *hasMore* now has a value of false:
> curl -X PUT --dump - http://localhost:8529/_api/cursor/26011191
HTTP/1.1 404 Not Found
Content-Type: application/json
{
"errorNum": 1600,
@@ -169,7 +169,7 @@ content-type: application/json
The `_api/cursor` endpoint can also be used to execute modifying queries.
The following example appends a value into the array `arrayValue` of the document
with key `test` in the collection `documents`. Normal update behavior is to
replace the attribute completely, and using an update AQL query with the `PUSH()`
function allows appending to the array.
@@ -178,7 +178,7 @@ curl --data @- -X POST --dump http://127.0.0.1:8529/_api/cursor
{ "query": "FOR doc IN documents FILTER doc._key == @myKey UPDATE doc._key WITH { arrayValue: PUSH(doc.arrayValue, @value) } IN documents","bindVars": { "myKey": "test", "value": 42 } }
HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8
{
"result" : [],

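The *hasMore* / PUT loop shown in the transcripts above can be sketched client-side. `fetchFirstBatch` and `fetchNextBatch` are hypothetical stand-ins for the `POST /_api/cursor` and `PUT /_api/cursor/<id>` calls; the simulated batches mirror the five-document, `batchSize: 2` example:

```javascript
// Client-side sketch of draining a cursor. fetchFirstBatch / fetchNextBatch
// are hypothetical stand-ins for POST /_api/cursor and PUT /_api/cursor/<id>.
function drainCursor(fetchFirstBatch, fetchNextBatch) {
  const docs = [];
  let response = fetchFirstBatch();      // POST with query + batchSize
  docs.push(...response.result);
  while (response.hasMore) {             // keep issuing PUT until exhausted
    response = fetchNextBatch(response.id);
    docs.push(...response.result);
  }
  return docs;                           // never PUT again once hasMore is false
}

// Simulated server responses for 5 documents with batchSize 2:
const batches = [
  { id: "26011191", result: [1, 2], hasMore: true },
  { id: "26011191", result: [3, 4], hasMore: true },
  { result: [5], hasMore: false }
];
let i = 0;
const all = drainCursor(function () { return batches[i++]; },
                        function () { return batches[i++]; });
```

Note that the loop stops as soon as *hasMore* is false, which is exactly why the final PUT in the transcript above returns 404.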
View File

@@ -2,14 +2,14 @@
!SUBSECTION AQL User Functions Management
This is an introduction to ArangoDB's HTTP interface for managing AQL
user functions. AQL user functions are a means to extend the functionality
of ArangoDB's query language (AQL) with user-defined JavaScript code.
For an overview of how AQL user functions work, please refer to
[Extending AQL](../AqlExtending/README.md).
The HTTP interface provides an API for adding, deleting, and listing
previously registered AQL user functions.
All user functions managed through this interface will be stored in the

View File

@@ -13,7 +13,7 @@ request results do not depend on each other.
Clients can use ArangoDB's batch API by issuing a multipart HTTP POST
request to the URL */_api/batch* handler. The handler will accept the
request if the Content-Type is *multipart/form-data* and a boundary
string is specified. ArangoDB will then decompose the batch request
into its individual parts using this boundary. This also means that
the boundary string itself must not be contained in any of the parts.
@@ -28,20 +28,20 @@ parts as well.
The server expects each part message to start with exactly the
following "header":
Content-Type: application/x-arango-batchpart
You can optionally specify a *Content-Id* "header" to uniquely
identify each part message. The server will return the *Content-Id* in
its response if it is specified. Otherwise, the server will not send a
Content-Id "header" back. The server will not validate the uniqueness
of the Content-Id. After the mandatory *Content-Type* and the
optional *Content-Id* header, two Windows line breaks
(i.e. *\r\n\r\n*) must follow. Any deviation from this structure
might lead to the part being rejected or incorrectly interpreted. The
part request payload, formatted as a regular HTTP request, must follow
the two Windows line breaks literal directly.
Note that the literal *Content-Type: application/x-arango-batchpart*
technically is the header of the MIME part, and the HTTP request
(including its headers) is the body part of the MIME part.
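The part layout just described (mandatory Content-Type header, optional Content-Id, two Windows line breaks, then the embedded HTTP request) can be sketched as a small builder. This is an illustrative client-side helper, not ArangoDB code:

```javascript
// Illustrative builder for a batch request body (not ArangoDB code).
// Each part: boundary line, mandatory Content-Type header, optional
// Content-Id, a blank line (so headers end with \r\n\r\n), then the
// embedded HTTP request.
function buildBatchBody(boundary, parts) {
  let body = "";
  parts.forEach(function (part, i) {
    body += "--" + boundary + "\r\n";
    body += "Content-Type: application/x-arango-batchpart\r\n";
    body += "Content-Id: " + (i + 1) + "\r\n";
    body += "\r\n";        // ends the part headers with two Windows line breaks
    body += part + "\r\n"; // embedded HTTP request including its payload
  });
  return body + "--" + boundary + "--"; // closing boundary
}

const body = buildBatchBody("XXXsubpartXXX", [
  'POST /_api/document?collection=xyz HTTP/1.1\r\n\r\n{"a":1}'
]);
```

The resulting string matches the shape of the curl examples below: every part starts with `--XXXsubpartXXX`, and the whole body ends with `--XXXsubpartXXX--`.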
@@ -58,23 +58,23 @@ creation operations. The boundary used in this example is
*Examples*
```js
> curl -X POST --data-binary @- --header "Content-Type: multipart/form-data; boundary=XXXsubpartXXX" http://localhost:8529/_api/batch
--XXXsubpartXXX
Content-Type: application/x-arango-batchpart
Content-Id: 1
POST /_api/document?collection=xyz&createCollection=true HTTP/1.1
{"a":1,"b":2,"c":3}
--XXXsubpartXXX
Content-Type: application/x-arango-batchpart
Content-Id: 2
POST /_api/document?collection=xyz HTTP/1.1
{"a":1,"b":2,"c":3,"d":4}
--XXXsubpartXXX
Content-Type: application/x-arango-batchpart
Content-Id: 3
POST /_api/document?collection=xyz HTTP/1.1
@@ -102,38 +102,38 @@ operation might also return arbitrary HTTP headers and a body/payload:
```js
HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Type: multipart/form-data; boundary=XXXsubpartXXX
Content-Length: 1055
--XXXsubpartXXX
Content-Type: application/x-arango-batchpart
Content-Id: 1
HTTP/1.1 202 Accepted
Content-Type: application/json; charset=utf-8
ETag: "9514299"
Content-Length: 53
{"error":false,"_id":"xyz/9514299","_key":"9514299","_rev":"9514299"}
--XXXsubpartXXX
Content-Type: application/x-arango-batchpart
Content-Id: 2
HTTP/1.1 202 Accepted
Content-Type: application/json; charset=utf-8
ETag: "9579835"
Content-Length: 53
{"error":false,"_id":"xyz/9579835","_key":"9579835","_rev":"9579835"}
--XXXsubpartXXX
Content-Type: application/x-arango-batchpart
Content-Id: 3
HTTP/1.1 202 Accepted
Content-Type: application/json; charset=utf-8
ETag: "9645371"
Content-Length: 53
{"error":false,"_id":"xyz/9645371","_key":"9645371","_rev":"9645371"}
--XXXsubpartXXX--
@@ -152,15 +152,15 @@ requests that produced errors:
*Examples*
```js
> curl -X POST --data-binary @- --header "Content-type: multipart/form-data; boundary=XXXsubpartXXX" http://localhost:8529/_api/batch
--XXXsubpartXXX
Content-type: application/x-arango-batchpart
POST /_api/document?collection=nonexisting
{"a":1,"b":2,"c":3}
--XXXsubpartXXX
Content-type: application/x-arango-batchpart
POST /_api/document?collection=xyz
@@ -177,24 +177,24 @@ header of the overall response is *1*:
```js
HTTP/1.1 200 OK
x-arango-errors: 1
Content-type: multipart/form-data; boundary=XXXsubpartXXX
Content-length: 711
--XXXsubpartXXX
Content-type: application/x-arango-batchpart
HTTP/1.1 404 Not Found
Content-type: application/json; charset=utf-8
Content-length: 111
{"error":true,"code":404,"errorNum":1203,"errorMessage":"collection \/_api\/collection\/nonexisting not found"}
--XXXsubpartXXX
Content-type: application/x-arango-batchpart
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
Etag: "9841979"
Content-length: 53
{"error":false,"_id":"xyz/9841979","_key":"9841979","_rev":"9841979"}
--XXXsubpartXXX--


@@ -16,9 +16,9 @@ curl --data-binary @- -X POST --dump - "http://localhost:8529/_api/import?collec
[ "Jane", "Doe", 31, "female" ]
HTTP/1.1 201 Created
Server: triagens GmbH High-Performance HTTP Server
Connection: Keep-Alive
Content-type: application/json; charset=utf-8
{"error":false,"created":2,"empty":0,"errors":0}
```
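The line-per-array body shown above can be assembled programmatically. The following JavaScript sketch is illustrative only (the function name and sample data are invented, not part of the API): the first line of the body lists the attribute names, and each following line holds one document's values.

```javascript
// Sketch: build a request body for the array-per-line import format.
// The first line is the attribute names, each further line one document.
function buildImportBody(attributes, rows) {
  const lines = [JSON.stringify(attributes)];
  for (const row of rows) {
    if (row.length !== attributes.length) {
      throw new Error("row length does not match attribute count");
    }
    lines.push(JSON.stringify(row));
  }
  // one JSON array per line, separated by newlines
  return lines.join("\n");
}

const body = buildImportBody(
  ["firstName", "lastName", "age", "gender"],
  [["Joe", "Public", 42, "male"], ["Jane", "Doe", 31, "female"]]
);
```

The resulting `body` string can then be sent as the payload of the `POST /_api/import` request shown above.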


@@ -47,9 +47,9 @@ curl --data-binary @- -X POST --dump - "http://localhost:8529/_api/import?type=d
{ "type" : "bird", "name" : "robin" }
HTTP/1.1 201 Created
Server: triagens GmbH High-Performance HTTP Server
Connection: Keep-Alive
Content-type: application/json; charset=utf-8
{"error":false,"created":2,"empty":0,"errors":0}
```


@@ -2,7 +2,7 @@
!SUBSECTION Collections
This is an introduction to ArangoDB's HTTP interface for collections.
!SUBSUBSECTION Collection


@@ -1,6 +1,6 @@
!CHAPTER Database Management
This is an introduction to ArangoDB's HTTP interface for managing databases.
The HTTP interface for databases provides operations to create and drop
individual databases. These are mapped to the standard HTTP methods *POST*


@@ -23,7 +23,7 @@ Example:
**Note**: The following examples use the short URL format for brevity.
Each document also has a [document revision](../Glossary/index.html#document_revision) or Etag, which is returned in the
"ETag" HTTP header when requesting a document.
If you obtain a document using *GET* and you want to check if a newer revision


@@ -28,7 +28,7 @@ An example document:
```
All documents contain special attributes: the document handle in `_id`, the
document's unique key in `_key` and the Etag aka [document revision](../Glossary/index.html#document_revision) in
`_rev`. The value of the `_key` attribute can be specified by the user when
creating a document. `_id` and `_key` values are immutable once the document
has been created. The `_rev` value is maintained by ArangoDB autonomously.


@@ -1,6 +1,6 @@
!CHAPTER General Graphs
This chapter describes the HTTP interface for the multi-collection graph module.
It allows you to define a graph that is spread across several edge and document collections.
This allows you to structure your models in line with your domain and group them logically in collections, giving you the power to query them in the same graph queries.
There is no need to include the referenced collections within the query; this module will handle it for you.


@@ -54,8 +54,8 @@ unix> curl -X POST --data-binary @- --dump - http://localhost:8529/_api/graph/gr
{"_key":"edge1","_from":"vert2","_to":"vert1","optional1":"val1"}
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
Etag: 144630753
{
"edge" : {
@@ -94,9 +94,9 @@ Revision of an edge
`if-none-match (string,optional)`
If the "If-None-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has a different revision than the given Etag. Otherwise an HTTP 304 is returned.
if-match (string,optional)
If the "If-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
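These headers follow the usual HTTP conditional-request semantics. As an illustration only (this is not ArangoDB source code, just the decision rule restated), the revision check can be sketched as:

```javascript
// Sketch: which HTTP status a conditional read produces, given the
// document's current revision and the Etag supplied in the header.
function conditionalStatus(documentRev, header, etag) {
  if (header === "if-none-match") {
    // the document is only returned if its revision differs from the Etag
    return documentRev === etag ? 304 : 200;
  }
  if (header === "if-match") {
    // the document is only returned if its revision equals the Etag
    return documentRev === etag ? 200 : 412;
  }
  return 200; // no conditional header: the document is always returned
}
```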
!SUBSECTION Description
Returns an object with an attribute edge containing an array of all edge properties.
@@ -125,8 +125,8 @@ is returned if the graph or edge was not found. The response body contains an er
unix> curl --dump - http://localhost:8529/_api/graph/graph/edge/edge1
HTTP/1.1 200 OK
Content-type: application/json; charset=utf-8
Etag: 147579873
{
"edge" : {
@@ -175,7 +175,7 @@ The call expects a JSON object as body with the new edge properties.
`if-match (string,optional)`
If the "If-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
Replaces the optional edge properties.
@@ -207,8 +207,8 @@ unix> curl -X PUT --data-binary @- --dump - http://localhost:8529/_api/graph/gra
{"optional1":"val2"}
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
Etag: 154526689
{
"edge" : {
@@ -261,7 +261,7 @@ The call expects a JSON object as body with the properties to patch.
`if-match (string,optional)`
If the "If-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
@@ -295,8 +295,8 @@ unix> curl -X PATCH --data-binary @- --dump - http://localhost:8529/_api/graph/g
{"optional3":"val3"}
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
Etag: 158065633
{
"edge" : {
@@ -340,7 +340,7 @@ Revision of an edge
`if-match (string,optional)`
If the "If-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
@@ -370,7 +370,7 @@ is returned if the graph or the edge was not found. The response body contains a
unix> curl -X DELETE --dump - http://localhost:8529/_api/graph/graph/edge/edge1
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
{
"deleted" : true,
@@ -431,7 +431,7 @@ unix> curl -X POST --data-binary @- --dump - http://localhost:8529/_api/graph/gr
{"batchSize" : 100, "filter" : {"direction" : "any", "properties":[] }}
HTTP/1.1 201 Created
Content-type: application/json; charset=utf-8
{
"result" : [
@@ -461,7 +461,7 @@ unix> curl -X POST --data-binary @- --dump - http://localhost:8529/_api/graph/gr
{"batchSize" : 100, "filter" : {"direction" : "out", "properties":[ { "key": "optional1", "value": "val2", "compare" : "==" } ] }}
HTTP/1.1 201 Created
Content-type: application/json; charset=utf-8
{
"result" : [
@@ -540,7 +540,7 @@ unix> curl -X POST --data-binary @- --dump - http://localhost:8529/_api/graph/gr
{"batchSize" : 100, "filter" : { "direction" : "any" }}
HTTP/1.1 201 Created
Content-type: application/json; charset=utf-8
{
"result" : [


@@ -2,7 +2,7 @@
**Warning: This Chapter is Deprecated**
This API is deprecated and will be removed soon.
Please use [General Graphs](../HttpGharial/README.md) instead.
`POST /_api/graph`*(create graph)*
@@ -17,7 +17,7 @@ Wait until document has been synced to disk.
`graph (json,required)`
The call expects a JSON object as body with the following attributes: _key: The name of the new graph. vertices: The name of the vertices collection. edges: The name of the edge collection.
!SUBSECTION Description
@@ -45,8 +45,8 @@ unix> curl -X POST --data-binary @- --dump - http://localhost:8529/_api/graph/
{"_key":"graph","vertices":"vertices","edges":"edges"}
HTTP/1.1 201 Created
Content-type: application/json; charset=utf-8
Etag: 103998433
{
"graph" : {
@@ -74,12 +74,12 @@ The name of the graph.
`If-None-Match (string,optional)`
If graph-name is specified, then this header can be used to check whether a specific graph has changed or not.
If the "If-None-Match" header is given, then it must contain exactly one Etag. The document is returned if it has a different revision than the given Etag. Otherwise an HTTP 304 is returned.
`If-Match (string,optional)`
If graph-name is specified, then this header can be used to check whether a specific graph has changed or not.
If the "If-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
@@ -113,8 +113,8 @@ get graph by name
unix> curl --dump - http://localhost:8529/_api/graph/graph
HTTP/1.1 200 OK
Content-type: application/json; charset=utf-8
Etag: 105440225
{
"graph" : {
@@ -135,7 +135,7 @@ get all graphs
unix> curl --dump - http://localhost:8529/_api/graph
HTTP/1.1 200 OK
Content-type: application/json; charset=utf-8
{
"graphs" : [
@@ -171,7 +171,7 @@ The name of the graph
`If-Match (string,optional)`
If the "If-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
@@ -203,7 +203,7 @@ delete graph by name
unix> curl -X DELETE --dump - http://localhost:8529/_api/graph/graph
HTTP/1.1 200 OK
Content-type: application/json; charset=utf-8
{
"deleted" : true,


@@ -50,8 +50,8 @@ unix> curl -X POST --data-binary @- --dump - http://localhost:8529/_api/graph/gr
{"_key":"v1","optional1":"val1"}
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
Etag: 112518113
{
"vertex" : {
@@ -87,11 +87,11 @@ Revision of a vertex
`If-None-Match (string,optional)`
If the "If-None-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has a different revision than the given Etag. Otherwise an HTTP 304 is returned.
`If-Match (string,optional)`
If the "If-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
@@ -123,8 +123,8 @@ get vertex properties by name
unix> curl --dump - http://localhost:8529/_api/graph/graph/vertex/v1
HTTP/1.1 200 OK
Content-type: application/json; charset=utf-8
Etag: 115532769
{
"vertex" : {
@@ -170,7 +170,7 @@ The call expects a JSON object as body with the new vertex properties.
`if-match (string,optional)`
If the "If-Match" header is given, then it must contain exactly one Etag. The document is updated, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
@@ -202,8 +202,8 @@ unix> curl -X PUT --data-binary @- --dump - http://localhost:8529/_api/graph/gra
{"optional1":"val2"}
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
Etag: 120579041
{
"vertex" : {
@@ -253,7 +253,7 @@ The call expects a JSON object as body with the properties to patch.
`if-match (string,optional)`
If the "If-Match" header is given, then it must contain exactly one Etag. The document is updated, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
@@ -287,8 +287,8 @@ unix> curl -X PATCH --data-binary @- --dump - http://localhost:8529/_api/graph/g
{"optional1":"vertexPatch"}
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
Etag: 123659233
{
"vertex" : {
@@ -305,8 +305,8 @@ unix> curl -X PATCH --data-binary @- --dump - http://localhost:8529/_api/graph/g
{"optional1":null}
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
Etag: 124117985
{
"vertex" : {
@@ -346,7 +346,7 @@ Revision of a vertex
`If-Match (string,optional)`
If the "If-Match" header is given, then it must contain exactly one Etag. The document is returned, if it has the same revision as the given Etag. Otherwise an HTTP 412 is returned. As an alternative you can supply the Etag in an attribute rev in the URL.
!SUBSECTION Description
@@ -376,7 +376,7 @@ is returned if the graph or the vertex was not found. The response body contains
unix> curl -X DELETE --dump - http://localhost:8529/_api/graph/graph/vertex/v1
HTTP/1.1 202 Accepted
Content-type: application/json; charset=utf-8
{
"deleted" : true,


@@ -2,7 +2,7 @@
!SUBSECTION Indexes
This is an introduction to ArangoDB's HTTP interface for indexes in
general. There are special sections for various index types.
!SUBSUBSECTION Index


@@ -2,7 +2,7 @@
!SUBSECTION Simple Queries
This is an introduction to ArangoDB's HTTP interface for simple queries.
Simple queries can be used if the query condition is straightforward,
i.e., a document reference, all documents, a query-by-example, or a simple geo


@@ -1,6 +1,6 @@
!CHAPTER HTTP Tasks Interface
The following describes ArangoDB's HTTP interface for tasks.
There are also some examples provided for every API action.


@@ -8,8 +8,8 @@ the server.
Transactions in ArangoDB do not offer separate *BEGIN*, *COMMIT* and *ROLLBACK*
operations as they are available in many other database products.
Instead, ArangoDB transactions are described by a JavaScript function, and the
code inside the JavaScript function will then be executed transactionally.
At the end of the function, the transaction is automatically committed, and all
changes done by the transaction will be persisted. If an exception is thrown
during transaction execution, all operations performed in the transaction are
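For illustration, a transaction description as it might be posted to the server could look like the following sketch (the collection name, keys and values are invented for the example; the `action` property carries the JavaScript function that runs transactionally):

```javascript
// Sketch: a transaction description object. The function source is sent
// as a string; if the function throws, all of its writes are rolled back.
const transaction = {
  collections: { write: ["accounts"] },
  action: String(function () {
    var db = require("internal").db;
    db.accounts.save({ _key: "a1", balance: 100 });
    db.accounts.update("accounts/a1", { balance: 90 });
    return "committed";
  })
};

// serialized payload for the transaction endpoint
const payload = JSON.stringify(transaction);
```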


@@ -1,6 +1,6 @@
!CHAPTER HTTP Interface for User Management
This is an introduction to ArangoDB's HTTP interface for managing users.
The interface provides a simple means to add, update, and remove users. All
users managed through this interface will be stored in the system collection


@@ -24,7 +24,7 @@ If the index attribute is neither a string, an object nor an array, its contents will
not be indexed. When indexing the contents of an array attribute, an array member will
only be included in the index if it is a string. When indexing the contents of an object
attribute, an object member value will only be included in the index if it is a string.
Other data types are ignored and not indexed.
!SECTION Accessing Fulltext Indexes from the Shell


@@ -24,7 +24,7 @@ to produce correct results, regardless of whether or which index is used to sati
If it is unsure about whether using an index will violate the policy, it will not make use of the index.
!SUBSECTION Troubleshooting
When in doubt about whether and which indexes will be used for executing a given AQL query, use
the `explain()` method for the statement as follows (from the ArangoShell):


@@ -230,8 +230,8 @@ separate document attributes (latitude and longitude) or a single array attribut
contains both latitude and longitude. Latitude and longitude must be numeric values.
The geo index provides operations to find documents with coordinates nearest to a given
comparison coordinate, and to find documents with coordinates that are within a specifiable
radius around a comparison coordinate.
The geo index is used via dedicated functions in AQL or the simple queries, but will
not be enabled for other types of queries or conditions.
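For background, the distance evaluated by a within-radius query is the great-circle distance between two coordinates. A plain JavaScript sketch (independent of ArangoDB's actual implementation) using the haversine formula:

```javascript
// Sketch: great-circle distance between two points given in degrees,
// returned in meters (haversine formula, mean earth radius).
function haversineMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean earth radius in meters
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

A document is "within radius r" of the comparison coordinate exactly when this distance is at most r.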
@@ -241,7 +241,7 @@ not be enabled for other types of queries or conditions.
A fulltext index can be used to find words, or prefixes of words inside documents.
A fulltext index can be created on a single attribute only, and will index all words
contained in documents that have a textual value in that attribute. Only words with a (specifiable)
minimum length are indexed. Word tokenization is done using the word boundary analysis
provided by libicu, which takes into account the selected language provided at
server start. Words are indexed in their lower-cased form. The index supports complete


@@ -135,7 +135,7 @@ Sparse skiplist indexes can be used for sorting if the optimizer can safely dete
index range does not include `null` for any of the index attributes.
Note that if you intend to use [joins](../AqlExamples/Join.html) it may be clever
to use non-sparsity and maybe even uniqueness for that attribute, else all items containing
the NULL-value will match against each other and thus produce large results.
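The effect described above can be sketched in plain JavaScript (the collection contents are invented for the example): every document with a `null` join attribute matches every other `null` on the opposite side, so the result grows quadratically in the number of nulls.

```javascript
// Sketch: a naive equality join; null === null matches, inflating results.
function join(left, right, attr) {
  const out = [];
  for (const l of left) {
    for (const r of right) {
      if (l[attr] === r[attr]) out.push([l, r]);
    }
  }
  return out;
}

const users = [{ name: "A", ref: null }, { name: "B", ref: null }, { name: "C", ref: 1 }];
const items = [{ id: "x", ref: null }, { id: "y", ref: null }, { id: "z", ref: 1 }];
// the two null users each match the two null items (4 pairs) plus the
// one genuine match, giving 5 result pairs instead of 1
```

Making the attribute non-sparse indexed and unique (or filtering out nulls) avoids this blowup.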


@@ -37,7 +37,7 @@ db._index("demo/362549736");
@startDocuBlock collectionGetIndexes
!SUBSECTION Creating an index
Indexes can be created using the general method *ensureIndex*, or *ensure&lt;type&gt;Index* (see the respective subchapters, which also describe the behavior of each index type in more detail).
<!-- arangod/V8Server/v8-vocindex.cpp -->
@startDocuBlock collectionEnsureIndex


@@ -68,7 +68,7 @@ Download the latest source using GIT:
 Note: if you only plan to compile ArangoDB locally and do not want to modify or push
 any changes, you can speed up cloning substantially by using the *--single-branch* and
-*--depth* parameters for the clone command as follws:
+*--depth* parameters for the clone command as follows:
 git clone --single-branch --depth 1 git://github.com/arangodb/arangodb.git
@@ -167,7 +167,7 @@ parameter once to perform required upgrade or initialization tasks.
 !SECTION Devel Version
-Note: a seperate [blog article](http://jsteemann.github.io/blog/2014/10/16/how-to-compile-arangodb-from-source/)
+Note: a separate [blog article](http://jsteemann.github.io/blog/2014/10/16/how-to-compile-arangodb-from-source/)
 is available that describes how to compile ArangoDB from source on Ubuntu.
 !SUBSECTION Basic System Requirements


@@ -13,7 +13,7 @@ Installing for a single user: Select a different directory during
 installation. For example *c:\Users\<Username>\ArangoDB* or *c:\ArangoDB*.
 Installing for multiple users: Keep the default directory. After the
-installation edit the file *<ROOTDIR>\etc\Arangodb\arangod.conf*. Adjust the
+installation edit the file *<ROOTDIR>\etc\ArangoDB\arangod.conf*. Adjust the
 *directory* and *app-path* so that these paths point into your home directory.
 [database]


@@ -1,6 +1,6 @@
 !CHAPTER JavaScript Modules
-!SUBSECTION Introduction to Javascript Modules
+!SUBSECTION Introduction to JavaScript Modules
 The ArangoDB uses a [CommonJS](http://wiki.commonjs.org/wiki)
 compatible module and package concept. You can use the function *require* in


@@ -3,7 +3,7 @@ The query module provides the infrastructure for working with currently running
 !SUBSECTION Properties
-`queries.properties()` Returns the servers current query tracking configuration; we change the slow query threshhold to get better results:
+`queries.properties()` Returns the servers current query tracking configuration; we change the slow query threshold to get better results:
 @startDocuBlockInline QUERY_01_properyOfQueries
 @EXAMPLE_ARANGOSH_OUTPUT{QUERY_01_properyOfQueries}


@@ -92,7 +92,7 @@ outside of its own scope. The callback function can still define and
 use its own variables.
 To pass parameters to a task, the *params* attribute can be set when
-registering a task. Note that the parameters are limited to datatypes
+registering a task. Note that the parameters are limited to data types
 usable in JSON (meaning no callback functions can be passed as parameters
 into a task):
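The JSON-only restriction on task parameters described in this hunk can be sketched in plain JavaScript (an illustration only, not ArangoDB code): a callback function simply does not survive JSON serialization.

```javascript
// Functions are not representable in JSON, so they are silently dropped
// when a parameters object is serialized -- which is why task params must
// stick to JSON data types.
const params = { retries: 3, onDone: () => console.log("done") };

const roundTripped = JSON.parse(JSON.stringify(params));

console.log(roundTripped); // { retries: 3 } -- the function is gone
```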


@@ -23,7 +23,7 @@ following attribute naming constraints are not violated:
 as desired, provided the name is a valid UTF-8 string. For maximum
 portability, special characters should be avoided though. For example,
 attribute names may contain the dot symbol, but the dot has a special meaning
-in Javascript and also in AQL, so when using such attribute names in one of
+in JavaScript and also in AQL, so when using such attribute names in one of
 these languages, the attribute name would need to be quoted by the end
 user. This will work but requires more work so it might be better to use
 attribute names which don't require any quoting/escaping in all languages


@@ -30,7 +30,7 @@ guaranteed any result order either.
 !SECTION AQL Improvements
-AQL offers functionality to work with dates. Dates are no datatypes of their own
+AQL offers functionality to work with dates. Dates are no data types of their own
 in AQL (neither they are in JSON, which is often used as a format to ship data
 into and out of ArangoDB). Instead, dates in AQL are internally represented by
 either numbers (timestamps) or strings. The date functions in AQL provide
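The two internal representations mentioned in the hunk above, numeric timestamps and ISO 8601 date strings, can be illustrated with plain JavaScript (a sketch of the general idea, not of AQL's own date functions):

```javascript
// A date can be carried as a numeric timestamp (milliseconds since the
// Unix epoch) or as an ISO 8601 string; the two forms are interconvertible.
const ts = Date.parse("2014-05-07T14:19:09.522Z");
console.log(ts); // 1399472349522

const iso = new Date(ts).toISOString();
console.log(iso); // 2014-05-07T14:19:09.522Z
```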
@@ -213,7 +213,7 @@ also supports the installation of ArangoDB as a service.
 !SECTION Fixes for 32 bit systems
-Several issues have been fixed that occured only when using ArangoDB on a 32 bits
+Several issues have been fixed that occurred only when using ArangoDB on a 32 bits
 operating system, specifically:
 - a crash in a third party component used to manage cluster data
@@ -244,7 +244,7 @@ source. For instance GNU CC of at least version 4.8.
 !SECTION Miscellaneous Improvements
 - Cancelable asynchronous jobs: several potentially long-running jobs can now be
-cancelled via an explicit cancel operation. This allows stopping long-running
+canceled via an explicit cancel operation. This allows stopping long-running
 queries, traversals or scripts without shutting down the complete ArangoDB
 process. Job cancellation is provided for asynchronously executed jobs as is
 described in @ref HttpJobCancel.


@@ -91,7 +91,7 @@ special replication logger on the master. The replication logger caused an extra
 write operation into the *_replication* system collection for each actual write
 operation. This extra write is now superfluous. Instead, slaves can read directly
 from the master's write-ahead log to get informed about most recent data changes.
-This removes the need to store data-modication operations in the *_replication*
+This removes the need to store data-modification operations in the *_replication*
 collection altogether.
 For the configuration of the write-ahead log, please refer to [Write-ahead log options](../ConfigureArango/Wal.md).


@@ -66,7 +66,7 @@ the more intuitive variant:
 FOR i IN ... FILTER i NOT IN [ 23, 42 ] ...
-!SUBSUBSECTION Improvements of builtin functions
+!SUBSUBSECTION Improvements of built-in functions
 The following AQL string functions have been added:
@@ -74,7 +74,7 @@ The following AQL string functions have been added:
 - `RTRIM(value, characters)`: right-trims a string value
 - `FIND_FIRST(value, search, start, end)`: finds the first occurrence
 of a search string
-- `FIND_LAST(value, search, start, end)`: finds the last occurence of a
+- `FIND_LAST(value, search, start, end)`: finds the last occurrence of a
 search string
 - `SPLIT(value, separator, limit) `: splits a string into an array,
 using a separator
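Rough plain-JavaScript analogues of the string functions listed in the hunk above (an illustration only; the optional *start*/*end* bounds and AQL's exact semantics are left out):

```javascript
// Simplified stand-ins built on standard String methods.
const findFirst = (value, search) => value.indexOf(search);       // like FIND_FIRST
const findLast = (value, search) => value.lastIndexOf(search);    // like FIND_LAST
const split = (value, separator, limit) => value.split(separator, limit); // like SPLIT

console.log(findFirst("foobarfoo", "foo")); // 0
console.log(findLast("foobarfoo", "foo")); // 6
console.log(split("a,b,c", ",", 2)); // [ 'a', 'b' ]
```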
@@ -294,7 +294,7 @@ ArangoDB 2.3 provides Foxx apps for user management and salted hash-based authen
 Foxx now provides async workers via the Foxx Queues API. Jobs enqueued in a job queue will be executed asynchronously outside of the request/response cycle of Foxx controllers and can be used to communicate with external services or perform tasks that take a long time to complete or may require multiple attempts.
-Jobs can be scheduled in advance or set to be executed immediately, the number of retry attempts, the retry delay as well as sucess and failure handlers can be defined for each job individually. Job types that integrate various external services for transactional e-mails, logging and user tracking can be found in the Foxx app registry.
+Jobs can be scheduled in advance or set to be executed immediately, the number of retry attempts, the retry delay as well as success and failure handlers can be defined for each job individually. Job types that integrate various external services for transactional e-mails, logging and user tracking can be found in the Foxx app registry.
 !SUBSECTION Misc


@@ -234,7 +234,7 @@ provides a better overview of the app, e.g.:
 * API documentation
 Installing a new Foxx application on the server is made easy using the new
-`Add application` button. The `Add application` dialogue provides all the
+`Add application` button. The `Add application` dialog provides all the
 features already available in the `foxx-manager` console application plus some more:
 * install a Foxx application from Github
@@ -272,10 +272,10 @@ easily by setting the `includeSystem` attribute to `false` in the following comm
 * replication.applier.properties({ includeSystem: false });
 This will exclude all system collections (including `_aqlfunctions`, `_graphs` etc.)
-from the initial synchronisation and the continuous replication.
+from the initial synchronization and the continuous replication.
 If this is also undesired, it is also possible to specify a list of collections to
-exclude from the initial synchronisation and the continuous replication using the
+exclude from the initial synchronization and the continuous replication using the
 `restrictCollections` attribute, e.g.:
 ```js


@@ -98,7 +98,7 @@ the lookup value cannot be `null`, a sparse index may be used. When uncertain, t
 will not make use of a sparse index in a query in order to produce correct results.
 For example, the following queries cannot use a sparse index on `attr` because the optimizer
-will not know beforehand whether the comparsion values for `doc.attr` will include `null`:
+will not know beforehand whether the comparison values for `doc.attr` will include `null`:
 FOR doc In collection
 FILTER doc.attr == SOME_FUNCTION(...)
@@ -154,7 +154,7 @@ provided there is an index on `doc.value`):
 SORT doc.value
 RETURN doc
-The AQL optimizer rule "use-index-for-sort" now also removes sort in case the sort critieria
+The AQL optimizer rule "use-index-for-sort" now also removes sort in case the sort criteria
 excludes the left-most index attributes, but the left-most index attributes are used
 by the index for equality-only lookups.
@@ -191,7 +191,7 @@ data-modification part can run in lockstep with the data retrieval part of the q
 or if the data retrieval part must be executed and completed first before the data-modification
 can start.
-Executing both data retrieval and data-modifcation in lockstep allows using much smaller
+Executing both data retrieval and data-modification in lockstep allows using much smaller
 buffers for intermediate results, reducing the memory usage of queries. Not all queries are
 eligible for this optimization, and the optimizer will only apply the optimization when it can
 safely detect that the data-modification part of the query will not modify data to be found
@@ -247,7 +247,7 @@ Now the path on filesystem is identical to the URL (except the appended APP):
 The routing of Foxx has been exposed to major internal changes we adjusted because of user feedback.
 This allows us to set the development mode per mountpoint without having to change paths and hold
-apps at seperate locations.
+apps at separate locations.
 !SUBSECTION Foxx Development mode


@@ -12,7 +12,7 @@ Key features include:
 * Use ArangoDB as an **application server** and fuse your application and database together for maximal throughput
 * JavaScript for all: **no language zoo**, you can use one language from your browser to your back-end
 * ArangoDB is **multi-threaded** - exploit the power of all your cores
-* **Flexible data modelling**: model your data as combination of key-value pairs, documents or graphs - perfect for social relations
+* **Flexible data modeling**: model your data as combination of key-value pairs, documents or graphs - perfect for social relations
 * Free **index choice**: use the correct index for your problem, be it a skip list or a fulltext search
 * Configurable **durability**: let the application decide if it needs more durability or more performance
 * No-nonsense storage: ArangoDB uses all of the power of **modern storage hardware**, like SSD and large caches
@@ -27,11 +27,11 @@ You can also go to our [cookbook](https://docs.arangodb.com/cookbook) and look t
 !SUBSECTION Community
-If you have questions regarding Arangodb, Foxx, drivers, or this documentation don't hesitate to contact us on:
+If you have questions regarding ArangoDB, Foxx, drivers, or this documentation don't hesitate to contact us on:
-- [github](https://github.com/arangodb/arangodb/issues) for issues and missbehaviour or [pull requests](https://www.arangodb.com/community/)
+- [Github](https://github.com/arangodb/arangodb/issues) for issues and misbehavior or [pull requests](https://www.arangodb.com/community/)
-- [google groups](https://groups.google.com/forum/?hl=de#!forum/arangodb) for discussions about ArangoDB in general or to anounce your new Foxx App
+- [Google groups](https://groups.google.com/forum/?hl=de#!forum/arangodb) for discussions about ArangoDB in general or to announce your new Foxx App
-- [stackoverflow](http://stackoverflow.com/questions/tagged/arangodb) for questions about AQL, usage scenarios etc.
+- [Stackoverflow](http://stackoverflow.com/questions/tagged/arangodb) for questions about AQL, usage scenarios etc.
 Please describe:
@@ -41,6 +41,6 @@ Please describe:
 - the client you're using
 - which parts of the Documentation you're working with (link)
 - what you expect to happen
-- whats actualy happening
+- whats actually happening
 We will respond as soon as possible.


@@ -231,7 +231,7 @@ If the replication applier of a database has never been started before, it needs
 from the master's log from which to start fetching events.
 There is one caveat to consider when stopping a replication on the slave: if there are still
-ongoing replicated transactions that are neither commited or aborted, stopping the replication
+ongoing replicated transactions that are neither committed or aborted, stopping the replication
 applier will cause these operations to be lost for the slave. If these transactions commit on the
 master later and the replication is resumed, the slave will not be able to commit these transactions,
 too. Thus stopping the replication applier on the slave manually should only be done if there


@@ -20,7 +20,7 @@ needs to process the incoming HTTP requests, return the requested data from its
 send the response.
 In ArangoDB versions prior to 2.2, transactions were logged on the master as an uninterrupted
-sequence, restricting their maxmial size considerably. While a transaction was written to the
+sequence, restricting their maximal size considerably. While a transaction was written to the
 master's replication log, any other replication logging activity was blocked.
 This is not the case since ArangoDB 2.2. Transactions are now written to the write-ahead log


@@ -78,7 +78,7 @@ object is kept for compatibility with previous versions only.
 Replication is configured on a per-database level. If multiple database are to be
 replicated, the replication must be set up individually per database.
-The replication applier on the slave can be used to perform a one-time synchronisation
+The replication applier on the slave can be used to perform a one-time synchronization
 with the master (and then stop), or to perform an ongoing replication of changes. To
 resume replication on slave restart, the *autoStart* attribute of the replication
 applier must be set to true.


@@ -117,7 +117,7 @@
 * [Sessions](Foxx/Develop/Sessions.md)
 * [Background Tasks](Foxx/Develop/Queues.md)
 * [Console API](Foxx/Develop/Console.md)
-* [Metainformation](Foxx/Develop/Manifest.md)
+* [Meta information](Foxx/Develop/Manifest.md)
 * [Exports](Foxx/Develop/Exports.md)
 * [Documentation](Foxx/Develop/ApiDocumentation.md)
 * [Production](Foxx/Production/README.md)
@@ -209,7 +209,7 @@
 * [Sharding](HttpShardingInterface/README.md)
 * [Miscellaneous functions](HttpMiscellaneousFunctions/README.md)
 * [General Handling](GeneralHttp/README.md)
-* [Javascript Modules](ModuleJavaScript/README.md)
+* [JavaScript Modules](ModuleJavaScript/README.md)
 * ["console"](ModuleConsole/README.md)
 * ["fs"](ModuleFs/README.md)
 * ["process"](ModuleProcess/README.md)


@@ -81,7 +81,7 @@ coordinators below.
 More interesting is that such a cluster plan document can be used to
 start up the cluster conveniently using a *Kickstarter* object. Please
-note that the *launch* method of the kickstarter shown below initialises
+note that the *launch* method of the kickstarter shown below initializes
 all data directories and log files, so if you have previously used the
 same cluster plan you will lose all your data. Use the *relaunch* method
 described below instead in that case.


@@ -17,7 +17,7 @@ right DBservers.
 As a central highly available service to hold the cluster configuration
 and to synchronize reconfiguration and fail-over operations we currently
-use a an external program called *etcd* (see [github
+use a an external program called *etcd* (see [Github
 page](https://github.com/coreos/etcd)). It provides a hierarchical
 key value store with strong consistency and reliability promises.
 This is called the "agency" and its processes are called "agents".


@@ -52,7 +52,7 @@ Complete match and prefix search options can be combined with the logical
 operators.
 Please note that only words with a minimum length will get indexed. This minimum
-length can be defined when creating the fulltext index. For words tokenisation,
+length can be defined when creating the fulltext index. For words tokenization,
 the libicu text boundary analysis is used, which takes into account the default
 as defined at server startup (*--server.default-language* startup
 option). Generally, the word boundary analysis will filter out punctuation but


@@ -15,7 +15,7 @@ is started.
 There are no individual *BEGIN*, *COMMIT* or *ROLLBACK* transaction commands
 in ArangoDB. Instead, a transaction in ArangoDB is started by providing a
-description of the transaction to the *db._executeTransaction* Javascript
+description of the transaction to the *db._executeTransaction* JavaScript
 function:
 ```js
@@ -111,7 +111,7 @@ db._executeTransaction({
 });
 ```
 Please note that any operations specified in *action* will be executed on the
-server, in a separate scope. Variables will be bound late. Accessing any Javascript
+server, in a separate scope. Variables will be bound late. Accessing any JavaScript
 variables defined on the client-side or in some other server context from inside
 a transaction may not work.
 Instead, any variables used inside *action* should be defined inside *action* itself:
@@ -154,7 +154,7 @@ There is no explicit abort or roll back command.
 As mentioned earlier, a transaction will commit automatically when the end of
 the *action* function is reached and no exception has been thrown. In this
-case, the user can return any legal Javascript value from the function:
+case, the user can return any legal JavaScript value from the function:
 ```js
 db._executeTransaction({


@@ -136,7 +136,7 @@ arangosh> stmt = db._createStatement("FOR i IN mycollection RETURN i"); stmt.exe
 }
 ```
-For data-modification queries, ArangoDB 2.3 retuns a result with the same structure:
+For data-modification queries, ArangoDB 2.3 returns a result with the same structure:
 ```
 arangosh> stmt = db._createStatement("FOR i IN xx REMOVE i IN xx"); stmt.execute().getExtra()
@@ -296,7 +296,7 @@ of ArangoDB so their benefit was limited. The support for bitarray indexes has
 thus been removed in ArangoDB 2.3. It is not possible to create indexes of type
 "bitarray" with ArangoDB 2.3.
-When a collection is openend that contains a bitarray index definition created
+When a collection is opened that contains a bitarray index definition created
 with a previous version of ArangoDB, ArangoDB will ignore it and log the following
 warning:


@@ -63,14 +63,14 @@ This may be considered a feature or an anti-feature, so it is configurable.
 If replication of system collections is undesired, they can be excluded from replication
 by setting the `includeSystem` attribute to `false` in the following commands:
-* initial synchronisation: `replication.sync({ includeSystem: false })`
+* initial synchronization: `replication.sync({ includeSystem: false })`
 * continuous replication: `replication.applier.properties({ includeSystem: false })`
 This will exclude all system collections (including `_aqlfunctions`, `_graphs` etc.)
-from the initial synchronisation and the continuous replication.
+from the initial synchronization and the continuous replication.
 If this is also undesired, it is also possible to specify a list of collections to
-exclude from the initial synchronisation and the continuous replication using the
+exclude from the initial synchronization and the continuous replication using the
 `restrictCollections` attribute, e.g.:
 require("org/arangodb/replication").applier.properties({
@@ -112,7 +112,7 @@ rebuild afterwards:
 ./configure <options go here>
 make
-!SECTION Misceallaneous changes
+!SECTION Miscellaneous changes
 As a consequence of global renaming in the codebase, the option `mergeArrays` has
 been renamed to `mergeObjects`. This option controls whether JSON objects will be
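The behavior controlled by the renamed `mergeObjects` option can be sketched in plain JavaScript (a hypothetical illustration of merging versus replacing nested objects during an update, not ArangoDB's implementation):

```javascript
const oldDoc = { name: { first: "Foo" } };
const patch = { name: { last: "Bar" } };

// mergeObjects: true -- nested objects are merged key by key
const merged = { ...oldDoc, name: { ...oldDoc.name, ...patch.name } };
console.log(merged); // { name: { first: 'Foo', last: 'Bar' } }

// mergeObjects: false -- the patch's nested object replaces the old one wholesale
const replaced = { ...oldDoc, ...patch };
console.log(replaced); // { name: { last: 'Bar' } }
```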


@@ -37,7 +37,7 @@ There was the option `--harmony`, which turned on almost all harmony features.
 In ArangoDB 2.5, V8 provides the following harmony-related options:
 * --harmony (enable all completed harmony features)
-* --harmony_shipping (enable all shipped harmony fetaures)
+* --harmony_shipping (enable all shipped harmony features)
 * --harmony_modules (enable "harmony modules (implies block scoping)" (in progress))
 * --harmony_arrays (enable "harmony array methods" (in progress))
 * --harmony_array_includes (enable "harmony Array.prototype.includes" (in progress))