
Remove whitespace from end of lines

Simran Brucherseifer 2019-11-25 19:42:28 +01:00
parent 4b2f4c794c
commit 7fcc3ca06f
83 changed files with 292 additions and 292 deletions

View File

@ -10,7 +10,7 @@ cache of the selected database. Each result is a JSON object with the following
- *hash*: the query result's hash
- *query*: the query string
- *bindVars*: the query's bind parameters. this attribute is only shown if tracking for
bind variables was enabled at server start
@ -21,7 +21,7 @@ cache of the selected database. Each result is a JSON object with the following
- *started*: the date and time when the query was stored in the cache
- *hits*: number of times the result was served from the cache (can be
*0* for queries that were only stored in the cache but were never accessed
again afterwards)
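
For orientation, a sketch of a single cache entry with the attributes listed above (all values are illustrative; the endpoint path is not part of this excerpt):

var entry = {
  hash: "12743",                     // hash of the query result
  query: "FOR d IN docs RETURN d",   // the query string
  bindVars: { },                     // only present if bind variable tracking is enabled
  started: "2019-11-25T19:42:28Z",   // when the result was stored in the cache
  hits: 2                            // how often the result was served from the cache
};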

View File

@ -14,10 +14,10 @@ JSON object with the following properties:
- *maxResults*: the maximum number of query results that will be stored per database-specific
cache.
- *maxResultsSize*: the maximum cumulated size of query results that will be stored per
database-specific cache.
- *maxEntrySize*: the maximum individual result size of queries that will be stored per
database-specific cache.
- *includeSystem*: whether or not results of queries that involve system collections will be

View File

@ -12,11 +12,11 @@ JSON object with the following properties:
*false*, neither queries nor slow queries will be tracked.
- *trackSlowQueries*: if set to *true*, then slow queries will be tracked
in the list of slow queries if their runtime exceeds the value set in
*slowQueryThreshold*. In order for slow queries to be tracked, the *enabled*
property must also be set to *true*.
- *trackBindVars*: if set to *true*, then bind variables used in queries will
be tracked.
- *maxSlowQueries*: the maximum number of slow queries to keep in the list
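
Taken together, these attributes form the query-tracking properties object. A minimal sketch (values are illustrative; the endpoint that accepts this object is not shown in this excerpt):

var properties = {
  enabled: true,            // track queries at all
  trackSlowQueries: true,   // additionally keep a list of slow queries
  trackBindVars: true,      // record bind variables along with the queries
  slowQueryThreshold: 10,   // runtime in seconds above which a query counts as slow
  maxSlowQueries: 64        // how many slow queries to keep in the list
};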

View File

@ -22,7 +22,7 @@ Each query is a JSON object with the following attributes:
- *started*: the date and time when the query was started
- *runTime*: the query's total run time
- *state*: the query's current execution state (will always be "finished"
for the list of slow queries)

View File

@ -15,7 +15,7 @@ in the list of slow queries if their runtime exceeds the value set in
property must also be set to *true*.
@RESTBODYPARAM{trackBindVars,boolean,required,}
If set to *true*, then the bind variables used in queries will be tracked
along with queries.
@RESTBODYPARAM{maxSlowQueries,integer,required,int64}

View File

@ -16,7 +16,7 @@ the name of the AQL user function.
a namespace prefix, and all functions in the specified namespace will be deleted.
The returned number of deleted functions may become 0 if none matches the string.
- *false*: The function name provided in *name* must be fully
qualified, including any namespaces. If none matches the *name*, HTTP 404 is returned.
@RESTDESCRIPTION
Removes an existing AQL user function or function group, identified by *name*.
@ -34,7 +34,7 @@ boolean flag to indicate whether an error occurred (*false* in this case)
the HTTP status code
@RESTREPLYBODY{deletedCount,integer,required,int64}
The number of deleted user functions, always `1` when `group` is set to *false*.
Any number `>= 0` when `group` is set to *true*
@RESTRETURNCODE{400}
@ -74,9 +74,9 @@ deletes a function:
@EXAMPLE_ARANGOSH_RUN{RestAqlfunctionDelete}
var url = "/_api/aqlfunction/square::x::y";
var body = {
name : "square::x::y",
code : "function (x) { return x*x; }"
};
db._connection.POST("/_api/aqlfunction", body);

View File

@ -26,7 +26,7 @@ boolean flag to indicate whether an error occurred (*false* in this case)
the HTTP status code
@RESTREPLYBODY{result,array,required,aql_userfunction_struct}
All functions, or the ones matching the *namespace* parameter
@RESTSTRUCT{name,aql_userfunction_struct,string,required,}
The fully qualified name of the user function
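
For illustration, a minimal arangosh sketch of fetching this list, using the same `/_api/aqlfunction` endpoint as the delete example above:

// fetch all registered AQL user functions; *result* holds one object per function
var response = db._connection.GET("/_api/aqlfunction");
// each entry contains at least the fully qualified *name* of the function
response.result.forEach(function (fn) { require("internal").print(fn.name); });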

View File

@ -22,7 +22,7 @@ if set to *true*, all possible execution plans will be returned.
The default is *false*, meaning only the optimal plan will be returned.
@RESTSTRUCT{maxNumberOfPlans,explain_options,integer,optional,int64}
an optional maximum number of plans that the optimizer is
allowed to generate. Setting this attribute to a low value puts a
cap on the amount of work the optimizer does.
@ -42,20 +42,20 @@ returned, but the query will not be executed.
The execution plan that is returned by the server can be used to estimate the
probable performance of the query. Though the actual performance will depend
on many different factors, the execution plan normally can provide some rough
estimates on the amount of work the server needs to do in order to actually run
the query.
By default, the explain operation will return the optimal plan as chosen by
the query optimizer. The optimal plan is the plan with the lowest total estimated
cost. The plan will be returned in the attribute *plan* of the response object.
If the option *allPlans* is specified in the request, the result will contain
all plans created by the optimizer. The plans will then be returned in the
attribute *plans*.
The result will also contain an attribute *warnings*, which is an array of
warnings that occurred during optimization or execution plan creation. Additionally,
a *stats* attribute is contained in the result with some optimizer statistics.
If *allPlans* is set to *false*, the result will contain an attribute *cacheable*
that states whether the query results can be cached on the server if the query
result cache were used. The *cacheable* attribute is not present when *allPlans*
is set to *true*.
@ -104,7 +104,7 @@ Valid query
db._drop(cn);
db._create(cn);
for (var i = 0; i < 10; ++i) { db.products.save({ id: i }); }
body = {
query : "FOR p IN products RETURN p"
};
@ -125,7 +125,7 @@ A plan with some optimizer rules applied
db._create(cn);
db.products.ensureSkiplist("id");
for (var i = 0; i < 10; ++i) { db.products.save({ id: i }); }
body = {
query : "FOR p IN products LET a = p.id FILTER a == 4 LET name = p.name SORT p.id LIMIT 1 RETURN name",
};
@ -146,7 +146,7 @@ Using some options
db._create(cn);
db.products.ensureSkiplist("id");
for (var i = 0; i < 10; ++i) { db.products.save({ id: i }); }
body = {
query : "FOR p IN products LET a = p.id FILTER a == 4 LET name = p.name SORT p.id LIMIT 1 RETURN name",
options : {
maxNumberOfPlans : 2,
@ -173,10 +173,10 @@ Returning all plans
db._drop(cn);
db._create(cn);
db.products.ensureHashIndex("id");
body = {
query : "FOR p IN products FILTER p.id == 25 RETURN p",
options: {
allPlans: true
}
};
@ -192,7 +192,7 @@ A query that produces a warning
@EXAMPLE_ARANGOSH_RUN{RestExplainWarning}
var url = "/_api/explain";
body = {
query : "FOR i IN 1..10 RETURN 1 / 0"
};
@ -210,7 +210,7 @@ Invalid query (missing bind parameter)
var cn = "products";
db._drop(cn);
db._create(cn);
body = {
query : "FOR p IN products FILTER p.id == @id LIMIT 2 RETURN p.n"
};

View File

@ -1,6 +1,6 @@
@startDocuBlock JSF_get_admin_status
@brief returns status information of the server.
@RESTHEADER{GET /_admin/status, Return status information, RestStatusHandler}

View File

@ -8,7 +8,7 @@
The id of the task to delete.
@RESTDESCRIPTION
Deletes the task identified by *id* on the server.
@RESTRETURNCODES

View File

@ -14,7 +14,7 @@ used only in the context of server monitoring only.
@RESTRETURNCODE{200}
This API will return HTTP 200 in case the server is up and running and usable for
arbitrary operations, is not set to read-only mode and is currently not a follower
in case of an active failover setup.
@RESTRETURNCODE{503}

View File

@ -10,7 +10,7 @@ a field `mode` with the value `readonly` or `default`. In a read-only server
all write operations will fail with an error code of `1004` (_ERROR_READ_ONLY_).
Creating or dropping of databases and collections will also fail with error code `11` (_ERROR_FORBIDDEN_).
This is a public API so it does *not* require authentication.
@RESTRETURNCODES

View File

@ -91,7 +91,7 @@ array containing the values
total connection times
@RESTSTRUCT{totalTime,client_statistics_struct,object,required,setof_statistics_struct}
the system time
@RESTSTRUCT{requestTime,client_statistics_struct,object,required,setof_statistics_struct}
the request times

View File

@ -98,7 +98,7 @@ which openssl version do we link?
the host os - *linux*, *windows* or *darwin*
@RESTSTRUCT{reactor-type,version_details_struct,string,optional,}
*epoll* TODO
@RESTSTRUCT{rocksdb-version,version_details_struct,string,optional,}
the rocksdb version this release bundles

View File

@ -33,7 +33,7 @@ the transport, one of ['http', 'https', 'velocystream']
@RESTREPLYBODY{server,object,required,admin_echo_server_struct}
@RESTSTRUCT{address,admin_echo_server_struct,string,required,}
the bind address of the endpoint this request was sent to
@RESTSTRUCT{port,admin_echo_server_struct,integer,required,}
the port this request was sent to

View File

@ -19,9 +19,9 @@ returned.
Note that this API endpoint will only be present if the server was
started with the option `--javascript.allow-admin-execute true`.
The default value of this option is `false`, which disables the execution of
user-defined code and disables this API endpoint entirely.
This is also the recommended setting for production.
@RESTRETURNCODE{200}
is returned when everything went well, or if a timeout occurred. In the

View File

@ -17,7 +17,7 @@ The parameters to be passed into command
number of seconds between the executions
@RESTBODYPARAM{offset,integer,optional,int64}
Number of seconds initial delay
@RESTDESCRIPTION
creates a new task with a generated id

View File

@ -20,7 +20,7 @@ UUID is created for this part of the ID.
@RESTBODYPARAM{timeout,number,optional,double}
The time in seconds that the operation tries to get a consistent
snapshot. The default is 120 seconds.
@RESTBODYPARAM{allowInconsistent,boolean,optional,boolean}
If this flag is set to `true` and no global transaction lock can be

View File

@ -7,7 +7,7 @@
Delete a specific local backup identified by the given `id`.
@RESTBODYPARAM{id,string,required,string}
The identifier for this backup.
@RESTRETURNCODES

View File

@ -16,7 +16,7 @@ attribute.
@RESTBODYPARAM{remoteRepository,string,required,string}
URL of the remote repository. This is required when a download
operation is scheduled. In this case leave out the `downloadId`
attribute. Provided repository URLs are normalized and validated as follows: one single colon must appear, separating the configuration section name and the path. The URL prefix up to the colon must exist as a key in the config object below. No slashes may appear before the colon. Multiple back-to-back slashes are collapsed to one, and `..` and `.` are applied accordingly. Local repositories must be absolute paths and must begin with a `/`. Trailing `/` are removed.
@RESTBODYPARAM{config,object,required,object}
Configuration of remote repository. This is required when a download

View File

@ -8,7 +8,7 @@ Lists all locally found backups.
@RESTBODYPARAM{id,string,optional,string}
The body can either be empty (in which case all available backups are
listed), or it can be an object with an attribute `id`, which
is a string. In the latter case the returned list
is restricted to the backup with the given id.
@ -60,6 +60,6 @@ method other than `POST`, then an *HTTP 405 METHOD NOT ALLOWED* is returned.
};
@END_EXAMPLE_ARANGOSH_RUN
The result consists of a `list` object of hot backups keyed by their `id`, where `id` uniquely identifies a specific hot backup, `version` denotes the ArangoDB version that was used to create the hot backup, and `datetime` is the time of its creation. Further parameters are the size of the backup in bytes (`sizeInBytes`), the number of individual data files (`nrFiles`), the number of db servers at the time of creation (`nrDBServers`), and the number of backup parts found on the currently reachable db servers (`nrPiecesPresent`). If the backup was created allowing inconsistencies, this is indicated by `potentiallyInconsistent`. The `available` boolean parameter indicates whether the backup is present and ready to be restored on all db servers; it is `true` unless the number of currently reachable db servers does not match the number of db servers listed in the backup.
@endDocuBlock

View File

@ -42,7 +42,7 @@ are detailed in the returned error document.
logJsonResponse(response);
body = {
error: false, code: 200,
result: {
"previous":"FAILSAFE", "isCluster":false
}

View File

@ -16,7 +16,7 @@ attribute.
@RESTBODYPARAM{remoteRepository,string,optional,string}
URL of the remote repository. This is required when an upload
operation is scheduled. In this case leave out the `uploadId`
attribute. Provided repository URLs are normalized and validated as follows: one single colon must appear, separating the configuration section name and the path. The URL prefix up to the colon must exist as a key in the config object below. No slashes may appear before the colon. Multiple back-to-back slashes are collapsed to one, and `..` and `.` are applied accordingly. Local repositories must be absolute paths and must begin with a `/`. Trailing `/` are removed.
@RESTBODYPARAM{config,object,optional,object}
Configuration of remote repository. This is required when an upload

View File

@ -81,24 +81,24 @@ The boundary (`SomeBoundaryValue`) is passed to the server in the HTTP
@EXAMPLE_ARANGOSH_RUN{RestBatchMultipartHeader}
var parts = [
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: myId1\r\n\r\n" +
"Content-Id: myId1\r\n\r\n" +
"GET /_api/version HTTP/1.1\r\n",
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: myId2\r\n\r\n" +
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: myId2\r\n\r\n" +
"DELETE /_api/collection/products HTTP/1.1\r\n",
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: someId\r\n\r\n" +
"POST /_api/collection/products HTTP/1.1\r\n\r\n" +
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: someId\r\n\r\n" +
"POST /_api/collection/products HTTP/1.1\r\n\r\n" +
"{\"name\": \"products\" }\r\n",
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: nextId\r\n\r\n" +
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: nextId\r\n\r\n" +
"GET /_api/collection/products/figures HTTP/1.1\r\n",
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: otherId\r\n\r\n" +
"Content-Type: application/x-arango-batchpart\r\n" +
"Content-Id: otherId\r\n\r\n" +
"DELETE /_api/collection/products HTTP/1.1\r\n"
];
var boundary = "SomeBoundaryValue";
@ -120,9 +120,9 @@ in this case try to find the boundary at the beginning of the request body).
@EXAMPLE_ARANGOSH_RUN{RestBatchImplicitBoundary}
var parts = [
"Content-Type: application/x-arango-batchpart\r\n\r\n" +
"Content-Type: application/x-arango-batchpart\r\n\r\n" +
"DELETE /_api/collection/notexisting1 HTTP/1.1\r\n",
"Content-Type: application/x-arango-batchpart\r\n\r\n" +
"Content-Type: application/x-arango-batchpart\r\n\r\n" +
"DELETE _api/collection/notexisting2 HTTP/1.1\r\n"
];
var boundary = "SomeBoundaryValue";

View File

@ -120,8 +120,8 @@ line in the import data is empty
db._drop(cn);
db._create(cn);
var body = '[ "_key", "value1", "value2" ]\n' +
'[ "abc", 25, "test" ]\n\n' +
var body = '[ "_key", "value1", "value2" ]\n' +
'[ "abc", 25, "test" ]\n\n' +
'[ "foo", "bar", "baz" ]';
var response = logCurlRequestRaw('POST', "/_api/import?collection=" + cn, body);
@ -145,7 +145,7 @@ Importing into an edge collection, with attributes `_from`, `_to` and `name`
db._drop("products");
db._create("products");
var body = '[ "_from", "_to", "name" ]\n' +
var body = '[ "_from", "_to", "name" ]\n' +
'[ "products/123","products/234", "some name" ]\n' +
'[ "products/332", "products/abc", "other name" ]';
@ -190,7 +190,7 @@ Violating a unique constraint, but allow partial imports
db._drop(cn);
db._create(cn);
var body = '[ "_key", "value1", "value2" ]\n' +
var body = '[ "_key", "value1", "value2" ]\n' +
'[ "abc", 25, "test" ]\n' +
'["abc", "bar", "baz" ]';
@ -214,7 +214,7 @@ Violating a unique constraint, not allowing partial imports
db._create(cn);
var body = '[ "_key", "value1", "value2" ]\n' +
'[ "abc", 25, "test" ]\n' +
'[ "abc", 25, "test" ]\n' +
'["abc", "bar", "baz" ]';
var response = logCurlRequest('POST', "/_api/import?collection=" + cn + "&complete=true", body);
@ -231,8 +231,8 @@ Using a non-existing collection
var cn = "products";
db._drop(cn);
var body = '[ "_key", "value1", "value2" ]\n' +
'[ "abc", 25, "test" ]\n' +
var body = '[ "_key", "value1", "value2" ]\n' +
'[ "abc", 25, "test" ]\n' +
'["foo", "bar", "baz" ]';
var response = logCurlRequest('POST', "/_api/import?collection=" + cn, body);

View File

@ -47,11 +47,11 @@ Controls what action is carried out in case of a unique key constraint
violation. Possible values are:<br>
- *error*: this will not import the current document because of the unique
key constraint violation. This is the default setting.
- *update*: this will update an existing document in the database with the
data specified in the request. Attributes of the existing document that
are not present in the request will be preserved.
- *replace*: this will replace an existing document in the database with the
data specified in the request.
- *ignore*: this will not update an existing document and simply ignore the
error caused by a unique key constraint violation.<br>
Note that *update*, *replace* and *ignore* will only work when the
@ -156,8 +156,8 @@ Importing documents from individual JSON lines
db._flushCache();
var body = '{ "_key": "abc", "value1": 25, "value2": "test",' +
'"allowed": true }\n' +
'{ "_key": "foo", "name": "baz" }\n\n' +
'"allowed": true }\n' +
'{ "_key": "foo", "name": "baz" }\n\n' +
'{ "name": {' +
' "detailed": "detailed name", "short": "short name" } }\n';
var response = logCurlRequestRaw('POST', "/_api/import?collection=" + cn
@ -211,8 +211,8 @@ Importing into an edge collection, with attributes `_from`, `_to` and `name`
db._create("products");
db._flushCache();
var body = '{ "_from": "products/123", "_to": "products/234" }\n' +
'{"_from": "products/332", "_to": "products/abc", ' +
var body = '{ "_from": "products/123", "_to": "products/234" }\n' +
'{"_from": "products/332", "_to": "products/abc", ' +
' "name": "other name" }';
var response = logCurlRequestRaw('POST', "/_api/import?collection=" + cn + "&type=documents", body);
@ -259,7 +259,7 @@ Violating a unique constraint, but allow partial imports
db._create(cn);
db._flushCache();
var body = '{ "_key": "abc", "value1": 25, "value2": "test" }\n' +
var body = '{ "_key": "abc", "value1": 25, "value2": "test" }\n' +
'{ "_key": "abc", "value1": "bar", "value2": "baz" }';
var response = logCurlRequestRaw('POST', "/_api/import?collection=" + cn

View File

@ -35,7 +35,7 @@ not set, a server-controlled default value will be used.
an optional limit value, determining the maximum number of documents to
be included in the cursor. Omitting the *limit* attribute or setting it to 0 will
lead to no limit being used. If a limit is used, it is undefined which documents
from the collection will be included in the export and which will be excluded.
This is because there is no natural order of documents in a collection.
@RESTBODYPARAM{ttl,integer,required,int64}
@ -45,7 +45,7 @@ is useful to ensure garbage collection of cursors that are not fully fetched
by clients. If not set, a server-defined value will be used.
@RESTBODYPARAM{restrict,object,optional,post_api_export_restrictions}
an object containing an array of attribute names that will be
included or excluded when returning result documents. Not specifying
*restrict* will by default return all attributes of each document.
@ -63,15 +63,15 @@ Specifying names of nested attributes is not supported at the moment.
The name of the collection to export.
@RESTDESCRIPTION
A call to this method creates a cursor containing all documents in the
specified collection. In contrast to other data-producing APIs, the internal
data structures produced by the export API are more lightweight, so it is
the preferred way to retrieve all documents from a collection.
Documents are returned in a similar manner as in the `/_api/cursor` REST API.
If all documents of the collection fit into the first batch, then no cursor
will be created, and the result object's *hasMore* attribute will be set to
*false*. If not all documents fit into the first batch, then the result
object's *hasMore* attribute will be set to *true*, and the *id* attribute
of the result will contain a cursor id.
@ -83,7 +83,7 @@ log (WAL) at the time the export is run will not be exported.
To export these documents as well, the caller can issue a WAL flush request
before calling the export API or set the *flush* attribute. Setting the *flush*
option will trigger a WAL flush before the export so documents get copied from
the WAL to the collection datafiles.
If the result set can be created by the server, the server will respond with
@ -123,7 +123,7 @@ details. The object has the following attributes:
Clients should always delete an export cursor result as early as possible because a
lingering export cursor will prevent the underlying collection from being
compacted or unloaded. By default, unused cursors will be deleted automatically
after a server-defined idle time, and clients can adjust this idle time by setting
the *ttl* value.
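
For orientation, a sketch of an export request body using the attributes described above (values are illustrative; the *restrict* sub-attributes and the exact endpoint path are assumptions, as they are not shown in this excerpt):

var body = {
  flush: true,       // trigger a WAL flush before the export
  limit: 1000,       // cap the number of exported documents
  ttl: 60,           // cursor idle time in seconds
  restrict: {        // assumed shape: include only the listed attributes
    type: "include",
    fields: [ "_key", "name" ]
  }
};
// assumed endpoint path and collection name
var response = db._connection.POST("/_api/export?collection=products", body);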

View File

@ -5,7 +5,7 @@
@RESTHEADER{PUT /_admin/cluster/maintenance, Enable or disable the supervision maintenance mode}
@RESTDESCRIPTION
This API allows you to temporarily enable the supervision maintenance mode. Be aware that no
automatic failovers of any kind will take place while the maintenance mode is enabled.
The _cluster_ supervision reactivates itself automatically _60 minutes_ after disabling it.

View File

@ -111,8 +111,8 @@ The total filesize of all compactor files (in bytes).
The number of revisions of this collection stored in the document revisions cache.
@RESTSTRUCT{size,collection_figures_readcache,integer,required,int64}
The memory used for storing the revisions of this collection in the document
revisions cache (in bytes). This figure does not include the document data but
only mappings from document revision ids to cache entry locations.
@RESTSTRUCT{revisions,collection_figures,object,required,collection_figures_revisions}
@ -121,8 +121,8 @@ only mappings from document revision ids to cache entry locations.
The number of revisions of this collection managed by the storage engine.
@RESTSTRUCT{size,collection_figures_revisions,integer,required,int64}
The memory used for storing the revisions of this collection in the storage
engine (in bytes). This figure does not include the document data but only mappings
from document revision ids to storage engine datafile positions.
@RESTSTRUCT{indexes,collection_figures,object,required,collection_figures_indexes}
@ -144,23 +144,23 @@ The number of markers in the write-ahead
log for this collection that have not been transferred to journals or datafiles.
@RESTSTRUCT{documentReferences,collection_figures,integer,optional,int64}
The number of references to documents in datafiles that JavaScript code
currently holds. This information can be used for debugging compaction and
unload issues.
@RESTSTRUCT{waitingFor,collection_figures,string,optional,string}
An optional string value that contains information about which object type is at the
head of the collection's cleanup queue. This information can be used for debugging
compaction and unload issues.
@RESTSTRUCT{compactionStatus,collection_figures,object,optional,compactionStatus_attributes}
@RESTSTRUCT{message,compactionStatus_attributes,string,optional,string}
The action that was performed when the compaction was last run for the collection.
This information can be used for debugging compaction issues.
@RESTSTRUCT{time,compactionStatus_attributes,string,optional,string}
The point in time the compaction for the collection was last executed.
This information can be used for debugging compaction issues.
@RESTREPLYBODY{journalSize,integer,required,int64}

View File

@ -21,7 +21,7 @@ existed.
The request body must contain a JSON document with at least the
collection's shard key attributes set to some values.
The response is a JSON object with a *shardId* attribute, which will
contain the ID of the responsible shard.
**Note** : This method is only available in a cluster coordinator.
@ -56,7 +56,7 @@ is returned.
var response = logCurlRequestRaw('PUT', "/_api/collection/" + cn + "/responsibleShard", body);
assert(response.code === 200);
assert(JSON.parse(response.body).hasOwnProperty("shardId"));
logJsonResponse(response);
db._drop(cn);

View File

@ -58,15 +58,15 @@ should be a JSON array containing the following attributes:
specifies the type of the key generator. The currently available generators are
*traditional*, *autoincrement*, *uuid* and *padded*.<br>
The *traditional* key generator generates numerical keys in ascending order.
The *autoincrement* key generator generates numerical keys in ascending order,
the initial offset and the spacing can be configured.
The *padded* key generator generates keys of a fixed length (16 bytes) in
ascending lexicographical sort order. This is ideal for usage with the _RocksDB_
engine, which will slightly benefit keys that are inserted in lexicographically
ascending order. The key generator can be used in a single-server or cluster.
The *uuid* key generator generates universally unique 128 bit keys, which
are stored in hexadecimal human-readable format. This key generator can be used
in a single-server or cluster to generate "seemingly random" keys. The keys
produced by this key generator are not lexicographically sorted.
@RESTSTRUCT{allowUserKeys,post_api_collection_opts,boolean,required,}
@ -122,11 +122,11 @@ and the hash value is used to determine the target shard.
(The default is *1*): in a cluster, this attribute determines how many copies
of each shard are kept on different DBServers. The value 1 means that only one
copy (no synchronous replication) is kept. A value of k means that k-1 replicas
are kept. Any two copies reside on different DBServers. Replication between them is
synchronous, that is, every write operation to the "leader" copy will be replicated
to all "follower" replicas, before the write operation is reported successful.
If a server fails, this is detected automatically and one of the servers holding
copies takes over, usually without an error being reported.
@RESTBODYPARAM{distributeShardsLike,string,optional,string}
@ -140,10 +140,10 @@ collections alone will generate warnings (which can be overridden)
about missing sharding prototype.
@RESTBODYPARAM{shardingStrategy,string,optional,string}
This attribute specifies the name of the sharding strategy to use for
the collection. Since ArangoDB 3.4 there are different sharding strategies
to select from when creating a new collection. The selected *shardingStrategy*
value will remain fixed for the collection and cannot be changed afterwards.
This is important to make the collection keep its sharding settings and
always find documents already distributed to shards using the same
initial sharding algorithm.
@ -162,15 +162,15 @@ The available sharding strategies are:
If no sharding strategy is specified, the default will be *hash* for
all collections, and *enterprise-hash-smart-edge* for all smart edge
collections (requires the *Enterprise Edition* of ArangoDB).
Manually overriding the sharding strategy does not yet provide a
benefit, but it may later in case other sharding strategies are added.
@RESTBODYPARAM{smartJoinAttribute,string,optional,string}
In an *Enterprise Edition* cluster, this attribute determines an attribute
of the collection that must contain the shard key value of the referred-to
smart join collection. Additionally, the shard key for a document in this
collection must contain the value of this attribute, followed by a colon,
followed by the actual primary key of the document.
This feature can only be used in the *Enterprise Edition* and requires the
@ -178,7 +178,7 @@ This feature can only be used in the *Enterprise Edition* and requires the
of another collection. It also requires the *shardKeys* attribute of the
collection to be set to a single shard key attribute, with an additional ':'
at the end.
A further restriction is that whenever documents are stored or updated in the
collection, the value stored in the *smartJoinAttribute* must be a string.
@RESTQUERYPARAMETERS
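
To tie these options together, a minimal arangosh sketch of a create-collection request body (values are illustrative; only attributes documented above are used):

var body = {
  name: "products",
  keyOptions: { type: "padded", allowUserKeys: true },
  replicationFactor: 2,      // one leader copy plus one synchronous follower
  shardingStrategy: "hash"   // the default strategy for regular collections
};
var response = db._connection.POST("/_api/collection", body);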

View File

@ -22,7 +22,7 @@ attribute(s)
- *waitForSync*: If *true* then creating or changing a
document will wait until the data has been synchronized to disk.
- *journalSize*: The maximal size of a journal or datafile in bytes.
The value must be at least `1048576` (1 MB). Note that when
changing the journalSize value, it will only have an effect for
additional journals or datafiles that are created. Already

View File

@ -34,7 +34,7 @@ It returns an object with the attributes
- *isSystem*: If *true* then the collection is a system collection.
If renaming the collection succeeds, then the collection is also renamed in
all graph definitions inside the `_graphs` collection in the current database.
**Note**: this method is not available in a cluster.

View File

@ -51,12 +51,12 @@ if set to *true* and the query contains a *LIMIT* clause, then the
result will have an *extra* attribute with the sub-attributes *stats*
and *fullCount*, `{ ... , "extra": { "stats": { "fullCount": 123 } } }`.
The *fullCount* attribute will contain the number of documents in the result before the
last top-level LIMIT in the query was applied. It can be used to count the number of
documents that match certain filter criteria, but only return a subset of them, in one go.
It is thus similar to MySQL's *SQL_CALC_FOUND_ROWS* hint. Note that setting the option
will disable a few LIMIT optimizations and may lead to more documents being processed,
and thus make queries run longer. Note that the *fullCount* attribute may only
be present in the result if the query has a top-level LIMIT clause and the LIMIT
clause is actually used in the query.
@RESTSTRUCT{maxPlans,post_api_cursor_opts,integer,optional,int64}
@ -78,11 +78,11 @@ default value for *failOnWarning* so it does not need to be set on a per-query l
@RESTSTRUCT{stream,post_api_cursor_opts,boolean,optional,}
Specify *true* and the query will be executed in a **streaming** fashion. The query result is
not stored on the server, but calculated on the fly. *Beware*: long-running queries will
need to hold the collection locks for as long as the query cursor exists.
When set to *false* a query will be executed right away in its entirety.
In that case query results are either returned right away (if the result set is small enough),
or stored on the arangod instance and accessible via the cursor API (with respect to the `ttl`).
It is advisable to *only* use this option on short-running queries or without exclusive locks
(write-locks on MMFiles).
Please note that the query options `cache`, `count` and `fullCount` will not work on streaming queries.
Additionally query statistics, warnings and profiling data will only be available after the query is finished.
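
A minimal arangosh sketch combining the two options discussed above (the query text is illustrative):

var body = {
  query: "FOR p IN products FILTER p.value > 10 LIMIT 100 RETURN p",
  options: {
    fullCount: true,   // also report how many documents matched before the last LIMIT
    stream: false      // execute the query in its entirety instead of streaming it
  }
};
var response = db._connection.POST("/_api/cursor", body);
// with *fullCount* enabled the response carries extra.stats.fullCount
require("internal").print(response.extra.stats.fullCount);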

View File

@ -93,7 +93,7 @@ Creating a database named *example*.
@END_EXAMPLE_ARANGOSH_RUN
Creating a database named *mydb* with two users, flexible sharding and
default replication factor of 3 for collections that will be part of
the newly created database.
@EXAMPLE_ARANGOSH_RUN{RestDatabaseCreateUsers}

View File

@ -17,7 +17,7 @@ Collection from which documents are removed.
Wait until deletion operation has been synced to disk.
@RESTQUERYPARAM{returnOld,boolean,optional}
Return additionally the complete previous revision of the changed
document under the attribute *old* in the result.
@RESTQUERYPARAM{ignoreRevs,boolean,optional}

View File

@ -14,11 +14,11 @@ Removes the document identified by *document-handle*.
Wait until deletion operation has been synced to disk.
@RESTQUERYPARAM{returnOld,boolean,optional}
Return additionally the complete previous revision of the changed
document under the attribute *old* in the result.
@RESTQUERYPARAM{silent,boolean,optional}
If set to *true*, an empty object will be returned as response. No meta-data
will be returned for the removed document. This option can be used to
save some network traffic.
@ -29,10 +29,10 @@ You can conditionally remove a document based on a target revision id by
using the *if-match* HTTP header.
@RESTDESCRIPTION
If *silent* is not set to *true*, the body of the response contains a JSON
object with the information about the handle and the revision. The attribute
*_id* contains the known *document-handle* of the removed document, *_key*
contains the key which uniquely identifies a document in a given collection,
and the attribute *_rev* contains the document revision.
If the *waitForSync* parameter is not specified or set to *false*,

View File

@ -18,20 +18,20 @@ operation is executed
@RESTQUERYPARAM{ignoreRevs,string,optional}
Should the value be *true* (the default):
If a search document contains a value for the *_rev* field,
then the document is only returned if it has the same revision value.
Otherwise a precondition failed error is returned.
@RESTDESCRIPTION
Returns the documents identified by their *_key* in the body objects.
The body of the request _must_ contain a JSON array of either
strings (the *_key* values to lookup) or search documents.
A search document _must_ contain at least a value for the *_key* field.
A value for `_rev` _may_ be specified to verify whether the document
has the same revision value, unless _ignoreRevs_ is set to false.
Cluster only: The search document _may_ contain
values for the collection's pre-defined shard keys. Values for the shard keys
are treated as hints to improve performance. Should the shard keys
values be incorrect ArangoDB may answer with a *not found* error.
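
A sketch of the two accepted body shapes described above (keys and the revision value are illustrative; the endpoint itself is not shown in this excerpt):

// either plain *_key* values ...
var byKeys = [ "abc", "foo" ];
// ... or search documents, optionally carrying a *_rev* value to be checked
var byDocuments = [ { "_key": "abc", "_rev": "12345" } ];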

View File

@ -32,14 +32,14 @@ value. If set to *true*, objects will be merged. The default is
Wait until document has been synced to disk.
@RESTQUERYPARAM{ignoreRevs,boolean,optional}
By default, or if this is set to *true*, the *_rev* attribute in
the given document is ignored. If this is set to *false*, then
the *_rev* attribute given in the body document is taken as a
precondition. The document is only updated if the current revision
is the one specified.
@RESTQUERYPARAM{returnOld,boolean,optional}
Return additionally the complete previous revision of the changed
document under the attribute *old* in the result.
@RESTQUERYPARAM{returnNew,boolean,optional}
@ -47,7 +47,7 @@ Return additionally the complete new document under the attribute *new*
in the result.
@RESTQUERYPARAM{silent,boolean,optional}
If set to *true*, an empty object will be returned as response. No meta-data
will be returned for the updated document. This option can be used to
save some network traffic.
@ -63,7 +63,7 @@ The body of the request must contain a JSON document with the
attributes to patch (the patch document). All attributes from the
patch document will be added to the existing document if they do not
yet exist, and overwritten in the existing document if they do exist
there.
The value of the `_key` attribute as well as attributes
used as sharding keys may not be changed.
@ -88,8 +88,8 @@ the *Etag* header field contains the new revision of the document
(in double quotes) and the *Location* header contains a complete URL
under which the document can be queried.
Cluster only: The patch document _may_ contain
values for the collection's pre-defined shard keys. Values for the shard keys
are treated as hints to improve performance. Should the shard keys
values be incorrect ArangoDB may answer with a *not found* error
@ -104,10 +104,10 @@ applied. The *waitForSync* query parameter cannot be used to disable
synchronization for collections that have a default *waitForSync* value
of *true*.
If *silent* is not set to *true*, the body of the response contains a JSON
object with the information about the handle and the revision. The attribute
*_id* contains the known *document-handle* of the updated document, *_key*
contains the key which uniquely identifies a document in a given collection,
and the attribute *_rev* contains the new document revision.
If the query parameter *returnOld* is *true*, then

View File

@ -33,14 +33,14 @@ value. If set to *true*, objects will be merged. The default is
Wait until the new documents have been synced to disk.
@RESTQUERYPARAM{ignoreRevs,boolean,optional}
By default, or if this is set to *true*, the *_rev* attributes in
the given documents are ignored. If this is set to *false*, then
any *_rev* attribute given in a body document is taken as a
precondition. The document is only updated if the current revision
is the one specified.
@RESTQUERYPARAM{returnOld,boolean,optional}
Return additionally the complete previous revision of the changed
documents under the attribute *old* in the result.
@RESTQUERYPARAM{returnNew,boolean,optional}
@ -67,8 +67,8 @@ document in the body and its value does not match the revision of
the corresponding document in the database, the precondition is
violated.
Cluster only: The patch document _may_ contain
values for the collection's pre-defined shard keys. Values for the shard keys
are treated as hints to improve performance. Should the shard keys
values be incorrect ArangoDB may answer with a *not found* error

View File

@ -31,7 +31,7 @@ Additionally return the complete old document under the attribute *old*
in the result. Only available if the overwrite option is used.
@RESTQUERYPARAM{silent,boolean,optional}
If set to *true*, an empty object will be returned as response. No meta-data
will be returned for the created document. This option can be used to
save some network traffic.
@ -53,7 +53,7 @@ contains the path to the newly created document. The *Etag* header field
contains the revision of the document. Both are only set in the single
document case.
If *silent* is not set to *true*, the body of the response contains a
JSON object with the following attributes:
- *_id* contains the document handle of the newly created document

View File

@ -31,7 +31,7 @@ Additionally return the complete old document under the attribute *old*
in the result. Only available if the overwrite option is used.
@RESTQUERYPARAM{silent,boolean,optional}
If set to *true*, an empty object will be returned as response. No meta-data
will be returned for the created document. This option can be used to
save some network traffic.
@ -54,7 +54,7 @@ errorCode set to the error code that has happened.
Possibly given *_id* and *_rev* attributes in the body are always ignored;
the URL part or the *collection* query parameter counts instead.
If *silent* is not set to *true*, the body of the response contains an
array of JSON objects with the following attributes:
- *_id* contains the document handle of the newly created document

View File

@ -18,14 +18,14 @@ This URL parameter must be a document handle.
Wait until document has been synced to disk.
@RESTQUERYPARAM{ignoreRevs,boolean,optional}
By default, or if this is set to *true*, the *_rev* attribute in
the given document is ignored. If this is set to *false*, then
the *_rev* attribute given in the body document is taken as a
precondition. The document is only replaced if the current revision
is the one specified.
@RESTQUERYPARAM{returnOld,boolean,optional}
Return additionally the complete previous revision of the changed
document under the attribute *old* in the result.
@RESTQUERYPARAM{returnNew,boolean,optional}
@ -33,7 +33,7 @@ Return additionally the complete new document under the attribute *new*
in the result.
@RESTQUERYPARAM{silent,boolean,optional}
If set to *true*, an empty object will be returned as response. No meta-data
will be returned for the replaced document. This option can be used to
save some network traffic.
@ -68,8 +68,8 @@ the *Etag* header field contains the new revision of the document
and the *Location* header contains a complete URL under which the
document can be queried.
Cluster only: The replace documents _may_ contain
values for the collection's pre-defined shard keys. Values for the shard keys
are treated as hints to improve performance. Should the shard keys
values be incorrect ArangoDB may answer with a *not found* error.
@ -84,10 +84,10 @@ applied. The *waitForSync* query parameter cannot be used to disable
synchronization for collections that have a default *waitForSync* value
of *true*.
If *silent* is not set to *true*, the body of the response contains a JSON
object with the information about the handle and the revision. The attribute
*_id* contains the known *document-handle* of the updated document, *_key*
contains the key which uniquely identifies a document in a given collection,
and the attribute *_rev* contains the new document revision.
If the query parameter *returnOld* is *true*, then

View File

@ -19,14 +19,14 @@ documents are replaced.
Wait until the new documents have been synced to disk.
@RESTQUERYPARAM{ignoreRevs,boolean,optional}
By default, or if this is set to *true*, the *_rev* attributes in
the given documents are ignored. If this is set to *false*, then
any *_rev* attribute given in a body document is taken as a
precondition. The document is only replaced if the current revision
is the one specified.
@RESTQUERYPARAM{returnOld,boolean,optional}
Return additionally the complete previous revision of the changed
documents under the attribute *old* in the result.
@RESTQUERYPARAM{returnNew,boolean,optional}
@ -46,8 +46,8 @@ document in the body and its value does not match the revision of
the corresponding document in the database, the precondition is
violated.
Cluster only: The replace documents _may_ contain
values for the collection's pre-defined shard keys. Values for the shard keys
are treated as hints to improve performance. Should the shard keys
values be incorrect ArangoDB may answer with a *not found* error.

View File

@ -16,7 +16,7 @@ Number of shards created for every new collection in the graph.
The replication factor used for every new collection in the graph.
@RESTSTRUCT{_id,graph_representation,string,required,}
The internal id value of this graph.
@RESTSTRUCT{_rev,graph_representation,string,required,}
The revision of this graph. Can be used to make sure to not override

View File

@ -6,7 +6,7 @@
@RESTDESCRIPTION
Creates a new edge in the collection.
Within the body the edge has to contain a *_from* and a *_to* value referencing valid vertices in the graph.
Furthermore the edge has to be valid in the definition of the used
[edge collection](../../Manual/Appendix/Glossary.html#edge-collection).
@RESTURLPARAMETERS
@ -14,7 +14,7 @@ Furthermore the edge has to be valid in the definition of the used
@RESTURLPARAM{graph,string,required}
The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the edge collection the edge belongs to.
@RESTQUERYPARAMETERS

View File

@ -28,7 +28,7 @@ Collection will only be dropped if it is not used in other graphs.
@RESTRETURNCODES
@RESTRETURNCODE{201}
Returned if the edge definition could be removed from the graph
and waitForSync is true.
@RESTREPLYBODY{error,boolean,required,}

View File

@ -11,10 +11,10 @@ Removes an edge from the collection.
@RESTURLPARAM{graph,string,required}
The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the edge collection the edge belongs to.
@RESTURLPARAM{edge,string,required}
The *_key* attribute of the edge.
@RESTQUERYPARAMETERS

View File

@ -11,10 +11,10 @@ Gets an edge from the given collection.
@RESTURLPARAM{graph,string,required}
The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the edge collection the edge belongs to.
@RESTURLPARAM{edge,string,required}
The *_key* attribute of the edge.
@RESTQUERYPARAMETERS
@ -34,7 +34,7 @@ you can supply the Etag in an attribute rev in the URL.
@RESTHEADERPARAM{if-none-match,string,optional}
If the "If-None-Match" header is given, then it must contain exactly one Etag. The document is returned,
only if it has a different revision than the given Etag. Otherwise an HTTP 304 is returned.
@RESTRETURNCODES

View File

@ -11,10 +11,10 @@ Replaces the data of an edge in the collection.
@RESTURLPARAM{graph,string,required}
The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the edge collection the edge belongs to.
@RESTURLPARAM{edge,string,required}
The *_key* attribute of the edge.
@RESTQUERYPARAMETERS

View File

@ -42,7 +42,7 @@ Number of shards created for every new collection in the graph.
The replication factor used for every new collection in the graph.
@RESTSTRUCT{_id,graph_representation,string,required,}
The internal id value of this graph.
@RESTSTRUCT{_rev,graph_representation,string,required,}
The revision of this graph. Can be used to make sure to not override

View File

@ -11,7 +11,7 @@ Adds a vertex to the given collection.
@RESTURLPARAM{graph,string,required}
The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the vertex collection the vertex should be inserted into.
@RESTQUERYPARAMETERS

View File

@ -11,10 +11,10 @@ Removes a vertex from the collection.
@RESTURLPARAM{graph,string,required}
The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the vertex collection the vertex belongs to.
@RESTURLPARAM{vertex,string,required}
The *_key* attribute of the vertex.
@RESTQUERYPARAMETERS


@ -11,10 +11,10 @@ Gets a vertex from the given collection.
@RESTURLPARAM{graph,string,required}
The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the vertex collection the vertex belongs to.
@RESTURLPARAM{vertex,string,required}
The *_key* attribute of the vertex.
@RESTQUERYPARAMETERS
@ -34,7 +34,7 @@ you can supply the Etag in an query parameter *rev*.
@RESTHEADERPARAM{if-none-match,string,optional}
If the "If-None-Match" header is given, then it must contain exactly one Etag. The document is returned,
only if it has a different revision than the given Etag. Otherwise an HTTP 304 is returned.
@RESTRETURNCODES


@ -14,7 +14,7 @@ The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the vertex collection the vertex belongs to.
@RESTURLPARAM{vertex,string,required}
The *_key* attribute of the vertex.
@RESTQUERYPARAMETERS


@ -11,10 +11,10 @@ Replaces the data of a vertex in the collection.
@RESTURLPARAM{graph,string,required}
The name of the graph.
@RESTURLPARAM{collection,string,required}
The name of the vertex collection the vertex belongs to.
@RESTURLPARAM{vertex,string,required}
The *_key* attribute of the vertex.
@RESTQUERYPARAMETERS


@ -18,7 +18,7 @@ attributes:
- *type*: the index type
All other attributes are type-dependent. For example, some indexes provide
*unique* or *sparse* flags, whereas others don't. Some indexes also provide
a selectivity estimate in the *selectivityEstimate* attribute of the result.
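A minimal arangosh sketch of inspecting these descriptions; the collection name and the hash index are hypothetical, and the shape of the listed entries (one object per index with *id*, *type* and the type-dependent attributes) is assumed here:

var cn = "products";  // hypothetical collection
db._create(cn);
db[cn].ensureIndex({ type: "hash", fields: [ "a" ] });
var url = "/_api/index?collection=" + cn;
var response = logCurlRequest('GET', url);
assert(response.code === 200);
logJsonResponse(response);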
@RESTRETURNCODES


@ -27,8 +27,8 @@ of the index details. Depending on the index type, a single attribute or
multiple attributes can be indexed. In the latter case, an array of
strings is expected.
Indexing the system attribute *_id* is not supported for user-defined indexes.
Manually creating an index using *_id* as an index attribute will fail with
an error.
Optionally, an index name may be specified as a string in the *name* attribute.
@ -53,10 +53,10 @@ cluster.
Hash, skiplist and persistent indexes can optionally be created in a sparse
variant. A sparse index will be created if the *sparse* attribute in
the index details is set to *true*. Sparse indexes do not index documents
for which any of the index attributes is either not set or is *null*.
The optional attribute **deduplicate** is supported by array indexes of
type *hash* or *skiplist*. It controls whether inserting duplicate index values
from the same document into a unique array index will lead to a unique constraint
error or not. The default value is *true*, so only a single instance of each
non-unique index value will be inserted into the index per document. Trying to


@ -51,9 +51,9 @@ Creating a fulltext index
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "fulltext",
fields: [ "text" ]
};
var response = logCurlRequest('POST', url, body);


@ -65,9 +65,9 @@ Creating a geo index with a location attribute
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "geo",
fields : [ "b" ]
};
var response = logCurlRequest('POST', url, body);
@ -86,9 +86,9 @@ Creating a geo index with latitude and longitude attributes
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "geo",
fields: [ "e", "f" ]
};
var response = logCurlRequest('POST', url, body);


@ -31,9 +31,9 @@ Creates a hash index for the collection *collection-name* if it
does not already exist. The call expects an object containing the index
details.
In a sparse index, all documents that do not contain at least one of the
specified index attributes (i.e. *fields*), or that have a value of *null* in
any of the specified index attributes, will be excluded from the index. Such documents
will not be indexed, and not be taken into account for uniqueness checks if
the *unique* flag is set.
@ -70,10 +70,10 @@ Creating an unique constraint
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "hash",
unique: true,
fields : [ "a", "b" ]
};
var response = logCurlRequest('POST', url, body);
@ -92,10 +92,10 @@ Creating a non-unique hash index
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "hash",
unique: false,
fields: [ "a", "b" ]
};
var response = logCurlRequest('POST', url, body);
@ -114,11 +114,11 @@ Creating a sparse index
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "hash",
unique: false,
sparse: true,
fields: [ "a" ]
};
var response = logCurlRequest('POST', url, body);


@ -27,9 +27,9 @@ Creates a persistent index for the collection *collection-name*, if
it does not already exist. The call expects an object containing the index
details.
In a sparse index, all documents that do not contain at least one of the
specified index attributes (i.e. *fields*), or that have a value of *null* in
any of the specified index attributes, will be excluded from the index. Such documents
will not be indexed, and not be taken into account for uniqueness checks if
the *unique* flag is set.
@ -67,10 +67,10 @@ Creating a persistent index
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "persistent",
unique: false,
fields: [ "a", "b" ]
};
var response = logCurlRequest('POST', url, body);
@ -89,11 +89,11 @@ Creating a sparse persistent index
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "persistent",
unique: false,
sparse: true,
fields: [ "a" ]
};
var response = logCurlRequest('POST', url, body);


@ -30,9 +30,9 @@ Creates a skip-list index for the collection *collection-name*, if
it does not already exist. The call expects an object containing the index
details.
In a sparse index, all documents that do not contain at least one of the
specified index attributes (i.e. *fields*), or that have a value of *null* in
any of the specified index attributes, will be excluded from the index. Such documents
will not be indexed, and not be taken into account for uniqueness checks if
the *unique* flag is set.
@ -70,10 +70,10 @@ Creating a skiplist index
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "skiplist",
unique: false,
fields: [ "a", "b" ]
};
var response = logCurlRequest('POST', url, body);
@ -92,11 +92,11 @@ Creating a sparse skiplist index
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "skiplist",
unique: false,
sparse: true,
fields: [ "a" ]
};
var response = logCurlRequest('POST', url, body);


@ -36,7 +36,7 @@ If the index does not already exist and could be created, then a *HTTP 201*
is returned.
@RESTRETURNCODE{400}
If the collection already contains another TTL index, then an *HTTP 400* is
returned, as there can be at most one TTL index per collection.
@RESTRETURNCODE{404}
@ -52,10 +52,10 @@ Creating a TTL index
db._create(cn);
var url = "/_api/index?collection=" + cn;
var body = {
type: "ttl",
expireAfter: 3600,
fields : [ "createdAt" ]
};
var response = logCurlRequest('POST', url, body);


@ -39,17 +39,17 @@ The response is a JSON object with the following attributes:
applicable to the applier.
Client applications can use it to determine approximately how far the applier
is behind the remote server, and can periodically check if the value is
increasing (applier is falling behind) or decreasing (applier is catching up).
Please note that the remote server will only keep one last log tick value
for all of its databases, while replication may be restricted to just certain
databases on the applier. This value is therefore more meaningful when the global applier
is used.
Additionally, the last log tick provided by the remote server may increase
due to writes into system collections that are not replicated due to replication
configuration. So the reported value may exaggerate the reality a bit for
some scenarios.
- *time*: the time on the applier server.


@ -8,8 +8,8 @@
Returns the last available tick value that can be served from the server's
replication log. This corresponds to the tick of the latest successful operation.
The result is a JSON object containing the attributes *tick*, *time* and *server*.
* *tick*: contains the last available tick
* *time*: the server time as string in format "YYYY-MM-DDTHH:MM:SSZ"
* *server*: An object with fields *version* and *serverId*
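A minimal sketch of querying this from arangosh; the */_api/wal/lastTick* path is not shown in this fragment and is an assumption here:

var url = "/_api/wal/lastTick";  // assumed path
var response = logCurlRequest('GET', url);
assert(response.code === 200);
// the logged body carries the *tick*, *time* and *server* attributes described above
logJsonResponse(response);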


@ -9,7 +9,7 @@ Returns the currently available ranges of tick values for all WAL files.
The tick values can be used to determine if certain
data (identified by tick value) are still available for replication.
The body of the response contains a JSON object.
* *tickMin*: minimum tick available
* *tickMax*: maximum tick available
* *time*: the server time as string in format "YYYY-MM-DDTHH:MM:SSZ"
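A sketch of how a replication client might use this from arangosh; the */_api/wal/range* path is not shown in this fragment and is an assumption here:

var url = "/_api/wal/range";  // assumed path
var response = logCurlRequest('GET', url);
assert(response.code === 200);
// a client can compare its last processed tick against *tickMin* and *tickMax*
// to decide whether the data it needs is still available for replication
logJsonResponse(response);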


@ -18,7 +18,7 @@ Inclusive upper bound tick value for results.
@RESTQUERYPARAM{lastScanned,number,optional}
Should be set to the value of the *x-arango-replication-lastscanned* header
or alternatively 0 on first try. This allows the rocksdb engine to break up
large transactions over multiple responses.
@RESTQUERYPARAM{global,boolean,optional}
Whether operations for all databases should be included. When set to *false*
@ -29,7 +29,7 @@ only valid on the *_system* database. The default is *false*.
Approximate maximum size of the returned result.
@RESTQUERYPARAM{syncerId,number,optional}
Id of the client used to tail results. The server will use this to
keep operations until the client has fetched them. Must be a positive integer.
**Note** this or serverId is required to have a chance at fetching all
operations with the rocksdb storage engine.
@ -45,7 +45,7 @@ operations with the rocksdb storage engine.
Short description of the client, used for informative purposes only.
@RESTQUERYPARAM{barrierId,number,optional}
Id of barrier used to keep WAL entries around. **Note** this is only required for the
MMFiles storage engine
@RESTDESCRIPTION


@ -92,8 +92,8 @@ to the master in case there is no write activity on the master.
This value will be ignored if set to *0*.
@RESTBODYPARAM{idleMaxWaitTime,integer,optional,int64}
the maximum wait time (in seconds) that the applier will intentionally idle
before fetching more log data from the master in case the master has
already sent all its log data and there have been previous log fetch attempts
that resulted in no more log data. This wait time can be used to control the
maximum frequency with which the replication applier sends HTTP log fetch
@ -105,13 +105,13 @@ This value will be ignored if set to *0*.
@RESTBODYPARAM{requireFromPresent,boolean,required,}
if set to *true*, then the replication applier will check
at start whether the start tick from which it starts or resumes replication is
still present on the master. If not, then there would be data loss. If
*requireFromPresent* is *true*, the replication applier will abort with an
appropriate error message. If set to *false*, then the replication applier will
still start, and ignore the data loss.
@RESTBODYPARAM{verbose,boolean,required,}
if set to *true*, then a log line will be emitted for all operations
performed by the replication applier. This should be used for debugging replication
problems only.
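A hedged sketch of setting these options from arangosh; the */_api/replication/applier-config* path and the *endpoint* attribute are assumptions here, and the master endpoint is hypothetical:

var url = "/_api/replication/applier-config";  // assumed path
var body = {
endpoint: "tcp://master.example.org:8529",  // hypothetical master
idleMaxWaitTime: 2,
requireFromPresent: true,
verbose: false
};
var response = logCurlRequest('PUT', url, body);
assert(response.code === 200);
logJsonResponse(response);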


@ -146,19 +146,19 @@ attributes:
then the remote server has additional data that the applier has not yet
fetched and processed, or the remote server may have more data that is not
applicable to the applier.
Client applications can use it to determine approximately how far the applier
is behind the remote server, and can periodically check if the value is
increasing (applier is falling behind) or decreasing (applier is catching up).
Please note that the remote server will only keep one last log tick value
for all of its databases, while replication may be restricted to just certain
databases on the applier. This value is therefore more meaningful when the global applier
is used.
Additionally, the last log tick provided by the remote server may increase
due to writes into system collections that are not replicated due to replication
configuration. So the reported value may exaggerate the reality a bit for
some scenarios.
- *time*: the time on the applier server.


@ -49,7 +49,7 @@ Equivalent AQL query (the RETURN clause is optional):
The body of the response contains a JSON object with information about how many
documents were removed (and how many were not). The *removed* attribute will
contain the number of actually removed documents. The *ignored* attribute
will contain the number of keys in the request for which no matching document
could be found.
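A sketch of the corresponding call from arangosh; the */_api/simple/remove-by-keys* path and the *collection*/*keys* body attributes are assumed here, and the collection content is hypothetical:

db._create("products");
db.products.save({ _key: "a" });
db.products.save({ _key: "b" });
var url = "/_api/simple/remove-by-keys";  // assumed path
var body = { collection: "products", keys: [ "a", "b", "c" ] };
var response = logCurlRequest('PUT', url, body);
assert(response.code === 200);
// expected here: *removed* is 2 and *ignored* is 1, as there is no document with key "c"
logJsonResponse(response);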


@ -28,9 +28,9 @@ as body with the following attributes:
- *batchSize*: The number of documents to return in one go. (optional)
- *ttl*: The time-to-live for the cursor (in seconds, optional).
- *stream*: Create this cursor as a stream query (optional).
Returns a cursor containing the result, see [HTTP Cursor](../AqlQueryCursor/README.md) for details.
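As a hedged sketch, assuming this fragment documents the */_api/simple/all* endpoint (the path is not shown here) and a hypothetical *products* collection:

var url = "/_api/simple/all";  // assumed path
var body = { collection: "products", batchSize: 100, ttl: 60, stream: true };
var response = logCurlRequest('PUT', url, body);
assert(response.code === 201);
logJsonResponse(response);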


@ -41,13 +41,13 @@ for the collection and the specified attribute.
Returns a cursor containing the result, see [HTTP Cursor](../AqlQueryCursor/README.md) for details.
Note: the *fulltext* simple query is **deprecated** as of ArangoDB 2.6.
This API may be removed in future versions of ArangoDB. The preferred
way for retrieving documents from a collection using the fulltext operator is
to issue an AQL query using the *FULLTEXT* [AQL function](../../AQL/Functions/Fulltext.html)
as follows:
FOR doc IN FULLTEXT(@@collection, @attributeName, @queryString, @limit)
RETURN doc
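From arangosh, the same query can be issued with bind parameters, for example (the collection and attribute names are hypothetical):

var result = db._query(
"FOR doc IN FULLTEXT(@@collection, @attributeName, @queryString, @limit) RETURN doc",
{ "@collection": "products", "attributeName": "text", "queryString": "prefix:find", "limit": 100 }
).toArray();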
@RESTRETURNCODES


@ -48,10 +48,10 @@ the *geo* field to select a particular index.
Returns a cursor containing the result, see [HTTP Cursor](../AqlQueryCursor/README.md) for details.
Note: the *near* simple query is **deprecated** as of ArangoDB 2.6.
This API may be removed in future versions of ArangoDB. The preferred
way for retrieving documents from a collection using the near operator is
to issue an [AQL query](../../AQL/Functions/Geo.html) using the *NEAR* function as follows:
FOR doc IN NEAR(@@collection, @latitude, @longitude, @limit)
RETURN doc


@ -41,14 +41,14 @@ range query, a skip-list index on the queried attribute must be present.
Returns a cursor containing the result, see [HTTP Cursor](../AqlQueryCursor/README.md) for details.
Note: the *range* simple query is **deprecated** as of ArangoDB 2.6.
The function may be removed in future versions of ArangoDB. The preferred
way for retrieving documents from a collection within a specific range
is to use an AQL query as follows:
FOR doc IN @@collection
FILTER doc.value >= @left && doc.value < @right
LIMIT @skip, @limit
RETURN doc
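From arangosh, the same range query can be issued with bind parameters, for example (the collection name and the queried *value* attribute are hypothetical):

var result = db._query(
"FOR doc IN @@collection FILTER doc.value >= @left && doc.value < @right LIMIT @skip, @limit RETURN doc",
{ "@collection": "products", "left": 2, "right": 5, "skip": 0, "limit": 100 }
).toArray();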
@RESTRETURNCODES
@ -62,7 +62,7 @@ query. The response body contains an error document in this case.
@RESTRETURNCODE{404}
is returned if the collection specified by *collection* is unknown or no
suitable index for the range query is present. The response body contains
an error document in this case.
@EXAMPLES


@ -49,10 +49,10 @@ you can use the *geo* field to select a particular index.
Returns a cursor containing the result, see [HTTP Cursor](../AqlQueryCursor/README.md) for details.
Note: the *within* simple query is **deprecated** as of ArangoDB 2.6.
This API may be removed in future versions of ArangoDB. The preferred
way for retrieving documents from a collection using the within operator is
to issue an [AQL query](../../AQL/Functions/Geo.html) using the *WITHIN* function as follows:
FOR doc IN WITHIN(@@collection, @latitude, @longitude, @radius, @distanceAttributeName)
RETURN doc


@ -39,7 +39,7 @@ If given, the identifier of the geo-index to use. (optional)
@RESTDESCRIPTION
This will find all documents within the specified rectangle (determined by
the given coordinates *latitude1*, *longitude1*, *latitude2*, *longitude2*).
In order to use the *within-rectangle* query, a geo index must be defined for
the collection. This index also defines which attribute holds the
@ -74,7 +74,7 @@ response body contains an error document in this case.
}
var url = "/_api/simple/within-rectangle";
var body = {
collection: "products",
latitude1 : 0,
longitude1 : 0,
latitude2 : 0.2,


@ -10,10 +10,10 @@
The transaction identifier,
@RESTDESCRIPTION
Abort a running server-side transaction. Aborting is an idempotent operation.
It is not an error to abort a transaction more than once.
If the transaction can be aborted, *HTTP 200* will be returned.
The returned JSON object has the following properties:
- *error*: boolean flag to indicate if an error occurred (*false*
@ -25,7 +25,7 @@ The returned JSON object has the following properties:
- *id*: the identifier of the transaction
- *status*: containing the string 'aborted'
If the transaction cannot be found, aborting is not allowed or the
transaction was already committed, the server
will respond with *HTTP 400*, *HTTP 404* or *HTTP 409*.
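A sketch of aborting a transaction from arangosh; the *DELETE /_api/transaction/{transaction-id}* route is assumed here and the transaction id is hypothetical:

var url = "/_api/transaction/12345";  // hypothetical transaction id
var response = logCurlRequest('DELETE', url);
assert(response.code === 200);
// the result carries the transaction *id* and the *status* 'aborted'
logJsonResponse(response);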


@ -10,7 +10,7 @@
The transaction identifier.
@RESTDESCRIPTION
The result is an object describing the status of the transaction.
It has at least the following attributes:
- *id*: the identifier of the transaction


@ -35,7 +35,7 @@ Transaction size limit in bytes. Honored by the RocksDB storage engine only.
@RESTDESCRIPTION
The transaction description must be passed in the body of the POST request.
If the transaction can be started on the server, *HTTP 201* will be returned.
For successfully started transactions, the returned JSON object has the
following properties:


@ -10,10 +10,10 @@
The transaction identifier,
@RESTDESCRIPTION
Commit a running server-side transaction. Committing is an idempotent operation.
It is not an error to commit a transaction more than once.
If the transaction can be committed, *HTTP 200* will be returned.
The returned JSON object has the following properties:
- *error*: boolean flag to indicate if an error occurred (*false*
@ -25,7 +25,7 @@ The returned JSON object has the following properties:
- *id*: the identifier of the transaction
- *status*: containing the string 'committed'
If the transaction cannot be found, committing is not allowed or the
transaction was aborted, the server
will respond with *HTTP 400*, *HTTP 404* or *HTTP 409*.


@ -34,7 +34,7 @@ Using an identifier:
assert(response.code === 200);
logJsonResponse(response);
db._dropView("testView");
@END_EXAMPLE_ARANGOSH_RUN


@ -8,10 +8,10 @@
@RESTURLPARAM{type,string,required}
The type of jobs to delete. type can be:
* *all*: Deletes all job results. Currently executing or queued async
jobs will not be stopped by this call.
* *expired*: Deletes expired results. To determine the expiration status of a
result, pass the stamp query parameter. stamp needs to be a UNIX timestamp,
and all async job results created at a lower timestamp will be deleted.
* *an actual job-id*: In this case, the call will remove the result of the
specified async job. If the job is currently executing or queued, it will