mirror of https://gitee.com/bigwinds/arangodb
de-externalise docublocks.
This commit is contained in:
parent 679e2db476
commit 3de1cc1fb3

(One file diff suppressed because it is too large.)
@@ -12,16 +12,117 @@ function code must be specified.

!SUBSECTION Register

<!-- js/common/modules/@arangodb/aql/functions.js -->
@startDocuBlock aqlFunctionsRegister

@brief register an AQL user function

`aqlfunctions.register(name, code, isDeterministic)`

Registers an AQL user function, identified by a fully qualified function
name. The function code in *code* must be specified as a JavaScript
function or a string representation of a JavaScript function.
If the function code in *code* is passed as a string, it is required that
the string evaluates to a JavaScript function definition.

If a function identified by *name* already exists, the previous function
definition will be updated. Please also make sure that the function code
does not violate the [Conventions](../AqlExtending/Conventions.md) for AQL
functions.

The *isDeterministic* attribute can be used to specify whether the
function results are fully deterministic (i.e. depend solely on the input
and are the same for repeated calls with the same input values). It is not
used at the moment but may be used for optimizations later.

The registered function is stored in the selected database's system
collection *_aqlfunctions*.

The function returns *true* when it updates/replaces an existing AQL
function of the same name, and *false* otherwise. It will throw an exception
when it detects syntactically invalid function code.

@EXAMPLES

```js
require("@arangodb/aql/functions").register("myfunctions::temperature::celsiustofahrenheit",
function (celsius) {
  return celsius * 1.8 + 32;
});
```
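The code passed to *register* is ordinary JavaScript, so it can be exercised outside of ArangoDB as well. A minimal standalone sketch (plain JavaScript, no ArangoDB required; `celsiusToFahrenheit` and `fromString` are illustrative names, not part of the API):

```js
// The same conversion function the example above registers under
// "myfunctions::temperature::celsiustofahrenheit".
function celsiusToFahrenheit(celsius) {
  return celsius * 1.8 + 32;
}

// Passing the code as a string also works, provided the string
// evaluates to a JavaScript function definition (see above).
const fromString = eval("(function (celsius) { return celsius * 1.8 + 32; })");

console.log(celsiusToFahrenheit(0));  // 32
console.log(fromString(100));         // 212
```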
!SUBSECTION Unregister

<!-- js/common/modules/@arangodb/aql/functions.js -->
@startDocuBlock aqlFunctionsUnregister

@brief delete an existing AQL user function

`aqlfunctions.unregister(name)`

Unregisters an existing AQL user function, identified by the fully qualified
function name.

Trying to unregister a function that does not exist will result in an
exception.

@EXAMPLES

```js
require("@arangodb/aql/functions").unregister("myfunctions::temperature::celsiustofahrenheit");
```
!SUBSECTION Unregister Group

<!-- js/common/modules/@arangodb/aql/functions.js -->
@startDocuBlock aqlFunctionsUnregisterGroup

@brief delete a group of AQL user functions

`aqlfunctions.unregisterGroup(prefix)`

Unregisters a group of AQL user functions, identified by a common function
group prefix.

This will return the number of functions unregistered.

@EXAMPLES

```js
require("@arangodb/aql/functions").unregisterGroup("myfunctions::temperature");

require("@arangodb/aql/functions").unregisterGroup("myfunctions");
```
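How a prefix selects a group can be illustrated without a server. A sketch under the assumption, consistent with the two examples above, that a function belongs to a group when its fully qualified name starts with the prefix followed by `::` (`matchesGroup` is a hypothetical helper, not ArangoDB code):

```js
// Hypothetical illustration of namespace-prefix matching: a function
// belongs to a group when its name equals the prefix or continues it
// at a "::" namespace boundary.
function matchesGroup(name, prefix) {
  return name === prefix || name.startsWith(prefix + "::");
}

const registered = [
  "myfunctions::temperature::celsiustofahrenheit",
  "myfunctions::temperature::fahrenheittocelsius",
  "myfunctions::other::helper"
];

// unregisterGroup("myfunctions::temperature") would remove two of these,
// unregisterGroup("myfunctions") all three.
const removed = registered.filter(n => matchesGroup(n, "myfunctions::temperature"));
console.log(removed.length);  // 2
```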
!SUBSECTION To Array

<!-- js/common/modules/@arangodb/aql/functions.js -->
@startDocuBlock aqlFunctionsToArray

@brief list all AQL user functions

`aqlfunctions.toArray()`

Returns all previously registered AQL user functions, with their fully
qualified names and function code.

The result may optionally be restricted to a specified group of functions
by specifying a group prefix:

`aqlfunctions.toArray(prefix)`

@EXAMPLES

To list all available user functions:

```js
require("@arangodb/aql/functions").toArray();
```

To list all available user functions in the *myfunctions* namespace:

```js
require("@arangodb/aql/functions").toArray("myfunctions");
```

To list all available user functions in the *myfunctions::temperature* namespace:

```js
require("@arangodb/aql/functions").toArray("myfunctions::temperature");
```
@@ -2,11 +2,53 @@

!SUBSECTION Drop

<!-- arangod/V8Server/v8-collection.cpp -->
@startDocuBlock collectionDrop

@brief drops a collection

`collection.drop()`

Drops a *collection* and all its indexes.

@EXAMPLES

@startDocuBlockInline collectionDrop
@EXAMPLE_ARANGOSH_OUTPUT{collectionDrop}
~ db._create("example");
col = db.example;
col.drop();
col;
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDrop
!SUBSECTION Truncate

<!-- js/server/modules/@arangodb/arango-collection.js -->
@startDocuBlock collectionTruncate

@brief truncates a collection

`collection.truncate()`

Truncates a *collection*, removing all documents but keeping all its
indexes.

@EXAMPLES

Truncates a collection:

@startDocuBlockInline collectionTruncate
@EXAMPLE_ARANGOSH_OUTPUT{collectionTruncate}
~ db._create("example");
col = db.example;
col.save({ "Hello" : "World" });
col.count();
col.truncate();
col.count();
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionTruncate
!SUBSECTION Properties

<!-- arangod/V8Server/v8-collection.cpp -->

@@ -14,11 +56,131 @@
!SUBSECTION Figures

<!-- arangod/V8Server/v8-collection.cpp -->
@startDocuBlock collectionFigures

@brief returns the figures of a collection

`collection.figures()`

Returns an object containing statistics about the collection.
**Note**: Retrieving the figures will always load the collection into
memory.

* *alive.count*: The number of currently active documents in all datafiles and
  journals of the collection. Documents that are contained in the
  write-ahead log only are not reported in this figure.
* *alive.size*: The total size in bytes used by all active documents of the
  collection. Documents that are contained in the write-ahead log only are
  not reported in this figure.
* *dead.count*: The number of dead documents. This includes document
  versions that have been deleted or replaced by a newer version. Documents
  deleted or replaced that are contained in the write-ahead log only are not
  reported in this figure.
* *dead.size*: The total size in bytes used by all dead documents.
* *dead.deletion*: The total number of deletion markers. Deletion markers
  only contained in the write-ahead log are not reported in this figure.
* *datafiles.count*: The number of datafiles.
* *datafiles.fileSize*: The total filesize of datafiles (in bytes).
* *journals.count*: The number of journal files.
* *journals.fileSize*: The total filesize of the journal files (in bytes).
* *compactors.count*: The number of compactor files.
* *compactors.fileSize*: The total filesize of the compactor files (in bytes).
* *shapefiles.count*: The number of shape files. This value is
  deprecated and kept for compatibility reasons only. The value will always
  be 0 since ArangoDB 2.0 and higher.
* *shapefiles.fileSize*: The total filesize of the shape files. This
  value is deprecated and kept for compatibility reasons only. The value will
  always be 0 in ArangoDB 2.0 and higher.
* *shapes.count*: The total number of shapes used in the collection.
  This includes shapes that are not in use anymore. Shapes that are contained
  in the write-ahead log only are not reported in this figure.
* *shapes.size*: The total size of all shapes (in bytes). This includes
  shapes that are not in use anymore. Shapes that are contained in the
  write-ahead log only are not reported in this figure.
* *attributes.count*: The total number of attributes used in the
  collection. Note: the value includes data of attributes that are not in use
  anymore. Attributes that are contained in the write-ahead log only are
  not reported in this figure.
* *attributes.size*: The total size of the attribute data (in bytes).
  Note: the value includes data of attributes that are not in use anymore.
  Attributes that are contained in the write-ahead log only are not
  reported in this figure.
* *indexes.count*: The total number of indexes defined for the
  collection, including the pre-defined indexes (e.g. primary index).
* *indexes.size*: The total memory allocated for indexes in bytes.
* *maxTick*: The tick of the last marker that was stored in a journal
  of the collection. This might be 0 if the collection does not yet have
  a journal.
* *uncollectedLogfileEntries*: The number of markers in the write-ahead
  log for this collection that have not been transferred to journals or
  datafiles.
* *documentReferences*: The number of references to documents in datafiles
  that JavaScript code currently holds. This information can be used for
  debugging compaction and unload issues.
* *waitingFor*: An optional string value that contains information about
  which object type is at the head of the collection's cleanup queue. This
  information can be used for debugging compaction and unload issues.
* *compactionStatus.time*: The point in time the compaction for the collection
  was last executed. This information can be used for debugging compaction
  issues.
* *compactionStatus.message*: The action that was performed when the compaction
  was last run for the collection. This information can be used for debugging
  compaction issues.

**Note**: collection data that is stored in the write-ahead log only is
not reported in the results. When the write-ahead log is collected, documents
might be added to journals and datafiles of the collection, which may modify
the figures of the collection. Also note that `waitingFor` and `compactionStatus`
may be empty when called on a coordinator in a cluster.

Additionally, the filesizes of collection and index parameter JSON files are
not reported. These files should normally have a size of a few bytes
each. Please also note that the *fileSize* values are reported in bytes
and reflect the logical file sizes. Some filesystems may use optimisations
(e.g. sparse files) so that the actual physical file size is somewhat
different. Directories and sub-directories may also require space in the
file system, but this space is not reported in the *fileSize* results.

That means that the figures reported do not reflect the actual disk
usage of the collection with 100% accuracy. The actual disk usage of
a collection is normally slightly higher than the sum of the reported
*fileSize* values. Still, the sum of the *fileSize* values can be
used as a lower bound approximation of the disk usage.

@EXAMPLES

@startDocuBlockInline collectionFigures
@EXAMPLE_ARANGOSH_OUTPUT{collectionFigures}
~ require("internal").wal.flush(true, true);
db.demo.figures()
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionFigures
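The lower-bound estimate described above can be computed directly from a figures result. A sketch (the `figures` object below uses made-up numbers; its shape follows the attribute list above):

```js
// Sum the *fileSize* figures of datafiles, journals and compactors to get
// the lower-bound disk usage discussed above. Parameter JSON files,
// directories etc. are not included, so actual usage is slightly higher.
function lowerBoundDiskUsage(figures) {
  return figures.datafiles.fileSize
       + figures.journals.fileSize
       + figures.compactors.fileSize;
}

// Example with made-up numbers (two 32 MB datafiles worth of data):
const figures = {
  datafiles:  { count: 2, fileSize: 33554432 },
  journals:   { count: 1, fileSize: 33554432 },
  compactors: { count: 0, fileSize: 0 }
};
console.log(lowerBoundDiskUsage(figures));  // 67108864
```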
!SUBSECTION Load

<!-- arangod/V8Server/v8-collection.cpp -->
@startDocuBlock collectionLoad

@brief loads a collection

`collection.load()`

Loads a collection into memory.

@EXAMPLES

@startDocuBlockInline collectionLoad
@EXAMPLE_ARANGOSH_OUTPUT{collectionLoad}
~ db._create("example");
col = db.example;
col.load();
col;
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionLoad
!SUBSECTION Reserve

`collection.reserve(number)`

@@ -31,20 +193,114 @@ Not all indexes implement the reserve function at the moment. The indexes that d
!SUBSECTION Revision

<!-- arangod/V8Server/v8-collection.cpp -->
@startDocuBlock collectionRevision

@brief returns the revision id of a collection

`collection.revision()`

Returns the revision id of the collection.

The revision id is updated when the document data is modified, either by
inserting, deleting, updating or replacing documents in it.

The revision id of a collection can be used by clients to check whether
data in a collection has changed or if it is still unmodified since a
previous fetch of the revision id.

The revision id returned is a string value. Clients should treat this value
as an opaque string, and only use it for equality/non-equality comparisons.
!SUBSECTION Checksum

<!-- arangod/V8Server/v8-query.cpp -->
@startDocuBlock collectionChecksum

@brief calculates a checksum for the data in a collection

`collection.checksum(withRevisions, withData)`

The *checksum* operation calculates a CRC32 checksum of the keys
contained in collection *collection*.

If the optional argument *withRevisions* is set to *true*, then the
revision ids of the documents are also included in the checksumming.

If the optional argument *withData* is set to *true*, then the
actual document data is also checksummed. Including the document data in
checksumming will make the calculation slower, but is more accurate.

**Note**: this method is not available in a cluster.
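The text does not specify which CRC32 variant ArangoDB uses internally, so as an illustration only, here is the standard reflected CRC-32 (polynomial 0xEDB88320) in plain JavaScript, together with one order-independent way of combining per-key checksums. This is a sketch of the general technique, not ArangoDB's implementation:

```js
// Standard reflected CRC-32, bitwise (table-less) variant.
function crc32(str) {
  let crc = 0xFFFFFFFF;
  for (let i = 0; i < str.length; i++) {
    crc ^= str.charCodeAt(i);
    for (let j = 0; j < 8; j++) {
      crc = (crc >>> 1) ^ (0xEDB88320 & -(crc & 1));
    }
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}

// A checksum over a set of document keys can combine per-key checksums,
// e.g. by XOR, so the result is independent of iteration order:
const keys = ["9915", "9917", "9919"];
const checksum = keys.reduce((acc, k) => acc ^ crc32(k), 0) >>> 0;
```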
!SUBSECTION Unload

<!-- arangod/V8Server/v8-collection.cpp -->
@startDocuBlock collectionUnload

@brief unloads a collection

`collection.unload()`

Starts unloading a collection from memory. Note that unloading is deferred
until all queries have finished.

@EXAMPLES

@startDocuBlockInline CollectionUnload
@EXAMPLE_ARANGOSH_OUTPUT{CollectionUnload}
~ db._create("example");
col = db.example;
col.unload();
col;
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock CollectionUnload
!SUBSECTION Rename

<!-- arangod/V8Server/v8-collection.cpp -->
@startDocuBlock collectionRename

@brief renames a collection

`collection.rename(new-name)`

Renames a collection using the *new-name*. The *new-name* must not
already be used for a different collection. *new-name* must also be a
valid collection name. For more information on valid collection names please
refer to the [naming conventions](../NamingConventions/README.md).

If renaming fails for any reason, an error is thrown.
If renaming the collection succeeds, then the collection is also renamed in
all graph definitions inside the `_graphs` collection in the current
database.

**Note**: this method is not available in a cluster.

@EXAMPLES

@startDocuBlockInline collectionRename
@EXAMPLE_ARANGOSH_OUTPUT{collectionRename}
~ db._create("example");
c = db.example;
c.rename("better-example");
c;
~ db._drop("better-example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionRename
!SUBSECTION Rotate

<!-- arangod/V8Server/v8-collection.cpp -->
@startDocuBlock collectionRotate

@brief rotates the current journal of a collection

`collection.rotate()`

Rotates the current journal of a collection. This operation makes the
current journal of the collection a read-only datafile so it may become a
candidate for garbage collection. If there is currently no journal available
for the collection, the operation will fail with an error.

**Note**: this method is not available in a cluster.
@@ -2,30 +2,354 @@

!SUBSECTION Collection

<!-- arangod/V8Server/v8-vocbase.cpp -->
@startDocuBlock collectionDatabaseName

@brief returns a single collection or null

`db._collection(collection-name)`

Returns the collection with the given name or null if no such collection
exists.

`db._collection(collection-identifier)`

Returns the collection with the given identifier or null if no such
collection exists. Accessing collections by identifier is discouraged for
end users. End users should access collections using the collection name.

@EXAMPLES

Get a collection by name:

@startDocuBlockInline collectionDatabaseName
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseName}
db._collection("demo");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseName

Get a collection by id:

```
arangosh> db._collection(123456);
[ArangoCollection 123456, "demo" (type document, status loaded)]
```

Unknown collection:

@startDocuBlockInline collectionDatabaseNameUnknown
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseNameUnknown}
db._collection("unknown");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseNameUnknown
!SUBSECTION Create

<!-- arangod/V8Server/v8-vocindex.cpp -->
@startDocuBlock collectionDatabaseCreate

@brief creates a new document or edge collection

`db._create(collection-name)`

Creates a new document collection named *collection-name*.
If the collection name already exists or if the name format is invalid, an
error is thrown. For more information on valid collection names please refer
to the [naming conventions](../NamingConventions/README.md).

`db._create(collection-name, properties)`

*properties* must be an object with the following attributes:

* *waitForSync* (optional, default *false*): If *true*, creating
  a document will only return after the data was synced to disk.

* *journalSize* (optional, default is a
  configuration parameter): The maximal
  size of a journal or datafile. Note that this also limits the maximal
  size of a single object. Must be at least 1MB.

* *isSystem* (optional, default is *false*): If *true*, create a
  system collection. In this case *collection-name* should start with
  an underscore. End users should normally create non-system collections
  only. API implementors may be required to create system collections in
  very special occasions, but normally a regular collection will do.

* *isVolatile* (optional, default is *false*): If *true*, then the
  collection data is kept in-memory only and not made persistent. Unloading
  the collection will cause the collection data to be discarded. Stopping
  or re-starting the server will also cause full loss of data in the
  collection. Setting this option will make the resulting collection be
  slightly faster than regular collections because ArangoDB does not
  enforce any synchronization to disk and does not calculate any CRC
  checksums for datafiles (as there are no datafiles).

* *keyOptions* (optional): additional options for key generation. If
  specified, then *keyOptions* should be a JSON object containing the
  following attributes (**note**: some of them are optional):
  * *type*: specifies the type of the key generator. The currently
    available generators are *traditional* and *autoincrement*.
  * *allowUserKeys*: if set to *true*, then it is allowed to supply
    own key values in the *_key* attribute of a document. If set to
    *false*, then the key generator will solely be responsible for
    generating keys and supplying own key values in the *_key* attribute
    of documents is considered an error.
  * *increment*: increment value for the *autoincrement* key generator.
    Not used for other key generator types.
  * *offset*: initial offset value for the *autoincrement* key generator.
    Not used for other key generator types.

* *numberOfShards* (optional, default is *1*): in a cluster, this value
  determines the number of shards to create for the collection. In a single
  server setup, this option is meaningless.

* *shardKeys* (optional, default is *[ "_key" ]*): in a cluster, this
  attribute determines which document attributes are used to determine the
  target shard for documents. Documents are sent to shards based on the
  values they have in their shard key attributes. The values of all shard
  key attributes in a document are hashed, and the hash value is used to
  determine the target shard. Note that values of shard key attributes cannot
  be changed once set.
  This option is meaningless in a single server setup.

  When choosing the shard keys, one must be aware of the following
  rules and limitations: In a sharded collection with more than
  one shard it is not possible to set up a unique constraint on
  an attribute that is not the one and only shard key given in
  *shardKeys*. This is because enforcing a unique constraint
  would otherwise make a global index necessary or need extensive
  communication for every single write operation. Furthermore, if
  *_key* is not the one and only shard key, then it is not possible
  to set the *_key* attribute when inserting a document, provided
  the collection has more than one shard. Again, this is because
  the database has to enforce the unique constraint on the *_key*
  attribute and this can only be done efficiently if this is the
  only shard key by delegating to the individual shards.
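The shard-selection step described above (hash the shard key values, map the hash to a shard) can be sketched as follows. This is illustrative only: the hash function below is a simple stand-in, not ArangoDB's actual distribution hash, and `targetShard` is a hypothetical helper:

```js
// Simple stand-in string hash (NOT ArangoDB's real hash).
function simpleHash(str) {
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = (h * 31 + str.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Pick a shard from the values of the shard key attributes only;
// attributes outside shardKeys do not influence placement.
function targetShard(doc, shardKeys, numberOfShards) {
  const material = shardKeys.map(k => String(doc[k])).join("\u0000");
  return simpleHash(material) % numberOfShards;
}

// With shardKeys = ["_key"], documents with the same _key value always
// land on the same shard, regardless of their other attributes:
const a = targetShard({ _key: "alice", age: 42 }, ["_key"], 4);
const b = targetShard({ _key: "alice", age: 99 }, ["_key"], 4);
// a === b
```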
`db._create(collection-name, properties, type)`

Specifies the optional *type* of the collection; it can either be *document*
or *edge*. By default it is *document*. Instead of giving a type you can also use
*db._createEdgeCollection* or *db._createDocumentCollection*.

@EXAMPLES

With defaults:

@startDocuBlockInline collectionDatabaseCreate
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCreate}
c = db._create("users");
c.properties();
~ db._drop("users");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseCreate

With properties:

@startDocuBlockInline collectionDatabaseCreateProperties
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCreateProperties}
|c = db._create("users", { waitForSync : true,
journalSize : 1024 * 1204});
c.properties();
~ db._drop("users");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseCreateProperties

With a key generator:

@startDocuBlockInline collectionDatabaseCreateKey
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCreateKey}
| db._create("users",
{ keyOptions: { type: "autoincrement", offset: 10, increment: 5 } });
db.users.save({ name: "user 1" });
db.users.save({ name: "user 2" });
db.users.save({ name: "user 3" });
~ db._drop("users");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseCreateKey
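The *autoincrement* generator in the example above can be sketched in plain JavaScript. This is a sketch under the assumption, not verified against ArangoDB internals, that the first generated key equals *offset* and each subsequent key adds *increment* (`makeAutoIncrement` is a hypothetical helper):

```js
// Sketch of an autoincrement key generator with offset/increment.
// Assumption: the first key is the offset itself.
function makeAutoIncrement(offset, increment) {
  let next = offset;
  return function () {
    const key = String(next);  // document keys are strings
    next += increment;
    return key;
  };
}

const gen = makeAutoIncrement(10, 5);
console.log(gen(), gen(), gen());  // "10" "15" "20"
```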
With a special key option:

@startDocuBlockInline collectionDatabaseCreateSpecialKey
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCreateSpecialKey}
db._create("users", { keyOptions: { allowUserKeys: false } });
db.users.save({ name: "user 1" });
| db.users.save({ name: "user 2", _key: "myuser" });
~ // xpError(ERROR_ARANGO_DOCUMENT_KEY_UNEXPECTED)
db.users.save({ name: "user 3" });
~ db._drop("users");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseCreateSpecialKey
<!-- arangod/V8Server/v8-vocindex.cpp -->
@startDocuBlock collectionCreateEdgeCollection

@brief creates a new edge collection

`db._createEdgeCollection(collection-name)`

Creates a new edge collection named *collection-name*. If the
collection name already exists an error is thrown. The default value
for *waitForSync* is *false*.

`db._createEdgeCollection(collection-name, properties)`

*properties* must be an object with the following attributes:

* *waitForSync* (optional, default *false*): If *true*, creating
  a document will only return after the data was synced to disk.
* *journalSize* (optional, default is a
  configuration parameter): The maximal size of
  a journal or datafile. Note that this also limits the maximal
  size of a single object and must be at least 1MB.
<!-- arangod/V8Server/v8-vocindex.cpp -->
@startDocuBlock collectionCreateDocumentCollection

@brief creates a new document collection

`db._createDocumentCollection(collection-name)`

Creates a new document collection named *collection-name*. If the
collection name already exists, an error is thrown.
!SUBSECTION All Collections

<!-- arangod/V8Server/v8-vocbase.cpp -->
@startDocuBlock collectionDatabaseNameAll

@brief returns all collections

`db._collections()`

Returns all collections of the given database.

@EXAMPLES

@startDocuBlockInline collectionsDatabaseName
@EXAMPLE_ARANGOSH_OUTPUT{collectionsDatabaseName}
~ db._create("example");
db._collections();
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionsDatabaseName
!SUBSECTION Collection Name

<!-- arangod/V8Server/v8-vocbase.cpp -->
@startDocuBlock collectionDatabaseCollectionName

@brief selects a collection from the vocbase

`db.collection-name`

Returns the collection with the given *collection-name*. If no such
collection exists, a collection named *collection-name* is created with
the default properties.

@EXAMPLES

@startDocuBlockInline collectionDatabaseCollectionName
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCollectionName}
~ db._create("example");
db.example;
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseCollectionName
!SUBSECTION Drop

<!-- js/server/modules/@arangodb/arango-database.js -->
@startDocuBlock collectionDatabaseDrop

@brief drops a collection

`db._drop(collection)`

Drops a *collection* and all its indexes.

`db._drop(collection-identifier)`

Drops a collection identified by *collection-identifier* and all its
indexes. No error is thrown if there is no such collection.

`db._drop(collection-name)`

Drops a collection named *collection-name* and all its indexes. No error
is thrown if there is no such collection.

@EXAMPLES

Drops a collection:

@startDocuBlockInline collectionDatabaseDrop
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseDrop}
~ db._create("example");
col = db.example;
db._drop(col);
col;
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseDrop

Drops a collection identified by name:

@startDocuBlockInline collectionDatabaseDropName
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseDropName}
~ db._create("example");
col = db.example;
db._drop("example");
col;
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseDropName
!SUBSECTION Truncate
<!-- js/server/modules/@arangodb/arango-database.js -->

@brief truncates a collection
`db._truncate(collection)`

Truncates a *collection*, removing all documents but keeping all its
indexes.

`db._truncate(collection-identifier)`

Truncates a collection identified by *collection-identifier*. No error is
thrown if there is no such collection.

`db._truncate(collection-name)`

Truncates a collection named *collection-name*. No error is thrown if
there is no such collection.

@EXAMPLES

Truncates a collection:

@startDocuBlockInline collectionDatabaseTruncate
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseTruncate}
~ db._create("example");
col = db.example;
col.save({ "Hello" : "World" });
col.count();
db._truncate(col);
col.count();
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseTruncate

Truncates a collection identified by name:

@startDocuBlockInline collectionDatabaseTruncateName
@EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseTruncateName}
~ db._create("example");
col = db.example;
col.save({ "Hello" : "World" });
col.count();
db._truncate("example");
col.count();
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionDatabaseTruncateName

!SUBSECTION Reuse address

@brief try to reuse address
`--server.reuse-address`

If this boolean option is set to *true* then the socket option
SO_REUSEADDR is set on all server endpoints, which is the default.
If this option is set to *false* it is possible that it takes up
to a minute after a server has terminated until it is possible for
a new server to use the same endpoint again. This is why this is
activated by default.

Please note however that under some operating systems this can be
a security risk because it might be possible for another process
to bind to the same address and port, possibly hijacking network
traffic. Under Windows, ArangoDB additionally sets the flag
SO_EXCLUSIVEADDRUSE as a measure to alleviate this problem.

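As a sketch (the endpoint address is a made-up example; the flag name is the option documented above), the behavior could be disabled explicitly at startup:

```
> arangod --server.endpoint tcp://127.0.0.1:8529 --server.reuse-address false
```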
!SUBSECTION Disable authentication

!SUBSECTION Disable authentication-unix-sockets

@brief disable authentication for requests via UNIX domain sockets
`--server.disable-authentication-unix-sockets value`

Setting *value* to *true* will turn off authentication on the server side
for requests coming in via UNIX domain sockets. With this flag enabled,
clients located on the same host as the ArangoDB server can use UNIX domain
sockets to connect to the server without authentication.
Requests coming in by other means (e.g. TCP/IP) are not affected by this
option.

The default value is *false*.

**Note**: this option is only available on platforms that support UNIX
domain sockets.

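A minimal sketch of combining this flag with a UNIX-socket endpoint (the socket path is a made-up example; `unix://` is ArangoDB's endpoint scheme for domain sockets):

```
> arangod --server.endpoint unix:///tmp/arangod.sock --server.disable-authentication-unix-sockets true
```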
!SUBSECTION Authenticate system only

!SUBSECTION Disable replication-applier

@brief disable the replication applier on server startup
`--server.disable-replication-applier flag`

If *true*, the server will start with the replication applier turned off,
even if the replication applier is configured with the *autoStart* option.
Using the command-line option will not change the value of the *autoStart*
option in the applier configuration, but will suppress auto-starting the
replication applier just once.

If the option is not used, ArangoDB will read the applier configuration from
the file *REPLICATION-APPLIER-CONFIG* on startup, and use the value of the
*autoStart* attribute from this file.

The default is *false*.

!SUBSECTION Keep-alive timeout

!SUBSECTION Default API compatibility

@brief default API compatibility
`--server.default-api-compatibility`

This option can be used to determine the API compatibility of the ArangoDB
server. It expects an ArangoDB version number as an integer, calculated as
follows:

*10000 \* major + 100 \* minor* (example: *10400* for ArangoDB 1.4)

The value of this option will have an influence on some API return values
when the HTTP client used does not send any compatibility information.

In most cases it will be sufficient to not set this option explicitly but to
keep the default value. However, in case an "old" ArangoDB client is used
that does not send any compatibility information and that cannot handle the
responses of the current version of ArangoDB, it might be reasonable to set
the option to an old version number to improve compatibility with older
clients.

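The formula can be checked with a quick shell computation; for a hypothetical ArangoDB 2.6 the compatibility value would be:

```
# 10000 * major + 100 * minor, for a hypothetical version 2.6
major=2
minor=6
echo $((10000 * major + 100 * minor))   # prints 20600
```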
!SUBSECTION Hide Product header

@brief hide the "Server: ArangoDB" header in HTTP responses
`--server.hide-product-header`

If *true*, the server will exclude the HTTP header "Server: ArangoDB" in
HTTP responses. If set to *false*, the server will send the header in
responses.

The default is *false*.

!SUBSECTION Allow method override

@brief allow HTTP method override via custom headers?
`--server.allow-method-override`

When this option is set to *true*, the HTTP request method will optionally
be fetched from one of the following HTTP request headers if present in
the request:

- *x-http-method*
- *x-http-method-override*
- *x-method-override*

If the option is set to *true* and any of these headers is set, the
request method will be overridden by the value of the header. For example,
this allows issuing an HTTP DELETE request which to the outside world will
look like an HTTP GET request. This allows bypassing proxies and tools that
will only let certain request types pass.

Setting this option to *true* may impose a security risk, so it should only
be used in controlled environments.

The default value for this option is *false*.

!SUBSECTION Server threads

@brief number of dispatcher threads
`--server.threads number`

Specifies the *number* of threads that are spawned to handle HTTP REST
requests.

!SUBSECTION Keyfile

@brief keyfile containing server certificate
`--server.keyfile filename`

If SSL encryption is used, this option must be used to specify the filename
of the server private key. The file must be PEM formatted and contain both
the certificate and the server's private key.

The file specified by *filename* should have the following structure:

```
-----BEGIN CERTIFICATE-----

(base64 encoded certificate)

-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----

(base64 encoded private key)

-----END RSA PRIVATE KEY-----
```

You may use certificates issued by a Certificate Authority or self-signed
certificates. Self-signed certificates can be created by a tool of your
choice. When using OpenSSL for creating the self-signed certificate, the
following commands should create a valid keyfile:

```
# create private key in file "server.key"
openssl genrsa -des3 -out server.key 1024

# create certificate signing request (csr) in file "server.csr"
openssl req -new -key server.key -out server.csr

# copy away original private key to "server.key.org"
cp server.key server.key.org

# remove passphrase from the private key
openssl rsa -in server.key.org -out server.key

# sign the csr with the key, creates certificate PEM file "server.crt"
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

# combine certificate and key into single PEM file "server.pem"
cat server.crt server.key > server.pem
```

For further information please check the manuals of the tools you use to
create the certificate.

**Note**: the \-\-server.keyfile option must be set if the server is
started with at least one SSL endpoint.

!SUBSECTION Cafile

@brief CA file
`--server.cafile filename`

This option can be used to specify a file with CA certificates that are
sent to the client whenever the server requests a client certificate. If
the file is specified, the server will only accept client requests with
certificates issued by these CAs. Do not specify this option if you want
clients to be able to connect without specific certificates.

The certificates in *filename* must be PEM formatted.

**Note**: this option is only relevant if at least one SSL endpoint is
used.

!SUBSECTION SSL protocol

@brief SSL protocol type to use
`--server.ssl-protocol value`

Use this option to specify the default encryption protocol to be used.
The following variants are available:

- 1: SSLv2
- 2: SSLv23
- 3: SSLv3
- 4: TLSv1

The default *value* is 4 (i.e. TLSv1).

**Note**: this option is only relevant if at least one SSL endpoint is used.

!SUBSECTION SSL cache

@brief whether or not to use SSL session caching
`--server.ssl-cache value`

Set *value* to *true* if SSL session caching should be used.

*value* has a default value of *false* (i.e. no caching).

**Note**: this option is only relevant if at least one SSL endpoint is used,
and only if the client supports sending the session id.

!SUBSECTION SSL options

@brief ssl options to use
`--server.ssl-options value`

This option can be used to set various SSL-related options. Individual
option values must be combined using bitwise OR.

Which options are available on your platform is determined by the OpenSSL
version you use. The list of options available on your platform might be
retrieved by the following shell command:

```
> grep "#define SSL_OP_.*" /usr/include/openssl/ssl.h

#define SSL_OP_MICROSOFT_SESS_ID_BUG 0x00000001L
#define SSL_OP_NETSCAPE_CHALLENGE_BUG 0x00000002L
#define SSL_OP_LEGACY_SERVER_CONNECT 0x00000004L
#define SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG 0x00000008L
#define SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG 0x00000010L
#define SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER 0x00000020L
...
```

A description of the options can be found online in the
[OpenSSL documentation](http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html)

**Note**: this option is only relevant if at least one SSL endpoint is
used.

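Because individual values are OR-ed together, the numeric value to pass can be computed in the shell. For example, combining SSL_OP_MICROSOFT_SESS_ID_BUG (0x00000001) and SSL_OP_LEGACY_SERVER_CONNECT (0x00000004) from the listing above:

```
# bitwise OR of two SSL_OP_* flag values from ssl.h
echo $(( 0x00000001 | 0x00000004 ))   # prints 5
```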
!SUBSECTION SSL cipher

@brief ssl cipher list to use
`--server.ssl-cipher-list cipher-list`

This option can be used to restrict the server to certain SSL ciphers only,
and to define the relative usage preference of SSL ciphers.

The format of *cipher-list* is documented in the OpenSSL documentation.

To check which ciphers are available on your platform, you may use the
following shell command:

```
> openssl ciphers -v

ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1
ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
DHE-RSA-CAMELLIA256-SHA SSLv3 Kx=DH Au=RSA Enc=Camellia(256) Mac=SHA1
...
```

The default value for *cipher-list* is "ALL".

**Note**: this option is only relevant if at least one SSL endpoint is used.

!SUBSECTION Backlog size

@brief listen backlog size
`--server.backlog-size`

Allows specifying the size of the backlog for the *listen* system call.
The default value is 10. The maximum value is platform-dependent. Specifying
a higher value than defined in the system header's SOMAXCONN may result in
a warning on server start. The actual value used by *listen* may also be
silently truncated on some platforms (this happens inside the *listen*
system call).

!SUBSECTION Disable server statistics

!SUBSECTION Session timeout

@brief time to live for server sessions
`--server.session-timeout value`

The timeout for web interface sessions, used for authenticating requests
to the web interface (/_admin/aardvark) and related areas.

Sessions are only used when authentication is turned on.

!SUBSECTION Foxx queues

!SUBSECTION Directory

@brief path to the database
`--database.directory directory`

The directory containing the collections and datafiles. Defaults
to */var/lib/arango*. When specifying the database directory, please
make sure the directory is actually writable by the arangod process.

You should further not use a database directory which is provided by a
network filesystem such as NFS. The reason is that networked filesystems
might cause inconsistencies when there are multiple parallel readers or
writers or they lack features required by arangod (e.g. flock()).

`directory`

When using the command line version, you can simply supply the database
directory as argument.

@EXAMPLES

```
> ./arangod --server.endpoint tcp://127.0.0.1:8529 --database.directory /tmp/vocbase
```

!SUBSECTION Journal size

!SUBSECTION Disable AQL query tracking

@brief disable the query tracking feature
`--database.disable-query-tracking flag`

If *true*, the server's query tracking feature will be disabled by
default.

The default is *false*.

!SUBSECTION Throw collection not loaded error

@brief throw collection not loaded error
`--database.throw-collection-not-loaded-error flag`

Accessing a not-yet loaded collection will automatically load a collection
on first access. This flag controls what happens in case an operation
would need to wait for another thread to finalize loading a collection. If
set to *true*, then the first operation that accesses an unloaded collection
will load it. Further threads that try to access the same collection while
it is still loading will get an error (1238, *collection not loaded*). When
the initial operation has completed loading the collection, all operations
on the collection can be carried out normally, and error 1238 will not be
thrown.

If set to *false*, the first thread that accesses a not-yet loaded collection
will still load it. Other threads that try to access the collection while
loading will not fail with error 1238 but instead block until the collection
is fully loaded. This configuration might lead to all server threads being
blocked because they are all waiting for the same collection to complete
loading. Setting the option to *true* will prevent this from happening, but
requires clients to catch error 1238 and react on it (maybe by scheduling
a retry for later).

The default value is *false*.

!SUBSECTION AQL Query caching mode

@brief whether or not to enable the AQL query cache
`--database.query-cache-mode`

Toggles the AQL query cache behavior. Possible values are:

* *off*: do not use query cache
* *on*: always use query cache, except for queries that have their *cache*
  attribute set to *false*
* *demand*: use query cache only for queries that have their *cache*
  attribute set to *true*

!SUBSECTION AQL Query cache size

@brief maximum number of elements in the query cache per database
`--database.query-cache-max-results`

Maximum number of query results that can be stored per database-specific
query cache. If a query is eligible for caching and the number of items in
the database's query cache is equal to this threshold value, another cached
query result will be removed from the cache.

This option only has an effect if the query cache mode is set to either
*on* or *demand*.

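As a sketch combining the two cache options above (the mode and the cap value are made-up example choices):

```
> arangod --database.query-cache-mode demand --database.query-cache-max-results 256
```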
!SUBSECTION Index threads

@brief number of background threads for parallel index creation
`--database.index-threads`

Specifies the *number* of background threads for index creation. When a
collection contains extra indexes other than the primary index, these other
indexes can be built by multiple threads in parallel. The index threads
are shared among multiple collections and databases. Specifying a value of
*0* will turn off parallel building, meaning that indexes for each collection
are built sequentially by the thread that opened the collection.
If the number of index threads is greater than 1, it will also be used to
build the edge index of a collection in parallel (this also requires the
edge index in the collection to be split into multiple buckets).

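A minimal startup sketch enabling parallel index creation (the thread count of 4 is a made-up example value):

```
> arangod --database.index-threads 4
```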
!SUBSECTION V8 contexts

@brief number of V8 contexts for executing JavaScript actions
`--server.v8-contexts number`

Specifies the *number* of V8 contexts that are created for executing
JavaScript code. More contexts allow executing more JavaScript actions in
parallel, provided that there are also enough threads available. Please
note that each V8 context will use a substantial amount of memory and
requires periodic CPU processing time for garbage collection.

!SUBSECTION Garbage collection frequency (time-based)

@brief JavaScript garbage collection frequency (each x seconds)
`--javascript.gc-frequency frequency`

Specifies the frequency (in seconds) for the automatic garbage collection of
JavaScript objects. This setting is useful to have the garbage collection
still work in periods with few or no requests.

!SUBSECTION Garbage collection interval (request-based)

@brief JavaScript garbage collection interval (each x requests)
`--javascript.gc-interval interval`

Specifies the interval (approximately in number of requests) in which the
garbage collection for JavaScript objects will be run in each thread.

!SUBSECTION V8 options

@brief optional arguments to pass to v8
`--javascript.v8-options options`

Optional arguments to pass to the V8 JavaScript engine. The V8 engine will
run with default settings unless explicit options are specified using this
option. The options passed will be forwarded to the V8 engine which will
parse them on its own. Passing invalid options may result in an error being
printed on stderr and the option being ignored.

Options need to be passed in one string, with V8 option names being prefixed
with double dashes. Multiple options need to be separated by whitespace.
To get a list of all available V8 options, you can use
the value *"--help"* as follows:

```
--javascript.v8-options "--help"
```

Another example of specific V8 options being set at startup:

```
--javascript.v8-options "--harmony --log"
```

Names and features of usable options depend on the version of V8 being used,
and might change in the future if a different version of V8 is being used
in ArangoDB. Not all options offered by V8 might be sensible to use in the
context of ArangoDB. Use the specific options only if you are sure that
they are not harmful for the regular database operation.

!SUBSECTION Node ID
<!-- arangod/Cluster/ApplicationCluster.h -->

@brief this server's id
`--cluster.my-local-info info`

Some local information about the server in the cluster; this can for
example be an IP address with a process ID or any string unique to
the server. Specifying *info* is mandatory on startup if the server
id (see below) is not specified. Each server of the cluster must
have a unique local info. This is ignored if my-id below is specified.

!SUBSECTION Agency endpoint
<!-- arangod/Cluster/ApplicationCluster.h -->

@brief list of agency endpoints
`--cluster.agency-endpoint endpoint`

An agency endpoint the server can connect to. The option can be specified
multiple times so the server can use a cluster of agency servers. Endpoints
have the following pattern:

- tcp://ipv4-address:port - TCP/IP endpoint, using IPv4
- tcp://[ipv6-address]:port - TCP/IP endpoint, using IPv6
- ssl://ipv4-address:port - TCP/IP endpoint, using IPv4, SSL encryption
- ssl://[ipv6-address]:port - TCP/IP endpoint, using IPv6, SSL encryption

At least one endpoint must be specified or ArangoDB will refuse to start.
It is recommended to specify at least two endpoints so ArangoDB has an
alternative endpoint if one of them becomes unavailable.

@EXAMPLES

```
--cluster.agency-endpoint tcp://192.168.1.1:4001 --cluster.agency-endpoint tcp://192.168.1.2:4002
```

!SUBSECTION Agency prefix
<!-- arangod/Cluster/ApplicationCluster.h -->

@brief global agency prefix
`--cluster.agency-prefix prefix`

The global key prefix used in all requests to the agency. The specified
prefix will become part of each agency key. Specifying the key prefix
allows managing multiple ArangoDB clusters with the same agency
server(s).

*prefix* must consist of the letters *a-z*, *A-Z* and the digits *0-9*
only. Specifying a prefix is mandatory.

@EXAMPLES

```
--cluster.agency-prefix mycluster
```

!SUBSECTION MyId
<!-- arangod/Cluster/ApplicationCluster.h -->

@brief this server's id
`--cluster.my-id id`

The local server's id in the cluster. Specifying *id* is mandatory on
startup. Each server of the cluster must have a unique id.

Specifying the id is very important because the server id is used for
determining the server's role and tasks in the cluster.

*id* must be a string consisting of the letters *a-z*, *A-Z* or the
digits *0-9* only.

!SUBSECTION MyAddress
<!-- arangod/Cluster/ApplicationCluster.h -->

@brief this server's address / endpoint
`--cluster.my-address endpoint`

The server's endpoint for cluster-internal communication. If specified, it
must have the following pattern:

- tcp://ipv4-address:port - TCP/IP endpoint, using IPv4
- tcp://[ipv6-address]:port - TCP/IP endpoint, using IPv6
- ssl://ipv4-address:port - TCP/IP endpoint, using IPv4, SSL encryption
- ssl://[ipv6-address]:port - TCP/IP endpoint, using IPv6, SSL encryption

If no *endpoint* is specified, the server will look up its internal
endpoint address in the agency. If no endpoint can be found in the agency
for the server's id, ArangoDB will refuse to start.

@EXAMPLES

```
--cluster.my-address tcp://192.168.1.1:8530
```

!SUBSECTION Username
<!-- arangod/Cluster/ApplicationCluster.h -->

@brief username used for cluster-internal communication
`--cluster.username username`

The username used for authorization of cluster-internal requests.
This username will be used to authenticate all requests and responses in
cluster-internal communication, i.e. requests exchanged between
coordinators and individual database servers.

This option is used for cluster-internal requests only. Regular requests
to coordinators are authenticated normally using the data in the *_users*
collection.

If coordinators and database servers are run with authentication turned
off (e.g. by setting the *--server.disable-authentication* option to *true*),
the cluster-internal communication will also be unauthenticated.

!SUBSECTION Password
<!-- arangod/Cluster/ApplicationCluster.h -->

@brief password used for cluster-internal communication
`--cluster.password password`

The password used for authorization of cluster-internal requests.
This password will be used to authenticate all requests and responses in
cluster-internal communication, i.e. requests exchanged between
coordinators and individual database servers.

This option is used for cluster-internal requests only. Regular requests
to coordinators are authenticated normally using the data in the *_users*
collection.

If coordinators and database servers are run with authentication turned
off (e.g. by setting the *--server.disable-authentication* option to *true*),
the cluster-internal communication will also be unauthenticated.

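Taken together, the cluster options above are typically set in a `[cluster]`
section of the server's configuration file. A minimal sketch (all values are
placeholders for illustration, not defaults):

```
[cluster]
agency-prefix = mycluster
my-id = Coordinator001
my-address = tcp://192.168.1.1:8530
username = clusteruser
password = clusterpass
```
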
!CHAPTER Command-Line Options for Communication

!SUBSECTION Scheduler threads

@brief number of scheduler threads
`--scheduler.threads arg`

An integer argument which sets the number of threads to use in the IO
scheduler. The default is 1.

!SUBSECTION Scheduler maximal queue size

@brief maximum size of the dispatcher queue for asynchronous requests
`--scheduler.maximal-queue-size size`

Specifies the maximum *size* of the dispatcher queue for asynchronous
task execution. If the queue already contains *size* tasks, new tasks
will be rejected until other tasks are popped from the queue. Setting this
value may help prevent the server from running out of memory if the queue
fills up faster than the server can process requests.

!SUBSECTION Scheduler backend

@brief scheduler backend
`--scheduler.backend arg`

The I/O method used by the event handler. The default (if this option is
not specified) is to try all recommended backends. This is platform
specific. See libev for further details and the meaning of select, poll
and epoll.

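As a sketch, the scheduler options above could be combined in a
configuration file like this (the values shown are illustrative, not
recommendations):

```
[scheduler]
threads = 2
maximal-queue-size = 512
backend = epoll
```
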
!SUBSECTION Io backends

!SUBSECTION List
<!-- js/server/modules/@arangodb/arango-database.js -->

@brief returns a list of all endpoints
`db._listEndpoints()`

Returns a list of all endpoints and their mapped databases.

Please note that managing endpoints can only be performed from within the
*_system* database. When not in the default database, you must first switch
to it using the *db._useDatabase* method.

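A hypothetical arangosh session illustrating the call (the returned
endpoints and their mapped databases depend entirely on your server
configuration):

```
arangosh> db._useDatabase("_system");
arangosh> db._listEndpoints();
```
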
!SUBSECTION Request
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log file for requests
`--log.requests-file filename`

This option allows the user to specify the name of a file to which
requests are logged. By default, no log file is used and requests are
not logged. Note that if the file named by *filename* does not
exist, it will be created. If the file cannot be created (e.g. due to
missing file privileges), the server will refuse to start. If the specified
file already exists, output is appended to that file.

Use *+* to log to standard error. Use *-* to log to standard output.
Use *""* to disable request logging altogether.

The log format is:
- `"http-request"`: static string indicating that an HTTP request was logged
- client address: IP address of client
- HTTP method type, e.g. `GET`, `POST`
- HTTP version, e.g. `HTTP/1.1`
- HTTP response code, e.g. 200
- request body length in bytes
- response body length in bytes
- server request processing time, containing the time span between fetching
  the first byte of the HTTP request and the start of the HTTP response

!SECTION Human Readable Logging

!SUBSECTION Logfiles
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log file

!SUBSECTION Level
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log level
`--log.level level`

`--log level`

Allows the user to choose the level of information which is logged by the
server. The argument *level* is specified as a string and can be one of
the values listed below. Note that fatal errors, that is, errors which
cause the server to terminate, are always logged irrespective of the log
level assigned by the user. The variant *log.level* can be used in
configuration files, the variant *log* for command line options.

**fatal**:
Logs errors which cause the server to terminate.

Fatal errors generally indicate some inconsistency with the manner in which
the server has been coded. Fatal errors may also indicate a problem with the
platform on which the server is running. Fatal errors always cause the
server to terminate. For example,

```
2010-09-20T07:32:12Z [4742] FATAL a http server has already been created
```

**error**:
Logs errors which the server has encountered.

These errors may not necessarily result in the termination of the
server. For example,

```
2010-09-17T13:10:22Z [13967] ERROR strange log level 'errors', going to 'warning'
```

**warning**:
Provides information on errors encountered by the server,
which are not necessarily detrimental to its continued operation.

For example,

```
2010-09-20T08:15:26Z [5533] WARNING got corrupted HTTP request 'POS?'
```

**Note**: Setting the log level to warning will also result in all errors
being logged as well.

**info**:
Logs information about the status of the server.

For example,

```
2010-09-20T07:40:38Z [4998] INFO SimpleVOC ready for business
```

**Note**: Setting the log level to info will also result in all errors and
warnings being logged as well.

**debug**:
Logs all errors, all warnings and debug information.

Debug log information is generally useful to find out the state of the
server in the case of an error. For example,

```
2010-09-17T13:02:53Z [13783] DEBUG opened port 7000 for any
```

**Note**: Setting the log level to debug will also result in all errors,
warnings and server status information being logged as well.

**trace**:
As the name suggests, logs information which may be useful to trace
problems encountered with using the server.

For example,

```
2010-09-20T08:23:12Z [5687] TRACE trying to open port 8000
```

**Note**: Setting the log level to trace will also result in all errors,
warnings, status information, and debug information being logged as well.

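For example, to enable debug-level logging via a configuration file (a
sketch following the configuration-file syntax described elsewhere in this
manual):

```
[log]
level = debug
```
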
!SUBSECTION Local Time
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log dates and times in local time zone
`--log.use-local-time`

If specified, all dates and times in log messages will use the server's
local time-zone. If not specified, all dates and times in log messages
will be printed in UTC / Zulu time. The date and time format used in logs
is always `YYYY-MM-DD HH:MM:SS`, regardless of this setting. If UTC time
is used, a `Z` will be appended to indicate Zulu time.

!SUBSECTION Line number
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log line number
`--log.line-number`

Normally, if a human-readable fatal, error, warning or info message is
logged, no information about the file and line number is provided. The file
and line number is only logged for debug and trace messages. This option can
be used to always log these pieces of information.

!SUBSECTION Prefix
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log prefix
`--log.prefix prefix`

This option is used to specify a prefix for logged text.

!SUBSECTION Thread
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log thread identifier
`--log.thread`

Whenever log output is generated, the process ID is written as part of the
log information. Setting this option appends the thread id of the calling
thread to the process id. For example,

```
2010-09-20T13:04:01Z [19355] INFO ready for business
```

when no thread is logged and

```
2010-09-20T13:04:17Z [19371-18446744072487317056] ready for business
```

when this command line option is set.

!SUBSECTION Source Filter
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log source filter
`--log.source-filter arg`

For debug and trace messages, only log those messages originating from the
C source file *arg*. The argument can be used multiple times.

!SUBSECTION Content Filter
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log content filter
`--log.content-filter arg`

Only log messages containing the specified string *arg*.

!SUBSECTION Performance
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief performance logging
`--log.performance`

If this option is set, performance-related info messages will be logged via
the regular logging mechanisms. These will consist of mostly timing and
debugging information for performance-critical operations.

Currently performance-related operations are logged as INFO messages.
Messages starting with prefix `[action]` indicate that an instrumented
operation was started (note that its end won't be logged). Messages with
prefix `[timer]` will contain timing information for operations. Note that
no timing information will be logged for operations taking less time than
1 second. This is to ensure that sub-second operations do not pollute logs.

The contents of performance-related log messages enabled by this option
are subject to change in future versions of ArangoDB.

!SECTION Machine Readable Logging

!SUBSECTION Application
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log application name
`--log.application name`

Specifies the *name* of the application which should be logged if this
item of information is to be logged.

!SUBSECTION Facility
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief log facility
`--log.facility name`

If this option is set, then in addition to output being directed to the
standard output (or to a specified file, in the case that the command line
log.file option was set), log output is also sent to the system logging
facility. The *name* is the system log facility to use. See syslog for
further details.

The value of *name* depends on your syslog configuration. In general it
will be *user*. Fatal messages are mapped to *crit*, so if *name*
is *user*, these messages will be logged as *user.crit*. Error
messages are mapped to *err*. Warnings are mapped to *warn*. Info
messages are mapped to *notice*. Debug messages are mapped to
*info*. Trace messages are mapped to *debug*.

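As an illustrative sketch, enabling syslog output with the *user* facility
together with an application name might look like this in a configuration
file (both option names are documented above; the values are examples
only):

```
[log]
facility = user
application = arangod
```
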
|
|
||||||
!SUBSECTION General help
|
!SUBSECTION General help
|
||||||
<!-- lib/ApplicationServer/ApplicationServer.h -->
|
<!-- lib/ApplicationServer/ApplicationServer.h -->
|
||||||
@startDocuBlock generalHelp
|
|
||||||
|
|
||||||
|
@brief program options
|
||||||
|
`--help`
|
||||||
|
|
||||||
|
`-h`
|
||||||
|
|
||||||
|
Prints a list of the most common options available and then
|
||||||
|
exits. In order to see all options use *--help-all*.
|
||||||
|
|
||||||
|
|
||||||
!SUBSECTION Version
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief version of the application
`--version`

`-v`

Prints the version of the server and exits.

!SUBSECTION Upgrade
`--upgrade`

!SUBSECTION Configuration
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief config file
`--configuration filename`

`-c filename`

Specifies the name of the configuration file to use.

If this command is not passed to the server, then by default, the server
will attempt to first locate a file named *~/.arango/arangod.conf* in the
user's home directory.

If no such file is found, the server will proceed to look for a file
*arangod.conf* in the system configuration directory. The system
configuration directory is platform-specific, and may be changed when
compiling ArangoDB yourself. It may default to */etc/arangodb* or
*/usr/local/etc/arangodb*. This file is installed when using a package
manager like rpm or dpkg. If you modify this file and later upgrade to a new
version of ArangoDB, then the package manager normally warns you about the
conflict. In order to avoid these warnings for small adjustments, you can
put local overrides into a file *arangod.conf.local*.

Only command line options with a value should be set within the
configuration file. Command line options which act as flags should be
entered on the command line when starting the server.

Whitespace in the configuration file is ignored. Each option is specified on
a separate line in the form

```js
key = value
```

Alternatively, a header section can be specified and options pertaining to
that section can be specified in a shorter form

```js
[log]
level = trace
```

rather than specifying

```js
log.level = trace
```

Comments can be placed in the configuration file only if the line begins
with one or more hash symbols (#).

There may be occasions where a configuration file exists and the user wishes
to override configuration settings stored in a configuration file. Any
settings specified on the command line will overwrite the same setting when
it appears in a configuration file. If the user wishes to completely ignore
configuration files without necessarily deleting the file (or files), then
add the command line option

```js
-c none
```

or

```js
--configuration none
```

when starting the server. Note that the word *none* is case-insensitive.

!SUBSECTION Daemon
`--daemon`

!SUBSECTION Default Language
<!-- arangod/RestServer/ArangoServer.h -->

@brief server default language for sorting strings
`--default-language default-language`

The default language is used for sorting and comparing strings.
The language value is a two-letter language code (ISO-639) or it is
composed of a two-letter language code and a two-letter country code
(ISO-3166). Valid languages are "de", "en", "en_US" or "en_UK".

The default value is the system locale of the platform.

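For example, to sort and compare strings using US English rules (one of the
valid values listed above):

```
--default-language en_US
```
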
!SUBSECTION Supervisor
`--supervisor`

<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief the user id to use for the process
`--uid uid`

The name (identity) of the user the server will run as. If this parameter is
not specified, the server will not attempt to change its UID, so that the
UID used by the server will be the same as the UID of the user who started
the server. If this parameter is specified, then the server will change its
UID after opening ports and reading configuration files, but before
accepting connections or opening other files (such as recovery files). This
is useful when the server must be started with raised privileges (in certain
environments) but security considerations require that these privileges be
dropped once the server has started work.

Observe that this parameter cannot be used to bypass operating system
security. In general, this parameter (and its corresponding relative gid)
can lower privileges but not raise them.

!SUBSECTION Group identity
<!-- lib/ApplicationServer/ApplicationServer.h -->

@brief the group id to use for the process
`--gid gid`

The name (identity) of the group the server will run as. If this parameter
is not specified, then the server will not attempt to change its GID, so
that the GID the server runs as will be the primary group of the user who
started the server. If this parameter is specified, then the server will
change its GID after opening ports and reading configuration files, but
before accepting connections or opening other files (such as recovery
files).

This parameter is related to the parameter uid.

!SUBSECTION Process identity
<!-- lib/Rest/AnyServer.h -->

@brief pid file
`--pid-file filename`

The name of the process ID file to use when running the server as a
daemon. This parameter must be specified if either the flag *daemon* or
*supervisor* is set.

!SUBSECTION Console
`--console`

!SUBSECTION Random Generator

@brief random number generator to use
`--random.generator arg`

The argument is an integer (1, 2, 3 or 4) which sets the manner in which
random numbers are generated. The default method (3) is to use a
non-blocking random (or pseudorandom) number generator supplied by the
operating system.

Specifying an argument of 2 uses a blocking random (or
pseudorandom) number generator. Specifying an argument of 1 sets a
pseudorandom number generator using an implementation of the Mersenne
Twister MT19937 algorithm. Algorithm 4 is a combination of the blocking
random number generator and the Mersenne Twister.

!SUBSECTION Directory
<!-- arangod/Wal/LogfileManager.h -->

@brief the WAL logfiles directory
`--wal.directory`

Specifies the directory in which the write-ahead logfiles should be
stored. If this option is not specified, it defaults to the subdirectory
*journals* in the server's global database directory. If the directory is
not present, it will be created.

!SUBSECTION Logfile size
<!-- arangod/Wal/LogfileManager.h -->

!SUBSECTION Throttling
<!-- arangod/Wal/LogfileManager.h -->

@brief throttle writes to the WAL when at least this many operations are
waiting for garbage collection
`--wal.throttle-when-pending`

The maximum value for the number of write-ahead log garbage-collection queue
elements. If set to *0*, the queue size is unbounded, and no
write-throttling will occur. If set to a non-zero value, write-throttling
will automatically kick in when the garbage-collection queue contains at
least as many elements as specified by this option.
While write-throttling is active, data-modification operations will
intentionally be delayed by a configurable amount of time. This is to
ensure the write-ahead log garbage collector can catch up with the
operations executed.
Write-throttling will stay active until the garbage-collection queue size
goes down below the specified value.
Write-throttling is turned off by default.

`--wal.throttle-wait`

This option determines the maximum wait time (in milliseconds) for
operations that are write-throttled. If write-throttling is active and a
new write operation is to be executed, it will wait for at most the
specified amount of time for the write-ahead log garbage-collection queue
size to fall below the throttling threshold. If the queue size decreases
before the maximum wait time is over, the operation will be executed
normally. If the queue size does not decrease before the wait time is over,
the operation will be aborted with an error.
This option only has an effect if `--wal.throttle-when-pending` has a
non-zero value, which is not the default.

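A sketch of a `[wal]` configuration section that enables write-throttling
using the two options above (the numbers are illustrative only, not
recommendations):

```
[wal]
throttle-when-pending = 20000
throttle-wait = 15000
```
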
!SUBSECTION Number of slots
<!-- arangod/Wal/LogfileManager.h -->

@brief maximum number of slots to be used in parallel
`--wal.slots`

Configures the number of write slots the write-ahead log can give to write
operations in parallel. Any write operation will lease a slot and return it
to the write-ahead log when it is finished writing the data. A slot will
remain blocked until the data in it was synchronized to disk. After that,
a slot becomes reusable by following operations. The required number of
slots is thus determined by the parallelism of write operations and the
disk synchronization speed. Slow disks probably need higher values, and fast
disks may only require a value lower than the default.

!SUBSECTION Ignore logfile errors

<!-- arangod/Wal/LogfileManager.h -->

@brief ignore logfile errors when opening logfiles
`--wal.ignore-logfile-errors`

Ignores any recovery errors caused by corrupted logfiles on startup. When
set to *false*, the recovery procedure on startup will fail with an error
whenever it encounters a corrupted logfile (this includes logfiles that were
only half-written). This is a safety precaution to prevent data loss in case
of disk errors etc. When the recovery procedure aborts because of corruption,
any corrupted files can be inspected and fixed (or removed) manually and the
server can be restarted afterwards.

Setting the option to *true* will make the server continue with the recovery
procedure even if it detects corrupt logfile entries. In this case it will
stop at the first corrupted logfile entry and ignore all others, which might
cause data loss.
!SUBSECTION Ignore recovery errors

<!-- arangod/Wal/LogfileManager.h -->

@brief ignore recovery errors
`--wal.ignore-recovery-errors`

Ignores any recovery errors not caused by corrupted logfiles but by logical
errors. Logical errors can occur if logfiles or any other server datafiles
have been manually edited or the server is somehow misconfigured.
!SUBSECTION Ignore (non-WAL) datafile errors

<!-- arangod/RestServer/ArangoServer.h -->

@brief ignore datafile errors when loading collections
`--database.ignore-datafile-errors boolean`

If set to `false`, CRC mismatches and other errors in collection datafiles
will lead to a collection not being loaded at all. The collection in this
case becomes unavailable. If such a collection needs to be loaded during WAL
recovery, the WAL recovery will also abort (unless forced with the option
`--wal.ignore-recovery-errors true`).

Setting this flag to `false` protects users from unintentionally using a
collection with corrupted datafiles, from which only a subset of the
original data can be recovered. Working with such a collection could lead
to data loss and follow-up errors.
In order to access such a collection, it is required to inspect and repair
the collection datafile with the datafile debugger (arango-dfdb).

If set to `true`, CRC mismatches and other errors during the loading of a
collection will lead to the datafile being partially loaded, up to the
position of the first error. All data up to the invalid position will be
loaded. This enables users to continue working with collection datafiles
even if they are corrupted, but will result in only a partial load of the
original data and potential follow-up errors. The WAL recovery will still
abort when encountering a collection with a corrupted datafile, at least if
`--wal.ignore-recovery-errors` is not set to `true`.

The default value is *false*, so collections with corrupted datafiles will
not be loaded at all, preventing partial loads and follow-up errors. However,
if such a collection is required at server startup, during WAL recovery, the
server will abort the recovery and refuse to start.
!SUBSECTION Name

<!-- arangod/V8Server/v8-vocbase.cpp -->

return the database name
`db._name()`

Returns the name of the current database as a string.

@EXAMPLES

@startDocuBlockInline dbName
@EXAMPLE_ARANGOSH_OUTPUT{dbName}
require("internal").db._name();
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock dbName
!SUBSECTION ID

<!-- arangod/V8Server/v8-vocbase.cpp -->

return the database id
`db._id()`

Returns the id of the current database as a string.

@EXAMPLES

@startDocuBlockInline dbId
@EXAMPLE_ARANGOSH_OUTPUT{dbId}
require("internal").db._id();
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock dbId
!SUBSECTION Path

<!-- arangod/V8Server/v8-vocbase.cpp -->

return the path to database files
`db._path()`

Returns the filesystem path of the current database as a string.

@EXAMPLES

@startDocuBlockInline dbPath
@EXAMPLE_ARANGOSH_OUTPUT{dbPath}
require("internal").db._path();
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock dbPath
!SUBSECTION isSystem

<!-- arangod/V8Server/v8-vocbase.cpp -->

return the database type
`db._isSystem()`

Returns whether the currently used database is the *_system* database.
The system database has some special privileges and properties, for example,
database management operations such as create or drop can only be executed
from within this database. Additionally, the *_system* database itself
cannot be dropped.
!SUBSECTION Use Database

<!-- arangod/V8Server/v8-vocbase.cpp -->

change the current database
`db._useDatabase(name)`

Changes the current database to the database specified by *name*. Note
that the database specified by *name* must already exist.

Changing the database might be disallowed in some contexts, for example
in server-side actions (including Foxx).

When performing this command from arangosh, the current credentials (username
and password) will be re-used. These credentials might not be valid to
connect to the database specified by *name*. Additionally, the database
may only be accessible via certain endpoints. In this case, switching the
database might not work, and the connection / session should be closed and
restarted with different username and password credentials and/or
endpoint data.
!SUBSECTION List Databases

<!-- arangod/V8Server/v8-vocbase.cpp -->

return the list of all existing databases
`db._listDatabases()`

Returns the list of all databases. This method can only be used from within
the *_system* database.
!SUBSECTION Create Database

<!-- arangod/V8Server/v8-vocbase.cpp -->

create a new database
`db._createDatabase(name, options, users)`

Creates a new database with the name specified by *name*.
There are restrictions for database names
(see [DatabaseNames](../NamingConventions/DatabaseNames.md)).

Note that even if the database is created successfully, the current database
will not be changed to the new database. Changing the current database must
explicitly be requested by using the *db._useDatabase* method.

The *options* attribute currently has no meaning and is reserved for
future use.

The optional *users* attribute can be used to create initial users for
the new database. If specified, it must be a list of user objects. Each user
object can contain the following attributes:

* *username*: the user name as a string. This attribute is mandatory.
* *passwd*: the user password as a string. If not specified, it defaults
  to the empty string.
* *active*: a boolean flag indicating whether the user account should be
  active or not. The default value is *true*.
* *extra*: an optional JSON object with extra user information. The data
  contained in *extra* will be stored for the user but not be interpreted
  further by ArangoDB.

If no initial users are specified, a default user *root* will be created
with an empty string password. This ensures that the new database will be
accessible via HTTP after it is created.

You can also create users later if no initial users are specified. Switch
into the new database (username and password must be identical to the current
session) and add or modify users with the following commands:

```js
require("@arangodb/users").save(username, password, true);
require("@arangodb/users").update(username, password, true);
require("@arangodb/users").remove(username);
```

Alternatively, you can specify user data directly. For example:

```js
db._createDatabase("newDB", [], [{ username: "newUser", passwd: "123456", active: true }]);
```

This method can only be used from within the *_system* database.
!SUBSECTION Drop Database

<!-- arangod/V8Server/v8-vocbase.cpp -->

drop an existing database
`db._dropDatabase(name)`

Drops the database specified by *name*. The database specified by
*name* must exist.

**Note**: Dropping databases is only possible from within the *_system*
database. The *_system* database itself cannot be dropped.

Databases are dropped asynchronously, and will only be physically removed
once all clients have disconnected and all references have been
garbage-collected.
!SUBSECTION Document

<!-- arangod/V8Server/v8-vocbase.cpp -->

@brief looks up a document and returns it
`db._document(document)`

This method finds a document given its identifier. It returns the document
if the document exists. An error is thrown if no document with the given
identifier exists, or if the specified *_rev* value does not match the
current revision of the document.

**Note**: If the method is executed on the arangod server (e.g. from
inside a Foxx application), an immutable document object will be returned
for performance reasons. It is not possible to change attributes of this
immutable object. To update or patch the returned document, it needs to be
cloned/copied into a regular JavaScript object first. This is not necessary
if the *_document* method is called from arangosh or from any other client.

`db._document(document-handle)`

As before, but instead of a document a *document-handle* can be passed as
the first argument.

@EXAMPLES

Returns the document:

@startDocuBlockInline documentsDocumentName
@EXAMPLE_ARANGOSH_OUTPUT{documentsDocumentName}
~ db._create("example");
~ var myid = db.example.insert({_key: "12345"});
db._document("example/12345");
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock documentsDocumentName
!SUBSECTION Exists

<!-- arangod/V8Server/v8-vocbase.cpp -->

@brief checks whether a document exists
`db._exists(document)`

This method determines whether a document exists given its identifier.
Instead of returning the found document or an error, this method will
return either *true* or *false*. It can thus be used for easy existence
checks.

No error will be thrown if the sought document or collection does not
exist. However, this method will still throw an error if used improperly,
e.g. when called with a non-document handle.

`db._exists(document-handle)`

As before, but instead of a document a *document-handle* can be passed.
!SUBSECTION Replace

<!-- arangod/V8Server/v8-vocbase.cpp -->

@brief replaces a document
`db._replace(document, data)`

The method returns a document with the attributes *_id*, *_rev* and
*_oldRev*. The attribute *_id* contains the document handle of the
updated document, the attribute *_rev* contains the document revision of
the updated document, and the attribute *_oldRev* contains the revision of
the old (now replaced) document.

If there is a conflict, i.e. if the revision of the *document* does not
match the revision in the collection, then an error is thrown.

`db._replace(document, data, true)`

As before, but in case of a conflict, the conflict is ignored and the old
document is overwritten.

`db._replace(document, data, true, waitForSync)`

The optional *waitForSync* parameter can be used to force
synchronization of the document replacement operation to disk even if
the *waitForSync* flag has been disabled for the entire collection.
Thus, the *waitForSync* parameter can be used to force synchronization
of just specific operations. To use this, set the *waitForSync* parameter
to *true*. If the *waitForSync* parameter is not specified or set to
*false*, then the collection's default *waitForSync* behavior is
applied. The *waitForSync* parameter cannot be used to disable
synchronization for collections that have a default *waitForSync* value
of *true*.

`db._replace(document-handle, data)`

As before, but instead of a document a *document-handle* can be passed as
the first argument.

@EXAMPLES

Create and replace a document:

@startDocuBlockInline documentsDocumentReplace
@EXAMPLE_ARANGOSH_OUTPUT{documentsDocumentReplace}
~ db._create("example");
a1 = db.example.insert({ a : 1 });
a2 = db._replace(a1, { a : 2 });
a3 = db._replace(a1, { a : 3 }); // xpError(ERROR_ARANGO_CONFLICT);
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock documentsDocumentReplace
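The revision check behind `_replace` is a form of optimistic concurrency control. The following standalone sketch models the documented rules (the `store`, `insert` and `replace` names here are invented for illustration; this is not ArangoDB's implementation):

```js
// Illustrative model of _replace's revision check: every stored document
// carries a _rev; a replace only succeeds when the caller's _rev matches
// the current one, unless overwrite is requested.
const store = new Map();
let revCounter = 0;

function insert(key, data) {
  const doc = Object.assign({}, data, { _key: key, _rev: String(++revCounter) });
  store.set(key, doc);
  return { _key: key, _rev: doc._rev };
}

function replace(handle, data, overwrite = false) {
  const current = store.get(handle._key);
  if (!current) { throw new Error("document not found"); }
  if (!overwrite && handle._rev !== undefined && handle._rev !== current._rev) {
    throw new Error("conflict: revision mismatch"); // like ERROR_ARANGO_CONFLICT
  }
  const doc = Object.assign({}, data, { _key: handle._key, _rev: String(++revCounter) });
  store.set(handle._key, doc);
  // mirror the documented return value: _id/_key, new _rev, and _oldRev
  return { _key: handle._key, _rev: doc._rev, _oldRev: current._rev };
}

const a1 = insert("doc1", { a: 1 });
const a2 = replace(a1, { a: 2 });        // succeeds, bumps _rev
// replace(a1, { a: 3 });                // would throw: a1._rev is now stale
const a3 = replace(a1, { a: 3 }, true);  // overwrite === true ignores the conflict
```

The commented-out call corresponds to the `ERROR_ARANGO_CONFLICT` case in the example above it: the handle still carries the old revision, so a plain replace is rejected.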
!SUBSECTION Update

<!-- arangod/V8Server/v8-vocbase.cpp -->

@brief update a document
`db._update(document, data, overwrite, keepNull, waitForSync)`

Updates an existing *document*. The *document* must be a document in
the current collection. This document is then patched with the
*data* given as second argument. The optional *overwrite* parameter can
be used to control the behavior in case of version conflicts (see below).
The optional *keepNull* parameter can be used to modify the behavior when
handling *null* values. Normally, *null* values are stored in the
database. By setting the *keepNull* parameter to *false*, this behavior
can be changed so that all attributes in *data* with *null* values will
be removed from the target document.

The optional *waitForSync* parameter can be used to force
synchronization of the document update operation to disk even if
the *waitForSync* flag has been disabled for the entire collection.
Thus, the *waitForSync* parameter can be used to force synchronization
of just specific operations. To use this, set the *waitForSync* parameter
to *true*. If the *waitForSync* parameter is not specified or set to
*false*, then the collection's default *waitForSync* behavior is
applied. The *waitForSync* parameter cannot be used to disable
synchronization for collections that have a default *waitForSync* value
of *true*.

The method returns a document with the attributes *_id*, *_rev* and
*_oldRev*. The attribute *_id* contains the document handle of the
updated document, the attribute *_rev* contains the document revision of
the updated document, and the attribute *_oldRev* contains the revision of
the old (now replaced) document.

If there is a conflict, i.e. if the revision of the *document* does not
match the revision in the collection, then an error is thrown.

`db._update(document, data, true)`

As before, but in case of a conflict, the conflict is ignored and the old
document is overwritten.

`db._update(document-handle, data)`

As before, but instead of a document a *document-handle* can be passed as
the first argument.

@EXAMPLES

Create and update a document:

@startDocuBlockInline documentDocumentUpdate
@EXAMPLE_ARANGOSH_OUTPUT{documentDocumentUpdate}
~ db._create("example");
a1 = db.example.insert({ a : 1 });
a2 = db._update(a1, { b : 2 });
a3 = db._update(a1, { c : 3 }); // xpError(ERROR_ARANGO_CONFLICT);
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock documentDocumentUpdate
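The *keepNull* behavior described above can be illustrated with a small standalone merge function. This is a sketch of the documented patch semantics only (the `patchDocument` name is invented; it is not ArangoDB's implementation):

```js
// Sketch of _update's patch semantics: attributes from `data` are merged
// into `doc`; with keepNull === false, a null value removes the attribute
// from the target instead of being stored.
function patchDocument(doc, data, keepNull = true) {
  const result = Object.assign({}, doc);
  for (const key of Object.keys(data)) {
    if (data[key] === null && !keepNull) {
      delete result[key];       // keepNull === false: drop the attribute
    } else {
      result[key] = data[key];  // default: store the value, even null
    }
  }
  return result;
}

const withNull = patchDocument({ a: 1, b: 2 }, { b: null });        // keeps b: null
const without  = patchDocument({ a: 1, b: 2 }, { b: null }, false); // removes b
```

So the same patch either stores `b: null` or removes `b` entirely, depending on *keepNull*; attributes not mentioned in the patch (`a` here) are left untouched in both cases.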
!SUBSECTION Remove

<!-- arangod/V8Server/v8-vocbase.cpp -->

@brief removes a document
`db._remove(document)`

Removes a document. If there is a revision mismatch, an error is thrown.

`db._remove(document, true)`

Removes a document. If there is a revision mismatch, the mismatch is ignored
and the document is deleted. The function returns *true* if the document
existed and was deleted. It returns *false* if the document was already
deleted.

`db._remove(document, true, waitForSync)` or
`db._remove(document, {overwrite: true or false, waitForSync: true or false})`

The optional *waitForSync* parameter can be used to force synchronization
of the document deletion operation to disk even if the *waitForSync* flag
has been disabled for the entire collection. Thus, the *waitForSync*
parameter can be used to force synchronization of just specific operations.
To use this, set the *waitForSync* parameter to *true*. If the
*waitForSync* parameter is not specified or set to *false*, then the
collection's default *waitForSync* behavior is applied. The *waitForSync*
parameter cannot be used to disable synchronization for collections that
have a default *waitForSync* value of *true*.

`db._remove(document-handle, data)`

As before, but instead of a document a *document-handle* can be passed as
the first argument.

@EXAMPLES

Remove a document:

@startDocuBlockInline documentsCollectionRemove
@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionRemove}
~ db._create("example");
a1 = db.example.insert({ a : 1 });
db._remove(a1);
db._remove(a1); // xpError(ERROR_ARANGO_DOCUMENT_NOT_FOUND);
db._remove(a1, true);
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock documentsCollectionRemove

Remove a document with a conflict:

@startDocuBlockInline documentsCollectionRemoveConflict
@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionRemoveConflict}
~ db._create("example");
a1 = db.example.insert({ a : 1 });
a2 = db._replace(a1, { a : 2 });
db._remove(a1); // xpError(ERROR_ARANGO_CONFLICT)
db._remove(a1, true);
db._document(a1); // xpError(ERROR_ARANGO_DOCUMENT_NOT_FOUND)
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock documentsCollectionRemoveConflict

Remove a document using the new signature:

@startDocuBlockInline documentsCollectionRemoveSignature
@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionRemoveSignature}
~ db._create("example");
db.example.insert({ a: 1 } );
| db.example.remove("example/11265325374",
{ overwrite: true, waitForSync: false})
~ db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock documentsCollectionRemoveSignature
!SUBSECTION Insert

<!-- arangod/V8Server/v8-collection.cpp -->

@brief saves a new edge document
`edge-collection.insert(from, to, document)`

Saves a new edge and returns the document-handle. *from* and *to*
must be documents or document references.

`edge-collection.insert(from, to, document, waitForSync)`

The optional *waitForSync* parameter can be used to force
synchronization of the document creation operation to disk even if
the *waitForSync* flag has been disabled for the entire collection.
Thus, the *waitForSync* parameter can be used to force synchronization
of just specific operations. To use this, set the *waitForSync* parameter
to *true*. If the *waitForSync* parameter is not specified or set to
*false*, then the collection's default *waitForSync* behavior is
applied. The *waitForSync* parameter cannot be used to disable
synchronization for collections that have a default *waitForSync* value
of *true*.

@EXAMPLES

@startDocuBlockInline EDGCOL_01_SaveEdgeCol
@EXAMPLE_ARANGOSH_OUTPUT{EDGCOL_01_SaveEdgeCol}
db._create("vertex");
db._createEdgeCollection("relation");
v1 = db.vertex.insert({ name : "vertex 1" });
v2 = db.vertex.insert({ name : "vertex 2" });
e1 = db.relation.insert(v1, v2, { label : "knows" });
db._document(e1);
~ db._drop("relation");
~ db._drop("vertex");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock EDGCOL_01_SaveEdgeCol
!SUBSECTION Edges

<!-- arangod/V8Server/v8-query.cpp -->

@brief selects all edges for a set of vertices
`edge-collection.edges(vertex)`

The *edges* operator finds all edges starting from (outbound) or ending
in (inbound) *vertex*.

`edge-collection.edges(vertices)`

The *edges* operator finds all edges starting from (outbound) or ending
in (inbound) a document from *vertices*, which must be a list of documents
or document handles.

@EXAMPLES

@startDocuBlockInline EDGCOL_02_Relation
@EXAMPLE_ARANGOSH_OUTPUT{EDGCOL_02_Relation}
db._create("vertex");
db._createEdgeCollection("relation");
~ var myGraph = {};
myGraph.v1 = db.vertex.insert({ name : "vertex 1" });
myGraph.v2 = db.vertex.insert({ name : "vertex 2" });
| myGraph.e1 = db.relation.insert(myGraph.v1, myGraph.v2,
{ label : "knows"});
db._document(myGraph.e1);
db.relation.edges(myGraph.e1._id);
~ db._drop("relation");
~ db._drop("vertex");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock EDGCOL_02_Relation
!SUBSECTION InEdges

<!-- arangod/V8Server/v8-query.cpp -->

@brief selects all inbound edges
`edge-collection.inEdges(vertex)`

The *inEdges* operator finds all edges ending in (inbound) *vertex*.

`edge-collection.inEdges(vertices)`

The *inEdges* operator finds all edges ending in (inbound) a document from
*vertices*, which must be a list of documents or document handles.

@EXAMPLES

@startDocuBlockInline EDGCOL_02_inEdges
@EXAMPLE_ARANGOSH_OUTPUT{EDGCOL_02_inEdges}
db._create("vertex");
db._createEdgeCollection("relation");
~ var myGraph = {};
myGraph.v1 = db.vertex.insert({ name : "vertex 1" });
myGraph.v2 = db.vertex.insert({ name : "vertex 2" });
| myGraph.e1 = db.relation.insert(myGraph.v1, myGraph.v2,
{ label : "knows"});
db._document(myGraph.e1);
db.relation.inEdges(myGraph.v1._id);
db.relation.inEdges(myGraph.v2._id);
~ db._drop("relation");
~ db._drop("vertex");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock EDGCOL_02_inEdges
!SUBSECTION OutEdges

<!-- arangod/V8Server/v8-query.cpp -->

@brief selects all outbound edges
`edge-collection.outEdges(vertex)`

The *outEdges* operator finds all edges starting from (outbound) *vertex*.

`edge-collection.outEdges(vertices)`

The *outEdges* operator finds all edges starting from (outbound) a document
from *vertices*, which must be a list of documents or document handles.

@EXAMPLES

@startDocuBlockInline EDGCOL_02_outEdges
@EXAMPLE_ARANGOSH_OUTPUT{EDGCOL_02_outEdges}
db._create("vertex");
db._createEdgeCollection("relation");
~ var myGraph = {};
myGraph.v1 = db.vertex.insert({ name : "vertex 1" });
myGraph.v2 = db.vertex.insert({ name : "vertex 2" });
| myGraph.e1 = db.relation.insert(myGraph.v1, myGraph.v2,
{ label : "knows"});
db._document(myGraph.e1);
db.relation.outEdges(myGraph.v1._id);
db.relation.outEdges(myGraph.v2._id);
~ db._drop("relation");
~ db._drop("vertex");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock EDGCOL_02_outEdges
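The relationship between the three operators can be summarized with a standalone sketch over a plain array of edge documents (illustrative only, not ArangoDB's query implementation): `edges` is the union of `inEdges` and `outEdges`.

```js
// Illustrative model of edges()/inEdges()/outEdges() over edge documents
// with _from and _to attributes.
const edgeList = [
  { _from: "vertex/v1", _to: "vertex/v2", label: "knows" },
  { _from: "vertex/v2", _to: "vertex/v3", label: "knows" },
];

const outEdges = (vertex) => edgeList.filter((e) => e._from === vertex);
const inEdges  = (vertex) => edgeList.filter((e) => e._to === vertex);
const edges    = (vertex) =>
  edgeList.filter((e) => e._from === vertex || e._to === vertex);

// v2 has one inbound and one outbound edge, so edges("vertex/v2")
// returns both of them.
```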
!SECTION Mounting the API documentation

<!-- js/server/modules/@arangodb/foxx/controller.js -->

`Controller.apiDocumentation(path, [opts])`

Mounts the API documentation (Swagger) at the given `path`.

Note that the `path` can use path parameters as usual but must not use any
wildcard (`*`) or optional (`:name?`) parameters.

The optional **opts** can be an object with any of the following properties:

* **before**: a function that will be executed before a request to
  this endpoint is processed further.
* **appPath**: the mount point of the app for which documentation will be
  shown. Default: the mount point of the active app.
* **indexFile**: file path or file name of the Swagger HTML file.
  Default: `"index.html"`.
* **swaggerJson**: file path or file name of the Swagger API description JSON
  file or a function `swaggerJson(req, res, opts)` that sends a Swagger API
  description in JSON. Default: the built-in Swagger description generator.
* **swaggerRoot**: absolute path that will be used as the base path for any
  relative paths of the documentation assets, the **swaggerJson** file and
  the **indexFile**. Default: the built-in Swagger distribution.

If **opts** is a function, it will be used as the value of **opts.before**.

If **opts.before** returns `false`, the request will not be processed
further.

If **opts.before** returns an object, any properties will override the
equivalent properties of **opts** for the current request.

Of course all **before**, **after** or **around** functions defined on the
controller will also be executed as usual.

**Examples**

```js
controller.apiDocumentation('/my/dox');
```

A request to `/my/dox` will be redirected to `/my/dox/index.html`,
which will show the API documentation of the active app.

```js
controller.apiDocumentation('/my/dox', function (req, res) {
  if (!req.session.get('uid')) {
    res.status(403);
    res.json({error: 'only logged in users may see the API'});
    return false;
  }
  return {appPath: req.parameters.mount};
});
```

A request to `/my/dox/index.html?mount=/_admin/aardvark` will show the
API documentation of the admin frontend (mounted at `/_admin/aardvark`).
If the user is not logged in, the error message will be shown instead.
|
||||||
|
|
||||||
|
|
||||||
|
|
!SUBSECTION Extend

<!-- js/server/modules/@arangodb/foxx/model.js -->

`FoxxModel#extend(instanceProperties, classProperties)`

Extend the Model prototype to add or overwrite methods.
The first object contains the properties to be defined on the instance,
the second object those to be defined on the prototype.

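The split between the two parameter objects can be sketched in plain JavaScript. This is a simplified stand-in, not the actual Foxx implementation; `Base`, `extend`, `Person` and their attributes are made up for illustration:

```javascript
// Sketch of extend(instanceProperties, classProperties) semantics:
// the first object lands on the subclass prototype (per instance),
// the second on the constructor itself (per class).
function extend(Base, instanceProperties, classProperties) {
  function Sub() { Base.apply(this, arguments); }
  Sub.prototype = Object.create(Base.prototype);
  Object.assign(Sub.prototype, instanceProperties);
  Object.assign(Sub, classProperties);
  return Sub;
}

// A minimal stand-in for Foxx.Model.
function Base(data) { this.attributes = data || {}; }
Base.prototype.get = function (name) { return this.attributes[name]; };

var Person = extend(Base, {
  fullName: function () {
    return this.get('first') + ' ' + this.get('last');
  }
}, {
  collectionName: 'persons'
});

var p = new Person({first: 'Ada', last: 'Lovelace'});
console.log(p.fullName());          // Ada Lovelace
console.log(Person.collectionName); // persons
```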
!SUBSECTION Initialize

<!-- js/server/modules/@arangodb/foxx/model.js -->

`new FoxxModel(data)`

If you initialize a model, you can give it initial *data* as an object.

@EXAMPLES

```js
instance = new Model({
  a: 1
});
```

!SUBSECTION Get

<!-- js/server/modules/@arangodb/foxx/model.js -->

`FoxxModel#get(name)`

Get the value of an attribute.

@EXAMPLES

```js
instance = new Model({
  a: 1
});

instance.get("a");
```

!SUBSECTION Set

<!-- js/server/modules/@arangodb/foxx/model.js -->

`FoxxModel#set(name, value)`

Set the value of an attribute or multiple attributes at once.

@EXAMPLES

```js
instance = new Model({
  a: 1
});

instance.set("a", 2);
instance.set({
  b: 2
});
```

!SUBSECTION Has

<!-- js/server/modules/@arangodb/foxx/model.js -->

`FoxxModel#has(name)`

Returns true if the attribute is set to a non-null or non-undefined value.

@EXAMPLES

```js
instance = new Model({
  a: 1
});

instance.has("a"); //=> true
instance.has("b"); //=> false
```

!SUBSECTION isValid

<!-- js/server/modules/@arangodb/foxx/model.js -->

`model.isValid`

The *isValid* flag indicates whether the model's state is currently valid.
If the model does not have a schema, it will always be considered valid.

!SUBSECTION Errors

<!-- js/server/modules/@arangodb/foxx/model.js -->

`model.errors`

The *errors* property maps the names of any invalid attributes to their
corresponding validation error.

!SUBSECTION Attributes

<!-- js/server/modules/@arangodb/foxx/model.js -->

`model.attributes`

The *attributes* property is the internal hash containing the model's state.

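How *isValid* and *errors* relate can be sketched without ArangoDB. This is an illustrative stand-in, not the real validation code (the actual model validates its attributes against its schema); the `validate` helper and the type-name schema are invented for the example:

```javascript
// Sketch: check attributes against a simple type schema and derive
// the isValid flag and the errors map from the failures found.
function validate(attributes, schema) {
  var errors = {};
  Object.keys(schema).forEach(function (key) {
    if (typeof attributes[key] !== schema[key]) {
      errors[key] = 'expected ' + schema[key];
    }
  });
  return { isValid: Object.keys(errors).length === 0, errors: errors };
}

var ok = validate({ age: 9 }, { age: 'number' });
var bad = validate({ age: 'nine' }, { age: 'number' });
console.log(ok.isValid);  // true
console.log(bad.isValid); // false
console.log(bad.errors);  // { age: 'expected number' }
```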
!SUBSECTION forDB

<!-- js/server/modules/@arangodb/foxx/model.js -->

`FoxxModel#forDB()`

Return a copy of the model which can be saved into ArangoDB.

!SUBSECTION forClient

<!-- js/server/modules/@arangodb/foxx/model.js -->

`FoxxModel#forClient()`

Return a copy of the model which you can send to the client.

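The intent behind the two conversions can be sketched on plain objects. The attribute names (`passwordHash`) and the stripping rules are hypothetical; the real methods live on Foxx.Model and are typically overridden per model:

```javascript
// Sketch: forDB keeps everything needed for persistence, while a
// typical forClient override strips internals before responding.
var attributes = { _id: 'users/1', _key: '1', _rev: '123',
                   name: 'Alice', passwordHash: 'x' };

function forDB(attrs) {
  return Object.assign({}, attrs); // full copy, suitable for saving
}

function forClient(attrs) {
  var copy = Object.assign({}, attrs);
  delete copy._rev;         // internal revision
  delete copy.passwordHash; // never expose secrets
  return copy;
}

console.log(Object.keys(forClient(attributes))); // [ '_id', '_key', 'name' ]
```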
!SUBSECTION Initialize

`new FoxxRepository(collection, opts)`

Create a new instance of Repository.

A Foxx Repository is always initialized with a collection object. You can get
your collection object by asking your Foxx.Controller for it: the
*collection* method takes the name of the collection (and will prepend
the prefix of your application). It also takes two optional arguments:

1. Model: The prototype of a model. If you do not provide it, it will default
   to Foxx.Model
2. Prefix: You can provide the prefix of the application if you need it in
   your Repository (e.g. for AQL queries)

If the Model has any static methods named after the lifecycle events, they
will automatically be registered as listeners to the events emitted by this
repository.

**Examples**

```js
instance = new Repository(appContext.collection("my_collection"));
// or:
instance = new Repository(appContext.collection("my_collection"), {
  model: MyModelPrototype
});
```

Example with listeners:

```js
var ValidatedModel = Model.extend({
  schema: {...}
}, {
  beforeSave(modelInstance) {
    if (!modelInstance.valid) {
      throw new Error('Refusing to save: model is not valid!');
    }
  }
});
instance = new Repository(collection, {model: ValidatedModel});
```

!SECTION Defining custom queries

!SUBSECTION Collection

The wrapped ArangoDB collection object.

!SUBSECTION Model

The model of this repository. Formerly called "modelPrototype".

!SUBSECTION Model schema

The schema of this repository's model.

!SUBSECTION Prefix

The prefix of the application. This is useful if you want to construct AQL
queries, for example.

!SECTION Defining indexes

!SUBSECTION Adding entries to the repository

`FoxxRepository#save(model)`

Saves a model into the database.
Expects a model. Will set the ID and Rev on the model.
Returns the model.

@EXAMPLES

```javascript
repository.save(my_model);
```

!SUBSECTION Finding entries in the repository

`FoxxRepository#byId(id)`

Returns the model for the given ID ("collection/key") or "key".

@EXAMPLES

```javascript
var byIdModel = repository.byId('test/12411');
byIdModel.get('name');

var byKeyModel = repository.byId('12412');
byKeyModel.get('name');
```

`FoxxRepository#byExample(example)`

Returns an array of models matching the given example.

@EXAMPLES

```javascript
var myModel = repository.byExample({ amazing: true });
myModel[0].get('name');
```

`FoxxRepository#firstExample(example)`

Returns the first model that matches the given example.

@EXAMPLES

```javascript
var myModel = repository.firstExample({ amazing: true });
myModel.get('name');
```

`FoxxRepository#all()`

Returns an array of all models in the repository. You can provide
both a skip and a limit value.

**Warning:** ArangoDB doesn't guarantee a specific order in this case. To make
this really useful we have to explicitly provide something to order by.

*Parameter*

* *options* (optional):
  * *skip* (optional): skips the first given number of models.
  * *limit* (optional): only returns at most the given number of models.

@EXAMPLES

```javascript
var myModel = repository.all({ skip: 4, limit: 2 });
myModel[0].get('name');
```

`FoxxRepository#any()`

Returns a random model from this repository (or null if there is none).

@EXAMPLES

```javascript
repository.any();
```

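The *skip*/*limit* options of *all()* behave like slicing an array: *skip* drops entries from the front, *limit* caps how many follow. A plain-JavaScript stand-in (not a real repository; the document array is invented):

```javascript
// Sketch: skip drops the first n entries, limit caps the rest.
function all(docs, options) {
  options = options || {};
  var skip = options.skip || 0;
  var limit = options.limit === undefined ? docs.length : options.limit;
  return docs.slice(skip, skip + limit);
}

var docs = ['a', 'b', 'c', 'd', 'e', 'f', 'g'];
console.log(all(docs, { skip: 4, limit: 2 })); // [ 'e', 'f' ]
console.log(all(docs, { limit: 3 }));          // [ 'a', 'b', 'c' ]
```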
!SUBSECTION Removing entries from the repository

`FoxxRepository#remove(model)`

Remove the model from the repository.
Expects a model.

@EXAMPLES

```javascript
repository.remove(myModel);
```

`FoxxRepository#removeById(id)`

Remove the document with the given ID ("collection/key") or "key".
Expects an ID or key of an existing document.

@EXAMPLES

```javascript
repository.removeById('test/12121');
repository.removeById('12122');
```

`FoxxRepository#removeByExample(example)`

Find all documents that fit this example and remove them.

@EXAMPLES

```javascript
repository.removeByExample({ toBeDeleted: true });
```

!SUBSECTION Replacing entries in the repository

`FoxxRepository#replace(model)`

Find the model in the database by its *_id* and replace it with this version.
Expects a model. Sets the revision of the model.
Returns the model.

@EXAMPLES

```javascript
myModel.set('name', 'Jan Steemann');
repository.replace(myModel);
```

`FoxxRepository#replaceById(id, object)`

Find the item in the database by the given ID ("collection/key") or "key"
and replace it with the given object's attributes.

If the object is a model, updates the model's revision and returns the model.

@EXAMPLES

```javascript
repository.replaceById('test/123345', myNewModel);
repository.replaceById('123346', myNewModel);
```

`FoxxRepository#replaceByExample(example, object)`

Find every matching item by example and replace it with the attributes in
the provided object.

@EXAMPLES

```javascript
repository.replaceByExample({ replaceMe: true }, myNewModel);
```

!SUBSECTION Updating entries in the repository

`FoxxRepository#update(model, object)`

Find the model in the database by its *_id* and update it with the given object.
Expects a model. Sets the revision of the model and updates its properties.
Returns the model.

@EXAMPLES

```javascript
repository.update(myModel, {name: 'Jan Steemann'});
```

`FoxxRepository#updateById(id, object)`

Find an item by ID ("collection/key") or "key" and update it with the
attributes in the provided object.

If the object is a model, updates the model's revision and returns the model.

@EXAMPLES

```javascript
repository.updateById('test/12131', { newAttribute: 'awesome' });
repository.updateById('12132', { newAttribute: 'awesomer' });
```

`FoxxRepository#updateByExample(example, object)`

Find every matching item by example and update it with the attributes in
the provided object.

@EXAMPLES

```javascript
repository.updateByExample({ findMe: true }, { newAttribute: 'awesome' });
```

`FoxxRepository#exists(id)`

Checks whether a model with the given ID or key exists.

@EXAMPLES

```javascript
repository.exists(model.get('_id'));
```

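The difference between the replace* and the update* families of methods can be sketched on plain objects: replace swaps the whole stored document for the given one, update merges the given attributes into it. Illustrative only; the real methods operate on the collection:

```javascript
var doc = { name: 'Jan', city: 'Cologne' };

// replace: the stored document becomes exactly the given object
function replaceDoc(stored, object) {
  return Object.assign({}, object);
}

// update: the given attributes are merged into the stored document
function updateDoc(stored, object) {
  return Object.assign({}, stored, object);
}

console.log(replaceDoc(doc, { name: 'Tim' })); // { name: 'Tim' } -- city is gone
console.log(updateDoc(doc, { name: 'Tim' }));  // { name: 'Tim', city: 'Cologne' }
```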
!SUBSECTION Counting entries in the repository

`FoxxRepository#count()`

Returns the number of entries in this collection.

@EXAMPLES

```javascript
repository.count();
```

!SUBSECTION Index-specific repository methods

`FoxxRepository#range(attribute, left, right)`

Returns all models in the repository such that the attribute is greater
than or equal to *left* and strictly less than *right*.

For range queries it is required that a skiplist index is present for the
queried attribute. If no skiplist index is present on the attribute, the
method will not be available.

*Parameter*

* *attribute*: attribute to query.
* *left*: lower bound of the value range (inclusive).
* *right*: upper bound of the value range (exclusive).

@EXAMPLES

```javascript
repository.range("age", 10, 13);
```

`FoxxRepository#near(latitude, longitude, options)`

Finds models near the coordinate *(latitude, longitude)*. The returned
list is sorted by distance with the nearest model coming first.

For geo queries it is required that a geo index is present in the
repository. If no geo index is present, the methods will not be available.

*Parameter*

* *latitude*: latitude of the coordinate.
* *longitude*: longitude of the coordinate.
* *options* (optional):
  * *geo* (optional): name of the specific geo index to use.
  * *distance* (optional): If set to a truthy value, the returned models
    will have an additional property containing the distance between the
    given coordinate and the model. If the value is a string, that value
    will be used as the property name, otherwise the name defaults to *"distance"*.
  * *limit* (optional): number of models to return. Defaults to *100*.

@EXAMPLES

```javascript
repository.near(0, 0, {geo: "home", distance: true, limit: 10});
```

`FoxxRepository#within(latitude, longitude, radius, options)`

Finds models within the distance *radius* from the coordinate
*(latitude, longitude)*. The returned list is sorted by distance with the
nearest model coming first.

For geo queries it is required that a geo index is present in the
repository. If no geo index is present, the methods will not be available.

*Parameter*

* *latitude*: latitude of the coordinate.
* *longitude*: longitude of the coordinate.
* *radius*: maximum distance from the coordinate.
* *options* (optional):
  * *geo* (optional): name of the specific geo index to use.
  * *distance* (optional): If set to a truthy value, the returned models
    will have an additional property containing the distance between the
    given coordinate and the model. If the value is a string, that value
    will be used as the property name, otherwise the name defaults to *"distance"*.
  * *limit* (optional): number of models to return. Defaults to *100*.

@EXAMPLES

```javascript
repository.within(0, 0, 2000 * 1000, {geo: "home", distance: true, limit: 10});
```

`FoxxRepository#fulltext(attribute, query, options)`

Returns all models whose attribute *attribute* matches the search query
*query*.

In order to use the fulltext method, a fulltext index must be defined on
the repository. If multiple fulltext indexes are defined on the repository
for the attribute, the most capable one will be selected.
If no fulltext index is present, the method will not be available.

*Parameter*

* *attribute*: model attribute to perform a search on.
* *query*: query to match the attribute against.
* *options* (optional):
  * *limit* (optional): number of models to return. Defaults to all.

@EXAMPLES

```javascript
repository.fulltext("text", "word", {limit: 1});
```

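The half-open `[left, right)` interval used by *range()* can be sketched with a plain filter (a stand-in only; the real method requires a skiplist index and runs inside ArangoDB):

```javascript
// Sketch: inclusive lower bound, exclusive upper bound.
function range(docs, attribute, left, right) {
  return docs.filter(function (doc) {
    return doc[attribute] >= left && doc[attribute] < right;
  });
}

var people = [{ age: 9 }, { age: 10 }, { age: 12 }, { age: 13 }];
console.log(range(people, 'age', 10, 13)); // [ { age: 10 }, { age: 12 } ]
```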
!SUBSECTION Edges

@brief Select some edges from the graph.

`graph._edges(examples)`

Creates an AQL statement to select a subset of the edges stored in the graph.
This is one of the entry points for the fluent AQL interface.
It will return a mutable AQL statement which can be further refined, using the
functions described below.
The resulting set of edges can be filtered by defining one or more *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

In the examples the *toArray* function is used to print the result.
The description of this function can be found below.

To request unfiltered edges:

@startDocuBlockInline generalGraphEdgesUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphEdgesUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
graph._edges().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphEdgesUnfiltered

To request filtered edges:

@startDocuBlockInline generalGraphEdgesFiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphEdgesFiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
graph._edges({type: "married"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphEdgesFiltered

!SUBSECTION Vertices

@brief Select some vertices from the graph.

`graph._vertices(examples)`

Creates an AQL statement to select a subset of the vertices stored in the graph.
This is one of the entry points for the fluent AQL interface.
It will return a mutable AQL statement which can be further refined, using the
functions described below.
The resulting set of vertices can be filtered by defining one or more *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

In the examples the *toArray* function is used to print the result.
The description of this function can be found below.

To request unfiltered vertices:

@startDocuBlockInline generalGraphVerticesUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphVerticesUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
graph._vertices().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphVerticesUnfiltered

To request filtered vertices:

@startDocuBlockInline generalGraphVerticesFiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphVerticesFiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
graph._vertices([{name: "Alice"}, {name: "Bob"}]).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphVerticesFiltered

!SECTION Working with the query cursor

!SUBSECTION ToArray

@brief Returns an array containing the complete result.

`graph_query.toArray()`

This function executes the generated query and returns the
entire result as one array.
*toArray()* does not return the generated query anymore and
hence can only be the endpoint of a query.
However, keeping a reference to the query before
executing allows you to chain further statements to it.

@EXAMPLES

To collect the entire result of a query *toArray()* can be used:

@startDocuBlockInline generalGraphFluentAQLToArray
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLToArray}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices();
query.toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLToArray

!SUBSECTION HasNext

@brief Checks if the query has further results.

`graph_query.hasNext()`

The generated statement maintains a cursor for you.
If this cursor is already present *hasNext()* will
use this cursor's position to determine if there are
further results available.
If the query has not yet been executed *hasNext()*
will execute it and create the cursor for you.

@EXAMPLES

Start query execution with hasNext:

@startDocuBlockInline generalGraphFluentAQLHasNext
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLHasNext}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices();
query.hasNext();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLHasNext

Iterate over the result as long as it has more elements:

@startDocuBlockInline generalGraphFluentAQLHasNextIteration
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLHasNextIteration}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices();
| while (query.hasNext()) {
| var entry = query.next();
| // Do something with the entry
}
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLHasNextIteration

!SUBSECTION Next

@brief Request the next element in the result.

`graph_query.next()`

The generated statement maintains a cursor for you.
If this cursor is already present *next()* will
use this cursor's position to deliver the next result.
Also the cursor position will be moved by one.
If the query has not yet been executed *next()*
will execute it and create the cursor for you.
It will throw an error if your query has no further results.

@EXAMPLES

Request some elements with next:

@startDocuBlockInline generalGraphFluentAQLNext
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLNext}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices();
query.next();
query.next();
query.next();
query.next();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLNext

The cursor is recreated if the query is changed:

@startDocuBlockInline generalGraphFluentAQLNextRecreate
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLNextRecreate}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices();
query.next();
query.edges();
query.next();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLNextRecreate

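The cursor behavior described for *hasNext()* and *next()* can be sketched over a plain array. This is illustrative only (the fluent query lazily creates such a cursor on first use; `makeCursor` is invented for the sketch):

```javascript
// Sketch: a cursor that executes its query lazily on first access
// and throws once all results have been consumed.
function makeCursor(execute) {
  var result = null, pos = 0;
  function ensure() { if (result === null) { result = execute(); } }
  return {
    hasNext: function () { ensure(); return pos < result.length; },
    next: function () {
      ensure();
      if (pos >= result.length) { throw new Error('no more results'); }
      return result[pos++];
    }
  };
}

var cursor = makeCursor(function () { return ['Alice', 'Bob']; });
console.log(cursor.hasNext()); // true
console.log(cursor.next());    // Alice
console.log(cursor.next());    // Bob
console.log(cursor.hasNext()); // false
```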
!SUBSECTION Count

@brief Returns the number of returned elements if the query is executed.

`graph_query.count()`

This function determines the number of elements the result of the query contains.
It can be used at the beginning of execution of the query
before using *next()* or in between *next()* calls.
The query object maintains a cursor of the query for you.
*count()* does not change the cursor position.

@EXAMPLES

To count the number of matched elements:

@startDocuBlockInline generalGraphFluentAQLCount
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLCount}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices();
query.count();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLCount

!SECTION Fluent queries

In this section all available query statements are described.
!SUBSECTION Edges

@brief Select all edges for the vertices selected before.

`graph_query.edges(examples)`

Creates an AQL statement to select all edges for each of the vertices selected
in the step before.
This will include *inbound* as well as *outbound* edges.
The resulting set of edges can be filtered by defining one or more *examples*.

The complexity of this method is **O(n\*m^x)** with *n* being the vertices defined by the
parameter *vertexExamples*, *m* the average number of edges of a vertex and *x* the maximal depth.
Hence the default call has a complexity of **O(n\*m)**.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

To request unfiltered edges:

@startDocuBlockInline generalGraphFluentAQLEdgesUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLEdgesUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.edges().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLEdgesUnfiltered

To request filtered edges by a single example:

@startDocuBlockInline generalGraphFluentAQLEdgesFilteredSingle
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLEdgesFilteredSingle}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.edges({type: "married"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLEdgesFilteredSingle

To request filtered edges by multiple examples:

@startDocuBlockInline generalGraphFluentAQLEdgesFilteredMultiple
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLEdgesFilteredMultiple}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.edges([{type: "married"}, {type: "friend"}]).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLEdgesFilteredMultiple
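The *examples* argument used by these filtering steps follows the matching semantics described under "Definition of examples": an example object matches an element if all of its attribute/value pairs are equal, and a list of examples matches if at least one example does. A minimal plain-JavaScript sketch of that idea (illustrative only — the helper names `matchesExample` and `filterByExamples` are made up, this is not the ArangoDB implementation):

```javascript
// An element matches an example object if every attribute of the example
// is present in the element with an equal value.
function matchesExample(doc, example) {
  return Object.keys(example).every(function (key) {
    return doc[key] === example[key];
  });
}

// No examples: keep everything. A single object or a list of objects:
// keep elements matching at least one example (logical OR).
function filterByExamples(docs, examples) {
  if (examples === undefined) return docs;
  var list = Array.isArray(examples) ? examples : [examples];
  return docs.filter(function (doc) {
    return list.some(function (ex) { return matchesExample(doc, ex); });
  });
}

var edges = [
  { _from: "female/alice", _to: "male/bob", type: "married" },
  { _from: "female/alice", _to: "male/charly", type: "friend" }
];
console.log(filterByExamples(edges, { type: "married" }).length);                      // 1
console.log(filterByExamples(edges, [{ type: "married" }, { type: "friend" }]).length); // 2
```

This mirrors why `edges()` with no argument returns everything, while `edges({type: "married"})` and `edges([{type: "married"}, {type: "friend"}])` return progressively larger filtered subsets.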
!SUBSECTION OutEdges

@brief Select all outbound edges for the vertices selected before.

`graph_query.outEdges(examples)`

Creates an AQL statement to select all *outbound* edges for each of the vertices selected
in the step before.
The resulting set of edges can be filtered by defining one or more *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

To request unfiltered outbound edges:

@startDocuBlockInline generalGraphFluentAQLOutEdgesUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLOutEdgesUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.outEdges().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLOutEdgesUnfiltered

To request filtered outbound edges by a single example:

@startDocuBlockInline generalGraphFluentAQLOutEdgesFilteredSingle
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLOutEdgesFilteredSingle}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.outEdges({type: "married"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLOutEdgesFilteredSingle

To request filtered outbound edges by multiple examples:

@startDocuBlockInline generalGraphFluentAQLOutEdgesFilteredMultiple
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLOutEdgesFilteredMultiple}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.outEdges([{type: "married"}, {type: "friend"}]).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLOutEdgesFilteredMultiple
!SUBSECTION InEdges

@brief Select all inbound edges for the vertices selected before.

`graph_query.inEdges(examples)`

Creates an AQL statement to select all *inbound* edges for each of the vertices selected
in the step before.
The resulting set of edges can be filtered by defining one or more *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

To request unfiltered inbound edges:

@startDocuBlockInline generalGraphFluentAQLInEdgesUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLInEdgesUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.inEdges().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLInEdgesUnfiltered

To request filtered inbound edges by a single example:

@startDocuBlockInline generalGraphFluentAQLInEdgesFilteredSingle
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLInEdgesFilteredSingle}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.inEdges({type: "married"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLInEdgesFilteredSingle

To request filtered inbound edges by multiple examples:

@startDocuBlockInline generalGraphFluentAQLInEdgesFilteredMultiple
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLInEdgesFilteredMultiple}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices([{name: "Alice"}, {name: "Bob"}]);
query.inEdges([{type: "married"}, {type: "friend"}]).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLInEdgesFilteredMultiple
!SUBSECTION Vertices

@brief Select all vertices connected to the edges selected before.

`graph_query.vertices(examples)`

Creates an AQL statement to select all vertices for each of the edges selected
in the step before.
This includes all vertices contained in the *_from* as well as the *_to* attribute of the edges.
The resulting set of vertices can be filtered by defining one or more *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

To request unfiltered vertices:

@startDocuBlockInline generalGraphFluentAQLVerticesUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLVerticesUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.vertices().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLVerticesUnfiltered

To request filtered vertices by a single example:

@startDocuBlockInline generalGraphFluentAQLVerticesFilteredSingle
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLVerticesFilteredSingle}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.vertices({name: "Alice"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLVerticesFilteredSingle

To request filtered vertices by multiple examples:

@startDocuBlockInline generalGraphFluentAQLVerticesFilteredMultiple
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLVerticesFilteredMultiple}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.vertices([{name: "Alice"}, {name: "Charly"}]).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLVerticesFilteredMultiple
!SUBSECTION FromVertices

@brief Select all source vertices of the edges selected before.

`graph_query.fromVertices(examples)`

Creates an AQL statement to select the set of vertices at which the edges selected
in the step before start.
This includes all vertices contained in the *_from* attribute of the edges.
The resulting set of vertices can be filtered by defining one or more *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

To request unfiltered source vertices:

@startDocuBlockInline generalGraphFluentAQLFromVerticesUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLFromVerticesUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.fromVertices().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLFromVerticesUnfiltered

To request filtered source vertices by a single example:

@startDocuBlockInline generalGraphFluentAQLFromVerticesFilteredSingle
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLFromVerticesFilteredSingle}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.fromVertices({name: "Alice"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLFromVerticesFilteredSingle

To request filtered source vertices by multiple examples:

@startDocuBlockInline generalGraphFluentAQLFromVerticesFilteredMultiple
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLFromVerticesFilteredMultiple}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.fromVertices([{name: "Alice"}, {name: "Charly"}]).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLFromVerticesFilteredMultiple
!SUBSECTION ToVertices

@brief Select all vertices targeted by the edges selected before.

`graph_query.toVertices(examples)`

Creates an AQL statement to select the set of vertices at which the edges selected
in the step before end.
This includes all vertices contained in the *_to* attribute of the edges.
The resulting set of vertices can be filtered by defining one or more *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

To request unfiltered target vertices:

@startDocuBlockInline generalGraphFluentAQLToVerticesUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLToVerticesUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.toVertices().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLToVerticesUnfiltered

To request filtered target vertices by a single example:

@startDocuBlockInline generalGraphFluentAQLToVerticesFilteredSingle
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLToVerticesFilteredSingle}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.toVertices({name: "Bob"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLToVerticesFilteredSingle

To request filtered target vertices by multiple examples:

@startDocuBlockInline generalGraphFluentAQLToVerticesFilteredMultiple
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLToVerticesFilteredMultiple}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.toVertices([{name: "Bob"}, {name: "Diana"}]).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLToVerticesFilteredMultiple
!SUBSECTION Neighbors

@brief Select all neighbors of the vertices selected in the step before.

`graph_query.neighbors(examples, options)`

Creates an AQL statement to select all neighbors for each of the vertices selected
in the step before.
The resulting set of vertices can be filtered by defining one or more *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@PARAM{options, object, optional}
An object defining further options. Can have the following values:
* *direction*: The direction of the edges. Possible values are *outbound*, *inbound* and *any* (default).
* *edgeExamples*: Filter the edges to be followed, see [Definition of examples](#definition-of-examples)
* *edgeCollectionRestriction*: One or a list of edge-collection names that should be
  considered to be on the path.
* *vertexCollectionRestriction*: One or a list of vertex-collection names that should be
  considered on the intermediate vertex steps.
* *minDepth*: Defines the minimal number of intermediate steps to neighbors (default is 1).
* *maxDepth*: Defines the maximal number of intermediate steps to neighbors (default is 1).

@EXAMPLES

To request unfiltered neighbors:

@startDocuBlockInline generalGraphFluentAQLNeighborsUnfiltered
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLNeighborsUnfiltered}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices({name: "Alice"});
query.neighbors().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLNeighborsUnfiltered

To request filtered neighbors by a single example:

@startDocuBlockInline generalGraphFluentAQLNeighborsFilteredSingle
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLNeighborsFilteredSingle}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices({name: "Alice"});
query.neighbors({name: "Bob"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLNeighborsFilteredSingle

To request filtered neighbors by multiple examples:

@startDocuBlockInline generalGraphFluentAQLNeighborsFilteredMultiple
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLNeighborsFilteredMultiple}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices({name: "Alice"});
query.neighbors([{name: "Bob"}, {name: "Charly"}]).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLNeighborsFilteredMultiple
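The *direction*, *minDepth* and *maxDepth* options above describe a bounded breadth-first expansion. A plain-JavaScript model of those three options over a toy edge list (a conceptual sketch only — the `neighbors` helper below is made up and is not the ArangoDB implementation, which also handles edge examples and collection restrictions):

```javascript
// Collect neighbors of `start` reachable in minDepth..maxDepth steps,
// following edges in the given direction ("outbound", "inbound", "any").
function neighbors(edges, start, options) {
  options = options || {};
  var direction = options.direction || "any";
  var minDepth = options.minDepth === undefined ? 1 : options.minDepth;
  var maxDepth = options.maxDepth === undefined ? 1 : options.maxDepth;

  var frontier = [start];
  var visited = {};
  visited[start] = true;
  var result = [];
  for (var depth = 1; depth <= maxDepth; depth++) {
    var next = [];
    frontier.forEach(function (v) {
      edges.forEach(function (e) {
        var target = null;
        if (e.from === v && direction !== "inbound") target = e.to;   // follow outgoing
        if (e.to === v && direction !== "outbound") target = e.from;  // follow incoming
        if (target !== null && !(target in visited)) {
          visited[target] = true;
          next.push(target);
          if (depth >= minDepth) result.push(target);  // only report within the window
        }
      });
    });
    frontier = next;
  }
  return result;
}

var edges = [
  { from: "alice", to: "bob" },
  { from: "bob", to: "charly" }
];
console.log(neighbors(edges, "alice"));                           // ["bob"]
console.log(neighbors(edges, "alice", { maxDepth: 2 }));          // ["bob", "charly"]
console.log(neighbors(edges, "alice", { direction: "inbound" })); // []
```

With the defaults (*minDepth* = *maxDepth* = 1, *direction* = *any*) only directly connected vertices are returned, which matches the unfiltered neighbors example above.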
!SUBSECTION Restrict

@brief Restricts the last statement in the chain to return
only elements of a specified set of collections

`graph_query.restrict(restrictions)`

By default all collections in the graph are searched for matching elements
whenever vertices and edges are requested.
Using *restrict* after such a statement allows restricting the search
to a specific set of collections within the graph.
The restriction is only applied to this one part of the query.
It does not affect earlier or later statements.

@PARAMS

@PARAM{restrictions, array, optional}
Define either one or a list of collections in the graph.
Only elements from these collections are taken into account for the result.

@EXAMPLES

Request all directly connected vertices unrestricted:

@startDocuBlockInline generalGraphFluentAQLUnrestricted
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLUnrestricted}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices({name: "Alice"});
query.edges().vertices().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLUnrestricted

Apply a restriction to the directly connected vertices:

@startDocuBlockInline generalGraphFluentAQLRestricted
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLRestricted}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices({name: "Alice"});
query.edges().vertices().restrict("female").toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLRestricted

Restriction of a query is only valid for collections known to the graph:

@startDocuBlockInline generalGraphFluentAQLRestrictedUnknown
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLRestrictedUnknown}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices({name: "Alice"});
query.edges().vertices().restrict(["female", "male", "products"]).toArray(); // xpError(ERROR_BAD_PARAMETER);
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLRestrictedUnknown
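The restriction semantics — keep only elements from the named collections, and reject collection names the graph does not know — can be sketched in plain JavaScript (a conceptual model under the assumption that an element's collection is the prefix of its `_id` before the `/`; the `restrict` helper here is made up, not the module's code):

```javascript
// Keep only elements whose collection is in `restrictions`; throw for
// restrictions naming collections the graph does not contain.
function restrict(elements, knownCollections, restrictions) {
  var list = Array.isArray(restrictions) ? restrictions : [restrictions];
  list.forEach(function (name) {
    if (knownCollections.indexOf(name) === -1) {
      throw new Error("collection not known to the graph: " + name);
    }
  });
  return elements.filter(function (el) {
    return list.indexOf(el._id.split("/")[0]) !== -1;
  });
}

var known = ["female", "male"];
var vertices = [
  { _id: "female/alice" },
  { _id: "male/bob" }
];
console.log(restrict(vertices, known, "female").length); // 1 -- only Alice remains
```

Passing `"products"` in the restrictions would throw, mirroring the `ERROR_BAD_PARAMETER` case in the last example above.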
!SUBSECTION Filter

@brief Filter the result of the query

`graph_query.filter(examples)`

This can be used to further specify the expected result of the query.
The result set is reduced to the set of elements that match the given *examples*.

@PARAMS

@PARAM{examples, object, optional}
See [Definition of examples](#definition-of-examples)

@EXAMPLES

Request vertices unfiltered:

@startDocuBlockInline generalGraphFluentAQLUnfilteredVertices
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLUnfilteredVertices}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.toVertices().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLUnfilteredVertices

Request vertices filtered:

@startDocuBlockInline generalGraphFluentAQLFilteredVertices
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLFilteredVertices}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.toVertices().filter({name: "Alice"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLFilteredVertices

Request edges unfiltered:

@startDocuBlockInline generalGraphFluentAQLUnfilteredEdges
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLUnfilteredEdges}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.toVertices().outEdges().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLUnfilteredEdges

Request edges filtered:

@startDocuBlockInline generalGraphFluentAQLFilteredEdges
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLFilteredEdges}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._edges({type: "married"});
query.toVertices().outEdges().filter({type: "married"}).toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLFilteredEdges
!SUBSECTION Path

@brief The result of the query is the path to all elements.

`graph_query.path()`

By default the result of the generated AQL query is the set of elements passing the last matches.
So if the last step is a `vertices()` query, the result will be a set of vertices.
Using `path()` as the last action before requesting the result
will modify the result such that the path required to find the set of vertices is returned.

@EXAMPLES

Request the iteratively explored path using vertices and edges:

@startDocuBlockInline generalGraphFluentAQLPathSimple
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLPathSimple}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices({name: "Alice"});
query.outEdges().toVertices().path().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLPathSimple

When requesting neighbors the path to these neighbors is expanded:

@startDocuBlockInline generalGraphFluentAQLPathNeighbors
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphFluentAQLPathNeighbors}
var examples = require("@arangodb/graph-examples/example-graph.js");
var graph = examples.loadGraph("social");
var query = graph._vertices({name: "Alice"});
query.neighbors().path().toArray();
~ examples.dropGraph("social");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphFluentAQLPathNeighbors
There is no need to include the referenced collections within the query, this mo
!SUBSUBSECTION Three Steps to create a graph

* Create a graph

@startDocuBlockInline generalGraphCreateGraphHowTo1
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphCreateGraphHowTo1}
var graph_module = require("@arangodb/general-graph");
var graph = graph_module._create("myGraph");
graph;
~ graph_module._drop("myGraph", true);
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphCreateGraphHowTo1

* Add some vertex collections

@startDocuBlockInline generalGraphCreateGraphHowTo2
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphCreateGraphHowTo2}
~ var graph_module = require("@arangodb/general-graph");
~ var graph = graph_module._create("myGraph");
graph._addVertexCollection("shop");
graph._addVertexCollection("customer");
graph._addVertexCollection("pet");
graph;
~ graph_module._drop("myGraph", true);
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphCreateGraphHowTo2

* Define relations on the

@startDocuBlockInline generalGraphCreateGraphHowTo3
@EXAMPLE_ARANGOSH_OUTPUT{generalGraphCreateGraphHowTo3}
~ var graph_module = require("@arangodb/general-graph");
~ var graph = graph_module._create("myGraph");
var rel = graph_module._relation("isCustomer", ["shop"], ["customer"]);
graph._extendEdgeDefinitions(rel);
graph;
~ graph_module._drop("myGraph", true);
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock generalGraphCreateGraphHowTo3
@ -30,4 +30,42 @@ of "old" documents, and so save the user from implementing own jobs to purge
|
||||||
!SECTION Accessing Cap Constraints from the Shell

<!-- js/server/modules/@arangodb/arango-collection.js-->

@brief ensures that a cap constraint exists
`collection.ensureIndex({ type: "cap", size: size, byteSize: byteSize })`

Creates a size restriction (cap) for the collection of `size`
documents and/or `byteSize` data size. If the restriction is in place
and the (`size` plus one) document is added to the collection, or the
total active data size in the collection exceeds `byteSize`, then the
least recently created or updated documents are removed until all
constraints are satisfied.

It is allowed to specify either `size` or `byteSize`, or both at
the same time. If both are specified, then the automatic document removal
will be triggered by the first non-met constraint.

Note that at most one cap constraint is allowed per collection. Trying
to create additional cap constraints will result in an error. Creating
cap constraints is also not supported in sharded collections with more
than one shard.

Note that this does not imply any restriction on the number of revisions
of documents.

@EXAMPLES

Restrict the number of documents to at most 10:

@startDocuBlockInline collectionEnsureCapConstraint
@EXAMPLE_ARANGOSH_OUTPUT{collectionEnsureCapConstraint}
~db._create('examples');
db.examples.ensureIndex({ type: "cap", size: 10 });
for (var i = 0; i < 20; ++i) { var d = db.examples.save( { n : i } ); }
db.examples.count();
~db._drop('examples');
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionEnsureCapConstraint
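The removal policy described above can be sketched in plain JavaScript. This is an illustrative model only, not ArangoDB code: the function name `applyCap` and the per-document `bytes` field are hypothetical, and the server enforces the constraint internally on every insert/update.

```js
// Illustrative model only -- not ArangoDB code. The function name
// applyCap and the per-document "bytes" field are hypothetical; the
// server enforces the cap internally on each insert/update.
function applyCap(docs, size, byteSize) {
  // docs is ordered from least to most recently created/updated
  function totalBytes() {
    return docs.reduce(function (sum, d) { return sum + d.bytes; }, 0);
  }
  while ((size && docs.length > size) ||
         (byteSize && totalBytes() > byteSize)) {
    docs.shift(); // evict the least recently created/updated document
  }
  return docs;
}
```

With `size: 10` and twelve inserted documents, the two oldest documents are evicted; with only `byteSize` set, eviction continues until the total active data size fits.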
!SECTION Accessing Fulltext Indexes from the Shell

<!-- js/server/modules/@arangodb/arango-collection.js -->

@brief ensures that a fulltext index exists
`collection.ensureIndex({ type: "fulltext", fields: [ "field" ], minLength: minLength })`

Creates a fulltext index on all documents on attribute *field*.

Fulltext indexes are implicitly sparse: all documents which do not have
the specified *field* attribute or that have a non-qualifying value in their
*field* attribute will be ignored for indexing.

Only a single attribute can be indexed. Specifying multiple attributes is
unsupported.

The minimum length of words that are indexed can be specified via the
*minLength* parameter. Words shorter than *minLength* characters will
not be indexed. *minLength* has a default value of 2, but this value might
be changed in future versions of ArangoDB. It is thus recommended to explicitly
specify this value.

In case that the index was successfully created, an object with the index
details is returned.

@startDocuBlockInline ensureFulltextIndex
@EXAMPLE_ARANGOSH_OUTPUT{ensureFulltextIndex}
~db._create("example");
db.example.ensureIndex({ type: "fulltext", fields: [ "text" ], minLength: 3 });
db.example.save({ text : "the quick brown", b : { c : 1 } });
db.example.save({ text : "quick brown fox", b : { c : 2 } });
db.example.save({ text : "brown fox jumps", b : { c : 3 } });
db.example.save({ text : "fox jumps over", b : { c : 4 } });
db.example.save({ text : "jumps over the", b : { c : 5 } });
db.example.save({ text : "over the lazy", b : { c : 6 } });
db.example.save({ text : "the lazy dog", b : { c : 7 } });
db._query("FOR document IN FULLTEXT(example, 'text', 'the') RETURN document");
~db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock ensureFulltextIndex


@brief looks up a fulltext index
`collection.lookupFulltextIndex(attribute, minLength)`

Checks whether a fulltext index on the given attribute *attribute* exists.
!SECTION Accessing Geo Indexes from the Shell

<!-- js/server/modules/@arangodb/arango-collection.js-->

@brief ensures that a geo index exists
`collection.ensureIndex({ type: "geo", fields: [ "location" ] })`

Creates a geo-spatial index on all documents using *location* as path to
the coordinates. The value of the attribute has to be an array with at least two
numeric values. The array must contain the latitude (first value) and the
longitude (second value).

All documents which do not have the attribute path or have a non-conforming
value in it are excluded from the index.

A geo index is implicitly sparse, and there is no way to control its sparsity.

In case that the index was successfully created, an object with the index
details, including the index-identifier, is returned.

To create a geo index on an array attribute that contains longitude first, set the
*geoJson* attribute to `true`. This corresponds to the format described in
[positions](http://geojson.org/geojson-spec.html):

`collection.ensureIndex({ type: "geo", fields: [ "location" ], geoJson: true })`

To create a geo-spatial index on all documents using *latitude* and
*longitude* as separate attribute paths, two paths need to be specified
in the *fields* array:

`collection.ensureIndex({ type: "geo", fields: [ "latitude", "longitude" ] })`

@EXAMPLES

Create a geo index for an array attribute:

@startDocuBlockInline geoIndexCreateForArrayAttribute
@EXAMPLE_ARANGOSH_OUTPUT{geoIndexCreateForArrayAttribute}
~db._create("geo")
db.geo.ensureIndex({ type: "geo", fields: [ "loc" ] });
| for (i = -90; i <= 90; i += 10) {
| for (j = -180; j <= 180; j += 10) {
| db.geo.save({ name : "Name/" + i + "/" + j, loc: [ i, j ] });
| }
}
db.geo.count();
db.geo.near(0, 0).limit(3).toArray();
db.geo.near(0, 0).count();
~db._drop("geo")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock geoIndexCreateForArrayAttribute

Create a geo index on two separate attribute paths:

@startDocuBlockInline geoIndexCreateForArrayAttribute2
@EXAMPLE_ARANGOSH_OUTPUT{geoIndexCreateForArrayAttribute2}
~db._create("geo2")
db.geo2.ensureIndex({ type: "geo", fields: [ "location.latitude", "location.longitude" ] });
| for (i = -90; i <= 90; i += 10) {
| for (j = -180; j <= 180; j += 10) {
| db.geo2.save({ name : "Name/" + i + "/" + j, location: { latitude : i, longitude : j } });
| }
}
db.geo2.near(0, 0).limit(3).toArray();
~db._drop("geo2")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock geoIndexCreateForArrayAttribute2


<!-- js/common/modules/@arangodb/arango-collection-common.js-->
@startDocuBlock collectionGeo
@startDocuBlock collectionWithin


@brief ensures that a geo constraint exists
`collection.ensureIndex({ type: "geo", fields: [ "location" ] })`

Since ArangoDB 2.5, this method is an alias for *ensureGeoIndex* since
geo indexes are always sparse, meaning that documents that do not contain
the index attributes or have non-numeric values in the index attributes
will not be indexed.
!SECTION Accessing Hash Indexes from the Shell

<!-- js/server/modules/@arangodb/arango-collection.js-->

@brief ensures that a unique constraint exists
`collection.ensureIndex({ type: "hash", fields: [ "field1", ..., "fieldn" ], unique: true })`

Creates a unique hash index on all documents using *field1*, ... *fieldn*
as attribute paths. At least one attribute path has to be given.
The index will be non-sparse by default.

All documents in the collection must differ in terms of the indexed
attributes. Creating a new document or updating an existing document will
fail if the attribute uniqueness is violated.

To create a sparse unique index, set the *sparse* attribute to `true`:

`collection.ensureIndex({ type: "hash", fields: [ "field1", ..., "fieldn" ], unique: true, sparse: true })`

Non-existing attributes will default to `null`.
In a sparse index all documents will be excluded from the index for which all
specified index attributes are `null`. Such documents will not be taken into account
for uniqueness checks.

In a non-sparse index, **all** documents regardless of `null` attributes will be
indexed and will be taken into account for uniqueness checks.

In case that the index was successfully created, an object with the index
details, including the index-identifier, is returned.

@startDocuBlockInline ensureUniqueConstraint
@EXAMPLE_ARANGOSH_OUTPUT{ensureUniqueConstraint}
~db._create("test");
db.test.ensureIndex({ type: "hash", fields: [ "a", "b.c" ], unique: true });
db.test.save({ a : 1, b : { c : 1 } });
db.test.save({ a : 1, b : { c : 1 } }); // xpError(ERROR_ARANGO_UNIQUE_CONSTRAINT_VIOLATED)
db.test.save({ a : 1, b : { c : null } });
db.test.save({ a : 1 }); // xpError(ERROR_ARANGO_UNIQUE_CONSTRAINT_VIOLATED)
~db._drop("test");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock ensureUniqueConstraint


<!-- js/server/modules/@arangodb/arango-collection.js-->

@brief ensures that a non-unique hash index exists
`collection.ensureIndex({ type: "hash", fields: [ "field1", ..., "fieldn" ] })`

Creates a non-unique hash index on all documents using *field1*, ... *fieldn*
as attribute paths. At least one attribute path has to be given.
The index will be non-sparse by default.

To create a sparse index, set the *sparse* attribute to `true`:

`collection.ensureIndex({ type: "hash", fields: [ "field1", ..., "fieldn" ], sparse: true })`

In case that the index was successfully created, an object with the index
details, including the index-identifier, is returned.

@startDocuBlockInline ensureHashIndex
@EXAMPLE_ARANGOSH_OUTPUT{ensureHashIndex}
~db._create("test");
db.test.ensureIndex({ type: "hash", fields: [ "a" ] });
db.test.save({ a : 1 });
db.test.save({ a : 1 });
db.test.save({ a : null });
~db._drop("test");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock ensureHashIndex
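The sparse-vs-non-sparse uniqueness rule above can be modeled in plain JavaScript. This is a sketch of the described semantics only, not ArangoDB source; the helper names `indexKey` and `violatesUniqueness` are hypothetical.

```js
// Sketch of the rule described above (not ArangoDB code): in a sparse
// unique hash index, a document whose indexed attributes are ALL null
// or missing is not indexed and therefore never causes a conflict.
function indexKey(doc, fields) {
  // missing attributes are indexed as null
  return fields.map(function (f) {
    var v = f.split(".").reduce(function (d, p) {
      return (d === undefined || d === null) ? undefined : d[p];
    }, doc);
    return v === undefined ? null : v;
  });
}

function violatesUniqueness(existing, doc, fields, sparse) {
  var key = indexKey(doc, fields);
  var allNull = function (k) {
    return k.every(function (v) { return v === null; });
  };
  if (sparse && allNull(key)) {
    return false; // not indexed, so never conflicts
  }
  return existing.some(function (other) {
    var otherKey = indexKey(other, fields);
    if (sparse && allNull(otherKey)) {
      return false; // the other document is not indexed either
    }
    return JSON.stringify(otherKey) === JSON.stringify(key);
  });
}
```

This mirrors why, in the `ensureUniqueConstraint` example above, saving `{ a : 1 }` twice conflicts in a non-sparse unique index even though `b.c` is missing both times.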
!SECTION Accessing Skiplist Indexes from the Shell

<!-- js/server/modules/@arangodb/arango-collection.js-->

@brief ensures that a unique skiplist index exists
`collection.ensureIndex({ type: "skiplist", fields: [ "field1", ..., "fieldn" ], unique: true })`

Creates a unique skiplist index on all documents using *field1*, ... *fieldn*
as attribute paths. At least one attribute path has to be given. The index will
be non-sparse by default.

All documents in the collection must differ in terms of the indexed
attributes. Creating a new document or updating an existing document will
fail if the attribute uniqueness is violated.

To create a sparse unique index, set the *sparse* attribute to `true`:

`collection.ensureIndex({ type: "skiplist", fields: [ "field1", ..., "fieldn" ], unique: true, sparse: true })`

In a sparse index all documents will be excluded from the index that do not
contain at least one of the specified index attributes or that have a value
of `null` in any of the specified index attributes. Such documents will
not be indexed, and not be taken into account for uniqueness checks.

In a non-sparse index, these documents will be indexed (for non-present
indexed attributes, a value of `null` will be used) and will be taken into
account for uniqueness checks.

In case that the index was successfully created, an object with the index
details, including the index-identifier, is returned.

@startDocuBlockInline ensureUniqueSkiplist
@EXAMPLE_ARANGOSH_OUTPUT{ensureUniqueSkiplist}
~db._create("ids");
db.ids.ensureIndex({ type: "skiplist", fields: [ "myId" ], unique: true });
db.ids.save({ "myId": 123 });
db.ids.save({ "myId": 456 });
db.ids.save({ "myId": 789 });
db.ids.save({ "myId": 123 }); // xpError(ERROR_ARANGO_UNIQUE_CONSTRAINT_VIOLATED)
~db._drop("ids");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock ensureUniqueSkiplist

@startDocuBlockInline ensureUniqueSkiplistMultiColmun
@EXAMPLE_ARANGOSH_OUTPUT{ensureUniqueSkiplistMultiColmun}
~db._create("ids");
db.ids.ensureIndex({ type: "skiplist", fields: [ "name.first", "name.last" ], unique: true });
db.ids.save({ "name" : { "first" : "hans", "last": "hansen" }});
db.ids.save({ "name" : { "first" : "jens", "last": "jensen" }});
db.ids.save({ "name" : { "first" : "hans", "last": "jensen" }});
| db.ids.save({ "name" : { "first" : "hans", "last": "hansen" }});
~ // xpError(ERROR_ARANGO_UNIQUE_CONSTRAINT_VIOLATED)
~db._drop("ids");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock ensureUniqueSkiplistMultiColmun


<!-- js/server/modules/@arangodb/arango-collection.js-->

@brief ensures that a non-unique skiplist index exists
`collection.ensureIndex({ type: "skiplist", fields: [ "field1", ..., "fieldn" ] })`

Creates a non-unique skiplist index on all documents using *field1*, ...
*fieldn* as attribute paths. At least one attribute path has to be given.
The index will be non-sparse by default.

To create a sparse index, set the *sparse* attribute to `true`.

In case that the index was successfully created, an object with the index
details, including the index-identifier, is returned.

@startDocuBlockInline ensureSkiplist
@EXAMPLE_ARANGOSH_OUTPUT{ensureSkiplist}
~db._create("names");
db.names.ensureIndex({ type: "skiplist", fields: [ "first" ] });
db.names.save({ "first" : "Tim" });
db.names.save({ "first" : "Tom" });
db.names.save({ "first" : "John" });
db.names.save({ "first" : "Tim" });
db.names.save({ "first" : "Tom" });
~db._drop("names");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock ensureSkiplist


!SUBSECTION Query by example using a skiplist index

@brief constructs a query-by-example using a skiplist index
`collection.byExampleSkiplist(index, example)`

Selects all documents from the specified skiplist index that match the
specified example and returns a cursor.

You can use *toArray*, *next*, or *hasNext* to access the
result. The result can be limited using the *skip* and *limit*
operators.

An attribute name of the form *a.b* is interpreted as an attribute path,
not as an attribute. If you use

```json
{ a : { c : 1 } }
```

as example, then you will find all documents such that the attribute
*a* contains a document of the form *{ c : 1 }*. For example the document

```json
{ a : { c : 1 }, b : 1 }
```

will match, but the document

```json
{ a : { c : 1, b : 1 } }
```

will not.

However, if you use

```json
{ a.c : 1 }
```

then you will find all documents which contain a sub-document in *a*
that has an attribute *c* of value *1*. Both the following documents

```json
{ a : { c : 1 }, b : 1 }
```

and

```json
{ a : { c : 1, b : 1 } }
```

will match.

`collection.byExampleSkiplist(index, path1, value1, ...)`
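The two matching modes above can be sketched as a plain JavaScript predicate. This is an illustrative model of the described semantics, not ArangoDB code; the function name `matchesExample` is hypothetical, and exact matching is approximated here via `JSON.stringify` (which is key-order sensitive).

```js
// Sketch of the matching semantics described above (not ArangoDB code):
// a key containing a dot ("a.c") is an attribute path that descends into
// sub-documents; a plain key ("a") must match the nested document exactly.
function matchesExample(doc, example) {
  return Object.keys(example).every(function (key) {
    if (key.indexOf(".") !== -1) {
      // attribute path: walk down the sub-documents
      var value = key.split(".").reduce(function (d, part) {
        return (d === undefined || d === null) ? undefined : d[part];
      }, doc);
      return value === example[key];
    }
    // plain attribute: the whole nested value must match exactly
    return JSON.stringify(doc[key]) === JSON.stringify(example[key]);
  });
}
```

For the documents above, the example `{ a : { c : 1 } }` rejects `{ a : { c : 1, b : 1 } }`, while the path example `{ "a.c" : 1 }` accepts it.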
!SUBSECTION Listing all indexes of a collection

<!-- arangod/V8Server/v8-vocindex.cpp -->

@brief returns information about the indexes
`getIndexes()`

Returns an array of all indexes defined for the collection.

Note that `_key` implicitly has an index assigned to it.

@startDocuBlockInline collectionGetIndexes
@EXAMPLE_ARANGOSH_OUTPUT{collectionGetIndexes}
~db._create("test");
~db.test.ensureUniqueSkiplist("skiplistAttribute");
~db.test.ensureUniqueSkiplist("skiplistUniqueAttribute");
|~db.test.ensureHashIndex("hashListAttribute",
  "hashListSecondAttribute.subAttribute");
db.test.getIndexes();
~db._drop("test");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionGetIndexes


!SUBSECTION Creating an index

Indexes should be created using the general method *ensureIndex*. This
method obsoletes the specialized index-specific methods *ensureHashIndex*,
*ensureSkiplist*, *ensureUniqueConstraint* etc.

<!-- arangod/V8Server/v8-vocindex.cpp -->

@brief ensures that an index exists
`collection.ensureIndex(index-description)`

Ensures that an index according to the *index-description* exists. A
new index will be created if none exists with the given description.

The *index-description* must contain at least a *type* attribute.
Other attributes may be necessary, depending on the index type.

**type** can be one of the following values:
- *hash*: hash index
- *skiplist*: skiplist index
- *fulltext*: fulltext index
- *geo1*: geo index, with one attribute
- *geo2*: geo index, with two attributes
- *cap*: cap constraint

**sparse** can be *true* or *false*.

For *hash* and *skiplist* the sparsity can be controlled; *fulltext* and *geo*
are [sparse](WhichIndex.md) by definition.

**unique** can be *true* or *false* and is supported by *hash* and *skiplist*.

Calling this method returns an index object. Whether or not the index
object existed before the call is indicated in the return attribute
*isNewlyCreated*.

@EXAMPLES

@startDocuBlockInline collectionEnsureIndex
@EXAMPLE_ARANGOSH_OUTPUT{collectionEnsureIndex}
~db._create("test");
db.test.ensureIndex({ type: "hash", fields: [ "a" ], sparse: true });
db.test.ensureIndex({ type: "hash", fields: [ "a", "b" ], unique: true });
~db._drop("test");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionEnsureIndex


!SUBSECTION Dropping an index

<!-- arangod/V8Server/v8-vocindex.cpp -->

@brief drops an index
`collection.dropIndex(index)`

Drops the index. If the index does not exist, then *false* is
returned. If the index existed and was dropped, then *true* is
returned. Note that you cannot drop some special indexes (e.g. the primary
index of a collection or the edge index of an edge collection).

`collection.dropIndex(index-handle)`

Same as above. Instead of an index an index handle can be given.

@startDocuBlockInline col_dropIndex
@EXAMPLE_ARANGOSH_OUTPUT{col_dropIndex}
~db._create("example");
db.example.ensureSkiplist("a", "b");
var indexInfo = db.example.getIndexes();
indexInfo;
db.example.dropIndex(indexInfo[0])
db.example.dropIndex(indexInfo[1].id)
indexInfo = db.example.getIndexes();
~db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock col_dropIndex


!SECTION Database Methods

!SUBSECTION Fetching an index by handle

<!-- js/server/modules/@arangodb/arango-database.js -->

@brief finds an index
`db._index(index-handle)`

Returns the index with *index-handle* or null if no such index exists.

@startDocuBlockInline IndexHandle
@EXAMPLE_ARANGOSH_OUTPUT{IndexHandle}
~db._create("example");
db.example.ensureIndex({ type: "skiplist", fields: [ "a", "b" ] });
var indexInfo = db.example.getIndexes().map(function(x) { return x.id; });
indexInfo;
db._index(indexInfo[0])
db._index(indexInfo[1])
~db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock IndexHandle


!SUBSECTION Dropping an index

<!-- js/server/modules/@arangodb/arango-database.js -->

@brief drops an index
`db._dropIndex(index)`

Drops the *index*. If the index does not exist, then *false* is
returned. If the index existed and was dropped, then *true* is
returned.

`db._dropIndex(index-handle)`

Drops the index with *index-handle*.

@startDocuBlockInline dropIndex
@EXAMPLE_ARANGOSH_OUTPUT{dropIndex}
~db._create("example");
db.example.ensureIndex({ type: "skiplist", fields: [ "a", "b" ] });
var indexInfo = db.example.getIndexes();
indexInfo;
db._dropIndex(indexInfo[0])
db._dropIndex(indexInfo[1].id)
indexInfo = db.example.getIndexes();
~db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock dropIndex


!SUBSECTION Revalidating whether an index is used

<!-- js/server/modules/@arangodb/arango-database.js -->

@brief verifies whether an index is used

So you have created an index, and since its maintenance isn't free,
you definitely want to know whether your query can utilize it.

You can use explain to verify whether **skiplists** or **hash indexes** are
used (if you omit `colors: false` you will get nice colors in ArangoShell):

@startDocuBlockInline IndexVerify
@EXAMPLE_ARANGOSH_OUTPUT{IndexVerify}
~db._create("example");
var explain = require("@arangodb/aql/explainer").explain;
db.example.ensureIndex({ type: "skiplist", fields: [ "a", "b" ] });
explain("FOR doc IN example FILTER doc.a < 23 RETURN doc", {colors:false});
~db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock IndexVerify
The action module provides the infrastructure for defining HTTP actions.
!SECTION Basics

!SUBSECTION Error message

<!-- js/server/modules/@arangodb/actions.js -->

`actions.getErrorMessage(code)`

Returns the error message for an error code.


!SECTION Standard HTTP Result Generators

`actions.defineHttp(options)`

Defines a new action. The *options* are as follows:

`options.url`

The URL which can be used to access the action. This path might contain
slashes. Note that this action will also be called if a url is given such that
*options.url* is a prefix of the given url and no longer definition
matches.

`options.prefix`

If *false*, then only use the action for exact matches. The default is
*true*.

`options.callback(request, response)`

The request argument contains a description of the request. A request
parameter *foo* is accessible as *request.parameters.foo*. A request
header *bar* is accessible as *request.headers.bar*. Assume that
the action is defined for the url */foo/bar* and the request url is
*/foo/bar/hugo/egon*. Then the suffix parts *[ "hugo", "egon" ]*
are available in *request.suffix*.

The callback must fill the *response*:

* *response.responseCode*: the response code
* *response.contentType*: the content type of the response
* *response.body*: the body of the response

You can use the functions *ResultOk* and *ResultError* to easily
generate a response.
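The prefix matching described above can be sketched as follows. This is a hypothetical model, not ArangoDB's routing code: the action whose *url* is the longest matching prefix of the request url wins, unless prefix matching was disabled for it.

```js
// Sketch (hypothetical, not ArangoDB internals) of the prefix matching
// described above: among all actions whose url is an exact match or a
// prefix of the request url, the longest url wins; actions registered
// with prefix: false only participate in exact matches.
function findAction(actions, requestUrl) {
  var best = null;
  actions.forEach(function (action) {
    var exact = requestUrl === action.url;
    var prefix = action.prefix !== false &&
                 requestUrl.indexOf(action.url + "/") === 0;
    if ((exact || prefix) &&
        (best === null || action.url.length > best.url.length)) {
      best = action;
    }
  });
  return best;
}
```

For a request url */foo/bar/hugo/egon*, an action at */foo/bar* beats one at */foo*, and the remaining path parts become the suffix.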
!SUBSECTION Result ok

<!-- js/server/modules/@arangodb/actions.js -->

`actions.resultOk(req, res, code, result, headers)`

The function defines a response. *code* is the status code to
return. *result* is the result object, which will be returned as JSON
object in the body. *headers* is an array of headers to be returned.
The function adds the attribute *error* with value *false*
and *code* with value *code* to the *result*.


!SUBSECTION Result bad

<!-- js/server/modules/@arangodb/actions.js -->

`actions.resultBad(req, res, error-code, msg, headers)`

The function generates an error response.


!SUBSECTION Result not found

<!-- js/server/modules/@arangodb/actions.js -->

`actions.resultNotFound(req, res, code, msg, headers)`

The function generates an error response.


!SUBSECTION Result unsupported

<!-- js/server/modules/@arangodb/actions.js -->

`actions.resultUnsupported(req, res, headers)`

The function generates an error response.
!SUBSECTION Result error

<!-- js/server/modules/@arangodb/actions.js -->

`actions.resultError(req, res, code, errorNum, errorMessage, headers, keyvals)`

The function generates an error response. The response body is an object
with the attributes *error* set to *true*, *code* set to *code*,
*errorNum* set to *errorNum*, and *errorMessage* set to the error
message *errorMessage*. *keyvals* are mixed into the result.
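The error body described above can be sketched in a few lines. The shape is derived from the prose alone, not from ArangoDB source, and the helper name `buildErrorBody` is hypothetical.

```js
// Sketch of the response body resultError is described as producing.
// Assumed shape, derived from the prose above, not from ArangoDB source.
function buildErrorBody(code, errorNum, errorMessage, keyvals) {
  var body = {
    error: true,
    code: code,
    errorNum: errorNum,
    errorMessage: errorMessage
  };
  // keyvals are mixed into the result
  Object.keys(keyvals || {}).forEach(function (k) {
    body[k] = keyvals[k];
  });
  return body;
}
```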
!SUBSECTION Result not Implemented
|
!SUBSECTION Result not Implemented
|
||||||
<!-- js/server/modules/@arangodb/actions.js -->
|
<!-- js/server/modules/@arangodb/actions.js -->
|
||||||
@startDocuBlock actionsResultNotImplemented
|
|
||||||
|
|
||||||
|
|
||||||
|
`actions.resultNotImplemented(req, res, msg, headers)`
|
||||||
|
|
||||||
|
The function generates an error response.
|
||||||
|
|
||||||
|
|
||||||
!SUBSECTION Result permanent redirect

<!-- js/server/modules/@arangodb/actions.js -->
@startDocuBlock actionsResultPermanentRedirect

`actions.resultPermanentRedirect(req, res, options, headers)`

The function generates a permanent redirect response to the location
specified in *options.destination*.

!SUBSECTION Result temporary redirect

<!-- js/server/modules/@arangodb/actions.js -->
@startDocuBlock actionsResultTemporaryRedirect

`actions.resultTemporaryRedirect(req, res, options, headers)`

The function generates a temporary redirect response to the location
specified in *options.destination*.

!SECTION ArangoDB Result Generators

!SUBSECTION Collection not found

<!-- js/server/modules/@arangodb/actions.js -->
@startDocuBlock actionsCollectionNotFound

`actions.collectionNotFound(req, res, collection, headers)`

The function generates an error response (HTTP 404) indicating that the
collection *collection* was not found.

!SUBSECTION Index not found

<!-- js/server/modules/@arangodb/actions.js -->
@startDocuBlock actionsIndexNotFound

`actions.indexNotFound(req, res, collection, index, headers)`

The function generates an error response (HTTP 404) indicating that the
index *index* of collection *collection* was not found.

!SUBSECTION Result exception

<!-- js/server/modules/@arangodb/actions.js -->
@startDocuBlock actionsResultException

`actions.resultException(req, res, err, headers, verbose)`

The function generates an error response from the exception *err*. If
*verbose* is set to *true* or not specified (the default), then the error
stack trace will be included in the error message if available. If *verbose*
is a string, it will be prepended to the error message and the stack trace
will also be included.

The implementation tries to follow the CommonJS specification where possible.

!SECTION Single File Directory Manipulation

!SUBSUBSECTION exists
@startDocuBlock JS_Exists

checks if a file of any type or directory exists
`fs.exists(path)`

Returns true if a file (of any type) or a directory exists at a given
path. If the file is a broken symbolic link, returns false.

!SUBSUBSECTION isFile
@startDocuBlock JS_IsFile

tests if path is a file
`fs.isFile(path)`

Returns true if the *path* points to a file.

!SUBSUBSECTION isDirectory
@startDocuBlock JS_IsDirectory

tests if path is a directory
`fs.isDirectory(path)`

Returns true if the *path* points to a directory.

!SUBSUBSECTION size
@startDocuBlock JS_Size

gets the size of a file
`fs.size(path)`

Returns the size of the file specified by *path*.

!SUBSUBSECTION mtime
@startDocuBlock JS_MTime

gets the last modification time of a file
`fs.mtime(filename)`

Returns the last modification date of the specified file. The date is
returned as a Unix timestamp (number of seconds elapsed since January 1 1970).

!SUBSUBSECTION pathSeparator

`fs.pathSeparator`

If you want to combine two paths you can use fs.pathSeparator instead of */* or *\\*.

The function returns the combination of the path and filename, e.g.
fs.join("Hello/World", "foo.bar") would return "Hello/World/foo.bar".

!SUBSUBSECTION getTempFile
@startDocuBlock JS_GetTempFile

returns the name for a (new) temporary file
`fs.getTempFile(directory, createFile)`

Returns the name for a new temporary file in directory *directory*.
If *createFile* is *true*, an empty file will be created so no other
process can create a file of the same name.

**Note**: The directory *directory* must exist.

!SUBSUBSECTION getTempPath
@startDocuBlock JS_GetTempPath

returns the temporary directory
`fs.getTempPath()`

Returns the absolute path of the temporary directory.

!SUBSUBSECTION makeAbsolute
@startDocuBlock JS_MakeAbsolute

makes a given path absolute
`fs.makeAbsolute(path)`

Returns the given string if it is an absolute path, otherwise an
absolute path to the same location is returned.

!SUBSUBSECTION chmod
@startDocuBlock JS_Chmod

sets the file permissions of the specified file (non-Windows platforms only)
`fs.chmod(path, mode)`

Sets the permissions of the file specified by *path*. Returns true on
success.

!SUBSUBSECTION list
@startDocuBlock JS_List

returns the directory listing
`fs.list(path)`

The function returns the names of all the files in a directory, in
lexically sorted order. Throws an exception if the directory cannot be
traversed (or *path* is not a directory).

**Note**: this means that list("x") of a directory containing "a" and "b" would
return ["a", "b"], not ["x/a", "x/b"].

!SUBSUBSECTION listTree
@startDocuBlock JS_ListTree

returns the directory tree
`fs.listTree(path)`

The function returns an array that starts with the given path, and all of
the paths relative to the given path, discovered by a depth-first traversal
of every directory in any visited directory, reporting but not traversing
symbolic links to directories. The first path is always *""*, the path
relative to itself.

!SUBSUBSECTION makeDirectory
@startDocuBlock JS_MakeDirectory

creates a directory
`fs.makeDirectory(path)`

Creates the directory specified by *path*.

!SUBSUBSECTION makeDirectoryRecursive
@startDocuBlock JS_MakeDirectoryRecursive

creates a directory hierarchy
`fs.makeDirectoryRecursive(path)`

Creates the directory hierarchy specified by *path*.

!SUBSUBSECTION remove
@startDocuBlock JS_Remove

removes a file
`fs.remove(filename)`

Removes the file *filename* at the given path. Throws an exception if the
path corresponds to anything that is not a file or a symbolic link. If
*filename* refers to a symbolic link, removes the symbolic link.

!SUBSUBSECTION removeDirectory
@startDocuBlock JS_RemoveDirectory

removes an empty directory
`fs.removeDirectory(path)`

Removes a directory if it is empty. Throws an exception if the path is not
an empty directory.

!SUBSUBSECTION removeDirectoryRecursive
@startDocuBlock JS_RemoveDirectoryRecursive

removes a directory
`fs.removeDirectoryRecursive(path)`

Removes a directory together with all of its contents. Throws an exception
if the path is not a directory.

!SECTION File IO

!SUBSUBSECTION read
@startDocuBlock JS_Read

reads in a file
`fs.read(filename)`

Reads in a file and returns the content as string. Please note that the
file content must be encoded in UTF-8.

!SUBSUBSECTION read64
@startDocuBlock JS_Read64

reads in a file as base64
`fs.read64(filename)`

Reads in a file and returns the content as string. The file content is
Base64 encoded.

!SUBSUBSECTION readBuffer
@startDocuBlock JS_ReadBuffer

reads in a file
`fs.readBuffer(filename)`

Reads in a file and returns its content in a Buffer object.

!SUBSUBSECTION readFileSync

`fs.readFileSync(filename, encoding)`

!SUBSUBSECTION save
@startDocuBlock JS_Save

writes to a file
`fs.write(filename, content)`

Writes the content into a file. Content can be a string or a Buffer object.

!SUBSUBSECTION writeFileSync

`fs.writeFileSync(filename, content)`

This is an alias for `fs.write(filename, content)`.

!SECTION Recursive Manipulation

!SUBSUBSECTION copyRecursive
@startDocuBlock JS_CopyDirectoryRecursive

copies a directory structure
`fs.copyRecursive(source, destination)`

Copies *source* to *destination*.
Exceptions will be thrown on:
- failure to copy the file
- specifying a directory for *destination* when *source* is a file
- specifying a directory as both *source* and *destination*

!SUBSUBSECTION copyFile
@startDocuBlock JS_CopyFile

copies a file into a target file
`fs.copyFile(source, destination)`

Copies *source* to *destination*. If *destination* is a directory, a file
of the same name will be created in that directory; otherwise the copy will
get the specified filename.

!SUBSUBSECTION move
@startDocuBlock JS_MoveFile

renames a file
`fs.move(source, destination)`

Moves *source* to *destination*. Failure to move the file, or
specifying a directory for *destination* when *source* is a file will throw
an exception. Likewise, specifying a directory as both *source* and
*destination* will fail.

!SECTION ZIP

!SUBSUBSECTION unzipFile
@startDocuBlock JS_Unzip

unzips a file
`fs.unzipFile(filename, outpath, skipPaths, overwrite, password)`

Unzips the zip file specified by *filename* into the path specified by
*outpath*. Overwrites any existing target files if *overwrite* is set
to *true*.

Returns *true* if the file was unzipped successfully.

!SUBSUBSECTION zipFile
@startDocuBlock JS_Zip

zips a file
`fs.zipFile(filename, chdir, files, password)`

Stores the files specified by *files* in the zip file *filename*. If
the file *filename* already exists, an error is thrown. The list of input
files *files* must be given as a list of absolute filenames. If *chdir* is
not empty, the *chdir* prefix will be stripped from the filenames in the
zip file, so that when it is unzipped the filenames will be relative.
Specifying a password is optional.

Returns *true* if the file was zipped successfully.

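The effect of the *chdir* prefix can be illustrated with a stand-alone sketch. The helper `stripChdirPrefix` is invented for the illustration and is not part of the ArangoDB API; it only mirrors the prefix-stripping behavior described above.

```javascript
// Hypothetical helper showing how a non-empty chdir prefix turns absolute
// input filenames into relative names inside the zip archive.
function stripChdirPrefix(chdir, files) {
  return files.map(function (f) {
    if (chdir !== "" && f.indexOf(chdir + "/") === 0) {
      return f.slice(chdir.length + 1);  // name stored in the zip file
    }
    return f;  // no prefix match: the name is kept as given
  });
}

var names = stripChdirPrefix("/tmp/build", [
  "/tmp/build/app.js",
  "/tmp/build/lib/util.js"
]);
// → ["app.js", "lib/util.js"]
```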
Here are the details of the functionality:

!SUBSECTION Require

<!-- js/server/modules/@arangodb/cluster/planner.js -->
@startDocuBlock JSF_Cluster_Planner_Constructor

`new require("@arangodb/cluster").Planner(userConfig)`

This constructor builds a cluster planner object. The one and only
argument is an object that can have the properties described below.
The planner can plan clusters on a single machine (basically for
testing purposes) and on multiple machines. The resulting "cluster plans"
can be used by the kickstarter to start up the processes comprising
the cluster, including the agency. To this end, there has to be one
dispatcher on every machine participating in the cluster. A dispatcher
is a simple instance of ArangoDB, compiled with the cluster extensions,
but not running in cluster mode. This is why the configuration option
*dispatchers* below is of central importance.

- *dispatchers*: an object with a property for each dispatcher,
  the property name is the ID of the dispatcher and the value
  should be an object with at least the property *endpoint*
  containing the endpoint of the corresponding dispatcher.
  Further optional properties are:

  - *avoidPorts*, which is an object
    in which all port numbers that should not be used are bound to
    *true*, default is empty, that is, all ports can be used
  - *arangodExtraArgs*, which is a list of additional
    command line arguments that will be given to DBservers and
    coordinators started by this dispatcher, the default is
    an empty list. These arguments will be appended to those
    produced automatically, such that one can overwrite
    things with this.
  - *allowCoordinators*, which is a boolean value indicating
    whether or not coordinators should be started on this
    dispatcher, the default is *true*
  - *allowDBservers*, which is a boolean value indicating
    whether or not DBservers should be started on this dispatcher,
    the default is *true*
  - *allowAgents*, which is a boolean value indicating whether or
    not agents should be started on this dispatcher, the default is
    *true*
  - *username*, which is a string that contains the user name
    for authentication with this dispatcher
  - *passwd*, which is a string that contains the password
    for authentication with this dispatcher; if not both
    *username* and *passwd* are set, then no authentication
    is used between dispatchers. Note that this will not work
    if the dispatchers are configured with authentication.

  If *.dispatchers* is empty (no property), then an entry for the
  local arangod itself is automatically added. Note that if the
  only configured dispatcher has endpoint *tcp://localhost:*,
  all processes are started in a special "local" mode and are
  configured to bind their endpoints only to the localhost device.
  In all other cases both agents and *arangod* instances bind
  their endpoints to all available network devices.
- *numberOfAgents*: the number of agents in the agency,
  usually there is no reason to deviate from the default of 3. The
  planner distributes them amongst the dispatchers, if possible.
- *agencyPrefix*: a string that is used as prefix for all keys of
  configuration data stored in the agency.
- *numberOfDBservers*: the number of DBservers in the
  cluster. The planner distributes them evenly amongst the dispatchers.
- *startSecondaries*: a boolean flag indicating whether or not
  secondary servers are started. In this version, this flag is
  silently ignored, since we do not yet have secondary servers.
- *numberOfCoordinators*: the number of coordinators in the cluster,
  the planner distributes them evenly amongst the dispatchers.
- *DBserverIDs*: a list of DBserver IDs (strings). If the planner
  runs out of IDs it creates its own ones using *DBserver*
  concatenated with a unique number.
- *coordinatorIDs*: a list of coordinator IDs (strings). If the planner
  runs out of IDs it creates its own ones using *Coordinator*
  concatenated with a unique number.
- *dataPath*: this is a string and describes the path under which
  the agents, the DBservers and the coordinators store their
  data directories. This can either be an absolute path (in which
  case all machines in the cluster must use the same path), or
  it can be a relative path. In the latter case it is relative
  to the directory that is configured in the dispatcher with the
  *cluster.data-path* option (command line or configuration file).
  The directories created will be called *data-PREFIX-ID* where
  *PREFIX* is replaced with the agency prefix (see above) and *ID*
  is the ID of the DBserver or coordinator.
- *logPath*: this is a string and describes the path under which
  the DBservers and the coordinators store their log file. This can
  either be an absolute path (in which case all machines in the cluster
  must use the same path), or it can be a relative path. In the
  latter case it is relative to the directory that is configured
  in the dispatcher with the *cluster.log-path* option.
- *arangodPath*: this is a string and describes the path to the
  actual executable *arangod* that will be started for the
  DBservers and coordinators. If this is an absolute path, it
  obviously has to be the same on all machines in the cluster,
  as described for *dataPath*. If it is an empty string, the
  dispatcher uses the executable that is configured with the
  *cluster.arangod-path* option, which is by default the same
  executable as the dispatcher uses.
- *agentPath*: this is a string and describes the path to the
  actual executable that will be started for the agents in the
  agency. If this is an absolute path, it obviously has to be
  the same on all machines in the cluster, as described for
  *arangodPath*. If it is an empty string, the dispatcher
  uses its *cluster.agent-path* option.
- *agentExtPorts*: a list of port numbers to use for the external
  ports of the agents. When running out of numbers in this list,
  the planner increments the last one used by one for every port
  needed. Note that the planner checks availability of the ports
  during the planning phase by contacting the dispatchers on the
  different machines, and uses only ports that are free during
  the planning phase. Obviously, if those ports are taken
  before the actual startup, things can go wrong.
- *agentIntPorts*: a list of port numbers to use for the internal
  ports of the agents. The same comments as for *agentExtPorts*
  apply.
- *DBserverPorts*: a list of port numbers to use for the
  DBservers. The same comments as for *agentExtPorts* apply.
- *coordinatorPorts*: a list of port numbers to use for the
  coordinators. The same comments as for *agentExtPorts* apply.
- *useSSLonDBservers*: a boolean flag indicating whether or not
  we use SSL on all DBservers in the cluster
- *useSSLonCoordinators*: a boolean flag indicating whether or not
  we use SSL on all coordinators in the cluster
- *valgrind*: a string containing the path of the valgrind binary
  if we should run the cluster components in it
- *valgrindopts*: command line options for the valgrind process
- *valgrindXmlFileBase*: pattern for logfiles
- *valgrindTestname*: name of the test to add to the logfiles
- *valgrindHosts*: which host classes should run in valgrind?
  Coordinator / DBServer
- *extremeVerbosity*: if set to true, then there will be more test
  run output, especially for cluster tests.

All these values have default values. Here is the current set of
default values:

```js
{
  "agencyPrefix" : "arango",
  "numberOfAgents" : 1,
  "numberOfDBservers" : 2,
  "startSecondaries" : false,
  "numberOfCoordinators" : 1,
  "DBserverIDs" : ["Pavel", "Perry", "Pancho", "Paul", "Pierre",
                   "Pit", "Pia", "Pablo" ],
  "coordinatorIDs" : ["Claus", "Chantalle", "Claire", "Claudia",
                      "Claas", "Clemens", "Chris" ],
  "dataPath" : "",      // means configured in dispatcher
  "logPath" : "",       // means configured in dispatcher
  "arangodPath" : "",   // means configured as dispatcher
  "agentPath" : "",     // means configured in dispatcher
  "agentExtPorts" : [4001],
  "agentIntPorts" : [7001],
  "DBserverPorts" : [8629],
  "coordinatorPorts" : [8530],
  "dispatchers" : {"me": {"endpoint": "tcp://localhost:"}},
                  // this means only we as a local instance
  "useSSLonDBservers" : false,
  "useSSLonCoordinators" : false
};
```

!SUBSECTION Get Plan

<!-- js/server/modules/@arangodb/cluster/planner.js -->
@startDocuBlock JSF_Planner_prototype_getPlan

`Planner.getPlan()`

Returns the cluster plan as a JavaScript object. The result of this
method can be given to the constructor of a Kickstarter.

!SUBSECTION Require

<!-- js/server/modules/@arangodb/cluster/kickstarter.js -->
@startDocuBlock JSF_Cluster_Kickstarter_Constructor

`new require("@arangodb/cluster").Kickstarter(plan)`

This constructor constructs a kickstarter object. Its first
argument is a cluster plan as for example provided by the planner
(see Cluster Planner Constructor and the general
explanations before this reference). The second argument is
optional and is taken to be "me" if omitted; it is the ID of the
dispatcher this object should consider itself to be. If the plan
contains startup commands for the dispatcher with this ID, these
commands are executed immediately. Otherwise they are handed over
to another responsible dispatcher via a REST call.

The resulting object of this constructor allows one to launch,
shut down and relaunch the cluster described in the plan.

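Putting planner and kickstarter together, a typical session might look like the following sketch. It only runs inside an ArangoDB server with the cluster extensions (a dispatcher); the two-DBserver configuration is just an example, not a recommendation.

```js
var cluster = require("@arangodb/cluster");

// plan a small local test cluster
var plan = new cluster.Planner({ numberOfDBservers: 2,
                                 numberOfCoordinators: 1 }).getPlan();

var kick = new cluster.Kickstarter(plan);
kick.launch();       // start agency, DBservers and coordinators
kick.isHealthy();    // check that all processes are still running
kick.shutdown();     // gracefully stop all processes
kick.cleanup();      // erase all data and logs of the stopped cluster
```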
!SUBSECTION Launch

<!-- js/server/modules/@arangodb/cluster/kickstarter.js -->
@startDocuBlock JSF_Kickstarter_prototype_launch

`Kickstarter.launch()`

This starts up a cluster as described in the plan which was given to
the constructor. To this end, other dispatchers are contacted as
necessary. All startup commands for the local dispatcher are
executed immediately.

The result is an object that contains information about the started
processes; this object is also stored in the Kickstarter object
itself. We do not go into details here about the data structure,
but the most important information is the process IDs of the
started processes. The corresponding
[shutdown method](../ModulePlanner/README.md#shutdown) needs this
information to shut down all processes.

Note that all data in the DBservers and all log files and all agency
information in the cluster is deleted by this call. This is because
it is intended to set up a cluster for the first time. See
the [relaunch method](../ModulePlanner/README.md#relaunch)
for restarting a cluster without data loss.

!SUBSECTION Check Cluster Health

@startDocuBlock JSF_Kickstarter_prototype_isHealthy

`Kickstarter.isHealthy()`

This checks that all processes belonging to a running cluster are
healthy. To this end, other dispatchers are contacted as necessary.
At this stage it is only checked that the processes are still up and
running.

!SUBSECTION Shutdown

<!-- js/server/modules/@arangodb/cluster/kickstarter.js -->
@startDocuBlock JSF_Kickstarter_prototype_shutdown

`Kickstarter.shutdown()`

This shuts down a cluster as described in the plan which was given to
the constructor. To this end, other dispatchers are contacted as
necessary. All processes in the cluster are gracefully shut down
in the right order.

!SUBSECTION Relaunch

<!-- js/server/modules/@arangodb/cluster/kickstarter.js -->
@startDocuBlock JSF_Kickstarter_prototype_relaunch

`Kickstarter.relaunch()`

This starts up a cluster as described in the plan which was given to
the constructor. To this end, other dispatchers are contacted as
necessary. All startup commands for the local dispatcher are
executed immediately.

The result is an object that contains information about the started
processes; this object is also stored in the Kickstarter object
itself. We do not go into details here about the data structure,
but the most important information is the process IDs of the
started processes. The corresponding
[shutdown method](../ModulePlanner/README.md#shutdown) needs this
information to shut down all processes.

Note that this method requires that all data in the DBservers and the
agency information in the cluster are already set up properly. See
the [launch method](../ModulePlanner/README.md#launch) for
starting a cluster for the first time.

!SUBSECTION Upgrade

<!-- js/server/modules/@arangodb/cluster/kickstarter.js -->
@startDocuBlock JSF_Kickstarter_prototype_upgrade

`Kickstarter.upgrade(username, passwd)`

This performs an upgrade procedure on a cluster as described in
the plan which was given to the constructor. To this end, other
dispatchers are contacted as necessary. All commands for the local
dispatcher are executed immediately. The basic approach for the
upgrade is as follows: The agency is started first (exactly as
in relaunch), no configuration is sent there (exactly as in the
relaunch action), all servers are first started with the option
"--upgrade" and then normally. In the end, the upgrade-database.js
script is run on one of the coordinators, as in the launch action.

The result is an object that contains information about the started
processes; this object is also stored in the Kickstarter object
itself. We do not go into details here about the data structure,
but the most important information is the process IDs of the
started processes. The corresponding
[shutdown method](../ModulePlanner/README.md#shutdown) needs
this information to shut down all processes.

Note that this method requires that all data in the DBservers and the
agency information in the cluster are already set up properly. See
the [launch method](../ModulePlanner/README.md#launch) for
starting a cluster for the first time.

!SUBSECTION Cleanup

<!-- js/server/modules/@arangodb/cluster/kickstarter.js -->
@startDocuBlock JSF_Kickstarter_prototype_cleanup

`Kickstarter.cleanup()`

This cleans up all the data and logs of a previously shut down cluster.
To this end, other dispatchers are contacted as necessary.
[Use shutdown](../ModulePlanner/README.md#shutdown) first and
use with caution, since potentially a lot of data is being erased with
this call!

This module provides functionality for administering the write-ahead logs.

!SUBSECTION Configuration

<!-- arangod/V8Server/v8-vocbase.h -->
@startDocuBlock walPropertiesGet

@brief retrieves the configuration of the write-ahead log
`internal.wal.properties()`

Retrieves the configuration of the write-ahead log. The result is a JSON
object with the following attributes:
- *allowOversizeEntries*: whether or not operations that are bigger than a
  single logfile can be executed and stored
- *logfileSize*: the size of each write-ahead logfile
- *historicLogfiles*: the maximum number of historic logfiles to keep
- *reserveLogfiles*: the maximum number of reserve logfiles that ArangoDB
  allocates in the background
- *syncInterval*: the interval for automatic synchronization of not-yet
  synchronized write-ahead log data (in milliseconds)
- *throttleWait*: the maximum wait time that operations will wait before
  they get aborted in case of write-throttling (in milliseconds)
- *throttleWhenPending*: the number of unprocessed garbage-collection
  operations that, when reached, will activate write-throttling. A value of
  *0* means that write-throttling will not be triggered.

@EXAMPLES

@startDocuBlockInline WalPropertiesGet
@EXAMPLE_ARANGOSH_OUTPUT{WalPropertiesGet}
require("internal").wal.properties();
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock WalPropertiesGet

<!-- arangod/V8Server/v8-vocbase.h -->
@startDocuBlock walPropertiesSet

@brief configures the write-ahead log
`internal.wal.properties(properties)`

Configures the behavior of the write-ahead log. *properties* must be a
JSON object with the following attributes:
- *allowOversizeEntries*: whether or not operations that are bigger than a
  single logfile can be executed and stored
- *logfileSize*: the size of each write-ahead logfile
- *historicLogfiles*: the maximum number of historic logfiles to keep
- *reserveLogfiles*: the maximum number of reserve logfiles that ArangoDB
  allocates in the background
- *throttleWait*: the maximum wait time that operations will wait before
  they get aborted in case of write-throttling (in milliseconds)
- *throttleWhenPending*: the number of unprocessed garbage-collection
  operations that, when reached, will activate write-throttling. A value of
  *0* means that write-throttling will not be triggered.

Specifying any of the above attributes is optional. Attributes that are not
specified will be ignored and their configuration will not be modified.

@EXAMPLES

@startDocuBlockInline WalPropertiesSet
@EXAMPLE_ARANGOSH_OUTPUT{WalPropertiesSet}
| require("internal").wal.properties({
|   allowOversizeEntries: true,
    logfileSize: 32 * 1024 * 1024 });
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock WalPropertiesSet

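The *throttleWhenPending* semantics described above can be pictured as a small predicate; this is an illustrative sketch of the documented rule, not code from the server:

```javascript
// Sketch: decides whether write-throttling is active, given the number of
// unprocessed garbage-collection operations and the configured threshold.
// A threshold of 0 disables throttling entirely, per the documentation.
function throttlingActive(pendingGcOps, throttleWhenPending) {
  if (throttleWhenPending === 0) {
    return false; // 0 means write-throttling is never triggered
  }
  return pendingGcOps >= throttleWhenPending;
}

throttlingActive(5000, 0);    // false -- throttling disabled
throttlingActive(5000, 1000); // true  -- threshold reached
throttlingActive(500, 1000);  // false -- below threshold
```

When throttling is active, operations wait at most *throttleWait* milliseconds before they are aborted.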
!SUBSECTION Flushing

<!-- arangod/V8Server/v8-vocbase.h -->
@startDocuBlock walFlush

@brief flushes the currently open WAL logfile
`internal.wal.flush(waitForSync, waitForCollector)`

Flushes the write-ahead log. By flushing the currently active write-ahead
logfile, the data in it can be transferred to collection journals and
datafiles. This is useful to ensure that all data for a collection is
present in the collection journals and datafiles, for example, when dumping
the data of a collection.

The *waitForSync* option determines whether or not the operation should
block until the not-yet synchronized data in the write-ahead log was
synchronized to disk.

The *waitForCollector* option can be used to specify that the operation
should block until the data in the flushed log has been collected by the
write-ahead log garbage collector. Note that setting this option to *true*
might block for a long time if there are long-running transactions and
the write-ahead log garbage collector cannot finish garbage collection.

@EXAMPLES

@startDocuBlockInline WalFlush
@EXAMPLE_ARANGOSH_OUTPUT{WalFlush}
require("internal").wal.flush();
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock WalFlush

is specified, the server will pick a reasonable default value.

!SUBSECTION Has Next
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock cursorHasNext

@brief checks if the cursor is exhausted
`cursor.hasNext()`

If the *hasNext* operator returns *true*, then the cursor still has
documents. In this case the next document can be accessed using the
*next* operator, which will advance the cursor.

@EXAMPLES

@startDocuBlockInline cursorHasNext
@EXAMPLE_ARANGOSH_OUTPUT{cursorHasNext}
~ db._create("five");
~ db.five.save({ name : "one" });
~ db.five.save({ name : "two" });
~ db.five.save({ name : "three" });
~ db.five.save({ name : "four" });
~ db.five.save({ name : "five" });
var a = db.five.all();
while (a.hasNext()) print(a.next());
~ db._drop("five")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock cursorHasNext

!SUBSECTION Next
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock cursorNext

@brief returns the next result document
`cursor.next()`

If the *hasNext* operator returns *true*, then the underlying
cursor of the simple query still has documents. In this case the
next document can be accessed using the *next* operator, which
will advance the underlying cursor. If you use *next* on an
exhausted cursor, then *undefined* is returned.

@EXAMPLES

@startDocuBlockInline cursorNext
@EXAMPLE_ARANGOSH_OUTPUT{cursorNext}
~ db._create("five");
~ db.five.save({ name : "one" });
~ db.five.save({ name : "two" });
~ db.five.save({ name : "three" });
~ db.five.save({ name : "four" });
~ db.five.save({ name : "five" });
db.five.all().next();
~ db._drop("five")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock cursorNext

!SUBSECTION Set Batch size
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock cursorSetBatchSize

@brief sets the batch size for any following requests
`cursor.setBatchSize(number)`

Sets the batch size for queries. The batch size determines how many results
are at most transferred from the server to the client in one chunk.

!SUBSECTION Get Batch size
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock cursorGetBatchSize

@brief returns the batch size
`cursor.getBatchSize()`

Returns the batch size for queries. If the returned value is undefined, the
server will determine a sensible batch size for any following requests.

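The trade-off behind the batch size can be made concrete with plain arithmetic: transferring *n* results in chunks of *batchSize* takes ceil(n / batchSize) roundtrips. The helper below is purely illustrative and not part of the ArangoDB API:

```javascript
// Illustrative sketch: how many client/server roundtrips a cursor needs to
// transfer `total` results when each batch carries at most `batchSize` of them.
function roundtrips(total, batchSize) {
  return Math.ceil(total / batchSize);
}

roundtrips(1000, 50);  // 20 roundtrips
roundtrips(1000, 500); // 2 roundtrips
```

A larger batch size means fewer roundtrips but bigger responses; the server default is usually a reasonable middle ground.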
!SUBSECTION Execute Query
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock queryExecute

@brief executes a query
`query.execute(batchSize)`

Executes a simple query. If the optional *batchSize* value is specified,
the server will return at most *batchSize* values in one roundtrip.
The batch size cannot be adjusted after the query is first executed.

**Note**: There is no need to explicitly call the execute method if another
means of fetching the query results is chosen. The following two approaches
lead to the same result:

@startDocuBlockInline executeQuery
@EXAMPLE_ARANGOSH_OUTPUT{executeQuery}
~ db._create("users");
~ db.users.save({ name: "Gerhard" });
~ db.users.save({ name: "Helmut" });
~ db.users.save({ name: "Angela" });
result = db.users.all().toArray();
q = db.users.all(); q.execute(); result = [ ]; while (q.hasNext()) { result.push(q.next()); }
~ db._drop("users")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock executeQuery

The following two alternatives both use a *batchSize* and return the same
result:

@startDocuBlockInline executeQueryBatchSize
@EXAMPLE_ARANGOSH_OUTPUT{executeQueryBatchSize}
~ db._create("users");
~ db.users.save({ name: "Gerhard" });
~ db.users.save({ name: "Helmut" });
~ db.users.save({ name: "Angela" });
q = db.users.all(); q.setBatchSize(20); q.execute(); while (q.hasNext()) { print(q.next()); }
q = db.users.all(); q.execute(20); while (q.hasNext()) { print(q.next()); }
~ db._drop("users")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock executeQueryBatchSize

!SUBSECTION Dispose
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock cursorDispose

@brief disposes the result
`cursor.dispose()`

If you are no longer interested in any further results, you should call
*dispose* in order to free any resources associated with the cursor.
After calling *dispose* you can no longer access the cursor.

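A defensive usage pattern is to pair cursor consumption with *dispose* in a `finally` block, so resources are freed even if processing throws. The cursor below is a minimal stub mimicking the documented interface, not a real simple-query cursor:

```javascript
// Sketch: always free the cursor, even when processing throws.
var disposed = false;
var items = ["a", "b"];
var cursor = {
  hasNext: function () { return items.length > 0; },
  next: function () { return items.shift(); },
  dispose: function () { disposed = true; }
};

var seen = [];
try {
  while (cursor.hasNext()) {
    seen.push(cursor.next());
  }
} finally {
  cursor.dispose(); // free resources in every case
}
```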
!SUBSECTION Count
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock cursorCount

@brief counts the number of documents
`cursor.count()`

The *count* operator counts the number of documents in the result set and
returns that number. The *count* operator ignores any limits and returns
the total number of documents found.

**Note**: Not all simple queries support counting. In this case *null* is
returned.

`cursor.count(true)`

If the result set was limited by the *limit* operator or documents were
skipped using the *skip* operator, the *count* operator with argument
*true* will use the number of elements in the final result set - after
applying *limit* and *skip*.

**Note**: Not all simple queries support counting. In this case *null* is
returned.

@EXAMPLES

Ignore any limit:

@startDocuBlockInline cursorCount
@EXAMPLE_ARANGOSH_OUTPUT{cursorCount}
~ db._create("five");
~ db.five.save({ name : "one" });
~ db.five.save({ name : "two" });
~ db.five.save({ name : "three" });
~ db.five.save({ name : "four" });
~ db.five.save({ name : "five" });
db.five.all().limit(2).count();
~ db._drop("five")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock cursorCount

Counting with *limit* and *skip* taken into account:

@startDocuBlockInline cursorCountLimit
@EXAMPLE_ARANGOSH_OUTPUT{cursorCountLimit}
~ db._create("five");
~ db.five.save({ name : "one" });
~ db.five.save({ name : "two" });
~ db.five.save({ name : "three" });
~ db.five.save({ name : "four" });
~ db.five.save({ name : "five" });
db.five.all().limit(2).count(true);
~ db._drop("five")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock cursorCountLimit

When a fulltext index exists, it can be queried using a fulltext query.

!SUBSECTION Fulltext
<!-- js/common/modules/@arangodb/arango-collection-common.js-->
@startDocuBlock collectionFulltext

@brief queries the fulltext index
`collection.fulltext(attribute, query)`

The *fulltext* simple query function performs a fulltext search on the
specified *attribute* with the specified *query*.

Details about the fulltext query syntax can be found below.

Note: the *fulltext* simple query function is **deprecated** as of ArangoDB 2.6.
The function may be removed in future versions of ArangoDB. The preferred
way for executing fulltext queries is to use an AQL query using the *FULLTEXT*
[AQL function](../Aql/FulltextFunctions.md) as follows:

    FOR doc IN FULLTEXT(@@collection, @attributeName, @queryString, @limit)
      RETURN doc

@EXAMPLES

@startDocuBlockInline collectionFulltext
@EXAMPLE_ARANGOSH_OUTPUT{collectionFulltext}
~ db._drop("emails");
~ db._create("emails");
db.emails.ensureFulltextIndex("content");
| db.emails.save({ content:
    "Hello Alice, how are you doing? Regards, Bob"});
| db.emails.save({ content:
    "Hello Charlie, do Alice and Bob know about it?"});
db.emails.save({ content: "I think they don't know. Regards, Eve" });
db.emails.fulltext("content", "charlie,|eve").toArray();
~ db._drop("emails");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock collectionFulltext

!SUBSECTION Fulltext Syntax

should be sorted, so that the pagination works in a predictable way.

!SUBSECTION Limit
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock queryLimit

@brief limit
`query.limit(number)`

Limits a result to the first *number* documents. Specifying a limit of
*0* will return no documents at all. If you do not need a limit, just do
not add the limit operator. The limit must be non-negative.

In general the input to *limit* should be sorted. Otherwise it will be
unclear which documents will be included in the result set.

@EXAMPLES

@startDocuBlockInline queryLimit
@EXAMPLE_ARANGOSH_OUTPUT{queryLimit}
~ db._create("five");
~ db.five.save({ name : "one" });
~ db.five.save({ name : "two" });
~ db.five.save({ name : "three" });
~ db.five.save({ name : "four" });
~ db.five.save({ name : "five" });
db.five.all().toArray();
db.five.all().limit(2).toArray();
~ db._drop("five")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock queryLimit

!SUBSECTION Skip
<!-- js/common/modules/@arangodb/simple-query-common.js -->
@startDocuBlock querySkip

@brief skip
`query.skip(number)`

Skips the first *number* documents. If *number* is positive, then this
number of documents is skipped before returning the query results.

In general the input to *skip* should be sorted. Otherwise it will be
unclear which documents will be included in the result set.

Note: using negative *skip* values is **deprecated** as of ArangoDB 2.6 and
will not be supported in future versions of ArangoDB.

@EXAMPLES

@startDocuBlockInline querySkip
@EXAMPLE_ARANGOSH_OUTPUT{querySkip}
~ db._create("five");
~ db.five.save({ name : "one" });
~ db.five.save({ name : "two" });
~ db.five.save({ name : "three" });
~ db.five.save({ name : "four" });
~ db.five.save({ name : "five" });
db.five.all().toArray();
db.five.all().skip(3).toArray();
~ db._drop("five")
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock querySkip

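On a sorted result, combining *skip* and *limit* behaves like array slicing: page *p* of size *m* is the half-open range [p·m, p·m + m). The comparison below uses a plain JavaScript array rather than a real cursor, purely to illustrate the semantics:

```javascript
// Sketch: skip(n).limit(m) on a sorted result corresponds to slicing out
// the half-open index range [n, n + m).
var sorted = ["one", "two", "three", "four", "five"];

function page(docs, skip, limit) {
  return docs.slice(skip, skip + limit);
}

page(sorted, 2, 2); // ["three", "four"] -- the second page of size 2
page(sorted, 4, 2); // ["five"]          -- a short final page
```

This is why the input should be sorted: without a stable order, consecutive pages may overlap or miss documents.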
transaction is automatically aborted, and all changes are rolled back.

!SUBSECTION Execute transaction
<!-- js/server/modules/@arangodb/arango-database.js -->
@startDocuBlock executeTransaction

@brief executes a transaction
`db._executeTransaction(object)`

Executes a server-side transaction, as specified by *object*.

*object* must have the following attributes:
- *collections*: a sub-object that defines which collections will be
  used in the transaction. *collections* can have these attributes:
  - *read*: a single collection or a list of collections that will be
    used in the transaction in read-only mode
  - *write*: a single collection or a list of collections that will be
    used in the transaction in write or read mode.
- *action*: a JavaScript function or a string with JavaScript code
  containing all the instructions to be executed inside the transaction.
  If the code runs through successfully, the transaction will be committed
  at the end. If the code throws an exception, the transaction will be
  rolled back, together with all database operations it performed.

Additionally, *object* can have the following optional attributes:
- *waitForSync*: boolean flag indicating whether the transaction
  is forced to be synchronous.
- *lockTimeout*: a numeric value that can be used to set a timeout for
  waiting on collection locks. If not specified, a default value will be
  used. Setting *lockTimeout* to *0* will make ArangoDB not time
  out waiting for a lock.
- *params*: optional arguments passed to the function specified in
  *action*.

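The shape of the *object* argument can be sketched as a plain JavaScript value. The collection name "accounts", the action body, and the attribute values below are hypothetical examples, chosen only to show where each documented attribute goes:

```javascript
// Sketch of a descriptor for db._executeTransaction(). The "accounts"
// collection and the action body are made-up examples.
var trx = {
  collections: {
    write: "accounts"          // collections modified inside the transaction
  },
  action: function (params) {  // runs server-side; throwing rolls back
    // e.g. debit and credit documents in "accounts" here
    return params.amount;
  },
  params: { amount: 100 },     // forwarded to `action`
  waitForSync: false,          // do not force a synchronous commit
  lockTimeout: 5               // give up waiting for collection locks after 5s
};

// In arangosh one would then call: db._executeTransaction(trx);
trx.action(trx.params); // exercising the action locally returns 100
```

Note that the `action` function executes on the server, so it must not rely on client-side state captured in its closure.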
!SUBSECTION Declaration of collections