
Doc - Administration & Programs Refactor (#4907)

This commit is contained in:
Simran 2018-05-10 13:05:22 +02:00 committed by sleto-it
parent eac32700ea
commit 59de3403c1
113 changed files with 4958 additions and 1118 deletions

View File

@@ -10,7 +10,7 @@ The two options in managing graphs are to either use
 - graph functions on a combination of document and edge collections.
 Named graphs can be defined through the [graph-module](../../Manual/Graphs/GeneralGraphs/index.html)
-or via the [web interface](../../Manual/Administration/WebInterface/index.html).
+or via the [web interface](../../Manual/Programs/WebInterface/index.html).
 The definition contains the name of the graph, and the vertex and edge collections
 involved. Since the management functions are layered on top of simple sets of
 document and edge collections, you can also use regular AQL functions to work with them.

View File

@@ -46,4 +46,4 @@ Queries can also be saved in the AQL editor along with their bind parameter values
 for later reuse. This data is stored in the user profile in the current database
 (in the *_users* system table).
-Also see the detailed description of the [Web Interface](../../Manual/Administration/WebInterface/index.html).
+Also see the detailed description of the [Web Interface](../../Manual/Programs/WebInterface/index.html).

View File

@@ -17,7 +17,7 @@ and may even be as simple as an identity transformation thus making the view
 represent all documents available in the specified set of collections.
 Views can be defined and administered on a per view-type basis via
-the [web interface](../../Manual/Administration/WebInterface/index.html).
+the [web interface](../../Manual/Programs/WebInterface/index.html).
 The currently supported view implementations are:
 * **arangosearch** as described in [ArangoSearch View](ArangoSearch.md)

View File

@@ -3,6 +3,8 @@ div.example_show_button {
   text-align: center;
   position: relative;
   top: -10px;
+  display: flex;
+  justify-content: center;
 }
 .book .book-body .navigation.navigation-next {

View File

@@ -3,6 +3,8 @@ div.example_show_button {
   text-align: center;
   position: relative;
   top: -10px;
+  display: flex;
+  justify-content: center;
 }
 .book .book-body .navigation.navigation-next {

View File

@@ -3,6 +3,8 @@ div.example_show_button {
   text-align: center;
   position: relative;
   top: -10px;
+  display: flex;
+  justify-content: center;
 }
 .book .book-body .navigation.navigation-next {

View File

@@ -6,5 +6,5 @@ Following you have ArangoDB's HTTP Interface for Documents, Databases, Edges and
 There are also some examples provided for every API action.
 You may also use the interactive [Swagger documentation](http://swagger.io) in the
-[ArangoDB webinterface](../../Manual/Administration/WebInterface/index.html)
+[ArangoDB webinterface](../../Manual/Programs/WebInterface/index.html)
 to explore the API calls below.

View File

@@ -4,7 +4,7 @@ Notes on Databases
 Please keep in mind that each database contains its own system collections,
 which need to set up when a database is created. This will make the creation
 of a database take a while. Replication is configured on a per-database level,
-meaning that any replication logging or applying for the a new database must
+meaning that any replication logging or applying for a new database must
 be configured explicitly after a new database has been created. Foxx applications
 are also available only in the context of the database they have been installed
 in. A new database will only provide access to the system applications shipped

View File

@@ -3,6 +3,8 @@ div.example_show_button {
   text-align: center;
   position: relative;
   top: -10px;
+  display: flex;
+  justify-content: center;
 }
 .book .book-body .navigation.navigation-next {

View File

@@ -1,490 +0,0 @@
Arangoimport
============
This manual describes the ArangoDB importer _arangoimport_, which can be used for
bulk imports.
The most convenient method to import a lot of data into ArangoDB is to use the
*arangoimport* command-line tool. It allows you to import data records from a file
into an existing database collection.
It is possible to import [document keys](../Appendix/Glossary.md#document-key) with the documents using the *_key*
attribute. When importing into an [edge collection](../Appendix/Glossary.md#edge-collection), it is mandatory that all
imported documents have the *_from* and *_to* attributes, and that they contain
valid references.
Let's assume for the following examples you want to import user data into an
existing collection named "users" on the server.
Importing Data into an ArangoDB Database
----------------------------------------
### Importing JSON-encoded Data
Let's further assume the import at hand is encoded in JSON. We'll be using these
example user records to import:
```js
{ "name" : { "first" : "John", "last" : "Connor" }, "active" : true, "age" : 25, "likes" : [ "swimming"] }
{ "name" : { "first" : "Jim", "last" : "O'Brady" }, "age" : 19, "likes" : [ "hiking", "singing" ] }
{ "name" : { "first" : "Lisa", "last" : "Jones" }, "dob" : "1981-04-09", "likes" : [ "running" ] }
```
To import these records, all you need to do is to put them into a file (with one
line for each record to import) and run the following command:
> arangoimport --file "data.json" --type jsonl --collection "users"
This will transfer the data to the server, import the records, and print a
status summary. To show the intermediate progress during the import process, the
option *--progress* can be added. This option will show the percentage of the
input file that has been sent to the server. This will only be useful for big
import files.
> arangoimport --file "data.json" --type json --collection users --progress true
It is also possible to use the output of another command as an input for arangoimport.
For example, the following shell command can be used to pipe data from the `cat`
process to arangoimport:
> cat data.json | arangoimport --file - --type json --collection users
Note that you have to use `--file -` if you want to use another command as input
for arangoimport. No progress can be reported for such imports as the size of the input
will be unknown to arangoimport.
By default, the endpoint *tcp://127.0.0.1:8529* will be used. If you want to
specify a different endpoint, you can use the *--server.endpoint* option. You
probably want to specify a database user and password as well. You can do so by
using the options *--server.username* and *--server.password*. If you do not
specify a password, you will be prompted for one.
> arangoimport --server.endpoint tcp://127.0.0.1:8529 --server.username root --file "data.json" --type json --collection "users"
Note that the collection (*users* in this case) must already exist or the import
will fail. If you want to create a new collection with the import data, you need
to specify the *--create-collection* option. Note that by default it will create
a document collection and no edge collection.
> arangoimport --file "data.json" --type json --collection "users" --create-collection true
To create an edge collection instead, use the *--create-collection-type* option
and set it to *edge*:
> arangoimport --file "data.json" --collection "myedges" --create-collection true --create-collection-type edge
When importing data into an existing collection it is often convenient to first
remove all data from the collection and then start the import. This can be achieved
by passing the *--overwrite* parameter to _arangoimport_. If it is set to *true*,
any existing data in the collection will be removed prior to the import. Note
that any existing index definitions for the collection will be preserved even if
*--overwrite* is set to true.
> arangoimport --file "data.json" --type json --collection "users" --overwrite true
As the import file already contains the data in JSON format, attribute names and
data types are fully preserved. As can be seen in the example data, there is no
need for all data records to have the same attribute names or types. Records can
be inhomogeneous.
Please note that by default, _arangoimport_ will import data into the specified
collection in the default database (*_system*). To specify a different database,
use the *--server.database* option when invoking _arangoimport_. If you want to
import into a nonexistent database you need to pass *--create-database true*.
Note that *--create-database* defaults to *false*.
The tool also supports parallel imports, with multiple threads. Using multiple
threads may provide a speedup, especially when using the RocksDB storage engine.
To specify the number of parallel threads use the `--threads` option:
> arangoimport --threads 4 --file "data.json" --type json --collection "users"
Note that using multiple threads may lead to a non-sequential import of the input
data. Data that appears later in the input file may be imported earlier than data
that appears earlier in the input file. This is normally not a problem but may cause
issues when there are data dependencies or duplicates in the import data. In
this case, the number of threads should be set to 1.
### JSON input file formats
**Note**: *arangoimport* supports two formats when importing JSON data from
a file. The first format that we also used above is commonly known as [JSONL](http://jsonlines.org).
However, in contrast to the JSONL specification it requires the input file to contain
one complete JSON document in each line, e.g.
```js
{ "_key": "one", "value": 1 }
{ "_key": "two", "value": 2 }
{ "_key": "foo", "value": "bar" }
...
```
So one could argue that this is only a subset of JSONL.
The above format can be imported sequentially by _arangoimport_. It will read data
from the input file in chunks and send it in batches to the server. Each batch
will be about as big as specified in the command-line parameter *--batch-size*.
An alternative is to put one big JSON document into the input file like this:
```js
[
{ "_key": "one", "value": 1 },
{ "_key": "two", "value": 2 },
{ "_key": "foo", "value": "bar" },
...
]
```
This format allows line breaks within the input file as required. The downside
is that the whole input file will need to be read by _arangoimport_ before it can
send the first batch. This might be a problem if the input file is big. By
default, _arangoimport_ will allow importing such files up to a size of about 16 MB.
If you want to allow your _arangoimport_ instance to use more memory, you may want
to increase the maximum file size by specifying the command-line option
*--batch-size*. For example, to set the batch size to 32 MB, use the following
command:
> arangoimport --file "data.json" --type json --collection "users" --batch-size 33554432
Please also note that you may need to increase the value of *--batch-size* if
a single document inside the input file is bigger than the value of *--batch-size*.
### Importing CSV Data
_arangoimport_ also offers the possibility to import data from CSV files. This
comes in handy when the data at hand is already in CSV format and you don't want
to spend time converting it to JSON for the import.
To import data from a CSV file, make sure your file contains the attribute names
in the first row. All the following lines in the file will be interpreted as
data records and will be imported.
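The mapping from header row to documents can be sketched in Python. This is a hypothetical illustration of the file format only, not arangoimport's implementation, and it keeps all values as strings (arangoimport's type coercion for unquoted values is described further below):

```python
import csv
import io
import json

# First CSV row supplies the attribute names; every following row
# becomes one document with those names as keys.
csv_data = '''"first","last","age"
"John","Connor",25
"Jim","O'Brady",19
'''

reader = csv.reader(io.StringIO(csv_data))
header = next(reader)                      # first row: attribute names
docs = [dict(zip(header, row)) for row in reader]

print(json.dumps(docs[0]))
# {"first": "John", "last": "Connor", "age": "25"}
```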
The CSV import requires the data to have a homogeneous structure. All records
must have exactly the same number of columns as there are headers. By default,
lines with a different number of values will not be imported and there will be
warnings for them. To still import lines with fewer values than in the header,
there is the *--ignore-missing* option. If set to true, lines that have a
different number of fields will be imported. In this case only those attributes
will be populated for which there are values. Attributes for which there are
no values present will silently be discarded.
Example:
```
"first","last","age","active","dob"
"John","Connor",25,true
"Jim","O'Brady"
```
With *--ignore-missing* this will produce the following documents:
```js
{ "first" : "John", "last" : "Connor", "active" : true, "age" : 25 }
{ "first" : "Jim", "last" : "O'Brady" }
```
The cell values can have different data types though. If a cell does not have
any value, it can be left empty in the file. These values will not be imported,
so the attributes will not be present in the created document. Values enclosed in
quotes will be imported as strings, so to import numeric values, boolean values
or the null value, don't enclose the value in quotes in your file.
We'll be using the following input file for the CSV import:
```
"first","last","age","active","dob"
"John","Connor",25,true,
"Jim","O'Brady",19,,
"Lisa","Jones",,,"1981-04-09"
Hans,dos Santos,0123,,
Wayne,Brewer,,false,
```
The command line to execute the import is:
> arangoimport --file "data.csv" --type csv --collection "users"
The above data will be imported into 5 documents which will look as follows:
```js
{ "first" : "John", "last" : "Connor", "active" : true, "age" : 25 }
{ "first" : "Jim", "last" : "O'Brady", "age" : 19 }
{ "first" : "Lisa", "last" : "Jones", "dob" : "1981-04-09" }
{ "first" : "Hans", "last" : "dos Santos", "age" : 123 }
{ "first" : "Wayne", "last" : "Brewer", "active" : false }
```
As can be seen, values left completely empty in the input file will be treated
as absent. Numeric values not enclosed in quotes will be treated as numbers.
Note that leading zeros in numeric values will be removed. To import numbers
with leading zeros, please use strings.
The literals *true* and *false* will be treated as booleans if they are not
enclosed in quotes. Other values not enclosed in quotes will be treated as
strings.
Any values enclosed in quotes will be treated as strings, too.
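The coercion rules above can be summarized in a small Python sketch. This is an assumption-level model of the described behavior, not arangoimport's source code; the `ABSENT` marker stands for "attribute is omitted from the document":

```python
ABSENT = object()  # marker: attribute will not appear in the document

def coerce(raw: str, quoted: bool):
    if quoted:
        return raw                  # quoted values are always strings
    if raw == "":
        return ABSENT               # empty cells are treated as absent
    if raw in ("true", "false"):
        return raw == "true"        # unquoted literals become booleans
    if raw == "null":
        return None
    try:
        return int(raw)             # "0123" becomes 123: leading zeros are lost
    except ValueError:
        try:
            return float(raw)
        except ValueError:
            return raw              # anything else is treated as a string
```

For example, `coerce("0123", quoted=False)` yields the number `123`, while `coerce("0123", quoted=True)` keeps the string `"0123"` intact.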
String values containing the quote character or the separator must be enclosed
with quote characters. Within a string, the quote character itself must be
escaped with another quote character (or with a backslash if the *--backslash-escape*
option is used).
Note that the quote and separator characters can be adjusted via the
*--quote* and *--separator* arguments when invoking _arangoimport_. The quote
character defaults to the double quote (*"*). To use a literal quote in a
string, you can use two quote characters.
To use backslash for escaping quote characters, please set the option
*--backslash-escape* to *true*.
The importer supports Windows (CRLF) and Unix (LF) line breaks. Line breaks might
also occur inside values that are enclosed with the quote character.
Here's an example for using literal quotes and newlines inside values:
```
"name","password"
"Foo","r4ndom""123!"
"Bar","wow!
this is a
multiline password!"
"Bartholomew ""Bart"" Simpson","Milhouse"
```
Extra whitespace at the end of each line will be ignored. Whitespace at the
start of lines or between field values will not be ignored, so please make sure
that there is no extra whitespace in front of values or between them.
### Importing TSV Data
You may also import tab-separated values (TSV) from a file. This format is very
simple: every line in the file represents a data record. There is no quoting or
escaping. That also means that the separator character (which defaults to the
tabstop symbol) must not be used anywhere in the actual data.
As with CSV, the first line in the TSV file must contain the attribute names,
and all lines must have an identical number of values.
If a different separator character or string should be used, it can be specified
with the *--separator* argument.
An example command line to execute the TSV import is:
> arangoimport --file "data.tsv" --type tsv --collection "users"
### Attribute Name Translation
For the CSV and TSV input formats, attribute names can be translated automatically.
This is useful in case the import file has different attribute names than those
that should be used in ArangoDB.
A common use case is to rename an "id" column from the input file into "_key" as
it is expected by ArangoDB. To do this, specify the following translation when
invoking arangoimport:
> arangoimport --file "data.csv" --type csv --translate "id=_key"
Other common cases are to rename columns in the input file to *_from* and *_to*:
> arangoimport --file "data.csv" --type csv --translate "from=_from" --translate "to=_to"
The *--translate* option can be specified multiple times. The source attribute name
and the target attribute name must be separated with a *=*.
### Ignoring Attributes
For the CSV and TSV input formats, certain attribute names can be ignored on import.
In an ArangoDB cluster this can come in handy when your documents already contain
a `_key` attribute and your collection uses a sharding attribute other than `_key`:
in the cluster this configuration is not supported, because ArangoDB needs to
guarantee the uniqueness of the `_key` attribute in *all* shards of the collection.
> arangoimport --file "data.csv" --type csv --remove-attribute "_key"
The same thing would apply if your data contains an *_id* attribute:
> arangoimport --file "data.csv" --type csv --remove-attribute "_id"
### Importing into an Edge Collection
arangoimport can also be used to import data into an existing edge collection.
The import data must, for each edge to import, contain at least the *_from* and
*_to* attributes. These indicate which other two documents the edge should connect.
It is necessary that these attributes are set for all records, and point to
valid document ids in existing collections.
*Examples*
```js
{ "_from" : "users/1234", "_to" : "users/4321", "desc" : "1234 is connected to 4321" }
```
**Note**: The edge collection must already exist when the import is started. Using
the *--create-collection* flag will not work because arangoimport will always try to
create a regular document collection if the target collection does not exist.
### Updating existing documents
By default, arangoimport will try to insert all documents from the import file into the
specified collection. In case the import file contains documents that are already present
in the target collection (matching is done via the *_key* attributes), then a default
arangoimport run will not import these documents and complain about unique key constraint
violations.
However, arangoimport can be used to update or replace existing documents in case they
already exist in the target collection. It provides the command-line option *--on-duplicate*
to control the behavior in case a document is already present in the database.
The default value of *--on-duplicate* is *error*. This means that when the import file
contains a document that is present in the target collection already, then trying to
re-insert a document with the same *_key* value is considered an error, and the document in
the database will not be modified.
Other possible values for *--on-duplicate* are:
- *update*: each document present in the import file that is also present in the target
collection already will be updated by arangoimport. *update* will perform a partial update
of the existing document, modifying only the attributes that are present in the import
file and leaving all other attributes untouched.
The values of system attributes *_id*, *_key*, *_rev*, *_from* and *_to* cannot be
updated or replaced in existing documents.
- *replace*: each document present in the import file that is also present in the target
collection already will be replaced by arangoimport. *replace* will replace the existing
document entirely, resulting in a document with only the attributes specified in the import
file.
The values of system attributes *_id*, *_key*, *_rev*, *_from* and *_to* cannot be
updated or replaced in existing documents.
- *ignore*: each document present in the import file that is also present in the target
collection already will be ignored and not modified in the target collection.
When *--on-duplicate* is set to either *update* or *replace*, arangoimport will return the
number of documents updated/replaced in the *updated* return value. When set to another
value, the value of *updated* will always be zero. When *--on-duplicate* is set to *ignore*,
arangoimport will return the number of ignored documents in the *ignored* return value.
When set to another value, *ignored* will always be zero.
It is possible to perform a combination of inserts and updates/replaces with a single
arangoimport run. When *--on-duplicate* is set to *update* or *replace*, all documents present
in the import file will be inserted into the target collection provided they are valid
and do not already exist with the specified *_key*. Documents that are already present
in the target collection (identified by *_key* attribute) will instead be updated/replaced.
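The *--on-duplicate* semantics can be modeled with a dict keyed by *_key* standing in for the collection. This is a simplified sketch of the behavior described above, not arangoimport's actual implementation:

```python
def import_docs(collection, docs, on_duplicate="error"):
    # collection: dict mapping _key -> document (a stand-in for the server)
    stats = {"created": 0, "updated": 0, "ignored": 0, "errors": 0}
    for doc in docs:
        key = doc["_key"]
        if key not in collection:
            collection[key] = dict(doc)     # new key: plain insert
            stats["created"] += 1
        elif on_duplicate == "update":
            collection[key].update(doc)     # partial update: other attributes survive
            stats["updated"] += 1
        elif on_duplicate == "replace":
            collection[key] = dict(doc)     # full replace: only imported attributes remain
            stats["updated"] += 1
        elif on_duplicate == "ignore":
            stats["ignored"] += 1           # existing document stays untouched
        else:                               # "error": duplicate key is rejected
            stats["errors"] += 1
    return stats
```

With `on_duplicate="update"`, a document `{"_key": "a", "name": "new"}` imported over an existing `{"_key": "a", "name": "old", "age": 1}` keeps `age` and only overwrites `name`, while `"replace"` would drop `age` entirely.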
### Arangoimport result output
An _arangoimport_ import run will print out the final results on the command line.
It will show the
* number of documents created (*created*)
* number of documents updated/replaced (*updated/replaced*, only non-zero if
*--on-duplicate* was set to *update* or *replace*, see above)
* number of warnings or errors that occurred on the server side (*warnings/errors*)
* number of ignored documents (only non-zero if *--on-duplicate* was set to *ignore*).
*Example*
```js
created: 2
warnings/errors: 0
updated/replaced: 0
ignored: 0
```
For CSV and TSV imports, the total number of input file lines read will also be printed
(*lines read*).
_arangoimport_ will also print out details about warnings and errors that happened on the
server-side (if any).
### Attribute Naming and Special Attributes
Attributes whose names start with an underscore are treated in a special way by
ArangoDB:
- the optional *_key* attribute contains the document's key. If specified, the value
must be formally valid (e.g. must be a string and conform to the naming conventions).
Additionally, the key value must be unique within the
collection the import is run for.
- *_from*: when importing into an edge collection, this attribute contains the id
of one of the documents connected by the edge. The value of *_from* must be a
syntactically valid document id and the referred collection must exist.
- *_to*: when importing into an edge collection, this attribute contains the id
of the other document connected by the edge. The value of *_to* must be a
syntactically valid document id and the referred collection must exist.
- *_rev*: this attribute contains the revision number of a document. However, the
revision numbers are managed by ArangoDB and cannot be specified on import. Thus
any value in this attribute is ignored on import.
If you import values into *_key*, you should make sure they are valid and unique.
When importing data into an edge collection, you should make sure that all import
documents contain *_from* and *_to* and that their values point to existing documents.
To avoid specifying complete document ids (consisting of collection names and document
keys) for *_from* and *_to* values, there are the options *--from-collection-prefix* and
*--to-collection-prefix*. If specified, these values will be automatically prepended
to each value in *_from* (or *_to* resp.). This allows specifying only document keys
inside *_from* and/or *_to*.
*Example*
> arangoimport --from-collection-prefix users --to-collection-prefix products ...
Importing the following document will then create an edge between *users/1234* and
*products/4321*:
```js
{ "_from" : "1234", "_to" : "4321", "desc" : "users/1234 is connected to products/4321" }
```
### Automatic pacing with busy or low throughput disk subsystems
Arangoimport has an automatic pacing algorithm that limits how fast
data is sent to the ArangoDB servers. This pacing algorithm exists to
prevent the import operation from failing due to slow responses.
Google Compute and other VM providers limit the throughput of disk
devices. Google's limit is stricter for smaller disk rentals than for
larger ones. Specifically, a user could choose the smallest disk space
and be limited to 3 Mbytes per second. Similarly, other users'
processes on the shared VM can limit available throughput of the disk
devices.
The automatic pacing algorithm adjusts the transmit block size
dynamically based upon the actual throughput of the server over the
last 20 seconds. Further, each thread delivers its portion of the data
in mostly non-overlapping chunks. The thread timing creates
intentional windows of non-import activity to allow the server extra
time for meta operations.
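The general idea of throughput-based pacing can be sketched as follows. This is an illustrative model under stated assumptions (a 1-second target and invented bounds), not the actual arangoimport algorithm:

```python
def next_batch_size(current_size, observed_bps, target_seconds=1.0,
                    min_size=32 * 1024, max_size=16 * 1024 * 1024):
    # Aim for a batch the server can absorb in roughly target_seconds,
    # given the throughput observed for recent batches (bytes/second).
    ideal = int(observed_bps * target_seconds)
    # Move only halfway toward the ideal to smooth out measurement noise.
    adjusted = (current_size + ideal) // 2
    # Clamp to sane bounds so one bad sample cannot stall or flood the server.
    return max(min_size, min(adjusted, max_size))
```

A slow disk (low observed throughput) shrinks the transmit block size, so each request stays small enough to complete before the server times out; a fast disk grows it toward the maximum.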
Automatic pacing intentionally does not use the full throughput of a
disk device. An unlimited (really fast) disk device might not need
pacing. Raising the number of threads via the "--threads X" command
line to any value of X greater than 2 will increase the total
throughput used.
Automatic pacing frees the user from adjusting the throughput used to
match available resources. It is disabled by manually specifying any
"--batch-size". 16777216 was the previous default for --batch-size.
Having --batch-size too large can lead to transmitted data backing-up
on the server, resulting in a TimeoutError.
The pacing algorithm works successfully with the MMFiles storage engine on
disks limited to read and write throughput as small as 1 Mbyte per second,
and with the RocksDB storage engine on disks limited to read and write
throughput as small as 3 Mbytes per second.

View File

@@ -1,41 +0,0 @@
ArangoDB Shell Configuration
============================
_arangosh_ will look for a user-defined startup script named *.arangosh.rc* in the
user's home directory on startup. The home directory will likely be `/home/<username>/`
on Unix/Linux, and is determined on Windows by peeking into the environment variables
`%HOMEDRIVE%` and `%HOMEPATH%`.
If the file *.arangosh.rc* is present in the home directory, _arangosh_ will execute
the contents of this file inside the global scope.
You can use this to define your own extra variables and functions that you need often.
For example, you could put the following into the *.arangosh.rc* file in your home
directory:
```js
// "var" keyword avoided intentionally...
// otherwise "timed" would not survive the scope of this script
global.timed = function (cb) {
console.time("callback");
cb();
console.timeEnd("callback");
};
```
This will make a function named *timed* available in _arangosh_ in the global scope.
You can now start _arangosh_ and invoke the function like this:
```js
timed(function () {
for (var i = 0; i < 1000; ++i) {
db.test.save({ value: i });
}
});
```
Please keep in mind that, if present, the *.arangosh.rc* file needs to contain valid
JavaScript code. If you want any variables in the global scope to survive you need to
omit the *var* keyword for them. Otherwise the variables will only be visible inside
the script itself, but not outside.

View File

@@ -1,49 +0,0 @@
ArangoDB Shell Output
=====================
The ArangoDB shell will print the output of the last evaluated expression
by default:
@startDocuBlockInline lastExpressionResult
@EXAMPLE_ARANGOSH_OUTPUT{lastExpressionResult}
42 * 23
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock lastExpressionResult
In order to prevent printing the result of the last evaluated expression,
the expression result can be captured in a variable, e.g.
@startDocuBlockInline lastExpressionResultCaptured
@EXAMPLE_ARANGOSH_OUTPUT{lastExpressionResultCaptured}
var calculationResult = 42 * 23
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock lastExpressionResultCaptured
There is also the `print` function to explicitly print out values in the
ArangoDB shell:
@startDocuBlockInline printFunction
@EXAMPLE_ARANGOSH_OUTPUT{printFunction}
print({ a: "123", b: [1,2,3], c: "test" });
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock printFunction
By default, the ArangoDB shell uses a pretty printer when JSON documents are
printed. This ensures documents are printed in a human-readable way:
@startDocuBlockInline usingToArray
@EXAMPLE_ARANGOSH_OUTPUT{usingToArray}
db._create("five")
for (i = 0; i < 5; i++) db.five.save({value:i})
db.five.toArray()
~db._drop("five");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock usingToArray
While the pretty-printer produces nice looking results, it will need a lot of
screen space for each document. Sometimes a more dense output might be better.
In this case, the pretty printer can be turned off using the command
*stop_pretty_print()*.
To turn on pretty printing again, use the *start_pretty_print()* command.

View File

@@ -1,107 +0,0 @@
ArangoDB Shell Introduction
===========================
The ArangoDB shell (_arangosh_) is a command-line tool that can be used for
administration of ArangoDB, including running ad-hoc queries.
The _arangosh_ binary is shipped with ArangoDB. It offers a JavaScript shell
environment providing access to the ArangoDB server.
Arangosh can be invoked like this:
```
unix> arangosh
```
By default _arangosh_ will try to connect to an ArangoDB server running on
server *localhost* on port *8529*. It will use the username *root* and an
empty password by default. Additionally it will connect to the default database
(*_system*). All these defaults can be changed using the following
command-line options:
* *--server.database <string>*: name of the database to connect to
* *--server.endpoint <string>*: endpoint to connect to
* *--server.username <string>*: database username
* *--server.password <string>*: password to use when connecting
* *--server.authentication <bool>*: whether or not to use authentication
For example, to connect to an ArangoDB server on IP *192.168.173.13* on port
8530 with the user *foo* and using the database *test*, use:
unix> arangosh \
--server.endpoint tcp://192.168.173.13:8530 \
--server.username foo \
--server.database test \
--server.authentication true
_arangosh_ will then display a password prompt and try to connect to the
server after the password was entered.
To change the current database after the connection has been made, you
can use the `db._useDatabase()` command in arangosh:
@startDocuBlockInline shellUseDB
@EXAMPLE_ARANGOSH_OUTPUT{shellUseDB}
db._createDatabase("myapp");
db._useDatabase("myapp");
db._useDatabase("_system");
db._dropDatabase("myapp");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock shellUseDB
To get a list of available commands, arangosh provides a *help()* function.
Calling it will display helpful information.
_arangosh_ also provides auto-completion. Additional information on available
commands and methods is thus provided by typing the first few letters of a
variable and then pressing the tab key. It is recommended to try this by entering
*db.* (without pressing return) and then pressing tab.
By the way, _arangosh_ provides the *db* object by default, and this object can
be used for switching to a different database and managing collections inside the
current database.
For a list of available methods for the *db* object, type
@startDocuBlockInline shellHelp
@EXAMPLE_ARANGOSH_OUTPUT{shellHelp}
db._help();
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock shellHelp
You can paste multiple lines into arangosh, provided the first line ends with an
opening brace:
@startDocuBlockInline shellPaste
@EXAMPLE_ARANGOSH_OUTPUT{shellPaste}
|for (var i = 0; i < 10; i ++) {
| require("@arangodb").print("Hello world " + i + "!\n");
}
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock shellPaste
To load your own JavaScript code into the current JavaScript interpreter context,
use the load command:
require("internal").load("/tmp/test.js") // <- Linux / MacOS
require("internal").load("c:\\tmp\\test.js") // <- Windows
Exiting arangosh can be done using the key combination `<CTRL> + D` or by
typing `quit<CR>`.
Escaping
--------
In AQL, escaping is done traditionally with the backslash character: `\`.
As seen above, this leads to double backslashes when specifying Windows paths.
Arangosh requires another level of escaping, also with the backslash character.
It adds up to four backslashes that need to be written in Arangosh for a single
literal backslash (`c:\tmp\test.js`):
db._query('RETURN "c:\\\\tmp\\\\test.js"')
You can use [bind variables](../../../AQL/Invocation/WithArangosh.html) to
mitigate this:
var somepath = "c:\\tmp\\test.js"
db._query(aql`RETURN ${somepath}`)
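The two unescaping layers can be sketched outside of ArangoDB. This is an illustration only: it uses `JSON.parse` as a stand-in for AQL's JSON-like string unescaping, and does not talk to a server.

```js
// The JS string literal below contains the characters: "c:\\tmp\\test.js"
// (JavaScript consumes the first escaping layer).
const aqlStringLiteral = '"c:\\\\tmp\\\\test.js"';

// AQL unescapes the string once more, much like JSON does,
// yielding the literal Windows path c:\tmp\test.js
const path = JSON.parse(aqlStringLiteral);
console.log(path);
```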

View File

@ -0,0 +1,10 @@
Backup and Restore
==================
Backup and restore can be done via the tools [_arangodump_](../Programs/Arangodump/README.md) and [_arangorestore_](../Programs/Arangorestore/README.md).
<!-- Offline dumps -->
<!-- Hot backups -->
<!-- Cluster -->

View File

@ -108,7 +108,7 @@ Moving/Rebalancing _shards_
--------------------------- ---------------------------
A _shard_ can be moved from a _DBServer_ to another, and the entire shard distribution A _shard_ can be moved from a _DBServer_ to another, and the entire shard distribution
can be rebalanced using the corresponding buttons in the web [UI](../WebInterface/Cluster.md). can be rebalanced using the corresponding buttons in the web [UI](../../Programs/WebInterface/Cluster.md).
Replacing/Removing a _Coordinator_ Replacing/Removing a _Coordinator_
---------------------------------- ----------------------------------

View File

@ -0,0 +1,14 @@
Import and Export
=================
Import and export can be done via the tools [_arangoimport_](../Programs/Arangoimport/README.md) and [_arangoexport_](../Programs/Arangoexport/README.md).
<!-- Importing from files -->
<!-- Bulk import via HTTP API -->
<!-- Export to files -->
<!-- Bulk export via HTTP API -->
<!-- Syncing with 3rd party systems? -->

View File

@ -4,7 +4,7 @@ Managing Users
The user management in ArangoDB 3 is similar to the ones found in MySQL, The user management in ArangoDB 3 is similar to the ones found in MySQL,
PostgreSQL, or other database systems. PostgreSQL, or other database systems.
User management is possible in the [web interface](../WebInterface/Users.md) User management is possible in the [web interface](../../Programs/WebInterface/Users.md)
and in [arangosh](InArangosh.md) while logged on to the *\_system* database. and in [arangosh](InArangosh.md) while logged on to the *\_system* database.
Note that usernames *must* not start with `:role:`. Note that usernames *must* not start with `:role:`.
@ -132,7 +132,7 @@ collection *data* nor create new collections in the database *example*.
Granting Access Levels Granting Access Levels
---------------------- ----------------------
Access levels can be managed via the [web interface](../WebInterface/Users.md) or in [arangosh](InArangosh.md). Access levels can be managed via the [web interface](../../Programs/WebInterface/Users.md) or in [arangosh](InArangosh.md).
In order to grant an access level to a user, you can assign one of In order to grant an access level to a user, you can assign one of
three access levels for each database and one of three levels for each three access levels for each database and one of three levels for each

View File

@ -1,18 +1,44 @@
Administration Administration
============== ==============
Most administration can be managed using the *arangosh*. Tools
-----
Deployments of ArangoDB servers can be managed with the following tools:
Filesystems - [**Web interface**](../Programs/WebInterface/README.md):
----------- [_Arangod_](../Programs/Arangod/README.md) serves a graphical web interface to
be accessed with a browser via the server port. It provides basic and advanced
functionality to interact with the server and its data.
{### TODO: In case of a cluster, the web interface can be reached via any of the coordinators. What about other deployment modes? ###}
As one would expect for a database, we recommend a locally mounted filesystems. - **ArangoShell**: [_Arangosh_](../Programs/Arangosh/README.md) is a V8 shell to
interact with any local or remote ArangoDB server through a JavaScript
interface. It can be used to automate tasks. Some developers may prefer it over
the web interface, especially for simple CRUD. It is not to be confused with
general command lines like Bash or PowerShell.
NFS or similar network filesystems will not work. - **RESTful API**: _Arangod_ has an [HTTP interface](../../HTTP/index.html) through
which it can be fully managed. The official client tools including _Arangosh_ and
the Web interface talk to this bare metal interface. It is also relevant for
[driver](../../Drivers/index.html) developers.
On Linux we recommend the use of ext4fs, on Windows NTFS and on MacOS HFS+. - [**ArangoDB Starter**](../Programs/Starter/README.md): This deployment tool
helps to start _Arangod_ instances, like for a Cluster or an Active Failover setup.
For a full list of tools, please refer to the [Programs & Tools](../Programs/README.md) chapter.
We recommend to **not** use BTRFS on Linux. It is known to not work well in conjunction with ArangoDB. Deployment Administration
We experienced that ArangoDB faces latency issues on accessing its database files on BTRFS partitions. -------------------------
In conjunction with BTRFS and AUFS we also saw data loss on restart.
- [Master/Slave](MasterSlave/README.md)
- [Active Failover](ActiveFailover/README.md)
- [Cluster](Cluster/README.md)
- [Datacenter to datacenter replication](DC2DC/README.md)
Other Topics
------------
- [Backup & Restore](BackupRestore.md)
- [Import & Export](ImportExport.md)
- [User Management](ManagingUsers/README.md)

View File

@ -1,13 +0,0 @@
Web Interface
=============
ArangoDB comes with a built-in web interface for administration. The interface
differs for standalone instances and cluster setups.
Standalone:
![Standalone Standalone](images/overview.png)
Cluster:
![Cluster Frontend](images/clusterView.png)

View File

@ -1,7 +1,7 @@
The "db" Object The "db" Object
=============== ===============
The `db` object is available in [arangosh](../../GettingStarted/Arangosh.md) by The `db` object is available in [arangosh](../../Programs/Arangosh/README.md) by
default, and can also be imported and used in Foxx services. default, and can also be imported and used in Foxx services.
*db.name* returns a [collection object](CollectionObject.md) for the collection *name*. *db.name* returns a [collection object](CollectionObject.md) for the collection *name*.

View File

@ -9,9 +9,9 @@ transported using [JSON](https://en.wikipedia.org/wiki/JSON) via a TCP connectio
using the HTTP protocol. A [REST API](https://en.wikipedia.org/wiki/Representational_state_transfer) using the HTTP protocol. A [REST API](https://en.wikipedia.org/wiki/Representational_state_transfer)
is provided to interact with the database system. is provided to interact with the database system.
The [web interface](../Administration/WebInterface/README.md) that comes with The [web interface](../Programs/WebInterface/README.md) that comes with
ArangoDB, called *Aardvark*, provides a graphical user interface that is easy to use. ArangoDB, called *Aardvark*, provides a graphical user interface that is easy to use.
An [interactive shell](../GettingStarted/Arangosh.md), called *Arangosh*, is also An [interactive shell](../Programs/Arangosh/README.md), called *Arangosh*, is also
shipped. In addition, there are so-called [drivers](https://arangodb.com/downloads/arangodb-drivers/) shipped. In addition, there are so-called [drivers](https://arangodb.com/downloads/arangodb-drivers/)
that make it easy to use the database system in various environments and that make it easy to use the database system in various environments and
programming languages. All these tools use the HTTP interface of the server and programming languages. All these tools use the HTTP interface of the server and

View File

@ -192,7 +192,7 @@ retrieve the storage engine type used by the server
Returns the name of the storage engine in use (`mmfiles` or `rocksdb`), as well Returns the name of the storage engine in use (`mmfiles` or `rocksdb`), as well
as a list of supported features (types of indexes and as a list of supported features (types of indexes and
[dfdb](../../Troubleshooting/DatafileDebugger.md)). [dfdb](../../Programs/Arango-dfdb/README.md)).
### Engine statistics ### Engine statistics

View File

@ -10,7 +10,7 @@ This chapter introduces ArangoDB's core concepts and covers
between natural data structures and great performance between natural data structures and great performance
You will also find usage examples on how to interact with the database system You will also find usage examples on how to interact with the database system
using [arangosh](../Administration/Arangosh/README.md), e.g. how to create and using [arangosh](../Programs/Arangosh/README.md), e.g. how to create and
drop databases / collections, or how to save, update, replace and remove drop databases / collections, or how to save, update, replace and remove
documents. You can do all this using the [web interface](../GettingStarted/WebInterface.md) documents. You can do all this using the [web interface](../GettingStarted/WebInterface.md)
as well and may therefore skip these sections as a beginner. as well and may therefore skip these sections as a beginner.

View File

@ -1,98 +0,0 @@
Details about the ArangoDB Shell
================================
After the server has been started,
you can use the ArangoDB shell (_arangosh_) to administrate the
server. Without any arguments, the ArangoDB shell will try to contact
the server on port 8529 on the localhost. For more information see the
[ArangoDB Shell documentation](../Administration/Arangosh/README.md). You might need to set additional options
(endpoint, username and password) when connecting:
```
unix> ./arangosh --server.endpoint tcp://127.0.0.1:8529 --server.username root
```
The shell will print its own version number and if successfully connected
to a server the version number of the ArangoDB server.
Command-Line Options
--------------------
Use `--help` to get a list of command-line options:
```
unix> ./arangosh --help
STANDARD options:
--audit-log <string> audit log file to save commands and results to
--configuration <string> read configuration file
--help help message
--max-upload-size <uint64> maximum size of import chunks (in bytes) (default: 500000)
--no-auto-complete disable auto completion
--no-colors deactivate color support
--pager <string> output pager (default: "less -X -R -F -L")
--pretty-print pretty print values
--quiet no banner
--temp.path <string> path for temporary files (default: "/tmp/arangodb")
--use-pager use pager
JAVASCRIPT options:
--javascript.check <string> syntax check code JavaScript code from file
--javascript.execute <string> execute JavaScript code from file
--javascript.execute-string <string> execute JavaScript code from string
--javascript.startup-directory <string> startup paths containing the JavaScript files
--javascript.unit-tests <string> do not start as shell, run unit tests instead
--jslint <string> do not start as shell, run jslint instead
LOGGING options:
--log.level <string> log level (default: "info")
CLIENT options:
--server.connect-timeout <double> connect timeout in seconds (default: 3)
--server.authentication <bool> whether or not to use authentication (default: true)
--server.endpoint <string> endpoint to connect to, use 'none' to start without a server (default: "tcp://127.0.0.1:8529")
--server.password <string> password to use when connecting (leave empty for prompt)
--server.request-timeout <double> request timeout in seconds (default: 300)
--server.username <string> username to use when connecting (default: "root")
```
Database Wrappers
-----------------
The [`db` object](../Appendix/References/DBObject.md) is available in *arangosh*
as well as in *arangod*, i.e. if you're using [Foxx](../Foxx/README.md). While its
interface is consistent between the *arangosh* and the *arangod* implementations,
its underpinning is not. The *arangod* implementation consists of JavaScript wrappers
around ArangoDB's native C++ implementation, whereas the *arangosh* implementation
wraps HTTP accesses to ArangoDB's [RESTful API](../../HTTP/index.html).
So while this code may produce similar results when executed in *arangosh* and
*arangod*, the CPU usage and time required will be quite different:
```js
for (i = 0; i < 100000; i++) {
db.test.save({ name: { first: "Jan" }, count: i});
}
```
This is because the *arangosh* version will issue around 100k HTTP requests, while
the *arangod* version writes directly to the database.
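One way for a client to mitigate this request overhead is to group documents before sending them. The sketch below only shows the client-side grouping; the `chunk` helper is hypothetical and not part of ArangoDB's API, and actually sending a batch would still require one of the bulk HTTP endpoints:

```js
// Hypothetical helper: split an array into fixed-size batches.
function chunk(arr, size) {
  const batches = [];
  for (let i = 0; i < arr.length; i += size) {
    batches.push(arr.slice(i, i + size));
  }
  return batches;
}

// 100,000 documents grouped into batches of 1,000 mean only 100 requests
// instead of 100,000 individual ones.
const docs = Array.from({ length: 100000 }, (_, i) => ({ name: { first: "Jan" }, count: i }));
const batches = chunk(docs, 1000);
console.log(batches.length);
```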
Using `arangosh` via unix shebang mechanisms
--------------------------------------------
In unix operating systems you can start scripts by specifying the interpreter in the first line of the script.
This is commonly called `shebang` or `hash bang`. You can also do that with `arangosh`, i.e. create `~/test.js`:
#!/usr/bin/arangosh --javascript.execute
require("internal").print("hello world")
db._query("FOR x IN test RETURN x").toArray()
Note that the first line has to end with a blank in order to make it work.
Mark it executable to the OS:
#> chmod a+x ~/test.js
and finally try it out:
#> ~/test.js

View File

@ -3,7 +3,7 @@ Web Interface
The server itself (_arangod_) speaks HTTP / REST, but you can use the The server itself (_arangod_) speaks HTTP / REST, but you can use the
graphical web interface to keep it simple. There is also graphical web interface to keep it simple. There is also
[arangosh](../Administration/Arangosh/README.md), a synchronous shell [arangosh](../Programs/Arangosh/README.md), a synchronous shell
for interaction with the server. If you are a developer, you might for interaction with the server. If you are a developer, you might
prefer the shell over the GUI. It does not provide features like prefer the shell over the GUI. It does not provide features like
syntax highlighting however. syntax highlighting however.
@ -26,16 +26,16 @@ Depending on the installation method used, the installation process either
prompted for the root password or the default root password is empty prompted for the root password or the default root password is empty
(see [Securing the installation](Installation.md#securing-the-installation)). (see [Securing the installation](Installation.md#securing-the-installation)).
![Aardvark Login Form](../Administration/WebInterface/images/loginView.png) ![Aardvark Login Form](../Programs/WebInterface/images/loginView.png)
Next you will be asked which database to use. Every server instance comes with Next you will be asked which database to use. Every server instance comes with
a `_system` database. Select this database to continue. a `_system` database. Select this database to continue.
![select database](../Administration/WebInterface/images/selectDBView.png) ![select database](../Programs/WebInterface/images/selectDBView.png)
You should then be presented the dashboard with server statistics like this: You should then be presented the dashboard with server statistics like this:
![Aardvark Dashboard Request Statistics](../Administration/WebInterface/images/dashboardView.png) ![Aardvark Dashboard Request Statistics](../Programs/WebInterface/images/dashboardView.png)
For a more detailed description of the interface, see [Web Interface](../Administration/WebInterface/README.md). For a more detailed description of the interface, see [Web Interface](../Programs/WebInterface/README.md).

View File

@ -30,17 +30,17 @@ In queries you can define in which directions the edge relations may be followed
Named Graphs Named Graphs
------------ ------------
Named graphs are completely managed by ArangoDB, and thus also [visible in the web interface](../Administration/WebInterface/Graphs.md). Named graphs are completely managed by ArangoDB, and thus also [visible in the web interface](../Programs/WebInterface/Graphs.md).
They use the full spectrum of ArangoDB's graph features. You may access them via several interfaces. They use the full spectrum of ArangoDB's graph features. You may access them via several interfaces.
- [AQL Graph Operations](../../AQL/Graphs/index.html) with several flavors: - [AQL Graph Operations](../../AQL/Graphs/index.html) with several flavors:
- [AQL Traversals](../../AQL/Graphs/Traversals.html) on both named and anonymous graphs - [AQL Traversals](../../AQL/Graphs/Traversals.html) on both named and anonymous graphs
- [AQL Shortest Path](../../AQL/Graphs/ShortestPath.html) on both named and anonymous graph - [AQL Shortest Path](../../AQL/Graphs/ShortestPath.html) on both named and anonymous graph
- [JavaScript General Graph implementation, as you may use it in Foxx Services](GeneralGraphs/README.md) - [JavaScript General Graph implementation, as you may use it in Foxx Services](GeneralGraphs/README.md)
- [Graph Management](GeneralGraphs/Management.md); creating & manipualating graph definitions; inserting, updating and deleting vertices and edges into graphs - [Graph Management](GeneralGraphs/Management.md); creating & manipulating graph definitions; inserting, updating and deleting vertices and edges into graphs
- [Graph Functions](GeneralGraphs/Functions.md) for working with edges and vertices, to analyze them and their relations - [Graph Functions](GeneralGraphs/Functions.md) for working with edges and vertices, to analyze them and their relations
- [JavaScript Smart Graph implementation, for scalable graphs](SmartGraphs/README.md) - [JavaScript Smart Graph implementation, for scalable graphs](SmartGraphs/README.md)
- [Smart Graph Management](SmartGraphs/Management.md); creating & manipualating SmartGraph definitions; Differences to General Graph - [Smart Graph Management](SmartGraphs/Management.md); creating & manipulating SmartGraph definitions; Differences to General Graph
- [RESTful General Graph interface](../../HTTP/Gharial/index.html) used to implement graph management in client drivers - [RESTful General Graph interface](../../HTTP/Gharial/index.html) used to implement graph management in client drivers
### Manipulating collections of named graphs with regular document functions ### Manipulating collections of named graphs with regular document functions
@ -91,14 +91,14 @@ To only follow friend edges, you would specify `friend_edges` as sole edge colle
Both approaches have advantages and disadvantages. `FILTER` operations on edge attributes will do comparisons on Both approaches have advantages and disadvantages. `FILTER` operations on edge attributes will do comparisons on
each traversed edge, which may become CPU-intense. When not *finding* the edges in the first place because the each traversed edge, which may become CPU-intense. When not *finding* the edges in the first place because the
collection containing them is not traversed at all, there will never be a reason to actualy check for their collection containing them is not traversed at all, there will never be a reason to actually check for their
`type` attribute with `FILTER`. `type` attribute with `FILTER`.
The multiple edge collections approach is limited by the [number of collections that can be used simultaneously in one query](../../AQL/Fundamentals/Syntax.html#collection-names). The multiple edge collections approach is limited by the [number of collections that can be used simultaneously in one query](../../AQL/Fundamentals/Syntax.html#collection-names).
Every collection used in a query requires some resources inside of ArangoDB and the number is therefore limited Every collection used in a query requires some resources inside of ArangoDB and the number is therefore limited
to cap the resource requirements. You may also have constraints on other edge attributes, such as a hash index to cap the resource requirements. You may also have constraints on other edge attributes, such as a hash index
with a unique constraint, which requires the documents to be in a single collection for the uniqueness guarantee, with a unique constraint, which requires the documents to be in a single collection for the uniqueness guarantee,
and it may thus not be possible to store the different types of edges in multiple edeg collections. and it may thus not be possible to store the different types of edges in multiple edge collections.
So, if your edges have about a dozen different types, it's okay to choose the collection approach, otherwise So, if your edges have about a dozen different types, it's okay to choose the collection approach, otherwise
the `FILTER` approach is preferred. You can still use `FILTER` operations on edges of course. You can get rid the `FILTER` approach is preferred. You can still use `FILTER` operations on edges of course. You can get rid
@ -152,8 +152,8 @@ flexible ways, whereas it would cause headache to handle it in a relational data
Backup and restore Backup and restore
------------------ ------------------
For sure you want to have backups of your graph data, you can use [Arangodump](../Administration/Arangodump.md) to create the backup, For sure you want to have backups of your graph data, you can use [Arangodump](../Programs/Arangodump/README.md) to create the backup,
and [Arangorestore](../Administration/Arangorestore.md) to restore a backup into a new ArangoDB. You should however note that: and [Arangorestore](../Programs/Arangorestore/README.md) to restore a backup into a new ArangoDB. You should however note that:
- you need the system collection `_graphs` if you backup named graphs. - you need the system collection `_graphs` if you backup named graphs.
- you need to backup the complete set of all edge and vertex collections your graph consists of. Partial dump/restore may not work. - you need to backup the complete set of all edge and vertex collections your graph consists of. Partial dump/restore may not work.
@ -170,7 +170,7 @@ Example Graphs
ArangoDB comes with a set of easily graspable graphs that are used to demonstrate the APIs. ArangoDB comes with a set of easily graspable graphs that are used to demonstrate the APIs.
You can use the `add samples` tab in the `create graph` window in the web interface, or load the module `@arangodb/graph-examples/example-graph` in arangosh and use it to create instances of these graphs in your ArangoDB. You can use the `add samples` tab in the `create graph` window in the web interface, or load the module `@arangodb/graph-examples/example-graph` in arangosh and use it to create instances of these graphs in your ArangoDB.
Once you've created them, you can [inspect them in the web interface](../Administration/WebInterface/Graphs.md) - which was used to create the pictures below. Once you've created them, you can [inspect them in the web interface](../Programs/WebInterface/Graphs.md) - which was used to create the pictures below.
You [can easily look into the innards of this script](https://github.com/arangodb/arangodb/blob/devel/js/common/modules/%40arangodb/graph-examples/example-graph.js) for reference about how to manage graphs programmatically.

View File

@ -53,7 +53,7 @@ Getting started
--------------- ---------------
First of all, SmartGraphs *cannot use existing collections*; when switching to SmartGraph from an existing data set, you have to import the data into a fresh SmartGraph. First of all, SmartGraphs *cannot use existing collections*; when switching to SmartGraph from an existing data set, you have to import the data into a fresh SmartGraph.
This switch can be easily achieved with [arangodump](../../Administration/Arangodump.md) and [arangorestore](../../Administration/Arangorestore.md). This switch can be easily achieved with [arangodump](../../Programs/Arangodump/README.md) and [arangorestore](../../Programs/Arangorestore/README.md).
The only thing you have to change in this pipeline is that you create the new collections with the SmartGraph before starting `arangorestore`. The only thing you have to change in this pipeline is that you create the new collections with the SmartGraph before starting `arangorestore`.
* Create a graph * Create a graph

View File

@ -12,7 +12,7 @@ Version 3.3
support means you can fallback to a replica of your cluster in case of a support means you can fallback to a replica of your cluster in case of a
disaster in one datacenter. disaster in one datacenter.
- [**Encrypted Backups**](Administration/Arangodump.md#encryption): - [**Encrypted Backups**](Programs/Arangodump/Examples.md#encryption):
Arangodump can create backups encrypted with a secret key using AES256 Arangodump can create backups encrypted with a secret key using AES256
block cipher. block cipher.
@ -90,6 +90,6 @@ Version 3.0
- [**Foxx 3.0**](Foxx/README.md): overhauled JS framework for data-centric - [**Foxx 3.0**](Foxx/README.md): overhauled JS framework for data-centric
microservices microservices
- Significantly improved [**Web Interface**](Administration/WebInterface/README.md) - Significantly improved [**Web Interface**](Programs/WebInterface/README.md)
Also see [What's New in 3.0](ReleaseNotes/NewFeatures30.md). Also see [What's New in 3.0](ReleaseNotes/NewFeatures30.md).

View File

@ -86,8 +86,8 @@ If you are upgrading ArangoDB from an earlier version you need to copy your old
database directory [to the new default paths](#custom-install-paths). Upgrading database directory [to the new default paths](#custom-install-paths). Upgrading
will keep your old data, password and choice of storage engine as it is. will keep your old data, password and choice of storage engine as it is.
Switching to the RocksDB storage engine requires an Switching to the RocksDB storage engine requires an
[export](../Administration/Arangoexport.md) and [export](../Programs/Arangoexport/README.md) and
[reimport](../Administration/Arangoimport.md) of your data. [reimport](../Programs/Arangoimport/README.md) of your data.
Starting Starting
-------- --------

View File

@ -1,10 +1,7 @@
Datafile Debugger Arango-dfdb Examples
================= ====================
In Case Of Disaster ArangoDB uses append-only journals. Data corruption should only occur when the
-------------------
AranagoDB uses append-only journals. Data corruption should only occur when the
database server is killed. In this case, the corruption should only occur in the database server is killed. In this case, the corruption should only occur in the
last object(s) that have been written to the journal. last object(s) that have been written to the journal.
@ -14,7 +11,7 @@ hardware fault occurred.
If a journal or datafile is corrupt, shut down the database server and start If a journal or datafile is corrupt, shut down the database server and start
the program the program
unix> arango-dfdb arango-dfdb
in order to check the consistency of the datafiles and journals. This brings up in order to check the consistency of the datafiles and journals. This brings up
@ -76,4 +73,3 @@ If you answer **Y**, the corrupted entry will be removed.
If you see a corruption in a datafile (and not a journal), then something is If you see a corruption in a datafile (and not a journal), then something is
terribly wrong. These files are immutable and never changed by ArangoDB. A terribly wrong. These files are immutable and never changed by ArangoDB. A
corruption in such a file is an indication of a hard-disk failure. corruption in such a file is an indication of a hard-disk failure.

View File

@ -0,0 +1,12 @@
Arango-dfdb
===========
{% hint 'info' %}
`arango-dfdb` works with the
[MMFiles storage engine](../../Architecture/StorageEngines.md) only.
{% endhint %}
The ArangoDB Datafile Debugger can check datafiles for corruptions
and remove invalid entries to repair them. Such corruptions should
not occur unless there was a hardware failure. The tool is to be
used with caution.

View File

@ -0,0 +1,24 @@
Arangobench Examples
====================
Start Arangobench with the default user and server endpoint:
arangobench
Run the 'version' test case with 1000 requests, without concurrency:
--test-case version --requests 1000 --concurrency 1
Run the 'document' test case with 1000 requests, with two concurrent threads:
--test-case document --requests 1000 --concurrency 2
Run the 'document' test case with 1000 requests, with concurrency 2,
with async requests:
--test-case document --requests 1000 --concurrency 2 --async true
Run the 'document' test case with 1000 requests, with concurrency 2,
using batch requests:
--test-case document --requests 1000 --concurrency 2 --batch-size 10

View File

@ -1,17 +1,14 @@
Arangobench Arangobench Startup Options
=========== ===========================
Arangobench is ArangoDB's benchmark and test tool. It can be used to issue test Usage: `arangobench [<options>]`
requests to the database for performance and server function testing.
It supports parallel querying and batch requests.
Related blog posts: @startDocuBlock program_options_arangobench
- [Measuring ArangoDB insert performance](https://www.arangodb.com/2012/10/gain-factor-of-5-using-batch-updates/) Full description
- [Gain factor of 5 using batch requests](https://www.arangodb.com/2013/11/measuring-arangodb-insert-performance/) ----------------
Startup options {### TODO: Compare the differences and remove everything that is already in the auto-generated tables ###}
---------------
- *--async*: Send asynchronous requests. The default value is *false*. - *--async*: Send asynchronous requests. The default value is *false*.
@ -84,27 +81,3 @@ Startup options
- *--verbose*: Print out replies if the HTTP header indicates DB errors. - *--verbose*: Print out replies if the HTTP header indicates DB errors.
(default: false). (default: false).
### Examples
arangobench
Starts Arangobench with the default user and server endpoint.
--test-case version --requests 1000 --concurrency 1
Runs the 'version' test case with 1000 requests, without concurrency.
--test-case document --requests 1000 --concurrency 2
Runs the 'document' test case with 2000 requests, with two concurrent threads.
--test-case document --requests 1000 --concurrency 2 --async true
Runs the 'document' test case with 2000 requests, with concurrency 2,
with async requests.
--test-case document --requests 1000 --concurrency 2 --batch-size 10
Runs the 'document' test case with 2000 requests, with concurrency 2,
using batch requests.

View File

@ -0,0 +1,11 @@
Arangobench
===========
_Arangobench_ is ArangoDB's benchmark and test tool. It can be used to issue test
requests to the database for performance and server function testing.
It supports parallel querying and batch requests.
Related blog posts:
- [Measuring ArangoDB insert performance](https://www.arangodb.com/2012/10/gain-factor-of-5-using-batch-updates/)
- [Gain factor of 5 using batch requests](https://www.arangodb.com/2013/11/measuring-arangodb-insert-performance/)


@ -0,0 +1,6 @@
ArangoDB Server Options
=======================
Usage: `arangod [<options>]`
@startDocuBlock program_options_arangod


@ -0,0 +1,8 @@
ArangoDB Server
===============
The ArangoDB daemon (_arangod_) is the central server binary, which can run in
different modes for a variety of setups like single server and clusters.
See [Administration](../../Administration/README.md) for server configuration
and [Deployment](../../Deployment/README.md) for operation mode details.


@ -1,11 +1,9 @@
Dumping Data from an ArangoDB database Arangodump Examples
====================================== ===================
To dump data from an ArangoDB server instance, you will need to invoke _arangodump_. _arangodump_ can be invoked in a command line by executing the following command:
Dumps can be re-imported with _arangorestore_. _arangodump_ can be invoked by executing
the following command:
unix> arangodump --output-directory "dump" arangodump --output-directory "dump"
This will connect to an ArangoDB server and dump all non-system collections from This will connect to an ArangoDB server and dump all non-system collections from
the default database (*_system*) into an output directory named *dump*. the default database (*_system*) into an output directory named *dump*.
@ -14,23 +12,23 @@ an intentional security measure to prevent you from accidentally overwriting alr
dumped data. If you are positive that you want to overwrite data in the output dumped data. If you are positive that you want to overwrite data in the output
directory, you can use the parameter *--overwrite true* to confirm this: directory, you can use the parameter *--overwrite true* to confirm this:
unix> arangodump --output-directory "dump" --overwrite true arangodump --output-directory "dump" --overwrite true
_arangodump_ will by default connect to the *_system* database using the default _arangodump_ will by default connect to the *_system* database using the default
endpoint. If you want to connect to a different database or a different endpoint, endpoint. If you want to connect to a different database or a different endpoint,
or use authentication, you can use the following command-line options: or use authentication, you can use the following command-line options:
* *--server.database <string>*: name of the database to connect to - *--server.database <string>*: name of the database to connect to
* *--server.endpoint <string>*: endpoint to connect to - *--server.endpoint <string>*: endpoint to connect to
* *--server.username <string>*: username - *--server.username <string>*: username
* *--server.password <string>*: password to use (omit this and you'll be prompted for the - *--server.password <string>*: password to use (omit this and you'll be prompted for the
password) password)
* *--server.authentication <bool>*: whether or not to use authentication - *--server.authentication <bool>*: whether or not to use authentication
Here's an example of dumping data from a non-standard endpoint, using a dedicated Here's an example of dumping data from a non-standard endpoint, using a dedicated
[database name](../Appendix/Glossary.md#database-name): [database name](../../Appendix/Glossary.md#database-name):
unix> arangodump --server.endpoint tcp://192.168.173.13:8531 --server.username backup --server.database mydb --output-directory "dump" arangodump --server.endpoint tcp://192.168.173.13:8531 --server.username backup --server.database mydb --output-directory "dump"
When finished, _arangodump_ will print out a summary line with some aggregate When finished, _arangodump_ will print out a summary line with some aggregate
statistics about what it did, e.g.: statistics about what it did, e.g.:
@ -41,20 +39,20 @@ By default, _arangodump_ will dump both structural information and documents fro
non-system collections. To adjust this, there are the following command-line non-system collections. To adjust this, there are the following command-line
arguments: arguments:
* *--dump-data <bool>*: set to *true* to include documents in the dump. Set to *false* - *--dump-data <bool>*: set to *true* to include documents in the dump. Set to *false*
to exclude documents. The default value is *true*. to exclude documents. The default value is *true*.
* *--include-system-collections <bool>*: whether or not to include system collections - *--include-system-collections <bool>*: whether or not to include system collections
in the dump. The default value is *false*. in the dump. The default value is *false*.
For example, to only dump structural information of all collections (including system For example, to only dump structural information of all collections (including system
collections), use: collections), use:
unix> arangodump --dump-data false --include-system-collections true --output-directory "dump" arangodump --dump-data false --include-system-collections true --output-directory "dump"
To restrict the dump to just specific collections, there is the *--collection* option. To restrict the dump to just specific collections, there is the *--collection* option.
It can be specified multiple times if required: It can be specified multiple times if required:
unix> arangodump --collection myusers --collection myvalues --output-directory "dump" arangodump --collection myusers --collection myvalues --output-directory "dump"
Structural information for a collection will be saved in files with name pattern Structural information for a collection will be saved in files with name pattern
*<collection-name>.structure.json*. Each structure file will contain a JSON object *<collection-name>.structure.json*. Each structure file will contain a JSON object
@ -74,7 +72,7 @@ in the cluster.
However, as opposed to the single instance situation, this operation However, as opposed to the single instance situation, this operation
does not guarantee to dump a consistent snapshot if write operations does not guarantee to dump a consistent snapshot if write operations
happen during the dump operation. It is therefore recommended not to happen during the dump operation. It is therefore recommended not to
perform any data-modifcation operations on the cluster whilst *arangodump* perform any data-modification operations on the cluster whilst *arangodump*
is running. is running.
As above, the output will be one structure description file and one data As above, the output will be one structure description file and one data
@ -85,7 +83,6 @@ and the shard keys.
Note that the version of the arangodump client tool needs to match the Note that the version of the arangodump client tool needs to match the
version of the ArangoDB server it connects to. version of the ArangoDB server it connects to.
Advanced cluster options Advanced cluster options
------------------------ ------------------------
@ -94,29 +91,28 @@ Starting with version 3.1.17, collections may be created with shard
distribution identical to an existing prototypical collection; distribution identical to an existing prototypical collection;
i.e. shards are distributed in the very same pattern as in the i.e. shards are distributed in the very same pattern as in the
prototype collection. Such collections cannot be dumped without the prototype collection. Such collections cannot be dumped without the
reference collection or arangodump with yield an error. reference collection or arangodump yields an error.
unix> arangodump --collection clonedCollection --output-directory "dump" arangodump --collection clonedCollection --output-directory "dump"
ERROR Collection clonedCollection's shard distribution is based on a that of collection prototypeCollection, which is not dumped along. You may dump the collection regardless of the missing prototype collection by using the --ignore-distribute-shards-like-errors parameter. ERROR Collection clonedCollection's shard distribution is based on a that of collection prototypeCollection, which is not dumped along. You may dump the collection regardless of the missing prototype collection by using the --ignore-distribute-shards-like-errors parameter.
There are two ways to approach that problem: Solve it, i.e. dump the There are two ways to approach that problem.
prototype collection along: Dump the prototype collection along:
unix> arangodump --collection clonedCollection --collection prototypeCollection --output-directory "dump" arangodump --collection clonedCollection --collection prototypeCollection --output-directory "dump"
Processed 2 collection(s), wrote 81920 byte(s) into datafiles, sent 1 batch(es) Processed 2 collection(s), wrote 81920 byte(s) into datafiles, sent 1 batch(es)
Or override that behaviour to be able to dump the collection Or override that behavior to be able to dump the collection
individually. individually:
unix> arangodump --collection B clonedCollection --output-directory "dump" --ignore-distribute-shards-like-errors arangodump --collection B clonedCollection --output-directory "dump" --ignore-distribute-shards-like-errors
Processed 1 collection(s), wrote 34217 byte(s) into datafiles, sent 1 batch(es) Processed 1 collection(s), wrote 34217 byte(s) into datafiles, sent 1 batch(es)
No that in consequence, restoring such a collection without its Note that in consequence, restoring such a collection without its
prototype is affected. [arangorestore](Arangorestore.md) prototype is affected. [arangorestore](../Arangorestore/README.md)
Encryption Encryption
---------- ----------
@ -153,4 +149,3 @@ dd if=/dev/random bs=1 count=32 of=yourSecretKeyFile
For security, it is best to create these keys offline (away from your For security, it is best to create these keys offline (away from your
database servers) and directly store them in your secret management database servers) and directly store them in your secret management
tool. tool.


@ -0,0 +1,6 @@
Arangodump Options
==================
Usage: `arangodump [<options>]`
@startDocuBlock program_options_arangodump


@ -0,0 +1,15 @@
Arangodump
==========
_Arangodump_ is a command-line client tool to create backups of the data and
structures stored in [ArangoDB servers](../Arangod/README.md).
Dumps are meant to be restored with [_Arangorestore_](../Arangorestore/README.md).
If you want to export for external programs to formats like JSON or CSV, see
[_Arangoexport_](../Arangoexport/README.md) instead.
_Arangodump_ can backup selected collections or all collections of a database,
optionally including _system_ collections. One can backup the structure, i.e.
the collections with their configuration without any data, only the data stored
in them, or both. Dumps can optionally be encrypted.


@ -1,11 +1,9 @@
Exporting Data from an ArangoDB database Arangoexport Examples
====================================== =====================
To export data from an ArangoDB server instance, you will need to invoke _arangoexport_. _arangoexport_ can be invoked by executing the following command in a command line:
_arangoexport_ can be invoked by executing
the following command:
unix> arangoexport --collection test --output-directory "dump" arangoexport --collection test --output-directory "dump"
This exports the collection *test* into the directory *dump* as one big JSON array. Every entry This exports the collection *test* into the directory *dump* as one big JSON array. Every entry
in this array is one document from the collection without a specific order. To export more than in this array is one document from the collection without a specific order. To export more than
@ -17,17 +15,17 @@ _arangoexport_ will by default connect to the *_system* database using the defau
endpoint. If you want to connect to a different database or a different endpoint, endpoint. If you want to connect to a different database or a different endpoint,
or use authentication, you can use the following command-line options: or use authentication, you can use the following command-line options:
* *--server.database <string>*: name of the database to connect to - *--server.database <string>*: name of the database to connect to
* *--server.endpoint <string>*: endpoint to connect to - *--server.endpoint <string>*: endpoint to connect to
* *--server.username <string>*: username - *--server.username <string>*: username
* *--server.password <string>*: password to use (omit this and you'll be prompted for the - *--server.password <string>*: password to use (omit this and you'll be prompted for the
password) password)
* *--server.authentication <bool>*: whether or not to use authentication - *--server.authentication <bool>*: whether or not to use authentication
Here's an example of exporting data from a non-standard endpoint, using a dedicated Here's an example of exporting data from a non-standard endpoint, using a dedicated
[database name](../Appendix/Glossary.md#database-name): [database name](../../Appendix/Glossary.md#database-name):
unix> arangoexport --server.endpoint tcp://192.168.173.13:8531 --server.username backup --server.database mydb --collection test --output-directory "my-export" arangoexport --server.endpoint tcp://192.168.173.13:8531 --server.username backup --server.database mydb --collection test --output-directory "my-export"
When finished, _arangoexport_ will print out a summary line with some aggregate When finished, _arangoexport_ will print out a summary line with some aggregate
statistics about what it did, e.g.: statistics about what it did, e.g.:
@ -38,7 +36,7 @@ statistics about what it did, e.g.:
Export JSON Export JSON
----------- -----------
unix> arangoexport --type json --collection test arangoexport --type json --collection test
This exports the collection *test* into the output directory *export* as one json array. This exports the collection *test* into the output directory *export* as one json array.
Every array entry is one document from the collection *test* Every array entry is one document from the collection *test*
@ -46,24 +44,25 @@ Every array entry is one document from the collection *test*
Export JSONL Export JSONL
------------ ------------
unix> arangoexport --type jsonl --collection test arangoexport --type jsonl --collection test
This exports the collection *test* into the output directory *export* as [jsonl](http://jsonlines.org). Every line in the export is one document from the collection *test* as json. This exports the collection *test* into the output directory *export* as [JSONL](http://jsonlines.org).
Every line in the export is one document from the collection *test* as JSON.
Export CSV Export CSV
---------- ----------
unix> arangoexport --type csv --collection test --fields _key,_id,_rev arangoexport --type csv --collection test --fields _key,_id,_rev
This exports the collection *test* into the output directory *export* as CSV. The first This exports the collection *test* into the output directory *export* as CSV. The first
line contains the header with all field names. Each line is one document represented as line contains the header with all field names. Each line is one document represented as
CSV and separated with a comma. Objects and Arrays are represented as a JSON string. CSV and separated with a comma. Objects and arrays are represented as a JSON string.
Export XML Export XML
---------- ----------
unix> arangoexport --type xml --collection test arangoexport --type xml --collection test
This exports the collection *test* into the output directory *export* as generic XML. This exports the collection *test* into the output directory *export* as generic XML.
The root element of the generated XML file is named *collection*. The root element of the generated XML file is named *collection*.
@ -83,25 +82,14 @@ If you export all attributes (*--xgmml-label-only false*) note that attribute ty
Bad Bad
// doc1 { "rank": 1 } // doc1
{ { "rank": "2" } // doc2
"rank": 1
}
// doc2
{
"rank": "2"
}
Good Good
// doc1 { "rank": 1 } // doc1
{ { "rank": 2 } // doc2
"rank": 1
}
// doc2
{
"rank": 2
}
{% endhint %} {% endhint %}
**XGMML specific options** **XGMML specific options**
@ -113,35 +101,41 @@ Good
**Export based on collections** **Export based on collections**
unix> arangoexport --type xgmml --graph-name mygraph --collection vertex --collection edge arangoexport --type xgmml --graph-name mygraph --collection vertex --collection edge
This exports the a unnamed graph with vertex collection *vertex* and edge collection *edge* into the xgmml file *mygraph.xgmml*. This exports an unnamed graph with vertex collection *vertex* and edge collection *edge* into the xgmml file *mygraph.xgmml*.
**Export based on a named graph** **Export based on a named graph**
unix> arangoexport --type xgmml --graph-name mygraph arangoexport --type xgmml --graph-name mygraph
This exports the named graph mygraph into the xgmml file *mygraph.xgmml*. This exports the named graph mygraph into the xgmml file *mygraph.xgmml*.
**Export XGMML without attributes** **Export XGMML without attributes**
unix> arangoexport --type xgmml --graph-name mygraph --xgmml-label-only true arangoexport --type xgmml --graph-name mygraph --xgmml-label-only true
This exports the named graph mygraph into the xgmml file *mygraph.xgmml* without the *&lt;att&gt;* tag in nodes and edges. This exports the named graph mygraph into the xgmml file *mygraph.xgmml* without the *&lt;att&gt;* tag in nodes and edges.
**Export XGMML with a specific label** **Export XGMML with a specific label**
unix> arangoexport --type xgmml --graph-name mygraph --xgmml-label-attribute name arangoexport --type xgmml --graph-name mygraph --xgmml-label-attribute name
This exports the named graph mygraph into the xgmml file *mygraph.xgmml* with a label from documents attribute *name* instead of the default attribute *label*. This exports the named graph mygraph into the xgmml file *mygraph.xgmml* with a label from documents attribute *name* instead of the default attribute *label*.
Export via AQL query Export via AQL query
-------------------- --------------------
unix> arangoexport --type jsonl --query "for book in books filter book.sells > 100 return book" arangoexport --type jsonl --query "FOR book IN books FILTER book.sells > 100 RETURN book"
Export via an aql query allows you to export the returned data as the type specified with *--type*. Export via an AQL query allows you to export the returned data as the type specified with *--type*.
The example exports all books as jsonl that are sold more than 100 times. The example exports all books as JSONL that are sold more than 100 times.
arangoexport --type csv --fields title,category1,category2 --query "FOR book IN books RETURN { title: book.title, category1: book.categories[0], category2: book.categories[1] }"
A *fields* list is required for CSV exports, but you can use an AQL query to produce
these fields. For example, you can de-normalize document structures like arrays and
nested objects to a tabular form as demonstrated above.


@ -0,0 +1,6 @@
Arangoexport Options
====================
Usage: `arangoexport [<options>]`
@startDocuBlock program_options_arangoexport


@ -0,0 +1,9 @@
Arangoexport
============
_Arangoexport_ is a command-line client tool to export data from
[ArangoDB servers](../Arangod/README.md) to formats like JSON, CSV or XML for
consumption by third-party tools.
If you want to create backups, see [_Arangodump_](../Arangodump/README.md)
instead.


@ -0,0 +1,146 @@
Arangoimport Details
====================
The most convenient method to import a lot of data into ArangoDB is to use the
*arangoimport* command-line tool. It allows you to bulk import data records
from a file into a database collection. Multiple files can be imported into
the same or different collections by invoking it multiple times.
Importing into an Edge Collection
---------------------------------
Arangoimport can also be used to import data into an existing edge collection.
The import data must, for each edge to import, contain at least the *_from* and
*_to* attributes. These indicate which other two documents the edge should connect.
It is necessary that these attributes are set for all records, and point to
valid document IDs in existing collections.
*Example*
```js
{ "_from" : "users/1234", "_to" : "users/4321", "desc" : "1234 is connected to 4321" }
```
**Note**: The edge collection must already exist when the import is started. Using
the *--create-collection* flag will not work because arangoimport will always try to
create a regular document collection if the target collection does not exist.
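For example, the workflow could look like this (the collection name `connections` and file name `edges.json` are placeholders; a sketch assuming a local server with the default endpoint):

```
# Create the edge collection first; arangoimport would otherwise create a
# regular document collection:
arangosh --javascript.execute-string 'db._createEdgeCollection("connections")'

# Then import JSONL edge records (each line containing _from and _to):
arangoimport --file "edges.json" --type jsonl --collection "connections"
```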
Attribute Naming and Special Attributes
---------------------------------------
Attributes whose names start with an underscore are treated in a special way by
ArangoDB:
- the optional *_key* attribute contains the document's key. If specified, the value
must be formally valid (e.g. must be a string and conform to the naming conventions).
Additionally, the key value must be unique within the
collection the import is run for.
- *_from*: when importing into an edge collection, this attribute contains the id
of one of the documents connected by the edge. The value of *_from* must be a
syntactically valid document id and the referred collection must exist.
- *_to*: when importing into an edge collection, this attribute contains the id
of the other document connected by the edge. The value of *_to* must be a
syntactically valid document id and the referred collection must exist.
- *_rev*: this attribute contains the revision number of a document. However, the
revision numbers are managed by ArangoDB and cannot be specified on import. Thus
any value in this attribute is ignored on import.
If you import values into *_key*, you should make sure they are valid and unique.
When importing data into an edge collection, you should make sure that all import
documents contain *_from* and *_to* and that their values point to existing documents.
To avoid specifying complete document ids (consisting of collection names and document
keys) for *_from* and *_to* values, there are the options *--from-collection-prefix* and
*--to-collection-prefix*. If specified, these values will be automatically prepended
to each value in *_from* (or *_to* resp.). This allows specifying only document keys
inside *_from* and/or *_to*.
*Example*
arangoimport --from-collection-prefix users --to-collection-prefix products ...
Importing the following document will then create an edge between *users/1234* and
*products/4321*:
```js
{ "_from" : "1234", "_to" : "4321", "desc" : "users/1234 is connected to products/4321" }
```
Updating existing documents
---------------------------
By default, arangoimport will try to insert all documents from the import file into the
specified collection. In case the import file contains documents that are already present
in the target collection (matching is done via the *_key* attributes), then a default
arangoimport run will not import these documents and complain about unique key constraint
violations.
However, arangoimport can be used to update or replace existing documents in case they
already exist in the target collection. It provides the command-line option *--on-duplicate*
to control the behavior in case a document is already present in the database.
The default value of *--on-duplicate* is *error*. This means that when the import file
contains a document that is present in the target collection already, then trying to
re-insert a document with the same *_key* value is considered an error, and the document in
the database will not be modified.
Other possible values for *--on-duplicate* are:
- *update*: each document present in the import file that is also present in the target
collection already will be updated by arangoimport. *update* will perform a partial update
of the existing document, modifying only the attributes that are present in the import
file and leaving all other attributes untouched.
The values of system attributes *_id*, *_key*, *_rev*, *_from* and *_to* cannot be
updated or replaced in existing documents.
- *replace*: each document present in the import file that is also present in the target
collection already will be replaced by arangoimport. *replace* will replace the existing
document entirely, resulting in a document with only the attributes specified in the import
file.
The values of system attributes *_id*, *_key*, *_rev*, *_from* and *_to* cannot be
updated or replaced in existing documents.
- *ignore*: each document present in the import file that is also present in the target
collection already will be ignored and not modified in the target collection.
When *--on-duplicate* is set to either *update* or *replace*, arangoimport will return the
number of documents updated/replaced in the *updated* return value. When set to another
value, the value of *updated* will always be zero. When *--on-duplicate* is set to *ignore*,
arangoimport will return the number of ignored documents in the *ignored* return value.
When set to another value, *ignored* will always be zero.
It is possible to perform a combination of inserts and updates/replaces with a single
arangoimport run. When *--on-duplicate* is set to *update* or *replace*, all documents present
in the import file will be inserted into the target collection provided they are valid
and do not already exist with the specified *_key*. Documents that are already present
in the target collection (identified by *_key* attribute) will instead be updated/replaced.
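As a sketch of these semantics (plain Python simulating an in-memory collection keyed by *_key*; not ArangoDB code):

```python
def import_docs(collection, docs, on_duplicate="error"):
    """Simulate arangoimport's --on-duplicate modes on a dict keyed by _key."""
    stats = {"created": 0, "updated": 0, "ignored": 0, "errors": 0}
    for doc in docs:
        key = doc["_key"]
        if key not in collection:
            collection[key] = dict(doc)
            stats["created"] += 1
        elif on_duplicate == "update":
            collection[key].update(doc)  # partial update: merge attributes
            stats["updated"] += 1
        elif on_duplicate == "replace":
            collection[key] = dict(doc)  # full replacement
            stats["updated"] += 1
        elif on_duplicate == "ignore":
            stats["ignored"] += 1
        else:                            # "error" (the default)
            stats["errors"] += 1         # unique key constraint violation
    return stats

col = {"1": {"_key": "1", "a": 1, "b": 2}}
print(import_docs(col, [{"_key": "1", "a": 9}], on_duplicate="update"))
# {'created': 0, 'updated': 1, 'ignored': 0, 'errors': 0}
print(col["1"])
# {'_key': '1', 'a': 9, 'b': 2} -- attribute 'b' is left untouched
```

With `replace` instead of `update`, the resulting document would contain only `_key` and `a`.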
Result output
-------------
An _arangoimport_ import run will print out the final results on the command line.
It will show the
- number of documents created (*created*)
- number of documents updated/replaced (*updated/replaced*, only non-zero if
*--on-duplicate* was set to *update* or *replace*, see below)
- number of warnings or errors that occurred on the server side (*warnings/errors*)
- number of ignored documents (only non-zero if *--on-duplicate* was set to *ignore*).
*Example*
```js
created: 2
warnings/errors: 0
updated/replaced: 0
ignored: 0
```
For CSV and TSV imports, the total number of input file lines read will also be printed
(*lines read*).
_arangoimport_ will also print out details about warnings and errors that happened on the
server-side (if any).


@ -0,0 +1,161 @@
Arangoimport Examples: CSV / TSV
================================
Importing CSV Data
------------------
_arangoimport_ offers the possibility to import data from CSV files. This
comes in handy when the data at hand is in CSV format already and you don't
want to spend time converting it to JSON for the import.
To import data from a CSV file, make sure your file contains the attribute names
in the first row. All the following lines in the file will be interpreted as
data records and will be imported.
The CSV import requires the data to have a homogeneous structure. All records
must have exactly the same number of columns as there are headers. By default,
lines with a different number of values will not be imported and there will be
warnings for them. To still import lines with fewer values than in the header,
there is the *--ignore-missing* option. If set to true, lines that have a
different number of fields will be imported. In this case only those attributes
will be populated for which there are values. Attributes for which there are
no values present will silently be discarded.
Example:
```
"first","last","age","active","dob"
"John","Connor",25,true
"Jim","O'Brady"
```
With *--ignore-missing* this will produce the following documents:
```js
{ "first" : "John", "last" : "Connor", "active" : true, "age" : 25 }
{ "first" : "Jim", "last" : "O'Brady" }
```
The cell values can have different data types though. If a cell does not have
any value, it can be left empty in the file. These values will not be imported,
so the attributes will not "be there" in the created document. Values enclosed
in quotes will be imported as strings, so to import numeric values, boolean
values or the null value, don't enclose the value in quotes in your file.
We'll be using the following input file for the CSV import:
```
"first","last","age","active","dob"
"John","Connor",25,true,
"Jim","O'Brady",19,,
"Lisa","Jones",,,"1981-04-09"
Hans,dos Santos,0123,,
Wayne,Brewer,,false,
```
The command line to execute the import is:
arangoimport --file "data.csv" --type csv --collection "users"
The above data will be imported into 5 documents which will look as follows:
```js
{ "first" : "John", "last" : "Connor", "active" : true, "age" : 25 }
{ "first" : "Jim", "last" : "O'Brady", "age" : 19 }
{ "first" : "Lisa", "last" : "Jones", "dob" : "1981-04-09" }
{ "first" : "Hans", "last" : "dos Santos", "age" : 123 }
{ "first" : "Wayne", "last" : "Brewer", "active" : false }
```
As can be seen, values left completely empty in the input file will be treated
as absent. Numeric values not enclosed in quotes will be treated as numbers.
Note that leading zeros in numeric values will be removed. To import numbers
with leading zeros, please use strings.
The literals *true* and *false* will be treated as booleans if they are not
enclosed in quotes. Other values not enclosed in quotes will be treated as
strings.
Any values enclosed in quotes will be treated as strings, too.
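The typing rules above can be sketched as follows (a simplified model for illustration, not arangoimport's actual parser):

```python
def parse_csv_value(raw, quoted):
    """Simplified sketch of the CSV value typing rules described above."""
    if quoted:                      # quoted values are always strings
        return raw
    if raw == "":                   # empty cells: attribute is omitted
        return None
    if raw in ("true", "false"):    # unquoted literals become booleans
        return raw == "true"
    try:
        return int(raw)             # unquoted numbers; leading zeros dropped
    except ValueError:
        try:
            return float(raw)
        except ValueError:
            return raw              # anything else is treated as a string

print(parse_csv_value("0123", quoted=False))  # 123
print(parse_csv_value("true", quoted=False))  # True
print(parse_csv_value("25", quoted=True))     # '25'
```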
String values containing the quote character or the separator must be enclosed
with quote characters. Within a string, the quote character itself must be
escaped with another quote character (or with a backslash if the *--backslash-escape*
option is used).
Note that the quote and separator characters can be adjusted via the
*--quote* and *--separator* arguments when invoking _arangoimport_. The quote
character defaults to the double quote (*"*). To use a literal quote in a
string, you can use two quote characters.
To use backslash for escaping quote characters, please set the option
*--backslash-escape* to *true*.
The importer supports Windows (CRLF) and Unix (LF) line breaks. Line breaks might
also occur inside values that are enclosed with the quote character.
Here's an example for using literal quotes and newlines inside values:
```
"name","password"
"Foo","r4ndom""123!"
"Bar","wow!
this is a
multiline password!"
"Bartholomew ""Bart"" Simpson","Milhouse"
```
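As an illustration of these quoting rules, a file following this convention could be generated with Python's csv module, whose default doubled-quote escaping and *QUOTE_NONNUMERIC* mode match the behavior described above (a sketch, not part of arangoimport):

```python
import csv
import io

# Quote strings (imported as strings) and leave numbers unquoted (imported
# as numbers); embedded quote characters are escaped by doubling them.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_NONNUMERIC)
writer.writerow(["name", "age", "password"])
writer.writerow(["John", 25, 'r4ndom"123!'])

print(buf.getvalue())
# "name","age","password"
# "John",25,"r4ndom""123!"
```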
Extra whitespace at the end of each line will be ignored. Whitespace at the
start of lines or between field values will not be ignored, so please make sure
that there is no extra whitespace in front of values or between them.
Importing TSV Data
------------------
You may also import tab-separated values (TSV) from a file. This format is very
simple: every line in the file represents a data record. There is no quoting or
escaping. That also means that the separator character (which defaults to the
tab character) must not be used anywhere in the actual data.
As with CSV, the first line in the TSV file must contain the attribute names,
and all lines must have an identical number of values.
If a different separator character or string should be used, it can be specified
with the *--separator* argument.
An example command line to execute the TSV import is:
arangoimport --file "data.tsv" --type tsv --collection "users"
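A matching *data.tsv* input file could look like this (fields separated by tab characters; the contents are placeholders):

```
first	last	age
John	Connor	25
Jim	O'Brady	19
```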
Attribute Name Translation
--------------------------
For the CSV and TSV input formats, attribute names can be translated automatically.
This is useful in case the import file has different attribute names than those
that should be used in ArangoDB.
A common use case is to rename an "id" column from the input file into "_key" as
it is expected by ArangoDB. To do this, specify the following translation when
invoking arangoimport:
arangoimport --file "data.csv" --type csv --translate "id=_key"
Other common cases are to rename columns in the input file to *_from* and *_to*:
arangoimport --file "data.csv" --type csv --translate "from=_from" --translate "to=_to"
The *translate* option can be specified multiple times. The source attribute name
and the target attribute name must be separated by a *=*.
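Conceptually, each translation is a plain key rename applied to every record before it is sent to the server. A rough JavaScript sketch of the idea (a hypothetical helper, not _arangoimport_'s actual implementation):

```js
// Sketch: apply --translate mappings such as "id=_key" to one record.
// Hypothetical illustration of the concept, not arangoimport's implementation.
function translateAttributes(record, translations) {
  const result = {};
  for (const [name, value] of Object.entries(record)) {
    // Use the translated name if one was specified, else keep the original
    result[translations[name] || name] = value;
  }
  return result;
}

console.log(translateAttributes({ id: "4711", name: "John" }, { id: "_key" }));
// { _key: '4711', name: 'John' }
```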
Ignoring Attributes
-------------------
For the CSV and TSV input formats, certain attribute names can be ignored on imports.
In an ArangoDB cluster there are cases where this can come in handy,
when your documents already contain a `_key` attribute
and your collection has a sharding attribute other than `_key`: In the cluster this
configuration is not supported, because ArangoDB needs to guarantee the uniqueness of the `_key`
attribute in *all* shards of the collection.
arangoimport --file "data.csv" --type csv --remove-attribute "_key"
The same thing would apply if your data contains an *_id* attribute:
arangoimport --file "data.csv" --type csv --remove-attribute "_id"
Arangoimport Examples: JSON
===========================
Importing JSON-encoded Data
---------------------------
We will be using these example user records to import:
```js
{ "name" : { "first" : "John", "last" : "Connor" }, "active" : true, "age" : 25, "likes" : [ "swimming"] }
{ "name" : { "first" : "Jim", "last" : "O'Brady" }, "age" : 19, "likes" : [ "hiking", "singing" ] }
{ "name" : { "first" : "Lisa", "last" : "Jones" }, "dob" : "1981-04-09", "likes" : [ "running" ] }
```
To import these records, all you need to do is to put them into a file (with one
line for each record to import) and run the following command:
arangoimport --file "data.json" --type jsonl --collection "users"
This will transfer the data to the server, import the records, and print a
status summary. To show the intermediate progress during the import process, the
option *--progress* can be added. This option will show the percentage of the
input file that has been sent to the server. This will only be useful for big
import files.
arangoimport --file "data.json" --type json --collection users --progress true
It is also possible to use the output of another command as an input for arangoimport.
For example, the following shell command can be used to pipe data from the `cat`
process to arangoimport (Linux/Cygwin only):
cat data.json | arangoimport --file - --type json --collection users
In a Command Prompt or PowerShell on Windows, there is the `type` command:
type data.json | arangoimport --file - --type json --collection users
Note that you have to use `--file -` if you want to use another command as input
for arangoimport. No progress can be reported for such imports as the size of the input
will be unknown to arangoimport.
By default, the endpoint *tcp://127.0.0.1:8529* will be used. If you want to
specify a different endpoint, you can use the *--server.endpoint* option. You
probably want to specify a database user and password as well. You can do so by
using the options *--server.username* and *--server.password*. If you do not
specify a password, you will be prompted for one.
arangoimport --server.endpoint tcp://127.0.0.1:8529 --server.username root --file "data.json" --type json --collection "users"
Note that the collection (*users* in this case) must already exist or the import
will fail. If you want to create a new collection with the import data, you need
to specify the *--create-collection* option. Note that by default it will create
a document collection and not an edge collection.
arangoimport --file "data.json" --type json --collection "users" --create-collection true
To create an edge collection instead, use the *--create-collection-type* option
and set it to *edge*:
arangoimport --file "data.json" --collection "myedges" --create-collection true --create-collection-type edge
When importing data into an existing collection it is often convenient to first
remove all data from the collection and then start the import. This can be achieved
by passing the *--overwrite* parameter to _arangoimport_. If it is set to *true*,
any existing data in the collection will be removed prior to the import. Note
that any existing index definitions for the collection will be preserved even if
*--overwrite* is set to true.
arangoimport --file "data.json" --type json --collection "users" --overwrite true
As the import file already contains the data in JSON format, attribute names and
data types are fully preserved. As can be seen in the example data, there is no
need for all data records to have the same attribute names or types. Records can
be inhomogeneous.
Please note that by default, _arangoimport_ will import data into the specified
collection in the default database (*_system*). To specify a different database,
use the *--server.database* option when invoking _arangoimport_. If you want to
import into a nonexistent database you need to pass *--create-database true*.
Note that *--create-database* defaults to *false*.
The tool also supports parallel imports, with multiple threads. Using multiple
threads may provide a speedup, especially when using the RocksDB storage engine.
To specify the number of parallel threads use the `--threads` option:
arangoimport --threads 4 --file "data.json" --type json --collection "users"
Note that using multiple threads may lead to a non-sequential import of the input
data. Data that appears later in the input file may be imported earlier than data
that appears earlier in the input file. This is normally not a problem but may cause
issues when there are data dependencies or duplicates in the import data. In
this case, the number of threads should be set to 1.
JSON input file formats
-----------------------
*arangoimport* supports two formats when importing JSON data from a file:
- JSON
- JSONL
The first format that we already used above is commonly known as
[JSONL](http://jsonlines.org), also called new-line delimited JSON.
However, in contrast to the JSONL specification, it requires the input file to contain
one complete JSON *object* in each line (rather than any JSON value), e.g.
```js
{ "_key": "one", "value": 1 }
{ "_key": "two", "value": 2 }
{ "_key": "foo", "value": "bar" }
...
```
So one could argue that this is only a subset of JSONL, which permits top-level arrays.
The above format can be imported sequentially by _arangoimport_. It will read data
from the input file in chunks and send it in batches to the server. Each batch
will be about as big as specified in the command-line parameter *--batch-size*.
An alternative is to put one big JSON array into the input file like this:
```js
[
{ "_key": "one", "value": 1 },
{ "_key": "two", "value": 2 },
{ "_key": "foo", "value": "bar" },
...
]
```
This format allows line breaks in the input file within documents. The downside
is that the whole input file will need to be read by _arangoimport_ before it can
send the first batch. This might be a problem if the input file is big. By
default, _arangoimport_ will allow importing such files up to a size of about 16 MB.
If you want to allow your _arangoimport_ instance to use more memory, you may want
to increase the maximum file size by specifying the command-line option
*--batch-size*. For example, to set the batch size to 32 MB, use the following
command:
arangoimport --file "data.json" --type json --collection "users" --batch-size 33554432
Please also note that you may need to increase the value of *--batch-size* if
a single document inside the input file is bigger than the value of *--batch-size*.
Arangoimport Options
====================
Usage: `arangoimport [<options>]`
@startDocuBlock program_options_arangoimport
Arangoimport
============
_Arangoimport_ is a command-line client tool to import data in JSON, CSV and TSV
format to [ArangoDB servers](../Arangod/README.md).
If you want to restore backups, see [_Arangorestore_](../Arangorestore/README.md)
instead.
@@ -1,24 +1,22 @@
Arangorestore Examples
======================

To restore data from a dump previously created with [_Arangodump_](../Arangodump/README.md),
ArangoDB provides the _arangorestore_ tool.

Please note that in versions older than 3.3, _Arangorestore_
**must not be used to create several similar database instances in one installation**.
This means that if you have an _Arangodump_ output of database `a`, create a second database `b`
on the same instance of ArangoDB, and restore the dump of `a` into `b`, data integrity cannot
be guaranteed. This limitation was solved starting from ArangoDB version 3.3.

Invoking Arangorestore
----------------------

_arangorestore_ can be invoked from the command-line as follows:

    arangorestore --input-directory "dump"

This will connect to an ArangoDB server and reload structural information and
documents found in the input directory *dump*. Please note that the input directory

@@ -28,12 +26,12 @@ _arangorestore_ will by default connect to the *_system* database using the default
endpoint. If you want to connect to a different database or a different endpoint,
or use authentication, you can use the following command-line options:

- *--server.database <string>*: name of the database to connect to
- *--server.endpoint <string>*: endpoint to connect to
- *--server.username <string>*: username
- *--server.password <string>*: password to use (omit this and you'll be prompted for the
  password)
- *--server.authentication <bool>*: whether or not to use authentication

Since version 2.6 _arangorestore_ provides the option *--create-database*. Setting this
option to *true* will create the target database if it does not exist. When creating the

@@ -52,13 +50,13 @@
will abort instantly.

The `--force-same-database` option is set to `false` by default to ensure backwards-compatibility.

Here's an example of reloading data to a non-standard endpoint, using a dedicated
[database name](../../Appendix/Glossary.md#database-name):

    arangorestore --server.endpoint tcp://192.168.173.13:8531 --server.username backup --server.database mydb --input-directory "dump"

To create the target database when restoring, use a command like this:

    arangorestore --server.username backup --server.database newdb --create-database true --input-directory "dump"

_arangorestore_ will print out its progress while running, and will end with a line
showing some aggregate statistics:

@@ -73,55 +71,57 @@ will be dropped and re-created with the data found in the input directory.
The following parameters are available to adjust this behavior:

- *--create-collection <bool>*: set to *true* to create collections in the target
  database. If the target database already contains a collection with the same name,
  it will be dropped first and then re-created with the properties found in the input
  directory. Set to *false* to keep existing collections in the target database. If
  set to *false* and _arangorestore_ encounters a collection that is present in the
  input directory but not in the target database, it will abort. The default value is *true*.
- *--import-data <bool>*: set to *true* to load document data into the collections in
  the target database. Set to *false* to not load any document data. The default value
  is *true*.
- *--include-system-collections <bool>*: whether or not to include system collections
  when re-creating collections or reloading data. The default value is *false*.

For example, to (re-)create all non-system collections and load document data into them, use:

    arangorestore --create-collection true --import-data true --input-directory "dump"

This will drop potentially existing collections in the target database that are also present
in the input directory.

To include system collections too, use *--include-system-collections true*:

    arangorestore --create-collection true --import-data true --include-system-collections true --input-directory "dump"

To (re-)create all non-system collections without loading document data, use:

    arangorestore --create-collection true --import-data false --input-directory "dump"

This will also drop existing collections in the target database that are also present in the
input directory.

To just load document data into all non-system collections, use:

    arangorestore --create-collection false --import-data true --input-directory "dump"

To restrict reloading to just specific collections, there is the *--collection* option.
It can be specified multiple times if required:

    arangorestore --collection myusers --collection myvalues --input-directory "dump"

Collections will be processed in alphabetical order by _arangorestore_, with all document
collections being processed before all [edge collections](../../Appendix/Glossary.md#edge-collection).
This is to ensure that reloading data into edge collections will have the document
collections linked in edges (*_from* and *_to* attributes) loaded.

Encryption
----------

See [Arangodump](../Arangodump/Examples.md#encryption) for details.

Restoring Revision IDs and Collection IDs
-----------------------------------------

_arangorestore_ will reload document and edge data with the exact same *_key*, *_from* and
*_to* values found in the input directory. However, when loading document data, it will assign

@@ -130,29 +130,31 @@ intentional (normally, every server should create its own *_rev* values) there may be
situations when it is required to re-use the exact same *_rev* values for the reloaded data.
This can be achieved by setting the *--recycle-ids* parameter to *true*:

    arangorestore --recycle-ids true --collection myusers --collection myvalues --input-directory "dump"

Note that setting *--recycle-ids* to *true* will also cause collections to be (re-)created in
the target database with the exact same collection id as in the input directory. Any potentially
existing collection in the target database with the same collection id will then be dropped.

Reloading Data into a different Collection
------------------------------------------

With some creativity you can use _arangodump_ and _arangorestore_ to transfer data from one
collection into another (either on the same server or not). For example, to copy data from
a collection *myvalues* in database *mydb* into a collection *mycopyvalues* in database *mycopy*,
you can start with the following command:

    arangodump --collection myvalues --server.database mydb --output-directory "dump"

This will create two files, *myvalues.structure.json* and *myvalues.data.json*, in the output
directory. To load data from the datafile into an existing collection *mycopyvalues* in database
*mycopy*, rename the files to *mycopyvalues.structure.json* and *mycopyvalues.data.json*.

After that, run the following command:

    arangorestore --collection mycopyvalues --server.database mycopy --input-directory "dump"

Using arangorestore with sharding
---------------------------------

As of Version 2.1 the *arangorestore* tool supports sharding. Simply
point it to one of the coordinators in your cluster and it will

@@ -184,7 +186,7 @@ collection. This is for safety reasons to ensure consistency of IDs.
collection, whose shard distribution follows a collection, which does
not exist in the cluster and which was not dumped along:

    arangorestore --collection clonedCollection --server.database mydb --input-directory "dump"

    ERROR got error from server: HTTP 500 (Internal Server Error): ArangoError 1486: must not have a distributeShardsLike attribute pointing to an unknown collection
    Processed 0 collection(s), read 0 byte(s) from datafiles, sent 0 batch(es)

@@ -192,14 +194,18 @@ not exist in the cluster and which was not dumped along:
The collection can be restored by overriding the error message as
follows:

    arangorestore --collection clonedCollection --server.database mydb --input-directory "dump" --ignore-distribute-shards-like-errors

Restore into an authentication enabled ArangoDB
-----------------------------------------------

Of course you can restore data into a password protected ArangoDB as well.
However this requires certain user rights for the user used in the restore process.
The rights are described in detail in the [Managing Users](../../Administration/ManagingUsers/README.md) chapter.
For restore this short overview is sufficient:

- When importing into an existing database, the given user needs `Administrate`
  access on this database.
- When creating a new database during restore, the given user needs `Administrate`
  access on `_system`. The user will be promoted with `Administrate` access on the
  newly created database.
Arangorestore Options
=====================
Usage: `arangorestore [<options>]`
@startDocuBlock program_options_arangorestore
Arangorestore
=============
_Arangorestore_ is a command-line client tool to restore backups created by
[_Arangodump_](../Arangodump/README.md) to [ArangoDB servers](../Arangod/README.md).
If you want to import data in formats like JSON or CSV, see
[_Arangoimport_](../Arangoimport/README.md) instead.
_Arangorestore_ can restore selected collections or all collections of a backup,
optionally including _system_ collections. One can restore the structure, i.e.
the collections with their configuration, with or without data.
Arangosh Details
================
Interaction
-----------
You can paste multiple lines into Arangosh, given the first line ends with an
opening brace:
@startDocuBlockInline shellPaste
@EXAMPLE_ARANGOSH_OUTPUT{shellPaste}
|for (var i = 0; i < 10; i ++) {
| require("@arangodb").print("Hello world " + i + "!\n");
}
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock shellPaste
To load your own JavaScript code into the current JavaScript interpreter context,
use the load command:
require("internal").load("/tmp/test.js") // <- Linux / MacOS
require("internal").load("c:\\tmp\\test.js") // <- Windows
Exiting arangosh can be done using the key combination `<CTRL> + D` or by
typing `quit<CR>`.
Shell Output
------------
The ArangoDB shell will print the output of the last evaluated expression
by default:
@startDocuBlockInline lastExpressionResult
@EXAMPLE_ARANGOSH_OUTPUT{lastExpressionResult}
42 * 23
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock lastExpressionResult
In order to prevent printing the result of the last evaluated expression,
the expression result can be captured in a variable, e.g.
@startDocuBlockInline lastExpressionResultCaptured
@EXAMPLE_ARANGOSH_OUTPUT{lastExpressionResultCaptured}
var calculationResult = 42 * 23
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock lastExpressionResultCaptured
There is also the `print` function to explicitly print out values in the
ArangoDB shell:
@startDocuBlockInline printFunction
@EXAMPLE_ARANGOSH_OUTPUT{printFunction}
print({ a: "123", b: [1,2,3], c: "test" });
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock printFunction
By default, the ArangoDB shell uses a pretty printer when JSON documents are
printed. This ensures documents are printed in a human-readable way:
@startDocuBlockInline usingToArray
@EXAMPLE_ARANGOSH_OUTPUT{usingToArray}
db._create("five")
for (i = 0; i < 5; i++) db.five.save({value:i})
db.five.toArray()
~db._drop("five");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock usingToArray
While the pretty-printer produces nice looking results, it will need a lot of
screen space for each document. Sometimes a more dense output might be better.
In this case, the pretty printer can be turned off using the command
*stop_pretty_print()*.
To turn on pretty printing again, use the *start_pretty_print()* command.
Escaping
--------
In AQL, escaping is done traditionally with the backslash character: `\`.
As seen above, this leads to double backslashes when specifying Windows paths.
Arangosh requires another level of escaping, also with the backslash character.
It adds up to four backslashes that need to be written in Arangosh for a single
literal backslash (`c:\tmp\test.js`):
db._query('RETURN "c:\\\\tmp\\\\test.js"')
You can use [bind variables](../../../AQL/Invocation/WithArangosh.html) to
mitigate this:
var somepath = "c:\\tmp\\test.js"
db._query(aql`RETURN ${somepath}`)
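The two levels of escaping can be observed in plain JavaScript: the string literal below is what you type, and the value it produces is what the AQL parser then receives:

```js
// What you type in Arangosh -- the JS parser consumes one level of escaping,
// so four backslashes in the source become two characters in the value:
const aqlInput = "c:\\\\tmp\\\\test.js";
console.log(aqlInput); // c:\\tmp\\test.js  -- this is what AQL parses
// AQL then consumes the second level, yielding the literal path c:\tmp\test.js
```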
Database Wrappers
-----------------
_Arangosh_ provides the *db* object by default, and this object can
be used for switching to a different database and managing collections inside the
current database.
For a list of available methods for the *db* object, type
@startDocuBlockInline shellHelp
@EXAMPLE_ARANGOSH_OUTPUT{shellHelp}
db._help();
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock shellHelp
The [`db` object](../../Appendix/References/DBObject.md) is available in *arangosh*
as well as in *arangod*, e.g. if you're using [Foxx](../../Foxx/README.md). While its
interface is consistent between the *arangosh* and the *arangod* implementations,
its underpinning is not. The *arangod* implementation consists of JavaScript wrappers
around ArangoDB's native C++ implementation, whereas the *arangosh* implementation
wraps HTTP accesses to ArangoDB's [RESTful API](../../../HTTP/index.html).
So while this code may produce similar results when executed in *arangosh* and
*arangod*, the CPU usage and time required will differ considerably, since the
*arangosh* version will perform around 100,000 HTTP requests, whereas the
*arangod* version will write to the database directly:
```js
for (i = 0; i < 100000; i++) {
db.test.save({ name: { first: "Jan" }, count: i});
}
```
Using `arangosh` via unix shebang mechanisms
--------------------------------------------
On Unix-like operating systems you can start scripts by specifying the interpreter
in the first line of the script. This is commonly called a shebang (or hash bang).
You can also do that with `arangosh`, i.e. create `~/test.js`:
#!/usr/bin/arangosh --javascript.execute
require("internal").print("hello world")
db._query("FOR x IN test RETURN x").toArray()
Note that the first line has to end with a blank in order to make it work.
Mark it as executable for the OS:
#> chmod a+x ~/test.js
and finally try it out:
#> ~/test.js
Shell Configuration
-------------------
_arangosh_ will look for a user-defined startup script named *.arangosh.rc* in the
user's home directory on startup. The home directory will likely be `/home/<username>/`
on Unix/Linux, and is determined on Windows by peeking into the environment variables
`%HOMEDRIVE%` and `%HOMEPATH%`.
If the file *.arangosh.rc* is present in the home directory, _arangosh_ will execute
the contents of this file inside the global scope.
You can use this to define your own extra variables and functions that you need often.
For example, you could put the following into the *.arangosh.rc* file in your home
directory:
```js
// "var" keyword avoided intentionally...
// otherwise "timed" would not survive the scope of this script
global.timed = function (cb) {
console.time("callback");
cb();
console.timeEnd("callback");
};
```
This will make a function named *timed* available in _arangosh_ in the global scope.
You can now start _arangosh_ and invoke the function like this:
```js
timed(function () {
for (var i = 0; i < 1000; ++i) {
db.test.save({ value: i });
}
});
```
Please keep in mind that, if present, the *.arangosh.rc* file needs to contain valid
JavaScript code. If you want any variables in the global scope to survive you need to
omit the *var* keyword for them. Otherwise the variables will only be visible inside
the script itself, but not outside.
Arangosh Examples
=================
By default _arangosh_ will try to connect to an ArangoDB server running on
*localhost* on port *8529*. It will use the username *root* and an
empty password by default. Additionally it will connect to the default database
(*_system*). All these defaults can be changed using the following
command-line options:
- *--server.database <string>*: name of the database to connect to
- *--server.endpoint <string>*: endpoint to connect to
- *--server.username <string>*: database username
- *--server.password <string>*: password to use when connecting
- *--server.authentication <bool>*: whether or not to use authentication
For example, to connect to an ArangoDB server on IP *192.168.173.13* on port
8530 with the user *foo* and using the database *test*, use:
arangosh --server.endpoint tcp://192.168.173.13:8530 --server.username foo --server.database test --server.authentication true
_arangosh_ will then display a password prompt and try to connect to the
server after the password has been entered.
The shell will print its own version number and, if successfully connected
to a server, the version number of the ArangoDB server.
To change the current database after the connection has been made, you
can use the `db._useDatabase()` command in Arangosh:
@startDocuBlockInline shellUseDB
@EXAMPLE_ARANGOSH_OUTPUT{shellUseDB}
db._createDatabase("myapp");
db._useDatabase("myapp");
db._useDatabase("_system");
db._dropDatabase("myapp");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock shellUseDB
To get a list of available commands, Arangosh provides a *help()* function.
Calling it will display helpful information.
_arangosh_ also provides auto-completion. Additional information on available
commands and methods is thus provided by typing the first few letters of a
variable and then pressing the tab key. It is recommended to try this by entering
*db.* (without pressing return) and then pressing tab.


@ -0,0 +1,6 @@
Arangosh Options
================
Usage: `arangosh [<options>]`
@startDocuBlock program_options_arangosh


@ -0,0 +1,15 @@
Arangosh
========
The ArangoDB shell (_arangosh_) is a command-line client tool that can be used
for administration of ArangoDB servers.
It offers a V8 JavaScript shell environment, in which you can use JS interfaces
and modules like the [`db` object](../../Appendix/References/DBObject.md) to
manage collections or run ad-hoc queries for instance, access the
[General Graph module](../../Graphs/GeneralGraphs/README.md) or other features.
It can be used as an interactive shell (REPL) as well as to execute a JavaScript
string or file. However, it is not a general command line like PowerShell or Bash:
commands like `curl` or invocations of [ArangoDB programs and tools](../README.md)
are not possible inside of this JS shell!


@ -1,43 +1,22 @@
Programs & Tools
================
The full ArangoDB package ships with the following programs and tools:

| Binary name     | Brief description |
|-----------------|-------------------|
| `arangod`       | [ArangoDB server](Arangod/README.md). This server program is intended to run as a daemon process / service to serve the various client connections to the server via TCP / HTTP. It also provides a [web interface](WebInterface/README.md). |
| `arangodb`      | [ArangoDB Starter](Starter/README.md) for easy deployment of ArangoDB instances. |
| `arangosh`      | [ArangoDB shell](Arangosh/README.md). A client that implements a read-eval-print loop (REPL) and provides functions to access and administrate the ArangoDB server. |
| `arangoimport`  | [Bulk importer](Arangoimport/README.md) for the ArangoDB server. It supports JSON and CSV. |
| `arangoexport`  | [Bulk exporter](Arangoexport/README.md) for the ArangoDB server. It supports JSON, CSV and XML. |
| `arangodump`    | Tool to [create backups](Arangodump/README.md) of an ArangoDB database. |
| `arangorestore` | Tool to [load backups](Arangorestore/README.md) back into an ArangoDB database. |
| `arango-dfdb`   | [Datafile debugger](Arango-dfdb/README.md) for ArangoDB (MMFiles storage engine only). It is primarily intended to be used during development of ArangoDB. |
| `arangobench`   | [Benchmark and test tool](Arangobench/README.md). It can be used for performance and server function testing. |
| `arangovpack`   | Utility to convert [VelocyPack](https://github.com/arangodb/velocypack) data to JSON. |

The client package comes with a subset of programs and tools:

- arangosh
- arangoimport


@ -0,0 +1,20 @@
Web Interface
=============
The ArangoDB server (*arangod*) comes with a built-in web interface for
administration. It lets you manage databases, collections, documents,
users, graphs and more. You can also run and explain queries in a
convenient way. Statistics and server status are provided as well.
The Web Interface (also Web UI, frontend or *Aardvark*) can be accessed with a
browser under the URL `http://localhost:8529` with default server settings.
The interface differs for standalone instances and cluster setups.
Standalone:
![Standalone Frontend](images/overview.png)
Cluster:
![Cluster Frontend](images/clusterView.png)


@ -35,5 +35,5 @@ explicitly set). The default access levels for this user and database
appear in the artificial row with the collection name `*`.
{% hint 'info' %}
Also see [**Managing Users**](../../Administration/ManagingUsers/README.md) about access levels.
{% endhint %}


(binary image changed; 80 KiB before and after)


@ -23,7 +23,7 @@ The documentation is organized in four handbooks:
solutions.
Features are illustrated with interactive usage examples; you can cut'n'paste them
into [arangosh](Programs/Arangosh/README.md) to try them out. The HTTP
[REST-API](../HTTP/index.html) for driver developers is demonstrated with cut'n'paste
recipes intended to be used with [cURL](http://curl.haxx.se). Drivers may provide
their own examples based on these .js-based examples to improve understandability


@ -408,7 +408,7 @@ if the cache should be checked for a result.
### Optimizer
The AQL optimizer rule `patch-update-statements` has been added. This rule can
optimize certain AQL UPDATE queries that update documents in a collection
that they also iterate over.
For example, the following query reads documents from a collection in order For example, the following query reads documents from a collection in order


@ -80,7 +80,7 @@ Note that encrypted backups can be used together with the already existing
RocksDB encryption-at-rest feature, but they can also be used for the MMFiles
engine, which does not have encryption-at-rest.
[Encrypted backups](../Programs/Arangodump/Examples.md#encryption) are available
in the *Enterprise* edition.
Server-level replication


@ -15,17 +15,52 @@
* [Coming from SQL](GettingStarted/ComingFromSql.md)
* [Next Steps](GettingStarted/NextSteps.md)
* [Tutorials](Tutorials/README.md)
# https://@github.com/arangodb-helper/arangodb.git;arangodb;docs/Manual;;/
* [ArangoDB Starter](Tutorials/Starter/README.md)
# https://@github.com/arangodb/arangosync.git;arangosync;docs/Manual;;/
* [Datacenter to datacenter Replication](Tutorials/DC2DC/README.md)
# https://@github.com/arangodb/kube-arangodb.git;kube-arangodb;docs/Manual;;/
* [Kubernetes](Tutorials/Kubernetes/README.md)
* [Programs & Tools](Programs/README.md)
* [ArangoDB Server](Programs/Arangod/README.md)
* [Options](Programs/Arangod/Options.md)
* [Web Interface](Programs/WebInterface/README.md)
* [Dashboard](Programs/WebInterface/Dashboard.md)
* [Cluster](Programs/WebInterface/Cluster.md)
* [Collections](Programs/WebInterface/Collections.md)
* [Document](Programs/WebInterface/Document.md)
* [Queries](Programs/WebInterface/AqlEditor.md)
* [Graphs](Programs/WebInterface/Graphs.md)
* [Services](Programs/WebInterface/Services.md)
* [Users](Programs/WebInterface/Users.md)
* [Logs](Programs/WebInterface/Logs.md)
* [ArangoDB Shell](Programs/Arangosh/README.md)
* [Examples](Programs/Arangosh/Examples.md)
* [Details](Programs/Arangosh/Details.md)
* [Options](Programs/Arangosh/Options.md)
# https://@github.com//arangodb-helper/arangodb.git;arangodb;docs/Manual;;/
* [ArangoDB Starter](Programs/Starter/README.md)
* [Options](Programs/Starter/Options.md)
* [Security](Programs/Starter/Security.md)
* [Arangodump](Programs/Arangodump/README.md)
* [Examples](Programs/Arangodump/Examples.md)
* [Options](Programs/Arangodump/Options.md)
* [Arangorestore](Programs/Arangorestore/README.md)
* [Examples](Programs/Arangorestore/Examples.md)
* [Options](Programs/Arangorestore/Options.md)
* [Arangoimport](Programs/Arangoimport/README.md)
* [Examples JSON](Programs/Arangoimport/ExamplesJson.md)
* [Examples CSV](Programs/Arangoimport/ExamplesCsv.md)
* [Details](Programs/Arangoimport/Details.md)
* [Options](Programs/Arangoimport/Options.md)
* [Arangoexport](Programs/Arangoexport/README.md)
* [Examples](Programs/Arangoexport/Examples.md)
* [Options](Programs/Arangoexport/Options.md)
* [Arangobench](Programs/Arangobench/README.md)
* [Examples](Programs/Arangobench/Examples.md)
* [Options](Programs/Arangobench/Options.md)
* [Datafile Debugger](Programs/Arango-dfdb/README.md)
* [Examples](Programs/Arango-dfdb/Examples.md)
## CORE TOPICS
@ -85,9 +120,9 @@
## ADVANCED TOPICS
* [Architecture](Architecture/README.md)
* [Storage Engines](Architecture/StorageEngines.md)
* [Replication](Architecture/Replication/README.md)
* [Write-ahead log](Architecture/WriteAheadLog.md)
* [Foxx Microservices](Foxx/README.md)
* [Getting started](Foxx/GettingStarted.md)
* [Reference](Foxx/Reference/README.md)
@ -196,25 +231,9 @@
* [TLS](Deployment/Kubernetes/Tls.md)
* [Upgrading](Deployment/Kubernetes/Upgrading.md)
* [Administration](Administration/README.md)
* [Backup & Restore](Administration/BackupRestore.md)
* [Import & Export](Administration/ImportExport.md)
* [User Management](Administration/ManagingUsers/README.md)
* [In Arangosh](Administration/ManagingUsers/InArangosh.md)
* [Server Configuration](Administration/Configuration/README.md)
* [Operating System Configuration](Administration/Configuration/OperatingSystem.md)
@ -265,11 +284,9 @@
* [Troubleshooting](Troubleshooting/README.md)
* [arangod](Troubleshooting/Arangod.md)
* [Emergency Console](Troubleshooting/EmergencyConsole.md)
* [Cluster](Troubleshooting/Cluster/README.md)
# https://@github.com/arangodb/arangosync.git;arangosync;docs/Manual;;/
* [Datacenter to datacenter replication](Troubleshooting/DC2DC/README.md)
---


@ -3,6 +3,8 @@ div.example_show_button {
text-align: center;
position: relative;
top: -10px;
display: flex;
justify-content: center;
}
.book .book-body .navigation.navigation-next {
@ -36,6 +38,18 @@ div.example_show_button {
columns: 3;
}
.book .book-body .program-options code {
background-color: #f0f0f0;
}
.book .book-body .program-options td {
vertical-align: top;
}
.book .book-body .program-options td:first-child {
min-width: 250px;
}
.localized-footer {
opacity: 0.5;
}


@ -556,6 +556,11 @@ function check-docublocks()
grep -v '.*~:.*' |\
grep -v '.*#.*:.*' \
>> /tmp/rawinprog.txt
# These files are converted to docublocks on the fly and only live in memory.
for file in ../Examples/*.json ; do
echo "$file" |sed -e "s;.*/;Generated: @startDocuBlock program_options_;" -e "s;.json;;" >> /tmp/rawinprog.txt
done
set -e
echo "Generated: startDocuBlockInline errorCodes">> /tmp/rawinprog.txt
@ -731,7 +736,6 @@ while [ $# -gt 0 ]; do
esac
done
case "$VERB" in
build-books)
build-books
@ -769,6 +773,15 @@ case "$VERB" in
clean "$@"
;;
*)
if test -d "${VERB}"; then
guessBookName="${VERB/\/}"
if [[ $ALLBOOKS = *"${guessBookName}"* ]]; then
build-book "$guessBookName"
check-docublocks "some of the above errors may be because of referenced books weren't rebuilt."
check-dangling-anchors "some of the above errors may be because of referenced books weren't rebuilt."
exit 0
fi
fi
printHelp
exit 1
;;


@ -0,0 +1,14 @@
<!-- Integrate this into installation pages? -->
Filesystems
-----------
As one would expect for a database, we recommend locally mounted filesystems.
NFS or similar network filesystems will not work.
On Linux we recommend the use of ext4fs, on Windows NTFS and on macOS HFS+.
We recommend to **not** use BTRFS on Linux. It is known to not work well in conjunction with ArangoDB.
We have experienced that ArangoDB faces latency issues when accessing its database files on BTRFS partitions.
In conjunction with BTRFS and AUFS we also saw data loss on restart.


@ -1,7 +1,7 @@
Server-side db-Object implementation
------------------------------------
We [already talked about the arangosh db Object implementation](../Programs/Arangosh/README.md). Now a little more about the server-side version, so note that the following examples won't work properly in arangosh.
Server-side methods of the *db object* will return an `[object ShapedJson]`. This datatype is a very lightweight JavaScript object that contains an internal pointer to where the document data are actually stored in memory or on disk. In particular, it is not a full-blown copy of the document's complete data.


@ -10,7 +10,7 @@ http://127.0.0.1:8529
If everything works as expected, you should see the login view:
![Login View](../Programs/WebInterface/images/loginView.png)
For more information on the ArangoDB web interface, see
[Web Interface](../Programs/WebInterface/README.md)

Some files were not shown because too many files have changed in this diff.