
Some changes to the comments in the code

This commit is contained in:
Thomas Schmidts 2014-06-29 03:34:22 +02:00
parent 63726459c1
commit c2cb0d5a10
18 changed files with 457 additions and 484 deletions

View File

@ -14,8 +14,8 @@ penalty as each data modification will trigger a sync I/O operation.
Instead of overwriting existing documents, a completely new version of the
document is generated. The two benefits are:
- Objects can be stored coherently and compactly in the main memory.
- Objects are preserved; isolated writing and reading transactions allow
* Objects can be stored coherently and compactly in the main memory.
* Objects are preserved; isolated writing and reading transactions allow
accessing these objects for parallel operations.
The system collects obsolete versions as garbage, recognizing them as
@ -29,58 +29,15 @@ processes.
There are certain default values, which you can store in the configuration file
or supply on the command line.
`--database.maximal-journal-size size`
!SUBSECTION Maximal Journal Size
<!-- arangod/RestServer/ArangoServer.h -->
@startDocuBlock databaseMaximalJournalSize
Maximal size of journal in bytes. Can be overwritten when creating a new collection. Note that this also limits the maximal size of a single document.
The default is 32MB.
<!-- @copydetails triagens::arango::ArangoServer::_defaultMaximalSize -->
!SUBSECTION Per Collection Configuration
!SECTION Per Collection Configuration
You can configure the durability behavior on a per collection basis.
Use the ArangoDB shell to change these properties.
`collection.properties()`
Returns an object containing all collection properties.
* waitForSync: If true, creating a document will only return after the data was synced to disk.
* journalSize: The size of the journal in bytes.
* isVolatile: If true, the collection data will be kept in memory only and ArangoDB will not write or sync the data to disk.
* keyOptions (optional): additional options for key generation. This is a JSON object containing the following attributes (note: some of the attributes are optional):
* type: the type of the key generator used for the collection.
* allowUserKeys: if set to true, then it is allowed to supply your own key values in the _key attribute of a document. If set to false, then the key generator will be solely responsible for generating keys, and supplying your own key values in the _key attribute of documents is considered an error.
* increment: increment value for autoincrement key generator. Not used for other key generator types.
* offset: initial offset value for autoincrement key generator. Not used for other key generator types.
In a cluster setup, the result will also contain the following attributes:
* numberOfShards: the number of shards of the collection.
* shardKeys: contains the names of document attributes that are used to determine the target shard for documents.
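The key generator semantics described above can be sketched in plain JavaScript. This is an illustrative stand-in, not ArangoDB's implementation, and the helper name *makeKeyGenerator* is hypothetical:

```javascript
// Hypothetical sketch of autoincrement key generation semantics
// (illustration only, not ArangoDB's implementation).
function makeKeyGenerator(options) {
  var increment = options.increment || 1;
  var next = options.offset || 0;
  return {
    generate: function (userKey) {
      if (userKey !== undefined) {
        if (!options.allowUserKeys) {
          // supplying your own _key is an error in this configuration
          throw new Error("user-supplied _key values are not allowed");
        }
        return userKey;
      }
      next += increment;
      return String(next);
    }
  };
}

var gen = makeKeyGenerator({ allowUserKeys: false, increment: 10, offset: 100 });
console.log(gen.generate()); // "110"
console.log(gen.generate()); // "120"
```

With *allowUserKeys* set to false, any user-supplied key is rejected; otherwise the counter is only stepped when no key is given.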
`collection.properties(properties)`
Changes the collection properties. properties must be an object with one or more of the following attributes:
* waitForSync: If true, creating a document will only return after the data was synced to disk.
* journalSize: The size of the journal in bytes.
Note that it is not possible to change the journal size after the journal or datafile has been created. Changing this parameter will only affect newly created journals. Also note that you cannot lower the journal size to less than the size of the largest document already stored in the collection.
Note: some other collection properties, such as type, isVolatile, or keyOptions cannot be changed once the collection is created.
*Examples*
Read all properties
arango> db.examples.properties()
{ "waitForSync" : false, "journalSize" : 33554432, "isVolatile" : false }
Change a property
arango> db.examples.properties({ waitForSync : false })
{ "waitForSync" : false, "journalSize" : 33554432, "isVolatile" : false }
<!--@copydetails JS_PropertiesVocbaseCol-->
!SUBSECTION Properties
<!-- arangod/V8Server/v8-vocbase.cpp -->
@startDocuBlock collectionProperties

View File

@ -12,7 +12,6 @@ attribute. When importing into an edge collection, it is mandatory that all
imported documents have the *_from* and *_to* attributes, and that they contain
valid references.
Let's assume for the following examples you want to import user records into an
existing collection named "users" on the server.
@ -23,14 +22,12 @@ existing collection named "users" on the server.
Let's further assume the import at hand is encoded in JSON. We'll be using these
example user records to import:
```
``` js
{ "name" : { "first" : "John", "last" : "Connor" }, "active" : true, "age" : 25, "likes" : [ "swimming"] }
{ "name" : { "first" : "Jim", "last" : "O'Brady" }, "age" : 19, "likes" : [ "hiking", "singing" ] }
{ "name" : { "first" : "Lisa", "last" : "Jones" }, "dob" : "1981-04-09", "likes" : [ "running" ] }
```
<!--@verbinclude arangoimp-data-json-->
To import these records, all you need to do is to put them into a file (with one
line for each record to import) and run the following command:
@ -84,19 +81,24 @@ occurred on the server side, and the total number of input file lines/documents
that it processed. Additionally, _arangoimp_ will print out details about errors
that happened on the server-side (if any).
Example:
*Examples*
created: 2
errors: 0
total: 2
Please note that _arangoimp_ supports two formats when importing JSON data from
```js
created: 2
errors: 0
total: 2
```
**Note**: *arangoimp* supports two formats when importing JSON data from
a file. The first format requires the input file to contain one JSON document
in each line, e.g.
{ "_key": "one", "value": 1 }
{ "_key": "two", "value": 2 }
{ "_key": "foo", "value": "bar" }
...
```js
{ "_key": "one", "value": 1 }
{ "_key": "two", "value": 2 }
{ "_key": "foo", "value": "bar" }
...
```
The above format can be imported sequentially by _arangoimp_. It will read data
from the input file in chunks and send it in batches to the server. Each batch
@ -104,12 +106,14 @@ will be about as big as specified in the command-line parameter *--batch-size*.
An alternative is to put one big JSON document into the input file like this:
[
{ "_key": "one", "value": 1 },
{ "_key": "two", "value": 2 },
{ "_key": "foo", "value": "bar" },
...
]
```js
[
{ "_key": "one", "value": 1 },
{ "_key": "two", "value": 2 },
{ "_key": "foo", "value": "bar" },
...
]
```
This format allows line breaks within the input file as required. The downside
is that the whole input file will need to be read by _arangoimp_ before it can
@ -148,7 +152,12 @@ or the null value, don't enclose the value into the quotes in your file.
We'll be using the following import for the CSV import:
@verbinclude arangoimp-data-csv
```js
"first","name","age","active","dob"
"John","Connor",25,true,
"Jim","O'Brady",19,,
"Lisa","Jones",,,"1981-04-09"
```
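The typing rules for CSV values (quoted values stay strings; unquoted values become numbers, booleans or null; empty fields are omitted) can be sketched as follows. This is an illustration of the rules only, not arangoimp's parser, and it does not handle embedded commas or quotes:

```javascript
// Sketch of CSV value typing: quoted -> string, unquoted -> number,
// boolean or null, empty -> attribute omitted (simplified parser).
function parseCsvLine(header, line) {
  var doc = {};
  line.split(",").forEach(function (raw, i) {
    if (raw === "") return;               // empty field: attribute omitted
    if (raw.charAt(0) === '"') {
      doc[header[i]] = raw.slice(1, -1);  // quoted: keep as string
    } else if (raw === "true" || raw === "false") {
      doc[header[i]] = (raw === "true");
    } else if (raw === "null") {
      doc[header[i]] = null;
    } else {
      doc[header[i]] = Number(raw);       // unquoted: numeric value
    }
  });
  return doc;
}

var header = ["first", "name", "age", "active", "dob"];
// yields { first: "John", name: "Connor", age: 25, active: true }
parseCsvLine(header, '"John","Connor",25,true,');
```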
The command line to execute the import then is:
@ -183,11 +192,13 @@ The import data must, for each edge to import, contain at least the *_from* and
It is necessary that these attributes are set for all records, and point to
valid document ids in existing collections.
Example:
*Examples*
{ "_from" : "users/1234", "_to" : "users/4321", "desc" : "1234 is connected to 4321" }
```js
{ "_from" : "users/1234", "_to" : "users/4321", "desc" : "1234 is connected to 4321" }
```
Note that the edge collection must already exist when the import is started. Using
**Note**: The edge collection must already exist when the import is started. Using
the *--create-collection* flag will not work because arangoimp will always try to
create a regular document collection if the target collection does not exist.
@ -202,10 +213,10 @@ ArangoDB:
collection the import is run for.
- *_from*: when importing into an edge collection, this attribute contains the id
of one of the documents connected by the edge. The value of *_from* must be a
syntactially valid document id and the referred collection must exist.
syntactically valid document id and the referred collection must exist.
- *_to*: when importing into an edge collection, this attribute contains the id
of the other document connected by the edge. The value of *_to* must be a
syntactially valid document id and the referred collection must exist.
syntactically valid document id and the referred collection must exist.
- *_rev*: this attribute contains the revision number of a document. However, the
revision numbers are managed by ArangoDB and cannot be specified on import. Thus
any value in this attribute is ignored on import.

View File

@ -2,60 +2,26 @@
The action module provides the infrastructure for defining HTTP actions.
!SUBSECTION Basics
`actions.getErrorMessage(code)`
Returns the error message for an error code.
<!--
@anchor JSModuleActionsGetErrorMessage
@copydetails JSF_getErrorMessage
-->
!SECTION Basics
!SUBSECTION Error Message
<!-- js/server/modules/org/arangodb/actions.js -->
@startDocuBlock actionsGetErrorMessage
!SECTION Standard HTTP Result Generators
`actions.resultOk(req, res, code, result, headers)`
The function defines a response. code is the status code to return. result is the result object, which will be returned as a JSON object in the body. headers is an array of headers to be returned. The function adds the attribute error with value false and code with value code to the result.
`actions.resultBad(req, res, error-code, msg, headers)`
The function generates an error response.
!SUBSECTION Result Ok
<!-- js/server/modules/org/arangodb/actions.js -->
@startDocuBlock actionsResultOk
!SUBSECTION Result Bad
<!-- js/server/modules/org/arangodb/actions.js -->
@startDocuBlock actionsResultBad
`actions.resultNotFound(req, res, code, msg, headers)`
The function generates an error response.
!SUBSECTION Result Unsupported
<!-- js/server/modules/org/arangodb/actions.js -->
@startDocuBlock actionsResultUnsupported
`actions.resultUnsupported(req, res, headers)`
The function generates an error response.
`actions.resultError(req, res, code, errorNum, errorMessage, headers, keyvals)`
The function generates an error response. The response body is an object with the attributes error containing true, code containing code, errorNum containing errorNum, and errorMessage containing the error message errorMessage. keyvals are mixed into the result.
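The behavior described for these result generators can be sketched with plain-JavaScript stand-ins (hypothetical, not the actual *org/arangodb/actions* code):

```javascript
// Illustrative stand-ins for the described result generators
// (not the real org/arangodb/actions implementation).
function resultOk(req, res, code, result, headers) {
  result = result || {};
  result.error = false;        // added by resultOk, per the description
  result.code = code;
  res.responseCode = code;
  res.contentType = "application/json";
  res.headers = headers;
  res.body = JSON.stringify(result);
}

function resultError(req, res, code, errorNum, errorMessage, headers, keyvals) {
  var result = keyvals || {};  // keyvals are mixed into the result
  result.error = true;
  result.code = code;
  result.errorNum = errorNum;
  result.errorMessage = errorMessage;
  res.responseCode = code;
  res.contentType = "application/json";
  res.headers = headers;
  res.body = JSON.stringify(result);
}
```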
<!--
@CLEARPAGE
@anchor JSModuleActionsResultOk
@copydetails JSF_resultOk
@CLEARPAGE
@anchor JSModuleActionsResultBad
@copydetails JSF_resultBad
@CLEARPAGE
@anchor JSModuleActionsResultNotFound
@copydetails JSF_resultNotFound
@CLEARPAGE
@anchor JSModuleActionsResultUnsupported
@copydetails JSF_resultUnsupported
@CLEARPAGE
@anchor JSModuleActionsResultError
@copydetails JSF_resultError
-->
!SUBSECTION Result Error
<!-- js/server/modules/org/arangodb/actions.js -->
@startDocuBlock actionsResultError

View File

@ -12,8 +12,9 @@ an exception.
Example usage:
console.assert(value === "abc", "expected: value === abc, actual:", value);
```js
console.assert(value === "abc", "expected: value === abc, actual:", value);
```
!SUBSECTION console.debug
@ -33,7 +34,9 @@ String substitution patterns, which can be used in *format*.
Example usage:
console.debug("%s", "this is a test");
```js
console.debug("%s", "this is a test");
```
!SUBSECTION console.dir
@ -42,8 +45,9 @@ Example usage:
Logs a listing of all properties of the object.
Example usage:
console.dir(myObject);
```js
console.dir(myObject);
```
!SUBSECTION console.error
@ -60,9 +64,9 @@ String substitution patterns, which can be used in *format*.
* *%%o* object hyperlink
Example usage:
console.error("error '%s': %s", type, message);
```js
console.error("error '%s': %s", type, message);
```
!SUBSECTION console.getline
`console.getline()`
@ -81,10 +85,12 @@ indented sub messages.
Example usage:
console.group("user attributes");
console.log("name", user.name);
console.log("id", user.id);
console.groupEnd();
```js
console.group("user attributes");
console.log("name", user.name);
console.log("id", user.id);
console.groupEnd();
```
!SUBSECTION console.groupCollapsed
@ -113,9 +119,9 @@ String substitution patterns, which can be used in *format*.
* *%%o* object hyperlink
Example usage:
console.info("The %s jumped over %d fences", animal, count);
```js
console.info("The %s jumped over %d fences", animal, count);
```
!SUBSECTION console.log
`console.log(format, argument1, ...)`
@ -132,9 +138,11 @@ same name to stop the timer and log the time elapsed.
Example usage:
console.time("mytimer");
...
console.timeEnd("mytimer"); // this will print the elapsed time
```js
console.time("mytimer");
...
console.timeEnd("mytimer"); // this will print the elapsed time
```
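The timer mechanism can be sketched in a few lines of plain JavaScript; this illustrative stand-in returns the message instead of logging it:

```javascript
// Minimal sketch of the time/timeEnd mechanism described above,
// storing start timestamps per timer name (illustration only).
var timers = {};

function time(name) {
  timers[name] = Date.now();
}

function timeEnd(name) {
  if (!(name in timers)) {
    throw new Error("unknown timer: " + name);
  }
  var elapsed = Date.now() - timers[name];
  delete timers[name];     // the timer can only be ended once
  return name + ": " + elapsed + "ms";
}
```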
!SUBSECTION console.timeEnd
@ -161,5 +169,4 @@ String substitution patterns, which can be used in *format*.
* *%%s* string
* *%%d*, *%%i* integer
* *%%f* floating point number
* *%%o* object hyperlink
* *%%o* object hyperlink

View File

@ -8,23 +8,21 @@ forming the edges. Together both collections form a graph. Assume that the
vertex collection is called *vertices* and the edges collection *edges*, then
you can build a graph using the *Graph* constructor.
arango> var Graph = require("org/arangodb/graph").Graph;
arango> g1 = new Graph("graph", "vertices", "edges");
Graph("vertices", "edges")
<!--@verbinclude graph25-->
```js
arango> var Graph = require("org/arangodb/graph").Graph;
arango> g1 = new Graph("graph", "vertices", "edges");
Graph("vertices", "edges")
```
It is possible to use different edges with the same vertices. For instance, to
build a new graph with a different edge collection use
arango> var Graph = require("org/arangodb/graph").Graph;
arango> g2 = new Graph("graph", "vertices", "alternativeEdges");
Graph("vertices", "alternativeEdges")
```js
arango> var Graph = require("org/arangodb/graph").Graph;
<!--@verbinclude graph26 -->
arango> g2 = new Graph("graph", "vertices", "alternativeEdges");
Graph("vertices", "alternativeEdges")
```
It is, however, impossible to use different vertices with the same edges. Edges
are tied to the vertices.
are tied to the vertices.

View File

@ -1,105 +0,0 @@
!CHAPTER CommonJS Modules
Unfortunately, the JavaScript libraries are just in the process of being
standardized. CommonJS has defined some important modules. ArangoDB implements
the following:
* "console" is a well known logging facility to all the JavaScript developers.
ArangoDB implements all of the functions described
<a href="http://wiki.commonjs.org/wiki/Console">here</a>, with the exceptions
of *profile* and *count*.
* "fs" provides a file system API for the manipulation of paths, directories,
files, links, and the construction of file streams. ArangoDB implements
most of Filesystem/A functions described
<a href="http://wiki.commonjs.org/wiki/Filesystem/A">here</a>.
* Modules are implemented according to
<a href="http://wiki.commonjs.org/wiki/Modules">Modules/1.1.1</a>
* Packages are implemented according to
<a href="http://wiki.commonjs.org/wiki/Packages">Packages/1.0</a>
!SUBSECTION ArangoDB Specific Modules
A lot of the modules, however, are ArangoDB specific. These modules
are described in the following chapters.
!SUBSECTION Node Modules
ArangoDB also supports some <a href="http://www.nodejs.org/">node</a> modules.
* <a href="http://nodejs.org/api/assert.html">"assert"</a> implements
assertion and testing functions.
* <a href="http://nodejs.org/api/buffer.html">"buffer"</a> implements
a binary data type for JavaScript.
* <a href="http://nodejs.org/api/path.html">"path"</a> implements
functions dealing with filenames and paths.
* <a href="http://nodejs.org/api/punycode.html">"punycode"</a> implements
conversion functions for
<a href="http://en.wikipedia.org/wiki/Punycode">punycode</a> encoding.
* <a href="http://nodejs.org/api/querystring.html">"querystring"</a>
provides utilities for dealing with query strings.
* <a href="http://nodejs.org/api/stream.html">"stream"</a>
provides a streaming interface.
* <a href="http://nodejs.org/api/url.html">"url"</a>
has utilities for URL resolution and parsing.
!SUBSECTION Node Packages
The following <a href="https://npmjs.org/">node packages</a> are preinstalled.
* <a href="http://docs.busterjs.org/en/latest/modules/buster-format/">"buster-format"</a>
* <a href="http://matthewmueller.github.io/cheerio/">"Cheerio.JS"</a>
* <a href="http://coffeescript.org/">"coffee-script"</a> implements a
coffee-script to JavaScript compiler. ArangoDB supports the *compile*
function of the package, but not the *eval* functions.
* <a href="https://github.com/fb55/htmlparser2">"htmlparser2"</a>
* <a href="http://sinonjs.org/">"Sinon.JS"</a>
* <a href="http://underscorejs.org/">"underscore"</a> is a utility-belt library
for JavaScript that provides a lot of the functional programming support that
you would expect in Prototype.js (or Ruby), but without extending any of the
built-in JavaScript objects.
!SUBSECTION require
`require(path)`
*require* checks if the module or package specified by *path* has already
been loaded. If not, the content of the file is executed in a new
context. Within the context you can use the global variable *exports* in
order to export variables and functions. This variable is returned by
*require*.
Assume that your module file is *test1.js* and contains
exports.func1 = function() {
print("1");
};
exports.const1 = 1;
Then you can use *require* to load the file and access the exports.
unix> ./arangosh
arangosh> var test1 = require("test1");
arangosh> test1.const1;
1
arangosh> test1.func1();
1
*require* follows the specification
[Modules/1.1.1](http://wiki.commonjs.org/wiki/Modules/1.1.1).

View File

@ -1,63 +0,0 @@
!CHAPTER Modules Path versus Modules Collection
ArangoDB comes with predefined modules defined in the file-system under the path
specified by *startup.startup-directory*. In a standard installation this
points to the system share directory. Even if you are an administrator of
ArangoDB you might not have write permissions to this location. On the other
hand, in order to deploy some extension for ArangoDB, you might need to install
additional JavaScript modules. This would require you to become root and copy
the files into the share directory. In order to ease the deployment of
extensions, ArangoDB uses a second mechanism to look up JavaScript modules.
JavaScript modules can either be stored in the filesystem as regular files or in
the database collection *_modules*.
If you execute
require("com/example/extension")
then ArangoDB will try to locate the corresponding JavaScript file as
follows:
* There is a cache for the results of previous *require* calls. First of
all ArangoDB checks if *com/example/extension* is already in the modules
cache. If it is, the export object for this module is returned. No further
JavaScript is executed.
* ArangoDB will then check if there is a file called **com/example/extension.js** in the system search path. If such a file exists, it is executed in a new module context and the value of the *exports* object is returned. This value is also stored in the module cache.
* If no file can be found, ArangoDB will check if the collection *_modules*
contains a document of the form
{
path: "/com/example/extension",
content: "...."
}
Note that the leading */* is important - even if you call *require* without a
leading */*. If such a document exists, then the value of the *content*
attribute must contain the JavaScript code of the module. This string is
executed in a new module context and the value of *exports* object is
returned. This value is also stored in the module cache.
!SUBSECTION Modules Cache
As *require* uses a module cache to store the exports objects of the required
modules, changing the design documents for the modules in the *_modules* collection
might have no effect at all.
You need to clear the cache when manually changing documents in the *_modules*
collection.
arangosh> require("internal").flushServerModules()
This initiates a flush of the modules in the ArangoDB *server* process.
Please note that the ArangoDB JavaScript shell uses the same mechanism as the
server to locate JavaScript modules, but they do not share the same module cache.
If you flush the server cache, this will not flush the shell cache - and vice
versa.
In order to flush the modules cache of the JavaScript shell, you should use
arangosh> require("internal").flushModuleCache()

View File

@ -10,3 +10,182 @@ functions of the module or package.
There are some extensions to the CommonJS concept to allow ArangoDB to load
Node.js modules as well.
!SECTION CommonJS Modules
Unfortunately, the JavaScript libraries are just in the process of being
standardized. CommonJS has defined some important modules. ArangoDB implements
the following:
* "console" is a well known logging facility to all the JavaScript developers.
ArangoDB implements all of the functions described
<a href="http://wiki.commonjs.org/wiki/Console">here</a>, with the exceptions
of *profile* and *count*.
* "fs" provides a file system API for the manipulation of paths, directories,
files, links, and the construction of file streams. ArangoDB implements
most of Filesystem/A functions described
<a href="http://wiki.commonjs.org/wiki/Filesystem/A">here</a>.
* Modules are implemented according to
<a href="http://wiki.commonjs.org/wiki/Modules">Modules/1.1.1</a>
* Packages are implemented according to
<a href="http://wiki.commonjs.org/wiki/Packages">Packages/1.0</a>
!SUBSECTION ArangoDB Specific Modules
A lot of the modules, however, are ArangoDB specific. These modules
are described in the following chapters.
!SUBSECTION Node Modules
ArangoDB also supports some [node](http://www.nodejs.org/) modules.
* <a href="http://nodejs.org/api/assert.html">"assert"</a> implements
assertion and testing functions.
* <a href="http://nodejs.org/api/buffer.html">"buffer"</a> implements
a binary data type for JavaScript.
* <a href="http://nodejs.org/api/path.html">"path"</a> implements
functions dealing with filenames and paths.
* <a href="http://nodejs.org/api/punycode.html">"punycode"</a> implements
conversion functions for
<a href="http://en.wikipedia.org/wiki/Punycode">punycode</a> encoding.
* <a href="http://nodejs.org/api/querystring.html">"querystring"</a>
provides utilities for dealing with query strings.
* <a href="http://nodejs.org/api/stream.html">"stream"</a>
provides a streaming interface.
* <a href="http://nodejs.org/api/url.html">"url"</a>
has utilities for URL resolution and parsing.
!SUBSECTION Node Packages
The following <a href="https://npmjs.org/">node packages</a> are preinstalled.
* <a href="http://docs.busterjs.org/en/latest/modules/buster-format/">"buster-format"</a>
* <a href="http://matthewmueller.github.io/cheerio/">"Cheerio.JS"</a>
* <a href="http://coffeescript.org/">"coffee-script"</a> implements a
coffee-script to JavaScript compiler. ArangoDB supports the *compile*
function of the package, but not the *eval* functions.
* <a href="https://github.com/fb55/htmlparser2">"htmlparser2"</a>
* <a href="http://sinonjs.org/">"Sinon.JS"</a>
* <a href="http://underscorejs.org/">"underscore"</a> is a utility-belt library
for JavaScript that provides a lot of the functional programming support that
you would expect in Prototype.js (or Ruby), but without extending any of the
built-in JavaScript objects.
!SUBSECTION require
`require(path)`
*require* checks if the module or package specified by *path* has already
been loaded. If not, the content of the file is executed in a new
context. Within the context you can use the global variable *exports* in
order to export variables and functions. This variable is returned by
*require*.
Assume that your module file is *test1.js* and contains
```js
exports.func1 = function() {
print("1");
};
exports.const1 = 1;
```
Then you can use *require* to load the file and access the exports.
```js
unix> ./arangosh
arangosh> var test1 = require("test1");
arangosh> test1.const1;
1
arangosh> test1.func1();
1
```
*require* follows the specification
[Modules/1.1.1](http://wiki.commonjs.org/wiki/Modules/1.1.1).
!CHAPTER Modules Path versus Modules Collection
ArangoDB comes with predefined modules defined in the file-system under the path
specified by *startup.startup-directory*. In a standard installation this
points to the system share directory. Even if you are an administrator of
ArangoDB you might not have write permissions to this location. On the other
hand, in order to deploy some extension for ArangoDB, you might need to install
additional JavaScript modules. This would require you to become root and copy
the files into the share directory. In order to ease the deployment of
extensions, ArangoDB uses a second mechanism to look up JavaScript modules.
JavaScript modules can either be stored in the filesystem as regular files or in
the database collection *_modules*.
If you execute
```js
require("com/example/extension")
```
then ArangoDB will try to locate the corresponding JavaScript file as
follows:
* There is a cache for the results of previous *require* calls. First of
all ArangoDB checks if *com/example/extension* is already in the modules
cache. If it is, the export object for this module is returned. No further
JavaScript is executed.
* ArangoDB will then check if there is a file called **com/example/extension.js** in the system search path. If such a file exists, it is executed in a new module context and the value of the *exports* object is returned. This value is also stored in the module cache.
* If no file can be found, ArangoDB will check if the collection *_modules*
contains a document of the form
```js
{
path: "/com/example/extension",
content: "...."
}
```
**Note**: The leading */* is important - even if you call *require* without a
leading */*. If such a document exists, then the value of the *content*
attribute must contain the JavaScript code of the module. This string is
executed in a new module context and the value of *exports* object is
returned. This value is also stored in the module cache.
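The three-step lookup described above can be sketched as follows; *requireModule*, *lookupFile* and *lookupCollection* are hypothetical stand-ins for ArangoDB's internal machinery:

```javascript
// Sketch of the lookup order: module cache first, then a file in the
// search path, then the _modules collection. The file and collection
// lookups are stubbed out via callbacks (hypothetical stand-ins).
var moduleCache = {};

function requireModule(path, lookupFile, lookupCollection) {
  if (moduleCache.hasOwnProperty(path)) {
    return moduleCache[path];              // 1. cache hit, no code executed
  }
  var code = lookupFile(path + ".js");     // 2. file in the search path
  if (code === null) {
    code = lookupCollection("/" + path);   // 3. _modules doc, leading "/"
  }
  if (code === null) {
    throw new Error("cannot locate module: " + path);
  }
  var exports = {};
  new Function("exports", code)(exports);  // run in a fresh module context
  moduleCache[path] = exports;             // store in the module cache
  return exports;
}
```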
!SUBSECTION Modules Cache
As *require* uses a module cache to store the exports objects of the required
modules, changing the design documents for the modules in the *_modules* collection
might have no effect at all.
You need to clear the cache when manually changing documents in the *_modules*
collection.
```js
arangosh> require("internal").flushServerModules()
```
This initiates a flush of the modules in the ArangoDB *server* process.
Please note that the ArangoDB JavaScript shell uses the same mechanism as the
server to locate JavaScript modules, but they do not share the same module cache.
If you flush the server cache, this will not flush the shell cache - and vice
versa.
In order to flush the modules cache of the JavaScript shell, you should use
```js
arangosh> require("internal").flushModuleCache()
```

View File

@ -194,8 +194,6 @@
* [General Handling](GeneralHttp/README.md)
<!-- 26 -->
* [Javascript Modules](ModuleJavaScript/README.md)
* [Common JSModules](ModuleJavaScript/JSModules.md)
* [Path](ModuleJavaScript/ModulesPath.md)
* ["console"](ModuleConsole/README.md)
* ["fs"](ModuleFs/README.md)
* ["graph"](ModuleGraph/README.md)

View File

@ -56,4 +56,3 @@ management tools. We will fix this in the next version.
Please also consider the comments in the following section about
firewall setup.

View File

@ -10,7 +10,7 @@ the *cluster.disable-dispatcher-kickstarter* and
*cluster.disable-dispatcher-interface* options in *arangod.conf* both
to *false*.
Note that once you switch *cluster.disable-dispatcher-interface* to
**Note**: Once you switch *cluster.disable-dispatcher-interface* to
*false*, the usual web front end is automatically replaced with the
web front end for cluster planning. Therefore you can simply point
your browser to *http://localhost:8529* (if you are running on the
@ -25,49 +25,55 @@ Start up a regular ArangoDB, either in console mode or connect to it with
the Arango shell *arangosh*. Then you can ask it to plan a cluster for
you:
arangodb> var Planner = require("org/arangodb/cluster").Planner;
arangodb> p = new Planner({numberOfDBservers:3, numberOfCoordinators:2});
[object Object]
```js
arangodb> var Planner = require("org/arangodb/cluster").Planner;
arangodb> p = new Planner({numberOfDBservers:3, numberOfCoordinators:2});
[object Object]
```
If you are curious you can look at the plan of your cluster:
arangodb> p.getPlan();
```js
arangodb> p.getPlan();
```
This will show you a huge JSON document. More interestingly, some further
components tell you more about the layout of your cluster:
arangodb> p.DBservers;
[
{
"id" : "Pavel",
"dispatcher" : "me",
"port" : 8629
},
{
"id" : "Perry",
"dispatcher" : "me",
"port" : 8630
},
{
"id" : "Pancho",
"dispatcher" : "me",
"port" : 8631
}
]
arangodb> p.coordinators;
[
{
"id" : "Claus",
"dispatcher" : "me",
"port" : 8530
},
{
"id" : "Chantalle",
"dispatcher" : "me",
"port" : 8531
}
]
```js
arangodb> p.DBservers;
[
{
"id" : "Pavel",
"dispatcher" : "me",
"port" : 8629
},
{
"id" : "Perry",
"dispatcher" : "me",
"port" : 8630
},
{
"id" : "Pancho",
"dispatcher" : "me",
"port" : 8631
}
]
arangodb> p.coordinators;
[
{
"id" : "Claus",
"dispatcher" : "me",
"port" : 8530
},
{
"id" : "Chantalle",
"dispatcher" : "me",
"port" : 8531
}
]
```
This tells you the ports on which your ArangoDB processes will listen.
We will need port 8530 (or whatever appears on your machine) for the
@ -80,9 +86,11 @@ all data directories and log files, so if you have previously used the
same cluster plan you will lose all your data. Use the *relaunch* method
described below instead in that case.
arangodb> var Kickstarter = require("org/arangodb/cluster").Kickstarter;
arangodb> k = new Kickstarter(p.getPlan());
arangodb> k.launch();
```js
arangodb> var Kickstarter = require("org/arangodb/cluster").Kickstarter;
arangodb> k = new Kickstarter(p.getPlan());
arangodb> k.launch();
```
That is all you have to do to fire up your first cluster. You will see some
output, which you can safely ignore (as long as no error happens).
@ -92,71 +100,84 @@ as if it were a single ArangoDB instance (use the port number from above
instead of 8530, if you get a different one) (probably from another
shell window):
$ arangosh --server.endpoint tcp://localhost:8530
[... some output omitted]
arangosh [_system]> db._listDatabases();
[
"_system"
]
```js
$ arangosh --server.endpoint tcp://localhost:8530
[... some output omitted]
arangosh [_system]> db._listDatabases();
[
"_system"
]
```
This, for example, lists the cluster-wide databases.
Now, let us create a sharded collection. Note that we only have to specify
the number of shards to use in addition to the usual command.
The shards are automatically distributed among your DBservers:
arangosh [_system]> example = db._create("example",{numberOfShards:6});
[ArangoCollection 1000001, "example" (type document, status loaded)]
arangosh [_system]> x = example.save({"name":"Hans", "age":44});
{
"error" : false,
"_id" : "example/1000008",
"_rev" : "13460426",
"_key" : "1000008"
}
arangosh [_system]> example.document(x._key);
{
"age" : 44,
"name" : "Hans",
"_id" : "example/1000008",
"_rev" : "13460426",
"_key" : "1000008"
}
```js
arangosh [_system]> example = db._create("example",{numberOfShards:6});
[ArangoCollection 1000001, "example" (type document, status loaded)]
arangosh [_system]> x = example.save({"name":"Hans", "age":44});
{
"error" : false,
"_id" : "example/1000008",
"_rev" : "13460426",
"_key" : "1000008"
}
arangosh [_system]> example.document(x._key);
{
"age" : 44,
"name" : "Hans",
"_id" : "example/1000008",
"_rev" : "13460426",
"_key" : "1000008"
}
```
You can shut down your cluster by using the following Kickstarter
method (in the ArangoDB console):
arangodb> k.shutdown();
```js
arangodb> k.shutdown();
```
If you want to start your cluster again without losing data you have
previously stored in it, you can use the *relaunch* method in exactly the
same way as you previously used the *launch* method:
arangodb> k.relaunch();
```js
arangodb> k.relaunch();
```
**Note**: If you have destroyed the object *k*, for example because you
have shut down the ArangoDB instance in which you planned the cluster,
then you can recreate it for a *relaunch* operation, provided you have
kept the cluster plan object returned by the *getPlan* method. If you
had for example done:
```js
arangodb> var plan = p.getPlan();
arangodb> require("fs").write("saved_plan.json",JSON.stringify(plan));
```
Then you can later do (in another session):
```js
arangodb> var plan = require("fs").read("saved_plan.json");
arangodb> plan = JSON.parse(plan);
arangodb> var Kickstarter = require("org/arangodb/cluster").Kickstarter;
arangodb> var k = new Kickstarter(plan);
arangodb> k.relaunch();
```
to start the existing cluster anew.
You can check whether all your cluster processes are still
running by issuing:
```js
arangodb> k.isHealthy();
```
This will show you the status of all processes in the cluster. You
should see "RUNNING" in all the relevant places.
@ -164,8 +185,10 @@ should see "RUNNING" there, in all the relevant places.
Finally, to clean up the whole cluster (losing all the data stored in
it), do:
```js
arangodb> k.shutdown();
arangodb> k.cleanup();
```
We conclude this section with another example using two machines, which
will act as two dispatchers. We start from scratch using two machines,
@ -175,19 +198,23 @@ instance installed and running. Please make sure that both bind to
all network devices, so that they can talk to each other. Also enable
the dispatcher functionality on both of them, as described above.
```js
arangodb> var Planner = require("org/arangodb/cluster").Planner;
arangodb> var p = new Planner({
dispatchers: {"me":{"endpoint":"tcp://192.168.173.78:8529"},
"theother":{"endpoint":"tcp://192.168.173.13:6789"}},
"numberOfCoordinators":2, "numberOfDBservers": 2});
```
With these commands, you create a cluster plan involving two machines.
The planner will put one DBserver and one Coordinator on each machine.
You can now launch this cluster exactly as explained earlier:
```js
arangodb> var Kickstarter = require("org/arangodb/cluster").Kickstarter;
arangodb> k = new Kickstarter(p.getPlan());
arangodb> k.launch();
```
Likewise, the methods *shutdown*, *relaunch*, *isHealthy* and *cleanup*
work exactly as in the single server case.
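The planner options shown above form an ordinary JavaScript object, so their shape can be checked before handing them to the Planner. A small standalone sketch (the *validEndpoints* helper is hypothetical, not part of ArangoDB):

```javascript
// Dispatcher configuration in the same shape as passed to the Planner above.
const config = {
  dispatchers: {
    "me": { "endpoint": "tcp://192.168.173.78:8529" },
    "theother": { "endpoint": "tcp://192.168.173.13:6789" }
  },
  "numberOfCoordinators": 2,
  "numberOfDBservers": 2
};

// Hypothetical helper: every dispatcher endpoint must look like tcp://host:port.
function validEndpoints(cfg) {
  return Object.keys(cfg.dispatchers).every(function (name) {
    return /^tcp:\/\/[^:\/]+:\d+$/.test(cfg.dispatchers[name].endpoint);
  });
}

console.log(validEndpoints(config)); // true
```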
@ -1,4 +0,0 @@
"first","name","age","active","dob"
"John","Connor",25,true,
"Jim","O'Brady",19,,
"Lisa","Jones",,,"1981-04-09"
@ -1,3 +0,0 @@
{ "name" : { "first" : "John", "last" : "Connor" }, "active" : true, "age" : 25, "likes" : [ "swimming"] }
{ "name" : { "first" : "Jim", "last" : "O'Brady" }, "age" : 19, "likes" : [ "hiking", "singing" ] }
{ "name" : { "first" : "Lisa", "last" : "Jones" }, "dob" : "1981-04-09", "likes" : [ "running" ] }
@ -1,4 +0,0 @@
arango> var Graph = require("org/arangodb/graph-blueprint").Graph;
arango> g1 = new Graph("graph", "vertices", "edges");
Graph("vertices", "edges")
@ -1,4 +0,0 @@
arango> var Graph = require("org/arangodb/graph-blueprint").Graph;
arango> g2 = new Graph("graph", "vertices", "alternativeEdges");
Graph("vertices", "alternativeEdges")
@ -350,7 +350,7 @@ namespace triagens {
string _databasePath;
////////////////////////////////////////////////////////////////////////////////
/// @brief default journal size
/// @startDocuBlock databaseMaximalJournalSize
///
/// @CMDOPT{\--database.maximal-journal-size @CA{size}}
///
@ -6518,7 +6518,7 @@ static v8::Handle<v8::Value> JS_PlanIdVocbaseCol (v8::Arguments const& argv) {
/// created journals. Also note that you cannot lower the journal size to less
/// than the size of the largest document already stored in the collection.
///
/// **Note**: some other collection properties, such as *type*, *isVolatile*,
/// or *keyOptions* cannot be changed once the collection is created.
///
/// @EXAMPLES
@ -973,45 +973,47 @@ function routeRequest (req, res) {
////////////////////////////////////////////////////////////////////////////////
/// @brief defines an http action handler
/// @startDocuBlock actionsDefineHttp
///
/// `actions.defineHttp(options)`
///
/// Defines a new action. The *options* are as follows:
///
/// `options.url`
///
/// The URL that can be used to access the action. This path might contain
/// slashes. Note that this action will also be called if a url is given such
/// that *options.url* is a prefix of the given url and no longer (more
/// specific) definition matches.
///
/// `options.prefix`
///
/// If *false*, then only use the action for exact matches. The default is
/// *true*.
///
/// `options.context`
///
/// The context to which this action belongs. Possible values are "admin"
/// and "user".
///
/// `options.callback(request, response)`
///
/// The request argument contains a description of the request. A request
/// parameter *foo* is accessible as *request.parameters.foo*. A request
/// header *bar* is accessible as *request.headers.bar*. Assume that
/// the action is defined for the url */foo/bar* and the request url is
/// */foo/bar/hugo/egon*. Then the suffix parts *[ "hugo", "egon" ]*
/// are available in *request.suffix*.
///
/// The callback must fill the *response*.
///
/// * *response.responseCode*: the response code
/// * *response.contentType*: the content type of the response
/// * *response.body*: the body of the response
///
/// You can use the functions *resultOk* and *resultError* to easily
/// generate a response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function defineHttp (options) {
@ -1050,11 +1052,12 @@ function defineHttp (options) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief get an error message string for an error code
/// @startDocuBlock actionsGetErrorMessage
///
/// `actions.getErrorMessage(code)`
///
/// Returns the error message for an error code.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function getErrorMessage (code) {
@ -1108,17 +1111,18 @@ function getJsonBody (req, res, code) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates an error
/// @startDocuBlock actionsResultError
///
/// @FUN{actions.resultError(@FA{req}, @FA{res}, @FA{code}, @FA{errorNum},
/// @FA{errorMessage}, @FA{headers}, @FA{keyvals})}
/// *actions.resultError(*req*, *res*, *code*, *errorNum*,
/// *errorMessage*, *headers*, *keyvals)*
///
/// The function generates an error response. The response body is an array
/// with the attributes *error* containing *true*, *code* containing *code*,
/// *errorNum* containing *errorNum*, and *errorMessage* containing the error
/// message *errorMessage*. *keyvals* are mixed into the result.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultError (req, res, httpReturnCode, errorNum, errorMessage, headers, keyvals) {
@ -1453,15 +1457,16 @@ function badParameter (req, res, name) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns a result
/// @startDocuBlock actionsResultOk
///
/// `actions.resultOk(req, res, code, result, headers)`
///
/// The function defines a response. *code* is the status code to
/// return. *result* is the result object, which will be returned as a JSON
/// object in the body. *headers* is an array of headers to be returned.
/// The function adds the attribute *error* with value *false*
/// and *code* with value *code* to the *result*.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultOk (req, res, httpReturnCode, result, headers) {
@ -1486,11 +1491,12 @@ function resultOk (req, res, httpReturnCode, result, headers) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates an error for a bad request
/// @startDocuBlock actionsResultBad
///
/// `actions.resultBad(req, res, error-code, msg, headers)`
///
/// The function generates an error response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultBad (req, res, code, msg, headers) {
@ -1500,11 +1506,12 @@ function resultBad (req, res, code, msg, headers) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates an error for not found
/// @startDocuBlock actionsResultNotFound
///
/// `actions.resultNotFound(req, res, code, msg, headers)`
///
/// The function generates an error response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultNotFound (req, res, code, msg, headers) {
@ -1514,11 +1521,12 @@ function resultNotFound (req, res, code, msg, headers) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates an error for not implemented
/// @startDocuBlock actionsResultNotImplemented
///
/// `actions.resultNotImplemented(req, res, msg, headers)`
///
/// The function generates an error response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultNotImplemented (req, res, msg, headers) {
@ -1533,11 +1541,12 @@ function resultNotImplemented (req, res, msg, headers) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates an error for unsupported methods
/// @startDocuBlock actionsResultUnsupported
///
/// `actions.resultUnsupported(req, res, headers)`
///
/// The function generates an error response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultUnsupported (req, res, headers) {
@ -1611,11 +1620,12 @@ function handleRedirect (req, res, options, headers) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates a permanently redirect
/// @startDocuBlock actionsResultPermanentRedirect
///
/// `actions.resultPermanentRedirect(req, res, options, headers)`
///
/// The function generates a redirect response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultPermanentRedirect (req, res, options, headers) {
@ -1627,11 +1637,12 @@ function resultPermanentRedirect (req, res, options, headers) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates a temporary redirect
/// @startDocuBlock actionsResultTemporaryRedirect
///
/// `actions.resultTemporaryRedirect(req, res, options, headers)`
///
/// The function generates a redirect response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultTemporaryRedirect (req, res, options, headers) {
@ -1727,11 +1738,12 @@ function resultCursor (req, res, cursor, code, options) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates an error for unknown collection
/// @startDocuBlock actionsCollectionNotFound
///
/// `actions.collectionNotFound(req, res, collection, headers)`
///
/// The function generates an error response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function collectionNotFound (req, res, collection, headers) {
@ -1751,11 +1763,12 @@ function collectionNotFound (req, res, collection, headers) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates an error for an unknown index
/// @startDocuBlock actionsIndexNotFound
///
/// `actions.indexNotFound(req, res, collection, index, headers)`
///
/// The function generates an error response.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function indexNotFound (req, res, collection, index, headers) {
@ -1781,13 +1794,14 @@ function indexNotFound (req, res, collection, index, headers) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief generates an error for an exception
/// @startDocuBlock actionsResultException
///
/// `actions.resultException(req, res, err, headers, verbose)`
///
/// The function generates an error response. If *verbose* is set to
/// *true* or not specified (the default), then the error stack trace will
/// be included in the error message if available.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
function resultException (req, res, err, headers, verbose) {