diff --git a/Documentation/Books/Users/Aql/GraphOperations.mdpp b/Documentation/Books/Users/Aql/GraphOperations.mdpp index ee633b5ccc..ec7759e0b7 100644 --- a/Documentation/Books/Users/Aql/GraphOperations.mdpp +++ b/Documentation/Books/Users/Aql/GraphOperations.mdpp @@ -35,27 +35,307 @@ This section describes various AQL functions which can be used to receive inform !SUBSECTION GRAPH_EDGES -@startDocuBlock JSF_aql_general_graph_edges + +`GRAPH_EDGES (graphName, vertexExample, options)` + +The GRAPH\_EDGES function returns all edges of the graph connected to the vertices +defined by the example. + +The complexity of this method is **O(n\*m^x)** with *n* being the vertices defined by the +parameter *vertexExample*, *m* the average amount of edges of a vertex and *x* the maximal +depth. +Hence the default call would have a complexity of **O(n\*m)**. + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *vertexExample* : An example for the desired +vertices (see [example](#short-explanation-of-the-example-parameter)). +* *options* (optional) : An object containing the following options: + * *direction* : The direction of the edges as a string. Possible values are *outbound*, *inbound* and *any* (default). + * *edgeCollectionRestriction* : One or multiple edge collection names. Only edges from these collections will be considered for the path. + * *startVertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered as + start vertex of a path. + * *endVertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered as end vertex of a path. + * *edgeExamples* : A filter example for the edges (see [example](#short-explanation-of-the-example-parameter)). + * *minDepth* : Defines the minimal length of a path from an edge to a vertex + (default is 1, which means only the edges directly connected to a vertex would be returned). + * *maxDepth* : Defines the maximal length of a path from an edge + to a vertex (default is 1, which means only the edges directly connected to a vertex would be returned). + * *maxIterations*: the maximum number of iterations that the traversal is allowed to perform. It is sensible to set this number so that unbounded traversals will terminate. + * *includeData*: Defines if the result should contain only ids (false) or if all documents + should be fully extracted (true). By default this parameter is set to false, so only ids + will be returned.
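+As a quick illustration of *edgeCollectionRestriction*, the following hand-written arangosh
+sketch (not one of the generated examples below) only considers edges from a single edge
+collection; it assumes the *routeplanner* example graph used below, which is expected to
+define an edge collection named *germanHighway*:
+
+```js
+// minimal sketch, assuming the "routeplanner" example graph with an edge
+// collection named "germanHighway"
+var examples = require("@arangodb/graph-examples/example-graph.js");
+var g = examples.loadGraph("routeplanner");
+db._query("FOR e IN GRAPH_EDGES('routeplanner', 'germanCity/Hamburg', " +
+  "{edgeCollectionRestriction: 'germanHighway', includeData: true}) RETURN e").toArray();
+examples.dropGraph("routeplanner");
+```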
+ +@EXAMPLES + +A route planner example, all edges to locations with a distance of either 700 or 600.: + + @startDocuBlockInline generalGraphEdges1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphEdges1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_EDGES(" + | +"'routeplanner', {}, {edgeExamples : [{distance: 600}, {distance: 700}]}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphEdges1 + +A route planner example, all outbound edges of Hamburg with a maximal depth of 2 : + + @startDocuBlockInline generalGraphEdges2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphEdges2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_EDGES(" + | +"'routeplanner', 'germanCity/Hamburg', {direction : 'outbound', maxDepth : 2}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphEdges2 + +Including the data: + + @startDocuBlockInline generalGraphEdges3 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphEdges3} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_EDGES(" + | + "'routeplanner', 'germanCity/Hamburg', {direction : 'outbound'," + | + "maxDepth : 2, includeData: true}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphEdges3 + !SUBSECTION GRAPH_VERTICES -@startDocuBlock JSF_aql_general_graph_vertices + + +The GRAPH\_VERTICES function returns all vertices. + +`GRAPH_VERTICES (graphName, vertexExample, options)` + +According to the optional filters it will only return vertices that have +outbound, inbound or any (default) edges. + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *vertexExample* : An example for the desired vertices (see [example](#short-explanation-of-the-example-parameter)). +* *options* (optional) : An object containing the following options: + * *direction* : The direction of the edges as a string. Possible values are *outbound*, *inbound* and *any* (default). + * *vertexCollectionRestriction* : One or multiple vertex collections that should be considered. + +@EXAMPLES + +A route planner example, all vertices of the graph + + @startDocuBlockInline generalGraphVertices1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphVertices1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_VERTICES(" + +"'routeplanner', {}) RETURN e").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphVertices1 + +A route planner example, all vertices from collection *germanCity*. 
+ + @startDocuBlockInline generalGraphVertices2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphVertices2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_VERTICES(" + | +"'routeplanner', {}, {direction : 'any', vertexCollectionRestriction" + + " : 'germanCity'}) RETURN e").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphVertices2 + + !SUBSECTION GRAPH_NEIGHBORS -@startDocuBlock JSF_aql_general_graph_neighbors + + +The GRAPH\_NEIGHBORS function returns all neighbors of vertices. + +`GRAPH_NEIGHBORS (graphName, vertexExample, options)` + +By default only the direct neighbors (path length equals 1) are returned, but one can define +the range of the path length to the neighbors with the options *minDepth* and *maxDepth*. +The complexity of this method is **O(n\*m^x)** with *n* being the vertices defined by the +parameter *vertexExample*, *m* the average amount of neighbors and *x* the maximal depth. +Hence the default call would have a complexity of **O(n\*m)**. + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *vertexExample* : An example for the desired vertices (see [example](#short-explanation-of-the-example-parameter)). +* *options* : An object containing the following options: + * *direction* : The direction of the edges. Possible values are *outbound*, *inbound* and *any* (default). + * *edgeExamples* : A filter example for the edges to the neighbors (see [example](#short-explanation-of-the-example-parameter)). + * *neighborExamples* : An example for the desired neighbors (see [example](#short-explanation-of-the-example-parameter)). + * *edgeCollectionRestriction* : One or multiple edge collection names. Only edges from these collections will be considered for the path. + * *vertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be contained in the + result. This does not affect vertices on the path. + * *minDepth* : Defines the minimal depth a path to a neighbor must have to be returned (default is 1). + * *maxDepth* : Defines the maximal depth a path to a neighbor must have to be returned (default is 1). + * *maxIterations*: the maximum number of iterations that the traversal is + allowed to perform. It is sensible to set this number so unbounded traversals + will terminate at some point. + * *includeData* is a boolean value to define if the returned documents should be extracted + instead of returning their ids only. The default is *false*. + +Note: in ArangoDB versions prior to 2.6 *NEIGHBORS* returned the array of neighbor vertices with +all attributes and not just the vertex ids. To return to the same behavior, set the *includeData* +option to *true* in 2.6 and above.
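+To illustrate the *includeData* note above, the following hand-written arangosh sketch (not one
+of the generated examples below) returns the full neighbor documents instead of only their ids:
+
+```js
+// minimal sketch, assuming the "routeplanner" example graph used in the examples below
+var examples = require("@arangodb/graph-examples/example-graph.js");
+var g = examples.loadGraph("routeplanner");
+db._query("FOR n IN GRAPH_NEIGHBORS('routeplanner', 'germanCity/Hamburg', " +
+  "{direction: 'outbound', includeData: true}) RETURN n").toArray();
+examples.dropGraph("routeplanner");
+```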
+ +@EXAMPLES + +A route planner example, all neighbors of locations with a distance of either +700 or 600.: + + @startDocuBlockInline generalGraphNeighbors1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphNeighbors1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_NEIGHBORS(" + | +"'routeplanner', {}, {edgeExamples : [{distance: 600}, {distance: 700}]}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphNeighbors1 + +A route planner example, all outbound neighbors of Hamburg with a maximal depth of 2 : + + @startDocuBlockInline generalGraphNeighbors2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphNeighbors2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_NEIGHBORS(" + | +"'routeplanner', 'germanCity/Hamburg', {direction : 'outbound', maxDepth : 2}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphNeighbors2 + + !SUBSECTION GRAPH_COMMON_NEIGHBORS -@startDocuBlock JSF_aql_general_graph_common_neighbors + + +The GRAPH\_COMMON\_NEIGHBORS function returns all common neighbors of the vertices +defined by the examples. + +`GRAPH_COMMON_NEIGHBORS (graphName, vertex1Example, vertex2Examples, +optionsVertex1, optionsVertex2)` + +This function returns the intersection of *GRAPH_NEIGHBORS(vertex1Example, optionsVertex1)* +and *GRAPH_NEIGHBORS(vertex2Example, optionsVertex2)*. +The complexity of this method is **O(n\*m^x)** with *n* being the maximal amount of vertices +defined by the parameters vertexExamples, *m* the average amount of neighbors and *x* the +maximal depths. +Hence the default call would have a complexity of **O(n\*m)**; + +For parameter documentation read the documentation of +[GRAPH_NEIGHBORS](#graphneighbors). + +@EXAMPLES + +A route planner example, all common neighbors of capitals. 
+ + @startDocuBlockInline generalGraphCommonNeighbors1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphCommonNeighbors1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_COMMON_NEIGHBORS(" + | +"'routeplanner', {isCapital : true}, {isCapital : true}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphCommonNeighbors1 + +A route planner example, all common outbound neighbors of Hamburg with any other location +which have a maximal depth of 2: + + @startDocuBlockInline generalGraphCommonNeighbors2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphCommonNeighbors2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_COMMON_NEIGHBORS(" + | +"'routeplanner', 'germanCity/Hamburg', {}, {direction : 'outbound', maxDepth : 2}, "+ + | "{direction : 'outbound', maxDepth : 2}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphCommonNeighbors2 + + !SUBSECTION GRAPH_COMMON_PROPERTIES -@startDocuBlock JSF_aql_general_graph_common_properties + + + +`GRAPH_COMMON_PROPERTIES (graphName, vertex1Example, vertex2Examples, options)` + +The GRAPH\_COMMON\_PROPERTIES function returns a list of objects which have the id of +the vertices defined by *vertex1Example* as keys and a list of vertices defined by +*vertex2Example* that share common properties as value. Notice that only the +vertex id and the matching attributes are returned in the result. + +The complexity of this method is **O(n)** with *n* being the maximal amount of vertices +defined by the parameters vertexExamples. + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *vertex1Example* : An example for the desired vertices (see [example](#short-explanation-of-the-example-parameter)). +* *vertex2Example* : An example for the desired vertices (see [example](#short-explanation-of-the-example-parameter)). +* *options* (optional) : An object containing the following options: + * *vertex1CollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered. + * *vertex2CollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered. + * *ignoreProperties* : One or multiple attributes of a document that should be ignored, either a string or an array. + +@EXAMPLES + +A route planner example, all locations with the same properties: + + @startDocuBlockInline generalGraphProperties1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphProperties1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_COMMON_PROPERTIES(" + | +"'routeplanner', {}, {}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphProperties1 + +A route planner example, all cities which share the same properties except for population.
+ + @startDocuBlockInline generalGraphProperties2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphProperties2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_COMMON_PROPERTIES(" + | +"'routeplanner', {}, {}, {ignoreProperties: 'population'}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphProperties2 + + !SUBSECTION Shortest Paths, distances and traversals. @@ -65,27 +345,338 @@ This section describes AQL functions, that calculate paths from a subset of vert !SUBSECTION GRAPH_PATHS -@startDocuBlock JSF_aql_general_graph_paths + + +The GRAPH\_PATHS function returns all paths of a graph. + +`GRAPH_PATHS (graphName, options)` + +The complexity of this method is **O(n\*n\*m)** with *n* being the amount of vertices in +the graph and *m* the average amount of connected edges. + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *options* : An object containing the following options: + * *direction* : The direction of the edges. Possible values are *any*, *inbound* and *outbound* (default). + * *followCycles* (optional) : If set to *true* the query follows cycles in the graph, default is false. + * *minLength* (optional) : Defines the minimal length a path must have to be returned (default is 0). + * *maxLength* (optional) : Defines the maximal length a path must have to be returned (default is 10). + +@EXAMPLES + +Return all paths of the graph "social": + + @startDocuBlockInline generalGraphPaths + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphPaths} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("social"); + db._query("RETURN GRAPH_PATHS('social')").toArray(); + ~ examples.dropGraph("social"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphPaths + +Return all inbound paths of the graph "social" with a minimal +length of 1 and a maximal length of 2: + + @startDocuBlockInline generalGraphPaths2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphPaths2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("social"); + | db._query( + | "RETURN GRAPH_PATHS('social', {direction : 'inbound', minLength : 1, maxLength : 2})" + ).toArray(); + ~ examples.dropGraph("social"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphPaths2 + !SUBSECTION GRAPH_SHORTEST_PATH -@startDocuBlock JSF_aql_general_graph_shortest_paths + + +The GRAPH\_SHORTEST\_PATH function returns all shortest paths of a graph. + +`GRAPH_SHORTEST_PATH (graphName, startVertexExample, endVertexExample, options)` + +This function determines all shortest paths in a graph identified by *graphName*. +If one wants to call this function to receive nearly all shortest paths for a +graph, the option *algorithm* should be set to +[Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) to +increase performance. +If no algorithm is provided in the options the function chooses the appropriate +one (either [Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) + or [Dijkstra](http://en.wikipedia.org/wiki/Dijkstra's_algorithm)) according to its parameters. +The length of a path is by default the amount of edges from one start vertex to +an end vertex. The option *weight* allows the user to define an edge attribute +representing the length.
+ +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *startVertexExample* : An example for the desired start Vertices (see [example](#short-explanation-of-the-example-parameter)). +* *endVertexExample* : An example for the desired end Vertices (see [example](#short-explanation-of-the-example-parameter)). +* *options* (optional) : An object containing the following options: + * *direction* : The direction of the edges as a string. Possible values are *outbound*, *inbound* and *any* (default). + * *edgeCollectionRestriction* : One or multiple edge collection names. Only edges from these collections will be considered for the path. + * *startVertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered as + start vertex of a path. + * *endVertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered as + end vertex of a path. + * *edgeExamples* : A filter example for the edges in the shortest paths (see [example](#short-explanation-of-the-example-parameter)). + * *algorithm* : The algorithm to calculate + the shortest paths. If both start and end vertex examples are empty + [Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) is + used, otherwise the default is [Dijkstra](http://en.wikipedia.org/wiki/Dijkstra's_algorithm). + * *weight* : The name of the attribute of the edges containing the length as a string. + * *defaultWeight* : Only used with the option *weight*. + If an edge does not have the attribute named as defined in option *weight* this default is used as length. + If no default is supplied the default would be positive Infinity so the path could not be calculated. + * *stopAtFirstMatch* : Only useful if targetVertices is an example that matches + to more than one vertex. If so only the shortest path to + the closest of these target vertices is returned. + This flag is of special use if you have target pattern and + you want to know which vertex with this pattern is matched first. + * *includeData* : Triggers if only *_id*'s are returned (*false*, default) + or if data is included for all objects as well (*true*) + This will modify the content of *vertex*, *path.vertices* and *path.edges*. + +NOTE: Since version 2.6 we have included a new optional parameter *includeData*. +This parameter triggers if the result contains the real data object *true* or +it just includes the *_id* values *false*. +The default value is *false* as it yields performance benefits. 
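+The *stopAtFirstMatch* option can be sketched as follows (a hand-written example, not one of the
+generated examples below); it returns only the path to the closest matching vertex in the
+*frenchCity* vertex collection used in the examples below:
+
+```js
+// minimal sketch, assuming the "routeplanner" example graph used in the examples below
+var examples = require("@arangodb/graph-examples/example-graph.js");
+var g = examples.loadGraph("routeplanner");
+db._query("FOR p IN GRAPH_SHORTEST_PATH('routeplanner', 'germanCity/Hamburg', {}, " +
+  "{weight: 'distance', stopAtFirstMatch: true, " +
+  "endVertexCollectionRestriction: 'frenchCity'}) RETURN [p.vertex, p.distance]").toArray();
+examples.dropGraph("routeplanner");
+```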
+ +@EXAMPLES + +A route planner example, shortest distance from all german to all french cities: + + @startDocuBlockInline generalGraphShortestPaths1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphShortestPaths1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_SHORTEST_PATH(" + | + "'routeplanner', {}, {}, {" + + | "weight : 'distance', endVertexCollectionRestriction : 'frenchCity', " + + | "includeData: true, " + + | "startVertexCollectionRestriction : 'germanCity'}) RETURN [e.startVertex, e.vertex._id, " + + | "e.distance, LENGTH(e.paths)]" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphShortestPaths1 + +A route planner example, shortest distance from Hamburg and Cologne to Lyon: + + @startDocuBlockInline generalGraphShortestPaths2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphShortestPaths2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_SHORTEST_PATH(" + | +"'routeplanner', [{_id: 'germanCity/Cologne'},{_id: 'germanCity/Munich'}], " + + | "'frenchCity/Lyon', " + + | "{weight : 'distance'}) RETURN [e.startVertex, e.vertex, e.distance, LENGTH(e.paths)]" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphShortestPaths2 + + !SUBSECTION GRAPH_TRAVERSAL -@startDocuBlock JSF_aql_general_graph_traversal + + +The GRAPH\_TRAVERSAL function traverses through the graph. + +`GRAPH_TRAVERSAL (graphName, startVertexExample, direction, options)` + +This function performs traversals on the given graph. + +The complexity of this function strongly depends on the usage. + +*Parameters* +* *graphName* : The name of the graph as a string. +* *startVertexExample* : An example for the desired vertices (see [example](#short-explanation-of-the-example-parameter)). +* *direction* : The direction of the edges as a string. Possible values are *outbound*, *inbound* and *any* (default). +* *options*: Object containing optional options. + +*Options*: + +* *strategy*: determines the visitation strategy. Possible values are *depthfirst* and *breadthfirst*. Default is *breadthfirst*. +* *order*: determines the visitation order. Possible values are *preorder* and *postorder*. +* *itemOrder*: determines the order in which connections returned by the expander will be processed. Possible values are *forward* and *backward*. +* *maxDepth*: if set to a value greater than *0*, this will limit the traversal to this maximum depth. +* *minDepth*: if set to a value greater than *0*, all vertices found on a level below the *minDepth* level will not be included in the result. +* *maxIterations*: the maximum number of iterations that the traversal is allowed to perform. It is sensible to set this number so unbounded traversals + will terminate at some point. +* *uniqueness*: an object that defines how repeated visitations of vertices should + be handled. The *uniqueness* object can have a sub-attribute *vertices*, and a + sub-attribute *edges*. Each sub-attribute can have one of the following values: + * *"none"*: no uniqueness constraints + * *"path"*: element is excluded if it is already contained in the current path. + This setting may be sensible for graphs that contain cycles (e.g. A -> B -> C -> A). + * *"global"*: element is excluded if it was already found/visited at any point during the traversal. 
+* *filterVertices* An optional array of example vertex documents that the traversal will treat specially. + If no examples are given, the traversal will handle all encountered vertices equally. + If one or many vertex examples are given, the traversal will exclude any non-matching vertex from the + result and/or not descend into it. Optionally, filterVertices can contain a string containing the name + of a user-defined AQL function that should be responsible for filtering. + If so, the AQL function is expected to have the following signature: + + `function (config, vertex, path)` + + If a custom AQL function is used for filterVertices, it is expected to return one of the following values: + + * [ ]: Include the vertex in the result and descend into its connected edges + * [ "prune" ]: Will include the vertex in the result but not descend into its connected edges + * [ "exclude" ]: Will not include the vertex in the result but descend into its connected edges + * [ "prune", "exclude" ]: Will completely ignore the vertex and its connected edges + +* *vertexFilterMethod:* Only useful in conjunction with filterVertices and if no user-defined AQL function is used. + If specified, it will influence how vertices are handled that don't match the examples in filterVertices: + + * [ "prune" ]: Will include non-matching vertices in the result but not descend into them + * [ "exclude" ]: Will not include non-matching vertices in the result but descend into them + * [ "prune", "exclude" ]: Will completely ignore the vertex and its connected edges + +@EXAMPLES + +A route planner example, start a traversal from Hamburg : + + @startDocuBlockInline generalGraphTraversal1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphTraversal1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_TRAVERSAL('routeplanner', 'germanCity/Hamburg'," + + | " 'outbound') RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphTraversal1 + +A route planner example, start a traversal from Hamburg with a max depth of 1 +so only the direct neighbors of Hamburg are returned: + + @startDocuBlockInline generalGraphTraversal2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphTraversal2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_TRAVERSAL('routeplanner', 'germanCity/Hamburg'," + + | " 'outbound', {maxDepth : 1}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphTraversal2 + + !SUBSECTION GRAPH_TRAVERSAL_TREE -@startDocuBlock JSF_aql_general_graph_traversal_tree + + +The GRAPH\_TRAVERSAL\_TREE function traverses through the graph. + +`GRAPH_TRAVERSAL_TREE (graphName, startVertexExample, direction, connectName, options)` +This function creates a tree format from the result for a better visualization of the path. +This function performs traversals on the given graph. + +The complexity of this function strongly depends on the usage. + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *startVertexExample* : An example for the desired +vertices (see [example](#short-explanation-of-the-example-parameter)). +* *direction* : The direction of the edges as a string. + Possible values are *outbound*, *inbound* and *any* (default). +* *connectName* : The result attribute which + contains the connection. 
+* *options* (optional) : An object containing options, see + [Graph Traversals](../Aql/GraphOperations.md#graphtraversal). + +@EXAMPLES + +A route planner example, start a traversal from Hamburg: + + @startDocuBlockInline generalGraphTraversalTree1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphTraversalTree1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_TRAVERSAL_TREE('routeplanner', 'germanCity/Hamburg'," + + | " 'outbound', 'connnection') RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphTraversalTree1 + +A route planner example, start a traversal from Hamburg with a max depth of 1 so + only the direct neighbors of Hamburg are returned: + + @startDocuBlockInline generalGraphTraversalTree2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphTraversalTree2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_TRAVERSAL_TREE('routeplanner', 'germanCity/Hamburg',"+ + | " 'outbound', 'connnection', {maxDepth : 1}) RETURN e" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphTraversalTree2 + + !SUBSECTION GRAPH_DISTANCE_TO -@startDocuBlock JSF_aql_general_graph_distance + + +The GRAPH\_DISTANCE\_TO function returns all paths and their distance within a graph. + +`GRAPH_DISTANCE_TO (graphName, startVertexExample, endVertexExample, options)` + +This function is a wrapper of [GRAPH\_SHORTEST\_PATH](#graphshortestpath). +It does not return the actual path but only the distance between two vertices. + +@EXAMPLES + +A route planner example, distance from all French to all German cities: + + @startDocuBlockInline generalGraphDistanceTo1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphDistanceTo1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_DISTANCE_TO(" + | +"'routeplanner', {}, {}, { " + + | " weight : 'distance', endVertexCollectionRestriction : 'germanCity', " + + | "startVertexCollectionRestriction : 'frenchCity'}) RETURN [e.startVertex, e.vertex, " + + | "e.distance]" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphDistanceTo1 + +A route planner example, distance from Hamburg and Cologne to Lyon: + + @startDocuBlockInline generalGraphDistanceTo2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphDistanceTo2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("FOR e IN GRAPH_DISTANCE_TO(" + | + "'routeplanner', [{_id: 'germanCity/Cologne'},{_id: 'germanCity/Hamburg'}], " + + | "'frenchCity/Lyon', " + + | "{weight : 'distance'}) RETURN [e.startVertex, e.vertex, e.distance]" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphDistanceTo2 + + !SUBSECTION Graph measurements.
@@ -95,40 +686,558 @@ This section describes AQL functions to calculate various graph related measurem !SUBSECTION GRAPH_ABSOLUTE_ECCENTRICITY -@startDocuBlock JSF_aql_general_graph_absolute_eccentricity + + + +`GRAPH_ABSOLUTE_ECCENTRICITY (graphName, vertexExample, options)` + + The GRAPH\_ABSOLUTE\_ECCENTRICITY function returns the +[eccentricity](http://en.wikipedia.org/wiki/Distance_%28graph_theory%29) +of the vertices defined by the examples. + +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *vertexExample* : An example for the desired +vertices (see [example](#short-explanation-of-the-example-parameter)). +* *options* (optional) : An object containing the following options: + * *direction* : The direction of the edges as a string. Possible values are *outbound*, *inbound* and *any* (default). + * *edgeCollectionRestriction* : One or multiple edge collection names. Only edges from these collections will be considered for the path. + * *startVertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered as + start vertex of a path. + * *endVertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered as + end vertex of a path. + * *edgeExamples* : A filter example for the edges in the shortest paths (see [example](#short-explanation-of-the-example-parameter)). + * *algorithm* : The algorithm to calculate the shortest paths as a string. If vertex example is empty + [Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) is + used as default, otherwise the default is + [Dijkstra](http://en.wikipedia.org/wiki/Dijkstra's_algorithm) + * *weight* : The name of the attribute of the edges containing the length as a string. + * *defaultWeight* : Only used with the option *weight*. + +If an edge does not have the attribute named as defined in option *weight* this default is used as length. +If no default is supplied the default would be positive Infinity so the path and hence the eccentricity can not be calculated. + +@EXAMPLES + +A route planner example, the absolute eccentricity of all locations. + + @startDocuBlockInline generalGraphAbsEccentricity1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsEccentricity1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_ABSOLUTE_ECCENTRICITY('routeplanner', {})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsEccentricity1 + +A route planner example, the absolute eccentricity of all locations. +This considers the actual distances. + + @startDocuBlockInline generalGraphAbsEccentricity2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsEccentricity2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_ABSOLUTE_ECCENTRICITY(" + +"'routeplanner', {}, {weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsEccentricity2 + +A route planner example, the absolute eccentricity of all German cities regarding only +outbound paths. 
+ + @startDocuBlockInline generalGraphAbsEccentricity3 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsEccentricity3} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_ABSOLUTE_ECCENTRICITY(" + | + "'routeplanner', {}, {startVertexCollectionRestriction : 'germanCity', " + + "direction : 'outbound', weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsEccentricity3 + + !SUBSECTION GRAPH_ECCENTRICITY -@startDocuBlock JSF_aql_general_graph_eccentricity + + + +`GRAPH_ECCENTRICITY (graphName, options)` + +The GRAPH\_ECCENTRICITY function returns the normalized +[eccentricity](http://en.wikipedia.org/wiki/Distance_%28graph_theory%29) +of the graphs vertices + +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *options* (optional) : An object containing the following options: + * *direction* : The direction of the edges as a string. Possible values are *outbound*, *inbound* and *any* (default). + * *algorithm* : The algorithm to calculate the shortest paths as a string. Possible + values are [Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) + (default) and [Dijkstra](http://en.wikipedia.org/wiki/Dijkstra's_algorithm). + * *weight* : The name of the attribute of the edges containing the length as a string. + * *defaultWeight* : Only used with the option *weight*. +If an edge does not have the attribute named as defined in option *weight* this default is used as length. +If no default is supplied the default would be positive Infinity so the path and +hence the eccentricity can not be calculated. + +@EXAMPLES + +A route planner example, the eccentricity of all locations. + + @startDocuBlockInline generalGraphEccentricity1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphEccentricity1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_ECCENTRICITY('routeplanner')").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphEccentricity1 + +A route planner example, the eccentricity of all locations. +This considers the actual distances. + + @startDocuBlockInline generalGraphEccentricity2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphEccentricity2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_ECCENTRICITY('routeplanner', {weight : 'distance'})" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphEccentricity2 + + !SUBSECTION GRAPH_ABSOLUTE_CLOSENESS -@startDocuBlock JSF_aql_general_graph_absolute_closeness + + + +`GRAPH_ABSOLUTE_CLOSENESS (graphName, vertexExample, options)` + +The GRAPH\_ABSOLUTE\_CLOSENESS function returns the +[closeness](http://en.wikipedia.org/wiki/Centrality#Closeness-centrality) +of the vertices defined by the examples. + +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *vertexExample* : An example for the desired +vertices (see [example](#short-explanation-of-the-example-parameter)). 
+* *options* : An object containing the following options: + * *direction* : The direction of the edges. Possible values are *outbound*, *inbound* and *any* (default). + * *edgeCollectionRestriction* : One or multiple edge collection names. Only edges from these collections will be considered for the path. + * *startVertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered as + start vertex of a path. + * *endVertexCollectionRestriction* : One or multiple vertex collection names. Only vertices from these collections will be considered as + end vertex of a path. + * *edgeExamples* : A filter example for the edges in the shortest paths (see [example](#short-explanation-of-the-example-parameter)). + * *algorithm* : The algorithm to calculate the shortest paths. Possible values are + [Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) (default) + and [Dijkstra](http://en.wikipedia.org/wiki/Dijkstra's_algorithm). + * *weight* : The name of the attribute of the edges containing the length. + * *defaultWeight* : Only used with the option *weight*. +If an edge does not have the attribute named as defined in option *weight* this default is used as length. +If no default is supplied the default would be positive Infinity so the path and +hence the eccentricity can not be calculated. + +@EXAMPLES + +A route planner example, the absolute closeness of all locations. + + @startDocuBlockInline generalGraphAbsCloseness1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsCloseness1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_ABSOLUTE_CLOSENESS('routeplanner', {})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsCloseness1 + +A route planner example, the absolute closeness of all locations. +This considers the actual distances. + + @startDocuBlockInline generalGraphAbsCloseness2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsCloseness2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_ABSOLUTE_CLOSENESS(" + +"'routeplanner', {}, {weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsCloseness2 + +A route planner example, the absolute closeness of all German cities regarding only +outbound paths. + + @startDocuBlockInline generalGraphAbsCloseness3 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsCloseness3} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_ABSOLUTE_CLOSENESS(" + | + "'routeplanner', {}, {startVertexCollectionRestriction : 'germanCity', " + + "direction : 'outbound', weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsCloseness3 + + !SUBSECTION GRAPH_CLOSENESS -@startDocuBlock JSF_aql_general_graph_closeness + + + +`GRAPH_CLOSENESS (graphName, options)` + +The GRAPH\_CLOSENESS function returns the normalized +[closeness](http://en.wikipedia.org/wiki/Centrality#Closeness-centrality) +of graphs vertices. + +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +*Parameters* + +* *graphName* : The name of the graph as a string. 
+* *options* : An object containing the following options: + * *direction* : The direction of the edges. Possible values are *outbound*, *inbound* and *any* (default). + * *algorithm* : The algorithm to calculate the shortest paths. Possible values are + [Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) (default) + and [Dijkstra](http://en.wikipedia.org/wiki/Dijkstra's_algorithm). + * *weight* : The name of the attribute of the edges containing the length. + * *defaultWeight* : Only used with the option *weight*. +If an edge does not have the attribute named as defined in option *weight* this default is used as length. +If no default is supplied the default would be positive Infinity so the path and +hence the closeness cannot be calculated. + +@EXAMPLES + +A route planner example, the closeness of all locations. + + @startDocuBlockInline generalGraphCloseness1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphCloseness1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_CLOSENESS('routeplanner')").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphCloseness1 + +A route planner example, the closeness of all locations. +This considers the actual distances. + + @startDocuBlockInline generalGraphCloseness2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphCloseness2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_CLOSENESS(" + +"'routeplanner', {weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphCloseness2 + +A route planner example, the closeness of all cities regarding only +outbound paths. + + @startDocuBlockInline generalGraphCloseness3 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphCloseness3} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_CLOSENESS(" + | + "'routeplanner',{direction : 'outbound', weight : 'distance'})" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphCloseness3 + + !SUBSECTION GRAPH_ABSOLUTE_BETWEENNESS -@startDocuBlock JSF_aql_general_graph_absolute_betweenness + + + +`GRAPH_ABSOLUTE_BETWEENNESS (graphName, options)` + +The GRAPH\_ABSOLUTE\_BETWEENNESS function returns the +[betweenness](http://en.wikipedia.org/wiki/Betweenness_centrality) +of all vertices in the graph. + +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *options* : An object containing the following options: + * *direction* : The direction of the edges. Possible values are *outbound*, *inbound* and *any* (default). + * *weight* : The name of the attribute of the edges containing the length. + * *defaultWeight* : Only used with the option *weight*. +If an edge does not have the attribute named as defined in option *weight* this default is used as length. +If no default is supplied the default would be positive Infinity so the path and +hence the betweenness cannot be calculated. + +@EXAMPLES + +A route planner example, the absolute betweenness of all locations.
+ + @startDocuBlockInline generalGraphAbsBetweenness1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsBetweenness1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_ABSOLUTE_BETWEENNESS('routeplanner', {})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsBetweenness1 + +A route planner example, the absolute betweenness of all locations. +This considers the actual distances. + + @startDocuBlockInline generalGraphAbsBetweenness2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsBetweenness2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_ABSOLUTE_BETWEENNESS(" + +"'routeplanner', {weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsBetweenness2 + +A route planner example, the absolute closeness regarding only +outbound paths. + + @startDocuBlockInline generalGraphAbsBetweenness3 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphAbsBetweenness3} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_ABSOLUTE_BETWEENNESS(" + | + "'routeplanner', {direction : 'outbound', weight : 'distance'})" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphAbsBetweenness3 + + !SUBSECTION GRAPH_BETWEENNESS -@startDocuBlock JSF_aql_general_graph_betweenness + + + +`GRAPH_BETWEENNESS (graphName, options)` + +The GRAPH\_BETWEENNESS function returns the +[betweenness](http://en.wikipedia.org/wiki/Betweenness_centrality) +of graphs vertices. + +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *options* : An object containing the following options: + * *direction* : The direction of the edges. Possible values are *outbound*, *inbound* and *any* (default). + * *weight* : The name of the attribute of the edges containing the length. + * *defaultWeight* : Only used with the option *weight*. +If an edge does not have the attribute named as defined in option *weight* this default is used as length. +If no default is supplied the default would be positive Infinity so the path and +hence the eccentricity can not be calculated. + +@EXAMPLES + +A route planner example, the betweenness of all locations. + + @startDocuBlockInline generalGraphBetweenness1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphBetweenness1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_BETWEENNESS('routeplanner')").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphBetweenness1 + +A route planner example, the betweenness of all locations. +This considers the actual distances. 
+ + @startDocuBlockInline generalGraphBetweenness2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphBetweenness2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_BETWEENNESS('routeplanner', {weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphBetweenness2 + +A route planner example, the betweenness regarding only +outbound paths. + + @startDocuBlockInline generalGraphBetweenness3 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphBetweenness3} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_BETWEENNESS(" + + "'routeplanner', {direction : 'outbound', weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphBetweenness3 + + !SUBSECTION GRAPH_RADIUS -@startDocuBlock JSF_aql_general_graph_radius + + + +`GRAPH_RADIUS (graphName, options)` + +*The GRAPH\_RADIUS function returns the +[radius](http://en.wikipedia.org/wiki/Eccentricity_%28graph_theory%29) +of a graph.* + +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +* *graphName* : The name of the graph as a string. +* *options* : An object containing the following options: + * *direction* : The direction of the edges. Possible values are *outbound*, *inbound* and *any* (default). + * *algorithm* : The algorithm to calculate the shortest paths as a string. Possible + values are [Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) + (default) and [Dijkstra](http://en.wikipedia.org/wiki/Dijkstra's_algorithm). + * *weight* : The name of the attribute of the edges containing the length. + * *defaultWeight* : Only used with the option *weight*. +If an edge does not have the attribute named as defined in option *weight* this default is used as length. +If no default is supplied the default would be positive Infinity so the path and +hence the eccentricity can not be calculated. + +@EXAMPLES + +A route planner example, the radius of the graph. + + @startDocuBlockInline generalGraphRadius1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphRadius1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_RADIUS('routeplanner')").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphRadius1 + +A route planner example, the radius of the graph. +This considers the actual distances. + + @startDocuBlockInline generalGraphRadius2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphRadius2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_RADIUS('routeplanner', {weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphRadius2 + +A route planner example, the radius of the graph regarding only +outbound paths. 
+ + @startDocuBlockInline generalGraphRadius3 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphRadius3} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_RADIUS(" + | + "'routeplanner', {direction : 'outbound', weight : 'distance'})" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphRadius3 + + !SUBSECTION GRAPH_DIAMETER -@startDocuBlock JSF_aql_general_graph_diameter + + + +`GRAPH_DIAMETER (graphName, options)` + +The GRAPH\_DIAMETER function returns the +[diameter](http://en.wikipedia.org/wiki/Eccentricity_%28graph_theory%29) +of a graph. + +The complexity of the function is described +[here](#the-complexity-of-the-shortest-path-algorithms). + +*Parameters* + +* *graphName* : The name of the graph as a string. +* *options* : An object containing the following options: + * *direction* : The direction of the edges. Possible values are *outbound*, *inbound* and *any* (default). + * *algorithm* : The algorithm to calculate the shortest paths as a string. Possible + values are [Floyd-Warshall](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) + (default) and [Dijkstra](http://en.wikipedia.org/wiki/Dijkstra's_algorithm). + * *weight* : The name of the attribute of the edges containing the length. + * *defaultWeight* : Only used with the option *weight*. +If an edge does not have the attribute named as defined in option *weight* this default is used as length. +If no default is supplied the default would be positive Infinity so the path and +hence the eccentricity can not be calculated. + +@EXAMPLES + +A route planner example, the diameter of the graph. + + @startDocuBlockInline generalGraphDiameter1 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphDiameter1} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_DIAMETER('routeplanner')").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphDiameter1 + +A route planner example, the diameter of the graph. +This considers the actual distances. + + @startDocuBlockInline generalGraphDiameter2 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphDiameter2} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + db._query("RETURN GRAPH_DIAMETER('routeplanner', {weight : 'distance'})").toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphDiameter2 + +A route planner example, the diameter of the graph regarding only +outbound paths. + + @startDocuBlockInline generalGraphDiameter3 + @EXAMPLE_ARANGOSH_OUTPUT{generalGraphDiameter3} + var examples = require("@arangodb/graph-examples/example-graph.js"); + var g = examples.loadGraph("routeplanner"); + | db._query("RETURN GRAPH_DIAMETER(" + | + "'routeplanner', {direction : 'outbound', weight : 'distance'})" + ).toArray(); + ~ examples.dropGraph("routeplanner"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock generalGraphDiameter3 + + diff --git a/Documentation/Books/Users/AqlExtending/Functions.mdpp b/Documentation/Books/Users/AqlExtending/Functions.mdpp index d2f1277ddb..a1a47353d0 100644 --- a/Documentation/Books/Users/AqlExtending/Functions.mdpp +++ b/Documentation/Books/Users/AqlExtending/Functions.mdpp @@ -12,16 +12,117 @@ function code must be specified. 
!SUBSECTION Register -@startDocuBlock aqlFunctionsRegister + + +@brief register an AQL user function +`aqlfunctions.register(name, code, isDeterministic)` + +Registers an AQL user function, identified by a fully qualified function +name. The function code in *code* must be specified as a JavaScript +function or a string representation of a JavaScript function. +If the function code in *code* is passed as a string, it is required that +the string evaluates to a JavaScript function definition. + +If a function identified by *name* already exists, the previous function +definition will be updated. Please also make sure that the function code +does not violate the [Conventions](../AqlExtending/Conventions.md) for AQL +functions. + +The *isDeterministic* attribute can be used to specify whether the +function results are fully deterministic (i.e. depend solely on the input +and are the same for repeated calls with the same input values). It is not +used at the moment but may be used for optimizations later. + +The registered function is stored in the selected database's system +collection *_aqlfunctions*. + +The function returns *true* when it updates/replaces an existing AQL +function of the same name, and *false* otherwise. It will throw an exception +when it detects syntactically invalid function code. + +@EXAMPLES + +```js + require("@arangodb/aql/functions").register("myfunctions::temperature::celsiustofahrenheit", + function (celsius) { + return celsius * 1.8 + 32; + }); +``` + !SUBSECTION Unregister -@startDocuBlock aqlFunctionsUnregister + + +@brief delete an existing AQL user function +`aqlfunctions.unregister(name)` + +Unregisters an existing AQL user function, identified by the fully qualified +function name. + +Trying to unregister a function that does not exist will result in an +exception. + +@EXAMPLES + +```js + require("@arangodb/aql/functions").unregister("myfunctions::temperature::celsiustofahrenheit"); +``` + !SUBSECTION Unregister Group -@startDocuBlock aqlFunctionsUnregisterGroup + + +@brief delete a group of AQL user functions +`aqlfunctions.unregisterGroup(prefix)` + +Unregisters a group of AQL user functions, identified by a common function +group prefix. + +This will return the number of functions unregistered. + +@EXAMPLES + +```js + require("@arangodb/aql/functions").unregisterGroup("myfunctions::temperature"); + + require("@arangodb/aql/functions").unregisterGroup("myfunctions"); +``` + !SUBSECTION To Array -@startDocuBlock aqlFunctionsToArray + + +@brief list all AQL user functions +`aqlfunctions.toArray()` + +Returns all previously registered AQL user functions, with their fully +qualified names and function code.
+ +The result may optionally be restricted to a specified group of functions +by specifying a group prefix: + +`aqlfunctions.toArray(prefix)` + +@EXAMPLES + +To list all available user functions: + +```js + require("@arangodb/aql/functions").toArray(); +``` + +To list all available user functions in the *myfunctions* namespace: + +```js + require("@arangodb/aql/functions").toArray("myfunctions"); +``` + +To list all available user functions in the *myfunctions::temperature* namespace: + +```js + require("@arangodb/aql/functions").toArray("myfunctions::temperature"); +``` + diff --git a/Documentation/Books/Users/Collections/CollectionMethods.mdpp b/Documentation/Books/Users/Collections/CollectionMethods.mdpp index 0acb6c7d05..21d2c53583 100644 --- a/Documentation/Books/Users/Collections/CollectionMethods.mdpp +++ b/Documentation/Books/Users/Collections/CollectionMethods.mdpp @@ -2,11 +2,53 @@ !SUBSECTION Drop -@startDocuBlock collectionDrop + + +@brief drops a collection +`collection.drop()` + +Drops a *collection* and all its indexes. + +@EXAMPLES + + @startDocuBlockInline collectionDrop + @EXAMPLE_ARANGOSH_OUTPUT{collectionDrop} + ~ db._create("example"); + col = db.example; + col.drop(); + col; + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDrop + + !SUBSECTION Truncate -@startDocuBlock collectionTruncate + + +@brief truncates a collection +`collection.truncate()` + +Truncates a *collection*, removing all documents but keeping all its +indexes. + +@EXAMPLES + +Truncates a collection: + + @startDocuBlockInline collectionTruncate + @EXAMPLE_ARANGOSH_OUTPUT{collectionTruncate} + ~ db._create("example"); + col = db.example; + col.save({ "Hello" : "World" }); + col.count(); + col.truncate(); + col.count(); + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionTruncate + !SUBSECTION Properties @@ -14,11 +56,131 @@ !SUBSECTION Figures -@startDocuBlock collectionFigures + + +@brief returns the figures of a collection +`collection.figures()` + +Returns an object containing statistics about the collection. +**Note** : Retrieving the figures will always load the collection into +memory. + +* *alive.count*: The number of currently active documents in all datafiles and + journals of the collection. Documents that are contained in the + write-ahead log only are not reported in this figure. +* *alive.size*: The total size in bytes used by all active documents of the + collection. Documents that are contained in the write-ahead log only are + not reported in this figure. +- *dead.count*: The number of dead documents. This includes document + versions that have been deleted or replaced by a newer version. Documents + deleted or replaced that are contained in the write-ahead log only are not + reported in this figure. +* *dead.size*: The total size in bytes used by all dead documents. +* *dead.deletion*: The total number of deletion markers. Deletion markers + only contained in the write-ahead log are not reporting in this figure. +* *datafiles.count*: The number of datafiles. +* *datafiles.fileSize*: The total filesize of datafiles (in bytes). +* *journals.count*: The number of journal files. +* *journals.fileSize*: The total filesize of the journal files + (in bytes). +* *compactors.count*: The number of compactor files. +* *compactors.fileSize*: The total filesize of the compactor files + (in bytes). +* *shapefiles.count*: The number of shape files. This value is + deprecated and kept for compatibility reasons only. 
The value will always + be 0 since ArangoDB 2.0 and higher. +* *shapefiles.fileSize*: The total filesize of the shape files. This + value is deprecated and kept for compatibility reasons only. The value will + always be 0 in ArangoDB 2.0 and higher. +* *shapes.count*: The total number of shapes used in the collection. + This includes shapes that are not in use anymore. Shapes that are contained + in the write-ahead log only are not reported in this figure. +* *shapes.size*: The total size of all shapes (in bytes). This includes + shapes that are not in use anymore. Shapes that are contained in the + write-ahead log only are not reported in this figure. +* *attributes.count*: The total number of attributes used in the + collection. Note: the value includes data of attributes that are not in use + anymore. Attributes that are contained in the write-ahead log only are + not reported in this figure. +* *attributes.size*: The total size of the attribute data (in bytes). + Note: the value includes data of attributes that are not in use anymore. + Attributes that are contained in the write-ahead log only are not + reported in this figure. +* *indexes.count*: The total number of indexes defined for the + collection, including the pre-defined indexes (e.g. primary index). +* *indexes.size*: The total memory allocated for indexes in bytes. +* *maxTick*: The tick of the last marker that was stored in a journal + of the collection. This might be 0 if the collection does not yet have + a journal. +* *uncollectedLogfileEntries*: The number of markers in the write-ahead + log for this collection that have not been transferred to journals or + datafiles. +* *documentReferences*: The number of references to documents in datafiles + that JavaScript code currently holds. This information can be used for + debugging compaction and unload issues. +* *waitingFor*: An optional string value that contains information about + which object type is at the head of the collection's cleanup queue. This + information can be used for debugging compaction and unload issues. +* *compactionStatus.time*: The point in time the compaction for the collection + was last executed. This information can be used for debugging compaction + issues. +* *compactionStatus.message*: The action that was performed when the compaction + was last run for the collection. This information can be used for debugging + compaction issues. + +**Note**: collection data that are stored in the write-ahead log only are +not reported in the results. When the write-ahead log is collected, documents +might be added to journals and datafiles of the collection, which may modify +the figures of the collection. Also note that `waitingFor` and `compactionStatus` +may be empty when called on a coordinator in a cluster. + +Additionally, the filesizes of collection and index parameter JSON files are +not reported. These files should normally have a size of a few bytes +each. Please also note that the *fileSize* values are reported in bytes +and reflect the logical file sizes. Some filesystems may use optimisations +(e.g. sparse files) so that the actual physical file size is somewhat +different. Directories and sub-directories may also require space in the +file system, but this space is not reported in the *fileSize* results. + +That means that the figures reported do not reflect the actual disk +usage of the collection with 100% accuracy. The actual disk usage of +a collection is normally slightly higher than the sum of the reported +*fileSize* values. 
Still the sum of the *fileSize* values can still be +used as a lower bound approximation of the disk usage. + +@EXAMPLES + + @startDocuBlockInline collectionFigures + @EXAMPLE_ARANGOSH_OUTPUT{collectionFigures} + ~ require("internal").wal.flush(true, true); + db.demo.figures() + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionFigures + + !SUBSECTION Load -@startDocuBlock collectionLoad + + +@brief loads a collection +`collection.load()` + +Loads a collection into memory. + +@EXAMPLES + + @startDocuBlockInline collectionLoad + @EXAMPLE_ARANGOSH_OUTPUT{collectionLoad} + ~ db._create("example"); + col = db.example; + col.load(); + col; + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionLoad + + !SUBSECTION Reserve `collection.reserve( number)` @@ -31,20 +193,114 @@ Not all indexes implement the reserve function at the moment. The indexes that d !SUBSECTION Revision -@startDocuBlock collectionRevision + + +@brief returns the revision id of a collection +`collection.revision()` + +Returns the revision id of the collection + +The revision id is updated when the document data is modified, either by +inserting, deleting, updating or replacing documents in it. + +The revision id of a collection can be used by clients to check whether +data in a collection has changed or if it is still unmodified since a +previous fetch of the revision id. + +The revision id returned is a string value. Clients should treat this value +as an opaque string, and only use it for equality/non-equality comparisons. + !SUBSECTION Checksum -@startDocuBlock collectionChecksum + + +@brief calculates a checksum for the data in a collection +`collection.checksum(withRevisions, withData)` + +The *checksum* operation calculates a CRC32 checksum of the keys +contained in collection *collection*. + +If the optional argument *withRevisions* is set to *true*, then the +revision ids of the documents are also included in the checksumming. + +If the optional argument *withData* is set to *true*, then the +actual document data is also checksummed. Including the document data in +checksumming will make the calculation slower, but is more accurate. + +**Note**: this method is not available in a cluster. + + !SUBSECTION Unload -@startDocuBlock collectionUnload + + +@brief unloads a collection +`collection.unload()` + +Starts unloading a collection from memory. Note that unloading is deferred +until all query have finished. + +@EXAMPLES + + @startDocuBlockInline CollectionUnload + @EXAMPLE_ARANGOSH_OUTPUT{CollectionUnload} + ~ db._create("example"); + col = db.example; + col.unload(); + col; + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock CollectionUnload + + !SUBSECTION Rename -@startDocuBlock collectionRename + + +@brief renames a collection +`collection.rename(new-name)` + +Renames a collection using the *new-name*. The *new-name* must not +already be used for a different collection. *new-name* must also be a +valid collection name. For more information on valid collection names please +refer to the [naming conventions](../NamingConventions/README.md). + +If renaming fails for any reason, an error is thrown. +If renaming the collection succeeds, then the collection is also renamed in +all graph definitions inside the `_graphs` collection in the current +database. + +**Note**: this method is not available in a cluster. 
+ +@EXAMPLES + + @startDocuBlockInline collectionRename + @EXAMPLE_ARANGOSH_OUTPUT{collectionRename} + ~ db._create("example"); + c = db.example; + c.rename("better-example"); + c; + ~ db._drop("better-example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionRename + + !SUBSECTION Rotate -@startDocuBlock collectionRotate + + +@brief rotates the current journal of a collection +`collection.rotate()` + +Rotates the current journal of a collection. This operation makes the +current journal of the collection a read-only datafile so it may become a +candidate for garbage collection. If there is currently no journal available +for the collection, the operation will fail with an error. + +**Note**: this method is not available in a cluster. + + diff --git a/Documentation/Books/Users/Collections/DatabaseMethods.mdpp b/Documentation/Books/Users/Collections/DatabaseMethods.mdpp index 035e3ee38c..cf186df75b 100644 --- a/Documentation/Books/Users/Collections/DatabaseMethods.mdpp +++ b/Documentation/Books/Users/Collections/DatabaseMethods.mdpp @@ -2,30 +2,354 @@ !SUBSECTION Collection -@startDocuBlock collectionDatabaseName + + +@brief returns a single collection or null +`db._collection(collection-name)` + +Returns the collection with the given name or null if no such collection +exists. + +`db._collection(collection-identifier)` + +Returns the collection with the given identifier or null if no such +collection exists. Accessing collections by identifier is discouraged for +end users. End users should access collections using the collection name. + +@EXAMPLES + +Get a collection by name: + + @startDocuBlockInline collectionDatabaseName + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseName} + db._collection("demo"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseName + +Get a collection by id: + +``` +arangosh> db._collection(123456); +[ArangoCollection 123456, "demo" (type document, status loaded)] +``` + +Unknown collection: + + @startDocuBlockInline collectionDatabaseNameUnknown + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseNameUnknown} + db._collection("unknown"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseNameUnknown + !SUBSECTION Create -@startDocuBlock collectionDatabaseCreate + + +@brief creates a new document or edge collection +`db._create(collection-name)` + +Creates a new document collection named *collection-name*. +If the collection name already exists or if the name format is invalid, an +error is thrown. For more information on valid collection names please refer +to the [naming conventions](../NamingConventions/README.md). + +`db._create(collection-name, properties)` + +*properties* must be an object with the following attributes: + +* *waitForSync* (optional, default *false*): If *true* creating + a document will only return after the data was synced to disk. + +* *journalSize* (optional, default is a + configuration parameter: The maximal + size of a journal or datafile. Note that this also limits the maximal + size of a single object. Must be at least 1MB. + +* *isSystem* (optional, default is *false*): If *true*, create a + system collection. In this case *collection-name* should start with + an underscore. End users should normally create non-system collections + only. API implementors may be required to create system collections in + very special occasions, but normally a regular collection will do. + +* *isVolatile* (optional, default is *false*): If *true then the + collection data is kept in-memory only and not made persistent. 
Unloading + the collection will cause the collection data to be discarded. Stopping + or re-starting the server will also cause full loss of data in the + collection. Setting this option will make the resulting collection be + slightly faster than regular collections because ArangoDB does not + enforce any synchronization to disk and does not calculate any CRC + checksums for datafiles (as there are no datafiles). + +* *keyOptions* (optional): additional options for key generation. If + specified, then *keyOptions* should be a JSON array containing the + following attributes (**note**: some of them are optional): + * *type*: specifies the type of the key generator. The currently + available generators are *traditional* and *autoincrement*. + * *allowUserKeys*: if set to *true*, then it is allowed to supply + own key values in the *_key* attribute of a document. If set to + *false*, then the key generator will solely be responsible for + generating keys and supplying own key values in the *_key* attribute + of documents is considered an error. + * *increment*: increment value for *autoincrement* key generator. + Not used for other key generator types. + * *offset*: initial offset value for *autoincrement* key generator. + Not used for other key generator types. + +* *numberOfShards* (optional, default is *1*): in a cluster, this value + determines the number of shards to create for the collection. In a single + server setup, this option is meaningless. + +* *shardKeys* (optional, default is *[ "_key" ]*): in a cluster, this + attribute determines which document attributes are used to determine the + target shard for documents. Documents are sent to shards based on the + values they have in their shard key attributes. The values of all shard + key attributes in a document are hashed, and the hash value is used to + determine the target shard. Note that values of shard key attributes cannot + be changed once set. + This option is meaningless in a single server setup. + + When choosing the shard keys, one must be aware of the following + rules and limitations: In a sharded collection with more than + one shard it is not possible to set up a unique constraint on + an attribute that is not the one and only shard key given in + *shardKeys*. This is because enforcing a unique constraint + would otherwise make a global index necessary or need extensive + communication for every single write operation. Furthermore, if + *_key* is not the one and only shard key, then it is not possible + to set the *_key* attribute when inserting a document, provided + the collection has more than one shard. Again, this is because + the database has to enforce the unique constraint on the *_key* + attribute and this can only be done efficiently if this is the + only shard key by delegating to the individual shards. + +`db._create(collection-name, properties, type)` + +Specifies the optional *type* of the collection, it can either be *document* +or *edge*. On default it is document. Instead of giving a type you can also use +*db._createEdgeCollection* or *db._createDocumentCollection*. 
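+
+For instance, the following two calls are equivalent ways of ending up with an
+edge collection (a minimal sketch; the collection names are purely
+illustrative):
+
+```js
+// pass the type explicitly as the third argument ...
+db._create("relations", {}, "edge");
+
+// ... or use the specialized helper for another collection
+db._createEdgeCollection("friendships");
+```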
+ +@EXAMPLES + +With defaults: + + @startDocuBlockInline collectionDatabaseCreate + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCreate} + c = db._create("users"); + c.properties(); + ~ db._drop("users"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseCreate + +With properties: + + @startDocuBlockInline collectionDatabaseCreateProperties + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCreateProperties} + |c = db._create("users", { waitForSync : true, + journalSize : 1024 * 1204}); + c.properties(); + ~ db._drop("users"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseCreateProperties + +With a key generator: + + @startDocuBlockInline collectionDatabaseCreateKey + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCreateKey} + | db._create("users", + { keyOptions: { type: "autoincrement", offset: 10, increment: 5 } }); + db.users.save({ name: "user 1" }); + db.users.save({ name: "user 2" }); + db.users.save({ name: "user 3" }); + ~ db._drop("users"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseCreateKey + +With a special key option: + + @startDocuBlockInline collectionDatabaseCreateSpecialKey + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCreateSpecialKey} + db._create("users", { keyOptions: { allowUserKeys: false } }); + db.users.save({ name: "user 1" }); + | db.users.save({ name: "user 2", _key: "myuser" }); + ~ // xpError(ERROR_ARANGO_DOCUMENT_KEY_UNEXPECTED) + db.users.save({ name: "user 3" }); + ~ db._drop("users"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseCreateSpecialKey + -@startDocuBlock collectionCreateEdgeCollection + + +@brief creates a new edge collection +`db._createEdgeCollection(collection-name)` + +Creates a new edge collection named *collection-name*. If the +collection name already exists an error is thrown. The default value +for *waitForSync* is *false*. + +`db._createEdgeCollection(collection-name, properties)` + +*properties* must be an object with the following attributes: + +* *waitForSync* (optional, default *false*): If *true* creating + a document will only return after the data was synced to disk. +* *journalSize* (optional, default is + "configuration parameter"): The maximal size of + a journal or datafile. Note that this also limits the maximal + size of a single object and must be at least 1MB. + + -@startDocuBlock collectionCreateDocumentCollection + + +@brief creates a new document collection +`db._createDocumentCollection(collection-name)` + +Creates a new document collection named *collection-name*. If the +document name already exists and error is thrown. + !SUBSECTION All Collections -@startDocuBlock collectionDatabaseNameAll + + +@brief returns all collections +`db._collections()` + +Returns all collections of the given database. + +@EXAMPLES + + @startDocuBlockInline collectionsDatabaseName + @EXAMPLE_ARANGOSH_OUTPUT{collectionsDatabaseName} + ~ db._create("example"); + db._collections(); + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionsDatabaseName + + !SUBSECTION Collection Name -@startDocuBlock collectionDatabaseCollectionName + + +@brief selects a collection from the vocbase +`db.collection-name` + +Returns the collection with the given *collection-name*. If no such +collection exists, create a collection named *collection-name* with the +default properties. 
+ +@EXAMPLES + + @startDocuBlockInline collectionDatabaseCollectionName + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseCollectionName} + ~ db._create("example"); + db.example; + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseCollectionName + + !SUBSECTION Drop -@startDocuBlock collectionDatabaseDrop + + +@brief drops a collection +`db._drop(collection)` + +Drops a *collection* and all its indexes. + +`db._drop(collection-identifier)` + +Drops a collection identified by *collection-identifier* and all its +indexes. No error is thrown if there is no such collection. + +`db._drop(collection-name)` + +Drops a collection named *collection-name* and all its indexes. No error +is thrown if there is no such collection. + +*Examples* + +Drops a collection: + + @startDocuBlockInline collectionDatabaseDrop + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseDrop} + ~ db._create("example"); + col = db.example; + db._drop(col); + col; + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseDrop + +Drops a collection identified by name: + + @startDocuBlockInline collectionDatabaseDropName + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseDropName} + ~ db._create("example"); + col = db.example; + db._drop("example"); + col; + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseDropName + + !SUBSECTION Truncate -@startDocuBlock collectionDatabaseTruncate \ No newline at end of file + + +@brief truncates a collection +`db._truncate(collection)` + +Truncates a *collection*, removing all documents but keeping all its +indexes. + +`db._truncate(collection-identifier)` + +Truncates a collection identified by *collection-identified*. No error is +thrown if there is no such collection. + +`db._truncate(collection-name)` + +Truncates a collection named *collection-name*. No error is thrown if +there is no such collection. + +@EXAMPLES + +Truncates a collection: + + @startDocuBlockInline collectionDatabaseTruncate + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseTruncate} + ~ db._create("example"); + col = db.example; + col.save({ "Hello" : "World" }); + col.count(); + db._truncate(col); + col.count(); + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseTruncate + +Truncates a collection identified by name: + + @startDocuBlockInline collectionDatabaseTruncateName + @EXAMPLE_ARANGOSH_OUTPUT{collectionDatabaseTruncateName} + ~ db._create("example"); + col = db.example; + col.save({ "Hello" : "World" }); + col.count(); + db._truncate("example"); + col.count(); + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock collectionDatabaseTruncateName + + diff --git a/Documentation/Books/Users/ConfigureArango/Arangod.mdpp b/Documentation/Books/Users/ConfigureArango/Arangod.mdpp index 7bee01dfd7..b52ac1a66c 100644 --- a/Documentation/Books/Users/ConfigureArango/Arangod.mdpp +++ b/Documentation/Books/Users/ConfigureArango/Arangod.mdpp @@ -5,7 +5,24 @@ !SUBSECTION Reuse address -@startDocuBlock serverReuseAddress + + +@brief try to reuse address +`--server.reuse-address` + +If this boolean option is set to *true* then the socket option +SO_REUSEADDR is set on all server endpoints, which is the default. +If this option is set to *false* it is possible that it takes up +to a minute after a server has terminated until it is possible for +a new server to use the same endpoint again. This is why this is +activated by default. 
+ +Please note however that under some operating systems this can be +a security risk because it might be possible for another process +to bind to the same address and port, possibly hijacking network +traffic. Under Windows, ArangoDB additionally sets the flag +SO_EXCLUSIVEADDRUSE as a measure to alleviate this problem. + !SUBSECTION Disable authentication @@ -13,7 +30,23 @@ !SUBSECTION Disable authentication-unix-sockets -@startDocuBlock serverAuthenticationDisable + + +@brief disable authentication for requests via UNIX domain sockets +`--server.disable-authentication-unix-sockets value` + +Setting *value* to true will turn off authentication on the server side +for requests coming in via UNIX domain sockets. With this flag enabled, +clients located on the same host as the ArangoDB server can use UNIX domain +sockets to connect to the server without authentication. +Requests coming in by other means (e.g. TCP/IP) are not affected by this option. + +The default value is *false*. + +**Note**: this option is only available on platforms that support UNIX +domain +sockets. + !SUBSECTION Authenticate system only @@ -21,7 +54,24 @@ !SUBSECTION Disable replication-applier -@startDocuBlock serverDisableReplicationApplier + + +@brief disable the replication applier on server startup +`--server.disable-replication-applier flag` + +If *true* the server will start with the replication applier turned off, +even if the replication applier is configured with the *autoStart* option. +Using the command-line option will not change the value of the *autoStart* +option in the applier configuration, but will suppress auto-starting the +replication applier just once. + +If the option is not used, ArangoDB will read the applier configuration +from +the file *REPLICATION-APPLIER-CONFIG* on startup, and use the value of the +*autoStart* attribute from this file. + +The default is *false*. + !SUBSECTION Keep-alive timeout @@ -29,47 +79,276 @@ !SUBSECTION Default API compatibility -@startDocuBlock serverDefaultApi + + +@brief default API compatibility +`--server.default-api-compatibility` + +This option can be used to determine the API compatibility of the ArangoDB +server. It expects an ArangoDB version number as an integer, calculated as +follows: + +*10000 \* major + 100 \* minor (example: *10400* for ArangoDB 1.4)* + +The value of this option will have an influence on some API return values +when the HTTP client used does not send any compatibility information. + +In most cases it will be sufficient to not set this option explicitly but to +keep the default value. However, in case an "old" ArangoDB client is used +that does not send any compatibility information and that cannot handle the +responses of the current version of ArangoDB, it might be reasonable to set +the option to an old version number to improve compatibility with older +clients. + !SUBSECTION Hide Product header -@startDocuBlock serverHideProductHeader + + +@brief hide the "Server: ArangoDB" header in HTTP responses +`--server.hide-product-header` + +If *true*, the server will exclude the HTTP header "Server: ArangoDB" in +HTTP responses. If set to *false*, the server will send the header in +responses. + +The default is *false*. + !SUBSECTION Allow method override -@startDocuBlock serverAllowMethod + + +@brief allow HTTP method override via custom headers? 
+`--server.allow-method-override` + +When this option is set to *true*, the HTTP request method will optionally +be fetched from one of the following HTTP request headers if present in +the request: + +- *x-http-method* +- *x-http-method-override* +- *x-method-override* + +If the option is set to *true* and any of these headers is set, the +request method will be overridden by the value of the header. For example, +this allows issuing an HTTP DELETE request which to the outside world will +look like an HTTP GET request. This allows bypassing proxies and tools that +will only let certain request types pass. + +Setting this option to *true* may impose a security risk so it should only +be used in controlled environments. + +The default value for this option is *false*. + !SUBSECTION Server threads -@startDocuBlock serverThreads + + +@brief number of dispatcher threads +`--server.threads number` + +Specifies the *number* of threads that are spawned to handle HTTP REST +requests. + !SUBSECTION Keyfile -@startDocuBlock serverKeyfile + + +@brief keyfile containing server certificate +`--server.keyfile filename` + +If SSL encryption is used, this option must be used to specify the filename +of the server private key. The file must be PEM formatted and contain both +the certificate and the server's private key. + +The file specified by *filename* should have the following structure: + +``` +# create private key in file "server.key" +openssl genrsa -des3 -out server.key 1024 + +# create certificate signing request (csr) in file "server.csr" +openssl req -new -key server.key -out server.csr + +# copy away original private key to "server.key.org" +cp server.key server.key.org + +# remove passphrase from the private key +openssl rsa -in server.key.org -out server.key + +# sign the csr with the key, creates certificate PEM file "server.crt" +openssl x509 -req -days 365 -in server.csr -signkey server.key -out +server.crt + +# combine certificate and key into single PEM file "server.pem" +cat server.crt server.key > server.pem +``` + +You may use certificates issued by a Certificate Authority or self-signed +certificates. Self-signed certificates can be created by a tool of your +choice. When using OpenSSL for creating the self-signed certificate, the +following commands should create a valid keyfile: + +``` +-----BEGIN CERTIFICATE----- + +(base64 encoded certificate) + +-----END CERTIFICATE----- +-----BEGIN RSA PRIVATE KEY----- + +(base64 encoded private key) + +-----END RSA PRIVATE KEY----- +``` + +For further information please check the manuals of the tools you use to +create the certificate. + +**Note**: the \-\-server.keyfile option must be set if the server is +started with +at least one SSL endpoint. + !SUBSECTION Cafile -@startDocuBlock serverCafile + + +@brief CA file +`--server.cafile filename` + +This option can be used to specify a file with CA certificates that are +sent +to the client whenever the server requests a client certificate. If the +file is specified, The server will only accept client requests with +certificates issued by these CAs. Do not specify this option if you want +clients to be able to connect without specific certificates. + +The certificates in *filename* must be PEM formatted. + +**Note**: this option is only relevant if at least one SSL endpoint is +used. + !SUBSECTION SSL protocol -@startDocuBlock serverSSLProtocol + + +@brief SSL protocol type to use +`--server.ssl-protocolvalue` + +Use this option to specify the default encryption protocol to be used. 
+The following variants are available: +- 1: SSLv2 +- 2: SSLv23 +- 3: SSLv3 +- 4: TLSv1 + +The default *value* is 4 (i.e. TLSv1). + +**Note**: this option is only relevant if at least one SSL endpoint is used. + !SUBSECTION SSL cache -@startDocuBlock serverSSLCache + + +@brief whether or not to use SSL session caching +`--server.ssl-cache value` + +Set to true if SSL session caching should be used. + +*value* has a default value of *false* (i.e. no caching). + +**Note**: this option is only relevant if at least one SSL endpoint is used, and +only if the client supports sending the session id. + !SUBSECTION SSL options -@startDocuBlock serverSSLOptions + + +@brief ssl options to use +`--server.ssl-options value` + +This option can be used to set various SSL-related options. Individual +option values must be combined using bitwise OR. + +Which options are available on your platform is determined by the OpenSSL +version you use. The list of options available on your platform might be +retrieved by the following shell command: + +``` + > grep "#define SSL_OP_.*" /usr/include/openssl/ssl.h + + #define SSL_OP_MICROSOFT_SESS_ID_BUG 0x00000001L + #define SSL_OP_NETSCAPE_CHALLENGE_BUG 0x00000002L + #define SSL_OP_LEGACY_SERVER_CONNECT 0x00000004L + #define SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG 0x00000008L + #define SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG 0x00000010L + #define SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER 0x00000020L + ... +``` + +A description of the options can be found online in the +[OpenSSL +documentation](http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html) + +**Note**: this option is only relevant if at least one SSL endpoint is +used. + !SUBSECTION SSL cipher -@startDocuBlock serverSSLCipher + + +@brief ssl cipher list to use +`--server.ssl-cipher-list cipher-list` + +This option can be used to restrict the server to certain SSL ciphers +only, +and to define the relative usage preference of SSL ciphers. + +The format of *cipher-list* is documented in the OpenSSL documentation. + +To check which ciphers are available on your platform, you may use the +following shell command: + +``` +> openssl ciphers -v + +ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 +ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 +DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 +DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1 +DHE-RSA-CAMELLIA256-SHA SSLv3 Kx=DH Au=RSA Enc=Camellia(256) +Mac=SHA1 +... +``` + +The default value for *cipher-list* is "ALL". + +**Note**: this option is only relevant if at least one SSL endpoint is used. + !SUBSECTION Backlog size -@startDocuBlock serverBacklog + + +@brief listen backlog size +`--server.backlog-size` + +Allows to specify the size of the backlog for the *listen* system call +The default value is 10. The maximum value is platform-dependent. +Specifying +a higher value than defined in the system header's SOMAXCONN may result in +a warning on server start. The actual value used by *listen* may also be +silently truncated on some platforms (this happens inside the *listen* +system call). + !SUBSECTION Disable server statistics @@ -84,7 +363,16 @@ the option *--disable-figures*. !SUBSECTION Session timeout -@startDocuBlock SessionTimeout + + +@brief time to live for server sessions +`--server.session-timeout value` + +The timeout for web interface sessions, using for authenticating requests +to the web interface (/_admin/aardvark) and related areas. + +Sessions are only used when authentication is turned on. 
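+
+As an illustration, the session timeout could be adjusted at startup as
+follows (the endpoint, database directory and timeout value are placeholders;
+the value is interpreted as a number of seconds in current releases, which
+should be verified for the version in use):
+
+```
+> ./arangod --server.session-timeout 3600 --server.endpoint tcp://127.0.0.1:8529 --database.directory /tmp/vocbase
+```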
+ !SUBSECTION Foxx queues @@ -96,7 +384,32 @@ the option *--disable-figures*. !SUBSECTION Directory -@startDocuBlock DatabaseDirectory + + +@brief path to the database +`--database.directory directory` + +The directory containing the collections and datafiles. Defaults +to */var/lib/arango*. When specifying the database directory, please +make sure the directory is actually writable by the arangod process. + +You should further not use a database directory which is provided by a +network filesystem such as NFS. The reason is that networked filesystems +might cause inconsistencies when there are multiple parallel readers or +writers or they lack features required by arangod (e.g. flock()). + +`directory` + +When using the command line version, you can simply supply the database +directory as argument. + +@EXAMPLES + +``` +> ./arangod --server.endpoint tcp://127.0.0.1:8529 --database.directory +/tmp/vocbase +``` + !SUBSECTION Journal size @@ -112,36 +425,164 @@ the option *--disable-figures*. !SUBSECTION Disable AQL query tracking -@startDocuBlock databaseDisableQueryTracking + + +@brief disable the query tracking feature +`--database.disable-query-tracking flag` + +If *true*, the server's query tracking feature will be disabled by +default. + +The default is *false*. + !SUBSECTION Throw collection not loaded error -@startDocuBlock databaseThrowCollectionNotLoadedError + + +@brief throw collection not loaded error +`--database.throw-collection-not-loaded-error flag` + +Accessing a not-yet loaded collection will automatically load a collection +on first access. This flag controls what happens in case an operation +would need to wait for another thread to finalize loading a collection. If +set to *true*, then the first operation that accesses an unloaded collection +will load it. Further threads that try to access the same collection while +it is still loading will get an error (1238, *collection not loaded*). When +the initial operation has completed loading the collection, all operations +on the collection can be carried out normally, and error 1238 will not be +thrown. + +If set to *false*, the first thread that accesses a not-yet loaded collection +will still load it. Other threads that try to access the collection while +loading will not fail with error 1238 but instead block until the collection +is fully loaded. This configuration might lead to all server threads being +blocked because they are all waiting for the same collection to complete +loading. Setting the option to *true* will prevent this from happening, but +requires clients to catch error 1238 and react on it (maybe by scheduling +a retry for later). + +The default value is *false*. + !SUBSECTION AQL Query caching mode -@startDocuBlock queryCacheMode + + +@brief whether or not to enable the AQL query cache +`--database.query-cache-mode` + +Toggles the AQL query cache behavior. Possible values are: + +* *off*: do not use query cache +* *on*: always use query cache, except for queries that have their *cache* + attribute set to *false* +* *demand*: use query cache only for queries that have their *cache* + attribute set to *true* + set + !SUBSECTION AQL Query cache size -@startDocuBlock queryCacheMaxResults + + +@brief maximum number of elements in the query cache per database +`--database.query-cache-max-results` + +Maximum number of query results that can be stored per database-specific +query cache. 
If a query is eligible for caching and the number of items in
+the database's query cache is equal to this threshold value, another cached
+query result will be removed from the cache.
+
+This option only has an effect if the query cache mode is set to either
+*on* or *demand*.
+
 !SUBSECTION Index threads
-@startDocuBlock indexThreads
+
+
+@brief number of background threads for parallel index creation
+`--database.index-threads`
+
+Specifies the *number* of background threads for index creation. When a
+collection contains extra indexes other than the primary index, these other
+indexes can be built by multiple threads in parallel. The index threads
+are shared among multiple collections and databases. Specifying a value of
+*0* will turn off parallel building, meaning that indexes for each collection
+are built sequentially by the thread that opened the collection.
+If the number of index threads is greater than 1, it will also be used to
+build the edge index of a collection in parallel (this also requires the
+edge index in the collection to be split into multiple buckets).
+
 !SUBSECTION V8 contexts
-@startDocuBlock v8Contexts
+
+
+@brief number of V8 contexts for executing JavaScript actions
+`--server.v8-contexts number`
+
+Specifies the *number* of V8 contexts that are created for executing
+JavaScript code. More contexts allow more JavaScript actions to be executed
+in parallel, provided that there are also enough threads available. Please
+note that each V8 context will use a substantial amount of memory and
+requires periodic CPU processing time for garbage collection.
+
 !SUBSECTION Garbage collection frequency (time-based)
-@startDocuBlock jsGcFrequency
+
+
+@brief JavaScript garbage collection frequency (each x seconds)
+`--javascript.gc-frequency frequency`
+
+Specifies the frequency (in seconds) for the automatic garbage collection of
+JavaScript objects. This setting is useful to keep the garbage collection
+working even in periods with few or no requests.
+
 !SUBSECTION Garbage collection interval (request-based)
-@startDocuBlock jsStartupGcInterval
+
+
+@brief JavaScript garbage collection interval (each x requests)
+`--javascript.gc-interval interval`
+
+Specifies the interval (approximately in number of requests) at which the
+garbage collection for JavaScript objects will be run in each thread.
+
 !SUBSECTION V8 options
-@startDocuBlock jsV8Options
+
+
+@brief optional arguments to pass to V8
+`--javascript.v8-options options`
+
+Optional arguments to pass to the V8 JavaScript engine. The V8 engine will
+run with default settings unless explicit options are specified using this
+option. The options passed will be forwarded to the V8 engine, which will
+parse them on its own. Passing invalid options may result in an error being
+printed on stderr and the option being ignored.
+
+Options need to be passed in one string, with V8 option names being prefixed
+with double dashes. Multiple options need to be separated by whitespace.
+To get a list of all available V8 options, you can use
+the value *"--help"* as follows:
+```
+--javascript.v8-options "--help"
+```
+
+Another example of specific V8 options being set at startup:
+
+```
+--javascript.v8-options "--harmony --log"
+```
+
+Names and features of usable options depend on the version of V8 being used,
+and might change in the future if a different version of V8 is being used
+in ArangoDB. Not all options offered by V8 might be sensible to use in the
+context of ArangoDB.
Use the specific options only if you are sure that +they are not harmful for the regular database operation. + diff --git a/Documentation/Books/Users/ConfigureArango/Cluster.mdpp b/Documentation/Books/Users/ConfigureArango/Cluster.mdpp index 9b941f726a..8901814932 100644 --- a/Documentation/Books/Users/ConfigureArango/Cluster.mdpp +++ b/Documentation/Books/Users/ConfigureArango/Cluster.mdpp @@ -2,28 +2,155 @@ !SUBSECTION Node ID -@startDocuBlock clusterMyLocalInfo + + +@brief this server's id +`--cluster.my-local-info info` + +Some local information about the server in the cluster, this can for +example be an IP address with a process ID or any string unique to +the server. Specifying *info* is mandatory on startup if the server +id (see below) is not specified. Each server of the cluster must +have a unique local info. This is ignored if my-id below is specified. + !SUBSECTION Agency endpoint -@startDocuBlock clusterAgencyEndpoint + + +@brief list of agency endpoints +`--cluster.agency-endpoint endpoint` + +An agency endpoint the server can connect to. The option can be specified +multiple times so the server can use a cluster of agency servers. +Endpoints +have the following pattern: + +- tcp://ipv4-address:port - TCP/IP endpoint, using IPv4 +- tcp://[ipv6-address]:port - TCP/IP endpoint, using IPv6 +- ssl://ipv4-address:port - TCP/IP endpoint, using IPv4, SSL encryption +- ssl://[ipv6-address]:port - TCP/IP endpoint, using IPv6, SSL encryption + +At least one endpoint must be specified or ArangoDB will refuse to start. +It is recommended to specify at least two endpoints so ArangoDB has an +alternative endpoint if one of them becomes unavailable. + +@EXAMPLES + +``` +--cluster.agency-endpoint tcp://192.168.1.1:4001 --cluster.agency-endpoint +tcp://192.168.1.2:4002 +``` + !SUBSECTION Agency prefix -@startDocuBlock clusterAgencyPrefix + + +@brief global agency prefix +`--cluster.agency-prefix prefix` + +The global key prefix used in all requests to the agency. The specified +prefix will become part of each agency key. Specifying the key prefix +allows managing multiple ArangoDB clusters with the same agency +server(s). + +*prefix* must consist of the letters *a-z*, *A-Z* and the digits *0-9* +only. Specifying a prefix is mandatory. + +@EXAMPLES + +``` +--cluster.prefix mycluster +``` + !SUBSECTION MyId -@startDocuBlock clusterMyId + + +@brief this server's id +`--cluster.my-id id` + +The local server's id in the cluster. Specifying *id* is mandatory on +startup. Each server of the cluster must have a unique id. + +Specifying the id is very important because the server id is used for +determining the server's role and tasks in the cluster. + +*id* must be a string consisting of the letters *a-z*, *A-Z* or the +digits *0-9* only. + !SUBSECTION MyAddress -@startDocuBlock clusterMyAddress + + +@brief this server's address / endpoint +`--cluster.my-address endpoint` + +The server's endpoint for cluster-internal communication. If specified, it +must have the following pattern: +- tcp://ipv4-address:port - TCP/IP endpoint, using IPv4 +- tcp://[ipv6-address]:port - TCP/IP endpoint, using IPv6 +- ssl://ipv4-address:port - TCP/IP endpoint, using IPv4, SSL encryption +- ssl://[ipv6-address]:port - TCP/IP endpoint, using IPv6, SSL encryption + +If no *endpoint* is specified, the server will look up its internal +endpoint address in the agency. If no endpoint can be found in the agency +for the server's id, ArangoDB will refuse to start. 
+ +@EXAMPLES + +``` +--cluster.my-address tcp://192.168.1.1:8530 +``` + !SUBSECTION Username -@startDocuBlock clusterUsername + + +@brief username used for cluster-internal communication +`--cluster.username username` + +The username used for authorization of cluster-internal requests. +This username will be used to authenticate all requests and responses in +cluster-internal communication, i.e. requests exchanged between +coordinators +and individual database servers. + +This option is used for cluster-internal requests only. Regular requests +to +coordinators are authenticated normally using the data in the *_users* +collection. + +If coordinators and database servers are run with authentication turned +off, +(e.g. by setting the *--server.disable-authentication* option to *true*), +the cluster-internal communication will also be unauthenticated. + !SUBSECTION Password -@startDocuBlock clusterPassword + + +@brief password used for cluster-internal communication +`--cluster.password password` + +The password used for authorization of cluster-internal requests. +This password will be used to authenticate all requests and responses in +cluster-internal communication, i.e. requests exchanged between +coordinators +and individual database servers. + +This option is used for cluster-internal requests only. Regular requests +to +coordinators are authenticated normally using the data in the `_users` +collection. + +If coordinators and database servers are run with authentication turned +off, +(e.g. by setting the *--server.disable-authentication* option to *true*), +the cluster-internal communication will also be unauthenticated. + diff --git a/Documentation/Books/Users/ConfigureArango/Communication.mdpp b/Documentation/Books/Users/ConfigureArango/Communication.mdpp index 44f8f60e2e..9a7d662fef 100644 --- a/Documentation/Books/Users/ConfigureArango/Communication.mdpp +++ b/Documentation/Books/Users/ConfigureArango/Communication.mdpp @@ -1,15 +1,42 @@ !CHAPTER Command-Line Options for Communication !SUBSECTION Scheduler threads -@startDocuBlock schedulerThreads + + +@brief number of scheduler threads +`--scheduler.threads arg` + +An integer argument which sets the number of threads to use in the IO +scheduler. The default is 1. + !SUBSECTION Scheduler maximal queue size -@startDocuBlock schedulerMaximalQueueSize + + +@brief maximum size of the dispatcher queue for asynchronous requests +`--scheduler.maximal-queue-size size` + +Specifies the maximum *size* of the dispatcher queue for asynchronous +task execution. If the queue already contains *size* tasks, new tasks +will be rejected until other tasks are popped from the queue. Setting this +value may help preventing from running out of memory if the queue is +filled +up faster than the server can process requests. + !SUBSECTION Scheduler backend -@startDocuBlock schedulerBackend + + +@brief scheduler backend +`--scheduler.backend arg` + +The I/O method used by the event handler. The default (if this option is +not specified) is to try all recommended backends. This is platform +specific. See libev for further details and the meaning of select, poll +and epoll. 
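+
+Taken together, the scheduler options above can also be grouped in the
+configuration file under a `[scheduler]` section, using the section syntax of
+ArangoDB configuration files (the values shown are illustrative only, not
+recommendations):
+
+```js
+[scheduler]
+threads = 3
+maximal-queue-size = 512
+```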
+ !SUBSECTION Io backends diff --git a/Documentation/Books/Users/ConfigureArango/Endpoint.mdpp b/Documentation/Books/Users/ConfigureArango/Endpoint.mdpp index 05df50e0a8..83f5b6fded 100644 --- a/Documentation/Books/Users/ConfigureArango/Endpoint.mdpp +++ b/Documentation/Books/Users/ConfigureArango/Endpoint.mdpp @@ -44,5 +44,15 @@ When not in the default database, you must first switch to it using the !SUBSECTION List -@startDocuBlock listEndpoints + + +@brief returns a list of all endpoints +`db._listEndpoints()` + +Returns a list of all endpoints and their mapped databases. + +Please note that managing endpoints can only be performed from out of the +*_system* database. When not in the default database, you must first switch +to it using the "db._useDatabase" method. + diff --git a/Documentation/Books/Users/ConfigureArango/Logging.mdpp b/Documentation/Books/Users/ConfigureArango/Logging.mdpp index 523cef8fa7..37c686cc55 100644 --- a/Documentation/Books/Users/ConfigureArango/Logging.mdpp +++ b/Documentation/Books/Users/ConfigureArango/Logging.mdpp @@ -27,52 +27,281 @@ Use *""* to disable. !SUBSECTION Request -@startDocuBlock logRequests + + +@brief log file for requests +`--log.requests-file filename` + +This option allows the user to specify the name of a file to which +requests are logged. By default, no log file is used and requests are +not logged. Note that if the file named by *filename* does not +exist, it will be created. If the file cannot be created (e.g. due to +missing file privileges), the server will refuse to start. If the +specified +file already exists, output is appended to that file. + +Use *+* to log to standard error. Use *-* to log to standard output. +Use *""* to disable request logging altogether. + +The log format is +- `"http-request"`: static string indicating that an HTTP request was +logged +- client address: IP address of client +- HTTP method type, e.g. `GET`, `POST` +- HTTP version, e.g. `HTTP/1.1` +- HTTP response code, e.g. 200 +- request body length in bytes +- response body length in bytes +- server request processing time, containing the time span between +fetching + the first byte of the HTTP request and the start of the HTTP response + !SECTION Human Readable Logging !SUBSECTION Logfiles -@startDocuBlock logFile + + +@brief log file + !SUBSECTION Level -@startDocuBlock logLevel + + +@brief log level +`--log.level level` + +`--log level` + +Allows the user to choose the level of information which is logged by the +server. The argument *level* is specified as a string and can be one of +the values listed below. Note that, fatal errors, that is, errors which +cause the server to terminate, are always logged irrespective of the log +level assigned by the user. The variant *c* log.level can be used in +configuration files, the variant *c* log for command line options. + +**fatal**: +Logs errors which cause the server to terminate. + +Fatal errors generally indicate some inconsistency with the manner in +which +the server has been coded. Fatal errors may also indicate a problem with +the +platform on which the server is running. Fatal errors always cause the +server to terminate. For example, + +``` +2010-09-20T07:32:12Z [4742] FATAL a http server has already been created +``` + +**error**: +Logs errors which the server has encountered. + +These errors may not necessarily result in the termination of the +server. 
For example, + +``` +2010-09-17T13:10:22Z [13967] ERROR strange log level 'errors'\, going to +'warning' +``` + +**warning**: +Provides information on errors encountered by the server, +which are not necessarily detrimental to it's continued operation. + +For example, + +``` +2010-09-20T08:15:26Z [5533] WARNING got corrupted HTTP request 'POS?' +``` + +**Note**: The setting the log level to warning will also result in all +errors +to be logged as well. + +**info**: +Logs information about the status of the server. + +For example, + +``` +2010-09-20T07:40:38Z [4998] INFO SimpleVOC ready for business +``` + +**Note**: The setting the log level to info will also result in all errors +and +warnings to be logged as well. + +**debug**: +Logs all errors, all warnings and debug information. + +Debug log information is generally useful to find out the state of the +server in the case of an error. For example, + +``` +2010-09-17T13:02:53Z [13783] DEBUG opened port 7000 for any +``` + +**Note**: The setting the log level to debug will also result in all +errors, +warnings and server status information to be logged as well. + +**trace**: +As the name suggests, logs information which may be useful to trace +problems encountered with using the server. + +For example, + +``` +2010-09-20T08:23:12Z [5687] TRACE trying to open port 8000 +``` + +**Note**: The setting the log level to trace will also result in all +errors, +warnings, status information, and debug information to be logged as well. + !SUBSECTION Local Time -@startDocuBlock logLocalTime + + +@brief log dates and times in local time zone +`--log.use-local-time` + +If specified, all dates and times in log messages will use the server's +local time-zone. If not specified, all dates and times in log messages +will be printed in UTC / Zulu time. The date and time format used in logs +is always `YYYY-MM-DD HH:MM:SS`, regardless of this setting. If UTC time +is used, a `Z` will be appended to indicate Zulu time. + !SUBSECTION Line number -@startDocuBlock logLineNumber + + +@brief log line number +`--log.line-number` + +Normally, if an human readable fatal, error, warning or info message is +logged, no information about the file and line number is provided. The +file +and line number is only logged for debug and trace message. This option +can +be use to always log these pieces of information. + !SUBSECTION Prefix -@startDocuBlock logPrefix + + +@brief log prefix +`--log.prefix prefix` + +This option is used specify an prefix to logged text. + !SUBSECTION Thread -@startDocuBlock logThread + + +@brief log thread identifier +`--log.thread` + +Whenever log output is generated, the process ID is written as part of the +log information. Setting this option appends the thread id of the calling +thread to the process id. For example, + +``` +2010-09-20T13:04:01Z [19355] INFO ready for business +``` + +when no thread is logged and + +``` +2010-09-20T13:04:17Z [19371-18446744072487317056] ready for business +``` + +when this command line option is set. + !SUBSECTION Source Filter -@startDocuBlock logSourceFilter + + +@brief log source filter +`--log.source-filter arg` + +For debug and trace messages, only log those messages originated from the +C source file *arg*. The argument can be used multiple times. + !SUBSECTION Content Filter -@startDocuBlock logContentFilter + + +@brief log content filter +`--log.content-filter arg` + +Only log message containing the specified string *arg*. 
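+
+As a small illustration of how these options combine, a configuration file
+fragment for verbose, filtered logging might look like this (the level and
+filter string are examples only):
+
+```js
+[log]
+level = debug
+content-filter = ready
+```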
+ !SUBSECTION Performance -@startDocuBlock logPerformance + + +@brief performance logging +`--log.performance` + +If this option is set, performance-related info messages will be logged +via +the regular logging mechanisms. These will consist of mostly timing and +debugging information for performance-critical operations. + +Currently performance-related operations are logged as INFO messages. +Messages starting with prefix `[action]` indicate that an instrumented +operation was started (note that its end won't be logged). Messages with +prefix `[timer]` will contain timing information for operations. Note that +no timing information will be logged for operations taking less time than +1 second. This is to ensure that sub-second operations do not pollute +logs. + +The contents of performance-related log messages enabled by this option +are subject to change in future versions of ArangoDB. + !SECTION Machine Readable Logging !SUBSECTION Application -@startDocuBlock logApplication + + +@brief log application name +`--log.application name` + +Specifies the *name* of the application which should be logged if this +item of +information is to be logged. + !SUBSECTION Facility -@startDocuBlock logFacility + + +@brief log facility +`--log.facility name` + +If this option is set, then in addition to output being directed to the +standard output (or to a specified file, in the case that the command line +log.file option was set), log output is also sent to the system logging +facility. The *arg* is the system log facility to use. See syslog for +further details. + +The value of *arg* depends on your syslog configuration. In general it +will be *user*. Fatal messages are mapped to *crit*, so if *arg* +is *user*, these messages will be logged as *user.crit*. Error +messages are mapped to *err*. Warnings are mapped to *warn*. Info +messages are mapped to *notice*. Debug messages are mapped to +*info*. Trace messages are mapped to *debug*. + diff --git a/Documentation/Books/Users/ConfigureArango/README.mdpp b/Documentation/Books/Users/ConfigureArango/README.mdpp index e8ad94224a..39bbf86f1a 100644 --- a/Documentation/Books/Users/ConfigureArango/README.mdpp +++ b/Documentation/Books/Users/ConfigureArango/README.mdpp @@ -9,11 +9,28 @@ environment variable. !SUBSECTION General help -@startDocuBlock generalHelp + + +@brief program options +`--help` + +`-h` + +Prints a list of the most common options available and then +exits. In order to see all options use *--help-all*. + !SUBSECTION Version -@startDocuBlock generalVersion + + +@brief version of the application +`--version` + +`-v` + +Prints the version of the server and exits. + !SUBSECTION Upgrade `--upgrade` @@ -30,7 +47,83 @@ Whether or not this option is specified, the server will always perform a versio !SUBSECTION Configuration -@startDocuBlock configurationFilename + + +@brief config file +`--configuration filename` + +`-c filename` + +Specifies the name of the configuration file to use. + +If this command is not passed to the server, then by default, the server +will attempt to first locate a file named *~/.arango/arangod.conf* in the +user's home directory. + +If no such file is found, the server will proceed to look for a file +*arangod.conf* in the system configuration directory. The system +configuration directory is platform-specific, and may be changed when +compiling ArangoDB yourself. It may default to */etc/arangodb* or +*/usr/local/etc/arangodb*. This file is installed when using a package +manager like rpm or dpkg. 
If you modify this file and later upgrade to a +new +version of ArangoDB, then the package manager normally warns you about the +conflict. In order to avoid these warning for small adjustments, you can +put +local overrides into a file *arangod.conf.local*. + +Only command line options with a value should be set within the +configuration file. Command line options which act as flags should be +entered on the command line when starting the server. + +Whitespace in the configuration file is ignored. Each option is specified +on +a separate line in the form + +```js +key = value +``` + +Alternatively, a header section can be specified and options pertaining to +that section can be specified in a shorter form + +```js +[log] +level = trace +``` + +rather than specifying + +```js +log.level = trace +``` + +Comments can be placed in the configuration file, only if the line begins +with one or more hash symbols (#). + +There may be occasions where a configuration file exists and the user +wishes +to override configuration settings stored in a configuration file. Any +settings specified on the command line will overwrite the same setting +when +it appears in a configuration file. If the user wishes to completely +ignore +configuration files without necessarily deleting the file (or files), then +add the command line option + +```js +-c none +``` + +or + +```js +--configuration none +``` + +When starting up the server. Note that, the word *none* is +case-insensitive. + !SUBSECTION Daemon `--daemon` @@ -41,7 +134,19 @@ parameter pid-file is given, then the server will report an error and exit. !SUBSECTION Default Language -@startDocuBlock DefaultLanguage + + +@brief server default language for sorting strings +`--default-language default-language` + +The default language ist used for sorting and comparing strings. +The language value is a two-letter language code (ISO-639) or it is +composed by a two-letter language code with and a two letter country code +(ISO-3166). Valid languages are "de", "en", "en_US" or "en_UK". + +The default default-language is set to be the system locale on that +platform. + !SUBSECTION Supervisor `--supervisor` @@ -81,19 +186,63 @@ start up a new database process: -@startDocuBlock configurationUid + + +@brief the user id to use for the process +`--uid uid` + +The name (identity) of the user the server will run as. If this parameter +is +not specified, the server will not attempt to change its UID, so that the +UID used by the server will be the same as the UID of the user who started +the server. If this parameter is specified, then the server will change +its +UID after opening ports and reading configuration files, but before +accepting connections or opening other files (such as recovery files). +This +is useful when the server must be started with raised privileges (in +certain +environments) but security considerations require that these privileges be +dropped once the server has started work. + +Observe that this parameter cannot be used to bypass operating system +security. In general, this parameter (and its corresponding relative gid) +can lower privileges but not raise them. + !SUBSECTION Group identity -@startDocuBlock configurationGid + + +@brief the group id to use for the process +`--gid gid` + +The name (identity) of the group the server will run as. If this parameter +is not specified, then the server will not attempt to change its GID, so +that the GID the server runs as will be the primary group of the user who +started the server. 
If this parameter is specified, then the server will
+change its GID after opening ports and reading configuration files, but
+before accepting connections or opening other files (such as recovery
+files).
+
+This parameter is related to the *uid* parameter.
+
 
 !SUBSECTION Process identity
 
-@startDocuBlock pidFile
+
+
+@brief pid file
+`--pid-file filename`
+
+The name of the process ID file to use when running the server as a
+daemon. This parameter must be specified if either the flag *daemon* or
+*supervisor* is set.
+
 
 !SUBSECTION Console
 
 `--console`
 
@@ -111,4 +260,20 @@ already running in this or another mode.
 
 !SUBSECTION Random Generator
 
-@startDocuBlock randomGenerator
\ No newline at end of file
+
+
+@brief random number generator to use
+`--random.generator arg`
+
+The argument is an integer (1, 2, 3 or 4) which sets the manner in which
+random numbers are generated. The default method (3) is to use a
+non-blocking random (or pseudorandom) number generator supplied by the
+operating system.
+
+Specifying an argument of 2 uses a blocking random (or pseudorandom)
+number generator. Specifying an argument of 1 selects a pseudorandom
+number generator based on an implementation of the Mersenne Twister
+MT19937 algorithm. Algorithm 4 is a combination of the blocking random
+number generator and the Mersenne Twister.
+
diff --git a/Documentation/Books/Users/ConfigureArango/Wal.mdpp b/Documentation/Books/Users/ConfigureArango/Wal.mdpp
index 5183dd97cb..c8bf380fbc 100644
--- a/Documentation/Books/Users/ConfigureArango/Wal.mdpp
+++ b/Documentation/Books/Users/ConfigureArango/Wal.mdpp
@@ -11,7 +11,16 @@ a replication backlog.
 
 !SUBSECTION Directory
 
-@startDocuBlock WalLogfileDirectory
+
+
+@brief the WAL logfiles directory
+`--wal.directory`
+
+Specifies the directory in which the write-ahead logfiles should be
+stored. If this option is not specified, it defaults to the subdirectory
+*journals* in the server's global database directory. If the directory is
+not present, it will be created.
+
 
 !SUBSECTION Logfile size
 
@@ -39,21 +48,132 @@ a replication backlog.
 
 !SUBSECTION Throttling
 
-@startDocuBlock WalLogfileThrottling
+
+
+@brief throttle writes to WAL when at least this many operations are
+waiting for garbage collection
+`--wal.throttle-when-pending`
+
+The maximum value for the number of write-ahead log garbage-collection
+queue elements. If set to *0*, the queue size is unbounded, and no
+write-throttling will occur. If set to a non-zero value, write-throttling
+will automatically kick in when the garbage-collection queue contains at
+least as many elements as specified by this option.
+While write-throttling is active, data-modification operations will
+intentionally be delayed by a configurable amount of time. This is to
+ensure the write-ahead log garbage collector can catch up with the
+operations executed.
+Write-throttling will stay active until the garbage-collection queue size
+falls below the specified value.
+Write-throttling is turned off by default.
+
+`--wal.throttle-wait`
+
+This option determines the maximum wait time (in milliseconds) for
+operations that are write-throttled. If write-throttling is active and a
+new write operation is to be executed, it will wait for at most the
+specified amount of time for the write-ahead log garbage-collection queue
+size to fall below the throttling threshold. If the queue size decreases
+before the maximum wait time is over, the operation will be executed
+normally. 
If the queue size does not decrease before the wait time is +over, +the operation will be aborted with an error. +This option only has an effect if `--wal.throttle-when-pending` has a +non-zero value, which is not the default. + !SUBSECTION Number of slots -@startDocuBlock WalLogfileSlots + + +@brief maximum number of slots to be used in parallel +`--wal.slots` + +Configures the amount of write slots the write-ahead log can give to write +operations in parallel. Any write operation will lease a slot and return +it +to the write-ahead log when it is finished writing the data. A slot will +remain blocked until the data in it was synchronized to disk. After that, +a slot becomes reusable by following operations. The required number of +slots is thus determined by the parallelity of write operations and the +disk synchronization speed. Slow disks probably need higher values, and +fast +disks may only require a value lower than the default. + !SUBSECTION Ignore logfile errors -@startDocuBlock WalLogfileIgnoreLogfileErrors + + +@brief ignore logfile errors when opening logfiles +`--wal.ignore-logfile-errors` + +Ignores any recovery errors caused by corrupted logfiles on startup. When +set to *false*, the recovery procedure on startup will fail with an error +whenever it encounters a corrupted (that includes only half-written) +logfile. This is a security precaution to prevent data loss in case of +disk +errors etc. When the recovery procedure aborts because of corruption, any +corrupted files can be inspected and fixed (or removed) manually and the +server can be restarted afterwards. + +Setting the option to *true* will make the server continue with the +recovery +procedure even in case it detects corrupt logfile entries. In this case it +will stop at the first corrupted logfile entry and ignore all others, +which +might cause data loss. + !SUBSECTION Ignore recovery errors -@startDocuBlock WalLogfileIgnoreRecoveryErrors + + +@brief ignore recovery errors +`--wal.ignore-recovery-errors` + +Ignores any recovery errors not caused by corrupted logfiles but by +logical +errors. Logical errors can occur if logfiles or any other server datafiles +have been manually edited or the server is somehow misconfigured. + !SUBSECTION Ignore (non-WAL) datafile errors -@startDocuBlock databaseIgnoreDatafileErrors + + +@brief ignore datafile errors when loading collections +`--database.ignore-datafile-errors boolean` + +If set to `false`, CRC mismatch and other errors in collection datafiles +will lead to a collection not being loaded at all. The collection in this +case becomes unavailable. If such collection needs to be loaded during WAL +recovery, the WAL recovery will also abort (if not forced with option +`--wal.ignore-recovery-errors true`). + +Setting this flag to `false` protects users from unintentionally using a +collection with corrupted datafiles, from which only a subset of the +original data can be recovered. Working with such collection could lead +to data loss and follow up errors. +In order to access such collection, it is required to inspect and repair +the collection datafile with the datafile debugger (arango-dfdb). + +If set to `true`, CRC mismatch and other errors during the loading of a +collection will lead to the datafile being partially loaded, up to the +position of the first error. All data up to until the invalid position +will be loaded. 
This will enable users to continue with collection datafiles
+even if they are corrupted, but this will result in only a partial load
+of the original data and potential follow-up errors. The WAL recovery
+will still abort when encountering a collection with a corrupted datafile,
+at least if `--wal.ignore-recovery-errors` is not set to `true`.
+
+The default value is *false*, so collections with corrupted datafiles will
+not be loaded at all, preventing partial loads and follow-up errors. However,
+if such a collection is required at server startup, during WAL recovery, the
+server will abort the recovery and refuse to start.
+
diff --git a/Documentation/Books/Users/Databases/WorkingWith.mdpp b/Documentation/Books/Users/Databases/WorkingWith.mdpp
index 3ffcc7a84a..ed70f54c3c 100644
--- a/Documentation/Books/Users/Databases/WorkingWith.mdpp
+++ b/Documentation/Books/Users/Databases/WorkingWith.mdpp
@@ -8,32 +8,172 @@ database only.
 
 !SUBSECTION Name
 
-@startDocuBlock databaseName
+
+
+return the database name
+`db._name()`
+
+Returns the name of the current database as a string.
+
+@EXAMPLES
+
+@startDocuBlockInline dbName
+@EXAMPLE_ARANGOSH_OUTPUT{dbName}
+  require("internal").db._name();
+@END_EXAMPLE_ARANGOSH_OUTPUT
+@endDocuBlock dbName
+
 
 !SUBSECTION ID
 
-@startDocuBlock databaseId
+
+
+return the database id
+`db._id()`
+
+Returns the id of the current database as a string.
+
+@EXAMPLES
+
+@startDocuBlockInline dbId
+@EXAMPLE_ARANGOSH_OUTPUT{dbId}
+  require("internal").db._id();
+@END_EXAMPLE_ARANGOSH_OUTPUT
+@endDocuBlock dbId
+
 
 !SUBSECTION Path
 
-@startDocuBlock databasePath
+
+
+return the path to database files
+`db._path()`
+
+Returns the filesystem path of the current database as a string.
+
+@EXAMPLES
+
+@startDocuBlockInline dbPath
+@EXAMPLE_ARANGOSH_OUTPUT{dbPath}
+  require("internal").db._path();
+@END_EXAMPLE_ARANGOSH_OUTPUT
+@endDocuBlock dbPath
+
 
 !SUBSECTION isSystem
 
-@startDocuBlock databaseIsSystem
+
+
+return the database type
+`db._isSystem()`
+
+Returns whether the currently used database is the *_system* database.
+The system database has some special privileges and properties, for example,
+database management operations such as create or drop can only be executed
+from within this database. Additionally, the *_system* database itself
+cannot be dropped.
+
 
 !SUBSECTION Use Database
 
-@startDocuBlock databaseUseDatabase
+
+
+change the current database
+`db._useDatabase(name)`
+
+Changes the current database to the database specified by *name*. Note
+that the database specified by *name* must already exist.
+
+Changing the database might be disallowed in some contexts, for example
+server-side actions (including Foxx).
+
+When performing this command from arangosh, the current credentials (username
+and password) will be re-used. These credentials might not be valid to
+connect to the database specified by *name*. Additionally, the database
+might only be accessible via certain endpoints. In this case, switching the
+database might not work, and the connection / session should be closed and
+restarted with different username and password credentials and/or
+endpoint data.
+
 
 !SUBSECTION List Databases
 
-@startDocuBlock databaseListDatabase
+
+
+return the list of all existing databases
+`db._listDatabases()`
+
+Returns the list of all databases. This method can only be used from within
+the *_system* database.
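+
+For instance, a minimal arangosh sketch that ties the methods of this section
+together could look like this (the database name *myDB* is just an
+illustrative assumption):
+
+```js
+// run from within the _system database
+var db = require("internal").db;
+db._createDatabase("myDB");    // described in the next section
+db._listDatabases();           // now contains "myDB" in addition to "_system"
+db._useDatabase("myDB");       // re-uses the credentials of the current session
+db._name();                    // returns "myDB"
+db._useDatabase("_system");    // switch back before dropping the database
+db._dropDatabase("myDB");
+```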
+
 
 !SUBSECTION Create Database
 
-@startDocuBlock databaseCreateDatabase
+
+
+create a new database
+`db._createDatabase(name, options, users)`
+
+Creates a new database with the name specified by *name*.
+There are restrictions for database names
+(see [DatabaseNames](../NamingConventions/DatabaseNames.md)).
+
+Note that even if the database is created successfully, the current
+database will not be changed to the new database. Changing the current
+database must be requested explicitly by using the
+*db._useDatabase* method.
+
+The *options* attribute currently has no meaning and is reserved for
+future use.
+
+The optional *users* attribute can be used to create initial users for
+the new database. If specified, it must be a list of user objects. Each user
+object can contain the following attributes:
+
+* *username*: the user name as a string. This attribute is mandatory.
+* *passwd*: the user password as a string. If not specified, then it defaults
+  to the empty string.
+* *active*: a boolean flag indicating whether the user account should be
+  active or not. The default value is *true*.
+* *extra*: an optional JSON object with extra user information. The data
+  contained in *extra* will be stored for the user but not be interpreted
+  further by ArangoDB.
+
+If no initial users are specified, a default user *root* will be created
+with an empty string password. This ensures that the new database will be
+accessible via HTTP after it is created.
+
+Even if no initial users are specified, you can still create users after the
+database has been created: switch into the new database (username and
+password must be identical to the current session) and add or modify users
+with the following commands.
+
+```js
+  require("@arangodb/users").save(username, password, true);
+  require("@arangodb/users").update(username, password, true);
+  require("@arangodb/users").remove(username);
+```
+
+Alternatively, you can specify user data directly. For example:
+
+```js
+  db._createDatabase("newDB", [], [{ username: "newUser", passwd: "123456", active: true}])
+```
+
+These methods can only be used from within the *_system* database.
+
 
 !SUBSECTION Drop Database
 
-@startDocuBlock databaseDropDatabase
\ No newline at end of file
+
+
+drop an existing database
+`db._dropDatabase(name)`
+
+Drops the database specified by *name*. The database specified by
+*name* must exist.
+
+**Note**: Dropping databases is only possible from within the *_system*
+database. The *_system* database itself cannot be dropped.
+
+Databases are dropped asynchronously, and will be physically removed if
+all clients have disconnected and references have been garbage-collected.
+
diff --git a/Documentation/Books/Users/Documents/DatabaseMethods.mdpp b/Documentation/Books/Users/Documents/DatabaseMethods.mdpp
index 69db12de56..4ad0952778 100644
--- a/Documentation/Books/Users/Documents/DatabaseMethods.mdpp
+++ b/Documentation/Books/Users/Documents/DatabaseMethods.mdpp
@@ -2,20 +2,258 @@
 
 !SUBSECTION Document
 
-@startDocuBlock documentsDocumentName
+
+
+@brief looks up a document and returns it
+`db._document(document)`
+
+This method finds a document given its identifier. It returns the document
+if the document exists. An error is thrown if no document with the given
+identifier exists, or if the specified *_rev* value does not match the
+current revision of the document.
+
+**Note**: If the method is executed on the arangod server (e.g. from
+inside a Foxx application), an immutable document object will be returned
+for performance reasons. 
It is not possible to change attributes of this +immutable object. To update or patch the returned document, it needs to be +cloned/copied into a regular JavaScript object first. This is not necessary +if the *_document* method is called from out of arangosh or from any +other client. + +`db._document(document-handle)` + +As before. Instead of document a *document-handle* can be passed as +first argument. + +@EXAMPLES + +Returns the document: + + @startDocuBlockInline documentsDocumentName + @EXAMPLE_ARANGOSH_OUTPUT{documentsDocumentName} + ~ db._create("example"); + ~ var myid = db.example.insert({_key: "12345"}); + db._document("example/12345"); + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock documentsDocumentName + + !SUBSECTION Exists -@startDocuBlock documentsDocumentExists + + +@brief checks whether a document exists +`db._exists(document)` + +This method determines whether a document exists given its identifier. +Instead of returning the found document or an error, this method will +return either *true* or *false*. It can thus be used +for easy existence checks. + +No error will be thrown if the sought document or collection does not +exist. +Still this method will throw an error if used improperly, e.g. when called +with a non-document handle. + +`db._exists(document-handle)` + +As before, but instead of a document a document-handle can be passed. + !SUBSECTION Replace -@startDocuBlock documentsDocumentReplace + + +@brief replaces a document +`db._replace(document, data)` + +The method returns a document with the attributes *_id*, *_rev* and +*_oldRev*. The attribute *_id* contains the document handle of the +updated document, the attribute *_rev* contains the document revision of +the updated document, the attribute *_oldRev* contains the revision of +the old (now replaced) document. + +If there is a conflict, i. e. if the revision of the *document* does not +match the revision in the collection, then an error is thrown. + +`db._replace(document, data, true)` + +As before, but in case of a conflict, the conflict is ignored and the old +document is overwritten. + +`db._replace(document, data, true, waitForSync)` + +The optional *waitForSync* parameter can be used to force +synchronization of the document replacement operation to disk even in case +that the *waitForSync* flag had been disabled for the entire collection. +Thus, the *waitForSync* parameter can be used to force synchronization +of just specific operations. To use this, set the *waitForSync* parameter +to *true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +`db._replace(document-handle, data)` + +As before. Instead of document a *document-handle* can be passed as +first argument. 
+ +@EXAMPLES + +Create and replace a document: + + @startDocuBlockInline documentsDocumentReplace + @EXAMPLE_ARANGOSH_OUTPUT{documentsDocumentReplace} + ~ db._create("example"); + a1 = db.example.insert({ a : 1 }); + a2 = db._replace(a1, { a : 2 }); + a3 = db._replace(a1, { a : 3 }); // xpError(ERROR_ARANGO_CONFLICT); + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock documentsDocumentReplace + + !SUBSECTION Update -@startDocuBlock documentsDocumentUpdate + + +@brief update a document +`db._update(document, data, overwrite, keepNull, waitForSync)` + +Updates an existing *document*. The *document* must be a document in +the current collection. This document is then patched with the +*data* given as second argument. The optional *overwrite* parameter can +be used to control the behavior in case of version conflicts (see below). +The optional *keepNull* parameter can be used to modify the behavior when +handling *null* values. Normally, *null* values are stored in the +database. By setting the *keepNull* parameter to *false*, this behavior +can be changed so that all attributes in *data* with *null* values will +be removed from the target document. + +The optional *waitForSync* parameter can be used to force +synchronization of the document update operation to disk even in case +that the *waitForSync* flag had been disabled for the entire collection. +Thus, the *waitForSync* parameter can be used to force synchronization +of just specific operations. To use this, set the *waitForSync* parameter +to *true*. If the *waitForSync* parameter is not specified or set to +false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +The method returns a document with the attributes *_id*, *_rev* and +*_oldRev*. The attribute *_id* contains the document handle of the +updated document, the attribute *_rev* contains the document revision of +the updated document, the attribute *_oldRev* contains the revision of +the old (now replaced) document. + +If there is a conflict, i. e. if the revision of the *document* does not +match the revision in the collection, then an error is thrown. + +`db._update(document, data, true)` + +As before, but in case of a conflict, the conflict is ignored and the old +document is overwritten. + +`db._update(document-handle, data)` + +As before. Instead of document a *document-handle* can be passed as +first argument. + +@EXAMPLES + +Create and update a document: + + @startDocuBlockInline documentDocumentUpdate + @EXAMPLE_ARANGOSH_OUTPUT{documentDocumentUpdate} + ~ db._create("example"); + a1 = db.example.insert({ a : 1 }); + a2 = db._update(a1, { b : 2 }); + a3 = db._update(a1, { c : 3 }); // xpError(ERROR_ARANGO_CONFLICT); + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock documentDocumentUpdate + + !SUBSECTION Remove -@startDocuBlock documentsDocumentRemove + + +@brief removes a document +`db._remove(document)` + +Removes a document. If there is revision mismatch, then an error is thrown. + +`db._remove(document, true)` + +Removes a document. If there is revision mismatch, then mismatch is ignored +and document is deleted. The function returns *true* if the document +existed and was deleted. It returns *false*, if the document was already +deleted. 
+ +`db._remove(document, true, waitForSync)` or +`db._remove(document, {overwrite: true or false, waitForSync: true or false})` + +The optional *waitForSync* parameter can be used to force synchronization +of the document deletion operation to disk even in case that the +*waitForSync* flag had been disabled for the entire collection. Thus, +the *waitForSync* parameter can be used to force synchronization of just +specific operations. To use this, set the *waitForSync* parameter to +*true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +`db._remove(document-handle, data)` + +As before. Instead of document a *document-handle* can be passed as first +argument. + +@EXAMPLES + +Remove a document: + + @startDocuBlockInline documentsCollectionRemove + @EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionRemove} + ~ db._create("example"); + a1 = db.example.insert({ a : 1 }); + db._remove(a1); + db._remove(a1); // xpError(ERROR_ARANGO_DOCUMENT_NOT_FOUND); + db._remove(a1, true); + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock documentsCollectionRemove + +Remove a document with a conflict: + + @startDocuBlockInline documentsCollectionRemoveConflict + @EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionRemoveConflict} + ~ db._create("example"); + a1 = db.example.insert({ a : 1 }); + a2 = db._replace(a1, { a : 2 }); + db._remove(a1); // xpError(ERROR_ARANGO_CONFLICT) + db._remove(a1, true); + db._document(a1); // xpError(ERROR_ARANGO_DOCUMENT_NOT_FOUND) + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock documentsCollectionRemoveConflict + +Remove a document using new signature: + + @startDocuBlockInline documentsCollectionRemoveSignature + @EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionRemoveSignature} + ~ db._create("example"); + db.example.insert({ a: 1 } ); + | db.example.remove("example/11265325374", + { overwrite: true, waitForSync: false}) + ~ db._drop("example"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock documentsCollectionRemoveSignature + + diff --git a/Documentation/Books/Users/Documents/DocumentMethods.mdpp b/Documentation/Books/Users/Documents/DocumentMethods.mdpp index 0458251738..4503422a21 100644 --- a/Documentation/Books/Users/Documents/DocumentMethods.mdpp +++ b/Documentation/Books/Users/Documents/DocumentMethods.mdpp @@ -2,97 +2,1105 @@ !SUBSECTION All -@startDocuBlock collectionAll + + +@brief constructs an all query for a collection +`collection.all()` + +Fetches all documents from a collection and returns a cursor. You can use +*toArray*, *next*, or *hasNext* to access the result. The result +can be limited using the *skip* and *limit* operator. 
+ +@EXAMPLES + +Use *toArray* to get all documents at once: + + @startDocuBlockInline 001_collectionAll + @EXAMPLE_ARANGOSH_OUTPUT{001_collectionAll} + ~ db._create("five"); + db.five.save({ name : "one" }); + db.five.save({ name : "two" }); + db.five.save({ name : "three" }); + db.five.save({ name : "four" }); + db.five.save({ name : "five" }); + db.five.all().toArray(); + ~ db._drop("five"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock 001_collectionAll + +Use *limit* to restrict the documents: + + @startDocuBlockInline 002_collectionAllNext + @EXAMPLE_ARANGOSH_OUTPUT{002_collectionAllNext} + ~ db._create("five"); + db.five.save({ name : "one" }); + db.five.save({ name : "two" }); + db.five.save({ name : "three" }); + db.five.save({ name : "four" }); + db.five.save({ name : "five" }); + db.five.all().limit(2).toArray(); + ~ db._drop("five"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock 002_collectionAllNext + + !SUBSECTION Query by example -@startDocuBlock collectionByExample + + +@brief constructs a query-by-example for a collection +`collection.byExample(example)` + +Fetches all documents from a collection that match the specified +example and returns a cursor. + +You can use *toArray*, *next*, or *hasNext* to access the +result. The result can be limited using the *skip* and *limit* +operator. + +An attribute name of the form *a.b* is interpreted as attribute path, +not as attribute. If you use + +```json +{ a : { c : 1 } } +``` + +as example, then you will find all documents, such that the attribute +*a* contains a document of the form *{c : 1 }*. For example the document + +```json +{ a : { c : 1 }, b : 1 } +``` + +will match, but the document + +```json +{ a : { c : 1, b : 1 } } +``` + +will not. + +However, if you use + +```json +{ a.c : 1 } +```, + +then you will find all documents, which contain a sub-document in *a* +that has an attribute *c* of value *1*. Both the following documents + +```json +{ a : { c : 1 }, b : 1 } +``` + +and + +```json +{ a : { c : 1, b : 1 } } +``` + +will match. + +`collection.byExample(path1, value1, ...)` + +As alternative you can supply an array of paths and values. + +@EXAMPLES + +Use *toArray* to get all documents at once: + +@startDocuBlockInline 003_collectionByExample +@EXAMPLE_ARANGOSH_OUTPUT{003_collectionByExample} +@endDocuBlock 003_collectionByExample +~ db._create("users"); + db.users.save({ name: "Gerhard" }); + db.users.save({ name: "Helmut" }); + db.users.save({ name: "Angela" }); + db.users.all().toArray(); + db.users.byExample({ "_id" : "users/20" }).toArray(); + db.users.byExample({ "name" : "Gerhard" }).toArray(); + db.users.byExample({ "name" : "Helmut", "_id" : "users/15" }).toArray(); +~ db._drop("users"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +Use *next* to loop over all documents: + +@startDocuBlockInline 004_collectionByExampleNext +@EXAMPLE_ARANGOSH_OUTPUT{004_collectionByExampleNext} +@endDocuBlock 004_collectionByExampleNext +~ db._create("users"); + db.users.save({ name: "Gerhard" }); + db.users.save({ name: "Helmut" }); + db.users.save({ name: "Angela" }); + var a = db.users.byExample( {"name" : "Angela" } ); + while (a.hasNext()) print(a.next()); +~ db._drop("users"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION First Example -@startDocuBlock collectionFirstExample + + +@brief constructs a query-by-example for a collection +`collection.firstExample(example)` + +Returns the first document of a collection that matches the specified +example. If no such document exists, *null* will be returned. 
+The example has to be specified as paths and values. +See *byExample* for details. + +`collection.firstExample(path1, value1, ...)` + +As alternative you can supply an array of paths and values. + +@EXAMPLES + +@startDocuBlockInline collectionFirstExample +@EXAMPLE_ARANGOSH_OUTPUT{collectionFirstExample} +@endDocuBlock collectionFirstExample +~ db._create("users"); +~ db.users.save({ name: "Gerhard" }); +~ db.users.save({ name: "Helmut" }); +~ db.users.save({ name: "Angela" }); + db.users.firstExample("name", "Angela"); +~ db._drop("users"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION Range -@startDocuBlock collectionRange + + +@brief constructs a range query for a collection +`collection.range(attribute, left, right)` + +Returns all documents from a collection such that the *attribute* is +greater or equal than *left* and strictly less than *right*. + +You can use *toArray*, *next*, or *hasNext* to access the +result. The result can be limited using the *skip* and *limit* +operator. + +An attribute name of the form *a.b* is interpreted as attribute path, +not as attribute. + +For range queries it is required that a skiplist index is present for the +queried attribute. If no skiplist index is present on the attribute, an +error will be thrown. + +Note: the *range* simple query function is **deprecated** as of ArangoDB 2.6. +The function may be removed in future versions of ArangoDB. The preferred +way for retrieving documents from a collection within a specific range +is to use an AQL query as follows: + + FOR doc IN @@collection + FILTER doc.value >= @left && doc.value < @right + LIMIT @skip, @limit + RETURN doc + +@EXAMPLES + +Use *toArray* to get all documents at once: + +@startDocuBlockInline 005_collectionRange +@EXAMPLE_ARANGOSH_OUTPUT{005_collectionRange} +@endDocuBlock 005_collectionRange +~ db._create("old"); + db.old.ensureIndex({ type: "skiplist", fields: [ "age" ] }); + db.old.save({ age: 15 }); + db.old.save({ age: 25 }); + db.old.save({ age: 30 }); + db.old.range("age", 10, 30).toArray(); +~ db._drop("old") +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION Closed range -@startDocuBlock collectionClosedRange + + +@brief constructs a closed range query for a collection +`collection.closedRange(attribute, left, right)` + +Returns all documents of a collection such that the *attribute* is +greater or equal than *left* and less or equal than *right*. + +You can use *toArray*, *next*, or *hasNext* to access the +result. The result can be limited using the *skip* and *limit* +operator. + +An attribute name of the form *a.b* is interpreted as attribute path, +not as attribute. + +Note: the *closedRange* simple query function is **deprecated** as of ArangoDB 2.6. +The function may be removed in future versions of ArangoDB. 
The preferred +way for retrieving documents from a collection within a specific range +is to use an AQL query as follows: + + FOR doc IN @@collection + FILTER doc.value >= @left && doc.value <= @right + LIMIT @skip, @limit + RETURN doc + +@EXAMPLES + +Use *toArray* to get all documents at once: + +@startDocuBlockInline 006_collectionClosedRange +@EXAMPLE_ARANGOSH_OUTPUT{006_collectionClosedRange} +@endDocuBlock 006_collectionClosedRange +~ db._create("old"); + db.old.ensureIndex({ type: "skiplist", fields: [ "age" ] }); + db.old.save({ age: 15 }); + db.old.save({ age: 25 }); + db.old.save({ age: 30 }); + db.old.closedRange("age", 10, 30).toArray(); +~ db._drop("old") +@END_EXAMPLE_ARANGOSH_OUTPUT + !SUBSECTION Any -@startDocuBlock documentsCollectionAny + + +@brief returns any document from a collection +`collection.any()` + +Returns a random document from the collection or *null* if none exists. + !SUBSECTION Count -@startDocuBlock collectionCount + + +@brief counts the number of documents in a result set +`collection.count()` + +Returns the number of living documents in the collection. + +@EXAMPLES + +@startDocuBlockInline collectionCount +@EXAMPLE_ARANGOSH_OUTPUT{collectionCount} +@endDocuBlock collectionCount +~ db._create("users"); + db.users.count(); +~ db._drop("users"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION toArray -@startDocuBlock collectionToArray + + +@brief converts collection into an array +`collection.toArray()` + +Converts the collection into an array of documents. Never use this call +in a production environment. + !SUBSECTION Document -@startDocuBlock documentsCollectionName + + +@brief looks up a document +`collection.document(document)` + +The *document* method finds a document given its identifier or a document +object containing the *_id* or *_key* attribute. The method returns +the document if it can be found. + +An error is thrown if *_rev* is specified but the document found has a +different revision already. An error is also thrown if no document exists +with the given *_id* or *_key* value. + +Please note that if the method is executed on the arangod server (e.g. from +inside a Foxx application), an immutable document object will be returned +for performance reasons. It is not possible to change attributes of this +immutable object. To update or patch the returned document, it needs to be +cloned/copied into a regular JavaScript object first. This is not necessary +if the *document* method is called from out of arangosh or from any other +client. + +`collection.document(document-handle)` + +As before. Instead of document a *document-handle* can be passed as +first argument. 
+ +*Examples* + +Returns the document for a document-handle: + +@startDocuBlockInline documentsCollectionName +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionName} +@endDocuBlock documentsCollectionName +~ db._create("example"); +~ var myid = db.example.insert({_key: "2873916"}); + db.example.document("example/2873916"); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +An error is raised if the document is unknown: + +@startDocuBlockInline documentsCollectionNameUnknown +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionNameUnknown} +@endDocuBlock documentsCollectionNameUnknown +~ db._create("example"); +~ var myid = db.example.insert({_key: "2873916"}); +| db.example.document("example/4472917"); +~ // xpError(ERROR_ARANGO_DOCUMENT_NOT_FOUND) +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +An error is raised if the handle is invalid: + +@startDocuBlockInline documentsCollectionNameHandle +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionNameHandle} +@endDocuBlock documentsCollectionNameHandle +~ db._create("example"); + db.example.document(""); // xpError(ERROR_ARANGO_DOCUMENT_HANDLE_BAD) +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION Exists -@startDocuBlock documentsCollectionExists + + +@brief checks whether a document exists +`collection.exists(document)` + +The *exists* method determines whether a document exists given its +identifier. Instead of returning the found document or an error, this +method will return either *true* or *false*. It can thus be used +for easy existence checks. + +The *document* method finds a document given its identifier. It returns +the document. Note that the returned document contains two +pseudo-attributes, namely *_id* and *_rev*. *_id* contains the +document-handle and *_rev* the revision of the document. + +No error will be thrown if the sought document or collection does not +exist. +Still this method will throw an error if used improperly, e.g. when called +with a non-document handle, a non-document, or when a cross-collection +request is performed. + +`collection.exists(document-handle)` + +As before. Instead of document a *document-handle* can be passed as +first argument. + + !SUBSECTION Lookup By Keys -@startDocuBlock collectionLookupByKeys + + +@brief fetches multiple documents by their keys +`collection.documents(keys)` + +Looks up the documents in the specified collection using the array of keys +provided. All documents for which a matching key was specified in the *keys* +array and that exist in the collection will be returned. +Keys for which no document can be found in the underlying collection are ignored, +and no exception will be thrown for them. + +@EXAMPLES + +@startDocuBlockInline collectionLookupByKeys +@EXAMPLE_ARANGOSH_OUTPUT{collectionLookupByKeys} +@endDocuBlock collectionLookupByKeys +~ db._drop("example"); +~ db._create("example"); + keys = [ ]; +| for (var i = 0; i < 10; ++i) { +| db.example.insert({ _key: "test" + i, value: i }); +| keys.push("test" + i); + } + db.example.documents(keys); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + !SUBSECTION Insert -@startDocuBlock documentsCollectionInsert + + +@brief insert a new document +`collection.insert(data)` + +Creates a new document in the *collection* from the given *data*. The +*data* must be an object. + +The method returns a document with the attributes *_id* and *_rev*. +The attribute *_id* contains the document handle of the newly created +document, the attribute *_rev* contains the document revision. 
+ +`collection.insert(data, waitForSync)` + +Creates a new document in the *collection* from the given *data* as +above. The optional *waitForSync* parameter can be used to force +synchronization of the document creation operation to disk even in case +that the *waitForSync* flag had been disabled for the entire collection. +Thus, the *waitForSync* parameter can be used to force synchronization +of just specific operations. To use this, set the *waitForSync* parameter +to *true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +Note: since ArangoDB 2.2, *insert* is an alias for *save*. + +@EXAMPLES + +@startDocuBlockInline documentsCollectionInsert +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionInsert} +@endDocuBlock documentsCollectionInsert +~ db._create("example"); + db.example.insert({ Hello : "World" }); + db.example.insert({ Hello : "World" }, true); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION Replace -@startDocuBlock documentsCollectionReplace + + +@brief replaces a document +`collection.replace(document, data)` + +Replaces an existing *document*. The *document* must be a document in +the current collection. This document is then replaced with the +*data* given as second argument. + +The method returns a document with the attributes *_id*, *_rev* and +*{_oldRev*. The attribute *_id* contains the document handle of the +updated document, the attribute *_rev* contains the document revision of +the updated document, the attribute *_oldRev* contains the revision of +the old (now replaced) document. + +If there is a conflict, i. e. if the revision of the *document* does not +match the revision in the collection, then an error is thrown. + +`collection.replace(document, data, true)` or +`collection.replace(document, data, overwrite: true)` + +As before, but in case of a conflict, the conflict is ignored and the old +document is overwritten. + +`collection.replace(document, data, true, waitForSync)` or +`collection.replace(document, data, overwrite: true, waitForSync: true or false)` + +The optional *waitForSync* parameter can be used to force +synchronization of the document replacement operation to disk even in case +that the *waitForSync* flag had been disabled for the entire collection. +Thus, the *waitForSync* parameter can be used to force synchronization +of just specific operations. To use this, set the *waitForSync* parameter +to *true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +`collection.replace(document-handle, data)` + +As before. Instead of document a *document-handle* can be passed as +first argument. 
+ +@EXAMPLES + +Create and update a document: + +@startDocuBlockInline documentsCollectionReplace +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionReplace} +@endDocuBlock documentsCollectionReplace +~ db._create("example"); + a1 = db.example.insert({ a : 1 }); + a2 = db.example.replace(a1, { a : 2 }); + a3 = db.example.replace(a1, { a : 3 }); // xpError(ERROR_ARANGO_CONFLICT); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +Use a document handle: + +@startDocuBlockInline documentsCollectionReplaceHandle +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionReplaceHandle} +@endDocuBlock documentsCollectionReplaceHandle +~ db._create("example"); +~ var myid = db.example.insert({_key: "3903044"}); + a1 = db.example.insert({ a : 1 }); + a2 = db.example.replace("example/3903044", { a : 2 }); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION Update -@startDocuBlock documentsCollectionUpdate + + +@brief updates a document +`collection.update(document, data, overwrite, keepNull, waitForSync)` or +`collection.update(document, data, +overwrite: true or false, keepNull: true or false, waitForSync: true or false)` + +Updates an existing *document*. The *document* must be a document in +the current collection. This document is then patched with the +*data* given as second argument. The optional *overwrite* parameter can +be used to control the behavior in case of version conflicts (see below). +The optional *keepNull* parameter can be used to modify the behavior when +handling *null* values. Normally, *null* values are stored in the +database. By setting the *keepNull* parameter to *false*, this behavior +can be changed so that all attributes in *data* with *null* values will +be removed from the target document. + +The optional *waitForSync* parameter can be used to force +synchronization of the document update operation to disk even in case +that the *waitForSync* flag had been disabled for the entire collection. +Thus, the *waitForSync* parameter can be used to force synchronization +of just specific operations. To use this, set the *waitForSync* parameter +to *true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +The method returns a document with the attributes *_id*, *_rev* and +*_oldRev*. The attribute *_id* contains the document handle of the +updated document, the attribute *_rev* contains the document revision of +the updated document, the attribute *_oldRev* contains the revision of +the old (now replaced) document. + +If there is a conflict, i. e. if the revision of the *document* does not +match the revision in the collection, then an error is thrown. + +`collection.update(document, data, true)` + +As before, but in case of a conflict, the conflict is ignored and the old +document is overwritten. + +collection.update(document-handle, data)` + +As before. Instead of document a document-handle can be passed as +first argument. 
+ +*Examples* + +Create and update a document: + +@startDocuBlockInline documentsCollectionUpdate +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionUpdate} +@endDocuBlock documentsCollectionUpdate +~ db._create("example"); + a1 = db.example.insert({"a" : 1}); + a2 = db.example.update(a1, {"b" : 2, "c" : 3}); + a3 = db.example.update(a1, {"d" : 4}); // xpError(ERROR_ARANGO_CONFLICT); + a4 = db.example.update(a2, {"e" : 5, "f" : 6 }); + db.example.document(a4); + a5 = db.example.update(a4, {"a" : 1, c : 9, e : 42 }); + db.example.document(a5); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +Use a document handle: + +@startDocuBlockInline documentsCollectionUpdateHandle +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionUpdateHandle} +@endDocuBlock documentsCollectionUpdateHandle +~ db._create("example"); +~ var myid = db.example.insert({_key: "18612115"}); + a1 = db.example.insert({"a" : 1}); + a2 = db.example.update("example/18612115", { "x" : 1, "y" : 2 }); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +Use the keepNull parameter to remove attributes with null values: + +@startDocuBlockInline documentsCollectionUpdateHandleKeepNull +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionUpdateHandleKeepNull} +@endDocuBlock documentsCollectionUpdateHandleKeepNull +~ db._create("example"); +~ var myid = db.example.insert({_key: "19988371"}); + db.example.insert({"a" : 1}); +|db.example.update("example/19988371", + { "b" : null, "c" : null, "d" : 3 }); + db.example.document("example/19988371"); + db.example.update("example/19988371", { "a" : null }, false, false); + db.example.document("example/19988371"); +| db.example.update("example/19988371", + { "b" : null, "c": null, "d" : null }, false, false); + db.example.document("example/19988371"); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +Patching array values: + +@startDocuBlockInline documentsCollectionUpdateHandleArray +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionUpdateHandleArray} +@endDocuBlock documentsCollectionUpdateHandleArray +~ db._create("example"); +~ var myid = db.example.insert({_key: "20774803"}); +| db.example.insert({"a" : { "one" : 1, "two" : 2, "three" : 3 }, + "b" : { }}); +| db.example.update("example/20774803", {"a" : { "four" : 4 }, + "b" : { "b1" : 1 }}); + db.example.document("example/20774803"); +| db.example.update("example/20774803", { "a" : { "one" : null }, +| "b" : null }, + false, false); + db.example.document("example/20774803"); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION Remove -@startDocuBlock documentsCollectionRemove + + +@brief removes a document +`collection.remove(document)` + +Removes a document. If there is revision mismatch, then an error is thrown. + +`collection.remove(document, true)` + +Removes a document. If there is revision mismatch, then mismatch is ignored +and document is deleted. The function returns *true* if the document +existed and was deleted. It returns *false*, if the document was already +deleted. + +`collection.remove(document, true, waitForSync)` + +The optional *waitForSync* parameter can be used to force synchronization +of the document deletion operation to disk even in case that the +*waitForSync* flag had been disabled for the entire collection. Thus, +the *waitForSync* parameter can be used to force synchronization of just +specific operations. To use this, set the *waitForSync* parameter to +*true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. 
The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +`collection.remove(document-handle, data)` + +As before. Instead of document a *document-handle* can be passed as +first argument. + +@EXAMPLES + +Remove a document: + +@startDocuBlockInline documentDocumentRemove +@EXAMPLE_ARANGOSH_OUTPUT{documentDocumentRemove} +@endDocuBlock documentDocumentRemove +~ db._create("example"); + a1 = db.example.insert({ a : 1 }); + db.example.document(a1); + db.example.remove(a1); + db.example.document(a1); // xpError(ERROR_ARANGO_DOCUMENT_NOT_FOUND); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +Remove a document with a conflict: + +@startDocuBlockInline documentDocumentRemoveConflict +@EXAMPLE_ARANGOSH_OUTPUT{documentDocumentRemoveConflict} +@endDocuBlock documentDocumentRemoveConflict +~ db._create("example"); + a1 = db.example.insert({ a : 1 }); + a2 = db.example.replace(a1, { a : 2 }); + db.example.remove(a1); // xpError(ERROR_ARANGO_CONFLICT); + db.example.remove(a1, true); + db.example.document(a1); // xpError(ERROR_ARANGO_DOCUMENT_NOT_FOUND); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION Remove By Keys -@startDocuBlock collectionRemoveByKeys + + +@brief removes multiple documents by their keys +`collection.removeByKeys(keys)` + +Looks up the documents in the specified collection using the array of keys +provided, and removes all documents from the collection whose keys are +contained in the *keys* array. Keys for which no document can be found in +the underlying collection are ignored, and no exception will be thrown for +them. + +The method will return an object containing the number of removed documents +in the *removed* sub-attribute, and the number of not-removed/ignored +documents in the *ignored* sub-attribute. + +@EXAMPLES + +@startDocuBlockInline collectionRemoveByKeys +@EXAMPLE_ARANGOSH_OUTPUT{collectionRemoveByKeys} +@endDocuBlock collectionRemoveByKeys +~ db._drop("example"); +~ db._create("example"); + keys = [ ]; +| for (var i = 0; i < 10; ++i) { +| db.example.insert({ _key: "test" + i, value: i }); +| keys.push("test" + i); + } + db.example.removeByKeys(keys); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + !SUBSECTION Remove By Example -@startDocuBlock documentsCollectionRemoveByExample + + +@brief removes documents matching an example +`collection.removeByExample(example)` + +Removes all documents matching an example. + +`collection.removeByExample(document, waitForSync)` + +The optional *waitForSync* parameter can be used to force synchronization +of the document deletion operation to disk even in case that the +*waitForSync* flag had been disabled for the entire collection. Thus, +the *waitForSync* parameter can be used to force synchronization of just +specific operations. To use this, set the *waitForSync* parameter to +*true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +`collection.removeByExample(document, waitForSync, limit)` + +The optional *limit* parameter can be used to restrict the number of +removals to the specified value. If *limit* is specified but less than the +number of documents in the collection, it is undefined which documents are +removed. 
+ +@EXAMPLES + +@startDocuBlockInline 010_documentsCollectionRemoveByExample +@EXAMPLE_ARANGOSH_OUTPUT{010_documentsCollectionRemoveByExample} +@endDocuBlock 010_documentsCollectionRemoveByExample +~ db._create("example"); +~ db.example.save({ Hello : "world" }); + db.example.removeByExample( {Hello : "world"} ); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + !SUBSECTION Replace By Example -@startDocuBlock documentsCollectionReplaceByExample + + +@brief replaces documents matching an example +`collection.replaceByExample(example, newValue)` + +Replaces all documents matching an example with a new document body. +The entire document body of each document matching the *example* will be +replaced with *newValue*. The document meta-attributes such as *_id*, +*_key*, *_from*, *_to* will not be replaced. + +`collection.replaceByExample(document, newValue, waitForSync)` + +The optional *waitForSync* parameter can be used to force synchronization +of the document replacement operation to disk even in case that the +*waitForSync* flag had been disabled for the entire collection. Thus, +the *waitForSync* parameter can be used to force synchronization of just +specific operations. To use this, set the *waitForSync* parameter to +*true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +`collection.replaceByExample(document, newValue, waitForSync, limit)` + +The optional *limit* parameter can be used to restrict the number of +replacements to the specified value. If *limit* is specified but less than +the number of documents in the collection, it is undefined which documents are +replaced. + +@EXAMPLES + +@startDocuBlockInline 011_documentsCollectionReplaceByExample +@EXAMPLE_ARANGOSH_OUTPUT{011_documentsCollectionReplaceByExample} +@endDocuBlock 011_documentsCollectionReplaceByExample +~ db._create("example"); + db.example.save({ Hello : "world" }); + db.example.replaceByExample({ Hello: "world" }, {Hello: "mars"}, false, 5); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + !SUBSECTION Update By Example -@startDocuBlock documentsCollectionUpdateByExample + + +@brief partially updates documents matching an example +`collection.updateByExample(example, newValue)` + +Partially updates all documents matching an example with a new document body. +Specific attributes in the document body of each document matching the +*example* will be updated with the values from *newValue*. +The document meta-attributes such as *_id*, *_key*, *_from*, +*_to* cannot be updated. + +Partial update could also be used to append new fields, +if there were no old field with same name. + +`collection.updateByExample(document, newValue, keepNull, waitForSync)` + +The optional *keepNull* parameter can be used to modify the behavior when +handling *null* values. Normally, *null* values are stored in the +database. By setting the *keepNull* parameter to *false*, this behavior +can be changed so that all attributes in *data* with *null* values will +be removed from the target document. + +The optional *waitForSync* parameter can be used to force synchronization +of the document replacement operation to disk even in case that the +*waitForSync* flag had been disabled for the entire collection. Thus, +the *waitForSync* parameter can be used to force synchronization of just +specific operations. 
To use this, set the *waitForSync* parameter to +*true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +`collection.updateByExample(document, newValue, keepNull, waitForSync, limit)` + +The optional *limit* parameter can be used to restrict the number of +updates to the specified value. If *limit* is specified but less than +the number of documents in the collection, it is undefined which documents are +updated. + +`collection.updateByExample(document, newValue, options)` + +Using this variant, the options for the operation can be passed using +an object with the following sub-attributes: +- *keepNull* +- *waitForSync* +- *limit* +- *mergeObjects* + +@EXAMPLES + +@startDocuBlockInline 012_documentsCollectionUpdateByExample +@EXAMPLE_ARANGOSH_OUTPUT{012_documentsCollectionUpdateByExample} +@endDocuBlock 012_documentsCollectionUpdateByExample +~ db._create("example"); + db.example.save({ Hello : "world", foo : "bar" }); + db.example.updateByExample({ Hello: "world" }, { Hello: "foo", World: "bar" }, false); + db.example.byExample({ Hello: "foo" }).toArray() +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + !SUBSECTION First -@startDocuBlock documentsCollectionFirst + + +@brief selects the n first documents in the collection +`collection.first(count)` + +The *first* method returns the n first documents from the collection, in +order of document insertion/update time. + +If called with the *count* argument, the result is a list of up to +*count* documents. If *count* is bigger than the number of documents +in the collection, then the result will contain as many documents as there +are in the collection. +The result list is ordered, with the "oldest" documents being positioned at +the beginning of the result list. + +When called without an argument, the result is the first document from the +collection. If the collection does not contain any documents, the result +returned is *null*. + +**Note**: this method is not supported in sharded collections with more than +one shard. + +@EXAMPLES + +@startDocuBlockInline documentsCollectionFirst +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionFirst} +@endDocuBlock documentsCollectionFirst +~ db._create("example"); +~ db.example.save({ Hello : "world" }); +~ db.example.save({ Foo : "bar" }); + db.example.first(1); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +@startDocuBlockInline documentsCollectionFirstNull +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionFirstNull} +@endDocuBlock documentsCollectionFirstNull +~ db._create("example"); +~ db.example.save({ Hello : "world" }); + db.example.first(); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + !SUBSECTION Last -@startDocuBlock documentsCollectionLast + + +@brief selects the n last documents in the collection +`collection.last(count)` + +The *last* method returns the n last documents from the collection, in +order of document insertion/update time. + +If called with the *count* argument, the result is a list of up to +*count* documents. If *count* is bigger than the number of documents +in the collection, then the result will contain as many documents as there +are in the collection. +The result list is ordered, with the "latest" documents being positioned at +the beginning of the result list. 
+ +When called without an argument, the result is the last document from the +collection. If the collection does not contain any documents, the result +returned is *null*. + +**Note**: this method is not supported in sharded collections with more than +one shard. + +@EXAMPLES + +@startDocuBlockInline documentsCollectionLast +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionLast} +@endDocuBlock documentsCollectionLast +~ db._create("example"); +~ db.example.save({ Hello : "world" }); +~ db.example.save({ Foo : "bar" }); + db.example.last(2); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + +@startDocuBlockInline documentsCollectionLastNull +@EXAMPLE_ARANGOSH_OUTPUT{documentsCollectionLastNull} +@endDocuBlock documentsCollectionLastNull +~ db._create("example"); +~ db.example.save({ Hello : "world" }); + db.example.last(1); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + !SUBSECTION Collection type -@startDocuBlock collectionType + + +@brief returns the type of a collection +`collection.type()` + +Returns the type of a collection. Possible values are: +- 2: document collection +- 3: edge collection + !SUBSECTION Get the Version of ArangoDB -@startDocuBlock databaseVersion + + +@brief return the server version string +`db._version()` + +Returns the server version string. Note that this is not the version of the +database. + +@EXAMPLES + +@startDocuBlockInline dbVersion +@EXAMPLE_ARANGOSH_OUTPUT{dbVersion} +@endDocuBlock dbVersion + require("internal").db._version(); +@END_EXAMPLE_ARANGOSH_OUTPUT + !SUBSECTION Misc -@startDocuBlock collectionEdgesAll -@startDocuBlock collectionEdgesInbound -@startDocuBlock collectionEdgesOutbound -@startDocuBlock collectionIterate -@startDocuBlock edgeSetProperty + + +@brief returns all edges connected to a vertex +`collection.edges(vertex-id)` + +Returns all edges connected to the vertex specified by *vertex-id*. + + + +@brief returns inbound edges connected to a vertex +`collection.inEdges(vertex-id)` + +Returns inbound edges connected to the vertex specified by *vertex-id*. + + + +@brief returns outbound edges connected to a vertex +`collection.outEdges(vertex-id)` + +Returns outbound edges connected to the vertex specified by *vertex-id*. + + + +@brief iterates over some elements of a collection +`collection.iterate(iterator, options)` + +Iterates over some elements of the collection and apply the function +*iterator* to the elements. The function will be called with the +document as first argument and the current number (starting with 0) +as second argument. + +*options* must be an object with the following attributes: + +- *limit* (optional, default none): use at most *limit* documents. + +- *probability* (optional, default all): a number between *0* and + *1*. Documents are chosen with this probability. + +@EXAMPLES + +@startDocuBlockInline accessViaGeoIndex +@EXAMPLE_ARANGOSH_OUTPUT{accessViaGeoIndex} +@endDocuBlock accessViaGeoIndex +~db._create("example") +|for (i = -90; i <= 90; i += 10) { +| for (j = -180; j <= 180; j += 10) { +| db.example.save({ name : "Name/" + i + "/" + j, +| home : [ i, j ], +| work : [ -i, -j ] }); +| } +|} + + db.example.ensureIndex({ type: "geo", fields: [ "home" ] }); + |items = db.example.getIndexes().map(function(x) { return x.id; }); + db.example.index(items[1]); +~ db._drop("example"); +@END_EXAMPLE_ARANGOSH_OUTPUT + + + + + +`edge.setProperty(name, value)` + +Changes or sets the property *name* an *edges* to *value*. 
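+
+As a usage sketch for the *iterate* method described above (the collection
+name *example*, the callback and the option values are illustrative
+assumptions, not fixed parts of the API):
+
+```js
+// create a small collection to iterate over
+db._create("example");
+for (var i = 0; i < 100; ++i) {
+  db.example.insert({ value: i });
+}
+// visit at most 5 documents, each chosen with a probability of 10 %
+db.example.iterate(function (doc, n) {
+  print(n + ": " + doc.value);   // n starts at 0, doc is the current document
+}, { limit: 5, probability: 0.1 });
+db._drop("example");
+```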
+ + diff --git a/Documentation/Books/Users/Edges/README.mdpp b/Documentation/Books/Users/Edges/README.mdpp index 31d92f0ee1..c11faafa1b 100644 --- a/Documentation/Books/Users/Edges/README.mdpp +++ b/Documentation/Books/Users/Edges/README.mdpp @@ -35,16 +35,143 @@ Other fields can be updated as in default collection. !SUBSECTION Insert -@startDocuBlock InsertEdgeCol + + +@brief saves a new edge document +`edge-collection.insert(from, to, document)` + +Saves a new edge and returns the document-handle. *from* and *to* +must be documents or document references. + +`edge-collection.insert(from, to, document, waitForSync)` + +The optional *waitForSync* parameter can be used to force +synchronization of the document creation operation to disk even in case +that the *waitForSync* flag had been disabled for the entire collection. +Thus, the *waitForSync* parameter can be used to force synchronization +of just specific operations. To use this, set the *waitForSync* parameter +to *true*. If the *waitForSync* parameter is not specified or set to +*false*, then the collection's default *waitForSync* behavior is +applied. The *waitForSync* parameter cannot be used to disable +synchronization for collections that have a default *waitForSync* value +of *true*. + +@EXAMPLES + + @startDocuBlockInline EDGCOL_01_SaveEdgeCol + @EXAMPLE_ARANGOSH_OUTPUT{EDGCOL_01_SaveEdgeCol} + db._create("vertex"); + db._createEdgeCollection("relation"); + v1 = db.vertex.insert({ name : "vertex 1" }); + v2 = db.vertex.insert({ name : "vertex 2" }); + e1 = db.relation.insert(v1, v2, { label : "knows" }); + db._document(e1); + ~ db._drop("relation"); + ~ db._drop("vertex"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock EDGCOL_01_SaveEdgeCol + + !SUBSECTION Edges -@startDocuBlock edgeCollectionEdges + + +@brief selects all edges for a set of vertices +`edge-collection.edges(vertex)` + +The *edges* operator finds all edges starting from (outbound) or ending +in (inbound) *vertex*. + +`edge-collection.edges(vertices)` + +The *edges* operator finds all edges starting from (outbound) or ending +in (inbound) a document from *vertices*, which must a list of documents +or document handles. + + @startDocuBlockInline EDGCOL_02_Relation + @EXAMPLE_ARANGOSH_OUTPUT{EDGCOL_02_Relation} + db._create("vertex"); + db._createEdgeCollection("relation"); + ~ var myGraph = {}; + myGraph.v1 = db.vertex.insert({ name : "vertex 1" }); + myGraph.v2 = db.vertex.insert({ name : "vertex 2" }); + | myGraph.e1 = db.relation.insert(myGraph.v1, myGraph.v2, + { label : "knows"}); + db._document(myGraph.e1); + db.relation.edges(myGraph.e1._id); + ~ db._drop("relation"); + ~ db._drop("vertex"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock EDGCOL_02_Relation + + !SUBSECTION InEdges -@startDocuBlock edgeCollectionInEdges + + +@brief selects all inbound edges +`edge-collection.inEdges(vertex)` + +The *edges* operator finds all edges ending in (inbound) *vertex*. + +`edge-collection.inEdges(vertices)` + +The *edges* operator finds all edges ending in (inbound) a document from +*vertices*, which must a list of documents or document handles. 
+ + @EXAMPLES + @startDocuBlockInline EDGCOL_02_inEdges + @EXAMPLE_ARANGOSH_OUTPUT{EDGCOL_02_inEdges} + db._create("vertex"); + db._createEdgeCollection("relation"); + ~ var myGraph = {}; + myGraph.v1 = db.vertex.insert({ name : "vertex 1" }); + myGraph.v2 = db.vertex.insert({ name : "vertex 2" }); + | myGraph.e1 = db.relation.insert(myGraph.v1, myGraph.v2, + { label : "knows"}); + db._document(myGraph.e1); + db.relation.inEdges(myGraph.v1._id); + db.relation.inEdges(myGraph.v2._id); + ~ db._drop("relation"); + ~ db._drop("vertex"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock EDGCOL_02_inEdges + + !SUBSECTION OutEdges -@startDocuBlock edgeCollectionOutEdges + + +@brief selects all outbound edges +`edge-collection.outEdges(vertex)` + +The *edges* operator finds all edges starting from (outbound) +*vertices*. + +`edge-collection.outEdges(vertices)` + +The *edges* operator finds all edges starting from (outbound) a document +from *vertices*, which must a list of documents or document handles. + + @EXAMPLES + @startDocuBlockInline EDGCOL_02_outEdges + @EXAMPLE_ARANGOSH_OUTPUT{EDGCOL_02_outEdges} + db._create("vertex"); + db._createEdgeCollection("relation"); + ~ var myGraph = {}; + myGraph.v1 = db.vertex.insert({ name : "vertex 1" }); + myGraph.v2 = db.vertex.insert({ name : "vertex 2" }); + | myGraph.e1 = db.relation.insert(myGraph.v1, myGraph.v2, + { label : "knows"}); + db._document(myGraph.e1); + db.relation.outEdges(myGraph.v1._id); + db.relation.outEdges(myGraph.v2._id); + ~ db._drop("relation"); + ~ db._drop("vertex"); + @END_EXAMPLE_ARANGOSH_OUTPUT + @endDocuBlock EDGCOL_02_outEdges + + diff --git a/Documentation/Books/Users/Foxx/Develop/ApiDocumentation.mdpp b/Documentation/Books/Users/Foxx/Develop/ApiDocumentation.mdpp index 2173d423e1..ec75c8cf38 100644 --- a/Documentation/Books/Users/Foxx/Develop/ApiDocumentation.mdpp +++ b/Documentation/Books/Users/Foxx/Develop/ApiDocumentation.mdpp @@ -6,4 +6,65 @@ give them access to the admin frontend or any other parts of ArangoDB. !SECTION Mounting the API documentation -@startDocuBlock JSF_foxx_controller_apiDocumentation \ No newline at end of file + + + +`Controller.apiDocumentation(path, [opts])` + +Mounts the API documentation (Swagger) at the given `path`. + +Note that the `path` can use path parameters as usual but must not use any +wildcard (`*`) or optional (`:name?`) parameters. + +The optional **opts** can be an object with any of the following properties: + +* **before**: a function that will be executed before a request to + this endpoint is processed further. +* **appPath**: the mount point of the app for which documentation will be + shown. Default: the mount point of the active app. +* **indexFile**: file path or file name of the Swagger HTML file. + Default: `"index.html"`. +* **swaggerJson**: file path or file name of the Swagger API description JSON + file or a function `swaggerJson(req, res, opts)` that sends a Swagger API + description in JSON. Default: the built-in Swagger description generator. +* **swaggerRoot**: absolute path that will be used as the path path for any + relative paths of the documentation assets, **swaggerJson** file and + the **indexFile**. Default: the built-in Swagger distribution. + +If **opts** is a function, it will be used as the value of **opts.before**. + +If **opts.before** returns `false`, the request will not be processed +further. + +If **opts.before** returns an object, any properties will override the +equivalent properties of **opts** for the current request. 
+ +Of course all **before**, **after** or **around** functions defined on the +controller will also be executed as usual. + +**Examples** + +```js +controller.apiDocumentation('/my/dox'); + +``` + +A request to `/my/dox` will be redirect to `/my/dox/index.html`, +which will show the API documentation of the active app. + +```js +controller.apiDocumentation('/my/dox', function (req, res) { + if (!req.session.get('uid')) { + res.status(403); + res.json({error: 'only logged in users may see the API'}); + return false; + } + return {appPath: req.parameters.mount}; +}); +``` + +A request to `/my/dox/index.html?mount=/_admin/aardvark` will show the +API documentation of the admin frontend (mounted at `/_admin/aardvark`). +If the user is not logged in, the error message will be shown instead. + + diff --git a/Documentation/Books/Users/Foxx/Develop/Controller.mdpp b/Documentation/Books/Users/Foxx/Develop/Controller.mdpp index 51472669c4..65bf277bf3 100644 --- a/Documentation/Books/Users/Foxx/Develop/Controller.mdpp +++ b/Documentation/Books/Users/Foxx/Develop/Controller.mdpp @@ -2,33 +2,184 @@ !SUBSECTION Create -@startDocuBlock JSF_foxx_controller_initializer + + + +`new Controller(applicationContext, options)` + +This creates a new Controller. The first argument is the controller +context available in the variable *applicationContext*. The second one is an +options array with the following attributes: + +* *urlPrefix*: All routes you define within will be prefixed with it. + +@EXAMPLES + +```js +app = new Controller(applicationContext, { + urlPrefix: "/meadow" +}); +``` + + !SECTION HTTP Methods !SUBSECTION get -@startDocuBlock JSF_foxx_controller_get + + + +`Controller.get(path, callback)` + +Defines a new route on `path` that handles requests from the HTTP verb `get`. +This route can also be 'parameterized' like `/goose/:barn`. +In this case you can later get the value the user provided for `barn` +via the `params` function in the `request`. +The function defined in `callback` will be invoked whenever this type of +request is recieved. +`callback` get's two arguments `request` and `response`, see below for further +information about these objects. + +@EXAMPLES + +```js +app.get('/goose/barn', function (req, res) { + // Take this request and deal with it! +}); +``` + + !SUBSECTION head -@startDocuBlock JSF_foxx_controller_head + + + +`Controller.head(path, callback)` + +Defines a new route on `path` that handles requests from the HTTP verb `head`. +This route can also be 'parameterized' like `/goose/:barn`. +In this case you can later get the value the user provided for `barn` +via the `params` function in the `request`. +The function defined in `callback` will be invoked whenever this type of +request is recieved. +`callback` get's two arguments `request` and `response`, see below for further +information about these objects. + +@EXAMPLES + +```js +app.head('/goose/barn', function (req, res) { + // Take this request and deal with it! +}); +``` + + !SUBSECTION post -@startDocuBlock JSF_foxx_controller_post + + + +`Controller.post(path, callback)` + +Defines a new route on `path` that handles requests from the HTTP verb `post`. +This route can also be 'parameterized' like `/goose/:barn`. +In this case you can later get the value the user provided for `barn` +via the `params` function in the `request`. +The function defined in `callback` will be invoked whenever this type of +request is recieved. 
+`callback` get's two arguments `request` and `response`, see below for further +information about these objects. + +@EXAMPLES + +```js +app.post('/goose/barn', function (req, res) { + // Take this request and deal with it! +}); +``` + + !SUBSECTION put -@startDocuBlock JSF_foxx_controller_put + + + +`Controller.put(path, callback)` + +Defines a new route on `path` that handles requests from the HTTP verb `put`. +This route can also be 'parameterized' like `/goose/:barn`. +In this case you can later get the value the user provided for `barn` +via the `params` function in the `request`. +The function defined in `callback` will be invoked whenever this type of +request is recieved. +`callback` get's two arguments `request` and `response`, see below for further +information about these objects. + +@EXAMPLES + +```js +app.put('/goose/barn', function (req, res) { + // Take this request and deal with it! +}); +``` + + !SUBSECTION patch -@startDocuBlock JSF_foxx_controller_patch + + + +`Controller.patch(path, callback)` + +Defines a new route on `path` that handles requests from the HTTP verb `patch`. +This route can also be 'parameterized' like `/goose/:barn`. +In this case you can later get the value the user provided for `barn` +via the `params` function in the `request`. +The function defined in `callback` will be invoked whenever this type of +request is recieved. +`callback` get's two arguments `request` and `response`, see below for further +information about these objects. + +@EXAMPLES + +```js +app.patch('/goose/barn', function (req, res) { + // Take this request and deal with it! +}); +``` + + !SUBSECTION delete -@startDocuBlock JSF_foxx_controller_delete + + + +`Controller.delete(path, callback)` + +Defines a new route on `path` that handles requests from the HTTP verb `delete`. +This route can also be 'parameterized' like `/goose/:barn`. +In this case you can later get the value the user provided for `barn` +via the `params` function in the `request`. +The function defined in `callback` will be invoked whenever this type of +request is recieved. +`callback` get's two arguments `request` and `response`, see below for further +information about these objects. + +@EXAMPLES + +```js +app.delete('/goose/barn', function (req, res) { + // Take this request and deal with it! +}); +``` + + !SECTION Documenting and constraining a specific route @@ -56,35 +207,346 @@ API by chaining the following methods onto your path definition: !SUBSECTION pathParam -@startDocuBlock JSF_foxx_RequestContext_pathParam + + + +`Route.pathParam(id, options)` + +If you defined a route "/foxx/:name", containing a parameter called `name` you can +constrain which format this parameter is allowed to have. +This format is defined using *joi* in the `options` parameter. +Using this function will at first allow you to access this parameter in your +route handler using `req.params(id)`, will reject any request having a paramter +that does not match the *joi* definition and creates a documentation for this +parameter in ArangoDBs WebInterface. + +For more information on *joi* see [the official Joi documentation](https://github.com/spumko/joi). + +*Parameter* + +* *id*: name of the param. +* *options*: a joi schema or an object with the following properties: + * *type*: a joi schema. + * *description*: documentation description for the parameter. + * *required* (optional): whether the parameter is required. Default: determined by *type*. 
+
+*Examples*
+
+```js
+app.get("/foxx/:name", function () {
+  // Do something
+}).pathParam("name", joi.string().required().description("Name of the Foxx"));
+```
+
+You can also pass in a configuration object instead:
+
+```js
+app.get("/foxx/:name", function () {
+  // Do something
+}).pathParam("name", {
+  type: joi.string(),
+  required: true,
+  description: "Name of the Foxx"
+});
+```
+
 !SUBSECTION queryParam
-@startDocuBlock JSF_foxx_RequestContext_queryParam
+
+
+
+`Route.queryParam(id, options)`
+
+Describes a query parameter:
+
+If you defined a route "/foxx", you can allow a query parameter with the
+name `id` on it and constrain the format of this parameter by giving it a
+*joi* type in the `options` parameter.
+Using this function allows you to access this parameter in your route
+handler using `req.params(id)`, rejects any request with a parameter
+that does not match the *joi* definition and creates documentation for this
+parameter in ArangoDB's web interface.
+
+For more information on *joi* see [the official Joi documentation](https://github.com/spumko/joi).
+
+You can also provide a description of this parameter and specify whether
+the parameter may be provided multiple times.
+
+*Parameter*
+
+* *id*: name of the parameter
+* *options*: a joi schema or an object with the following properties:
+  * *type*: a joi schema
+  * *description*: documentation description for this parameter.
+  * *required* (optional): whether the parameter is required. Default: determined by *type*.
+  * *allowMultiple* (optional): whether the parameter can be specified more than once. Default: `false`.
+
+*Examples*
+
+```js
+app.get("/foxx", function () {
+  // Do something
+}).queryParam("id",
+  joi.string()
+    .required()
+    .description("Id of the Foxx")
+    .meta({allowMultiple: false})
+);
+```
+
+You can also pass in a configuration object instead:
+
+```js
+app.get("/foxx", function () {
+  // Do something
+}).queryParam("id", {
+  type: joi.string().required().description("Id of the Foxx"),
+  allowMultiple: false
+});
+```
+
 !SUBSECTION bodyParam
-@startDocuBlock JSF_foxx_RequestContext_bodyParam
+
+
+
+`Route.bodyParam(paramName, options)`
+
+Defines that this route expects a JSON body when requested and binds it to
+a pseudo parameter with the name `paramName`.
+The body can then be read in the handler using `req.params(paramName)` on the request object.
+In the `options` parameter you can define what a valid request body should look like.
+This definition can be done in two ways: either use *joi* directly, in which
+case accessing the body will give you a JSON object, or use a Foxx *Model*,
+in which case accessing the body will give you an instance of this Model.
+In both cases an entry for the body will be added to the documentation in
+ArangoDB's web interface.
+For information about how to annotate your models, see the Model section.
+All requests sending a body that does not match the validation given this way
+will automatically be rejected.
+
+You can also wrap the definition in an array; in this case the route
+expects the body to be an array containing arbitrarily many valid objects.
+Accessing the body parameter will then of course return an array of objects.
+
+Note: The behavior of `bodyParam` changes depending on the `rootElement` option
+set in the [manifest](../Develop/Manifest.md). If it is set to `true`, it is
+expected that the body is an
+object with a key of the same name as the `paramName` argument.
+The value of this object is either a single object or in the case of a multi +element an array of objects. + +*Parameter* + + * *paramName*: name of the body parameter in `req.parameters`. + * *options*: a joi schema or an object with the following properties: + * *description*: the documentation description of the request body. + * *type*: the Foxx model or joi schema to use. + * *allowInvalid* (optional): `true` if validation should be skipped. (Default: `false`) + +*Examples* + +```js +app.post("/foxx", function (req, res) { + var foxxBody = req.parameters.foxxBody; + // Do something with foxxBody +}).bodyParam("foxxBody", { + description: "Body of the Foxx", + type: FoxxBodyModel +}); +``` + +Using a joi schema: + +```js +app.post("/foxx", function (req, res) { + var joiBody = req.parameters.joiBody; + // Do something with the number +}).bodyParam("joiBody", { + type: joi.number().integer().min(5), + description: "A number greater than five", + allowInvalid: false // default +}); +``` + +Shorthand version: + +```js +app.post("/foxx", function (req, res) { + var joiBody = req.parameters.joiBody; + // Do something with the number +}).bodyParam( + "joiBody", + joi.number().integer().min(5) + .description("A number greater than five") + .meta({allowInvalid: false}) // default +); +``` + !SUBSECTION errorResponse -@startDocuBlock JSF_foxx_RequestContext_errorResponse + + + +`Route.errorResponse(errorClassOrName, code, description, [callback])` + +Define a reaction to a thrown error for this route: If your handler throws an error +of the errorClass defined in `errorClassOrName` or the error has an attribute `name` equal to `errorClassOrName`, +it will be caught and the response object will be filled with the given +status code and a JSON with error set to your description as the body. + +If you want more control over the returned JSON, you can give an optional fourth +parameter in form of a function. It gets the error as an argument, the return +value will be transformed into JSON and then be used as the body. +The status code will be used as described above. The description will be used for +the documentation. + +It also adds documentation for this error response to the generated documentation. + +*Examples* + +```js +/* define our own error type, FoxxyError */ +var FoxxyError = function (message) { + this.name = "FError"; + this.message = "the following FoxxyError occurred: " + message; +}; +FoxxyError.prototype = new Error(); + +app.get("/foxx", function { + /* throws a FoxxyError */ + throw new FoxxyError(); +}).errorResponse(FoxxyError, 303, "This went completely wrong. Sorry!"); + +app.get("/foxx", function { + throw new FoxxyError("oops!"); +}).errorResponse("FError", 303, "This went completely wrong. Sorry!", function (e) { + return { + code: 123, + desc: e.message + }; +}); +``` + !SUBSECTION onlyif -@startDocuBlock JSF_foxx_RequestContext_onlyIf + + + +`Route.onlyIf(check)` + +This functionality is used to secure a route by applying a checking function +on the request beforehand, for example the check authorization. +It expects `check` to be a function that takes the request object as first parameter. +This function is executed before the actual handler is invoked. +If `check` throws an error the actual handler will not be invoked. +Remember to provide an `errorResponse` on the route as well to define the behavior in this case. + +*Examples* + +```js +app.get("/foxx", function { + // Do something +}).onlyIf(aFunction).errorResponse(ErrorClass, 303, "This went completely wrong. 
Sorry!"); +``` + !SUBSECTION onlyIfAuthenticated -@startDocuBlock JSF_foxx_RequestContext_onlyIfAuthenticated + + + +`FoxxController#onlyIfAuthenticated(code, reason)` + +Please activate sessions for this app if you want to use this function. +Or activate authentication (deprecated). +If the user is logged in, it will do nothing. Otherwise it will respond with +the status code and the reason you provided (the route handler won't be called). +This will also add the according documentation for this route. + +*Examples* + +```js +app.get("/foxx", function { + // Do something +}).onlyIfAuthenticated(401, "You need to be authenticated"); +``` + !SUBSECTION summary -@startDocuBlock JSF_foxx_RequestContext_summary + + + +`Route.summary(description)` + +Set the summary for this route in the documentation. +Can't be longer than 8192 characters. +This is equal to using JavaDoc style comments right above your function. +If you provide both comment and `summary()` the call to `summary()` wins +and will be used. + +*Examples* + +Version with comment: + +```js +/** Short description + * + * Longer description + * with multiple lines + */ +app.get("/foxx", function() { +}); +``` + +is identical to: + +```js +app.get("/foxx", function() { +}) +.summary("Short description") +.notes(["Longer description", "with multiple lines"]); +``` + + !SUBSECTION notes -@startDocuBlock JSF_foxx_RequestContext_notes + + + +`Route.notes(...description)` + +Set the long description for this route in the documentation + +*Examples* + +Version with comment: + +```js +/** Short description + * + * Longer description + * with multiple lines + */ +app.get("/foxx", function() { +}); +``` + +is identical to: + +```js +app.get("/foxx", function() { +}) +.summary("Short description") +.notes(["Longer description", "with multiple lines"]); +``` + + !SUBSECTION extend @@ -95,7 +557,50 @@ extend the controller with your own functions. These functions can simply combine several of the above on a single name, so you only have to invoke your self defined single function on all routes using these extensions. -@startDocuBlock JSF_foxx_controller_extend + + + +`Controller.extend(extensions)` + +Extends all functions to define routes in this controller. +This allows to combine several route extensions with the invocation +of a single function. +This is especially useful if you use the same input parameter in several routes of +your controller and want to apply the same validation, documentation and error handling +for it. + +The `extensions` parameter is a JSON object with arbitrary keys. +Each key is used as the name of the function you want to define (you cannot overwrite +internal functions like `pathParam`) and the value is a function that will be invoked. +This function can get arbitrary many arguments and the `this` of the function is bound +to the route definition object (e.g. you can use `this.pathParam()`). +Your newly defined function is chainable similar to the internal functions. 
+ +**Examples** + +Define a validator for a queryParameter, including documentation and errorResponses +in a single command: + +```js +controller.extend({ + myParam: function (maxValue) { + this.queryParam("value", {type: joi.number().required()}); + this.onlyIf(function(req) { + var v = req.param("value"); + if (v > maxValue) { + throw new NumberTooLargeError(); + } + }); + this.errorResponse(NumberTooLargeError, 400, "The given value is too large"); + } +}); + +controller.get("/goose/barn", function(req, res) { + // Will only be invoked if the request has parameter value and it is less or equal 5. +}).myParam(5); +``` + + !SECTION Documenting and constraining all routes @@ -129,21 +634,133 @@ ctrl.get('/another/route', function (req, res) { !SUBSECTION errorResponse -@startDocuBlock JSF_foxx_RequestContextBuffer_errorResponse + + + +`Controller.allRoutes.errorResponse(errorClass, code, description)` + +This is equal to invoking `Route.errorResponse` on all routes bound to this controller. + +*Examples* + +```js +app.allRoutes.errorResponse(FoxxyError, 303, "This went completely wrong. Sorry!"); + +app.get("/foxx", function { + // Do something +}); +``` + !SUBSECTION onlyIf -@startDocuBlock JSF_foxx_RequestContextBuffer_onlyIf + + + +`Controller.allRoutes.onlyIf(code, reason)` + +This is equal to invoking `Route.onlyIf` on all routes bound to this controller. + +*Examples* + +```js +app.allRoutes.onlyIf(myPersonalCheck); + +app.get("/foxx", function { + // Do something +}); +``` + !SUBSECTION onlyIfAuthenticated -@startDocuBlock JSF_foxx_RequestContextBuffer_onlyIfAuthenticated + + + +`Controller.allRoutes.onlyIfAuthenticated(code, description)` + +This is equal to invoking `Route.onlyIfAuthenticated` on all routes bound to this controller. + +*Examples* + +```js +app.allRoutes.onlyIfAuthenticated(401, "You need to be authenticated"); + +app.get("/foxx", function { + // Do something +}); +``` + !SUBSECTION pathParam -@startDocuBlock JSF_foxx_RequestContextBuffer_pathParam + + + +`Controller.allRoutes.pathParam(id, options)` + +This is equal to invoking `Route.pathParam` on all routes bound to this controller. + +*Examples* + +```js +app.allRoutes.pathParam("id", joi.string().required().description("Id of the Foxx")); + +app.get("/foxx/:id", function { + // Secured by pathParam +}); +``` + +You can also pass in a configuration object instead: + +```js +app.allRoutes.pathParam("id", { + type: joi.string(), + required: true, + description: "Id of the Foxx" +}); + +app.get("/foxx/:id", function { + // Secured by pathParam +}); +``` + !SUBSECTION bodyParam -@startDocuBlock JSF_foxx_RequestContextBuffer_queryParam + + + +`Controller.allRoutes.queryParam(id, options)` + +This is equal to invoking `Route.queryParam` on all routes bound to this controller. + +*Examples* + +```js +app.allroutes.queryParam("id", + joi.string() + .required() + .description("Id of the Foxx") + .meta({allowMultiple: false}) +}); + +app.get("/foxx", function { + // Do something +}); +``` + +You can also pass in a configuration object instead: + +```js +app.allroutes.queryParam("id", { + type: joi.string().required().description("Id of the Foxx"), + allowMultiple: false +}); + +app.get("/foxx", function { + // Do something +}); +``` + !SECTION Before and After Hooks @@ -155,15 +772,79 @@ example). 
 !SUBSECTION before
-@startDocuBlock JSF_foxx_controller_before
+
+
+
+`Controller.before(path, callback)`
+
+Defines an additional function on the route `path` which will be executed
+before the callback defined for a specific HTTP verb is executed.
+The `callback` function has the same signature as the `callback` in the
+specific route.
+You can also omit the `path`; in this case `callback` will be executed
+before handling any request in this controller.
+
+If `callback` returns the Boolean value `false`, the route handling
+will not proceed. You can use this to intercept invalid or unauthorized
+requests and prevent them from being passed to the matching routes.
+
+@EXAMPLES
+
+```js
+app.before('/high/way', function(req, res) {
+  // Do some crazy request logging
+});
+```
+
+
 !SUBSECTION after
-@startDocuBlock JSF_foxx_controller_after
+
+
+
+`Controller.after(path, callback)`
+
+Similar to `Controller.before(path, callback)`, but `callback` will be invoked
+after the request is handled in the specific route.
+
+@EXAMPLES
+
+```js
+app.after('/high/way', function(req, res) {
+  // Do some crazy response logging
+});
+```
+
+
 !SUBSECTION around
-@startDocuBlock JSF_foxx_controller_around
+
+
+
+`Controller.around(path, callback)`
+
+Similar to `Controller.before(path, callback)`, but `callback` will be invoked
+instead of the specific handler.
+`callback` takes two additional parameters, `opts` and `next`, where
+`opts` contains options assigned to the route and `next` is a function.
+Whenever you call `next` in `callback`, the specific handler is invoked;
+if you do not call `next`, the specific handler will not be invoked at all.
+Using *around* you can thus execute code before and after a specific handler
+and even call the handler only under certain circumstances.
+If you omit `path`, `callback` will be called on every request.
+
+@EXAMPLES
+
+```js
+app.around('/high/way', function(req, res, opts, next) {
+  // Do some crazy request logging
+  next();
+  // Do some more crazy request logging
+});
+```
+
 !SECTION The Request and Response Objects
 
@@ -222,28 +903,99 @@ convenience methods:
 
 !SUBSECTION body
-@startDocuBlock JSF_foxx_BaseMiddleware_request_body
+
+
+
+`request.body()`
+
+Get the JSON-parsed body of the request. If you need the raw version, please
+refer to the *rawBody* function.
+
 !SUBSECTION rawBody
-@startDocuBlock JSF_foxx_BaseMiddleware_request_rawBody
+
+
+
+`request.rawBody()`
+
+The raw request body, not parsed. The body is returned as a UTF-8 string.
+Note that this can only be used sensibly if the request body contains
+valid UTF-8. If the request body is known to contain non-UTF-8 data, the
+request body can be accessed by using `request.rawBodyBuffer`.
+
 !SUBSECTION rawBodyBuffer
-@startDocuBlock JSF_foxx_BaseMiddleware_request_rawBodyBuffer
+
+
+
+`request.rawBodyBuffer()`
+
+The raw request body, returned as a Buffer object.
+
 !SUBSECTION params
-@startDocuBlock JSF_foxx_BaseMiddleware_request_params
+
+
+
+`request.params(key)`
+
+Get the parameters of the request. This process is two-fold:
+
+* If you have defined a URL like */test/:id* and the user requested
+  */test/1*, the call *params("id")* will return *1*.
+* If you have defined a URL like */test* and the user gives a query
+  component, the query parameters will also be returned. So for example if
+  the user requested */test?a=2*, the call *params("a")* will return *2*.
+
 !SUBSECTION cookie
-@startDocuBlock JSF_foxx_BaseMiddleware_request_cookie
+
+
+
+`request.cookie(name, cfg)`
+
+Read a cookie from the request.
Optionally the cookie's signature can be verified. + +*Parameter* + +* *name*: the name of the cookie to read from the request. +* *cfg* (optional): an object with any of the following properties: + * *signed* (optional): an object with any of the following properties: + * *secret*: a secret string that was used to sign the cookie. + * *algorithm*: hashing algorithm that was used to sign the cookie. Default: *"sha256"*. + +If *signed* is a string, it will be used as the *secret* instead. + +If a *secret* is provided, a second cookie with the name *name + ".sig"* will +be read and its value will be verified as the cookie value's signature. + +If the cookie is not set or its signature is invalid, "undefined" will be returned instead. + +@EXAMPLES + +``` +var sid = request.cookie("sid", {signed: "keyboardcat"}); +``` + !SUBSECTION requestParts Only useful for multi-part requests. -@startDocuBlock JSF_foxx_BaseMiddleware_request_requestParts + + + +`request.requestParts()` + +Returns an array containing the individual parts of a multi-part request. +Each part contains a `headers` attribute with all headers of the part, +and a `data` attribute with the content of the part in a Buffer object. +If the request is not a multi-part request, this function will throw an +error. + !SECTION The Response Object @@ -254,27 +1006,144 @@ You provide your response body as a string here. !SUBSECTION Response status -@startDocuBlock JSF_foxx_BaseMiddleware_response_status + + + +`response.status(code)` + +Set the status *code* of your response, for example: + +@EXAMPLES + +``` +response.status(404); +``` + !SUBSECTION Response set -@startDocuBlock JSF_foxx_BaseMiddleware_response_set + + + +`response.set(key, value)` + +Set a header attribute, for example: + +@EXAMPLES + +```js +response.set("Content-Length", 123); +response.set("Content-Type", "text/plain"); +``` + +or alternatively: + +```js +response.set({ + "Content-Length": "123", + "Content-Type": "text/plain" +}); +``` + !SUBSECTION Response json -@startDocuBlock JSF_foxx_BaseMiddleware_response_json + + + +`response.json(object)` + +Set the content type to JSON and the body to the JSON encoded *object* +you provided. + +@EXAMPLES + +```js +response.json({'born': 'December 12, 1915'}); +``` + !SUBSECTION Response cookie -@startDocuBlock JSF_foxx_BaseMiddleware_response_cookie + + + +`response.cookie(name, value, cfg)` + +Add a cookie to the response. Optionally the cookie can be signed. + +*Parameter* + +* *name*: the name of the cookie to add to the response. +* *value*: the value of the cookie to add to the response. +* *cfg* (optional): an object with any of the following properties: + * *ttl* (optional): the number of seconds until this cookie expires. + * *path* (optional): the cookie path. + * *domain* (optional): the cookie domain. + * *secure* (optional): mark the cookie as safe transport (HTTPS) only. + * *httpOnly* (optional): mark the cookie as HTTP(S) only. + * *signed* (optional): an object with any of the following properties: + * *secret*: a secret string to sign the cookie with. + * *algorithm*: hashing algorithm to sign the cookie with. Default: *"sha256"*. + +If *signed* is a string, it will be used as the *secret* instead. + +If a *secret* is provided, a second cookie with the name *name + ".sig"* will +be added to the response, containing the cookie's HMAC signature. 
+ +@EXAMPLES + +``` +response.cookie("sid", "abcdef", {signed: "keyboardcat"}); +``` + !SUBSECTION Response send -@startDocuBlock JSF_foxx_BaseMiddleware_response_send + + + +`response.send(value)` + +Sets the response body to the specified *value*. If *value* is a Buffer +object, the content type will be set to `application/octet-stream` if not +yet set. If *value* is a string, the content type will be set to `text/html` +if not yet set. If *value* is an object, it will be treated as in `res.json`. + +@EXAMPLES + +```js +response.send({"born": "December 12, 1915"}); +response.send(new Buffer("some binary data")); +response.send("