diff --git a/Documentation/Books/Users/Aql/Invoke.mdpp b/Documentation/Books/Users/Aql/Invoke.mdpp index 7fe065adbf..9857b1017e 100644 --- a/Documentation/Books/Users/Aql/Invoke.mdpp +++ b/Documentation/Books/Users/Aql/Invoke.mdpp @@ -6,8 +6,8 @@ API description is available at [Http Interface for AQL Query Cursor](../HttpAql You can also run AQL queries from arangosh. To do so, first create an ArangoStatement object as follows: - arangosh> stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } ); - [object ArangoStatement] + stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } ); + [object ArangoStatement] To execute the query, use the *execute* method: diff --git a/Documentation/Books/Users/Aql/Operators.mdpp b/Documentation/Books/Users/Aql/Operators.mdpp index 1d01932bae..a7795c5218 100644 --- a/Documentation/Books/Users/Aql/Operators.mdpp +++ b/Documentation/Books/Users/Aql/Operators.mdpp @@ -102,7 +102,7 @@ evaluation. The ternary operator expects a boolean condition as its first operand, and it returns the result of the second operand if the condition evaluates to true, and the third operand otherwise. -Example: +@EXAMPLES u.age > 15 || u.active == true ? u.userId : null @@ -115,7 +115,7 @@ values. The *..* operator will produce a list of values in the defined range, with both bounding values included. -Example: +@EXAMPLES 2010..2013 @@ -274,7 +274,7 @@ For string processing, AQL offers the following functions: - *UPPER(value)*: Upper-case *value* - *SUBSTRING(value, offset, length)*: Return a substring of *value*, - starting at @FA{offset} and with a maximum length of *length* characters. Offsets + starting at *offset* and with a maximum length of *length* characters.
Offsets start at position 0 - *LEFT(value, LENGTH)*: Returns the *LENGTH* leftmost characters of @@ -453,12 +453,12 @@ AQL supports the following functions to operate on list values: *list* is a document, returns the number of attribute keys of the document, regardless of their values. -- @FN{FLATTEN(list), depth)*: Turns a list of lists into a flat list. All +- *FLATTEN(list, depth)*: Turns a list of lists into a flat list. All list elements in *list* will be expanded in the result list. Non-list elements are added as they are. The function will recurse into sub-lists up to a depth of *depth*. *depth* has a default value of 1. - Example: + @EXAMPLES FLATTEN([ 1, 2, [ 3, 4 ], 5, [ 6, 7 ], [ 8, [ 9, 10 ] ] ]) @@ -523,7 +523,7 @@ AQL supports the following functions to operate on list values: - *LAST(list)*: Returns the last element in *list* or *null* if the list is empty. -- *NTH(list, position)*: Returns the list element at position @FA{position}. +- *NTH(list, position)*: Returns the list element at position *position*. Positions start at 0. If *position* is negative or beyond the upper bound of the list specified by *list*, then *null* will be returned. @@ -536,11 +536,11 @@ AQL supports the following functions to operate on list values: - *SLICE(list, start, length)*: Extracts a slice of the list specified by *list*. The extraction will start at list element with position *start*. Positions start at 0. Up to *length* elements will be extracted. If *length* is - not specified, all list elements starting at @FA{start} will be returned. + not specified, all list elements starting at *start* will be returned. If *start* is negative, it can be used to indicate positions from the end of the list. - Examples: +@EXAMPLES SLICE([ 1, 2, 3, 4, 5 ], 0, 1) @@ -573,7 +573,7 @@ AQL supports the following functions to operate on list values: Note: No duplicates will be removed.
In order to remove duplicates, please use either the *UNION_DISTINCT* function or apply *UNIQUE* to the result of *UNION*. - Example: + @EXAMPLES RETURN UNION( [ 1, 2, 3 ], @@ -638,7 +638,7 @@ AQL supports the following functions to operate on document values: The *examples* must be a list of 1..n example documents, with any number of attributes each. Note: specifying an empty list of examples is not allowed. - Example usage: + @EXAMPLES RETURN MATCHES( { "test" : 1 }, [ @@ -850,7 +850,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead. returns a list of paths through the graph defined by the nodes in the collection *vertexcollection* and edges in the collection *edgecollection*. For each vertex in *vertexcollection*, it will determine the paths through the graph depending on the - value of @FA{direction}: + value of *direction*: - *"outbound"*: Follow all paths that start at the current vertex and lead to another vertex - *"inbound"*: Follow all paths that lead from another vertex to the current vertex - *"any"*: Combination of *"outbound"* and *"inbound"* @@ -865,7 +865,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead. - *source*: start vertex of path - *destination*: destination vertex of path - Example calls: +@EXAMPLES PATHS(friends, friendrelations, "outbound", false) @@ -962,9 +962,9 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead. - *vertex*: The vertex at the traversal point - *path*: The path history for the traversal point. The path is a document with the attributes *vertices* and *edges*, which are both lists. Note that *path* is only present - in the result if the *paths* attribute is set in the @FA{options} + in the result if the *paths* attribute is set in the *options* - Example calls: +@EXAMPLES TRAVERSAL(friends, friendrelations, "friends/john", "outbound", { strategy: "depthfirst", @@ -1021,7 +1021,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
- *TRAVERSAL_TREE(vertexcollection, edgecollection, startVertex, direction, connectName, options)*: Traverses the graph described by *vertexcollection* and *edgecollection*, - starting at the vertex identified by id @FA{startVertex} and creates a hierarchical result. + starting at the vertex identified by id *startVertex* and creates a hierarchical result. Vertex connectivity is established by inserting an attribute which has the name specified via the *connectName* parameter. Connected vertices will be placed in this attribute as a list. @@ -1030,7 +1030,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead. be set up in a way that resembles a depth-first, pre-order visitation result. Thus, the *strategy* and *order* attributes of the *options* attribute will be ignored. - Example calls: +@EXAMPLES TRAVERSAL_TREE(friends, friendrelations, "friends/john", "outbound", "likes", { itemOrder: "forward" @@ -1049,10 +1049,10 @@ This query is deprecated and will be removed soon. Please use [Graph operations](../Aql/GraphOperations.md) instead. - *SHORTEST_PATH(vertexcollection, edgecollection, startVertex, endVertex, direction, options)*: - Determines the first shortest path from the @FA{startVertex} to the *endVertex*. + Determines the first shortest path from the *startVertex* to the *endVertex*. Both vertices must be present in the vertex collection specified in *vertexcollection*, and any connecting edges must be present in the collection specified by *edgecollection*. - Vertex connectivity is specified by the @FA{direction} parameter: + Vertex connectivity is specified by the *direction* parameter: - *"outbound"*: Vertices are connected in *_from* to *_to* order - *"inbound"*: Vertices are connected in *_to* to *_from* order - *"any"*: Vertices are connected in both *_to* to *_from* and in @@ -1114,9 +1114,9 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
- *vertex*: The vertex at the traversal point - *path*: The path history for the traversal point. The path is a document with the attributes *vertices* and *edges*, which are both lists. Note that *path* is only present - in the result if the *paths* attribute is set in the @FA{options}. + in the result if the *paths* attribute is set in the *options*. - Example calls: +@EXAMPLES SHORTEST_PATH(cities, motorways, "cities/CGN", "cities/MUC", "outbound", { paths: true @@ -1160,7 +1160,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead. To not restrict the result to specific connections, *edgeexamples* should be left unspecified. - Example calls: +@EXAMPLES EDGES(friendrelations, "friends/john", "outbound") EDGES(friendrelations, "friends/john", "any", [ { "$label": "knows" } ]) @@ -1182,7 +1182,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead. To not restrict the result to specific connections, *edgeexamples* should be left unspecified. - Example calls: +@EXAMPLES NEIGHBORS(friends, friendrelations, "friends/john", "outbound") NEIGHBORS(users, usersrelations, "users/john", "any", [ { "$label": "recommends" } ] ) @@ -1221,7 +1221,7 @@ function categories: found, *null* will be returned. This function also allows *id* to be a list of ids. In this case, the function will return a list of all documents that could be found. - Examples: +@EXAMPLES DOCUMENT(users, "users/john") DOCUMENT(users, "john") @@ -1239,10 +1239,10 @@ function categories: DOCUMENT([ "users/john", "users/amy" ]) - *SKIPLIST(collection, condition, skip, limit)*: Return all documents - from a skiplist index on collection *collection* that match the specified @FA{condition}. + from a skiplist index on collection *collection* that match the specified *condition*. This is a shortcut method to use a skiplist index for retrieving specific documents in indexed order. The skiplist index supports equality and less than/greater than queries.
The - @FA{skip} and *limit* parameters are optional but can be specified to further limit the + *skip* and *limit* parameters are optional but can be specified to further limit the results: SKIPLIST(test, { created: [[ '>', 0 ]] }, 0, 100) diff --git a/Documentation/Books/Users/SimpleQueries/FulltextQueries.mdpp b/Documentation/Books/Users/SimpleQueries/FulltextQueries.mdpp index f5c9b823ac..44e3b8c2d2 100644 --- a/Documentation/Books/Users/SimpleQueries/FulltextQueries.mdpp +++ b/Documentation/Books/Users/SimpleQueries/FulltextQueries.mdpp @@ -12,9 +12,9 @@ When a fulltext index exists, it can be queried using a fulltext query. !SUBSECTION Fulltext -@startDocuBlock simple-query-fulltext +@startDocuBlock collectionFulltext -!SUBSECTION Fulltext query syntax: +!SUBSECTION Fulltext Syntax In the simplest form, a fulltext query contains just the sought word. If multiple search words are given in a query, they should be separated by commas. diff --git a/Documentation/Books/Users/SimpleQueries/ModificationQueries.mdpp b/Documentation/Books/Users/SimpleQueries/ModificationQueries.mdpp index 864599e06c..12b11fa545 100644 --- a/Documentation/Books/Users/SimpleQueries/ModificationQueries.mdpp +++ b/Documentation/Books/Users/SimpleQueries/ModificationQueries.mdpp @@ -11,8 +11,8 @@ modify lots of documents in a collection. All methods can optionally be restricted to a specific number of operations. However, if a limit is specified but is less than the number of matches, it will be undefined which of the matching documents will get removed/modified.
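The limit semantics described in the ModificationQueries hunk above (at most the given number of matching documents is affected, and which of the matches are chosen is undefined) can be sketched outside ArangoDB in plain JavaScript. This is a hypothetical stand-in, not ArangoDB's implementation: the collection is a plain array and `removeByExample` is an illustrative name.

```javascript
// Sketch of the documented "limit" semantics: remove at most `limit`
// documents matching `example`. When more documents match than `limit`
// allows, which ones get removed is left undefined, as in ArangoDB.
function removeByExample(collection, example, limit) {
  var keys = Object.keys(example);
  var removed = 0;
  // iterate backwards so splice() does not disturb unvisited indexes
  for (var i = collection.length - 1; i >= 0; i--) {
    if (limit !== undefined && removed >= limit) {
      break; // limit reached; remaining matches stay untouched
    }
    var doc = collection[i];
    var matches = keys.every(function (k) { return doc[k] === example[k]; });
    if (matches) {
      collection.splice(i, 1);
      removed++;
    }
  }
  return removed;
}

var docs = [ { status: "old" }, { status: "new" }, { status: "old" }, { status: "old" } ];
var count = removeByExample(docs, { status: "old" }, 2);
// count is 2; one "old" document survives because the limit was hit first
```

Note that which "old" document survives depends on iteration order, which is exactly the "undefined" behavior the documentation warns about.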
-[Remove by Example](../Documents/DocumentMethods.html#remove_by_example), -[Replace by Example](../Documents/DocumentMethods.html#replace_by_example) and -[Update by Example](../Documents/DocumentMethods.html#update_by_example) -are described with examples in the subchapter +[Remove by Example](../Documents/DocumentMethods.html#remove_by_example), +[Replace by Example](../Documents/DocumentMethods.html#replace_by_example) and +[Update by Example](../Documents/DocumentMethods.html#update_by_example) +are described with examples in the subchapter [Collection Methods](../Documents/DocumentMethods.md). \ No newline at end of file diff --git a/Documentation/Books/Users/SimpleQueries/Pagination.mdpp b/Documentation/Books/Users/SimpleQueries/Pagination.mdpp index aae32464f6..c860fd21fd 100644 --- a/Documentation/Books/Users/SimpleQueries/Pagination.mdpp +++ b/Documentation/Books/Users/SimpleQueries/Pagination.mdpp @@ -8,8 +8,8 @@ MySQL. *skip* used together with *limit* can be used to implement pagination. The *skip* operator skips over the first n documents. So, in order to create -result pages with 10 result documents per page, you can use `skip(n * -10).limit(10)` to access the 10 documents on the n.th page. This result should +result pages with 10 result documents per page, you can use *skip(n * +10).limit(10)* to access the 10 documents on the n-th page. This result should be sorted, so that the pagination works in a predictable way. !SUBSECTION Limit diff --git a/Documentation/Books/Users/Transactions/Durability.mdpp b/Documentation/Books/Users/Transactions/Durability.mdpp index 7b1beea2ad..a0f95e9a37 100644 --- a/Documentation/Books/Users/Transactions/Durability.mdpp +++ b/Documentation/Books/Users/Transactions/Durability.mdpp @@ -28,7 +28,7 @@ whether the delayed synchronization had kicked in or not.
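The skip/limit pagination rule from the Pagination hunk above (use skip(n * 10).limit(10) for the n-th page of 10 documents) reduces to simple index arithmetic. A minimal plain-JavaScript sketch follows; `getPage`, `page` and `pageSize` are illustrative names, not ArangoDB API:

```javascript
// Page n (0-based) of size pageSize corresponds to
// skip(n * pageSize).limit(pageSize) over a sorted result set.
function getPage(sortedResults, page, pageSize) {
  var skip = page * pageSize;                        // documents to skip
  return sortedResults.slice(skip, skip + pageSize); // at most pageSize documents
}

// 25 sorted documents give three pages of 10, 10 and 5 documents
var results = [];
for (var i = 1; i <= 25; i++) {
  results.push({ id: i });
}

var page0 = getPage(results, 0, 10); // ids 1..10
var page2 = getPage(results, 2, 10); // ids 21..25, a partial last page
```

As the documentation stresses, this only behaves predictably when the result set is sorted, so that every page request sees the documents in the same order.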
To ensure durability of transactions on collections that have the *waitForSync* property set to *false*, you can set the *waitForSync* attribute of the object that is passed to *executeTransaction*. This will force a synchronization of the -transaction to disk even for collections that have *waitForSync set to *false*: +transaction to disk even for collections that have *waitForSync* set to *false*: db._executeTransaction({ collections: { diff --git a/Documentation/Books/Users/Transactions/LockingAndIsolation.mdpp b/Documentation/Books/Users/Transactions/LockingAndIsolation.mdpp index 0ff6a7d76b..9ab77528a0 100644 --- a/Documentation/Books/Users/Transactions/LockingAndIsolation.mdpp +++ b/Documentation/Books/Users/Transactions/LockingAndIsolation.mdpp @@ -29,7 +29,7 @@ from the collection as usual. However, as the collection is added lazily, there is no isolation from other concurrent operations or transactions. Reads from such collections are potentially non-repeatable. -Example: +@EXAMPLES db._executeTransaction({ collections: { diff --git a/Documentation/Books/Users/Transactions/README.mdpp b/Documentation/Books/Users/Transactions/README.mdpp index cc4b3997ff..f4c3827831 100644 --- a/Documentation/Books/Users/Transactions/README.mdpp +++ b/Documentation/Books/Users/Transactions/README.mdpp @@ -6,13 +6,14 @@ transactions. Transactions in ArangoDB are atomic, consistent, isolated, and durable (*ACID*). These *ACID* properties provide the following guarantees: -- The *atomicity* priniciple makes transactions either complete in their + +* The *atomicity* principle makes transactions either complete in their entirety or have no effect at all. -- The *consistency* principle ensures that no constraints or other invariants +* The *consistency* principle ensures that no constraints or other invariants will be violated during or after any transaction.
-- The *isolation* property will hide the modifications of a transaction from +* The *isolation* property will hide the modifications of a transaction from other transactions until the transaction commits. -- Finally, the *durability* proposition makes sure that operations from +* Finally, the *durability* proposition makes sure that operations from transactions that have committed will be made persistent. The amount of transaction durability is configurable in ArangoDB, as is the durability at the collection level. \ No newline at end of file diff --git a/Documentation/Books/Users/Transactions/TransactionInvocation.mdpp b/Documentation/Books/Users/Transactions/TransactionInvocation.mdpp index 567b302435..7fce3fff01 100644 --- a/Documentation/Books/Users/Transactions/TransactionInvocation.mdpp +++ b/Documentation/Books/Users/Transactions/TransactionInvocation.mdpp @@ -18,7 +18,9 @@ in ArangoDB. Instead, a transaction in ArangoDB is started by providing a description of the transaction to the *db._executeTransaction* Javascript function: - db._executeTransaction(description); +```js +db._executeTransaction(description); +``` This function will then automatically start a transaction, execute all required data retrieval and/or modification operations, and at the end automatically @@ -45,33 +47,36 @@ Collections for a transaction are declared by providing them in the *collections attribute of the object passed to the *_executeTransaction* function. The *collections* attribute has the sub-attributes *read* and *write*: - db._executeTransaction({ - collections: { - write: [ "users", "logins" ], - read: [ "recommendations" ] - }, - ... - }); +```js +db._executeTransaction({ + collections: { + write: [ "users", "logins" ], + read: [ "recommendations" ] + }, + ... +}); +``` *read* and *write* are optional attributes, and only need to be specified if the operations inside the transactions demand it.
The contents of *read* or *write* can each be lists with collection names or a single collection name (as a string): - - db._executeTransaction({ - collections: { - write: "users", - read: "recommendations" - }, - ... - }); +```js +db._executeTransaction({ + collections: { + write: "users", + read: "recommendations" + }, + ... +}); +``` Note that it is currently optional to specify collections for read-only access. Even without specifying them, it is still possible to read from such collections from within a transaction, but with relaxed isolation. Please refer to -@ref TransactionsLocking for more details. +[Transactions Locking](../Transactions/LockingAndIsolation.md) for more details. !SUBSECTION Declaration of data modification and retrieval operations @@ -79,42 +84,47 @@ All data modification and retrieval operations that are to be executed inside the transaction need to be specified in a Javascript function, using the *action* attribute: - db._executeTransaction({ - collections: { - write: "users" - }, - action: function () { - // all operations go here - } - }); +```js +db._executeTransaction({ + collections: { + write: "users" + }, + action: function () { + // all operations go here + } +}); +``` Any valid Javascript code is allowed inside *action* but the code may only access the collections declared in *collections*. *action* may be a Javascript function as shown above, or a string representation of a Javascript function: - db._executeTransaction({ - collections: { - write: "users" - }, - action: "function () { doSomething(); }" - }); - +```js +db._executeTransaction({ + collections: { + write: "users" + }, + action: "function () { doSomething(); }" +}); +``` Please note that any operations specified in *action* will be executed on the server, in a separate scope. Variables will be bound late. Accessing any Javascript variables defined on the client-side or in some other server context from inside a transaction may not work.
Instead, any variables used inside *action* should be defined inside *action* itself: - db._executeTransaction({ - collections: { - write: "users" - }, - action: function () { - var db = require(...).db; - db.users.save({ ... }); - } - }); +```js +db._executeTransaction({ + collections: { + write: "users" + }, + action: function () { + var db = require(...).db; + db.users.save({ ... }); + } +}); +``` When the code inside the *action* attribute is executed, the transaction is already started and all required locks have been acquired. When the code inside @@ -124,18 +134,20 @@ There is no explicit commit command. To make a transaction abort and roll back all changes, an exception needs to be thrown and not caught inside the transaction: - db._executeTransaction({ - collections: { - write: "users" - }, - action: function () { - var db = require("internal").db; - db.users.save({ _key: "hello" }); +```js +db._executeTransaction({ + collections: { + write: "users" + }, + action: function () { + var db = require("internal").db; + db.users.save({ _key: "hello" }); - // will abort and roll back the transaction - throw "doh!"; - } - }); + // will abort and roll back the transaction + throw "doh!"; + } +}); +``` There is no explicit abort or roll back command. @@ -143,18 +155,20 @@ As mentioned earlier, a transaction will commit automatically when the end of the *action* function is reached and no exception has been thrown.
In this case, the user can return any legal Javascript value from the function: - db._executeTransaction({ - collections: { - write: "users" - }, - action: function () { - var db = require("internal").db; - db.users.save({ _key: "hello" }); +```js +db._executeTransaction({ + collections: { + write: "users" + }, + action: function () { + var db = require("internal").db; + db.users.save({ _key: "hello" }); - // will commit the transaction and return the value "hello" - return "hello"; - } - }); + // will commit the transaction and return the value "hello" + return "hello"; + } +}); +``` !SUBSECTION Examples @@ -165,108 +179,114 @@ The *c1* collection needs to be declared in the *write* attribute of the The *action* attribute contains the actual transaction code to be executed. This code contains all data modification operations (3 in this example). - // setup - db._create("c1"); - - db._executeTransaction({ - collections: { - write: [ "c1" ] - }, - action: function () { - var db = require("internal").db; - db.c1.save({ _key: "key1" }); - db.c1.save({ _key: "key2" }); - db.c1.save({ _key: "key3" }); - } - }); +```js +// setup +db._create("c1"); +db._executeTransaction({ + collections: { + write: [ "c1" ] + }, + action: function () { + var db = require("internal").db; + db.c1.save({ _key: "key1" }); + db.c1.save({ _key: "key2" }); + db.c1.save({ _key: "key3" }); + } +}); db.c1.count(); // 3 +``` + + Aborting the transaction by throwing an exception in the *action* function will revert all changes, as if the transaction never happened: - - // setup - db._create("c1"); +```js +// setup +db._create("c1"); - db._executeTransaction({ - collections: { - write: [ "c1" ] - }, - action: function () { - var db = require("internal").db; - db.c1.save({ _key: "key1" }); - db.c1.count(); // 1 +db._executeTransaction({ + collections: { + write: [ "c1" ] + }, + action: function () { + var db = require("internal").db; + db.c1.save({ _key: "key1" }); + db.c1.count(); // 1 -
db.c1.save({ _key: "key2" }); - db.c1.count(); // 2 + db.c1.save({ _key: "key2" }); + db.c1.count(); // 2 - throw "doh!"; - } - }); - - db.c1.count(); // 0 + throw "doh!"; + } +}); +db.c1.count(); // 0 +``` The automatic rollback is also executed when an internal exception is thrown at some point during transaction execution: - // setup - db._create("c1"); +```js +// setup +db._create("c1"); - db._executeTransaction({ - collections: { - write: [ "c1" ] - }, - action: function () { - var db = require("internal").db; - db.c1.save({ _key: "key1" }); - - // will throw duplicate a key error, not explicitly requested by the user - db.c1.save({ _key: "key1" }); +db._executeTransaction({ + collections: { + write: [ "c1" ] + }, + action: function () { + var db = require("internal").db; + db.c1.save({ _key: "key1" }); + + // will throw a duplicate key error, not explicitly requested by the user + db.c1.save({ _key: "key1" }); - // we'll never get here... - } - }); - - db.c1.count(); // 0 + // we'll never get here... + } +}); +db.c1.count(); // 0 +``` As required by the *consistency* principle, aborting or rolling back a transaction will also restore secondary indexes to the state at transaction start.
The following example using a cap constraint should illustrate that: - // setup - db._create("c1"); - - // limit the number of documents to 3 - db.c1.ensureCapConstraint(3); +```js +// setup +db._create("c1"); - // insert 3 documents - db.c1.save({ _key: "key1" }); - db.c1.save({ _key: "key2" }); - db.c1.save({ _key: "key3" }); +// limit the number of documents to 3 +db.c1.ensureCapConstraint(3); - // this will push out key1 - // we now have these keys: [ "key1", "key2", "key3" ] - db.c1.save({ _key: "key4" }); +// insert 3 documents +db.c1.save({ _key: "key1" }); +db.c1.save({ _key: "key2" }); +db.c1.save({ _key: "key3" }); + +// this will push out key1 +// we now have these keys: [ "key1", "key2", "key3" ] +db.c1.save({ _key: "key4" }); - db._executeTransaction({ - collections: { - write: [ "c1" ] - }, - action: function () { - var db = require("internal").db; - // this will push out key2. we now have keys [ "key3", "key4", "key5" ] - db.c1.save({ _key: "key5" }); +db._executeTransaction({ + collections: { + write: [ "c1" ] + }, + action: function () { + var db = require("internal").db; + // this will push out key2. we now have keys [ "key3", "key4", "key5" ] + db.c1.save({ _key: "key5" }); - // will abort the transaction - throw "doh!" - } - }); + // will abort the transaction + throw "doh!" + } +}); - // we now have these keys back: [ "key2", "key3", "key4" ] +// we now have these keys back: [ "key2", "key3", "key4" ] +``` !SUBSECTION Cross-collection transactions @@ -274,50 +294,53 @@ There's also the possibility to run a transaction across multiple collections. 
In this case, multiple collections need to be declared in the *collections* attribute, e.g.: - // setup - db._create("c1"); - db._create("c2"); +```js +// setup +db._create("c1"); +db._create("c2"); - db._executeTransaction({ - collections: { - write: [ "c1", "c2" ] - }, - action: function () { - var db = require("internal").db; - db.c1.save({ _key: "key1" }); - db.c2.save({ _key: "key2" }); - } - }); - - db.c1.count(); // 1 - db.c2.count(); // 1 +db._executeTransaction({ + collections: { + write: [ "c1", "c2" ] + }, + action: function () { + var db = require("internal").db; + db.c1.save({ _key: "key1" }); + db.c2.save({ _key: "key2" }); + } +}); +db.c1.count(); // 1 +db.c2.count(); // 1 +``` Again, throwing an exception from inside the *action* function will make the transaction abort and roll back all changes in all collections: - // setup - db._create("c1"); - db._create("c2"); +```js +// setup +db._create("c1"); +db._create("c2"); - db._executeTransaction({ - collections: { - write: [ "c1", "c2" ] - }, - action: function () { - var db = require("internal").db; - for (var i = 0; i < 100; ++i) { - db.c1.save({ _key: "key" + i }); - db.c2.save({ _key: "key" + i }); - } +db._executeTransaction({ + collections: { + write: [ "c1", "c2" ] + }, + action: function () { + var db = require("internal").db; + for (var i = 0; i < 100; ++i) { + db.c1.save({ _key: "key" + i }); + db.c2.save({ _key: "key" + i }); + } - db.c1.count(); // 100 - db.c2.count(); // 100 + db.c1.count(); // 100 + db.c2.count(); // 100 - // abort - throw "doh!" - } - }); + // abort + throw "doh!" 
+ } +}); - db.c1.count(); // 0 - db.c2.count(); // 0 +db.c1.count(); // 0 +db.c2.count(); // 0 +``` \ No newline at end of file diff --git a/UnitTests/Makefile.unittests b/UnitTests/Makefile.unittests index 4e964426b9..363a4797cd 100755 --- a/UnitTests/Makefile.unittests +++ b/UnitTests/Makefile.unittests @@ -566,8 +566,8 @@ unittests-shell-server-ahuacatl: ### @brief SHELL CLIENT TESTS ################################################################################ -UNITTESTS_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-changeMode.js) -UNITTESTS_NO_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-noChangeMode.js) +UNITTESTS_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-changeMode-noncluster.js) +UNITTESTS_NO_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-noChangeMode-noncluster.js) .PHONY: unittests-shell-client-readonly unittests-shell-client-readonly: diff --git a/arangod/V8Server/V8Job.cpp b/arangod/V8Server/V8Job.cpp index 0aaab1b38d..cd408f87c0 100644 --- a/arangod/V8Server/V8Job.cpp +++ b/arangod/V8Server/V8Job.cpp @@ -56,8 +56,24 @@ V8Job::V8Job (TRI_vocbase_t* vocbase, _vocbase(vocbase), _v8Dealer(v8Dealer), _command(command), - _parameters(parameters), + _parameters(nullptr), _canceled(0) { + + if (parameters != nullptr) { + // create our own copy of the parameters + _parameters = TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, parameters); + } +} + +//////////////////////////////////////////////////////////////////////////////// +/// @brief destroys a V8 job +//////////////////////////////////////////////////////////////////////////////// + +V8Job::~V8Job () { + if (_parameters != nullptr) { + TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, _parameters); + _parameters = nullptr; + } } // ----------------------------------------------------------------------------- @@ -76,7 +92,7 @@ Job::JobType V8Job::type () { /// {@inheritDoc} 
//////////////////////////////////////////////////////////////////////////////// -const string& V8Job::queue () { +string const& V8Job::queue () { static const string queue = "STANDARD"; return queue; } @@ -90,11 +106,10 @@ Job::status_t V8Job::work () { return status_t(JOB_DONE); } - ApplicationV8::V8Context* context - = _v8Dealer->enterContext(_vocbase, 0, true, false); + ApplicationV8::V8Context* context = _v8Dealer->enterContext(_vocbase, nullptr, true, false); // note: the context might be a nullptr in case of shut-down - if (context == 0) { + if (context == nullptr) { return status_t(JOB_DONE); } @@ -119,7 +134,7 @@ Job::status_t V8Job::work () { } v8::Handle<v8::Value> fArgs; - if (_parameters != 0) { + if (_parameters != nullptr) { fArgs = TRI_ObjectJson(_parameters); } else { diff --git a/arangod/V8Server/V8Job.h b/arangod/V8Server/V8Job.h index 15a991bcc3..ac7cf26333 100644 --- a/arangod/V8Server/V8Job.h +++ b/arangod/V8Server/V8Job.h @@ -64,6 +64,12 @@ namespace triagens { std::string const&, TRI_json_t const*); +//////////////////////////////////////////////////////////////////////////////// +/// @brief destroys a V8 job +//////////////////////////////////////////////////////////////////////////////// + + ~V8Job (); + // ----------------------------------------------------------------------------- // --SECTION-- Job methods // ----------------------------------------------------------------------------- @@ -80,7 +86,7 @@ namespace triagens { /// {@inheritDoc} //////////////////////////////////////////////////////////////////////////////// - const std::string& queue (); + std::string const& queue (); //////////////////////////////////////////////////////////////////////////////// /// {@inheritDoc} @@ -140,7 +146,7 @@ namespace triagens { /// @brief parameters //////////////////////////////////////////////////////////////////////////////// - TRI_json_t const* _parameters; + TRI_json_t* _parameters;
//////////////////////////////////////////////////////////////////////////////// /// @brief cancel flag diff --git a/arangod/V8Server/V8TimerTask.cpp b/arangod/V8Server/V8TimerTask.cpp index ffd8f6c43c..d1e3481702 100644 --- a/arangod/V8Server/V8TimerTask.cpp +++ b/arangod/V8Server/V8TimerTask.cpp @@ -65,7 +65,7 @@ V8TimerTask::V8TimerTask (string const& id, _parameters(parameters), _created(TRI_microtime()) { - TRI_ASSERT(vocbase != 0); + TRI_ASSERT(vocbase != nullptr); // increase reference counter for the database used TRI_UseVocBase(_vocbase); @@ -79,7 +79,7 @@ V8TimerTask::~V8TimerTask () { // decrease reference counter for the database used TRI_ReleaseVocBase(_vocbase); - if (_parameters != 0) { + if (_parameters != nullptr) { TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, _parameters); } } diff --git a/arangod/VocBase/barrier.h b/arangod/VocBase/barrier.h index 253603f381..215e0dccf2 100644 --- a/arangod/VocBase/barrier.h +++ b/arangod/VocBase/barrier.h @@ -51,7 +51,7 @@ struct TRI_datafile_s; //////////////////////////////////////////////////////////////////////////////// typedef enum { - TRI_BARRIER_ELEMENT, + TRI_BARRIER_ELEMENT = 1, TRI_BARRIER_DATAFILE_DROP_CALLBACK, TRI_BARRIER_DATAFILE_RENAME_CALLBACK, TRI_BARRIER_COLLECTION_UNLOAD_CALLBACK, diff --git a/arangod/VocBase/cleanup.cpp b/arangod/VocBase/cleanup.cpp index 61ee5f20a2..5281b3b432 100644 --- a/arangod/VocBase/cleanup.cpp +++ b/arangod/VocBase/cleanup.cpp @@ -67,7 +67,8 @@ static int const CLEANUP_INDEX_ITERATIONS = 5; /// @brief checks all datafiles of a collection //////////////////////////////////////////////////////////////////////////////// -static void CleanupDocumentCollection (TRI_document_collection_t* document) { +static void CleanupDocumentCollection (TRI_vocbase_col_t* collection, + TRI_document_collection_t* document) { bool unloadChecked = false; // loop until done @@ -121,10 +122,21 @@ static void CleanupDocumentCollection (TRI_document_collection_t* document) { // we must release the lock 
temporarily to check if the collection is fully collected TRI_UnlockSpin(&container->_lock); + bool isDeleted = false; + // must not hold the spin lock while querying the collection if (! TRI_IsFullyCollectedDocumentCollection(document)) { - // collection is not fully collected - postpone the unload - return; + // if there is still some collection to perform, check if the collection was deleted already + if (TRI_TRY_READ_LOCK_STATUS_VOCBASE_COL(collection)) { + isDeleted = (collection->_status == TRI_VOC_COL_STATUS_DELETED); + TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection); + } + + if (! isDeleted) { + // collection is not fully collected and still undeleted - postpone the unload + return; + } + // if deleted, then we may unload / delete } unloadChecked = true; @@ -248,26 +260,21 @@ void TRI_CleanupVocBase (void* data) { // check if we can get the compactor lock exclusively if (TRI_CheckAndLockCompactorVocBase(vocbase)) { - size_t i, n; - // copy all collections TRI_READ_LOCK_COLLECTIONS_VOCBASE(vocbase); TRI_CopyDataVectorPointer(&collections, &vocbase->_collections); TRI_READ_UNLOCK_COLLECTIONS_VOCBASE(vocbase); - n = collections._length; + size_t const n = collections._length; - for (i = 0; i < n; ++i) { - TRI_vocbase_col_t* collection; - TRI_document_collection_t* document; - - collection = (TRI_vocbase_col_t*) collections._buffer[i]; + for (size_t i = 0; i < n; ++i) { + TRI_vocbase_col_t* collection = static_cast<TRI_vocbase_col_t*>(collections._buffer[i]); TRI_READ_LOCK_STATUS_VOCBASE_COL(collection); - document = collection->_collection; + TRI_document_collection_t* document = collection->_collection; - if (document == NULL) { + if (document == nullptr) { TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection); continue; } @@ -283,7 +290,7 @@ void TRI_CleanupVocBase (void* data) { document->cleanupIndexes(document); } - CleanupDocumentCollection(document); + CleanupDocumentCollection(collection, document); } TRI_UnlockCompactorVocBase(vocbase); diff --git a/arangod/VocBase/datafile.cpp
b/arangod/VocBase/datafile.cpp index 0891b46eb2..ca373948e6 100644 --- a/arangod/VocBase/datafile.cpp +++ b/arangod/VocBase/datafile.cpp @@ -1188,11 +1188,8 @@ void TRI_InitMarkerDatafile (char* marker, int TRI_WriteInitialHeaderMarkerDatafile (TRI_datafile_t* datafile, TRI_voc_fid_t fid, TRI_voc_size_t maximalSize) { - TRI_df_marker_t* position; - TRI_df_header_marker_t header; - int res; - // create the header + TRI_df_header_marker_t header; TRI_InitMarkerDatafile((char*) &header, TRI_DF_MARKER_HEADER, sizeof(TRI_df_header_marker_t)); header.base._tick = (TRI_voc_tick_t) fid; @@ -1201,7 +1198,8 @@ int TRI_WriteInitialHeaderMarkerDatafile (TRI_datafile_t* datafile, header._fid = fid; // reserve space and write header to file - res = TRI_ReserveElementDatafile(datafile, header.base._size, &position, 0); + TRI_df_marker_t* position; + int res = TRI_ReserveElementDatafile(datafile, header.base._size, &position, 0); if (res == TRI_ERROR_NO_ERROR) { res = TRI_WriteCrcElementDatafile(datafile, position, &header.base, false); diff --git a/arangod/VocBase/datafile.h b/arangod/VocBase/datafile.h index 1726a9d2b6..afc2e6c6be 100644 --- a/arangod/VocBase/datafile.h +++ b/arangod/VocBase/datafile.h @@ -279,7 +279,7 @@ typedef struct TRI_datafile_s { bool (*sync)(const struct TRI_datafile_s* const, char const*, char const*); // syncs the datafile int (*truncate)(struct TRI_datafile_s* const, const off_t); // truncates the datafile to a specific length - int _lastError; // last (cirtical) error + int _lastError; // last (critical) error bool _full; // at least one request was rejected because there is not enough room bool _isSealed; // true, if footer has been written diff --git a/arangod/VocBase/document-collection.cpp b/arangod/VocBase/document-collection.cpp index e592adc69e..3438355a8a 100644 --- a/arangod/VocBase/document-collection.cpp +++ b/arangod/VocBase/document-collection.cpp @@ -1688,8 +1688,6 @@ static bool OpenIterator (TRI_df_marker_t const* marker, 
//////////////////////////////////////////////////////////////////////////////// static int FillInternalIndexes (TRI_document_collection_t* document) { - TRI_ASSERT(! triagens::wal::LogfileManager::instance()->isInRecovery()); - int res = TRI_ERROR_NO_ERROR; for (size_t i = 0; i < document->_allIndexes._length; ++i) { @@ -2320,7 +2318,7 @@ TRI_datafile_t* TRI_CreateDatafileDocumentCollection (TRI_document_collection_t* if (res != TRI_ERROR_NO_ERROR) { document->_lastError = journal->_lastError; - LOG_ERROR("cannot create collection header in file '%s': %s", journal->getName(journal), TRI_last_error()); + LOG_ERROR("cannot create collection header in file '%s': %s", journal->getName(journal), TRI_errno_string(res)); // close the journal and remove it TRI_CloseDatafile(journal); @@ -2332,7 +2330,7 @@ TRI_datafile_t* TRI_CreateDatafileDocumentCollection (TRI_document_collection_t* TRI_col_header_marker_t cm; TRI_InitMarkerDatafile((char*) &cm, TRI_COL_MARKER_HEADER, sizeof(TRI_col_header_marker_t)); - cm.base._tick = (TRI_voc_tick_t) fid; + cm.base._tick = static_cast<TRI_voc_tick_t>(fid); cm._type = (TRI_col_type_t) document->_info._type; cm._cid = document->_info._cid; @@ -2697,16 +2695,11 @@ TRI_document_collection_t* TRI_OpenDocumentCollection (TRI_vocbase_t* vocbase, TRI_InitVocShaper(document->getShaper()); // ONLY in OPENCOLLECTION, PROTECTED by fake trx here - // secondary indexes must not be loaded during recovery - // this is because creating indexes might write attribute markers into the WAL, - // but the WAL is read-only at the point of recovery - if (!
triagens::wal::LogfileManager::instance()->isInRecovery()) { - // fill internal indexes (this is, the edges index at the moment) - FillInternalIndexes(document); + // fill internal indexes (this is, the edges index at the moment) + FillInternalIndexes(document); - // fill user-defined secondary indexes - TRI_IterateIndexCollection(collection, OpenIndexIterator, collection); - } + // fill user-defined secondary indexes + TRI_IterateIndexCollection(collection, OpenIndexIterator, collection); return document; } diff --git a/arangod/Wal/CollectorThread.cpp b/arangod/Wal/CollectorThread.cpp index df8bdc53a8..c0ac3faacc 100644 --- a/arangod/Wal/CollectorThread.cpp +++ b/arangod/Wal/CollectorThread.cpp @@ -478,7 +478,6 @@ bool CollectorThread::processQueuedOperations () { _numPendingOperations -= numOperations; - // delete the object delete (*it2); @@ -652,17 +651,12 @@ int CollectorThread::processCollectionOperations (CollectorCache* cache) { // finally update all datafile statistics LOG_TRACE("updating datafile statistics for collection '%s'", document->_info._name); updateDatafileStatistics(document, cache); - - // TODO: the following assertion is only true in a running system - // if we just started the server, we don't know how many uncollected operations we have!! 
- // TRI_ASSERT(document->_uncollectedLogfileEntries >= cache->totalOperationsCount); + document->_uncollectedLogfileEntries -= cache->totalOperationsCount; if (document->_uncollectedLogfileEntries < 0) { document->_uncollectedLogfileEntries = 0; } - cache->freeBarriers(); - res = TRI_ERROR_NO_ERROR; } catch (triagens::arango::Exception const& ex) { @@ -866,7 +860,6 @@ int CollectorThread::transferMarkers (Logfile* logfile, if (cache != nullptr) { // prevent memleak - cache->freeBarriers(); delete cache; } diff --git a/arangod/Wal/CollectorThread.h b/arangod/Wal/CollectorThread.h index 837944b095..9942788091 100644 --- a/arangod/Wal/CollectorThread.h +++ b/arangod/Wal/CollectorThread.h @@ -106,6 +106,7 @@ namespace triagens { if (operations != nullptr) { delete operations; } + freeBarriers(); } //////////////////////////////////////////////////////////////////////////////// @@ -125,6 +126,7 @@ namespace triagens { for (auto it = barriers.begin(); it != barriers.end(); ++it) { TRI_FreeBarrier((*it)); } + barriers.clear(); } diff --git a/arangod/Wal/LogfileManager.cpp b/arangod/Wal/LogfileManager.cpp index 9fed279637..28507889b3 100644 --- a/arangod/Wal/LogfileManager.cpp +++ b/arangod/Wal/LogfileManager.cpp @@ -42,6 +42,7 @@ #include "VocBase/server.h" #include "Wal/AllocatorThread.h" #include "Wal/CollectorThread.h" +#include "Wal/RecoverState.h" #include "Wal/Slots.h" #include "Wal/SynchroniserThread.h" @@ -730,6 +731,10 @@ SlotInfo LogfileManager::allocate (void const* src, uint32_t size) { if (! 
_allowWrites) { // no writes allowed +#ifdef TRI_ENABLE_MAINTAINER_MODE + TRI_ASSERT(false); +#endif + return SlotInfo(TRI_ERROR_ARANGO_READ_ONLY); } diff --git a/arangod/Wal/LogfileManager.h b/arangod/Wal/LogfileManager.h index ecbc0a938d..6a288fe4c8 100644 --- a/arangod/Wal/LogfileManager.h +++ b/arangod/Wal/LogfileManager.h @@ -48,35 +48,10 @@ namespace triagens { class AllocatorThread; class CollectorThread; + struct RecoverState; class Slot; class SynchroniserThread; -// ----------------------------------------------------------------------------- -// --SECTION-- RecoverState -// ----------------------------------------------------------------------------- - -//////////////////////////////////////////////////////////////////////////////// -/// @brief state that is built up when scanning a WAL logfile during recovery -//////////////////////////////////////////////////////////////////////////////// - - struct RecoverState { - RecoverState () - : collections(), - failedTransactions(), - droppedCollections(), - droppedDatabases(), - lastTick(0), - logfilesToCollect(0) { - } - - std::unordered_map collections; - std::unordered_map> failedTransactions; - std::unordered_set droppedCollections; - std::unordered_set droppedDatabases; - TRI_voc_tick_t lastTick; - int logfilesToCollect; - }; - // ----------------------------------------------------------------------------- // --SECTION-- LogfileManagerState // ----------------------------------------------------------------------------- @@ -358,14 +333,6 @@ namespace triagens { _throttleWhenPending = value; } -//////////////////////////////////////////////////////////////////////////////// -/// @brief whether or not we are in the recovery mode -//////////////////////////////////////////////////////////////////////////////// - - inline bool isInRecovery () const { - return _inRecovery; - } - //////////////////////////////////////////////////////////////////////////////// /// @brief registers a transaction 
//////////////////////////////////////////////////////////////////////////////// diff --git a/arangod/Wal/RecoverState.h b/arangod/Wal/RecoverState.h new file mode 100644 index 0000000000..6bda051ccd --- /dev/null +++ b/arangod/Wal/RecoverState.h @@ -0,0 +1,80 @@ +//////////////////////////////////////////////////////////////////////////////// +/// @brief Recovery state +/// +/// @file +/// +/// DISCLAIMER +/// +/// Copyright 2014 ArangoDB GmbH, Cologne, Germany +/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany +/// +/// Licensed under the Apache License, Version 2.0 (the "License"); +/// you may not use this file except in compliance with the License. +/// You may obtain a copy of the License at +/// +/// http://www.apache.org/licenses/LICENSE-2.0 +/// +/// Unless required by applicable law or agreed to in writing, software +/// distributed under the License is distributed on an "AS IS" BASIS, +/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +/// See the License for the specific language governing permissions and +/// limitations under the License. 
+/// +/// Copyright holder is ArangoDB GmbH, Cologne, Germany +/// +/// @author Jan Steemann +/// @author Copyright 2014, ArangoDB GmbH, Cologne, Germany +/// @author Copyright 2011-2013, triAGENS GmbH, Cologne, Germany +//////////////////////////////////////////////////////////////////////////////// + +#ifndef ARANGODB_WAL_RECOVER_STATE_H +#define ARANGODB_WAL_RECOVER_STATE_H 1 + +#include "Basics/Common.h" +#include "Basics/Mutex.h" +#include "VocBase/voc-types.h" + +struct TRI_server_s; + +namespace triagens { + namespace wal { + +// ----------------------------------------------------------------------------- +// --SECTION-- RecoverState +// ----------------------------------------------------------------------------- + +//////////////////////////////////////////////////////////////////////////////// +/// @brief state that is built up when scanning a WAL logfile during recovery +//////////////////////////////////////////////////////////////////////////////// + + struct RecoverState { + RecoverState () + : collections(), + failedTransactions(), + droppedCollections(), + droppedDatabases(), + lastTick(0), + logfilesToCollect(0) { + } + + std::unordered_map collections; + std::unordered_map> failedTransactions; + std::unordered_set droppedCollections; + std::unordered_set droppedDatabases; + TRI_voc_tick_t lastTick; + int logfilesToCollect; + }; + + } +} + +#endif + +// ----------------------------------------------------------------------------- +// --SECTION-- END-OF-FILE +// ----------------------------------------------------------------------------- + +// Local Variables: +// mode: outline-minor +// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}" +// End: diff --git a/js/apps/system/aardvark/frontend/js/models/graph.js b/js/apps/system/aardvark/frontend/js/models/graph.js index a39b1b3b9b..c8afb862f0 100644 --- a/js/apps/system/aardvark/frontend/js/models/graph.js +++ 
b/js/apps/system/aardvark/frontend/js/models/graph.js @@ -1,5 +1,5 @@ /*jslint indent: 2, nomen: true, maxlen: 100, vars: true, white: true, plusplus: true */ -/*global window, Backbone */ +/*global window, Backbone, $ */ (function() { "use strict"; @@ -17,6 +17,39 @@ return raw.graph || raw; }, + addEdgeDefinition: function(edgeDefinition) { + $.ajax( + { + async: false, + type: "POST", + url: this.urlRoot + "/" + this.get("_key") + "/edge", + data: JSON.stringify(edgeDefinition) + } + ); + }, + + deleteEdgeDefinition: function(edgeCollection) { + $.ajax( + { + async: false, + type: "DELETE", + url: this.urlRoot + "/" + this.get("_key") + "/edge/" + edgeCollection + } + ); + }, + + modifyEdgeDefinition: function(edgeDefinition) { + $.ajax( + { + async: false, + type: "PUT", + url: this.urlRoot + "/" + this.get("_key") + "/edge/" + edgeDefinition.collection, + data: JSON.stringify(edgeDefinition) + } + ); + }, + + defaults: { name: "", edgeDefinitions: [], diff --git a/js/apps/system/aardvark/frontend/js/templates/edgeDefinitionTable.ejs b/js/apps/system/aardvark/frontend/js/templates/edgeDefinitionTable.ejs index 717292f8f3..10f34192f3 100644 --- a/js/apps/system/aardvark/frontend/js/templates/edgeDefinitionTable.ejs +++ b/js/apps/system/aardvark/frontend/js/templates/edgeDefinitionTable.ejs @@ -1,5 +1,5 @@
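The CollectorThread.h hunks move barrier release into the CollectorCache destructor and make `freeBarriers()` idempotent by clearing the vector after freeing. A minimal sketch of that cleanup pattern, with `Barrier`/`CollectorCache` as simplified stand-ins for `TRI_barrier_t`/`TRI_FreeBarrier` and the real cache class:

```cpp
#include <cassert>
#include <vector>

// Stand-in for TRI_barrier_t; the flag lets us observe release in a test.
struct Barrier { bool freed = false; };

struct CollectorCache {
  std::vector<Barrier*> barriers;

  ~CollectorCache() {
    freeBarriers();  // destructor now releases barriers as a safety net
  }

  void freeBarriers() {
    for (Barrier* b : barriers) {
      b->freed = true;   // stands in for TRI_FreeBarrier(b)
    }
    barriers.clear();    // makes a second call a harmless no-op
  }
};
```

Because the vector is cleared, an explicit `freeBarriers()` call (as `processCollectionOperations` used to make) followed by destruction does not double-free.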
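The barrier.h hunk changes the first enum value to `TRI_BARRIER_ELEMENT = 1`, so the all-zeroes bit pattern of zero-filled memory is no longer a valid barrier type. A sketch of why starting the enum at 1 helps, using simplified stand-in names rather than the real ArangoDB types:

```cpp
#include <cassert>
#include <cstring>

// Stand-in for TRI_barrier_type_e: the first legal value is 1, so a
// zeroed type field can never masquerade as a valid barrier type.
enum BarrierType {
  BARRIER_ELEMENT = 1,
  BARRIER_DATAFILE_DROP_CALLBACK,
  BARRIER_COLLECTION_UNLOAD_CALLBACK
};

struct BarrierHeader {
  BarrierType type;
};

bool isValidType(BarrierHeader const& b) {
  // memset-zeroed memory now fails this range check
  return b.type >= BARRIER_ELEMENT && b.type <= BARRIER_COLLECTION_UNLOAD_CALLBACK;
}
```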