mirror of https://gitee.com/bigwinds/arangodb
Merge branch 'devel' of github.com:triAGENS/ArangoDB into devel
commit dc7ffb4efb
@@ -6,8 +6,8 @@ API description is available at [Http Interface for AQL Query Cursor](../HttpAql
You can also run AQL queries from arangosh. To do so, first create an
ArangoStatement object as follows:

-    arangosh> stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-    [object ArangoStatement]
+    stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
+    [object ArangoQueryCursor]

To execute the query, use the *execute* method:
@@ -102,7 +102,7 @@ evaluation. The ternary operator expects a boolean condition as its first
operand, and it returns the result of the second operand if the condition
evaluates to true, and the third operand otherwise.

-Example:
+@EXAMPLES:

    u.age > 15 || u.active == true ? u.userId : null
@@ -115,7 +115,7 @@ values.
The *..* operator will produce a list of values in the defined range, with
both bounding values included.

-Example:
+@EXAMPLES

    2010..2013
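The inclusive-range semantics can be sketched in plain JavaScript (a hypothetical `range` helper, not part of ArangoDB or AQL):

```js
// Expand an inclusive integer range, mirroring AQL's `..` operator:
// both bounding values are included in the result list.
function range(from, to) {
  var result = [];
  for (var i = from; i <= to; i++) {
    result.push(i);
  }
  return result;
}

range(2010, 2013); // → [ 2010, 2011, 2012, 2013 ]
```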
@@ -274,7 +274,7 @@ For string processing, AQL offers the following functions:
- *UPPER(value)*: Upper-case *value*

- *SUBSTRING(value, offset, length)*: Return a substring of *value*,
-  starting at @FA{offset} and with a maximum length of *length* characters. Offsets
+  starting at *offset* and with a maximum length of *length* characters. Offsets
  start at position 0

- *LEFT(value, LENGTH)*: Returns the *LENGTH* leftmost characters of
@@ -453,12 +453,12 @@ AQL supports the following functions to operate on list values:
  *list* is a document, returns the number of attribute keys of the document,
  regardless of their values.

-- @FN{FLATTEN(list), depth)*: Turns a list of lists into a flat list. All
+- *FLATTEN(list, depth)*: Turns a list of lists into a flat list. All
  list elements in *list* will be expanded in the result list. Non-list elements
  are added as they are. The function will recurse into sub-lists up to a depth of
  *depth*. *depth* has a default value of 1.

-Example:
+@EXAMPLES

    FLATTEN([ 1, 2, [ 3, 4 ], 5, [ 6, 7 ], [ 8, [ 9, 10 ] ] ])
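The depth-limited flattening described above can be sketched in plain JavaScript (a hypothetical `flatten` helper, not the AQL implementation):

```js
// Sketch of FLATTEN(list, depth): expand nested lists up to `depth` levels.
// `depth` defaults to 1, matching the documented behavior; non-list
// elements are added as they are.
function flatten(list, depth) {
  if (depth === undefined) {
    depth = 1;
  }
  var result = [];
  list.forEach(function (element) {
    if (Array.isArray(element) && depth > 0) {
      result = result.concat(flatten(element, depth - 1));
    } else {
      result.push(element);
    }
  });
  return result;
}

flatten([ 1, 2, [ 3, 4 ], 5, [ 6, 7 ], [ 8, [ 9, 10 ] ] ]);
// → [ 1, 2, 3, 4, 5, 6, 7, 8, [ 9, 10 ] ]
```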
@@ -523,7 +523,7 @@ AQL supports the following functions to operate on list values:
- *LAST(list)*: Returns the last element in *list* or *null* if the
  list is empty.

-- *NTH(list, position)*: Returns the list element at position @FA{position}.
+- *NTH(list, position)*: Returns the list element at position *position*.
  Positions start at 0. If *position* is negative or beyond the upper bound of the list
  specified by *list*, then *null* will be returned.
@@ -536,11 +536,11 @@ AQL supports the following functions to operate on list values:
- *SLICE(list, start, length)*: Extracts a slice of the list specified
  by *list*. The extraction will start at list element with position *start*.
  Positions start at 0. Up to *length* elements will be extracted. If *length* is
-  not specified, all list elements starting at @FA{start} will be returned.
+  not specified, all list elements starting at *start* will be returned.
  If *start* is negative, it can be used to indicate positions from the end of the
  list.

-Examples:
+@EXAMPLES:

    SLICE([ 1, 2, 3, 4, 5 ], 0, 1)
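The slice semantics (negative start, optional length) can be sketched in plain JavaScript (a hypothetical `slice` helper, not the AQL implementation):

```js
// Sketch of SLICE(list, start, length): a negative `start` counts from
// the end of the list; when `length` is omitted, everything from `start`
// on is returned.
function slice(list, start, length) {
  if (start < 0) {
    start = Math.max(list.length + start, 0);
  }
  if (length === undefined) {
    return list.slice(start);
  }
  return list.slice(start, start + length);
}

slice([ 1, 2, 3, 4, 5 ], 0, 1); // → [ 1 ]
slice([ 1, 2, 3, 4, 5 ], -2);   // → [ 4, 5 ]
```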
@@ -573,7 +573,7 @@ AQL supports the following functions to operate on list values:
  Note: No duplicates will be removed. In order to remove duplicates, please use either
  the *UNION_DISTINCT* function or apply *UNIQUE* on the result of *UNION*.

-Example:
+@EXAMPLES

    RETURN UNION(
      [ 1, 2, 3 ],
@@ -638,7 +638,7 @@ AQL supports the following functions to operate on document values:
  The *examples* must be a list of 1..n example documents, with any number of attributes
  each. Note: specifying an empty list of examples is not allowed.

-Example usage:
+@EXAMPLE

    RETURN MATCHES(
      { "test" : 1 }, [
@@ -850,7 +850,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  returns a list of paths through the graph defined by the nodes in the collection
  *vertexcollection* and edges in the collection *edgecollection*. For each vertex
  in *vertexcollection*, it will determine the paths through the graph depending on the
-  value of @FA{direction}:
+  value of *direction*:
  - *"outbound"*: Follow all paths that start at the current vertex and lead to another vertex
  - *"inbound"*: Follow all paths that lead from another vertex to the current vertex
  - *"any"*: Combination of *"outbound"* and *"inbound"*
@@ -865,7 +865,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  - *source*: start vertex of path
  - *destination*: destination vertex of path

-Example calls:
+@EXAMPLES

    PATHS(friends, friendrelations, "outbound", false)
@@ -962,9 +962,9 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  - *vertex*: The vertex at the traversal point
  - *path*: The path history for the traversal point. The path is a document with the
    attributes *vertices* and *edges*, which are both lists. Note that *path* is only present
-    in the result if the *paths* attribute is set in the @FA{options}
+    in the result if the *paths* attribute is set in the *options*

-Example calls:
+@EXAMPLES

    TRAVERSAL(friends, friendrelations, "friends/john", "outbound", {
      strategy: "depthfirst",
@@ -1021,7 +1021,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.

- *TRAVERSAL_TREE(vertexcollection, edgecollection, startVertex, direction, connectName, options)*:
  Traverses the graph described by *vertexcollection* and *edgecollection*,
-  starting at the vertex identified by id @FA{startVertex} and creates a hierarchical result.
+  starting at the vertex identified by id *startVertex* and creates a hierarchical result.
  Vertex connectivity is established by inserting an attribute which has the name specified via
  the *connectName* parameter. Connected vertices will be placed in this attribute as a
  list.
@@ -1030,7 +1030,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  be set up in a way that resembles a depth-first, pre-order visitation result. Thus, the
  *strategy* and *order* attributes of the *options* attribute will be ignored.

-Example calls:
+@EXAMPLES

    TRAVERSAL_TREE(friends, friendrelations, "friends/john", "outbound", "likes", {
      itemOrder: "forward"
@@ -1049,10 +1049,10 @@ This query is deprecated and will be removed soon.
Please use [Graph operations](../Aql/GraphOperations.md) instead.

- *SHORTEST_PATH(vertexcollection, edgecollection, startVertex, endVertex, direction, options)*:
-  Determines the first shortest path from the @FA{startVertex} to the *endVertex*.
+  Determines the first shortest path from the *startVertex* to the *endVertex*.
  Both vertices must be present in the vertex collection specified in *vertexcollection*,
  and any connecting edges must be present in the collection specified by *edgecollection*.
-  Vertex connectivity is specified by the @FA{direction} parameter:
+  Vertex connectivity is specified by the *direction* parameter:
  - *"outbound"*: Vertices are connected in *_from* to *_to* order
  - *"inbound"*: Vertices are connected in *_to* to *_from* order
  - *"any"*: Vertices are connected in both *_to* to *_from* and in
@@ -1114,9 +1114,9 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  - *vertex*: The vertex at the traversal point
  - *path*: The path history for the traversal point. The path is a document with the
    attributes *vertices* and *edges*, which are both lists. Note that *path* is only present
-    in the result if the *paths* attribute is set in the @FA{options}.
+    in the result if the *paths* attribute is set in the *options*.

-Example calls:
+@EXAMPLES

    SHORTEST_PATH(cities, motorways, "cities/CGN", "cities/MUC", "outbound", {
      paths: true
@@ -1160,7 +1160,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  To not restrict the result to specific connections, *edgeexamples* should be left
  unspecified.

-Example calls:
+@EXAMPLES

    EDGES(friendrelations, "friends/john", "outbound")
    EDGES(friendrelations, "friends/john", "any", [ { "$label": "knows" } ])
@@ -1182,7 +1182,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  To not restrict the result to specific connections, *edgeexamples* should be left
  unspecified.

-Example calls:
+@EXAMPLES

    NEIGHBORS(friends, friendrelations, "friends/john", "outbound")
    NEIGHBORS(users, usersrelations, "users/john", "any", [ { "$label": "recommends" } ] )
@@ -1221,7 +1221,7 @@ function categories:
  found, *null* will be returned. This function also allows *id* to be a list of ids.
  In this case, the function will return a list of all documents that could be found.

-Examples:
+@EXAMPLES:

    DOCUMENT(users, "users/john")
    DOCUMENT(users, "john")
@@ -1239,10 +1239,10 @@ function categories:
    DOCUMENT([ "users/john", "users/amy" ])

- *SKIPLIST(collection, condition, skip, limit)*: Return all documents
-  from a skiplist index on collection *collection* that match the specified @FA{condition}.
+  from a skiplist index on collection *collection* that match the specified *condition*.
  This is a shortcut method to use a skiplist index for retrieving specific documents in
  indexed order. The skiplist index supports equality and less than/greater than queries. The
-  @FA{skip} and *limit* parameters are optional but can be specified to further limit the
+  *skip* and *limit* parameters are optional but can be specified to further limit the
  results:

    SKIPLIST(test, { created: [[ '>', 0 ]] }, 0, 100)
@@ -12,9 +12,9 @@ When a fulltext index exists, it can be queried using a fulltext query.

!SUBSECTION Fulltext
<!-- js/common/modules/org/arangodb/arango-collection-common.js-->
-@startDocuBlock simple-query-fulltext
+@startDocuBlock collectionFulltext

-!SUBSECTION Fulltext query syntax:
+!SUBSECTION Fulltext Syntax:

In the simplest form, a fulltext query contains just the sought word. If
multiple search words are given in a query, they should be separated by commas.
@@ -11,8 +11,8 @@ modify lots of documents in a collection.
All methods can optionally be restricted to a specific number of operations.
However, if a limit is specified but is less than the number of matches, it
will be undefined which of the matching documents will get removed/modified.
[Remove by Example](../Documents/DocumentMethods.html#remove_by_example),
[Replace by Example](../Documents/DocumentMethods.html#replace_by_example) and
[Update by Example](../Documents/DocumentMethods.html#update_by_example)
are described with examples in the subchapter
[Collection Methods](../Documents/DocumentMethods.md).
@@ -8,8 +8,8 @@ MySQL.

*skip* used together with *limit* can be used to implement pagination.
The *skip* operator skips over the first n documents. So, in order to create
-result pages with 10 result documents per page, you can use `skip(n *
-10).limit(10)` to access the 10 documents on the n.th page. This result should
+result pages with 10 result documents per page, you can use *skip(n *
+10).limit(10)* to access the 10 documents on the n.th page. This result should
be sorted, so that the pagination works in a predictable way.
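The skip/limit arithmetic above can be sketched in plain JavaScript (a hypothetical `pageWindow` helper; pages are treated as 0-based):

```js
// For page n with pageSize documents per page, skip the first
// n * pageSize documents and take the next pageSize, as in
// skip(n * pageSize).limit(pageSize).
function pageWindow(n, pageSize) {
  return { skip: n * pageSize, limit: pageSize };
}

pageWindow(2, 10); // → { skip: 20, limit: 10 }
```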
!SUBSECTION Limit
@@ -28,7 +28,7 @@ whether the delayed synchronization had kicked in or not.
To ensure durability of transactions on a collection that have the *waitForSync*
property set to *false*, you can set the *waitForSync* attribute of the object
that is passed to *executeTransaction*. This will force a synchronization of the
-transaction to disk even for collections that have *waitForSync set to *false*:
+transaction to disk even for collections that have *waitForSync* set to *false*:

    db._executeTransaction({
      collections: {
@@ -29,7 +29,7 @@ from the collection as usual. However, as the collection is added lazily, there
isolation from other concurrent operations or transactions. Reads from such
collections are potentially non-repeatable.

-Example:
+@EXAMPLES

    db._executeTransaction({
      collections: {
@@ -6,13 +6,14 @@ transactions.
Transactions in ArangoDB are atomic, consistent, isolated, and durable (*ACID*).

These *ACID* properties provide the following guarantees:
-- The *atomicity* priniciple makes transactions either complete in their
+* The *atomicity* principle makes transactions either complete in their
  entirety or have no effect at all.
-- The *consistency* principle ensures that no constraints or other invariants
+* The *consistency* principle ensures that no constraints or other invariants
  will be violated during or after any transaction.
-- The *isolation* property will hide the modifications of a transaction from
+* The *isolation* property will hide the modifications of a transaction from
  other transactions until the transaction commits.
-- Finally, the *durability* proposition makes sure that operations from
+* Finally, the *durability* proposition makes sure that operations from
  transactions that have committed will be made persistent. The amount of
  transaction durability is configurable in ArangoDB, as is the durability
  on collection level.
@@ -18,7 +18,9 @@ in ArangoDB. Instead, a transaction in ArangoDB is started by providing a
description of the transaction to the *db._executeTransaction* Javascript
function:

-    db._executeTransaction(description);
+```js
+db._executeTransaction(description);
+```

This function will then automatically start a transaction, execute all required
data retrieval and/or modification operations, and at the end automatically
@@ -45,33 +47,36 @@ Collections for a transaction are declared by providing them in the *collections*
attribute of the object passed to the *_executeTransaction* function. The
*collections* attribute has the sub-attributes *read* and *write*:

-    db._executeTransaction({
-      collections: {
-        write: [ "users", "logins" ],
-        read: [ "recommendations" ]
-      },
-      ...
-    });
+```js
+db._executeTransaction({
+  collections: {
+    write: [ "users", "logins" ],
+    read: [ "recommendations" ]
+  },
+  ...
+});
+```

*read* and *write* are optional attributes, and only need to be specified if
the operations inside the transaction demand it.

The contents of *read* or *write* can each be lists with collection names or a
single collection name (as a string):

-    db._executeTransaction({
-      collections: {
-        write: "users",
-        read: "recommendations"
-      },
-      ...
-    });
+```js
+db._executeTransaction({
+  collections: {
+    write: "users",
+    read: "recommendations"
+  },
+  ...
+});
+```

Note that it is currently optional to specify collections for read-only access.
Even without specifying them, it is still possible to read from such collections
from within a transaction, but with relaxed isolation. Please refer to
-@ref TransactionsLocking for more details.
+[Transactions Locking](../Transactions/LockingAndIsolation.md) for more details.

!SUBSECTION Declaration of data modification and retrieval operations
@@ -79,42 +84,47 @@ All data modification and retrieval operations that are to be executed inside
the transaction need to be specified in a Javascript function, using the *action*
attribute:

-    db._executeTransaction({
-      collections: {
-        write: "users"
-      },
-      action: function () {
-        // all operations go here
-      }
-    });
+```js
+db._executeTransaction({
+  collections: {
+    write: "users"
+  },
+  action: function () {
+    // all operations go here
+  }
+});
+```

Any valid Javascript code is allowed inside *action* but the code may only
access the collections declared in *collections*.
*action* may be a Javascript function as shown above, or a string representation
of a Javascript function:

-    db._executeTransaction({
-      collections: {
-        write: "users"
-      },
-      action: "function () { doSomething(); }"
-    });
+```js
+db._executeTransaction({
+  collections: {
+    write: "users"
+  },
+  action: "function () { doSomething(); }"
+});
+```

Please note that any operations specified in *action* will be executed on the
server, in a separate scope. Variables will be bound late. Accessing any Javascript
variables defined on the client-side or in some other server context from inside
a transaction may not work.
Instead, any variables used inside *action* should be defined inside *action* itself:

-    db._executeTransaction({
-      collections: {
-        write: "users"
-      },
-      action: function () {
-        var db = require(...).db;
-        db.users.save({ ... });
-      }
-    });
+```js
+db._executeTransaction({
+  collections: {
+    write: "users"
+  },
+  action: function () {
+    var db = require(...).db;
+    db.users.save({ ... });
+  }
+});
+```

When the code inside the *action* attribute is executed, the transaction is
already started and all required locks have been acquired. When the code inside
@@ -124,18 +134,20 @@ There is no explicit commit command.
To make a transaction abort and roll back all changes, an exception needs to
be thrown and not caught inside the transaction:

-    db._executeTransaction({
-      collections: {
-        write: "users"
-      },
-      action: function () {
-        var db = require("internal").db;
-        db.users.save({ _key: "hello" });
-
-        // will abort and roll back the transaction
-        throw "doh!";
-      }
-    });
+```js
+db._executeTransaction({
+  collections: {
+    write: "users"
+  },
+  action: function () {
+    var db = require("internal").db;
+    db.users.save({ _key: "hello" });
+
+    // will abort and roll back the transaction
+    throw "doh!";
+  }
+});
+```

There is no explicit abort or roll back command.
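The abort-by-exception behavior can be sketched with a hypothetical in-memory store in plain JavaScript (not the ArangoDB API): changes are applied to a working copy, which is committed only if the action function returns without throwing.

```js
// Minimal sketch: run `action` against a deep copy of the data and commit
// the copy only if no exception escapes; otherwise the original state is
// kept, i.e. the transaction is rolled back.
function executeTransaction(store, action) {
  var workingCopy = JSON.parse(JSON.stringify(store.data));
  try {
    action(workingCopy);
    store.data = workingCopy; // commit
  } catch (err) {
    // exception escaped: discard the working copy (roll back)
  }
}

var store = { data: { users: [] } };
executeTransaction(store, function (data) {
  data.users.push({ _key: "hello" });
  throw "doh!"; // aborts: the push above is rolled back
});
store.data.users.length; // → 0
```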
@@ -143,18 +155,20 @@ As mentioned earlier, a transaction will commit automatically when the end of
the *action* function is reached and no exception has been thrown. In this
case, the user can return any legal Javascript value from the function:

-    db._executeTransaction({
-      collections: {
-        write: "users"
-      },
-      action: function () {
-        var db = require("internal").db;
-        db.users.save({ _key: "hello" });
-
-        // will commit the transaction and return the value "hello"
-        return "hello";
-      }
-    });
+```js
+db._executeTransaction({
+  collections: {
+    write: "users"
+  },
+  action: function () {
+    var db = require("internal").db;
+    db.users.save({ _key: "hello" });
+
+    // will commit the transaction and return the value "hello"
+    return "hello";
+  }
+});
+```

!SUBSECTION Examples
@@ -165,108 +179,114 @@ The *c1* collection needs to be declared in the *write* attribute of the
The *action* attribute contains the actual transaction code to be executed.
This code contains all data modification operations (3 in this example).

-    // setup
-    db._create("c1");
-
-    db._executeTransaction({
-      collections: {
-        write: [ "c1" ]
-      },
-      action: function () {
-        var db = require("internal").db;
-        db.c1.save({ _key: "key1" });
-        db.c1.save({ _key: "key2" });
-        db.c1.save({ _key: "key3" });
-      }
-    });
+```js
+// setup
+db._create("c1");
+
+db._executeTransaction({
+  collections: {
+    write: [ "c1" ]
+  },
+  action: function () {
+    var db = require("internal").db;
+    db.c1.save({ _key: "key1" });
+    db.c1.save({ _key: "key2" });
+    db.c1.save({ _key: "key3" });
+  }
+});
+
+db.c1.count(); // 3
+```

Aborting the transaction by throwing an exception in the *action* function
will revert all changes, as if the transaction never happened:

-    // setup
-    db._create("c1");
-
-    db._executeTransaction({
-      collections: {
-        write: [ "c1" ]
-      },
-      action: function () {
-        var db = require("internal").db;
-        db.c1.save({ _key: "key1" });
-        db.c1.count(); // 1
-
-        db.c1.save({ _key: "key2" });
-        db.c1.count(); // 2
-
-        throw "doh!";
-      }
-    });
-
-    db.c1.count(); // 0
+```js
+// setup
+db._create("c1");
+
+db._executeTransaction({
+  collections: {
+    write: [ "c1" ]
+  },
+  action: function () {
+    var db = require("internal").db;
+    db.c1.save({ _key: "key1" });
+    db.c1.count(); // 1
+
+    db.c1.save({ _key: "key2" });
+    db.c1.count(); // 2
+
+    throw "doh!";
+  }
+});
+
+db.c1.count(); // 0
+```

The automatic rollback is also executed when an internal exception is thrown
at some point during transaction execution:

-    // setup
-    db._create("c1");
-
-    db._executeTransaction({
-      collections: {
-        write: [ "c1" ]
-      },
-      action: function () {
-        var db = require("internal").db;
-        db.c1.save({ _key: "key1" });
-
-        // will throw a duplicate key error, not explicitly requested by the user
-        db.c1.save({ _key: "key1" });
-
-        // we'll never get here...
-      }
-    });
-
-    db.c1.count(); // 0
+```js
+// setup
+db._create("c1");
+
+db._executeTransaction({
+  collections: {
+    write: [ "c1" ]
+  },
+  action: function () {
+    var db = require("internal").db;
+    db.c1.save({ _key: "key1" });
+
+    // will throw a duplicate key error, not explicitly requested by the user
+    db.c1.save({ _key: "key1" });
+
+    // we'll never get here...
+  }
+});
+
+db.c1.count(); // 0
+```

As required by the *consistency* principle, aborting or rolling back a
transaction will also restore secondary indexes to the state at transaction
start. The following example using a cap constraint should illustrate that:

-    // setup
-    db._create("c1");
-
-    // limit the number of documents to 3
-    db.c1.ensureCapConstraint(3);
-
-    // insert 3 documents
-    db.c1.save({ _key: "key1" });
-    db.c1.save({ _key: "key2" });
-    db.c1.save({ _key: "key3" });
-
-    // we now have these keys: [ "key1", "key2", "key3" ]
-    // this will push out key1
-    db.c1.save({ _key: "key4" });
-
-    db._executeTransaction({
-      collections: {
-        write: [ "c1" ]
-      },
-      action: function () {
-        var db = require("internal").db;
-        // this will push out key2. we now have keys [ "key3", "key4", "key5" ]
-        db.c1.save({ _key: "key5" });
-
-        // will abort the transaction
-        throw "doh!"
-      }
-    });
-
-    // we now have these keys back: [ "key2", "key3", "key4" ]
+```js
+// setup
+db._create("c1");
+
+// limit the number of documents to 3
+db.c1.ensureCapConstraint(3);
+
+// insert 3 documents
+db.c1.save({ _key: "key1" });
+db.c1.save({ _key: "key2" });
+db.c1.save({ _key: "key3" });
+
+// we now have these keys: [ "key1", "key2", "key3" ]
+// this will push out key1
+db.c1.save({ _key: "key4" });
+
+db._executeTransaction({
+  collections: {
+    write: [ "c1" ]
+  },
+  action: function () {
+    var db = require("internal").db;
+    // this will push out key2. we now have keys [ "key3", "key4", "key5" ]
+    db.c1.save({ _key: "key5" });
+
+    // will abort the transaction
+    throw "doh!"
+  }
+});
+
+// we now have these keys back: [ "key2", "key3", "key4" ]
+```

!SUBSECTION Cross-collection transactions
@@ -274,50 +294,53 @@ There's also the possibility to run a transaction across multiple collections.
In this case, multiple collections need to be declared in the *collections*
attribute, e.g.:

-    // setup
-    db._create("c1");
-    db._create("c2");
-
-    db._executeTransaction({
-      collections: {
-        write: [ "c1", "c2" ]
-      },
-      action: function () {
-        var db = require("internal").db;
-        db.c1.save({ _key: "key1" });
-        db.c2.save({ _key: "key2" });
-      }
-    });
-
-    db.c1.count(); // 1
-    db.c2.count(); // 1
+```js
+// setup
+db._create("c1");
+db._create("c2");
+
+db._executeTransaction({
+  collections: {
+    write: [ "c1", "c2" ]
+  },
+  action: function () {
+    var db = require("internal").db;
+    db.c1.save({ _key: "key1" });
+    db.c2.save({ _key: "key2" });
+  }
+});
+
+db.c1.count(); // 1
+db.c2.count(); // 1
+```

Again, throwing an exception from inside the *action* function will make the
transaction abort and roll back all changes in all collections:

-    // setup
-    db._create("c1");
-    db._create("c2");
-
-    db._executeTransaction({
-      collections: {
-        write: [ "c1", "c2" ]
-      },
-      action: function () {
-        var db = require("internal").db;
-        for (var i = 0; i < 100; ++i) {
-          db.c1.save({ _key: "key" + i });
-          db.c2.save({ _key: "key" + i });
-        }
-
-        db.c1.count(); // 100
-        db.c2.count(); // 100
-
-        // abort
-        throw "doh!"
-      }
-    });
-
-    db.c1.count(); // 0
-    db.c2.count(); // 0
+```js
+// setup
+db._create("c1");
+db._create("c2");
+
+db._executeTransaction({
+  collections: {
+    write: [ "c1", "c2" ]
+  },
+  action: function () {
+    var db = require("internal").db;
+    for (var i = 0; i < 100; ++i) {
+      db.c1.save({ _key: "key" + i });
+      db.c2.save({ _key: "key" + i });
+    }
+
+    db.c1.count(); // 100
+    db.c2.count(); // 100
+
+    // abort
+    throw "doh!"
+  }
+});
+
+db.c1.count(); // 0
+db.c2.count(); // 0
+```
@@ -566,8 +566,8 @@ unittests-shell-server-ahuacatl:
### @brief SHELL CLIENT TESTS
################################################################################

-UNITTESTS_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-changeMode.js)
-UNITTESTS_NO_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-noChangeMode.js)
+UNITTESTS_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-changeMode-noncluster.js)
+UNITTESTS_NO_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-noChangeMode-noncluster.js)

.PHONY: unittests-shell-client-readonly

unittests-shell-client-readonly:
@@ -56,8 +56,24 @@ V8Job::V8Job (TRI_vocbase_t* vocbase,
    _vocbase(vocbase),
    _v8Dealer(v8Dealer),
    _command(command),
-    _parameters(parameters),
+    _parameters(nullptr),
    _canceled(0) {
+
+  if (parameters != nullptr) {
+    // create our own copy of the parameters
+    _parameters = TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, parameters);
+  }
}

+////////////////////////////////////////////////////////////////////////////////
+/// @brief destroys a V8 job
+////////////////////////////////////////////////////////////////////////////////
+
+V8Job::~V8Job () {
+  if (_parameters != nullptr) {
+    TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, _parameters);
+    _parameters = nullptr;
+  }
+}
+
// -----------------------------------------------------------------------------
@@ -76,7 +92,7 @@ Job::JobType V8Job::type () {
/// {@inheritDoc}
////////////////////////////////////////////////////////////////////////////////

-const string& V8Job::queue () {
+string const& V8Job::queue () {
  static const string queue = "STANDARD";
  return queue;
}
@@ -90,11 +106,10 @@ Job::status_t V8Job::work () {
    return status_t(JOB_DONE);
  }

-  ApplicationV8::V8Context* context
-    = _v8Dealer->enterContext(_vocbase, 0, true, false);
+  ApplicationV8::V8Context* context = _v8Dealer->enterContext(_vocbase, nullptr, true, false);

  // note: the context might be 0 in case of shut-down
-  if (context == 0) {
+  if (context == nullptr) {
    return status_t(JOB_DONE);
  }
@@ -119,7 +134,7 @@ Job::status_t V8Job::work () {
  }

  v8::Handle<v8::Value> fArgs;
-  if (_parameters != 0) {
+  if (_parameters != nullptr) {
    fArgs = TRI_ObjectJson(_parameters);
  }
  else {
@@ -64,6 +64,12 @@ namespace triagens {
                std::string const&,
                TRI_json_t const*);

+////////////////////////////////////////////////////////////////////////////////
+/// @brief destroys a V8 job
+////////////////////////////////////////////////////////////////////////////////
+
+        ~V8Job ();
+
// -----------------------------------------------------------------------------
// --SECTION--                                                       Job methods
// -----------------------------------------------------------------------------
@ -80,7 +86,7 @@ namespace triagens {
|
|||
/// {@inheritDoc}
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
const std::string& queue ();
|
||||
std::string const& queue ();
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// {@inheritDoc}
|
||||
|
@ -140,7 +146,7 @@ namespace triagens {
|
|||
/// @brief paramaters
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
TRI_json_t const* _parameters;
|
||||
TRI_json_t* _parameters;
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief cancel flag
|
||||
|
|
|
@@ -65,7 +65,7 @@ V8TimerTask::V8TimerTask (string const& id,
    _parameters(parameters),
    _created(TRI_microtime()) {

-  TRI_ASSERT(vocbase != 0);
+  TRI_ASSERT(vocbase != nullptr);

  // increase reference counter for the database used
  TRI_UseVocBase(_vocbase);

@@ -79,7 +79,7 @@ V8TimerTask::~V8TimerTask () {
  // decrease reference counter for the database used
  TRI_ReleaseVocBase(_vocbase);

-  if (_parameters != 0) {
+  if (_parameters != nullptr) {
    TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, _parameters);
  }
}
@@ -51,7 +51,7 @@ struct TRI_datafile_s;
////////////////////////////////////////////////////////////////////////////////

typedef enum {
-  TRI_BARRIER_ELEMENT,
+  TRI_BARRIER_ELEMENT = 1,
  TRI_BARRIER_DATAFILE_DROP_CALLBACK,
  TRI_BARRIER_DATAFILE_RENAME_CALLBACK,
  TRI_BARRIER_COLLECTION_UNLOAD_CALLBACK,
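The hunk above changes the first barrier type so the enum starts at 1 rather than 0. A plausible motivation (my reading, not stated in the commit) is that zero-initialized memory can then never be mistaken for a valid barrier type. A minimal sketch with hypothetical stand-in names (not the real ArangoDB identifiers):

```cpp
#include <cassert>
#include <cstring>

// Hypothetical stand-ins for the barrier enum above; starting at 1 means a
// zero-filled struct can never alias a valid barrier type.
typedef enum {
  BARRIER_ELEMENT = 1,  // first valid value is 1, not 0
  BARRIER_DATAFILE_DROP_CALLBACK,
  BARRIER_COLLECTION_UNLOAD_CALLBACK
} barrier_type_e;

struct barrier_t {
  barrier_type_e _type;
};

// simulates reading the type member out of zeroed (e.g. calloc'ed) memory
barrier_type_e typeOfZeroedBarrier () {
  barrier_t b;
  std::memset(&b, 0, sizeof(b));
  return b._type;
}
```

With this layout, code that sees a barrier whose `_type` is 0 can treat it as uninitialized instead of silently handling it as the first enum value.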
@@ -67,7 +67,8 @@ static int const CLEANUP_INDEX_ITERATIONS = 5;
/// @brief checks all datafiles of a collection
////////////////////////////////////////////////////////////////////////////////

-static void CleanupDocumentCollection (TRI_document_collection_t* document) {
+static void CleanupDocumentCollection (TRI_vocbase_col_t* collection,
+                                       TRI_document_collection_t* document) {
  bool unloadChecked = false;

  // loop until done

@@ -121,10 +122,21 @@ static void CleanupDocumentCollection (TRI_document_collection_t* document) {
        // we must release the lock temporarily to check if the collection is fully collected
        TRI_UnlockSpin(&container->_lock);

+        bool isDeleted = false;
+
        // must not hold the spin lock while querying the collection
        if (! TRI_IsFullyCollectedDocumentCollection(document)) {
-          // collection is not fully collected - postpone the unload
-          return;
+          // if there is still some collection to perform, check if the collection was deleted already
+          if (TRI_TRY_READ_LOCK_STATUS_VOCBASE_COL(collection)) {
+            isDeleted = (collection->_status == TRI_VOC_COL_STATUS_DELETED);
+            TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
+          }
+
+          if (! isDeleted) {
+            // collection is not fully collected and still undeleted - postpone the unload
+            return;
+          }
+          // if deleted, then we may unload / delete
        }

        unloadChecked = true;

@@ -248,26 +260,21 @@ void TRI_CleanupVocBase (void* data) {
    // check if we can get the compactor lock exclusively
    if (TRI_CheckAndLockCompactorVocBase(vocbase)) {
-      size_t i, n;
-
      // copy all collections
      TRI_READ_LOCK_COLLECTIONS_VOCBASE(vocbase);
      TRI_CopyDataVectorPointer(&collections, &vocbase->_collections);
      TRI_READ_UNLOCK_COLLECTIONS_VOCBASE(vocbase);

-      n = collections._length;
+      size_t const n = collections._length;

-      for (i = 0; i < n; ++i) {
-        TRI_vocbase_col_t* collection;
-        TRI_document_collection_t* document;
-
-        collection = (TRI_vocbase_col_t*) collections._buffer[i];
+      for (size_t i = 0; i < n; ++i) {
+        TRI_vocbase_col_t* collection = static_cast<TRI_vocbase_col_t*>(collections._buffer[i]);

        TRI_READ_LOCK_STATUS_VOCBASE_COL(collection);

-        document = collection->_collection;
+        TRI_document_collection_t* document = collection->_collection;

-        if (document == NULL) {
+        if (document == nullptr) {
          TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
          continue;
        }

@@ -283,7 +290,7 @@ void TRI_CleanupVocBase (void* data) {
          document->cleanupIndexes(document);
        }

-        CleanupDocumentCollection(document);
+        CleanupDocumentCollection(collection, document);
      }

      TRI_UnlockCompactorVocBase(vocbase);
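The cleanup hunks above add a non-blocking status check: before postponing an unload, the thread tries to take the collection's status lock without waiting, and only if it succeeds does it look at whether the collection was already deleted. A simplified sketch of that pattern using `std::mutex::try_lock` (the names and types here are hypothetical stand-ins, not the real `TRI_*` macros):

```cpp
#include <cassert>
#include <mutex>

// Hypothetical mirror of the try-lock check in CleanupDocumentCollection.
enum col_status_e { STATUS_LOADED = 1, STATUS_DELETED = 2 };

struct collection_t {
  std::mutex _statusLock;
  col_status_e _status = STATUS_LOADED;
};

// returns true when the unload may proceed even though compaction of the
// collection is not yet finished (i.e. the collection was already deleted)
bool mayUnloadAnyway (collection_t& collection) {
  bool isDeleted = false;
  if (collection._statusLock.try_lock()) {  // never block the cleanup thread
    isDeleted = (collection._status == STATUS_DELETED);
    collection._statusLock.unlock();
  }
  // if the lock is contended we conservatively report "not deleted"
  // and the caller postpones the unload until the next iteration
  return isDeleted;
}
```

The design point is that the cleanup thread must never block on the status lock; when it cannot acquire it, postponing the unload to the next pass is always safe.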
@@ -1188,11 +1188,8 @@ void TRI_InitMarkerDatafile (char* marker,
int TRI_WriteInitialHeaderMarkerDatafile (TRI_datafile_t* datafile,
                                          TRI_voc_fid_t fid,
                                          TRI_voc_size_t maximalSize) {
-  TRI_df_marker_t* position;
-  TRI_df_header_marker_t header;
-  int res;
-
  // create the header
+  TRI_df_header_marker_t header;
  TRI_InitMarkerDatafile((char*) &header, TRI_DF_MARKER_HEADER, sizeof(TRI_df_header_marker_t));
  header.base._tick = (TRI_voc_tick_t) fid;

@@ -1201,7 +1198,8 @@ int TRI_WriteInitialHeaderMarkerDatafile (TRI_datafile_t* datafile,
  header._fid = fid;

  // reserve space and write header to file
-  res = TRI_ReserveElementDatafile(datafile, header.base._size, &position, 0);
+  TRI_df_marker_t* position;
+  int res = TRI_ReserveElementDatafile(datafile, header.base._size, &position, 0);

  if (res == TRI_ERROR_NO_ERROR) {
    res = TRI_WriteCrcElementDatafile(datafile, position, &header.base, false);
@@ -279,7 +279,7 @@ typedef struct TRI_datafile_s {
  bool (*sync)(const struct TRI_datafile_s* const, char const*, char const*); // syncs the datafile
  int (*truncate)(struct TRI_datafile_s* const, const off_t); // truncates the datafile to a specific length

-  int _lastError; // last (cirtical) error
+  int _lastError; // last (critical) error
  bool _full; // at least one request was rejected because there is not enough room
  bool _isSealed; // true, if footer has been written
@@ -1688,8 +1688,6 @@ static bool OpenIterator (TRI_df_marker_t const* marker,
////////////////////////////////////////////////////////////////////////////////

static int FillInternalIndexes (TRI_document_collection_t* document) {
-  TRI_ASSERT(! triagens::wal::LogfileManager::instance()->isInRecovery());
-
  int res = TRI_ERROR_NO_ERROR;

  for (size_t i = 0; i < document->_allIndexes._length; ++i) {

@@ -2320,7 +2318,7 @@ TRI_datafile_t* TRI_CreateDatafileDocumentCollection (TRI_document_collection_t*
  if (res != TRI_ERROR_NO_ERROR) {
    document->_lastError = journal->_lastError;
-    LOG_ERROR("cannot create collection header in file '%s': %s", journal->getName(journal), TRI_last_error());
+    LOG_ERROR("cannot create collection header in file '%s': %s", journal->getName(journal), TRI_errno_string(res));

    // close the journal and remove it
    TRI_CloseDatafile(journal);

@@ -2332,7 +2330,7 @@ TRI_datafile_t* TRI_CreateDatafileDocumentCollection (TRI_document_collection_t*
  TRI_col_header_marker_t cm;
  TRI_InitMarkerDatafile((char*) &cm, TRI_COL_MARKER_HEADER, sizeof(TRI_col_header_marker_t));
-  cm.base._tick = (TRI_voc_tick_t) fid;
+  cm.base._tick = static_cast<TRI_voc_tick_t>(fid);
  cm._type = (TRI_col_type_t) document->_info._type;
  cm._cid = document->_info._cid;

@@ -2697,16 +2695,11 @@ TRI_document_collection_t* TRI_OpenDocumentCollection (TRI_vocbase_t* vocbase,
  TRI_InitVocShaper(document->getShaper()); // ONLY in OPENCOLLECTION, PROTECTED by fake trx here

-  // secondary indexes must not be loaded during recovery
-  // this is because creating indexes might write attribute markers into the WAL,
-  // but the WAL is read-only at the point of recovery
-  if (! triagens::wal::LogfileManager::instance()->isInRecovery()) {
-    // fill internal indexes (this is, the edges index at the moment)
-    FillInternalIndexes(document);
+  // fill internal indexes (this is, the edges index at the moment)
+  FillInternalIndexes(document);

-    // fill user-defined secondary indexes
-    TRI_IterateIndexCollection(collection, OpenIndexIterator, collection);
-  }
+  // fill user-defined secondary indexes
+  TRI_IterateIndexCollection(collection, OpenIndexIterator, collection);

  return document;
}
@@ -478,7 +478,6 @@ bool CollectorThread::processQueuedOperations () {
      _numPendingOperations -= numOperations;

      // delete the object
      delete (*it2);

@@ -652,17 +651,12 @@ int CollectorThread::processCollectionOperations (CollectorCache* cache) {
    // finally update all datafile statistics
    LOG_TRACE("updating datafile statistics for collection '%s'", document->_info._name);
    updateDatafileStatistics(document, cache);

-    // TODO: the following assertion is only true in a running system
-    // if we just started the server, we don't know how many uncollected operations we have!!
-    // TRI_ASSERT(document->_uncollectedLogfileEntries >= cache->totalOperationsCount);
-
    document->_uncollectedLogfileEntries -= cache->totalOperationsCount;
    if (document->_uncollectedLogfileEntries < 0) {
      document->_uncollectedLogfileEntries = 0;
    }

-    cache->freeBarriers();
-
    res = TRI_ERROR_NO_ERROR;
  }
  catch (triagens::arango::Exception const& ex) {

@@ -866,7 +860,6 @@ int CollectorThread::transferMarkers (Logfile* logfile,
  if (cache != nullptr) {
    // prevent memleak
-    cache->freeBarriers();
    delete cache;
  }
@@ -106,6 +106,7 @@ namespace triagens {
          if (operations != nullptr) {
            delete operations;
          }
+          freeBarriers();
        }

////////////////////////////////////////////////////////////////////////////////

@@ -125,6 +126,7 @@ namespace triagens {
        for (auto it = barriers.begin(); it != barriers.end(); ++it) {
          TRI_FreeBarrier((*it));
        }

+        barriers.clear();
      }
@@ -42,6 +42,7 @@
#include "VocBase/server.h"
#include "Wal/AllocatorThread.h"
#include "Wal/CollectorThread.h"
+#include "Wal/RecoverState.h"
#include "Wal/Slots.h"
#include "Wal/SynchroniserThread.h"

@@ -730,6 +731,10 @@ SlotInfo LogfileManager::allocate (void const* src,
                                   uint32_t size) {
  if (! _allowWrites) {
    // no writes allowed
+#ifdef TRI_ENABLE_MAINTAINER_MODE
+    TRI_ASSERT(false);
+#endif
+
    return SlotInfo(TRI_ERROR_ARANGO_READ_ONLY);
  }
@@ -48,35 +48,10 @@ namespace triagens {
    class AllocatorThread;
    class CollectorThread;
+    struct RecoverState;
    class Slot;
    class SynchroniserThread;

-// -----------------------------------------------------------------------------
-// --SECTION-- RecoverState
-// -----------------------------------------------------------------------------
-
-////////////////////////////////////////////////////////////////////////////////
-/// @brief state that is built up when scanning a WAL logfile during recovery
-////////////////////////////////////////////////////////////////////////////////
-
-    struct RecoverState {
-      RecoverState ()
-        : collections(),
-          failedTransactions(),
-          droppedCollections(),
-          droppedDatabases(),
-          lastTick(0),
-          logfilesToCollect(0) {
-      }
-
-      std::unordered_map<TRI_voc_cid_t, TRI_voc_tick_t> collections;
-      std::unordered_map<TRI_voc_tid_t, std::pair<TRI_voc_tick_t, bool>> failedTransactions;
-      std::unordered_set<TRI_voc_cid_t> droppedCollections;
-      std::unordered_set<TRI_voc_tick_t> droppedDatabases;
-      TRI_voc_tick_t lastTick;
-      int logfilesToCollect;
-    };

// -----------------------------------------------------------------------------
// --SECTION-- LogfileManagerState
// -----------------------------------------------------------------------------

@@ -358,14 +333,6 @@ namespace triagens {
      _throttleWhenPending = value;
    }

-////////////////////////////////////////////////////////////////////////////////
-/// @brief whether or not we are in the recovery mode
-////////////////////////////////////////////////////////////////////////////////
-
-    inline bool isInRecovery () const {
-      return _inRecovery;
-    }
-
////////////////////////////////////////////////////////////////////////////////
/// @brief registers a transaction
////////////////////////////////////////////////////////////////////////////////
@@ -0,0 +1,80 @@
////////////////////////////////////////////////////////////////////////////////
/// @brief Recovery state
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014, ArangoDB GmbH, Cologne, Germany
/// @author Copyright 2011-2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////

#ifndef ARANGODB_WAL_RECOVER_STATE_H
#define ARANGODB_WAL_RECOVER_STATE_H 1

#include "Basics/Common.h"
#include "Basics/Mutex.h"
#include "VocBase/voc-types.h"

struct TRI_server_s;

namespace triagens {
  namespace wal {

// -----------------------------------------------------------------------------
// --SECTION-- RecoverState
// -----------------------------------------------------------------------------

////////////////////////////////////////////////////////////////////////////////
/// @brief state that is built up when scanning a WAL logfile during recovery
////////////////////////////////////////////////////////////////////////////////

    struct RecoverState {
      RecoverState ()
        : collections(),
          failedTransactions(),
          droppedCollections(),
          droppedDatabases(),
          lastTick(0),
          logfilesToCollect(0) {
      }

      std::unordered_map<TRI_voc_cid_t, TRI_voc_tick_t> collections;
      std::unordered_map<TRI_voc_tid_t, std::pair<TRI_voc_tick_t, bool>> failedTransactions;
      std::unordered_set<TRI_voc_cid_t> droppedCollections;
      std::unordered_set<TRI_voc_tick_t> droppedDatabases;
      TRI_voc_tick_t lastTick;
      int logfilesToCollect;
    };

  }
}

#endif

// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------

// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}"
// End:
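The commit moves `RecoverState` out of the logfile manager header into its own file. The struct itself shows how recovery bookkeeping works: while scanning logfiles, dropped collections and failed transactions are recorded so that later markers referring to them can be skipped. A simplified, self-contained sketch of that lookup (the `voc-types` aliases are reduced to plain integers here; `shouldReplay` is a hypothetical helper, not part of the real API):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <utility>

// Stand-ins for TRI_voc_cid_t / TRI_voc_tid_t / TRI_voc_tick_t
typedef uint64_t cid_t;   // collection id
typedef uint64_t tid_t;   // transaction id
typedef uint64_t tick_t;  // logfile tick

// Reduced version of the RecoverState struct added above
struct SimpleRecoverState {
  std::unordered_map<tid_t, std::pair<tick_t, bool>> failedTransactions;
  std::unordered_set<cid_t> droppedCollections;
  tick_t lastTick = 0;
  int logfilesToCollect = 0;
};

// A marker is replayed only if its collection still exists and its
// transaction is not known to have failed.
bool shouldReplay (SimpleRecoverState const& state, cid_t cid, tid_t tid) {
  if (state.droppedCollections.count(cid) > 0) {
    return false;  // collection was dropped later in the log
  }
  return state.failedTransactions.find(tid) == state.failedTransactions.end();
}
```

This is why the state is "built up when scanning": the sets are filled during a first pass over the log, then consulted when markers are actually applied.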
@@ -1,5 +1,5 @@
/*jslint indent: 2, nomen: true, maxlen: 100, vars: true, white: true, plusplus: true */
-/*global window, Backbone */
+/*global window, Backbone, $ */
(function() {
  "use strict";

@@ -17,6 +17,39 @@
      return raw.graph || raw;
    },

    addEdgeDefinition: function(edgeDefinition) {
      $.ajax(
        {
          async: false,
          type: "POST",
          url: this.urlRoot + "/" + this.get("_key") + "/edge",
          data: JSON.stringify(edgeDefinition)
        }
      );
    },

    deleteEdgeDefinition: function(edgeCollection) {
      $.ajax(
        {
          async: false,
          type: "DELETE",
          url: this.urlRoot + "/" + this.get("_key") + "/edge/" + edgeCollection
        }
      );
    },

    modifyEdgeDefinition: function(edgeDefinition) {
      $.ajax(
        {
          async: false,
          type: "PUT",
          url: this.urlRoot + "/" + this.get("_key") + "/edge/" + edgeDefinition.collection,
          data: JSON.stringify(edgeDefinition)
        }
      );
    },

    defaults: {
      name: "",
      edgeDefinitions: [],
@@ -1,5 +1,5 @@
<script id="edgeDefinitionTable.ejs" type="text/template">
-  <tr class="tableRow" id="row_newEdgeDefinitions<%= number%>">
+  <tr class="tableRow" id="row_ newEdgeDefinitions<%= number%>">
    <th class="collectionTh">Edge definitions*:</th>
    <th class="collectionTh">
      <input type="hidden" id="newEdgeDefinitions<%= number%>" value="" placeholder="Edge definitions" tabindex="-1" class="select2-offscreen">
@@ -67,7 +67,6 @@
      <div class="tile tile-graph" id="<%=graphName %>_tile">
        <div class="iconSet">
          <span class="icon_arangodb_settings2 editGraph" id="<%=graphName %>_settings" alt="Edit graph" title="Edit graph"></span>
          <span class="icon_arangodb_info" id="<%=graphName %>_info" alt="Show graph properties" title="Show graph properties"></span>
        </div>
        <span class="icon_arangodb_edge5 tile-icon"></span>
        <div class="tileBadge">
@@ -43,7 +43,7 @@
    addNewGraph: function(e) {
      e.preventDefault();
-      this.createNewGraphModal2();
+      this.createNewGraphModal();
    },

    deleteGraph: function(e) {

@@ -120,7 +91,
    },

    saveEditedGraph: function() {
      var name = $("#editGraphName").html(),
        vertexCollections = _.pluck($('#newVertexCollections').select2("data"), "text"),
        edgeDefinitions = [],
        newEdgeDefinitions = {},
        self = this,
        collection,
        from,
        to,
        index,
        edgeDefinitionElements;

      edgeDefinitionElements = $('[id^=s2id_newEdgeDefinitions]').toArray();
      edgeDefinitionElements.forEach(
        function(eDElement) {
          index = $(eDElement).attr("id");
          index = index.replace("s2id_newEdgeDefinitions", "");
          collection = _.pluck($('#s2id_newEdgeDefinitions' + index).select2("data"), "text")[0];
          if (collection && collection !== "") {
            from = _.pluck($('#s2id_newFromCollections' + index).select2("data"), "text");
            to = _.pluck($('#s2id_newToCollections' + index).select2("data"), "text");
            if (from !== 1 && to !== 1) {
              var edgeDefinition = {
                collection: collection,
                from: from,
                to: to
              };
              edgeDefinitions.push(edgeDefinition);
              newEdgeDefinitions[collection] = edgeDefinition;
            }
          }
        }
      );

      //get current edgeDefs/orphanage
      var graph = this.collection.findWhere({_key: name});
      var currentEdgeDefinitions = graph.get("edgeDefinitions");
      var currentOrphanage = graph.get("orphanCollections");
      var currentCollections = [];

      //evaluate all new, edited and deleted edge definitions
      var newEDs = [];
      var editedEDs = [];
      var deletedEDs = [];

      currentEdgeDefinitions.forEach(
        function(eD) {
          var collection = eD.collection;
          currentCollections.push(collection);
          var newED = newEdgeDefinitions[collection];
          if (newED === undefined) {
            deletedEDs.push(collection);
          } else if (JSON.stringify(newED) !== JSON.stringify(eD)) {
            editedEDs.push(collection);
          }
        }
      );
      edgeDefinitions.forEach(
        function(eD) {
          var collection = eD.collection;
          if (currentCollections.indexOf(collection) === -1) {
            newEDs.push(collection);
          }
        }
      );

      newEDs.forEach(
        function(eD) {
          graph.addEdgeDefinition(newEdgeDefinitions[eD]);
        }
      );

      editedEDs.forEach(
        function(eD) {
          graph.modifyEdgeDefinition(newEdgeDefinitions[eD]);
        }
      );

      deletedEDs.forEach(
        function(eD) {
          graph.deleteEdgeDefinition(eD);
        }
      );

      this.updateGraphManagementView();
    },

    evaluateGraphName : function(str, substr) {

@@ -442,7 +526,7 @@
      return edgeDefinitionMap;
    },

-    createNewGraphModal2: function() {
+    createNewGraphModal: function() {
      var buttons = [], collList = [], eCollList = [],
        tableContent = [], collections = this.options.collectionCollection.models;
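The `saveEditedGraph` code above classifies edge definitions into new, edited and deleted sets by comparing the graph's current definitions with the submitted ones. The classification logic is language-neutral; a compact sketch of the same three-way diff (with hypothetical simplified types, using serialized strings as a stand-in for the JSON comparison) looks like this:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Result of diffing edge-definition sets, as in saveEditedGraph above.
struct Classified {
  std::vector<std::string> added;    // in submitted, not in current
  std::vector<std::string> edited;   // in both, but definition changed
  std::vector<std::string> deleted;  // in current, not in submitted
};

// maps: edge collection name -> serialized edge definition
Classified classify (std::map<std::string, std::string> const& current,
                     std::map<std::string, std::string> const& submitted) {
  Classified result;
  // pass 1: everything currently defined is either kept, edited or deleted
  for (auto const& it : current) {
    auto found = submitted.find(it.first);
    if (found == submitted.end()) {
      result.deleted.push_back(it.first);
    } else if (found->second != it.second) {
      result.edited.push_back(it.first);
    }
  }
  // pass 2: anything submitted but unknown so far is new
  for (auto const& it : submitted) {
    if (current.find(it.first) == current.end()) {
      result.added.push_back(it.first);
    }
  }
  return result;
}
```

Each bucket then maps onto one REST call per collection, mirroring the `addEdgeDefinition` / `modifyEdgeDefinition` / `deleteEdgeDefinition` model methods added in this commit.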
@@ -261,6 +261,7 @@
      "test/specs/views/documentsViewSpec.js",
      "test/specs/views/documentViewSpec.js",
      "test/specs/views/dashboardViewSpec.js",
+      "test/specs/views/graphManagementViewSpec.js",
      "test/specs/views/newLogsViewSpec.js",
      "test/specs/views/notificationViewSpec.js",
      "test/specs/views/statisticBarViewSpec.js",
@@ -13,13 +13,13 @@
      col = new window.GraphCollection();
    });

-    /* it("parse", function () {
+    it("parse", function () {
      expect(col.model).toEqual(window.Graph);
      expect(col.comparator).toEqual("_key");
-      expect(col.url).toEqual("/_api/graph");
+      expect(col.url).toEqual("/_api/gharial");
      expect(col.parse({error: false, graphs: "blub"})).toEqual("blub");
      expect(col.parse({error: true, graphs: "blub"})).toEqual(undefined);
-    });*/
+    });
  });

});
}());
@@ -38,15 +38,14 @@
      });
    });

-    /*it("should request /_api/graph on save", function() {
+    it("should request /_api/graph on save", function() {
      ajaxVerify = function(opt) {
-        expect(opt.url).toEqual("/_api/graph");
+        expect(opt.url).toEqual("/_api/gharial");
        expect(opt.type).toEqual("POST");
      };
      model.save();
      expect($.ajax).toHaveBeenCalled();
    });
-    */

    it("should store the attributes in the model", function() {
      var id = "_graph/" + myKey,
        rev = "12345";

@@ -81,15 +80,15 @@
      expect(model.get("graph")).toBeUndefined();
    });

-    /* it("should request /_api/graph/_key on delete", function() {
+    it("should request /_api/graph/_key on delete", function() {
      model.save();
      ajaxVerify = function(opt) {
-        expect(opt.url).toEqual("/_api/graph/" + myKey);
+        expect(opt.url).toEqual("/_api/gharial/" + myKey);
        expect(opt.type).toEqual("DELETE");
      };
      model.destroy();
      expect($.ajax).toHaveBeenCalled();
-    });*/
+    });
  });
@@ -500,7 +500,7 @@
      );
    });

-    /*it("should navigate to the graphView", function () {
+    /* it("should navigate to the graphView", function () {
      spyOn(graphDummy, "render");
      simpleNavigationCheck(
        "graph",

@@ -654,14 +654,15 @@
      );
    });

-      /*it("should route to the graph management tab", function () {
+      it("should route to the graph management tab", function () {
        simpleNavigationCheck(
          "graphManagement",
          "GraphManagementView",
          "graphviewer-menu",
-          { collection: graphsDummy}
+          { collection: graphsDummy ,
+            collectionCollection : { id : 'store', fetch : jasmine.any(Function) } }
        );
-      });*/
+      });
    });

    it("should route to the applications tab", function () {
      simpleNavigationCheck(
@@ -26,7 +26,8 @@
        div.id = "content";
        document.body.appendChild(div);
        view = new window.GraphManagementView({
-          collection: graphs
+          collection: graphs,
+          collectionCollection: new window.arangoCollections()
        });
      });

@@ -98,14 +99,13 @@
        $("#newGraphEdges").val("newEdges");
        spyOn($, "ajax").andCallFake(function (opts) {
          expect(opts.type).toEqual("POST");
-          expect(opts.url).toEqual("/_api/graph");
-          expect(opts.data).toEqual(JSON.stringify({
-            _key: "newGraph",
-            vertices: "newVertices",
-            edges: "newEdges",
-            _id: "",
-            _rev: ""
-          }));
+          expect(opts.url).toEqual("/_api/gharial");
+          expect(opts.data).toEqual(JSON.stringify(
+            {
+              "name":"newGraph",
+              "edgeDefinitions":[],
+              "orphanCollections":[]
+            }));
        });
        $("#modalButton1").click();
        expect($.ajax).toHaveBeenCalled();
@@ -4065,7 +4065,16 @@ Graph.prototype._deleteEdgeDefinition = function(edgeCollection) {
    }
  );

  updateBindCollections(this);
+  db._graphs.update(
+    this.__name,
+    {
+      orphanCollections: this.__orphanCollections,
+      edgeDefinitions: this.__edgeDefinitions
+    }
+  );
};
@@ -29,6 +29,7 @@ var internal = require("internal");
var db = require("org/arangodb").db;
var jsunity = require("jsunity");
var helper = require("org/arangodb/aql-helper");
+var cluster = require("org/arangodb/cluster");
var getModifyQueryResults = helper.getModifyQueryResults;
var assertQueryError = helper.assertQueryError;

@@ -248,6 +249,11 @@ function ahuacatlRemoveSuite () {
////////////////////////////////////////////////////////////////////////////////

    testRemoveInvalid4 : function () {
+      if (cluster.isCluster()) {
+        // skip test in cluster as there are no distributed transactions yet
+        return;
+      }
+
      assertQueryError(errors.ERROR_ARANGO_DOCUMENT_NOT_FOUND.code, "FOR i iN 0..100 REMOVE CONCAT('test', TO_STRING(i)) IN @@cn", { "@cn": cn1 });
      assertEqual(100, c1.count());
    },
@@ -1186,7 +1186,8 @@ int TRI_CopyToJson (TRI_memory_zone_t* zone,
/// @brief copies a json object
////////////////////////////////////////////////////////////////////////////////

-TRI_json_t* TRI_CopyJson (TRI_memory_zone_t* zone, const TRI_json_t* const src) {
+TRI_json_t* TRI_CopyJson (TRI_memory_zone_t* zone,
+                          TRI_json_t const* src) {
  TRI_json_t* dst;
  int res;
@@ -382,7 +382,8 @@ int TRI_CopyToJson (TRI_memory_zone_t*, TRI_json_t* dst, TRI_json_t const* src);
/// @brief copies a json object
////////////////////////////////////////////////////////////////////////////////

-TRI_json_t* TRI_CopyJson (TRI_memory_zone_t*, const TRI_json_t* const);
+TRI_json_t* TRI_CopyJson (TRI_memory_zone_t*,
+                          TRI_json_t const*);

////////////////////////////////////////////////////////////////////////////////
/// @brief parses a json string