
Merge branch 'devel' of github.com:triAGENS/ArangoDB into devel

Michael Hackstein 2014-06-26 16:13:38 +02:00
commit dc7ffb4efb
37 changed files with 556 additions and 332 deletions

View File

@@ -6,8 +6,8 @@ API description is available at [Http Interface for AQL Query Cursor](../HttpAql
You can also run AQL queries from arangosh. To do so, first create an
ArangoStatement object as follows:
-    arangosh> stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
+    stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-    [object ArangoStatement]
+    [object ArangoQueryCursor]
To execute the query, use the *execute* method:
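A minimal arangosh sketch of the workflow introduced above (the printed values follow from the query itself; this is illustrative, not the documentation's own example):

```js
// create a statement and execute it; execute() returns an ArangoQueryCursor
var stmt = db._createStatement({ "query": "FOR i IN [ 1, 2 ] RETURN i * 2" });
var cursor = stmt.execute();
while (cursor.hasNext()) {
  print(cursor.next());   // prints 2, then 4
}
```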

View File

@@ -102,7 +102,7 @@ evaluation. The ternary operator expects a boolean condition as its first
operand, and it returns the result of the second operand if the condition
evaluates to true, and the third operand otherwise.
-Example:
+@EXAMPLES:
    u.age > 15 || u.active == true ? u.userId : null
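A quick way to see the operator in action is to evaluate it from arangosh; the document and values below are made up for illustration:

```js
// the age condition is true, so the second operand (u.userId) is returned
db._query(
  'LET u = { age: 20, active: false, userId: "u47" } ' +
  'RETURN u.age > 15 || u.active == true ? u.userId : null'
).toArray();   // [ "u47" ]
```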
@@ -115,7 +115,7 @@ values.
The *..* operator will produce a list of values in the defined range, with
both bounding values included.
-Example:
+@EXAMPLES
    2010..2013
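Evaluated from arangosh, the range expression expands exactly as the definition above states (both bounds included):

```js
db._query('RETURN 2010..2013').toArray();   // [ [ 2010, 2011, 2012, 2013 ] ]
```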
@@ -274,7 +274,7 @@ For string processing, AQL offers the following functions:
- *UPPER(value)*: Upper-case *value*
- *SUBSTRING(value, offset, length)*: Return a substring of *value*,
-  starting at @FA{offset} and with a maximum length of *length* characters. Offsets
+  starting at *offset* and with a maximum length of *length* characters. Offsets
  start at position 0
- *LEFT(value, LENGTH)*: Returns the *LENGTH* leftmost characters of
@@ -453,12 +453,12 @@ AQL supports the following functions to operate on list values:
  *list* is a document, returns the number of attribute keys of the document,
  regardless of their values.
-- @FN{FLATTEN(list, depth)}: Turns a list of lists into a flat list. All
+- *FLATTEN(list, depth)*: Turns a list of lists into a flat list. All
  list elements in *list* will be expanded in the result list. Non-list elements
  are added as they are. The function will recurse into sub-lists up to a depth of
  *depth*. *depth* has a default value of 1.
-Example:
+@EXAMPLES
    FLATTEN([ 1, 2, [ 3, 4 ], 5, [ 6, 7 ], [ 8, [ 9, 10 ] ] ])
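Run from arangosh, the example above behaves as described; the depth-2 variant is added here for contrast (results follow from the stated semantics):

```js
// default depth of 1: only the first level of nesting is expanded
db._query('RETURN FLATTEN([ 1, 2, [ 3, 4 ], 5, [ 6, 7 ], [ 8, [ 9, 10 ] ] ])').toArray();
// [ [ 1, 2, 3, 4, 5, 6, 7, 8, [ 9, 10 ] ] ]

// explicit depth of 2: nested sub-lists are expanded as well
db._query('RETURN FLATTEN([ 1, 2, [ 3, 4 ], 5, [ 6, 7 ], [ 8, [ 9, 10 ] ] ], 2)').toArray();
// [ [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ] ]
```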
@@ -523,7 +523,7 @@ AQL supports the following functions to operate on list values:
- *LAST(list)*: Returns the last element in *list* or *null* if the
  list is empty.
-- *NTH(list, position)*: Returns the list element at position @FA{position}.
+- *NTH(list, position)*: Returns the list element at position *position*.
  Positions start at 0. If *position* is negative or beyond the upper bound of the list
  specified by *list*, then *null* will be returned.
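A short arangosh sketch of the two functions just described (results follow directly from the definitions):

```js
// positions are zero-based; out-of-range positions yield null
db._query('RETURN NTH([ "a", "b", "c" ], 1)').toArray();   // [ "b" ]
db._query('RETURN NTH([ "a", "b", "c" ], 5)').toArray();   // [ null ]
db._query('RETURN LAST([ "a", "b", "c" ])').toArray();     // [ "c" ]
```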
@@ -536,11 +536,11 @@ AQL supports the following functions to operate on list values:
- *SLICE(list, start, length)*: Extracts a slice of the list specified
  by *list*. The extraction will start at list element with position *start*.
  Positions start at 0. Up to *length* elements will be extracted. If *length* is
-  not specified, all list elements starting at @FA{start} will be returned.
+  not specified, all list elements starting at *start* will be returned.
  If *start* is negative, it can be used to indicate positions from the end of the
  list.
-Examples:
+@EXAMPLES:
    SLICE([ 1, 2, 3, 4, 5 ], 0, 1)
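For orientation, the example above plus the two optional-argument cases described, evaluated from arangosh (results follow from the stated semantics):

```js
// extract one element starting at position 0
db._query('RETURN SLICE([ 1, 2, 3, 4, 5 ], 0, 1)').toArray();   // [ [ 1 ] ]
// omit length to take everything from position 3 onwards
db._query('RETURN SLICE([ 1, 2, 3, 4, 5 ], 3)').toArray();      // [ [ 4, 5 ] ]
// a negative start counts from the end of the list
db._query('RETURN SLICE([ 1, 2, 3, 4, 5 ], -2)').toArray();     // [ [ 4, 5 ] ]
```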
@@ -573,7 +573,7 @@ AQL supports the following functions to operate on list values:
  Note: No duplicates will be removed. In order to remove duplicates, please use either
  the *UNION_DISTINCT* function or apply *UNIQUE* to the result of *UNION*.
-Example:
+@EXAMPLES
    RETURN UNION(
      [ 1, 2, 3 ],
@@ -638,7 +638,7 @@ AQL supports the following functions to operate on document values:
  The *examples* must be a list of 1..n example documents, with any number of attributes
  each. Note: specifying an empty list of examples is not allowed.
-Example usage:
+@EXAMPLE
    RETURN MATCHES(
      { "test" : 1 }, [
@@ -850,7 +850,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  returns a list of paths through the graph defined by the nodes in the collection
  *vertexcollection* and edges in the collection *edgecollection*. For each vertex
  in *vertexcollection*, it will determine the paths through the graph depending on the
-  value of @FA{direction}:
+  value of *direction*:
  - *"outbound"*: Follow all paths that start at the current vertex and lead to another vertex
  - *"inbound"*: Follow all paths that lead from another vertex to the current vertex
  - *"any"*: Combination of *"outbound"* and *"inbound"*
@@ -865,7 +865,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  - *source*: start vertex of path
  - *destination*: destination vertex of path
-Example calls:
+@EXAMPLES
    PATHS(friends, friendrelations, "outbound", false)
@@ -962,9 +962,9 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  - *vertex*: The vertex at the traversal point
  - *path*: The path history for the traversal point. The path is a document with the
    attributes *vertices* and *edges*, which are both lists. Note that *path* is only present
-    in the result if the *paths* attribute is set in the @FA{options}
-Example calls:
+    in the result if the *paths* attribute is set in the *options*
+@EXAMPLES
    TRAVERSAL(friends, friendrelations, "friends/john", "outbound", {
      strategy: "depthfirst",
@@ -1021,7 +1021,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
- *TRAVERSAL_TREE(vertexcollection, edgecollection, startVertex, direction, connectName, options)*:
  Traverses the graph described by *vertexcollection* and *edgecollection*,
-  starting at the vertex identified by id @FA{startVertex} and creates a hierarchical result.
+  starting at the vertex identified by id *startVertex* and creates a hierarchical result.
  Vertex connectivity is established by inserting an attribute which has the name specified via
  the *connectName* parameter. Connected vertices will be placed in this attribute as a
  list.
@@ -1030,7 +1030,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  be set up in a way that resembles a depth-first, pre-order visitation result. Thus, the
  *strategy* and *order* attributes of the *options* attribute will be ignored.
-Example calls:
+@EXAMPLES
    TRAVERSAL_TREE(friends, friendrelations, "friends/john", "outbound", "likes", {
      itemOrder: "forward"
@@ -1049,10 +1049,10 @@ This query is deprecated and will be removed soon.
Please use [Graph operations](../Aql/GraphOperations.md) instead.
- *SHORTEST_PATH(vertexcollection, edgecollection, startVertex, endVertex, direction, options)*:
-  Determines the first shortest path from the @FA{startVertex} to the *endVertex*.
+  Determines the first shortest path from the *startVertex* to the *endVertex*.
  Both vertices must be present in the vertex collection specified in *vertexcollection*,
  and any connecting edges must be present in the collection specified by *edgecollection*.
-  Vertex connectivity is specified by the @FA{direction} parameter:
+  Vertex connectivity is specified by the *direction* parameter:
  - *"outbound"*: Vertices are connected in *_from* to *_to* order
  - *"inbound"*: Vertices are connected in *_to* to *_from* order
  - *"any"*: Vertices are connected in both *_to* to *_from* and in
@@ -1114,9 +1114,9 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  - *vertex*: The vertex at the traversal point
  - *path*: The path history for the traversal point. The path is a document with the
    attributes *vertices* and *edges*, which are both lists. Note that *path* is only present
-    in the result if the *paths* attribute is set in the @FA{options}.
-Example calls:
+    in the result if the *paths* attribute is set in the *options*.
+@EXAMPLES
    SHORTEST_PATH(cities, motorways, "cities/CGN", "cities/MUC", "outbound", {
      paths: true
@@ -1160,7 +1160,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  To not restrict the result to specific connections, *edgeexamples* should be left
  unspecified.
-Example calls:
+@EXAMPLES
    EDGES(friendrelations, "friends/john", "outbound")
    EDGES(friendrelations, "friends/john", "any", [ { "$label": "knows" } ])
@@ -1182,7 +1182,7 @@ Please use [Graph operations](../Aql/GraphOperations.md) instead.
  To not restrict the result to specific connections, *edgeexamples* should be left
  unspecified.
-Example calls:
+@EXAMPLES
    NEIGHBORS(friends, friendrelations, "friends/john", "outbound")
    NEIGHBORS(users, usersrelations, "users/john", "any", [ { "$label": "recommends" } ] )
@@ -1221,7 +1221,7 @@ function categories:
  found, *null* will be returned. This function also allows *id* to be a list of ids.
  In this case, the function will return a list of all documents that could be found.
-Examples:
+@EXAMPLES:
    DOCUMENT(users, "users/john")
    DOCUMENT(users, "john")
@@ -1239,10 +1239,10 @@ function categories:
    DOCUMENT([ "users/john", "users/amy" ])
- *SKIPLIST(collection, condition, skip, limit)*: Return all documents
-  from a skiplist index on collection *collection* that match the specified @FA{condition}.
+  from a skiplist index on collection *collection* that match the specified *condition*.
  This is a shortcut method to use a skiplist index for retrieving specific documents in
  indexed order. The skiplist index supports equality and less than/greater than queries. The
-  @FA{skip} and *limit* parameters are optional but can be specified to further limit the
+  *skip* and *limit* parameters are optional but can be specified to further limit the
  results:
    SKIPLIST(test, { created: [[ '>', 0 ]] }, 0, 100)
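A hedged arangosh sketch of the *DOCUMENT* lookups shown above (the *users* collection and the document keys are assumed to exist for illustration):

```js
// single lookup by document id; unknown ids yield null
db._query('RETURN DOCUMENT("users/john")').toArray();
// e.g. [ { "_id" : "users/john", "_key" : "john", /* ... */ } ]

// a list of ids returns a list containing only the documents that could be found
db._query('RETURN DOCUMENT([ "users/john", "users/amy" ])').toArray();
```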

View File

@@ -12,9 +12,9 @@ When a fulltext index exists, it can be queried using a fulltext query.
!SUBSECTION Fulltext
<!-- js/common/modules/org/arangodb/arango-collection-common.js-->
-@startDocuBlock simple-query-fulltext
+@startDocuBlock collectionFulltext
-!SUBSECTION Fulltext query syntax:
+!SUBSECTION Fulltext Syntax:
In the simplest form, a fulltext query contains just the sought word. If
multiple search words are given in a query, they should be separated by commas.
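As an illustration of the syntax just described (collection name, indexed attribute and search words are all assumptions, not taken from the documentation):

```js
// assumes a fulltext index exists on the "text" attribute of "articles"
db.articles.ensureFulltextIndex("text");
// comma-separated words: documents must contain both "database" and "distributed"
db.articles.fulltext("text", "database,distributed").toArray();
```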

View File

@@ -8,8 +8,8 @@ MySQL.
*skip* used together with *limit* can be used to implement pagination.
The *skip* operator skips over the first n documents. So, in order to create
-result pages with 10 result documents per page, you can use `skip(n *
-10).limit(10)` to access the 10 documents on the n.th page. This result should
+result pages with 10 result documents per page, you can use *skip(n *
+10).limit(10)* to access the 10 documents on the n.th page. This result should
be sorted, so that the pagination works in a predictable way.
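A sketch of the pattern described above (the *users* collection and the page number are assumptions for illustration):

```js
// fetch page n (zero-based), 10 documents per page
var n = 2;
db.users.all().skip(n * 10).limit(10).toArray();
// note: for predictable pagination the result should be sorted,
// e.g. by running an AQL query with a SORT clause instead
```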
!SUBSECTION Limit

View File

@@ -28,7 +28,7 @@ whether the delayed synchronization had kicked in or not.
To ensure durability of transactions on a collection that has the *waitForSync*
property set to *false*, you can set the *waitForSync* attribute of the object
that is passed to *executeTransaction*. This will force a synchronization of the
-transaction to disk even for collections that have *waitForSync set to *false*:
+transaction to disk even for collections that have *waitForSync* set to *false*:
    db._executeTransaction({
      collections: {
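The snippet above is cut off by the hunk boundary; a minimal complete sketch of the pattern (collection name assumed, not the documentation's own example) would be:

```js
// force the transaction to be synchronized to disk, regardless of the
// collection's own waitForSync setting
db._executeTransaction({
  collections: {
    write: "users"
  },
  waitForSync: true,
  action: function () {
    var db = require("internal").db;
    db.users.save({ name: "example" });
  }
});
```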

View File

@@ -29,7 +29,7 @@ from the collection as usual. However, as the collection is added lazily, there
isolation from other concurrent operations or transactions. Reads from such
collections are potentially non-repeatable.
-Example:
+@EXAMPLES
    db._executeTransaction({
      collections: {
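The example itself is truncated by the hunk; a sketch of the situation it illustrates (collection names assumed) is reading from a collection that was never declared up front:

```js
// "users" is declared for writing; "logins" is not declared and will be
// added lazily on first access, without full isolation guarantees
db._executeTransaction({
  collections: {
    write: "users"
  },
  action: function () {
    var db = require("internal").db;
    var count = db.logins.count();   // lazily added, read is not isolated
    db.users.save({ logins: count });
  }
});
```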

View File

@@ -6,13 +6,14 @@ transactions.
Transactions in ArangoDB are atomic, consistent, isolated, and durable (*ACID*).
These *ACID* properties provide the following guarantees:
-- The *atomicity* priniciple makes transactions either complete in their
+* The *atomicity* principle makes transactions either complete in their
  entirety or have no effect at all.
-- The *consistency* principle ensures that no constraints or other invariants
+* The *consistency* principle ensures that no constraints or other invariants
  will be violated during or after any transaction.
-- The *isolation* property will hide the modifications of a transaction from
+* The *isolation* property will hide the modifications of a transaction from
  other transactions until the transaction commits.
-- Finally, the *durability* proposition makes sure that operations from
+* Finally, the *durability* proposition makes sure that operations from
  transactions that have committed will be made persistent. The amount of
  transaction durability is configurable in ArangoDB, as is the durability
  on collection level.

View File

@@ -18,7 +18,9 @@ in ArangoDB. Instead, a transaction in ArangoDB is started by providing a
description of the transaction to the *db._executeTransaction* Javascript
function:
+```js
db._executeTransaction(description);
+```
This function will then automatically start a transaction, execute all required
data retrieval and/or modification operations, and at the end automatically
@@ -45,6 +47,7 @@ Collections for a transaction are declared by providing them in the *collections
attribute of the object passed to the *_executeTransaction* function. The
*collections* attribute has the sub-attributes *read* and *write*:
+```js
db._executeTransaction({
  collections: {
    write: [ "users", "logins" ],
@@ -52,6 +55,7 @@ attribute of the object passed to the *_executeTransaction* function. The
  },
  ...
});
+```
*read* and *write* are optional attributes, and only need to be specified if
the operations inside the transaction demand it.
@@ -59,6 +63,7 @@ the operations inside the transactions demand for it.
The contents of *read* or *write* can each be lists with collection names or a
single collection name (as a string):
+```js
db._executeTransaction({
  collections: {
    write: "users",
@@ -66,12 +71,12 @@ single collection name (as a string):
  },
  ...
});
+```
Note that it is currently optional to specify collections for read-only access.
Even without specifying them, it is still possible to read from such collections
from within a transaction, but with relaxed isolation. Please refer to
-@ref TransactionsLocking for more details.
+[Transactions Locking](../Transactions/LockingAndIsolation.md) for more details.
!SUBSECTION Declaration of data modification and retrieval operations
@@ -79,6 +84,7 @@ All data modification and retrieval operations that are to be executed inside
the transaction need to be specified in a Javascript function, using the *action*
attribute:
+```js
db._executeTransaction({
  collections: {
    write: "users"
@@ -87,25 +93,28 @@ attribute:
    // all operations go here
  }
});
+```
Any valid Javascript code is allowed inside *action* but the code may only
access the collections declared in *collections*.
*action* may be a Javascript function as shown above, or a string representation
of a Javascript function:
+```
db._executeTransaction({
  collections: {
    write: "users"
  },
  action: "function () { doSomething(); }"
});
+```
Please note that any operations specified in *action* will be executed on the
server, in a separate scope. Variables will be bound late. Accessing any Javascript
variables defined on the client-side or in some other server context from inside
a transaction may not work.
Instead, any variables used inside *action* should be defined inside *action* itself:
+```
db._executeTransaction({
  collections: {
    write: "users"
@@ -115,6 +124,7 @@ Instead, any variables used inside *action* should be defined inside *action* it
    db.users.save({ ... });
  }
});
+```
When the code inside the *action* attribute is executed, the transaction is
already started and all required locks have been acquired. When the code inside
@@ -124,6 +134,7 @@ There is no explicit commit command.
To make a transaction abort and roll back all changes, an exception needs to
be thrown and not caught inside the transaction:
+```js
db._executeTransaction({
  collections: {
    write: "users"
@@ -136,6 +147,7 @@ be thrown and not caught inside the transaction:
    throw "doh!";
  }
});
+```
There is no explicit abort or roll back command.
@@ -143,6 +155,7 @@ As mentioned earlier, a transaction will commit automatically when the end of
the *action* function is reached and no exception has been thrown. In this
case, the user can return any legal Javascript value from the function:
+```js
db._executeTransaction({
  collections: {
    write: "users"
@@ -155,6 +168,7 @@ case, the user can return any legal Javascript value from the function:
    return "hello";
  }
});
+```
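The value returned by *action* is handed back to the caller of *db._executeTransaction*, so it can be captured directly (a small sketch reusing the collection above):

```js
var result = db._executeTransaction({
  collections: {
    write: "users"
  },
  action: function () {
    // the value returned here is returned by _executeTransaction itself
    return "hello";
  }
});
// result is now "hello"
```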
!SUBSECTION Examples
@@ -165,6 +179,7 @@ The *c1* collection needs to be declared in the *write* attribute of the
The *action* attribute contains the actual transaction code to be executed.
This code contains all data modification operations (3 in this example).
+```js
// setup
db._create("c1");
@@ -179,13 +194,15 @@ This code contains all data modification operations (3 in this example).
    db.c1.save({ _key: "key3" });
  }
});
db.c1.count(); // 3
+```
Aborting the transaction by throwing an exception in the *action* function
will revert all changes, as if the transaction never happened:
+```js
// setup
db._create("c1");
@@ -206,11 +223,12 @@ will revert all changes, so as if the transaction never happened:
});
db.c1.count(); // 0
+```
The automatic rollback is also executed when an internal exception is thrown
at some point during transaction execution:
+```js
// setup
db._create("c1");
@@ -230,12 +248,13 @@ at some point during transaction execution:
});
db.c1.count(); // 0
+```
As required by the *consistency* principle, aborting or rolling back a
transaction will also restore secondary indexes to the state at transaction
start. The following example using a cap constraint should illustrate that:
+```js
// setup
db._create("c1");
@@ -267,6 +286,7 @@ start. The following example using a cap constraint should illustrate that:
});
// we now have these keys back: [ "key2", "key3", "key4" ]
+```
!SUBSECTION Cross-collection transactions
@@ -274,6 +294,7 @@ There's also the possibility to run a transaction across multiple collections.
In this case, multiple collections need to be declared in the *collections*
attribute, e.g.:
+```js
// setup
db._create("c1");
db._create("c2");
@@ -291,11 +312,12 @@ attribute, e.g.:
db.c1.count(); // 1
db.c2.count(); // 1
+```
Again, throwing an exception from inside the *action* function will make the
transaction abort and roll back all changes in all collections:
+```js
// setup
db._create("c1");
db._create("c2");
@@ -321,3 +343,4 @@ transaction abort and roll back all changes in all collections:
db.c1.count(); // 0
db.c2.count(); // 0
+```

View File

@@ -566,8 +566,8 @@ unittests-shell-server-ahuacatl:
### @brief SHELL CLIENT TESTS
################################################################################
-UNITTESTS_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-changeMode.js)
+UNITTESTS_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-changeMode-noncluster.js)
-UNITTESTS_NO_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-noChangeMode.js)
+UNITTESTS_NO_READONLY = $(addprefix --javascript.unit-tests ,@top_srcdir@/js/client/tests/shell-noChangeMode-noncluster.js)
.PHONY: unittests-shell-client-readonly
unittests-shell-client-readonly:

View File

@@ -56,8 +56,24 @@ V8Job::V8Job (TRI_vocbase_t* vocbase,
    _vocbase(vocbase),
    _v8Dealer(v8Dealer),
    _command(command),
-   _parameters(parameters),
+   _parameters(nullptr),
    _canceled(0) {
+  if (parameters != nullptr) {
+    // create our own copy of the parameters
+    _parameters = TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, parameters);
+  }
+}
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief destroys a V8 job
+////////////////////////////////////////////////////////////////////////////////
+
+V8Job::~V8Job () {
+  if (_parameters != nullptr) {
+    TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, _parameters);
+    _parameters = nullptr;
+  }
}

// -----------------------------------------------------------------------------
@@ -76,7 +92,7 @@ Job::JobType V8Job::type () {
/// {@inheritDoc}
////////////////////////////////////////////////////////////////////////////////

-const string& V8Job::queue () {
+string const& V8Job::queue () {
  static const string queue = "STANDARD";
  return queue;
}
@@ -90,11 +106,10 @@ Job::status_t V8Job::work () {
    return status_t(JOB_DONE);
  }

-  ApplicationV8::V8Context* context
-    = _v8Dealer->enterContext(_vocbase, 0, true, false);
+  ApplicationV8::V8Context* context = _v8Dealer->enterContext(_vocbase, nullptr, true, false);

  // note: the context might be 0 in case of shut-down
-  if (context == 0) {
+  if (context == nullptr) {
    return status_t(JOB_DONE);
  }
@@ -119,7 +134,7 @@ Job::status_t V8Job::work () {
  }

  v8::Handle<v8::Value> fArgs;
-  if (_parameters != 0) {
+  if (_parameters != nullptr) {
    fArgs = TRI_ObjectJson(_parameters);
  }
  else {

View File

@@ -64,6 +64,12 @@ namespace triagens {
        std::string const&,
        TRI_json_t const*);
+////////////////////////////////////////////////////////////////////////////////
+/// @brief destroys a V8 job
+////////////////////////////////////////////////////////////////////////////////
+
+        ~V8Job ();
// -----------------------------------------------------------------------------
// --SECTION-- Job methods
// -----------------------------------------------------------------------------
@@ -80,7 +86,7 @@ namespace triagens {
/// {@inheritDoc}
////////////////////////////////////////////////////////////////////////////////
-        const std::string& queue ();
+        std::string const& queue ();
////////////////////////////////////////////////////////////////////////////////
/// {@inheritDoc}
@@ -140,7 +146,7 @@ namespace triagens {
/// @brief parameters
////////////////////////////////////////////////////////////////////////////////
-        TRI_json_t const* _parameters;
+        TRI_json_t* _parameters;
////////////////////////////////////////////////////////////////////////////////
/// @brief cancel flag

View File

@@ -65,7 +65,7 @@ V8TimerTask::V8TimerTask (string const& id,
    _parameters(parameters),
    _created(TRI_microtime()) {
-  TRI_ASSERT(vocbase != 0);
+  TRI_ASSERT(vocbase != nullptr);
  // increase reference counter for the database used
  TRI_UseVocBase(_vocbase);
@@ -79,7 +79,7 @@ V8TimerTask::~V8TimerTask () {
  // decrease reference counter for the database used
  TRI_ReleaseVocBase(_vocbase);
-  if (_parameters != 0) {
+  if (_parameters != nullptr) {
    TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, _parameters);
  }
}

View File

@@ -51,7 +51,7 @@ struct TRI_datafile_s;
////////////////////////////////////////////////////////////////////////////////

typedef enum {
-  TRI_BARRIER_ELEMENT,
+  TRI_BARRIER_ELEMENT = 1,
  TRI_BARRIER_DATAFILE_DROP_CALLBACK,
  TRI_BARRIER_DATAFILE_RENAME_CALLBACK,
  TRI_BARRIER_COLLECTION_UNLOAD_CALLBACK,

View File

@@ -67,7 +67,8 @@ static int const CLEANUP_INDEX_ITERATIONS = 5;
/// @brief checks all datafiles of a collection
////////////////////////////////////////////////////////////////////////////////

-static void CleanupDocumentCollection (TRI_document_collection_t* document) {
+static void CleanupDocumentCollection (TRI_vocbase_col_t* collection,
+                                       TRI_document_collection_t* document) {
  bool unloadChecked = false;
  // loop until done
@@ -121,11 +122,22 @@ static void CleanupDocumentCollection (TRI_document_collection_t* document) {
      // we must release the lock temporarily to check if the collection is fully collected
      TRI_UnlockSpin(&container->_lock);
+      bool isDeleted = false;
      // must not hold the spin lock while querying the collection
      if (! TRI_IsFullyCollectedDocumentCollection(document)) {
-        // collection is not fully collected - postpone the unload
+        // if there is still some collection to perform, check if the collection was deleted already
+        if (TRI_TRY_READ_LOCK_STATUS_VOCBASE_COL(collection)) {
+          isDeleted = (collection->_status == TRI_VOC_COL_STATUS_DELETED);
+          TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
+        }
+        if (! isDeleted) {
+          // collection is not fully collected and still undeleted - postpone the unload
          return;
        }
+        // if deleted, then we may unload / delete
+      }
      unloadChecked = true;
      continue;
@@ -248,26 +260,21 @@ void TRI_CleanupVocBase (void* data) {
    // check if we can get the compactor lock exclusively
    if (TRI_CheckAndLockCompactorVocBase(vocbase)) {
-      size_t i, n;
      // copy all collections
      TRI_READ_LOCK_COLLECTIONS_VOCBASE(vocbase);
      TRI_CopyDataVectorPointer(&collections, &vocbase->_collections);
      TRI_READ_UNLOCK_COLLECTIONS_VOCBASE(vocbase);
-      n = collections._length;
-      for (i = 0; i < n; ++i) {
-        TRI_vocbase_col_t* collection;
-        TRI_document_collection_t* document;
-        collection = (TRI_vocbase_col_t*) collections._buffer[i];
+      size_t const n = collections._length;
+      for (size_t i = 0; i < n; ++i) {
+        TRI_vocbase_col_t* collection = static_cast<TRI_vocbase_col_t*>(collections._buffer[i]);
        TRI_READ_LOCK_STATUS_VOCBASE_COL(collection);
-        document = collection->_collection;
+        TRI_document_collection_t* document = collection->_collection;
-        if (document == NULL) {
+        if (document == nullptr) {
          TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
          continue;
        }
@@ -283,7 +290,7 @@ void TRI_CleanupVocBase (void* data) {
        document->cleanupIndexes(document);
      }
-      CleanupDocumentCollection(document);
+      CleanupDocumentCollection(collection, document);
    }
    TRI_UnlockCompactorVocBase(vocbase);

View File

@@ -1188,11 +1188,8 @@ void TRI_InitMarkerDatafile (char* marker,
int TRI_WriteInitialHeaderMarkerDatafile (TRI_datafile_t* datafile,
                                          TRI_voc_fid_t fid,
                                          TRI_voc_size_t maximalSize) {
-  TRI_df_marker_t* position;
-  TRI_df_header_marker_t header;
-  int res;
  // create the header
+  TRI_df_header_marker_t header;
  TRI_InitMarkerDatafile((char*) &header, TRI_DF_MARKER_HEADER, sizeof(TRI_df_header_marker_t));
  header.base._tick = (TRI_voc_tick_t) fid;
@@ -1201,7 +1198,8 @@ int TRI_WriteInitialHeaderMarkerDatafile (TRI_datafile_t* datafile,
  header._fid = fid;
  // reserve space and write header to file
-  res = TRI_ReserveElementDatafile(datafile, header.base._size, &position, 0);
+  TRI_df_marker_t* position;
+  int res = TRI_ReserveElementDatafile(datafile, header.base._size, &position, 0);
  if (res == TRI_ERROR_NO_ERROR) {
    res = TRI_WriteCrcElementDatafile(datafile, position, &header.base, false);

View File

@@ -279,7 +279,7 @@ typedef struct TRI_datafile_s {
  bool (*sync)(const struct TRI_datafile_s* const, char const*, char const*); // syncs the datafile
  int (*truncate)(struct TRI_datafile_s* const, const off_t); // truncates the datafile to a specific length
-  int _lastError; // last (cirtical) error
+  int _lastError; // last (critical) error
  bool _full; // at least one request was rejected because there is not enough room
  bool _isSealed; // true, if footer has been written

View File

@@ -1688,8 +1688,6 @@ static bool OpenIterator (TRI_df_marker_t const* marker,
////////////////////////////////////////////////////////////////////////////////

static int FillInternalIndexes (TRI_document_collection_t* document) {
-  TRI_ASSERT(! triagens::wal::LogfileManager::instance()->isInRecovery());
  int res = TRI_ERROR_NO_ERROR;
  for (size_t i = 0; i < document->_allIndexes._length; ++i) {
@@ -2320,7 +2318,7 @@ TRI_datafile_t* TRI_CreateDatafileDocumentCollection (TRI_document_collection_t*
  if (res != TRI_ERROR_NO_ERROR) {
    document->_lastError = journal->_lastError;
-    LOG_ERROR("cannot create collection header in file '%s': %s", journal->getName(journal), TRI_last_error());
+    LOG_ERROR("cannot create collection header in file '%s': %s", journal->getName(journal), TRI_errno_string(res));
    // close the journal and remove it
    TRI_CloseDatafile(journal);
@@ -2332,7 +2330,7 @@ TRI_datafile_t* TRI_CreateDatafileDocumentCollection (TRI_document_collection_t*
  TRI_col_header_marker_t cm;
  TRI_InitMarkerDatafile((char*) &cm, TRI_COL_MARKER_HEADER, sizeof(TRI_col_header_marker_t));
-  cm.base._tick = (TRI_voc_tick_t) fid;
+  cm.base._tick = static_cast<TRI_voc_tick_t>(fid);
  cm._type = (TRI_col_type_t) document->_info._type;
  cm._cid = document->_info._cid;
@@ -2697,16 +2695,11 @@ TRI_document_collection_t* TRI_OpenDocumentCollection (TRI_vocbase_t* vocbase,
  TRI_InitVocShaper(document->getShaper());  // ONLY in OPENCOLLECTION, PROTECTED by fake trx here
-  // secondary indexes must not be loaded during recovery
-  // this is because creating indexes might write attribute markers into the WAL,
-  // but the WAL is read-only at the point of recovery
-  if (! triagens::wal::LogfileManager::instance()->isInRecovery()) {
    // fill internal indexes (this is, the edges index at the moment)
    FillInternalIndexes(document);
    // fill user-defined secondary indexes
    TRI_IterateIndexCollection(collection, OpenIndexIterator, collection);
-  }
  return document;
}

View File

@@ -478,7 +478,6 @@ bool CollectorThread::processQueuedOperations () {
        _numPendingOperations -= numOperations;
        // delete the object
        delete (*it2);
@@ -653,16 +652,11 @@ int CollectorThread::processCollectionOperations (CollectorCache* cache) {
    LOG_TRACE("updating datafile statistics for collection '%s'", document->_info._name);
    updateDatafileStatistics(document, cache);
-    // TODO: the following assertion is only true in a running system
-    // if we just started the server, we don't know how many uncollected operations we have!!
-    // TRI_ASSERT(document->_uncollectedLogfileEntries >= cache->totalOperationsCount);
    document->_uncollectedLogfileEntries -= cache->totalOperationsCount;
    if (document->_uncollectedLogfileEntries < 0) {
      document->_uncollectedLogfileEntries = 0;
    }
-    cache->freeBarriers();
    res = TRI_ERROR_NO_ERROR;
  }
  catch (triagens::arango::Exception const& ex) {
@@ -866,7 +860,6 @@ int CollectorThread::transferMarkers (Logfile* logfile,
  if (cache != nullptr) {
    // prevent memleak
-    cache->freeBarriers();
    delete cache;
  }

View File

@@ -106,6 +106,7 @@ namespace triagens {
          if (operations != nullptr) {
            delete operations;
          }
+          freeBarriers();
        }

////////////////////////////////////////////////////////////////////////////////
@@ -125,6 +126,7 @@ namespace triagens {
          for (auto it = barriers.begin(); it != barriers.end(); ++it) {
            TRI_FreeBarrier((*it));
          }
          barriers.clear();
        }

View File

@@ -42,6 +42,7 @@
#include "VocBase/server.h"
#include "Wal/AllocatorThread.h"
#include "Wal/CollectorThread.h"
+#include "Wal/RecoverState.h"
#include "Wal/Slots.h"
#include "Wal/SynchroniserThread.h"
@@ -730,6 +731,10 @@ SlotInfo LogfileManager::allocate (void const* src,
                                   uint32_t size) {
  if (! _allowWrites) {
    // no writes allowed
+#ifdef TRI_ENABLE_MAINTAINER_MODE
+    TRI_ASSERT(false);
+#endif
    return SlotInfo(TRI_ERROR_ARANGO_READ_ONLY);
  }

View File

@@ -48,35 +48,10 @@ namespace triagens {
    class AllocatorThread;
    class CollectorThread;
+    struct RecoverState;
    class Slot;
    class SynchroniserThread;

-// -----------------------------------------------------------------------------
-// --SECTION-- RecoverState
-// -----------------------------------------------------------------------------
-
-////////////////////////////////////////////////////////////////////////////////
-/// @brief state that is built up when scanning a WAL logfile during recovery
-////////////////////////////////////////////////////////////////////////////////
-
-    struct RecoverState {
-      RecoverState ()
-        : collections(),
-          failedTransactions(),
-          droppedCollections(),
-          droppedDatabases(),
-          lastTick(0),
-          logfilesToCollect(0) {
-      }
-
-      std::unordered_map<TRI_voc_cid_t, TRI_voc_tick_t> collections;
-      std::unordered_map<TRI_voc_tid_t, std::pair<TRI_voc_tick_t, bool>> failedTransactions;
-      std::unordered_set<TRI_voc_cid_t> droppedCollections;
-      std::unordered_set<TRI_voc_tick_t> droppedDatabases;
-      TRI_voc_tick_t lastTick;
-      int logfilesToCollect;
-    };
-
// -----------------------------------------------------------------------------
// --SECTION-- LogfileManagerState
// -----------------------------------------------------------------------------
@@ -358,14 +333,6 @@ namespace triagens {
          _throttleWhenPending = value;
        }

-////////////////////////////////////////////////////////////////////////////////
-/// @brief whether or not we are in the recovery mode
-////////////////////////////////////////////////////////////////////////////////
-
-        inline bool isInRecovery () const {
-          return _inRecovery;
-        }
-
////////////////////////////////////////////////////////////////////////////////
/// @brief registers a transaction
////////////////////////////////////////////////////////////////////////////////

View File

@@ -0,0 +1,80 @@
////////////////////////////////////////////////////////////////////////////////
/// @brief Recovery state
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014, ArangoDB GmbH, Cologne, Germany
/// @author Copyright 2011-2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
#ifndef ARANGODB_WAL_RECOVER_STATE_H
#define ARANGODB_WAL_RECOVER_STATE_H 1
#include "Basics/Common.h"
#include "Basics/Mutex.h"
#include "VocBase/voc-types.h"
struct TRI_server_s;
namespace triagens {
namespace wal {
// -----------------------------------------------------------------------------
// --SECTION-- RecoverState
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief state that is built up when scanning a WAL logfile during recovery
////////////////////////////////////////////////////////////////////////////////
struct RecoverState {
RecoverState ()
: collections(),
failedTransactions(),
droppedCollections(),
droppedDatabases(),
lastTick(0),
logfilesToCollect(0) {
}
std::unordered_map<TRI_voc_cid_t, TRI_voc_tick_t> collections;
std::unordered_map<TRI_voc_tid_t, std::pair<TRI_voc_tick_t, bool>> failedTransactions;
std::unordered_set<TRI_voc_cid_t> droppedCollections;
std::unordered_set<TRI_voc_tick_t> droppedDatabases;
TRI_voc_tick_t lastTick;
int logfilesToCollect;
};
}
}
#endif
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------
// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}"
// End:

View File

@@ -1,5 +1,5 @@
/*jslint indent: 2, nomen: true, maxlen: 100, vars: true, white: true, plusplus: true */
-/*global window, Backbone */
+/*global window, Backbone, $ */

(function() {
  "use strict";
@@ -17,6 +17,39 @@
      return raw.graph || raw;
    },
addEdgeDefinition: function(edgeDefinition) {
$.ajax(
{
async: false,
type: "POST",
url: this.urlRoot + "/" + this.get("_key") + "/edge",
data: JSON.stringify(edgeDefinition)
}
);
},
deleteEdgeDefinition: function(edgeCollection) {
$.ajax(
{
async: false,
type: "DELETE",
url: this.urlRoot + "/" + this.get("_key") + "/edge/" + edgeCollection
}
);
},
modifyEdgeDefinition: function(edgeDefinition) {
$.ajax(
{
async: false,
type: "PUT",
url: this.urlRoot + "/" + this.get("_key") + "/edge/" + edgeDefinition.collection,
data: JSON.stringify(edgeDefinition)
}
);
},
    defaults: {
      name: "",
      edgeDefinitions: [],

View File

@@ -67,7 +67,6 @@
<div class="tile tile-graph" id="<%=graphName %>_tile">
  <div class="iconSet">
    <span class="icon_arangodb_settings2 editGraph" id="<%=graphName %>_settings" alt="Edit graph" title="Edit graph"></span>
-    <span class="icon_arangodb_info" id="<%=graphName %>_info" alt="Show graph properties" title="Show graph properties"></span>
  </div>
  <span class="icon_arangodb_edge5 tile-icon"></span>
  <div class="tileBadge">

View File

@@ -43,7 +43,7 @@
    addNewGraph: function(e) {
      e.preventDefault();
-      this.createNewGraphModal2();
+      this.createNewGraphModal();
    },

    deleteGraph: function(e) {
@@ -120,7 +120,91 @@
    },

    saveEditedGraph: function() {
var name = $("#editGraphName").html(),
vertexCollections = _.pluck($('#newVertexCollections').select2("data"), "text"),
edgeDefinitions = [],
newEdgeDefinitions = {},
self = this,
collection,
from,
to,
index,
edgeDefinitionElements;
edgeDefinitionElements = $('[id^=s2id_newEdgeDefinitions]').toArray();
edgeDefinitionElements.forEach(
function(eDElement) {
index = $(eDElement).attr("id");
index = index.replace("s2id_newEdgeDefinitions", "");
collection = _.pluck($('#s2id_newEdgeDefinitions' + index).select2("data"), "text")[0];
if (collection && collection !== "") {
from = _.pluck($('#s2id_newFromCollections' + index).select2("data"), "text");
to = _.pluck($('#s2id_newToCollections' + index).select2("data"), "text");
if (from !== 1 && to !== 1) {
var edgeDefinition = {
collection: collection,
from: from,
to: to
};
edgeDefinitions.push(edgeDefinition);
newEdgeDefinitions[collection] = edgeDefinition;
}
}
}
);
//get current edgeDefs/orphanage
var graph = this.collection.findWhere({_key: name});
var currentEdgeDefinitions = graph.get("edgeDefinitions");
var currentOrphanage = graph.get("orphanCollections");
var currentCollections = [];
//evaluate all new, edited and deleted edge definitions
var newEDs = [];
var editedEDs = [];
var deletedEDs = [];
currentEdgeDefinitions.forEach(
function(eD) {
var collection = eD.collection;
currentCollections.push(collection);
var newED = newEdgeDefinitions[collection];
if (newED === undefined) {
deletedEDs.push(collection);
} else if (JSON.stringify(newED) !== JSON.stringify(eD)) {
editedEDs.push(collection);
}
}
);
edgeDefinitions.forEach(
function(eD) {
var collection = eD.collection;
if (currentCollections.indexOf(collection) === -1) {
newEDs.push(collection);
}
}
);
newEDs.forEach(
function(eD) {
graph.addEdgeDefinition(newEdgeDefinitions[eD]);
}
);
editedEDs.forEach(
function(eD) {
graph.modifyEdgeDefinition(newEdgeDefinitions[eD]);
}
);
deletedEDs.forEach(
function(eD) {
graph.deleteEdgeDefinition(eD);
}
);
this.updateGraphManagementView();
    },

    evaluateGraphName : function(str, substr) {
@@ -442,7 +526,7 @@
      return edgeDefinitionMap;
    },
-    createNewGraphModal2: function() {
+    createNewGraphModal: function() {
      var buttons = [], collList = [], eCollList = [],
          tableContent = [], collections = this.options.collectionCollection.models;

View File

@@ -261,6 +261,7 @@
      "test/specs/views/documentsViewSpec.js",
      "test/specs/views/documentViewSpec.js",
      "test/specs/views/dashboardViewSpec.js",
+     "test/specs/views/graphManagementViewSpec.js",
      "test/specs/views/newLogsViewSpec.js",
      "test/specs/views/notificationViewSpec.js",
      "test/specs/views/statisticBarViewSpec.js",

View File

@@ -13,13 +13,13 @@
      col = new window.GraphCollection();
    });

-    /* it("parse", function () {
+    it("parse", function () {
      expect(col.model).toEqual(window.Graph);
      expect(col.comparator).toEqual("_key");
-      expect(col.url).toEqual("/_api/graph");
+      expect(col.url).toEqual("/_api/gharial");
      expect(col.parse({error: false, graphs: "blub"})).toEqual("blub");
      expect(col.parse({error: true, graphs: "blub"})).toEqual(undefined);
-    });*/
+    });
  });
}());

View File

@@ -38,15 +38,14 @@
      });
    });

-    /*it("should request /_api/graph on save", function() {
+    it("should request /_api/graph on save", function() {
      ajaxVerify = function(opt) {
-        expect(opt.url).toEqual("/_api/graph");
+        expect(opt.url).toEqual("/_api/gharial");
        expect(opt.type).toEqual("POST");
      };
      model.save();
      expect($.ajax).toHaveBeenCalled();
    });
-    */

    it("should store the attributes in the model", function() {
      var id = "_graph/" + myKey,
          rev = "12345";
@@ -81,15 +80,15 @@
      expect(model.get("graph")).toBeUndefined();
    });

-    /* it("should request /_api/graph/_key on delete", function() {
+    it("should request /_api/graph/_key on delete", function() {
      model.save();
      ajaxVerify = function(opt) {
-        expect(opt.url).toEqual("/_api/graph/" + myKey);
+        expect(opt.url).toEqual("/_api/gharial/" + myKey);
        expect(opt.type).toEqual("DELETE");
      };
      model.destroy();
      expect($.ajax).toHaveBeenCalled();
-    });*/
+    });

View File

@@ -654,14 +654,15 @@
      );
    });

-    /*it("should route to the graph management tab", function () {
+    it("should route to the graph management tab", function () {
      simpleNavigationCheck(
        "graphManagement",
        "GraphManagementView",
        "graphviewer-menu",
-        { collection: graphsDummy}
+        { collection: graphsDummy ,
+          collectionCollection : { id : 'store', fetch : jasmine.any(Function) } }
      );
-    });*/
+    });

    it("should route to the applications tab", function () {
      simpleNavigationCheck(

View File

@@ -26,7 +26,8 @@
      div.id = "content";
      document.body.appendChild(div);
      view = new window.GraphManagementView({
-        collection: graphs
+        collection: graphs,
+        collectionCollection: new window.arangoCollections()
      });
    });
@@ -98,13 +99,12 @@
      $("#newGraphEdges").val("newEdges");
      spyOn($, "ajax").andCallFake(function (opts) {
        expect(opts.type).toEqual("POST");
-        expect(opts.url).toEqual("/_api/graph");
-        expect(opts.data).toEqual(JSON.stringify({
-          _key: "newGraph",
-          vertices: "newVertices",
-          edges: "newEdges",
-          _id: "",
-          _rev: ""
+        expect(opts.url).toEqual("/_api/gharial");
+        expect(opts.data).toEqual(JSON.stringify(
+          {
+            "name":"newGraph",
+            "edgeDefinitions":[],
+            "orphanCollections":[]
        }));
      });
      $("#modalButton1").click();

View File

@@ -4065,7 +4065,16 @@ Graph.prototype._deleteEdgeDefinition = function(edgeCollection) {
      }
    }
  );

  updateBindCollections(this);
db._graphs.update(
this.__name,
{
orphanCollections: this.__orphanCollections,
edgeDefinitions: this.__edgeDefinitions
}
);
};

View File

@@ -29,6 +29,7 @@ var internal = require("internal");
var db = require("org/arangodb").db;
var jsunity = require("jsunity");
var helper = require("org/arangodb/aql-helper");
+var cluster = require("org/arangodb/cluster");
var getModifyQueryResults = helper.getModifyQueryResults;
var assertQueryError = helper.assertQueryError;
@@ -248,6 +249,11 @@ function ahuacatlRemoveSuite () {
////////////////////////////////////////////////////////////////////////////////

    testRemoveInvalid4 : function () {
+      if (cluster.isCluster()) {
+        // skip test in cluster as there are no distributed transactions yet
+        return;
+      }
      assertQueryError(errors.ERROR_ARANGO_DOCUMENT_NOT_FOUND.code, "FOR i iN 0..100 REMOVE CONCAT('test', TO_STRING(i)) IN @@cn", { "@cn": cn1 });
      assertEqual(100, c1.count());
    },

View File

@@ -1186,7 +1186,8 @@ int TRI_CopyToJson (TRI_memory_zone_t* zone,
/// @brief copies a json object
////////////////////////////////////////////////////////////////////////////////

-TRI_json_t* TRI_CopyJson (TRI_memory_zone_t* zone, const TRI_json_t* const src) {
+TRI_json_t* TRI_CopyJson (TRI_memory_zone_t* zone,
+                          TRI_json_t const* src) {
  TRI_json_t* dst;
  int res;

View File

@@ -382,7 +382,8 @@ int TRI_CopyToJson (TRI_memory_zone_t*, TRI_json_t* dst, TRI_json_t const* src);
/// @brief copies a json object
////////////////////////////////////////////////////////////////////////////////

-TRI_json_t* TRI_CopyJson (TRI_memory_zone_t*, const TRI_json_t* const);
+TRI_json_t* TRI_CopyJson (TRI_memory_zone_t*,
+                          TRI_json_t const*);

////////////////////////////////////////////////////////////////////////////////
/// @brief parses a json string