
Merge branch 'devel' of github.com:triAGENS/ArangoDB into devel

This commit is contained in:
Michael Hackstein 2014-07-08 13:53:43 +02:00
commit 4cf8df05dc
20 changed files with 439 additions and 241 deletions


@ -1,20 +1,23 @@
!CHAPTER High-level operations
!SUBSECTION FOR
The *FOR* keyword can be used to iterate over all elements of a list.
The general syntax is:
```
FOR variable-name IN expression
```
Each list element returned by *expression* is visited exactly once. It is
required that *expression* returns a list in all cases. The empty list is
allowed, too. The current list element is made available for further processing
in the variable specified by *variable-name*.
```
FOR u IN users
RETURN u
```
This will iterate over all elements from the list *users* (note: this list
consists of all documents from the collection named "users" in this case) and
@ -30,16 +33,20 @@ placed in is closed.
Another example that uses a statically declared list of values to iterate over:
```
FOR year IN [ 2011, 2012, 2013 ]
RETURN { "year" : year, "isLeapYear" : year % 4 == 0 && (year % 100 != 0 || year % 400 == 0) }
```
Nesting of multiple *FOR* statements is allowed, too. When *FOR* statements are
nested, a cross product of the list elements returned by the individual *FOR*
statements will be created.
```
FOR u IN users
FOR l IN locations
RETURN { "user" : u, "location" : l }
```
In this example, there are two list iterations: an outer iteration over the list
*users* plus an inner iteration over the list *locations*. The inner list is
@ -55,7 +62,9 @@ query, otherwise the query result would be undefined.
The general syntax for *return* is:
```
RETURN expression
```
The *expression* returned by *RETURN* is produced for each iteration the
*RETURN* statement is placed in. That means the result of a *RETURN* statement
@ -63,8 +72,10 @@ is always a list (this includes the empty list). To return all elements from
the currently iterated list without modification, the following simple form can
be used:
```
FOR variable-name IN expression
RETURN variable-name
```
As *RETURN* allows specifying an expression, arbitrary computations can be
performed to calculate the result elements. Any of the variables valid in the
@ -78,16 +89,20 @@ it.
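As a sketch of such a computed *RETURN* expression (assuming each user document has *name* and *age* attributes, which are not part of the original examples):

```
FOR u IN users
RETURN { "name" : u.name, "isAdult" : u.age >= 18 }
```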
The *FILTER* statement can be used to restrict the results to elements that
match an arbitrary logical condition. The general syntax is:
```
FILTER condition
```
*condition* must be a condition that evaluates to either *false* or *true*. If
the condition result is false, the current element is skipped, so it will not be
processed further and not be part of the result. If the condition is true, the
current element is not skipped and can be further processed.
```
FOR u IN users
FILTER u.active == true && u.age < 39
RETURN u
```
In the above example, all list elements from *users* will be included that have
an attribute *active* with value *true* and that have an attribute *age* with a
@ -99,10 +114,12 @@ the same block. If multiple *FILTER* statements are used, their results will be
combined with a logical and, meaning all filter conditions must be true to
include an element.
```
FOR u IN users
FILTER u.active == true
FILTER u.age < 39
RETURN u
```
!SUBSECTION SORT
@ -110,7 +127,9 @@ The *SORT* statement will force a sort of the list of already produced
intermediate results in the current block. *SORT* allows specifying one or
multiple sort criteria and directions. The general syntax is:
```
SORT expression direction
```
Specifying the *direction* is optional. The default (implicit) direction for a
sort is the ascending order. To explicitly specify the sort direction, the
@ -120,18 +139,21 @@ separated using commas.
Note: when iterating over collection-based lists, the order of documents is
always undefined unless an explicit sort order is defined using *SORT*.
```
FOR u IN users
SORT u.lastName, u.firstName, u.id DESC
RETURN u
```
!SUBSECTION LIMIT
The *LIMIT* statement allows slicing the list of result documents using an
offset and a count. It reduces the number of elements in the result to at most
the specified number. There are two general forms of *LIMIT*:
```
LIMIT count
LIMIT offset, count
```
The first form allows specifying only the *count* value whereas the second form
allows specifying both *offset* and *count*. The first form is identical to using
@ -141,10 +163,12 @@ The *offset* value specifies how many elements from the result shall be
discarded. It must be 0 or greater. The *count* value specifies how many
elements should be at most included in the result.
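As a minimal sketch, the count-only form (equivalent to using an *offset* of 0) might look like:

```
FOR u IN users
LIMIT 5
RETURN u
```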
```
FOR u IN users
SORT u.firstName, u.lastName, u.id DESC
LIMIT 0, 5
RETURN u
```
!SUBSECTION LET
@ -152,14 +176,18 @@ The *LET* statement can be used to assign an arbitrary value to a variable. The
variable is then introduced in the scope the *LET* statement is placed in. The
general syntax is:
```
LET variable-name = expression
```
*LET* statements are mostly used to declare complex computations and to avoid
repeated computations of the same value at multiple parts of a query.
```
FOR u IN users
LET numRecommendations = LENGTH(u.recommendations)
RETURN { "user" : u, "numRecommendations" : numRecommendations, "isPowerUser" : numRecommendations >= 10 }
```
In the above example, the computation of the number of recommendations is
factored out using a *LET* statement, thus avoiding computing the value twice in
@ -168,26 +196,30 @@ the *RETURN* statement.
Another use case for *LET* is to declare a complex computation in a subquery,
making the whole query more readable.
```
FOR u IN users
LET friends = (
FOR f IN friends
FILTER u.id == f.userId
RETURN f
)
LET memberships = (
FOR m IN memberships
FILTER u.id == m.userId
RETURN m
)
RETURN { "user" : u, "friends" : friends, "numFriends" : LENGTH(friends), "memberShips" : memberships }
```
!SUBSECTION COLLECT
The *COLLECT* keyword can be used to group a list by one or multiple group
criteria. The two general syntaxes for *COLLECT* are:
```
COLLECT variable-name = expression
COLLECT variable-name = expression INTO groups
```
The first form only groups the result by the group criteria defined by
*expression*. In order to further process the results produced by *COLLECT*, a
@ -198,9 +230,11 @@ The second form does the same as the first form, but additionally introduces a
variable (specified by *groups*) that contains all elements that fell into the
group. Specifying the *INTO* clause is optional.
```
FOR u IN users
COLLECT city = u.city INTO g
RETURN { "city" : city, "users" : g }
```
In the above example, the list of *users* will be grouped by the attribute
*city*. The result is a new list of documents, with one element per distinct
@ -210,9 +244,11 @@ made available in the variable *g*. This is due to the *INTO* clause.
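A minimal sketch of the first form, without *INTO*, which only groups by the group criterion:

```
FOR u IN users
COLLECT city = u.city
RETURN { "city" : city }
```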
*COLLECT* also allows specifying multiple group criteria. Individual group
criteria can be separated by commas.
```
FOR u IN users
COLLECT first = u.firstName, age = u.age INTO g
RETURN { "first" : first, "age" : age, "numUsers" : LENGTH(g) }
```
In the above example, the list of *users* is grouped by first names and ages
first, and for each distinct combination of first name and age, the number of
@ -235,7 +271,9 @@ restricted to a single collection, and the collection name must not be dynamic.
The syntax for a remove operation is:
```
REMOVE key-expression IN collection options
```
*collection* must contain the name of the collection to remove the documents
from. *key-expression* must be an expression that contains the document identification.
@ -244,43 +282,53 @@ document, which must contain a *_key* attribute.
The following queries are thus equivalent:
```
FOR u IN users
REMOVE { _key: u._key } IN users

FOR u IN users
REMOVE u._key IN users

FOR u IN users
REMOVE u IN users
```
**Note**: A remove operation can remove arbitrary documents, and the documents
do not need to be identical to the ones produced by a preceding *FOR* statement:
```
FOR i IN 1..1000
REMOVE { _key: CONCAT('test', TO_STRING(i)) } IN users

FOR u IN users
FILTER u.active == false
REMOVE { _key: u._key } IN backup
```
*options* can be used to suppress query errors that might occur when trying to
remove non-existing documents. For example, the following query will fail if one
of the to-be-deleted documents does not exist:
```
FOR i IN 1..1000
REMOVE { _key: CONCAT('test', TO_STRING(i)) } IN users
```
By specifying the *ignoreErrors* query option, these errors can be suppressed so
the query completes:
```
FOR i IN 1..1000
REMOVE { _key: CONCAT('test', TO_STRING(i)) } IN users OPTIONS { ignoreErrors: true }
```
To make sure data are durable when a query returns, there is the *waitForSync*
query option:
```
FOR i IN 1..1000
REMOVE { _key: CONCAT('test', TO_STRING(i)) } IN users OPTIONS { waitForSync: true }
```
!SUBSECTION UPDATE
@ -294,22 +342,28 @@ restricted to a single collection, and the collection name must not be dynamic.
The two syntaxes for an update operation are:
```
UPDATE document IN collection options
UPDATE key-expression WITH document IN collection options
```
*collection* must contain the name of the collection in which the documents should
be updated. *document* must be a document that contains the attributes and values
to be updated. When using the first syntax, *document* must also contain the *_key*
attribute to identify the document to be updated.
```
FOR u IN users
UPDATE { _key: u._key, name: CONCAT(u.firstName, u.lastName) } IN users
```
The following query is invalid because it does not contain a *_key* attribute and
thus it is not possible to determine the documents to be updated:
```
FOR u IN users
UPDATE { name: CONCAT(u.firstName, u.lastName) } IN users
```
When using the second syntax, *key-expression* provides the document identification.
This can either be a string (which must then contain the document key) or a
@ -317,30 +371,36 @@ document, which must contain a *_key* attribute.
The following queries are equivalent:
```
FOR u IN users
UPDATE u._key WITH { name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
UPDATE { _key: u._key } WITH { name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
UPDATE u WITH { name: CONCAT(u.firstName, u.lastName) } IN users
```
An update operation may update arbitrary documents which do not need to be identical
to the ones produced by a preceding *FOR* statement:
```
FOR i IN 1..1000
UPDATE CONCAT('test', TO_STRING(i)) WITH { foobar: true } IN users

FOR u IN users
FILTER u.active == false
UPDATE u WITH { status: 'inactive' } IN backup
```
*options* can be used to suppress query errors that might occur when trying to
update non-existing documents or violating unique key constraints:
```
FOR i IN 1..1000
UPDATE { _key: CONCAT('test', TO_STRING(i)) } WITH { foobar: true } IN users OPTIONS { ignoreErrors: true }
```
An update operation will only update the attributes specified in *document* and
leave other attributes untouched. Internal attributes (such as *_id*, *_key*, *_rev*,
@ -351,8 +411,10 @@ When updating an attribute with a null value, ArangoDB will not remove the attri
from the document but store a null value for it. To get rid of attributes in an update
operation, set them to null and provide the *keepNull* option:
```
FOR u IN users
UPDATE u WITH { foobar: true, notNeeded: null } IN users OPTIONS { keepNull: false }
```
The above query will remove the *notNeeded* attribute from the documents and update
the *foobar* attribute normally.
@ -360,8 +422,10 @@ the *foobar* attribute normally.
To make sure data are durable when an update query returns, there is the *waitForSync*
query option:
```
FOR u IN users
UPDATE u WITH { foobar: true } IN users OPTIONS { waitForSync: true }
```
!SUBSECTION REPLACE
@ -375,21 +439,27 @@ restricted to a single collection, and the collection name must not be dynamic.
The two syntaxes for a replace operation are:
```
REPLACE document IN collection options
REPLACE key-expression WITH document IN collection options
```
*collection* must contain the name of the collection in which the documents should
be replaced. *document* is the replacement document. When using the first syntax, *document*
must also contain the *_key* attribute to identify the document to be replaced.
```
FOR u IN users
REPLACE { _key: u._key, name: CONCAT(u.firstName, u.lastName), status: u.status } IN users
```
The following query is invalid because it does not contain a *_key* attribute and
thus it is not possible to determine the documents to be replaced:
```
FOR u IN users
REPLACE { name: CONCAT(u.firstName, u.lastName), status: u.status } IN users
```
When using the second syntax, *key-expression* provides the document identification.
This can either be a string (which must then contain the document key) or a
@ -397,43 +467,51 @@ document, which must contain a *_key* attribute.
The following queries are equivalent:
```
FOR u IN users
REPLACE { _key: u._key, name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
REPLACE u._key WITH { name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
REPLACE { _key: u._key } WITH { name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
REPLACE u WITH { name: CONCAT(u.firstName, u.lastName) } IN users
```
A replace will fully replace an existing document, but it will not modify the values
of internal attributes (such as *_id*, *_key*, *_from* and *_to*). Replacing a document
will modify a document's revision number with a server-generated value.
A replace operation may update arbitrary documents which do not need to be identical
to the ones produced by a preceding *FOR* statement:
```
FOR i IN 1..1000
REPLACE CONCAT('test', TO_STRING(i)) WITH { foobar: true } IN users

FOR u IN users
FILTER u.active == false
REPLACE u WITH { status: 'inactive', name: u.name } IN backup
```
*options* can be used to suppress query errors that might occur when trying to
replace non-existing documents or when violating unique key constraints:
```
FOR i IN 1..1000
REPLACE { _key: CONCAT('test', TO_STRING(i)) } WITH { foobar: true } IN users OPTIONS { ignoreErrors: true }
```
To make sure data are durable when a replace query returns, there is the *waitForSync*
query option:
```
FOR i IN 1..1000
REPLACE { _key: CONCAT('test', TO_STRING(i)) } WITH { foobar: true } IN users OPTIONS { waitForSync: true }
```
!SUBSECTION INSERT
@ -447,9 +525,11 @@ restricted to a single collection, and the collection name must not be dynamic.
The syntax for an insert operation is:
```
INSERT document IN collection options
```
**Note**: The *INTO* keyword is also allowed in the place of *IN*.
*collection* must contain the name of the collection into which the documents should
be inserted. *document* is the document to be inserted, and it may or may not contain
@ -457,26 +537,33 @@ a *_key* attribute. If no *_key* attribute is provided, ArangoDB will auto-generate
a value for *_key*. Inserting a document will also auto-generate a document
revision number for the document.
```
FOR i IN 1..100
INSERT { value: i } IN numbers
```
When inserting into an edge collection, it is mandatory to specify the attributes
*_from* and *_to* in the document:
```
FOR u IN users
FOR p IN products
FILTER u._key == p.recommendedBy
INSERT { _from: u._id, _to: p._id } IN recommendations
```
*options* can be used to suppress query errors that might occur when violating unique
key constraints:
```
FOR i IN 1..1000
INSERT { _key: CONCAT('test', TO_STRING(i)), name: "test" } IN users OPTIONS { ignoreErrors: true }
```
To make sure data are durable when an insert query returns, there is the *waitForSync*
query option:
```
FOR i IN 1..1000
INSERT { _key: CONCAT('test', TO_STRING(i)), name: "test" } IN users OPTIONS { waitForSync: true }
```


@ -3,35 +3,34 @@
This is an overview of ArangoDB's HTTP interface for miscellaneous functions.
<!-- lib/Admin/RestVersionHandler.cpp -->
@startDocuBlock JSF_get_api_return
<!-- js/actions/api-system.js -->
@startDocuBlock JSF_put_admin_wal_flush
<!-- js/actions/api-system.js -->
@startDocuBlock JSF_get_admin_wal_properties
<!-- js/actions/api-system.js -->
@startDocuBlock JSF_put_admin_wal_properties
<!-- js/actions/api-system.js -->
@startDocuBlock JSF_get_admin_time
<!-- js/actions/api-system.js -->
@startDocuBlock JSF_get_admin_echo
<!-- lib/Admin/RestShutdownHandler.cpp -->
@startDocuBlock JSF_get_api_initiate
<!-- js/actions/api-system.js -->
@startDocuBlock JSF_post_admin_test
<!-- js/actions/api-system.js -->
@startDocuBlock JSF_post_admin_execute


@ -6,7 +6,6 @@ overview of which collections are present in the database. They can use this inf
to either start a full or a partial synchronization of data, e.g. to initiate a backup
or the incremental data synchronization.
<!-- arangod/RestHandler/RestReplicationHandler.cpp -->
@startDocuBlock JSF_put_api_replication_inventory
@ -31,3 +30,5 @@ parts of the dump results in the same order as they are provided.
<!-- arangod/RestHandler/RestReplicationHandler.cpp -->
@startDocuBlock JSF_put_api_replication_synchronize
<!-- arangod/RestHandler/RestReplicationHandler.cpp -->
@startDocuBlock JSF_get_api_replication_cluster_inventory


@ -78,7 +78,7 @@ datafiles of collections, allowing the server to remove older write-ahead logfil
Cross-collection transactions in ArangoDB should benefit considerably from this
change, as fewer writes than in previous versions are required to ensure the data
of multiple collections are atomically and durably committed. All data-modifying
operations inside transactions (insert, update, remove) will write their
operations into the write-ahead log directly now. In previous versions, such
operations were buffered until the commit or rollback occurred. Transactions with
@ -128,7 +128,7 @@ or configuring it will have no effect.
This change will emit buffered intermediate print results and discard the
output buffer to quickly deliver print results to the user, and to prevent
constructing very large buffers for large results.
!SECTION Miscellaneous improvements


@ -27,6 +27,10 @@ data retrieval and/or modification operations, and at the end automatically
commit the transaction. If an error occurs during transaction execution, the
transaction is automatically aborted, and all changes are rolled back.
!SUBSECTION Execute transaction
<!-- js/server/modules/org/arangodb/arango-database.js -->
@startDocuBlock executeTransaction
!SUBSECTION Declaration of collections
All collections which are to participate in a transaction need to be declared
@ -52,8 +56,7 @@ db._executeTransaction({
collections: {
write: [ "users", "logins" ],
read: [ "recommendations" ]
}
});
```
@ -68,12 +71,11 @@ db._executeTransaction({
collections: {
write: "users",
read: "recommendations"
}
});
```
**Note**: It is currently optional to specify collections for read-only access.
Even without specifying them, it is still possible to read from such collections
from within a transaction, but with relaxed isolation. Please refer to
[Transactions Locking](../Transactions/LockingAndIsolation.md) for more details.
@ -142,7 +144,6 @@ db._executeTransaction({
action: function () {
var db = require("internal").db;
db.users.save({ _key: "hello" });
// will abort and roll back the transaction
throw "doh!";
}
@ -163,7 +164,6 @@ db._executeTransaction({
action: function () {
var db = require("internal").db;
db.users.save({ _key: "hello" });
// will commit the transaction and return the value "hello"
return "hello";
}
@ -214,10 +214,8 @@ db._executeTransaction({
var db = require("internal").db;
db.c1.save({ _key: "key1" });
db.c1.count(); // 1
db.c1.save({ _key: "key2" });
db.c1.count(); // 2
throw "doh!";
}
});
@ -239,10 +237,8 @@ db._executeTransaction({
action: function () {
var db = require("internal").db;
db.c1.save({ _key: "key1" });
// will throw a duplicate key error, not explicitly requested by the user
db.c1.save({ _key: "key1" });
// we'll never get here...
}
});
@ -279,7 +275,6 @@ db._executeTransaction({
var db = require("internal").db;
// this will push out key2. we now have keys [ "key3", "key4", "key5" ]
db.c1.save({ _key: "key5" });
// will abort the transaction
throw "doh!"
}
@ -332,10 +327,8 @@ db._executeTransaction({
db.c1.save({ _key: "key" + i });
db.c2.save({ _key: "key" + i });
}
db.c1.count(); // 100
db.c2.count(); // 100
// abort
throw "doh!"
}


@ -608,7 +608,6 @@ void RestReplicationHandler::handleCommandLoggerSetConfig () {
}
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_post_api_replication_batch
/// @brief handle a dump batch command
///
/// @RESTHEADER{POST /_api/replication/batch, Create new dump batch}
@ -643,11 +642,9 @@ void RestReplicationHandler::handleCommandLoggerSetConfig () {
///
/// @RESTRETURNCODE{405}
/// is returned when an invalid HTTP method is used.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_put_api_replication_prolong
/// @brief handle a dump batch command
///
/// @RESTHEADER{PUT /_api/replication/batch/id, Prolong existing dump batch}
@ -685,11 +682,9 @@ void RestReplicationHandler::handleCommandLoggerSetConfig () {
///
/// @RESTRETURNCODE{405}
/// is returned when an invalid HTTP method is used.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_api_replication_delete
/// @brief handle a dump batch command
///
/// @RESTHEADER{DELETE /_api/replication/batch/id, Deletes an existing dump batch}
@ -717,7 +712,6 @@ void RestReplicationHandler::handleCommandLoggerSetConfig () {
///
/// @RESTRETURNCODE{405}
/// is returned when an invalid HTTP method is used.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
void RestReplicationHandler::handleCommandBatch () {


@ -1001,6 +1001,7 @@ int ArangoServer::runUnitTests (TRI_vocbase_t* vocbase) {
cout << TRI_StringifyV8Exception(&tryCatch);
}
else {
// will stop, so no need for v8g->_canceled = true;
return EXIT_FAILURE;
}
}
@ -1065,6 +1066,7 @@ int ArangoServer::runScript (TRI_vocbase_t* vocbase) {
TRI_LogV8Exception(&tryCatch);
}
else {
// will stop, so no need for v8g->_canceled = true;
return EXIT_FAILURE;
}
}


@ -148,7 +148,11 @@ namespace {
}
// -----------------------------------------------------------------------------
// --SECTION-- public types
// --SECTION-- class V8Context
// -----------------------------------------------------------------------------
// -----------------------------------------------------------------------------
// --SECTION-- public methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
@ -208,6 +212,21 @@ void ApplicationV8::V8Context::handleGlobalContextMethods () {
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief executes the cancelation cleanup
////////////////////////////////////////////////////////////////////////////////
void ApplicationV8::V8Context::handleCancelationCleanup () {
v8::HandleScope scope;
LOG_DEBUG("executing cancelation cleanup context %d", (int) _id);
TRI_ExecuteJavaScriptString(_context,
v8::String::New("require('internal').cleanupCancelation();"),
v8::String::New("context cleanup method"),
false);
}
// -----------------------------------------------------------------------------
// --SECTION-- class ApplicationV8
// -----------------------------------------------------------------------------
@ -360,10 +379,25 @@ void ApplicationV8::exitContext (V8Context* context) {
// HasOutOfMemoryException must be called while there is still an isolate!
bool const hasOutOfMemoryException = context->_context->HasOutOfMemoryException();
// check for cancelation requests
bool const canceled = v8g->_canceled;
v8g->_canceled = false;
// exit the context
context->_context->Exit();
context->_isolate->Exit();
// if the execution was canceled, we need to cleanup
if (canceled) {
context->_isolate->Enter();
context->_context->Enter();
context->handleCancelationCleanup();
context->_context->Exit();
context->_isolate->Exit();
}
// try to execute new global context methods
bool runGlobal = false;
{


@ -188,6 +188,12 @@ namespace triagens {
void handleGlobalContextMethods ();
////////////////////////////////////////////////////////////////////////////////
/// @brief executes the cancelation cleanup
////////////////////////////////////////////////////////////////////////////////
void handleCancelationCleanup ();
////////////////////////////////////////////////////////////////////////////////
/// @brief mutex to protect _globalMethods
////////////////////////////////////////////////////////////////////////////////


@ -150,7 +150,10 @@ Job::status_t V8Job::work () {
TRI_LogV8Exception(&tryCatch);
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
LOG_WARNING("caught non-catchable exception (aka termination) in periodic job");
}
}
}

View File

@ -730,6 +730,7 @@ static TRI_action_result_t ExecuteActionVocbase (TRI_vocbase_t* vocbase,
}
}
else {
v8g->_canceled = true;
result.isValid = false;
result.canceled = true;
}


@ -3499,6 +3499,9 @@ static v8::Handle<v8::Value> ExecuteQueryCursorAhuacatl (TRI_vocbase_t* const vo
return scope.Close(v8::ThrowException(tryCatch.Exception()));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(result);
}
}
@ -3847,6 +3850,9 @@ static v8::Handle<v8::Value> JS_Transaction (v8::Arguments const& argv) {
return scope.Close(v8::ThrowException(tryCatch.Exception()));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(result);
}
}
@ -4489,6 +4495,9 @@ static v8::Handle<v8::Value> JS_NextGeneralCursor (v8::Arguments const& argv) {
return scope.Close(v8::ThrowException(tryCatch.Exception()));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(v8::Undefined());
}
}
@ -4566,6 +4575,9 @@ static v8::Handle<v8::Value> JS_ToArrayGeneralCursor (v8::Arguments const& argv)
return scope.Close(v8::ThrowException(tryCatch.Exception()));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(v8::Undefined());
}
}
@@ -5433,6 +5445,12 @@ static v8::Handle<v8::Value> JS_RunAhuacatl (v8::Arguments const& argv) {
TRI_ObjectToString(tryCatch.Exception()).c_str());
return scope.Close(v8::ThrowException(errorObject));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(result);
}
}
return scope.Close(result);
@@ -5562,6 +5580,9 @@ static v8::Handle<v8::Value> JS_ExplainAhuacatl (v8::Arguments const& argv) {
return scope.Close(v8::ThrowException(errorObject));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(result);
}
}
@@ -5636,6 +5657,9 @@ static v8::Handle<v8::Value> JS_ParseAhuacatl (v8::Arguments const& argv) {
return scope.Close(v8::ThrowException(errorObject));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(result);
}
}

View File

@@ -918,7 +918,7 @@ actions.defineHttp({
/// @brief executes one or multiple tests on the server
/// @startDocuBlock JSF_post_admin_test
///
/// @RESTHEADER{POST /_admin/test, Runs tests on the server}
/// @RESTHEADER{POST /_admin/test, Runs tests on server}
///
/// @RESTBODYPARAM{body,javascript,required}
/// A JSON body containing an attribute "tests" which lists the files
@@ -975,7 +975,7 @@ actions.defineHttp({
////////////////////////////////////////////////////////////////////////////////
/// @brief executes a JavaScript program on the server
/// @startDocuBlock JSF_get_admin_execute
/// @startDocuBlock JSF_post_admin_execute
///
/// @RESTHEADER{POST /_admin/execute, Execute program}
///

View File

@@ -503,74 +503,80 @@ function require (path) {
}
// create a new module
var localModule = currentPackage.defineModule(
id,
"js",
new Module(id, currentPackage, currentModule._applicationContext, path, origin, false));
try {
var localModule = currentPackage.defineModule(
id,
"js",
new Module(id, currentPackage, currentModule._applicationContext, path, origin, false));
// create a new sandbox and execute
var env = currentPackage._environment;
// create a new sandbox and execute
var env = currentPackage._environment;
var sandbox = {};
sandbox.print = internal.print;
var sandbox = {};
sandbox.print = internal.print;
if (env !== undefined) {
for (key in env) {
if (env.hasOwnProperty(key) && key !== "__myenv__") {
sandbox[key] = env[key];
if (env !== undefined) {
for (key in env) {
if (env.hasOwnProperty(key) && key !== "__myenv__") {
sandbox[key] = env[key];
}
}
}
}
var filename = fileUri2Path(origin);
var filename = fileUri2Path(origin);
if (filename !== null) {
sandbox.__filename = filename;
sandbox.__dirname = normalizeModuleName(filename + "/..");
}
sandbox.module = localModule;
sandbox.exports = localModule.exports;
sandbox.require = function(path) {
return localModule.require(path);
};
if (localModule.hasOwnProperty("_applicationContext")) {
sandbox.applicationContext = localModule._applicationContext;
}
// try to execute the module source code
var script = "(function (__myenv__) {";
for (key in sandbox) {
if (sandbox.hasOwnProperty(key)) {
script += "var " + key + " = __myenv__['" + key + "'];";
if (filename !== null) {
sandbox.__filename = filename;
sandbox.__dirname = normalizeModuleName(filename + "/..");
}
sandbox.module = localModule;
sandbox.exports = localModule.exports;
sandbox.require = function(path) {
return localModule.require(path);
};
if (localModule.hasOwnProperty("_applicationContext")) {
sandbox.applicationContext = localModule._applicationContext;
}
// try to execute the module source code
var script = "(function (__myenv__) {";
for (key in sandbox) {
if (sandbox.hasOwnProperty(key)) {
script += "var " + key + " = __myenv__['" + key + "'];";
}
}
script += "delete __myenv__;"
+ content
+ "\n});";
var fun = internal.executeScript(script, undefined, filename);
if (fun === undefined) {
e = new Error("corrupted package '" + path
+ "', cannot create module context function for: "
+ script);
e.moduleNotFound = false;
e._path = path;
e._package = currentPackage.id;
e._packageOrigin = currentPackage._origin;
throw e;
}
fun(sandbox);
return localModule;
}
script += "delete __myenv__;"
+ content
+ "\n});";
var fun = internal.executeScript(script, undefined, filename);
if (fun === undefined) {
e = new Error("corrupted package '" + path
+ "', cannot create module context function for: "
+ script);
e.moduleNotFound = false;
e._path = path;
e._package = currentPackage.id;
e._packageOrigin = currentPackage._origin;
throw e;
catch (err) {
currentPackage.clearModule(id, "js");
throw err;
}
fun(sandbox);
return localModule;
}
////////////////////////////////////////////////////////////////////////////////
@@ -903,6 +909,14 @@ function require (path) {
delete REGISTER_EXECUTE_FILE;
////////////////////////////////////////////////////////////////////////////////
/// @brief cleans up after cancelation
////////////////////////////////////////////////////////////////////////////////
function cleanupCancelation () {
module.unloadAll();
}
// -----------------------------------------------------------------------------
// --SECTION-- Package
// -----------------------------------------------------------------------------
@@ -1106,6 +1120,8 @@ function require (path) {
internal[key] = EXPORTS_SLOW_BUFFER[key];
}
}
internal.cleanupCancelation = cleanupCancelation;
}());
delete EXPORTS_SLOW_BUFFER;

View File

@@ -167,9 +167,6 @@ ArangoDatabase.prototype._query = function (query, bindVars, cursorOptions, opti
/// - *params*: optional arguments passed to the function specified in
/// *action*.
///
/// @EXAMPLES
///
/// @verbinclude shell_transaction
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////

View File

@@ -85,6 +85,9 @@ v8::Handle<v8::Value> JSLoader::executeGlobalScript (v8::Handle<v8::Context> con
return scope.Close(v8::Undefined());
}
else {
TRI_v8_global_t* v8g = static_cast<TRI_v8_global_t*>(v8::Isolate::GetCurrent()->GetData());
v8g->_canceled = true;
return scope.Close(result);
}
}
@@ -121,6 +124,9 @@ bool JSLoader::loadScript (v8::Persistent<v8::Context> context, string const& na
return false;
}
else {
TRI_v8_global_t* v8g = static_cast<TRI_v8_global_t*>(v8::Isolate::GetCurrent()->GetData());
v8g->_canceled = true;
return false;
}
}

View File

@@ -74,6 +74,9 @@ TRI_js_exec_context_t* TRI_CreateExecutionContext (char const* script,
ctx->_error = TRI_ERROR_INTERNAL;
}
else {
TRI_v8_global_t* v8g = static_cast<TRI_v8_global_t*>(v8::Isolate::GetCurrent()->GetData());
v8g->_canceled = true;
ctx->_error = TRI_ERROR_REQUEST_CANCELED;
}
return ctx;
@@ -134,6 +137,9 @@ TRI_json_t* TRI_ExecuteResultContext (TRI_js_exec_context_t* ctx) {
ctx->_error = TRI_ERROR_INTERNAL;
}
else {
TRI_v8_global_t* v8g = static_cast<TRI_v8_global_t*>(v8::Isolate::GetCurrent()->GetData());
v8g->_canceled = true;
ctx->_error = TRI_ERROR_REQUEST_CANCELED;
}
return NULL;

View File

@@ -128,9 +128,10 @@ TRI_v8_global_s::TRI_v8_global_s (v8::Isolate* isolate)
_resolver(0),
_server(0),
_vocbase(0),
_loader(0),
_allowUseDatabase(true),
_hasDeadObjects(false) {
_hasDeadObjects(false),
_loader(0),
_canceled(false) {
v8::HandleScope scope;
BufferConstant = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("Buffer"));

View File

@@ -746,12 +746,6 @@ typedef struct TRI_v8_global_s {
void* _vocbase;
////////////////////////////////////////////////////////////////////////////////
/// @brief pointer to the startup loader (JSLoader*)
////////////////////////////////////////////////////////////////////////////////
void* _loader;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not useDatabase() is allowed
////////////////////////////////////////////////////////////////////////////////
@@ -764,6 +758,23 @@ typedef struct TRI_v8_global_s {
////////////////////////////////////////////////////////////////////////////////
bool _hasDeadObjects;
// -----------------------------------------------------------------------------
// --SECTION-- GENERAL
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief pointer to the startup loader (JSLoader*)
////////////////////////////////////////////////////////////////////////////////
void* _loader;
////////////////////////////////////////////////////////////////////////////////
/// @brief cancel has been caught
////////////////////////////////////////////////////////////////////////////////
bool _canceled;
}
TRI_v8_global_t;

View File

@@ -267,6 +267,11 @@ static bool LoadJavaScriptDirectory (char const* path,
if (tryCatch.CanContinue()) {
TRI_LogV8Exception(&tryCatch);
}
else {
TRI_v8_global_t* v8g = static_cast<TRI_v8_global_t*>(v8::Isolate::GetCurrent()->GetData());
v8g->_canceled = true;
}
}
}
@@ -418,6 +423,9 @@ static v8::Handle<v8::Value> JS_Parse (v8::Arguments const& argv) {
TRI_V8_SYNTAX_ERROR(scope, err.c_str());
}
else {
TRI_v8_global_t* v8g = static_cast<TRI_v8_global_t*>(v8::Isolate::GetCurrent()->GetData());
v8g->_canceled = true;
return scope.Close(v8::Undefined());
}
}
@@ -828,6 +836,9 @@ static v8::Handle<v8::Value> JS_Execute (v8::Arguments const& argv) {
return scope.Close(v8::ThrowException(tryCatch.Exception()));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(v8::Undefined());
}
}
@@ -846,6 +857,9 @@ static v8::Handle<v8::Value> JS_Execute (v8::Arguments const& argv) {
return scope.Close(v8::ThrowException(tryCatch.Exception()));
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(v8::Undefined());
}
}
@@ -3300,6 +3314,9 @@ v8::Handle<v8::Value> TRI_ExecuteJavaScriptString (v8::Handle<v8::Context> conte
TRI_LogV8Exception(&tryCatch);
}
else {
TRI_v8_global_t* v8g = (TRI_v8_global_t*) v8::Isolate::GetCurrent()->GetData();
v8g->_canceled = true;
return scope.Close(v8::Undefined());
}
}