
Merge branch 'aql-query-cache' of https://github.com/arangodb/arangodb into devel

Jan Steemann 2015-07-13 12:46:41 +02:00
commit 14ccfbabc3
66 changed files with 4244 additions and 552 deletions

View File

@ -1,6 +1,32 @@
v2.7.0 (XXXX-XX-XX)
-------------------
* AQL query result cache
The query result cache can optionally cache the complete results of all or selected AQL queries.
It can be operated in the following modes:
* `off`: the cache is disabled. No query results will be stored
* `on`: the cache will store the results of all AQL queries unless their `cache`
attribute flag is set to `false`
* `demand`: the cache will store the results of AQL queries that have their
`cache` attribute set to `true`, but will ignore all others
The mode can be set at server startup using the `--database.query-cache-mode` configuration
option and later changed at runtime.
The following HTTP REST APIs have been added for controlling the query cache:
* HTTP GET `/_api/query-cache/properties`: returns the global query cache configuration
* HTTP PUT `/_api/query-cache/properties`: modifies the global query cache configuration
* HTTP DELETE `/_api/query-cache`: invalidates all results in the query cache
The following JavaScript functions have been added for controlling the query cache:
* `require("org/arangodb/aql/cache").properties()`: returns the global query cache configuration
* `require("org/arangodb/aql/cache").properties(properties)`: modifies the global query cache configuration
* `require("org/arangodb/aql/cache").clear()`: invalidates all results in the query cache
* do not link arangoimp against V8
* AQL function call arguments optimization

View File

@ -0,0 +1,186 @@
!CHAPTER The AQL query result cache
AQL provides an optional query result cache.
The purpose of the query cache is to avoid repeated calculation of the same
query results. It is most useful when the same read-only queries are executed
frequently and data-modification queries are rare.
The query cache is transparent, so users do not need to manually invalidate
cached results when the underlying collection data is modified.
!SECTION Modes
The cache can be operated in the following modes:
* `off`: the cache is disabled. No query results will be stored
* `on`: the cache will store the results of all AQL queries unless their `cache`
attribute flag is set to `false`
* `demand`: the cache will store the results of AQL queries that have their
`cache` attribute set to `true`, but will ignore all others
The mode can be set at server startup and later changed at runtime.
!SECTION Query eligibility
The query cache considers two queries identical if they have exactly the same
query string. Any deviation in whitespace, capitalization, etc. is considered a
difference. The query string is hashed and used as the cache lookup key. If a
query uses bind parameters, their values are also hashed and become part of the
cache lookup key.
That means even if the query string for two queries is identical, the query
cache will treat them as different queries if they have different bind parameter
values. Other components that will become part of a query's cache key are the
`count` and `fullCount` attributes.
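For example, the following two calls use an identical query string but different
bind parameter values, so their results are stored under different cache keys
(a sketch in arangosh; the `users` collection and `age` attribute are assumed
for illustration only):
```
/* identical query string, different bind parameter values:
   each value combination gets its own cache entry */
db._query("FOR u IN users FILTER u.age == @age RETURN u", { age: 20 });
db._query("FOR u IN users FILTER u.age == @age RETURN u", { age: 30 });
```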
If the cache is turned on, the cache will check at the very start of execution
whether it has a result ready for this particular query. If that is the case,
the query result will be served directly from the cache, which is normally
very efficient. If the query cannot be found in the cache, it will be executed
as usual.
If the query is eligible for caching and the cache is turned on, the query
result will be stored in the query cache so it can be used for subsequent
executions of the same query.
A query is eligible for caching only if all of the following conditions are met:
* the server the query executes on is not a coordinator
* the query string is at least 8 characters long
* the query is a read-only query and does not modify data in any collection
* no warnings were produced while executing the query
* the query is deterministic and only uses deterministic functions
Using non-deterministic functions makes a query non-cacheable. This is
intentional, to avoid caching the results of functions that should instead be
recalculated on each invocation of the query (e.g. `RAND()` or `DATE_NOW()`).
The query cache considers all user-defined AQL functions to be non-deterministic
as it has no insight into these functions.
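For example, the following query uses the non-deterministic `RAND()` function and
will therefore never be stored in the cache, regardless of the cache mode (an
illustrative arangosh sketch):
```
/* RAND() is non-deterministic, so this query is not cacheable
   and its result is recalculated on every invocation */
db._query("FOR i IN 1..3 RETURN RAND()");
```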
!SECTION Cache invalidation
The query cache results are fully or partially invalidated automatically if
queries modify the data of collections that were used during the computation of
the cached query results. This is to protect users from getting stale results
from the query cache.
This also means that if the cache is turned on, then there is an additional
cache invalidation check for each data-modification operation (e.g. insert, update,
remove, truncate operations as well as AQL data-modification queries).
**Example**
If the result of the following query is present in the query cache,
then either modifying data in collection `users` or in collection `organizations`
will remove the already computed result from the cache:
```
FOR user IN users
FOR organization IN organizations
FILTER user.organization == organization._key
RETURN { user: user, organization: organization }
```
Modifying data in collections other than the two named ones will not cause this
query result to be removed from the cache.
!SECTION Performance considerations
The query cache is organized as a hash table, so looking up whether a query result
is present in the cache is relatively fast. Still, the query string and the bind
parameters used in the query need to be hashed. This is a slight overhead that
is not present if the cache is turned off or a query is marked as not cacheable.
Additionally, storing query results in the cache and fetching results from the
cache require locking via an R/W lock. While many threads can read from the cache
in parallel, there can only be a single modifying thread at any given time. Modifications
of the query cache contents are required when a query result is stored in the cache
or during cache invalidation after data-modification operations. Cache invalidation
will require time proportional to the number of cached items that need to be invalidated.
There may be workloads in which enabling the query cache will lead to a performance
degradation. It is not recommended to turn the query cache on in workloads that only
modify data, or that modify data more often than read it. Turning on the query cache
will also provide no benefit if queries are very diverse and do not repeat often.
In read-only or read-mostly workloads, the query cache will be beneficial if the same
queries are repeated many times.
In general, the query cache will provide the biggest improvements for queries with
small result sets that take a long time to calculate. If a query's result is very large
and most of the query time is spent copying the result from the cache to the client,
then the cache will not provide much benefit.
!SECTION Global configuration
The query cache can be configured at server start using the configuration parameter
`--database.query-cache-mode`. This will set the cache mode according to the descriptions
above.
After the server is started, the cache mode can be changed at runtime as follows:
```
require("org/arangodb/aql/cache").properties({ mode: "on" });
```
The maximum number of query results kept in the cache for each database can be configured
at server start using the configuration parameter `--database.query-cache-max-results`.
This parameter can be used to put an upper bound on the number of query results in
each database's query cache and thus restrict the cache's memory consumption.
The value can also be adjusted at runtime as follows:
```
require("org/arangodb/aql/cache").properties({ maxResults: 200 });
```
!SECTION Per-query configuration
When a query is sent to the server for execution and the cache is set to `on` or `demand`,
the query executor will look into the query's `cache` attribute. If the query cache mode is
`on`, then not setting this attribute or setting it to anything but `false` will make the
query executor consult the query cache. If the query cache mode is `demand`, then setting
the `cache` attribute to `true` will make the executor look for the query in the query cache.
When the query cache mode is `off`, the executor will not look for the query in the cache.
The `cache` attribute can be set as follows via the `db._createStatement()` function:
```
var stmt = db._createStatement({
query: "FOR doc IN users LIMIT 5 RETURN doc",
cache: true /* cache attribute set here */
});
stmt.execute();
```
When using the `db._query()` function, the `cache` attribute can be set as follows:
```
db._query({
query: "FOR doc IN users LIMIT 5 RETURN doc",
cache: true /* cache attribute set here */
});
```
The `cache` attribute can be set via the HTTP REST API `POST /_api/cursor`, too.
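For example, a request to `POST /_api/cursor` could carry the attribute like this
(a sketch assuming a server listening on the default endpoint; the `users`
collection is only illustrative):
```
curl -X POST --data @- http://127.0.0.1:8529/_api/cursor
{ "query": "FOR doc IN users LIMIT 5 RETURN doc", "cache": true }
```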
Each query result returned will contain a `cached` attribute. This will be set to `true`
if the result was retrieved from the query cache, and `false` otherwise. Clients can use
this attribute to check if a specific query was served from the cache or not.
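An abridged response for a repeated query whose result was served from the cache
might look as follows (values are illustrative):
```
{
  "result" : [ 1, 2, 3, 4, 5 ],
  "hasMore" : false,
  "cached" : true,
  "error" : false,
  "code" : 201
}
```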
!SECTION Restrictions
Query results that are returned from the query cache do not contain any execution statistics,
meaning their *extra.stats* attribute will not be present. Additionally, query results returned
from the cache will not contain profiling information, even if the *profile* option was set to
*true* when the query was invoked.

View File

@ -107,6 +107,14 @@ the option *--disable-figures*.
@startDocuBlock databaseDisableQueryTracking
!SUBSECTION AQL Query caching mode
@startDocuBlock queryCacheMode
!SUBSECTION AQL Query cache size
@startDocuBlock queryCacheMaxResults
!SUBSECTION Index threads
@startDocuBlock indexThreads

View File

@ -0,0 +1,10 @@
!CHAPTER HTTP Interface for the AQL query cache
This section describes the API methods for controlling the AQL query cache.
@startDocuBlock DeleteApiQueryCache
@startDocuBlock GetApiQueryCacheProperties
@startDocuBlock PutApiQueryCacheProperties
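For example, the cache mode and size could be inspected and adjusted, and the cache
cleared, with requests like these (a sketch assuming a server on the default endpoint):
```
curl -X GET http://127.0.0.1:8529/_api/query-cache/properties

curl -X PUT --data '{ "mode" : "demand", "maxResults" : 128 }' \
     http://127.0.0.1:8529/_api/query-cache/properties

curl -X DELETE http://127.0.0.1:8529/_api/query-cache
```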

View File

@ -49,7 +49,7 @@ content-type: application/json
}
```
!SUBSECTION Using a Cursor
!SUBSECTION Using a cursor
If the result set contains more documents than should be transferred in a single
roundtrip (i.e. as set via the *batchSize* attribute), the server will return
@ -168,14 +168,14 @@ content-type: application/json
The `_api/cursor` endpoint can also be used to execute modifying queries.
The following example inserts a value into the list `listValue` of the document
with key `test` in the collection `documents`. Normal PATH behaviour is to
replace the list completely and using an update AQL query with `PUSH` allows to
append to the list.
The following example appends a value into the array `arrayValue` of the document
with key `test` in the collection `documents`. Normal update behaviour is to
replace the attribute completely; using an update AQL query with the `PUSH()`
function makes it possible to append to the array instead.
```js
curl --data @- -X POST --dump http://127.0.0.1:8529/_api/cursor
{ "query": "FOR doc IN documents FILTER doc._key == @myKey UPDATE doc._key WITH { listValue: PUSH(doc.listValue, @value) } IN documents","bindVars": { "myKey": "test", "value": 42 } }
{ "query": "FOR doc IN documents FILTER doc._key == @myKey UPDATE doc._key WITH { arrayValue: PUSH(doc.arrayValue, @value) } IN documents","bindVars": { "myKey": "test", "value": 42 } }
HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8

View File

@ -58,6 +58,7 @@
* [How to invoke AQL](Aql/Invoke.md)
* [Data modification queries](Aql/DataModification.md)
* [The AQL query optimizer](Aql/Optimizer.md)
* [The AQL query result cache](Aql/QueryCache.md)
* [Language Basics](Aql/Basics.md)
* [Functions](Aql/Functions.md)
* [Type cast](Aql/TypeCastFunctions.md)
@ -169,6 +170,7 @@
* [Query Results](HttpAqlQueryCursor/QueryResults.md)
* [Accessing Cursors](HttpAqlQueryCursor/AccessingCursors.md)
* [AQL Queries](HttpAqlQuery/README.md)
* [AQL Query Cache](HttpAqlQueryCache/README.md)
* [AQL User Functions Management](HttpAqlUserFunctions/README.md)
* [Simple Queries](HttpSimpleQuery/README.md)
* [Collections](HttpCollection/README.md)

View File

@ -96,6 +96,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['count'].should eq(2)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
end
it "creates a cursor single run, without count" do
@ -111,6 +112,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['count'].should eq(nil)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
end
it "creates a cursor single run, large batch size" do
@ -126,6 +128,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['count'].should eq(2)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
end
it "creates a cursor" do
@ -142,6 +145,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
id = doc.parsed_response['id']
@ -158,6 +162,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
cmd = api + "/#{id}"
doc = ArangoDB.log_put("#{prefix}-create-for-limit-return-cont2", cmd)
@ -170,6 +175,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
cmd = api + "/#{id}"
doc = ArangoDB.log_put("#{prefix}-create-for-limit-return-cont3", cmd)
@ -195,6 +201,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
id = doc.parsed_response['id']
@ -211,6 +218,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
cmd = api + "/#{id}"
doc = ArangoDB.log_delete("#{prefix}-delete", cmd)
@ -237,6 +245,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
id = doc.parsed_response['id']
@ -265,6 +274,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(2)
doc.parsed_response['cached'].should eq(false)
id = doc.parsed_response['id']
@ -315,6 +325,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
sleep 1
id = doc.parsed_response['id']
@ -332,6 +343,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
sleep 1
@ -345,6 +357,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
# after this, the cursor might expire eventually
# the problem is that we cannot exactly determine the point in time
@ -374,6 +387,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
sleep 1
id = doc.parsed_response['id']
@ -391,6 +405,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
sleep 1
@ -404,6 +419,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
sleep 5 # this should not delete the cursor on the server
doc = ArangoDB.log_put("#{prefix}-create-ttl", cmd)
@ -415,6 +431,7 @@ describe ArangoDB do
doc.parsed_response['hasMore'].should eq(true)
doc.parsed_response['count'].should eq(5)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
end
it "creates a query that executes a v8 expression during query optimization" do
@ -429,6 +446,7 @@ describe ArangoDB do
doc.parsed_response['id'].should be_nil
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['result'].length.should eq(1)
doc.parsed_response['cached'].should eq(false)
end
it "creates a query that executes a v8 expression during query execution" do
@ -443,6 +461,7 @@ describe ArangoDB do
doc.parsed_response['id'].should be_nil
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['result'].length.should eq(10)
doc.parsed_response['cached'].should eq(false)
end
it "creates a query that executes a dynamic index expression during query execution" do
@ -457,6 +476,7 @@ describe ArangoDB do
doc.parsed_response['id'].should be_nil
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['result'].length.should eq(10)
doc.parsed_response['cached'].should eq(false)
end
it "creates a query that executes a dynamic V8 index expression during query execution" do
@ -471,7 +491,25 @@ describe ArangoDB do
doc.parsed_response['id'].should be_nil
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['result'].length.should eq(10)
doc.parsed_response['cached'].should eq(false)
end
it "creates a cursor with different bind values" do
cmd = api
body = "{ \"query\" : \"RETURN @values\", \"bindVars\" : { \"values\" : [ null, false, true, -1, 2.5, 3e4, \"\", \" \", \"true\", \"foo bar baz\", [ 1, 2, 3, \"bar\" ], { \"foo\" : \"bar\", \"\" : \"baz\", \" bar-baz \" : \"foo-bar\" } ] } }"
doc = ArangoDB.log_post("#{prefix}-test-bind-values", cmd, :body => body)
values = [ [ nil, false, true, -1, 2.5, 3e4, "", " ", "true", "foo bar baz", [ 1, 2, 3, "bar" ], { "foo" => "bar", "" => "baz", " bar-baz " => "foo-bar" } ] ]
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
doc.parsed_response['code'].should eq(201)
doc.parsed_response['id'].should be_nil
doc.parsed_response['hasMore'].should eq(false)
doc.parsed_response['result'].should eq(values)
doc.parsed_response['cached'].should eq(false)
end
end
################################################################################
@ -516,7 +554,7 @@ describe ArangoDB do
end
################################################################################
## floating points
## floating point values
################################################################################
context "fetching floating-point values:" do
@ -566,5 +604,138 @@ describe ArangoDB do
end
end
################################################################################
## query cache
################################################################################
context "testing the query cache:" do
before do
doc = ArangoDB.get("/_api/query-cache/properties")
@mode = doc.parsed_response['mode']
ArangoDB.put("/_api/query-cache/properties", :body => "{ \"mode\" : \"demand\" }")
ArangoDB.delete("/_api/query-cache")
end
after do
ArangoDB.put("/_api/query-cache/properties", :body => "{ \"mode\" : \"#{@mode}\" }")
end
it "testing without cache attribute set" do
cmd = api
body = "{ \"query\" : \"FOR i IN 1..5 RETURN i\" }"
doc = ArangoDB.log_post("#{prefix}-query-cache-disabled", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
doc.parsed_response['code'].should eq(201)
doc.parsed_response['id'].should be_nil
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(false)
doc.parsed_response['extra'].should have_key('stats')
# should see same result, but not from cache
doc = ArangoDB.log_post("#{prefix}-query-cache", cmd, :body => body)
doc.code.should eq(201)
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(false)
doc.parsed_response['extra'].should have_key('stats')
end
it "testing explicitly disable cache" do
cmd = api
body = "{ \"query\" : \"FOR i IN 1..5 RETURN i\", \"cache\" : false }"
doc = ArangoDB.log_post("#{prefix}-query-cache-disabled", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
doc.parsed_response['code'].should eq(201)
doc.parsed_response['id'].should be_nil
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(false)
doc.parsed_response['extra'].should have_key('stats')
# should see same result, but not from cache
doc = ArangoDB.log_post("#{prefix}-query-cache", cmd, :body => body)
doc.code.should eq(201)
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(false)
doc.parsed_response['extra'].should have_key('stats')
end
it "testing enabled cache" do
cmd = api
body = "{ \"query\" : \"FOR i IN 1..5 RETURN i\", \"cache\" : true }"
doc = ArangoDB.log_post("#{prefix}-query-cache-enabled", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
doc.parsed_response['code'].should eq(201)
doc.parsed_response['id'].should be_nil
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(false)
doc.parsed_response['extra'].should have_key('stats')
# should see same result, but now from cache
doc = ArangoDB.log_post("#{prefix}-query-cache", cmd, :body => body)
doc.code.should eq(201)
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(true)
doc.parsed_response['extra'].should_not have_key('stats')
end
it "testing clearing the cache" do
cmd = api
body = "{ \"query\" : \"FOR i IN 1..5 RETURN i\", \"cache\" : true }"
doc = ArangoDB.log_post("#{prefix}-query-cache-enabled", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
doc.parsed_response['code'].should eq(201)
doc.parsed_response['id'].should be_nil
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(false)
doc.parsed_response['extra'].should have_key('stats')
# should see same result, but now from cache
doc = ArangoDB.log_post("#{prefix}-query-cache", cmd, :body => body)
doc.code.should eq(201)
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(true)
doc.parsed_response['extra'].should_not have_key('stats')
# now clear cache
ArangoDB.delete("/_api/query-cache")
# query again. now response should not come from cache
doc = ArangoDB.log_post("#{prefix}-query-cache", cmd, :body => body)
doc.code.should eq(201)
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(false)
doc.parsed_response['extra'].should have_key('stats')
doc = ArangoDB.log_post("#{prefix}-query-cache", cmd, :body => body)
doc.code.should eq(201)
result = doc.parsed_response['result']
result.should eq([ 1, 2, 3, 4, 5 ])
doc.parsed_response['cached'].should eq(true)
doc.parsed_response['extra'].should_not have_key('stats')
end
end
end
end

View File

@ -622,6 +622,7 @@ SHELL_SERVER_AQL = @top_srcdir@/js/server/tests/aql-arithmetic.js \
@top_srcdir@/js/server/tests/aql-queries-optimiser-sort-noncluster.js \
@top_srcdir@/js/server/tests/aql-queries-simple.js \
@top_srcdir@/js/server/tests/aql-queries-variables.js \
@top_srcdir@/js/server/tests/aql-query-cache.js \
@top_srcdir@/js/server/tests/aql-range.js \
@top_srcdir@/js/server/tests/aql-ranges.js \
@top_srcdir@/js/server/tests/aql-refaccess-attribute.js \

View File

@ -1493,10 +1493,16 @@ bool AstNode::isDeterministic () const {
return ! hasFlag(VALUE_NONDETERMINISTIC);
}
if (isConstant()) {
return true;
}
// check sub-nodes first
size_t const n = numMembers();
for (size_t i = 0; i < n; ++i) {
auto member = getMember(i);
auto member = getMemberUnchecked(i);
if (! member->isDeterministic()) {
// if any sub-node is non-deterministic, we are neither
setFlag(DETERMINED_NONDETERMINISTIC, VALUE_NONDETERMINISTIC);
@ -1507,10 +1513,12 @@ bool AstNode::isDeterministic () const {
if (type == NODE_TYPE_FCALL) {
// built-in functions may or may not be deterministic
auto func = static_cast<Function*>(getData());
if (! func->isDeterministic) {
setFlag(DETERMINED_NONDETERMINISTIC, VALUE_NONDETERMINISTIC);
return false;
}
setFlag(DETERMINED_NONDETERMINISTIC);
return true;
}
@ -1526,6 +1534,41 @@ bool AstNode::isDeterministic () const {
return true;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not a node (and its subnodes) is cacheable
////////////////////////////////////////////////////////////////////////////////
bool AstNode::isCacheable () const {
if (isConstant()) {
return true;
}
// check sub-nodes first
size_t const n = numMembers();
for (size_t i = 0; i < n; ++i) {
auto member = getMemberUnchecked(i);
if (! member->isCacheable()) {
return false;
}
}
if (type == NODE_TYPE_FCALL) {
// built-in functions may or may not be cacheable
auto func = static_cast<Function*>(getData());
return func->isCacheable;
}
if (type == NODE_TYPE_FCALL_USER) {
// user functions are always non-cacheable
return false;
}
// everything else is cacheable
return true;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not the object node contains dynamically named attributes
/// on its first level

View File

@ -500,6 +500,12 @@ namespace triagens {
bool isDeterministic () const;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not a node (and its subnodes) is cacheable
////////////////////////////////////////////////////////////////////////////////
bool isCacheable () const;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not the object node contains dynamically named attributes
/// on its first level

View File

@ -29,6 +29,7 @@
#include "Aql/BindParameters.h"
#include "Basics/json.h"
#include "Basics/json-utilities.h"
#include "Basics/Exceptions.h"
using namespace triagens::aql;
@ -58,11 +59,23 @@ BindParameters::~BindParameters () {
}
// -----------------------------------------------------------------------------
// --SECTION-- public functions
// --SECTION-- public methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief create a hash value for the bind parameters
////////////////////////////////////////////////////////////////////////////////
uint64_t BindParameters::hash () const {
if (_json == nullptr) {
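// no bind parameters set: return a fixed, arbitrary hash value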
return 0x12345678abcdef;
}
return TRI_FastHashJson(_json);
}
// -----------------------------------------------------------------------------
// --SECTION-- private functions
// --SECTION-- private methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
@ -107,7 +120,6 @@ void BindParameters::process () {
_processed = true;
}
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------

View File

@ -58,7 +58,7 @@ namespace triagens {
/// @brief create the parameters
////////////////////////////////////////////////////////////////////////////////
BindParameters (TRI_json_t*);
explicit BindParameters (TRI_json_t*);
////////////////////////////////////////////////////////////////////////////////
/// @brief destroy the parameters
@ -81,6 +81,12 @@ namespace triagens {
return _parameters;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief create a hash value for the bind parameters
////////////////////////////////////////////////////////////////////////////////
uint64_t hash () const;
// -----------------------------------------------------------------------------
// --SECTION-- private methods
// -----------------------------------------------------------------------------

View File

@ -55,8 +55,8 @@ namespace triagens {
}
~Collections () {
for (auto it = _collections.begin(); it != _collections.end(); ++it) {
delete (*it).second;
for (auto& it : _collections) {
delete it.second;
}
}
@ -83,15 +83,10 @@ namespace triagens {
THROW_ARANGO_EXCEPTION(TRI_ERROR_QUERY_TOO_MANY_COLLECTIONS);
}
auto collection = new Collection(name, _vocbase, accessType);
try {
_collections.emplace(std::make_pair(name, collection));
}
catch (...) {
delete collection;
throw;
}
return collection;
std::unique_ptr<Collection> collection(new Collection(name, _vocbase, accessType));
_collections.emplace(name, collection.get());
return collection.release();
}
else {
// note that the collection is used in both read & write ops
@ -112,8 +107,8 @@ namespace triagens {
std::vector<std::string> result;
result.reserve(_collections.size());
for (auto it = _collections.begin(); it != _collections.end(); ++it) {
result.emplace_back((*it).first);
for (auto const& it : _collections) {
result.emplace_back(it.first);
}
return result;
}

View File

@ -90,168 +90,168 @@ std::unordered_map<std::string, Function const> const Executor::FunctionNames{
// r = regex (a string with a special format). note: the regex type is mutually exclusive with all other types
// type check functions
{ "IS_NULL", Function("IS_NULL", "AQL_IS_NULL", ".", true, false, true, true, &Functions::IsNull) },
{ "IS_BOOL", Function("IS_BOOL", "AQL_IS_BOOL", ".", true, false, true, true, &Functions::IsBool) },
{ "IS_NUMBER", Function("IS_NUMBER", "AQL_IS_NUMBER", ".", true, false, true, true, &Functions::IsNumber) },
{ "IS_STRING", Function("IS_STRING", "AQL_IS_STRING", ".", true, false, true, true, &Functions::IsString) },
{ "IS_ARRAY", Function("IS_ARRAY", "AQL_IS_ARRAY", ".", true, false, true, true, &Functions::IsArray) },
{ "IS_NULL", Function("IS_NULL", "AQL_IS_NULL", ".", true, true, false, true, true, &Functions::IsNull) },
{ "IS_BOOL", Function("IS_BOOL", "AQL_IS_BOOL", ".", true, true, false, true, true, &Functions::IsBool) },
{ "IS_NUMBER", Function("IS_NUMBER", "AQL_IS_NUMBER", ".", true, true, false, true, true, &Functions::IsNumber) },
{ "IS_STRING", Function("IS_STRING", "AQL_IS_STRING", ".", true, true, false, true, true, &Functions::IsString) },
{ "IS_ARRAY", Function("IS_ARRAY", "AQL_IS_ARRAY", ".", true, true, false, true, true, &Functions::IsArray) },
// IS_LIST is an alias for IS_ARRAY
{ "IS_LIST", Function("IS_LIST", "AQL_IS_LIST", ".", true, false, true, true, &Functions::IsArray) },
{ "IS_OBJECT", Function("IS_OBJECT", "AQL_IS_OBJECT", ".", true, false, true, true, &Functions::IsObject) },
{ "IS_LIST", Function("IS_LIST", "AQL_IS_LIST", ".", true, true, false, true, true, &Functions::IsArray) },
{ "IS_OBJECT", Function("IS_OBJECT", "AQL_IS_OBJECT", ".", true, true, false, true, true, &Functions::IsObject) },
// IS_DOCUMENT is an alias for IS_OBJECT
{ "IS_DOCUMENT", Function("IS_DOCUMENT", "AQL_IS_DOCUMENT", ".", true, false, true, true, &Functions::IsObject) },
{ "IS_DOCUMENT", Function("IS_DOCUMENT", "AQL_IS_DOCUMENT", ".", true, true, false, true, true, &Functions::IsObject) },
// type cast functions
{ "TO_NUMBER", Function("TO_NUMBER", "AQL_TO_NUMBER", ".", true, false, true, true, &Functions::ToNumber) },
{ "TO_STRING", Function("TO_STRING", "AQL_TO_STRING", ".", true, false, true, true, &Functions::ToString) },
{ "TO_BOOL", Function("TO_BOOL", "AQL_TO_BOOL", ".", true, false, true, true, &Functions::ToBool) },
{ "TO_ARRAY", Function("TO_ARRAY", "AQL_TO_ARRAY", ".", true, false, true, true, &Functions::ToArray) },
{ "TO_NUMBER", Function("TO_NUMBER", "AQL_TO_NUMBER", ".", true, true, false, true, true, &Functions::ToNumber) },
{ "TO_STRING", Function("TO_STRING", "AQL_TO_STRING", ".", true, true, false, true, true, &Functions::ToString) },
{ "TO_BOOL", Function("TO_BOOL", "AQL_TO_BOOL", ".", true, true, false, true, true, &Functions::ToBool) },
{ "TO_ARRAY", Function("TO_ARRAY", "AQL_TO_ARRAY", ".", true, true, false, true, true, &Functions::ToArray) },
// TO_LIST is an alias for TO_ARRAY
{ "TO_LIST", Function("TO_LIST", "AQL_TO_LIST", ".", true, false, true, true, &Functions::ToArray) },
{ "TO_LIST", Function("TO_LIST", "AQL_TO_LIST", ".", true, true, false, true, true, &Functions::ToArray) },
// string functions
{ "CONCAT", Function("CONCAT", "AQL_CONCAT", "szl|+", true, false, true, true, &Functions::Concat) },
{ "CONCAT_SEPARATOR", Function("CONCAT_SEPARATOR", "AQL_CONCAT_SEPARATOR", "s,szl|+", true, false, true, true) },
{ "CHAR_LENGTH", Function("CHAR_LENGTH", "AQL_CHAR_LENGTH", "s", true, false, true, true) },
{ "LOWER", Function("LOWER", "AQL_LOWER", "s", true, false, true, true) },
{ "UPPER", Function("UPPER", "AQL_UPPER", "s", true, false, true, true) },
{ "SUBSTRING", Function("SUBSTRING", "AQL_SUBSTRING", "s,n|n", true, false, true, true) },
{ "CONTAINS", Function("CONTAINS", "AQL_CONTAINS", "s,s|b", true, false, true, true) },
{ "LIKE", Function("LIKE", "AQL_LIKE", "s,r|b", true, false, true, true) },
{ "LEFT", Function("LEFT", "AQL_LEFT", "s,n", true, false, true, true) },
{ "RIGHT", Function("RIGHT", "AQL_RIGHT", "s,n", true, false, true, true) },
{ "TRIM", Function("TRIM", "AQL_TRIM", "s|ns", true, false, true, true) },
{ "LTRIM", Function("LTRIM", "AQL_LTRIM", "s|s", true, false, true, true) },
{ "RTRIM", Function("RTRIM", "AQL_RTRIM", "s|s", true, false, true, true) },
{ "FIND_FIRST", Function("FIND_FIRST", "AQL_FIND_FIRST", "s,s|zn,zn", true, false, true, true) },
{ "FIND_LAST", Function("FIND_LAST", "AQL_FIND_LAST", "s,s|zn,zn", true, false, true, true) },
{ "SPLIT", Function("SPLIT", "AQL_SPLIT", "s|sl,n", true, false, true, true) },
{ "SUBSTITUTE", Function("SUBSTITUTE", "AQL_SUBSTITUTE", "s,las|lsn,n", true, false, true, true) },
{ "MD5", Function("MD5", "AQL_MD5", "s", true, false, true, true, &Functions::Md5) },
{ "SHA1", Function("SHA1", "AQL_SHA1", "s", true, false, true, true, &Functions::Sha1) },
{ "RANDOM_TOKEN", Function("RANDOM_TOKEN", "AQL_RANDOM_TOKEN", "n", false, true, true, true) },
{ "CONCAT", Function("CONCAT", "AQL_CONCAT", "szl|+", true, true, false, true, true, &Functions::Concat) },
{ "CONCAT_SEPARATOR", Function("CONCAT_SEPARATOR", "AQL_CONCAT_SEPARATOR", "s,szl|+", true, true, false, true, true) },
{ "CHAR_LENGTH", Function("CHAR_LENGTH", "AQL_CHAR_LENGTH", "s", true, true, false, true, true) },
{ "LOWER", Function("LOWER", "AQL_LOWER", "s", true, true, false, true, true) },
{ "UPPER", Function("UPPER", "AQL_UPPER", "s", true, true, false, true, true) },
{ "SUBSTRING", Function("SUBSTRING", "AQL_SUBSTRING", "s,n|n", true, true, false, true, true) },
{ "CONTAINS", Function("CONTAINS", "AQL_CONTAINS", "s,s|b", true, true, false, true, true) },
{ "LIKE", Function("LIKE", "AQL_LIKE", "s,r|b", true, true, false, true, true) },
{ "LEFT", Function("LEFT", "AQL_LEFT", "s,n", true, true, false, true, true) },
{ "RIGHT", Function("RIGHT", "AQL_RIGHT", "s,n", true, true, false, true, true) },
{ "TRIM", Function("TRIM", "AQL_TRIM", "s|ns", true, true, false, true, true) },
{ "LTRIM", Function("LTRIM", "AQL_LTRIM", "s|s", true, true, false, true, true) },
{ "RTRIM", Function("RTRIM", "AQL_RTRIM", "s|s", true, true, false, true, true) },
{ "FIND_FIRST", Function("FIND_FIRST", "AQL_FIND_FIRST", "s,s|zn,zn", true, true, false, true, true) },
{ "FIND_LAST", Function("FIND_LAST", "AQL_FIND_LAST", "s,s|zn,zn", true, true, false, true, true) },
{ "SPLIT", Function("SPLIT", "AQL_SPLIT", "s|sl,n", true, true, false, true, true) },
{ "SUBSTITUTE", Function("SUBSTITUTE", "AQL_SUBSTITUTE", "s,las|lsn,n", true, true, false, true, true) },
{ "MD5", Function("MD5", "AQL_MD5", "s", true, true, false, true, true, &Functions::Md5) },
{ "SHA1", Function("SHA1", "AQL_SHA1", "s", true, true, false, true, true, &Functions::Sha1) },
{ "RANDOM_TOKEN", Function("RANDOM_TOKEN", "AQL_RANDOM_TOKEN", "n", false, false, true, true, true) },
// numeric functions
{ "FLOOR", Function("FLOOR", "AQL_FLOOR", "n", true, false, true, true) },
{ "CEIL", Function("CEIL", "AQL_CEIL", "n", true, false, true, true) },
{ "ROUND", Function("ROUND", "AQL_ROUND", "n", true, false, true, true) },
{ "ABS", Function("ABS", "AQL_ABS", "n", true, false, true, true) },
{ "RAND", Function("RAND", "AQL_RAND", "", false, false, true, true) },
{ "SQRT", Function("SQRT", "AQL_SQRT", "n", true, false, true, true) },
{ "FLOOR", Function("FLOOR", "AQL_FLOOR", "n", true, true, false, true, true) },
{ "CEIL", Function("CEIL", "AQL_CEIL", "n", true, true, false, true, true) },
{ "ROUND", Function("ROUND", "AQL_ROUND", "n", true, true, false, true, true) },
{ "ABS", Function("ABS", "AQL_ABS", "n", true, true, false, true, true) },
{ "RAND", Function("RAND", "AQL_RAND", "", false, false, false, true, true) },
{ "SQRT", Function("SQRT", "AQL_SQRT", "n", true, true, false, true, true) },
// list functions
{ "RANGE", Function("RANGE", "AQL_RANGE", "n,n|n", true, false, true, true) },
{ "UNION", Function("UNION", "AQL_UNION", "l,l|+",true, false, true, true, &Functions::Union) },
{ "UNION_DISTINCT", Function("UNION_DISTINCT", "AQL_UNION_DISTINCT", "l,l|+", true, false, true, true, &Functions::UnionDistinct) },
{ "MINUS", Function("MINUS", "AQL_MINUS", "l,l|+", true, false, true, true) },
{ "INTERSECTION", Function("INTERSECTION", "AQL_INTERSECTION", "l,l|+", true, false, true, true, &Functions::Intersection) },
{ "FLATTEN", Function("FLATTEN", "AQL_FLATTEN", "l|n", true, false, true, true) },
{ "LENGTH", Function("LENGTH", "AQL_LENGTH", "las", true, false, true, true, &Functions::Length) },
{ "MIN", Function("MIN", "AQL_MIN", "l", true, false, true, true, &Functions::Min) },
{ "MAX", Function("MAX", "AQL_MAX", "l", true, false, true, true, &Functions::Max) },
{ "SUM", Function("SUM", "AQL_SUM", "l", true, false, true, true, &Functions::Sum) },
{ "MEDIAN", Function("MEDIAN", "AQL_MEDIAN", "l", true, false, true, true) },
{ "PERCENTILE", Function("PERCENTILE", "AQL_PERCENTILE", "l,n|s", true, false, true, true) },
{ "AVERAGE", Function("AVERAGE", "AQL_AVERAGE", "l", true, false, true, true, &Functions::Average) },
{ "VARIANCE_SAMPLE", Function("VARIANCE_SAMPLE", "AQL_VARIANCE_SAMPLE", "l", true, false, true, true) },
{ "VARIANCE_POPULATION", Function("VARIANCE_POPULATION", "AQL_VARIANCE_POPULATION", "l", true, false, true, true) },
{ "STDDEV_SAMPLE", Function("STDDEV_SAMPLE", "AQL_STDDEV_SAMPLE", "l", true, false, true, true) },
{ "STDDEV_POPULATION", Function("STDDEV_POPULATION", "AQL_STDDEV_POPULATION", "l", true, false, true, true) },
{ "UNIQUE", Function("UNIQUE", "AQL_UNIQUE", "l", true, false, true, true, &Functions::Unique) },
{ "SLICE", Function("SLICE", "AQL_SLICE", "l,n|n", true, false, true, true) },
{ "REVERSE", Function("REVERSE", "AQL_REVERSE", "ls", true, false, true, true) }, // note: REVERSE() can be applied on strings, too
{ "FIRST", Function("FIRST", "AQL_FIRST", "l", true, false, true, true) },
{ "LAST", Function("LAST", "AQL_LAST", "l", true, false, true, true) },
{ "NTH", Function("NTH", "AQL_NTH", "l,n", true, false, true, true) },
{ "POSITION", Function("POSITION", "AQL_POSITION", "l,.|b", true, false, true, true) },
{ "CALL", Function("CALL", "AQL_CALL", "s|.+", false, true, false, true) },
{ "APPLY", Function("APPLY", "AQL_APPLY", "s|l", false, true, false, false) },
{ "PUSH", Function("PUSH", "AQL_PUSH", "l,.|b", true, false, true, false) },
{ "APPEND", Function("APPEND", "AQL_APPEND", "l,lz|b", true, false, true, true) },
{ "POP", Function("POP", "AQL_POP", "l", true, false, true, true) },
{ "SHIFT", Function("SHIFT", "AQL_SHIFT", "l", true, false, true, true) },
{ "UNSHIFT", Function("UNSHIFT", "AQL_UNSHIFT", "l,.|b", true, false, true, true) },
{ "REMOVE_VALUE", Function("REMOVE_VALUE", "AQL_REMOVE_VALUE", "l,.|n", true, false, true, true) },
{ "REMOVE_VALUES", Function("REMOVE_VALUES", "AQL_REMOVE_VALUES", "l,lz", true, false, true, true) },
{ "REMOVE_NTH", Function("REMOVE_NTH", "AQL_REMOVE_NTH", "l,n", true, false, true, true) },
{ "RANGE", Function("RANGE", "AQL_RANGE", "n,n|n", true, true, false, true, true) },
{ "UNION", Function("UNION", "AQL_UNION", "l,l|+", true, true, false, true, true, &Functions::Union) },
{ "UNION_DISTINCT", Function("UNION_DISTINCT", "AQL_UNION_DISTINCT", "l,l|+", true, true, false, true, true, &Functions::UnionDistinct) },
{ "MINUS", Function("MINUS", "AQL_MINUS", "l,l|+", true, true, false, true, true) },
{ "INTERSECTION", Function("INTERSECTION", "AQL_INTERSECTION", "l,l|+", true, true, false, true, true, &Functions::Intersection) },
{ "FLATTEN", Function("FLATTEN", "AQL_FLATTEN", "l|n", true, true, false, true, true) },
{ "LENGTH", Function("LENGTH", "AQL_LENGTH", "las", true, true, false, true, true, &Functions::Length) },
{ "MIN", Function("MIN", "AQL_MIN", "l", true, true, false, true, true, &Functions::Min) },
{ "MAX", Function("MAX", "AQL_MAX", "l", true, true, false, true, true, &Functions::Max) },
{ "SUM", Function("SUM", "AQL_SUM", "l", true, true, false, true, true, &Functions::Sum) },
{ "MEDIAN", Function("MEDIAN", "AQL_MEDIAN", "l", true, true, false, true, true) },
{ "PERCENTILE", Function("PERCENTILE", "AQL_PERCENTILE", "l,n|s", true, true, false, true, true) },
{ "AVERAGE", Function("AVERAGE", "AQL_AVERAGE", "l", true, true, false, true, true, &Functions::Average) },
{ "VARIANCE_SAMPLE", Function("VARIANCE_SAMPLE", "AQL_VARIANCE_SAMPLE", "l", true, true, false, true, true) },
{ "VARIANCE_POPULATION", Function("VARIANCE_POPULATION", "AQL_VARIANCE_POPULATION", "l", true, true, false, true, true) },
{ "STDDEV_SAMPLE", Function("STDDEV_SAMPLE", "AQL_STDDEV_SAMPLE", "l", true, true, false, true, true) },
{ "STDDEV_POPULATION", Function("STDDEV_POPULATION", "AQL_STDDEV_POPULATION", "l", true, true, false, true, true) },
{ "UNIQUE", Function("UNIQUE", "AQL_UNIQUE", "l", true, true, false, true, true, &Functions::Unique) },
{ "SLICE", Function("SLICE", "AQL_SLICE", "l,n|n", true, true, false, true, true) },
{ "REVERSE", Function("REVERSE", "AQL_REVERSE", "ls", true, true, false, true, true) }, // note: REVERSE() can be applied on strings, too
{ "FIRST", Function("FIRST", "AQL_FIRST", "l", true, true, false, true, true) },
{ "LAST", Function("LAST", "AQL_LAST", "l", true, true, false, true, true) },
{ "NTH", Function("NTH", "AQL_NTH", "l,n", true, true, false, true, true) },
{ "POSITION", Function("POSITION", "AQL_POSITION", "l,.|b", true, true, false, true, true) },
{ "CALL", Function("CALL", "AQL_CALL", "s|.+", false, false, true, false, true) },
{ "APPLY", Function("APPLY", "AQL_APPLY", "s|l", false, false, true, false, false) },
{ "PUSH", Function("PUSH", "AQL_PUSH", "l,.|b", true, true, false, true, false) },
{ "APPEND", Function("APPEND", "AQL_APPEND", "l,lz|b", true, true, false, true, true) },
{ "POP", Function("POP", "AQL_POP", "l", true, true, false, true, true) },
{ "SHIFT", Function("SHIFT", "AQL_SHIFT", "l", true, true, false, true, true) },
{ "UNSHIFT", Function("UNSHIFT", "AQL_UNSHIFT", "l,.|b", true, true, false, true, true) },
{ "REMOVE_VALUE", Function("REMOVE_VALUE", "AQL_REMOVE_VALUE", "l,.|n", true, true, false, true, true) },
{ "REMOVE_VALUES", Function("REMOVE_VALUES", "AQL_REMOVE_VALUES", "l,lz", true, true, false, true, true) },
{ "REMOVE_NTH", Function("REMOVE_NTH", "AQL_REMOVE_NTH", "l,n", true, true, false, true, true) },
// document functions
{ "HAS", Function("HAS", "AQL_HAS", "az,s", true, false, true, true, &Functions::Has) },
{ "ATTRIBUTES", Function("ATTRIBUTES", "AQL_ATTRIBUTES", "a|b,b", true, false, true, true, &Functions::Attributes) },
{ "VALUES", Function("VALUES", "AQL_VALUES", "a|b", true, false, true, true, &Functions::Values) },
{ "MERGE", Function("MERGE", "AQL_MERGE", "a,a|+", true, false, true, true, &Functions::Merge) },
{ "MERGE_RECURSIVE", Function("MERGE_RECURSIVE", "AQL_MERGE_RECURSIVE", "a,a|+", true, false, true, true) },
{ "DOCUMENT", Function("DOCUMENT", "AQL_DOCUMENT", "h.|.", false, true, false, true) },
{ "MATCHES", Function("MATCHES", "AQL_MATCHES", ".,l|b", true, false, true, true) },
{ "UNSET", Function("UNSET", "AQL_UNSET", "a,sl|+", true, false, true, true, &Functions::Unset) },
{ "KEEP", Function("KEEP", "AQL_KEEP", "a,sl|+", true, false, true, true, &Functions::Keep) },
{ "TRANSLATE", Function("TRANSLATE", "AQL_TRANSLATE", ".,a|.", true, false, true, true) },
{ "ZIP", Function("ZIP", "AQL_ZIP", "l,l", true, false, true, true) },
{ "HAS", Function("HAS", "AQL_HAS", "az,s", true, true, false, true, true, &Functions::Has) },
{ "ATTRIBUTES", Function("ATTRIBUTES", "AQL_ATTRIBUTES", "a|b,b", true, true, false, true, true, &Functions::Attributes) },
{ "VALUES", Function("VALUES", "AQL_VALUES", "a|b", true, true, false, true, true, &Functions::Values) },
{ "MERGE", Function("MERGE", "AQL_MERGE", "a,a|+", true, true, false, true, true, &Functions::Merge) },
{ "MERGE_RECURSIVE", Function("MERGE_RECURSIVE", "AQL_MERGE_RECURSIVE", "a,a|+", true, true, false, true, true) },
{ "DOCUMENT", Function("DOCUMENT", "AQL_DOCUMENT", "h.|.", false, false, true, false, true) },
{ "MATCHES", Function("MATCHES", "AQL_MATCHES", ".,l|b", true, true, false, true, true) },
{ "UNSET", Function("UNSET", "AQL_UNSET", "a,sl|+", true, true, false, true, true, &Functions::Unset) },
{ "KEEP", Function("KEEP", "AQL_KEEP", "a,sl|+", true, true, false, true, true, &Functions::Keep) },
{ "TRANSLATE", Function("TRANSLATE", "AQL_TRANSLATE", ".,a|.", true, true, false, true, true) },
{ "ZIP", Function("ZIP", "AQL_ZIP", "l,l", true, true, false, true, true) },
// geo functions
{ "NEAR", Function("NEAR", "AQL_NEAR", "h,n,n|nz,s", false, true, false, true) },
{ "WITHIN", Function("WITHIN", "AQL_WITHIN", "h,n,n,n|s", false, true, false, true) },
{ "WITHIN_RECTANGLE", Function("WITHIN_RECTANGLE", "AQL_WITHIN_RECTANGLE", "h,d,d,d,d", false, true, false, true) },
{ "IS_IN_POLYGON", Function("IS_IN_POLYGON", "AQL_IS_IN_POLYGON", "l,ln|nb", true, false, true, true) },
{ "NEAR", Function("NEAR", "AQL_NEAR", "h,n,n|nz,s", true, false, true, false, true) },
{ "WITHIN", Function("WITHIN", "AQL_WITHIN", "h,n,n,n|s", true, false, true, false, true) },
{ "WITHIN_RECTANGLE", Function("WITHIN_RECTANGLE", "AQL_WITHIN_RECTANGLE", "h,d,d,d,d", true, false, true, false, true) },
{ "IS_IN_POLYGON", Function("IS_IN_POLYGON", "AQL_IS_IN_POLYGON", "l,ln|nb", true, true, false, true, true) },
// fulltext functions
{ "FULLTEXT", Function("FULLTEXT", "AQL_FULLTEXT", "h,s,s|n", false, true, false, true) },
{ "FULLTEXT", Function("FULLTEXT", "AQL_FULLTEXT", "h,s,s|n", true, false, true, false, true) },
// graph functions
{ "PATHS", Function("PATHS", "AQL_PATHS", "c,h|s,ba", false, true, false, false) },
{ "GRAPH_PATHS", Function("GRAPH_PATHS", "AQL_GRAPH_PATHS", "s|a", false, true, false, false) },
{ "SHORTEST_PATH", Function("SHORTEST_PATH", "AQL_SHORTEST_PATH", "h,h,s,s,s|a", false, true, false, false) },
{ "GRAPH_SHORTEST_PATH", Function("GRAPH_SHORTEST_PATH", "AQL_GRAPH_SHORTEST_PATH", "s,als,als|a", false, true, false, false) },
{ "GRAPH_DISTANCE_TO", Function("GRAPH_DISTANCE_TO", "AQL_GRAPH_DISTANCE_TO", "s,als,als|a", false, true, false, false) },
{ "TRAVERSAL", Function("TRAVERSAL", "AQL_TRAVERSAL", "h,h,s,s|a", false, true, false, false) },
{ "GRAPH_TRAVERSAL", Function("GRAPH_TRAVERSAL", "AQL_GRAPH_TRAVERSAL", "s,als,s|a", false, true, false, false) },
{ "TRAVERSAL_TREE", Function("TRAVERSAL_TREE", "AQL_TRAVERSAL_TREE", "h,h,s,s,s|a", false, true, false, false) },
{ "GRAPH_TRAVERSAL_TREE", Function("GRAPH_TRAVERSAL_TREE", "AQL_GRAPH_TRAVERSAL_TREE", "s,als,s,s|a", false, true, false, false) },
{ "EDGES", Function("EDGES", "AQL_EDGES", "h,s,s|l,o", false, true, false, false) },
{ "GRAPH_EDGES", Function("GRAPH_EDGES", "AQL_GRAPH_EDGES", "s,als|a", false, true, false, false) },
{ "GRAPH_VERTICES", Function("GRAPH_VERTICES", "AQL_GRAPH_VERTICES", "s,als|a", false, true, false, false) },
{ "NEIGHBORS", Function("NEIGHBORS", "AQL_NEIGHBORS", "h,h,s,s|l,a", false, true, false, false) },
{ "GRAPH_NEIGHBORS", Function("GRAPH_NEIGHBORS", "AQL_GRAPH_NEIGHBORS", "s,als|a", false, true, false, false) },
{ "GRAPH_COMMON_NEIGHBORS", Function("GRAPH_COMMON_NEIGHBORS", "AQL_GRAPH_COMMON_NEIGHBORS", "s,als,als|a,a", false, true, false, false) },
{ "GRAPH_COMMON_PROPERTIES", Function("GRAPH_COMMON_PROPERTIES", "AQL_GRAPH_COMMON_PROPERTIES", "s,als,als|a", false, true, false, false) },
{ "GRAPH_ECCENTRICITY", Function("GRAPH_ECCENTRICITY", "AQL_GRAPH_ECCENTRICITY", "s|a", false, true, false, false) },
{ "GRAPH_BETWEENNESS", Function("GRAPH_BETWEENNESS", "AQL_GRAPH_BETWEENNESS", "s|a", false, true, false, false) },
{ "GRAPH_CLOSENESS", Function("GRAPH_CLOSENESS", "AQL_GRAPH_CLOSENESS", "s|a", false, true, false, false) },
{ "GRAPH_ABSOLUTE_ECCENTRICITY", Function("GRAPH_ABSOLUTE_ECCENTRICITY", "AQL_GRAPH_ABSOLUTE_ECCENTRICITY", "s,als|a", false, true, false, false) },
{ "GRAPH_ABSOLUTE_BETWEENNESS", Function("GRAPH_ABSOLUTE_BETWEENNESS", "AQL_GRAPH_ABSOLUTE_BETWEENNESS", "s,als|a", false, true, false, false) },
{ "GRAPH_ABSOLUTE_CLOSENESS", Function("GRAPH_ABSOLUTE_CLOSENESS", "AQL_GRAPH_ABSOLUTE_CLOSENESS", "s,als|a", false, true, false, false) },
{ "GRAPH_DIAMETER", Function("GRAPH_DIAMETER", "AQL_GRAPH_DIAMETER", "s|a", false, true, false, false) },
{ "GRAPH_RADIUS", Function("GRAPH_RADIUS", "AQL_GRAPH_RADIUS", "s|a", false, true, false, false) },
{ "PATHS", Function("PATHS", "AQL_PATHS", "c,h|s,ba", true, false, true, false, false) },
{ "GRAPH_PATHS", Function("GRAPH_PATHS", "AQL_GRAPH_PATHS", "s|a", false, false, true, false, false) },
{ "SHORTEST_PATH", Function("SHORTEST_PATH", "AQL_SHORTEST_PATH", "h,h,s,s,s|a", true, false, true, false, false) },
{ "GRAPH_SHORTEST_PATH", Function("GRAPH_SHORTEST_PATH", "AQL_GRAPH_SHORTEST_PATH", "s,als,als|a", false, false, true, false, false) },
{ "GRAPH_DISTANCE_TO", Function("GRAPH_DISTANCE_TO", "AQL_GRAPH_DISTANCE_TO", "s,als,als|a", false, false, true, false, false) },
{ "TRAVERSAL", Function("TRAVERSAL", "AQL_TRAVERSAL", "h,h,s,s|a", false, false, true, false, false) },
{ "GRAPH_TRAVERSAL", Function("GRAPH_TRAVERSAL", "AQL_GRAPH_TRAVERSAL", "s,als,s|a", false, false, true, false, false) },
{ "TRAVERSAL_TREE", Function("TRAVERSAL_TREE", "AQL_TRAVERSAL_TREE", "h,h,s,s,s|a", false, false, true, false, false) },
{ "GRAPH_TRAVERSAL_TREE", Function("GRAPH_TRAVERSAL_TREE", "AQL_GRAPH_TRAVERSAL_TREE", "s,als,s,s|a", false, false, true, false, false) },
{ "EDGES", Function("EDGES", "AQL_EDGES", "h,s,s|l,o", true, false, true, false, false) },
{ "GRAPH_EDGES", Function("GRAPH_EDGES", "AQL_GRAPH_EDGES", "s,als|a", false, false, true, false, false) },
{ "GRAPH_VERTICES", Function("GRAPH_VERTICES", "AQL_GRAPH_VERTICES", "s,als|a", false, false, true, false, false) },
{ "NEIGHBORS", Function("NEIGHBORS", "AQL_NEIGHBORS", "h,h,s,s|l,a", true, false, true, false, false) },
{ "GRAPH_NEIGHBORS", Function("GRAPH_NEIGHBORS", "AQL_GRAPH_NEIGHBORS", "s,als|a", false, false, true, false, false) },
{ "GRAPH_COMMON_NEIGHBORS", Function("GRAPH_COMMON_NEIGHBORS", "AQL_GRAPH_COMMON_NEIGHBORS", "s,als,als|a,a", false, false, true, false, false) },
{ "GRAPH_COMMON_PROPERTIES", Function("GRAPH_COMMON_PROPERTIES", "AQL_GRAPH_COMMON_PROPERTIES", "s,als,als|a", false, false, true, false, false) },
{ "GRAPH_ECCENTRICITY", Function("GRAPH_ECCENTRICITY", "AQL_GRAPH_ECCENTRICITY", "s|a", false, false, true, false, false) },
{ "GRAPH_BETWEENNESS", Function("GRAPH_BETWEENNESS", "AQL_GRAPH_BETWEENNESS", "s|a", false, false, true, false, false) },
{ "GRAPH_CLOSENESS", Function("GRAPH_CLOSENESS", "AQL_GRAPH_CLOSENESS", "s|a", false, false, true, false, false) },
{ "GRAPH_ABSOLUTE_ECCENTRICITY", Function("GRAPH_ABSOLUTE_ECCENTRICITY", "AQL_GRAPH_ABSOLUTE_ECCENTRICITY", "s,als|a", false, false, true, false, false) },
{ "GRAPH_ABSOLUTE_BETWEENNESS", Function("GRAPH_ABSOLUTE_BETWEENNESS", "AQL_GRAPH_ABSOLUTE_BETWEENNESS", "s,als|a", false, false, true, false, false) },
{ "GRAPH_ABSOLUTE_CLOSENESS", Function("GRAPH_ABSOLUTE_CLOSENESS", "AQL_GRAPH_ABSOLUTE_CLOSENESS", "s,als|a", false, false, true, false, false) },
{ "GRAPH_DIAMETER", Function("GRAPH_DIAMETER", "AQL_GRAPH_DIAMETER", "s|a", false, false, true, false, false) },
{ "GRAPH_RADIUS", Function("GRAPH_RADIUS", "AQL_GRAPH_RADIUS", "s|a", false, false, true, false, false) },
// date functions
{ "DATE_NOW", Function("DATE_NOW", "AQL_DATE_NOW", "", false, false, true, true) },
{ "DATE_TIMESTAMP", Function("DATE_TIMESTAMP", "AQL_DATE_TIMESTAMP", "ns|ns,ns,ns,ns,ns,ns", true, false, true, true) },
{ "DATE_ISO8601", Function("DATE_ISO8601", "AQL_DATE_ISO8601", "ns|ns,ns,ns,ns,ns,ns", true, false, true, true) },
{ "DATE_DAYOFWEEK", Function("DATE_DAYOFWEEK", "AQL_DATE_DAYOFWEEK", "ns", true, false, true, true) },
{ "DATE_YEAR", Function("DATE_YEAR", "AQL_DATE_YEAR", "ns", true, false, true, true) },
{ "DATE_MONTH", Function("DATE_MONTH", "AQL_DATE_MONTH", "ns", true, false, true, true) },
{ "DATE_DAY", Function("DATE_DAY", "AQL_DATE_DAY", "ns", true, false, true, true) },
{ "DATE_HOUR", Function("DATE_HOUR", "AQL_DATE_HOUR", "ns", true, false, true, true) },
{ "DATE_MINUTE", Function("DATE_MINUTE", "AQL_DATE_MINUTE", "ns", true, false, true, true) },
{ "DATE_SECOND", Function("DATE_SECOND", "AQL_DATE_SECOND", "ns", true, false, true, true) },
{ "DATE_MILLISECOND", Function("DATE_MILLISECOND", "AQL_DATE_MILLISECOND", "ns", true, false, true, true) },
{ "DATE_NOW", Function("DATE_NOW", "AQL_DATE_NOW", "", false, false, false, true, true) },
{ "DATE_TIMESTAMP", Function("DATE_TIMESTAMP", "AQL_DATE_TIMESTAMP", "ns|ns,ns,ns,ns,ns,ns", true, true, false, true, true) },
{ "DATE_ISO8601", Function("DATE_ISO8601", "AQL_DATE_ISO8601", "ns|ns,ns,ns,ns,ns,ns", true, true, false, true, true) },
{ "DATE_DAYOFWEEK", Function("DATE_DAYOFWEEK", "AQL_DATE_DAYOFWEEK", "ns", true, true, false, true, true) },
{ "DATE_YEAR", Function("DATE_YEAR", "AQL_DATE_YEAR", "ns", true, true, false, true, true) },
{ "DATE_MONTH", Function("DATE_MONTH", "AQL_DATE_MONTH", "ns", true, true, false, true, true) },
{ "DATE_DAY", Function("DATE_DAY", "AQL_DATE_DAY", "ns", true, true, false, true, true) },
{ "DATE_HOUR", Function("DATE_HOUR", "AQL_DATE_HOUR", "ns", true, true, false, true, true) },
{ "DATE_MINUTE", Function("DATE_MINUTE", "AQL_DATE_MINUTE", "ns", true, true, false, true, true) },
{ "DATE_SECOND", Function("DATE_SECOND", "AQL_DATE_SECOND", "ns", true, true, false, true, true) },
{ "DATE_MILLISECOND", Function("DATE_MILLISECOND", "AQL_DATE_MILLISECOND", "ns", true, true, false, true, true) },
// misc functions
{ "FAIL", Function("FAIL", "AQL_FAIL", "|s", false, true, true, true) },
{ "PASSTHRU", Function("PASSTHRU", "AQL_PASSTHRU", ".", false, false, true, true, &Functions::Passthru ) },
{ "NOOPT", Function("NOOPT", "AQL_PASSTHRU", ".", false, false, true, true, &Functions::Passthru ) },
{ "V8", Function("V8", "AQL_PASSTHRU", ".", false, false, true, true) },
{ "FAIL", Function("FAIL", "AQL_FAIL", "|s", false, false, true, true, true) },
{ "PASSTHRU", Function("PASSTHRU", "AQL_PASSTHRU", ".", false, false, false, true, true, &Functions::Passthru ) },
{ "NOOPT", Function("NOOPT", "AQL_PASSTHRU", ".", false, false, false, true, true, &Functions::Passthru ) },
{ "V8", Function("V8", "AQL_PASSTHRU", ".", false, false, false, true, true) },
#ifdef TRI_ENABLE_FAILURE_TESTS
{ "TEST_MODIFY", Function("TEST_MODIFY", "AQL_TEST_MODIFY", "s,.", false, false, true, false) },
{ "TEST_MODIFY", Function("TEST_MODIFY", "AQL_TEST_MODIFY", "s,.", false, false, false, true, false) },
#endif
{ "SLEEP", Function("SLEEP", "AQL_SLEEP", "n", false, true, true, true) },
{ "COLLECTIONS", Function("COLLECTIONS", "AQL_COLLECTIONS", "", false, true, false, true) },
{ "NOT_NULL", Function("NOT_NULL", "AQL_NOT_NULL", ".|+", true, false, true, true) },
{ "FIRST_LIST", Function("FIRST_LIST", "AQL_FIRST_LIST", ".|+", true, false, true, true) },
{ "FIRST_DOCUMENT", Function("FIRST_DOCUMENT", "AQL_FIRST_DOCUMENT", ".|+", true, false, true, true) },
{ "PARSE_IDENTIFIER", Function("PARSE_IDENTIFIER", "AQL_PARSE_IDENTIFIER", ".", true, false, true, true) },
{ "CURRENT_USER", Function("CURRENT_USER", "AQL_CURRENT_USER", "", false, false, false, true) },
{ "CURRENT_DATABASE", Function("CURRENT_DATABASE", "AQL_CURRENT_DATABASE", "", false, false, false, true) }
{ "SLEEP", Function("SLEEP", "AQL_SLEEP", "n", false, false, true, true, true) },
{ "COLLECTIONS", Function("COLLECTIONS", "AQL_COLLECTIONS", "", false, false, true, false, true) },
{ "NOT_NULL", Function("NOT_NULL", "AQL_NOT_NULL", ".|+", true, true, false, true, true) },
{ "FIRST_LIST", Function("FIRST_LIST", "AQL_FIRST_LIST", ".|+", true, true, false, true, true) },
{ "FIRST_DOCUMENT", Function("FIRST_DOCUMENT", "AQL_FIRST_DOCUMENT", ".|+", true, true, false, true, true) },
{ "PARSE_IDENTIFIER", Function("PARSE_IDENTIFIER", "AQL_PARSE_IDENTIFIER", ".", true, true, false, true, true) },
{ "CURRENT_USER", Function("CURRENT_USER", "AQL_CURRENT_USER", "", false, false, false, false, true) },
{ "CURRENT_DATABASE", Function("CURRENT_DATABASE", "AQL_CURRENT_DATABASE", "", false, false, false, false, true) }
};
////////////////////////////////////////////////////////////////////////////////

View File

@ -43,6 +43,7 @@ using namespace triagens::aql;
Function::Function (std::string const& externalName,
std::string const& internalName,
std::string const& arguments,
bool isCacheable,
bool isDeterministic,
bool canThrow,
bool canRunOnDBServer,
@ -51,6 +52,7 @@ Function::Function (std::string const& externalName,
: internalName(internalName),
externalName(externalName),
arguments(arguments),
isCacheable(isCacheable),
isDeterministic(isDeterministic),
canThrow(canThrow),
canRunOnDBServer(canRunOnDBServer),

View File

@ -57,6 +57,7 @@ namespace triagens {
Function (std::string const& externalName,
std::string const& internalName,
std::string const& arguments,
bool isCacheable,
bool isDeterministic,
bool canThrow,
bool canRunOnDBServer,
@ -131,6 +132,12 @@ namespace triagens {
std::string const arguments;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not the function results may be cached by the query cache
////////////////////////////////////////////////////////////////////////////////
bool const isCacheable;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not the function is deterministic (i.e. its results are
/// identical when called repeatedly with the same input values)

View File

@ -34,8 +34,10 @@
#include "Aql/ExecutionPlan.h"
#include "Aql/Optimizer.h"
#include "Aql/Parser.h"
#include "Aql/QueryCache.h"
#include "Aql/QueryList.h"
#include "Aql/ShortStringStorage.h"
#include "Basics/fasthash.h"
#include "Basics/JsonHelper.h"
#include "Basics/json.h"
#include "Basics/tri-strings.h"
@ -45,6 +47,7 @@
#include "Utils/CollectionNameResolver.h"
#include "Utils/StandaloneTransactionContext.h"
#include "Utils/V8TransactionContext.h"
#include "V8/v8-conv.h"
#include "V8Server/ApplicationV8.h"
#include "VocBase/vocbase.h"
@ -177,7 +180,7 @@ Query::Query (triagens::arango::ApplicationV8* applicationV8,
TRI_json_t* bindParameters,
TRI_json_t* options,
QueryPart part)
: _id(TRI_NextQueryIdVocBase(vocbase)),
: _id(0),
_applicationV8(applicationV8),
_vocbase(vocbase),
_executor(nullptr),
@ -201,18 +204,12 @@ Query::Query (triagens::arango::ApplicationV8* applicationV8,
_warnings(),
_part(part),
_contextOwnedByExterior(contextOwnedByExterior),
_killed(false) {
_killed(false),
_isModificationQuery(false) {
// std::cout << TRI_CurrentThreadId() << ", QUERY " << this << " CTOR: " << queryString << "\n";
TRI_ASSERT(_vocbase != nullptr);
_profile = new Profile(this);
enterState(INITIALIZATION);
_ast = new Ast(this);
_nodes.reserve(32);
_strings.reserve(32);
}
////////////////////////////////////////////////////////////////////////////////
@ -225,7 +222,7 @@ Query::Query (triagens::arango::ApplicationV8* applicationV8,
triagens::basics::Json queryStruct,
TRI_json_t* options,
QueryPart part)
: _id(TRI_NextQueryIdVocBase(vocbase)),
: _id(0),
_applicationV8(applicationV8),
_vocbase(vocbase),
_executor(nullptr),
@ -249,18 +246,12 @@ Query::Query (triagens::arango::ApplicationV8* applicationV8,
_warnings(),
_part(part),
_contextOwnedByExterior(contextOwnedByExterior),
_killed(false) {
_killed(false),
_isModificationQuery(false) {
// std::cout << TRI_CurrentThreadId() << ", QUERY " << this << " CTOR (JSON): " << _queryJson.toString() << "\n";
TRI_ASSERT(_vocbase != nullptr);
_profile = new Profile(this);
enterState(INITIALIZATION);
_ast = new Ast(this);
_nodes.reserve(32);
_strings.reserve(32);
}
////////////////////////////////////////////////////////////////////////////////
@ -508,21 +499,22 @@ void Query::registerWarning (int code,
////////////////////////////////////////////////////////////////////////////////
QueryResult Query::prepare (QueryRegistry* registry) {
enterState(PARSING);
try {
enterState(PARSING);
std::unique_ptr<Parser> parser(new Parser(this));
std::unique_ptr<ExecutionPlan> plan;
if (_queryString != nullptr) {
parser->parse(false);
// put in bind parameters
parser->ast()->injectBindParameters(_bindParameters);
}
_isModificationQuery = parser->isModificationQuery();
// create the transaction object, but do not start it yet
auto trx = new triagens::arango::AqlTransaction(createTransactionContext(), _vocbase, _collections.collections(), _part == PART_MAIN);
_trx = trx; // Save the transaction in our object
_trx = new triagens::arango::AqlTransaction(createTransactionContext(), _vocbase, _collections.collections(), _part == PART_MAIN);
bool planRegisters;
@ -639,14 +631,42 @@ QueryResult Query::prepare (QueryRegistry* registry) {
////////////////////////////////////////////////////////////////////////////////
QueryResult Query::execute (QueryRegistry* registry) {
// Now start the execution:
bool useQueryCache = canUseQueryCache();
uint64_t queryStringHash = 0;
try {
if (useQueryCache) {
// hash the query
queryStringHash = hash();
// check the query cache for an existing result
auto cacheEntry = triagens::aql::QueryCache::instance()->lookup(_vocbase, queryStringHash, _queryString, _queryLength);
triagens::aql::QueryCacheResultEntryGuard guard(cacheEntry);
if (cacheEntry != nullptr) {
// got a result from the query cache
QueryResult res(TRI_ERROR_NO_ERROR);
res.warnings = warningsToJson(TRI_UNKNOWN_MEM_ZONE);
res.json = TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, cacheEntry->_queryResult);
res.stats = nullptr;
res.cached = true;
return res;
}
}
init();
QueryResult res = prepare(registry);
if (res.code != TRI_ERROR_NO_ERROR) {
return res;
}
if (useQueryCache && (_isModificationQuery || ! _warnings.empty() || ! _ast->root()->isCacheable())) {
useQueryCache = false;
}
triagens::basics::Json jsonResult(triagens::basics::Json::Array, 16);
triagens::basics::Json stats;
@ -656,22 +676,57 @@ QueryResult Query::execute (QueryRegistry* registry) {
AqlItemBlock* value = nullptr;
try {
while (nullptr != (value = _engine->getSome(1, ExecutionBlock::DefaultBatchSize))) {
auto doc = value->getDocumentCollection(resultRegister);
if (useQueryCache) {
// iterate over result, return it and store it in query cache
while (nullptr != (value = _engine->getSome(1, ExecutionBlock::DefaultBatchSize))) {
auto doc = value->getDocumentCollection(resultRegister);
size_t const n = value->size();
// reserve space for n additional results at once
jsonResult.reserve(n);
size_t const n = value->size();
// reserve space for n additional results at once
jsonResult.reserve(n);
for (size_t i = 0; i < n; ++i) {
auto val = value->getValueReference(i, resultRegister);
for (size_t i = 0; i < n; ++i) {
auto val = value->getValueReference(i, resultRegister);
if (! val.isEmpty()) {
jsonResult.add(val.toJson(_trx, doc, true));
if (! val.isEmpty()) {
jsonResult.add(val.toJson(_trx, doc, true));
}
}
delete value;
value = nullptr;
}
if (_warnings.empty()) {
// finally store the generated result in the query cache
QueryCache::instance()->store(
_vocbase,
queryStringHash,
_queryString,
_queryLength,
TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, jsonResult.json()),
_trx->collectionNames()
);
}
}
else {
// iterate over result and return it
while (nullptr != (value = _engine->getSome(1, ExecutionBlock::DefaultBatchSize))) {
auto doc = value->getDocumentCollection(resultRegister);
size_t const n = value->size();
// reserve space for n additional results at once
jsonResult.reserve(n);
for (size_t i = 0; i < n; ++i) {
auto val = value->getValueReference(i, resultRegister);
if (! val.isEmpty()) {
jsonResult.add(val.toJson(_trx, doc, true));
}
}
delete value;
value = nullptr;
}
delete value;
value = nullptr;
}
}
catch (...) {
@ -721,40 +776,104 @@ QueryResult Query::execute (QueryRegistry* registry) {
/// may only be called with an active V8 handle scope
////////////////////////////////////////////////////////////////////////////////
QueryResultV8 Query::executeV8 (v8::Isolate* isolate, QueryRegistry* registry) {
QueryResultV8 Query::executeV8 (v8::Isolate* isolate,
QueryRegistry* registry) {
bool useQueryCache = canUseQueryCache();
uint64_t queryStringHash = 0;
// Now start the execution:
try {
if (useQueryCache) {
// hash the query
queryStringHash = hash();
// check the query cache for an existing result
auto cacheEntry = triagens::aql::QueryCache::instance()->lookup(_vocbase, queryStringHash, _queryString, _queryLength);
triagens::aql::QueryCacheResultEntryGuard guard(cacheEntry);
if (cacheEntry != nullptr) {
// got a result from the query cache
QueryResultV8 res(TRI_ERROR_NO_ERROR);
res.result = v8::Handle<v8::Array>::Cast(TRI_ObjectJson(isolate, cacheEntry->_queryResult));
res.cached = true;
return res;
}
}
init();
QueryResultV8 res = prepare(registry);
if (res.code != TRI_ERROR_NO_ERROR) {
return res;
}
if (useQueryCache && (_isModificationQuery || ! _warnings.empty() || ! _ast->root()->isCacheable())) {
useQueryCache = false;
}
QueryResultV8 result(TRI_ERROR_NO_ERROR);
result.result = v8::Array::New(isolate);
triagens::basics::Json stats;
// this is the RegisterId our results can be found in
auto const resultRegister = _engine->resultRegister();
AqlItemBlock* value = nullptr;
try {
uint32_t j = 0;
while (nullptr != (value = _engine->getSome(1, ExecutionBlock::DefaultBatchSize))) {
auto doc = value->getDocumentCollection(resultRegister);
if (useQueryCache) {
// iterate over result, return it and store it in query cache
std::unique_ptr<TRI_json_t> cacheResult(TRI_CreateArrayJson(TRI_UNKNOWN_MEM_ZONE));
size_t const n = value->size();
for (size_t i = 0; i < n; ++i) {
auto val = value->getValueReference(i, resultRegister);
uint32_t j = 0;
while (nullptr != (value = _engine->getSome(1, ExecutionBlock::DefaultBatchSize))) {
auto doc = value->getDocumentCollection(resultRegister);
if (! val.isEmpty()) {
result.result->Set(j++, val.toV8(isolate, _trx, doc));
size_t const n = value->size();
for (size_t i = 0; i < n; ++i) {
auto val = value->getValueReference(i, resultRegister);
if (! val.isEmpty()) {
result.result->Set(j++, val.toV8(isolate, _trx, doc));
auto json = val.toJson(_trx, doc, true);
TRI_PushBack3ArrayJson(TRI_UNKNOWN_MEM_ZONE, cacheResult.get(), json.steal());
}
}
delete value;
value = nullptr;
}
if (_warnings.empty()) {
// finally store the generated result in the query cache
QueryCache::instance()->store(
_vocbase,
queryStringHash,
_queryString,
_queryLength,
cacheResult.get(),
_trx->collectionNames()
);
cacheResult.release();
}
}
else {
// iterate over result and return it
uint32_t j = 0;
while (nullptr != (value = _engine->getSome(1, ExecutionBlock::DefaultBatchSize))) {
auto doc = value->getDocumentCollection(resultRegister);
size_t const n = value->size();
for (size_t i = 0; i < n; ++i) {
auto val = value->getValueReference(i, resultRegister);
if (! val.isEmpty()) {
result.result->Set(j++, val.toV8(isolate, _trx, doc));
}
}
delete value;
value = nullptr;
}
delete value;
value = nullptr;
}
}
catch (...) {
@ -803,6 +922,7 @@ QueryResultV8 Query::executeV8 (v8::Isolate* isolate, QueryRegistry* registry) {
QueryResult Query::parse () {
try {
init();
Parser parser(this);
return parser.parse(true);
}
@ -827,9 +947,10 @@ QueryResult Query::parse () {
////////////////////////////////////////////////////////////////////////////////
QueryResult Query::explain () {
enterState(PARSING);
try {
init();
enterState(PARSING);
Parser parser(this);
parser.parse(true);
@ -842,8 +963,7 @@ QueryResult Query::explain () {
// std::cout << "AST: " << triagens::basics::JsonHelper::toString(parser.ast()->toJson(TRI_UNKNOWN_MEM_ZONE)) << "\n";
// create the transaction object, but do not start it yet
auto trx = new triagens::arango::AqlTransaction(createTransactionContext(), _vocbase, _collections.collections(), true);
_trx = trx; // save the pointer in this
_trx = new triagens::arango::AqlTransaction(createTransactionContext(), _vocbase, _collections.collections(), true);
// we have an AST
int res = _trx->begin();
@ -1096,6 +1216,81 @@ TRI_json_t* Query::warningsToJson (TRI_memory_zone_t* zone) const {
// --SECTION-- private methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief initializes the query
////////////////////////////////////////////////////////////////////////////////
void Query::init () {
TRI_ASSERT(_id == 0);
_id = TRI_NextQueryIdVocBase(_vocbase);
_profile = new Profile(this);
enterState(INITIALIZATION);
_ast = new Ast(this);
_nodes.reserve(32);
_strings.reserve(32);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief calculate a hash value for the query and bind parameters
////////////////////////////////////////////////////////////////////////////////
uint64_t Query::hash () const {
// hash the query string first
uint64_t hash = triagens::aql::QueryCache::instance()->hashQueryString(_queryString, _queryLength);
// handle "fullCount" option. if this option is set, the query result will
// be different to when it is not set!
if (getBooleanOption("fullcount", false)) {
hash = fasthash64("fullcount:true", strlen("fullcount:true"), hash);
}
else {
hash = fasthash64("fullcount:false", strlen("fullcount:false"), hash);
}
// handle "count" option
if (getBooleanOption("count", false)) {
hash = fasthash64("count:true", strlen("count:true"), hash);
}
else {
hash = fasthash64("count:false", strlen("count:false"), hash);
}
// blend query hash with bind parameters
return hash ^ _bindParameters.hash();
}
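
The cache key therefore combines the hash of the raw query string with the `count`/`fullCount` flags and the hash of the bind parameter values. The following standalone sketch is an illustration only (not part of the patch); it reuses fasthash64 from Basics/fasthash.h, with a made-up seed for the bind parameters, and shows why two textually identical queries with different bind values occupy different cache slots:

#include <cstdint>
#include <cstring>

// assumed declaration, matching the fasthash64 calls above
uint64_t fasthash64 (void const* buf, size_t len, uint64_t seed);

static uint64_t cacheKey (char const* queryString, char const* bindParametersJson) {
  // hash the query string first, then blend in the serialized bind parameters
  uint64_t hash = fasthash64(queryString, strlen(queryString), 0x3123456789abcdef);
  return hash ^ fasthash64(bindParametersJson, strlen(bindParametersJson), 0xdeadbeefULL);
}

// cacheKey("FOR d IN docs FILTER d.v == @v RETURN d", "{\"v\":1}") and
// cacheKey("FOR d IN docs FILTER d.v == @v RETURN d", "{\"v\":2}") differ,
// so the two executions can never serve each other's cached result.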
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not the query cache can be used for the query
////////////////////////////////////////////////////////////////////////////////
bool Query::canUseQueryCache () const {
if (_queryString == nullptr || _queryLength < 8) {
return false;
}
auto queryCacheMode = QueryCache::instance()->mode();
if (queryCacheMode == CACHE_ALWAYS_ON && getBooleanOption("cache", true)) {
// cache mode is set to always on... query can still be excluded from cache by
// setting `cache` attribute to false.
// cannot use query cache on a coordinator at the moment
return ! triagens::arango::ServerState::instance()->isCoordinator();
}
else if (queryCacheMode == CACHE_ON_DEMAND && getBooleanOption("cache", false)) {
// cache mode is set to demand... query will only be cached if its `cache`
// attribute is set to true
// cannot use query cache on a coordinator at the moment
return ! triagens::arango::ServerState::instance()->isCoordinator();
}
return false;
}
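
Put differently, the global cache mode and the per-query `cache` option form a small decision table: `on` is opt-out, `demand` is opt-in, and `off` wins unconditionally. A minimal sketch of just that combination, using hypothetical names (the real check above additionally refuses very short query strings and coordinators):

enum class CacheMode { AlwaysOff, AlwaysOn, OnDemand };

static bool queryMayUseCache (CacheMode mode, bool cacheOptionSet, bool cacheOptionValue) {
  if (mode == CacheMode::AlwaysOn) {
    // opt-out: cached unless the query explicitly sets `cache` to false
    return ! cacheOptionSet || cacheOptionValue;
  }
  if (mode == CacheMode::OnDemand) {
    // opt-in: cached only if the query explicitly sets `cache` to true
    return cacheOptionSet && cacheOptionValue;
  }
  return false;   // CacheMode::AlwaysOff
}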
////////////////////////////////////////////////////////////////////////////////
/// @brief fetch a numeric value from the options
////////////////////////////////////////////////////////////////////////////////

View File

@ -361,7 +361,6 @@ namespace triagens {
QueryResultV8 executeV8 (v8::Isolate* isolate, QueryRegistry*);
////////////////////////////////////////////////////////////////////////////////
/// @brief parse an AQL query
////////////////////////////////////////////////////////////////////////////////
@ -496,6 +495,24 @@ namespace triagens {
private:
////////////////////////////////////////////////////////////////////////////////
/// @brief initializes the query
////////////////////////////////////////////////////////////////////////////////
void init ();
////////////////////////////////////////////////////////////////////////////////
/// @brief calculate a hash value for the query and bind parameters
////////////////////////////////////////////////////////////////////////////////
uint64_t hash () const;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not the query cache can be used for the query
////////////////////////////////////////////////////////////////////////////////
bool canUseQueryCache () const;
////////////////////////////////////////////////////////////////////////////////
/// @brief fetch a numeric value from the options
////////////////////////////////////////////////////////////////////////////////
@ -555,7 +572,7 @@ namespace triagens {
/// @brief query id
////////////////////////////////////////////////////////////////////////////////
TRI_voc_tick_t const _id;
TRI_voc_tick_t _id;
////////////////////////////////////////////////////////////////////////////////
/// @brief application v8 used in the query, we need this for V8 context access
@ -710,6 +727,12 @@ namespace triagens {
bool _killed;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not the query is a data modification query
////////////////////////////////////////////////////////////////////////////////
bool _isModificationQuery;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not query tracking is disabled globally
////////////////////////////////////////////////////////////////////////////////

arangod/Aql/QueryCache.cpp (new file, 740 lines)
View File

@ -0,0 +1,740 @@
////////////////////////////////////////////////////////////////////////////////
/// @brief Aql, query cache
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014, ArangoDB GmbH, Cologne, Germany
/// @author Copyright 2012-2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
#include "Aql/QueryCache.h"
#include "Basics/fasthash.h"
#include "Basics/json.h"
#include "Basics/Exceptions.h"
#include "Basics/MutexLocker.h"
#include "Basics/ReadLocker.h"
#include "Basics/tri-strings.h"
#include "Basics/WriteLocker.h"
#include "VocBase/vocbase.h"
using namespace triagens::aql;
// -----------------------------------------------------------------------------
// --SECTION-- private variables
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief singleton instance of the query cache
////////////////////////////////////////////////////////////////////////////////
static triagens::aql::QueryCache Instance;
////////////////////////////////////////////////////////////////////////////////
/// @brief maximum number of results in each per-database cache
////////////////////////////////////////////////////////////////////////////////
static size_t MaxResults = 128; // default value. can be changed later
////////////////////////////////////////////////////////////////////////////////
/// @brief the current cache mode (also determines whether the cache is enabled)
////////////////////////////////////////////////////////////////////////////////
static std::atomic<triagens::aql::QueryCacheMode> Mode(CACHE_ON_DEMAND);
// -----------------------------------------------------------------------------
// --SECTION-- struct QueryCacheResultEntry
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief create a cache entry
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry::QueryCacheResultEntry (uint64_t hash,
char const* queryString,
size_t queryStringLength,
TRI_json_t* queryResult,
std::vector<std::string> const& collections)
: _hash(hash),
_queryString(nullptr),
_queryStringLength(queryStringLength),
_queryResult(queryResult),
_collections(collections),
_prev(nullptr),
_next(nullptr),
_refCount(0),
_deletionRequested(0) {
_queryString = TRI_DuplicateString2Z(TRI_UNKNOWN_MEM_ZONE, queryString, queryStringLength);
if (_queryString == nullptr) {
THROW_ARANGO_EXCEPTION(TRI_ERROR_OUT_OF_MEMORY);
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief destroy a cache entry
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry::~QueryCacheResultEntry () {
TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, _queryResult);
TRI_FreeString(TRI_UNKNOWN_MEM_ZONE, _queryString);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief check whether the element can be destroyed, and delete it if yes
////////////////////////////////////////////////////////////////////////////////
void QueryCacheResultEntry::tryDelete () {
_deletionRequested = 1;
if (_refCount == 0) {
delete this;
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief use the element, so it cannot be deleted meanwhile
////////////////////////////////////////////////////////////////////////////////
void QueryCacheResultEntry::use () {
++_refCount;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief unuse the element, so it can be deleted if required
////////////////////////////////////////////////////////////////////////////////
void QueryCacheResultEntry::unuse () {
if (--_refCount == 0) {
if (_deletionRequested == 1) {
delete this;
}
}
}
// -----------------------------------------------------------------------------
// --SECTION-- struct QueryCacheDatabaseEntry
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief create a database-specific cache
////////////////////////////////////////////////////////////////////////////////
QueryCacheDatabaseEntry::QueryCacheDatabaseEntry ()
: _entriesByHash(),
_entriesByCollection(),
_head(nullptr),
_tail(nullptr),
_numElements(0) {
_entriesByHash.reserve(128);
_entriesByCollection.reserve(16);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief destroy a database-specific cache
////////////////////////////////////////////////////////////////////////////////
QueryCacheDatabaseEntry::~QueryCacheDatabaseEntry () {
for (auto& it : _entriesByHash) {
tryDelete(it.second);
}
_entriesByHash.clear();
_entriesByCollection.clear();
}
////////////////////////////////////////////////////////////////////////////////
/// @brief lookup a query result in the database-specific cache
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry* QueryCacheDatabaseEntry::lookup (uint64_t hash,
char const* queryString,
size_t queryStringLength) {
auto it = _entriesByHash.find(hash);
if (it == _entriesByHash.end()) {
// not found in cache
return nullptr;
}
// found some result in cache
if (queryStringLength != (*it).second->_queryStringLength ||
strcmp(queryString, (*it).second->_queryString) != 0) {
// found something, but obviously the result of a different query with the same hash
return nullptr;
}
// found an entry
auto entry = (*it).second;
entry->use();
return entry;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief store a query result in the database-specific cache
////////////////////////////////////////////////////////////////////////////////
void QueryCacheDatabaseEntry::store (uint64_t hash,
QueryCacheResultEntry* entry) {
// insert entry into the cache
if (! _entriesByHash.emplace(hash, entry).second) {
// remove previous entry; grab the pointer before erasing the iterator
auto it = _entriesByHash.find(hash);
TRI_ASSERT(it != _entriesByHash.end());
auto previous = (*it).second;
unlink(previous);
_entriesByHash.erase(it);
tryDelete(previous);
// and insert again
_entriesByHash.emplace(hash, entry);
}
try {
for (auto const& it : entry->_collections) {
auto it2 = _entriesByCollection.find(it);
if (it2 == _entriesByCollection.end()) {
// no entry found for collection. now create it
_entriesByCollection.emplace(it, std::unordered_set<uint64_t>{ hash });
}
else {
// there already was an entry for this collection
(*it2).second.emplace(hash);
}
}
}
catch (...) {
// rollback
// remove from collections
for (auto const& it : entry->_collections) {
auto it2 = _entriesByCollection.find(it);
if (it2 != _entriesByCollection.end()) {
(*it2).second.erase(hash);
}
}
// finally remove entry itself from hash table; grab the pointer before erasing
auto it = _entriesByHash.find(hash);
TRI_ASSERT(it != _entriesByHash.end());
auto stored = (*it).second;
_entriesByHash.erase(it);
unlink(stored);
tryDelete(stored);
throw;
}
link(entry);
enforceMaxResults(MaxResults);
TRI_ASSERT(_numElements <= MaxResults);
TRI_ASSERT(_head != nullptr);
TRI_ASSERT(_tail != nullptr);
TRI_ASSERT(_tail == entry);
TRI_ASSERT(entry->_next == nullptr);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all entries for the given collections in the
/// database-specific cache
////////////////////////////////////////////////////////////////////////////////
void QueryCacheDatabaseEntry::invalidate (std::vector<char const*> const& collections) {
for (auto const& it : collections) {
invalidate(it);
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all entries for a collection in the database-specific
/// cache
////////////////////////////////////////////////////////////////////////////////
void QueryCacheDatabaseEntry::invalidate (char const* collection) {
auto it = _entriesByCollection.find(std::string(collection));
if (it == _entriesByCollection.end()) {
return;
}
for (auto& it2 : (*it).second) {
auto it3 = _entriesByHash.find(it2);
if (it3 != _entriesByHash.end()) {
// remove entry from the linked list
unlink((*it3).second);
// erase it from hash table
_entriesByHash.erase(it3);
// delete the object itself
tryDelete((*it3).second);
}
}
_entriesByCollection.erase(it);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief enforce maximum number of results
////////////////////////////////////////////////////////////////////////////////
void QueryCacheDatabaseEntry::enforceMaxResults (size_t value) {
while (_numElements > value) {
// too many elements. now wipe the first element from the list
// copy old _head value as unlink() will change it...
auto head = _head;
unlink(head);
auto it = _entriesByHash.find(head->_hash);
TRI_ASSERT(it != _entriesByHash.end());
_entriesByHash.erase(it);
tryDelete(head);
}
}
// -----------------------------------------------------------------------------
// --SECTION-- private methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief check whether the element can be destroyed, and delete it if yes
////////////////////////////////////////////////////////////////////////////////
void QueryCacheDatabaseEntry::tryDelete (QueryCacheResultEntry* e) {
e->tryDelete();
}
////////////////////////////////////////////////////////////////////////////////
/// @brief unlink the result entry from the list
////////////////////////////////////////////////////////////////////////////////
void QueryCacheDatabaseEntry::unlink (QueryCacheResultEntry* e) {
if (e->_prev != nullptr) {
e->_prev->_next = e->_next;
}
if (e->_next != nullptr) {
e->_next->_prev = e->_prev;
}
if (_head == e) {
_head = e->_next;
}
if (_tail == e) {
_tail = e->_prev;
}
e->_prev = nullptr;
e->_next = nullptr;
TRI_ASSERT(_numElements > 0);
--_numElements;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief link the result entry to the end of the list
////////////////////////////////////////////////////////////////////////////////
void QueryCacheDatabaseEntry::link (QueryCacheResultEntry* e) {
++_numElements;
if (_head == nullptr) {
// list is empty
TRI_ASSERT(_tail == nullptr);
// set list head and tail to the element
_head = e;
_tail = e;
return;
}
if (_tail != nullptr) {
// adjust list tail
_tail->_next = e;
}
e->_prev = _tail;
_tail = e;
}
// -----------------------------------------------------------------------------
// --SECTION-- class QueryCache
// -----------------------------------------------------------------------------
// -----------------------------------------------------------------------------
// --SECTION-- constructors / destructors
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief create the query cache
////////////////////////////////////////////////////////////////////////////////
QueryCache::QueryCache ()
: _propertiesLock(),
_entriesLock(),
_entries() {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief destroy the query cache
////////////////////////////////////////////////////////////////////////////////
QueryCache::~QueryCache () {
invalidate();
}
// -----------------------------------------------------------------------------
// --SECTION-- public methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief return the query cache properties
////////////////////////////////////////////////////////////////////////////////
triagens::basics::Json QueryCache::properties () {
MUTEX_LOCKER(_propertiesLock);
triagens::basics::Json json(triagens::basics::Json::Object, 2);
json("mode", triagens::basics::Json(modeString(mode())));
json("maxResults", triagens::basics::Json(static_cast<double>(MaxResults)));
return json;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the cache properties
////////////////////////////////////////////////////////////////////////////////
void QueryCache::properties (std::pair<std::string, size_t>& result) {
MUTEX_LOCKER(_propertiesLock);
result.first = modeString(mode());
result.second = MaxResults;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief set the cache properties
////////////////////////////////////////////////////////////////////////////////
void QueryCache::setProperties (std::pair<std::string, size_t> const& properties) {
MUTEX_LOCKER(_propertiesLock);
setMode(properties.first);
setMaxResults(properties.second);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief test whether the cache might be active
/// this is a quick test that may save the caller from further bothering
/// about the query cache in case it returns `false`
////////////////////////////////////////////////////////////////////////////////
bool QueryCache::mayBeActive () const {
return (mode() != CACHE_ALWAYS_OFF);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return whether or not the query cache is enabled
////////////////////////////////////////////////////////////////////////////////
QueryCacheMode QueryCache::mode () const {
return Mode.load(std::memory_order_relaxed);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return a string version of the mode
////////////////////////////////////////////////////////////////////////////////
std::string QueryCache::modeString (QueryCacheMode mode) {
switch (mode) {
case CACHE_ALWAYS_OFF:
return "off";
case CACHE_ALWAYS_ON:
return "on";
case CACHE_ON_DEMAND:
return "demand";
}
TRI_ASSERT(false);
return "off";
}
////////////////////////////////////////////////////////////////////////////////
/// @brief lookup a query result in the cache
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry* QueryCache::lookup (TRI_vocbase_t* vocbase,
uint64_t hash,
char const* queryString,
size_t queryStringLength) {
auto const part = getPart(vocbase);
READ_LOCKER(_entriesLock[part]);
auto it = _entries[part].find(vocbase);
if (it == _entries[part].end()) {
// no entry found for the requested database
return nullptr;
}
return (*it).second->lookup(hash, queryString, queryStringLength);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief store a query in the cache
/// if the call is successful, the cache has taken over ownership for the
/// query result!
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry* QueryCache::store (TRI_vocbase_t* vocbase,
uint64_t hash,
char const* queryString,
size_t queryStringLength,
TRI_json_t* result,
std::vector<std::string> const& collections) {
if (! TRI_IsArrayJson(result)) {
return nullptr;
}
// get the right part of the cache to store the result in
auto const part = getPart(vocbase);
// create the cache entry outside the lock
std::unique_ptr<QueryCacheResultEntry> entry(new QueryCacheResultEntry(hash, queryString, queryStringLength, result, collections));
WRITE_LOCKER(_entriesLock[part]);
auto it = _entries[part].find(vocbase);
if (it == _entries[part].end()) {
// create entry for the current database
std::unique_ptr<QueryCacheDatabaseEntry> db(new QueryCacheDatabaseEntry());
it = _entries[part].emplace(vocbase, db.get()).first;
db.release();
}
// store cache entry
(*it).second->store(hash, entry.get());
return entry.release();
}
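
The ownership rule from the comment above matters for callers. A hypothetical sketch (exception paths left aside): ownership of the result JSON moves to the cache exactly when store() returns a non-null entry, and store() returns nullptr for non-array results, in which case the caller still has to free the JSON itself.

void tryCacheResult (TRI_vocbase_t* vocbase, uint64_t hash,
                     char const* queryString, size_t queryStringLength,
                     TRI_json_t* resultJson,
                     std::vector<std::string> const& collectionNames) {
  auto stored = triagens::aql::QueryCache::instance()->store(
      vocbase, hash, queryString, queryStringLength, resultJson, collectionNames);

  if (stored == nullptr) {
    // not stored (e.g. the result was not a JSON array): we still own the JSON
    TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, resultJson);
  }
}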
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all queries for the given collections
////////////////////////////////////////////////////////////////////////////////
void QueryCache::invalidate (TRI_vocbase_t* vocbase,
std::vector<char const*> const& collections) {
auto const part = getPart(vocbase);
WRITE_LOCKER(_entriesLock[part]);
auto it = _entries[part].find(vocbase);
if (it == _entries[part].end()) {
return;
}
// invalidate while holding the lock
(*it).second->invalidate(collections);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all queries for a particular collection
////////////////////////////////////////////////////////////////////////////////
void QueryCache::invalidate (TRI_vocbase_t* vocbase,
char const* collection) {
auto const part = getPart(vocbase);
WRITE_LOCKER(_entriesLock[part]);
auto it = _entries[part].find(vocbase);
if (it == _entries[part].end()) {
return;
}
// invalidate while holding the lock
(*it).second->invalidate(collection);
}
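
This is the hook that keeps the cache transparent: whenever documents in a collection are modified, every cached result built from that collection is dropped. A hypothetical call-site sketch (the actual invalidation calls live in the transaction/collection write paths, outside this diff):

void afterDataModification (TRI_vocbase_t* vocbase, char const* collectionName) {
  auto cache = triagens::aql::QueryCache::instance();

  if (cache->mayBeActive()) {
    // cheap pre-check, then drop every cached result that read this collection
    cache->invalidate(vocbase, collectionName);
  }
}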
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all queries for a particular database
////////////////////////////////////////////////////////////////////////////////
void QueryCache::invalidate (TRI_vocbase_t* vocbase) {
QueryCacheDatabaseEntry* databaseQueryCache = nullptr;
{
auto const part = getPart(vocbase);
WRITE_LOCKER(_entriesLock[part]);
auto it = _entries[part].find(vocbase);
if (it == _entries[part].end()) {
return;
}
databaseQueryCache = (*it).second;
_entries[part].erase(it);
}
// delete without holding the lock
TRI_ASSERT(databaseQueryCache != nullptr);
delete databaseQueryCache;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all queries
////////////////////////////////////////////////////////////////////////////////
void QueryCache::invalidate () {
for (unsigned int i = 0; i < NumberOfParts; ++i) {
WRITE_LOCKER(_entriesLock[i]);
// must invalidate all entries now because disabling the cache will turn off
// cache invalidation when modifying data. turning on the cache later would then
// lead to invalid results being returned. this can all be prevented by fully
// clearing the cache
invalidate(i);
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief hashes a query string
////////////////////////////////////////////////////////////////////////////////
uint64_t QueryCache::hashQueryString (char const* queryString,
size_t queryLength) const {
TRI_ASSERT(queryString != nullptr);
return fasthash64(queryString, queryLength, 0x3123456789abcdef);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief get the query cache instance
////////////////////////////////////////////////////////////////////////////////
QueryCache* QueryCache::instance () {
return &Instance;
}
// -----------------------------------------------------------------------------
// --SECTION-- private methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief enforce maximum number of elements in each database-specific cache
////////////////////////////////////////////////////////////////////////////////
void QueryCache::enforceMaxResults (size_t value) {
for (unsigned int i = 0; i < NumberOfParts; ++i) {
WRITE_LOCKER(_entriesLock[i]);
for (auto& it : _entries[i]) {
it.second->enforceMaxResults(value);
}
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief determine which lock to use for the cache entries
////////////////////////////////////////////////////////////////////////////////
unsigned int QueryCache::getPart (TRI_vocbase_t const* vocbase) const {
return static_cast<int>(fasthash64(vocbase, sizeof(decltype(vocbase)), 0xf12345678abcdef) % NumberOfParts);
}
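
The cache is split into NumberOfParts (8, see QueryCache.h below) independently locked segments, and each database is pinned to one segment by hashing, so queries against different databases rarely contend on the same read-write lock. A simplified stand-in for that step (the real code above feeds the vocbase through fasthash64):

#include <cstdint>

static unsigned int partFor (void const* vocbase, unsigned int numberOfParts) {
  // any stable hash of the database works here, because the result only
  // selects which lock/segment to use
  auto v = reinterpret_cast<std::uintptr_t>(vocbase);
  return static_cast<unsigned int>((v >> 4) % numberOfParts);
}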
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all entries in the cache part
/// note that the caller of this method must hold the write lock
////////////////////////////////////////////////////////////////////////////////
void QueryCache::invalidate (unsigned int part) {
for (auto& it : _entries[part]) {
delete it.second;
}
_entries[part].clear();
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the maximum number of results in each per-database cache
////////////////////////////////////////////////////////////////////////////////
void QueryCache::setMaxResults (size_t value) {
if (value == 0) {
return;
}
if (value < MaxResults) {
// the limit is being lowered, so evict surplus entries right away
enforceMaxResults(value);
}
MaxResults = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the caching mode
////////////////////////////////////////////////////////////////////////////////
void QueryCache::setMode (QueryCacheMode value) {
if (value == mode()) {
// actually no mode change
return;
}
invalidate();
Mode.store(value, std::memory_order_release);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief enable or disable the query cache
////////////////////////////////////////////////////////////////////////////////
void QueryCache::setMode (std::string const& value) {
if (value == "demand") {
setMode(CACHE_ON_DEMAND);
}
else if (value == "on") {
setMode(CACHE_ALWAYS_ON);
}
else {
setMode(CACHE_ALWAYS_OFF);
}
}
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------
// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}"
// End:

arangod/Aql/QueryCache.h (new file, 481 lines)
View File

@ -0,0 +1,481 @@
////////////////////////////////////////////////////////////////////////////////
/// @brief Aql, query cache
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014, ArangoDB GmbH, Cologne, Germany
/// @author Copyright 2012-2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
#ifndef ARANGODB_AQL_QUERY_CACHE_H
#define ARANGODB_AQL_QUERY_CACHE_H 1
#include "Basics/Common.h"
#include "Basics/JsonHelper.h"
#include "Basics/Mutex.h"
#include "Basics/ReadWriteLock.h"
struct TRI_json_t;
struct TRI_vocbase_s;
namespace triagens {
namespace aql {
// -----------------------------------------------------------------------------
// --SECTION-- public types
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief cache mode
////////////////////////////////////////////////////////////////////////////////
enum QueryCacheMode {
CACHE_ALWAYS_OFF,
CACHE_ALWAYS_ON,
CACHE_ON_DEMAND
};
// -----------------------------------------------------------------------------
// --SECTION-- struct QueryCacheResultEntry
// -----------------------------------------------------------------------------
struct QueryCacheResultEntry {
QueryCacheResultEntry () = delete;
QueryCacheResultEntry (uint64_t,
char const*,
size_t,
struct TRI_json_t*,
std::vector<std::string> const&);
~QueryCacheResultEntry ();
////////////////////////////////////////////////////////////////////////////////
/// @brief check whether the element can be destroyed, and delete it if yes
////////////////////////////////////////////////////////////////////////////////
void tryDelete ();
////////////////////////////////////////////////////////////////////////////////
/// @brief use the element, so it cannot be deleted meanwhile
////////////////////////////////////////////////////////////////////////////////
void use ();
////////////////////////////////////////////////////////////////////////////////
/// @brief unuse the element, so it can be deleted if required
////////////////////////////////////////////////////////////////////////////////
void unuse ();
// -----------------------------------------------------------------------------
// --SECTION-- member variables
// -----------------------------------------------------------------------------
uint64_t const _hash;
char* _queryString;
size_t const _queryStringLength;
struct TRI_json_t* _queryResult;
std::vector<std::string> const _collections;
QueryCacheResultEntry* _prev;
QueryCacheResultEntry* _next;
std::atomic<uint32_t> _refCount;
std::atomic<uint32_t> _deletionRequested;
};
// -----------------------------------------------------------------------------
// --SECTION-- class QueryCacheResultEntryGuard
// -----------------------------------------------------------------------------
class QueryCacheResultEntryGuard {
QueryCacheResultEntryGuard (QueryCacheResultEntryGuard const&) = delete;
QueryCacheResultEntryGuard& operator= (QueryCacheResultEntryGuard const&) = delete;
QueryCacheResultEntryGuard () = delete;
public:
explicit QueryCacheResultEntryGuard (QueryCacheResultEntry* entry)
: _entry(entry) {
}
~QueryCacheResultEntryGuard () {
if (_entry != nullptr) {
_entry->unuse();
}
}
QueryCacheResultEntry* get () {
return _entry;
}
private:
QueryCacheResultEntry* _entry;
};
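
A hypothetical usage sketch (the real call sites are in Query::execute and Query::executeV8 above, and the necessary declarations from Basics/json.h are assumed to be visible): the guard keeps the looked-up entry referenced while its result is copied, and calls unuse() when it goes out of scope, so a concurrent invalidation can at most mark the entry for deletion.

void serveFromCache (QueryCache* cache, struct TRI_vocbase_s* vocbase,
                     uint64_t hash, char const* queryString, size_t length) {
  QueryCacheResultEntryGuard guard(cache->lookup(vocbase, hash, queryString, length));

  if (guard.get() != nullptr) {
    // copy the cached JSON while the guard keeps the entry alive
    TRI_json_t* copy = TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, guard.get()->_queryResult);
    // ... hand `copy` over to the caller ...
    TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, copy);
  }
}   // guard destructor calls unuse() here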
// -----------------------------------------------------------------------------
// --SECTION-- struct QueryCacheDatabaseEntry
// -----------------------------------------------------------------------------
struct QueryCacheDatabaseEntry {
// -----------------------------------------------------------------------------
// --SECTION-- constructors / destructors
// -----------------------------------------------------------------------------
QueryCacheDatabaseEntry (QueryCacheDatabaseEntry const&) = delete;
QueryCacheDatabaseEntry& operator= (QueryCacheDatabaseEntry const&) = delete;
////////////////////////////////////////////////////////////////////////////////
/// @brief create a database-specific cache
////////////////////////////////////////////////////////////////////////////////
QueryCacheDatabaseEntry ();
////////////////////////////////////////////////////////////////////////////////
/// @brief destroy a database-specific cache
////////////////////////////////////////////////////////////////////////////////
~QueryCacheDatabaseEntry ();
// -----------------------------------------------------------------------------
// --SECTION-- public methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief lookup a query result in the database-specific cache
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry* lookup (uint64_t,
char const*,
size_t);
////////////////////////////////////////////////////////////////////////////////
/// @brief store a query result in the database-specific cache
////////////////////////////////////////////////////////////////////////////////
void store (uint64_t,
QueryCacheResultEntry*);
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all entries for the given collections in the
/// database-specific cache
////////////////////////////////////////////////////////////////////////////////
void invalidate (std::vector<char const*> const&);
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all entries for a collection in the database-specific
/// cache
////////////////////////////////////////////////////////////////////////////////
void invalidate (char const*);
////////////////////////////////////////////////////////////////////////////////
/// @brief enforce maximum number of results
////////////////////////////////////////////////////////////////////////////////
void enforceMaxResults (size_t);
// -----------------------------------------------------------------------------
// --SECTION-- private methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief check whether the element can be destroyed, and delete it if yes
////////////////////////////////////////////////////////////////////////////////
void tryDelete (QueryCacheResultEntry*);
////////////////////////////////////////////////////////////////////////////////
/// @brief unlink the result entry from the list
////////////////////////////////////////////////////////////////////////////////
void unlink (QueryCacheResultEntry*);
////////////////////////////////////////////////////////////////////////////////
/// @brief link the result entry to the end of the list
////////////////////////////////////////////////////////////////////////////////
void link (QueryCacheResultEntry*);
// -----------------------------------------------------------------------------
// --SECTION-- public variables
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief hash table that maps query hashes to query results
////////////////////////////////////////////////////////////////////////////////
std::unordered_map<uint64_t, QueryCacheResultEntry*> _entriesByHash;
////////////////////////////////////////////////////////////////////////////////
/// @brief hash table that contains all collection-specific query results
/// maps from collection names to a set of query results as defined in
/// _entriesByHash
////////////////////////////////////////////////////////////////////////////////
std::unordered_map<std::string, std::unordered_set<uint64_t>> _entriesByCollection;
////////////////////////////////////////////////////////////////////////////////
/// @brief beginning of linked list of result entries
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry* _head;
////////////////////////////////////////////////////////////////////////////////
/// @brief end of linked list of result entries
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry* _tail;
////////////////////////////////////////////////////////////////////////////////
/// @brief number of elements in this cache
////////////////////////////////////////////////////////////////////////////////
size_t _numElements;
};
// -----------------------------------------------------------------------------
// --SECTION-- class QueryCache
// -----------------------------------------------------------------------------
class QueryCache {
// -----------------------------------------------------------------------------
// --SECTION-- constructors / destructors
// -----------------------------------------------------------------------------
public:
QueryCache (QueryCache const&) = delete;
QueryCache& operator= (QueryCache const&) = delete;
////////////////////////////////////////////////////////////////////////////////
/// @brief create cache
////////////////////////////////////////////////////////////////////////////////
QueryCache ();
////////////////////////////////////////////////////////////////////////////////
/// @brief destroy the cache
////////////////////////////////////////////////////////////////////////////////
~QueryCache ();
// -----------------------------------------------------------------------------
// --SECTION-- public methods
// -----------------------------------------------------------------------------
public:
////////////////////////////////////////////////////////////////////////////////
/// @brief return the query cache properties
////////////////////////////////////////////////////////////////////////////////
triagens::basics::Json properties ();
////////////////////////////////////////////////////////////////////////////////
/// @brief return the cache properties
////////////////////////////////////////////////////////////////////////////////
void properties (std::pair<std::string, size_t>&);
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the cache properties
////////////////////////////////////////////////////////////////////////////////
void setProperties (std::pair<std::string, size_t> const&);
////////////////////////////////////////////////////////////////////////////////
/// @brief test whether the cache might be active
/// this is a quick test that may save the caller from further bothering
/// about the query cache in case it returns `false`
////////////////////////////////////////////////////////////////////////////////
bool mayBeActive () const;
////////////////////////////////////////////////////////////////////////////////
/// @brief return whether or not the query cache is enabled
////////////////////////////////////////////////////////////////////////////////
QueryCacheMode mode () const;
////////////////////////////////////////////////////////////////////////////////
/// @brief return a string version of the mode
////////////////////////////////////////////////////////////////////////////////
static std::string modeString (QueryCacheMode);
////////////////////////////////////////////////////////////////////////////////
/// @brief lookup a query result in the cache
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry* lookup (struct TRI_vocbase_s*,
uint64_t,
char const*,
size_t);
////////////////////////////////////////////////////////////////////////////////
/// @brief store a query in the cache
/// if the call is successful, the cache has taken over ownership for the
/// query result!
////////////////////////////////////////////////////////////////////////////////
QueryCacheResultEntry* store (struct TRI_vocbase_s*,
uint64_t,
char const*,
size_t,
struct TRI_json_t*,
std::vector<std::string> const&);
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all queries for the given collections
////////////////////////////////////////////////////////////////////////////////
void invalidate (struct TRI_vocbase_s*,
std::vector<char const*> const&);
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all queries for a particular collection
////////////////////////////////////////////////////////////////////////////////
void invalidate (struct TRI_vocbase_s*,
char const*);
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all queries for a particular database
////////////////////////////////////////////////////////////////////////////////
void invalidate (struct TRI_vocbase_s*);
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all queries
////////////////////////////////////////////////////////////////////////////////
void invalidate ();
////////////////////////////////////////////////////////////////////////////////
/// @brief hashes a query string
////////////////////////////////////////////////////////////////////////////////
uint64_t hashQueryString (char const*,
size_t) const;
////////////////////////////////////////////////////////////////////////////////
/// @brief get the pointer to the global query cache
////////////////////////////////////////////////////////////////////////////////
static QueryCache* instance ();
// -----------------------------------------------------------------------------
// --SECTION-- private methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief enforce maximum number of results in each database-specific cache
////////////////////////////////////////////////////////////////////////////////
void enforceMaxResults (size_t);
////////////////////////////////////////////////////////////////////////////////
/// @brief determine which part of the cache to use for the cache entries
////////////////////////////////////////////////////////////////////////////////
unsigned int getPart (struct TRI_vocbase_s const*) const;
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidate all entries in the cache part
/// note that the caller of this method must hold the write lock
////////////////////////////////////////////////////////////////////////////////
void invalidate (unsigned int);
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the maximum number of elements in the cache
////////////////////////////////////////////////////////////////////////////////
void setMaxResults (size_t);
////////////////////////////////////////////////////////////////////////////////
/// @brief enable or disable the query cache
////////////////////////////////////////////////////////////////////////////////
void setMode (QueryCacheMode);
////////////////////////////////////////////////////////////////////////////////
/// @brief set the query cache mode from a mode string
////////////////////////////////////////////////////////////////////////////////
void setMode (std::string const&);
// -----------------------------------------------------------------------------
// --SECTION-- private variables
// -----------------------------------------------------------------------------
private:
////////////////////////////////////////////////////////////////////////////////
/// @brief number of R/W locks for the query cache
////////////////////////////////////////////////////////////////////////////////
static uint64_t const NumberOfParts = 8;
////////////////////////////////////////////////////////////////////////////////
/// @brief protect mode changes with a mutex
////////////////////////////////////////////////////////////////////////////////
triagens::basics::Mutex _propertiesLock;
////////////////////////////////////////////////////////////////////////////////
/// @brief read-write lock for the cache
////////////////////////////////////////////////////////////////////////////////
triagens::basics::ReadWriteLock _entriesLock[NumberOfParts];
////////////////////////////////////////////////////////////////////////////////
/// @brief cached query entries, organized per database
////////////////////////////////////////////////////////////////////////////////
std::unordered_map<struct TRI_vocbase_s*, QueryCacheDatabaseEntry*> _entries[NumberOfParts];
};
}
}
#endif
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------
// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}"
// End:
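
The class declaration above only lists signatures, so here is a minimal, hypothetical sketch of the intended call sequence (hash the query string, probe the cache, store on a miss). Everything outside the QueryCache members shown above is a placeholder, and the actual query execution and result-entry handling are omitted.

#include "Aql/QueryCache.h"

// Hypothetical helper: returns true on a cache hit, otherwise stores the
// freshly computed result. All names outside QueryCache are placeholders.
static bool serveOrStore (struct TRI_vocbase_s* vocbase,
                          char const* queryString,
                          size_t queryLength,
                          struct TRI_json_t* resultJson,
                          std::vector<std::string> const& collectionNames) {
  auto cache = triagens::aql::QueryCache::instance();

  // the query string is hashed and used as the lookup key
  uint64_t hash = cache->hashQueryString(queryString, queryLength);

  if (cache->lookup(vocbase, hash, queryString, queryLength) != nullptr) {
    // cache hit: the returned entry holds the previously stored result
    return true;
  }

  // cache miss: hand the computed result over to the cache.
  // if the call succeeds, the cache takes over ownership of resultJson
  cache->store(vocbase, hash, queryString, queryLength, resultJson, collectionNames);
  return false;
}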

View File

@ -45,6 +45,7 @@ namespace triagens {
QueryResult (QueryResult&& other) {
code = other.code;
cached = other.cached;
details = other.details;
warnings = other.warnings;
json = other.json;
@ -65,6 +66,7 @@ namespace triagens {
QueryResult (int code,
std::string const& details)
: code(code),
cached(false),
details(details),
zone(TRI_UNKNOWN_MEM_ZONE),
warnings(nullptr),
@ -98,6 +100,7 @@ namespace triagens {
}
int code;
bool cached;
std::string details;
std::unordered_set<std::string> bindParameters;
std::vector<std::string> collectionNames;

View File

@ -66,7 +66,7 @@ namespace triagens {
result() {
}
v8::Handle<v8::Array> result;
};
}

View File

@ -67,6 +67,7 @@ add_executable(
Aql/OptimizerRules.cpp
Aql/Parser.cpp
Aql/Query.cpp
Aql/QueryCache.cpp
Aql/QueryList.cpp
Aql/QueryRegistry.cpp
Aql/RangeInfo.cpp
@ -117,6 +118,7 @@ add_executable(
RestHandler/RestExportHandler.cpp
RestHandler/RestImportHandler.cpp
RestHandler/RestPleaseUpgradeHandler.cpp
RestHandler/RestQueryCacheHandler.cpp
RestHandler/RestQueryHandler.cpp
RestHandler/RestReplicationHandler.cpp
RestHandler/RestSimpleHandler.cpp

View File

@ -40,6 +40,7 @@ arangod_libarangod_a_SOURCES = \
arangod/Aql/OptimizerRules.cpp \
arangod/Aql/Parser.cpp \
arangod/Aql/Query.cpp \
arangod/Aql/QueryCache.cpp \
arangod/Aql/QueryList.cpp \
arangod/Aql/QueryRegistry.cpp \
arangod/Aql/RangeInfo.cpp \
@ -90,6 +91,7 @@ arangod_libarangod_a_SOURCES = \
arangod/RestHandler/RestExportHandler.cpp \
arangod/RestHandler/RestImportHandler.cpp \
arangod/RestHandler/RestPleaseUpgradeHandler.cpp \
arangod/RestHandler/RestQueryCacheHandler.cpp \
arangod/RestHandler/RestQueryHandler.cpp \
arangod/RestHandler/RestReplicationHandler.cpp \
arangod/RestHandler/RestSimpleHandler.cpp \

View File

@ -193,7 +193,7 @@ void RestCursorHandler::processQuery (TRI_json_t const* json) {
if (n <= batchSize) {
// result is smaller than batchSize and will be returned directly. no need to create a cursor
triagens::basics::Json result(triagens::basics::Json::Object, 6);
triagens::basics::Json result(triagens::basics::Json::Object, 7);
result.set("result", triagens::basics::Json(TRI_UNKNOWN_MEM_ZONE, queryResult.json, triagens::basics::Json::AUTOFREE));
queryResult.json = nullptr;
@ -203,6 +203,7 @@ void RestCursorHandler::processQuery (TRI_json_t const* json) {
result.set("count", triagens::basics::Json(static_cast<double>(n)));
}
result.set("cached", triagens::basics::Json(queryResult.cached));
result.set("extra", extra);
result.set("error", triagens::basics::Json(false));
result.set("code", triagens::basics::Json(static_cast<double>(_response->responseCode())));
@ -220,7 +221,7 @@ void RestCursorHandler::processQuery (TRI_json_t const* json) {
// steal the query JSON, cursor will take over the ownership
auto j = queryResult.json;
triagens::arango::JsonCursor* cursor = cursors->createFromJson(j, batchSize, extra.steal(), ttl, count);
triagens::arango::JsonCursor* cursor = cursors->createFromJson(j, batchSize, extra.steal(), ttl, count, queryResult.cached);
queryResult.json = nullptr;
try {
@ -310,6 +311,11 @@ triagens::basics::Json RestCursorHandler::buildOptions (TRI_json_t const* json)
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_TYPE_ERROR, "expecting non-zero value for <batchSize>");
}
attribute = getAttribute("cache");
if (TRI_IsBooleanJson(attribute)) {
options.set("cache", triagens::basics::Json(attribute->_value._boolean));
}
attribute = getAttribute("options");
if (TRI_IsObjectJson(attribute)) {
@ -327,6 +333,11 @@ triagens::basics::Json RestCursorHandler::buildOptions (TRI_json_t const* json)
if (strcmp(keyName, "count") != 0 &&
strcmp(keyName, "batchSize") != 0) {
if (strcmp(keyName, "cache") == 0 && options.has("cache")) {
continue;
}
options.set(keyName, triagens::basics::Json(
TRI_UNKNOWN_MEM_ZONE,
TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, value),
@ -407,9 +418,14 @@ triagens::basics::Json RestCursorHandler::buildExtra (triagens::aql::QueryResult
/// is useful to ensure garbage collection of cursors that are not fully fetched
/// by clients. If not set, a server-defined value will be used.
///
/// - *bindVars*: key/value list of bind parameters (optional).
/// - *cache*: optional boolean flag to determine whether the AQL query cache
/// shall be used for the query. If set to *false*, any query cache lookup
/// will be skipped for the query. If set to *true*, the query cache will be
/// checked for the query, provided the query cache mode is either *on* or *demand*.
///
/// - *options*: key/value list of extra options for the query (optional).
/// - *bindVars*: key/value object with bind parameters (optional).
///
/// - *options*: key/value object with extra options for the query (optional).
///
/// The following options are supported at the moment:
///
@ -432,11 +448,15 @@ triagens::basics::Json RestCursorHandler::buildExtra (triagens::aql::QueryResult
/// specific rules. To disable a rule, prefix its name with a `-`, to enable a rule, prefix it
/// with a `+`. There is also a pseudo-rule `all`, which will match all optimizer rules.
///
/// - *profile*: if set to *true*, additional query profiling information
/// will be returned in the *extra.stats* return attribute, provided the
/// query result is not served from the query cache.
///
/// If the result set can be created by the server, the server will respond with
/// *HTTP 201*. The body of the response will contain a JSON object with the
/// result set.
///
/// The returned JSON object has the following properties:
/// The returned JSON object has the following attributes:
///
/// - *error*: boolean flag to indicate that an error occurred (*false*
/// in this case)
@ -453,11 +473,17 @@ triagens::basics::Json RestCursorHandler::buildExtra (triagens::aql::QueryResult
///
/// - *id*: id of temporary cursor created on the server (optional, see above)
///
/// - *extra*: an optional JSON object with extra information about the query result.
/// For data-modification queries, the *extra* attribute will contain the number
/// of modified documents and the number of documents that could not be modified
/// - *extra*: an optional JSON object with extra information about the query result
/// contained in its *stats* sub-attribute. For data-modification queries, the
/// *extra.stats* sub-attribute will contain the number of modified documents and
/// the number of documents that could not be modified
/// due to an error (if *ignoreErrors* query option is specified)
///
/// - *cached*: a boolean flag indicating whether the query result was served
/// from the query cache or not. If the query result is served from the query
/// cache, the *extra* return attribute will contain neither a *stats* nor a
/// *profile* sub-attribute.
///
/// If the JSON representation is malformed or the query specification is
/// missing from the request, the server will respond with *HTTP 400*.
///
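
As a concrete illustration of the attributes described above, a request body that opts a query into the result cache might look like the following sketch. The query text, batch size and the surrounding variable name are placeholders, shown here as a C++ raw string literal.

// Hypothetical POST /_api/cursor body using the *cache* attribute.
// The query and batchSize values are placeholders.
char const* cursorRequestBody = R"({
  "query": "FOR doc IN example RETURN doc",
  "count": true,
  "batchSize": 100,
  "cache": true
})";

// A cache hit is visible in the response: "cached" will be true, and the
// "extra" attribute will carry no "stats" or "profile" sub-attribute.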

View File

@ -0,0 +1,269 @@
////////////////////////////////////////////////////////////////////////////////
/// @brief query cache request handler
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014-2015 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014-2015, ArangoDB GmbH, Cologne, Germany
/// @author Copyright 2010-2014, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
#include "RestQueryCacheHandler.h"
#include "Aql/QueryCache.h"
#include "Rest/HttpRequest.h"
using namespace std;
using namespace triagens::basics;
using namespace triagens::rest;
using namespace triagens::arango;
using namespace triagens::aql;
// -----------------------------------------------------------------------------
// --SECTION-- constructors and destructors
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief constructor
////////////////////////////////////////////////////////////////////////////////
RestQueryCacheHandler::RestQueryCacheHandler (HttpRequest* request)
: RestVocbaseBaseHandler(request) {
}
// -----------------------------------------------------------------------------
// --SECTION-- Handler methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// {@inheritDoc}
////////////////////////////////////////////////////////////////////////////////
bool RestQueryCacheHandler::isDirect () const {
return false;
}
////////////////////////////////////////////////////////////////////////////////
/// {@inheritDoc}
////////////////////////////////////////////////////////////////////////////////
HttpHandler::status_t RestQueryCacheHandler::execute () {
// extract the sub-request type
HttpRequest::HttpRequestType type = _request->requestType();
switch (type) {
case HttpRequest::HTTP_REQUEST_DELETE: clearCache(); break;
case HttpRequest::HTTP_REQUEST_GET: readProperties(); break;
case HttpRequest::HTTP_REQUEST_PUT: replaceProperties(); break;
case HttpRequest::HTTP_REQUEST_POST:
case HttpRequest::HTTP_REQUEST_HEAD:
case HttpRequest::HTTP_REQUEST_PATCH:
case HttpRequest::HTTP_REQUEST_ILLEGAL:
default: {
generateNotImplemented("ILLEGAL " + DOCUMENT_PATH);
break;
}
}
// this handler is done
return status_t(HANDLER_DONE);
}
// -----------------------------------------------------------------------------
// --SECTION-- protected methods
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief clears the AQL query cache
/// @startDocuBlock DeleteApiQueryCache
/// @RESTHEADER{DELETE /_api/query-cache, Clears any results in the AQL query cache}
///
/// @RESTRETURNCODES
///
/// @RESTRETURNCODE{200}
/// The server will respond with *HTTP 200* when the cache was cleared
/// successfully.
///
/// @RESTRETURNCODE{400}
/// The server will respond with *HTTP 400* in case of a malformed request.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
bool RestQueryCacheHandler::clearCache () {
auto queryCache = triagens::aql::QueryCache::instance();
queryCache->invalidate();
Json result(Json::Object, 2);
result
.set("error", Json(false))
.set("code", Json(HttpResponse::OK));
generateResult(HttpResponse::OK, result.json());
return true;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the global configuration for the AQL query cache
/// @startDocuBlock GetApiQueryCacheProperties
/// @RESTHEADER{GET /_api/query-cache/properties, Returns the global properties for the AQL query cache}
///
/// Returns the global AQL query cache configuration. The configuration is a
/// JSON object with the following properties:
///
/// - *mode*: the mode the AQL query cache operates in. The mode is one of the following
/// values: *off*, *on* or *demand*.
///
/// - *maxResults*: the maximum number of query results that will be stored per database-specific
/// cache.
///
/// @RESTRETURNCODES
///
/// @RESTRETURNCODE{200}
/// Is returned if the properties can be retrieved successfully.
///
/// @RESTRETURNCODE{400}
/// The server will respond with *HTTP 400* in case of a malformed request.
///
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
bool RestQueryCacheHandler::readProperties () {
try {
auto queryCache = triagens::aql::QueryCache::instance();
Json result = queryCache->properties();
generateResult(HttpResponse::OK, result.json());
}
catch (Exception const& err) {
handleError(err);
}
catch (std::exception const& ex) {
triagens::basics::Exception err(TRI_ERROR_INTERNAL, ex.what(), __FILE__, __LINE__);
handleError(err);
}
catch (...) {
triagens::basics::Exception err(TRI_ERROR_INTERNAL, __FILE__, __LINE__);
handleError(err);
}
return true;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief changes the configuration for the AQL query cache
/// @startDocuBlock PutApiQueryCacheProperties
/// @RESTHEADER{PUT /_api/query-cache/properties, Changes the global properties for the AQL query cache}
///
/// @RESTBODYPARAM{properties,json,required}
/// The global properties for the AQL query cache.
///
/// The properties need to be passed in the attribute *properties* in the body
/// of the HTTP request. *properties* needs to be a JSON object with the following
/// properties:
///
/// - *mode*: the mode the AQL query cache should operate in. Possible values are
/// *off*, *on* or *demand*.
///
/// - *maxResults*: the maximum number of query results that will be stored per database-specific
/// cache.
///
/// After the properties have been changed, the current set of properties will
/// be returned in the HTTP response.
///
/// Note: changing the properties may invalidate all results in the cache.
///
/// @RESTRETURNCODES
///
/// @RESTRETURNCODE{200}
/// Is returned if the properties were changed successfully.
///
/// @RESTRETURNCODE{400}
/// The server will respond with *HTTP 400* in case of a malformed request.
///
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
bool RestQueryCacheHandler::replaceProperties () {
auto const& suffix = _request->suffix();
if (suffix.size() != 1 || suffix[0] != "properties") {
generateError(HttpResponse::BAD,
TRI_ERROR_HTTP_BAD_PARAMETER,
"expecting PUT /_api/query-cache/properties");
return true;
}
std::unique_ptr<TRI_json_t> body(parseJsonBody());
if (body == nullptr) {
// error message generated in parseJsonBody
return true;
}
auto queryCache = triagens::aql::QueryCache::instance();
try {
std::pair<std::string, size_t> cacheProperties;
queryCache->properties(cacheProperties);
auto attribute = static_cast<TRI_json_t const*>(TRI_LookupObjectJson(body.get(), "mode"));
if (TRI_IsStringJson(attribute)) {
cacheProperties.first = std::string(attribute->_value._string.data, attribute->_value._string.length - 1);
}
attribute = static_cast<TRI_json_t const*>(TRI_LookupObjectJson(body.get(), "maxResults"));
if (TRI_IsNumberJson(attribute)) {
cacheProperties.second = static_cast<size_t>(attribute->_value._number);
}
queryCache->setProperties(cacheProperties);
return readProperties();
}
catch (Exception const& err) {
handleError(err);
}
catch (std::exception const& ex) {
triagens::basics::Exception err(TRI_ERROR_INTERNAL, ex.what(), __FILE__, __LINE__);
handleError(err);
}
catch (...) {
triagens::basics::Exception err(TRI_ERROR_INTERNAL, __FILE__, __LINE__);
handleError(err);
}
return true;
}
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------
// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}"
// End:
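
For reference, the configuration round trip that replaceProperties() above performs can be boiled down to the following sketch. The chosen mode and result limit are placeholder values, and error handling is omitted.

#include "Aql/QueryCache.h"

// Hypothetical helper mirroring replaceProperties(): read the current
// (mode, maxResults) pair, adjust it, and write it back.
static void enableQueryCache () {
  auto queryCache = triagens::aql::QueryCache::instance();

  std::pair<std::string, size_t> cacheProperties;
  queryCache->properties(cacheProperties);     // fetch current configuration

  cacheProperties.first  = "on";               // mode: off, on or demand
  cacheProperties.second = 256;                // max results per database

  // apply the new configuration; note that this may invalidate all
  // results currently in the cache
  queryCache->setProperties(cacheProperties);
}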

View File

@ -0,0 +1,116 @@
////////////////////////////////////////////////////////////////////////////////
/// @brief query cache request handler
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014-2015 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014-2015, ArangoDB GmbH, Cologne, Germany
/// @author Copyright 2010-2014, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
#ifndef ARANGODB_REST_HANDLER_REST_QUERY_CACHE_HANDLER_H
#define ARANGODB_REST_HANDLER_REST_QUERY_CACHE_HANDLER_H 1
#include "Basics/Common.h"
#include "RestHandler/RestVocbaseBaseHandler.h"
namespace triagens {
namespace arango {
// -----------------------------------------------------------------------------
// --SECTION-- class RestQueryCacheHandler
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief query request handler
////////////////////////////////////////////////////////////////////////////////
class RestQueryCacheHandler : public RestVocbaseBaseHandler {
// -----------------------------------------------------------------------------
// --SECTION-- constructors and destructors
// -----------------------------------------------------------------------------
public:
////////////////////////////////////////////////////////////////////////////////
/// @brief constructor
////////////////////////////////////////////////////////////////////////////////
RestQueryCacheHandler (rest::HttpRequest*);
// -----------------------------------------------------------------------------
// --SECTION-- Handler methods
// -----------------------------------------------------------------------------
public:
////////////////////////////////////////////////////////////////////////////////
/// {@inheritDoc}
////////////////////////////////////////////////////////////////////////////////
bool isDirect () const override;
////////////////////////////////////////////////////////////////////////////////
/// {@inheritDoc}
////////////////////////////////////////////////////////////////////////////////
status_t execute () override;
// -----------------------------------------------------------------------------
// --SECTION-- protected methods
// -----------------------------------------------------------------------------
protected:
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the list of properties
////////////////////////////////////////////////////////////////////////////////
bool readProperties ();
////////////////////////////////////////////////////////////////////////////////
/// @brief changes the properties
////////////////////////////////////////////////////////////////////////////////
bool replaceProperties ();
////////////////////////////////////////////////////////////////////////////////
/// @brief clears the cache
////////////////////////////////////////////////////////////////////////////////
bool clearCache ();
};
}
}
#endif
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------
// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}"
// End:

View File

@ -72,7 +72,7 @@ RestQueryHandler::RestQueryHandler (HttpRequest* request, ApplicationV8* applica
////////////////////////////////////////////////////////////////////////////////
bool RestQueryHandler::isDirect () const {
return _request->requestType() != HttpRequest::HTTP_REQUEST_POST;
}
////////////////////////////////////////////////////////////////////////////////
@ -142,7 +142,7 @@ HttpHandler::status_t RestQueryHandler::execute () {
/// @RESTRETURNCODES
///
/// @RESTRETURNCODE{200}
/// Is returned when the list of queries can be retrieved successfully.
/// Is returned if properties were retrieved successfully.
///
/// @RESTRETURNCODE{400}
/// The server will respond with *HTTP 400* in case of a malformed request,
@ -324,7 +324,7 @@ bool RestQueryHandler::readQuery () {
///
/// @RESTRETURNCODES
///
/// @RESTRETURNCODE{204}
/// @RESTRETURNCODE{200}
/// The server will respond with *HTTP 200* when the list of queries was
/// cleared successfully.
///

View File

@ -38,6 +38,7 @@
#include "Admin/RestHandlerCreator.h"
#include "Admin/RestShutdownHandler.h"
#include "Aql/Query.h"
#include "Aql/QueryCache.h"
#include "Aql/RestAqlHandler.h"
#include "Basics/FileUtils.h"
#include "Basics/Nonce.h"
@ -69,6 +70,7 @@
#include "RestHandler/RestExportHandler.h"
#include "RestHandler/RestImportHandler.h"
#include "RestHandler/RestPleaseUpgradeHandler.h"
#include "RestHandler/RestQueryCacheHandler.h"
#include "RestHandler/RestQueryHandler.h"
#include "RestHandler/RestReplicationHandler.h"
#include "RestHandler/RestSimpleHandler.h"
@ -104,12 +106,13 @@ bool IGNORE_DATAFILE_ERRORS;
/// @brief converts list of size_t to string
////////////////////////////////////////////////////////////////////////////////
template<typename A> string to_string (vector<A> v) {
string result = "";
string sep = "[";
template<typename T>
static std::string ToString (std::vector<T> const& v) {
std::string result = "";
std::string sep = "[";
for (auto e : v) {
result += sep + to_string(e);
for (auto const& e : v) {
result += sep + std::to_string(e);
sep = ",";
}
@ -196,6 +199,8 @@ void ArangoServer::defineHandlers (HttpHandlerFactory* factory) {
RestHandlerCreator<RestQueryHandler>::createData<ApplicationV8*>,
_applicationV8);
factory->addPrefixHandler("/_api/query-cache",
RestHandlerCreator<RestQueryCacheHandler>::createNoData);
// And now the "_admin" handlers
@ -339,6 +344,8 @@ ArangoServer::ArangoServer (int argc, char** argv)
_v8Contexts(8),
_indexThreads(2),
_databasePath(),
_queryCacheMode("off"),
_queryCacheMaxResults(128),
_defaultMaximalSize(TRI_JOURNAL_DEFAULT_MAXIMAL_SIZE),
_defaultWaitForSync(false),
_forceSyncProperties(true),
@ -591,6 +598,8 @@ void ArangoServer::buildApplicationServer () {
("database.force-sync-properties", &_forceSyncProperties, "force syncing of collection properties to disk, will use waitForSync value of collection when turned off")
("database.ignore-datafile-errors", &_ignoreDatafileErrors, "load collections even if datafiles may contain errors")
("database.disable-query-tracking", &_disableQueryTracking, "turn off AQL query tracking by default")
("database.query-cache-mode", &_queryCacheMode, "mode for the AQL query cache (on, off, demand)")
("database.query-cache-max-results", &_queryCacheMaxResults, "maximum number of results in query cache per database")
("database.index-threads", &_indexThreads, "threads to start for parallel background index creation")
;
@ -746,6 +755,11 @@ void ArangoServer::buildApplicationServer () {
// set global query tracking flag
triagens::aql::Query::DisableQueryTracking(_disableQueryTracking);
// configure the query cache
{
std::pair<std::string, size_t> cacheProperties{ _queryCacheMode, _queryCacheMaxResults };
triagens::aql::QueryCache::instance()->setProperties(cacheProperties);
}
// .............................................................................
// now run arangod
@ -1017,7 +1031,7 @@ int ArangoServer::startupServer () {
if (ns != 0 && nd != 0) {
LOG_INFO("the server has %d (hyper) cores, using %d scheduler threads, %d dispatcher threads",
(int) n, (int) ns, (int) nd);
}
else {
_threadAffinity = 0;
@ -1053,22 +1067,22 @@ int ArangoServer::startupServer () {
break;
case 3:
if (n < ns) {
ns = n;
}
nd = 0;
break;
case 4:
if (n < nd) {
nd = n;
}
ns = 0;
break;
default:
_threadAffinity = 0;
@ -1091,22 +1105,18 @@ int ArangoServer::startupServer () {
}
if (0 < ns) {
_applicationScheduler->setProcessorAffinity(ps);
}
if (0 < nd) {
_applicationDispatcher->setProcessorAffinity(pd);
}
if (0 < ns && 0 < nd) {
LOG_INFO("scheduler cores: %s, dispatcher cores: %s",
to_string(ps).c_str(), to_string(pd).c_str());
if (0 < ns) {
LOG_INFO("scheduler cores: %s", ToString(ps).c_str());
}
else if (0 < ns) {
LOG_INFO("scheduler cores: %s", to_string(ps).c_str());
}
else if (0 < nd) {
LOG_INFO("dispatcher cores: %s", to_string(pd).c_str());
if (0 < nd) {
LOG_INFO("dispatcher cores: %s", ToString(pd).c_str());
}
}
else {

View File

@ -401,6 +401,41 @@ namespace triagens {
std::string _databasePath;
////////////////////////////////////////////////////////////////////////////////
/// @brief the operating mode of the AQL query cache
/// @startDocuBlock queryCacheMode
/// `--database.query-cache-mode`
///
/// Toggles the AQL query cache behavior. Possible values are:
///
/// * *off*: do not use query cache
/// * *on*: always use query cache, except for queries that have their *cache*
/// attribute set to *false*
/// * *demand*: use query cache only for queries that have their *cache*
/// attribute set to *true*
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
std::string _queryCacheMode;
////////////////////////////////////////////////////////////////////////////////
/// @brief maximum number of elements in the query cache per database
/// @startDocuBlock queryCacheMaxResults
/// `--database.query-cache-max-results`
///
/// Maximum number of query results that can be stored per database-specific
/// query cache. If a query is eligible for caching and the number of items in
/// the database's query cache is equal to this threshold value, another cached
/// query result will be removed from the cache.
///
/// This option only has an effect if the query cache mode is set to either
/// *on* or *demand*.
/// @endDocuBlock
////////////////////////////////////////////////////////////////////////////////
size_t _queryCacheMaxResults;
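
At startup these two options are simply combined into the (mode, maxResults) pair handed to the query cache, as done in buildApplicationServer() above. Condensed, and with placeholder literal values standing in for the parsed options, the hand-off looks like this sketch:

// Sketch of the startup hand-off, condensed from buildApplicationServer().
// The literal values stand in for the parsed command-line options.
static void configureQueryCacheFromOptions () {
  std::string queryCacheMode = "demand";   // --database.query-cache-mode
  size_t queryCacheMaxResults = 128;       // --database.query-cache-max-results

  std::pair<std::string, size_t> cacheProperties{ queryCacheMode, queryCacheMaxResults };
  triagens::aql::QueryCache::instance()->setProperties(cacheProperties);
}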
////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock databaseMaximalJournalSize
///

View File

@ -112,9 +112,7 @@ namespace triagens {
if (ServerState::instance()->isCoordinator()) {
return processCollectionCoordinator(collection);
}
else {
return processCollectionNormal(collection);
}
return processCollectionNormal(collection);
}
////////////////////////////////////////////////////////////////////////////////
@ -192,8 +190,7 @@ namespace triagens {
auto trx = getInternals();
for (size_t i = 0; i < trx->_collections._length; i++) {
TRI_transaction_collection_t* trxCollection
= static_cast<TRI_transaction_collection_t*>
auto trxCollection = static_cast<TRI_transaction_collection_t*>
(TRI_AtVectorPointer(&trx->_collections, i));
int res = TRI_LockCollectionTransaction(trxCollection,
trxCollection->_accessType, 0);

View File

@ -82,11 +82,13 @@ JsonCursor::JsonCursor (TRI_vocbase_t* vocbase,
size_t batchSize,
TRI_json_t* extra,
double ttl,
bool hasCount)
bool hasCount,
bool cached)
: Cursor(id, batchSize, extra, ttl, hasCount),
_vocbase(vocbase),
_json(json),
_size(TRI_LengthArrayJson(_json)) {
_size(TRI_LengthArrayJson(_json)),
_cached(cached) {
TRI_UseVocBase(vocbase);
}
@ -196,6 +198,9 @@ void JsonCursor::dump (triagens::basics::StringBuffer& buffer) {
buffer.appendText(",\"extra\":");
TRI_StringifyJson(buffer.stringBuffer(), extraJson);
}
buffer.appendText(",\"cached\":");
buffer.appendText(_cached ? "true" : "false");
if (! hasNext()) {
// mark the cursor as deleted

View File

@ -155,6 +155,7 @@ namespace triagens {
size_t,
struct TRI_json_t*,
double,
bool,
bool);
~JsonCursor ();
@ -190,6 +191,7 @@ namespace triagens {
struct TRI_vocbase_s* _vocbase;
struct TRI_json_t* _json;
size_t const _size;
bool _cached;
};
// -----------------------------------------------------------------------------

View File

@ -115,14 +115,15 @@ JsonCursor* CursorRepository::createFromJson (TRI_json_t* json,
size_t batchSize,
TRI_json_t* extra,
double ttl,
bool count) {
bool count,
bool cached) {
TRI_ASSERT(json != nullptr);
CursorId const id = TRI_NewTickServer();
triagens::arango::JsonCursor* cursor = nullptr;
try {
cursor = new triagens::arango::JsonCursor(_vocbase, id, json, batchSize, extra, ttl, count);
cursor = new triagens::arango::JsonCursor(_vocbase, id, json, batchSize, extra, ttl, count, cached);
}
catch (...) {
TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, json);

View File

@ -84,6 +84,7 @@ namespace triagens {
size_t,
struct TRI_json_t*,
double,
bool,
bool);
////////////////////////////////////////////////////////////////////////////////

View File

@ -65,6 +65,8 @@ namespace triagens {
////////////////////////////////////////////////////////////////////////////////
private:
Transaction () = delete;
Transaction (Transaction const&) = delete;
Transaction& operator= (Transaction const&) = delete;
@ -184,6 +186,24 @@ namespace triagens {
return _errorData;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the names of all collections used in the transaction
////////////////////////////////////////////////////////////////////////////////
std::vector<std::string> collectionNames () {
std::vector<std::string> result;
for (size_t i = 0; i < _trx->_collections._length; ++i) {
auto trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&_trx->_collections, i));
if (trxCollection->_collection != nullptr) {
result.emplace_back(trxCollection->_collection->_name);
}
}
return result;
}
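
A typical consumer of this helper is cache invalidation after a write transaction: collect the names, convert them to the C strings the cache API expects, and invalidate them in one go. The following is a hypothetical sketch; the class and namespace names of the transaction object are assumptions, and error handling is omitted.

#include "Aql/QueryCache.h"

// Hypothetical: invalidate cached results for every collection the
// transaction touched. Transaction class/namespace names are assumptions.
void invalidateForTransaction (triagens::arango::Transaction& trx,
                               struct TRI_vocbase_s* vocbase) {
  std::vector<std::string> names = trx.collectionNames();

  std::vector<char const*> raw;
  raw.reserve(names.size());
  for (auto const& name : names) {
    raw.emplace_back(name.c_str());   // borrow pointers; names outlives the call
  }

  if (! raw.empty()) {
    triagens::aql::QueryCache::instance()->invalidate(vocbase, raw);
  }
}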
////////////////////////////////////////////////////////////////////////////////
/// @brief return the collection name resolver
////////////////////////////////////////////////////////////////////////////////

View File

@ -31,6 +31,7 @@
#include "v8-vocbaseprivate.h"
#include "Aql/Query.h"
#include "Aql/QueryCache.h"
#include "Aql/QueryList.h"
#include "Aql/QueryRegistry.h"
#include "Basics/conversions.h"
@ -1200,20 +1201,21 @@ static void JS_ExecuteAqlJson (const v8::FunctionCallbackInfo<v8::Value>& args)
// return the array value as it is. this is a performance optimisation
v8::Handle<v8::Object> result = v8::Object::New(isolate);
if (queryResult.json != nullptr) {
result->Set(TRI_V8_ASCII_STRING("json"), TRI_ObjectJson(isolate, queryResult.json));
result->ForceSet(TRI_V8_ASCII_STRING("json"), TRI_ObjectJson(isolate, queryResult.json));
}
if (queryResult.stats != nullptr) {
result->Set(TRI_V8_ASCII_STRING("stats"), TRI_ObjectJson(isolate, queryResult.stats));
result->ForceSet(TRI_V8_ASCII_STRING("stats"), TRI_ObjectJson(isolate, queryResult.stats));
}
if (queryResult.profile != nullptr) {
result->Set(TRI_V8_ASCII_STRING("profile"), TRI_ObjectJson(isolate, queryResult.profile));
result->ForceSet(TRI_V8_ASCII_STRING("profile"), TRI_ObjectJson(isolate, queryResult.profile));
}
if (queryResult.warnings == nullptr) {
result->Set(TRI_V8_ASCII_STRING("warnings"), v8::Array::New(isolate));
result->ForceSet(TRI_V8_ASCII_STRING("warnings"), v8::Array::New(isolate));
}
else {
result->Set(TRI_V8_ASCII_STRING("warnings"), TRI_ObjectJson(isolate, queryResult.warnings));
result->ForceSet(TRI_V8_ASCII_STRING("warnings"), TRI_ObjectJson(isolate, queryResult.warnings));
}
result->ForceSet(TRI_V8_ASCII_STRING("cached"), v8::Boolean::New(isolate, queryResult.cached));
TRI_V8_RETURN(result);
TRI_V8_TRY_CATCH_END
@ -1300,20 +1302,21 @@ static void JS_ExecuteAql (const v8::FunctionCallbackInfo<v8::Value>& args) {
// return the array value as it is. this is a performance optimisation
v8::Handle<v8::Object> result = v8::Object::New(isolate);
result->Set(TRI_V8_ASCII_STRING("json"), queryResult.result);
result->ForceSet(TRI_V8_ASCII_STRING("json"), queryResult.result);
if (queryResult.stats != nullptr) {
result->Set(TRI_V8_ASCII_STRING("stats"), TRI_ObjectJson(isolate, queryResult.stats));
result->ForceSet(TRI_V8_ASCII_STRING("stats"), TRI_ObjectJson(isolate, queryResult.stats));
}
if (queryResult.profile != nullptr) {
result->Set(TRI_V8_ASCII_STRING("profile"), TRI_ObjectJson(isolate, queryResult.profile));
result->ForceSet(TRI_V8_ASCII_STRING("profile"), TRI_ObjectJson(isolate, queryResult.profile));
}
if (queryResult.warnings == nullptr) {
result->Set(TRI_V8_ASCII_STRING("warnings"), v8::Array::New(isolate));
result->ForceSet(TRI_V8_ASCII_STRING("warnings"), v8::Array::New(isolate));
}
else {
result->Set(TRI_V8_ASCII_STRING("warnings"), TRI_ObjectJson(isolate, queryResult.warnings));
result->ForceSet(TRI_V8_ASCII_STRING("warnings"), TRI_ObjectJson(isolate, queryResult.warnings));
}
result->ForceSet(TRI_V8_ASCII_STRING("cached"), v8::Boolean::New(isolate, queryResult.cached));
TRI_V8_RETURN(result);
TRI_V8_TRY_CATCH_END
@ -1393,11 +1396,9 @@ static void JS_QueriesCurrentAql (const v8::FunctionCallbackInfo<v8::Value>& arg
TRI_V8_THROW_EXCEPTION(TRI_ERROR_ARANGO_DATABASE_NOT_FOUND);
}
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("AQL_QUERIES_CURRENT()");
}
auto queryList = static_cast<triagens::aql::QueryList*>(vocbase->_queries);
TRI_ASSERT(queryList != nullptr);
@ -1445,7 +1446,6 @@ static void JS_QueriesSlowAql (const v8::FunctionCallbackInfo<v8::Value>& args)
auto queryList = static_cast<triagens::aql::QueryList*>(vocbase->_queries);
TRI_ASSERT(queryList != nullptr);
if (args.Length() == 1) {
queryList->clearSlow();
@ -1532,6 +1532,75 @@ static void JS_QueryIsKilledAql (const v8::FunctionCallbackInfo<v8::Value>& args
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief configures the AQL query cache
////////////////////////////////////////////////////////////////////////////////
static void JS_QueryCachePropertiesAql (const v8::FunctionCallbackInfo<v8::Value>& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
TRI_vocbase_t* vocbase = GetContextVocBase(isolate);
if (vocbase == nullptr) {
TRI_V8_THROW_EXCEPTION(TRI_ERROR_ARANGO_DATABASE_NOT_FOUND);
}
if (args.Length() > 1 || (args.Length() == 1 && ! args[0]->IsObject())) {
TRI_V8_THROW_EXCEPTION_USAGE("AQL_QUERY_CACHE_PROPERTIES(<properties>)");
}
auto queryCache = triagens::aql::QueryCache::instance();
if (args.Length() == 1) {
// called with options
auto obj = args[0]->ToObject();
std::pair<std::string, size_t> cacheProperties;
// fetch current configuration
queryCache->properties(cacheProperties);
if (obj->Has(TRI_V8_ASCII_STRING("mode"))) {
cacheProperties.first = TRI_ObjectToString(obj->Get(TRI_V8_ASCII_STRING("mode")));
}
if (obj->Has(TRI_V8_ASCII_STRING("maxResults"))) {
cacheProperties.second = static_cast<size_t>(TRI_ObjectToInt64(obj->Get(TRI_V8_ASCII_STRING("maxResults"))));
}
// set mode and max elements
queryCache->setProperties(cacheProperties);
}
// fetch current configuration and return it
auto properties = queryCache->properties();
TRI_V8_RETURN(TRI_ObjectJson(isolate, properties.json()));
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidates the AQL query cache
////////////////////////////////////////////////////////////////////////////////
static void JS_QueryCacheInvalidateAql (const v8::FunctionCallbackInfo<v8::Value>& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
TRI_vocbase_t* vocbase = GetContextVocBase(isolate);
if (vocbase == nullptr) {
TRI_V8_THROW_EXCEPTION(TRI_ERROR_ARANGO_DATABASE_NOT_FOUND);
}
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("AQL_QUERY_CACHE_INVALIDATE()");
}
triagens::aql::QueryCache::instance()->invalidate();
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief Transforms VertexId to v8String
////////////////////////////////////////////////////////////////////////////////
@ -3836,6 +3905,8 @@ void TRI_InitV8VocBridge (v8::Isolate* isolate,
TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("AQL_QUERIES_KILL"), JS_QueriesKillAql, true);
TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("AQL_QUERY_SLEEP"), JS_QuerySleepAql, true);
TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("AQL_QUERY_IS_KILLED"), JS_QueryIsKilledAql, true);
TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("AQL_QUERY_CACHE_PROPERTIES"), JS_QueryCachePropertiesAql, true);
TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("AQL_QUERY_CACHE_INVALIDATE"), JS_QueryCacheInvalidateAql, true);
TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("CPP_SHORTEST_PATH"), JS_QueryShortestPath, true);
TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("CPP_NEIGHBORS"), JS_QueryNeighbors, true);

View File

@ -98,7 +98,7 @@ static void JS_CreateCursor (const v8::FunctionCallbackInfo<v8::Value>& args) {
auto cursors = static_cast<triagens::arango::CursorRepository*>(vocbase->_cursorRepository);
try {
triagens::arango::Cursor* cursor = cursors->createFromJson(json.get(), static_cast<size_t>(batchSize), nullptr, ttl, true);
triagens::arango::Cursor* cursor = cursors->createFromJson(json.get(), static_cast<size_t>(batchSize), nullptr, ttl, true, false);
json.release();
TRI_ASSERT(cursor != nullptr);

View File

@ -29,6 +29,7 @@
#include "document-collection.h"
#include "Aql/QueryCache.h"
#include "Basics/Barrier.h"
#include "Basics/conversions.h"
#include "Basics/Exceptions.h"
@ -3597,6 +3598,8 @@ bool TRI_DropIndexDocumentCollection (TRI_document_collection_t* document,
TRI_WRITE_LOCK_DOCUMENTS_INDEXES_PRIMARY_COLLECTION(document);
triagens::aql::QueryCache::instance()->invalidate(vocbase, document->_info._name);
triagens::arango::Index* found = document->removeIndex(iid);
TRI_WRITE_UNLOCK_DOCUMENTS_INDEXES_PRIMARY_COLLECTION(document);
@ -3877,6 +3880,7 @@ triagens::arango::Index* TRI_EnsureCapConstraintDocumentCollection (TRI_document
if (idx != nullptr) {
if (created) {
triagens::aql::QueryCache::instance()->invalidate(document->_vocbase, document->_info._name);
int res = TRI_SaveIndex(document, idx, true);
if (res != TRI_ERROR_NO_ERROR) {
@ -4214,6 +4218,7 @@ triagens::arango::Index* TRI_EnsureGeoIndex1DocumentCollection (TRI_document_col
if (idx != nullptr) {
if (created) {
triagens::aql::QueryCache::instance()->invalidate(document->_vocbase, document->_info._name);
int res = TRI_SaveIndex(document, idx, true);
if (res != TRI_ERROR_NO_ERROR) {
@ -4254,6 +4259,7 @@ triagens::arango::Index* TRI_EnsureGeoIndex2DocumentCollection (TRI_document_col
if (idx != nullptr) {
if (created) {
triagens::aql::QueryCache::instance()->invalidate(document->_vocbase, document->_info._name);
int res = TRI_SaveIndex(document, idx, true);
if (res != TRI_ERROR_NO_ERROR) {
@ -4426,11 +4432,11 @@ triagens::arango::Index* TRI_EnsureHashIndexDocumentCollection (TRI_document_col
TRI_WRITE_LOCK_DOCUMENTS_INDEXES_PRIMARY_COLLECTION(document);
// given the list of attributes (as strings)
auto idx = CreateHashIndexDocumentCollection(document, attributes, iid, sparse, unique, created);
if (idx != nullptr) {
if (created) {
triagens::aql::QueryCache::instance()->invalidate(document->_vocbase, document->_info._name);
int res = TRI_SaveIndex(document, idx, true);
if (res != TRI_ERROR_NO_ERROR) {
@ -4604,6 +4610,7 @@ triagens::arango::Index* TRI_EnsureSkiplistIndexDocumentCollection (TRI_document
if (idx != nullptr) {
if (created) {
triagens::aql::QueryCache::instance()->invalidate(document->_vocbase, document->_info._name);
int res = TRI_SaveIndex(document, idx, true);
if (res != TRI_ERROR_NO_ERROR) {
@ -4809,6 +4816,7 @@ triagens::arango::Index* TRI_EnsureFulltextIndexDocumentCollection (TRI_document
if (idx != nullptr) {
if (created) {
triagens::aql::QueryCache::instance()->invalidate(document->_vocbase, document->_info._name);
int res = TRI_SaveIndex(document, idx, true);
if (res != TRI_ERROR_NO_ERROR) {

View File

@ -35,6 +35,7 @@
#include <regex.h>
#include "Aql/QueryCache.h"
#include "Aql/QueryRegistry.h"
#include "Basics/conversions.h"
#include "Basics/files.h"
@ -2521,51 +2522,51 @@ int TRI_DropDatabaseServer (TRI_server_t* server,
return TRI_ERROR_OUT_OF_MEMORY;
}
int res = TRI_ERROR_INTERNAL;
TRI_vocbase_t* vocbase = static_cast<TRI_vocbase_t*>(TRI_RemoveKeyAssociativePointer(&server->_databases, name));
if (vocbase == nullptr) {
// not found
res = TRI_ERROR_ARANGO_DATABASE_NOT_FOUND;
return TRI_ERROR_ARANGO_DATABASE_NOT_FOUND;
}
else {
// mark as deleted
TRI_ASSERT(vocbase->_type == TRI_VOCBASE_TYPE_NORMAL);
vocbase->_isOwnAppsDirectory = removeAppsDirectory;
// mark as deleted
TRI_ASSERT(vocbase->_type == TRI_VOCBASE_TYPE_NORMAL);
if (TRI_DropVocBase(vocbase)) {
if (triagens::wal::LogfileManager::instance()->isInRecovery()) {
LOG_TRACE("dropping database '%s', directory '%s'",
vocbase->_name,
vocbase->_path);
}
else {
LOG_INFO("dropping database '%s', directory '%s'",
vocbase->_name,
vocbase->_path);
}
vocbase->_isOwnAppsDirectory = removeAppsDirectory;
res = SaveDatabaseParameters(vocbase->_id,
vocbase->_name,
true,
&vocbase->_settings,
vocbase->_path);
// invalidate all entries for the database
triagens::aql::QueryCache::instance()->invalidate(vocbase);
TRI_PushBackVectorPointer(&server->_droppedDatabases, vocbase);
// TODO: what to do in case of error?
if (writeMarker) {
WriteDropMarker(vocbase->_id);
}
if (TRI_DropVocBase(vocbase)) {
if (triagens::wal::LogfileManager::instance()->isInRecovery()) {
LOG_TRACE("dropping database '%s', directory '%s'",
vocbase->_name,
vocbase->_path);
}
else {
// already deleted
res = TRI_ERROR_ARANGO_DATABASE_NOT_FOUND;
LOG_INFO("dropping database '%s', directory '%s'",
vocbase->_name,
vocbase->_path);
}
}
return res;
int res = SaveDatabaseParameters(vocbase->_id,
vocbase->_name,
true,
&vocbase->_settings,
vocbase->_path);
// TODO: what to do here in case of error?
TRI_PushBackVectorPointer(&server->_droppedDatabases, vocbase);
if (writeMarker) {
WriteDropMarker(vocbase->_id);
}
return res;
}
// already deleted
return TRI_ERROR_ARANGO_DATABASE_NOT_FOUND;
}
////////////////////////////////////////////////////////////////////////////////

View File

@ -29,6 +29,7 @@
#include "transaction.h"
#include "Aql/QueryCache.h"
#include "Basics/conversions.h"
#include "Basics/logging.h"
#include "Basics/tri-strings.h"
@ -118,6 +119,40 @@ static inline bool NeedWriteMarker (TRI_transaction_t const* trx,
! IsSingleOperationTransaction(trx));
}
////////////////////////////////////////////////////////////////////////////////
/// @brief clear the query cache for all collections that were modified by
/// the transaction
////////////////////////////////////////////////////////////////////////////////
void ClearQueryCache (TRI_transaction_t* trx) {
std::vector<char const*> collections;
size_t const n = trx->_collections._length;
try {
for (size_t i = 0; i < n; ++i) {
auto trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
if (trxCollection->_accessType != TRI_TRANSACTION_WRITE ||
trxCollection->_operations == nullptr ||
trxCollection->_operations->empty()) {
// we're only interested in collections that may have been modified
continue;
}
collections.emplace_back(reinterpret_cast<char const*>(&(trxCollection->_collection->_name)));
}
if (! collections.empty()) {
triagens::aql::QueryCache::instance()->invalidate(trx->_vocbase, collections);
}
}
catch (...) {
// in case something goes wrong, we have to disable the query cache
triagens::aql::QueryCache::instance()->invalidate(trx->_vocbase);
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief return the status of the transaction as a string
////////////////////////////////////////////////////////////////////////////////
@ -152,7 +187,7 @@ static void FreeOperations (TRI_transaction_t* trx) {
bool const isSingleOperation = IsSingleOperationTransaction(trx);
for (size_t i = 0; i < n; ++i) {
TRI_transaction_collection_t* trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
auto trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
TRI_document_collection_t* document = trxCollection->_collection->_collection;
if (trxCollection->_operations == nullptr) {
@ -242,7 +277,7 @@ static TRI_transaction_collection_t* FindCollection (const TRI_transaction_t* co
size_t i;
for (i = 0; i < n; ++i) {
TRI_transaction_collection_t* trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
auto trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
if (cid < trxCollection->_cid) {
// collection not found
@ -270,9 +305,9 @@ static TRI_transaction_collection_t* FindCollection (const TRI_transaction_t* co
////////////////////////////////////////////////////////////////////////////////
static TRI_transaction_collection_t* CreateCollection (TRI_transaction_t* trx,
const TRI_voc_cid_t cid,
const TRI_transaction_type_e accessType,
const int nestingLevel) {
TRI_voc_cid_t cid,
TRI_transaction_type_e accessType,
int nestingLevel) {
TRI_transaction_collection_t* trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_Allocate(TRI_UNKNOWN_MEM_ZONE, sizeof(TRI_transaction_collection_t), false));
if (trxCollection == nullptr) {
@ -471,7 +506,7 @@ static int UseCollections (TRI_transaction_t* trx,
// process collections in forward order
for (size_t i = 0; i < n; ++i) {
TRI_transaction_collection_t* trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
auto trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
if (trxCollection->_nestingLevel != nestingLevel) {
// only process our own collections
@ -554,7 +589,7 @@ static int UnuseCollections (TRI_transaction_t* trx,
// process collections in reverse order
while (i-- > 0) {
TRI_transaction_collection_t* trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
auto trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
if (IsLocked(trxCollection) &&
(nestingLevel == 0 || trxCollection->_nestingLevel == nestingLevel)) {
@ -594,7 +629,7 @@ static int ReleaseCollections (TRI_transaction_t* trx,
// process collections in reverse order
while (i-- > 0) {
TRI_transaction_collection_t* trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
auto trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
// the top level transaction releases all collections
if (trxCollection->_collection != nullptr) {
@ -839,7 +874,7 @@ void TRI_FreeTransaction (TRI_transaction_t* trx) {
// free all collections
size_t i = trx->_collections._length;
while (i-- > 0) {
TRI_transaction_collection_t* trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
auto trxCollection = static_cast<TRI_transaction_collection_t*>(TRI_AtVectorPointer(&trx->_collections, i));
FreeDitch(trxCollection);
FreeCollection(trxCollection);
@ -1233,6 +1268,8 @@ int TRI_AddOperationTransaction (triagens::wal::DocumentOperation& operation,
if (isSingleOperationTransaction) {
// operation is directly executed
operation.handle();
triagens::aql::QueryCache::instance()->invalidate(trx->_vocbase, document->_info._name);
++document->_uncollectedLogfileEntries;
@ -1367,6 +1404,11 @@ int TRI_CommitTransaction (TRI_transaction_t* trx,
UpdateTransactionStatus(trx, TRI_TRANSACTION_COMMITTED);
// if a write query, clear the query cache for the participating collections
if (trx->_type == TRI_TRANSACTION_WRITE) {
ClearQueryCache(trx);
}
FreeOperations(trx);
}

View File

@ -35,6 +35,7 @@
#include <regex.h>
#include "Aql/QueryCache.h"
#include "Aql/QueryList.h"
#include "Basics/conversions.h"
#include "Basics/files.h"
@ -710,7 +711,6 @@ static int RenameCollection (TRI_vocbase_t* vocbase,
char const* newName,
bool writeMarker) {
TRI_col_info_t info;
void const* found;
TRI_EVENTUAL_WRITE_LOCK_STATUS_VOCBASE_COL(collection);
@ -734,7 +734,7 @@ static int RenameCollection (TRI_vocbase_t* vocbase,
}
// check if the new name is unused
found = (void*) TRI_LookupByKeyAssociativePointer(&vocbase->_collectionsByName, newName);
void const* found = TRI_LookupByKeyAssociativePointer(&vocbase->_collectionsByName, newName);
if (found != nullptr) {
TRI_WRITE_UNLOCK_COLLECTIONS_VOCBASE(vocbase);
@ -815,9 +815,13 @@ static int RenameCollection (TRI_vocbase_t* vocbase,
TRI_WRITE_UNLOCK_COLLECTIONS_VOCBASE(vocbase);
// to prevent caching
// to prevent caching returning now invalid old collection name in db's NamedPropertyAccessor,
// i.e. db.<old-collection-name>
collection->_internalVersion++;
// invalidate all entries for the two collections
triagens::aql::QueryCache::instance()->invalidate(vocbase, std::vector<char const*>{ oldName, newName });
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
if (! writeMarker) {
@ -2077,6 +2081,8 @@ int TRI_DropCollectionVocBase (TRI_vocbase_t* vocbase,
TRI_EVENTUAL_WRITE_LOCK_STATUS_VOCBASE_COL(collection);
triagens::aql::QueryCache::instance()->invalidate(vocbase, collection->_name);
// .............................................................................
// collection already deleted
// .............................................................................

View File

@ -92,9 +92,9 @@ V8ClientConnection::V8ClientConnection (Endpoint* endpoint,
// connect to server and get version number
map<string, string> headerFields;
SimpleHttpResult* result = _client->request(HttpRequest::HTTP_REQUEST_GET, "/_api/version?details=true", nullptr, 0, headerFields);
std::unique_ptr<SimpleHttpResult> result(_client->request(HttpRequest::HTTP_REQUEST_GET, "/_api/version?details=true", nullptr, 0, headerFields));
if (! result || ! result->isComplete()) {
if (result == nullptr || ! result->isComplete()) {
// save error message
_lastErrorMessage = _client->getErrorMessage();
_lastHttpReturnCode = 500;
@ -135,15 +135,11 @@ V8ClientConnection::V8ClientConnection (Endpoint* endpoint,
// now set up an error message
_lastErrorMessage = _client->getErrorMessage();
if (result && result->getHttpReturnCode() > 0) {
if (result->getHttpReturnCode() > 0) {
_lastErrorMessage = StringUtils::itoa(result->getHttpReturnCode()) + ": " + result->getHttpReturnMessage();
}
}
}
if (result) {
delete result;
}
}
////////////////////////////////////////////////////////////////////////////////
@ -151,17 +147,9 @@ V8ClientConnection::V8ClientConnection (Endpoint* endpoint,
////////////////////////////////////////////////////////////////////////////////
V8ClientConnection::~V8ClientConnection () {
if (_httpResult) {
delete _httpResult;
}
if (_client) {
delete _client;
}
if (_connection) {
delete _connection;
}
delete _httpResult;
delete _client;
delete _connection;
}
// -----------------------------------------------------------------------------
@ -219,7 +207,7 @@ const string& V8ClientConnection::getDatabaseName () {
/// @brief set the current database name
////////////////////////////////////////////////////////////////////////////////
void V8ClientConnection::setDatabaseName (const string& databaseName) {
void V8ClientConnection::setDatabaseName (std::string const& databaseName) {
_databaseName = databaseName;
}
@ -274,9 +262,7 @@ v8::Handle<v8::Value> V8ClientConnection::getData (v8::Isolate* isolate,
if (raw) {
return requestDataRaw(isolate, HttpRequest::HTTP_REQUEST_GET, location, "", headerFields);
}
else {
return requestData(isolate, HttpRequest::HTTP_REQUEST_GET, location, "", headerFields);
}
return requestData(isolate, HttpRequest::HTTP_REQUEST_GET, location, "", headerFields);
}
////////////////////////////////////////////////////////////////////////////////
@ -290,9 +276,7 @@ v8::Handle<v8::Value> V8ClientConnection::deleteData (v8::Isolate* isolate,
if (raw) {
return requestDataRaw(isolate, HttpRequest::HTTP_REQUEST_DELETE, location, "", headerFields);
}
else {
return requestData(isolate, HttpRequest::HTTP_REQUEST_DELETE, location, "", headerFields);
}
return requestData(isolate, HttpRequest::HTTP_REQUEST_DELETE, location, "", headerFields);
}
////////////////////////////////////////////////////////////////////////////////
@ -306,9 +290,7 @@ v8::Handle<v8::Value> V8ClientConnection::headData (v8::Isolate* isolate,
if (raw) {
return requestDataRaw(isolate, HttpRequest::HTTP_REQUEST_HEAD, location, "", headerFields);
}
else {
return requestData(isolate, HttpRequest::HTTP_REQUEST_HEAD, location, "", headerFields);
}
return requestData(isolate, HttpRequest::HTTP_REQUEST_HEAD, location, "", headerFields);
}
////////////////////////////////////////////////////////////////////////////////
@ -323,9 +305,7 @@ v8::Handle<v8::Value> V8ClientConnection::optionsData (v8::Isolate* isolate,
if (raw) {
return requestDataRaw(isolate, HttpRequest::HTTP_REQUEST_OPTIONS, location, body, headerFields);
}
else {
return requestData(isolate, HttpRequest::HTTP_REQUEST_OPTIONS, location, body, headerFields);
}
return requestData(isolate, HttpRequest::HTTP_REQUEST_OPTIONS, location, body, headerFields);
}
////////////////////////////////////////////////////////////////////////////////
@ -340,9 +320,7 @@ v8::Handle<v8::Value> V8ClientConnection::postData (v8::Isolate* isolate,
if (raw) {
return requestDataRaw(isolate, HttpRequest::HTTP_REQUEST_POST, location, body, headerFields);
}
else {
return requestData(isolate, HttpRequest::HTTP_REQUEST_POST, location, body, headerFields);
}
return requestData(isolate, HttpRequest::HTTP_REQUEST_POST, location, body, headerFields);
}
////////////////////////////////////////////////////////////////////////////////
@ -369,9 +347,7 @@ v8::Handle<v8::Value> V8ClientConnection::putData (v8::Isolate* isolate,
if (raw) {
return requestDataRaw(isolate, HttpRequest::HTTP_REQUEST_PUT, location, body, headerFields);
}
else {
return requestData(isolate, HttpRequest::HTTP_REQUEST_PUT, location, body, headerFields);
}
return requestData(isolate, HttpRequest::HTTP_REQUEST_PUT, location, body, headerFields);
}
////////////////////////////////////////////////////////////////////////////////
@ -386,9 +362,7 @@ v8::Handle<v8::Value> V8ClientConnection::patchData (v8::Isolate* isolate,
if (raw) {
return requestDataRaw(isolate, HttpRequest::HTTP_REQUEST_PATCH, location, body, headerFields);
}
else {
return requestData(isolate, HttpRequest::HTTP_REQUEST_PATCH, location, body, headerFields);
}
return requestData(isolate, HttpRequest::HTTP_REQUEST_PATCH, location, body, headerFields);
}
// -----------------------------------------------------------------------------
@ -409,9 +383,8 @@ v8::Handle<v8::Value> V8ClientConnection::requestData (v8::Isolate* isolate,
_lastErrorMessage = "";
_lastHttpReturnCode = 0;
if (_httpResult != nullptr) {
delete _httpResult;
}
delete _httpResult;
_httpResult = nullptr;
_httpResult = _client->request(method, location, body, bodySize, headerFields);
@ -430,10 +403,8 @@ v8::Handle<v8::Value> V8ClientConnection::requestData (v8::Isolate* isolate,
_lastErrorMessage = "";
_lastHttpReturnCode = 0;
if (_httpResult != nullptr) {
delete _httpResult;
_httpResult = nullptr;
}
delete _httpResult;
_httpResult = nullptr;
if (body.empty()) {
_httpResult = _client->request(method, location, nullptr, 0, headerFields);
@ -451,6 +422,7 @@ v8::Handle<v8::Value> V8ClientConnection::requestData (v8::Isolate* isolate,
v8::Handle<v8::Value> V8ClientConnection::handleResult (v8::Isolate* isolate) {
v8::EscapableHandleScope scope(isolate);
if (! _httpResult->isComplete()) {
// not complete
_lastErrorMessage = _client->getErrorMessage();
@ -490,43 +462,43 @@ v8::Handle<v8::Value> V8ClientConnection::handleResult (v8::Isolate* isolate) {
return scope.Escape<v8::Value>(result);
}
else {
// complete
_lastHttpReturnCode = _httpResult->getHttpReturnCode();
// got a body
StringBuffer& sb = _httpResult->getBody();
// complete
if (sb.length() > 0) {
isolate->GetCurrentContext()->Global();
_lastHttpReturnCode = _httpResult->getHttpReturnCode();
if (_httpResult->isJson()) {
return scope.Escape<v8::Value>(TRI_FromJsonString(isolate, sb.c_str(), nullptr));
}
// got a body
StringBuffer& sb = _httpResult->getBody();
// return body as string
return scope.Escape<v8::Value>(TRI_V8_STD_STRING(sb));
if (sb.length() > 0) {
isolate->GetCurrentContext()->Global();
if (_httpResult->isJson()) {
return scope.Escape<v8::Value>(TRI_FromJsonString(isolate, sb.c_str(), nullptr));
}
else {
// no body
v8::Handle<v8::Object> result = v8::Object::New(isolate);
result->ForceSet(TRI_V8_ASCII_STRING("code"), v8::Integer::New(isolate, _lastHttpReturnCode));
if (_lastHttpReturnCode >= 400) {
string returnMessage(_httpResult->getHttpReturnMessage());
result->ForceSet(TRI_V8_ASCII_STRING("error"), v8::Boolean::New(isolate, true));
result->ForceSet(TRI_V8_ASCII_STRING("errorNum"), v8::Integer::New(isolate, _lastHttpReturnCode));
result->ForceSet(TRI_V8_ASCII_STRING("errorMessage"), TRI_V8_STD_STRING(returnMessage));
}
else {
result->ForceSet(TRI_V8_ASCII_STRING("error"), v8::Boolean::New(isolate, false));
}
return scope.Escape<v8::Value>(result);
}
// return body as string
return scope.Escape<v8::Value>(TRI_V8_STD_STRING(sb));
}
// no body
v8::Handle<v8::Object> result = v8::Object::New(isolate);
result->ForceSet(TRI_V8_ASCII_STRING("code"), v8::Integer::New(isolate, _lastHttpReturnCode));
if (_lastHttpReturnCode >= 400) {
string returnMessage(_httpResult->getHttpReturnMessage());
result->ForceSet(TRI_V8_ASCII_STRING("error"), v8::Boolean::New(isolate, true));
result->ForceSet(TRI_V8_ASCII_STRING("errorNum"), v8::Integer::New(isolate, _lastHttpReturnCode));
result->ForceSet(TRI_V8_ASCII_STRING("errorMessage"), TRI_V8_STD_STRING(returnMessage));
}
else {
result->ForceSet(TRI_V8_ASCII_STRING("error"), v8::Boolean::New(isolate, false));
}
return scope.Escape<v8::Value>(result);
}
////////////////////////////////////////////////////////////////////////////////
@ -543,10 +515,8 @@ v8::Handle<v8::Value> V8ClientConnection::requestDataRaw (v8::Isolate* isolate,
_lastErrorMessage = "";
_lastHttpReturnCode = 0;
if (_httpResult) {
delete _httpResult;
_httpResult = nullptr;
}
delete _httpResult;
_httpResult = nullptr;
if (body.empty()) {
_httpResult = _client->request(method, location, nullptr, 0, headerFields);
@ -555,7 +525,7 @@ v8::Handle<v8::Value> V8ClientConnection::requestDataRaw (v8::Isolate* isolate,
_httpResult = _client->request(method, location, body.c_str(), body.length(), headerFields);
}
if (!_httpResult->isComplete()) {
if (! _httpResult->isComplete()) {
// not complete
_lastErrorMessage = _client->getErrorMessage();
@ -593,50 +563,50 @@ v8::Handle<v8::Value> V8ClientConnection::requestDataRaw (v8::Isolate* isolate,
return scope.Escape<v8::Value>(result);
}
else {
// complete
_lastHttpReturnCode = _httpResult->getHttpReturnCode();
// create raw response
v8::Handle<v8::Object> result = v8::Object::New(isolate);
// complete
result->ForceSet(TRI_V8_ASCII_STRING("code"), v8::Integer::New(isolate, _lastHttpReturnCode));
_lastHttpReturnCode = _httpResult->getHttpReturnCode();
if (_lastHttpReturnCode >= 400) {
string returnMessage(_httpResult->getHttpReturnMessage());
// create raw response
v8::Handle<v8::Object> result = v8::Object::New(isolate);
result->ForceSet(TRI_V8_ASCII_STRING("error"), v8::Boolean::New(isolate, true));
result->ForceSet(TRI_V8_ASCII_STRING("errorNum"), v8::Integer::New(isolate, _lastHttpReturnCode));
result->ForceSet(TRI_V8_ASCII_STRING("errorMessage"), TRI_V8_STD_STRING(returnMessage));
}
else {
result->ForceSet(TRI_V8_ASCII_STRING("error"), v8::Boolean::New(isolate, false));
}
result->ForceSet(TRI_V8_ASCII_STRING("code"), v8::Integer::New(isolate, _lastHttpReturnCode));
// got a body, copy it into the result
StringBuffer& sb = _httpResult->getBody();
if (sb.length() > 0) {
v8::Handle<v8::String> b = TRI_V8_STD_STRING(sb);
if (_lastHttpReturnCode >= 400) {
string returnMessage(_httpResult->getHttpReturnMessage());
result->ForceSet(TRI_V8_ASCII_STRING("body"), b);
}
// copy all headers
v8::Handle<v8::Object> headers = v8::Object::New(isolate);
auto const& hf = _httpResult->getHeaderFields();
for (auto const& it : hf) {
v8::Handle<v8::String> key = TRI_V8_STD_STRING(it.first);
v8::Handle<v8::String> val = TRI_V8_STD_STRING(it.second);
headers->ForceSet(key, val);
}
result->ForceSet(TRI_V8_ASCII_STRING("headers"), headers);
// and returns
return scope.Escape<v8::Value>(result);
result->ForceSet(TRI_V8_ASCII_STRING("error"), v8::Boolean::New(isolate, true));
result->ForceSet(TRI_V8_ASCII_STRING("errorNum"), v8::Integer::New(isolate, _lastHttpReturnCode));
result->ForceSet(TRI_V8_ASCII_STRING("errorMessage"), TRI_V8_STD_STRING(returnMessage));
}
else {
result->ForceSet(TRI_V8_ASCII_STRING("error"), v8::Boolean::New(isolate, false));
}
// got a body, copy it into the result
StringBuffer& sb = _httpResult->getBody();
if (sb.length() > 0) {
v8::Handle<v8::String> b = TRI_V8_STD_STRING(sb);
result->ForceSet(TRI_V8_ASCII_STRING("body"), b);
}
// copy all headers
v8::Handle<v8::Object> headers = v8::Object::New(isolate);
auto const& hf = _httpResult->getHeaderFields();
for (auto const& it : hf) {
v8::Handle<v8::String> key = TRI_V8_STD_STRING(it.first);
v8::Handle<v8::String> val = TRI_V8_STD_STRING(it.second);
headers->ForceSet(key, val);
}
result->ForceSet(TRI_V8_ASCII_STRING("headers"), headers);
// and returns
return scope.Escape<v8::Value>(result);
}
// -----------------------------------------------------------------------------

View File

@ -811,12 +811,17 @@ ArangoDatabase.prototype._createStatement = function (data) {
////////////////////////////////////////////////////////////////////////////////
ArangoDatabase.prototype._query = function (query, bindVars, cursorOptions, options) {
if (typeof query === "object" && query !== null && arguments.length === 1) {
return new ArangoStatement(this, query).execute();
}
var data = {
query: query,
bindVars: bindVars || undefined,
count: (cursorOptions && cursorOptions.count) || false,
batchSize: (cursorOptions && cursorOptions.batchSize) || undefined,
options: options || undefined
options: options || undefined,
cache: (options && options.cache) || undefined
};
return new ArangoStatement(this, data).execute();
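With the `cache` attribute forwarded from the `options` argument, callers can opt individual queries in or out of the result cache. A minimal usage sketch (the `users` collection name is an assumption, and the server-side cache mode is presumed to be `demand`):
var db = require("org/arangodb").db;
// first execution computes the result and stores it in the query cache
db._query("FOR u IN users RETURN u.name", {}, {}, { cache: true });
// a byte-identical query with identical bind parameters can then be
// answered from the cache; such results are marked with a `cached` flag
db._query("FOR u IN users RETURN u.name", {}, {}, { cache: true });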

View File

@ -46,6 +46,7 @@ function ArangoStatement (database, data) {
this._batchSize = null;
this._bindVars = {};
this._options = undefined;
this._cache = undefined;
if (typeof data === "string") {
data = { query: data };
@ -71,6 +72,9 @@ function ArangoStatement (database, data) {
if (data.batchSize !== undefined) {
this.setBatchSize(data.batchSize);
}
if (data.cache !== undefined) {
this.setCache(data.cache);
}
}
// -----------------------------------------------------------------------------
@ -118,6 +122,14 @@ ArangoStatement.prototype.getBindVariables = function () {
return this._bindVars;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the cache flag for the statement
////////////////////////////////////////////////////////////////////////////////
ArangoStatement.prototype.getCache = function () {
return this._cache;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the count flag for the statement
////////////////////////////////////////////////////////////////////////////////
@ -151,6 +163,14 @@ ArangoStatement.prototype.getQuery = function () {
return this._query;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the cache flag for the statement
////////////////////////////////////////////////////////////////////////////////
ArangoStatement.prototype.setCache = function (bool) {
this._cache = bool ? true : false;
};
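The getter/setter pair above gives per-statement control over result caching. A short sketch of the intended use, mirroring the `testCache` case added further down:
var st = db._createStatement({ query: "FOR u IN [ 1 ] RETURN 1" });
st.getCache();      // undefined until a value has been set
st.setCache(true);  // request that the result be stored in the query cache
st.getCache();      // -> true
st.setCache(false); // opt the statement out again
st.getCache();      // -> false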
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the count flag for the statement
///

View File

@ -176,6 +176,10 @@ ArangoStatement.prototype.execute = function () {
body.options = this._options;
}
if (this._cache !== undefined) {
body.cache = this._cache;
}
var requestResult = this._database._connection.POST(
"/_api/cursor",
JSON.stringify(body));
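For illustration only: with a cache flag set, the body POSTed to `/_api/cursor` carries an additional `cache` attribute alongside the usual fields (the query string below is a made-up placeholder and the field list is abridged):
// abridged, hypothetical request body
var body = {
  query: "FOR u IN users RETURN u.name",
  cache: true
};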

View File

@ -55,6 +55,7 @@ function GeneralArrayCursor (documents, skip, limit, data) {
this._countTotal = documents.length;
this._skip = skip;
this._limit = limit;
this._cached = false;
this._extra = { };
var self = this;
@ -64,6 +65,7 @@ function GeneralArrayCursor (documents, skip, limit, data) {
self._extra[d] = data[d];
}
});
this._cached = data.cached || false;
}
this.execute();
@ -124,7 +126,7 @@ GeneralArrayCursor.prototype.execute = function () {
GeneralArrayCursor.prototype._PRINT = function (context) {
var text;
text = "GeneralArrayCursor([.. " + this._documents.length + " docs ..])";
text = "GeneralArrayCursor([.. " + this._documents.length + " docs .., cached: " + String(this._cached) + "])";
if (this._skip !== null && this._skip !== 0) {
text += ".skip(" + this._skip + ")";

View File

@ -0,0 +1,78 @@
'use strict';
////////////////////////////////////////////////////////////////////////////////
/// @brief AQL query cache management
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2012 triagens GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is triAGENS GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
var internal = require("internal");
var arangosh = require("org/arangodb/arangosh");
// -----------------------------------------------------------------------------
// --SECTION-- module "org/arangodb/aql/cache"
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief clears the query cache
////////////////////////////////////////////////////////////////////////////////
exports.clear = function () {
var db = internal.db;
var requestResult = db._connection.DELETE("/_api/query-cache");
arangosh.checkRequestResult(requestResult);
return requestResult;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief fetches or sets the query cache properties
////////////////////////////////////////////////////////////////////////////////
exports.properties = function (properties) {
var db = internal.db;
var requestResult;
if (properties !== undefined) {
requestResult = db._connection.PUT("/_api/query-cache/properties", JSON.stringify(properties));
}
else {
requestResult = db._connection.GET("/_api/query-cache/properties");
}
arangosh.checkRequestResult(requestResult);
return requestResult;
};
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------
// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// @addtogroup\\|// --SECTION--\\|/// @page\\|/// @}\\|/\\*jslint"
// End:
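For reference, a brief arangosh usage sketch of this module (it assumes an established connection to the server):
var cache = require("org/arangodb/aql/cache");
cache.properties();                    // GET    /_api/query-cache/properties
cache.properties({ mode: "demand" });  // PUT    /_api/query-cache/properties
cache.clear();                         // DELETE /_api/query-cache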

View File

@ -810,12 +810,17 @@ ArangoDatabase.prototype._createStatement = function (data) {
////////////////////////////////////////////////////////////////////////////////
ArangoDatabase.prototype._query = function (query, bindVars, cursorOptions, options) {
if (typeof query === "object" && query !== null && arguments.length === 1) {
return new ArangoStatement(this, query).execute();
}
var data = {
query: query,
bindVars: bindVars || undefined,
count: (cursorOptions && cursorOptions.count) || false,
batchSize: (cursorOptions && cursorOptions.batchSize) || undefined,
options: options || undefined
options: options || undefined,
cache: (options && options.cache) || undefined
};
return new ArangoStatement(this, data).execute();

View File

@ -175,6 +175,10 @@ ArangoStatement.prototype.execute = function () {
body.options = this._options;
}
if (this._cache !== undefined) {
body.cache = this._cache;
}
var requestResult = this._database._connection.POST(
"/_api/cursor",
JSON.stringify(body));

View File

@ -45,6 +45,7 @@ function ArangoStatement (database, data) {
this._batchSize = null;
this._bindVars = {};
this._options = undefined;
this._cache = undefined;
if (typeof data === "string") {
data = { query: data };
@ -70,6 +71,9 @@ function ArangoStatement (database, data) {
if (data.batchSize !== undefined) {
this.setBatchSize(data.batchSize);
}
if (data.cache !== undefined) {
this.setCache(data.cache);
}
}
// -----------------------------------------------------------------------------
@ -117,6 +121,14 @@ ArangoStatement.prototype.getBindVariables = function () {
return this._bindVars;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the cache flag for the statement
////////////////////////////////////////////////////////////////////////////////
ArangoStatement.prototype.getCache = function () {
return this._cache;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the count flag for the statement
////////////////////////////////////////////////////////////////////////////////
@ -150,6 +162,14 @@ ArangoStatement.prototype.getQuery = function () {
return this._query;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the cache flag for the statement
////////////////////////////////////////////////////////////////////////////////
ArangoStatement.prototype.setCache = function (bool) {
this._cache = bool ? true : false;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the count flag for the statement
///

View File

@ -54,6 +54,7 @@ function GeneralArrayCursor (documents, skip, limit, data) {
this._countTotal = documents.length;
this._skip = skip;
this._limit = limit;
this._cached = false;
this._extra = { };
var self = this;
@ -63,6 +64,7 @@ function GeneralArrayCursor (documents, skip, limit, data) {
self._extra[d] = data[d];
}
});
this._cached = data.cached || false;
}
this.execute();
@ -123,7 +125,7 @@ GeneralArrayCursor.prototype.execute = function () {
GeneralArrayCursor.prototype._PRINT = function (context) {
var text;
text = "GeneralArrayCursor([.. " + this._documents.length + " docs ..])";
text = "GeneralArrayCursor([.. " + this._documents.length + " docs .., cached: " + String(this._cached) + "])";
if (this._skip !== null && this._skip !== 0) {
text += ".skip(" + this._skip + ")";

View File

@ -109,6 +109,17 @@ function DatabaseSuite () {
assertEqual([ [ 1, 454 ] ], internal.db._query("return [ @low, @high ]", { low : 1, high : 454 }).toArray());
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test _query function
////////////////////////////////////////////////////////////////////////////////
testQueryObject : function () {
assertEqual([ 1 ], internal.db._query({ query: "return 1" }).toArray());
assertEqual([ [ 1, 2, 9, "foo" ] ], internal.db._query({ query: "return [ 1, 2, 9, \"foo\" ]" }).toArray());
var obj = { query: "return [ @low, @high ]", bindVars: { low : 1, high : 454 } };
assertEqual([ [ 1, 454 ] ], internal.db._query(obj).toArray());
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test _executeTransaction
////////////////////////////////////////////////////////////////////////////////

View File

@ -677,6 +677,22 @@ function StatementSuite () {
assertEqual("for u2 in users return 2", st.getQuery());
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test get/set cache
////////////////////////////////////////////////////////////////////////////////
testCache : function () {
var st = db._createStatement({ query : "for u in [ 1 ] return 1" });
assertUndefined(st.getCache());
st.setCache(true);
assertTrue(st.getCache());
st.setCache(false);
assertFalse(st.getCache());
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test get/set count
////////////////////////////////////////////////////////////////////////////////

View File

@ -8512,7 +8512,7 @@ exports.reload = reloadUserFunctions;
// initialise the query engine
resetRegexCache();
reloadUserFunctions();
//reloadUserFunctions();
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE

View File

@ -0,0 +1,65 @@
/*global AQL_QUERY_CACHE_PROPERTIES, AQL_QUERY_CACHE_INVALIDATE */
////////////////////////////////////////////////////////////////////////////////
/// @brief AQL query cache management
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2012 triagens GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is triAGENS GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
// -----------------------------------------------------------------------------
// --SECTION-- module "org/arangodb/aql/cache"
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// @brief invalidates the query cache
////////////////////////////////////////////////////////////////////////////////
exports.clear = function () {
'use strict';
AQL_QUERY_CACHE_INVALIDATE();
};
////////////////////////////////////////////////////////////////////////////////
/// @brief fetches or sets the properties of the query cache
////////////////////////////////////////////////////////////////////////////////
exports.properties = function (properties) {
'use strict';
if (properties !== undefined) {
return AQL_QUERY_CACHE_PROPERTIES(properties);
}
return AQL_QUERY_CACHE_PROPERTIES();
};
// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------
// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// @addtogroup\\|// --SECTION--\\|/// @page\\|/// @}\\|/\\*jslint"
// End:
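On the server side, the same module name wraps the `AQL_QUERY_CACHE_PROPERTIES` and `AQL_QUERY_CACHE_INVALIDATE` globals directly. A brief sketch:
var cache = require("org/arangodb/aql/cache");
cache.properties({ mode: "on" });  // enable caching for all eligible queries
cache.properties();                // returns the current configuration, e.g. its `mode`
cache.clear();                     // invalidates all cached results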

View File

@ -91,12 +91,17 @@ ArangoDatabase.prototype._createStatement = function (data) {
////////////////////////////////////////////////////////////////////////////////
ArangoDatabase.prototype._query = function (query, bindVars, cursorOptions, options) {
if (typeof query === 'object' && query !== null && arguments.length === 1) {
return new ArangoStatement(this, query).execute();
}
var payload = {
query: query,
bindVars: bindVars || undefined,
count: (cursorOptions && cursorOptions.count) || false,
batchSize: (cursorOptions && cursorOptions.batchSize) || undefined,
options: options || undefined
options: options || undefined,
cache: (options && options.cache) || undefined
};
return new ArangoStatement(this, payload).execute();
};

View File

@ -81,6 +81,9 @@ ArangoStatement.prototype.execute = function () {
var opts = this._options || { };
if (typeof opts === 'object') {
opts._doCount = this._doCount;
if (this._cache !== undefined) {
opts.cache = this._cache;
}
}
var result = AQL_EXECUTE(this._query, this._bindVars, opts);

View File

@ -520,7 +520,7 @@ exports.historian = function () {
}
}
catch (err) {
require("console").warn("catch error in historian: %s", err);
require("console").warn("catch error in historian: %s", err.stack);
}
};

View File

@ -295,35 +295,13 @@ function ahuacatlBindTestSuite () {
assertEqual(expected, actual);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test a list bind variable
////////////////////////////////////////////////////////////////////////////////
testBindList1 : function () {
var expected = [ "" ];
var actual = getQueryResults("FOR u IN @list FILTER u == @value RETURN u", { "list" : [ "the quick fox", true, false, -5, 0, 1, null, "", [ ], { } ], "value" : [ ] });
assertEqual(expected, actual);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test a list bind variable
////////////////////////////////////////////////////////////////////////////////
testBindList2 : function () {
var expected = [ true, false, 1, null, [ ] ];
var actual = getQueryResults("FOR u IN @list FILTER u IN @value RETURN u", { "list" : [ "the quick fox", true, false, -5, 0, 1, null, "", [ ], { } ], "value" : [ true, false, 1, null, [ ] ] });
assertEqual(expected, actual);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test an array bind variable
////////////////////////////////////////////////////////////////////////////////
testBindArray1 : function () {
var expected = [ { } ];
var actual = getQueryResults("FOR u IN @list FILTER u == @value RETURN u", { "list" : [ "the quick fox", true, false, -5, 0, 1, null, "", [ ], { } ], "value" : { } });
var expected = [ "" ];
var actual = getQueryResults("FOR u IN @list FILTER u == @value RETURN u", { "list" : [ "the quick fox", true, false, -5, 0, 1, null, "", [ ], { } ], "value" : [ ] });
assertEqual(expected, actual);
},
@ -333,6 +311,28 @@ function ahuacatlBindTestSuite () {
////////////////////////////////////////////////////////////////////////////////
testBindArray2 : function () {
var expected = [ true, false, 1, null, [ ] ];
var actual = getQueryResults("FOR u IN @list FILTER u IN @value RETURN u", { "list" : [ "the quick fox", true, false, -5, 0, 1, null, "", [ ], { } ], "value" : [ true, false, 1, null, [ ] ] });
assertEqual(expected, actual);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test an object bind variable
////////////////////////////////////////////////////////////////////////////////
testBindObject1 : function () {
var expected = [ { } ];
var actual = getQueryResults("FOR u IN @list FILTER u == @value RETURN u", { "list" : [ "the quick fox", true, false, -5, 0, 1, null, "", [ ], { } ], "value" : { } });
assertEqual(expected, actual);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test an object bind variable
////////////////////////////////////////////////////////////////////////////////
testBindObject2 : function () {
var expected = [ { "brown" : true, "fox" : true, "quick" : true } ];
var list = [ { "fox" : false, "brown" : false, "quick" : false },
{ "fox" : true, "brown" : false, "quick" : false },

View File

@ -177,7 +177,6 @@ function optimizerRuleTestSuite() {
skiplist = null;
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test that rule has no effect
////////////////////////////////////////////////////////////////////////////////
@ -242,7 +241,7 @@ function optimizerRuleTestSuite() {
removeAlwaysOnClusterRules(result.plan.rules), query);
QResults[0] = AQL_EXECUTE(query, { }, paramNone).json;
QResults[1] = AQL_EXECUTE(query, { }, paramIndexFromSort).json;
assertTrue(isEqual(QResults[0], QResults[1]), "result " + i + " is equal?");
allresults = getQueryMultiplePlansAndExecutions(query, {});
@ -307,8 +306,8 @@ function optimizerRuleTestSuite() {
////////////////////////////////////////////////////////////////////////////////
/// @brief this sort is replaceable by an index.
////////////////////////////////////////////////////////////////////////////////
testSortIndexable: function () {
testSortIndexable: function () {
var query = "FOR v IN " + colName + " SORT v.a RETURN [v.a, v.b]";
var XPresult;
@ -320,7 +319,7 @@ function optimizerRuleTestSuite() {
// -> use-index-for-sort alone.
XPresult = AQL_EXPLAIN(query, { }, paramIndexFromSort);
QResults[1] = AQL_EXECUTE(query, { }, paramIndexFromSort).json;
QResults[1] = AQL_EXECUTE(query, { }, paramIndexFromSort).json.sort(sortArray);
// our rule should have been applied.
assertEqual([ ruleName ], removeAlwaysOnClusterRules(XPresult.plan.rules));
// The sortnode and its calculation node should have been removed.
@ -332,7 +331,7 @@ function optimizerRuleTestSuite() {
// -> combined use-index-for-sort and remove-unnecessary-calculations-2
XPresult = AQL_EXPLAIN(query, { }, paramIndexFromSort_RemoveCalculations);
QResults[2] = AQL_EXECUTE(query, { }, paramIndexFromSort_RemoveCalculations).json;
QResults[2] = AQL_EXECUTE(query, { }, paramIndexFromSort_RemoveCalculations).json.sort(sortArray);
// our rule should have been applied.
assertEqual([ ruleName, removeCalculationNodes ].sort(), removeAlwaysOnClusterRules(XPresult.plan.rules).sort());
// The sortnode and its calculation node should have been removed.
@ -343,7 +342,7 @@ function optimizerRuleTestSuite() {
hasIndexRangeNode_WithRanges(XPresult, false);
for (i = 1; i < 3; i++) {
assertTrue(isEqual(QResults[0], QResults[i]), "Result " + i + " is Equal?");
assertTrue(isEqual(QResults[0], QResults[i]), "Result " + i + " is equal?");
}
var allresults = getQueryMultiplePlansAndExecutions(query, {});
for (j = 1; j < allresults.results.length; j++) {
@ -460,7 +459,7 @@ function optimizerRuleTestSuite() {
QResults[0] = AQL_EXECUTE(query, { }, paramNone).json.sort(sortArray);
// -> use-index-for-sort alone.
QResults[1] = AQL_EXECUTE(query, { }, paramIndexFromSort).json;
QResults[1] = AQL_EXECUTE(query, { }, paramIndexFromSort).json.sort(sortArray);
XPresult = AQL_EXPLAIN(query, { }, paramIndexFromSort);
// our rule should be there.
assertEqual([ ruleName ], removeAlwaysOnClusterRules(XPresult.plan.rules));
@ -473,7 +472,7 @@ function optimizerRuleTestSuite() {
hasIndexRangeNode_WithRanges(XPresult, false);
// -> combined use-index-for-sort and use-index-range
QResults[2] = AQL_EXECUTE(query, { }, paramIndexFromSort_IndexRange).json;
QResults[2] = AQL_EXECUTE(query, { }, paramIndexFromSort_IndexRange).json.sort(sortArray);
XPresult = AQL_EXPLAIN(query, { }, paramIndexFromSort_IndexRange);
assertEqual([ secondRuleName, ruleName ].sort(), removeAlwaysOnClusterRules(XPresult.plan.rules).sort());
// The sortnode should be gone, its calculation node should not have been removed yet.
@ -483,7 +482,7 @@ function optimizerRuleTestSuite() {
hasIndexRangeNode_WithRanges(XPresult, true);
// -> use-index-range alone.
QResults[3] = AQL_EXECUTE(query, { }, paramIndexRange).json;
QResults[3] = AQL_EXECUTE(query, { }, paramIndexRange).json.sort(sortArray);
XPresult = AQL_EXPLAIN(query, { }, paramIndexRange);
assertEqual([ secondRuleName ], removeAlwaysOnClusterRules(XPresult.plan.rules));
// the sortnode and its calculation node should be there.
@ -497,7 +496,7 @@ function optimizerRuleTestSuite() {
hasIndexRangeNode_WithRanges(XPresult, true);
// -> combined use-index-for-sort, remove-unnecessary-calculations-2 and use-index-range
QResults[4] = AQL_EXECUTE(query, { }, paramIndexFromSort_IndexRange_RemoveCalculations).json;
QResults[4] = AQL_EXECUTE(query, { }, paramIndexFromSort_IndexRange_RemoveCalculations).json.sort(sortArray);
XPresult = AQL_EXPLAIN(query, { }, paramIndexFromSort_IndexRange_RemoveCalculations);
assertEqual([ secondRuleName, removeCalculationNodes, ruleName ].sort(), removeAlwaysOnClusterRules(XPresult.plan.rules).sort());
@ -508,7 +507,7 @@ function optimizerRuleTestSuite() {
hasIndexRangeNode_WithRanges(XPresult, true);
for (i = 1; i < 5; i++) {
assertTrue(isEqual(QResults[0], QResults[i]), "Result " + i + " is Equal?");
assertTrue(isEqual(QResults[0], QResults[i]), "Result " + i + " is equal?");
}
var allresults = getQueryMultiplePlansAndExecutions(query, {});
for (j = 1; j < allresults.results.length; j++) {
@ -640,7 +639,7 @@ function optimizerRuleTestSuite() {
assertEqual(first.lowConst.bound, first.highConst.bound, "bounds equality");
for (i = 1; i < 2; i++) {
assertTrue(isEqual(QResults[0].sort(sortArray), QResults[i]), "Result " + i + " is Equal?");
assertTrue(isEqual(QResults[0].sort(sortArray), QResults[i].sort(sortArray)), "Result " + i + " is Equal?");
}
var allresults = getQueryMultiplePlansAndExecutions(query, {});
for (j = 1; j < allresults.results.length; j++) {
@ -670,7 +669,7 @@ function optimizerRuleTestSuite() {
QResults[0] = AQL_EXECUTE(query, { }, paramNone).json.sort(sortArray);
// -> use-index-range alone.
QResults[1] = AQL_EXECUTE(query, { }, paramIndexRange).json;
QResults[1] = AQL_EXECUTE(query, { }, paramIndexRange).json.sort(sortArray);
XPresult = AQL_EXPLAIN(query, { }, paramIndexRange);
assertEqual([ secondRuleName ], removeAlwaysOnClusterRules(XPresult.plan.rules));
@ -685,7 +684,7 @@ function optimizerRuleTestSuite() {
assertEqual(first.highs.length, 0, "no variable high bound");
assertEqual(first.highConst.bound, 5, "proper value was set");
assertTrue(isEqual(QResults[0], QResults[1]), "Results are Equal?");
assertTrue(isEqual(QResults[0], QResults[1]), "Results are equal?");
var allresults = getQueryMultiplePlansAndExecutions(query, {});
for (j = 1; j < allresults.results.length; j++) {
@ -702,6 +701,7 @@ function optimizerRuleTestSuite() {
////////////////////////////////////////////////////////////////////////////////
/// @brief test in detail that an index range can be used for a greater than filter.
////////////////////////////////////////////////////////////////////////////////
testRangeGreaterThan: function () {
var query = "FOR v IN " + colName + " FILTER v.a > 5 RETURN [v.a, v.b]";
var XPresult;
@ -713,7 +713,7 @@ function optimizerRuleTestSuite() {
QResults[0] = AQL_EXECUTE(query, { }, paramNone).json.sort(sortArray);
// -> use-index-range alone.
QResults[1] = AQL_EXECUTE(query, { }, paramIndexRange).json;
QResults[1] = AQL_EXECUTE(query, { }, paramIndexRange).json.sort(sortArray);
XPresult = AQL_EXPLAIN(query, { }, paramIndexRange);
assertEqual([ secondRuleName ], removeAlwaysOnClusterRules(XPresult.plan.rules));
@ -758,7 +758,7 @@ function optimizerRuleTestSuite() {
QResults[0] = AQL_EXECUTE(query, { }, paramNone).json.sort(sortArray);
// -> use-index-range alone.
QResults[1] = AQL_EXECUTE(query, { }, paramIndexRange).json;
QResults[1] = AQL_EXECUTE(query, { }, paramIndexRange).json.sort(sortArray);
XPresult = AQL_EXPLAIN(query, { }, paramIndexRange);
assertEqual([ secondRuleName ], removeAlwaysOnClusterRules(XPresult.plan.rules));
@ -835,6 +835,7 @@ function optimizerRuleTestSuite() {
/// @brief test in detail that an index range can be used for an or combined
/// greater than + less than filter spanning a range.
////////////////////////////////////////////////////////////////////////////////
testRangeBandstop: function () {
var query = "FOR v IN " + colName + " FILTER v.a < 5 || v.a > 10 RETURN [v.a, v.b]";
@ -846,7 +847,7 @@ function optimizerRuleTestSuite() {
QResults[0] = AQL_EXECUTE(query, { }, paramNone).json.sort(sortArray);
// -> use-index-range alone.
QResults[1] = AQL_EXECUTE(query, { }, paramIndexRange).json;
QResults[1] = AQL_EXECUTE(query, { }, paramIndexRange).json.sort(sortArray);
XPresult = AQL_EXPLAIN(query, { }, paramIndexRange);
assertEqual([ secondRuleName ], removeAlwaysOnClusterRules(XPresult.plan.rules));
@ -865,7 +866,7 @@ function optimizerRuleTestSuite() {
assertEqual(first.lowConst.bound, 10, "proper value was set");
assertEqual(first.lowConst.include, false, "proper include");
assertTrue(isEqual(QResults[0], QResults[1]), "Results are Equal?");
assertTrue(isEqual(QResults[0], QResults[1]), "Results are equal?");
},
////////////////////////////////////////////////////////////////////////////////

View File

@ -0,0 +1,971 @@
/*jshint globalstrict:false, strict:false, maxlen: 500 */
/*global fail, assertEqual, assertTrue, assertFalse, AQL_EXECUTE,
AQL_QUERY_CACHE_PROPERTIES, AQL_QUERY_CACHE_INVALIDATE */
////////////////////////////////////////////////////////////////////////////////
/// @brief tests for the AQL query result cache
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2010-2012 triagens GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is triAGENS GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2012, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
var jsunity = require("jsunity");
var db = require("org/arangodb").db;
var internal = require("internal");
////////////////////////////////////////////////////////////////////////////////
/// @brief test suite
////////////////////////////////////////////////////////////////////////////////
function ahuacatlQueryCacheTestSuite () {
var cacheProperties;
var c1, c2;
return {
////////////////////////////////////////////////////////////////////////////////
/// @brief set up
////////////////////////////////////////////////////////////////////////////////
setUp : function () {
cacheProperties = AQL_QUERY_CACHE_PROPERTIES();
AQL_QUERY_CACHE_INVALIDATE();
db._drop("UnitTestsAhuacatlQueryCache1");
db._drop("UnitTestsAhuacatlQueryCache2");
c1 = db._create("UnitTestsAhuacatlQueryCache1");
c2 = db._create("UnitTestsAhuacatlQueryCache2");
},
////////////////////////////////////////////////////////////////////////////////
/// @brief tear down
////////////////////////////////////////////////////////////////////////////////
tearDown : function () {
db._drop("UnitTestsAhuacatlQueryCache1");
db._drop("UnitTestsAhuacatlQueryCache2");
c1 = null;
c2 = null;
AQL_QUERY_CACHE_PROPERTIES(cacheProperties);
AQL_QUERY_CACHE_INVALIDATE();
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test setting modes
////////////////////////////////////////////////////////////////////////////////
testModes : function () {
var result;
result = AQL_QUERY_CACHE_PROPERTIES({ mode: "off" });
assertEqual("off", result.mode);
result = AQL_QUERY_CACHE_PROPERTIES();
assertEqual("off", result.mode);
result = AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
assertEqual("on", result.mode);
result = AQL_QUERY_CACHE_PROPERTIES();
assertEqual("on", result.mode);
result = AQL_QUERY_CACHE_PROPERTIES({ mode: "demand" });
assertEqual("demand", result.mode);
result = AQL_QUERY_CACHE_PROPERTIES();
assertEqual("demand", result.mode);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test rename collection
////////////////////////////////////////////////////////////////////////////////
testRenameCollection1 : function () {
if (require("org/arangodb/cluster").isCluster()) {
// renaming collections not supported in cluster
return;
}
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
c2.drop();
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
c1.rename("UnitTestsAhuacatlQueryCache2");
try {
AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache1" });
fail();
}
catch (err) {
assertEqual(internal.errors.ERROR_ARANGO_COLLECTION_NOT_FOUND.code, err.errorNum);
}
result = AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache2" });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache2" });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test rename collection
////////////////////////////////////////////////////////////////////////////////
testRenameCollection2 : function () {
if (require("org/arangodb/cluster").isCluster()) {
// renaming collections not supported in cluster
return;
}
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache2" });
assertFalse(result.cached);
assertEqual([ ], result.json);
result = AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache2" });
assertTrue(result.cached);
assertEqual([ ], result.json);
c2.drop();
c1.rename("UnitTestsAhuacatlQueryCache2");
try {
AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache1" });
fail();
}
catch (err) {
assertEqual(internal.errors.ERROR_ARANGO_COLLECTION_NOT_FOUND.code, err.errorNum);
}
result = AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache2" });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache2" });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test drop collection
////////////////////////////////////////////////////////////////////////////////
testDropCollection : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
c1.drop();
try {
AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache1" });
fail();
}
catch (err) {
assertEqual(internal.errors.ERROR_ARANGO_COLLECTION_NOT_FOUND.code, err.errorNum);
}
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test drop and recreation of collection
////////////////////////////////////////////////////////////////////////////////
testDropAndRecreateCollection : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
c1.drop();
try {
AQL_EXECUTE(query, { "@collection": "UnitTestsAhuacatlQueryCache1" });
fail();
}
catch (err) {
assertEqual(internal.errors.ERROR_ARANGO_COLLECTION_NOT_FOUND.code, err.errorNum);
}
// re-create collection with same name
c1 = db._create("UnitTestsAhuacatlQueryCache1");
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test adding indexes
////////////////////////////////////////////////////////////////////////////////
testAddIndexCapConstraint : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
c1.ensureCapConstraint(3);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 3, 4, 5 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test dropping indexes
////////////////////////////////////////////////////////////////////////////////
testDropIndexCapConstraint : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
c1.ensureCapConstraint(3);
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 3, 4, 5 ], result.json);
var indexes = c1.getIndexes();
assertEqual(2, indexes.length);
assertEqual("cap", indexes[1].type);
assertTrue(c1.dropIndex(indexes[1].id));
indexes = c1.getIndexes();
assertEqual(1, indexes.length);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 3, 4, 5 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test queries w/ parse error
////////////////////////////////////////////////////////////////////////////////
testParseError : function () {
var query = "FOR i IN 1..3 RETURN";
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
try {
AQL_EXECUTE(query);
fail();
}
catch (err1) {
assertEqual(internal.errors.ERROR_QUERY_PARSE.code, err1.errorNum);
}
// nothing should have been cached, so we should get the same error again
try {
AQL_EXECUTE(query);
fail();
}
catch (err2) {
assertEqual(internal.errors.ERROR_QUERY_PARSE.code, err2.errorNum);
}
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test queries w/ other error
////////////////////////////////////////////////////////////////////////////////
testOtherError : function () {
db._drop("UnitTestsAhuacatlQueryCache3");
var query = "FOR doc IN UnitTestsAhuacatlQueryCache3 RETURN doc";
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
try {
AQL_EXECUTE(query);
fail();
}
catch (err1) {
assertEqual(internal.errors.ERROR_ARANGO_COLLECTION_NOT_FOUND.code, err1.errorNum);
}
// nothing should have been cached, so we should get the same error again
try {
AQL_EXECUTE(query);
fail();
}
catch (err2) {
assertEqual(internal.errors.ERROR_ARANGO_COLLECTION_NOT_FOUND.code, err2.errorNum);
}
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test queries w/ warnings
////////////////////////////////////////////////////////////////////////////////
testWarnings : function () {
var query = "FOR i IN 1..3 RETURN i / 0";
var result;
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query);
assertFalse(result.cached);
assertEqual([ null, null, null ], result.json);
assertEqual(3, result.warnings.length);
result = AQL_EXECUTE(query);
assertFalse(result.cached); // won't be cached because of the warnings
assertEqual([ null, null, null ], result.json);
assertEqual(3, result.warnings.length);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test non-deterministic queries
////////////////////////////////////////////////////////////////////////////////
testNonDeterministicQueriesRandom : function () {
var query = "FOR doc IN @@collection RETURN RAND()";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual(5, result.json.length);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual(5, result.json.length);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test non-deterministic queries
////////////////////////////////////////////////////////////////////////////////
testNonDeterministicQueriesDocument : function () {
var query = "FOR i IN 1..5 RETURN DOCUMENT(@@collection, CONCAT('test', i))";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i, _key: "test" + i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual(5, result.json.length);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual(5, result.json.length);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test slightly different queries
////////////////////////////////////////////////////////////////////////////////
testSlightlyDifferentQueries : function () {
var queries = [
"FOR doc IN @@collection SORT doc.value RETURN doc.value",
"FOR doc IN @@collection SORT doc.value ASC RETURN doc.value",
" FOR doc IN @@collection SORT doc.value RETURN doc.value",
"FOR doc IN @@collection SORT doc.value RETURN doc.value",
"FOR doc IN @@collection SORT doc.value RETURN doc.value ",
"FOR doc IN @@collection RETURN doc.value",
"FOR doc IN @@collection RETURN doc.value ",
" FOR doc IN @@collection RETURN doc.value ",
"/* foo */ FOR doc IN @@collection RETURN doc.value",
"FOR doc IN @@collection RETURN doc.value /* foo */",
"FOR doc IN @@collection LIMIT 10 RETURN doc.value",
"FOR doc IN @@collection FILTER doc.value < 99 RETURN doc.value",
"FOR doc IN @@collection FILTER doc.value <= 99 RETURN doc.value",
"FOR doc IN @@collection FILTER doc.value < 98 RETURN doc.value",
"FOR doc IN @@collection RETURN doc.value + 0"
];
for (var i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
queries.forEach(function (query) {
var result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual(5, result.json.length);
});
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test same query with different bind parameters
////////////////////////////////////////////////////////////////////////////////
testDifferentBindOrders : function () {
var query = "FOR doc IN @@collection SORT doc.value LIMIT @offset, @count RETURN doc.value";
var result, i;
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
for (i = 1; i <= 10; ++i) {
c1.save({ value: i });
}
result = AQL_EXECUTE(query, { "@collection": c1.name(), offset: 0, count: 1 });
assertFalse(result.cached);
assertEqual([ 1 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name(), offset: 0, count: 1 });
assertTrue(result.cached);
assertEqual([ 1 ], result.json);
// same bind parameter values, but in exchanged order
result = AQL_EXECUTE(query, { "@collection": c1.name(), offset: 1, count: 0 });
assertFalse(result.cached);
assertEqual([ ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test same query with different bind parameters
////////////////////////////////////////////////////////////////////////////////
testDifferentBindOrdersArray : function () {
var query = "RETURN @values";
var result;
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { values: [ 1, 2, 3, 4, 5 ] });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json[0]);
result = AQL_EXECUTE(query, { values: [ 1, 2, 3, 4, 5 ] });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json[0]);
// same bind parameter values, but in exchanged order
result = AQL_EXECUTE(query, { values: [ 5, 4, 3, 2, 1 ] });
assertFalse(result.cached);
assertEqual([ 5, 4, 3, 2, 1 ], result.json[0]);
result = AQL_EXECUTE(query, { values: [ 1, 2, 3, 5, 4 ] });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 5, 4 ], result.json[0]);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test same query with different bind parameters
////////////////////////////////////////////////////////////////////////////////
testDifferentBindValues : function () {
var query = "FOR doc IN @@collection FILTER doc.value == @value RETURN doc.value";
var result, i;
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
for (i = 1; i <= 5; ++i) {
result = AQL_EXECUTE(query, { "@collection": c1.name(), value: i });
assertFalse(result.cached);
assertEqual([ i ], result.json);
}
// now the query results should be fully cached
for (i = 1; i <= 5; ++i) {
result = AQL_EXECUTE(query, { "@collection": c1.name(), value: i });
assertTrue(result.cached);
assertEqual([ i ], result.json);
}
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test same query with different bind parameters
////////////////////////////////////////////////////////////////////////////////
testDifferentBindValuesCollection : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
c2.save({ value: i + 1 });
}
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
// now the query results should be fully cached
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c2.name() });
assertFalse(result.cached);
assertEqual([ 2, 3, 4, 5, 6 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c2.name() });
assertTrue(result.cached);
assertEqual([ 2, 3, 4, 5, 6 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test that a read operation does not invalidate the cache
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterRead : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result;
var doc = c1.save({ value: 1 });
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1 ], result.json);
c1.document(doc._key); // this will not invalidate cache
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation after single insert operation
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterInsertSingle : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result;
c1.save({ value: 1 });
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1 ], result.json);
c1.save({ value: 2 }); // this will invalidate cache
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation after single update operation
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterUpdateSingle : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result;
var doc = c1.save({ value: 1 });
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1 ], result.json);
c1.update(doc, { value: 42 }); // this will invalidate cache
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 42 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 42 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation after single remove operation
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterRemoveSingle : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result;
c1.save({ value: 1 });
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1 ], result.json);
c1.remove(c1.any()._key); // this will invalidate cache
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation after truncate operation
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterTruncate : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 10; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], result.json);
c1.truncate(); // this will invalidate cache
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ ], result.json);
for (i = 1; i <= 10; ++i) {
c1.save({ value: i });
}
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation after AQL insert operation
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterAqlInsert : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
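// the following AQL insert will invalidate the cache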
AQL_EXECUTE("INSERT { value: 9 } INTO @@collection", { "@collection" : c1.name() });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5, 9 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5, 9 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation after AQL update operation
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterAqlUpdate : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
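// the following AQL update will invalidate the cache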
AQL_EXECUTE("FOR doc IN @@collection UPDATE doc._key WITH { value: doc.value + 1 } IN @@collection", { "@collection" : c1.name() });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 2, 3, 4, 5, 6 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 2, 3, 4, 5, 6 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation after AQL remove operation
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterAqlRemove : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
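// the following AQL remove will invalidate the cache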
AQL_EXECUTE("FOR doc IN @@collection REMOVE doc._key IN @@collection", { "@collection" : c1.name() });
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation after AQL multi-collection operation
////////////////////////////////////////////////////////////////////////////////
testInvalidationAfterAqlMulti : function () {
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
// collection1
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
// collection2
result = AQL_EXECUTE(query, { "@collection": c2.name() });
assertFalse(result.cached);
assertEqual([ ], result.json);
result = AQL_EXECUTE(query, { "@collection": c2.name() });
assertTrue(result.cached);
assertEqual([ ], result.json);
AQL_EXECUTE("FOR doc IN @@collection1 INSERT doc IN @@collection2", { "@collection1" : c1.name(), "@collection2" : c2.name() });
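// the query above only read from c1 and wrote into c2, so only the cache entries for c2 are invalidated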
result = AQL_EXECUTE(query, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c2.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query, { "@collection": c2.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalidation of multiple queries
////////////////////////////////////////////////////////////////////////////////
testInvalidationMultipleQueries : function () {
var query1 = "FOR doc IN @@collection SORT doc.value ASC RETURN doc.value";
var query2 = "FOR doc IN @@collection SORT doc.value DESC RETURN doc.value";
var result, i;
for (i = 1; i <= 5; ++i) {
c1.save({ value: i });
}
AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
result = AQL_EXECUTE(query1, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query1, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5 ], result.json);
result = AQL_EXECUTE(query2, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 5, 4, 3, 2, 1 ], result.json);
result = AQL_EXECUTE(query2, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 5, 4, 3, 2, 1 ], result.json);
c1.save({ value: 6 });
result = AQL_EXECUTE(query1, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 1, 2, 3, 4, 5, 6 ], result.json);
result = AQL_EXECUTE(query1, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 1, 2, 3, 4, 5, 6 ], result.json);
result = AQL_EXECUTE(query2, { "@collection": c1.name() });
assertFalse(result.cached);
assertEqual([ 6, 5, 4, 3, 2, 1 ], result.json);
result = AQL_EXECUTE(query2, { "@collection": c1.name() });
assertTrue(result.cached);
assertEqual([ 6, 5, 4, 3, 2, 1 ], result.json);
}
};
}
////////////////////////////////////////////////////////////////////////////////
/// @brief executes the test suite
////////////////////////////////////////////////////////////////////////////////
jsunity.run(ahuacatlQueryCacheTestSuite);
return jsunity.done();
// Local Variables:
// mode: outline-minor
// outline-regexp: "^\\(/// @brief\\|/// @addtogroup\\|// --SECTION--\\|/// @page\\|/// @}\\)"
// End:
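
All of the tests above follow the same pattern: run a query twice so that the second execution is answered from the cache, modify the underlying collection, and then inspect the `cached` flag of the next result. A condensed sketch of that pattern, assuming the internal helpers `AQL_QUERY_CACHE_PROPERTIES` and `AQL_EXECUTE` are available as in the suite above, and using a hypothetical existing collection named `example`:

var db = require("org/arangodb").db;
var col = db._collection("example");    // "example" is a made-up collection name
col.save({ value: 1 });

AQL_QUERY_CACHE_PROPERTIES({ mode: "on" });
var query = "FOR doc IN @@collection SORT doc.value RETURN doc.value";
var bind = { "@collection": col.name() };

var first = AQL_EXECUTE(query, bind);   // computed: first.cached === false
var second = AQL_EXECUTE(query, bind);  // answered from the cache: second.cached === true

col.save({ value: 2 });                 // any write to the collection invalidates its entries

var third = AQL_EXECUTE(query, bind);   // computed again: third.cached === false

The individual test cases above verify this contract for plain reads (no invalidation), single-document writes, truncation, and data-modification AQL queries.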

View File

@@ -633,15 +633,14 @@ static uint64_t FastHashJsonRecursive (uint64_t hash,
     case TRI_JSON_OBJECT: {
       hash = fasthash64(static_cast<const void*>("object"), 6, hash);
       size_t const n = TRI_LengthVector(&object->_value._objects);
-      uint64_t tmphash = hash;
       for (size_t i = 0; i < n; i += 2) {
         auto subjson = static_cast<TRI_json_t const*>(TRI_AddressVector(&object->_value._objects, i));
         TRI_ASSERT(TRI_IsStringJson(subjson));
-        tmphash ^= FastHashJsonRecursive(hash, subjson);
+        hash = FastHashJsonRecursive(hash, subjson);
         subjson = static_cast<TRI_json_t const*>(TRI_AddressVector(&object->_value._objects, i + 1));
-        tmphash ^= FastHashJsonRecursive(hash, subjson);
+        hash = FastHashJsonRecursive(hash, subjson);
       }
-      return tmphash;
+      return hash;
     }
     case TRI_JSON_ARRAY: {
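
The change above replaces the XOR-combination of sub-hashes, each seeded with the same base hash, by chaining the hash through every key and value of the object. With the XOR variant the pairing of keys and values does not influence the result, so distinct objects can collide. A hypothetical JavaScript illustration of the two combining strategies; `toyHash` is a made-up stand-in, not the real `fasthash64`:

// toy 32-bit string hash, for illustration only
function toyHash(seed, str) {
  var h = seed >>> 0;
  for (var i = 0; i < str.length; ++i) {
    h = Math.imul(h ^ str.charCodeAt(i), 16777619) >>> 0;
  }
  return h;
}

// old strategy: XOR all sub-hashes, each computed from the same seed
function xorCombine(obj, seed) {
  var h = seed;
  Object.keys(obj).forEach(function (key) {
    h ^= toyHash(seed, key);
    h ^= toyHash(seed, String(obj[key]));
  });
  return h >>> 0;
}

// new strategy: feed the evolving hash into every key and value
function chainCombine(obj, seed) {
  var h = seed;
  Object.keys(obj).forEach(function (key) {
    h = toyHash(h, key);
    h = toyHash(h, String(obj[key]));
  });
  return h >>> 0;
}

var a = { a: "x", b: "y" };
var b = { a: "y", b: "x" };
xorCombine(a, 17) === xorCombine(b, 17);     // true: the XOR variant cannot tell them apart
chainCombine(a, 17) === chainCombine(b, 17); // almost certainly false: chaining keeps the pairing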

View File

@@ -2093,17 +2093,15 @@ static v8::Handle<v8::Value> ParseValue (v8::Isolate* isolate, yyscan_t scanner,
 // -----------------------------------------------------------------------------
 ////////////////////////////////////////////////////////////////////////////////
-/// @brief parses a list
+/// @brief parses an array
 ////////////////////////////////////////////////////////////////////////////////
 static v8::Handle<v8::Value> ParseArray (v8::Isolate* isolate,
                                          yyscan_t scanner) {
   v8::EscapableHandleScope scope(isolate);
-  struct yyguts_t * yyg = (struct yyguts_t*) scanner;
+  struct yyguts_t* yyg = (struct yyguts_t*) scanner;
   v8::Handle<v8::Array> array = v8::Array::New(isolate);
-  bool comma = false;
   uint32_t pos = 0;
   int c = tri_v8_lex(scanner);
@@ -2113,7 +2111,7 @@ static v8::Handle<v8::Value> ParseArray (v8::Isolate* isolate,
       return scope.Escape<v8::Value>(array);
     }
-    if (comma) {
+    if (pos > 0) {
       if (c != COMMA) {
         yyextra._message = "expecting comma";
         return scope.Escape<v8::Value>(v8::Undefined(isolate));
@@ -2121,13 +2119,11 @@ static v8::Handle<v8::Value> ParseArray (v8::Isolate* isolate,
       c = tri_v8_lex(scanner);
     }
-    else {
-      comma = true;
-    }
     v8::Handle<v8::Value> sub = ParseValue(isolate, scanner, c);
     if (sub->IsUndefined()) {
+      yyextra._message = "cannot create value";
       return scope.Escape<v8::Value>(v8::Undefined(isolate));
     }
@@ -2148,8 +2144,7 @@ static v8::Handle<v8::Value> ParseArray (v8::Isolate* isolate,
 static v8::Handle<v8::Value> ParseObject (v8::Isolate* isolate,
                                           yyscan_t scanner) {
   v8::EscapableHandleScope scope(isolate);
-  struct yyguts_t * yyg = (struct yyguts_t*) scanner;
+  struct yyguts_t* yyg = (struct yyguts_t*) scanner;
   v8::Handle<v8::Object> object = v8::Object::New(isolate);
   bool comma = false;
@@ -2211,6 +2206,7 @@ static v8::Handle<v8::Value> ParseObject (v8::Isolate* isolate,
     v8::Handle<v8::Value> sub = ParseValue(isolate, scanner, c);
     if (sub->IsUndefined()) {
+      yyextra._message = "cannot create value";
       return scope.Escape<v8::Value>(v8::Undefined(isolate));
     }
@@ -2220,7 +2216,7 @@ static v8::Handle<v8::Value> ParseObject (v8::Isolate* isolate,
   }
   yyextra._message = "expecting an object attribute name or element, got end-of-file";
-  return scope.Escape<v8::Value>(v8::Undefined(isolate));
+  return scope.Escape<v8::Value>(v8::Undefined(isolate));
 }
 ////////////////////////////////////////////////////////////////////////////////
@@ -2231,7 +2227,7 @@ static v8::Handle<v8::Value> ParseValue (v8::Isolate* isolate,
                                          yyscan_t scanner,
                                          int c) {
   v8::EscapableHandleScope scope(isolate);
-  struct yyguts_t * yyg = (struct yyguts_t*) scanner;
+  struct yyguts_t* yyg = (struct yyguts_t*) scanner;
   switch (c) {
     case END_OF_FILE: {
@@ -2253,7 +2249,6 @@ static v8::Handle<v8::Value> ParseValue (v8::Isolate* isolate,
     case NUMBER_CONSTANT: {
       char* ep;
-      double d;
       if ((size_t) yyleng >= 512) {
         yyextra._message = "number too big";
@@ -2264,14 +2259,14 @@ static v8::Handle<v8::Value> ParseValue (v8::Isolate* isolate,
       errno = 0;
       // yytext is null-terminated. can use it directly without copying it into a temporary buffer
-      d = strtod(yytext, &ep);
+      double d = strtod(yytext, &ep);
       if (d == HUGE_VAL && errno == ERANGE) {
         yyextra._message = "number too big";
         return scope.Escape<v8::Value>(v8::Undefined(isolate));
       }
-      if (d == 0 && errno == ERANGE) {
+      if (d == 0.0 && errno == ERANGE) {
        yyextra._message = "number too small";
        return scope.Escape<v8::Value>(v8::Undefined(isolate));
       }
@@ -2366,36 +2361,31 @@ v8::Handle<v8::Value> TRI_FromJsonString (v8::Isolate* isolate,
                                           char** error) {
   v8::EscapableHandleScope scope(isolate);
-  v8::Handle<v8::Value> value;
-  YY_BUFFER_STATE buf;
-  int c;
-  struct yyguts_t * yyg;
   yyscan_t scanner;
   tri_v8_lex_init(&scanner);
-  yyg = (struct yyguts_t*) scanner;
+  struct yyguts_t* yyg = (struct yyguts_t*) scanner;
-  yyextra._memoryZone = TRI_CORE_MEM_ZONE;
-  buf = tri_v8__scan_string(text,scanner);
+  yyextra._memoryZone = TRI_UNKNOWN_MEM_ZONE;
+  YY_BUFFER_STATE buf = tri_v8__scan_string(text,scanner);
-  c = tri_v8_lex(scanner);
-  value = ParseValue(isolate, scanner, c);
+  int c = tri_v8_lex(scanner);
+  v8::Handle<v8::Value> value = ParseValue(isolate, scanner, c);
   if (value->IsUndefined()) {
-    LOG_DEBUG("failed to parse json value: '%s'", yyextra._message);
+    LOG_DEBUG("failed to parse JSON value: '%s'", yyextra._message);
   }
   else {
     c = tri_v8_lex(scanner);
     if (c != END_OF_FILE) {
       value = v8::Undefined(isolate);
-      LOG_DEBUG("failed to parse json value: expecting EOF");
+      LOG_DEBUG("failed to parse JSON value: expecting EOF");
     }
   }
   if (error != nullptr) {
     if (yyextra._message != nullptr) {
-      *error = TRI_DuplicateString(yyextra._message);
+      *error = TRI_DuplicateStringZ(TRI_UNKNOWN_MEM_ZONE, yyextra._message);
     }
     else {
       *error = nullptr;
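
The refactoring above also drops the `comma` bookkeeping flag from `ParseArray`; whether a comma has to be consumed is instead derived from `pos`, which appears to count the elements already parsed, so a comma is expected exactly when at least one element precedes the current one. A hypothetical JavaScript sketch of the same counter-instead-of-flag idea for a much simpler comma-separated list parser (the tokenizer and error messages are made up for illustration):

// parses a comma-separated list of integers such as "1, 2, 3"
function parseIntList(text) {
  var tokens = text.match(/\d+|,/g) || [];   // crude tokenizer: numbers and commas
  var result = [];
  var i = 0;
  while (i < tokens.length) {
    if (result.length > 0) {                 // plays the role of the removed "comma" flag
      if (tokens[i] !== ",") {
        throw new Error("expecting comma");
      }
      ++i;                                   // consume the comma
    }
    if (i >= tokens.length || tokens[i] === ",") {
      throw new Error("cannot create value");
    }
    result.push(parseInt(tokens[i++], 10));
  }
  return result;
}

// parseIntList("1, 2, 3") -> [ 1, 2, 3 ]
// parseIntList("1 2")     -> throws "expecting comma"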

View File

@@ -175,17 +175,15 @@ static v8::Handle<v8::Value> ParseValue (v8::Isolate* isolate, yyscan_t scanner,
 // -----------------------------------------------------------------------------
 ////////////////////////////////////////////////////////////////////////////////
-/// @brief parses a list
+/// @brief parses an array
 ////////////////////////////////////////////////////////////////////////////////
 static v8::Handle<v8::Value> ParseArray (v8::Isolate* isolate,
                                          yyscan_t scanner) {
   v8::EscapableHandleScope scope(isolate);
-  struct yyguts_t * yyg = (struct yyguts_t*) scanner;
+  struct yyguts_t* yyg = (struct yyguts_t*) scanner;
   v8::Handle<v8::Array> array = v8::Array::New(isolate);
-  bool comma = false;
   uint32_t pos = 0;
   int c = yylex(scanner);
@@ -195,7 +193,7 @@ static v8::Handle<v8::Value> ParseArray (v8::Isolate* isolate,
       return scope.Escape<v8::Value>(array);
     }
-    if (comma) {
+    if (pos > 0) {
       if (c != COMMA) {
         yyextra._message = "expecting comma";
         return scope.Escape<v8::Value>(v8::Undefined(isolate));
@@ -203,13 +201,11 @@ static v8::Handle<v8::Value> ParseArray (v8::Isolate* isolate,
       c = yylex(scanner);
     }
-    else {
-      comma = true;
-    }
     v8::Handle<v8::Value> sub = ParseValue(isolate, scanner, c);
     if (sub->IsUndefined()) {
+      yyextra._message = "cannot create value";
       return scope.Escape<v8::Value>(v8::Undefined(isolate));
     }
@@ -230,8 +226,7 @@ static v8::Handle<v8::Value> ParseArray (v8::Isolate* isolate,
 static v8::Handle<v8::Value> ParseObject (v8::Isolate* isolate,
                                           yyscan_t scanner) {
   v8::EscapableHandleScope scope(isolate);
-  struct yyguts_t * yyg = (struct yyguts_t*) scanner;
+  struct yyguts_t* yyg = (struct yyguts_t*) scanner;
   v8::Handle<v8::Object> object = v8::Object::New(isolate);
   bool comma = false;
@@ -293,6 +288,7 @@ static v8::Handle<v8::Value> ParseObject (v8::Isolate* isolate,
     v8::Handle<v8::Value> sub = ParseValue(isolate, scanner, c);
     if (sub->IsUndefined()) {
+      yyextra._message = "cannot create value";
       return scope.Escape<v8::Value>(v8::Undefined(isolate));
     }
@@ -302,7 +298,7 @@ static v8::Handle<v8::Value> ParseObject (v8::Isolate* isolate,
   }
   yyextra._message = "expecting an object attribute name or element, got end-of-file";
-  return scope.Escape<v8::Value>(v8::Undefined(isolate));
+  return scope.Escape<v8::Value>(v8::Undefined(isolate));
 }
 ////////////////////////////////////////////////////////////////////////////////
@@ -313,7 +309,7 @@ static v8::Handle<v8::Value> ParseValue (v8::Isolate* isolate,
                                          yyscan_t scanner,
                                          int c) {
   v8::EscapableHandleScope scope(isolate);
-  struct yyguts_t * yyg = (struct yyguts_t*) scanner;
+  struct yyguts_t* yyg = (struct yyguts_t*) scanner;
   switch (c) {
     case END_OF_FILE: {
@@ -335,7 +331,6 @@ static v8::Handle<v8::Value> ParseValue (v8::Isolate* isolate,
     case NUMBER_CONSTANT: {
       char* ep;
-      double d;
       if ((size_t) yyleng >= 512) {
         yyextra._message = "number too big";
@@ -346,14 +341,14 @@ static v8::Handle<v8::Value> ParseValue (v8::Isolate* isolate,
       errno = 0;
       // yytext is null-terminated. can use it directly without copying it into a temporary buffer
-      d = strtod(yytext, &ep);
+      double d = strtod(yytext, &ep);
       if (d == HUGE_VAL && errno == ERANGE) {
         yyextra._message = "number too big";
         return scope.Escape<v8::Value>(v8::Undefined(isolate));
       }
-      if (d == 0 && errno == ERANGE) {
+      if (d == 0.0 && errno == ERANGE) {
        yyextra._message = "number too small";
        return scope.Escape<v8::Value>(v8::Undefined(isolate));
       }
@@ -448,36 +443,31 @@ v8::Handle<v8::Value> TRI_FromJsonString (v8::Isolate* isolate,
                                           char** error) {
   v8::EscapableHandleScope scope(isolate);
-  v8::Handle<v8::Value> value;
-  YY_BUFFER_STATE buf;
-  int c;
-  struct yyguts_t * yyg;
   yyscan_t scanner;
   yylex_init(&scanner);
-  yyg = (struct yyguts_t*) scanner;
+  struct yyguts_t* yyg = (struct yyguts_t*) scanner;
-  yyextra._memoryZone = TRI_CORE_MEM_ZONE;
-  buf = yy_scan_string(text, scanner);
+  yyextra._memoryZone = TRI_UNKNOWN_MEM_ZONE;
+  YY_BUFFER_STATE buf = yy_scan_string(text, scanner);
-  c = yylex(scanner);
-  value = ParseValue(isolate, scanner, c);
+  int c = yylex(scanner);
+  v8::Handle<v8::Value> value = ParseValue(isolate, scanner, c);
   if (value->IsUndefined()) {
-    LOG_DEBUG("failed to parse json value: '%s'", yyextra._message);
+    LOG_DEBUG("failed to parse JSON value: '%s'", yyextra._message);
   }
   else {
     c = yylex(scanner);
     if (c != END_OF_FILE) {
       value = v8::Undefined(isolate);
-      LOG_DEBUG("failed to parse json value: expecting EOF");
+      LOG_DEBUG("failed to parse JSON value: expecting EOF");
     }
   }
   if (error != nullptr) {
     if (yyextra._message != nullptr) {
-      *error = TRI_DuplicateString(yyextra._message);
+      *error = TRI_DuplicateStringZ(TRI_UNKNOWN_MEM_ZONE, yyextra._message);
     }
     else {
       *error = nullptr;

View File

@@ -284,7 +284,7 @@ static void JS_ProcessJsonFile (const v8::FunctionCallbackInfo<v8::Value>& args)
   if (object->IsUndefined()) {
     if (error != nullptr) {
       string msg = error;
-      TRI_FreeString(TRI_CORE_MEM_ZONE, error);
+      TRI_FreeString(TRI_UNKNOWN_MEM_ZONE, error);
       TRI_V8_THROW_SYNTAX_ERROR(msg.c_str());
     }
     else {