mirror of https://gitee.com/bigwinds/arangodb

Merge branch 'devel' of github.com:triAGENS/ArangoDB into devel

commit b076e92559

CHANGELOG

@ -1,6 +1,12 @@
v2.4.0 (XXXX-XX-XX)
-------------------

* Upgraded V8 version from 3.16.14 to 3.29.59

* Added Foxx generator for building Hypermedia APIs

  A more detailed description is [here](https://www.arangodb.com/2014/12/08/building-hypermedia-apis-foxxgenerator)

* New `Applications` tab in web interface:

  The `applications` tab got a complete redesign.
@ -17,10 +23,11 @@ v2.4.0 (XXXX-XX-XX)

  To install a new application, a new dialogue is now available.
  It provides the features already available in the console application `foxx-manager` plus some more:
  * install an application from Github
  * install an application from a zip file
    But now also allows to install Applications directly from ArangoDBs App store.
  * install an application from a zip file
  * install an application from ArangoDB's application store
  * create a new application from scratch: this feature uses a generator to
    create a Foxx application with pre-defined CRUD methods for a given list
    of collections. The generated foxx app can either be downloaded as a zip file or
    of collections. The generated Foxx app can either be downloaded as a zip file or
    be installed on the server. Starting with a new Foxx app has never been easier.
* fixed issue #1102: Aardvark: Layout bug in documents overview

@ -32,9 +39,10 @@ v2.4.0 (XXXX-XX-XX)

* fixed issue #1161: Aardvark: Click on Import JSON imports previously uploaded file

* removed enable-all-in-one-v8, enable-all-in-one-icu and enable-all-in-one-libev
* removed configure options `--enable-all-in-one-v8`, `--enable-all-in-one-icu`,
  and `--enable-all-in-one-libev`.

* global internal rename to fix naming incompatibility with JSON:
* global internal rename to fix naming incompatibilities with JSON:

  Internal functions with names containing `array` have been renamed to `object`,
  internal functions with names containing `list` have been renamed to `array`.

@ -47,11 +55,41 @@ v2.4.0 (XXXX-XX-XX)
* `IS_LIST` now is an alias of the new `IS_ARRAY`
* `IS_DOCUMENT` now is an alias of the new `IS_OBJECT`

  The change also renamed the option `mergeArrays` to `mergeObjects` for AQL
  data-modification query options and the HTTP document modification API

* AQL: added optimizer rule "remove-filter-covered-by-index"

  This rule removes FilterNodes and CalculationNodes from an execution plan if the
  filter is already covered by a previous IndexRangeNode. Removing the CalculationNode
  and the FilterNode will speed up query execution because the query requires less
  computation.

* AQL: range optimizations for IN and OR

* fixed missing makeDirectory when fetching a Foxx application from a zip file
  This change enables usage of indexes for several additional cases. Filters containing
  the `IN` operator can now make use of indexes, and multiple OR- or AND-combined filter
  conditions can now also use indexes if the filters are accessing the same indexed
  attribute.
* allow passing subqueries as AQL function parameters without using
  Here are a few examples of queries that can now use indexes but couldn't before:

    FOR doc IN collection
      FILTER doc.indexedAttribute == 1 || doc.indexedAttribute > 99
      RETURN doc

    FOR doc IN collection
      FILTER doc.indexedAttribute IN [ 3, 42 ] || doc.indexedAttribute > 99
      RETURN doc

    FOR doc IN collection
      FILTER (doc.indexedAttribute > 2 && doc.indexedAttribute < 10) ||
             (doc.indexedAttribute > 23 && doc.indexedAttribute < 42)
      RETURN doc

* fixed issue #500: AQL parentheses issue

  This change allows passing subqueries as AQL function parameters without using
  duplicate brackets (e.g. `FUNC(query)` instead of `FUNC((query))`)

* added optional `COUNT` clause to AQL `COLLECT`

@ -62,8 +100,24 @@ v2.4.0 (XXXX-XX-XX)
    COLLECT age = doc.age WITH COUNT INTO length
    RETURN { age: age, count: length }

  A count-only query is also possible:

    FOR doc IN collection
      COLLECT WITH COUNT INTO length
      RETURN length

* fixed missing makeDirectory when fetching a Foxx application from a zip file

* fixed issue #1134: Change the default endpoint to localhost

  This change will modify the IP address ArangoDB listens on to 127.0.0.1 by default.
  This will make new ArangoDB installations inaccessible from clients other than
  localhost unless changed. This is a security feature.

  To make ArangoDB accessible from any client, change the server's configuration
  (`--server.endpoint`) to either `tcp://0.0.0.0:8529` or the server's publicly
  visible IP address.

* deprecated `Repository#modelPrototype`. Use `Repository#model` instead.

* IMPORTANT CHANGE: by default, system collections are included in replication and all

@ -93,11 +147,17 @@ v2.4.0 (XXXX-XX-XX)
  The HTTP API methods for fetching the replication inventory and for dumping collections
  also support the `includeSystem` control flag via a URL parameter.

* renamed option `mergeArrays` to `mergeObjects` for AQL data-modification query options
  and HTTP document modification API
* removed DEPRECATED replication methods:
  * `replication.logger.start()`
  * `replication.logger.stop()`
  * `replication.logger.properties()`
  * HTTP PUT `/_api/replication/logger-start`
  * HTTP PUT `/_api/replication/logger-stop`
  * HTTP GET `/_api/replication/logger-config`
  * HTTP PUT `/_api/replication/logger-config`


v2.3.3 (XXXX-XX-XX)
v2.3.3 (2014-12-17)
-------------------

* fixed error handling in instantiation of distributed AQL queries, this
@ -383,12 +383,23 @@ The following optimizer rules may appear in the `rules` attribute of a plan:

The following optimizer rules may appear in the `rules` attribute of cluster plans:

* `scatter-in-cluster`: to be documented soon
* `distribute-in-cluster`: to be documented soon
* `distribute-filtercalc-to-cluster`: to be documented soon
* `distribute-sort-to-cluster`: to be documented soon
* `remove-unnecessary-remote-scatter`: to be documented soon
* `undistribute-remove-after-enum-coll`: to be documented soon
* `distribute-in-cluster`: will appear when query parts get distributed in a cluster.
  This is not an optimization rule, and it cannot be turned off.
* `scatter-in-cluster`: will appear when scatter, gather, and remote nodes are inserted
  into a distributed query. This is not an optimization rule, and it cannot be turned off.
* `distribute-filtercalc-to-cluster`: will appear when filters are moved up in a
  distributed execution plan. Filters are moved as far up in the plan as possible to
  make result sets as small as possible as early as possible.
* `distribute-sort-to-cluster`: will appear if sorts are moved up in a distributed query.
  Sorts are moved as far up in the plan as possible to make result sets as small as possible
  as early as possible.
* `remove-unnecessary-remote-scatter`: will appear if a RemoteNode is followed by a
  ScatterNode, and the ScatterNode is only followed by calculations or the SingletonNode.
  In this case, there is no need to distribute the calculation, and it will be handled
  centrally.
* `undistribute-remove-after-enum-coll`: will appear if a RemoveNode can be pushed into
  the same query part that enumerates over the documents of a collection. This saves
  inter-cluster roundtrips between the EnumerateCollectionNode and the RemoveNode.

Note that some rules may appear multiple times in the list, with number suffixes.
This is due to the same rule being applied multiple times, at different positions
@ -80,7 +80,7 @@ Switch into the ArangoDB directory

In order to configure the build environment execute

    ./configure --enable-all-in-one-v8 --enable-all-in-one-libev --enable-all-in-one-icu
    ./configure

to set up the makefiles. This will check the various system characteristics and
installed libraries.

@ -103,16 +103,16 @@ Create an empty directory

Check the binary by starting it using the command line.

    unix> ./bin/arangod -c etc/relative/arangod.conf --server.endpoint tcp://127.0.0.1:12345 --server.disable-authentication true /tmp/database-dir
    unix> ./bin/arangod -c etc/relative/arangod.conf --server.endpoint tcp://127.0.0.1:8529 /tmp/database-dir

This will start up ArangoDB and listen for HTTP requests on port 12345 bound
This will start up ArangoDB and listen for HTTP requests on port 8529 bound
to IP address 127.0.0.1. You should see startup messages similar to the following:

```
2013-10-14T12:47:29Z [29266] INFO ArangoDB xxx ... </br>
2013-10-14T12:47:29Z [29266] INFO using endpoint 'tcp://127.0.0.1:12345' for non-encrypted requests </br>
2013-10-14T12:47:30Z [29266] INFO Authentication is turned off </br>
2013-10-14T12:47:30Z [29266] INFO ArangoDB (version xxx) is ready for business. Have fun! </br>
2013-10-14T12:47:29Z [29266] INFO ArangoDB xxx ...
2013-10-14T12:47:29Z [29266] INFO using endpoint 'tcp://127.0.0.1:8529' for non-encrypted requests
2013-10-14T12:47:30Z [29266] INFO Authentication is turned off
2013-10-14T12:47:30Z [29266] INFO ArangoDB (version xxx) is ready for business. Have fun!
```

If it fails with a message about the database directory, please make sure the
@ -120,7 +120,7 @@ database directory you specified exists and can be written into.

Use your favorite browser to access the URL

    http://127.0.0.1:12345/_api/version
    http://127.0.0.1:8529/_api/version

This should produce a JSON object like

@ -155,7 +155,7 @@ The ArangoShell will be installed in

When upgrading from a previous version of ArangoDB, please make sure you inspect ArangoDB's
log file after an upgrade. It may also be necessary to start ArangoDB with the *--upgrade*
parameter once to perform required upgrade or initialisation tasks.
parameter once to perform required upgrade or initialization tasks.

!SECTION Devel Version
@ -185,13 +185,6 @@ newer versions of the programs and/or libraries.

When compiling with special configure options, you may need the following extra libraries:

* libev in version 3 or 4 (only when using configure option `--disable-all-in-one-libev`,
  available from http://software.schmorp.de/pkg/libev.html)
* Google's V8 engine, version 3.16.14 (only when using configure option
  `--disable-all-in-one-v8`, available from http://code.google.com/p/v8)
* SCons for compiling V8 (only when using configure option
  `--disable-all-in-one-v8`, see http://www.scons.org)
* the ICU library (only when not using configure option `--enable-all-in-one-icu`)
* the Boost test framework library (only when using configure option `--enable-maintainer-mode`)
@ -218,7 +211,7 @@ This will call aclocal, autoheader, automake, and autoconf in the correct order.

In order to configure the build environment please execute

    unix> ./configure --enable-all-in-one-v8 --enable-all-in-one-libev --enable-all-in-one-icu
    unix> ./configure

to set up the makefiles. This will check for the various system characteristics
and installed libraries.
@ -251,38 +244,6 @@ When used, you can start ArangoDB using this command:

ArangoDB will then automatically use the configuration from file *etc/relative/arangod.conf*.

`--enable-all-in-one-libev`

This tells the build system to use the bundled version
of libev instead of using the system version.

`--disable-all-in-one-libev`

This tells the build system to use the installed
system version of libev instead of compiling the supplied version from the
3rdParty directory in the make run.

`--enable-all-in-one-v8`

This tells the build system to use the bundled version of
V8 instead of using the system version.

`--disable-all-in-one-v8`

This tells the build system to use the installed system
version of V8 instead of compiling the supplied version from the 3rdParty
directory in the make run.

`--enable-all-in-one-icu`

This tells the build system to use the bundled version of
ICU instead of using the system version.

`--disable-all-in-one-icu`

This tells the build system to use the bundled version of
Boost header files. This is the default and recommended.

`--enable-all-in-one-etcd`

This tells the build system to use the bundled version
@ -294,6 +255,11 @@ This tells the build system to use Go binaries located in the
3rdParty directory. Note that ArangoDB does not ship with Go binaries, and that
the Go binaries must be copied into this directory manually.

`--enable-mruby`

This will also build `arangoirb`, an interactive MRuby REPL shell. This is an
experimental feature.

`--enable-maintainer-mode`

This tells the build system to use BISON and FLEX to
@ -338,10 +304,7 @@ version of go into the ArangoDB source directory and build it:

    ./all.bash

    # now that go is installed, run your configure with --enable-internal-go
    ./configure \
      --enable-all-in-one-v8 \
      --enable-all-in-one-libev \
      --enable-internal-go
    ./configure --enable-internal-go


!SUBSECTION Re-building ArangoDB after an update

@ -367,4 +330,3 @@ If you forgot your previous configure options, you can look them up with

    head config.log
@ -0,0 +1,197 @@

!CHAPTER Features and Improvements

The following list shows in detail which features have been added or improved in
ArangoDB 2.4. ArangoDB 2.4 also contains several bugfixes that are not listed
here. For a list of bugfixes, please consult the [CHANGELOG](https://github.com/triAGENS/ArangoDB/blob/devel/CHANGELOG).


!SECTION V8 version upgrade

The built-in version of V8 has been upgraded from 3.16.14 to 3.29.59.


!SECTION FoxxGenerator

ArangoDB 2.4 is shipped with FoxxGenerator, a framework for building
standardized Hypermedia APIs easily. The generated APIs can be consumed with
client tools that understand [Siren](https://github.com/kevinswiber/siren).

The FoxxGenerator can create APIs based on a semantic description of entities
and transitions. A blog series on the use cases and how to use the Foxx generator
is here:

* [part 1](https://www.arangodb.com/2014/11/26/building-hypermedia-api-json)
* [part 2](https://www.arangodb.com/2014/12/02/building-hypermedia-apis-design)
* [part 3](https://www.arangodb.com/2014/12/08/building-hypermedia-apis-foxxgenerator)

A cookbook recipe for getting started with FoxxGenerator is [here](https://docs.arangodb.com/cookbook/FoxxGeneratorFirstSteps.html).
!SECTION AQL improvements

!SUBSECTION Optimizer improvements

The AQL optimizer has been enhanced to make use of indexes in queries in several
additional cases. Filters containing the `IN` operator can now make use of
indexes, and multiple OR- or AND-combined filter conditions can now also use
indexes if the filter conditions refer to the same indexed attribute.

Here are a few examples of queries that can now use indexes but couldn't before:

    FOR doc IN collection
      FILTER doc.indexedAttribute == 1 || doc.indexedAttribute > 99
      RETURN doc

    FOR doc IN collection
      FILTER doc.indexedAttribute IN [ 3, 42 ] || doc.indexedAttribute > 99
      RETURN doc

    FOR doc IN collection
      FILTER (doc.indexedAttribute > 2 && doc.indexedAttribute < 10) ||
             (doc.indexedAttribute > 23 && doc.indexedAttribute < 42)
      RETURN doc
Additionally, the optimizer rule `remove-filter-covered-by-index` has been
added. This rule removes FilterNodes and CalculationNodes from an execution
plan if the filter condition is already covered by a previous IndexRangeNode.
Removing the filter's CalculationNode and the FilterNode itself will speed
up query execution because the query requires less computation.


!SUBSECTION Language improvements

!SUBSUBSECTION `COUNT` clause
An optional `COUNT` clause was added to the `COLLECT` statement. The `COUNT`
clause allows for more efficient counting of values.

In previous versions of ArangoDB one had to write the following to count
documents:

    RETURN LENGTH (
      FOR doc IN collection
        FILTER ...some condition...
        RETURN doc
    )

With the `COUNT` clause, the query can be modified to

    FOR doc IN collection
      FILTER ...some condition...
      COLLECT WITH COUNT INTO length
      RETURN length

The latter query will be much more efficient because it will not produce any
intermediate results which need to be shipped from a subquery into the `LENGTH`
function.

The `COUNT` clause can also be used to count the number of items in each group:

    FOR doc IN collection
      FILTER ...some condition...
      COLLECT group = doc.group WITH COUNT INTO length
      RETURN { group: group, length: length }
!SUBSUBSECTION `COLLECT` modifications

In ArangoDB 2.4, `COLLECT` operations can be made more efficient if only a
small fragment of the group values is needed later. For these cases, `COLLECT`
provides an optional conversion expression for the `INTO` clause. This
expression controls the value that is inserted into the array of group values.
It can be used for projections.

The following query only copies the `dateRegistered` attribute of each document
into the groups, potentially saving a lot of memory and computation time
compared to copying `doc` completely:

    FOR doc IN collection
      FILTER ...some condition...
      COLLECT group = doc.group INTO dates = doc.dateRegistered
      RETURN { group: group, maxDate: MAX(dates) }
!SUBSUBSECTION Subquery syntax

In previous versions of ArangoDB, subqueries required extra parentheses
around them, and this caused confusion when subqueries were used as function
parameters. For example, the following query did not work:

    LET values = LENGTH(
      FOR doc IN collection RETURN doc
    )

but had to be written as follows:

    LET values = LENGTH((
      FOR doc IN collection RETURN doc
    ))

This was unintuitive and is fixed in version 2.4 so that both variants of
the query are accepted and produce the same result.
!SUBSECTION Web interface

The `Applications` tab for Foxx applications in the web interface has received
a complete redesign.

It will now only show applications that are currently running in ArangoDB.
For a selected application, a new detailed view has been created. This view
provides a better overview of the app, e.g.:

* author
* license
* version
* contributors
* download links
* API documentation

Installing a new Foxx application on the server is made easy using the new
`Add application` button. The `Add application` dialogue provides all the
features already available in the `foxx-manager` console application plus some more:

* install a Foxx application from Github
* install a Foxx application from a zip file
* install a Foxx application from ArangoDB's application store
* create a new Foxx application from scratch: this feature uses a generator to
  create a Foxx application with pre-defined CRUD methods for a given list of collections.
  The generated Foxx app can either be downloaded as a zip file or
  be installed on the server. Starting with a new Foxx app has never been easier.
!SECTION Miscellaneous improvements

!SUBSECTION Default endpoint is 127.0.0.1

The default endpoint for the ArangoDB server has been changed from `0.0.0.0` to
`127.0.0.1`. This will make new ArangoDB installations inaccessible from clients other
than localhost unless the configuration is changed. This is a security precaution
that has been frequently requested as a feature.
!SUBSECTION System collections in replication

By default, system collections are now included in replication and all replication API
return values. This will lead to user accounts and credentials data being replicated
from master to slave servers. This may overwrite slave-specific database users.

If this is undesired, the `_users` collection can be excluded from replication
easily by setting the `includeSystem` attribute to `false` in the following commands:

* replication.sync({ includeSystem: false });
* replication.applier.properties({ includeSystem: false });

This will exclude all system collections (including `_aqlfunctions`, `_graphs` etc.)
from the initial synchronisation and the continuous replication.

If this is also undesired, it is possible to specify a list of collections to
exclude from the initial synchronisation and the continuous replication using the
`restrictCollections` attribute, e.g.:

    replication.applier.properties({
      includeSystem: true,
      restrictType: "exclude",
      restrictCollections: [ "_users", "_graphs", "foo" ]
    });
@ -12,7 +12,8 @@

* [Upgrading in general](Installing/Upgrading.md)
* [Cluster setup](Installing/Cluster.md)
<!-- 2 -->
* [Whats New](NewFeatures/NewFeatures23.md)
* [Whats New](NewFeatures/NewFeatures24.md)
* [Whats New in 2.3](NewFeatures/NewFeatures23.md)
* [Whats New in 2.2](NewFeatures/NewFeatures22.md)
* [Whats New in 2.1](NewFeatures/NewFeatures21.md)
* [First Steps](FirstSteps/README.md)
@ -224,7 +224,6 @@ describe ArangoDB do

      doc.parsed_response['code'].should eq(200)
      doc.parsed_response['id'].should eq(@cid)
      doc.parsed_response['name'].should eq(@cn)
      p doc.parsed_response['status']
      [2, 4].include?(doc.parsed_response['status'].to_i).should eq(true)
    end
@ -37,7 +37,6 @@ describe ArangoDB do

    end

    after do
      ArangoDB.put(api + "/logger-stop", :body => "")
    end

################################################################################
@ -66,96 +65,6 @@ describe ArangoDB do

        server.should have_key('version')
      end

################################################################################
## start
################################################################################

      it "starting the logger" do
        cmd = api + "/logger-start"
        doc = ArangoDB.log_put("#{prefix}-logger-start", cmd, :body => "")

        doc.code.should eq(200)
        doc.parsed_response['running'].should eq(true)

        # restart
        cmd = api + "/logger-start"
        doc = ArangoDB.log_put("#{prefix}-logger-start", cmd, :body => "")

        doc.code.should eq(200)
        doc.parsed_response['running'].should eq(true)

        # fetch state
        cmd = api + "/logger-state"
        doc = ArangoDB.log_get("#{prefix}-logger-start", cmd, :body => "")

        doc.code.should eq(200)
        all = doc.parsed_response
        all.should have_key('state')
        all.should have_key('server')
        all.should have_key('clients')

        state = all['state']
        state['running'].should eq(true)
        state['lastLogTick'].should match(/^\d+$/)
        state['time'].should match(/^\d+-\d+-\d+T\d+:\d+:\d+Z$/)

        server = all['server']
        server['serverId'].should match(/^\d+$/)
        server.should have_key('version')
      end

################################################################################
## start / stop
################################################################################

      it "starting and stopping the logger" do
        cmd = api + "/logger-start"
        doc = ArangoDB.log_put("#{prefix}-logger-startstop", cmd, :body => "")

        doc.code.should eq(200)
        doc.parsed_response['running'].should eq(true)

        # stop
        cmd = api + "/logger-stop"
        doc = ArangoDB.log_put("#{prefix}-logger-startstop", cmd, :body => "")

        doc.code.should eq(200)
        doc.parsed_response['running'].should eq(true)

        # fetch state
        cmd = api + "/logger-state"
        doc = ArangoDB.log_get("#{prefix}-logger-startstop", cmd, :body => "")

        doc.code.should eq(200)
        all = doc.parsed_response
        all.should have_key('state')
        all.should have_key('server')
        all.should have_key('clients')

        state = all['state']
        state['running'].should eq(true)
        state['lastLogTick'].should match(/^\d+$/)
        state['time'].should match(/^\d+-\d+-\d+T\d+:\d+:\d+Z$/)

        server = all['server']
        server['serverId'].should match(/^\d+$/)
        server.should have_key('version')

        # stop again
        cmd = api + "/logger-stop"
        doc = ArangoDB.log_put("#{prefix}-logger-startstop", cmd, :body => "")

        doc.code.should eq(200)
        doc.parsed_response['running'].should eq(true)

        # start after stop
        cmd = api + "/logger-start"
        doc = ArangoDB.log_put("#{prefix}-logger-startstop", cmd, :body => "")

        doc.code.should eq(200)
        doc.parsed_response['running'].should eq(true)
      end

################################################################################
## follow
################################################################################
@ -0,0 +1,130 @@

////////////////////////////////////////////////////////////////////////////////
/// @brief Aql, collection scanners
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
///     http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014, ArangoDB GmbH, Cologne, Germany
/// @author Copyright 2012-2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////

#include "CollectionScanner.h"

using namespace triagens::aql;

// -----------------------------------------------------------------------------
// --SECTION--                                          struct CollectionScanner
// -----------------------------------------------------------------------------

// -----------------------------------------------------------------------------
// --SECTION--                                        constructors / destructors
// -----------------------------------------------------------------------------

CollectionScanner::CollectionScanner (triagens::arango::AqlTransaction* trx,
                                      TRI_transaction_collection_t* trxCollection)
  : trx(trx),
    trxCollection(trxCollection),
    totalCount(0),
    position(0) {
}
CollectionScanner::~CollectionScanner () {
}

// -----------------------------------------------------------------------------
// --SECTION--                                    struct RandomCollectionScanner
// -----------------------------------------------------------------------------

// -----------------------------------------------------------------------------
// --SECTION--                                        constructors / destructors
// -----------------------------------------------------------------------------

RandomCollectionScanner::RandomCollectionScanner (triagens::arango::AqlTransaction* trx,
                                                  TRI_transaction_collection_t* trxCollection)
  : CollectionScanner(trx, trxCollection),
    initialPosition(0),
    step(0) {
}

int RandomCollectionScanner::scan (std::vector<TRI_doc_mptr_copy_t>& docs,
                                   size_t batchSize) {
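  // initialPosition and step are persistent members: the random walk resumes
  // where the previous batch stopped, and wrapping around to the starting
  // slot signals that every slot has been visited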
  return trx->readRandom(trxCollection,
                         docs,
                         initialPosition,
                         position,
                         static_cast<TRI_voc_size_t>(batchSize),
                         &step,
                         &totalCount);
}

// -----------------------------------------------------------------------------
// --SECTION--                                                  public functions
// -----------------------------------------------------------------------------

void RandomCollectionScanner::reset () {
  initialPosition = 0;
  position = 0;
  step = 0;
}

// -----------------------------------------------------------------------------
// --SECTION--                                    struct LinearCollectionScanner
// -----------------------------------------------------------------------------

// -----------------------------------------------------------------------------
// --SECTION--                                        constructors / destructors
// -----------------------------------------------------------------------------

LinearCollectionScanner::LinearCollectionScanner (triagens::arango::AqlTransaction* trx,
                                                  TRI_transaction_collection_t* trxCollection)
  : CollectionScanner(trx, trxCollection) {
}

int LinearCollectionScanner::scan (std::vector<TRI_doc_mptr_copy_t>& docs,
                                   size_t batchSize) {
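  // position is advanced inside readIncremental, so consecutive calls return
  // consecutive slices of the collection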
  return trx->readIncremental(trxCollection,
                              docs,
                              position,
                              static_cast<TRI_voc_size_t>(batchSize),
                              0,
                              TRI_QRY_NO_LIMIT,
                              &totalCount);
}

// -----------------------------------------------------------------------------
// --SECTION--                                                  public functions
// -----------------------------------------------------------------------------

void LinearCollectionScanner::reset () {
  position = 0;
}

// -----------------------------------------------------------------------------
// --SECTION--                                                       END-OF-FILE
// -----------------------------------------------------------------------------

// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}"
// End:
@ -0,0 +1,124 @@

////////////////////////////////////////////////////////////////////////////////
/// @brief Aql, collection scanners
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
///     http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014, ArangoDB GmbH, Cologne, Germany
/// @author Copyright 2012-2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////

#ifndef ARANGODB_AQL_COLLECTION_SCANNER_H
#define ARANGODB_AQL_COLLECTION_SCANNER_H 1

#include "Basics/Common.h"
#include "Utils/AqlTransaction.h"
#include "VocBase/document-collection.h"
#include "VocBase/transaction.h"
#include "VocBase/vocbase.h"

namespace triagens {
  namespace aql {

// -----------------------------------------------------------------------------
// --SECTION--                                          struct CollectionScanner
// -----------------------------------------------------------------------------

    struct CollectionScanner {
// -----------------------------------------------------------------------------
// --SECTION--                                        constructors / destructors
// -----------------------------------------------------------------------------

      CollectionScanner (triagens::arango::AqlTransaction*,
                         TRI_transaction_collection_t*);

      virtual ~CollectionScanner ();

// -----------------------------------------------------------------------------
// --SECTION--                                                  public functions
// -----------------------------------------------------------------------------

      virtual int scan (std::vector<TRI_doc_mptr_copy_t>&, size_t) = 0;

      virtual void reset () = 0;

      triagens::arango::AqlTransaction* trx;
      TRI_transaction_collection_t* trxCollection;
      uint32_t totalCount;
      TRI_voc_size_t position;
    };

// -----------------------------------------------------------------------------
// --SECTION--                                    struct RandomCollectionScanner
// -----------------------------------------------------------------------------

    struct RandomCollectionScanner : public CollectionScanner {

// -----------------------------------------------------------------------------
// --SECTION--                                        constructors / destructors
// -----------------------------------------------------------------------------

      RandomCollectionScanner (triagens::arango::AqlTransaction*,
                               TRI_transaction_collection_t*);

      int scan (std::vector<TRI_doc_mptr_copy_t>&,
                size_t);

      void reset ();

      uint32_t initialPosition;
      uint32_t step;
    };

// -----------------------------------------------------------------------------
// --SECTION--                                    struct LinearCollectionScanner
// -----------------------------------------------------------------------------

    struct LinearCollectionScanner : public CollectionScanner {

// -----------------------------------------------------------------------------
// --SECTION--                                        constructors / destructors
// -----------------------------------------------------------------------------

      LinearCollectionScanner (triagens::arango::AqlTransaction*,
                               TRI_transaction_collection_t*);

      int scan (std::vector<TRI_doc_mptr_copy_t>&,
                size_t);

      void reset ();
    };

  }
}

#endif

// -----------------------------------------------------------------------------
// --SECTION--                                                       END-OF-FILE
// -----------------------------------------------------------------------------

// Local Variables:
// mode: outline-minor
// outline-regexp: "/// @brief\\|/// {@inheritDoc}\\|/// @page\\|// --SECTION--\\|/// @\\}"
// End:
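The scanner interface above is deliberately small: `scan()` appends up to one batch of documents and `reset()` rewinds the cursor. A minimal standalone sketch of the consumption pattern, using hypothetical simplified stand-ins for the ArangoDB types (the real caller is the EnumerateCollectionBlock change that follows):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

struct Doc { int id; };  // stand-in for TRI_doc_mptr_copy_t

struct Scanner {         // stand-in for CollectionScanner
  virtual ~Scanner () {}
  virtual int scan (std::vector<Doc>& docs, size_t batchSize) = 0;
  virtual void reset () = 0;
};

struct LinearScanner : Scanner {
  explicit LinearScanner (size_t n) : n(n), pos(0) {}

  // appends up to batchSize documents, resuming where the last call stopped
  int scan (std::vector<Doc>& docs, size_t batchSize) {
    while (pos < n && docs.size() < batchSize) {
      docs.push_back(Doc{ static_cast<int>(pos++) });
    }
    return 0;  // TRI_ERROR_NO_ERROR in the real code
  }

  void reset () { pos = 0; }

  size_t n;
  size_t pos;
};

int main () {
  LinearScanner scanner(10);
  std::vector<Doc> batch;

  // drive the scanner in fixed-size batches until it yields nothing --
  // the same loop shape EnumerateCollectionBlock::moreDocuments() uses
  do {
    batch.clear();
    scanner.scan(batch, 4);
    std::cout << "read batch of " << batch.size() << " documents\n";
  }
  while (! batch.empty());

  return 0;
}
```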
@ -26,6 +26,7 @@

////////////////////////////////////////////////////////////////////////////////

#include "Aql/ExecutionBlock.h"
#include "Aql/CollectionScanner.h"
#include "Aql/ExecutionEngine.h"
#include "Basics/ScopeGuard.h"
#include "Basics/StringUtils.h"

@ -55,6 +56,7 @@ using StringBuffer = triagens::basics::StringBuffer;

#define LEAVE_BLOCK
#endif

// -----------------------------------------------------------------------------
// --SECTION--                                            struct AggregatorGroup
// -----------------------------------------------------------------------------
@ -665,16 +667,28 @@ EnumerateCollectionBlock::EnumerateCollectionBlock (ExecutionEngine* engine,
                                                    EnumerateCollectionNode const* ep)
  : ExecutionBlock(engine, ep),
    _collection(ep->_collection),
    _totalCount(0),
    _scanner(nullptr),
    _posInDocuments(0),
    _atBeginning(false) {
    _atBeginning(false),
    _random(ep->_random) {

  if (_random) {
    // random scan
    _scanner = new RandomCollectionScanner(_trx, _trx->trxCollection(_collection->cid()));
  }
  else {
    // default: linear scan
    _scanner = new LinearCollectionScanner(_trx, _trx->trxCollection(_collection->cid()));
  }
}

EnumerateCollectionBlock::~EnumerateCollectionBlock () {
  if (_scanner != nullptr) {
    delete _scanner;
  }
}

bool EnumerateCollectionBlock::moreDocuments (size_t hint) {

  if (hint < DefaultBatchSize) {
    hint = DefaultBatchSize;
  }

@ -683,27 +697,21 @@ bool EnumerateCollectionBlock::moreDocuments (size_t hint) {

  newDocs.reserve(hint);

  int res = _trx->readIncremental(_trx->trxCollection(_collection->cid()),
                                  newDocs,
                                  _internalSkip,
                                  static_cast<TRI_voc_size_t>(hint),
                                  0,
                                  TRI_QRY_NO_LIMIT,
                                  &_totalCount);
  int res = _scanner->scan(newDocs, hint);
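  // the scanner chosen in the constructor encapsulates the access strategy
  // (linear vs. random), so this block no longer maintains _internalSkip itself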

  if (res != TRI_ERROR_NO_ERROR) {
    THROW_ARANGO_EXCEPTION(res);
  }

  _engine->_stats.scannedFull += static_cast<int64_t>(newDocs.size());

  if (newDocs.empty()) {
    return false;
  }

  _engine->_stats.scannedFull += static_cast<int64_t>(newDocs.size());

  _documents.swap(newDocs);

  _atBeginning = _internalSkip == 0;
  _atBeginning = false;
  _posInDocuments = 0;

  return true;

@ -757,7 +765,6 @@ AqlItemBlock* EnumerateCollectionBlock::getSome (size_t, // atLeast,

  // Get more documents from collection if _documents is empty:
  if (_posInDocuments >= _documents.size()) {
    if (! moreDocuments(atMost)) {
      TRI_ASSERT(_totalCount == 0);
      _done = true;
      return nullptr;
    }

@ -838,7 +845,6 @@ size_t EnumerateCollectionBlock::skipSome (size_t atLeast, size_t atMost) {

  // Get more documents from collection if _documents is empty:
  if (_posInDocuments >= _documents.size()) {
    if (! moreDocuments(atMost)) {
      TRI_ASSERT(_totalCount == 0);
      _done = true;
      return skipped;
    }
@ -28,11 +28,12 @@

#ifndef ARANGODB_AQL_EXECUTION_BLOCK_H
#define ARANGODB_AQL_EXECUTION_BLOCK_H 1

#include <Basics/JsonHelper.h>
#include <ShapedJson/shaped-json.h>
#include "Basics/JsonHelper.h"
#include "ShapedJson/shaped-json.h"

#include "Aql/AqlItemBlock.h"
#include "Aql/Collection.h"
#include "Aql/CollectionScanner.h"
#include "Aql/ExecutionNode.h"
#include "Aql/Range.h"
#include "Aql/WalkerWorker.h"

@ -45,6 +46,8 @@

namespace triagens {
  namespace aql {

    struct CollectionScanner;

    class ExecutionEngine;

// -----------------------------------------------------------------------------
@ -441,8 +444,8 @@ namespace triagens {

////////////////////////////////////////////////////////////////////////////////

        void initializeDocuments () {
          _internalSkip = 0;
          if (!_atBeginning) {
          _scanner->reset();
          if (! _atBeginning) {
            _documents.clear();
          }
          _posInDocuments = 0;

@ -493,16 +496,10 @@ namespace triagens {

        Collection* _collection;

////////////////////////////////////////////////////////////////////////////////
/// @brief total number of documents in the collection
/// @brief collection scanner
////////////////////////////////////////////////////////////////////////////////

        uint32_t _totalCount;

////////////////////////////////////////////////////////////////////////////////
/// @brief internal skip value
////////////////////////////////////////////////////////////////////////////////

        TRI_voc_size_t _internalSkip;
        CollectionScanner* _scanner;

////////////////////////////////////////////////////////////////////////////////
/// @brief document buffer

@ -520,7 +517,13 @@ namespace triagens {

/// @brief current position in _documents
////////////////////////////////////////////////////////////////////////////////

        bool _atBeginning;
        bool _atBeginning; // TODO: check if we can remove this

////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not we're doing random iteration
////////////////////////////////////////////////////////////////////////////////

        bool const _random;
      };

// -----------------------------------------------------------------------------
@ -1136,6 +1139,7 @@ namespace triagens {

////////////////////////////////////////////////////////////////////////////////

        class OurLessThan {

          public:
            OurLessThan (triagens::arango::AqlTransaction* trx,
                         std::deque<AqlItemBlock*>& buffer,
@ -1035,7 +1035,8 @@ EnumerateCollectionNode::EnumerateCollectionNode (ExecutionPlan* plan,
  : ExecutionNode(plan, base),
    _vocbase(plan->getAst()->query()->vocbase()),
    _collection(plan->getAst()->query()->collections()->get(JsonHelper::checkAndGetStringValue(base.json(), "collection"))),
    _outVariable(varFromJson(plan->getAst(), base, "outVariable")) {
    _outVariable(varFromJson(plan->getAst(), base, "outVariable")),
    _random(JsonHelper::checkAndGetBooleanValue(base.json(), "random")) {
}

////////////////////////////////////////////////////////////////////////////////

@ -1054,7 +1055,8 @@ void EnumerateCollectionNode::toJsonHelper (triagens::basics::Json& nodes,

  // Now put info about vocbase and cid in there
  json("database", triagens::basics::Json(_vocbase->_name))
      ("collection", triagens::basics::Json(_collection->getName()))
      ("outVariable", _outVariable->toJson());
      ("outVariable", _outVariable->toJson())
      ("random", triagens::basics::Json(_random));

  // And add it:
  nodes(json);

@ -1072,7 +1074,7 @@ ExecutionNode* EnumerateCollectionNode::clone (ExecutionPlan* plan,

    outVariable = plan->getAst()->variables()->createVariable(outVariable);
    TRI_ASSERT(outVariable != nullptr);
  }
  auto c = new EnumerateCollectionNode(plan, _id, _vocbase, _collection, outVariable);
  auto c = new EnumerateCollectionNode(plan, _id, _vocbase, _collection, outVariable, _random);

  CloneHelper(c, plan, withDependencies, withProperties);

@ -1200,8 +1202,9 @@ double EnumerateCollectionNode::estimateCost (size_t& nrItems) const {

  double depCost = _dependencies.at(0)->getCost(incoming);
  size_t count = _collection->count();
  nrItems = incoming * count;
  return depCost + count * incoming;
  // We do a full collection scan for each incoming item.
  // random iteration is slightly more expensive than linear iteration
  return depCost + nrItems * (_random ? 1.005 : 1.0);
}

// -----------------------------------------------------------------------------
@ -829,11 +829,13 @@ namespace triagens {
                                 size_t id,
                                 TRI_vocbase_t* vocbase,
                                 Collection* collection,
                                 Variable const* outVariable)
                                 Variable const* outVariable,
                                 bool random)
          : ExecutionNode(plan, id),
            _vocbase(vocbase),
            _collection(collection),
            _outVariable(outVariable){
            _outVariable(outVariable),
            _random(random) {
          TRI_ASSERT(_vocbase != nullptr);
          TRI_ASSERT(_collection != nullptr);
          TRI_ASSERT(_outVariable != nullptr);

@ -955,6 +957,11 @@ namespace triagens {

        Variable const* _outVariable;

////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not we want random iteration
////////////////////////////////////////////////////////////////////////////////

        bool const _random;
      };

// -----------------------------------------------------------------------------
@ -356,7 +356,7 @@ ExecutionNode* ExecutionPlan::fromNodeFor (ExecutionNode* previous,

    if (collection == nullptr) {
      THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL, "no collection for EnumerateCollection");
    }
    en = registerNode(new EnumerateCollectionNode(this, nextId(), _ast->query()->vocbase(), collection, v));
    en = registerNode(new EnumerateCollectionNode(this, nextId(), _ast->query()->vocbase(), collection, v, false));
  }
  else if (expression->type == NODE_TYPE_REFERENCE) {
    // second operand is already a variable
@ -40,6 +40,7 @@ add_executable(

    Aql/AstNode.cpp
    Aql/BindParameters.cpp
    Aql/Collection.cpp
    Aql/CollectionScanner.cpp
    Aql/ExecutionBlock.cpp
    Aql/ExecutionEngine.cpp
    Aql/ExecutionNode.cpp

@ -21,6 +21,7 @@ arangod_libarangod_a_SOURCES = \

  arangod/Aql/AstNode.cpp \
  arangod/Aql/BindParameters.cpp \
  arangod/Aql/Collection.cpp \
  arangod/Aql/CollectionScanner.cpp \
  arangod/Aql/ExecutionBlock.cpp \
  arangod/Aql/ExecutionEngine.cpp \
  arangod/Aql/ExecutionNode.cpp \
@ -99,44 +99,12 @@ Handler::status_t RestReplicationHandler::execute() {

  if (len >= 1) {
    const string& command = suffix[0];

    if (command == "logger-start") {
      if (type != HttpRequest::HTTP_REQUEST_PUT) {
        goto BAD_CALL;
      }

      if (isCoordinatorError()) {
        return status_t(Handler::HANDLER_DONE);
      }
      handleCommandLoggerStart();
    }
    else if (command == "logger-stop") {
      if (type != HttpRequest::HTTP_REQUEST_PUT) {
        goto BAD_CALL;
      }

      if (isCoordinatorError()) {
        return status_t(Handler::HANDLER_DONE);
      }

      handleCommandLoggerStop();
    }
    else if (command == "logger-state") {
    if (command == "logger-state") {
      if (type != HttpRequest::HTTP_REQUEST_GET) {
        goto BAD_CALL;
      }
      handleCommandLoggerState();
    }
    else if (command == "logger-config") {
      if (type == HttpRequest::HTTP_REQUEST_GET) {
        handleCommandLoggerGetConfig();
      }
      else {
        if (type != HttpRequest::HTTP_REQUEST_PUT) {
          goto BAD_CALL;
        }
        handleCommandLoggerSetConfig();
      }
    }
    else if (command == "logger-follow") {
      if (type != HttpRequest::HTTP_REQUEST_GET) {
        goto BAD_CALL;
@ -415,40 +383,6 @@ uint64_t RestReplicationHandler::determineChunkSize () const {

  return chunkSize;
}

////////////////////////////////////////////////////////////////////////////////
/// @brief starts the replication logger
/// this method does nothing and is deprecated since ArangoDB 2.2
////////////////////////////////////////////////////////////////////////////////

void RestReplicationHandler::handleCommandLoggerStart () {
  // the logger in ArangoDB 2.2 is now the WAL...
  // so the logger cannot be started but is always running
  TRI_json_t result;

  TRI_InitObjectJson(TRI_CORE_MEM_ZONE, &result);
  TRI_Insert3ObjectJson(TRI_CORE_MEM_ZONE, &result, "running", TRI_CreateBooleanJson(TRI_CORE_MEM_ZONE, true));

  generateResult(&result);
  TRI_DestroyJson(TRI_CORE_MEM_ZONE, &result);
}

////////////////////////////////////////////////////////////////////////////////
/// @brief stops the replication logger
/// this method does nothing and is deprecated since ArangoDB 2.2
////////////////////////////////////////////////////////////////////////////////

void RestReplicationHandler::handleCommandLoggerStop () {
  // the logger in ArangoDB 2.2 is now the WAL...
  // so the logger cannot be stopped
  TRI_json_t result;

  TRI_InitObjectJson(TRI_CORE_MEM_ZONE, &result);
  TRI_Insert3ObjectJson(TRI_CORE_MEM_ZONE, &result, "running", TRI_CreateBooleanJson(TRI_CORE_MEM_ZONE, true));

  generateResult(&result);
  TRI_DestroyJson(TRI_CORE_MEM_ZONE, &result);
}

////////////////////////////////////////////////////////////////////////////////
/// @startDocuBlock JSF_get_api_replication_logger_return_state
/// @brief returns the state of the replication logger
@ -567,37 +501,6 @@ void RestReplicationHandler::handleCommandLoggerState () {

  TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, json);
}

////////////////////////////////////////////////////////////////////////////////
/// @brief get the configuration of the replication logger
/// this method does nothing and is deprecated since ArangoDB 2.2
////////////////////////////////////////////////////////////////////////////////

void RestReplicationHandler::handleCommandLoggerGetConfig () {
  TRI_json_t* json = TRI_CreateObjectJson(TRI_UNKNOWN_MEM_ZONE);

  if (json == nullptr) {
    generateError(HttpResponse::SERVER_ERROR, TRI_ERROR_OUT_OF_MEMORY);
    return;
  }

  TRI_Insert3ObjectJson(TRI_UNKNOWN_MEM_ZONE, json, "autoStart", TRI_CreateBooleanJson(TRI_UNKNOWN_MEM_ZONE, true));
  TRI_Insert3ObjectJson(TRI_UNKNOWN_MEM_ZONE, json, "logRemoteChanges", TRI_CreateBooleanJson(TRI_UNKNOWN_MEM_ZONE, true));
  TRI_Insert3ObjectJson(TRI_UNKNOWN_MEM_ZONE, json, "maxEvents", TRI_CreateNumberJson(TRI_UNKNOWN_MEM_ZONE, 0));
  TRI_Insert3ObjectJson(TRI_UNKNOWN_MEM_ZONE, json, "maxEventsSize", TRI_CreateNumberJson(TRI_UNKNOWN_MEM_ZONE, 0));

  generateResult(json);
  TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, json);
}

////////////////////////////////////////////////////////////////////////////////
/// @brief set the configuration of the replication logger
/// this method does nothing and is deprecated since ArangoDB 2.2
////////////////////////////////////////////////////////////////////////////////

void RestReplicationHandler::handleCommandLoggerSetConfig () {
  handleCommandLoggerGetConfig();
}

////////////////////////////////////////////////////////////////////////////////
/// @brief handle a dump batch command
///
@ -131,36 +131,12 @@ namespace triagens {

        uint64_t determineChunkSize () const;

////////////////////////////////////////////////////////////////////////////////
/// @brief remotely start the replication logger
////////////////////////////////////////////////////////////////////////////////

        void handleCommandLoggerStart ();

////////////////////////////////////////////////////////////////////////////////
/// @brief remotely stop the replication logger
////////////////////////////////////////////////////////////////////////////////

        void handleCommandLoggerStop ();

////////////////////////////////////////////////////////////////////////////////
/// @brief return the state of the replication logger
////////////////////////////////////////////////////////////////////////////////

        void handleCommandLoggerState ();

////////////////////////////////////////////////////////////////////////////////
/// @brief return the configuration of the replication logger
////////////////////////////////////////////////////////////////////////////////

        void handleCommandLoggerGetConfig ();

////////////////////////////////////////////////////////////////////////////////
/// @brief configure the replication logger
////////////////////////////////////////////////////////////////////////////////

        void handleCommandLoggerSetConfig ();

////////////////////////////////////////////////////////////////////////////////
/// @brief handle a follow command for the replication log
////////////////////////////////////////////////////////////////////////////////
@ -45,6 +45,7 @@

#include "VocBase/voc-shaper.h"
#include "VocBase/voc-types.h"

#include "Basics/gcd.h"
#include "Basics/logging.h"
#include "Basics/random.h"
#include "Basics/tri-strings.h"

@ -400,6 +401,7 @@ namespace triagens {

          this->unlock(trxCollection, TRI_TRANSACTION_READ);

          // READ-LOCK END
          *total = 0;
          return TRI_ERROR_NO_ERROR;
        }
@ -441,6 +443,82 @@ namespace triagens {

          return TRI_ERROR_NO_ERROR;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief read all master pointers, using skip and limit and an internal
/// offset into the primary index. this can be used for incremental access to
/// the documents without restarting the index scan at the beginning
////////////////////////////////////////////////////////////////////////////////

        int readRandom (TRI_transaction_collection_t* trxCollection,
                        std::vector<TRI_doc_mptr_copy_t>& docs,
                        uint32_t& initialPosition,
                        uint32_t& position,
                        TRI_voc_size_t batchSize,
                        uint32_t* step,
                        uint32_t* total) {
          if (initialPosition > 0 && position == initialPosition) {
            // already read all documents
            return TRI_ERROR_NO_ERROR;
          }

          TRI_document_collection_t* document = documentCollection(trxCollection);

          // READ-LOCK START
          int res = this->lock(trxCollection, TRI_TRANSACTION_READ);

          if (res != TRI_ERROR_NO_ERROR) {
            return res;
          }

          if (document->_primaryIndex._nrUsed == 0) {
            // nothing to do
            this->unlock(trxCollection, TRI_TRANSACTION_READ);

            // READ-LOCK END
            *total = 0;
            return TRI_ERROR_NO_ERROR;
          }

          if (orderBarrier(trxCollection) == nullptr) {
            return TRI_ERROR_OUT_OF_MEMORY;
          }

          *total = (uint32_t) document->_primaryIndex._nrAlloc;
          if (*step == 0) {
            TRI_ASSERT(initialPosition == 0);

            // find a co-prime for total
|
||||
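    // a *step that is co-prime to *total makes the walk
    //   position -> (position + *step) % *total
    // a full cycle over the hash table: every slot is visited exactly once
    // before the walk returns to initialPosition. as an illustration (the
    // code below additionally requires *step > 10), with *total == 10 and
    // *step == 3 a walk starting at 0 visits 0, 3, 6, 9, 2, 5, 8, 1, 4, 7
    // and then arrives back at 0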
    while (true) {
      *step = TRI_UInt32Random() % *total;
      if (*step > 10 && triagens::basics::binaryGcd<uint32_t>(*total, *step) == 1) {
        while (initialPosition == 0) {
          initialPosition = TRI_UInt32Random() % *total;
        }
        position = initialPosition;
        break;
      }
    }
  }

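  // collect up to batchSize documents, following the cyclic walk; stop early
  // once the walk arrives back at initialPosition, i.e. once every slot of
  // the table has been visited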
  TRI_voc_size_t numRead = 0;
  do {
    TRI_doc_mptr_t* d = (TRI_doc_mptr_t*) document->_primaryIndex._table[position];
    if (d != nullptr) {
      docs.emplace_back(*d);
      ++numRead;
    }

    position += *step;
    position = position % *total;
  }
  while (numRead < batchSize && position != initialPosition);

  this->unlock(trxCollection, TRI_TRANSACTION_READ);
  // READ-LOCK END

  return TRI_ERROR_NO_ERROR;
}

////////////////////////////////////////////////////////////////////////////////
/// @brief delete a single document
////////////////////////////////////////////////////////////////////////////////
@@ -766,7 +844,7 @@ namespace triagens {
uint32_t pos = TRI_UInt32Random() % total;
void** beg = document->_primaryIndex._table;

while (beg[pos] == 0) {
while (beg[pos] == nullptr) {
  pos = TRI_UInt32Random() % total;
}
@@ -122,11 +122,11 @@ static void ExtractSkipAndLimit (const v8::FunctionCallbackInfo<v8::Value>& args
  limit = TRI_QRY_NO_LIMIT;

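  // note the type swap below: skip becomes signed (TRI_voc_ssize_t),
  // presumably so that negative skip values stay representable, while
  // limit becomes unsigned (TRI_voc_size_t)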
  if (pos < (size_t) args.Length() && ! args[(int) pos]->IsNull() && ! args[(int) pos]->IsUndefined()) {
    skip = (TRI_voc_size_t) TRI_ObjectToDouble(args[(int) pos]);
    skip = (TRI_voc_ssize_t) TRI_ObjectToDouble(args[(int) pos]);
  }

  if (pos + 1 < (size_t) args.Length() && ! args[(int) pos + 1]->IsNull() && ! args[(int) pos + 1]->IsUndefined()) {
    limit = (TRI_voc_ssize_t) TRI_ObjectToDouble(args[(int) pos + 1]);
    limit = (TRI_voc_size_t) TRI_ObjectToDouble(args[(int) pos + 1]);
  }
}
@@ -75,25 +75,6 @@ static void JS_StateLoggerReplication (const v8::FunctionCallbackInfo<v8::Value>
  TRI_V8_RETURN(result);
}

////////////////////////////////////////////////////////////////////////////////
/// @brief return the configuration of the replication logger
////////////////////////////////////////////////////////////////////////////////

static void JS_ConfigureLoggerReplication (const v8::FunctionCallbackInfo<v8::Value>& args) {
  v8::Isolate* isolate = args.GetIsolate();
  v8::HandleScope scope(isolate);

  // the replication logger does not exist anymore in ArangoDB 2.2 and higher,
  // as there is the WAL. To be downwards-compatible, we'll return dummy values
  v8::Handle<v8::Object> result = v8::Object::New(isolate);
  result->Set(TRI_V8_ASCII_STRING("autoStart"), v8::True(isolate));
  result->Set(TRI_V8_ASCII_STRING("logRemoteChanges"), v8::True(isolate));
  result->Set(TRI_V8_ASCII_STRING("maxEvents"), v8::Number::New(isolate, 0));
  result->Set(TRI_V8_ASCII_STRING("maxEventsSize"), v8::Number::New(isolate, 0));

  TRI_V8_RETURN(result);
}

////////////////////////////////////////////////////////////////////////////////
/// @brief get the last WAL entries
////////////////////////////////////////////////////////////////////////////////
@@ -647,7 +628,6 @@ void TRI_InitV8replication (v8::Isolate* isolate,

  // replication functions. not intended to be used by end users
  TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("REPLICATION_LOGGER_STATE"), JS_StateLoggerReplication, true);
  TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("REPLICATION_LOGGER_CONFIGURE"), JS_ConfigureLoggerReplication, true);
#ifdef TRI_ENABLE_MAINTAINER_MODE
  TRI_AddGlobalFunctionVocbase(isolate, context, TRI_V8_ASCII_STRING("REPLICATION_LOGGER_LAST"), JS_LastLoggerReplication, true);
#endif
@@ -139,7 +139,12 @@ KeyGenerator* KeyGenerator::factory (TRI_json_t const* options) {
  option = TRI_LookupObjectJson(options, "increment");

  if (TRI_IsNumberJson(option)) {
    increment = (uint64_t) option->_value._number;
    if (option->_value._number <= 0.0) {
      // negative or 0 increment is not allowed
      return nullptr;
    }

    increment = static_cast<uint64_t>(option->_value._number);

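    // increment must fit into 16 bits, i.e. valid values are 1..65535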
    if (increment == 0 || increment >= (1ULL << 16)) {
      return nullptr;
@@ -149,7 +154,11 @@ KeyGenerator* KeyGenerator::factory (TRI_json_t const* options) {
  option = TRI_LookupObjectJson(options, "offset");

  if (TRI_IsNumberJson(option)) {
    offset = (uint64_t) option->_value._number;
    if (option->_value._number < 0.0) {
      return nullptr;
    }

    offset = static_cast<uint64_t>(option->_value._number);

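    // note: a value of exactly UINT64_MAX is rejected below, too; presumably
    // the maximum value is kept reserved as an out-of-band marker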
    if (offset >= UINT64_MAX) {
      return nullptr;
@@ -4,5 +4,5 @@ keep-alive = true
progress = true

[server]
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
disable-authentication = true

@@ -44,7 +44,7 @@ threads = 4
[scheduler]

# number of threads used for I/O
threads = 1
threads = 2

[javascript]
startup-directory = @PKGDATADIR@/js

@@ -3,5 +3,5 @@
progress = true

[server]
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
disable-authentication = true

@@ -1,5 +1,5 @@
# config file for arangoimp

[server]
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
disable-authentication = true

@@ -1,7 +1,7 @@
# config file for arangoirb

[server]
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529

[ruby]
modules-path = @PKGDATADIR@/mr/client/modules;@PKGDATADIR@/mr/common/modules

@@ -3,5 +3,5 @@
progress = true

[server]
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
disable-authentication = true

@@ -3,7 +3,7 @@
pretty-print = true

[server]
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
disable-authentication = true

[javascript]

@@ -3,7 +3,7 @@
pretty-print = true

[server]
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
disable-authentication = true

[javascript]

@@ -16,10 +16,6 @@ app-path = ./js/apps
script = ./js/server/arango-dfdb.js
v8-contexts = 1

[ruby]
modules-path = ./mr
action-directory = ./mr/actions

[log]
level = info
severity = human

@@ -4,13 +4,13 @@

[server]
disable-authentication = true
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
threads = 10
keyfile = UnitTests/server.pem
# reuse-address = false

[scheduler]
threads = 3
threads = 2

[javascript]
startup-directory = ./js

@@ -18,9 +18,6 @@ app-path = ./js/apps
frontend-development = false
v8-contexts = 5

[ruby]
modules-path = ./mr/server/modules;./mr/common/modules

[log]
level = info
severity = human

@@ -4,13 +4,13 @@

[server]
disable-authentication = true
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
threads = 10
keyfile = UnitTests/server.pem
# reuse-address = false

[scheduler]
threads = 3
threads = 2

[javascript]
startup-directory = ./js

@@ -18,9 +18,6 @@ app-path = ./js/apps
frontend-development = false
v8-contexts = 5

[ruby]
modules-path = ./mr/server/modules;./mr/common/modules

[log]
level = info
severity = human

@@ -4,12 +4,12 @@

[server]
disable-authentication = true
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
threads = 4
# reuse-address = false

[scheduler]
threads = 1
threads = 2

[javascript]
startup-directory = ./js

@@ -17,9 +17,6 @@ app-path = ./js/apps
frontend-development = false
v8-contexts = 5

[ruby]
modules-path = ./mr/server/modules;./mr/common/modules

[log]
level = info
severity = human

@@ -4,7 +4,7 @@

[server]
disable-authentication = true
endpoint = tcp://localhost:8529
endpoint = tcp://127.0.0.1:8529
threads = 5
# reuse-address = false

@@ -17,10 +17,6 @@ app-path = ./js/apps
frontend-development = false
v8-contexts = 4

[ruby]
action-directory = ./mr/actions/system
modules-path = ./mr/server/modules;./mr/common/modules

[log]
level = info
severity = human
@@ -42,36 +42,6 @@ var arangosh = require("org/arangodb/arangosh");
var logger = { };
var applier = { };

////////////////////////////////////////////////////////////////////////////////
/// @brief starts the replication logger
////////////////////////////////////////////////////////////////////////////////

logger.start = function () {
  'use strict';

  var db = internal.db;

  var requestResult = db._connection.PUT("/_api/replication/logger-start", "");
  arangosh.checkRequestResult(requestResult);

  return requestResult;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief stops the replication logger
////////////////////////////////////////////////////////////////////////////////

logger.stop = function () {
  'use strict';

  var db = internal.db;

  var requestResult = db._connection.PUT("/_api/replication/logger-stop", "");
  arangosh.checkRequestResult(requestResult);

  return requestResult;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief return the replication logger state
////////////////////////////////////////////////////////////////////////////////

@@ -87,29 +57,6 @@ logger.state = function () {
  return requestResult;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief configures the replication logger
////////////////////////////////////////////////////////////////////////////////

logger.properties = function (config) {
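  // without a config object, the current configuration is fetched from the
  // server via GET; with one, the new configuration is sent via PUT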
  'use strict';

  var db = internal.db;

  var requestResult;
  if (config === undefined) {
    requestResult = db._connection.GET("/_api/replication/logger-config");
  }
  else {
    requestResult = db._connection.PUT("/_api/replication/logger-config",
      JSON.stringify(config));
  }

  arangosh.checkRequestResult(requestResult);

  return requestResult;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief starts the replication applier
////////////////////////////////////////////////////////////////////////////////
@@ -41,36 +41,6 @@ var arangosh = require("org/arangodb/arangosh");
var logger = { };
var applier = { };

////////////////////////////////////////////////////////////////////////////////
/// @brief starts the replication logger
////////////////////////////////////////////////////////////////////////////////

logger.start = function () {
  'use strict';

  var db = internal.db;

  var requestResult = db._connection.PUT("/_api/replication/logger-start", "");
  arangosh.checkRequestResult(requestResult);

  return requestResult;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief stops the replication logger
////////////////////////////////////////////////////////////////////////////////

logger.stop = function () {
  'use strict';

  var db = internal.db;

  var requestResult = db._connection.PUT("/_api/replication/logger-stop", "");
  arangosh.checkRequestResult(requestResult);

  return requestResult;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief return the replication logger state
////////////////////////////////////////////////////////////////////////////////

@@ -86,29 +56,6 @@ logger.state = function () {
  return requestResult;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief configures the replication logger
////////////////////////////////////////////////////////////////////////////////

logger.properties = function (config) {
  'use strict';

  var db = internal.db;

  var requestResult;
  if (config === undefined) {
    requestResult = db._connection.GET("/_api/replication/logger-config");
  }
  else {
    requestResult = db._connection.PUT("/_api/replication/logger-config",
      JSON.stringify(config));
  }

  arangosh.checkRequestResult(requestResult);

  return requestResult;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief starts the replication applier
////////////////////////////////////////////////////////////////////////////////
@@ -113,7 +113,6 @@ function ReplicationLoggerSuite () {
////////////////////////////////////////////////////////////////////////////////

    setUp : function () {
      replication.logger.properties({ maxEvents: 1048576 });
    },

////////////////////////////////////////////////////////////////////////////////
@@ -121,61 +120,10 @@ function ReplicationLoggerSuite () {
////////////////////////////////////////////////////////////////////////////////

    tearDown : function () {
      replication.logger.properties({ maxEvents: 1048576 });
      db._drop(cn);
      db._drop(cn2);
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief start logger
////////////////////////////////////////////////////////////////////////////////

    testStartLogger : function () {
      var actual, state;

      // start
      actual = replication.logger.start();
      assertTrue(actual);

      state = replication.logger.state().state;
      assertTrue(state.running);
      assertTrue(typeof state.lastLogTick === 'string');
      assertMatch(/^\d+$/, state.lastLogTick);
      assertTrue(state.totalEvents >= 0);

      // start again
      actual = replication.logger.start();
      assertTrue(actual);
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief stop logger
////////////////////////////////////////////////////////////////////////////////

    testStopLogger : function () {
      var actual, state;

      // start
      actual = replication.logger.start();
      assertTrue(actual);

      state = replication.logger.state().state;
      assertTrue(state.running);
      assertTrue(typeof state.lastLogTick === 'string');
      assertMatch(/^\d+$/, state.lastLogTick);
      assertTrue(state.totalEvents >= 0);

      // stop
      actual = replication.logger.stop();
      assertTrue(actual);

      state = replication.logger.state().state;
      assertTrue(state.running);
      assertTrue(typeof state.lastLogTick === 'string');
      assertMatch(/^\d+$/, state.lastLogTick);
      assertTrue(state.totalEvents >= 1);
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief get state
////////////////////////////////////////////////////////////////////////////////
@@ -247,8 +195,6 @@ function ReplicationLoggerSuite () {
      events = state.totalEvents;
      assertTrue(state.totalEvents >= 0);

      replication.logger.start();

      // do something that will cause logging
      var c = db._create(cn);
      c.save({ "test" : 1 });
@@ -264,21 +210,6 @@ function ReplicationLoggerSuite () {
      assertTrue(state.totalEvents > events);
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief test logger properties
////////////////////////////////////////////////////////////////////////////////

    testPropertiesLogger : function () {
      var properties;

      properties = replication.logger.properties();
      assertTrue(typeof properties === 'object');
      assertTrue(properties.hasOwnProperty('autoStart'));
      assertTrue(properties.hasOwnProperty('logRemoteChanges'));
      assertTrue(properties.hasOwnProperty('maxEvents'));
      assertTrue(properties.hasOwnProperty('maxEventsSize'));
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief test actions
////////////////////////////////////////////////////////////////////////////////
@@ -2,7 +2,7 @@
/*global require, db, ArangoCollection, ArangoDatabase, ArangoCursor, ShapedJson,
  RELOAD_AUTH, SYS_DEFINE_ACTION, SYS_EXECUTE_GLOBAL_CONTEXT_FUNCTION,
  WAL_FLUSH, WAL_PROPERTIES,
  REPLICATION_LOGGER_STATE, REPLICATION_LOGGER_CONFIGURE, REPLICATION_SERVER_ID,
  REPLICATION_LOGGER_STATE, REPLICATION_SERVER_ID,
  REPLICATION_APPLIER_CONFIGURE, REPLICATION_APPLIER_START, REPLICATION_APPLIER_SHUTDOWN,
  REPLICATION_APPLIER_FORGET, REPLICATION_APPLIER_STATE, REPLICATION_SYNCHRONISE,
  ENABLE_STATISTICS, DISPATCHER_THREADS, SYS_CREATE_NAMED_QUEUE, SYS_ADD_JOB,
@@ -228,15 +228,6 @@
    delete REPLICATION_LOGGER_STATE;
  }

////////////////////////////////////////////////////////////////////////////////
/// @brief configureReplicationLogger
////////////////////////////////////////////////////////////////////////////////

  if (typeof REPLICATION_LOGGER_CONFIGURE !== "undefined") {
    internal.configureReplicationLogger = REPLICATION_LOGGER_CONFIGURE;
    delete REPLICATION_LOGGER_CONFIGURE;
  }

////////////////////////////////////////////////////////////////////////////////
/// @brief configureReplicationApplier
////////////////////////////////////////////////////////////////////////////////
@@ -40,32 +40,6 @@ var internal = require("internal");
var logger = { };
var applier = { };

////////////////////////////////////////////////////////////////////////////////
/// @brief starts the replication logger - this is a no-op in ArangoDB 2.2 and
/// higher
////////////////////////////////////////////////////////////////////////////////

logger.start = function () {
  'use strict';

  // the logger in ArangoDB 2.2 is now the WAL...
  // so the logger cannot be started but is always running
  return true;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief stops the replication logger - this is a no-op in ArangoDB 2.2 and
/// higher
////////////////////////////////////////////////////////////////////////////////

logger.stop = function () {
  'use strict';

  // the logger in ArangoDB 2.2 is now the WAL...
  // so the logger cannot be stopped
  return true;
};

////////////////////////////////////////////////////////////////////////////////
/// @brief return the replication logger state
////////////////////////////////////////////////////////////////////////////////

@@ -76,16 +50,6 @@ logger.state = function () {
  return internal.getStateReplicationLogger();
};

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the configuration of the replication logger
////////////////////////////////////////////////////////////////////////////////

logger.properties = function () {
  'use strict';

  return internal.configureReplicationLogger();
};

////////////////////////////////////////////////////////////////////////////////
/// @brief starts the replication applier
////////////////////////////////////////////////////////////////////////////////
@@ -0,0 +1,80 @@
////////////////////////////////////////////////////////////////////////////////
/// @brief gcd
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2014 ArangoDB GmbH, Cologne, Germany
/// Copyright 2004-2014 triAGENS GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
///     http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2014, ArangoDB GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////

#ifndef ARANGODB_BASICS_GCD_H
#define ARANGODB_BASICS_GCD_H 1

#include "Basics/Common.h"

namespace triagens {
namespace basics {

////////////////////////////////////////////////////////////////////////////////
/// @brief binary greatest common divisor
/// @author http://en.wikipedia.org/wiki/Binary_GCD_algorithm
/// note: T must be an unsigned type
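/// the algorithm strips common factors of two via shifts and reduces the
/// remaining odd values by subtraction; for example, binaryGcd(48, 18)
/// evaluates to 6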
////////////////////////////////////////////////////////////////////////////////

template<typename T> static T binaryGcd (T u, T v) {
  if (u == 0) {
    return v;
  }
  if (v == 0) {
    return u;
  }

  int shift;
  for (shift = 0; ((u | v) & 1) == 0; ++shift) {
    u >>= 1;
    v >>= 1;
  }

  while ((u & 1) == 0) {
    u >>= 1;
  }

  do {
    while ((v & 1) == 0) {
      v >>= 1;
    }

    if (u > v) {
      std::swap(v, u);
    }
    v = v - u;
  }
  while (v != 0);

  return u << shift;
}

}
}

#endif