mirror of https://gitee.com/bigwinds/arangodb

Feature/misc spelling corrections (#5164)

parent 2d9e5288ab
commit 34ec56d421

CHANGELOG: 18 lines changed
@@ -685,7 +685,7 @@ v3.3.0 (2012-12-14)
 v3.3.rc8 (2017-12-12)
 ---------------------

-* UI: fixed broken foxx configuration keys. Some valid configuration values
+* UI: fixed broken Foxx configuration keys. Some valid configuration values
   could not be edited via the ui.

 * UI: pressing the return key inside a select2 box no longer triggers the modal's
@@ -2584,10 +2584,10 @@ v3.1.alpha2 (2016-09-01)

 * added experimental support for incoming gzip-compressed requests

-* added HTTP REST APIs for online loglevel adjustments:
+* added HTTP REST APIs for online log level adjustments:

-  - GET `/_admin/log/level` returns the current loglevel settings
-  - PUT `/_admin/log/level` modifies the current loglevel settings
+  - GET `/_admin/log/level` returns the current log level settings
+  - PUT `/_admin/log/level` modifies the current log level settings

 * PATCH /_api/gharial/{graph-name}/vertex/{collection-name}/{vertex-key}
   - changed default value for keepNull to true
@@ -2821,7 +2821,7 @@ v3.0.4 (2016-08-01)

 * added missing lock for periodic jobs access

-* fix multiple foxx related cluster issues
+* fix multiple Foxx related cluster issues

 * fix handling of empty AQL query strings

@@ -5892,7 +5892,7 @@ v2.5.0-beta2 (2015-02-23)
 * Rewrite of Foxx routing

   The routing of Foxx has been exposed to major internal changes we adjusted because of user feedback.
-  This allows us to set the development mode per mountpoint without having to change paths and hold
+  This allows us to set the development mode per mount point without having to change paths and hold
   apps at separate locations.

 * Foxx Development mode
@@ -5912,8 +5912,8 @@ v2.5.0-beta2 (2015-02-23)
 * Foxx install process

   Installing Foxx apps has been a two step process: import them into ArangoDB and mount them at a
-  specific mountpoint. These operations have been joined together. You can install an app at one
-  mountpoint, that's it. No fetch, mount, unmount, purge cycle anymore. The commands have been
+  specific mount point. These operations have been joined together. You can install an app at one
+  mount point, that's it. No fetch, mount, unmount, purge cycle anymore. The commands have been
   simplified to just:

 * install: get your Foxx app up and running
@@ -10164,7 +10164,7 @@ v1.1.2 (2013-01-20)

 * started with issue #317: Feature Request (from Google Groups): DATE handling

-* backported issue #300: Extend arangoImp to Allow importing resultset-like
+* backported issue #300: Extend arangoImp to Allow importing result set-like
   (list of documents) formatted files

 * fixed issue #337: "WaitForSync" on new collection does not work on Win/X64
@@ -12,7 +12,7 @@ RETURN LENGTH(collection)
 ```

 This type of call is optimized since 2.8 (no unnecessary intermediate result
-is built up in memory) and it is therefore the prefered way to determine the count.
+is built up in memory) and it is therefore the preferred way to determine the count.
 Internally, [COLLECTION_COUNT()](../Functions/Miscellaneous.md#collectioncount) is called.

 In earlier versions with `COLLECT ... WITH COUNT INTO` available (since 2.4),
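
The hunk above edits the AQL documentation on counting. For orientation, a minimal arangosh sketch of the two counting variants it mentions, assuming a collection named `myCollection` (the name is illustrative):

```js
var db = require("org/arangodb").db;

// preferred since 2.8: no intermediate result is materialized in memory
var fast = db._query("RETURN LENGTH(myCollection)").toArray()[0];

// older variant, available since 2.4
var slow = db._query(
  "FOR doc IN myCollection COLLECT WITH COUNT INTO c RETURN c"
).toArray()[0];

print(fast, slow);
```
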
@@ -152,7 +152,7 @@ There are further options that can be passed in the *options* attribute of the *
 need to hold the collection locks for as long as the query cursor exists. It is advisable
 to *only* use this option on short-running queries *or* without exclusive locks (write locks on MMFiles).
 When set to *false* the query will be executed right away in its entirety.
-In that case query results are either returned right away (if the resultset is small enough),
+In that case query results are either returned right away (if the result set is small enough),
 or stored on the arangod instance and accessible via the cursor API.

 Please note that the query options `cache`, `count` and `fullCount` will not work on streaming
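
The hunk above belongs to the HTTP cursor API documentation. A sketch of creating a streaming cursor through that API from arangosh, assuming arangosh's built-in `arango` connection object (query and collection name are illustrative):

```js
// executed lazily on the server; avoid with exclusive locks (MMFiles)
var response = arango.POST("/_api/cursor", JSON.stringify({
  query: "FOR doc IN myCollection RETURN doc",
  batchSize: 1000,
  options: { stream: true }
}));
print(response.hasMore, response.result.length);
```

As the documentation notes, `cache`, `count` and `fullCount` do not work together with `stream: true`.
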
@@ -9,7 +9,7 @@ to the ArangoDB user.
 They provide the capability to:
 * evaluate together documents located in different collections
 * filter documents based on AQL boolean expressions and functions
-* sort the resultset based on how closely each document matched the filter
+* sort the result set based on how closely each document matched the filter

 ArangoSearch value analysis
 ---------------------------
@@ -54,7 +54,7 @@ Solution 2: Foxx (recommended)
 The general graph module still offers the measurement functions.
 As these are typically computation expensive and create long running queries it is recommended
 to not use them in combination with other AQL features.
-Therefore the best idea is to offer these measurements directly via an API using FOXX.
+Therefore the best idea is to offer these measurements directly via an API using Foxx.

 First we create a new [Foxx service](../../Manual/Foxx/index.html).
 Then we include the `general-graph` module in the service.
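
The cookbook page edited above recommends wrapping the general-graph measurement functions in a Foxx API. A minimal sketch of such a route, assuming a Foxx 3.x service and an existing graph named `routeplanner` (both assumptions, not taken from the diff):

```js
'use strict';
const createRouter = require('@arangodb/foxx/router');
const graphModule = require('@arangodb/general-graph');

const router = createRouter();
module.context.use(router);

// expose one long-running measurement behind a plain GET endpoint
router.get('/diameter', function (req, res) {
  const graph = graphModule._graph('routeplanner');
  res.json({ diameter: graph._diameter() });
});
```
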
@@ -99,7 +99,7 @@ There are essentially 3 ways to change this behavior:
 Note that you can remove individual collections from your dump by
 deleting their pair of structure and data file in the dump directory.
 In this way you can restore your data in several steps or even
-parallelise the restore operation by running multiple `arangorestore`
+parallelize the restore operation by running multiple `arangorestore`
 processes concurrently on different dump directories. You should
 consider using different coordinators for the different `arangorestore`
 processes in this case.

@@ -96,7 +96,7 @@ The documentation and unit tests still require a [cygwin](https://www.cygwin.com

 You need at least `make` from cygwin. Cygwin also offers a `cmake`. Do **not** install the cygwin cmake.

-You should also issue these commands to generate user informations for the cygwin commands:
+You should also issue these commands to generate user information for the cygwin commands:

     mkpasswd > /etc/passwd
     mkgroup > /etc/group

@@ -8,7 +8,7 @@ How do you model document inheritance given that collections do not support that

 Lets assume you have three document collections: "subclass", "class" and "superclass". You also have two edge collections: "sub_extends_class" and "class_extends_super".

-You can create them via arangosh or foxx:
+You can create them via arangosh or Foxx:

 ```js
 var graph_module = require("com/arangodb/general-graph");
@@ -24,7 +24,7 @@ This makes sure when using the graph interface that the inheritance looks like:
 * super → sub

 To make sure everything works as expected you should use the built-in traversal in combination with Foxx. This allows you to add the inheritance security layer easily.
-To use traversals in foxx simply add the following line before defining routes:
+To use traversals in Foxx simply add the following line before defining routes:

 ```js
 var traversal = require("org/arangodb/graph/traversal");

@@ -19,7 +19,7 @@ For this recipe you need to install the following tools:
 ### Disk usage
 You may want to monitor that ArangoDB doesn't run out of disk space. The [df Plugin](https://collectd.org/wiki/index.php/Plugin:DF) can aggregate these values for you.

-First we need to find out which disks are used by your ArangoDB. By default you need to find **/var/lib/arango** in the mountpoints. Since nowadays many virtual file systems are also mounted on a typical \*nix system we want to sort the output of mount:
+First we need to find out which disks are used by your ArangoDB. By default you need to find **/var/lib/arango** in the mount points. Since nowadays many virtual file systems are also mounted on a typical \*nix system we want to sort the output of mount:

     mount | sort
     /dev/sda3 on /local/home type ext4 (rw,relatime,data=ordered)
@@ -30,7 +30,7 @@ First we need to find out which disks are used by your ArangoDB. By default you
     ....
     udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=1022123,mode=755)

-So here we can see the mountpoints are `/`, `/local/home`, `/mnt/` so `/var/lib/` can be found on the root partition (`/`) `/dev/sda3` here. A production setup may be different so the OS doesn't interfere with the services.
+So here we can see the mount points are `/`, `/local/home`, `/mnt/` so `/var/lib/` can be found on the root partition (`/`) `/dev/sda3` here. A production setup may be different so the OS doesn't interfere with the services.

 The collectd configuration `/etc/collectd/collectd.conf.d/diskusage.conf` looks like this:

@@ -64,7 +64,7 @@ The collectd configuration `/etc/collectd/collectd.conf.d/diskusage.conf` looks
 Another interesting metric is the amount of data read/written to disk - its an estimate how busy your ArangoDB or the whole system currently is.
 The [Disk plugin](https://collectd.org/wiki/index.php/Plugin:Disk) aggregates these values.

-According to the mountpoints above our configuration `/etc/collectd/collectd.conf.d/disk_io.conf` looks like this:
+According to the mount points above our configuration `/etc/collectd/collectd.conf.d/disk_io.conf` looks like this:

     LoadPlugin disk
     <Plugin disk>

@@ -30,11 +30,11 @@ The app can be installed as follows:
 * click *Install*

 Now enter a mountpoint for the application. This is the URL path under which the
-application will become available. For the example app, the mountpoint does not matter.
+application will become available. For the example app, the mount point does not matter.
 The web page in the example app assumes it is served by ArangoDB, too. So it uses a
 relative URL `autocomplete`. This is easiest to set up, but in reality you might want
 to have your web page served by a different server. In this case, your web page will
-have to call the app mountpoint you just entered.
+have to call the app mount point you just entered.

 To see the example app in action, click on **Open**. The autocomplete textbox should be
 populated with server data when at least two letters are entered.

@@ -94,7 +94,7 @@ Let's dig in some deeper.

 ### Read API

-Let's start with the above initialised key-value store in the following. Let us visit the following read operations:
+Let's start with the above initialized key-value store in the following. Let us visit the following read operations:

 ```
 curl -L http://$SERVER:$PORT/_api/agency/read -d '[["/a/b"]]'
@@ -220,7 +220,7 @@ is a precondition specifying that the previous value of the key `"/a/b/c"` key m
 { "/a/b/c": [1, 2, 3] }
 ```

-Consider the agency in initialised as above let's review the responses from the agency as follows:
+Consider the agency in initialized as above let's review the responses from the agency as follows:

 ```
 curl -L http://$SERVER:$PORT/_api/agency/write -d '[[{"/a/b/c":{"op":"set","new":[1,2,3,4]},"/a/b/pi":{"op":"set","new":"some text"}},{"/a/b/c":{"old":[1,2,3]}}]]'

@@ -23,7 +23,7 @@ The only available view type currently is: "arangosearch".
 A natively integrated AQL extension that allows one to:
 * evaluate together documents located in different collections
 * filter documents based on AQL boolean expressions and functions
-* sort the resultset based on how closely each document matched the filter
+* sort the result set based on how closely each document matched the filter

 ### View Identifier

@@ -13,10 +13,9 @@ The directory functions below shouldn't use the current working directory of the
 You will not be able to tell whether the environment the server is running in will permit directory listing,
 reading or writing of files.

-You should either base your directories with `getTempPath()`, or as a foxx service use the
+You should either base your directories with `getTempPath()`, or as a Foxx service use the
 [module.context.basePath](../../Foxx/Reference/Context.md).

-
 Single File Directory Manipulation
 ----------------------------------

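
The hunk above concerns the `fs` module documentation. A small sketch of the recommended pattern, basing paths on `getTempPath()` instead of the current working directory (the file name is illustrative):

```js
var fs = require("fs");

// build an absolute path under the server's temporary directory
var path = fs.join(fs.getTempPath(), "example.txt");
fs.write(path, "hello");
print(fs.read(path));
```
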
@@ -165,7 +165,7 @@ may be empty when called on a coordinator in a cluster.
 Additionally, the filesizes of collection and index parameter JSON files are
 not reported. These files should normally have a size of a few bytes
 each. Please also note that the *fileSize* values are reported in bytes
-and reflect the logical file sizes. Some filesystems may use optimisations
+and reflect the logical file sizes. Some filesystems may use optimizations
 (e.g. sparse files) so that the actual physical file size is somewhat
 different. Directories and sub-directories may also require space in the
 file system, but this space is not reported in the *fileSize* results.

@@ -26,5 +26,5 @@ Once the ArangoDB _master_ and _slaves_ have been deployed, the replication has
 to be started on each of the available _slaves_. This can be done at database level,
 or globally.

-For further informations on how to set up the replication in _master/slave_ environment,
+For further information on how to set up the replication in _master/slave_ environment,
 please refer to [this](../../Administration/MasterSlave/SettingUp.md) _Section_.

@@ -5,9 +5,9 @@ Multiple ArangoDB instances can be deployed as a fault-tolerant distributed stat

 What is a fault-tolerant state machine in the first place?

-In many service deployments consisting of arbitrary components distributed over multiple machines one is faced with the challenge of creating a dependable centralised knowledge base or configuration. Implementation of such a service turns out to be one of the most fundamental problems in information engineering. While it may seem as if the realisation of such a service is easily conceivable, dependablity formulates a paradoxon on computer networks per se. On the one hand, one needs a distributed system to avoid a single point of failure. On the other hand, one has to establish consensus among the computers involved.
+In many service deployments consisting of arbitrary components distributed over multiple machines one is faced with the challenge of creating a dependable centralized knowledge base or configuration. Implementation of such a service turns out to be one of the most fundamental problems in information engineering. While it may seem as if the realization of such a service is easily conceivable, dependability formulates a paradox on computer networks per se. On the one hand, one needs a distributed system to avoid a single point of failure. On the other hand, one has to establish consensus among the computers involved.

-Consensus is the keyword here and its realisation on a network proves to be far from trivial. Many papers and conference proceedings have discussed and evaluated this key challenge. Two algorithms, historically far apart, have become widely popular, namely Paxos and its derivatives and Raft. Discussing them and their differences, although highly enjoyable, must remain far beyond the scope of this document. Find the references to the main publications at the bottom of this page.
+Consensus is the keyword here and its realization on a network proves to be far from trivial. Many papers and conference proceedings have discussed and evaluated this key challenge. Two algorithms, historically far apart, have become widely popular, namely Paxos and its derivatives and Raft. Discussing them and their differences, although highly enjoyable, must remain far beyond the scope of this document. Find the references to the main publications at the bottom of this page.

 At ArangoDB, we decided to implement Raft as it is arguably the easier to understand and thus implement. In simple terms, Raft guarantees that a linear stream of transactions, is replicated in realtime among a group of machines through an elected leader, who in turn must have access to and project leadership upon an overall majority of participating instances. In ArangoDB we like to call the entirety of the components of the replicated transaction log, that is the machines and the ArangoDB instances, which constitute the replicated log, the agency.

@@ -16,13 +16,13 @@ Startup

 The agency must consists of an odd number of agents in order to be able to establish an overall majority and some means for the agents to be able to find one another at startup.

-The most obvious way would be to inform all agents of the addresses and ports of the rest. This however, is more information than needed. For example, it would suffice, if all agents would know the address and port of the next agent in a cyclic fashion. Another straitforward solution would be to inform all agents of the address and port of say the first agent.
+The most obvious way would be to inform all agents of the addresses and ports of the rest. This however, is more information than needed. For example, it would suffice, if all agents would know the address and port of the next agent in a cyclic fashion. Another straightforward solution would be to inform all agents of the address and port of say the first agent.

 Clearly all cases, which would form disjunct subsets of agents would break or in the least impair the functionality of the agency. From there on the agents will gossip the missing information about their peers.

 Typically, one achieves fairly high fault-tolerance with low, odd number of agents while keeping the necessary network traffic at a minimum. It seems that the typical agency size will be in range of 3 to 7 agents.

-The below commands start up a 3-host agency on one physical/logical box with ports 8529, 8530 and 8531 for demonstration purposes. The adress of the first instance, port 8529, is known to the other two. After atmost 2 rounds of gossipping, the last 2 agents will have a complete picture of their surrounding and persist it for the next restart.
+The below commands start up a 3-host agency on one physical/logical box with ports 8529, 8530 and 8531 for demonstration purposes. The address of the first instance, port 8529, is known to the other two. After at most 2 rounds of gossipping, the last 2 agents will have a complete picture of their surrounding and persist it for the next restart.

 ```
 ./arangod --agency.activate true --agency.size 3 --agency.my-address tcp://localhost:8529 --server.authentication false --server.endpoint tcp://0.0.0.0:8529 agency-8529
@@ -76,7 +76,7 @@ curl -s localhost:8529/_api/agency/config
 }
 ```

-To highlight some details in the above output look for `"term"` and `"leaderId"`. Both are key information about the current state of the Raft algorithm. You may have noted that the first election term has established a random leader for the agency, who is in charge of replication of the state machine and for all external read and write requests until such time that the process gets isolated from the other two subsequenctly losing its leadership.
+To highlight some details in the above output look for `"term"` and `"leaderId"`. Both are key information about the current state of the Raft algorithm. You may have noted that the first election term has established a random leader for the agency, who is in charge of replication of the state machine and for all external read and write requests until such time that the process gets isolated from the other two subsequently losing its leadership.

 Read and Write APIs
 -------------------
@@ -174,5 +174,5 @@ curl -L localhost:8529/_api/agency/write -d '[[{"foo":["bar","baz","qux"]}]]'

 are equivalent for example and will create and fill an array at `/foo`. Here, again, the outermost array is the container for the transaction arrays.

-We documented a complete guide of the API in the [API section](../../../HTTP/Agency/index.html).
+A complete guide of the API can be found in the [API section](../../HTTP/Agency/index.html).


@@ -1,8 +1,8 @@
 Services
 ========

-The services section displays all installed foxx applications. You can create new services
-or go into a detailed view of a choosen service.
+The services section displays all installed Foxx applications. You can create new services
+or go into a detailed view of a chosen service.

 ![Services](images/services.png)


@@ -242,7 +242,7 @@ That includes a rewrite of the documentation as well as some code changes as fol

 ### Moved Foxx applications to a different folder.

-Until 2.4 foxx apps were stored in the following folder structure:
+Until 2.4 Foxx apps were stored in the following folder structure:
 `<app-path>/databases/<dbname>/<appname>:<appversion>`.
 This caused some trouble as apps where cached based on name and version and updates did not apply.
 Also the path on filesystem and the app's access URL had no relation to one another.
@@ -252,7 +252,7 @@ Now the path on filesystem is identical to the URL (except the appended APP):
 ### Rewrite of Foxx routing

 The routing of Foxx has been exposed to major internal changes we adjusted because of user feedback.
-This allows us to set the development mode per mountpoint without having to change paths and hold
+This allows us to set the development mode per mount point without having to change paths and hold
 apps at separate locations.

 ### Foxx Development mode
@@ -272,8 +272,8 @@ latter option is only read and used during the upgrade to 2.5 and does not have
 ### Foxx install process

 Installing Foxx apps has been a two step process: import them into ArangoDB and mount them at a
-specific mountpoint. These operations have been joined together. You can install an app at one
-mountpoint, that's it. No fetch, mount, unmount, purge cycle anymore. The commands have been
+specific mount point. These operations have been joined together. You can install an app at one
+mount point, that's it. No fetch, mount, unmount, purge cycle anymore. The commands have been
 simplified to just:

 * install: get your Foxx app up and running
@@ -302,7 +302,7 @@ your in-house devs to track down errors in the application.

 We added a `console` object to Foxx apps. All Foxx apps now have a console object implementing
 the familiar Console API in their global scope, which can be used to log diagnostic
-messages to the database. This console also allows to read the error output of one specific foxx.
+messages to the database. This console also allows to read the error output of one specific Foxx.

 ### Foxx requests
 We added `org/arangodb/request` module, which provides a simple API for making HTTP requests
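
The release notes edited above mention the `org/arangodb/request` module. A minimal sketch of its use from a Foxx app or arangosh (the URL is illustrative; authentication may be required):

```js
var request = require('org/arangodb/request');
var response = request.get('http://localhost:8529/_api/version');
console.log(response.statusCode, response.body);
```
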
@@ -23,7 +23,7 @@ Specify *true* and the query will be executed in a **streaming** fashion. The qu
 not stored on the server, but calculated on the fly. *Beware*: long-running queries will
 need to hold the collection locks for as long as the query cursor exists.
 When set to *false* the query will be executed right away in its entirety.
-In that case query results are either returned right away (if the resultset is small enough),
+In that case query results are either returned right away (if the result set is small enough),
 or stored on the arangod instance and accessible via the cursor API. It is advisable
 to *only* use this option on short-running queries *or* without exclusive locks (write locks on MMFiles).
 Please note that the query options `cache`, `count` and `fullCount` will not work on streaming

@@ -105,7 +105,7 @@ geo or fulltext indexes. They will always be non-unique and sparse.

 ### Moved Foxx applications to a different folder.

-Until 2.4 foxx apps were stored in the following folder structure:
+Until 2.4 Foxx apps were stored in the following folder structure:
 `<app-path>/databases/<dbname>/<appname>:<appversion>`.
 This caused some trouble as apps where cached based on name and version and updates did not apply.
 Also the path on filesystem and the app's access URL had no relation to one another.
@@ -129,8 +129,8 @@ latter option is only read and used during the upgrade to 2.5 and does not have
 ### Foxx install process

 Installing Foxx apps has been a two step process: import them into ArangoDB and mount them at a
-specific mountpoint. These operations have been joined together. You can install an app at one
-mountpoint, that's it. No fetch, mount, unmount, purge cycle anymore. The commands have been
+specific mount point. These operations have been joined together. You can install an app at one
+mount point, that's it. No fetch, mount, unmount, purge cycle anymore. The commands have been
 simplified to just:

 * install: get your Foxx app up and running
@@ -144,7 +144,7 @@ Removed features
 ----------------

 * Startup switch `--javascript.frontend-development-mode`: Its major purpose was internal development
-  anyway. Now the web frontend can be set to development mode similar to any other foxx app.
+  anyway. Now the web frontend can be set to development mode similar to any other Foxx app.
 * Startup switch `--javascript.dev-app-path`: Was used for the development mode of Foxx. This is
   integrated with the normal app-path now and can be triggered on app level. The second app-path is
   superfluous.
@@ -82,11 +82,11 @@ HTTP API changes

 ### APIs added

-The following HTTP REST APIs have been added for online loglevel adjustment of
+The following HTTP REST APIs have been added for online log level adjustment of
 the server:

-* GET `/_admin/log/level` returns the current loglevel settings
-* PUT `/_admin/log/level` modifies the current loglevel settings
+* GET `/_admin/log/level` returns the current log level settings
+* PUT `/_admin/log/level` modifies the current log level settings

 ### APIs changed

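
For orientation, the two APIs named in this hunk can be exercised from arangosh via its built-in `arango` connection object; the topic and level below are illustrative:

```js
// read the current log level settings (an object of topic -> level)
var levels = arango.GET("/_admin/log/level");
print(levels.requests);

// raise the level for the "requests" topic only
arango.PUT("/_admin/log/level", JSON.stringify({ requests: "debug" }));
```
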
@@ -7,7 +7,7 @@ ArangoSearch is a natively integrated AQL extension making use of the IResearch
 Arangosearch allows one to:
 * join documents located in different collections to one result list
 * filter documents based on AQL boolean expressions and functions
-* sort the resultset based on how closely each document matched the filter
+* sort the result set based on how closely each document matched the filter

 A concept of value 'analysis' that is meant to break up a given value into
 a set of sub-values internally tied together by metadata which influences both
@@ -39,7 +39,7 @@ an optional boolean value to indicate whether the function
 results are fully deterministic (function return value solely depends on
 the input value and return value is the same for repeated calls with same
 input). The *isDeterministic* attribute is currently not used but may be
-used later for optimisations.
+used later for optimizations.


 @RESTRETURNCODE{400}
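
The hunk above is part of the user-defined AQL function documentation. A sketch of registering such a function and flagging it as deterministic from arangosh (the name and body are illustrative):

```js
var aqlfunctions = require("@arangodb/aql/functions");

aqlfunctions.register("myfunctions::double", function (value) {
  return value * 2;
}, true); // third argument: isDeterministic
```
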
@@ -15,7 +15,7 @@ an optional boolean value to indicate whether the function
 results are fully deterministic (function return value solely depends on
 the input value and return value is the same for repeated calls with same
 input). The *isDeterministic* attribute is currently not used but may be
-used later for optimisations.
+used later for optimizations.

 @RESTDESCRIPTION

@@ -21,7 +21,7 @@ this task should run each `period` seconds
 time offset in seconds from the created timestamp

 @RESTSTRUCT{command,api_task_struct,string,required,}
-the javascript function for this dask
+the javascript function for this task

 @RESTSTRUCT{database,api_task_struct,string,required,}
 the database this task belongs to
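
The struct documented above describes tasks as returned by the tasks API. A sketch of registering such a task from arangosh (name and period are illustrative):

```js
var tasks = require("@arangodb/tasks");

tasks.register({
  name: "cleanup",
  period: 60, // run every 60 seconds
  command: function (params) { // the "javascript function for this task"
    require("console").log("task ran");
  }
});
```
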
@@ -52,7 +52,7 @@ identified by its @LIT{lid} and the identifiers are in ascending
 order.

 @RESTREPLYBODY{level,string,required,string}
-A list of the loglevels for all log entries.
+A list of the log levels for all log entries.

 @RESTREPLYBODY{timestamp,array,required,string}
 a list of the timestamps as seconds since 1970-01-01 for all log
@@ -77,12 +77,12 @@ error.


 @startDocuBlock get_admin_loglevel
-@brief returns the current loglevel settings
+@brief returns the current log level settings

-@RESTHEADER{GET /_admin/log/level, Return the current server loglevel}
+@RESTHEADER{GET /_admin/log/level, Return the current server log level}

 @RESTDESCRIPTION
-Returns the server's current loglevel settings.
+Returns the server's current log level settings.
 The result is a JSON object with the log topics being the object keys, and
 the log levels being the object values.

@@ -98,21 +98,21 @@ error.


 @startDocuBlock put_admin_loglevel
-@brief modifies the current loglevel settings
+@brief modifies the current log level settings

-@RESTHEADER{PUT /_admin/log/level, Modify and return the current server loglevel}
+@RESTHEADER{PUT /_admin/log/level, Modify and return the current server log level}

 @RESTDESCRIPTION
-Modifies and returns the server's current loglevel settings.
+Modifies and returns the server's current log level settings.
 The request body must be a JSON object with the log topics being the object keys
 and the log levels being the object values.

 The result is a JSON object with the adjusted log topics being the object keys, and
 the adjusted log levels being the object values.

-It can set the loglevel of all facilities by only specifying the loglevel as string without json.
+It can set the log level of all facilities by only specifying the log level as string without json.

-Possible loglevels are:
+Possible log levels are:
 - FATAL - There will be no way out of this. ArangoDB will go down after this message.
 - ERROR - This is an error. you should investigate and fix it. It may harm your production.
 - WARNING - This may be serious application-wise, but we don't know.
@@ -121,121 +121,121 @@ Possible loglevels are:
 - TRACE - trace - prepare your log to be flooded - don't use in production.

 @RESTBODYPARAM{agency,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{agencycomm,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{authentication,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{authorization,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{cache,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{cluster,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{collector,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{communication,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{compactor,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{config,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{datafiles,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{development,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{engines,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{general,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{graphs,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{heartbeat,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{memory,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{mmap,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{performance,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{pregel,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{queries,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{replication,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{requests,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{rocksdb,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{ssl,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{startup,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{supervision,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{syscall,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{threads,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{trx,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{v8,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{views,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{ldap,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{audit-authentication,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{audit-database,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{audit-collection,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{audit-view,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{audit-documentation,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTBODYPARAM{audit-service,string,optional,string}
-One of the possible loglevels.
+One of the possible log levels.

 @RESTRETURNCODES

@@ -22,10 +22,10 @@ Possible return values for *role* are:
 Is returned in all cases.

 @RESTREPLYBODY{error,boolean,required,}
-allways *false*
+always *false*

 @RESTREPLYBODY{code,integer,required,int64}
-the HTTP status code, allways 200
+the HTTP status code, always 200

 @RESTREPLYBODY{errorNum,integer,required,int64}
 the server error number
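
A quick way to see the reply body documented above, from arangosh (assuming the built-in `arango` connection object):

```js
var reply = arango.GET("/_admin/server/role");
print(reply.role, reply.error, reply.code); // e.g. "SINGLE", false, 200
```
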
@@ -66,14 +66,14 @@ VSS of the process


 @RESTREPLYBODY{client,object,required,client_statistics_struct}
-informations about the connected clients and their resource usage
+information about the connected clients and their resource usage


 @RESTSTRUCT{sum,setof_statistics_struct,number,required,}
-sumarized value of all counts
+summarized value of all counts

 @RESTSTRUCT{count,setof_statistics_struct,integer,required,}
-number of values sumarized
+number of values summarized

 @RESTSTRUCT{counts,setof_statistics_struct,array,required,integer}
 array containing the values
@@ -113,7 +113,7 @@ the numbers of requests by Verb
 total number of http requests

 @RESTSTRUCT{requestsAsync,http_statistics_struct,integer,required,}
-total number of asynchroneous http requests
+total number of asynchronous http requests

 @RESTSTRUCT{requestsGet,http_statistics_struct,integer,required,}
 No of requests using the GET-verb
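
The structs above describe parts of the `/_admin/statistics` response. A sketch of reading them from arangosh (field names follow the statistics documentation):

```js
var stats = arango.GET("/_admin/statistics");

// a setof_statistics_struct: the summarized value and the number of values
print(stats.client.connectionTime.sum, stats.client.connectionTime.count);

// http statistics, including the asynchronous request counter
print(stats.http.requestsTotal, stats.http.requestsAsync);
```
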
@@ -25,7 +25,7 @@ the HTTP status code - 200
 A list of active cluster endpoints.

 @RESTSTRUCT{endpoint,cluster_endpoints_struct,string,required,}
-The bind of the coordinaror, like `tcp://[::1]:8530`
+The bind of the coordinator, like `tcp://[::1]:8530`


 @RESTRETURNCODE{403} server is not a coordinator or method was not GET.

@@ -47,7 +47,7 @@ this task should run each `period` seconds
 time offset in seconds from the created timestamp

 @RESTREPLYBODY{command,string,required,}
-the javascript function for this dask
+the javascript function for this task

 @RESTREPLYBODY{database,string,required,}
 the database this task belongs to
@@ -22,7 +22,7 @@ the figures of the collection.
 Additionally, the filesizes of collection and index parameter JSON files are
 not reported. These files should normally have a size of a few bytes
 each. Please also note that the *fileSize* values are reported in bytes
-and reflect the logical file sizes. Some filesystems may use optimisations
+and reflect the logical file sizes. Some filesystems may use optimizations
 (e.g. sparse files) so that the actual physical file size is somewhat
 different. Directories and sub-directories may also require space in the
 file system, but this space is not reported in the *fileSize* results.
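
The figures documented above are also reachable from arangosh (the collection name is illustrative):

```js
var figures = db._collection("myCollection").figures();

// fileSize values are logical sizes in bytes, as the text above notes;
// the datafiles group is reported by the MMFiles engine
print(figures.datafiles.fileSize);
```
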
@@ -80,7 +80,7 @@ Specify *true* and the query will be executed in a **streaming** fashion. The qu
 not stored on the server, but calculated on the fly. *Beware*: long-running queries will
 need to hold the collection locks for as long as the query cursor exists.
 When set to *false* a query will be executed right away in its entirety.
-In that case query results are either returned right away (if the resultset is small enough),
+In that case query results are either returned right away (if the result set is small enough),
 or stored on the arangod instance and accessible via the cursor API (with respect to the `ttl`).
 It is advisable to *only* use this option on short-running queries or without exclusive locks
 (write-locks on MMFiles).
@@ -16,7 +16,7 @@ new database will be accessible after it is created.
 Each user object can contain the following attributes:

 @RESTSTRUCT{username,get_api_database_new_USERS,string,required,string}
-Loginname of the user to be created
+Login name of the user to be created

 @RESTSTRUCT{passwd,get_api_database_new_USERS,string,required,string}
 The user password as a string. If not specified, it will default to an empty string.
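
The user objects described above are what arangosh's `db._createDatabase()` accepts as its third argument; the names and password here are illustrative:

```js
db._createDatabase("example", {}, [
  { username: "admin", passwd: "secret", active: true }
]);
```
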
@@ -18,7 +18,7 @@ name of the collection that contains the edges.
 @RESTBODYPARAM{graphName,string,optional,string}
 name of the graph that contains the edges.
 Either *edgeCollection* or *graphName* has to be given.
-In case both values are set the *graphName* is prefered.
+In case both values are set the *graphName* is preferred.

 @RESTBODYPARAM{filter,string,optional,string}
 default is to include all nodes:
@@ -29,7 +29,7 @@ can return four different string values:
 - *"prune"* -> the edges of this vertex will not be followed.
 - *""* or *undefined* -> visit the vertex and follow its edges.
 - *Array* -> containing any combination of the above.
-If there is at least one *"exclude"* or *"prune"* respectivly
+If there is at least one *"exclude"* or *"prune"* respectively
 is contained, it's effect will occur.

 @RESTBODYPARAM{minDepth,string,optional,string}
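
The filter semantics documented above ("exclude", "prune", "" or an array of these) apply to the JavaScript filter code posted to the traversal API. A sketch from arangosh, assuming the built-in `arango` connection object; vertex and edge collection names are illustrative:

```js
// body of the custom filter function; it sees config, vertex and path
var filter = 'if (vertex.hidden) { return "exclude"; } ' +
             'if (path.edges.length > 2) { return "prune"; } ' +
             'return "";';

var result = arango.POST("/_api/traversal", JSON.stringify({
  startVertex: "persons/alice",
  edgeCollection: "knows", // graphName would be preferred if both were given
  direction: "outbound",
  filter: filter
}));
```
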
@@ -70,7 +70,7 @@ may be empty when called on a coordinator in a cluster.
 Additionally, the filesizes of collection and index parameter JSON files are
 not reported. These files should normally have a size of a few bytes
 each. Please also note that the *fileSize* values are reported in bytes
-and reflect the logical file sizes. Some filesystems may use optimisations
+and reflect the logical file sizes. Some filesystems may use optimizations
 (e.g. sparse files) so that the actual physical file size is somewhat
 different. Directories and sub-directories may also require space in the
 file system, but this space is not reported in the *fileSize* results.

@@ -301,11 +301,11 @@ combining role and type.
 \end{code}

 \motivation{
-English is the prefered language for international development.
+English is the preferred language for international development.
 }

 %~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-\guideline{The length of a name should corresponde to the scope}
+\guideline{The length of a name should correspond to the scope}
 %~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Variables with a large scope should have long names, variables with a small
@@ -439,7 +439,7 @@ established.
 \end{code}

 \motivation{
-The american initialize should be preferred over the british initialise. The
+The American initialize should be preferred over the British initialise. The
 abbreviation \trcode{init} should be avoided.
 }

@@ -1131,7 +1131,7 @@ Done:
 StrCmp $PURGE_DB "0" dontDeleteDatabases
 DetailPrint 'Removing database files from $DATADIR: '
 RMDir /r "$DATADIR"
-DetailPrint 'Removing foxx services files from $APPDIR: '
+DetailPrint 'Removing Foxx services files from $APPDIR: '
 RMDir /r "$APPDIR"
 RMDir /r "$INSTDIR\var\lib\arangodb3-apps"
 RMDir "$INSTDIR\var\lib"

@@ -112,7 +112,7 @@ A sample version to help working with the arangod rescue console may look like t
 };
 print = internal.print;

-HINT: You shouldn't lean on these variables in your foxx services.
+HINT: You shouldn't lean on these variables in your Foxx services.
 ______________________________________________________________________________________________________

 JSLint
@@ -347,8 +347,8 @@ syntax --option value --sub:option value. Using Valgrind could look like this:
 - we specify some arangod arguments via --extraArgs which increase the server performance
 - we specify to run using valgrind (this is supported by all facilities)
 - we specify some valgrind commandline arguments
-- we set the loglevel to debug
-- we force the logging not to happen asynchroneous
+- we set the log level to debug
+- we force the logging not to happen asynchronous
 - eventually you may still add temporary `console.log()` statements to tests you debug.

 Debugging AQL execution blocks
@@ -385,7 +385,7 @@ Running a test against a ready started server (in contrast to starting one by it
 **scripts/unittest** is mostly only a wrapper; The backend functionality lives in:
 **js/client/modules/@arangodb/testing.js**

-Running foxx tests with a fake foxx Repo
+Running Foxx tests with a fake Foxx Repo
 ----------------------------------------
 Since downloading fox apps from github can be cumbersome with shaky DSL
 and DOS'ed github, we can fake it like this:
@@ -156,7 +156,7 @@ class Constituent : public Thread {
 std::string _id; // My own id

 // Last time an AppendEntriesRPC message has arrived, this is used to
-// organise out-of-patience in the follower:
+// organize out-of-patience in the follower:
 std::atomic<double> _lastHeartbeatSeen;

 role_t _role; // My role

@@ -40,7 +40,7 @@ namespace consensus {

 class Agent;

-/// @brief This class organises the startup of the agency until the point
+/// @brief This class organizes the startup of the agency until the point
 /// where the RAFT implementation can commence function
 class Inception : public Thread {


@@ -119,7 +119,7 @@ Slice Node::slice() const {
 }


-/// @brief Optimisation, which avoids recreating of Builder for output if
+/// @brief Optimization, which avoids recreating of Builder for output if
 /// changes have not happened since last call
 void Node::rebuildVecBuf() const {
 if (_vecBufDirty) { // Dirty vector buffer

@@ -77,7 +77,7 @@ class Store;
 /// Nodes are are always constructed as element and can become an array through
 /// assignment operator.
 /// toBuilder(Builder&) will create a _vecBuf, when needed as a means to
-/// optimisation by avoiding to build it before necessary.
+/// optimization by avoiding to build it before necessary.
 class Node {
 public:
 /// @brief Slash-segmented path
@@ -153,10 +153,10 @@ class GraphNode : public ExecutionNode {
 /// @brief Reference to the pseudo variable
 AstNode* _tmpObjVarNode;

-/// @brief Pseudo string value node to hold the last visted vertex id.
+/// @brief Pseudo string value node to hold the last visited vertex id.
 AstNode* _tmpIdNode;

-/// @brief input graphInfo only used for serialisation & info
+/// @brief input graphInfo only used for serialization & info
 arangodb::velocypack::Builder _graphInfo;

 /// @brief the edge collection names
@@ -171,7 +171,7 @@ class GraphNode : public ExecutionNode {
 /// @brief Options for traversals
 std::unique_ptr<graph::BaseOptions> _options;

-/// @brief Pseudo string value node to hold the last visted vertex id.
+/// @brief Pseudo string value node to hold the last visited vertex id.
 /// @brief Flag if the options have been build.
 /// Afterwards this class is not copyable anymore.
 bool _optionsBuilt;

@@ -148,7 +148,7 @@ static std::shared_ptr<VPackBuilder> QueryAllUsers(
 }

 /// Convert documents from _system/_users into the format used in
-/// the REST user API and foxx
+/// the REST user API and Foxx
 static void ConvertLegacyFormat(VPackSlice doc, VPackBuilder& result) {
 if (doc.isExternal()) {
 doc = doc.resolveExternals();
@@ -162,7 +162,7 @@ static void ConvertLegacyFormat(VPackSlice doc, VPackBuilder& result) {
 }

 // private, will acquire _userCacheLock in write-mode and release it.
-// will also aquire _loadFromDBLock and release it
+// will also acquire _loadFromDBLock and release it
 void auth::UserManager::loadFromDB() {
 TRI_ASSERT(_queryRegistry != nullptr);
 TRI_ASSERT(ServerState::instance()->isSingleServerOrCoordinator());
|
|||
/// }
|
||||
///
|
||||
/// In this way, the mutex of the condition variable can at the same time
|
||||
/// organise mutual exclusion of the callback function and the checking of
|
||||
/// organize mutual exclusion of the callback function and the checking of
|
||||
/// the termination condition in the main thread.
|
||||
/// The wait for condition variable can conveniently be done with the
|
||||
/// method executeByCallbackOrTimeout below.
|
||||
|
|
|
@ -901,7 +901,7 @@ void HeartbeatThread::runCoordinator() {
|
|||
updateServerMode(readOnlySlice);
|
||||
}
|
||||
|
||||
// the foxx stuff needs an updated list of coordinators
|
||||
// the Foxx stuff needs an updated list of coordinators
|
||||
// and this is only updated when current version has changed
|
||||
if (invalidateCoordinators) {
|
||||
ClusterInfo::instance()->invalidateCurrentCoordinators();
|
||||
|
@ -1002,7 +1002,7 @@ void HeartbeatThread::dispatchedJobResult(DBServerAgencySyncResult result) {
|
|||
}
|
||||
}
|
||||
if (doSleep) {
|
||||
// Sleep a little longer, since this might be due to some synchronisation
|
||||
// Sleep a little longer, since this might be due to some synchronization
|
||||
// of shards going on in the background
|
||||
std::this_thread::sleep_for(std::chrono::microseconds(500000));
|
||||
std::this_thread::sleep_for(std::chrono::microseconds(500000));
|
||||
|
|
|
@ -575,4 +575,4 @@ NS_END // arangodb
|
|||
|
||||
// -----------------------------------------------------------------------------
|
||||
// --SECTION-- END-OF-FILE
|
||||
// -----------------------------------------------------------------------------
|
||||
// -----------------------------------------------------------------------------
|
||||
|
|
|
@@ -1710,7 +1710,7 @@ PrimaryKeyIndexReader* IResearchView::snapshot(
 << "failed to sync while creating snapshot for IResearch view '" << name() << "', previous snapshot will be used instead";
 }

-auto cookiePtr = irs::memory::make_unique<ViewStateRead>(_asyncSelf->mutex()); // will aquire read-lock to prevent data-store deallocation
+auto cookiePtr = irs::memory::make_unique<ViewStateRead>(_asyncSelf->mutex()); // will acquire read-lock to prevent data-store deallocation
 auto& reader = cookiePtr->_snapshot;

 if (!_asyncSelf->get()) {
@@ -2013,4 +2013,4 @@ NS_END // arangodb

 // -----------------------------------------------------------------------------
-// --SECTION-- END-OF-FILE
+// -----------------------------------------------------------------------------
 // -----------------------------------------------------------------------------

@@ -306,7 +306,7 @@ class MMFilesSkiplist {
 }

 // Now the element is successfully inserted, the rest is performance
-// optimisation:
+// optimization:
 for (lev = 1; lev < newNode->_height; lev++) {
 newNode->_next[lev] = pos[lev]->_next[lev];
 pos[lev]->_next[lev] = newNode;
@@ -349,7 +349,7 @@ class MMFilesSkiplist {
 // Now delete where next points to:
 for (lev = next->_height - 1; lev >= 0; lev--) {
 // Note the order from top to bottom. The element remains in the
-// skiplist as long as we are at a level > 0, only some optimisations
+// skiplist as long as we are at a level > 0, only some optimizations
 // in performance vanish before that. Only when we have removed it at
 // level 0, it is really gone.
 pos[lev]->_next[lev] = next->_next[lev];
@@ -357,7 +357,7 @@ int MMFilesWalSlots::returnUsed(MMFilesWalSlotInfo& slotInfo, bool wakeUpSynchro
 return TRI_ERROR_NO_ERROR;
 }

-/// @brief get the next synchronisable region
+/// @brief get the next synchronizable region
 MMFilesWalSyncRegion MMFilesWalSlots::getSyncRegion() {
 bool sealRequested = false;
 MMFilesWalSyncRegion region;

@@ -1005,7 +1005,7 @@ int TRI_RemoveWordsMMFilesFulltextIndex(TRI_fts_index_t* ftx,
 if (prev != nullptr) {
 // check if current word has a shared/common prefix with the previous word
 // inserted
-// in case this is true, we can use an optimisation and do not need to
+// in case this is true, we can use an optimization and do not need to
 // traverse the
 // tree from the root again. instead, we just start at the node at the end
 // of the
@@ -1120,7 +1120,7 @@ int TRI_InsertWordsMMFilesFulltextIndex(TRI_fts_index_t* ftx,
 if (prev != nullptr) {
 // check if current word has a shared/common prefix with the previous word
 // inserted
-// in case this is true, we can use an optimisation and do not need to
+// in case this is true, we can use an optimization and do not need to
 // traverse the
 // tree from the root again. instead, we just start at the node at the end
 // of the
@@ -25,7 +25,7 @@

 /// @brief we'll set this bit (the highest of a uint32_t) if the list is sorted
 /// if the list is not sorted, this bit is cleared
-/// This is done as a space optimisation. A big index will contain a lot of
+/// This is done as a space optimization. A big index will contain a lot of
 /// document lists, and saving an extra boolean value will likely cost an extra
 /// 4 or 8 bytes due to padding. Avoiding saving the sorted flag in an extra
 /// member greatly reduces the index sizes

@@ -102,7 +102,7 @@ class VectorTypedBuffer : public TypedBuffer<T> {

 void appendEmptyElement() {
 _vector.push_back(T());
-this->_ptr = _vector.data(); // might change adress
+this->_ptr = _vector.data(); // might change address
 }
 };

@ -278,7 +278,7 @@ void RestAdminLogHandler::setLogLevel() {
|
|||
auto const type = _request->requestType();
|
||||
|
||||
if (type == rest::RequestType::GET) {
|
||||
// report loglevel
|
||||
// report log level
|
||||
VPackBuilder builder;
|
||||
builder.openObject();
|
||||
auto const& levels = Logger::logLevelTopics();
|
||||
|
@ -289,7 +289,7 @@ void RestAdminLogHandler::setLogLevel() {
|
|||
|
||||
generateResult(rest::ResponseCode::OK, builder.slice());
|
||||
} else if (type == rest::RequestType::PUT) {
|
||||
// set loglevel
|
||||
// set log level
|
||||
bool parseSuccess = false;
|
||||
VPackSlice slice = this->parseVPackBody(parseSuccess);
|
||||
if (!parseSuccess) {
|
||||
|
@ -307,7 +307,7 @@ void RestAdminLogHandler::setLogLevel() {
|
|||
}
|
||||
}
|
||||
|
||||
// now report current loglevel
|
||||
// now report current log level
|
||||
VPackBuilder builder;
|
||||
builder.openObject();
|
||||
auto const& levels = Logger::logLevelTopics();
|
||||
|
|
|
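A hedged arangosh sketch of exercising this handler, assuming the shell's standard `arango` connection object and that the body accepts a topic-to-level map:

```js
// read the current per-topic log levels, e.g. { general: "INFO", ... }
const current = arango.GET('/_admin/log/level');

// raise one topic; the handler then reports the resulting levels back
arango.PUT('/_admin/log/level', { requests: 'DEBUG' });
```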
@@ -346,7 +346,7 @@ class RestReplicationHandler : public RestVocbaseBaseHandler {

 //////////////////////////////////////////////////////////////////////////////
 /// SECTION:
-/// Functions to be implemented by specialisation
+/// Functions to be implemented by specialization
 //////////////////////////////////////////////////////////////////////////////

 //////////////////////////////////////////////////////////////////////////////

@@ -216,7 +216,7 @@ class RocksDBCuckooIndexEstimator {
 default: {
 LOG_TOPIC(WARN, arangodb::Logger::ENGINES)
 << "unable to restore index estimates: invalid format found";
-// Do not construct from serialisation, use other constructor instead
+// Do not construct from serialization, use other constructor instead
 THROW_ARANGO_EXCEPTION(TRI_ERROR_INTERNAL);
 }
 }

@@ -2926,7 +2926,7 @@ bool transaction::Methods::getBestIndexHandleForFilterCondition(

 auto indexes = indexesForCollection(collectionName);

-// Const cast is save here. Giving computeSpecialisation == false
+// Const cast is save here. Giving computeSpecialization == false
 // Makes sure node is NOT modified.
 return findIndexHandleForAndNode(indexes, node, reference, itemsInCollection,
 usedIndex);

@@ -337,7 +337,7 @@ UpgradeResult methods::Upgrade::runTasks(

 // check that the database occurs in the database list
 if (!(t.databaseFlags & dbFlag)) {
-// special optimisation: for local server and new database,
+// special optimization: for local server and new database,
 // an upgrade-only task can be viewed as executed.
 if (isLocal && dbFlag == DATABASE_INIT &&
 t.databaseFlags == DATABASE_UPGRADE) {

@@ -97,11 +97,11 @@ namespace {
 T& mutex,
 std::atomic<std::thread::id>& owner,
 arangodb::basics::LockerType type,
-bool aquire,
+bool acquire,
 char const* file,
 int line
 ): _locker(&mutex, type, false, file, line), _owner(owner), _update(noop) {
-if (aquire) {
+if (acquire) {
 lock();
 }
 }

@@ -147,7 +147,7 @@ namespace {
 #define NAME_EXPANDER__(name, line) NAME__(name, line)
 #define NAME(name) NAME_EXPANDER__(name, __LINE__)
 #define RECURSIVE_READ_LOCKER(lock, owner) RecursiveReadLocker<typename std::decay<decltype (lock)>::type> NAME(RecursiveLocker)(lock, owner, __FILE__, __LINE__)
-#define RECURSIVE_WRITE_LOCKER_NAMED(name, lock, owner, aquire) RecursiveWriteLocker<typename std::decay<decltype (lock)>::type> name(lock, owner, arangodb::basics::LockerType::BLOCKING, aquire, __FILE__, __LINE__)
+#define RECURSIVE_WRITE_LOCKER_NAMED(name, lock, owner, acquire) RecursiveWriteLocker<typename std::decay<decltype (lock)>::type> name(lock, owner, arangodb::basics::LockerType::BLOCKING, acquire, __FILE__, __LINE__)
 #define RECURSIVE_WRITE_LOCKER(lock, owner) RECURSIVE_WRITE_LOCKER_NAMED(NAME(RecursiveLocker), lock, owner, true)

 }

@@ -1,7 +1,7 @@
 /* jshint strict: false */

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief foxx administration actions
+// / @brief Foxx administration actions
 // /
 // / @file
 // /

@@ -324,7 +324,7 @@ actions.defineHttp({
 });

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Toggles the development mode of a foxx service
+// / @brief Toggles the development mode of a Foxx service
 // //////////////////////////////////////////////////////////////////////////////

 actions.defineHttp({

@@ -67,11 +67,11 @@ foxxRouter.use(installer)
 Flag to install the service in legacy mode.
 `)
 .queryParam('upgrade', joi.boolean().default(false), dd`
-Flag to upgrade the service installed at the mountpoint.
+Flag to upgrade the service installed at the mount point.
 Triggers setup.
 `)
 .queryParam('replace', joi.boolean().default(false), dd`
-Flag to replace the service installed at the mountpoint.
+Flag to replace the service installed at the mount point.
 Triggers teardown and setup.
 `);


@@ -366,7 +366,7 @@ router.get('/fishbowl', function (req, res) {
 }
 res.json(store.availableJson());
 })
-.summary('List of all foxx services submitted to the Foxx store.')
+.summary('List of all Foxx services submitted to the Foxx store.')
 .description(dd`
 This function contacts the fishbowl and reports which services are available for install.
 `);

@@ -398,7 +398,7 @@ anonymousRouter.get('/download/zip', function (req, res) {
 .queryParam('nonce', joi.string().required(), 'Cryptographic nonce that authorizes the download.')
 .summary('Download a service as zip archive')
 .description(dd`
-Download a foxx service packed in a zip archive.
+Download a Foxx service packed in a zip archive.
 `);

 anonymousRouter.use('/docs/standalone', module.context.createDocumentationRouter((req, res) => {

@@ -1168,7 +1168,7 @@
 tableContent.push(
 window.modalView.createTextEntry(
 'new-app-mount',
-'Mountpoint',
+'Mount point',
 mountPoint,
 'The path the app will be mounted. Is not allowed to start with _',
 'mountpoint',

@@ -16,7 +16,7 @@
 strong = document.createElement("strong");
 strong.appendChild(
 document.createTextNode(
-"Sorry App not found. Please use the query param path and submit the urlEncoded mountpoint of an App."
+"Sorry App not found. Please use the query param path and submit the urlEncoded mount point of an App."
 )
 );
 div.appendChild(strong);

@@ -87,7 +87,7 @@
 }, 'The Add App dialog should be hidden.', 750);
 });

-it('should offer the mountpoint input but not the run teardown', function () {
+it('should offer the mount point input but not the run teardown', function () {
 runs(function () {
 expect($('#new-app-mount').length).toEqual(1);
 expect($('#new-app-teardown').length).toEqual(0);

@@ -268,7 +268,7 @@
 }, 'The Add App dialog should be hidden.', 750);
 });

-it('should not offer the mountpoint input but the run teardown', function () {
+it('should not offer the mount point input but the run teardown', function () {
 runs(function () {
 expect($('#new-app-mount').length).toEqual(0);
 expect($('#new-app-teardown').length).toEqual(1);

@@ -438,7 +438,7 @@
 }, 'The Add App dialog should be hidden.', 750);
 });

-it('should not offer the mountpoint input but the run teardown', function () {
+it('should not offer the mount point input but the run teardown', function () {
 runs(function () {
 expect($('#new-app-mount').length).toEqual(0);
 expect($('#new-app-teardown').length).toEqual(1);

@@ -128,26 +128,26 @@ var help = function () {
 /* jshint maxlen: 200 */
 var commands = {
 'available': 'lists all Foxx services available in the local repository',
-'configuration': 'request the configuration information for the given mountpoint',
-'configure': 'sets the configuration for the given mountpoint',
-'updateDeps': 'links the dependencies in manifest to a mountpoint',
-'dependencies': 'request the dependencies information for the given mountpoint',
-'development': 'activates development mode for the given mountpoint',
+'configuration': 'request the configuration information for the given mount point',
+'configure': 'sets the configuration for the given mount point',
+'updateDeps': 'links the dependencies in manifest to a mount point',
+'dependencies': 'request the dependencies information for the given mount point',
+'development': 'activates development mode for the given mount point',
 'help': 'shows this help',
 'info': 'displays information about a Foxx service',
-'install': 'installs a foxx service identified by the given information to the given mountpoint',
+'install': 'installs a Foxx service identified by the given information to the given mount point',
 'installed': "alias for the 'list' command",
 'list': 'lists all installed Foxx services',
-'production': 'activates production mode for the given mountpoint',
+'production': 'activates production mode for the given mount point',
 'replace': ['replaces an installed Foxx service',
 'WARNING: this action will remove service data if the service implements teardown!' ],
-'run': 'runs the given script of a foxx service mounted at the given mountpoint',
+'run': 'runs the given script of a Foxx service mounted at the given mount point',
 'search': 'searches the local foxx-apps repository',
-'set-dependencies': 'sets the dependencies for the given mountpoint',
+'set-dependencies': 'sets the dependencies for the given mount point',
 'setup': 'executes the setup script',
 'teardown': [ 'executes the teardown script',
 'WARNING: this action will remove service data if the service implements teardown!' ],
-'tests': 'runs the tests of a foxx service mounted at the given mountpoint',
+'tests': 'runs the tests of a Foxx service mounted at the given mount point',
 'uninstall': ['uninstalls a Foxx service and calls its teardown method',
 'WARNING: this will remove all data and code of the service!' ],
 'update': 'updates the local foxx-apps repository with data from the central foxx-apps repository',
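The same operations can be driven from the server-side module the CLI wraps; a hedged sketch, assuming the commands above map to functions of the same name on `@arangodb/foxx/manager` (the path and mount point are placeholders):

```js
const foxxManager = require('@arangodb/foxx/manager');

foxxManager.install('/path/to/my-service', '/my-mount'); // runs setup
foxxManager.development('/my-mount');                    // toggle development mode
foxxManager.uninstall('/my-mount');                      // runs teardown, removes the service
```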
@@ -255,7 +255,7 @@ var moveAppToServer = function (serviceInfo) {
 };

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Installs a new foxx service on the given mount point.
+// / @brief Installs a new Foxx service on the given mount point.
 // /
 // / TODO: Long Documentation!
 // //////////////////////////////////////////////////////////////////////////////

@@ -288,7 +288,7 @@ var install = function (serviceInfo, mount, options) {
 };

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Uninstalls the foxx service on the given mount point.
+// / @brief Uninstalls the Foxx service on the given mount point.
 // /
 // / TODO: Long Documentation!
 // //////////////////////////////////////////////////////////////////////////////

@@ -314,7 +314,7 @@ var uninstall = function (mount, options) {
 };

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Replaces a foxx service on the given mount point by an other one.
+// / @brief Replaces a Foxx service on the given mount point by an other one.
 // /
 // / TODO: Long Documentation!
 // //////////////////////////////////////////////////////////////////////////////

@@ -428,7 +428,7 @@ var production = function (mount) {
 };

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Configure the service at the mountpoint
+// / @brief Configure the service at the mount point
 // //////////////////////////////////////////////////////////////////////////////

 var configure = function (mount, options) {

@@ -451,7 +451,7 @@ var configure = function (mount, options) {
 };

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Get the configuration for the service at the given mountpoint
+// / @brief Get the configuration for the service at the given mount point
 // //////////////////////////////////////////////////////////////////////////////

 var configuration = function (mount) {

@@ -468,7 +468,7 @@ var configuration = function (mount) {
 return res;
 };
 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Link Dependencies to the installed mountpoint the service at the mountpoint
+// / @brief Link Dependencies to the installed mount point the service at the mount point
 // //////////////////////////////////////////////////////////////////////////////

 var updateDeps = function (mount, options) {

@@ -490,7 +490,7 @@ var updateDeps = function (mount, options) {
 };
 };
 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Configure the dependencies of the service at the mountpoint
+// / @brief Configure the dependencies of the service at the mount point
 // //////////////////////////////////////////////////////////////////////////////

 var setDependencies = function (mount, options) {

@@ -513,7 +513,7 @@ var setDependencies = function (mount, options) {
 };

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Get the dependencies of the service at the given mountpoint
+// / @brief Get the dependencies of the service at the given mount point
 // //////////////////////////////////////////////////////////////////////////////

 var dependencies = function (mount) {

@@ -734,7 +734,7 @@ exports.run = run;
 exports.help = help;

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Exports from foxx utils module.
+// / @brief Exports from Foxx utils module.
 // //////////////////////////////////////////////////////////////////////////////

 exports.list = utils.list;

@@ -742,7 +742,7 @@ exports.listDevelopment = utils.listDevelopment;
 exports.listDevelopmentJson = utils.listDevelopmentJson;

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Exports from foxx store module.
+// / @brief Exports from Foxx store module.
 // //////////////////////////////////////////////////////////////////////////////

 exports.available = store.available;

@@ -29,7 +29,7 @@ const functionsDocumentation = {
 'foxx_manager': 'foxx manager tests'
 };
 const optionsDocumentation = [
-' - `skipFoxxQueues`: omit the test for the foxx queues'
+' - `skipFoxxQueues`: omit the test for the Foxx queues'
 ];

 const pu = require('@arangodb/process-utils');

@@ -84,7 +84,7 @@ describe('Foxx service', () => {
 FILTER queue._key != 'default'
 RETURN queue
 `).toArray();
-expect(queuesAfter.length - queuesBefore.length).to.equal(1, 'Could not register foxx queue');
+expect(queuesAfter.length - queuesBefore.length).to.equal(1, 'Could not register Foxx queue');
 });

 it('should not register a queue two times', () => {

@@ -107,7 +107,7 @@ describe('Foxx service', () => {
 method: 'post'
 });
 expect(res.code).to.equal(204);
-expect(waitForJob()).to.equal(true, 'job from foxx queue did not run!');
+expect(waitForJob()).to.equal(true, 'job from Foxx queue did not run!');
 const jobResult = db._query(aql`
 FOR i IN foxx_queue_test
 FILTER i.job == true

@@ -96,7 +96,7 @@ describe('User Rights Management', () => {
 } catch (e) {}
 });

-it('register a foxx service', () => {
+it('register a Foxx service', () => {
 if (dbLevel['rw'].has(name)) {
 try {
 foxxManager.install(fs.join(basePath, 'minimal-working-service'), mount);

@@ -112,8 +112,8 @@ describe('User Rights Management', () => {
 } catch (e) {
 if (e.errorNum === errors.ERROR_ARANGO_READ_ONLY.code ||
 e.errorNum === errors.ERROR_FORBIDDEN.code) {
-expect(false).to.be.equal(true, `${name} could not register foxx service with sufficient rights`);
-} // FIXME: workarkound ignore all other errors for now
+expect(false).to.be.equal(true, `${name} could not register Foxx service with sufficient rights`);
+} // FIXME: workaround ignore all other errors for now
 }
 } else {
 try {

@@ -128,7 +128,7 @@ describe('User Rights Management', () => {
 FILTER service.mount == ${mount}
 RETURN service.checksum
 `).toArray().length;
-expect(size).to.equal(0, `${name} could register foxx service with insufficient rights`);
+expect(size).to.equal(0, `${name} could register Foxx service with insufficient rights`);
 }
 });
 });

@@ -147,7 +147,7 @@ function FoxxmasterSuite() {
 assertTrue(continueExternal(instance.pid));
 // mop: currently supervision would run every 5s
 if (!ok) {
-throw new Error('Supervision should have moved the foxxqueues and foxxqueues should have been started to run on a new coordinator');
+throw new Error('Supervision should have moved the Foxx queues and Foxx queues should have been started to run on a new coordinator');
 }
 }
 };

@@ -1173,7 +1173,7 @@ describe('Foxx service', () => {
 expect(service).to.have.property('legacy', false);
 });

-it('informations should be returned', () => {
+it('information should be returned', () => {
 FoxxManager.install(basePath, mount);
 const resp = request.get('/_api/foxx/service', {qs: {mount}});
 const service = resp.json;

@@ -2,7 +2,7 @@
 'use strict';

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Spec for foxx manager
+// / @brief Spec for Foxx manager
 // /
 // / @file
 // /

@@ -61,7 +61,7 @@ describe('Foxx Manager', function () {
 db._dropDatabase('tmpFMDB2');
 });

-it('should allow to install apps on same mountpoint', function () {
+it('should allow to install apps on same mount point', function () {
 const download = require('internal').download;
 arango.reconnect(originalEndpoint, 'tmpFMDB', 'root', '');
 expect(function () {

@@ -73,7 +73,7 @@ describe('Foxx service', () => {
 FOR queue IN _queues
 RETURN queue
 `).toArray();
-expect(queuesAfter.length - queuesBefore.length).to.equal(1, 'Could not register foxx queue');
+expect(queuesAfter.length - queuesBefore.length).to.equal(1, 'Could not register Foxx queue');
 });
 it('should not register a queue two times', () => {
 const queuesBefore = db._query(aql`

@@ -172,7 +172,7 @@ function validateMount (mount, internal) {
 if (mount[0] !== '/') {
 throw new ArangoError({
 errorNum: errors.ERROR_INVALID_MOUNTPOINT.code,
-errorMessage: 'Mountpoint has to start with /.'
+errorMessage: 'Mount point has to start with /.'
 });
 }
 if (!mountRegEx.test(mount)) {

@@ -180,7 +180,7 @@ function validateMount (mount, internal) {
 if (!internal || mount.length !== 1) {
 throw new ArangoError({
 errorNum: errors.ERROR_INVALID_MOUNTPOINT.code,
-errorMessage: 'Mountpoint can only contain a-z, A-Z, 0-9 or _.'
+errorMessage: 'Mount point can only contain a-z, A-Z, 0-9 or _.'
 });
 }
 }

@@ -197,13 +197,13 @@ function validateMount (mount, internal) {
 if (mountNumberRegEx.test(mount)) {
 throw new ArangoError({
 errorNum: errors.ERROR_INVALID_MOUNTPOINT.code,
-errorMessage: 'Mointpoints are not allowed to start with a number, - or %.'
+errorMessage: 'Mount points are not allowed to start with a number, - or %.'
 });
 }
 if (mountAppRegEx.test(mount)) {
 throw new ArangoError({
 errorNum: errors.ERROR_INVALID_MOUNTPOINT.code,
-errorMessage: 'Mountpoint is not allowed to contain /app/.'
+errorMessage: 'Mount point is not allowed to contain /app/.'
 });
 }
 }
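A condensed sketch of the validation rules enforced across the three `validateMount` hunks above; the regexes are illustrative approximations of `mountRegEx`, `mountNumberRegEx` and `mountAppRegEx`, not their actual definitions:

```js
function checkMount (mount) {
  if (mount[0] !== '/') {
    throw new Error('Mount point has to start with /.');
  }
  if (!/^\/[a-zA-Z0-9_\/]+$/.test(mount)) {
    throw new Error('Mount point can only contain a-z, A-Z, 0-9 or _.');
  }
  if (/^\/[0-9%-]/.test(mount)) {
    throw new Error('Mount points are not allowed to start with a number, - or %.');
  }
  if (/\/app\//.test(mount)) {
    throw new Error('Mount point is not allowed to contain /app/.');
  }
}
```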
@@ -219,7 +219,7 @@ function getServiceDefinition (mount) {
 }

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Update the app installed at this mountpoint with the new app
+// / @brief Update the app installed at this mount point with the new app
 // //////////////////////////////////////////////////////////////////////////////

 function updateService (mount, update) {

@@ -248,7 +248,7 @@ function joinLastPath (tempPath) {
 }

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief creates a zip archive of a foxx app. Returns the absolute path
+// / @brief creates a zip archive of a Foxx app. Returns the absolute path
 // //////////////////////////////////////////////////////////////////////////////
 function zipDirectory (directory, zipFilename) {
 if (!fs.isDirectory(directory)) {

@@ -160,7 +160,7 @@ var updateFishbowlFromZip = function (filename) {
 }
 });

-require('console').debug('Updated local foxx repository with ' + toSave.length + ' service(s)');
+require('console').debug('Updated local Foxx repository with ' + toSave.length + ' service(s)');
 }
 } catch (err) {
 if (tempPath !== undefined && tempPath !== '') {

@@ -2,7 +2,7 @@
 'use strict';

 // //////////////////////////////////////////////////////////////////////////////
-// / @brief Spec for foxx manager
+// / @brief Spec for Foxx manager
 // /
 // / @file
 // /

@@ -248,7 +248,7 @@ describe('Foxx Manager install', function () {
 });
 });

-describe('should not install on invalid mountpoint', function () {
+describe('should not install on invalid mount point', function () {
 it('starting with _', function () {
 const mount = '/_disallowed';
 expect(function () {

@@ -975,7 +975,7 @@ function flattenRoutingTree (tree) {
 }

 //
-// @brief creates the foxx routing actions
+// @brief creates the Foxx routing actions
 //

 function foxxRouting (req, res, options, next) {

@@ -1051,7 +1051,7 @@ function buildRouting (dbname) {
 // allow the collection to unload
 routing = null;

-// install the foxx routes
+// install the Foxx routes
 var mountPoints = FoxxManager._mountPoints();

 for (let i = 0; i < mountPoints.length; i++) {

@@ -1665,7 +1665,7 @@ function resultCursor (req, res, cursor, code, options) {
 var extra;

 if (Array.isArray(cursor)) {
-// performance optimisation: if the value passed in is an array, we can
+// performance optimization: if the value passed in is an array, we can
 // use it as it is
 hasCount = ((options && options.countRequested) ? true : false);
 count = cursor.length;

@@ -1673,7 +1673,7 @@ function resultCursor (req, res, cursor, code, options) {
 hasNext = false;
 cursorId = null;
 } else if (typeof cursor === 'object' && cursor.hasOwnProperty('json')) {
-// cursor is a regular JS object (performance optimisation)
+// cursor is a regular JS object (performance optimization)
 hasCount = Boolean(options && options.countRequested);
 count = cursor.json.length;
 rows = cursor.json;
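A simplified sketch of the two `resultCursor` fast paths touched above (the real function also fills `extra`, `hasNext` and `cursorId`; names here are condensed for illustration):

```js
function describeCursor (cursor, options) {
  const hasCount = Boolean(options && options.countRequested);
  if (Array.isArray(cursor)) {
    // plain array: use it as it is, no copying or cursor bookkeeping
    return { hasCount, count: cursor.length, rows: cursor };
  }
  if (typeof cursor === 'object' && cursor.hasOwnProperty('json')) {
    // pre-materialized object: reuse its json member directly
    return { hasCount, count: cursor.json.length, rows: cursor.json };
  }
  return null; // a real server-side cursor would be handled elsewhere
}
```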
@@ -360,7 +360,7 @@ function getLocalCollections () {
 return result;
 }

-function organiseLeaderResign (database, collId, shardName) {
+function organizeLeaderResign (database, collId, shardName) {
 console.topic('heartbeat=info', "trying to withdraw as leader of shard '%s/%s' of '%s/%s'",
 database, shardName, database, collId);
 // This starts a write transaction, just to wait for any ongoing

@@ -1012,7 +1012,7 @@ function executePlanForCollections(plannedCollections) {
 if (shardMap.hasOwnProperty(collection) &&
 shardMap[collection][0] === '_' + ourselves) {
 if (collections[collection].theLeader === "") {
-organiseLeaderResign(database, collections[collection].planId,
+organizeLeaderResign(database, collections[collection].planId,
 collection);
 }
 } else {

@@ -250,7 +250,7 @@ function transformControllerToRoute (routeInfo, route, isDevel) {
 }
 // Default Error Handler
 if (!e.statusCode) {
-console.errorLines(`Error in foxx route "${route}": ${e.stack}`);
+console.errorLines(`Error in Foxx route "${route}": ${e.stack}`);
 }
 actions.resultException(req, res, e, undefined, isDevel);
 }

@@ -1075,7 +1075,7 @@ exports._mountPoints = getMountPoints;
 exports._isClusterReady = isClusterReadyForBusiness;

 // -------------------------------------------------
-// Exports from foxx utils module
+// Exports from Foxx utils module
 // -------------------------------------------------

 exports.getServiceDefinition = utils.getServiceDefinition;

@@ -1084,7 +1084,7 @@ exports.listDevelopment = utils.listDevelopment;
 exports.listDevelopmentJson = utils.listDevelopmentJson;

 // -------------------------------------------------
-// Exports from foxx store module
+// Exports from Foxx store module
 // -------------------------------------------------

 exports.available = store.available;

@@ -70,7 +70,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for non-collection access, limit > 0
+/// @brief check limit optimization for non-collection access, limit > 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitNonCollectionNoRestriction : function () {

@@ -110,7 +110,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit > 0
+/// @brief check limit optimization for full collection access, limit > 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionNoRestriction : function () {

@@ -144,7 +144,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for non collection access, limit 0
+/// @brief check limit optimization for non collection access, limit 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitNonCollectionNoRestrictionEmpty : function () {

@@ -177,7 +177,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit 0
+/// @brief check limit optimization for full collection access, limit 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionNoRestrictionEmpty : function () {

@@ -204,7 +204,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for non-collection access, limit 0
+/// @brief check limit optimization for non-collection access, limit 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitNonCollectionDoubleLimitEmpty : function () {

@@ -222,7 +222,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit 0
+/// @brief check limit optimization for full collection access, limit 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionDoubleLimitEmpty : function () {

@@ -235,7 +235,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with 2 limits
+/// @brief check limit optimization with 2 limits
 ////////////////////////////////////////////////////////////////////////////////

 testLimitNonCollectionLimitLimit : function () {

@@ -270,7 +270,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with 2 limits
+/// @brief check limit optimization with 2 limits
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionLimitLimit : function () {

@@ -297,7 +297,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for non-collection access, limit > 0 and
+/// @brief check limit optimization for non-collection access, limit > 0 and
 /// filter conditions
 ////////////////////////////////////////////////////////////////////////////////


@@ -349,7 +349,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit > 0 and
+/// @brief check limit optimization for full collection access, limit > 0 and
 /// filter conditions
 ////////////////////////////////////////////////////////////////////////////////


@@ -395,7 +395,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit > 0 and
+/// @brief check limit optimization for full collection access, limit > 0 and
 /// filter conditions
 ////////////////////////////////////////////////////////////////////////////////


@@ -441,7 +441,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionHashIndex1 : function () {

@@ -458,7 +458,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionHashIndex2 : function () {

@@ -481,7 +481,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFilterFilterCollectionHashIndex : function () {

@@ -504,7 +504,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSkiplistIndex1 : function () {

@@ -521,7 +521,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSkiplistIndex2 : function () {

@@ -539,7 +539,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFilterFilterSkiplistIndex : function () {

@@ -557,7 +557,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with sort
+/// @brief check limit optimization with sort
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSort1 : function () {

@@ -573,7 +573,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with sort
+/// @brief check limit optimization with sort
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSort2 : function () {

@@ -591,7 +591,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with sort
+/// @brief check limit optimization with sort
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSort3 : function () {

@@ -604,7 +604,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with sort
+/// @brief check limit optimization with sort
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSort4 : function () {

@@ -71,7 +71,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for non-collection access, limit > 0
+/// @brief check limit optimization for non-collection access, limit > 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitNonCollectionNoRestriction : function () {

@@ -111,7 +111,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit > 0
+/// @brief check limit optimization for full collection access, limit > 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionNoRestriction : function () {

@@ -145,7 +145,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for non collection access, limit 0
+/// @brief check limit optimization for non collection access, limit 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitNonCollectionNoRestrictionEmpty : function () {

@@ -178,7 +178,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit 0
+/// @brief check limit optimization for full collection access, limit 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionNoRestrictionEmpty : function () {

@@ -205,7 +205,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for non-collection access, limit 0
+/// @brief check limit optimization for non-collection access, limit 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitNonCollectionDoubleLimitEmpty : function () {

@@ -223,7 +223,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit 0
+/// @brief check limit optimization for full collection access, limit 0
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionDoubleLimitEmpty : function () {

@@ -236,7 +236,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with 2 limits
+/// @brief check limit optimization with 2 limits
 ////////////////////////////////////////////////////////////////////////////////

 testLimitNonCollectionLimitLimit : function () {

@@ -271,7 +271,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with 2 limits
+/// @brief check limit optimization with 2 limits
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionLimitLimit : function () {

@@ -298,7 +298,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for non-collection access, limit > 0 and
+/// @brief check limit optimization for non-collection access, limit > 0 and
 /// filter conditions
 ////////////////////////////////////////////////////////////////////////////////


@@ -350,7 +350,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit > 0 and
+/// @brief check limit optimization for full collection access, limit > 0 and
 /// filter conditions
 ////////////////////////////////////////////////////////////////////////////////


@@ -396,7 +396,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation for full collection access, limit > 0 and
+/// @brief check limit optimization for full collection access, limit > 0 and
 /// filter conditions
 ////////////////////////////////////////////////////////////////////////////////


@@ -459,7 +459,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionHashIndex2 : function () {

@@ -481,7 +481,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionHashIndex3 : function () {

@@ -503,7 +503,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFilterFilterCollectionHashIndex : function () {

@@ -527,7 +527,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSkiplistIndex1 : function () {

@@ -544,7 +544,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSkiplistIndex2 : function () {

@@ -562,7 +562,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSkiplist3 : function () {

@@ -580,7 +580,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index large skip
+/// @brief check limit optimization with index large skip
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSkiplist4 : function () {

@@ -601,7 +601,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with index
+/// @brief check limit optimization with index
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFilterFilterSkiplistIndex : function () {

@@ -619,7 +619,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with sort
+/// @brief check limit optimization with sort
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSort1 : function () {

@@ -635,7 +635,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with sort
+/// @brief check limit optimization with sort
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSort2 : function () {

@@ -653,7 +653,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with sort
+/// @brief check limit optimization with sort
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSort3 : function () {

@@ -666,7 +666,7 @@ function ahuacatlQueryOptimizerLimitTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check limit optimisation with sort
+/// @brief check limit optimization with sort
 ////////////////////////////////////////////////////////////////////////////////

 testLimitFullCollectionSort4 : function () {
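The hunks in the two limit suites above only touch the doc comments of these tests. For orientation, a hedged sketch of what such a limit-optimization test typically asserts (the collection name, document count and jsunity-style assertions are assumptions, not the actual test bodies):

```js
testLimitFullCollectionSketch : function () {
  // hypothetical: assumes a collection 'UnitTestsCollection' with 100 documents
  const actual = db._query(
    "FOR c IN UnitTestsCollection LIMIT 10, 20 RETURN c").toArray();
  // the optimizer may push the LIMIT into the scan; the observable
  // contract checked here is just the result size
  assertEqual(20, actual.length);
},
```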
@@ -71,7 +71,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation without index
+/// @brief check sort optimization without index
 ////////////////////////////////////////////////////////////////////////////////

 testNoIndex1 : function () {

@@ -100,7 +100,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation without index
+/// @brief check sort optimization without index
 ////////////////////////////////////////////////////////////////////////////////

 testNoIndex2 : function () {

@@ -129,7 +129,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testSkiplist1 : function () {

@@ -159,7 +159,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testSkiplist2 : function () {

@@ -190,7 +190,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testSkiplist3 : function () {

@@ -320,7 +320,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields1 : function () {

@@ -370,7 +370,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields2 : function () {

@@ -422,7 +422,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields3 : function () {

@@ -454,7 +454,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields4 : function () {

@@ -486,7 +486,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields5 : function () {

@@ -69,7 +69,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation without index
+/// @brief check sort optimization without index
 ////////////////////////////////////////////////////////////////////////////////

 testNoIndex1 : function () {

@@ -89,7 +89,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation without index
+/// @brief check sort optimization without index
 ////////////////////////////////////////////////////////////////////////////////

 testNoIndex2 : function () {

@@ -109,7 +109,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testSkiplist1 : function () {

@@ -131,7 +131,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testSkiplist2 : function () {

@@ -152,7 +152,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testSkiplist3 : function () {

@@ -239,7 +239,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields1 : function () {

@@ -269,7 +269,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields2 : function () {

@@ -299,7 +299,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields3 : function () {

@@ -321,7 +321,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields4 : function () {

@@ -343,7 +343,7 @@ function ahuacatlQueryOptimizerSortTestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief check sort optimisation with skiplist index
+/// @brief check sort optimization with skiplist index
 ////////////////////////////////////////////////////////////////////////////////

 testMultipleFields5 : function () {
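Likewise for the two sort suites above, a hedged sketch of a sort-optimization check (collection name and index are assumptions): with a skiplist index on `value`, the optimizer can read documents in index order and drop the SORT node, while the observable contract stays the same:

```js
testSortWithSkiplistSketch : function () {
  // hypothetical: assumes a skiplist index on 'value' in 'UnitTestsCollection'
  const result = db._query(
    "FOR c IN UnitTestsCollection SORT c.value RETURN c.value").toArray();
  // whether or not the index was used, the output must be non-decreasing
  for (let i = 1; i < result.length; i++) {
    assertTrue(result[i - 1] <= result[i]);
  }
},
```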
@ -2,7 +2,7 @@
|
|||
/*global assertEqual */
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief tests for query language, range optimisations
|
||||
/// @brief tests for query language, range optimizations
|
||||
///
|
||||
/// @file
|
||||
///
|
||||
|
@ -63,7 +63,7 @@ function ahuacatlRangesCombined1TestSuite () {
|
|||
},
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief test range optimisations
|
||||
/// @brief test range optimizations
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
testRanges : function () {
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
/*global assertEqual */
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief tests for query language, range optimisations
|
||||
/// @brief tests for query language, range optimizations
|
||||
///
|
||||
/// @file
|
||||
///
|
||||
|
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined2TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined3TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined4TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined5TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined6TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined7TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined8TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined9TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined10TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined11TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined12TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined13TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined14TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined15TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
@@ -63,7 +63,7 @@ function ahuacatlRangesCombined16TestSuite () {
 },

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test range optimisations
+/// @brief test range optimizations
 ////////////////////////////////////////////////////////////////////////////////

 testRanges : function () {
@@ -2,7 +2,7 @@
 /*global assertEqual */

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief tests for query language, range optimisations
+/// @brief tests for query language, range optimizations
 ///
 /// @file
 ///
Some files were not shown because too many files have changed in this diff.