
Merge branch 'devel' of github.com:arangodb/arangodb into devel

This commit is contained in:
Michael Hackstein 2017-06-12 23:45:26 +02:00
commit fef243cc71
66 changed files with 1114 additions and 1034 deletions


@ -713,6 +713,9 @@ class Slice {
}
VELOCYPACK_ASSERT(h > 0x00 && h <= 0x0e);
if (h >= sizeof(SliceStaticData::WidthMap) / sizeof(SliceStaticData::WidthMap[0])) {
throw Exception(Exception::InternalError, "invalid Array/Object type");
}
return readIntegerNonEmpty<ValueLength>(_start + 1,
SliceStaticData::WidthMap[h]);
}
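The guard added in this hunk follows a standard defensive pattern: validate the head byte against the width table's length before indexing into it. A minimal sketch of the same pattern in Python, with an illustrative table rather than VelocyPack's real `SliceStaticData::WidthMap`:

```python
# Illustrative width table; the real VelocyPack WidthMap values differ.
WIDTH_MAP = [0, 1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, 8, 8, 8]

def byte_width(head_byte: int) -> int:
    """Look up the offset width for a head byte, rejecting out-of-range
    values instead of reading past the end of the table."""
    if head_byte >= len(WIDTH_MAP):
        raise ValueError("invalid Array/Object type")
    return WIDTH_MAP[head_byte]
```

The point of the change is exactly this: an attacker-controlled or corrupted head byte must not turn into an out-of-bounds table read.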


@ -409,8 +409,6 @@ The following optimizer rules may appear in the `rules` attribute of a plan:
* `interchange-adjacent-enumerations`: will appear if a query contains multiple
*FOR* statements whose order were permuted. Permutation of *FOR* statements is
performed because it may enable further optimizations by other rules.
* `remove-sort-rand`: will appear when a *SORT RAND()* expression is removed by
moving the random iteration into an *EnumerateCollectionNode*. (MMFiles-specific)
* `remove-collect-variables`: will appear if an *INTO* clause was removed from a *COLLECT*
statement because the result of *INTO* is not used. May also appear if a result
of a *COLLECT* statement's *AGGREGATE* variables is not used.
@ -441,7 +439,10 @@ The following optimizer rules may appear in the `rules` attribute of a plan:
* `inline-subqueries`: will appear when a subquery was pulled out in its surrounding scope,
e.g. `FOR x IN (FOR y IN collection FILTER y.value >= 5 RETURN y.test) RETURN x.a`
would become `FOR tmp IN collection FILTER tmp.value >= 5 LET x = tmp.test RETURN x.a`
* `geo-index-optimizer`: will appear when a geo index is utilized (MMFiles-specific)
* `geo-index-optimizer`: will appear when a geo index is utilized.
* `remove-sort-rand`: will appear when a *SORT RAND()* expression is removed by
moving the random iteration into an *EnumerateCollectionNode*. This optimizer rule
is specific to the MMFiles storage engine.
The following optimizer rules may appear in the `rules` attribute of cluster plans:


@ -7,6 +7,9 @@ Modifying a Collection
<!-- js/actions/api-collection.js -->
@startDocuBlock JSF_put_api_collection_unload
<!-- js/actions/api-collection.js -->
@startDocuBlock JSF_put_api_collection_load_indexes_in_memory
<!-- js/actions/api-collection.js -->
@startDocuBlock JSF_put_api_collection_properties


@ -91,3 +91,32 @@ to produce a 2.8-compatible dump with a 3.0 ArangoDB, please specify the option
`--compat28 true` when invoking arangodump.
unix> arangodump --compat28 true --collection myvalues --output-directory "dump"
### Advanced cluster options
Starting with version 3.1.17, collections may be created with shard
distribution identical to an existing prototypical collection;
i.e. shards are distributed in the very same pattern as in the
prototype collection. Such collections cannot be dumped without the
reference collection, or arangodump will yield an error.
unix> arangodump --collection clonedCollection --output-directory "dump"
ERROR Collection clonedCollection's shard distribution is based on that of collection prototypeCollection, which is not dumped along. You may dump the collection regardless of the missing prototype collection by using the --ignore-distribute-shards-like-errors parameter.
There are two ways to approach that problem: Solve it, i.e. dump the
prototype collection along:
unix> arangodump --collection clonedCollection --collection prototypeCollection --output-directory "dump"
Processed 2 collection(s), wrote 81920 byte(s) into datafiles, sent 1 batch(es)
Or override that behaviour to be able to dump the collection
individually.
unix> arangodump --collection clonedCollection --output-directory "dump" --ignore-distribute-shards-like-errors
Processed 1 collection(s), wrote 34217 byte(s) into datafiles, sent 1 batch(es)
Note that, in consequence, restoring such a collection without its
prototype is affected; see [arangorestore](Arangorestore.md).


@ -160,3 +160,20 @@ be ignored.
Note that in a cluster, every newly created collection will have a new
ID, it is not possible to reuse the ID from the originally dumped
collection. This is for safety reasons to ensure consistency of IDs.
### Restoring collections with sharding prototypes
*arangorestore* will yield an error while trying to restore a
collection whose shard distribution follows a collection that does
not exist in the cluster and was not dumped along:
unix> arangorestore --collection clonedCollection --server.database mydb --input-directory "dump"
ERROR got error from server: HTTP 500 (Internal Server Error): ArangoError 1486: must not have a distributeShardsLike attribute pointing to an unknown collection
Processed 0 collection(s), read 0 byte(s) from datafiles, sent 0 batch(es)
The collection can be restored by overriding the error as
follows:
unix> arangorestore --collection clonedCollection --server.database mydb --input-directory "dump" --ignore-distribute-shards-like-errors


@ -1,21 +1,8 @@
Clusters Options
================
### Node ID
<!-- arangod/Cluster/ApplicationCluster.h -->
This server's id: `--cluster.my-local-info info`
Some local information about the server in the cluster, this can for
example be an IP address with a process ID or any string unique to
the server. Specifying *info* is mandatory on startup if the server
id (see below) is not specified. Each server of the cluster must
have a unique local info. This is ignored if my-id below is specified.
### Agency endpoint
<!-- arangod/Cluster/ApplicationCluster.h -->
<!-- arangod/Cluster/ClusterFeature.h -->
List of agency endpoints:
@ -37,48 +24,12 @@ alternative endpoint if one of them becomes unavailable.
**Examples**
```
--cluster.agency-endpoint tcp://192.168.1.1:4001 --cluster.agency-endpoint
tcp://192.168.1.2:4002
--cluster.agency-endpoint tcp://192.168.1.1:4001 --cluster.agency-endpoint tcp://192.168.1.2:4002 ...
```
### Agency prefix
<!-- arangod/Cluster/ApplicationCluster.h -->
### My address
Global agency prefix:
`--cluster.agency-prefix prefix`
The global key prefix used in all requests to the agency. The specified
prefix will become part of each agency key. Specifying the key prefix
allows managing multiple ArangoDB clusters with the same agency
server(s).
*prefix* must consist of the letters *a-z*, *A-Z* and the digits *0-9*
only. Specifying a prefix is mandatory.
**Examples**
```
--cluster.prefix mycluster
```
### MyId
<!-- arangod/Cluster/ApplicationCluster.h -->
This server's id: `--cluster.my-id id`
The local server's id in the cluster. Specifying *id* is mandatory on
startup. Each server of the cluster must have a unique id.
Specifying the id is very important because the server id is used for
determining the server's role and tasks in the cluster.
*id* must be a string consisting of the letters *a-z*, *A-Z* or the
digits *0-9* only.
### MyAddress
<!-- arangod/Cluster/ApplicationCluster.h -->
<!-- arangod/Cluster/ClusterFeature.h -->
This server's address / endpoint:
@ -97,6 +48,53 @@ for the server's id, ArangoDB will refuse to start.
**Examples**
Listen only on interface with address `192.168.1.1`
```
--cluster.my-address tcp://192.168.1.1:8530
```
Listen on all IPv4 and IPv6 addresses, which are configured on port `8530`
```
--cluster.my-address ssl://[::]:8530
```
### My role
<!-- arangod/Cluster/ClusterFeature.h -->
This server's role:
`--cluster.my-role [dbserver|coordinator]`
The server's role: either a db server (backend data server)
or a coordinator (frontend server for external and application access).
### Node ID (deprecated)
<!-- arangod/Cluster/ClusterFeature.h -->
This server's id: `--cluster.my-local-info info`
Some local information about the server in the cluster, this can for
example be an IP address with a process ID or any string unique to
the server. Specifying *info* is mandatory on startup if the server
id (see below) is not specified. Each server of the cluster must
have a unique local info. This is ignored if my-id below is specified.
This option is deprecated and will be removed in a future release. The
cluster node IDs have been dropped in favour of automatically generated UUIDs.
### More advanced options (should generally remain untouched)
<!-- arangod/Cluster/ClusterFeature.h -->
Synchronous replication timing: `--cluster.synchronous-replication-timeout-factor double`
Stretch or shrink timeouts for the internal synchronous replication
mechanism between db servers. All such timeouts are affected by this
factor. Please change only with intent and great care. Defaults to `1.0`.
System replication factor: `--cluster.system-replication-factor integer`
Changes the default replication factor for system collections. Defaults to `2`.


@ -94,8 +94,7 @@ false.
`--rocksdb.use-direct-writes` (Hidden)
Only meaningful on Linux. If set,use `O_DIRECT` for writing files. Default:
true.
Only meaningful on Linux. If set, use `O_DIRECT` for writing files. Default: false.
`--rocksdb.use-fsync` (Hidden)


@ -1,17 +1,38 @@
Introduction to Replication
===========================
Replication allows you to *replicate* data onto another machine. It forms the base of all disaster recovery and failover features ArangoDB offers.
Replication allows you to *replicate* data onto another machine. It
forms the base of all disaster recovery and failover features ArangoDB
offers.
ArangoDB offers asynchronous and synchronous replication which both have their pros and cons. Both modes may and should be combined in a real world scenario and be applied in the usecase where they excel most.
ArangoDB offers asynchronous and synchronous replication which both
have their pros and cons. Both modes may and should be combined in a
real world scenario and be applied in the use case where they excel
most.
We will describe pros and cons of each of them in the following sections.
We will describe pros and cons of each of them in the following
sections.
### Synchronous replication
Synchronous replication only works in in a cluster and is typically used for mission critical data which must be accessible at all times. Synchronous replication generally stores a copy of the data on another host and keeps it in sync. Essentially when storing data after enabling synchronous replication the cluster will wait for all replicas to write all the data before greenlighting the write operation to the client. This will naturally increase the latency a bit, since one more network hop is needed for each write. However it will enable the cluster to immediately fail over to a replica whenever an outage has been detected, without losing any committed data, and mostly without even signaling an error condition to the client.
Synchronous replication only works within a cluster and is typically
used for mission critical data which must be accessible at all
times. Synchronous replication generally stores a copy of a shard's
data on another db server and keeps it in sync. Essentially, when storing
data after enabling synchronous replication the cluster will wait for
all replicas to write all the data before greenlighting the write
operation to the client. This will naturally increase the latency a
bit, since one more network hop is needed for each write. However, it
will enable the cluster to immediately fail over to a replica whenever
an outage has been detected, without losing any committed data, and
mostly without even signaling an error condition to the client.
Synchronous replication is organized in a way that every shard has a leader and r-1 followers. The number of followers can be controlled using the `replicationFactor` whenever you create a collection, the `replicationFactor` is the total number of copies being kept, that is, it is one plus the number of followers.
Synchronous replication is organized such that every shard has a
leader and `r-1` followers, where `r` denotes the replication
factor. The number of followers can be controlled using the
`replicationFactor` parameter whenever you create a collection; the
`replicationFactor` parameter is the total number of copies being
kept, that is, one plus the number of followers.
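The relationship between `replicationFactor` and the follower count described above is simple arithmetic; a tiny helper makes it explicit (the function name is ours, not an ArangoDB API):

```python
def follower_count(replication_factor: int) -> int:
    """replicationFactor counts all copies of a shard (leader included),
    so the number of followers is one less."""
    if replication_factor < 1:
        raise ValueError("replicationFactor must be at least 1")
    return replication_factor - 1
```

With `replicationFactor: 3` a shard therefore has one leader and two followers; with the default of `1` there are no followers at all.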
### Satellite collections
@ -23,4 +44,8 @@ Satellite collections are an enterprise only feature.
### Asynchronous replication
In ArangoDB any write operation will be logged to the write-ahead log. When using Asynchronous replication slaves will connect to a master and apply all the events from the log in the same order locally. After that, they will have the same state of data as the master database.
In ArangoDB any write operation will be logged to the write-ahead
log. When using asynchronous replication, slaves will connect to a
master and apply all the events from the log in the same order
locally. After that, they will have the same state of data as the
master database.


@ -7,10 +7,23 @@ Synchronous replication requires an operational ArangoDB cluster.
### Enabling synchronous replication
Synchronous replication can be enabled per collection. When creating you can specify the number of replicas using *replicationFactor*. The default is `1` which effectively *disables* synchronous replication.
Synchronous replication can be enabled per collection. When creating a
collection you may specify the number of replicas using the
*replicationFactor* parameter. The default value is set to `1` which
effectively *disables* synchronous replication.
Example:
127.0.0.1:8530@_system> db._create("test", {"replicationFactor": 3})
Any write operation will require 2 replicas to report success from now on.
In the above case, any write operation will require 2 replicas to
report success from now on.
### Preparing growth
You may create a collection with a higher replication factor than the
number of available db servers. When additional db servers become
available, the shards are automatically replicated to the newly
available machines.
Multiple replicas of the same shard can never coexist on the same db
server instance.
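The constraint above, that no two replicas of a shard may share a db server, means full replication only becomes possible once enough servers exist. A conceptual sketch of that feasibility check (server names are hypothetical):

```python
def fully_replicable(replication_factor: int, db_servers: list) -> bool:
    """Each replica of a shard must live on a distinct db server, so a
    collection can be fully replicated only when there are at least as
    many servers as requested copies."""
    return replication_factor <= len(set(db_servers))
```

Until that holds, the cluster keeps the collection under-replicated and catches up as machines join.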


@ -12,6 +12,11 @@ Drops a *collection* and all its indexes and data.
In order to drop a system collection, an *options* object
with attribute *isSystem* set to *true* must be specified.
**Note**: dropping a collection in a cluster that is a prototype for
sharding in other collections is prohibited. In order to be able to
drop such a collection, all dependent collections must be dropped
first.
**Examples**
@startDocuBlockInline collectionDrop
@ -184,6 +189,7 @@ loads a collection
Loads a collection into memory.
**Note**: cluster collections are loaded at all times.
**Examples**
@ -199,7 +205,6 @@ Loads a collection into memory.
@endDocuBlock collectionLoad
### Revision
<!-- arangod/V8Server/v8-collection.cpp -->
@ -268,6 +273,7 @@ unloads a collection
Starts unloading a collection from memory. Note that unloading is deferred
until all queries have finished.
**Note**: cluster collections cannot be unloaded.
**Examples**
@ -283,7 +289,6 @@ until all query have finished.
@endDocuBlock CollectionUnload
### Rename
<!-- arangod/V8Server/v8-collection.cpp -->
@ -318,7 +323,6 @@ database.
@endDocuBlock collectionRename
### Rotate
<!-- arangod/V8Server/v8-collection.cpp -->
@ -331,6 +335,7 @@ current journal of the collection a read-only datafile so it may become a
candidate for garbage collection. If there is currently no journal available
for the collection, the operation will fail with an error.
**Note**: this method is not available in a cluster.
**Note**: this method is specific to the MMFiles storage engine, and there
it is not available in a cluster.


@ -142,12 +142,16 @@ to the [naming conventions](../NamingConventions/README.md).
servers holding copies take over, usually without an error being
reported.
- *distributeShardsLike* distributes the shards of this collection
by cloning the shard distribution of another.
When using the *Enterprise* version of ArangoDB the replicationFactor
may be set to "satellite" making the collection locally joinable
on every database server. This reduces the number of network hops
dramatically when using joins in AQL at the cost of reduced
performance on these collections.
`db._create(collection-name, properties, type)`
Specifies the optional *type* of the collection, it can either be *document*
@ -311,6 +315,8 @@ In order to drop a system collection, one must specify an *options* object
with attribute *isSystem* set to *true*. Otherwise it is not possible to
drop system collections.
**Note**: cluster collections which are prototypes for collections
with the *distributeShardsLike* parameter cannot be dropped.
*Examples*


@ -1,29 +1,50 @@
Automatic native Clusters
-------------------------
Similar as the Mesos framework aranges an ArangoDB cluster in a DC/OS environment for you, `arangodb` can do this for you in a plain environment.
Similarly to how the Mesos framework arranges an ArangoDB cluster in a
DC/OS environment for you, `arangodb` can do this for you in a plain
environment.
With `arangodb` you launch a primary node. It will bind a network port, and output the commands you need to cut'n'paste into the other nodes:
By invoking the first `arangodb` you launch a primary node. It will
bind a network port, and output the commands you need to cut'n'paste
into the other nodes. Let's review the process of such a startup on
three hosts named `h01`, `h02`, and `h03`:
# arangodb
2017/05/11 11:00:52 Starting arangodb version 0.7.0, build 90aebe6
2017/05/11 11:00:52 Serving as master with ID '85c05e3b' on :8528...
2017/05/11 11:00:52 Waiting for 3 servers to show up.
2017/05/11 11:00:52 Use the following commands to start other servers:
arangodb@h01 ~> arangodb --ownAddress h01:4000
2017/06/12 14:59:38 Starting arangodb version 0.5.0+git, build 5f97368
2017/06/12 14:59:38 Serving as master with ID '52698769' on h01:4000...
2017/06/12 14:59:38 Waiting for 3 servers to show up.
2017/06/12 14:59:38 Use the following commands to start other servers:
arangodb --data.dir=./db2 --starter.join 127.0.0.1
arangodb --data.dir=./db3 --starter.join 127.0.0.1
2017/05/11 11:00:52 Listening on 0.0.0.0:8528 (:8528)
arangodb --dataDir=./db2 --join h01:4000
So you cut the lines `arangodb --data.dir=./db2 --starter.join 127.0.0.1` and execute them for the other nodes.
If you run it on another node on your network, replace the `--starter.join 127.0.0.1` by the public IP of the first host.
arangodb --dataDir=./db3 --join h01:4000
2017/06/12 14:59:38 Listening on 0.0.0.0:4000 (h01:4000)
So you cut the lines `arangodb --data.dir=./db2 --starter.join
127.0.0.1` and execute them for the other nodes. If you run it on
another node on your network, replace the `--starter.join 127.0.0.1`
by the public IP of the first host.
arangodb@h02 ~> arangodb --dataDir=./db2 --join h01:4000
2017/06/12 14:48:50 Starting arangodb version 0.5.0+git, build 5f97368
2017/06/12 14:48:50 Contacting master h01:4000...
2017/06/12 14:48:50 Waiting for 3 servers to show up...
2017/06/12 14:48:50 Listening on 0.0.0.0:4000 (:4000)
arangodb@h03 ~> arangodb --dataDir=./db3 --join h01:4000
2017/06/12 14:48:50 Starting arangodb version 0.5.0+git, build 5f97368
2017/06/12 14:48:50 Contacting master h01:4000...
2017/06/12 14:48:50 Waiting for 3 servers to show up...
2017/06/12 14:48:50 Listening on 0.0.0.0:4000 (:4000)
Once the two other processes have joined the cluster and started their ArangoDB server processes (this may take a while depending on your system), it will inform you where to connect to the cluster from a browser, shell or your program:
2017/05/11 11:01:47 Your cluster can now be accessed with a browser at
`http://localhost:8529` or
2017/05/11 11:01:47 using `arangosh --server.endpoint tcp://localhost:8529`.
...
2017/06/12 14:55:21 coordinator up and running.
At this point you may access your cluster at any of the coordinator
endpoints, http://h01:4002/, http://h02:4002/ or http://h03:4002/.
Automatic native local test Clusters
------------------------------------
@ -49,8 +70,10 @@ In the Docker world you need to take care of where persistent data is stored,
(You can use any type of docker volume that fits your setup instead.)
We then need to determine the the IP of the docker host where you intend to run ArangoDB starter on.
Depending on your host os, [ipconfig](https://en.wikipedia.org/wiki/Ipconfig) can be used, on a Linux host `/sbin/ip route`:
We then need to determine the IP of the docker host on which you
intend to run the ArangoDB starter. Depending on your operating system,
execute `ip addr`, `ifconfig` or `ipconfig` to determine your local IP
address.
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.32
@ -64,22 +87,24 @@ So this example uses the IP `192.168.1.32`:
It will start the master instance and print the commands you need to start the slave instances:
2017/05/11 09:04:24 Starting arangodb version 0.7.0, build 90aebe6
2017/05/11 09:04:24 Serving as master with ID 'fc673b3b' on 192.168.140.80:8528...
2017/05/11 09:04:24 Waiting for 3 servers to show up.
2017/05/11 09:04:24 Use the following commands to start other servers:
Unable to find image 'arangodb/arangodb-starter:latest' locally
latest: Pulling from arangodb/arangodb-starter
Digest: sha256:b87d20c0b4757b7daa4cb7a9f55cb130c90a09ddfd0366a91970bcf31a7fd5a4
Status: Downloaded newer image for arangodb/arangodb-starter:latest
2017/06/12 13:26:14 Starting arangodb version 0.7.1, build f128884
2017/06/12 13:26:14 Serving as master with ID '46a2b40d' on 192.168.1.32:8528...
2017/06/12 13:26:14 Waiting for 3 servers to show up.
2017/06/12 13:26:14 Use the following commands to start other servers:
docker volume create arangodb2 && \
docker run -it --name=adb2 --rm -p 8533:8528 -v arangodb2:/data \
-v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter \
--starter.address=192.168.1.32 --starter.join=192.168.1.32
docker volume create arangodb2 && \
docker run -it --name=adb2 --rm -p 8533:8528 -v arangodb2:/data \
-v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter:0.7 \
--starter.address=192.168.1.32 --starter.join=192.168.1.32
docker volume create arangodb3 && \
docker run -it --name=adb3 --rm -p 8538:8528 -v arangodb3:/data \
-v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter \
--starter.address=192.168.1.32 --starter.join=192.168.1.32
2017/05/11 09:04:24 Listening on 0.0.0.0:8528 (192.168.1.32:8528)
docker volume create arangodb3 && \
docker run -it --name=adb3 --rm -p 8538:8528 -v arangodb3:/data \
-v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter:0.7 \
--starter.address=192.168.1.32 --starter.join=192.168.1.32
Once you start the other instances, it will continue like this:
@ -93,9 +118,9 @@ Once you start the other instances, it will continue like this:
2017/05/11 09:05:52 Starting dbserver on port 8530
2017/05/11 09:05:53 Looking for a running instance of coordinator on port 8529
2017/05/11 09:05:53 Starting coordinator on port 8529
2017/05/11 09:05:58 agent up and running (version 3.1.19).
2017/05/11 09:06:15 dbserver up and running (version 3.1.19).
2017/05/11 09:06:31 coordinator up and running (version 3.1.19).
2017/05/11 09:05:58 agent up and running (version 3.2.devel).
2017/05/11 09:06:15 dbserver up and running (version 3.2.devel).
2017/05/11 09:06:31 coordinator up and running (version 3.2.devel).
And at last it tells you where you can work with your cluster:


@ -18,29 +18,29 @@ then the commands you have to use are (you can use host names if they can be res
On 192.168.1.1:
```
arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.1:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.supervision true --database.directory agency
sudo arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.1:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.supervision true --database.directory agency
```
On 192.168.1.2:
```
arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.2:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.supervision true --database.directory agency
sudo arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.2:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.supervision true --database.directory agency
```
On 192.168.1.3:
```
arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.3:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.endpoint tcp://192.168.1.1:5001 --agency.endpoint tcp://192.168.1.2:5001 --agency.endpoint tcp://192.168.1.3:5001 --agency.supervision true --database.directory agency
sudo arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.3:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.endpoint tcp://192.168.1.1:5001 --agency.endpoint tcp://192.168.1.2:5001 --agency.endpoint tcp://192.168.1.3:5001 --agency.supervision true --database.directory agency
```
On 192.168.1.1:
```
arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://192.168.1.1:8529 --cluster.my-local-info db1 --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://192.168.1.1:5001 --cluster.agency-endpoint tcp://192.168.1.2:5001 --cluster.agency-endpoint tcp://192.168.1.3:5001 --database.directory primary1 &
sudo arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://192.168.1.1:8529 --cluster.my-local-info db1 --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://192.168.1.1:5001 --cluster.agency-endpoint tcp://192.168.1.2:5001 --cluster.agency-endpoint tcp://192.168.1.3:5001 --database.directory primary1 &
```
On 192.168.1.2:
```
arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8530 --cluster.my-address tcp://192.168.1.2:8530 --cluster.my-local-info db2 --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://192.168.1.1:5001 --cluster.agency-endpoint tcp://192.168.1.2:5001 --cluster.agency-endpoint tcp://192.168.1.3:5001 --database.directory primary2 &
sudo arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8530 --cluster.my-address tcp://192.168.1.2:8530 --cluster.my-local-info db2 --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://192.168.1.1:5001 --cluster.agency-endpoint tcp://192.168.1.2:5001 --cluster.agency-endpoint tcp://192.168.1.3:5001 --database.directory primary2 &
```
On 192.168.1.3:


@ -139,6 +139,40 @@ Same as above. Instead of an index an index handle can be given.
@endDocuBlock col_dropIndex
### Load Indexes into Memory
<!-- arangod/V8Server/v8-vocindex.cpp -->
Loads all indexes of this collection into memory.
`collection.loadIndexesIntoMemory()`
This function tries to cache all index entries
of this collection in main memory.
It iterates over all indexes of the collection
and stores the indexed values, not the entire document data,
in memory.
All lookups that can be served from the cache are much faster
than lookups not stored in the cache, so you get a nice performance boost.
It is also guaranteed that the cache is consistent with the stored data.
For the time being this function is only useful with the RocksDB storage engine,
as with the MMFiles engine all indexes are in memory anyway.
With RocksDB this function honors all memory limits: if the indexes you want
to load are smaller than your memory limit, this function guarantees that most
index values are cached.
If the index is larger than your memory limit, this function will fill up values
up to this limit, and for the time being there is no way to control which indexes
of the collection should have priority over others.
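The caching behaviour described above, storing indexed values rather than whole documents and filling only up to a memory limit, can be modelled roughly as follows (the sizing rule and names are illustrative, not engine internals):

```python
import sys

def build_index_cache(indexed_values, memory_limit_bytes):
    """Cache indexed values until the byte budget is exhausted; anything
    beyond the limit is simply left uncached."""
    cache, used = set(), 0
    for value in indexed_values:
        cost = sys.getsizeof(value)
        if used + cost > memory_limit_bytes:
            break  # no eviction or prioritization, mirroring the docs
        cache.add(value)
        used += cost
    return cache
```

This also illustrates why there is no priority control yet: values are admitted in iteration order until the budget runs out.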
@startDocuBlockInline LoadIndexesIntoMemory
@EXAMPLE_ARANGOSH_OUTPUT{loadIndexesIntoMemory}
~db._drop("example");
~db._createEdgeCollection("example");
db.example.loadIndexesIntoMemory();
~db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock loadIndexesIntoMemory
Database Methods
----------------


@ -33,14 +33,22 @@ REST API
--------
* Removed undocumented internal HTTP API:
* PUT _api/edges
* PUT /_api/edges
The documented GET _api/edges and the undocumented POST _api/edges remains unmodified.
The documented GET /_api/edges and the undocumented POST /_api/edges remain unmodified.
* changed undocumented behaviour in case of invalid revision ids in
`If-Match` and `If-None-Match` headers from returning HTTP status code 400 (bad request)
to returning HTTP status code 412 (precondition failed).
* the REST API for fetching the list of currently running AQL queries and the REST API
for fetching the list of slow AQL queries now return an extra *bindVars* attribute which
contains the bind parameters used by the queries.
This affects the return values of the following API endpoints:
* GET /_api/query/current
* GET /_api/query/slow
JavaScript API
--------------


@ -0,0 +1,62 @@
@startDocuBlock JSF_put_api_collection_loadindexesintomemory
@brief Load Indexes into Memory
@RESTHEADER{PUT /_api/collection/{collection-name}/loadIndexesIntoMemory, Load Indexes into Memory}
@RESTURLPARAMETERS
@RESTURLPARAM{collection-name,string,required}
@RESTDESCRIPTION
This route tries to cache all index entries
of this collection in main memory.
It iterates over all indexes of the collection
and stores the indexed values, not the entire document data,
in memory.
All lookups that can be served from the cache are much faster
than lookups not stored in the cache, so you get a nice performance boost.
It is also guaranteed that the cache is consistent with the stored data.
For the time being this function is only useful with the RocksDB storage engine,
as with the MMFiles engine all indexes are in memory anyway.
With RocksDB this function honors all memory limits: if the indexes you want
to load are smaller than your memory limit, this function guarantees that most
index values are cached.
If the index is larger than your memory limit, this function will fill up values
up to this limit, and for the time being there is no way to control which indexes
of the collection should have priority over others.
On success this function returns `true`.
@RESTRETURNCODES
@RESTRETURNCODE{200}
If the indexes have all been loaded
@RESTRETURNCODE{400}
If the *collection-name* is missing, then a *HTTP 400* is
returned.
@RESTRETURNCODE{404}
If the *collection-name* is unknown, then a *HTTP 404* is returned.
@EXAMPLES
@EXAMPLE_ARANGOSH_RUN{RestCollectionIdentifierLoadIndexesIntoMemory}
var cn = "products";
db._drop(cn);
var coll = db._create(cn);
var url = "/_api/collection/"+ coll.name() + "/loadIndexesIntoMemory";
var response = logCurlRequest('PUT', url, '');
assert(response.code === 200);
logJsonResponse(response);
db._drop(cn);
@END_EXAMPLE_ARANGOSH_RUN
@endDocuBlock


@ -22,7 +22,8 @@ It returns an object with the attributes
- *result*: will be *true* if rotation succeeded
**Note**: This method is not available in a cluster.
**Note**: this method is specific to the MMFiles storage engine, and there
it is not available in a cluster.
@RESTRETURNCODES


@ -92,6 +92,18 @@ to bring the satellite collections involved in the query into sync.
The default value is *60.0* (seconds). When the max time has been reached the query
will be stopped.
@RESTSTRUCT{maxTransactionSize,JSF_post_api_cursor_opts,integer,optional,int64}
Transaction size limit in bytes. Honored by the RocksDB storage engine only.
@RESTSTRUCT{intermediateCommitSize,JSF_post_api_cursor_opts,integer,optional,int64}
Maximum total size of operations after which an intermediate commit is performed
automatically. Honored by the RocksDB storage engine only.
@RESTSTRUCT{intermediateCommitCount,JSF_post_api_cursor_opts,integer,optional,int64}
Maximum number of operations after which an intermediate commit is performed
automatically. Honored by the RocksDB storage engine only.
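The two intermediate-commit thresholds act as independent triggers: whichever limit is crossed first causes a commit. A conceptual sketch of that interaction (pure illustration, not engine code):

```python
def plan_commits(op_sizes, intermediate_commit_count, intermediate_commit_size):
    """Return the indices of operations after which an intermediate commit
    would fire, given per-operation sizes in bytes."""
    commits, ops, size = [], 0, 0
    for i, op_size in enumerate(op_sizes):
        ops += 1
        size += op_size
        if ops >= intermediate_commit_count or size >= intermediate_commit_size:
            commits.append(i)
            ops, size = 0, 0  # counters reset after each intermediate commit
    return commits
```

For example, four small operations with `intermediateCommitCount: 2` commit after the second and fourth operation, while two 600-byte operations with `intermediateCommitSize: 1000` commit once the accumulated size crosses the byte limit.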
@RESTDESCRIPTION
The query details include the query string plus optional query options and
bind parameters. These values need to be passed in a JSON representation in
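The three RocksDB-specific limits documented above are passed in the `options` attribute of the cursor request body. A minimal sketch of such a payload (the query text and the limit values are illustrative, not defaults):

```python
import json

# Illustrative AQL cursor request body using the RocksDB-specific
# transaction limits described above. Values are examples only.
cursor_request = {
    "query": "FOR doc IN products RETURN doc",
    "options": {
        "maxTransactionSize": 128 * 1024 * 1024,     # bytes
        "intermediateCommitSize": 16 * 1024 * 1024,  # bytes
        "intermediateCommitCount": 10000,            # operations
    },
}

body = json.dumps(cursor_request)
print(body)
```

The resulting JSON string is what would be sent as the body of the cursor request.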

View File

@ -22,7 +22,7 @@ Define if the created graph should be smart.
This only has effect in Enterprise version.
@RESTBODYPARAM{options,object,optional,post_api_gharial_create_opts}
a json object which is only useful in Enterprise version and with isSmart set to true.
a JSON object which is only useful in Enterprise version and with isSmart set to true.
It can contain the following attributes:
@RESTSTRUCT{smartGraphAttribute,post_api_gharial_create_opts,string,required,}

View File

@ -19,7 +19,7 @@ keys.
@RESTRETURNCODES
@RESTRETURNCODE{200}
returns a json object containing a list of indexes on that collection.
returns a JSON object containing a list of indexes on that collection.
@EXAMPLES

View File

@ -36,10 +36,18 @@ not time out waiting for a lock.
@RESTBODYPARAM{params,string,optional,string}
optional arguments passed to *action*.
@RESTBODYPARAM{maxTransactionSize,integer,optional,int64}
Transaction size limit in bytes. Honored by the RocksDB storage engine only.
@RESTBODYPARAM{intermediateCommitSize,integer,optional,int64}
Maximum total size of operations after which an intermediate commit is performed
automatically. Honored by the RocksDB storage engine only.
@RESTBODYPARAM{intermediateCommitCount,integer,optional,int64}
Maximum number of operations after which an intermediate commit is performed
automatically. Honored by the RocksDB storage engine only.
@RESTDESCRIPTION
Contains the *collections* and *action*.
The transaction description must be passed in the body of the POST request.
If the transaction is fully executed and committed on the server,
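For the transaction API these limits sit at the top level of the request body, next to the required `collections` and `action` attributes. A hedged sketch of such a body (collection name, action source, and limit values are illustrative):

```python
import json

# Illustrative POST /_api/transaction body combining the required
# collections/action attributes with the RocksDB-specific limits above.
transaction_request = {
    "collections": {"write": ["products"]},
    "action": "function () { return true; }",
    "maxTransactionSize": 256 * 1024 * 1024,      # bytes
    "intermediateCommitSize": 32 * 1024 * 1024,   # bytes
    "intermediateCommitCount": 50000,             # operations
}

body = json.dumps(transaction_request)
print(body)
```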

View File

@ -0,0 +1 @@
arangosh> db.example.loadIndexesIntoMemory();

View File

@ -70,7 +70,7 @@ stage('checkout') {
poll: false,
scm: [
$class: 'GitSCM',
branches: [[name: "* /${env.BRANCH_NAME}"]],
branches: [[name: "*/${env.BRANCH_NAME}"]],
doGenerateSubmoduleConfigurations: false,
extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'enterprise']],
submoduleCfg: [],
@ -84,7 +84,7 @@ stage('checkout') {
poll: false,
scm: [
$class: 'GitSCM',
branches: [[name: "* /devel"]],
branches: [[name: "*/devel"]],
doGenerateSubmoduleConfigurations: false,
extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'enterprise']],
submoduleCfg: [],

View File

@ -16,7 +16,7 @@
; !include "GetTime.nsh"
;--------------------------------
; get commandline parameters
!include FileFunc.nsh
!include "FileFunc.nsh"
!insertmacro GetParameters
!insertmacro GetOptions
@ -70,6 +70,13 @@ Var ServiceUp ; did the service start?
!define TEMP1 $R0 ;Temp variable
Var DATADIR
Var APPDIR
Var LOGFILE
Var ini_DATADIR
Var ini_APPDIR
Var ini_LOGFILE
Var UpgradeInstall
;----------------------------------------
; The first dialog page
@ -79,6 +86,7 @@ Var Dlg1_RB_all_users
Var Dlg1_RB_cur_user
Var Dlg1_CB_custom_path
;Var Dlg1_CB_custom_logging
Var Dlg1_CB_automatic_update
Var Dlg1_CB_keep_backup
Var Dlg1_CB_add_path
@ -121,20 +129,40 @@ Function ${un}determine_install_scope
SetShellVarContext all
StrCpy $INSTDIR "@CPACK_NSIS_INSTALL_ROOT@\@CPACK_PACKAGE_INSTALL_DIRECTORY@"
StrCpy $DATADIR "$APPDATA\ArangoDB"
StrCpy $APPDIR "$APPDATA\ArangoDB-apps"
${Else}
SetShellVarContext current
StrCpy $INSTDIR "$LOCALAPPDATA\@CPACK_PACKAGE_INSTALL_DIRECTORY@"
StrCpy $DATADIR "$LOCALAPPDATA\ArangoDB"
StrCpy $APPDIR "$LOCALAPPDATA\ArangoDB-apps"
${EndIf}
FunctionEnd
!macroend
!insertmacro determine_install_scope ""
!insertmacro determine_install_scope "un."
Function un.ReadSettings
SetRegView ${BITS}
; A single user isn't in HKCC...
StrCpy $TRI_INSTALL_SCOPE_ALL '0'
ReadRegStr $TRI_INSTALL_SCOPE_ALL HKCC "Software\@CPACK_NSIS_PACKAGE_NAME@" "scopeAll"
ReadRegStr $TRI_INSTALL_SERVICE HKCC "Software\@CPACK_NSIS_PACKAGE_NAME@" "service"
call un.determine_install_scope
ReadRegStr $START_MENU SHCTX "${TRI_UNINSTALL_REG_PATH}" "StartMenu"
;MessageBox MB_OK "Start menu is in: $START_MENU"
ReadRegStr $ADD_TO_PATH SHCTX "${TRI_UNINSTALL_REG_PATH}" "AddToPath"
;MessageBox MB_OK "Add to path: $ADD_TO_PATH"
ReadRegStr $ADD_DESKTOP_ICON SHCTX "${TRI_UNINSTALL_REG_PATH}" "InstallToDesktop"
ReadRegStr $DATADIR SHCTX "${TRI_UNINSTALL_REG_PATH}" "DATADIR"
ReadRegStr $APPDIR SHCTX "${TRI_UNINSTALL_REG_PATH}" "APPDIR"
FunctionEnd
Function check_previous_install
StrCpy $UpgradeInstall "0"
IfFileExists "$DATADIR\@INC_CPACK_ARANGO_DATA_DIR@\SERVER" OldFound
IfFileExists "$DATADIR\@INC_CPACK_ARANGO_DATA_DIR@\ENGINE" OldFound
return
@ -142,11 +170,6 @@ OldFound:
StrCpy $UpgradeInstall "1"
FunctionEnd
Function upgradeDatabase
FunctionEnd
Function disableBackButton
GetDlgItem $0 $HWNDParent 3
EnableWindow $0 0
@ -160,21 +183,6 @@ Function stop_old_service
${EndIf}
FunctionEnd
Var mycheckbox ; You could just store the HWND in $1 etc if you don't want this extra variable
Function un.ModifyUnWelcome
; SendMessage $HWNDPARENT ${WM_SETTEXT} 0 "STR:Titlebar - UnWelcome BLARG"
${NSD_CreateCheckbox} 120u -20u 50% 20u "Delete databases with uninstallation?"
Pop $mycheckbox
; ${NSD_Check} $mycheckbox ; don't check it by default
FunctionEnd
Function un.LeaveUnWelcome
${NSD_GetState} $mycheckbox $0
${If} $0 <> 0
StrCpy $PURGE_DB "1"
${EndIf}
FunctionEnd
!include Sections.nsh
@ -340,6 +348,15 @@ Var AR_RegFlags
!define MUI_DIRECTORYPAGE_VARIABLE $DATADIR
!insertmacro MUI_PAGE_DIRECTORY
!define MUI_PAGE_HEADER_TEXT "Choose the services folder for @CPACK_PACKAGE_NAME@"
!define MUI_PAGE_HEADER_SUBTEXT ""
!define MUI_DIRECTORYPAGE_TEXT_TOP "Choose the folder where @CPACK_PACKAGE_NAME@ will install Foxx services"
!define MUI_DIRECTORYPAGE_TEXT_DESTINATION "Foxx services folder"
!define MUI_PAGE_CUSTOMFUNCTION_PRE hide_install_directory
!define MUI_PAGE_CUSTOMFUNCTION_LEAVE check_foxx_directory
!define MUI_DIRECTORYPAGE_VARIABLE $APPDIR
!insertmacro MUI_PAGE_DIRECTORY
Page custom InstallOptionsPage2 InstallOptionsPage2_results
!define MUI_PAGE_CUSTOMFUNCTION_PRE hide_install_directory
@ -359,10 +376,8 @@ Var AR_RegFlags
!insertmacro MUI_PAGE_INSTFILES
!insertmacro MUI_PAGE_FINISH
!define MUI_WELCOMEPAGE_TEXT 'Please choose whether we should also remove the database files along with the programm files.'
!define MUI_PAGE_CUSTOMFUNCTION_SHOW un.ModifyUnWelcome
!define MUI_PAGE_CUSTOMFUNCTION_LEAVE un.LeaveUnWelcome
!insertmacro MUI_UNPAGE_WELCOME
UninstPage custom un.unInstallOptionsPage1 un.unInstallOptionsPage1_results
!insertmacro MUI_UNPAGE_CONFIRM
!insertmacro MUI_UNPAGE_INSTFILES
@ -421,6 +436,8 @@ Var AR_RegFlags
!insertmacro MUI_LANGUAGE "Ukrainian"
!insertmacro MUI_LANGUAGE "Welsh"
!insertmacro Locate
;--------------------------------
; Create custom pages
Function InstallOptionsPage1
@ -440,40 +457,40 @@ displayAgain:
${If} $Dlg1_Dialog == error
Abort
${EndIf}
${NSD_CreateLabel} 0 0 100% 6% "Install @CPACK_PACKAGE_NAME@"
Pop $0 ; Don't care...
${NSD_CreateLabel} 0 20 100% 6% "Choose to install @CPACK_PACKAGE_NAME@ for all users or the current user:"
Pop $0 ; Don't care...
${NSD_CreateRadioButton} 5 40 40% 6% "for all users"
${NSD_CreateRadioButton} 5 40 80% 6% "for all users (and ArangoDB as a service)"
Pop $Dlg1_RB_all_users
${NSD_SetState} $Dlg1_RB_all_users ${BST_CHECKED}
${NSD_CreateRadioButton} 5 60 50% 6% "for the current user"
Pop $Dlg1_RB_cur_user
; Checkboxes
${NSD_CreateCheckBox} 0 -120 100% 6% "Choose custom install paths for databases and installation"
${NSD_CreateCheckBox} 0 -100 100% 6% "Choose custom install paths for databases and installation"
Pop $Dlg1_CB_custom_path
${NSD_CreateCheckBox} 0 -100 100% 6% "Automatically update existing ArangoDB database"
;${NSD_CreateCheckBox} 0 -120 100% 6% "Enable ArangoDBs own logfiles"
;Pop $Dlg1_CB_custom_logging
${NSD_CreateCheckBox} 0 -80 100% 6% "Automatically update existing ArangoDB database"
Pop $Dlg1_CB_automatic_update
${NSD_SetState} $Dlg1_CB_automatic_update ${BST_CHECKED}
${NSD_CreateCheckBox} 0 -80 100% 6% "Keep a backup of databases"
${NSD_CreateCheckBox} 0 -60 100% 6% "Keep a backup of databases"
Pop $Dlg1_CB_keep_backup
${NSD_SetState} $Dlg1_CB_keep_backup ${BST_CHECKED}
${NSD_CreateCheckBox} 0 -60 100% 6% "add @CPACK_PACKAGE_NAME@ to the Path"
${NSD_CreateCheckBox} 0 -40 100% 6% "add @CPACK_PACKAGE_NAME@ to the Path"
Pop $Dlg1_CB_add_path
${NSD_SetState} $Dlg1_CB_add_path ${BST_CHECKED}
${NSD_CreateCheckBox} 0 -40 100% 6% "Start @CPACK_PACKAGE_NAME@ as system service"
Pop $Dlg1_CB_as_service
${NSD_CreateCheckBox} 0 -20 100% 6% "Create @CPACK_PACKAGE_NAME@ Desktop Icon"
Pop $Dlg1_CB_DesktopIcon
${NSD_SetState} $Dlg1_CB_DesktopIcon ${BST_CHECKED}
@ -482,12 +499,10 @@ displayAgain:
${If} $AllowGlobalInstall == "0"
EnableWindow $Dlg1_RB_all_users 0
EnableWindow $Dlg1_CB_as_service 0
${Else}
${NSD_SetState} $Dlg1_CB_as_service ${BST_CHECKED}
${EndIf}
nsDialogs::Show
Return
Return
FunctionEnd
;--------------------------------
@ -500,8 +515,10 @@ Function InstallOptionsPage1_results
${If} $R0 = ${BST_CHECKED}
StrCpy $TRI_INSTALL_SCOPE_ALL "1"
StrCpy $TRI_INSTALL_SERVICE "1"
${Else}
StrCpy $TRI_INSTALL_SCOPE_ALL "0"
StrCpy $TRI_INSTALL_SERVICE "0"
${EndIf}
@ -512,6 +529,12 @@ Function InstallOptionsPage1_results
StrCpy $ChooseInstallPath "0"
${EndIf}
;${NSD_GetState} $Dlg1_CB_custom_logging $R0
;${If} $R0 = ${BST_CHECKED}
; StrCpy $LOGFILE "$APPDIR\LOG.txt"
;${EndIf}
${NSD_GetState} $Dlg1_CB_automatic_update $R0
${If} $R0 = ${BST_CHECKED}
StrCpy $AUTOMATIC_UPDATE "1"
@ -533,13 +556,6 @@ Function InstallOptionsPage1_results
StrCpy $ADD_TO_PATH '0'
${EndIf}
${NSD_GetState} $Dlg1_CB_as_service $R0
${If} $R0 = ${BST_CHECKED}
StrCpy $TRI_INSTALL_SERVICE "1"
${Else}
StrCpy $TRI_INSTALL_SERVICE "0"
${EndIf}
${NSD_GetState} $Dlg1_CB_DesktopIcon $R0
${If} $R0 = ${BST_CHECKED}
StrCpy $ADD_DESKTOP_ICON "1"
@ -561,7 +577,7 @@ Function InstallOptionsPage2
continueUI:
${If} $UpgradeInstall == "1"
StrCpy $STORAGE_ENGINE "auto"
Return
Return
${EndIf}
nsDialogs::Create 1018
Pop $Dlg2_Dialog
@ -571,7 +587,7 @@ continueUI:
${EndIf}
!insertmacro MUI_HEADER_TEXT "Configure @CPACK_PACKAGE_NAME@" "Choose configuration options for @CPACK_NSIS_PACKAGE_NAME@"
${NSD_CreateLabel} 0, 0, 100% 6% "Type password for the ArangoDB root user:"
Pop $0 ; Don't care...
${NSD_CreatePassword} 5u 30 30% 6% ""
@ -595,19 +611,19 @@ continueUI:
${NSD_CB_SelectString} $DLG2_droplist "auto"
nsDialogs::Show
Return
Return
FunctionEnd
;--------------------------------
Function InstallOptionsPage2_results
${If} $UpgradeInstall == "1"
Return
Return
${EndIf}
Push $R0
Push $R1
Push $R2
${NSD_GetText} $DLG2_droplist $STORAGE_ENGINE
${NSD_GetText} $Dlg2_PW_2 $PASSWORD_AGAIN
${NSD_GetText} $Dlg2_PW_1 $PASSWORD
StrCmp $PASSWORD $PASSWORD_AGAIN done 0
@ -634,9 +650,12 @@ FunctionEnd
Function UpgradeExisting
; Check whether we actually should upgrade:
DetailPrint "Checking whether an existing database needs upgrade: "
ExecWait "$INSTDIR\${SBIN_DIR}\arangod.exe --server.rest-server false --log.foreground-tty false --database.check-version" $0
DetailPrint "Done checking whether an existing database needs upgrade: $0"
${If} $0 == 1
${AndIf} $AUTOMATIC_UPDATE == "1"
DetailPrint "Yes."
${If} $AUTOMATIC_BACKUP == "1"
; We should and we should keep a backup:
@ -644,12 +663,15 @@ Function UpgradeExisting
StrCpy $BackupPath "$DATADIR_$2-$1-$0_$4_$5_$6"
CreateDirectory "$BackupPath"
ClearErrors
DetailPrint "Copying old database from $DATADIR to backup directory $BackupPath"
CopyFiles "$DATADIR" "$BackupPath"
IfErrors 0 +2
MessageBox MB_ICONEXCLAMATION "The upgrade failed to copy the files $\r$\nfrom '$DATADIR'$\r$\nto '$BackupPath'; please do it manually.$\r$\n$\r$\nClick OK to continue."
${EndIf}
DetailPrint "Attempting to run database upgrade: "
; Now actually do the upgrade
ExecWait "$INSTDIR\${SBIN_DIR}\arangod.exe --server.rest-server false --log.level error --database.auto-upgrade true" $0
DetailPrint "Done running database upgrade: $0"
${If} $0 == 1
MessageBox MB_ICONEXCLAMATION "the Upgrade failed, please do a manual upgrade"
Abort
@ -657,6 +679,112 @@ Function UpgradeExisting
${EndIf}
FunctionEnd
Function SetDBPassword
DetailPrint "preserving password for install process:"
System::Call 'Kernel32::SetEnvironmentVariable(t, t)i ("ARANGODB_DEFAULT_ROOT_PASSWORD", "$PASSWORD").r0'
DetailPrint "preserved password: $0"
StrCmp $0 0 error
DetailPrint "Attempting to initialize the database with a password: "
ExecWait "$INSTDIR\${SBIN_DIR}\arangod.exe --database.init-database --server.rest-server false --server.statistics false --foxx.queues false" $0
DetailPrint "Done initializing password: $0"
${If} $0 == 0
return
${EndIf}
error:
MessageBox MB_OK "Failed to initialize database password.$\r$\nPlease check the windows event log for details."
Abort
FunctionEnd
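The function above exports `ARANGODB_DEFAULT_ROOT_PASSWORD` before invoking `arangod --database.init-database`, which reads the initial root password from that variable. The same mechanism from a script, sketched (the password value is illustrative):

```python
import os

# Sketch of the mechanism used above: the --database.init-database run
# picks up the initial root password from this environment variable.
os.environ["ARANGODB_DEFAULT_ROOT_PASSWORD"] = "example-password"  # illustrative
print(os.environ["ARANGODB_DEFAULT_ROOT_PASSWORD"])
```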
Function InstallService
DetailPrint "Installing service ${TRI_SVC_NAME}: "
SimpleSC::InstallService '${TRI_SVC_NAME}' '' '16' '2' '"$INSTDIR\${SBIN_DIR}\arangod.exe" --start-service' '' '' ''
Pop $0
DetailPrint "Status: $0; Setting Description: ${TRI_FRIENDLY_SVC_NAME}"
SimpleSC::SetServiceDescription '${TRI_SVC_NAME}' '${TRI_FRIENDLY_SVC_NAME}'
Pop $0
DetailPrint "Status: $0; Starting Service"
SimpleSC::StartService '${TRI_SVC_NAME}' '' 45
Pop $0
DetailPrint "Status: $0"
${If} $0 != "0"
Call QueryServiceStatus
Pop $2
Pop $1
Pop $0
!define SC_WAITED "Waited 40 seconds for the ArangoDB service to come up;"
!define SC_SV "Please look at $\r$\n`sc query ${TRI_SVC_NAME}`"
!define SC_EVLOG "and the Windows Eventlog for eventual errors!"
${If} $1 == "1066"
${If} $2 == "1"
MessageBox MB_OK "${SC_WAITED}$\r$\nbut it exited with an error; $\r$\n${SC_SV}$\r$\n${SC_EVLOG}"
Abort
${ElseIf} $2 == "2"
MessageBox MB_OK "${SC_WAITED}$\r$\nbut it exited with a fatal error; $\r$\n${SC_SV}$\r$\n${SC_EVLOG}"
Abort
${EndIf}
${ElseIf} $1 != "0"
MessageBox MB_OK "${SC_WAITED}$\r$\n${SC_SV}$\r$\n${SC_EVLOG} $0 $1 $2"
Abort
${EndIf}
Abort
${Else}
StrCpy $ServiceUp "1"
${EndIf}
FunctionEnd
Function assignFileRightsCB
StrCpy $0 "0"
AccessControl::GrantOnFile \
"$R9" "(BU)" "GenericRead + GenericWrite + GenericExecute"
Push $0
FunctionEnd
Function assignFileRights
ClearErrors
${Locate} "$INSTDIR" "/L=FD /M=*.*" "assignFileRightsCB"
${Locate} "$DATADIR" "/L=FD /M=*.*" "assignFileRightsCB"
${Locate} "$APPDIR" "/L=FD /M=*.*" "assignFileRightsCB"
FunctionEnd
Var mycheckbox ; You could just store the HWND in $1 etc if you don't want this extra variable
;--------------------------------
; Create custom pages
Function un.unInstallOptionsPage1
IfSilent 0 continueUI
Return
continueUI:
call un.ReadSettings
!insertmacro MUI_HEADER_TEXT "Uninstall Options" "Choose options for uninstalling @CPACK_NSIS_PACKAGE_NAME@"
nsDialogs::Create 1018
Pop $Dlg1_Dialog
${If} $Dlg1_Dialog == error
Abort
${EndIf}
${NSD_CreateLabel} 0 0 100% 60u "Your databases are stored in $\r$\n'$DATADIR'$\r$\n, and your Foxx services in$\r$\n'$APPDIR'$\r$\nShould we remove them during the uninstallation?"
Pop $0 ; Don't care...
${NSD_CreateCheckbox} 5u -20u 50% 20u "Delete databases with uninstallation?"
Pop $mycheckbox
nsDialogs::Show
Return
FunctionEnd
;--------------------------------
Function un.unInstallOptionsPage1_results
${NSD_GetState} $mycheckbox $0
${If} $0 <> 0
StrCpy $PURGE_DB "1"
${EndIf}
FunctionEnd
;--------------------------------
;Installer Sections
@ -667,7 +795,7 @@ Section "-Core installation"
; SetRegView controls where the regkeys are written to
; SetRegView 32 writes the keys into Wow6432
SetRegView ${BITS}
@CPACK_NSIS_FULL_INSTALL@
;Store installation folder
@ -700,6 +828,12 @@ Section "-Core installation"
StrCmp "0" "$ADD_DESKTOP_ICON" noDesktopIcon
CreateShortCut "$DESKTOP\Arango Shell.lnk" "$INSTDIR\${BIN_DIR}\arangosh.exe" '' '$INSTDIR\resources\arangodb.ico' '0' SW_SHOWMAXIMIZED
CreateShortCut "$DESKTOP\Arango Management Interface.lnk" "http://127.0.0.1:8529" '' '$INSTDIR\resources\arangodb.ico' '0' SW_SHOWMAXIMIZED
${If} $TRI_INSTALL_SERVICE == "0"
${AndIf} $ADD_DESKTOP_ICON == "1"
; if we don't install a service, add a desktop icon for the server.
CreateShortCut "$DESKTOP\ArangoDB Server.lnk" "$INSTDIR\${SBIN_DIR}\arangod.exe" '' '$INSTDIR\resources\arangodb.ico' '0' SW_SHOWMINIMIZED
${EndIf}
noDesktopIcon:
; Write special uninstall registry entries
@ -710,9 +844,19 @@ Section "-Core installation"
!insertmacro MUI_STARTMENU_WRITE_END
!insertmacro AddToRegistry "DATADIR" "$DATADIR"
!insertmacro AddToRegistry "APPDIR" "$APPDIR"
; Create a file containing the settings we want to be overwritten:
StrCpy $newCfgValues "[database]$\r$\ndirectory = $DATADIR$\r$\n[server]$\r$\nstorage-engine = $STORAGE_ENGINE$\r$\n"
${If} $DATADIR != ""
StrCpy $ini_DATADIR "[database]$\r$\ndirectory = $DATADIR$\r$\n"
${EndIf}
${If} $APPDIR != ""
StrCpy $ini_APPDIR "[javascript]$\r$\napp-path = $APPDIR$\r$\n"
${EndIf}
${If} $LOGFILE != ""
StrCpy $ini_LOGFILE "[log]$\r$\nfile = $LOGFILE$\r$\n"
${EndIf}
StrCpy $newCfgValues "$ini_APPDIR$ini_DATADIR[server]$\r$\nstorage-engine = $STORAGE_ENGINE$\r$\n"
StrCpy $newCfgValuesFile "$INSTDIR\etc\arangodb3\newValues.ini"
FileOpen $4 "$newCfgValuesFile" w
FileWrite $4 "$newCfgValues"
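The override file assembled above ends up as a small ini fragment: per-section pieces are built only when their value is set, then concatenated into `newValues.ini`. A Python sketch mirroring that assembly (paths and engine value are illustrative):

```python
# Mirror of the NSIS logic above: only sections whose value is set are
# emitted, in the same order ($ini_APPDIR, $ini_DATADIR, then [server]).
datadir = r"C:\ProgramData\ArangoDB"        # illustrative path
appdir = r"C:\ProgramData\ArangoDB-apps"    # illustrative path
engine = "auto"

ini_datadir = "[database]\ndirectory = %s\n" % datadir if datadir else ""
ini_appdir = "[javascript]\napp-path = %s\n" % appdir if appdir else ""
new_cfg_values = ini_appdir + ini_datadir + "[server]\nstorage-engine = %s\n" % engine
print(new_cfg_values)
```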
@ -723,55 +867,16 @@ Section "-Core installation"
call ReadINIFileKeys
Delete "$newCfgValuesFile"
CreateDirectory $APPDIR
Call assignFileRights
${If} $UpgradeInstall == "1"
Call UpgradeExisting
${Else}
System::Call 'Kernel32::SetEnvironmentVariable(t, t)i ("ARANGODB_DEFAULT_ROOT_PASSWORD", "$PASSWORD").r0'
StrCmp $0 0 error
ExecWait "$INSTDIR\${SBIN_DIR}\arangod.exe --database.init-database --server.rest-server false --server.statistics false --foxx.queues false" $0
${If} $0 == 0
Goto doneUpgrade
${EndIf}
error:
MessageBox MB_OK "Failed to initialize database password."
Abort
doneUpgrade:
Call SetDBPassword
${EndIf}
Call assignFileRights
${If} $TRI_INSTALL_SERVICE == "1"
DetailPrint "Installing service ${TRI_SVC_NAME}: "
SimpleSC::InstallService '${TRI_SVC_NAME}' '' '16' '2' '"$INSTDIR\${SBIN_DIR}\arangod.exe" --start-service' '' '' ''
Pop $0
DetailPrint "Status: $0; Setting Description: ${TRI_FRIENDLY_SVC_NAME}"
SimpleSC::SetServiceDescription '${TRI_SVC_NAME}' '${TRI_FRIENDLY_SVC_NAME}'
Pop $0
DetailPrint "Status: $0; Starting Service"
SimpleSC::StartService '${TRI_SVC_NAME}' '' 45
Pop $0
DetailPrint "Status: $0"
${If} $0 != "0"
Call QueryServiceStatus
Pop $2
Pop $1
Pop $0
!define SC_WAITED "Waited 40 seconds for the ArangoDB service to come up;"
!define SC_SV "Please look at $\r$\n`sc query ${TRI_SVC_NAME}`"
!define SC_EVLOG "and the Windows Eventlog for eventual errors!"
${If} $1 == "1066"
${If} $2 == "1"
MessageBox MB_OK "${SC_WAITED}$\r$\nbut it exited with an error; $\r$\n${SC_SV}$\r$\n${SC_EVLOG}"
Abort
${ElseIf} $2 == "2"
MessageBox MB_OK "${SC_WAITED}$\r$\nbut it exited with a fatal error; $\r$\n${SC_SV}$\r$\n${SC_EVLOG}"
Abort
${EndIf}
${ElseIf} $1 != "0"
MessageBox MB_OK "${SC_WAITED}$\r$\n${SC_SV}$\r$\n${SC_EVLOG} $0 $1 $2"
Abort
${EndIf}
Abort
${Else}
StrCpy $ServiceUp "1"
${EndIf}
Call InstallService
${EndIf}
SectionEnd
@ -791,24 +896,11 @@ continueUI:
${EndIf}
FunctionEnd
Function assign_proper_access_rights
StrCpy $0 "0"
AccessControl::GrantOnFile \
"$INSTDIR" "(BU)" "GenericRead + GenericWrite + GenericExecute"
Pop $R0
${If} $R0 == error
Pop $R0
StrCpy $0 "1"
DetailPrint `AccessControl error: $R0`
; MessageBox MB_OK "target directory $INSTDIR cannot get correct access rights"
${EndIf}
FunctionEnd
Function is_writable
; it does not matter if we make some errors here
${If} $TRI_INSTALL_SCOPE_ALL == '1'
CreateDirectory $INSTDIR
Call assign_proper_access_rights
Call assignFileRights
${EndIf}
FunctionEnd
@ -816,6 +908,9 @@ Function check_database_directory
call check_previous_install
FunctionEnd
Function check_foxx_directory
FunctionEnd
Function check_installation_directory
ClearErrors
Call is_writable
@ -829,7 +924,7 @@ FunctionEnd
Function insert_registration_keys
ClearErrors
WriteRegExpandStr HKCC "Software\@CPACK_NSIS_PACKAGE_NAME@" "service" "$TRI_INSTALL_SERVICE"
IfErrors there_are_erros
WriteRegExpandStr HKCC "Software\@CPACK_NSIS_PACKAGE_NAME@" "scopeAll" "$TRI_INSTALL_SCOPE_ALL"
@ -853,7 +948,7 @@ Function un.onInit
; arango may be installed in different places
; determine if the arango was installed for a local user
${GetParameters} $R0
${GetParameters} $R0
${GetOptions} $R0 "/PURGE_DB=" $PURGE_DB
IfErrors 0 +2
StrCpy $PURGE_DB "0"
@ -893,31 +988,19 @@ FunctionEnd
;Uninstaller Section
Section "Uninstall"
call un.ReadSettings
; SetRegView controls where the regkeys are written to
; SetRegView 32 writes the keys into Wow6432
; this variable was defined by eld and included in NSIS.template.in
; we probably need this for the install/uninstall software list.
SetRegView ${BITS}
; A single user isn't in HKCC...
StrCpy $TRI_INSTALL_SCOPE_ALL '0'
ReadRegStr $TRI_INSTALL_SCOPE_ALL HKCC "Software\@CPACK_NSIS_PACKAGE_NAME@" "scopeAll"
ReadRegStr $TRI_INSTALL_SERVICE HKCC "Software\@CPACK_NSIS_PACKAGE_NAME@" "service"
call un.determine_install_scope
ReadRegStr $START_MENU SHCTX "${TRI_UNINSTALL_REG_PATH}" "StartMenu"
;MessageBox MB_OK "Start menu is in: $START_MENU"
ReadRegStr $ADD_TO_PATH SHCTX "${TRI_UNINSTALL_REG_PATH}" "AddToPath"
;MessageBox MB_OK "Add to path: $ADD_TO_PATH"
ReadRegStr $ADD_DESKTOP_ICON SHCTX "${TRI_UNINSTALL_REG_PATH}" "InstallToDesktop"
ReadRegStr $DATADIR SHCTX "${TRI_UNINSTALL_REG_PATH}" "DATADIR"
;MessageBox MB_OK "Install to desktop: $ADD_DESKTOP_ICON "
StrCmp "0" "$ADD_DESKTOP_ICON" noDesktopIconRemove
Delete "$DESKTOP\Arango Shell.lnk"
Delete "$DESKTOP\Arango Management Interface.lnk"
${If} $TRI_INSTALL_SERVICE == "0"
Delete "$DESKTOP\ArangoDB Server.lnk"
${EndIf}
noDesktopIconRemove:
@ -959,6 +1042,8 @@ Done:
StrCmp $PURGE_DB "0" dontDeleteDatabases
DetailPrint 'Removing database files from $DATADIR: '
RMDir /r "$DATADIR"
DetailPrint 'Removing foxx services files from $APPDIR: '
RMDir /r "$APPDIR"
RMDir /r "$INSTDIR\var\lib\arangodb3-apps"
RMDir "$INSTDIR\var\lib"
RMDir /r "$INSTDIR\var\log\arangodb3"
@ -1052,27 +1137,35 @@ SectionEnd
; first visible page and sets up $INSTDIR properly...
; Choose different default installation folder based on SV_ALLUSERS...
; "Program Files" for AllUsers, "My Documents" for JustMe...
var allPathOpts
var CMDINSTDIR
Function .onInit
${GetParameters} $R0
ClearErrors
${GetOptions} $R0 "/PASSWORD=" $PASSWORD
IfErrors 0 +2
ReadEnvStr $PASSWORD PASSWORD
# we only want to manipulate INSTDIR here if /INSTDIR is realy set!
${GetParameters} $R0
# we only want to manipulate INSTDIR here if /INSTDIR is really set!
${GetParameters} $R0
ClearErrors
${GetOptions} $R0 "/INSTDIR=" $CMDINSTDIR
IfErrors +3 0
IfErrors +2 0
StrCpy $INSTDIR "$CMDINSTDIR"
StrCpy $DATADIR "$INSTDIR\var\lib\arangodb3"
# we only want to manipulate INSTDIR here if /INSTDIR is realy set!
${GetParameters} $R0
# we only want to manipulate APPDIR here if /APPDIR is really set!
StrCpy $APPDIR "$INSTDIR\var\lib\arangodb3"
${GetParameters} $R0
ClearErrors
${GetOptions} $R0 "/APPDIR=" $CMDINSTDIR
IfErrors +2 0
StrCpy $APPDIR "$CMDINSTDIR"
# we only want to manipulate DATABASEDIR here if /DATABASEDIR is really set!
${GetParameters} $R0
ClearErrors
${GetOptions} $R0 "/DATABASEDIR=" $CMDINSTDIR
IfErrors +2 0
@ -1084,11 +1177,11 @@ Function .onInit
IfErrors 0 +2
StrCpy $STORAGE_ENGINE "auto"
${GetParameters} $R0
${GetParameters} $R0
${GetOptions} $R0 "/DESKTOPICON=" $ADD_DESKTOP_ICON
IfErrors 0 +2
StrCpy $ADD_DESKTOP_ICON "1"
${GetParameters} $R0
ClearErrors
${GetOptions} $R0 "/PATH=" $ADD_TO_PATH

View File

@ -14,6 +14,7 @@ ArangoDB
3.0: [![Build Status](https://secure.travis-ci.org/arangodb/arangodb.png?branch=3.0)](http://travis-ci.org/arangodb/arangodb)
3.1: [![Build Status](https://secure.travis-ci.org/arangodb/arangodb.png?branch=3.1)](http://travis-ci.org/arangodb/arangodb)
3.2: [![Build Status](https://secure.travis-ci.org/arangodb/arangodb.png?branch=3.2)](http://travis-ci.org/arangodb/arangodb)
Slack: [![ArangoDB-Logo](https://slack.arangodb.com/badge.svg)](https://slack.arangodb.com)
@ -66,24 +67,27 @@ and get it running in the cloud.
Other features of ArangoDB include:
- **Schema-free schemata** let you combine the space efficiency of MySQL with the
performance power of NoSQL
- Use a **data-centric microservices** approach with ArangoDB Foxx and fuse your
application-logic and database together for maximal throughput
- JavaScript for all: **no language zoo**, you can use one language from your
browser to your back-end
- ArangoDB is **multi-threaded** - exploit the power of all your cores
- **Flexible data modeling**: model your data as combination of key-value pairs,
documents or graphs - perfect for social relations
- Free **index choice**: use the correct index for your problem, be it a skip
list or a fulltext search
- Configurable **durability**: let the application decide if it needs more
durability or more performance
- Different **storage engines**: ArangoDB provides a storage engine for mostly
in-memory operations and an alternative storage engine based on RocksDB which
handles datasets that are much bigger than RAM.
- **Powerful query language** (AQL) to retrieve and modify data
- **Transactions**: run queries on multiple documents or collections with
optional transactional consistency and isolation
- **Replication** and **Sharding**: set up the database in a master-slave
configuration or spread bigger datasets across multiple servers
- Configurable **durability**: let the application decide if it needs more
durability or more performance
- **Schema-free schemata** let you combine the space efficiency of MySQL with the
performance power of NoSQL
- Free **index choice**: use the correct index for your problem, be it a skiplist
or a fulltext search
- ArangoDB is **multi-threaded** - exploit the power of all your cores
- It is **open source** (Apache License 2.0)
For more in-depth information read the [design goals of ArangoDB](https://www.arangodb.com/2012/03/07/avocadodbs-design-objectives)
@ -115,7 +119,7 @@ issue tracker for reporting them:
[https://github.com/arangodb/arangodb/issues](https://github.com/arangodb/arangodb/issues)
You can use the Google group for improvements, feature requests, comments:
You can use our Google group for improvements, feature requests, comments:
[https://www.arangodb.com/community](https://www.arangodb.com/community)

View File

@ -61,8 +61,6 @@ int EnumerateCollectionBlock::initialize() {
auto logicalCollection = _collection->getCollection();
auto cid = logicalCollection->planId();
auto dbName = logicalCollection->dbName();
auto collectionInfoCurrent = ClusterInfo::instance()->getCollectionCurrent(
dbName, std::to_string(cid));
double maxWait = _engine->getQuery()->queryOptions().satelliteSyncWait;
bool inSync = false;
@ -72,6 +70,8 @@ int EnumerateCollectionBlock::initialize() {
double endTime = startTime + maxWait;
while (!inSync) {
auto collectionInfoCurrent = ClusterInfo::instance()->getCollectionCurrent(
dbName, std::to_string(cid));
auto followers = collectionInfoCurrent->servers(_collection->getName());
inSync = std::find(followers.begin(), followers.end(),
ServerState::instance()->getId()) != followers.end();
@ -90,7 +90,7 @@ int EnumerateCollectionBlock::initialize() {
if (!inSync) {
THROW_ARANGO_EXCEPTION_MESSAGE(
TRI_ERROR_CLUSTER_AQL_COLLECTION_OUT_OF_SYNC,
"collection " + _collection->name);
"collection " + _collection->name + " did not come into sync in time (" + std::to_string(maxWait) +")");
}
}
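The `initialize()` loop above re-fetches the follower list until the local server shows up or `satelliteSyncWait` expires, then raises the out-of-sync error with the timeout included. The same poll-until-deadline pattern, sketched (probe and timings are illustrative):

```python
import time

def wait_until(predicate, max_wait, interval=0.1):
    # Poll predicate() until it returns True or max_wait seconds elapse;
    # mirrors the while(!inSync) loop with its endTime check above.
    deadline = time.time() + max_wait
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Example probe that is immediately satisfied.
print(wait_until(lambda: True, max_wait=1.0))
```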

View File

@ -27,6 +27,7 @@
#include "Aql/ExecutionBlock.h"
#include "Aql/ExecutionEngine.h"
#include "Aql/Query.h"
#include "Basics/Exceptions.h"
#include "Basics/StaticStrings.h"
#include "Basics/StringUtils.h"
#include "Basics/VPackStringBufferAdapter.h"
@ -799,6 +800,9 @@ void RestAqlHandler::handleUseQuery(std::string const& operation, Query* query,
int res;
try {
res = query->engine()->initialize();
} catch (arangodb::basics::Exception const& ex) {
generateError(rest::ResponseCode::SERVER_ERROR, ex.code(), "initialize led to an exception: " + ex.message());
return;
} catch (...) {
generateError(rest::ResponseCode::SERVER_ERROR,
TRI_ERROR_HTTP_SERVER_ERROR,
@ -820,6 +824,9 @@ void RestAqlHandler::handleUseQuery(std::string const& operation, Query* query,
items.reset(new AqlItemBlock(query->resourceMonitor(), querySlice.get("items")));
res = query->engine()->initializeCursor(items.get(), pos);
}
} catch (arangodb::basics::Exception const& ex) {
generateError(rest::ResponseCode::SERVER_ERROR, ex.code(), "initializeCursor led to an exception: " + ex.message());
return;
} catch (...) {
generateError(rest::ResponseCode::SERVER_ERROR,
TRI_ERROR_HTTP_SERVER_ERROR,
@ -846,6 +853,9 @@ void RestAqlHandler::handleUseQuery(std::string const& operation, Query* query,
// delete the query from the registry
_queryRegistry->destroy(_vocbase, _qId, errorCode);
_qId = 0;
} catch (arangodb::basics::Exception const& ex) {
generateError(rest::ResponseCode::SERVER_ERROR, ex.code(), "shutdown led to an exception: " + ex.message());
return;
} catch (...) {
generateError(rest::ResponseCode::SERVER_ERROR,
TRI_ERROR_HTTP_SERVER_ERROR,

View File

@ -91,6 +91,23 @@ void ClusterFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
options->addObsoleteOption("--cluster.disable-dispatcher-frontend",
"The dispatcher feature isn't available anymore; Use ArangoDBStarter for this now!",
true);
options->addObsoleteOption("--cluster.dbserver-config",
"The dbserver-config is not available anymore; use ArangoDBStarter",
true);
options->addObsoleteOption("--cluster.coordinator-config",
"The coordinator-config is not available anymore; use ArangoDBStarter",
true);
options->addObsoleteOption("--cluster.data-path",
"path to cluster database directory",
true);
options->addObsoleteOption("--cluster.log-path",
"path to log directory for the cluster",
true);
options->addObsoleteOption("--cluster.arangod-path",
"path to the arangod for the cluster",
true);
options->addOption("--cluster.agency-endpoint",
"agency endpoint to connect to",
@ -111,26 +128,6 @@ void ClusterFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
options->addOption("--cluster.my-address", "this server's endpoint",
new StringParameter(&_myAddress));
options->addOption("--cluster.data-path",
"path to cluster database directory",
new StringParameter(&_dataPath));
options->addOption("--cluster.log-path",
"path to log directory for the cluster",
new StringParameter(&_logPath));
options->addOption("--cluster.arangod-path",
"path to the arangod for the cluster",
new StringParameter(&_arangodPath));
options->addOption("--cluster.dbserver-config",
"path to the DBserver configuration",
new StringParameter(&_dbserverConfig));
options->addOption("--cluster.coordinator-config",
"path to the coordinator configuration",
new StringParameter(&_coordinatorConfig));
options->addOption("--cluster.system-replication-factor",
"replication factor for system collections",
new UInt32Parameter(&_systemReplicationFactor));
@ -212,12 +209,6 @@ void ClusterFeature::reportRole(arangodb::ServerState::RoleEnum role) {
void ClusterFeature::prepare() {
ServerState::instance()->setDataPath(_dataPath);
ServerState::instance()->setLogPath(_logPath);
ServerState::instance()->setArangodPath(_arangodPath);
ServerState::instance()->setDBserverConfig(_dbserverConfig);
ServerState::instance()->setCoordinatorConfig(_coordinatorConfig);
auto v8Dealer = ApplicationServer::getFeature<V8DealerFeature>("V8Dealer");
v8Dealer->defineDouble("SYS_DEFAULT_REPLICATION_FACTOR_SYSTEM",

View File

@ -61,11 +61,6 @@ class ClusterFeature : public application_features::ApplicationFeature {
std::string _myId;
std::string _myRole;
std::string _myAddress;
std::string _dataPath;
std::string _logPath;
std::string _arangodPath;
std::string _dbserverConfig;
std::string _coordinatorConfig;
uint32_t _systemReplicationFactor = 2;
bool _createWaitsForSyncReplication = true;
double _syncReplTimeoutFactor = 1.0;

View File

@ -665,7 +665,7 @@ int warmupOnCoordinator(std::string const& dbname,
"", coordTransactionID, "shard:" + p.first,
arangodb::rest::RequestType::GET,
"/_db/" + StringUtils::urlEncode(dbname) + "/_api/collection/" +
StringUtils::urlEncode(p.first) + "/warmup",
StringUtils::urlEncode(p.first) + "/loadIndexesIntoMemory",
std::shared_ptr<std::string const>(), headers, nullptr, 300.0);
}


@@ -54,11 +54,6 @@ static ServerState Instance;
ServerState::ServerState()
: _id(),
_dataPath(),
_logPath(),
_arangodPath(),
_dbserverConfig(),
_coordinatorConfig(),
_address(),
_lock(),
_role(),
@@ -783,60 +778,6 @@ void ServerState::setState(StateEnum state) {
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the data path
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getDataPath() {
READ_LOCKER(readLocker, _lock);
return _dataPath;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the data path
////////////////////////////////////////////////////////////////////////////////
void ServerState::setDataPath(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_dataPath = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the log path
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getLogPath() {
READ_LOCKER(readLocker, _lock);
return _logPath;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the log path
////////////////////////////////////////////////////////////////////////////////
void ServerState::setLogPath(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_logPath = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the arangod path
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getArangodPath() {
READ_LOCKER(readLocker, _lock);
return _arangodPath;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the arangod path
////////////////////////////////////////////////////////////////////////////////
void ServerState::setArangodPath(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_arangodPath = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the JavaScript startup path
////////////////////////////////////////////////////////////////////////////////
@@ -855,42 +796,6 @@ void ServerState::setJavaScriptPath(std::string const& value) {
_javaScriptStartupPath = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the DBserver config
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getDBserverConfig() {
READ_LOCKER(readLocker, _lock);
return _dbserverConfig;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the DBserver config
////////////////////////////////////////////////////////////////////////////////
void ServerState::setDBserverConfig(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_dbserverConfig = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the coordinator config
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getCoordinatorConfig() {
READ_LOCKER(readLocker, _lock);
return _coordinatorConfig;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the coordinator config
////////////////////////////////////////////////////////////////////////////////
void ServerState::setCoordinatorConfig(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_coordinatorConfig = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief redetermine the server role, we do this after a plan change.
/// This is needed for automatic failover. This calls determineRole with


@@ -179,36 +179,6 @@ class ServerState {
/// @brief set the current state
void setState(StateEnum);
/// @brief gets the data path
std::string getDataPath();
/// @brief sets the data path
void setDataPath(std::string const&);
/// @brief gets the log path
std::string getLogPath();
/// @brief sets the log path
void setLogPath(std::string const&);
/// @brief gets the arangod path
std::string getArangodPath();
/// @brief sets the arangod path
void setArangodPath(std::string const&);
/// @brief gets the DBserver config
std::string getDBserverConfig();
/// @brief sets the DBserver config
void setDBserverConfig(std::string const&);
/// @brief gets the coordinator config
std::string getCoordinatorConfig();
/// @brief sets the coordinator config
void setCoordinatorConfig(std::string const&);
/// @brief gets the JavaScript startup path
std::string getJavaScriptPath();
@@ -296,24 +266,9 @@ class ServerState {
/// @brief the server's description
std::string _description;
/// @brief the data path, can be set just once
std::string _dataPath;
/// @brief the log path, can be set just once
std::string _logPath;
/// @brief the arangod path, can be set just once
std::string _arangodPath;
/// @brief the JavaScript startup path, can be set just once
std::string _javaScriptStartupPath;
/// @brief the DBserver config, can be set just once
std::string _dbserverConfig;
/// @brief the coordinator config, can be set just once
std::string _coordinatorConfig;
/// @brief the server's own address, can be set just once
std::string _address;


@@ -1102,60 +1102,6 @@ static void JS_DescriptionServerState(
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the data path
////////////////////////////////////////////////////////////////////////////////
static void JS_DataPathServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("dataPath()");
}
std::string const path = ServerState::instance()->getDataPath();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the log path
////////////////////////////////////////////////////////////////////////////////
static void JS_LogPathServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("logPath()");
}
std::string const path = ServerState::instance()->getLogPath();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the arangod path
////////////////////////////////////////////////////////////////////////////////
static void JS_ArangodPathServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("arangodPath()");
}
std::string const path = ServerState::instance()->getArangodPath();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the javascript startup path
////////////////////////////////////////////////////////////////////////////////
@@ -1174,43 +1120,6 @@ static void JS_JavaScriptPathServerState(
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the DBserver config
////////////////////////////////////////////////////////////////////////////////
static void JS_DBserverConfigServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("dbserverConfig()");
}
std::string const path = ServerState::instance()->getDBserverConfig();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the coordinator config
////////////////////////////////////////////////////////////////////////////////
static void JS_CoordinatorConfigServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
ONLY_IN_CLUSTER
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("coordinatorConfig()");
}
std::string const path = ServerState::instance()->getCoordinatorConfig();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
#ifdef DEBUG_SYNC_REPLICATION
////////////////////////////////////////////////////////////////////////////////
/// @brief set arangoserver state to initialized
@@ -2142,18 +2051,8 @@ void TRI_InitV8Cluster(v8::Isolate* isolate, v8::Handle<v8::Context> context) {
JS_IdOfPrimaryServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("description"),
JS_DescriptionServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("dataPath"),
JS_DataPathServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("logPath"),
JS_LogPathServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("arangodPath"),
JS_ArangodPathServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("javaScriptPath"),
JS_JavaScriptPathServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("dbserverConfig"),
JS_DBserverConfigServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("coordinatorConfig"),
JS_CoordinatorConfigServerState);
#ifdef DEBUG_SYNC_REPLICATION
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("enableSyncReplicationDebug"),
JS_EnableSyncReplicationDebug);


@@ -125,7 +125,7 @@ void GeneralCommTask::executeRequest(
// give up, if we cannot find a handler
if (handler == nullptr) {
LOG_TOPIC(TRACE, arangodb::Logger::FIXME) << "no handler is known, giving up";
handleSimpleError(rest::ResponseCode::NOT_FOUND, messageId);
handleSimpleError(rest::ResponseCode::NOT_FOUND, *request, messageId);
return;
}
@@ -158,7 +158,7 @@ void GeneralCommTask::executeRequest(
processResponse(response.get());
return;
} else {
handleSimpleError(rest::ResponseCode::SERVER_ERROR, TRI_ERROR_QUEUE_FULL,
handleSimpleError(rest::ResponseCode::SERVER_ERROR, *request, TRI_ERROR_QUEUE_FULL,
TRI_errno_string(TRI_ERROR_QUEUE_FULL), messageId);
}
}
@@ -170,7 +170,7 @@ void GeneralCommTask::executeRequest(
}
if (!ok) {
handleSimpleError(rest::ResponseCode::SERVER_ERROR, messageId);
handleSimpleError(rest::ResponseCode::SERVER_ERROR, *request, messageId);
}
}
@@ -274,7 +274,8 @@ bool GeneralCommTask::handleRequest(std::shared_ptr<RestHandler> handler) {
bool ok = SchedulerFeature::SCHEDULER->queue(std::move(job));
if (!ok) {
handleSimpleError(rest::ResponseCode::SERVICE_UNAVAILABLE, TRI_ERROR_QUEUE_FULL,
handleSimpleError(rest::ResponseCode::SERVICE_UNAVAILABLE, *(handler->request()), TRI_ERROR_QUEUE_FULL,
TRI_errno_string(TRI_ERROR_QUEUE_FULL), messageId);
}


@@ -99,9 +99,9 @@ class GeneralCommTask : public SocketTask {
virtual void addResponse(GeneralResponse*, RequestStatistics*) = 0;
virtual void handleSimpleError(rest::ResponseCode, uint64_t messageId) = 0;
virtual void handleSimpleError(rest::ResponseCode, GeneralRequest const&, uint64_t messageId) = 0;
virtual void handleSimpleError(rest::ResponseCode, int code,
virtual void handleSimpleError(rest::ResponseCode, GeneralRequest const&, int code,
std::string const& errorMessage,
uint64_t messageId) = 0;


@@ -68,16 +68,17 @@ HttpCommTask::HttpCommTask(EventLoop loop, GeneralServer* server,
ConnectionStatistics::SET_HTTP(_connectionStatistics);
}
void HttpCommTask::handleSimpleError(rest::ResponseCode code,
uint64_t /* messageId */) {
void HttpCommTask::handleSimpleError(rest::ResponseCode code, GeneralRequest const& req, uint64_t /* messageId */) {
std::unique_ptr<GeneralResponse> response(new HttpResponse(code));
response->setContentType(req.contentTypeResponse());
addResponse(response.get(), stealStatistics(1UL));
}
void HttpCommTask::handleSimpleError(rest::ResponseCode code, int errorNum,
void HttpCommTask::handleSimpleError(rest::ResponseCode code, GeneralRequest const& req, int errorNum,
std::string const& errorMessage,
uint64_t /* messageId */) {
std::unique_ptr<GeneralResponse> response(new HttpResponse(code));
response->setContentType(req.contentTypeResponse());
VPackBuilder builder;
builder.openObject();
@@ -261,8 +262,9 @@ bool HttpCommTask::processRead(double startTime) {
LOG_TOPIC(WARN, arangodb::Logger::FIXME) << "maximal header size is " << MaximalHeaderSize
<< ", request header size is " << headerLength;
HttpRequest tmpRequest(_connectionInfo, nullptr, 0, _allowMethodOverride);
// header is too large
handleSimpleError(rest::ResponseCode::REQUEST_HEADER_FIELDS_TOO_LARGE,
handleSimpleError(rest::ResponseCode::REQUEST_HEADER_FIELDS_TOO_LARGE, tmpRequest,
1); // ID does not matter for http (http default is 1)
_closeRequested = true;
@@ -320,7 +322,7 @@ bool HttpCommTask::processRead(double startTime) {
if (_protocolVersion != rest::ProtocolVersion::HTTP_1_0 &&
_protocolVersion != rest::ProtocolVersion::HTTP_1_1) {
handleSimpleError(rest::ResponseCode::HTTP_VERSION_NOT_SUPPORTED, 1);
handleSimpleError(rest::ResponseCode::HTTP_VERSION_NOT_SUPPORTED, *_incompleteRequest, 1);
_closeRequested = true;
return false;
@@ -330,7 +332,7 @@ bool HttpCommTask::processRead(double startTime) {
_fullUrl = _incompleteRequest->fullUrl();
if (_fullUrl.size() > 16384) {
handleSimpleError(rest::ResponseCode::REQUEST_URI_TOO_LONG, 1);
handleSimpleError(rest::ResponseCode::REQUEST_URI_TOO_LONG, *_incompleteRequest, 1);
_closeRequested = true;
return false;
@@ -432,7 +434,7 @@ bool HttpCommTask::processRead(double startTime) {
<< "'";
// bad request, method not allowed
handleSimpleError(rest::ResponseCode::METHOD_NOT_ALLOWED, 1);
handleSimpleError(rest::ResponseCode::METHOD_NOT_ALLOWED, *_incompleteRequest, 1);
_closeRequested = true;
return false;
@@ -481,7 +483,7 @@ bool HttpCommTask::processRead(double startTime) {
std::string uncompressed;
if (!StringUtils::gzipUncompress(_readBuffer.c_str() + _bodyPosition,
_bodyLength, uncompressed)) {
handleSimpleError(rest::ResponseCode::BAD, TRI_ERROR_BAD_PARAMETER,
handleSimpleError(rest::ResponseCode::BAD, *_incompleteRequest, TRI_ERROR_BAD_PARAMETER,
"gzip decoding error", 1);
return false;
}
@@ -491,7 +493,7 @@ bool HttpCommTask::processRead(double startTime) {
std::string uncompressed;
if (!StringUtils::gzipDeflate(_readBuffer.c_str() + _bodyPosition,
_bodyLength, uncompressed)) {
handleSimpleError(rest::ResponseCode::BAD, TRI_ERROR_BAD_PARAMETER,
handleSimpleError(rest::ResponseCode::BAD, *_incompleteRequest, TRI_ERROR_BAD_PARAMETER,
"gzip deflate error", 1);
return false;
}
@@ -573,12 +575,12 @@ bool HttpCommTask::processRead(double startTime) {
}
// not found
else if (authResult == rest::ResponseCode::NOT_FOUND) {
handleSimpleError(authResult, TRI_ERROR_ARANGO_DATABASE_NOT_FOUND,
handleSimpleError(authResult, *_incompleteRequest, TRI_ERROR_ARANGO_DATABASE_NOT_FOUND,
TRI_errno_string(TRI_ERROR_ARANGO_DATABASE_NOT_FOUND), 1);
}
// forbidden
else if (authResult == rest::ResponseCode::FORBIDDEN) {
handleSimpleError(authResult, TRI_ERROR_USER_CHANGE_PASSWORD,
handleSimpleError(authResult, *_incompleteRequest, TRI_ERROR_USER_CHANGE_PASSWORD,
"change password", 1);
} else { // not authenticated
HttpResponse response(rest::ResponseCode::UNAUTHORIZED);
@@ -644,7 +646,7 @@ bool HttpCommTask::checkContentLength(HttpRequest* request,
if (bodyLength < 0) {
// bad request, body length is < 0. this is a client error
handleSimpleError(rest::ResponseCode::LENGTH_REQUIRED);
handleSimpleError(rest::ResponseCode::LENGTH_REQUIRED, *request);
return false;
}
@@ -661,7 +663,7 @@ bool HttpCommTask::checkContentLength(HttpRequest* request,
<< ", request body size is " << bodyLength;
// request entity too large
handleSimpleError(rest::ResponseCode::REQUEST_ENTITY_TOO_LARGE,
handleSimpleError(rest::ResponseCode::REQUEST_ENTITY_TOO_LARGE, *request,
0); // FIXME
return false;
}


@@ -22,7 +22,7 @@ class HttpCommTask final : public GeneralCommTask {
arangodb::Endpoint::TransportType transportType() override {
return arangodb::Endpoint::TransportType::HTTP;
};
}
// convert from GeneralResponse to httpResponse
void addResponse(GeneralResponse* response,
@@ -34,19 +34,19 @@ class HttpCommTask final : public GeneralCommTask {
}
addResponse(httpResponse, stat);
};
}
protected:
private:
bool processRead(double startTime) override;
void compactify() override;
std::unique_ptr<GeneralResponse> createResponse(
rest::ResponseCode, uint64_t messageId) override final;
void handleSimpleError(rest::ResponseCode code,
void handleSimpleError(rest::ResponseCode code, GeneralRequest const&,
uint64_t messageId = 1) override final;
void handleSimpleError(rest::ResponseCode, int code,
void handleSimpleError(rest::ResponseCode, GeneralRequest const&, int code,
std::string const& errorMessage,
uint64_t messageId = 1) override final;


@@ -267,15 +267,16 @@ void VstCommTask::handleAuthentication(VPackSlice const& header,
_authenticatedUser = std::move(result._username);
}
}
VstRequest fakeRequest( _connectionInfo, VstInputMessage{}, 0, true /*fakeRequest*/);
if (authOk) {
// mop: hmmm...user should be completely ignored if there is no auth IMHO
// obi: user who sends authentication expects a reply
handleSimpleError(rest::ResponseCode::OK, TRI_ERROR_NO_ERROR,
handleSimpleError(rest::ResponseCode::OK, fakeRequest, TRI_ERROR_NO_ERROR,
"authentication successful", messageId);
} else {
_authenticatedUser.clear();
handleSimpleError(rest::ResponseCode::UNAUTHORIZED,
handleSimpleError(rest::ResponseCode::UNAUTHORIZED, fakeRequest,
TRI_ERROR_HTTP_UNAUTHORIZED, "authentication failed",
messageId);
}
@@ -344,7 +345,8 @@ bool VstCommTask::processRead(double startTime) {
try {
type = header.at(1).getNumber<int>();
} catch (std::exception const& e) {
handleSimpleError(rest::ResponseCode::BAD, chunkHeader._messageID);
VstRequest fakeRequest( _connectionInfo, VstInputMessage{}, 0);
handleSimpleError(rest::ResponseCode::BAD, fakeRequest, chunkHeader._messageID);
LOG_TOPIC(DEBUG, Logger::COMMUNICATION)
<< "VstCommTask: "
<< "VPack Validation failed: " << e.what();
@@ -378,7 +380,7 @@ bool VstCommTask::processRead(double startTime) {
if (level != AuthLevel::RW) {
events::NotAuthorized(request.get());
handleSimpleError(rest::ResponseCode::UNAUTHORIZED, TRI_ERROR_FORBIDDEN,
handleSimpleError(rest::ResponseCode::UNAUTHORIZED, *request, TRI_ERROR_FORBIDDEN,
"not authorized to execute this request",
chunkHeader._messageID);
} else {
@@ -386,7 +388,7 @@ bool VstCommTask::processRead(double startTime) {
// make sure we have a database
if (request->requestContext() == nullptr) {
handleSimpleError(
rest::ResponseCode::NOT_FOUND,
rest::ResponseCode::NOT_FOUND, *request,
TRI_ERROR_ARANGO_DATABASE_NOT_FOUND,
TRI_errno_string(TRI_ERROR_ARANGO_DATABASE_NOT_FOUND),
chunkHeader._messageID);
@@ -454,10 +456,12 @@ std::unique_ptr<GeneralResponse> VstCommTask::createResponse(
}
void VstCommTask::handleSimpleError(rest::ResponseCode responseCode,
GeneralRequest const& req,
int errorNum,
std::string const& errorMessage,
uint64_t messageId) {
VstResponse response(responseCode, messageId);
response.setContentType(req.contentTypeResponse());
VPackBuilder builder;
builder.openObject();
@@ -489,7 +493,8 @@ bool VstCommTask::getMessageFromSingleChunk(
try {
payloads = validateAndCount(vpackBegin, chunkEnd);
} catch (std::exception const& e) {
handleSimpleError(rest::ResponseCode::BAD,
VstRequest fakeRequest( _connectionInfo, VstInputMessage{}, 0, true /*isFake*/);
handleSimpleError(rest::ResponseCode::BAD, fakeRequest,
TRI_ERROR_ARANGO_DATABASE_NOT_FOUND, e.what(),
chunkHeader._messageID);
LOG_TOPIC(DEBUG, Logger::COMMUNICATION)
@@ -498,7 +503,8 @@ bool VstCommTask::getMessageFromSingleChunk(
closeTask(rest::ResponseCode::BAD);
return false;
} catch (...) {
handleSimpleError(rest::ResponseCode::BAD, chunkHeader._messageID);
VstRequest fakeRequest( _connectionInfo, VstInputMessage{}, 0, true /*isFake*/);
handleSimpleError(rest::ResponseCode::BAD, fakeRequest, chunkHeader._messageID);
LOG_TOPIC(DEBUG, Logger::COMMUNICATION) << "VstCommTask: "
<< "VPack Validation failed";
closeTask(rest::ResponseCode::BAD);
@@ -580,7 +586,8 @@ bool VstCommTask::getMessageFromMultiChunks(
im._buffer.data() + im._buffer.byteSize()));
} catch (std::exception const& e) {
handleSimpleError(rest::ResponseCode::BAD,
VstRequest fakeRequest( _connectionInfo, VstInputMessage{}, 0, true /*isFake*/);
handleSimpleError(rest::ResponseCode::BAD, fakeRequest,
TRI_ERROR_ARANGO_DATABASE_NOT_FOUND, e.what(),
chunkHeader._messageID);
LOG_TOPIC(DEBUG, Logger::COMMUNICATION)
@@ -589,7 +596,8 @@ bool VstCommTask::getMessageFromMultiChunks(
closeTask(rest::ResponseCode::BAD);
return false;
} catch (...) {
handleSimpleError(rest::ResponseCode::BAD, chunkHeader._messageID);
VstRequest fakeRequest( _connectionInfo, VstInputMessage{}, 0, true /*isFake*/);
handleSimpleError(rest::ResponseCode::BAD, fakeRequest, chunkHeader._messageID);
LOG_TOPIC(DEBUG, Logger::COMMUNICATION) << "VstCommTask: "
<< "VPack Validation failed!";
closeTask(rest::ResponseCode::BAD);


@@ -70,12 +70,13 @@ class VstCommTask : public GeneralCommTask {
void handleAuthentication(VPackSlice const& header, uint64_t messageId);
void handleSimpleError(rest::ResponseCode code, uint64_t id) override {
void handleSimpleError(rest::ResponseCode code, GeneralRequest const& req, uint64_t id) override {
VstResponse response(code, id);
response.setContentType(req.contentTypeResponse());
addResponse(&response, nullptr);
}
void handleSimpleError(rest::ResponseCode, int code,
void handleSimpleError(rest::ResponseCode, GeneralRequest const&, int code,
std::string const& errorMessage,
uint64_t messageId) override;


@@ -94,7 +94,9 @@ RestStatus RestImportHandler::execute() {
switch (_response->transportType()) {
case Endpoint::TransportType::HTTP: {
if (found &&
if (_request->contentType() == arangodb::ContentType::VPACK){
createFromVPack(documentType);
} else if (found &&
(documentType == "documents" || documentType == "array" ||
documentType == "list" || documentType == "auto")) {
createFromJson(documentType);


@@ -3337,7 +3337,7 @@ void TRI_InitV8Collections(v8::Handle<v8::Context> context,
JS_UpdateVocbaseCol);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("version"),
JS_VersionVocbaseCol);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("warmup"),
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("loadIndexesIntoMemory"),
JS_WarmupVocbaseCol);
TRI_InitV8IndexCollection(isolate, rt);


@@ -468,17 +468,15 @@ function put_api_collection_load (req, res, collection) {
}
// //////////////////////////////////////////////////////////////////////////////
// / @brief was docuBlock JSF_put_api_collection_warmup
// / @brief was docuBlock JSF_put_api_collection_loadIndexesIntoMemory
// //////////////////////////////////////////////////////////////////////////////
function put_api_collection_warmup (req, res, collection) {
function put_api_collection_load_indexes_in_memory (req, res, collection) {
try {
// Warmup the indexes
collection.warmup();
// Load all index values into Memory
collection.loadIndexesIntoMemory();
var result = collectionRepresentation(collection);
actions.resultOk(req, res, actions.HTTP_OK, result);
actions.resultOk(req, res, actions.HTTP_OK, true);
} catch (err) {
actions.resultException(req, res, err, undefined, false);
}
@@ -632,12 +630,12 @@ function put_api_collection (req, res) {
put_api_collection_rename(req, res, collection);
} else if (sub === 'rotate') {
put_api_collection_rotate(req, res, collection);
} else if (sub === 'warmup') {
put_api_collection_warmup(req, res, collection);
} else if (sub === 'loadIndexesIntoMemory') {
put_api_collection_load_indexes_in_memory(req, res, collection);
} else {
actions.resultNotFound(req, res, arangodb.ERROR_HTTP_NOT_FOUND,
"expecting one of the actions 'load', 'unload',"
+ " 'truncate', 'properties', 'rename'");
+ " 'truncate', 'properties', 'rename', 'loadIndexesIntoMemory'");
}
}
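The dispatcher above routes on the last URL segment and now accepts `loadIndexesIntoMemory` where it previously accepted `warmup`. A minimal standalone sketch of the same dispatch pattern (the table-driven form and the handler return values are illustrative only, not the actual actions module):

```javascript
// Map of sub-route suffix -> handler, mirroring the if/else chain in
// put_api_collection above. Handlers are stubs for illustration.
const subRoutes = {
  load: () => 'load',
  unload: () => 'unload',
  truncate: () => 'truncate',
  properties: () => 'properties',
  rename: () => 'rename',
  loadIndexesIntoMemory: () => 'loadIndexesIntoMemory'
};

function dispatch(sub) {
  const handler = subRoutes[sub];
  if (!handler) {
    // corresponds to actions.resultNotFound in the real dispatcher
    throw new Error("expecting one of the actions " +
                    Object.keys(subRoutes).join(', '));
  }
  return handler();
}
```

With this table, `dispatch('loadIndexesIntoMemory')` succeeds while the old `dispatch('warmup')` now falls through to the not-found branch, matching the behavior change in the diff.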

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -2781,4 +2781,4 @@ var cutByResolution = function (str) {
</div>
<div id="workMonitorContent" class="innerContent">
</div></script></head><body><nav class="navbar" style="display: none"><div class="primary"><div class="navlogo"><a class="logo big" href="#"><img id="ArangoDBLogo" class="arangodbLogo" src="img/arangodb-edition-optimized.svg"></a><a class="logo small" href="#"><img class="arangodbLogo" src="img/arangodb_logo_small.png"></a><a class="version"><span id="currentVersion"></span></a></div><div class="statmenu" id="statisticBar"></div><div class="navmenu" id="navigationBar"></div></div></nav><div id="modalPlaceholder"></div><div class="bodyWrapper" style="display: none"><div class="centralRow"><div id="navbar2" class="navbarWrapper secondary"><div class="subnavmenu" id="subNavigationBar"></div></div><div class="resizecontainer contentWrapper"><div id="loadingScreen" class="loadingScreen" style="display: none"><i class="fa fa-circle-o-notch fa-spin fa-3x fa-fw margin-bottom"></i> <span class="sr-only">Loading...</span></div><div id="content" class="centralContent"></div><footer class="footer"><div id="footerBar"></div></footer></div></div></div><div id="progressPlaceholder" style="display:none"></div><div id="spotlightPlaceholder" style="display:none"></div><div id="graphSettingsContent" style="display: none"></div><div id="filterSelectDiv" style="display:none"></div><div id="offlinePlaceholder" style="display:none"><div class="offline-div"><div class="pure-u"><div class="pure-u-1-4"></div><div class="pure-u-1-2 offline-window"><div class="offline-header"><h3>You have been disconnected from the server</h3></div><div class="offline-body"><p>The connection to the server has been lost. 
The server may be under heavy load.</p><p>Trying to reconnect in <span id="offlineSeconds">10</span> seconds.</p><p class="animation_state"><span><button class="button-success">Reconnect now</button></span></p></div></div><div class="pure-u-1-4"></div></div></div></div><div class="arangoFrame" style=""><div class="outerDiv"><div class="innerDiv"></div></div></div><script src="libs.js?version=1496768732353"></script><script src="app.js?version=1496768732353"></script></body></html>
</div></script></head><body><nav class="navbar" style="display: none"><div class="primary"><div class="navlogo"><a class="logo big" href="#"><img id="ArangoDBLogo" class="arangodbLogo" src="img/arangodb-edition-optimized.svg"></a><a class="logo small" href="#"><img class="arangodbLogo" src="img/arangodb_logo_small.png"></a><a class="version"><span id="currentVersion"></span></a></div><div class="statmenu" id="statisticBar"></div><div class="navmenu" id="navigationBar"></div></div></nav><div id="modalPlaceholder"></div><div class="bodyWrapper" style="display: none"><div class="centralRow"><div id="navbar2" class="navbarWrapper secondary"><div class="subnavmenu" id="subNavigationBar"></div></div><div class="resizecontainer contentWrapper"><div id="loadingScreen" class="loadingScreen" style="display: none"><i class="fa fa-circle-o-notch fa-spin fa-3x fa-fw margin-bottom"></i> <span class="sr-only">Loading...</span></div><div id="content" class="centralContent"></div><footer class="footer"><div id="footerBar"></div></footer></div></div></div><div id="progressPlaceholder" style="display:none"></div><div id="spotlightPlaceholder" style="display:none"></div><div id="graphSettingsContent" style="display: none"></div><div id="filterSelectDiv" style="display:none"></div><div id="offlinePlaceholder" style="display:none"><div class="offline-div"><div class="pure-u"><div class="pure-u-1-4"></div><div class="pure-u-1-2 offline-window"><div class="offline-header"><h3>You have been disconnected from the server</h3></div><div class="offline-body"><p>The connection to the server has been lost. 
The server may be under heavy load.</p><p>Trying to reconnect in <span id="offlineSeconds">10</span> seconds.</p><p class="animation_state"><span><button class="button-success">Reconnect now</button></span></p></div></div><div class="pure-u-1-4"></div></div></div></div><div class="arangoFrame" style=""><div class="outerDiv"><div class="innerDiv"></div></div></div><script src="libs.js?version=1497180582772"></script><script src="app.js?version=1497180582772"></script></body></html>

File diff suppressed because one or more lines are too long


@@ -160,9 +160,9 @@
$.ajax({
cache: false,
type: 'PUT',
url: arangoHelper.databaseUrl('/_api/collection/' + this.get('id') + '/warmup'),
url: arangoHelper.databaseUrl('/_api/collection/' + this.get('id') + '/loadIndexesIntoMemory'),
success: function () {
arangoHelper.arangoNotification('Warmup started.');
arangoHelper.arangoNotification('Loading indexes into Memory.');
},
error: function () {
arangoHelper.arangoError('Collection error.');


@@ -302,7 +302,7 @@
);
buttons.push(
window.modalView.createDeleteButton(
'Warmup',
'Load Indexes in Memory',
this.warumupCollection.bind(this)
)
);


@@ -247,7 +247,7 @@
);
buttons.push(
window.modalView.createNotificationButton(
'Warmup',
'Load Indexes in Memory',
this.warmupCollection.bind(this)
)
);


@@ -1398,12 +1398,12 @@ ArangoCollection.prototype.removeByKeys = function (keys) {
};
// //////////////////////////////////////////////////////////////////////////////
// / @brief warmup indexes of a collection
// / @brief load indexes of a collection into memory
// //////////////////////////////////////////////////////////////////////////////
ArangoCollection.prototype.warmup = function () {
ArangoCollection.prototype.loadIndexesIntoMemory = function () {
this._status = null;
var requestResult = this._database._connection.PUT(this._baseurl('warmup'), '');
var requestResult = this._database._connection.PUT(this._baseurl('loadIndexesIntoMemory'), '');
this._status = null;
arangosh.checkRequestResult(requestResult);
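The renamed shell method above issues a PUT against the renamed route. A minimal sketch of how that request URL is assembled (the helper below is hypothetical and only for illustration; the real driver builds the path via `_baseurl`, and the `/_db/<name>` prefix matches the coordinator call earlier in this commit):

```javascript
// Hypothetical helper mirroring ArangoCollection.prototype.loadIndexesIntoMemory:
// the method targets PUT /_db/<database>/_api/collection/<collection>/loadIndexesIntoMemory.
function loadIndexesIntoMemoryUrl(database, collection) {
  return '/_db/' + encodeURIComponent(database) +
         '/_api/collection/' + encodeURIComponent(collection) +
         '/loadIndexesIntoMemory';
}

console.log(loadIndexesIntoMemoryUrl('_system', 'myCollection'));
// → /_db/_system/_api/collection/myCollection/loadIndexesIntoMemory
```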


@@ -124,7 +124,7 @@ describe('Index figures', function () {
});
describe('warmup', function() {
describe('loading indexes into memory', function() {
before('insert document', function() {
// We insert enough documents to trigger resizing
@@ -148,7 +148,7 @@
expect(edgeIndex.figures.cacheSize).to.be.a('number');
let oldSize = edgeIndex.figures.cacheSize;
col.warmup();
col.loadIndexesIntoMemory();
// Test if the memory consumption goes up
let indexes2 = col.getIndexes(true);


@@ -102,11 +102,13 @@ void EnvironmentFeature::prepare() {
}
#endif
#if 0
// TODO: 12-06-2017 turn off check for now and reactivate for next beta
try {
std::string value =
basics::FileUtils::slurp("/proc/sys/vm/overcommit_memory");
uint64_t v = basics::StringUtils::uint64(value);
if (v != 0 && v != 1) {
if (v != 0 && v != 2) {
// from https://www.kernel.org/doc/Documentation/sysctl/vm.txt:
//
// When this flag is 0, the kernel attempts to estimate the amount
@@ -117,13 +119,14 @@ void EnvironmentFeature::prepare() {
// policy that attempts to prevent any overcommit of memory.
LOG_TOPIC(WARN, Logger::MEMORY)
<< "/proc/sys/vm/overcommit_memory is set to '" << v
<< "'. It is recommended to set it to a value of 0 or 1";
<< "'. It is recommended to set it to a value of 0 or 2";
LOG_TOPIC(WARN, Logger::MEMORY) << "execute 'sudo bash -c \"echo 0 > "
"/proc/sys/vm/overcommit_memory\"'";
}
} catch (...) {
// file not found or value not convertible into integer
}
+#endif
try {
std::string value =
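The hunk above both disables the overcommit check for now (`#if 0`) and retunes it: the recommended `/proc/sys/vm/overcommit_memory` values change from {0, 1} to {0, 2}. A minimal sketch of that check's logic, assuming only what the diff shows (the function name is illustrative, not the C++ API):

```python
# Sketch of the (currently #if 0'd) overcommit_memory sanity check:
# warn for any setting other than the recommended values 0 or 2.
def overcommit_warning(raw_value):
    """Return a warning string for discouraged settings, else None."""
    try:
        v = int(raw_value.strip())
    except ValueError:
        return None  # value not convertible into an integer: stay silent
    if v not in (0, 2):
        return ("/proc/sys/vm/overcommit_memory is set to '%d'. "
                "It is recommended to set it to a value of 0 or 2" % v)
    return None

print(overcommit_warning("1"))  # 1 is no longer a recommended value
print(overcommit_warning("2"))  # None
```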

View File

@@ -60,7 +60,7 @@ std::string const& lookupStringInMap(
}
VstRequest::VstRequest(ConnectionInfo const& connectionInfo,
-VstInputMessage&& message, uint64_t messageId)
+VstInputMessage&& message, uint64_t messageId, bool isFake)
: GeneralRequest(connectionInfo),
_message(std::move(message)),
_headers(nullptr),
@@ -68,7 +68,9 @@ VstRequest::VstRequest(ConnectionInfo const& connectionInfo,
_protocol = "vst";
_contentType = ContentType::VPACK;
_contentTypeResponse = ContentType::VPACK;
-parseHeaderInformation();
+if(!isFake){
+parseHeaderInformation();
+}
_user = "root";
}
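The pattern in this hunk (and the header change below) is a defaulted `isFake` flag that lets callers construct a request without triggering header parsing. A rough Python analogue, assuming only the behavior visible in the diff — class and method names here are illustrative, not the actual C++ API:

```python
# Analogue of the VstRequest change: is_fake (default False) gates
# the header-parsing step in the constructor.
class FakeableRequest:
    def __init__(self, message, message_id, is_fake=False):
        self.message = message
        self.message_id = message_id
        self.headers_parsed = False
        if not is_fake:
            self._parse_header_information()
        self.user = "root"

    def _parse_header_information(self):
        # stand-in for the real VelocyStream header parsing
        self.headers_parsed = True

real = FakeableRequest(b"...", 1)
fake = FakeableRequest(b"...", 2, is_fake=True)
print(real.headers_parsed, fake.headers_parsed)  # True False
```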

View File

@@ -60,7 +60,7 @@ class VstRequest final : public GeneralRequest {
private:
VstRequest(ConnectionInfo const& connectionInfo, VstInputMessage&& message,
-uint64_t messageId);
+uint64_t messageId, bool isFake = false);
public:
~VstRequest() {}

View File

@@ -56,16 +56,19 @@ test_tools(){
main(){
#test for basic tools
test_tools
+TARGET=$1
+shift
+if test -z "$TARGET"; then
./scripts/build-deb.sh --buildDir build-docu --parallel 2
# we expect this to be a symlink, so no -r ;-)
echo "#############################################"
echo "RELINKING BUILD DIRECTORY !!!!!!!!!!!!!!!!!!!"
echo "#############################################"
rm -f build
ln -s build-docu build
./utils/generateExamples.sh
+fi
./utils/generateSwagger.sh
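The script change above makes the expensive rebuild-and-relink phase conditional: it only runs when no target argument is given, while swagger generation always runs. The control flow can be sketched like this (function name and step labels are invented for illustration):

```python
# Sketch of the new conditional flow: skip the deb build, directory
# relink, and example generation when a TARGET argument is supplied.
def plan_steps(argv):
    steps = []
    target = argv[0] if argv else ""
    if not target:
        steps += ["build-deb", "relink build -> build-docu", "generateExamples"]
    steps.append("generateSwagger")
    return steps

print(plan_steps([]))           # full rebuild, then swagger
print(plan_steps(["swagger"]))  # swagger generation only
```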

View File

@@ -1,73 +0,0 @@
#!/bin/bash
export PID=$$
self=$0
if test -f "${self}.js"; then
export SCRIPT=${self}.js
else
export SCRIPT=$1
shift
fi
if test -n "$ORIGINAL_PATH"; then
# running in cygwin...
PS='\'
export EXT=".exe"
else
export EXT=""
PS='/'
fi;
LOGFILE="out${PS}log-$PID"
DBDIR="out${PS}data-$PID"
mkdir -p ${DBDIR}
export PORT=`expr 1024 + $RANDOM`
declare -a ARGS
export VG=''
export VXML=''
for i in "$@"; do
# no valgrind on cygwin, don't care.
if test "$i" == valgrind; then
export VG='/usr/bin/valgrind --log-file=/tmp/valgrindlog.%p'
elif test "$i" == valgrindxml; then
export VG='/usr/bin/valgrind --xml=yes --xml-file=valgrind_testrunner'
export VXML="valgrind=\"${VG}\""
export VG=${VG}'.xml '
else
ARGS+=(--javascript.script-parameter)
ARGS+=("$i")
fi
done
echo Database has its data in ${DBDIR}
echo Logfile is in ${LOGFILE}
$VG build/bin/arangod \
--configuration none \
--cluster.arangod-path bin${PS}arangod \
--cluster.coordinator-config etc${PS}relative${PS}arangod-coordinator.conf \
--cluster.dbserver-config etc${PS}relative${PS}arangod-dbserver.conf \
--cluster.disable-dispatcher-frontend false \
--cluster.disable-dispatcher-kickstarter false \
--cluster.data-path cluster \
--cluster.log-path cluster \
--database.directory ${DBDIR} \
--log.file ${LOGFILE} \
--server.endpoint tcp://127.0.0.1:$PORT \
--javascript.startup-directory js \
--javascript.app-path js${PS}apps \
--javascript.script $SCRIPT \
--no-server \
--temp-path ${PS}var${PS}tmp \
"${ARGS[@]}" \
$VXML
if test $? -eq 0; then
echo "removing ${LOGFILE} ${DBDIR}"
rm -rf ${LOGFILE} ${DBDIR}
else
echo "failed - don't remove ${LOGFILE} ${DBDIR} - here's the logfile:"
cat ${LOGFILE}
fi
echo Server has terminated.

View File

@@ -1 +0,0 @@
run

View File

@@ -650,7 +650,7 @@ def restbodyparam(cargo, r=Regexen()):
if restBodyParam == None:
# https://github.com/swagger-api/swagger-ui/issues/1430
# once this is solved we can skip this:
-operation['description'] += "\n**A json post document with these Properties is required:**\n"
+operation['description'] += "\n**A JSON object with these properties is required:**\n"
restBodyParam = {
'name': 'Json Request Body',
'x-description-offset': len(swagger['paths'][httpPath][method]['description']),
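The wording fix above lands in the swagger-ui workaround: until swagger-api/swagger-ui issue #1430 is resolved, the generator appends the required-body note directly to the operation's description string. A minimal sketch of that append, using only what the diff shows:

```python
# Workaround sketch: fold the body-parameter requirement into the
# operation description text, with the corrected wording.
operation = {'description': "Creates a new document."}
operation['description'] += "\n**A JSON object with these properties is required:**\n"
print(operation['description'])
```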