
cluster documentation varnish (#2553)

This commit is contained in:
Kaveh Vahedipour 2017-06-12 19:02:11 +02:00 committed by Frank Celler
parent 9edb884bc8
commit c1abc0333d
16 changed files with 239 additions and 451 deletions

View File

@ -91,3 +91,32 @@ to produce a 2.8-compatible dump with a 3.0 ArangoDB, please specify the option
`--compat28 true` when invoking arangodump.
unix> arangodump --compat28 true --collection myvalues --output-directory "dump"
### Advanced cluster options
Starting with version 3.1.17, collections may be created with a shard
distribution identical to that of an existing prototype collection;
i.e. shards are distributed in the very same pattern as in the
prototype collection. Such collections cannot be dumped without the
prototype collection, or arangodump will yield an error.
unix> arangodump --collection clonedCollection --output-directory "dump"
ERROR Collection clonedCollection's shard distribution is based on that of collection prototypeCollection, which is not dumped along. You may dump the collection regardless of the missing prototype collection by using the --ignore-distribute-shards-like-errors parameter.
There are two ways to approach the problem: solve it, i.e. dump the
prototype collection along:
unix> arangodump --collection clonedCollection --collection prototypeCollection --output-directory "dump"
Processed 2 collection(s), wrote 81920 byte(s) into datafiles, sent 1 batch(es)
Or override that behaviour to be able to dump the collection
individually.
unix> arangodump --collection clonedCollection --output-directory "dump" --ignore-distribute-shards-like-errors
Processed 1 collection(s), wrote 34217 byte(s) into datafiles, sent 1 batch(es)
Note that, as a consequence, restoring such a collection without its
prototype is affected as well; see [arangorestore](Arangorestore.md).
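The rule arangodump enforces here can be illustrated with a small sketch (hypothetical helper and collection objects, for illustration only; this is not arangodump's actual implementation):

```javascript
// Sketch: verify that every collection selected for a dump also brings
// along the prototype its shard distribution is based on.
function checkDumpSet(collections, ignoreDistributeShardsLikeErrors) {
  const selected = new Set(collections.map((c) => c.name));
  const errors = [];
  for (const c of collections) {
    if (c.distributeShardsLike && !selected.has(c.distributeShardsLike)) {
      errors.push(
        `Collection ${c.name}'s shard distribution is based on that of ` +
        `collection ${c.distributeShardsLike}, which is not dumped along.`);
    }
  }
  // The override flag suppresses the errors, as the parameter does above.
  return ignoreDistributeShardsLikeErrors ? [] : errors;
}

const clone = { name: "clonedCollection", distributeShardsLike: "prototypeCollection" };
const proto = { name: "prototypeCollection" };
console.log(checkDumpSet([clone], false).length);        // 1 error
console.log(checkDumpSet([clone, proto], false).length); // 0 errors
console.log(checkDumpSet([clone], true).length);         // 0 errors
```

Dumping the clone alone fails the check; adding the prototype to the dump set, or passing the override flag, makes it pass.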

View File

@ -160,3 +160,20 @@ be ignored.
Note that in a cluster, every newly created collection will have a new
ID, it is not possible to reuse the ID from the originally dumped
collection. This is for safety reasons to ensure consistency of IDs.
### Restoring collections with sharding prototypes
*arangorestore* will yield an error when trying to restore a
collection whose shard distribution follows a collection that does
not exist in the cluster and was not dumped along:
unix> arangorestore --collection clonedCollection --server.database mydb --input-directory "dump"
ERROR got error from server: HTTP 500 (Internal Server Error): ArangoError 1486: must not have a distributeShardsLike attribute pointing to an unknown collection
Processed 0 collection(s), read 0 byte(s) from datafiles, sent 0 batch(es)
The collection can nevertheless be restored by overriding the error as
follows:
unix> arangorestore --collection clonedCollection --server.database mydb --input-directory "dump" --ignore-distribute-shards-like-errors
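When restoring several collections at once, the error can also be avoided by creating prototypes before their clones. A small sketch of such a dependency ordering (illustrative only, not arangorestore's actual code):

```javascript
// Sketch: order collections so that a distributeShardsLike prototype is
// always created before the collections that reference it.
function restoreOrder(collections) {
  const byName = new Map(collections.map((c) => [c.name, c]));
  const ordered = [];
  const seen = new Set();
  function visit(c) {
    if (seen.has(c.name)) return;
    seen.add(c.name);
    const proto = c.distributeShardsLike && byName.get(c.distributeShardsLike);
    if (proto) visit(proto); // create the prototype first
    ordered.push(c.name);
  }
  collections.forEach(visit);
  return ordered;
}

const order = restoreOrder([
  { name: "clonedCollection", distributeShardsLike: "prototypeCollection" },
  { name: "prototypeCollection" },
]);
console.log(order); // prototypeCollection comes before clonedCollection
```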

View File

@ -1,21 +1,8 @@
Clusters Options
================
### Node ID
<!-- arangod/Cluster/ApplicationCluster.h -->
This server's id: `--cluster.my-local-info info`
Some local information about the server in the cluster, this can for
example be an IP address with a process ID or any string unique to
the server. Specifying *info* is mandatory on startup if the server
id (see below) is not specified. Each server of the cluster must
have a unique local info. This is ignored if my-id below is specified.
### Agency endpoint
<!-- arangod/Cluster/ApplicationCluster.h -->
<!-- arangod/Cluster/ClusterFeature.h -->
List of agency endpoints:
@ -37,48 +24,12 @@ alternative endpoint if one of them becomes unavailable.
**Examples**
```
--cluster.agency-endpoint tcp://192.168.1.1:4001 --cluster.agency-endpoint
tcp://192.168.1.2:4002
--cluster.agency-endpoint tcp://192.168.1.1:4001 --cluster.agency-endpoint tcp://192.168.1.2:4002 ...
```
### Agency prefix
<!-- arangod/Cluster/ApplicationCluster.h -->
### My address
Global agency prefix:
`--cluster.agency-prefix prefix`
The global key prefix used in all requests to the agency. The specified
prefix will become part of each agency key. Specifying the key prefix
allows managing multiple ArangoDB clusters with the same agency
server(s).
*prefix* must consist of the letters *a-z*, *A-Z* and the digits *0-9*
only. Specifying a prefix is mandatory.
**Examples**
```
--cluster.prefix mycluster
```
### MyId
<!-- arangod/Cluster/ApplicationCluster.h -->
This server's id: `--cluster.my-id id`
The local server's id in the cluster. Specifying *id* is mandatory on
startup. Each server of the cluster must have a unique id.
Specifying the id is very important because the server id is used for
determining the server's role and tasks in the cluster.
*id* must be a string consisting of the letters *a-z*, *A-Z* or the
digits *0-9* only.
### MyAddress
<!-- arangod/Cluster/ApplicationCluster.h -->
<!-- arangod/Cluster/ClusterFeature.h -->
This server's address / endpoint:
@ -97,6 +48,53 @@ for the server's id, ArangoDB will refuse to start.
**Examples**
Listen only on interface with address `192.168.1.1`
```
--cluster.my-address tcp://192.168.1.1:8530
```
Listen on all IPv4 and IPv6 addresses on port `8530`
```
--cluster.my-address ssl://[::]:8530
```
### My role
<!-- arangod/Cluster/ClusterFeature.h -->
This server's role:
`--cluster.my-role [dbserver|coordinator]`
The server's role: either a db server (backend data server) or a
coordinator (frontend server for external and application access).
### Node ID (deprecated)
<!-- arangod/Cluster/ClusterFeature.h -->
This server's id: `--cluster.my-local-info info`
Some local information about the server in the cluster, this can for
example be an IP address with a process ID or any string unique to
the server. Specifying *info* is mandatory on startup if the server
id (see below) is not specified. Each server of the cluster must
have a unique local info. This is ignored if my-id below is specified.
This option is deprecated and will be removed in a future release.
Cluster node ids have been dropped in favour of UUIDs that are
generated once.
### More advanced options (should generally remain untouched)
<!-- arangod/Cluster/ClusterFeature.h -->
Synchronous replication timing: `--cluster.synchronous-replication-timeout-factor double`
Stretch or shrink timeouts for the internal synchronous replication
mechanism between db servers. All such timeouts are affected by this
change. Please change only with intent and great care. Defaults to `1.0`.
System replication factor: `--cluster.system-replication-factor integer`
Changes the default replication factor for system collections. Defaults to `2`.

View File

@ -1,17 +1,38 @@
Introduction to Replication
===========================
Replication allows you to *replicate* data onto another machine. It
forms the base of all disaster recovery and failover features ArangoDB
offers.
ArangoDB offers asynchronous and synchronous replication which both
have their pros and cons. Both modes may and should be combined in a
real world scenario and be applied in the usecase where they excel
most.
We will describe pros and cons of each of them in the following
sections.
### Synchronous replication
Synchronous replication only works within a cluster and is typically
used for mission critical data which must be accessible at all
times. Synchronous replication generally stores a copy of a shard's
data on another db server and keeps it in sync. Essentially, when storing
data after enabling synchronous replication the cluster will wait for
all replicas to write all the data before greenlighting the write
operation to the client. This will naturally increase the latency a
bit, since one more network hop is needed for each write. However, it
will enable the cluster to immediately fail over to a replica whenever
an outage has been detected, without losing any committed data, and
mostly without even signaling an error condition to the client.
Synchronous replication is organized such that every shard has a
leader and `r-1` followers, where `r` denotes the replication
factor. The number of followers can be controlled using the
`replicationFactor` parameter whenever you create a collection;
`replicationFactor` is the total number of copies being kept, that
is, one plus the number of followers.
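The relationship between the replication factor and the number of followers can be sketched as (hypothetical helper name, for illustration only):

```javascript
// Sketch: with replicationFactor r, each shard has one leader and r-1
// followers; the replication factor is the total number of copies kept.
function replicaLayout(replicationFactor) {
  return {
    leaders: 1,
    followers: replicationFactor - 1,
    copies: replicationFactor,
  };
}

const layout = replicaLayout(3);
console.log(layout); // { leaders: 1, followers: 2, copies: 3 }
```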
### Satellite collections
@ -23,4 +44,8 @@ Satellite collections are an enterprise only feature.
### Asynchronous replication
In ArangoDB any write operation is logged to the write-ahead
log. When using asynchronous replication, slaves connect to a
master and apply all the events from the log in the same order
locally. After that, they will have the same state of data as the
master database.

View File

@ -7,10 +7,23 @@ Synchronous replication requires an operational ArangoDB cluster.
### Enabling synchronous replication
Synchronous replication can be enabled per collection. When creating a
collection you may specify the number of replicas using the
*replicationFactor* parameter. The default value is set to `1` which
effectively *disables* synchronous replication.
Example:
127.0.0.1:8530@_system> db._create("test", {"replicationFactor": 3})
In the above case, any write operation will require 2 replicas to
report success from now on.
### Preparing growth
You may create a collection with a higher replication factor than the
number of available db servers. When additional db servers become
available, the shards are automatically replicated to the newly
available machines.
Multiple replicas of the same shard can never coexist on the same db
server instance.
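The placement rule can be sketched as follows (a simplified illustration with made-up server names, not ArangoDB's actual placement algorithm):

```javascript
// Sketch: distribute the replicas of a shard over db servers such that no
// server holds two replicas of the same shard. If the replication factor
// exceeds the number of servers, the surplus replicas stay pending until
// more servers join the cluster.
function placeShard(shardId, replicationFactor, servers) {
  const replicas = servers.slice(0, replicationFactor); // one replica per server
  return {
    shard: shardId,
    replicas,
    pending: Math.max(0, replicationFactor - servers.length),
  };
}

const placement = placeShard("s1", 3, ["DB1", "DB2"]);
console.log(placement.replicas); // ["DB1", "DB2"]
console.log(placement.pending);  // 1 replica waits for a third db server
```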

View File

@ -12,6 +12,11 @@ Drops a *collection* and all its indexes and data.
In order to drop a system collection, an *options* object
with attribute *isSystem* set to *true* must be specified.
**Note**: dropping a collection in a cluster that is a prototype for
sharding in other collections is prohibited. In order to be able to
drop such a collection, all dependent collections must be dropped
first.
**Examples**
@startDocuBlockInline collectionDrop
@ -184,6 +189,7 @@ loads a collection
Loads a collection into memory.
**Note**: cluster collections are loaded at all times.
**Examples**
@ -199,7 +205,6 @@ Loads a collection into memory.
@endDocuBlock collectionLoad
### Revision
<!-- arangod/V8Server/v8-collection.cpp -->
@ -268,6 +273,7 @@ unloads a collection
Starts unloading a collection from memory. Note that unloading is deferred
until all queries have finished.
**Note**: cluster collections cannot be unloaded.
**Examples**
@ -283,7 +289,6 @@ until all query have finished.
@endDocuBlock CollectionUnload
### Rename
<!-- arangod/V8Server/v8-collection.cpp -->
@ -318,7 +323,6 @@ database.
@endDocuBlock collectionRename
### Rotate
<!-- arangod/V8Server/v8-collection.cpp -->

View File

@ -142,12 +142,16 @@ to the [naming conventions](../NamingConventions/README.md).
servers holding copies take over, usually without an error being
reported.
- *distributeShardsLike* distribute the shards of this collection by
  cloning the shard distribution of another collection.
When using the *Enterprise* version of ArangoDB the replicationFactor
may be set to "satellite", making the collection locally joinable
on every database server. This reduces the number of network hops
dramatically when using joins in AQL, at the cost of reduced write
performance on these collections.
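Cloning a shard distribution can be illustrated as follows (hypothetical data layout and helper name, not ArangoDB's internal representation):

```javascript
// Sketch: a collection created with distributeShardsLike gets its shards
// placed on exactly the same db servers as the prototype's shards.
function cloneDistribution(prototypeShards) {
  // prototypeShards maps shard id -> [leader, ...followers]
  const cloned = {};
  let i = 1;
  for (const servers of Object.values(prototypeShards)) {
    cloned[`c${i++}`] = servers.slice(); // same servers, same order
  }
  return cloned;
}

const protoShards = { s1: ["DB1", "DB2"], s2: ["DB2", "DB3"] };
console.log(cloneDistribution(protoShards));
// { c1: ["DB1", "DB2"], c2: ["DB2", "DB3"] }
```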
`db._create(collection-name, properties, type)`
Specifies the optional *type* of the collection, it can either be *document*
@ -311,6 +315,8 @@ In order to drop a system collection, one must specify an *options* object
with attribute *isSystem* set to *true*. Otherwise it is not possible to
drop system collections.
**Note**: cluster collections which are prototypes for collections
with the *distributeShardsLike* parameter cannot be dropped.
*Examples*

View File

@ -1,29 +1,50 @@
Automatic native Clusters
-------------------------
Similarly to how the Mesos framework arranges an ArangoDB cluster in a
DC/OS environment for you, `arangodb` can do this for you in a plain
environment.
By invoking the first `arangodb` you launch a primary node. It will
bind a network port, and output the commands you need to cut'n'paste
into the other nodes. Let's review the process of such a startup on
three hosts named `h01`, `h02`, and `h03`:
arangodb@h01 ~> arangodb --ownAddress h01:4000
2017/06/12 14:59:38 Starting arangodb version 0.5.0+git, build 5f97368
2017/06/12 14:59:38 Serving as master with ID '52698769' on h01:4000...
2017/06/12 14:59:38 Waiting for 3 servers to show up.
2017/06/12 14:59:38 Use the following commands to start other servers:
arangodb --dataDir=./db2 --join h01:4000
arangodb --dataDir=./db3 --join h01:4000
2017/06/12 14:59:38 Listening on 0.0.0.0:4000 (h01:4000)
So you cut the line `arangodb --dataDir=./db2 --join h01:4000` and
execute it on the other nodes. If a node cannot resolve the host name
`h01`, replace it with the public IP of the first host.
arangodb@h02 ~> arangodb --dataDir=./db2 --join h01:4000
2017/06/12 14:48:50 Starting arangodb version 0.5.0+git, build 5f97368
2017/06/12 14:48:50 Contacting master h01:4000...
2017/06/12 14:48:50 Waiting for 3 servers to show up...
2017/06/12 14:48:50 Listening on 0.0.0.0:4000 (:4000)
arangodb@h03 ~> arangodb --dataDir=./db3 --join h01:4000
2017/06/12 14:48:50 Starting arangodb version 0.5.0+git, build 5f97368
2017/06/12 14:48:50 Contacting master h01:4000...
2017/06/12 14:48:50 Waiting for 3 servers to show up...
2017/06/12 14:48:50 Listening on 0.0.0.0:4000 (:4000)
Once the two other processes have joined the cluster and started their ArangoDB server processes (this may take a while depending on your system), it will inform you where to connect to the cluster from a browser, shell or your program:
...
2017/06/12 14:55:21 coordinator up and running.
At this point you may access your cluster at either coordinator
endpoint, http://h01:4002/, http://h02:4002/ or http://h03:4002/.
Automatic native local test Clusters
------------------------------------
@ -49,8 +70,10 @@ In the Docker world you need to take care about where persistent data is stored,
(You can use any type of docker volume that fits your setup instead.)
We then need to determine the IP of the docker host on which you
intend to run the ArangoDB starter. Depending on your operating
system, execute `ip addr`, `ifconfig`, or `ipconfig` to determine
your local IP address.
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.32
@ -64,22 +87,24 @@ So this example uses the IP `192.168.1.32`:
It will start the master instance, and command you to start the slave instances:
Unable to find image 'arangodb/arangodb-starter:latest' locally
latest: Pulling from arangodb/arangodb-starter
Digest: sha256:b87d20c0b4757b7daa4cb7a9f55cb130c90a09ddfd0366a91970bcf31a7fd5a4
Status: Downloaded newer image for arangodb/arangodb-starter:latest
2017/06/12 13:26:14 Starting arangodb version 0.7.1, build f128884
2017/06/12 13:26:14 Serving as master with ID '46a2b40d' on 192.168.1.32:8528...
2017/06/12 13:26:14 Waiting for 3 servers to show up.
2017/06/12 13:26:14 Use the following commands to start other servers:
docker volume create arangodb2 && \
docker run -it --name=adb2 --rm -p 8533:8528 -v arangodb2:/data \
-v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter:0.7 \
--starter.address=192.168.1.32 --starter.join=192.168.1.32
docker volume create arangodb3 && \
docker run -it --name=adb3 --rm -p 8538:8528 -v arangodb3:/data \
-v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter:0.7 \
--starter.address=192.168.1.32 --starter.join=192.168.1.32
Once you start the other instances, it will continue like this:
@ -93,9 +118,9 @@ Once you start the other instances, it will continue like this:
2017/05/11 09:05:52 Starting dbserver on port 8530
2017/05/11 09:05:53 Looking for a running instance of coordinator on port 8529
2017/05/11 09:05:53 Starting coordinator on port 8529
2017/05/11 09:05:58 agent up and running (version 3.2.devel).
2017/05/11 09:06:15 dbserver up and running (version 3.2.devel).
2017/05/11 09:06:31 coordinator up and running (version 3.2.devel).
And finally it tells you where you can work with your cluster:

View File

@ -18,29 +18,29 @@ then the commands you have to use are (you can use host names if they can be res
On 192.168.1.1:
```
arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.1:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.supervision true --database.directory agency
sudo arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.1:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.supervision true --database.directory agency
```
On 192.168.1.2:
```
arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.2:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.supervision true --database.directory agency
sudo arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.2:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.supervision true --database.directory agency
```
On 192.168.1.3:
```
arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.3:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.endpoint tcp://192.168.1.1:5001 --agency.endpoint tcp://192.168.1.2:5001 --agency.endpoint tcp://192.168.1.3:5001 --agency.supervision true --database.directory agency
sudo arangod --server.endpoint tcp://0.0.0.0:5001 --agency.my-address tcp://192.168.1.3:5001 --server.authentication false --agency.activate true --agency.size 3 --agency.endpoint tcp://192.168.1.1:5001 --agency.endpoint tcp://192.168.1.2:5001 --agency.endpoint tcp://192.168.1.3:5001 --agency.supervision true --database.directory agency
```
On 192.168.1.1:
```
arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://192.168.1.1:8529 --cluster.my-local-info db1 --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://192.168.1.1:5001 --cluster.agency-endpoint tcp://192.168.1.2:5001 --cluster.agency-endpoint tcp://192.168.1.3:5001 --database.directory primary1 &
sudo arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://192.168.1.1:8529 --cluster.my-local-info db1 --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://192.168.1.1:5001 --cluster.agency-endpoint tcp://192.168.1.2:5001 --cluster.agency-endpoint tcp://192.168.1.3:5001 --database.directory primary1 &
```
On 192.168.1.2:
```
arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8530 --cluster.my-address tcp://192.168.1.2:8530 --cluster.my-local-info db2 --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://192.168.1.1:5001 --cluster.agency-endpoint tcp://192.168.1.2:5001 --cluster.agency-endpoint tcp://192.168.1.3:5001 --database.directory primary2 &
sudo arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8530 --cluster.my-address tcp://192.168.1.2:8530 --cluster.my-local-info db2 --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://192.168.1.1:5001 --cluster.agency-endpoint tcp://192.168.1.2:5001 --cluster.agency-endpoint tcp://192.168.1.3:5001 --database.directory primary2 &
```
On 192.168.1.3:

View File

@ -91,6 +91,23 @@ void ClusterFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
options->addObsoleteOption("--cluster.disable-dispatcher-frontend",
"The dispatcher feature isn't available anymore; Use ArangoDBStarter for this now!",
true);
options->addObsoleteOption("--cluster.dbserver-config",
"The dbserver-config is not available anymore, Use ArangoDBStarter",
true);
options->addObsoleteOption("--cluster.coordinator-config",
"The coordinator-config is not available anymore, Use ArangoDBStarter",
true);
options->addObsoleteOption("--cluster.data-path",
"path to cluster database directory",
true);
options->addObsoleteOption("--cluster.log-path",
"path to log directory for the cluster",
true);
options->addObsoleteOption("--cluster.arangod-path",
"path to the arangod for the cluster",
true);
options->addOption("--cluster.agency-endpoint",
"agency endpoint to connect to",
@ -111,26 +128,6 @@ void ClusterFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
options->addOption("--cluster.my-address", "this server's endpoint",
new StringParameter(&_myAddress));
options->addOption("--cluster.data-path",
"path to cluster database directory",
new StringParameter(&_dataPath));
options->addOption("--cluster.log-path",
"path to log directory for the cluster",
new StringParameter(&_logPath));
options->addOption("--cluster.arangod-path",
"path to the arangod for the cluster",
new StringParameter(&_arangodPath));
options->addOption("--cluster.dbserver-config",
"path to the DBserver configuration",
new StringParameter(&_dbserverConfig));
options->addOption("--cluster.coordinator-config",
"path to the coordinator configuration",
new StringParameter(&_coordinatorConfig));
options->addOption("--cluster.system-replication-factor",
"replication factor for system collections",
new UInt32Parameter(&_systemReplicationFactor));
@ -212,12 +209,6 @@ void ClusterFeature::reportRole(arangodb::ServerState::RoleEnum role) {
void ClusterFeature::prepare() {
ServerState::instance()->setDataPath(_dataPath);
ServerState::instance()->setLogPath(_logPath);
ServerState::instance()->setArangodPath(_arangodPath);
ServerState::instance()->setDBserverConfig(_dbserverConfig);
ServerState::instance()->setCoordinatorConfig(_coordinatorConfig);
auto v8Dealer = ApplicationServer::getFeature<V8DealerFeature>("V8Dealer");
v8Dealer->defineDouble("SYS_DEFAULT_REPLICATION_FACTOR_SYSTEM",

View File

@ -61,11 +61,6 @@ class ClusterFeature : public application_features::ApplicationFeature {
std::string _myId;
std::string _myRole;
std::string _myAddress;
std::string _dataPath;
std::string _logPath;
std::string _arangodPath;
std::string _dbserverConfig;
std::string _coordinatorConfig;
uint32_t _systemReplicationFactor = 2;
bool _createWaitsForSyncReplication = true;
double _syncReplTimeoutFactor = 1.0;

View File

@ -54,11 +54,6 @@ static ServerState Instance;
ServerState::ServerState()
: _id(),
_dataPath(),
_logPath(),
_arangodPath(),
_dbserverConfig(),
_coordinatorConfig(),
_address(),
_lock(),
_role(),
@ -783,60 +778,6 @@ void ServerState::setState(StateEnum state) {
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the data path
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getDataPath() {
READ_LOCKER(readLocker, _lock);
return _dataPath;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the data path
////////////////////////////////////////////////////////////////////////////////
void ServerState::setDataPath(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_dataPath = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the log path
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getLogPath() {
READ_LOCKER(readLocker, _lock);
return _logPath;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the log path
////////////////////////////////////////////////////////////////////////////////
void ServerState::setLogPath(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_logPath = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the arangod path
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getArangodPath() {
READ_LOCKER(readLocker, _lock);
return _arangodPath;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the arangod path
////////////////////////////////////////////////////////////////////////////////
void ServerState::setArangodPath(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_arangodPath = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the JavaScript startup path
////////////////////////////////////////////////////////////////////////////////
@ -855,42 +796,6 @@ void ServerState::setJavaScriptPath(std::string const& value) {
_javaScriptStartupPath = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the DBserver config
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getDBserverConfig() {
READ_LOCKER(readLocker, _lock);
return _dbserverConfig;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the DBserver config
////////////////////////////////////////////////////////////////////////////////
void ServerState::setDBserverConfig(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_dbserverConfig = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the coordinator config
////////////////////////////////////////////////////////////////////////////////
std::string ServerState::getCoordinatorConfig() {
READ_LOCKER(readLocker, _lock);
return _coordinatorConfig;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets the coordinator config
////////////////////////////////////////////////////////////////////////////////
void ServerState::setCoordinatorConfig(std::string const& value) {
WRITE_LOCKER(writeLocker, _lock);
_coordinatorConfig = value;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief redetermine the server role, we do this after a plan change.
/// This is needed for automatic failover. This calls determineRole with

View File

@ -179,36 +179,6 @@ class ServerState {
/// @brief set the current state
void setState(StateEnum);
/// @brief gets the data path
std::string getDataPath();
/// @brief sets the data path
void setDataPath(std::string const&);
/// @brief gets the log path
std::string getLogPath();
/// @brief sets the log path
void setLogPath(std::string const&);
/// @brief gets the arangod path
std::string getArangodPath();
/// @brief sets the arangod path
void setArangodPath(std::string const&);
/// @brief gets the DBserver config
std::string getDBserverConfig();
/// @brief sets the DBserver config
void setDBserverConfig(std::string const&);
/// @brief gets the coordinator config
std::string getCoordinatorConfig();
/// @brief sets the coordinator config
void setCoordinatorConfig(std::string const&);
/// @brief gets the JavaScript startup path
std::string getJavaScriptPath();
@ -296,24 +266,9 @@ class ServerState {
/// @brief the server's description
std::string _description;
/// @brief the data path, can be set just once
std::string _dataPath;
/// @brief the log path, can be set just once
std::string _logPath;
/// @brief the arangod path, can be set just once
std::string _arangodPath;
/// @brief the JavaScript startup path, can be set just once
std::string _javaScriptStartupPath;
/// @brief the DBserver config, can be set just once
std::string _dbserverConfig;
/// @brief the coordinator config, can be set just once
std::string _coordinatorConfig;
/// @brief the server's own address, can be set just once
std::string _address;

View File

@@ -1102,60 +1102,6 @@ static void JS_DescriptionServerState(
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the data path
////////////////////////////////////////////////////////////////////////////////
static void JS_DataPathServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("dataPath()");
}
std::string const path = ServerState::instance()->getDataPath();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the log path
////////////////////////////////////////////////////////////////////////////////
static void JS_LogPathServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("logPath()");
}
std::string const path = ServerState::instance()->getLogPath();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the arangod path
////////////////////////////////////////////////////////////////////////////////
static void JS_ArangodPathServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("arangodPath()");
}
std::string const path = ServerState::instance()->getArangodPath();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the javascript startup path
////////////////////////////////////////////////////////////////////////////////
@@ -1174,43 +1120,6 @@ static void JS_JavaScriptPathServerState(
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the DBserver config
////////////////////////////////////////////////////////////////////////////////
static void JS_DBserverConfigServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("dbserverConfig()");
}
std::string const path = ServerState::instance()->getDBserverConfig();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the coordinator config
////////////////////////////////////////////////////////////////////////////////
static void JS_CoordinatorConfigServerState(
v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
ONLY_IN_CLUSTER
if (args.Length() != 0) {
TRI_V8_THROW_EXCEPTION_USAGE("coordinatorConfig()");
}
std::string const path = ServerState::instance()->getCoordinatorConfig();
TRI_V8_RETURN_STD_STRING(path);
TRI_V8_TRY_CATCH_END
}
#ifdef DEBUG_SYNC_REPLICATION
////////////////////////////////////////////////////////////////////////////////
/// @brief set arangoserver state to initialized
@@ -2142,18 +2051,8 @@ void TRI_InitV8Cluster(v8::Isolate* isolate, v8::Handle<v8::Context> context) {
JS_IdOfPrimaryServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("description"),
JS_DescriptionServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("dataPath"),
JS_DataPathServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("logPath"),
JS_LogPathServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("arangodPath"),
JS_ArangodPathServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("javaScriptPath"),
JS_JavaScriptPathServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("dbserverConfig"),
JS_DBserverConfigServerState);
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("coordinatorConfig"),
JS_CoordinatorConfigServerState);
#ifdef DEBUG_SYNC_REPLICATION
TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("enableSyncReplicationDebug"),
JS_EnableSyncReplicationDebug);

View File

@@ -1,73 +0,0 @@
#!/bin/bash
export PID=$$
self=$0
if test -f "${self}.js"; then
export SCRIPT=${self}.js
else
export SCRIPT=$1
shift
fi
if test -n "$ORIGINAL_PATH"; then
# running in cygwin...
PS='\'
export EXT=".exe"
else
export EXT=""
PS='/'
fi;
LOGFILE="out${PS}log-$PID"
DBDIR="out${PS}data-$PID"
mkdir -p ${DBDIR}
export PORT=`expr 1024 + $RANDOM`
declare -a ARGS
export VG=''
export VXML=''
for i in "$@"; do
# no valgrind on cygwin, don't care.
if test "$i" == valgrind; then
export VG='/usr/bin/valgrind --log-file=/tmp/valgrindlog.%p'
elif test "$i" == valgrindxml; then
export VG='/usr/bin/valgrind --xml=yes --xml-file=valgrind_testrunner'
export VXML="valgrind=\"${VG}\""
export VG=${VG}'.xml '
else
ARGS+=(--javascript.script-parameter)
ARGS+=("$i")
fi
done
echo Database has its data in ${DBDIR}
echo Logfile is in ${LOGFILE}
$VG build/bin/arangod \
--configuration none \
--cluster.arangod-path bin${PS}arangod \
--cluster.coordinator-config etc${PS}relative${PS}arangod-coordinator.conf \
--cluster.dbserver-config etc${PS}relative${PS}arangod-dbserver.conf \
--cluster.disable-dispatcher-frontend false \
--cluster.disable-dispatcher-kickstarter false \
--cluster.data-path cluster \
--cluster.log-path cluster \
--database.directory ${DBDIR} \
--log.file ${LOGFILE} \
--server.endpoint tcp://127.0.0.1:$PORT \
--javascript.startup-directory js \
--javascript.app-path js${PS}apps \
--javascript.script $SCRIPT \
--no-server \
--temp-path ${PS}var${PS}tmp \
"${ARGS[@]}" \
$VXML
if test $? -eq 0; then
echo "removing ${LOGFILE} ${DBDIR}"
rm -rf ${LOGFILE} ${DBDIR}
else
echo "failed - don't remove ${LOGFILE} ${DBDIR} - here's the logfile:"
cat ${LOGFILE}
fi
echo Server has terminated.

View File

@@ -1 +0,0 @@
run