mirror of https://gitee.com/bigwinds/arangodb
Remove obsolete docs about kickstarter / dispatcher / planner
This commit is contained in:
parent 0064e25bc2
commit 33e200da88
@ -1,224 +0,0 @@
!CHAPTER How to try it out

In this text we assume that you are working with a standard installation
of ArangoDB, version 2.0 or higher. This means that everything
is compiled for cluster operation, that *etcd* is compiled, and that
the executable is installed in the location mentioned in the
configuration file. The first step is to switch on the dispatcher
functionality in your configuration of *arangod*. To do this, change
the *cluster.disable-dispatcher-kickstarter* and
*cluster.disable-dispatcher-interface* options in *arangod.conf* both
to *false*.
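
For example, the relevant part of *arangod.conf* might then look like
this (a minimal sketch; the section and option names follow the standard
2.x configuration file format):

```
[cluster]
disable-dispatcher-kickstarter = false
disable-dispatcher-interface = false
```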

**Note**: Once you switch *cluster.disable-dispatcher-interface* to
*false*, the usual web front end is automatically replaced with the
web front end for cluster planning. Therefore you can simply point
your browser to *http://localhost:8529* (if you are running on the
standard port) and you are guided through the planning and launching of
a cluster with a graphical user interface. Alternatively, you can follow
the instructions below to do the same on the command line interface.

We will first plan and launch a cluster such that all your servers run
on the local machine.

Start up a regular ArangoDB, either in console mode or connect to it with
the ArangoDB shell *arangosh*. Then you can ask it to plan a cluster for
you:

```js
arangodb> var Planner = require("@arangodb/cluster").Planner;
arangodb> p = new Planner({numberOfDBservers:3, numberOfCoordinators:2});
[object Object]
```

If you are curious you can look at the plan of your cluster:

```js
arangodb> p.getPlan();
```

This will show you a huge JSON document. More interestingly, some further
components tell you more about the layout of your cluster:

```js
arangodb> p.DBservers;
[
  {
    "id" : "Pavel",
    "dispatcher" : "me",
    "port" : 8629
  },
  {
    "id" : "Perry",
    "dispatcher" : "me",
    "port" : 8630
  },
  {
    "id" : "Pancho",
    "dispatcher" : "me",
    "port" : 8631
  }
]

arangodb> p.coordinators;
[
  {
    "id" : "Claus",
    "dispatcher" : "me",
    "port" : 8530
  },
  {
    "id" : "Chantalle",
    "dispatcher" : "me",
    "port" : 8531
  }
]
```

This tells you the ports on which your ArangoDB processes will listen.
We will need port 8530 (or whatever appears on your machine) for the
coordinators below.

More interestingly, such a cluster plan document can be used to
start up the cluster conveniently using a *Kickstarter* object. Please
note that the *launch* method of the kickstarter shown below initializes
all data directories and log files, so if you have previously used the
same cluster plan you will lose all your data. Use the *relaunch* method
described below instead in that case.

```js
arangodb> var Kickstarter = require("@arangodb/cluster").Kickstarter;
arangodb> k = new Kickstarter(p.getPlan());
arangodb> k.launch();
```

That is all you have to do to fire up your first cluster. You will see some
output, which you can safely ignore (as long as no error happens).

From that point on, you can contact one of the coordinators (probably from
another shell window) and use the cluster as if it were a single ArangoDB
instance. Use the port number from above instead of 8530 if you got a
different one:

```js
$ arangosh --server.endpoint tcp://localhost:8530
[... some output omitted]
arangosh [_system]> db._databases();
[
  "_system"
]
```

This, for example, lists the cluster-wide databases.

Now, let us create a sharded collection. Note that we only have to specify
the number of shards to use in addition to the usual command.
The shards are automatically distributed among your DBservers:

```js
arangosh [_system]> example = db._create("example",{numberOfShards:6});
[ArangoCollection 1000001, "example" (type document, status loaded)]
arangosh [_system]> x = example.save({"name":"Hans", "age":44});
{
  "error" : false,
  "_id" : "example/1000008",
  "_rev" : "13460426",
  "_key" : "1000008"
}
arangosh [_system]> example.document(x._key);
{
  "age" : 44,
  "name" : "Hans",
  "_id" : "example/1000008",
  "_rev" : "13460426",
  "_key" : "1000008"
}
```

You can shut down your cluster by using the following Kickstarter
method (in the ArangoDB console):

```js
arangodb> k.shutdown();
```

If you want to start your cluster again without losing data you have
previously stored in it, you can use the *relaunch* method in exactly the
same way as you previously used the *launch* method:

```js
arangodb> k.relaunch();
```

**Note**: If you have destroyed the object *k*, for example because you
have shut down the ArangoDB instance in which you planned the cluster,
then you can reproduce it for a *relaunch* operation, provided you have
kept the cluster plan object returned by the *getPlan* method. If you
had for example done:

```js
arangodb> var plan = p.getPlan();
arangodb> require("fs").write("saved_plan.json",JSON.stringify(plan));
```

Then you can later do (in another session):

```js
arangodb> var plan = require("fs").read("saved_plan.json");
arangodb> plan = JSON.parse(plan);
arangodb> var Kickstarter = require("@arangodb/cluster").Kickstarter;
arangodb> var k = new Kickstarter(plan);
arangodb> k.relaunch();
```

to start the existing cluster anew.

You can check whether all your cluster processes are still
running by issuing:

```js
arangodb> k.isHealthy();
```

This will show you the status of all processes in the cluster. You
should see "RUNNING" there in all the relevant places.

Finally, to clean up the whole cluster (losing all the data stored in
it), do:

```js
arangodb> k.shutdown();
arangodb> k.cleanup();
```

We conclude this section with another example using two machines, which
will act as two dispatchers. We start from scratch using two machines,
running on the network addresses *tcp://192.168.173.78:8529* and
*tcp://192.168.173.13:6789*. Both need to have a regular ArangoDB
instance installed and running. Please make sure that both bind to
all network devices, so that they can talk to each other. Also enable
the dispatcher functionality on both of them, as described above.

```js
arangodb> var Planner = require("@arangodb/cluster").Planner;
arangodb> var p = new Planner({
             dispatchers: {"me":{"endpoint":"tcp://192.168.173.78:8529"},
                           "theother":{"endpoint":"tcp://192.168.173.13:6789"}},
             "numberOfCoordinators":2, "numberOfDBservers": 2});
```

With these commands, you create a cluster plan involving two machines.
The planner will put one DBserver and one Coordinator on each machine.
You can now launch this cluster exactly as explained earlier:

```js
arangodb> var Kickstarter = require("@arangodb/cluster").Kickstarter;
arangodb> k = new Kickstarter(p.getPlan());
arangodb> k.launch();
```

Likewise, the methods *shutdown*, *relaunch*, *isHealthy* and *cleanup*
work exactly as in the single server case.

See [the corresponding chapter of the reference manual](../ModulePlanner/README.md)
for detailed information about the *Planner* and *Kickstarter* classes.
@ -14,48 +14,3 @@ coordinators exactly as they would talk to a single ArangoDB instance

via the REST interface. The coordinators know about the configuration of
the cluster and automatically forward the incoming requests to the
right DBservers.

As a central highly available service to hold the cluster configuration
and to synchronize reconfiguration and fail-over operations we currently
use an external program called *etcd* (see its [GitHub
page](https://github.com/coreos/etcd)). It provides a hierarchical
key-value store with strong consistency and reliability promises.
This is called the "agency" and its processes are called "agents".

All this is admittedly a relatively complicated setup and involves a lot
of steps for the startup and shutdown of clusters. Therefore we have created
convenience functionality to plan, set up, start and shut down clusters.

The whole process works in two phases, first the "planning" phase and
then the "running" phase. In the planning phase it is decided which
processes with which roles run on which machine, which ports they use,
where the central agency resides and what ports its agents use. The
result of the planning phase is a "cluster plan", which is just a
relatively big data structure in JSON format. You can then use this
cluster plan to start up, shut down, check and clean up your cluster.
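
As a minimal sketch of these two phases in code (assuming a single local
dispatcher, as in the walkthrough earlier in this manual):

```js
var cluster = require("@arangodb/cluster");

// planning phase: decide processes, roles, machines and ports
var plan = new cluster.Planner({numberOfDBservers: 2,
                                numberOfCoordinators: 1}).getPlan();

// running phase: use the plan to start, check and stop the cluster
var k = new cluster.Kickstarter(plan);
k.launch();
```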

This latter phase uses so-called "dispatchers". A dispatcher is yet another
ArangoDB instance and you have to install exactly one such instance on
every machine that will take part in your cluster. No special
configuration whatsoever is needed and you can organize authentication
exactly as you would in a normal ArangoDB instance. You only have
to activate the dispatcher functionality in the configuration file
(see the options *cluster.disable-dispatcher-kickstarter* and
*cluster.disable-dispatcher-interface*, which are both initially
set to *true* in the standard setup we ship).

However, you can use any of these dispatchers to plan and start your
cluster. In the planning phase, you have to tell the planner about all
dispatchers in your cluster and it will automatically distribute your
agency, DBserver and coordinator processes amongst the dispatchers. The
result is the cluster plan, which you feed into the kickstarter. The
kickstarter is a program that actually uses the dispatchers to
manipulate the processes in your cluster. It runs on one of the
dispatchers, analyses the cluster plan, executes those actions
for which it is itself responsible, and forwards all other actions
to the corresponding dispatchers. This is possible because the cluster
plan incorporates the information about all dispatchers.

We also offer a graphical user interface to the cluster planner and
dispatcher.
@ -78,7 +78,7 @@ road map):

maintenance and scaling of a cluster. However, in version 2.0 the
cluster layout is static and no redistribution of data between the
DBservers or moving of shards between servers is possible.
-* At this stage the sharding of an [edge collection](../Glossary/README.md#edge-collection) is independent of
+* At this stage the sharding of an [edge collection](../Appendix/Glossary.md#edge-collection) is independent of
the sharding of the corresponding vertex collection in a graph.
For version 2.2 we plan to synchronize the two, to allow for more
efficient graph traversal functions in large, sharded graphs. We
@ -137,16 +137,6 @@ to implement efficiently:

to be the revision of the latest inserted document. Again,
maintaining a global revision number over all shards is very
difficult to do efficiently.
* The methods *db.<collection>.first()* and *db.<collection>.last()* are
  unsupported for collections with more than one shard. The reason for
  this is that temporal order in a highly parallelized environment
  like a cluster is difficult or even impossible to achieve
  efficiently. In a cluster it is entirely possible that two
  different coordinators add two different documents to two
  different shards *at the same time*. In such a situation it is not
  even well-defined which of the two documents is "later". The only
  way to overcome this fundamental problem would again be a central
  locking mechanism, which is not desirable for performance reasons.
* Contrary to the situation in a single instance, objects representing
  sharded collections are broken after their database is dropped.
  In a future version they might report that they are broken, but
@ -1,18 +0,0 @@

`new require("@arangodb/cluster").Kickstarter(plan)`

This constructor creates a kickstarter object. Its first
argument is a cluster plan as for example provided by the planner
(see the Cluster Planner Constructor and the general
explanations before this reference). The second argument is
optional and defaults to "me" if omitted; it is the ID of the
dispatcher this object should consider itself to be. If the plan
contains startup commands for the dispatcher with this ID, these
commands are executed immediately. Otherwise they are handed over
to another responsible dispatcher via a REST call.

The resulting object allows you to launch, shut down, and relaunch
the cluster described in the plan.
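
A short sketch of both constructor forms (using the dispatcher ID
"theother" from the two-machine example as an illustration):

```js
var Kickstarter = require("@arangodb/cluster").Kickstarter;

// default: this kickstarter considers itself to be dispatcher "me"
var k = new Kickstarter(plan);

// explicit second argument: act as the dispatcher with ID "theother"
var k2 = new Kickstarter(plan, "theother");
```
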
@ -1,161 +0,0 @@

*new require("@arangodb/cluster").Planner(userConfig)*

This constructor builds a cluster planner object. The one and only
argument is an object that can have the properties described below.
The planner can plan clusters on a single machine (basically for
testing purposes) and on multiple machines. The resulting "cluster plans"
can be used by the kickstarter to start up the processes comprising
the cluster, including the agency. To this end, there has to be one
dispatcher on every machine participating in the cluster. A dispatcher
is a simple instance of ArangoDB, compiled with the cluster extensions,
but not running in cluster mode. This is why the configuration option
*dispatchers* below is of central importance.

- *dispatchers*: an object with a property for each dispatcher;
  the property name is the ID of the dispatcher and the value
  should be an object with at least the property *endpoint*
  containing the endpoint of the corresponding dispatcher.
  Further optional properties are:

  - *avoidPorts*: an object in which all port numbers that should
    not be used are bound to *true*; the default is empty, that is,
    all ports can be used
  - *arangodExtraArgs*: a list of additional command line arguments
    that will be given to DBservers and coordinators started by this
    dispatcher; the default is an empty list. These arguments will be
    appended to those produced automatically, such that one can
    overwrite things with them.
  - *allowCoordinators*: a boolean value indicating whether or not
    coordinators should be started on this dispatcher; the default
    is *true*
  - *allowDBservers*: a boolean value indicating whether or not
    DBservers should be started on this dispatcher; the default is
    *true*
  - *allowAgents*: a boolean value indicating whether or not agents
    should be started on this dispatcher; the default is *true*
  - *username*: a string that contains the user name for
    authentication with this dispatcher
  - *passwd*: a string that contains the password for authentication
    with this dispatcher. If not both *username* and *passwd* are
    set, then no authentication is used between dispatchers. Note
    that this will not work if the dispatchers are configured with
    authentication.

  If *.dispatchers* is empty (no property), then an entry for the
  local arangod itself is automatically added. Note that if the
  only configured dispatcher has endpoint *tcp://localhost:*,
  all processes are started in a special "local" mode and are
  configured to bind their endpoints only to the localhost device.
  In all other cases both agents and *arangod* instances bind
  their endpoints to all available network devices.
- *numberOfAgents*: the number of agents in the agency;
  usually there is no reason to deviate from the default of 3. The
  planner distributes them amongst the dispatchers, if possible.
- *agencyPrefix*: a string that is used as prefix for all keys of
  configuration data stored in the agency.
- *numberOfDBservers*: the number of DBservers in the
  cluster. The planner distributes them evenly amongst the dispatchers.
- *startSecondaries*: a boolean flag indicating whether or not
  secondary servers are started. In this version, this flag is
  silently ignored, since we do not yet have secondary servers.
- *numberOfCoordinators*: the number of coordinators in the cluster;
  the planner distributes them evenly amongst the dispatchers.
- *DBserverIDs*: a list of DBserver IDs (strings). If the planner
  runs out of IDs, it creates its own ones using *DBserver*
  concatenated with a unique number.
- *coordinatorIDs*: a list of coordinator IDs (strings). If the planner
  runs out of IDs, it creates its own ones using *Coordinator*
  concatenated with a unique number.
- *dataPath*: a string describing the path under which
  the agents, the DBservers and the coordinators store their
  data directories. This can either be an absolute path (in which
  case all machines in the cluster must use the same path), or
  it can be a relative path. In the latter case it is relative
  to the directory that is configured in the dispatcher with the
  *cluster.data-path* option (command line or configuration file).
  The directories created will be called *data-PREFIX-ID* where
  *PREFIX* is replaced with the agency prefix (see above) and *ID*
  is the ID of the DBserver or coordinator.
- *logPath*: a string describing the path under which
  the DBservers and the coordinators store their log files. This can
  either be an absolute path (in which case all machines in the cluster
  must use the same path), or it can be a relative path. In the
  latter case it is relative to the directory that is configured
  in the dispatcher with the *cluster.log-path* option.
- *arangodPath*: a string describing the path to the
  actual executable *arangod* that will be started for the
  DBservers and coordinators. If this is an absolute path, it
  obviously has to be the same on all machines in the cluster,
  as described for *dataPath*. If it is an empty string, the
  dispatcher uses the executable that is configured with the
  *cluster.arangod-path* option, which is by default the same
  executable as the dispatcher uses.
- *agentPath*: a string describing the path to the
  actual executable that will be started for the agents in the
  agency. If this is an absolute path, it obviously has to be
  the same on all machines in the cluster, as described for
  *arangodPath*. If it is an empty string, the dispatcher
  uses its *cluster.agent-path* option.
- *agentExtPorts*: a list of port numbers to use for the external
  ports of the agents. When running out of numbers in this list,
  the planner increments the last one used by one for every port
  needed. Note that the planner checks availability of the ports
  during the planning phase by contacting the dispatchers on the
  different machines, and uses only ports that are free during
  the planning phase. Obviously, if those ports are taken
  before the actual startup, things can go wrong.
- *agentIntPorts*: a list of port numbers to use for the internal
  ports of the agents. The same comments as for *agentExtPorts*
  apply.
- *DBserverPorts*: a list of port numbers to use for the
  DBservers. The same comments as for *agentExtPorts* apply.
- *coordinatorPorts*: a list of port numbers to use for the
  coordinators. The same comments as for *agentExtPorts* apply.
- *useSSLonDBservers*: a boolean flag indicating whether or not
  to use SSL on all DBservers in the cluster
- *useSSLonCoordinators*: a boolean flag indicating whether or not
  to use SSL on all coordinators in the cluster
- *valgrind*: a string containing the path of the valgrind binary,
  if the cluster components should be run under it
- *valgrindopts*: command line options passed to the valgrind process
- *valgrindXmlFileBase*: pattern for log files
- *valgrindTestname*: name of the test to add to the log files
- *valgrindHosts*: the host classes that should run under valgrind
  (Coordinator / DBServer)
- *extremeVerbosity*: if set to *true*, there will be more test
  run output, especially for cluster tests.

All these values have default values. Here is the current set of
default values:

```js
{
  "agencyPrefix" : "arango",
  "numberOfAgents" : 1,
  "numberOfDBservers" : 2,
  "startSecondaries" : false,
  "numberOfCoordinators" : 1,
  "DBserverIDs" : ["Pavel", "Perry", "Pancho", "Paul", "Pierre",
                   "Pit", "Pia", "Pablo" ],
  "coordinatorIDs" : ["Claus", "Chantalle", "Claire", "Claudia",
                      "Claas", "Clemens", "Chris" ],
  "dataPath" : "",      // means configured in dispatcher
  "logPath" : "",       // means configured in dispatcher
  "arangodPath" : "",   // means configured as dispatcher
  "agentPath" : "",     // means configured in dispatcher
  "agentExtPorts" : [4001],
  "agentIntPorts" : [7001],
  "DBserverPorts" : [8629],
  "coordinatorPorts" : [8530],
  "dispatchers" : {"me": {"endpoint": "tcp://localhost:"}},
                        // this means only we as a local instance
  "useSSLonDBservers" : false,
  "useSSLonCoordinators" : false
};
```
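
For illustration, a planner configuration that overrides several of these
defaults might look like the following sketch (the endpoints and the
dispatcher ID "worker" are made-up values):

```js
var Planner = require("@arangodb/cluster").Planner;
var p = new Planner({
  "numberOfDBservers" : 3,
  "numberOfCoordinators" : 2,
  "dispatchers" : {
    "me"     : { "endpoint" : "tcp://192.168.0.10:8529" },
    "worker" : { "endpoint" : "tcp://192.168.0.11:8529",
                 "allowCoordinators" : false,        // DBservers and agents only
                 "avoidPorts" : { "8529" : true } }  // keep this port free
  }
});
```
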
@ -1,11 +0,0 @@

`Kickstarter.cleanup()`

This cleans up all the data and logs of a previously shut down cluster.
To this end, other dispatchers are contacted as necessary.
[Use shutdown](../ModulePlanner/README.md#shutdown) first and
use with caution, since potentially a lot of data is erased with
this call!
@ -1,10 +0,0 @@

`Kickstarter.isHealthy()`

This checks that all processes belonging to a running cluster are
healthy. To this end, other dispatchers are contacted as necessary.
At this stage it is only checked that the processes are still up and
running.
@ -1,24 +0,0 @@

`Kickstarter.launch()`

This starts up a cluster as described in the plan which was given to
the constructor. To this end, other dispatchers are contacted as
necessary. All startup commands for the local dispatcher are
executed immediately.

The result is an object that contains information about the started
processes; this object is also stored in the Kickstarter object
itself. We do not go into details here about the data structure,
but the most important information is the process IDs of the
started processes. The corresponding
[shutdown method](../ModulePlanner/README.md#shutdown) needs this
information to shut down all processes.

Note that all data in the DBservers and all log files and all agency
information in the cluster is deleted by this call. This is because
it is intended to set up a cluster for the first time. See
the [relaunch method](../ModulePlanner/README.md#relaunch)
for restarting a cluster without data loss.
@ -1,23 +0,0 @@

`Kickstarter.relaunch()`

This starts up a cluster as described in the plan which was given to
the constructor. To this end, other dispatchers are contacted as
necessary. All startup commands for the local dispatcher are
executed immediately.

The result is an object that contains information about the started
processes; this object is also stored in the Kickstarter object
itself. We do not go into details here about the data structure,
but the most important information is the process IDs of the
started processes. The corresponding
[shutdown method](../ModulePlanner/README.md#shutdown) needs this
information to shut down all processes.

Note that this method requires that all data in the DBservers and the
agency information in the cluster are already set up properly. See
the [launch method](../ModulePlanner/README.md#launch) for
starting a cluster for the first time.
@ -1,10 +0,0 @@

`Kickstarter.shutdown()`

This shuts down a cluster as described in the plan which was given to
the constructor. To this end, other dispatchers are contacted as
necessary. All processes in the cluster are gracefully shut down
in the right order.
@ -1,28 +0,0 @@

`Kickstarter.upgrade(username, passwd)`

This performs an upgrade procedure on a cluster as described in
the plan which was given to the constructor. To this end, other
dispatchers are contacted as necessary. All commands for the local
dispatcher are executed immediately. The basic approach for the
upgrade is as follows: the agency is started first (exactly as
in relaunch), no configuration is sent there (exactly as in the
relaunch action), all servers are first started with the option
"--upgrade" and then normally. In the end, the upgrade-database.js
script is run on one of the coordinators, as in the launch action.

The result is an object that contains information about the started
processes; this object is also stored in the Kickstarter object
itself. We do not go into details here about the data structure,
but the most important information is the process IDs of the
started processes. The corresponding
[shutdown method](../ModulePlanner/README.md#shutdown) needs
this information to shut down all processes.

Note that this method requires that all data in the DBservers and the
agency information in the cluster are already set up properly. See
the [launch method](../ModulePlanner/README.md#launch) for
starting a cluster for the first time.
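
A usage sketch (the credentials are placeholders for the cluster's
admin user name and password):

```js
arangodb> k.upgrade("root", "");
```
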
@ -1,8 +0,0 @@

`Planner.getPlan()`

Returns the cluster plan as a JavaScript object. The result of this
method can be given to the constructor of a Kickstarter.
@ -1,57 +0,0 @@

@startDocuBlock JSF_cluster_dispatcher_POST
@brief exposes the dispatcher functionality to start up, shut down,
relaunch, upgrade or clean up a cluster according to a cluster plan
as for example provided by the cluster planner.

@RESTHEADER{POST /_admin/clusterDispatch, execute startup commands}

@RESTQUERYPARAMETERS

@RESTBODYPARAM{clusterPlan,object,required,}
is a cluster plan (see JSF_cluster_planner_POST).

@RESTBODYPARAM{myname,string,required,string}
is the ID of this dispatcher, which is used to decide
which commands are executed locally and which are forwarded
to other dispatchers.

@RESTBODYPARAM{action,string,required,string}
can be one of the following:
- "launch": the cluster is launched for the first time, all
  data directories and log files are cleaned and created
- "shutdown": the cluster is shut down, the additional property
  *runInfo* (see below) must be bound as well
- "relaunch": the cluster is launched again, all data directories
  and log files are untouched and need to be there already
- "cleanup": use this after a shutdown to remove all data in the
  data directories and all log files, use with caution
- "isHealthy": checks whether or not the processes involved
  in the cluster are running. The additional property
  *runInfo* (see below) must be bound as well
- "upgrade": performs an upgrade of a cluster, to this end,
  the agency is started, and then every server is once started
  with the "--upgrade" option, and then normally. Finally,
  the script "version-check.js" is run on one of the coordinators
  for the cluster.

@RESTBODYPARAM{runInfo,object,optional,}
this is needed for the "shutdown" and "isHealthy" actions
only and should be the structure that "launch", "relaunch" or
"upgrade" returned. It contains runtime information like process
IDs.

@RESTDESCRIPTION
The body must be an object with the properties described above.
This call executes the plan by either doing the work itself
or by delegating to other dispatchers.

@RESTRETURNCODES

@RESTRETURNCODE{200} is returned when everything went well.

@RESTRETURNCODE{400} the posted body was not valid JSON, or something
went wrong with the startup.
@endDocuBlock
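
A hypothetical invocation of this endpoint with curl (normally the
kickstarter issues this call for you; *plan-request.json* is an assumed
file containing the body described above):

```
# plan-request.json: {"clusterPlan": <plan from /_admin/clusterPlanner>,
#                     "myname": "me", "action": "launch"}
curl -X POST --data-binary @plan-request.json \
     http://localhost:8529/_admin/clusterDispatch
```
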
@ -1,20 +0,0 @@

@startDocuBlock JSF_cluster_planner_POST
@brief exposes the cluster planning functionality

@RESTHEADER{POST /_admin/clusterPlanner, Produce cluster startup plan}

@RESTALLBODYPARAM{clusterPlan,object,required}
A cluster plan object

@RESTDESCRIPTION
Given a description of a cluster, this plans the details
of a cluster and returns a JSON description of a plan to start up this
cluster.

@RESTRETURNCODES

@RESTRETURNCODE{200} is returned when everything went well.

@RESTRETURNCODE{400} the posted body was not valid JSON.
@endDocuBlock
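
A hypothetical curl call against this endpoint, assuming the body is the
planner's *userConfig* object as described in the Planner reference (host
and configuration values are placeholders); the returned plan can then be
posted to */_admin/clusterDispatch*:

```
curl -X POST --data-binary '{"numberOfDBservers": 2, "numberOfCoordinators": 1}' \
     http://localhost:8529/_admin/clusterPlanner
```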