Merge branch 'devel' of ssh://github.com/ArangoDB/ArangoDB into devel
Commit c0d4118888

@@ -20,7 +20,7 @@ FOR u IN users
  RETURN u
```

It is allowed to specify multiple *FILTER* statements in a query, and even in
It is allowed to specify multiple *FILTER* statements in a query, even in
the same block. If multiple *FILTER* statements are used, their results will be
combined with a logical AND, meaning all filter conditions must be true to
include an element.

@@ -38,3 +38,66 @@ value less than *39* (including *null* ones). All other elements from *users*
will be skipped and not be included in the result produced by *RETURN*.
You may refer to the chapter [Accessing Data from Collections](../Fundamentals/DocumentData.md) for
a description of the impact of non-existent or null attributes.

Note that the positions of *FILTER* statements can influence the result of a query.
There are 16 active users in the test data for instance:

```js
FOR u IN users
  FILTER u.active == true
  RETURN u
```

We can limit the result set to 5 users at most:

```js
FOR u IN users
  FILTER u.active == true
  LIMIT 5
  RETURN u
```

This may return the user documents of Jim, Diego, Anthony, Michael and Chloe for
instance. Which ones are returned is undefined, since there is no *SORT* statement
to ensure a particular order. If we add a second *FILTER* statement to only return
women...

```js
FOR u IN users
  FILTER u.active == true
  LIMIT 5
  FILTER u.gender == "f"
  RETURN u
```

... it might just return the Chloe document, because the *LIMIT* is applied before
the second *FILTER*. No more than 5 documents arrive at the second *FILTER* block,
and not all of them fulfill the gender criterion, even though there are more than
5 active female users in the collection. A more deterministic result can be achieved
by adding a *SORT* block:

```js
FOR u IN users
  FILTER u.active == true
  SORT u.age ASC
  LIMIT 5
  FILTER u.gender == "f"
  RETURN u
```

This will return the users Mariah and Mary. If sorted by age in *DESC* order,
then the Sophia, Emma and Madison documents are returned. A *FILTER* after a
*LIMIT* is not very common, however, and you probably want a query like this instead:

```js
FOR u IN users
  FILTER u.active == true AND u.gender == "f"
  SORT u.age ASC
  LIMIT 5
  RETURN u
```

Because the placement of *FILTER* blocks is significant, this single keyword can
assume the roles of two SQL keywords, *WHERE* as well as *HAVING*. AQL's *FILTER*
thus works with *COLLECT* aggregates the same as with any other intermediate
result, document attribute, etc.
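
For example, a *FILTER* placed after a *COLLECT* restricts the aggregated groups,
much like *HAVING* does in SQL. A minimal sketch using the *users* test data from
above (the grouping by age decade is illustrative only):

```js
FOR u IN users
  FILTER u.active == true                   // like WHERE: applied per document
  COLLECT ageGroup = FLOOR(u.age / 10) * 10
    WITH COUNT INTO numUsers
  FILTER numUsers > 2                       // like HAVING: applied per group
  RETURN { ageGroup: ageGroup, numUsers: numUsers }
```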

@@ -52,7 +52,9 @@ The *LIKE* operator checks whether its left operand matches the pattern specified
in its right operand. The pattern can consist of regular characters and wildcards.
The supported wildcards are *_* to match a single arbitrary character, and *%* to
match any number of arbitrary characters. Literal *%* and *_* need to be escaped
with a backslash.
with a backslash. Backslashes need to be escaped themselves, which effectively
means that two reverse solidus characters need to precede a literal percent sign
or underscore.

```
"abc" LIKE "a%" // true

@@ -904,6 +904,30 @@ in 3.0 because the changes in server behavior controlled by this option were cha
even before ArangoDB 2.0. This should have left enough time for client applications
to adapt to the new behavior, making the option superfluous in 3.0.

!SUBSECTION Thread options

The options `--server.threads` and `--scheduler.threads` now have a default value of
`0`. When `--server.threads` is set to `0` on startup, a suitable number of
threads will be determined by ArangoDB by asking the OS for the number of available
CPUs and using that as a baseline. If the number of CPUs is lower than 4, ArangoDB
will still start 4 dispatcher threads. When `--scheduler.threads` is set to `0`,
then ArangoDB will automatically determine the number of scheduler threads to start.
This will normally create 2 scheduler threads.

If the exact number of threads needs to be set by the admin, then it is still possible
to set `--server.threads` and `--scheduler.threads` to non-zero values. ArangoDB will
use these values and start that many threads (note that some threads may be created
lazily, so they may not be present directly after startup).

The number of V8 JavaScript contexts to be created (`--javascript.v8-contexts`) now
has a default value of `0` too, meaning that ArangoDB will create as many V8 contexts
as there will be dispatcher threads (controlled by the `--server.threads` option).
Setting this option to a non-zero value will create exactly as many V8 contexts as
specified.

Setting these options explicitly to non-zero values may be beneficial in environments
that have few resources (processing time, maximum thread count, available memory).
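
For example, administrators who prefer fixed sizes over the automatic defaults can
still pass explicit values on startup (the numbers below are purely illustrative):

```
arangod --server.threads 8 --scheduler.threads 2 --javascript.v8-contexts 8
```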

!SECTION Authentication

The default value for `--server.authentication` is now `true` in the configuration

@@ -939,6 +963,9 @@ For all client tools, the option `--server.disable-authentication` was renamed to
`--server.authentication`. Note that the meaning of the option `--server.authentication`
is the opposite of the previous `--server.disable-authentication`.
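
A minimal before/after sketch of the rename (the flag values are illustrative; note
the inverted boolean sense):

```
# ArangoDB 2.x client tool invocation
arangosh --server.disable-authentication true

# equivalent ArangoDB 3.0 invocation
arangosh --server.authentication false
```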

The option `--server.ssl-protocol` was renamed to `--ssl.protocol`. The meaning of
the option is unchanged.

The command-line option `--quiet` was removed from all client tools except arangosh
because it had no effect in them.

@@ -112,6 +112,8 @@ Node::Node(Node&& other) :

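// Copy constructor: copies the node name, children and value, but resets _parent and _store to nullptr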
Node::Node(Node const& other) :
    _node_name(other._node_name),
    _parent(nullptr),
    _store(nullptr),
    _children(other._children),
    _value(other._value) {}

@@ -30,7 +30,6 @@
#include "VocBase/server.h"

#include <thread>

using namespace arangodb;

using namespace arangodb::consensus;

@@ -64,7 +63,11 @@ inline arangodb::consensus::write_ret_t transact (

}

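// Lifecycle states of a supervision job, as reported by Job::status()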
enum JOB_STATUS {TODO, PENDING, FINISHED, FAILED, NOTFOUND};

static std::string const pendingPrefix = "/Target/Pending/";
static std::string const finishedPrefix = "/Target/Finished/";
static std::string const failedPrefix = "/Target/Failed/";
static std::string const planColPrefix = "/Plan/Collections/";
static std::string const curColPrefix = "/Current/Collections/";
static std::string const toDoPrefix = "/Target/ToDo/";

@@ -99,13 +102,58 @@ bool Job::exists() const {
}

struct MoveShard : public Job {
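// Move this job's entry from Target/Pending to Target/Finished (on success) or Target/Failed, recording the finish time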
bool Job::finish(bool success = true) const {

  Builder pending, finished;

  // Get todo entry
  pending.openArray();
  _snapshot(pendingPrefix + _jobId).toBuilder(pending);
  pending.close();

  // Prepare pending entry, block toServer
  finished.openArray();

  // --- Add finished
  finished.openObject();
  finished.add(_agencyPrefix + (success ? finishedPrefix : failedPrefix)
               + _jobId, VPackValue(VPackValueType::Object));
  finished.add("timeFinished",
               VPackValue(printTimestamp(std::chrono::system_clock::now())));
  for (auto const& obj : VPackObjectIterator(pending.slice()[0])) {
    finished.add(obj.key.copyString(), obj.value);
  }
  finished.close();

  // --- Delete pending
  finished.add(_agencyPrefix + pendingPrefix + _jobId,
               VPackValue(VPackValueType::Object));
  finished.add("op", VPackValue("delete"));
  finished.close();

  // --- Need precond?
  finished.close(); finished.close();

  write_ret_t res = transact(_agent, finished);

  if (res.accepted && res.indices.size()==1 && res.indices[0]) {
    return true;
  }

  return false;

}


struct MoveShard : public Job {

  MoveShard(Node const& snapshot, Agent* agent, std::string const& jobId,
            std::string const& creator, std::string const& agencyPrefix,
            std::string const& database, std::string const& collection,
            std::string const& shard, std::string const& from,
            std::string const& to) :
            std::string const& database = std::string(),
            std::string const& collection = std::string(),
            std::string const& shard = std::string(),
            std::string const& from = std::string(),
            std::string const& to = std::string()) :
    Job(snapshot, agent, jobId, creator, agencyPrefix), _database(database),
    _collection(collection), _shard(shard), _from(from), _to(to) {

@@ -244,6 +292,9 @@ struct MoveShard : public Job {
    write_ret_t res = transact(_agent, pending);

    if (res.accepted && res.indices.size()==1 && res.indices[0]) {

      LOG_TOPIC(INFO, Logger::AGENCY) << "Pending: Move shard " + _shard
        + " from " + _from + " to " + _to;
      return true;
    }

@@ -255,75 +306,48 @@ struct MoveShard : public Job {

  virtual unsigned status () const {

    Node const& target = _snapshot("/Target");

    if (target.exists(std::string("/ToDo/") + _jobId).size() == 2) {

      return 0;
      return TODO;

    } else if (target.exists(std::string("/Pending/") + _jobId).size() == 2) {

      std::string planPath =
        planColPrefix + _database + "/" + _collection + "/shards/" + _shard;
      std::string curPath =
        curColPrefix + _database + "/" + _collection + "/" + _shard + "/servers";

      Node const& job = _snapshot(pendingPrefix + _jobId);
      std::string database = job("database").toJson(),
        collection = job("collection").toJson(),
        shard = job("shard").toJson(),
        planPath = planColPrefix + database + "/" + collection + "/shards/"
          + shard,
        curPath = curColPrefix + database + "/" + collection + "/" + shard
          + "/servers";

      Node const& planned = _snapshot(planPath);
      Node const& current = _snapshot(curPath);

      if (planned.slice()[0] == current.slice()[0]) {

        Builder pending, finished;

        // Get todo entry
        pending.openArray();
        _snapshot(pendingPrefix + _jobId).toBuilder(pending);
        pending.close();

        // Prepare pending entry, block toServer
        finished.openArray();

        // --- Add finished
        finished.openObject();
        finished.add(_agencyPrefix + pendingPrefix + _jobId,
                     VPackValue(VPackValueType::Object));
        finished.add("timeStarted",
                     VPackValue(printTimestamp(std::chrono::system_clock::now())));
        for (auto const& obj : VPackObjectIterator(pending.slice()[0])) {
          finished.add(obj.key.copyString(), obj.value);
        }
        finished.close();

        // --- Delete pending
        pending.add(_agencyPrefix + pendingPrefix + _jobId,
                    VPackValue(VPackValueType::Object));
        pending.add("op", VPackValue("delete"));
        pending.close();

        finished.close(); finished.close();

        write_ret_t res = transact(_agent, finished);

        if (res.accepted && res.indices.size()==1 && res.indices[0]) {
          return 2;
        if (finish()) {
          return FINISHED;
        }

      }

      return 1;
      return PENDING;

    } else if (target.exists(std::string("/Finished/") + _jobId).size() == 2) {

      return 2;
      return FINISHED;

    } else if (target.exists(std::string("/Failed/") + _jobId).size() == 2) {

      return 3;
      return FAILED;

    }

    return 4;
    return NOTFOUND;

  }

@@ -345,7 +369,7 @@ struct FailedServer : public Job {
    Job(snapshot, agent, jobId, creator, agencyPrefix), _failed(failed) {

    if (exists()) {
      if (!status()) {
      if (status() == TODO) {
        start();
      }
    } else {

@@ -359,9 +383,6 @@ struct FailedServer : public Job {

  virtual bool start() const {

    LOG_TOPIC(INFO, Logger::AGENCY) <<
      "Pending: DB Server " + _failed + " failed.";

    // Copy todo to pending
    Builder todo, pending;

@@ -408,16 +429,19 @@ struct FailedServer : public Job {

    pending.close(); pending.close();

    size_t sub = 0;

    // Transact to agency
    write_ret_t res = transact(_agent, pending);

    if (res.accepted && res.indices.size()==1 && res.indices[0]) {

      LOG_TOPIC(INFO, Logger::AGENCY) <<
        "Pending: DB Server " + _failed + " failed.";

      Node::Children const& databases =
        _snapshot("/Plan/Collections").children();

      size_t sub = 0;

      for (auto const& database : databases) {
        for (auto const& collptr : database.second->children()) {
          Node const& collection = *(collptr.second);

@@ -476,8 +500,7 @@ struct FailedServer : public Job {
      return true;
    }

    LOG_TOPIC(INFO, Logger::AGENCY)
      << "Failed to insert job " + _jobId;
    LOG_TOPIC(INFO, Logger::AGENCY) << "Failed to insert job " + _jobId;
    return false;

  }

@@ -485,21 +508,46 @@ struct FailedServer : public Job {
  virtual unsigned status () const {

    Node const& target = _snapshot("/Target");
    unsigned res = 4;

    if (target.exists(std::string("/ToDo/") + _jobId).size() == 2) {
      res = 0;
      start(); // try to start job

      return TODO;

    } else if (target.exists(std::string("/Pending/") + _jobId).size() == 2) {
      res = 1;
      // Any sub jobs pending?
      // If not, any subjobs failed? Move to failed
      // Else move to Finished
    } else {
      // Remove any blocks on

      Node::Children const& subJobs = _snapshot(pendingPrefix).children();

      size_t found = 0;

      for (auto const& subJob : subJobs) {
        if (!subJob.first.compare(0, _jobId.size()+1, _jobId + "-")) {
          found++;
          Node const& sj = *(subJob.second);
          std::string subJobId = sj("jobId").slice().copyString();
          std::string creator = sj("creator").slice().copyString();
          MoveShard(_snapshot, _agent, subJobId, creator, _agencyPrefix);
        }
      }

      if (!found) {
        if (finish()) {
          return FINISHED;
        }
      }

      return PENDING;

    } else if (target.exists(std::string("/Finished/") + _jobId).size() == 2) {

      return FINISHED;

    } else if (target.exists(std::string("/Failed/") + _jobId).size() == 2) {

      return FAILED;

    }

    return res;
    return NOTFOUND;

  }

@@ -508,6 +556,148 @@ struct FailedServer : public Job {
};

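// Job that drains a DB server: for each shard currently led by the given server, a MoveShard sub-job is created to hand the shard over to another replica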
struct CleanOutServer : public Job {

  CleanOutServer (Node const& snapshot, Agent* agent, std::string const& jobId,
                  std::string const& creator, std::string const& prefix,
                  std::string const& server) :
    Job(snapshot, agent, jobId, creator, prefix), _server(server) {
  }

  virtual ~CleanOutServer () {}

  virtual unsigned status () const {
    return 0;
  }

  virtual bool create () const {

    LOG_TOPIC(INFO, Logger::AGENCY) << "Todo: Clean out server " + _server;

    std::string path = _agencyPrefix + toDoPrefix + _jobId;

    Builder todo;
    todo.openArray();
    todo.openObject();
    todo.add(path, VPackValue(VPackValueType::Object));
    todo.add("type", VPackValue("cleanOutServer"));
    todo.add("server", VPackValue(_server));
    todo.add("jobId", VPackValue(_jobId));
    todo.add("creator", VPackValue(_creator));
    todo.add("timeCreated",
             VPackValue(printTimestamp(std::chrono::system_clock::now())));
    todo.close(); todo.close(); todo.close();

    write_ret_t res = transact(_agent, todo);

    if (res.accepted && res.indices.size()==1 && res.indices[0]) {
      return true;
    }

    LOG_TOPIC(INFO, Logger::AGENCY) << "Failed to insert job " + _jobId;
    return false;

  };

  virtual bool start() const {

    // Copy todo to pending
    Builder todo, pending;

    // Get todo entry
    todo.openArray();
    _snapshot(toDoPrefix + _jobId).toBuilder(todo);
    todo.close();

    // Prepare pending entry, block toServer
    pending.openArray();

    // --- Add pending
    pending.openObject();
    pending.add(_agencyPrefix + pendingPrefix + _jobId,
                VPackValue(VPackValueType::Object));
    pending.add("timeStarted",
                VPackValue(printTimestamp(std::chrono::system_clock::now())));
    for (auto const& obj : VPackObjectIterator(todo.slice()[0])) {
      pending.add(obj.key.copyString(), obj.value);
    }
    pending.close();

    // --- Delete todo
    pending.add(_agencyPrefix + toDoPrefix + _jobId,
                VPackValue(VPackValueType::Object));
    pending.add("op", VPackValue("delete"));
    pending.close();

    // --- Block toServer
    pending.add(_agencyPrefix + blockedServersPrefix + _server,
                VPackValue(VPackValueType::Object));
    pending.add("jobId", VPackValue(_jobId));
    pending.close();

    pending.close();

    // Preconditions
    // --- Check that toServer not blocked
    pending.openObject();
    pending.add(_agencyPrefix + blockedServersPrefix + _server,
                VPackValue(VPackValueType::Object));
    pending.add("oldEmpty", VPackValue(true));
    pending.close();

    pending.close(); pending.close();

    // Transact to agency
    write_ret_t res = transact(_agent, pending);

    if (res.accepted && res.indices.size()==1 && res.indices[0]) {

      LOG_TOPIC(INFO, Logger::AGENCY) << "Pending: Clean out server " + _server;

      Node::Children const& databases =
        _snapshot("/Plan/Collections").children();

      size_t sub = 0;

      for (auto const& database : databases) {
        for (auto const& collptr : database.second->children()) {
          Node const& collection = *(collptr.second);
          Node const& replicationFactor = collection("replicationFactor");
          if (replicationFactor.slice().getUInt() > 1) {
            for (auto const& shard : collection("shards").children()) {
              VPackArrayIterator dbsit(shard.second->slice());

              // Only proceed if leader and create job
              if ((*dbsit.begin()).copyString() != _server) {
                continue;
              }

              MoveShard(
                _snapshot, _agent, _jobId + "-" + std::to_string(sub++), _jobId,
                _agencyPrefix, database.first, collptr.first, shard.first,
                _server, shard.second->slice()[1].copyString());

            }
          }
        }
      }

      return true;
    }

    LOG_TOPIC(INFO, Logger::AGENCY) <<
      "Precondition failed for starting job " + _jobId;

    return false;

  };

  std::string const& _server;

};


std::string Supervision::_agencyPrefix = "/arango";

Supervision::Supervision()

@@ -50,8 +50,9 @@ struct Job {
  Job(Node const&, Agent*, std::string const& jobId,
      std::string const& creator, std::string const& agencyPrefix);
  virtual ~Job();
  virtual unsigned status () const = 0;
  virtual bool exists () const;
  virtual bool finish (bool) const;
  virtual unsigned status () const = 0;
  virtual bool create () const = 0;
  virtual bool start() const = 0;
  Node const& _snapshot;

@@ -54,6 +54,8 @@ CheckVersionFeature::CheckVersionFeature(
void CheckVersionFeature::collectOptions(
    std::shared_ptr<ProgramOptions> options) {
  options->addSection("database", "Configure the database");

  options->addOldOption("check-version", "database.check-version");

  options->addHiddenOption("--database.check-version",
                           "checks the versions of the database and exit",

@@ -69,6 +69,8 @@ DatabaseFeature::DatabaseFeature(ApplicationServer* server)

void DatabaseFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
  options->addSection("database", "Configure the database");

  options->addOldOption("server.disable-replication-applier", "database.replication-applier");

  options->addOption("--database.directory", "path to the database directory",
                     new StringParameter(&_directory));

@@ -51,6 +51,9 @@ EndpointFeature::EndpointFeature(
}

void EndpointFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
  options->addOldOption("server.backlog-size", "tcp.backlog-size");
  options->addOldOption("server.reuse-address", "tcp.reuse-address");

  options->addSection("server", "Server features");

  options->addOption("--server.endpoint",

@@ -46,6 +46,10 @@ QueryRegistryFeature::QueryRegistryFeature(ApplicationServer* server)
void QueryRegistryFeature::collectOptions(
    std::shared_ptr<ProgramOptions> options) {
  options->addSection("query", "Configure queries");

  options->addOldOption("database.query-cache-mode", "query.cache-mode");
  options->addOldOption("database.query-cache-max-results", "query.cache-entries");
  options->addOldOption("database.disable-query-tracking", "query.tracking");

  options->addOption("--query.tracking", "whether to track queries",
                     new BooleanParameter(&_queryTracking));

@@ -108,6 +108,14 @@ void RestServerFeature::collectOptions(
    std::shared_ptr<ProgramOptions> options) {
  options->addSection("server", "Server features");

  options->addOldOption("server.disable-authentication", "server.authentication");
  options->addOldOption("server.disable-authentication-unix-sockets", "server.authentication-unix-sockets");
  options->addOldOption("server.authenticate-system-only", "server.authentication-system-only");
  options->addOldOption("server.allow-method-override", "http.allow-method-override");
  options->addOldOption("server.hide-product-header", "http.hide-product-header");
  options->addOldOption("server.keep-alive-timeout", "http.keep-alive-timeout");
  options->addOldOption("server.default-api-compatibility", "");

  options->addOption("--server.authentication",
                     "enable or disable authentication for ALL client requests",
                     new BooleanParameter(&_authentication));

@@ -58,6 +58,8 @@ UpgradeFeature::UpgradeFeature(

void UpgradeFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
  options->addSection("database", "Configure the database");

  options->addOldOption("upgrade", "--database.auto-upgrade");

  options->addOption("--database.auto-upgrade",
                     "perform a database upgrade if necessary",

@@ -39,6 +39,8 @@ StatisticsFeature::StatisticsFeature(application_features::ApplicationServer* se
}

void StatisticsFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
  options->addOldOption("server.disable-statistics", "server.statistics");

  options->addSection("server", "Server features");

  options->addHiddenOption(

@@ -40,6 +40,9 @@ FoxxQueuesFeature::FoxxQueuesFeature(

void FoxxQueuesFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
  options->addSection("foxx", "Configure Foxx");

  options->addOldOption("server.foxx-queues", "foxx.queues");
  options->addOldOption("server.foxx-queues-poll-interval", "foxx.queues-poll-interval");

  options->addOption(
      "--foxx.queues",

@@ -41,6 +41,8 @@ TempFeature::TempFeature(application_features::ApplicationServer* server,
}

void TempFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
  options->addOldOption("temp-path", "temp.path");

  options->addSection("temp", "Configure the temporary files");

  options->addOption("--temp.path", "path for temporary files",

@@ -64,6 +64,12 @@ LoggerFeature::LoggerFeature(application_features::ApplicationServer* server,
}

void LoggerFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
  options->addOldOption("log.tty", "log.foreground-tty");
  options->addOldOption("log.content-filter", "");
  options->addOldOption("log.source-filter", "");
  options->addOldOption("log.application", "");
  options->addOldOption("log.facility", "");

  options->addHiddenOption("--log", "the global or topic-specific log level",
                           new VectorParameter<StringParameter>(&_levels));

@@ -141,6 +141,16 @@ class ProgramOptions {
  // set context for error reporting
  void setContext(std::string const& value) { _context = value; }

  // sets the old options map
  void setOldOptions(std::unordered_map<std::string, std::string> const& old) {
    _oldOptions = old;
  }

  // sets a single old option and its replacement name
  void addOldOption(std::string const& old, std::string const& replacement) {
    _oldOptions[old] = replacement;
  }

  // adds a section to the options
  void addSection(Section const& section) {
    checkIfSealed();

@@ -208,11 +218,21 @@ class ProgramOptions {

  // prints the names for all section help options
  void printSectionsHelp() const {
    char const* colorStart;
    char const* colorEnd;

    if (isatty(STDOUT_FILENO)) {
      colorStart = TRI_SHELL_COLOR_BRIGHT;
      colorEnd = TRI_SHELL_COLOR_RESET;
    } else {
      colorStart = colorEnd = "";
    }

    // print names of sections
    std::cout << _more;
    for (auto const& it : _sections) {
      if (!it.second.name.empty() && it.second.hasOptions()) {
        std::cout << " --help-" << it.second.name;
        std::cout << " " << colorStart << "--help-" << it.second.name << colorEnd;
      }
    }
    std::cout << std::endl;

@@ -364,7 +384,7 @@ class ProgramOptions {
    return (*it2).second.parameter->requiresValue();
  }

  // returns a pointer to an option, specified by option name
  // returns a pointer to an option value, specified by option name
  // returns a nullptr if the option is unknown
  template <typename T>
  T* get(std::string const& name) {

@@ -385,20 +405,83 @@ class ProgramOptions {

    return dynamic_cast<T>(option.parameter.get());
  }

  // returns an option description
  std::string getDescription(std::string const& name) {
    auto parts = Option::splitName(name);
    auto it = _sections.find(parts.first);

    if (it == _sections.end()) {
      return "";
    }

    auto it2 = (*it).second.options.find(parts.second);

    if (it2 == (*it).second.options.end()) {
      return "";
    }

    return (*it2).second.description;
  }

  // handle an unknown option
  bool unknownOption(std::string const& name) {
    fail("unknown option '--" + name + "'");
    char const* colorStart;
    char const* colorEnd;

    if (isatty(STDERR_FILENO)) {
      colorStart = TRI_SHELL_COLOR_BRIGHT;
      colorEnd = TRI_SHELL_COLOR_RESET;
    } else {
      colorStart = colorEnd = "";
    }

    fail(std::string("unknown option '") + colorStart + "--" + name + colorEnd + "'");

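    // look up options with names similar to the unknown one, to be printed as
    // suggestions (the numeric arguments to similar() presumably bound the match
    // distance and the number of candidates)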
    auto similarOptions = similar(name, 8, 4);
    if (!similarOptions.empty()) {
      std::cerr << "Did you mean one of these?" << std::endl;
      if (similarOptions.size() == 1) {
        std::cerr << "Did you mean this?" << std::endl;
      } else {
        std::cerr << "Did you mean one of these?" << std::endl;
      }
      // determine maximum width
      size_t maxWidth = 0;
      for (auto const& it : similarOptions) {
        std::cerr << " " << it << std::endl;
        maxWidth = (std::max)(maxWidth, it.size());
      }

      for (auto const& it : similarOptions) {
        std::cerr << " " << colorStart << Option::pad(it, maxWidth) << colorEnd
                  << " " << getDescription(it)
                  << std::endl;
      }
      std::cerr << std::endl;
    }
    std::cerr << "Use --help or --help-all to get an overview of available options"

    auto it = _oldOptions.find(name);
    if (it != _oldOptions.end()) {
      // a now removed or renamed option was specified...
      auto& now = (*it).second;
      if (now.empty()) {
        std::cerr << "Please note that the specified option '"
                  << colorStart << "--" << name << colorEnd
                  << "' has been removed in this ArangoDB version";
      } else {
        std::cerr << "Please note that the specified option '"
                  << colorStart << "--" << name << colorEnd
                  << "' has been renamed to '--" << colorStart
                  << now << colorEnd << "' in this ArangoDB version";
      }

      std::cerr << std::endl
                << "Please be sure to read the manual section about changed options"
                << std::endl << std::endl;
    }

    std::cerr << "Use " << colorStart << "--help" << colorEnd
              << " or " << colorStart << "--help-all" << colorEnd
              << " to get an overview of available options"
              << std::endl << std::endl;

    return false;

@@ -507,6 +590,9 @@ class ProgramOptions {
  // shorthands for options, translating from short options to long option names
  // e.g. "-c" to "--configuration"
  std::unordered_map<std::string, std::string> _shorthands;
  // map with old options and their new equivalents, used for printing more
  // meaningful error messages when an invalid (but once valid) option was used
  std::unordered_map<std::string, std::string> _oldOptions;
  // callback function for determining the terminal width
  TerminalWidthFuncType _terminalWidth;
  // callback function for determining the similarity between two option names

@@ -52,6 +52,13 @@ SslServerFeature::SslServerFeature(application_features::ApplicationServer* serv
}

void SslServerFeature::collectOptions(std::shared_ptr<ProgramOptions> options) {
  options->addOldOption("server.cafile", "ssl.cafile");
  options->addOldOption("server.keyfile", "ssl.keyfile");
  options->addOldOption("server.ssl-cache", "ssl.session-cache");
  options->addOldOption("server.ssl-cipher-list", "ssl.cipher-list");
  options->addOldOption("server.ssl-options", "ssl.options");
  options->addOldOption("server.ssl-protocol", "ssl.protocol");

  options->addSection("ssl", "Configure SSL communication");

  options->addOption("--ssl.cafile", "ca file used for secure connections",

@@ -66,6 +66,7 @@ start() {
    ROLE="PRIMARY"
  elif [ "$1" == "coordinator" ]; then
    ROLE="COORDINATOR"
    sleep 1
  fi
  TYPE=$1
  PORT=$2