mirror of https://gitee.com/bigwinds/arangodb
Merge branch 'devel' of https://github.com/arangodb/arangodb into devel
This commit is contained in: commit 2477077944
@@ -1,4 +1,9 @@
 !BOOK ArangoDB VERSION_NUMBER HTTP API Documentation

-Welcome to the ArangoDB HTTP API documentation!
+Welcome to the ArangoDB HTTP API documentation! This documentation is
+for API developers. As a user or administrator of ArangoDB you should
+not need the information provided herein.
+
+In general, as a user of ArangoDB you will use one of the language
+[drivers](http://www.arangodb.com).

@@ -1,31 +0,0 @@
-!CHAPTER ARM
-
-Currently ARM Linux is *unsupported*; the initialization of the Boost lockfree queue doesn't work.
-
-The ArangoDB packages for ARM require the kernel to allow unaligned memory access.
-How the kernel handles unaligned memory access is configurable at runtime by
-checking and adjusting the contents of `/proc/cpu/alignment`.
-
-In order to operate on ARM, ArangoDB requires bit 1 to be set. This will
-make the kernel trap and adjust unaligned memory accesses. If this bit is not
-set, the kernel may send a SIGBUS signal to ArangoDB and terminate it.
-
-To set bit 1 in `/proc/cpu/alignment`, use the following command as a privileged
-user (e.g. root):
-
-    echo "2" > /proc/cpu/alignment
-
-Note that this setting affects all user processes, not just ArangoDB. Setting
-the alignment with the above command will also not make the setting permanent,
-so it will be lost after a restart of the system. In order to make the setting
-permanent, it should be executed during system startup or before starting arangod.
-
-The ArangoDB start/stop scripts do not adjust the alignment setting, but rely on
-the environment to have the correct alignment setting already. The reason for this
-is that the alignment setting also affects all other user processes (which ArangoDB
-is not aware of) and thus may have side effects outside of ArangoDB. It is therefore
-more reasonable to have the system administrator carry out the change.
-
-If the alignment settings are not correct, ArangoDB will log an error at startup
-and abort.
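The alignment tweak described in the removed ARM chapter above can be scripted for system startup. A minimal sketch, assuming a POSIX shell; the target file is a parameter only so the helper can be dry-run — on a real ARM box it would be `/proc/cpu/alignment`, written as root:

```shell
# set_alignment FILE
# Writes alignment mode 2 ("fix up" unaligned accesses) to FILE.
# In production FILE is /proc/cpu/alignment and this must run as root;
# passing a scratch file allows a harmless dry run.
set_alignment() {
  target="$1"
  echo "2" > "$target"
}
```

Run as root at boot, e.g. `set_alignment /proc/cpu/alignment` from an init script, before starting arangod.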
@@ -1,41 +1,12 @@
 !CHAPTER Linux

-You can find binary packages for various Linux distributions
+You can find binary packages for the most common Linux distributions
 [here](http://www.arangodb.com/install/).

-We provide packages for:
-
-* Centos
-* Debian
-* Fedora
-* [Linux-Mint](#linux-mint)
-* Mandriva
-* OpenSUSE
-* RedHat RHEL
-* SUSE SLE
-* Ubuntu
-
-
-!SECTION Using a Package Manager to install ArangoDB
-
-Follow the instructions on the [install](https://www.arangodb.com/install)
-page to use your favorite package manager for the major distributions. After
-setting up the ArangoDB repository you can easily install ArangoDB using yum,
-aptitude, urpmi or zypper.
+Follow the instructions to use your favorite package manager for the
+major distributions. After setting up the ArangoDB repository you can
+easily install ArangoDB using yum, aptitude, urpmi or zypper.

 !SUBSECTION Linux Mint

 Please use the corresponding Ubuntu or Debian packages.
-
-!SECTION Using Vagrant and Chef
-
-A Chef recipe is available from jbianquetti at:
-
-    https://github.com/jbianquetti/chef-arangodb
-
-!SECTION Using ansible
-
-An [Ansible](http://ansible.com) role is available through [Ansible-Galaxy](https://galaxy.ansible.com)
-
-* Role on Ansible-Galaxy: https://galaxy.ansible.com/list#/roles/2344
-* Source on Github: https://github.com/stackmagic/ansible-arangodb

@@ -1,11 +1,9 @@
 !CHAPTER Mac OS X

 The preferred method for installing ArangoDB under Mac OS X is
-[homebrew](#homebrew). However, in case you are not using homebrew, we provide a [command-line
-app](#command-line-app) which contains all the executables.
-
-There is also a version available in the [AppStore](#apples-app-store), which comes with a nice
-graphical user interface to start and stop the server.
+[homebrew](#homebrew). However, in case you are not using homebrew, we
+provide a [command-line app](#command-line-app) or [graphical
+app](#graphical-app) which contains all the executables.

 !SECTION Homebrew

@@ -24,10 +22,6 @@ The ArangoDB shell will be installed as:

     /usr/local/bin/arangosh

-If you want to install the latest (unstable) version use:
-
-    brew install --HEAD arangodb
-
 You can uninstall ArangoDB using:

     brew uninstall arangodb

@@ -41,25 +35,29 @@ Then remove the LaunchAgent:

     rm ~/Library/LaunchAgents/homebrew.mxcl.arangodb.plist

-**Note**: If the latest ArangoDB Version is not shown in homebrew, you also need to update homebrew:
+**Note**: If the latest ArangoDB version is not shown in homebrew, you
+also need to update homebrew:

     brew update

 !SUBSECTION Known issues
-- Performance - the LLVM delivered as of Mac OS X El Capitan builds slow binaries. Use GCC instead
+- Performance - the LLVM delivered as of Mac OS X El Capitan builds slow binaries. Use GCC instead,
+  until this issue has been fixed by Apple.
+- The command-line argument parsing doesn't accept blanks in filenames; the CLI version below does.

-!SECTION Apple's App Store
+!SECTION Graphical App

+In case you are not using homebrew, we also provide a graphical app. You can
+download it from [here](https://www.arangodb.com/install).

-ArangoDB is available in Apple's App-Store. Please note, that it sometimes takes
-days or weeks until the latest versions are available.
+Choose *Mac OS X*. Download and install the application *ArangoDB* in
+your application folder.

 !SECTION Command-Line App

+In case you are not using homebrew, we also provide a command-line app. You can
+download it from [here](https://www.arangodb.com/install).

-Choose *Mac OS X* and go to *Grab binary packages directly*. This allows you to
-install the application *ArangoDB-CLI* in your application folder.
+Choose *Mac OS X*. Download and install the application *ArangoDB-CLI*
+in your application folder.

 Starting the application will start the server and open a terminal window
 showing you the log-file.

@@ -84,4 +82,3 @@ showing you the log-file.
 Note that it is possible to install both the homebrew version and the command-line
 app. You should, however, edit the configuration files of one version and change
 the port used.
-

@@ -17,8 +17,14 @@ Head to [arangodb.com/download](https://www.arangodb.com/download/),
 select your operating system and download ArangoDB. You may also follow
 the instructions on how to install with a package manager, if available.

-Start up the server by running `arangod`.
-!TODO explain how to do that on all major platforms in the most simple way
+If you installed a binary package under Linux, the server is
+automatically started.
+
+If you installed ArangoDB using homebrew under Mac OS X, start the
+server by running `/usr/local/sbin/arangod`.
+
+If you installed ArangoDB under Windows as a service, the server is
+automatically started.

 For startup parameters, installation in a cluster and so on, see
 [Installing](Installing/README.md).

@@ -27,28 +33,36 @@ For startup parameters, installation in a cluster and so on, see

 The server itself speaks REST and VelocyStream, but you can use the
 graphical web interface to keep it simple. There's also
-[arangosh](../Administration/Arangosh/README.md), a synchronous shell for
-interaction with the server. If you're a developer, you might prefer the shell
-over the GUI. It does not provide features like syntax highlighting however.
+[arangosh](../Administration/Arangosh/README.md), a synchronous shell
+for interaction with the server. If you're a developer, you might
+prefer the shell over the GUI. It does not provide features like
+syntax highlighting however.

 The web interface will become available shortly after you started `arangod`.
 You can access it in your browser at http://localhost:8529 - if not, please
-see [Troubleshooting](../Troubleshooting/README.md). It should look like this:
+see [Troubleshooting](../Troubleshooting/README.md).
+
+By default, authentication is enabled. The default user is
+`root`. Depending on the installation method used, the installation
+process either prompted for the root password or the default root
+password is empty.
+
+It should look like this:
+TODO MISSING

 !SUBSECTION Databases, collections and documents

-Databases are sets of collections. Collections store records, which are refered
-to as documents. Collections are the equivalent of tables in RDBMS, and documents
-can be thought of as rows in a table. The difference is that you don't define
-what columns (or rather attribures) there will be in advance. Every document
-in any collection can have arbitrary attribute keys and values. Documents in
-a single collection will likely have a similar structure in practice however,
-but the database system itself does not require it and will operate stable and
-fast no matter how your data looks like.
+Databases are sets of collections. Collections store records, which are referred
+to as documents. Collections are the equivalent of tables in RDBMS, and
+documents can be thought of as rows in a table. The difference is that you don't
+define what columns (or rather attributes) there will be in advance. Every
+document in any collection can have arbitrary attribute keys and
+values. Documents in a single collection will likely have a similar structure in
+practice however, but the database system itself does not require it and will
+operate stably and fast no matter what your data looks like.

 Every server instance comes with a `_system` database.


 !SECTION ArangoDB programs

 The ArangoDB package comes with the following programs:

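The schema-free document model described in the getting-started hunk above can be illustrated with plain JavaScript objects. A minimal sketch — the `users` collection name and the documents are invented for illustration, and this is ordinary JavaScript, not ArangoDB API code:

```javascript
// Two documents that could live in the same (hypothetical) "users"
// collection: they share no fixed column layout, yet both are valid
// records, because no schema is declared in advance.
var users = [
  { _key: "alice", name: "Alice", age: 32 },
  { _key: "bob", name: "Bob", hobbies: ["chess", "rowing"] }
];

// Each document carries its own attribute keys.
var keysOfFirst = Object.keys(users[0]);
var keysOfSecond = Object.keys(users[1]);
```

In an RDBMS the second record would need an `age` column (NULL-filled) and the first a `hobbies` column; here each document simply stores what it has.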
@@ -10,7 +10,6 @@
 * [Linux](GettingStarted/Installing/Linux.md)
 * [Mac OS X](GettingStarted/Installing/MacOSX.md)
 * [Windows](GettingStarted/Installing/Windows.md)
-* [ARM](GettingStarted/Installing/ARM.md)
 * [Compiling](GettingStarted/Installing/Compiling.md)
 * [Cluster setup](GettingStarted/Installing/Cluster.md)
 * [Using the Web Interface](GettingStarted/WebInterface.md)

@@ -252,7 +252,7 @@ OperationID ClusterComm::asyncRequest(
           : TRI_microtime() + timeout;

   op->result.setDestination(destination, logConnectionErrors());
-  if (op->result.status == CL_COMM_ERROR) {
+  if (op->result.status == CL_COMM_BACKEND_UNAVAILABLE) {
     // We put it into the received queue right away for error reporting:
     ClusterCommResult const resCopy(op->result);
     LOG(DEBUG) << "In asyncRequest, putting failed request "

@@ -1131,7 +1131,11 @@ size_t ClusterComm::performRequests(std::vector<ClusterCommRequest>& requests,
       continue;
     }
     auto it = opIDtoIndex.find(res.operationID);
     TRI_ASSERT(it != opIDtoIndex.end());  // we should really know this!
+    if (it == opIDtoIndex.end()) {
+      // Ooops, we got a response to which we did not send the request
+      LOG(ERR) << "Received ClusterComm response for a request we did not send!";
+      continue;
+    }
     size_t index = it->second;
     if (res.status == CL_COMM_RECEIVED) {
       requests[index].result = res;

@@ -1148,6 +1152,7 @@ size_t ClusterComm::performRequests(std::vector<ClusterCommRequest>& requests,
           << (int) res.answer_code;
     } else if (res.status == CL_COMM_BACKEND_UNAVAILABLE ||
+               (res.status == CL_COMM_TIMEOUT && !res.sendWasComplete)) {
       requests[index].result = res;
       LOG_TOPIC(TRACE, logTopic) << "ClusterComm::performRequests: "
           << "got BACKEND_UNAVAILABLE or TIMEOUT from "
           << requests[index].destination << ":"

@@ -1157,7 +1162,7 @@ size_t ClusterComm::performRequests(std::vector<ClusterCommRequest>& requests,
       requests[index].result = res;
       requests[index].done = true;
       nrDone++;
-      LOG_TOPIC(TRACE, logTopic) << "ClusterComm::peformRequests: "
+      LOG_TOPIC(TRACE, logTopic) << "ClusterComm::performRequests: "
           << "got no answer from " << requests[index].destination << ":"
           << requests[index].path << " with error " << res.status;
     }

@@ -200,7 +200,7 @@ struct ClusterCommResult {
   bool sendWasComplete;

   ClusterCommResult()
-      : status(CL_COMM_ERROR),
+      : status(CL_COMM_BACKEND_UNAVAILABLE),
        dropped(false),
        single(false),
        answer_code(GeneralResponse::ResponseCode::PROCESSING),

@@ -1731,7 +1731,7 @@ int ClusterInfo::dropIndexCoordinator(std::string const& databaseName,
       res.slice()[0].get(std::vector<std::string>(
         { AgencyComm::prefix(), "Plan", "Collections", databaseName, collectionID }
       ));
-  if (previous.isObject()) {
+  if (!previous.isObject()) {
     return TRI_ERROR_ARANGO_COLLECTION_NOT_FOUND;
   }

@@ -1942,7 +1942,7 @@ void ClusterInfo::loadServers() {
       result.slice()[0].get(std::vector<std::string>(
         {AgencyComm::prefix(), "Current", "ServersRegistered"}));

-  if (!serversRegistered.isNone()) {
+  if (serversRegistered.isObject()) {
     decltype(_servers) newServers;

     for (auto const& res : VPackObjectIterator(serversRegistered)) {

@@ -2074,7 +2074,7 @@ void ClusterInfo::loadCurrentCoordinators() {
       result.slice()[0].get(std::vector<std::string>(
         {AgencyComm::prefix(), "Current", "Coordinators"}));

-  if (!currentCoordinators.isNone()) {
+  if (currentCoordinators.isObject()) {
     decltype(_coordinators) newCoordinators;

     for (auto const& coordinator : VPackObjectIterator(currentCoordinators)) {

@@ -2131,10 +2131,9 @@ void ClusterInfo::loadCurrentDBServers() {
       result.slice()[0].get(std::vector<std::string>(
         {AgencyComm::prefix(), "Current", "DBServers"}));

-  if (!currentDBServers.isNone()) {
+  if (currentDBServers.isObject()) {
     decltype(_DBServers) newDBServers;

-    //for (; it != result._values.end(); ++it) {
     for (auto const& dbserver : VPackObjectIterator(currentDBServers)) {
       newDBServers.emplace(
           std::make_pair(dbserver.key.copyString(), dbserver.value.copyString()));

@@ -47,7 +47,10 @@ static double const CL_DEFAULT_TIMEOUT = 60.0;
 namespace arangodb {

 static int handleGeneralCommErrors(ClusterCommResult const* res) {
-  // This function creates an error code from a ClusterCommResult.
+  // This function creates an error code from a ClusterCommResult,
+  // but only if it is a communication error. If the communication
+  // was successful and there was an HTTP error code, this function
+  // returns TRI_ERROR_NO_ERROR.
   // If TRI_ERROR_NO_ERROR is returned, then the result was CL_COMM_RECEIVED
   // and .answer can safely be inspected.
   if (res->status == CL_COMM_TIMEOUT) {

@@ -797,9 +800,10 @@ int createDocumentOnCoordinator(
     TRI_ASSERT(requests.size() == 1);
     auto const& req = requests[0];
     auto& res = req.result;
-    if (nrDone == 0 || res.status != CL_COMM_RECEIVED) {
-      // There has been a communication error. Handle and return it.
-      return handleGeneralCommErrors(&res);
+    int commError = handleGeneralCommErrors(&res);
+    if (commError != TRI_ERROR_NO_ERROR) {
+      return commError;
     }

     responseCode = res.answer_code;

@@ -955,9 +959,12 @@ int deleteDocumentOnCoordinator(
     TRI_ASSERT(requests.size() == 1);
     auto const& req = requests[0];
     auto& res = req.result;
-    if (nrDone == 0 || res.status != CL_COMM_RECEIVED) {
-      return handleGeneralCommErrors(&res);
+    int commError = handleGeneralCommErrors(&res);
+    if (commError != TRI_ERROR_NO_ERROR) {
+      return commError;
     }

     responseCode = res.answer_code;
     TRI_ASSERT(res.answer != nullptr);
     auto parsedResult = res.answer->toVelocyPack(&VPackOptions::Defaults);

@@ -1294,22 +1301,28 @@ int getDocumentOnCoordinator(
       // Only one can answer, we react a bit differently
       size_t count;
       int nrok = 0;
+      int commError = TRI_ERROR_NO_ERROR;
       for (count = requests.size(); count > 0; count--) {
         auto const& req = requests[count - 1];
         auto res = req.result;
         if (res.status == CL_COMM_RECEIVED) {
           if (res.answer_code !=
                   arangodb::GeneralResponse::ResponseCode::NOT_FOUND ||
-              (nrok == 0 && count == 1)) {
+              (nrok == 0 && count == 1 && commError == TRI_ERROR_NO_ERROR)) {
             nrok++;
             responseCode = res.answer_code;
             TRI_ASSERT(res.answer != nullptr);
             auto parsedResult = res.answer->toVelocyPack(&VPackOptions::Defaults);
             resultBody.swap(parsedResult);
           }
+        } else {
+          commError = handleGeneralCommErrors(&res);
         }
       }
-      // Note that nrok is always at least 1!
+      if (nrok == 0) {
+        // This can only happen, if a commError was encountered!
+        return commError;
+      }
       if (nrok > 1) {
         return TRI_ERROR_CLUSTER_GOT_CONTRADICTING_ANSWERS;
       }

@@ -1567,9 +1580,6 @@ int getFilteredEdgesOnCoordinator(
     int error = handleGeneralCommErrors(&res);
     if (error != TRI_ERROR_NO_ERROR) {
       cc->drop("", coordTransactionID, 0, "");
-      if (res.status == CL_COMM_ERROR || res.status == CL_COMM_DROPPED) {
-        return TRI_ERROR_INTERNAL;
-      }
       return error;
     }
     std::shared_ptr<VPackBuilder> shardResult = res.answer->toVelocyPack(&VPackOptions::Defaults);

@@ -1823,22 +1833,28 @@ int modifyDocumentOnCoordinator(
   if (!useMultiple) {
     // Only one can answer, we react a bit differently
     int nrok = 0;
+    int commError = TRI_ERROR_NO_ERROR;
     for (size_t count = shardList->size(); count > 0; count--) {
       auto const& req = requests[count - 1];
       auto res = req.result;
       if (res.status == CL_COMM_RECEIVED) {
         if (res.answer_code !=
                 arangodb::GeneralResponse::ResponseCode::NOT_FOUND ||
-            (nrok == 0 && count == 1)) {
+            (nrok == 0 && count == 1 && commError == TRI_ERROR_NO_ERROR)) {
           nrok++;
           responseCode = res.answer_code;
           TRI_ASSERT(res.answer != nullptr);
           auto parsedResult = res.answer->toVelocyPack(&VPackOptions::Defaults);
           resultBody.swap(parsedResult);
         }
+      } else {
+        commError = handleGeneralCommErrors(&res);
       }
     }
-    // Note that nrok is always at least 1!
+    if (nrok == 0) {
+      // This can only happen, if a commError was encountered!
+      return commError;
+    }
     if (nrok > 1) {
       return TRI_ERROR_CLUSTER_GOT_CONTRADICTING_ANSWERS;
     }

@@ -1852,7 +1852,7 @@ static void JS_Drop(v8::FunctionCallbackInfo<v8::Value> const& args) {
   TRI_V8_CURRENT_GLOBALS_AND_SCOPE;

   if (args.Length() != 1) {
-    TRI_V8_THROW_EXCEPTION_USAGE("wait(obj)");
+    TRI_V8_THROW_EXCEPTION_USAGE("drop(obj)");
   }
   // Possible options:
   //   - clientTransactionID (string)

@@ -1860,14 +1860,6 @@ static void JS_Drop(v8::FunctionCallbackInfo<v8::Value> const& args) {
   //   - operationID (number)
   //   - shardID (string)

-  // Disabled to allow communication originating in a DBserver:
-  // 31.7.2014 Max
-
-  // if (ServerState::instance()->getRole() != ServerState::ROLE_COORDINATOR) {
-  //   TRI_V8_THROW_EXCEPTION_INTERNAL(scope,"request works only in coordinator
-  //   role");
-  // }
-
   ClusterComm* cc = ClusterComm::instance();

   if (cc == nullptr) {

@@ -1911,6 +1903,25 @@ static void JS_Drop(v8::FunctionCallbackInfo<v8::Value> const& args) {
   TRI_V8_TRY_CATCH_END
 }

+////////////////////////////////////////////////////////////////////////////////
+/// @brief get an ID for use with coordTransactionId
+////////////////////////////////////////////////////////////////////////////////
+
+static void JS_GetId(v8::FunctionCallbackInfo<v8::Value> const& args) {
+  TRI_V8_TRY_CATCH_BEGIN(isolate);
+
+  if (args.Length() != 0) {
+    TRI_V8_THROW_EXCEPTION_USAGE("getId()");
+  }
+
+  auto id = TRI_NewTickServer();
+  std::string st = StringUtils::itoa(id);
+  v8::Handle<v8::String> s = TRI_V8_ASCII_STRING(st.c_str());
+
+  TRI_V8_RETURN(s);
+  TRI_V8_TRY_CATCH_END
+}
+
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief creates a global cluster context
 ////////////////////////////////////////////////////////////////////////////////

@@ -2106,6 +2117,7 @@ void TRI_InitV8Cluster(v8::Isolate* isolate, v8::Handle<v8::Context> context) {
   TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("enquire"), JS_Enquire);
   TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("wait"), JS_Wait);
   TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("drop"), JS_Drop);
+  TRI_AddMethodVocbase(isolate, rt, TRI_V8_ASCII_STRING("getId"), JS_GetId);

   v8g->ClusterCommTempl.Reset(isolate, rt);
   TRI_AddGlobalFunctionVocbase(isolate, context,

@@ -59,7 +59,9 @@ class RestServerFeature final
   }

   static std::string getJwtSecret() {
-    TRI_ASSERT(RESTSERVER != nullptr);
+    if (RESTSERVER == nullptr) {
+      return std::string();
+    }
     return RESTSERVER->jwtSecret();
   }

@@ -194,8 +194,7 @@ actions.defineHttp({
       return;
     }
     var DBserver = req.parameters.DBserver;
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
-    var options = { coordTransactionID: coord.coordTransactionID, timeout:10 };
+    var options = { timeout:10 };
     var op = ArangoClusterComm.asyncRequest("GET","server:"+DBserver,"_system",
       "/_admin/statistics","",{},options);
     var r = ArangoClusterComm.wait(op);

@@ -343,8 +342,7 @@ actions.defineHttp({
     }
     else {
       // query a remote statistics collection
-      var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
-      var options = { coordTransactionID: coord.coordTransactionID, timeout:10 };
+      var options = { timeout:10 };
       var op = ArangoClusterComm.asyncRequest("POST","server:"+DBserver,"_system",
         "/_api/cursor",JSON.stringify({query: myQueryVal, bindVars: bind}),{},options);
       var r = ArangoClusterComm.wait(op);

@@ -109,7 +109,7 @@ authRouter.use((req, res, next) => {
 });


-authRouter.get('/api/*', module.context.apiDocumentation({
+router.get('/api/*', module.context.apiDocumentation({
   swaggerJson(req, res) {
     res.json(API_DOCS);
   }

@@ -443,8 +443,7 @@ router.get("/cluster", function (req, res) {

   const DBserver = req.queryParams.DBserver;
   let type = req.queryParams.type;
-  const coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
-  const options = { coordTransactionID: coord.coordTransactionID, timeout: 10 };
+  const options = { timeout: 10 };

   if (type !== "short" && type !== "long") {
     type = "short";

@@ -1,5 +1,5 @@
 /*jshint strict: false */
-/*global ArangoClusterComm, ArangoClusterInfo, require, exports, module */
+/*global ArangoClusterComm, require, exports, module */

 ////////////////////////////////////////////////////////////////////////////////
 /// @brief ArangoCollection

@@ -135,7 +135,7 @@ ArangoCollection.prototype.truncate = function () {
     }
     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };

     shards.forEach(function (shard) {

@@ -148,7 +148,7 @@ ArangoCollection.prototype.truncate = function () {
         options);
     });

-    cluster.wait(coord, shards);
+    cluster.wait(coord, shards.length);
     return;
   }

@@ -259,7 +259,7 @@ ArangoCollection.prototype.any = function () {
   if (cluster.isCoordinator()) {
     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };

     shards.forEach(function (shard) {

@@ -274,7 +274,7 @@ ArangoCollection.prototype.any = function () {
         options);
     });

-    var results = cluster.wait(coord, shards), i;
+    var results = cluster.wait(coord, shards.length), i;
     for (i = 0; i < results.length; ++i) {
       var body = JSON.parse(results[i].body);
       if (body.document !== null) {

@@ -356,7 +356,7 @@ ArangoCollection.prototype.removeByExample = function (example,
   if (cluster.isCoordinator()) {
     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };

     if (limit > 0 && shards.length > 1) {

@@ -383,7 +383,7 @@ ArangoCollection.prototype.removeByExample = function (example,
     });

     var deleted = 0;
-    var results = cluster.wait(coord, shards);
+    var results = cluster.wait(coord, shards.length);
     for (i = 0; i < results.length; ++i) {
       var body = JSON.parse(results[i].body);
       deleted += (body.deleted || 0);

@@ -443,7 +443,7 @@ ArangoCollection.prototype.replaceByExample = function (example,
   if (cluster.isCoordinator()) {
     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };

     if (limit > 0 && shards.length > 1) {

@@ -471,7 +471,7 @@ ArangoCollection.prototype.replaceByExample = function (example,
     });

     var replaced = 0;
-    var results = cluster.wait(coord, shards), i;
+    var results = cluster.wait(coord, shards.length), i;
     for (i = 0; i < results.length; ++i) {
       var body = JSON.parse(results[i].body);
       replaced += (body.replaced || 0);

@@ -542,7 +542,7 @@ ArangoCollection.prototype.updateByExample = function (example,
   if (cluster.isCoordinator()) {
     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };

     if (limit > 0 && shards.length > 1) {

@@ -572,7 +572,7 @@ ArangoCollection.prototype.updateByExample = function (example,
     });

     var updated = 0;
-    var results = cluster.wait(coord, shards), i;
+    var results = cluster.wait(coord, shards.length), i;
     for (i = 0; i < results.length; ++i) {
       var body = JSON.parse(results[i].body);
       updated += (body.updated || 0);

@ -1353,53 +1353,56 @@ var shardList = function (dbName, collectionName) {
|
|||
/// @brief wait for a distributed response
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
var waitForDistributedResponse = function (data, shards) {
|
||||
var waitForDistributedResponse = function (data, numberOfRequests) {
|
||||
var received = [ ];
|
||||
try {
|
||||
|
||||
while (received.length < shards.length) {
|
||||
var result = global.ArangoClusterComm.wait(data);
|
||||
var status = result.status;
|
||||
while (received.length < numberOfRequests) {
|
||||
var result = global.ArangoClusterComm.wait(data);
|
||||
var status = result.status;
|
||||
|
||||
if (status === "ERROR") {
|
||||
      if (status === "ERROR") {
        raiseError(arangodb.errors.ERROR_INTERNAL.code,
                   "received an error from a DB server: " + JSON.stringify(result));
      }
      else if (status === "TIMEOUT") {
        raiseError(arangodb.errors.ERROR_CLUSTER_TIMEOUT.code,
                   arangodb.errors.ERROR_CLUSTER_TIMEOUT.message);
      }
      else if (status === "DROPPED") {
        raiseError(arangodb.errors.ERROR_INTERNAL.code,
                   "the operation was dropped");
      }
      else if (status === "RECEIVED") {
        received.push(result);

        if (result.headers && result.headers.hasOwnProperty('x-arango-response-code')) {
          var code = parseInt(result.headers['x-arango-response-code'].substr(0, 3), 10);

          if (code >= 400) {
            var body;

            try {
              body = JSON.parse(result.body);
            }
            catch (err) {
              raiseError(arangodb.errors.ERROR_INTERNAL.code,
                         "error parsing JSON received from a DB server: " + err.message);
            }

            raiseError(body.errorNum,
                       body.errorMessage);
          }
        }
      }
      else {
        // something else... wait without GC
        require("internal").wait(0.1, false);
      }
    } finally {
      global.ArangoClusterComm.drop(data);
    }

    return received;
  };
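The loop above polls the cluster communication layer and dispatches on the returned status, releasing the operation in a `finally` block. A minimal standalone sketch of that pattern, with the ArangoDB-specific calls mocked out — `collectResults` and the injected `wait` callback are illustrative names, not part of the real `ArangoClusterComm` API:

```javascript
// Sketch of the poll-and-dispatch pattern; the real code calls
// ArangoClusterComm.wait(...) and global.ArangoClusterComm.drop(data).
function collectResults(pendingCount, wait) {
  var received = [];
  try {
    while (received.length < pendingCount) {
      var result = wait();            // stand-in for ArangoClusterComm.wait(...)
      var status = result.status;
      if (status === "ERROR") {
        throw new Error("received an error from a DB server: " + JSON.stringify(result));
      } else if (status === "TIMEOUT") {
        throw new Error("cluster operation timed out");
      } else if (status === "DROPPED") {
        throw new Error("the operation was dropped");
      } else if (status === "RECEIVED") {
        received.push(result);
      }
      // any other status: the real code sleeps 0.1s (without GC) and polls again
    }
  } finally {
    // always release the pending operation here, even when an error was raised
    // (global.ArangoClusterComm.drop(data) in the real module)
  }
  return received;
}

// usage with a canned queue of responses
var queue = [{ status: "RECEIVED", body: "{}" }, { status: "RECEIVED", body: "{}" }];
var out = collectResults(2, function () { return queue.shift(); });
```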
@@ -1543,7 +1546,7 @@ var bootstrapDbServers = function (isRelaunch) {
   var i;

   var options = {
-    coordTransactionID: global.ArangoClusterInfo.uniqid(),
+    coordTransactionID: global.ArangoClusterComm.getId(),
     timeout: 90
   };
@@ -616,7 +616,7 @@ function createService(mount, options, activateDevelopment) {

 function uploadToPeerCoordinators(serviceInfo, coordinators) {
   let coordOptions = {
-    coordTransactionID: ArangoClusterInfo.uniqid()
+    coordTransactionID: ArangoClusterComm.getId()
   };
   let req = fs.readBuffer(joinPath(fs.getTempPath(), serviceInfo));
   let httpOptions = {};

@@ -628,8 +628,9 @@ function uploadToPeerCoordinators(serviceInfo, coordinators) {
     ArangoClusterComm.asyncRequest('POST', 'server:' + coordinators[i], db._name(),
       '/_api/upload', req, httpOptions, coordOptions);
   }
+  delete coordOptions.clientTransactionID;
   return {
-    results: cluster.wait(coordOptions, coordinators),
+    results: cluster.wait(coordOptions, coordinators.length),
     mapping
   };
 }
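`uploadToPeerCoordinators` fires one asynchronous request per peer under a shared `coordTransactionID` and then waits for exactly that many replies, which is why the second argument to `cluster.wait` becomes a count (`coordinators.length`) rather than the array. A hedged sketch of that fan-out shape, with stub functions standing in for the real `ArangoClusterComm` calls:

```javascript
// Illustrative fan-out: send() and waitFor() are stand-ins for
// ArangoClusterComm.asyncRequest(...) and cluster.wait(...).
function fanOut(peers, send, waitFor) {
  var txId = "tx-" + Date.now();          // shared transaction id for the batch
  peers.forEach(function (peer) {
    send(peer, txId);                     // one async request per peer
  });
  // wait for as many answers as requests were sent: a count, not the array
  return waitFor(txId, peers.length);
}

// usage with stubs that record requests and fabricate answers
var sent = [];
var results = fanOut(["c1", "c2", "c3"],
  function (peer, txId) { sent.push(peer + "/" + txId); },
  function (txId, n) { return new Array(n).fill({ status: "RECEIVED" }); });
```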
@@ -1159,7 +1160,7 @@ function install(serviceInfo, mount, options) {
     let intOpts = JSON.parse(JSON.stringify(options));
     intOpts.__clusterDistribution = true;
     let coordOptions = {
-      coordTransactionID: ArangoClusterInfo.uniqid()
+      coordTransactionID: ArangoClusterComm.getId()
     };
     let httpOptions = {};
     for (let i = 0; i < res.length; ++i) {

@@ -1170,14 +1171,14 @@ function install(serviceInfo, mount, options) {
       ArangoClusterComm.asyncRequest('POST', 'server:' + mapping[res[i].clientTransactionID], db._name(),
         '/_admin/foxx/install', JSON.stringify(intReq), httpOptions, coordOptions);
     }
-    cluster.wait(coordOptions, coordinators);
+    cluster.wait(coordOptions, res.length);
   } else {
     /*jshint -W075:true */
     let req = {appInfo: serviceInfo, mount, options};
     /*jshint -W075:false */
     let httpOptions = {};
     let coordOptions = {
-      coordTransactionID: ArangoClusterInfo.uniqid()
+      coordTransactionID: ArangoClusterComm.getId()
     };
     req.options.__clusterDistribution = true;
     req = JSON.stringify(req);

@@ -1187,6 +1188,7 @@ function install(serviceInfo, mount, options) {
         '/_admin/foxx/install', req, httpOptions, coordOptions);
       }
     }
+    cluster.wait(coordOptions, coordinators.length - 1);
   }
 }
 reloadRouting();
@@ -1283,7 +1285,7 @@ function uninstall(mount, options) {
     /*jshint -W075:false */
     let httpOptions = {};
     let coordOptions = {
-      coordTransactionID: ArangoClusterInfo.uniqid()
+      coordTransactionID: ArangoClusterComm.getId()
     };
     req.options.__clusterDistribution = true;
     req.options.force = true;

@@ -1294,6 +1296,7 @@ function uninstall(mount, options) {
         '/_admin/foxx/uninstall', req, httpOptions, coordOptions);
       }
     }
+    cluster.wait(coordOptions, coordinators.length - 1);
   }
   reloadRouting();
   return service.simpleJSON();
@@ -1327,7 +1330,7 @@ function replace(serviceInfo, mount, options) {
     let intOpts = JSON.parse(JSON.stringify(options));
     intOpts.__clusterDistribution = true;
     let coordOptions = {
-      coordTransactionID: ArangoClusterInfo.uniqid()
+      coordTransactionID: ArangoClusterComm.getId()
     };
     let httpOptions = {};
     for (let i = 0; i < res.length; ++i) {

@@ -1338,7 +1341,7 @@ function replace(serviceInfo, mount, options) {
       ArangoClusterComm.asyncRequest('POST', 'server:' + mapping[res[i].coordinatorTransactionID], db._name(),
         '/_admin/foxx/replace', JSON.stringify(intReq), httpOptions, coordOptions);
     }
-    cluster.wait(coordOptions, coordinators);
+    cluster.wait(coordOptions, res.length);
   } else {
     let intOpts = JSON.parse(JSON.stringify(options));
     /*jshint -W075:true */

@@ -1346,7 +1349,7 @@ function replace(serviceInfo, mount, options) {
     /*jshint -W075:false */
     let httpOptions = {};
     let coordOptions = {
-      coordTransactionID: ArangoClusterInfo.uniqid()
+      coordTransactionID: ArangoClusterComm.getId()
     };
     req.options.__clusterDistribution = true;
     req.options.force = true;

@@ -1355,6 +1358,7 @@ function replace(serviceInfo, mount, options) {
       ArangoClusterComm.asyncRequest('POST', 'server:' + coordinators[i], db._name(),
         '/_admin/foxx/replace', req, httpOptions, coordOptions);
     }
+    cluster.wait(coordOptions, coordinators.length);
   }
 }
 _uninstall(mount, {teardown: true,
@@ -1394,7 +1398,7 @@ function upgrade(serviceInfo, mount, options) {
     let intOpts = JSON.parse(JSON.stringify(options));
     intOpts.__clusterDistribution = true;
     let coordOptions = {
-      coordTransactionID: ArangoClusterInfo.uniqid()
+      coordTransactionID: ArangoClusterComm.getId()
     };
     let httpOptions = {};
     for (let i = 0; i < res.length; ++i) {

@@ -1405,7 +1409,7 @@ function upgrade(serviceInfo, mount, options) {
       ArangoClusterComm.asyncRequest('POST', 'server:' + mapping[res[i].coordinatorTransactionID], db._name(),
         '/_admin/foxx/update', JSON.stringify(intReq), httpOptions, coordOptions);
     }
-    cluster.wait(coordOptions, coordinators);
+    cluster.wait(coordOptions, res.length);
   } else {
     let intOpts = JSON.parse(JSON.stringify(options));
     /*jshint -W075:true */

@@ -1413,7 +1417,7 @@ function upgrade(serviceInfo, mount, options) {
     /*jshint -W075:false */
     let httpOptions = {};
     let coordOptions = {
-      coordTransactionID: ArangoClusterInfo.uniqid()
+      coordTransactionID: ArangoClusterComm.getId()
     };
     req.options.__clusterDistribution = true;
     req.options.force = true;

@@ -1422,6 +1426,7 @@ function upgrade(serviceInfo, mount, options) {
       ArangoClusterComm.asyncRequest('POST', 'server:' + coordinators[i], db._name(),
         '/_admin/foxx/update', req, httpOptions, coordOptions);
     }
+    cluster.wait(coordOptions, coordinators.length);
   }
 }
 var oldService = lookupService(mount);
@@ -1,5 +1,5 @@
 /*jshint strict: false */
-/*global ArangoClusterComm, ArangoClusterInfo */
+/*global ArangoClusterComm */

 ////////////////////////////////////////////////////////////////////////////////
 /// @brief Arango Simple Query Language

@@ -219,7 +219,7 @@ SimpleQueryNear.prototype.execute = function () {

     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this._collection.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };

     var _limit = 0;

@@ -256,7 +256,7 @@ SimpleQueryNear.prototype.execute = function () {
                                      options);
     });

-    var result = cluster.wait(coord, shards);
+    var result = cluster.wait(coord, shards.length);

     result.forEach(function(part) {
       var body = JSON.parse(part.body);

@@ -343,7 +343,7 @@ SimpleQueryWithin.prototype.execute = function () {

     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this._collection.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };

     var _limit = 0;

@@ -381,7 +381,7 @@ SimpleQueryWithin.prototype.execute = function () {
                                      options);
     });

-    var result = cluster.wait(coord, shards);
+    var result = cluster.wait(coord, shards.length);

     result.forEach(function(part) {
       var body = JSON.parse(part.body);

@@ -460,7 +460,7 @@ SimpleQueryFulltext.prototype.execute = function () {
   if (cluster.isCoordinator()) {
     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this._collection.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };
     var _limit = 0;
     if (this._limit > 0) {

@@ -486,7 +486,7 @@ SimpleQueryFulltext.prototype.execute = function () {
                                      options);
     });

-    var result = cluster.wait(coord, shards);
+    var result = cluster.wait(coord, shards.length);

     result.forEach(function(part) {
       var body = JSON.parse(part.body);

@@ -547,7 +547,7 @@ SimpleQueryWithinRectangle.prototype.execute = function () {
   if (cluster.isCoordinator()) {
     var dbName = require("internal").db._name();
     var shards = cluster.shardList(dbName, this._collection.name());
-    var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
+    var coord = { coordTransactionID: ArangoClusterComm.getId() };
     var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };
     var _limit = 0;
     if (this._limit > 0) {

@@ -578,7 +578,7 @@ SimpleQueryWithinRectangle.prototype.execute = function () {
     });

     var _documents = [ ], total = 0;
-    result = cluster.wait(coord, shards);
+    result = cluster.wait(coord, shards.length);

     result.forEach(function(part) {
       var body = JSON.parse(part.body);
@@ -39,14 +39,19 @@ var assertQueryError = helper.assertQueryError;
 ////////////////////////////////////////////////////////////////////////////////

 function ahuacatlNumericFunctionsTestSuite () {
-  function assertAlmostEqual(a, b) {
+  function assertAlmostEqual(a, b, text) {
     if (typeof(a) === 'number') {
       a = a.toPrecision(8);
     }
     if (typeof(b) === 'number') {
       b = b.toPrecision(8);
     }
-    assertEqual(a, b);
+    if (((a === 0) && (b === 0.0)) ||
+        ((b === 0) && (a === 0.0))) {
+      return;
+    }
+
+    assertEqual(a, b, text);
   }

   return {
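The helper above compares numbers at 8 significant digits via `toPrecision(8)` and now threads a message through to `assertEqual`, so a failing comparison names the AQL query that produced the value. A self-contained version of the same idea, using a plain `throw` instead of the test framework's `assertEqual`:

```javascript
// Standalone almost-equal assertion: numbers are normalized to 8 significant
// digits before comparison, and `text` identifies the failing case.
function assertAlmostEqual(a, b, text) {
  if (typeof a === 'number') {
    a = a.toPrecision(8);
  }
  if (typeof b === 'number') {
    b = b.toPrecision(8);
  }
  // the diff additionally short-circuits when both values are zero
  if (a !== b) {
    throw new Error("not almost equal: " + a + " !== " + b +
                    (text ? " (" + text + ")" : ""));
  }
}

assertAlmostEqual(Math.PI, 3.141592653589793, "comparing PI");  // passes
assertAlmostEqual(0.1 + 0.2, 0.3, "float rounding");            // passes at 8 digits
```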
@@ -59,13 +64,13 @@ function ahuacatlNumericFunctionsTestSuite () {
       var expected = 3.141592653589793;

       var query = "RETURN PI()";
-      assertAlmostEqual(expected, getQueryResults(query)[0]);
+      assertAlmostEqual(expected, getQueryResults(query)[0], "comparing PI");

       query = "RETURN NOOPT(PI())";
-      assertAlmostEqual(expected, getQueryResults(query)[0]);
+      assertAlmostEqual(expected, getQueryResults(query)[0], "comparing NOOPT(PI)");

       query = "RETURN NOOPT(V8(PI()))";
-      assertAlmostEqual(expected, getQueryResults(query)[0]);
+      assertAlmostEqual(expected, getQueryResults(query)[0], "comparing NOOPT(V8(PI))");
     },

 ////////////////////////////////////////////////////////////////////////////////
@@ -131,13 +136,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN LOG(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(LOG(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(LOG(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -204,13 +209,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN LOG2(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(LOG2(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(LOG2(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -277,13 +282,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN LOG10(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(LOG10(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(LOG10(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -350,13 +355,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN EXP(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(EXP(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(EXP(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -423,13 +428,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN EXP2(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(EXP2(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(EXP2(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },
@@ -494,13 +499,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN RADIANS(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(RADIANS(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(RADIANS(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -565,13 +570,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN DEGREES(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(DEGREES(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(DEGREES(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -638,13 +643,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN SIN(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(SIN(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(SIN(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -711,13 +716,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN COS(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(COS(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(COS(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -784,13 +789,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN TAN(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(TAN(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(TAN(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },
@@ -857,13 +862,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN ASIN(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(ASIN(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(ASIN(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -930,13 +935,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN ACOS(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(ACOS(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(ACOS(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -1003,13 +1008,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN ATAN(@v)";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(ATAN(@v))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(ATAN(@v)))";
-        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0]);
+        assertAlmostEqual(v[1], getQueryResults(query, { v: v[0] })[0], query + " " + JSON.stringify(v));
       });
     },

@@ -3832,13 +3837,13 @@ function ahuacatlNumericFunctionsTestSuite () {

       values.forEach(function(v) {
         var query = "RETURN ATAN2(@v1, @v2)";
-        assertAlmostEqual(v[2], getQueryResults(query, { v1: v[0], v2: v[1] })[0]);
+        assertAlmostEqual(v[2], getQueryResults(query, { v1: v[0], v2: v[1] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(ATAN2(@v1, @v2))";
-        assertAlmostEqual(v[2], getQueryResults(query, { v1: v[0], v2: v[1] })[0]);
+        assertAlmostEqual(v[2], getQueryResults(query, { v1: v[0], v2: v[1] })[0], query + " " + JSON.stringify(v));

         query = "RETURN NOOPT(V8(ATAN2(@v1, @v2)))";
-        assertAlmostEqual(v[2], getQueryResults(query, { v1: v[0], v2: v[1] })[0]);
+        assertAlmostEqual(v[2], getQueryResults(query, { v1: v[0], v2: v[1] })[0], query + " " + JSON.stringify(v));
       });
     },
@@ -4125,26 +4130,29 @@ function ahuacatlNumericFunctionsTestSuite () {
       ];

       data.forEach(function (value) {
-        var actual = getQueryResults("RETURN SQRT(" + JSON.stringify(value[0]) + ")");
+        var query = "RETURN SQRT(" + JSON.stringify(value[0]) + ")";
+        var actual = getQueryResults(query);
         // if (value[1] === null) {
         //   assertEqual(0, actual[0]);
         // }
         // else {
-        assertAlmostEqual(value[1], actual[0]);
+        assertAlmostEqual(value[1], actual[0], query);
         // }
-        actual = getQueryResults("RETURN NOOPT(SQRT(" + JSON.stringify(value[0]) + "))");
+        query = "RETURN NOOPT(SQRT(" + JSON.stringify(value[0]) + "))";
+        actual = getQueryResults(query);
         // if (value[1] === null) {
         //   assertEqual(0, actual[0]);
         // }
         // else {
-        assertAlmostEqual(value[1], actual[0]);
+        assertAlmostEqual(value[1], actual[0], query);
         // }
-        actual = getQueryResults("RETURN NOOPT(V8(SQRT(" + JSON.stringify(value[0]) + ")))");
+        query = "RETURN NOOPT(V8(SQRT(" + JSON.stringify(value[0]) + ")))";
+        actual = getQueryResults(query);
         // if (value[1] === null) {
         //   assertEqual(0, actual[0]);
         // }
         // else {
-        assertAlmostEqual(value[1], actual[0]);
+        assertAlmostEqual(value[1], actual[0], query);
         // }
       });
     },
@@ -5795,14 +5803,14 @@ function ahuacatlNumericFunctionsTestSuite () {
         }
         var query = "RETURN POW(" + JSON.stringify(value[0]) + ", " + JSON.stringify(value[1]) + ")";
         var actual = getQueryResults(query);
-        assertAlmostEqual(value[2], actual[0]);
+        assertAlmostEqual(value[2], actual[0], query + " " + JSON.stringify(value));

         actual = getQueryResults("RETURN NOOPT(POW(" + JSON.stringify(value[0]) + ", " + JSON.stringify(value[1]) + "))");
-        assertAlmostEqual(value[2], actual[0], value);
+        assertAlmostEqual(value[2], actual[0], value, query + " " + JSON.stringify(value));

         query = "RETURN NOOPT(V8(POW(" + JSON.stringify(value[0]) + ", " + JSON.stringify(value[1]) + ")))";
         actual = getQueryResults(query);
-        assertAlmostEqual(value[2], actual[0]);
+        assertAlmostEqual(value[2], actual[0], query + " " + JSON.stringify(value));
       });
     },
@@ -345,11 +345,11 @@ bool copyDirectoryRecursive(std::string const& source,
                             std::string const& target, std::string& error) {

   bool rc = true;
-#ifdef TRI_HAVE_WIN32_LIST_FILES
-  auto isSubDirectory = [](struct _finddata_t item) -> bool {
-    return ((item.attrib & _A_SUBDIR) != 0);
+
+  auto isSubDirectory = [](std::string const& name) -> bool {
+    return isDirectory(name);
   };

+#ifdef TRI_HAVE_WIN32_LIST_FILES
   struct _finddata_t oneItem;
   intptr_t handle;

@@ -363,10 +363,6 @@ bool copyDirectoryRecursive(std::string const& source,

   do {
 #else
-  auto isSubDirectory = [](std::string const& name) -> bool {
-    return isDirectory(name);
-  };
-
   struct dirent* d = (struct dirent*)TRI_Allocate(
       TRI_UNKNOWN_MEM_ZONE, (offsetof(struct dirent, d_name) + PATH_MAX + 1),
       false);