mirror of https://gitee.com/bigwinds/arangodb
Merge branch 'sharding' of https://github.com/triAGENS/ArangoDB into sharding
commit 23e0122f0f

CHANGELOG (16 changed lines)
@@ -1,6 +1,16 @@

v1.5.0 (XXXX-XX-XX)
-------------------

* allow direct access from the `db` object to collections whose names start
  with an underscore (e.g. db._users).

  Previously, access to such collections via the `db` object was possible from
  arangosh, but not from arangod (and thus Foxx and actions). The only way
  to access such collections from these places was via the `db._collection(<name>)`
  workaround.

* issue #738: added __dirname, __filename pseudo-globals. Fixes #733. (by @pluma)

* allow `\n` (as well as `\r\n`) as line terminator in batch requests sent to
  the `/_api/batch` HTTP API.
@@ -108,6 +118,12 @@ v1.5.0 (XXXX-XX-XX)

v1.4.6 (XXXX-XX-XX)
-------------------

* issue #736: AQL function to parse collection and key from document handle

* added fm.rescan() method for Foxx-Manager

* fixed issue #734: foxx cookie and route problem

* added method `fm.configJson` for arangosh

* include `startupPath` in result of API `/_api/foxx/config`
@@ -7,4 +7,4 @@ server: triagens GmbH High-Performance HTTP Server
connection: Keep-Alive
content-type: application/json; charset=utf-8

-{"error":false,"created":2,"errors":0}
+{"error":false,"created":2,"empty":0,"errors":0}
@@ -8,4 +8,4 @@ server: triagens GmbH High-Performance HTTP Server
connection: Keep-Alive
content-type: application/json; charset=utf-8

-{"error":false,"created":2,"errors":0}
+{"error":false,"created":2,"empty":0,"errors":0}
@@ -93,7 +93,14 @@ the data are line-wise JSON documents (type = documents) or a JSON list (type =
The server will respond with an HTTP 201 if everything went well. The number of
documents imported will be returned in the `created` attribute of the
response. If any documents were skipped or incorrectly formatted, this will be
-returned in the `errors` attribute.
+returned in the `errors` attribute. There will also be an attribute `empty` in
+the response, which will contain a value of `0`.

If the `details` parameter was set to `true` in the request, the response will
also contain an attribute `details` which is a list of details about errors that
occurred on the server side during the import. This list might be empty if no
errors occurred.
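
For illustration, a minimal invocation of this API and its response could look
as follows (host, port and the contents of `data.json` are assumed for the
example; the file contains one JSON document per line):

  unix> curl -X POST --data-binary @data.json "http://localhost:8529/_api/import?type=documents&collection=users"

  {"error":false,"created":2,"empty":0,"errors":0}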

Importing Headers and Values {#HttpImportHeaderData}
====================================================
@@ -112,7 +119,13 @@ are needed or allowed in this data section.
The server will again respond with an HTTP 201 if everything went well. The
number of documents imported will be returned in the `created` attribute of the
response. If any documents were skipped or incorrectly formatted, this will be
-returned in the `errors` attribute.
+returned in the `errors` attribute. The number of empty lines in the input file
+will be returned in the `empty` attribute.

If the `details` parameter was set to `true` in the request, the response will
also contain an attribute `details` which is a list of details about errors that
occurred on the server side during the import. This list might be empty if no
errors occurred.
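
As a sketch of the headers-and-values format (attribute names and values are
invented for the example), a request body would consist of a first line naming
the attributes, followed by one line of values per document:

  ["name", "age"]
  ["foo", 42]
  ["bar", 23]

An empty line in such a body produces no document, but is counted in the
`empty` attribute of the response.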

Importing into Edge Collections {#HttpImportEdges}
==================================================
@@ -52,7 +52,8 @@ specify a password, you will be prompted for one.
Note that the collection (`users` in this case) must already exist or the import
will fail. If you want to create a new collection with the import data, you need
to specify the `--create-collection` option. Note that it is only possible to
-create a document collection using the `--create-collection` flag.
+create a document collection using the `--create-collection` flag, not an edge
+collection.

  unix> arangoimp --file "data.json" --type json --collection "users" --create-collection true
@@ -65,6 +66,18 @@ Please note that by default, _arangoimp_ will import data into the specified
collection in the default database (`_system`). To specify a different database,
use the `--server.database` option when invoking _arangoimp_.
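
For example, to import into a database named `mydb` (the database name here is
just a placeholder for illustration):

  unix> arangoimp --file "data.json" --type json --collection "users" --server.database mydb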

An _arangoimp_ import will print out the final results on the command line.
By default, it shows the number of documents created, the number of errors that
occurred on the server side, and the total number of input file lines/documents
that it processed. Additionally, _arangoimp_ will print out details about errors
that happened on the server side (if any).

Example:

  created: 2
  errors: 0
  total: 2
Importing CSV Data {#ImpManualCsv}
==================================


@@ -114,3 +127,50 @@ with the `--separator` argument.
An example command line to execute the TSV import is:

  unix> arangoimp --file "data.tsv" --type tsv --collection "users"
Importing into an Edge Collection {#ImpManualEdges}
===================================================

arangoimp can also be used to import data into an existing edge collection.
The import data must, for each edge to import, contain at least the `_from` and
`_to` attributes. These indicate which two documents the edge should connect.
It is necessary that these attributes are set for all records, and point to
valid document ids in existing collections.

Example:

  { "_from" : "users/1234", "_to" : "users/4321", "desc" : "1234 is connected to 4321" }

Note that the edge collection must already exist when the import is started. Using
the `--create-collection` flag will not work because arangoimp will always try to
create a regular document collection if the target collection does not exist.
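
A sketch of the corresponding invocation, assuming the edge collection
`connections` has been created beforehand:

  unix> arangoimp --file "edges.json" --type json --collection "connections"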

Attribute Naming and Special Attributes {#ImpManualAttributes}
==============================================================
Attributes whose names start with an underscore are treated in a special way by
ArangoDB:

- the optional `_key` attribute contains the document's key. If specified, the value
  must be formally valid (e.g. must be a string and conform to the naming conventions
  for @ref DocumentKeys). Additionally, the key value must be unique within the
  collection the import is run for.
- `_from`: when importing into an edge collection, this attribute contains the id
  of one of the documents connected by the edge. The value of `_from` must be a
  syntactically valid document id and the referred collection must exist.
- `_to`: when importing into an edge collection, this attribute contains the id
  of the other document connected by the edge. The value of `_to` must be a
  syntactically valid document id and the referred collection must exist.
- `_rev`: this attribute contains the revision number of a document. However, the
  revision numbers are managed by ArangoDB and cannot be specified on import. Thus
  any value in this attribute is ignored on import.
- all other attributes starting with an underscore are discarded on import without
  any warnings.

If you import values into `_key`, you should make sure they are valid and unique.

When importing data into an edge collection, you should make sure that all import
documents contain `_from` and `_to` and that their values point to existing documents.

Finally you should make sure that all other attributes in the import file do not
start with an underscore - otherwise they might be discarded.
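
As an illustration of these rules, a line of an edge-collection import file
could look like this (keys and values are invented for the example):

  { "_key" : "edge1", "_from" : "users/1234", "_to" : "users/4321", "_rev" : "99", "_score" : 3 }

Here `_key`, `_from` and `_to` would be used, the `_rev` value would be ignored,
and the unknown underscore attribute `_score` would be discarded without warning.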

@@ -5,3 +5,5 @@ TOC {#ImpManualTOC}
- @ref ImpManualJson
- @ref ImpManualCsv
- @ref ImpManualTsv
+- @ref ImpManualEdges
+- @ref ImpManualAttributes

@@ -1259,6 +1259,19 @@ AQL supports the following functions to operate on document values:

    RETURN KEEP(doc, 'firstname', 'name', 'likes')

- @FN{PARSE_IDENTIFIER(@FA{document-handle})}: parses the document handle specified in
  @FA{document-handle} and returns the handle's individual parts as separate attributes.
  This function can be used to easily determine the collection name and key from a given document.
  The @FA{document-handle} can either be a regular document from a collection, or a document
  identifier string (e.g. `_users/1234`). Passing either a non-string, a non-document, or a
  document without an `_id` attribute will result in an error.

    RETURN PARSE_IDENTIFIER('_users/my-user')
    [ { "collection" : "_users", "key" : "my-user" } ]

    RETURN PARSE_IDENTIFIER({ "_id" : "mycollection/mykey", "value" : "some value" })
    [ { "collection" : "mycollection", "key" : "mykey" } ]

@subsubsection AqlFunctionsGeo Geo functions

AQL offers the following functions to filter data based on geo indexes:
@@ -714,6 +714,7 @@ TRI_associative_pointer_t* TRI_CreateFunctionsAql (void) {
  REGISTER_FUNCTION("NOT_NULL", "NOT_NULL", true, false, ".|+", NULL);
  REGISTER_FUNCTION("FIRST_LIST", "FIRST_LIST", true, false, ".|+", NULL);
  REGISTER_FUNCTION("FIRST_DOCUMENT", "FIRST_DOCUMENT", true, false, ".|+", NULL);
+ REGISTER_FUNCTION("PARSE_IDENTIFIER", "PARSE_IDENTIFIER", true, false, ".", NULL);

  if (! result) {
    TRI_FreeFunctionsAql(functions);
@@ -401,8 +401,18 @@ void ApplicationCluster::stop () {

  {
    AgencyCommLocker locker("Current", "WRITE");

    if (locker.successful()) {
+     // unregister ourselves
+     ServerState::RoleEnum role = ServerState::instance()->getRole();
+
+     if (role == ServerState::ROLE_PRIMARY) {
+       comm.removeValues("Current/DBServers/" + _myId, false);
+     }
+     else if (role == ServerState::ROLE_COORDINATOR) {
+       comm.removeValues("Current/Coordinators/" + _myId, false);
+     }
+
      // unregister ourselves
      comm.removeValues("Current/ServersRegistered/" + _myId, false);
    }
@@ -113,6 +113,86 @@ CollectionInfo::~CollectionInfo () {
  }
}

// -----------------------------------------------------------------------------
// --SECTION--                                      CollectionInfoCurrent class
// -----------------------------------------------------------------------------

// -----------------------------------------------------------------------------
// --SECTION--                                      constructors / destructors
// -----------------------------------------------------------------------------

////////////////////////////////////////////////////////////////////////////////
/// @brief creates an empty collection info object
////////////////////////////////////////////////////////////////////////////////

CollectionInfoCurrent::CollectionInfoCurrent () {
}

////////////////////////////////////////////////////////////////////////////////
/// @brief creates a collection info object from json
////////////////////////////////////////////////////////////////////////////////

CollectionInfoCurrent::CollectionInfoCurrent (ShardID const& shardID, TRI_json_t* json) {
  _jsons.insert(make_pair<ShardID, TRI_json_t*>(shardID, json));
}

////////////////////////////////////////////////////////////////////////////////
/// @brief creates a collection info object from another
////////////////////////////////////////////////////////////////////////////////

CollectionInfoCurrent::CollectionInfoCurrent (CollectionInfoCurrent const& other) :
    _jsons(other._jsons) {
  copyAllJsons();
}

////////////////////////////////////////////////////////////////////////////////
/// @brief assignment operator for collection info objects
////////////////////////////////////////////////////////////////////////////////

CollectionInfoCurrent& CollectionInfoCurrent::operator= (CollectionInfoCurrent const& other) {
  if (this == &other) {
    return *this;
  }
  freeAllJsons();
  _jsons = other._jsons;
  copyAllJsons();
  return *this;
}

////////////////////////////////////////////////////////////////////////////////
/// @brief destroys a collection info object
////////////////////////////////////////////////////////////////////////////////

CollectionInfoCurrent::~CollectionInfoCurrent () {
  freeAllJsons();
}

////////////////////////////////////////////////////////////////////////////////
/// @brief free all pointers to TRI_json_t in the map _jsons
////////////////////////////////////////////////////////////////////////////////

void CollectionInfoCurrent::freeAllJsons () {
  map<ShardID, TRI_json_t*>::iterator it;
  for (it = _jsons.begin(); it != _jsons.end(); ++it) {
    if (it->second != 0) {
      TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, it->second);
    }
  }
}

////////////////////////////////////////////////////////////////////////////////
/// @brief copy TRI_json_t behind the pointers in the map _jsons
////////////////////////////////////////////////////////////////////////////////

void CollectionInfoCurrent::copyAllJsons () {
  map<ShardID, TRI_json_t*>::iterator it;
  for (it = _jsons.begin(); it != _jsons.end(); ++it) {
    if (0 != it->second) {
      it->second = TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, it->second);
    }
  }
}

// -----------------------------------------------------------------------------
// --SECTION--                                                  private methods
// -----------------------------------------------------------------------------
@@ -171,6 +251,7 @@ uint64_t ClusterInfo::uniqid (uint64_t count) {

  if (_uniqid._currentValue >= _uniqid._upperValue) {
    uint64_t fetch = count;

    if (fetch < MinIdsPerBatch) {
      fetch = MinIdsPerBatch;
    }

@@ -181,13 +262,16 @@ uint64_t ClusterInfo::uniqid (uint64_t count) {
      return 0;
    }

-   _uniqid._currentValue = result._index;
+   _uniqid._currentValue = result._index + count;
    _uniqid._upperValue = _uniqid._currentValue + fetch - 1;

-   return _uniqid._currentValue++;
+   return result._index;
  }

- return ++_uniqid._currentValue;
+ uint64_t result = _uniqid._currentValue;
+ _uniqid._currentValue += count;
+
+ return result;
}

////////////////////////////////////////////////////////////////////////////////
@@ -198,10 +282,12 @@ void ClusterInfo::flush () {
  WRITE_LOCKER(_lock);

  _collectionsValid = false;
+ _collectionsCurrentValid = false;
  _serversValid = false;
  _DBServersValid = false;

  _collections.clear();
+ _collectionsCurrent.clear();
  _servers.clear();
  _shardIds.clear();
@@ -441,15 +527,20 @@ void ClusterInfo::loadCurrentDatabases () {
/// Usually one does not have to call this directly.
////////////////////////////////////////////////////////////////////////////////

-void ClusterInfo::loadPlannedCollections () {
+void ClusterInfo::loadPlannedCollections (bool acquireLock) {
  static const std::string prefix = "Plan/Collections";

  AgencyCommResult result;

  {
-   AgencyCommLocker locker("Plan", "READ");
+   if (acquireLock) {
+     AgencyCommLocker locker("Plan", "READ");

-   if (locker.successful()) {
+     if (locker.successful()) {
        result = _agency.getValues(prefix, true);
      }
    }
+   else {
+     result = _agency.getValues(prefix, true);
+   }
  }
@@ -459,7 +550,6 @@ void ClusterInfo::loadPlannedCollections () {

  WRITE_LOCKER(_lock);
  _collections.clear();
- _shardIds.clear();

  std::map<std::string, AgencyCommResultEntry>::iterator it = result._values.begin();
@@ -499,17 +589,6 @@ void ClusterInfo::loadPlannedCollections () {
-       (*it2).second.insert(std::make_pair<CollectionID, CollectionInfo>(collection, collectionData));
+       (*it2).second.insert(std::make_pair<CollectionID, CollectionInfo>(collectionData.name(), collectionData));
-
-       std::map<std::string, std::string> shards = collectionData.shardIds();
-       std::map<std::string, std::string>::const_iterator it3 = shards.begin();
-
-       while (it3 != shards.end()) {
-         const std::string shardId = (*it3).first;
-         const std::string serverId = (*it3).second;
-
-         _shardIds.insert(std::make_pair<ShardID, ServerID>(shardId, serverId));
-         ++it3;
-       }
-
      }
      _collectionsValid = true;
      return;
@@ -529,7 +608,7 @@ CollectionInfo ClusterInfo::getCollection (DatabaseID const& databaseID,
  int tries = 0;

  if (! _collectionsValid) {
-   loadPlannedCollections();
+   loadPlannedCollections(true);
    ++tries;
  }
@@ -550,7 +629,7 @@ CollectionInfo ClusterInfo::getCollection (DatabaseID const& databaseID,
    }

    // must load collections outside the lock
-   loadPlannedCollections();
+   loadPlannedCollections(true);
  }

  return CollectionInfo();
@@ -566,7 +645,7 @@ TRI_col_info_t ClusterInfo::getCollectionProperties (CollectionInfo const& colle
  info._type = collection.type();
  info._cid = collection.id();
  info._revision = 0; // TODO
- info._maximalSize = collection.maximalSize();
+ info._maximalSize = collection.journalSize();

  const std::string name = collection.name();
  memcpy(info._name, name.c_str(), name.size());
@@ -599,7 +678,7 @@ const std::vector<CollectionInfo> ClusterInfo::getCollections (DatabaseID const&
  std::vector<CollectionInfo> result;

  // always reload
- loadPlannedCollections();
+ loadPlannedCollections(true);

  READ_LOCKER(_lock);
  // look up database by id
@@ -625,6 +704,147 @@ const std::vector<CollectionInfo> ClusterInfo::getCollections (DatabaseID const&

  return result;
}

////////////////////////////////////////////////////////////////////////////////
/// @brief (re-)load the information about current collections from the agency
/// Usually one does not have to call this directly. Note that this is
/// necessarily complicated, since here we have to consider information
/// about all shards of a collection.
////////////////////////////////////////////////////////////////////////////////

void ClusterInfo::loadCurrentCollections (bool acquireLock) {
  static const std::string prefix = "Current/Collections";

  AgencyCommResult result;

  {
    if (acquireLock) {
      AgencyCommLocker locker("Current", "READ");

      if (locker.successful()) {
        result = _agency.getValues(prefix, true);
      }
    }
    else {
      result = _agency.getValues(prefix, true);
    }
  }

  if (result.successful()) {
    result.parse(prefix + "/", false);

    WRITE_LOCKER(_lock);
    _collectionsCurrent.clear();
    _shardIds.clear();

    std::map<std::string, AgencyCommResultEntry>::iterator it = result._values.begin();

    for (; it != result._values.end(); ++it) {
      const std::string key = (*it).first;

      // each entry consists of a database id, a collection id, and a shardID,
      // separated by '/'
      std::vector<std::string> parts = triagens::basics::StringUtils::split(key, '/');

      if (parts.size() != 3) {
        // invalid entry
        LOG_WARNING("found invalid collection key in current in agency: '%s'", key.c_str());
        continue;
      }

      const std::string database = parts[0];
      const std::string collection = parts[1];
      const std::string shardID = parts[2];

      // check whether we have created an entry for the database already
      AllCollectionsCurrent::iterator it2 = _collectionsCurrent.find(database);

      if (it2 == _collectionsCurrent.end()) {
        // not yet, so create an entry for the database
        DatabaseCollectionsCurrent empty;
        _collectionsCurrent.insert(std::make_pair<DatabaseID, DatabaseCollectionsCurrent>(database, empty));
        it2 = _collectionsCurrent.find(database);
      }

      TRI_json_t* json = (*it).second._json;
      // steal the json
      (*it).second._json = 0;

      // check whether we already have a CollectionInfoCurrent:
      DatabaseCollectionsCurrent::iterator it3;
      it3 = it2->second.find(collection);
      if (it3 == it2->second.end()) {
        const CollectionInfoCurrent collectionDataCurrent(shardID, json);
        it2->second.insert(make_pair<CollectionID, CollectionInfoCurrent>
                           (collection, collectionDataCurrent));
        it3 = it2->second.find(collection);
      }
      else {
        it3->second.add(shardID, json);
      }

      // Note that we have only inserted the CollectionInfoCurrent under
      // the collection ID and not under the name! It is not possible
      // to query the current collection info by name. This is because
      // the correct place to hold the current name is in the plan.
      // Thus: Look there and get the collection ID from there. Then
      // ask about the current collection info.

      // Now take note of this shard and its responsible server:
      std::string DBserver = triagens::basics::JsonHelper::getStringValue
                             (json, "DBserver", "");
      if (DBserver != "") {
        _shardIds.insert(make_pair<ShardID, ServerID>(shardID, DBserver));
      }
    }
    _collectionsCurrentValid = true;
    return;
  }

  LOG_TRACE("Error while loading %s", prefix.c_str());
  _collectionsCurrentValid = false;
}

////////////////////////////////////////////////////////////////////////////////
/// @brief ask about a collection in current. This returns information about
/// all shards in the collection.
/// If it is not found in the cache, the cache is reloaded once.
////////////////////////////////////////////////////////////////////////////////

CollectionInfoCurrent ClusterInfo::getCollectionCurrent
                          (DatabaseID const& databaseID,
                           CollectionID const& collectionID) {
  int tries = 0;

  if (! _collectionsCurrentValid) {
    loadCurrentCollections(true);
    ++tries;
  }

  while (++tries <= 2) {
    {
      READ_LOCKER(_lock);
      // look up database by id
      AllCollectionsCurrent::const_iterator it = _collectionsCurrent.find(databaseID);

      if (it != _collectionsCurrent.end()) {
        // look up collection by id
        DatabaseCollectionsCurrent::const_iterator it2 = (*it).second.find(collectionID);

        if (it2 != (*it).second.end()) {
          return (*it2).second;
        }
      }
    }

    // must load collections outside the lock
    loadCurrentCollections(true);
  }

  return CollectionInfoCurrent();
}


////////////////////////////////////////////////////////////////////////////////
/// @brief create database in coordinator, the return value is an ArangoDB
/// error code and the errorMsg is set accordingly. One possible error
@@ -654,6 +874,7 @@ int ClusterInfo::createDatabaseCoordinator (string const& name,
  if (res._statusCode == triagens::rest::HttpResponse::PRECONDITION_FAILED) {
    return setErrormsg(TRI_ERROR_CLUSTER_DATABASE_NAME_EXISTS, errorMsg);
  }

  return setErrormsg(TRI_ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE_IN_PLAN,
                     errorMsg);
}
@@ -747,12 +968,20 @@ int ClusterInfo::dropDatabaseCoordinator (string const& name, string& errorMsg,
  if (! ac.exists("Plan/Databases/" + name)) {
    return setErrormsg(TRI_ERROR_ARANGO_DATABASE_NOT_FOUND, errorMsg);
  }

  res = ac.removeValues("Plan/Collections/" + name, true);

  if (! res.successful()) {
    return setErrormsg(TRI_ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN,
                       errorMsg);
  }

  res = ac.removeValues("Plan/Databases/" + name, false);
  if (! res.successful()) {
    if (res._statusCode == rest::HttpResponse::NOT_FOUND) {
      return setErrormsg(TRI_ERROR_ARANGO_DATABASE_NOT_FOUND, errorMsg);
    }

    return setErrormsg(TRI_ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN,
                       errorMsg);
  }
@@ -810,9 +1039,28 @@ int ClusterInfo::createCollectionCoordinator (string const& databaseName,
  {
    AgencyCommLocker locker("Plan", "WRITE");

    if (! locker.successful()) {
      return setErrormsg(TRI_ERROR_CLUSTER_COULD_NOT_LOCK_PLAN, errorMsg);
    }

+   {
+     // check if a collection with the same name is already planned
+     loadPlannedCollections(false);
+
+     READ_LOCKER(_lock);
+     AllCollections::const_iterator it = _collections.find(databaseName);
+     if (it != _collections.end()) {
+       const std::string name = JsonHelper::getStringValue(json, "name", "");
+
+       DatabaseCollections::const_iterator it2 = (*it).second.find(name);
+
+       if (it2 != (*it).second.end()) {
+         // collection already exists!
+         return TRI_ERROR_ARANGO_DUPLICATE_NAME;
+       }
+     }
+   }
+
    if (! ac.exists("Plan/Databases/" + databaseName)) {
      return setErrormsg(TRI_ERROR_ARANGO_DATABASE_NOT_FOUND, errorMsg);
@@ -1139,7 +1387,7 @@ ServerID ClusterInfo::getResponsibleServer (ShardID const& shardID) {
  int tries = 0;

  if (! _collectionsValid) {
-   loadPlannedCollections();
+   loadPlannedCollections(true);
    tries++;
  }

@@ -1154,7 +1402,7 @@ ServerID ClusterInfo::getResponsibleServer (ShardID const& shardID) {
    }

    // must load collections outside the lock
-   loadPlannedCollections();
+   loadCurrentCollections(true);
  }

  return ServerID("");
@@ -183,8 +183,8 @@ namespace triagens {
/// @brief returns the maximal journal size
////////////////////////////////////////////////////////////////////////////////

-       TRI_voc_size_t maximalSize () const {
-         return triagens::basics::JsonHelper::getNumericValue<TRI_voc_size_t>(_json, "maximalSize", 0);
+       TRI_voc_size_t journalSize () const {
+         return triagens::basics::JsonHelper::getNumericValue<TRI_voc_size_t>(_json, "journalSize", 0);
        }

////////////////////////////////////////////////////////////////////////////////
@@ -219,6 +219,415 @@ namespace triagens {
    };


// -----------------------------------------------------------------------------
// --SECTION--                                     class CollectionInfoCurrent
// -----------------------------------------------------------------------------

    class CollectionInfoCurrent {
      friend class ClusterInfo;

// -----------------------------------------------------------------------------
// --SECTION--                                      constructors / destructors
// -----------------------------------------------------------------------------

      public:

        CollectionInfoCurrent ();

        CollectionInfoCurrent (ShardID const&, struct TRI_json_s*);

        CollectionInfoCurrent (CollectionInfoCurrent const&);

        CollectionInfoCurrent& operator= (CollectionInfoCurrent const&);

        ~CollectionInfoCurrent ();

      private:

        void freeAllJsons ();

        void copyAllJsons ();

// -----------------------------------------------------------------------------
// --SECTION--                                                   public methods
// -----------------------------------------------------------------------------

      public:

////////////////////////////////////////////////////////////////////////////////
/// @brief add a new shardID and JSON pair, returns true if OK and false
/// if the shardID already exists. In the latter case nothing happens.
/// The CollectionInfoCurrent object takes ownership of the TRI_json_t*.
////////////////////////////////////////////////////////////////////////////////

        bool add (ShardID const& shardID, TRI_json_t* json) {
          map<ShardID, TRI_json_t*>::iterator it = _jsons.find(shardID);
          if (it == _jsons.end()) {
            _jsons.insert(make_pair<ShardID, TRI_json_t*>(shardID, json));
            return true;
          }
          return false;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the collection id
////////////////////////////////////////////////////////////////////////////////

        TRI_voc_cid_t id () const {
          // The id will always be the same in every shard
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.begin();
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            return triagens::basics::JsonHelper::stringUInt64(_json, "id");
          }
          else {
            return 0;
          }
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the collection type
////////////////////////////////////////////////////////////////////////////////

        TRI_col_type_e type () const {
          // The type will always be the same in every shard
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.begin();
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            return triagens::basics::JsonHelper::getNumericValue<TRI_col_type_e>
                   (_json, "type", TRI_COL_TYPE_UNKNOWN);
          }
          else {
            return TRI_COL_TYPE_UNKNOWN;
          }
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the collection status for one shardID
////////////////////////////////////////////////////////////////////////////////

        TRI_vocbase_col_status_e status (ShardID const& shardID) const {
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.find(shardID);
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            return triagens::basics::JsonHelper::getNumericValue
                   <TRI_vocbase_col_status_e>
                   (_json, "status", TRI_VOC_COL_STATUS_CORRUPTED);
          }
          return TRI_VOC_COL_STATUS_CORRUPTED;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the collection status for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, TRI_vocbase_col_status_e> status () const {
          map<ShardID, TRI_vocbase_col_status_e> m;
          map<ShardID, TRI_json_t*>::const_iterator it;
          TRI_vocbase_col_status_e s;
          for (it = _jsons.begin(); it != _jsons.end(); ++it) {
            TRI_json_t* _json = it->second;
            s = triagens::basics::JsonHelper::getNumericValue
                <TRI_vocbase_col_status_e>
                (_json, "status", TRI_VOC_COL_STATUS_CORRUPTED);
            m.insert(make_pair<ShardID, TRI_vocbase_col_status_e>(it->first, s));
          }
          return m;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief local helper to return boolean flags
////////////////////////////////////////////////////////////////////////////////

      private:

        bool getFlag (char const* name, ShardID const& shardID) const {
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.find(shardID);
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            return triagens::basics::JsonHelper::getBooleanValue(_json,
                                                                 name, false);
          }
          return false;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief local helper to return a map to boolean
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, bool> getFlag (char const* name) const {
          map<ShardID, bool> m;
          map<ShardID, TRI_json_t*>::const_iterator it;
          bool b;
          for (it = _jsons.begin(); it != _jsons.end(); ++it) {
            TRI_json_t* _json = it->second;
            b = triagens::basics::JsonHelper::getBooleanValue(_json,
                                                              name, false);
            m.insert(make_pair<ShardID, bool>(it->first, b));
          }
          return m;
        }

      public:

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the deleted flag for a shardID
////////////////////////////////////////////////////////////////////////////////

        bool deleted (ShardID const& shardID) const {
          return getFlag("deleted", shardID);
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the deleted flag for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, bool> deleted () const {
          return getFlag("deleted");
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the doCompact flag for a shardID
////////////////////////////////////////////////////////////////////////////////

        bool doCompact (ShardID const& shardID) const {
          return getFlag("doCompact", shardID);
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the doCompact flag for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, bool> doCompact () const {
          return getFlag("doCompact");
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the isSystem flag for a shardID
////////////////////////////////////////////////////////////////////////////////

        bool isSystem (ShardID const& shardID) const {
          return getFlag("isSystem", shardID);
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the isSystem flag for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, bool> isSystem () const {
          return getFlag("isSystem");
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the isVolatile flag for a shardID
////////////////////////////////////////////////////////////////////////////////

        bool isVolatile (ShardID const& shardID) const {
          return getFlag("isVolatile", shardID);
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the isVolatile flag for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, bool> isVolatile () const {
          return getFlag("isVolatile");
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the error flag for a shardID
////////////////////////////////////////////////////////////////////////////////

        bool error (ShardID const& shardID) const {
          return getFlag("error", shardID);
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the error flag for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, bool> error () const {
          return getFlag("error");
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the waitForSync flag for a shardID
////////////////////////////////////////////////////////////////////////////////

        bool waitForSync (ShardID const& shardID) const {
          return getFlag("waitForSync", shardID);
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the waitForSync flag for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, bool> waitForSync () const {
          return getFlag("waitForSync");
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns a copy of the key options
/// the caller is responsible for freeing it
////////////////////////////////////////////////////////////////////////////////

        TRI_json_t* keyOptions () const {
          // The keyOptions will always be the same in every shard
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.begin();
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            TRI_json_t const* keyOptions
                = triagens::basics::JsonHelper::getArrayElement
                                         (_json, "keyOptions");

            if (keyOptions != 0) {
              return TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, keyOptions);
            }

            return 0;
          }
          else {
            return 0;
          }
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the maximal journal size for one shardID
////////////////////////////////////////////////////////////////////////////////

        TRI_voc_size_t journalSize (ShardID const& shardID) const {
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.find(shardID);
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            return triagens::basics::JsonHelper::getNumericValue
                   <TRI_voc_size_t> (_json, "journalSize", 0);
          }
          return 0;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the maximal journal size for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, TRI_voc_size_t> journalSize () const {
          map<ShardID, TRI_voc_size_t> m;
          map<ShardID, TRI_json_t*>::const_iterator it;
          TRI_voc_size_t s;
          for (it = _jsons.begin(); it != _jsons.end(); ++it) {
            TRI_json_t* _json = it->second;
            s = triagens::basics::JsonHelper::getNumericValue
                <TRI_voc_size_t> (_json, "journalSize", 0);
            m.insert(make_pair<ShardID, TRI_voc_size_t>(it->first, s));
          }
          return m;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the errorNum for one shardID
////////////////////////////////////////////////////////////////////////////////

        int errorNum (ShardID const& shardID) const {
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.find(shardID);
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            return triagens::basics::JsonHelper::getNumericValue
                   <int> (_json, "errorNum", 0);
          }
          return 0;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the errorNum for all shardIDs
////////////////////////////////////////////////////////////////////////////////

        map<ShardID, int> errorNum () const {
          map<ShardID, int> m;
          map<ShardID, TRI_json_t*>::const_iterator it;
          int s;
          for (it = _jsons.begin(); it != _jsons.end(); ++it) {
            TRI_json_t* _json = it->second;
            s = triagens::basics::JsonHelper::getNumericValue
                <int> (_json, "errorNum", 0);
            m.insert(make_pair<ShardID, int>(it->first, s));
          }
          return m;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the shard keys
////////////////////////////////////////////////////////////////////////////////

        vector<string> shardKeys () const {
          // The shardKeys will always be the same in every shard
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.begin();
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            TRI_json_t* const node
                = triagens::basics::JsonHelper::getArrayElement
                                         (_json, "shardKeys");
            return triagens::basics::JsonHelper::stringList(node);
          }
          else {
            vector<string> result;
            return result;
          }
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the shard ids that are currently in the collection
////////////////////////////////////////////////////////////////////////////////

        vector<ShardID> shardIDs () const {
          vector<ShardID> v;
          map<ShardID, TRI_json_t*>::const_iterator it;
          for (it = _jsons.begin(); it != _jsons.end(); ++it) {
            v.push_back(it->first);
          }
          return v;
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the responsible server for one shardID
////////////////////////////////////////////////////////////////////////////////

        string responsibleServer (ShardID const& shardID) const {
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.find(shardID);
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            return triagens::basics::JsonHelper::getStringValue
                   (_json, "DBserver", "");
          }
          return string("");
        }

////////////////////////////////////////////////////////////////////////////////
/// @brief returns the errorMessage entry for one shardID
////////////////////////////////////////////////////////////////////////////////

        string errorMessage (ShardID const& shardID) const {
          map<ShardID, TRI_json_t*>::const_iterator it = _jsons.find(shardID);
          if (it != _jsons.end()) {
            TRI_json_t* _json = it->second;
            return triagens::basics::JsonHelper::getStringValue
                   (_json, "errorMessage", "");
          }
          return string("");
        }

// -----------------------------------------------------------------------------
// --SECTION--                                                  private methods
// -----------------------------------------------------------------------------

// -----------------------------------------------------------------------------
// --SECTION--                                                private variables
// -----------------------------------------------------------------------------

      private:

        map<ShardID, TRI_json_t*> _jsons;
    };


// -----------------------------------------------------------------------------
// --SECTION--                                                class ClusterInfo
// -----------------------------------------------------------------------------
@@ -230,8 +639,14 @@ namespace triagens {
    class ClusterInfo {
      private:

-       typedef std::map<CollectionID, CollectionInfo> DatabaseCollections;
-       typedef std::map<DatabaseID, DatabaseCollections> AllCollections;
+       typedef std::map<CollectionID, CollectionInfo>
+               DatabaseCollections;
+       typedef std::map<DatabaseID, DatabaseCollections>
+               AllCollections;
+       typedef std::map<CollectionID, CollectionInfoCurrent>
+               DatabaseCollectionsCurrent;
+       typedef std::map<DatabaseID, DatabaseCollectionsCurrent>
+               AllCollectionsCurrent;

// -----------------------------------------------------------------------------
// --SECTION--                                      constructors and destructors
@@ -315,7 +730,7 @@ namespace triagens {
/// Usually one does not have to call this directly.
////////////////////////////////////////////////////////////////////////////////

-       void loadPlannedCollections ();
+       void loadPlannedCollections (bool = true);

////////////////////////////////////////////////////////////////////////////////
/// @brief flushes the list of planned databases
@@ -370,6 +785,24 @@ namespace triagens {

        const std::vector<CollectionInfo> getCollections (DatabaseID const&);

////////////////////////////////////////////////////////////////////////////////
/// @brief (re-)load the information about current collections from the agency
/// Usually one does not have to call this directly. Note that this is
/// necessarily complicated, since here we have to consider information
/// about all shards of a collection.
////////////////////////////////////////////////////////////////////////////////

        void loadCurrentCollections (bool = true);

////////////////////////////////////////////////////////////////////////////////
/// @brief ask about a collection in current. This returns information about
/// all shards in the collection.
/// If it is not found in the cache, the cache is reloaded once.
////////////////////////////////////////////////////////////////////////////////

        CollectionInfoCurrent getCollectionCurrent (DatabaseID const&,
                                                    CollectionID const&);

////////////////////////////////////////////////////////////////////////////////
/// @brief create database in coordinator
////////////////////////////////////////////////////////////////////////////////
@@ -497,16 +930,25 @@ namespace triagens {
                     _uniqid;

        // Cached data from the agency, we reload whenever necessary:
-       std::map<DatabaseID, struct TRI_json_s*> _plannedDatabases; // from Plan/Databases
-       std::map<DatabaseID, std::map<ServerID, struct TRI_json_s*> > _currentDatabases; // from Current/Databases
+       std::map<DatabaseID, struct TRI_json_s*> _plannedDatabases;
+                            // from Plan/Databases
+       std::map<DatabaseID, std::map<ServerID, struct TRI_json_s*> >
+                            _currentDatabases; // from Current/Databases

-       AllCollections _collections; // from Current/Collections/
-       bool _collectionsValid;
-       std::map<ServerID, std::string> _servers; // from Current/ServersRegistered
-       bool _serversValid;
-       std::map<ServerID, ServerID> _DBServers; // from Current/DBServers
-       bool _DBServersValid;
-       std::map<ShardID, ServerID> _shardIds; // from Current/ShardLocation
+       AllCollections _collections;
+                            // from Plan/Collections/
+       bool _collectionsValid;
+       AllCollectionsCurrent _collectionsCurrent;
+                            // from Current/Collections/
+       bool _collectionsCurrentValid;
+       std::map<ServerID, std::string> _servers;
+                            // from Current/ServersRegistered
+       bool _serversValid;
+       std::map<ServerID, ServerID> _DBServers;
+                            // from Current/DBServers
+       bool _DBServersValid;
+       std::map<ShardID, ServerID> _shardIds;
+                            // from Plan/Collections/ ???

// -----------------------------------------------------------------------------
// --SECTION--                                          private static variables
@@ -203,9 +203,14 @@ namespace triagens {

      TRI_ExecuteJavaScriptString(v8::Context::GetCurrent(), v8::String::New(content), v8::String::New(file), false);
    }

+   // get the pointer to the least used vocbase
+   TRI_v8_global_t* v8g = (TRI_v8_global_t*) context->_isolate->GetData();
+   void* orig = v8g->_vocbase;
+
    _applicationV8->exitContext(context);
-   TRI_ReleaseDatabaseServer(_server, vocbase);
+
+   TRI_ReleaseDatabaseServer(_server, (TRI_vocbase_t*) orig);

    return true;
  }
@@ -194,7 +194,7 @@ void HeartbeatThread::run () {
      // nothing to do here
    }
    else {
-     const double remain = TRI_microtime() - start - interval;
+     const double remain = interval - (TRI_microtime() - start);

      if (remain > 0.0) {
        usleep((useconds_t) (remain * 1000.0 * 1000.0));
@@ -745,7 +745,7 @@ static v8::Handle<v8::Value> JS_FlushClusterInfo (v8::Arguments const& argv) {
}

////////////////////////////////////////////////////////////////////////////////
-/// @brief get the responsible server
+/// @brief get the info about a collection in Plan
////////////////////////////////////////////////////////////////////////////////

static v8::Handle<v8::Value> JS_GetCollectionInfoClusterInfo (v8::Arguments const& argv) {
@@ -766,6 +766,17 @@ static v8::Handle<v8::Value> JS_GetCollectionInfoClusterInfo (v8::Arguments cons
  result->Set(v8::String::New("type"), v8::Number::New((int) ci.type()));
  result->Set(v8::String::New("status"), v8::Number::New((int) ci.status()));

  const string statusString = ci.statusString();
  result->Set(v8::String::New("statusString"),
              v8::String::New(statusString.c_str(), statusString.size()));

  result->Set(v8::String::New("deleted"), v8::Boolean::New(ci.deleted()));
  result->Set(v8::String::New("doCompact"), v8::Boolean::New(ci.doCompact()));
  result->Set(v8::String::New("isSystem"), v8::Boolean::New(ci.isSystem()));
  result->Set(v8::String::New("isVolatile"), v8::Boolean::New(ci.isVolatile()));
  result->Set(v8::String::New("waitForSync"), v8::Boolean::New(ci.waitForSync()));
  result->Set(v8::String::New("journalSize"), v8::Number::New(ci.journalSize()));

  const std::vector<std::string>& sks = ci.shardKeys();
  v8::Handle<v8::Array> shardKeys = v8::Array::New(sks.size());
  for (uint32_t i = 0, n = sks.size(); i < n; ++i) {
@@ -789,6 +800,71 @@ static v8::Handle<v8::Value> JS_GetCollectionInfoClusterInfo (v8::Arguments cons
  return scope.Close(result);
}

////////////////////////////////////////////////////////////////////////////////
/// @brief get the info about a collection in Current
////////////////////////////////////////////////////////////////////////////////

static v8::Handle<v8::Value> JS_GetCollectionInfoCurrentClusterInfo (v8::Arguments const& argv) {
  v8::HandleScope scope;

  if (argv.Length() != 3) {
    TRI_V8_EXCEPTION_USAGE(scope, "getCollectionInfoCurrent(<database-id>, <collection-id>, <shardID>)");
  }

  ShardID shardID = TRI_ObjectToString(argv[2]);

  CollectionInfo ci = ClusterInfo::instance()->getCollection(
                          TRI_ObjectToString(argv[0]),
                          TRI_ObjectToString(argv[1]));

  v8::Handle<v8::Object> result = v8::Object::New();
  // First some stuff from Plan for which Current does not make sense:
  const std::string cid = triagens::basics::StringUtils::itoa(ci.id());
  const std::string& name = ci.name();
  result->Set(v8::String::New("id"), v8::String::New(cid.c_str(), cid.size()));
  result->Set(v8::String::New("name"), v8::String::New(name.c_str(), name.size()));

  CollectionInfoCurrent cic = ClusterInfo::instance()->getCollectionCurrent(
                                  TRI_ObjectToString(argv[0]), cid);

  result->Set(v8::String::New("type"), v8::Number::New((int) ci.type()));
  // Now the Current information, if we actually got it:
  TRI_vocbase_col_status_e s = cic.status(shardID);
  result->Set(v8::String::New("status"), v8::Number::New((int) cic.status(shardID)));
  if (s == TRI_VOC_COL_STATUS_CORRUPTED) {
    return scope.Close(result);
  }
  const string statusString = TRI_GetStatusStringCollectionVocBase(s);
  result->Set(v8::String::New("statusString"),
              v8::String::New(statusString.c_str(), statusString.size()));

  result->Set(v8::String::New("deleted"), v8::Boolean::New(cic.deleted(shardID)));
  result->Set(v8::String::New("doCompact"), v8::Boolean::New(cic.doCompact(shardID)));
  result->Set(v8::String::New("isSystem"), v8::Boolean::New(cic.isSystem(shardID)));
  result->Set(v8::String::New("isVolatile"), v8::Boolean::New(cic.isVolatile(shardID)));
  result->Set(v8::String::New("waitForSync"), v8::Boolean::New(cic.waitForSync(shardID)));
  result->Set(v8::String::New("journalSize"), v8::Number::New(cic.journalSize(shardID)));
  const std::string serverID = cic.responsibleServer(shardID);
  result->Set(v8::String::New("responsibleServer"),
              v8::String::New(serverID.c_str(), serverID.size()));

  // TODO: fill "indexes"
  v8::Handle<v8::Array> indexes = v8::Array::New();
  result->Set(v8::String::New("indexes"), indexes);

  // Finally, report any possible error:
  bool error = cic.error(shardID);
  result->Set(v8::String::New("error"), v8::Boolean::New(error));
  if (error) {
    result->Set(v8::String::New("errorNum"), v8::Number::New(cic.errorNum(shardID)));
    const string errorMessage = cic.errorMessage(shardID);
    result->Set(v8::String::New("errorMessage"),
                v8::String::New(errorMessage.c_str(), errorMessage.size()));
  }

  return scope.Close(result);
}

////////////////////////////////////////////////////////////////////////////////
/// @brief get the responsible server
////////////////////////////////////////////////////////////////////////////////
@@ -1595,6 +1671,7 @@ void TRI_InitV8Cluster (v8::Handle<v8::Context> context) {
  TRI_AddMethodVocbase(rt, "listDatabases", JS_ListDatabases);
  TRI_AddMethodVocbase(rt, "flush", JS_FlushClusterInfo, true);
  TRI_AddMethodVocbase(rt, "getCollectionInfo", JS_GetCollectionInfoClusterInfo);
+ TRI_AddMethodVocbase(rt, "getCollectionInfoCurrent", JS_GetCollectionInfoCurrentClusterInfo);
  TRI_AddMethodVocbase(rt, "getResponsibleServer", JS_GetResponsibleServerClusterInfo);
  TRI_AddMethodVocbase(rt, "getServerEndpoint", JS_GetServerEndpointClusterInfo);
  TRI_AddMethodVocbase(rt, "getDBServers", JS_GetDBServers);
@@ -244,14 +244,11 @@ int RestImportHandler::handleSingleDocument (ImportTransactionType& trx,
/// @RESTQUERYPARAM{type,string,required}
/// Determines how the body of the request will be interpreted. `type` can have
/// the following values:
///
/// - `documents`: when this type is used, each line in the request body is
///   expected to be an individual JSON-encoded document. Multiple JSON documents
///   in the request body need to be separated by newlines.
///
/// - `list`: when this type is used, the request body must contain a single
///   JSON-encoded list of individual documents to import.
///
/// - `auto`: if set, this will automatically determine the body type (either
///   `documents` or `list`).
///
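/// As a sketch of the difference (payloads invented for illustration), a
/// `documents` body would look like
///
///     { "name" : "foo" }
///     { "name" : "bar" }
///
/// while the equivalent `list` body would be
///
///     [ { "name" : "foo" }, { "name" : "bar" } ]
///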
|
@ -736,8 +733,9 @@ bool RestImportHandler::createFromJson (const string& type) {
|
|||
///
|
||||
/// @RESTBODYPARAM{documents,string,required}
|
||||
/// The body must consist of JSON-encoded lists of attribute values, with one
|
||||
/// line per document. The first line of the request must be a JSON-encoded
/// list of attribute names.
/// line per document. The first row of the request must be a JSON-encoded
/// list of attribute names. These attribute names are used for the data in the
/// subsequent rows.
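///
/// For example (a sketch with made-up attribute names), the first line names
/// the attributes and each subsequent line holds the values of one document:
///
///     [ "name", "age" ]
///     [ "foo", 39 ]
///     [ "bar", 42 ]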
|
||||
///
|
||||
/// @RESTQUERYPARAMETERS
|
||||
///
@ -2082,11 +2082,6 @@ static v8::Handle<v8::Value> EnsureGeoIndexVocbaseCol (v8::Arguments const& argv
|
|||
|
||||
TRI_primary_collection_t* primary = collection->_collection;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(collection->_type)) {
|
||||
ReleaseCollection(collection);
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "unknown collection type");
|
||||
}
|
||||
|
||||
TRI_document_collection_t* document = (TRI_document_collection_t*) primary;
|
||||
TRI_index_t* idx = 0;
|
||||
bool created;
|
||||
|
@ -4440,11 +4435,6 @@ static v8::Handle<v8::Value> JS_UpgradeVocbaseCol (v8::Arguments const& argv) {
|
|||
|
||||
TRI_primary_collection_t* primary = collection->_collection;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(collection->_type)) {
|
||||
ReleaseCollection(collection);
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "unknown collection type");
|
||||
}
|
||||
|
||||
TRI_collection_t* col = &primary->base;
|
||||
|
||||
#ifdef TRI_ENABLE_LOGGER
|
||||
|
@ -5156,11 +5146,6 @@ static v8::Handle<v8::Value> JS_DropIndexVocbaseCol (v8::Arguments const& argv)
|
|||
|
||||
TRI_primary_collection_t* primary = collection->_collection;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(collection->_type)) {
|
||||
ReleaseCollection(collection);
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "unknown collection type");
|
||||
}
|
||||
|
||||
TRI_document_collection_t* document = (TRI_document_collection_t*) primary;
|
||||
|
||||
if (argv.Length() != 1) {
|
||||
|
@ -5242,11 +5227,6 @@ static v8::Handle<v8::Value> JS_EnsureCapConstraintVocbaseCol (v8::Arguments con
|
|||
|
||||
TRI_primary_collection_t* primary = collection->_collection;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(collection->_type)) {
|
||||
ReleaseCollection(collection);
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "unknown collection type");
|
||||
}
|
||||
|
||||
TRI_document_collection_t* document = (TRI_document_collection_t*) primary;
|
||||
TRI_index_t* idx = 0;
|
||||
bool created;
|
||||
|
@ -5343,11 +5323,6 @@ static v8::Handle<v8::Value> EnsureBitarray (v8::Arguments const& argv, bool sup
|
|||
|
||||
TRI_primary_collection_t* primary = collection->_collection;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(collection->_type)) {
|
||||
ReleaseCollection(collection);
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "unknown collection type");
|
||||
}
|
||||
|
||||
TRI_document_collection_t* document = (TRI_document_collection_t*) primary;
|
||||
|
||||
// .............................................................................
|
||||
|
@ -6307,11 +6282,6 @@ static v8::Handle<v8::Value> JS_PropertiesVocbaseCol (v8::Arguments const& argv)
|
|||
TRI_primary_collection_t* primary = collection->_collection;
|
||||
TRI_collection_t* base = &primary->base;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(base->_info._type)) {
|
||||
ReleaseCollection(collection);
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "unknown collection type");
|
||||
}
|
||||
|
||||
TRI_document_collection_t* document = (TRI_document_collection_t*) primary;
|
||||
|
||||
// check if we want to change some parameters
|
||||
|
@ -6391,24 +6361,22 @@ static v8::Handle<v8::Value> JS_PropertiesVocbaseCol (v8::Arguments const& argv)
|
|||
// return the current parameter set
|
||||
v8::Handle<v8::Object> result = v8::Object::New();
|
||||
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(base->_info._type)) {
|
||||
result->Set(v8g->DoCompactKey, base->_info._doCompact ? v8::True() : v8::False());
|
||||
result->Set(v8g->IsSystemKey, base->_info._isSystem ? v8::True() : v8::False());
|
||||
result->Set(v8g->IsVolatileKey, base->_info._isVolatile ? v8::True() : v8::False());
|
||||
result->Set(v8g->JournalSizeKey, v8::Number::New(base->_info._maximalSize));
|
||||
result->Set(v8g->DoCompactKey, base->_info._doCompact ? v8::True() : v8::False());
|
||||
result->Set(v8g->IsSystemKey, base->_info._isSystem ? v8::True() : v8::False());
|
||||
result->Set(v8g->IsVolatileKey, base->_info._isVolatile ? v8::True() : v8::False());
|
||||
result->Set(v8g->JournalSizeKey, v8::Number::New(base->_info._maximalSize));
|
||||
|
||||
TRI_json_t* keyOptions = primary->_keyGenerator->toJson(primary->_keyGenerator);
|
||||
TRI_json_t* keyOptions = primary->_keyGenerator->toJson(primary->_keyGenerator);
|
||||
|
||||
if (keyOptions != 0) {
|
||||
result->Set(v8g->KeyOptionsKey, TRI_ObjectJson(keyOptions)->ToObject());
|
||||
if (keyOptions != 0) {
|
||||
result->Set(v8g->KeyOptionsKey, TRI_ObjectJson(keyOptions)->ToObject());
|
||||
|
||||
TRI_FreeJson(TRI_CORE_MEM_ZONE, keyOptions);
|
||||
}
|
||||
else {
|
||||
result->Set(v8g->KeyOptionsKey, v8::Array::New());
|
||||
}
|
||||
result->Set(v8g->WaitForSyncKey, base->_info._waitForSync ? v8::True() : v8::False());
|
||||
TRI_FreeJson(TRI_CORE_MEM_ZONE, keyOptions);
|
||||
}
|
||||
else {
|
||||
result->Set(v8g->KeyOptionsKey, v8::Array::New());
|
||||
}
|
||||
result->Set(v8g->WaitForSyncKey, base->_info._waitForSync ? v8::True() : v8::False());
|
||||
|
||||
ReleaseCollection(collection);
|
||||
return scope.Close(result);
|
||||
|
@ -6675,12 +6643,6 @@ static v8::Handle<v8::Value> JS_RotateVocbaseCol (v8::Arguments const& argv) {
|
|||
}
|
||||
|
||||
TRI_primary_collection_t* primary = collection->_collection;
|
||||
TRI_collection_t* base = &primary->base;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(base->_info._type)) {
|
||||
ReleaseCollection(collection);
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "unknown collection type");
|
||||
}
|
||||
|
||||
TRI_document_collection_t* document = (TRI_document_collection_t*) primary;
|
||||
|
||||
|
@ -7324,7 +7286,7 @@ static v8::Handle<v8::Value> MapGetVocBase (v8::Local<v8::String> name,
|
|||
return scope.Close(v8::Handle<v8::Value>());
|
||||
}
|
||||
|
||||
if (*key == '_' || // hide system collections
|
||||
if (*key == '_' ||
|
||||
strcmp(key, "hasOwnProperty") == 0 || // this prevents calling the property getter again (i.e. recursion!)
|
||||
strcmp(key, "toString") == 0 ||
|
||||
strcmp(key, "toJSON") == 0) {
|
||||
|
@ -7338,9 +7300,8 @@ static v8::Handle<v8::Value> MapGetVocBase (v8::Local<v8::String> name,
|
|||
cacheKey.push_back('*');
|
||||
|
||||
v8::Local<v8::String> cacheName = v8::String::New(cacheKey.c_str(), cacheKey.size());
|
||||
|
||||
v8::Handle<v8::Object> holder = info.Holder()->ToObject();
|
||||
|
||||
|
||||
if (holder->HasRealNamedProperty(cacheName)) {
|
||||
v8::Handle<v8::Object> value = holder->GetRealNamedProperty(cacheName)->ToObject();
|
||||
|
||||
|
@ -7397,10 +7358,13 @@ static v8::Handle<v8::Value> MapGetVocBase (v8::Local<v8::String> name,
|
|||
#endif
|
||||
|
||||
if (collection == 0) {
|
||||
return scope.Close(v8::Undefined());
|
||||
}
|
||||
if (*key == '_') {
|
||||
// we need to do this here...
|
||||
// otherwise we'd hide all non-collection attributes such as
|
||||
// db._drop
|
||||
return scope.Close(v8::Handle<v8::Value>());
|
||||
}
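
// note: returning an empty handle (rather than undefined) makes V8 fall
// through to other property lookups, so prototype attributes such as
// db._drop keep working while names like db._users can still resolve to
// collections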
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(collection->_type)) {
|
||||
return scope.Close(v8::Undefined());
|
||||
}
|
||||
|
||||
|
@ -8825,14 +8789,15 @@ static v8::Handle<v8::Value> MapGetNamedShapedJson (v8::Local<v8::String> name,
|
|||
v8::Handle<v8::Object> self = info.Holder();
|
||||
|
||||
if (self->InternalFieldCount() <= SLOT_BARRIER) {
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "corrupted shaped json");
|
||||
// we better not throw here... otherwise this will cause a segfault
|
||||
return scope.Close(v8::Handle<v8::Value>());
|
||||
}
|
||||
|
||||
// get shaped json
|
||||
void* marker = TRI_UnwrapClass<void*>(self, WRP_SHAPED_JSON_TYPE);
|
||||
|
||||
if (marker == 0) {
|
||||
TRI_V8_EXCEPTION_INTERNAL(scope, "corrupted shaped json");
|
||||
return scope.Close(v8::Handle<v8::Value>());
|
||||
}
|
||||
|
||||
// convert the JavaScript string to a string
@ -357,11 +357,6 @@ bool TRI_LoadAuthInfo (TRI_vocbase_t* vocbase) {
|
|||
LOG_FATAL_AND_EXIT("collection '_users' cannot be loaded");
|
||||
}
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(primary->base._info._type)) {
|
||||
TRI_ReleaseCollectionVocBase(vocbase, collection);
|
||||
LOG_FATAL_AND_EXIT("collection '_users' has an unknown collection type");
|
||||
}
|
||||
|
||||
TRI_WriteLockReadWriteLock(&vocbase->_authInfoLock);
|
||||
|
||||
// .............................................................................
@ -249,7 +249,6 @@ void TRI_CleanupVocBase (void* data) {
|
|||
// check if we can get the compactor lock exclusively
|
||||
if (TRI_CheckAndLockCompactorVocBase(vocbase)) {
|
||||
size_t i, n;
|
||||
TRI_col_type_e type;
|
||||
|
||||
// copy all collections
|
||||
TRI_READ_LOCK_COLLECTIONS_VOCBASE(vocbase);
|
||||
|
@ -261,6 +260,7 @@ void TRI_CleanupVocBase (void* data) {
|
|||
for (i = 0; i < n; ++i) {
|
||||
TRI_vocbase_col_t* collection;
|
||||
TRI_primary_collection_t* primary;
|
||||
TRI_document_collection_t* document;
|
||||
|
||||
collection = (TRI_vocbase_col_t*) collections._buffer[i];
|
||||
|
||||
|
@ -273,24 +273,20 @@ void TRI_CleanupVocBase (void* data) {
|
|||
continue;
|
||||
}
|
||||
|
||||
type = primary->base._info._type;
|
||||
|
||||
TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
|
||||
// we're the only ones that can unload the collection, so using
|
||||
// the collection pointer outside the lock is ok
|
||||
|
||||
// maybe cleanup indexes, unload the collection or some datafiles
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(type)) {
|
||||
TRI_document_collection_t* document = (TRI_document_collection_t*) primary;
|
||||
document = (TRI_document_collection_t*) primary;
|
||||
|
||||
// clean indexes?
|
||||
if (iterations % (uint64_t) CLEANUP_INDEX_ITERATIONS == 0) {
|
||||
document->cleanupIndexes(document);
|
||||
}
|
||||
|
||||
CleanupDocumentCollection(document);
|
||||
// clean indexes?
|
||||
if (iterations % (uint64_t) CLEANUP_INDEX_ITERATIONS == 0) {
|
||||
document->cleanupIndexes(document);
|
||||
}
|
||||
|
||||
CleanupDocumentCollection(document);
|
||||
}
|
||||
|
||||
TRI_UnlockCompactorVocBase(vocbase);
@ -1077,43 +1077,34 @@ char* TRI_GetDirectoryCollection (char const* path,
|
|||
TRI_col_type_e type,
|
||||
TRI_voc_cid_t cid) {
|
||||
char* filename;
|
||||
char* tmp1;
|
||||
char* tmp2;
|
||||
|
||||
assert(path);
|
||||
assert(name);
|
||||
assert(path != NULL);
|
||||
assert(name != NULL);
|
||||
|
||||
// other collections use the collection identifier
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(type)) {
|
||||
char* tmp1;
|
||||
char* tmp2;
|
||||
tmp1 = TRI_StringUInt64(cid);
|
||||
|
||||
tmp1 = TRI_StringUInt64(cid);
|
||||
if (tmp1 == NULL) {
|
||||
TRI_set_errno(TRI_ERROR_OUT_OF_MEMORY);
|
||||
|
||||
if (tmp1 == NULL) {
|
||||
TRI_set_errno(TRI_ERROR_OUT_OF_MEMORY);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
tmp2 = TRI_Concatenate2String("collection-", tmp1);
|
||||
|
||||
if (tmp2 == NULL) {
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, tmp1);
|
||||
|
||||
TRI_set_errno(TRI_ERROR_OUT_OF_MEMORY);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
filename = TRI_Concatenate2File(path, tmp2);
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, tmp1);
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, tmp2);
|
||||
}
|
||||
// oops, unknown collection type
|
||||
else {
|
||||
TRI_set_errno(TRI_ERROR_ARANGO_UNKNOWN_COLLECTION_TYPE);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
tmp2 = TRI_Concatenate2String("collection-", tmp1);
|
||||
|
||||
if (tmp2 == NULL) {
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, tmp1);
|
||||
|
||||
TRI_set_errno(TRI_ERROR_OUT_OF_MEMORY);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
filename = TRI_Concatenate2File(path, tmp2);
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, tmp1);
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, tmp2);
|
||||
|
||||
if (filename == NULL) {
|
||||
TRI_set_errno(TRI_ERROR_OUT_OF_MEMORY);
|
||||
}
|
||||
|
@ -1610,9 +1601,7 @@ int TRI_UpdateCollectionInfo (TRI_vocbase_t* vocbase,
|
|||
TRI_collection_t* collection,
|
||||
TRI_col_info_t const* parameter) {
|
||||
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(collection->_info._type)) {
|
||||
TRI_LOCK_JOURNAL_ENTRIES_DOC_COLLECTION((TRI_document_collection_t*) collection);
|
||||
}
|
||||
TRI_LOCK_JOURNAL_ENTRIES_DOC_COLLECTION((TRI_document_collection_t*) collection);
|
||||
|
||||
if (parameter != NULL) {
|
||||
collection->_info._doCompact = parameter->_doCompact;
|
||||
|
@ -1629,9 +1618,7 @@ int TRI_UpdateCollectionInfo (TRI_vocbase_t* vocbase,
|
|||
// ... probably a few others missing here ...
|
||||
}
|
||||
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(collection->_info._type)) {
|
||||
TRI_UNLOCK_JOURNAL_ENTRIES_DOC_COLLECTION((TRI_document_collection_t*) collection);
|
||||
}
|
||||
TRI_UNLOCK_JOURNAL_ENTRIES_DOC_COLLECTION((TRI_document_collection_t*) collection);
|
||||
|
||||
return TRI_SaveCollectionInfo(collection->_directory, &collection->_info, vocbase->_settings.forceSyncProperties);
|
||||
}
@ -142,26 +142,6 @@ struct TRI_vocbase_col_s;
|
|||
/// @}
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// --SECTION-- public macros
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @addtogroup VocBase
|
||||
/// @{
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief return whether the collection is a document collection
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
#define TRI_IS_DOCUMENT_COLLECTION(type) \
|
||||
((type) == TRI_COL_TYPE_DOCUMENT || (type) == TRI_COL_TYPE_EDGE)
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @}
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// --SECTION-- public types
|
||||
// -----------------------------------------------------------------------------
@ -1471,7 +1471,6 @@ void TRI_CompactorVocBase (void* data) {
|
|||
for (i = 0; i < n; ++i) {
|
||||
TRI_vocbase_col_t* collection;
|
||||
TRI_primary_collection_t* primary;
|
||||
TRI_col_type_e type;
|
||||
bool doCompact;
|
||||
bool worked;
|
||||
|
||||
|
@ -1492,35 +1491,32 @@ void TRI_CompactorVocBase (void* data) {
|
|||
|
||||
worked = false;
|
||||
doCompact = primary->base._info._doCompact;
|
||||
type = primary->base._info._type;
|
||||
|
||||
// for document collection, compactify datafiles
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(type)) {
|
||||
if (collection->_status == TRI_VOC_COL_STATUS_LOADED && doCompact) {
|
||||
TRI_barrier_t* ce;
|
||||
|
||||
// check whether someone else holds a read-lock on the compaction lock
|
||||
if (! TRI_TryWriteLockReadWriteLock(&primary->_compactionLock)) {
|
||||
// someone else is holding the compactor lock, we'll not compact
|
||||
TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
continue;
|
||||
}
|
||||
|
||||
ce = TRI_CreateBarrierCompaction(&primary->_barrierList);
|
||||
|
||||
if (ce == NULL) {
|
||||
// out of memory
|
||||
LOG_WARNING("out of memory when trying to create a barrier element");
|
||||
}
|
||||
else {
|
||||
worked = CompactifyDocumentCollection((TRI_document_collection_t*) primary);
|
||||
|
||||
TRI_FreeBarrier(ce);
|
||||
}
|
||||
if (collection->_status == TRI_VOC_COL_STATUS_LOADED && doCompact) {
|
||||
TRI_barrier_t* ce;
|
||||
|
||||
// read-unlock the compaction lock
|
||||
TRI_WriteUnlockReadWriteLock(&primary->_compactionLock);
|
||||
// check whether someone else holds a read-lock on the compaction lock
|
||||
if (! TRI_TryWriteLockReadWriteLock(&primary->_compactionLock)) {
|
||||
// someone else is holding the compactor lock, we'll not compact
|
||||
TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
continue;
|
||||
}
|
||||
|
||||
ce = TRI_CreateBarrierCompaction(&primary->_barrierList);
|
||||
|
||||
if (ce == NULL) {
|
||||
// out of memory
|
||||
LOG_WARNING("out of memory when trying to create a barrier element");
|
||||
}
|
||||
else {
|
||||
worked = CompactifyDocumentCollection((TRI_document_collection_t*) primary);
|
||||
|
||||
TRI_FreeBarrier(ce);
|
||||
}
|
||||
|
||||
// read-unlock the compaction lock
|
||||
TRI_WriteUnlockReadWriteLock(&primary->_compactionLock);
|
||||
}
|
||||
|
||||
TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
@ -344,11 +344,6 @@ TRI_index_t* TRI_LookupIndex (TRI_primary_collection_t* primary,
|
|||
TRI_index_t* idx;
|
||||
size_t i;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(primary->base._info._type)) {
|
||||
TRI_set_errno(TRI_ERROR_ARANGO_UNKNOWN_COLLECTION_TYPE);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
doc = (TRI_document_collection_t*) primary;
|
||||
|
||||
for (i = 0; i < doc->_allIndexes._length; ++i) {
@ -227,7 +227,6 @@ static bool CheckJournalDocumentCollection (TRI_document_collection_t* document)
|
|||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
void TRI_SynchroniserVocBase (void* data) {
|
||||
TRI_col_type_e type;
|
||||
TRI_vocbase_t* vocbase = data;
|
||||
TRI_vector_pointer_t collections;
|
||||
|
||||
|
@ -256,6 +255,7 @@ void TRI_SynchroniserVocBase (void* data) {
|
|||
for (i = 0; i < n; ++i) {
|
||||
TRI_vocbase_col_t* collection;
|
||||
TRI_primary_collection_t* primary;
|
||||
bool result;
|
||||
|
||||
collection = collections._buffer[i];
@ -274,17 +274,11 @@ void TRI_SynchroniserVocBase (void* data) {
|
|||
primary = collection->_collection;
|
||||
|
||||
// for document collection, first sync and then seal
|
||||
type = primary->base._info._type;
|
||||
result = CheckSyncDocumentCollection((TRI_document_collection_t*) primary);
|
||||
worked |= result;
|
||||
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(type)) {
|
||||
bool result;
|
||||
|
||||
result = CheckSyncDocumentCollection((TRI_document_collection_t*) primary);
|
||||
worked |= result;
|
||||
|
||||
result = CheckJournalDocumentCollection((TRI_document_collection_t*) primary);
|
||||
worked |= result;
|
||||
}
|
||||
result = CheckJournalDocumentCollection((TRI_document_collection_t*) primary);
|
||||
worked |= result;
|
||||
|
||||
TRI_READ_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
}
@ -257,17 +257,6 @@ static bool UnloadCollectionCallback (TRI_collection_t* col, void* data) {
|
|||
return true;
|
||||
}
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(collection->_type)) {
|
||||
LOG_ERROR("cannot unload collection '%s' of type '%d'",
|
||||
collection->_name,
|
||||
(int) collection->_type);
|
||||
|
||||
collection->_status = TRI_VOC_COL_STATUS_LOADED;
|
||||
|
||||
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
return false;
|
||||
}
|
||||
|
||||
if (TRI_ContainsBarrierList(&collection->_collection->_barrierList, TRI_BARRIER_ELEMENT) ||
|
||||
TRI_ContainsBarrierList(&collection->_collection->_barrierList, TRI_BARRIER_COLLECTION_REPLICATION) ||
|
||||
TRI_ContainsBarrierList(&collection->_collection->_barrierList, TRI_BARRIER_COLLECTION_COMPACTION)) {
|
||||
|
@ -348,17 +337,6 @@ static bool DropCollectionCallback (TRI_collection_t* col,
|
|||
// .............................................................................
|
||||
|
||||
if (collection->_collection != NULL) {
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(collection->_type)) {
|
||||
LOG_ERROR("cannot drop collection '%s' of type %d",
|
||||
collection->_name,
|
||||
(int) collection->_type);
|
||||
|
||||
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
regfree(&re);
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
document = (TRI_document_collection_t*) collection->_collection;
|
||||
|
||||
res = TRI_CloseDocumentCollection(document);
|
||||
|
@ -975,76 +953,70 @@ static int ScanPath (TRI_vocbase_t* vocbase,
|
|||
else {
|
||||
// we found a collection that is still active
|
||||
TRI_col_type_e type = info._type;
|
||||
TRI_vocbase_col_t* c;
|
||||
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(type)) {
|
||||
TRI_vocbase_col_t* c;
|
||||
if (info._version < TRI_COL_VERSION) {
|
||||
// collection is too "old"
|
||||
|
||||
if (info._version < TRI_COL_VERSION) {
|
||||
// collection is too "old"
|
||||
|
||||
if (! isUpgrade) {
|
||||
LOG_ERROR("collection '%s' has a too old version. Please start the server with the --upgrade option.",
|
||||
info._name);
|
||||
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, file);
|
||||
TRI_DestroyVectorString(&files);
|
||||
TRI_FreeCollectionInfoOptions(&info);
|
||||
regfree(&re);
|
||||
|
||||
return TRI_set_errno(res);
|
||||
}
|
||||
else {
|
||||
LOG_INFO("upgrading collection '%s'", info._name);
|
||||
|
||||
res = TRI_ERROR_NO_ERROR;
|
||||
|
||||
if (info._version < TRI_COL_VERSION_13) {
|
||||
res = TRI_UpgradeCollection13(vocbase, file, &info);
|
||||
}
|
||||
if (! isUpgrade) {
|
||||
LOG_ERROR("collection '%s' has a too old version. Please start the server with the --upgrade option.",
|
||||
info._name);
|
||||
|
||||
if (res == TRI_ERROR_NO_ERROR && info._version < TRI_COL_VERSION_15) {
|
||||
res = TRI_UpgradeCollection15(vocbase, file, &info);
|
||||
}
|
||||
|
||||
if (res != TRI_ERROR_NO_ERROR) {
|
||||
LOG_ERROR("upgrading collection '%s' failed.", info._name);
|
||||
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, file);
|
||||
TRI_DestroyVectorString(&files);
|
||||
TRI_FreeCollectionInfoOptions(&info);
|
||||
regfree(&re);
|
||||
|
||||
return TRI_set_errno(res);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
c = AddCollection(vocbase, type, info._name, info._cid, file);
|
||||
|
||||
if (c == NULL) {
|
||||
LOG_ERROR("failed to add document collection from '%s'", file);
|
||||
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, file);
|
||||
TRI_DestroyVectorString(&files);
|
||||
TRI_FreeCollectionInfoOptions(&info);
|
||||
regfree(&re);
|
||||
|
||||
return TRI_set_errno(TRI_ERROR_ARANGO_CORRUPTED_COLLECTION);
|
||||
return TRI_set_errno(res);
|
||||
}
|
||||
else {
|
||||
LOG_INFO("upgrading collection '%s'", info._name);
|
||||
|
||||
c->_status = TRI_VOC_COL_STATUS_UNLOADED;
|
||||
res = TRI_ERROR_NO_ERROR;
|
||||
|
||||
if (info._version < TRI_COL_VERSION_13) {
|
||||
res = TRI_UpgradeCollection13(vocbase, file, &info);
|
||||
}
|
||||
|
||||
if (iterateMarkers) {
|
||||
// iterating markers may be time-consuming. we'll only do it if
|
||||
// we have to
|
||||
TRI_IterateTicksCollection(file, StartupTickIterator, NULL);
|
||||
}
|
||||
if (res == TRI_ERROR_NO_ERROR && info._version < TRI_COL_VERSION_15) {
|
||||
res = TRI_UpgradeCollection15(vocbase, file, &info);
|
||||
}
|
||||
|
||||
LOG_DEBUG("added document collection from '%s'", file);
|
||||
if (res != TRI_ERROR_NO_ERROR) {
|
||||
LOG_ERROR("upgrading collection '%s' failed.", info._name);
|
||||
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, file);
|
||||
TRI_DestroyVectorString(&files);
|
||||
TRI_FreeCollectionInfoOptions(&info);
|
||||
regfree(&re);
|
||||
|
||||
return TRI_set_errno(res);
|
||||
}
|
||||
}
|
||||
}
|
||||
else {
|
||||
LOG_DEBUG("skipping collection of unknown type %d", (int) type);
|
||||
|
||||
c = AddCollection(vocbase, type, info._name, info._cid, file);
|
||||
|
||||
if (c == NULL) {
|
||||
LOG_ERROR("failed to add document collection from '%s'", file);
|
||||
|
||||
TRI_FreeString(TRI_CORE_MEM_ZONE, file);
|
||||
TRI_DestroyVectorString(&files);
|
||||
TRI_FreeCollectionInfoOptions(&info);
|
||||
regfree(&re);
|
||||
|
||||
return TRI_set_errno(TRI_ERROR_ARANGO_CORRUPTED_COLLECTION);
|
||||
}
|
||||
|
||||
c->_status = TRI_VOC_COL_STATUS_UNLOADED;
|
||||
|
||||
if (iterateMarkers) {
|
||||
// iterating markers may be time-consuming. we'll only do it if
|
||||
// we have to
|
||||
TRI_IterateTicksCollection(file, StartupTickIterator, NULL);
|
||||
}
|
||||
|
||||
LOG_DEBUG("added document collection from '%s'", file);
|
||||
}
|
||||
TRI_FreeCollectionInfoOptions(&info);
|
||||
}
|
||||
|
@ -1071,8 +1043,6 @@ static int ScanPath (TRI_vocbase_t* vocbase,
|
|||
|
||||
static int LoadCollectionVocBase (TRI_vocbase_t* vocbase,
|
||||
TRI_vocbase_col_t* collection) {
|
||||
TRI_col_type_e type;
|
||||
|
||||
// .............................................................................
|
||||
// read lock
|
||||
// .............................................................................
|
||||
|
@ -1165,50 +1135,40 @@ static int LoadCollectionVocBase (TRI_vocbase_t* vocbase,
|
|||
|
||||
// unloaded, load collection
|
||||
if (collection->_status == TRI_VOC_COL_STATUS_UNLOADED) {
|
||||
type = (TRI_col_type_e) collection->_type;
|
||||
TRI_document_collection_t* document;
|
||||
|
||||
if (TRI_IS_DOCUMENT_COLLECTION(type)) {
|
||||
TRI_document_collection_t* document;
|
||||
// set the status to loading
|
||||
collection->_status = TRI_VOC_COL_STATUS_LOADING;
|
||||
|
||||
// set the status to loading
|
||||
collection->_status = TRI_VOC_COL_STATUS_LOADING;
|
||||
// release the lock on the collection temporarily
|
||||
// this will allow other threads to check the collection's
|
||||
// status while it is loading (loading may take a long time because of
|
||||
// disk activity)
|
||||
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
|
||||
document = TRI_OpenDocumentCollection(vocbase, collection->_path);
|
||||
|
||||
// lock again to adjust the status
|
||||
TRI_WRITE_LOCK_STATUS_VOCBASE_COL(collection);
|
||||
|
||||
// no one else must have changed the status
|
||||
assert(collection->_status == TRI_VOC_COL_STATUS_LOADING);
|
||||
|
||||
if (document == NULL) {
|
||||
collection->_status = TRI_VOC_COL_STATUS_CORRUPTED;
|
||||
|
||||
// release the lock on the collection temporarily
|
||||
// this will allow other threads to check the collection's
|
||||
// status while it is loading (loading may take a long time because of
|
||||
// disk activity)
|
||||
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
|
||||
document = TRI_OpenDocumentCollection(vocbase, collection->_path);
|
||||
|
||||
// lock again to adjust the status
|
||||
TRI_WRITE_LOCK_STATUS_VOCBASE_COL(collection);
|
||||
|
||||
// no one else must have changed the status
|
||||
assert(collection->_status == TRI_VOC_COL_STATUS_LOADING);
|
||||
|
||||
if (document == NULL) {
|
||||
collection->_status = TRI_VOC_COL_STATUS_CORRUPTED;
|
||||
|
||||
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
return TRI_set_errno(TRI_ERROR_ARANGO_CORRUPTED_COLLECTION);
|
||||
}
|
||||
|
||||
collection->_collection = &document->base;
|
||||
collection->_status = TRI_VOC_COL_STATUS_LOADED;
|
||||
TRI_CopyString(collection->_path, document->base.base._directory, sizeof(collection->_path) - 1);
|
||||
|
||||
// release the WRITE lock and try again
|
||||
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
|
||||
return LoadCollectionVocBase(vocbase, collection);
|
||||
return TRI_set_errno(TRI_ERROR_ARANGO_CORRUPTED_COLLECTION);
|
||||
}
|
||||
else {
|
||||
LOG_ERROR("unknown collection type %d for '%s'", (int) type, collection->_name);
|
||||
|
||||
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
return TRI_set_errno(TRI_ERROR_ARANGO_UNKNOWN_COLLECTION_TYPE);
|
||||
}
|
||||
collection->_collection = &document->base;
|
||||
collection->_status = TRI_VOC_COL_STATUS_LOADED;
|
||||
TRI_CopyString(collection->_path, document->base.base._directory, sizeof(collection->_path) - 1);
|
||||
|
||||
// release the WRITE lock and try again
|
||||
TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL(collection);
|
||||
|
||||
return LoadCollectionVocBase(vocbase, collection);
|
||||
}
|
||||
|
||||
LOG_ERROR("unknown collection status %d for '%s'", (int) collection->_status, collection->_name);
|
||||
|
@ -1963,10 +1923,9 @@ TRI_vocbase_col_t* TRI_CreateCollectionVocBase (TRI_vocbase_t* vocbase,
|
|||
TRI_voc_cid_t cid,
|
||||
TRI_server_id_t generatingServer) {
|
||||
TRI_vocbase_col_t* collection;
|
||||
TRI_col_type_e type;
|
||||
char* name;
|
||||
|
||||
assert(parameter);
|
||||
assert(parameter != NULL);
|
||||
name = parameter->_name;
|
||||
|
||||
// check that the name does not contain any strange characters
|
||||
|
@ -1976,15 +1935,6 @@ TRI_vocbase_col_t* TRI_CreateCollectionVocBase (TRI_vocbase_t* vocbase,
|
|||
return NULL;
|
||||
}
|
||||
|
||||
type = (TRI_col_type_e) parameter->_type;
|
||||
|
||||
if (! TRI_IS_DOCUMENT_COLLECTION(type)) {
|
||||
LOG_ERROR("unknown collection type: %d", (int) parameter->_type);
|
||||
|
||||
TRI_set_errno(TRI_ERROR_ARANGO_UNKNOWN_COLLECTION_TYPE);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
TRI_ReadLockReadWriteLock(&vocbase->_inventoryLock);
|
||||
|
||||
collection = CreateCollection(vocbase, parameter, cid, generatingServer);
@ -303,7 +303,8 @@ int main (int argc, char* argv[]) {
|
|||
BaseClient.sslProtocol(),
|
||||
false);
|
||||
|
||||
if (! ClientConnection->isConnected() || ClientConnection->getLastHttpReturnCode() != HttpResponse::OK) {
|
||||
if (! ClientConnection->isConnected() ||
|
||||
ClientConnection->getLastHttpReturnCode() != HttpResponse::OK) {
|
||||
cerr << "Could not connect to endpoint '" << BaseClient.endpointServer()->getSpecification()
|
||||
<< "', database: '" << BaseClient.databaseName() << "'" << endl;
|
||||
cerr << "Error message: '" << ClientConnection->getErrorMessage() << "'" << endl;
|
||||
|
@ -358,18 +359,18 @@ int main (int argc, char* argv[]) {
|
|||
|
||||
// collection name
|
||||
if (CollectionName == "") {
|
||||
cerr << "collection name is missing." << endl;
|
||||
cerr << "Collection name is missing." << endl;
|
||||
TRI_EXIT_FUNCTION(EXIT_FAILURE, NULL);
|
||||
}
|
||||
|
||||
// filename
|
||||
if (FileName == "") {
|
||||
cerr << "file name is missing." << endl;
|
||||
cerr << "File name is missing." << endl;
|
||||
TRI_EXIT_FUNCTION(EXIT_FAILURE, NULL);
|
||||
}
|
||||
|
||||
if (FileName != "-" && ! FileUtils::isRegularFile(FileName)) {
|
||||
cerr << "file '" << FileName << "' is not a regular file." << endl;
|
||||
cerr << "Cannot open file '" << FileName << "'" << endl;
|
||||
TRI_EXIT_FUNCTION(EXIT_FAILURE, NULL);
|
||||
}
|
||||
|
||||
|
@ -415,9 +416,6 @@ int main (int argc, char* argv[]) {
|
|||
cerr << "error message: " << ih.getErrorMessage() << endl;
|
||||
}
|
||||
|
||||
// calling dispose in V8 3.10.x causes a segfault. the v8 docs says its not necessary to call it upon program termination
|
||||
// v8::V8::Dispose();
|
||||
|
||||
TRIAGENS_REST_SHUTDOWN;
|
||||
|
||||
arangoimpExitFunction(ret, NULL);
@ -145,6 +145,24 @@ actions.defineHttp({
|
|||
}
|
||||
})
|
||||
});
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief rescans the FOXX application directory
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
actions.defineHttp({
|
||||
url : "_admin/foxx/rescan",
|
||||
context : "admin",
|
||||
prefix : false,
|
||||
|
||||
callback: easyPostCallback({
|
||||
body: true,
|
||||
callback: function (body) {
|
||||
foxxManager.scanAppDirectory();
|
||||
return true;
|
||||
}
|
||||
})
|
||||
});
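
// a sketch of calling the new route directly over HTTP (endpoint and port
// are assumptions, not part of this change):
//
//   curl -X POST http://localhost:8529/_admin/foxx/rescan --data "{}"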
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief sets up a FOXX application
|
||||
|
|
|
@ -123,8 +123,8 @@
|
|||
"ERROR_CLUSTER_DATABASE_NAME_EXISTS" : { "code" : 1460, "message" : "database name already exists" },
|
||||
"ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE_IN_PLAN" : { "code" : 1461, "message" : "could not create database in plan" },
|
||||
"ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE" : { "code" : 1462, "message" : "could not create database" },
|
||||
"ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN" : { "code" : 1463, "message" : "could not remove databasefrom plan" },
|
||||
"ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT" : { "code" : 1464, "message" : "could not remove databasefrom current" },
|
||||
"ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN" : { "code" : 1463, "message" : "could not remove database from plan" },
|
||||
"ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT" : { "code" : 1464, "message" : "could not remove database from current" },
|
||||
"ERROR_QUERY_KILLED" : { "code" : 1500, "message" : "query killed" },
|
||||
"ERROR_QUERY_PARSE" : { "code" : 1501, "message" : "%s" },
|
||||
"ERROR_QUERY_EMPTY" : { "code" : 1502, "message" : "query is empty" },
|
||||
|
@ -183,6 +183,7 @@
|
|||
"ERROR_GRAPH_COULD_NOT_CREATE_EDGE" : { "code" : 1907, "message" : "could not create edge" },
|
||||
"ERROR_GRAPH_COULD_NOT_CHANGE_EDGE" : { "code" : 1908, "message" : "could not change edge" },
|
||||
"ERROR_GRAPH_TOO_MANY_ITERATIONS" : { "code" : 1909, "message" : "too many iterations" },
|
||||
"ERROR_GRAPH_INVALID_FILTER_RESULT" : { "code" : 1910, "message" : "invalid filter result" },
|
||||
"ERROR_SESSION_UNKNOWN" : { "code" : 1950, "message" : "unknown session" },
|
||||
"ERROR_SESSION_EXPIRED" : { "code" : 1951, "message" : "session expired" },
|
||||
"SIMPLE_CLIENT_UNKNOWN_ERROR" : { "code" : 2000, "message" : "unknown client error" },
@ -31,6 +31,7 @@ module.define("org/arangodb/graph/traversal", function(exports, module) {
|
|||
|
||||
var graph = require("org/arangodb/graph");
|
||||
var arangodb = require("org/arangodb");
|
||||
var BinaryHeap = require("org/arangodb/heap").BinaryHeap;
|
||||
var ArangoError = arangodb.ArangoError;
|
||||
|
||||
var db = arangodb.db;
|
||||
|
@ -38,9 +39,54 @@ var db = arangodb.db;
|
|||
var ArangoTraverser;
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// --SECTION-- public functions
|
||||
// --SECTION-- helper functions
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief clone any object
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
function clone (obj) {
|
||||
if (obj === null || typeof(obj) !== "object") {
|
||||
return obj;
|
||||
}
|
||||
|
||||
var copy, i;
|
||||
|
||||
if (Array.isArray(obj)) {
|
||||
copy = [ ];
|
||||
|
||||
for (i = 0; i < obj.length; ++i) {
|
||||
copy[i] = clone(obj[i]);
|
||||
}
|
||||
}
|
||||
else if (obj instanceof Object) {
|
||||
copy = { };
|
||||
|
||||
if (obj.hasOwnProperty) {
|
||||
for (i in obj) {
|
||||
if (obj.hasOwnProperty(i)) {
|
||||
copy[i] = clone(obj[i]);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return copy;
|
||||
}
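
// e.g. clone({ a: [ 1, 2 ] }) returns a deep copy: arrays and plain objects
// are copied recursively, so mutating the copy leaves the original untouched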
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief traversal abortion exception
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
var abortedException = function (message, options) {
|
||||
'use strict';
|
||||
this.message = message || "traversal intentionally aborted by user";
|
||||
this.options = options || { };
|
||||
this._intentionallyAborted = true;
|
||||
};
|
||||
|
||||
abortedException.prototype = new Error();
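
// sketch: a user-supplied visitor can stop a traversal early by throwing
// this exception; traverse() swallows it when _intentionallyAborted is set:
//
//   var myVisitor = function (config, result, vertex, path) {
//     if (result.visited.vertices.length >= 100) {
//       throw new abortedException("collected enough vertices");
//     }
//   };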
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// --SECTION-- datasources
|
||||
// -----------------------------------------------------------------------------
|
||||
|
@ -366,35 +412,6 @@ function trackingVisitor (config, result, vertex, path) {
|
|||
return;
|
||||
}
|
||||
|
||||
function clone (obj) {
|
||||
if (obj === null || typeof(obj) !== "object") {
|
||||
return obj;
|
||||
}
|
||||
|
||||
var copy, i;
|
||||
|
||||
if (Array.isArray(obj)) {
|
||||
copy = [ ];
|
||||
|
||||
for (i = 0; i < obj.length; ++i) {
|
||||
copy[i] = clone(obj[i]);
|
||||
}
|
||||
}
|
||||
else if (obj instanceof Object) {
|
||||
copy = { };
|
||||
|
||||
if (obj.hasOwnProperty) {
|
||||
for (i in obj) {
|
||||
if (obj.hasOwnProperty(i)) {
|
||||
copy[i] = clone(obj[i]);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return copy;
|
||||
}
|
||||
|
||||
if (result.visited.vertices) {
|
||||
result.visited.vertices.push(clone(vertex));
|
||||
}
|
||||
|
@ -555,7 +572,10 @@ function parseFilterResult (args) {
|
|||
return;
|
||||
}
|
||||
|
||||
throw "invalid filter result";
|
||||
var err = new ArangoError();
|
||||
err.errorNum = arangodb.errors.ERROR_GRAPH_INVALID_FILTER_RESULT.code;
|
||||
err.errorMessage = arangodb.errors.ERROR_GRAPH_INVALID_FILTER_RESULT.message;
|
||||
throw err;
|
||||
}
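
// design note: throwing a structured ArangoError (error 1910, "invalid
// filter result") instead of a plain string gives callers an errorNum and
// errorMessage to inspect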
|
||||
|
||||
processArgument(args);
|
||||
|
@ -629,6 +649,10 @@ function checkReverse (config) {
|
|||
|
||||
function breadthFirstSearch () {
|
||||
return {
|
||||
requiresEndVertex: function () {
|
||||
return false;
|
||||
},
|
||||
|
||||
getPathItems: function (id, items) {
|
||||
var visited = { };
|
||||
var ignore = items.length - 1;
|
||||
|
@ -757,6 +781,10 @@ function breadthFirstSearch () {
|
|||
|
||||
function depthFirstSearch () {
|
||||
return {
|
||||
requiresEndVertex: function () {
|
||||
return false;
|
||||
},
|
||||
|
||||
getPathItems: function (id, items) {
|
||||
var visited = { };
|
||||
items.forEach(function (item) {
|
||||
|
@ -854,6 +882,240 @@ function depthFirstSearch () {
|
|||
};
|
||||
}
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief implementation details for dijkstra shortest path strategy
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
function dijkstraSearch () {
|
||||
return {
|
||||
nodes: { },
|
||||
|
||||
requiresEndVertex: function () {
|
||||
return true;
|
||||
},
|
||||
|
||||
makeNode: function (vertex) {
|
||||
var id = vertex._id;
|
||||
if (! this.nodes.hasOwnProperty(id)) {
|
||||
this.nodes[id] = { vertex: vertex, dist: Infinity };
|
||||
}
|
||||
|
||||
return this.nodes[id];
|
||||
},
|
||||
|
||||
vertexList: function (vertex) {
|
||||
var result = [ ];
|
||||
while (vertex) {
|
||||
result.push(vertex);
|
||||
vertex = vertex.parent;
|
||||
}
|
||||
return result;
|
||||
},
|
||||
|
||||
buildPath: function (vertex) {
|
||||
var path = { vertices: [ vertex.vertex ], edges: [ ] };
|
||||
var v = vertex;
|
||||
|
||||
while (v.parent) {
|
||||
path.vertices.unshift(v.parent.vertex);
|
||||
path.edges.unshift(v.parentEdge);
|
||||
v = v.parent;
|
||||
}
|
||||
return path;
|
||||
},
|
||||
|
||||
run: function (config, result, startVertex, endVertex) {
|
||||
var maxIterations = config.maxIterations, visitCounter = 0;
|
||||
|
||||
var heap = new BinaryHeap(function (node) {
|
||||
return node.dist;
|
||||
});
|
||||
|
||||
var startNode = this.makeNode(startVertex);
|
||||
startNode.dist = 0;
|
||||
heap.push(startNode);
|
||||
|
||||
while (heap.size() > 0) {
|
||||
if (visitCounter++ > maxIterations) {
|
||||
var err = new ArangoError();
|
||||
err.errorNum = arangodb.errors.ERROR_GRAPH_TOO_MANY_ITERATIONS.code;
|
||||
err.errorMessage = arangodb.errors.ERROR_GRAPH_TOO_MANY_ITERATIONS.message;
|
||||
throw err;
|
||||
}
|
||||
|
||||
var currentNode = heap.pop();
|
||||
var i, n;
|
||||
|
||||
if (currentNode.vertex._id === endVertex._id) {
|
||||
var vertices = this.vertexList(currentNode);
|
||||
if (config.order !== ArangoTraverser.PRE_ORDER) {
|
||||
vertices.reverse();
|
||||
}
|
||||
|
||||
n = vertices.length;
|
||||
for (i = 0; i < n; ++i) {
|
||||
config.visitor(config, result, vertices[i].vertex, this.buildPath(vertices[i]));
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
if (currentNode.visited) {
|
||||
continue;
|
||||
}
|
||||
|
||||
if (currentNode.dist === Infinity) {
|
||||
break;
|
||||
}
|
||||
|
||||
currentNode.visited = true;
|
||||
var dist = currentNode.dist;
|
||||
|
||||
var path = this.buildPath(currentNode);
|
||||
var connected = config.expander(config, currentNode.vertex, path);
|
||||
n = connected.length;
|
||||
|
||||
for (i = 0; i < n; ++i) {
|
||||
var neighbor = this.makeNode(connected[i].vertex);
|
||||
|
||||
if (neighbor.visited) {
|
||||
continue;
|
||||
}
|
||||
|
||||
var edge = connected[i].edge;
|
||||
var weight = 1;
|
||||
if (config.distance) {
|
||||
weight = config.distance(config, currentNode.vertex, neighbor.vertex, edge);
|
||||
}
|
||||
|
||||
var alt = dist + weight;
|
||||
if (alt < neighbor.dist) {
|
||||
neighbor.dist = alt;
|
||||
neighbor.parent = currentNode;
|
||||
neighbor.parentEdge = edge;
|
||||
heap.push(neighbor);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
}
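
// design note: node records are created lazily by makeNode() with
// dist = Infinity, and the heap is keyed on dist; popping a node whose dist
// is still Infinity means everything left in the queue is unreachable,
// which is why the loop above breaks at that point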
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief implementation details for a* shortest path strategy
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
function astarSearch () {
|
||||
return {
|
||||
nodes: { },
|
||||
|
||||
requiresEndVertex: function () {
|
||||
return true;
|
||||
},
|
||||
|
||||
makeNode: function (vertex) {
|
||||
var id = vertex._id;
|
||||
if (! this.nodes.hasOwnProperty(id)) {
|
||||
this.nodes[id] = { vertex: vertex, f: 0, g: 0, h: 0 };
|
||||
}
|
||||
|
||||
return this.nodes[id];
|
||||
},
|
||||
|
||||
vertexList: function (vertex) {
|
||||
var result = [ ];
|
||||
while (vertex) {
|
||||
result.push(vertex);
|
||||
vertex = vertex.parent;
|
||||
}
|
||||
return result;
|
||||
},
|
||||
|
||||
buildPath: function (vertex) {
|
||||
var path = { vertices: [ vertex.vertex ], edges: [ ] };
|
||||
var v = vertex;
|
||||
|
||||
while (v.parent) {
|
||||
path.vertices.unshift(v.parent.vertex);
|
||||
path.edges.unshift(v.parentEdge);
|
||||
v = v.parent;
|
||||
}
|
||||
return path;
|
||||
},
|
||||
|
||||
run: function (config, result, startVertex, endVertex) {
|
||||
var maxIterations = config.maxIterations, visitCounter = 0;
|
||||
|
||||
var heap = new BinaryHeap(function (node) {
|
||||
return node.f;
|
||||
});
|
||||
|
||||
heap.push(this.makeNode(startVertex));
|
||||
|
||||
|
||||
while (heap.size() > 0) {
|
||||
if (visitCounter++ > maxIterations) {
|
||||
var err = new ArangoError();
|
||||
err.errorNum = arangodb.errors.ERROR_GRAPH_TOO_MANY_ITERATIONS.code;
|
||||
err.errorMessage = arangodb.errors.ERROR_GRAPH_TOO_MANY_ITERATIONS.message;
|
||||
throw err;
|
||||
}
|
||||
|
||||
var currentNode = heap.pop();
|
||||
var i, n;
|
||||
|
||||
if (currentNode.vertex._id === endVertex._id) {
|
||||
var vertices = this.vertexList(currentNode);
|
||||
if (config.order !== ArangoTraverser.PRE_ORDER) {
|
||||
vertices.reverse();
|
||||
}
|
||||
|
||||
n = vertices.length;
|
||||
for (i = 0; i < n; ++i) {
|
||||
config.visitor(config, result, vertices[i].vertex, this.buildPath(vertices[i]));
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
currentNode.closed = true;
|
||||
|
||||
var path = this.buildPath(currentNode);
|
||||
var connected = config.expander(config, currentNode.vertex, path);
|
||||
n = connected.length;
|
||||
|
||||
for (i = 0; i < n; ++i) {
|
||||
var neighbor = this.makeNode(connected[i].vertex);
|
||||
|
||||
if (neighbor.closed) {
|
||||
continue;
|
||||
}
|
||||
|
||||
var gScore = currentNode.g + 1; // + neighbor.cost;
|
||||
var beenVisited = neighbor.visited;
|
||||
|
||||
if (! beenVisited || gScore < neighbor.g) {
|
||||
var edge = connected[i].edge;
|
||||
neighbor.visited = true;
|
||||
neighbor.parent = currentNode;
|
||||
neighbor.parentEdge = edge;
|
||||
neighbor.h = 1;
if (config.distance) {
neighbor.h = config.distance(config, neighbor.vertex, endVertex, edge);
}
|
||||
neighbor.g = gScore;
|
||||
neighbor.f = neighbor.g + neighbor.h;
|
||||
|
||||
if (! beenVisited) {
|
||||
heap.push(neighbor);
|
||||
}
|
||||
else {
|
||||
heap.rescoreElement(neighbor);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @}
|
||||
|
@ -959,7 +1221,9 @@ ArangoTraverser = function (config) {
|
|||
|
||||
config.strategy = validate(config.strategy, {
|
||||
depthfirst: ArangoTraverser.DEPTH_FIRST,
|
||||
breadthfirst: ArangoTraverser.BREADTH_FIRST
|
||||
breadthfirst: ArangoTraverser.BREADTH_FIRST,
|
||||
astar: ArangoTraverser.ASTAR_SEARCH,
|
||||
dijkstra: ArangoTraverser.DIJKSTRA_SEARCH
|
||||
}, "strategy");
|
||||
|
||||
config.order = validate(config.order, {
|
||||
|
@ -1054,23 +1318,54 @@ ArangoTraverser = function (config) {
|
|||
/// @brief execute the traversal
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
ArangoTraverser.prototype.traverse = function (result, startVertex) {
|
||||
// check the start vertex
|
||||
if (startVertex === undefined || startVertex === null) {
|
||||
throw "invalid startVertex specified for traversal";
|
||||
}
|
||||
|
||||
ArangoTraverser.prototype.traverse = function (result, startVertex, endVertex) {
|
||||
// get the traversal strategy
|
||||
var strategy;
|
||||
if (this.config.strategy === ArangoTraverser.BREADTH_FIRST) {
|
||||
|
||||
if (this.config.strategy === ArangoTraverser.ASTAR_SEARCH) {
|
||||
strategy = astarSearch();
|
||||
}
|
||||
else if (this.config.strategy === ArangoTraverser.DIJKSTRA_SEARCH) {
|
||||
strategy = dijkstraSearch();
|
||||
}
|
||||
else if (this.config.strategy === ArangoTraverser.BREADTH_FIRST) {
|
||||
strategy = breadthFirstSearch();
|
||||
}
|
||||
else {
|
||||
strategy = depthFirstSearch();
|
||||
}
|
||||
|
||||
// check the start vertex
|
||||
if (startVertex === undefined ||
|
||||
startVertex === null ||
|
||||
typeof startVertex !== 'object') {
|
||||
var err1 = new ArangoError();
|
||||
err1.errorNum = arangodb.errors.ERROR_BAD_PARAMETER.code;
|
||||
err1.errorMessage = arangodb.errors.ERROR_BAD_PARAMETER.message +
|
||||
": invalid startVertex specified for traversal";
|
||||
throw err1;
|
||||
}
|
||||
|
||||
if (strategy.requiresEndVertex() &&
|
||||
(endVertex === undefined ||
|
||||
endVertex === null ||
|
||||
typeof endVertex !== 'object')) {
|
||||
var err2 = new ArangoError();
|
||||
err2.errorNum = arangodb.errors.ERROR_BAD_PARAMETER.code;
|
||||
err2.errorMessage = arangodb.errors.ERROR_BAD_PARAMETER.message +
|
||||
": invalid endVertex specified for traversal";
|
||||
throw err2;
|
||||
}
|
||||
|
||||
// run the traversal
|
||||
strategy.run(this.config, result, startVertex);
|
||||
try {
|
||||
strategy.run(this.config, result, startVertex, endVertex);
|
||||
}
|
||||
catch (err3) {
|
||||
if (typeof err3 !== "object" || ! err3._intentionallyAborted) {
|
||||
throw err3;
|
||||
}
|
||||
}
|
||||
};
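
// a minimal usage sketch for the new shortest-path strategies (the
// datasource/visitor wiring below is illustrative, not part of this change):
//
//   var traversal = require("org/arangodb/graph/traversal");
//   var config = {
//     datasource: traversal.collectionDatasourceFactory(db.edges),
//     strategy: "dijkstra",    // or "astar"; both require an end vertex
//     visitor: traversal.trackingVisitor
//   };
//   var result = { visited: { vertices: [ ], paths: [ ] } };
//   new traversal.Traverser(config).traverse(result, startVertex, endVertex);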
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
@ -1116,6 +1411,18 @@ ArangoTraverser.BREADTH_FIRST = 0;
|
|||
|
||||
ArangoTraverser.DEPTH_FIRST = 1;
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief astar search
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
ArangoTraverser.ASTAR_SEARCH = 2;
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief dijkstra search
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
ArangoTraverser.DIJKSTRA_SEARCH = 3;
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief pre-order traversal
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
@ -1181,6 +1488,7 @@ exports.visitAllFilter = visitAllFilter;
|
|||
exports.maxDepthFilter = maxDepthFilter;
|
||||
exports.minDepthFilter = minDepthFilter;
|
||||
exports.includeMatchingAttributesFilter = includeMatchingAttributesFilter;
|
||||
exports.abortedException = abortedException;
|
||||
|
||||
exports.Traverser = ArangoTraverser;
@ -683,6 +683,9 @@ exports.run = function (args) {
|
|||
exports.mount(args[1], args[2]);
|
||||
}
|
||||
}
|
||||
else if (type === 'rescan') {
|
||||
exports.rescan();
|
||||
}
|
||||
else if (type === 'setup') {
|
||||
exports.setup(args[1]);
|
||||
}
|
||||
|
@ -821,6 +824,18 @@ exports.fetch = function (type, location, version) {
|
|||
return arangosh.checkRequestResult(res);
|
||||
};
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief rescans the FOXX application directory
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
exports.rescan = function () {
|
||||
'use strict';
|
||||
|
||||
var res = arango.POST("/_admin/foxx/rescan", "");
|
||||
|
||||
return arangosh.checkRequestResult(res);
|
||||
};
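
// example (arangosh): fm.rescan() just issues the POST above; it is only
// needed if the server-side apps directory was modified by other processes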
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief mounts a FOXX application
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
@ -1435,6 +1450,8 @@ exports.help = function () {
|
|||
"setup" : "setup executes the setup script (app must already be mounted)",
|
||||
"install" : "fetches a foxx application from the central foxx-apps repository, mounts it to a local URL " +
|
||||
"and sets it up",
|
||||
"rescan" : "rescans the foxx application directory on the server side (only needed if server-side apps " +
|
||||
"directory is modified by other processes)",
|
||||
"replace" : "replaces an aleady existing foxx application with the current local version",
|
||||
"teardown" : "teardown execute the teardown script (app must be still be mounted)",
|
||||
"unmount" : "unmounts a mounted foxx application",
@ -123,8 +123,8 @@
|
|||
"ERROR_CLUSTER_DATABASE_NAME_EXISTS" : { "code" : 1460, "message" : "database name already exists" },
|
||||
"ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE_IN_PLAN" : { "code" : 1461, "message" : "could not create database in plan" },
|
||||
"ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE" : { "code" : 1462, "message" : "could not create database" },
|
||||
"ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN" : { "code" : 1463, "message" : "could not remove databasefrom plan" },
|
||||
"ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT" : { "code" : 1464, "message" : "could not remove databasefrom current" },
|
||||
"ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN" : { "code" : 1463, "message" : "could not remove database from plan" },
|
||||
"ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT" : { "code" : 1464, "message" : "could not remove database from current" },
|
||||
"ERROR_QUERY_KILLED" : { "code" : 1500, "message" : "query killed" },
|
||||
"ERROR_QUERY_PARSE" : { "code" : 1501, "message" : "%s" },
|
||||
"ERROR_QUERY_EMPTY" : { "code" : 1502, "message" : "query is empty" },
|
||||
|
@ -183,6 +183,7 @@
|
|||
"ERROR_GRAPH_COULD_NOT_CREATE_EDGE" : { "code" : 1907, "message" : "could not create edge" },
|
||||
"ERROR_GRAPH_COULD_NOT_CHANGE_EDGE" : { "code" : 1908, "message" : "could not change edge" },
|
||||
"ERROR_GRAPH_TOO_MANY_ITERATIONS" : { "code" : 1909, "message" : "too many iterations" },
|
||||
"ERROR_GRAPH_INVALID_FILTER_RESULT" : { "code" : 1910, "message" : "invalid filter result" },
|
||||
"ERROR_SESSION_UNKNOWN" : { "code" : 1950, "message" : "unknown session" },
|
||||
"ERROR_SESSION_EXPIRED" : { "code" : 1951, "message" : "session expired" },
|
||||
"SIMPLE_CLIENT_UNKNOWN_ERROR" : { "code" : 2000, "message" : "unknown client error" },
@ -735,6 +735,17 @@ function require (path) {
|
|||
}
|
||||
}
|
||||
|
||||
// actually the file name can be set via the path attribute
|
||||
if (origin === undefined) {
|
||||
origin = description.path;
|
||||
}
|
||||
// strip protocol (e.g. file://)
|
||||
if (typeof origin === 'string') {
|
||||
origin = origin.replace(/^[a-z]+:\/\//, '');
|
||||
}
|
||||
|
||||
sandbox.__filename = origin;
|
||||
sandbox.__dirname = typeof origin === 'string' ? origin.split('/').slice(0, -1).join('/') : origin;
|
||||
sandbox.module = module;
|
||||
sandbox.exports = module.exports;
|
||||
sandbox.require = function(path) { return module.require(path); };
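
// sketch: for a module whose origin is "file:///foo/bar/baz.js", stripping
// the protocol yields "/foo/bar/baz.js", so __filename becomes
// "/foo/bar/baz.js" and __dirname becomes "/foo/bar"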
|
||||
|
@ -1326,6 +1337,8 @@ function require (path) {
|
|||
}
|
||||
}
|
||||
|
||||
sandbox.__filename = full;
|
||||
sandbox.__dirname = full.split('/').slice(0, -1).join('/');
|
||||
sandbox.module = appModule;
|
||||
sandbox.applicationContext = appContext;
@ -30,6 +30,7 @@
|
|||
|
||||
var graph = require("org/arangodb/graph");
|
||||
var arangodb = require("org/arangodb");
|
||||
var BinaryHeap = require("org/arangodb/heap").BinaryHeap;
|
||||
var ArangoError = arangodb.ArangoError;
|
||||
|
||||
var db = arangodb.db;
|
||||
|
@ -37,9 +38,54 @@ var db = arangodb.db;
|
|||
var ArangoTraverser;
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// --SECTION-- public functions
|
||||
// --SECTION-- helper functions
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief clone any object
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
function clone (obj) {
|
||||
if (obj === null || typeof(obj) !== "object") {
|
||||
return obj;
|
||||
}
|
||||
|
||||
var copy, i;
|
||||
|
||||
if (Array.isArray(obj)) {
|
||||
copy = [ ];
|
||||
|
||||
for (i = 0; i < obj.length; ++i) {
|
||||
copy[i] = clone(obj[i]);
|
||||
}
|
||||
}
|
||||
else if (obj instanceof Object) {
|
||||
copy = { };
|
||||
|
||||
if (obj.hasOwnProperty) {
|
||||
for (i in obj) {
|
||||
if (obj.hasOwnProperty(i)) {
|
||||
copy[i] = clone(obj[i]);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return copy;
|
||||
}
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
/// @brief traversal abortion exception
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
var abortedException = function (message, options) {
|
||||
'use strict';
|
||||
this.message = message || "traversal intentionally aborted by user";
|
||||
this.options = options || { };
|
||||
this._intentionallyAborted = true;
|
||||
};
|
||||
|
||||
abortedException.prototype = new Error();
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// --SECTION-- datasources
|
||||
// -----------------------------------------------------------------------------
|
||||
|
@ -365,35 +411,6 @@ function trackingVisitor (config, result, vertex, path) {
|
|||
return;
|
||||
}
|
||||
|
||||
function clone (obj) {
|
||||
if (obj === null || typeof(obj) !== "object") {
|
||||
return obj;
|
||||
}
|
||||
|
||||
var copy, i;
|
||||
|
||||
if (Array.isArray(obj)) {
|
||||
copy = [ ];
|
||||
|
||||
for (i = 0; i < obj.length; ++i) {
|
||||
copy[i] = clone(obj[i]);
|
||||
}
|
||||
}
|
||||
else if (obj instanceof Object) {
|
||||
copy = { };
|
||||
|
||||
if (obj.hasOwnProperty) {
|
||||
for (i in obj) {
|
||||
if (obj.hasOwnProperty(i)) {
|
||||
copy[i] = clone(obj[i]);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return copy;
|
||||
}
|
||||
|
||||
if (result.visited.vertices) {
|
||||
result.visited.vertices.push(clone(vertex));
|
||||
}
|
||||
|
@ -554,7 +571,10 @@ function parseFilterResult (args) {
|
|||
return;
|
||||
}
|
||||
|
||||
throw "invalid filter result";
|
||||
var err = new ArangoError();
|
||||
err.errorNum = arangodb.errors.ERROR_GRAPH_INVALID_FILTER_RESULT.code;
|
||||
err.errorMessage = arangodb.errors.ERROR_GRAPH_INVALID_FILTER_RESULT.message;
|
||||
throw err;
|
||||
}

  processArgument(args);

@ -628,6 +648,10 @@ function checkReverse (config) {

function breadthFirstSearch () {
  return {
    requiresEndVertex: function () {
      return false;
    },

    getPathItems: function (id, items) {
      var visited = { };
      var ignore = items.length - 1;

@ -756,6 +780,10 @@ function breadthFirstSearch () {

function depthFirstSearch () {
  return {
    requiresEndVertex: function () {
      return false;
    },

    getPathItems: function (id, items) {
      var visited = { };
      items.forEach(function (item) {

@ -853,6 +881,240 @@ function depthFirstSearch () {
  };
}

////////////////////////////////////////////////////////////////////////////////
/// @brief implementation details for dijkstra shortest path strategy
////////////////////////////////////////////////////////////////////////////////

function dijkstraSearch () {
  return {
    nodes: { },

    requiresEndVertex: function () {
      return true;
    },

    makeNode: function (vertex) {
      var id = vertex._id;
      if (! this.nodes.hasOwnProperty(id)) {
        this.nodes[id] = { vertex: vertex, dist: Infinity };
      }

      return this.nodes[id];
    },

    vertexList: function (vertex) {
      var result = [ ];
      while (vertex) {
        result.push(vertex);
        vertex = vertex.parent;
      }
      return result;
    },

    buildPath: function (vertex) {
      var path = { vertices: [ vertex.vertex ], edges: [ ] };
      var v = vertex;

      while (v.parent) {
        path.vertices.unshift(v.parent.vertex);
        path.edges.unshift(v.parentEdge);
        v = v.parent;
      }
      return path;
    },

    run: function (config, result, startVertex, endVertex) {
      var maxIterations = config.maxIterations, visitCounter = 0;

      var heap = new BinaryHeap(function (node) {
        return node.dist;
      });

      var startNode = this.makeNode(startVertex);
      startNode.dist = 0;
      heap.push(startNode);

      while (heap.size() > 0) {
        if (visitCounter++ > maxIterations) {
          var err = new ArangoError();
          err.errorNum = arangodb.errors.ERROR_GRAPH_TOO_MANY_ITERATIONS.code;
          err.errorMessage = arangodb.errors.ERROR_GRAPH_TOO_MANY_ITERATIONS.message;
          throw err;
        }

        var currentNode = heap.pop();
        var i, n;

        if (currentNode.vertex._id === endVertex._id) {
          var vertices = this.vertexList(currentNode);
          if (config.order !== ArangoTraverser.PRE_ORDER) {
            vertices.reverse();
          }

          n = vertices.length;
          for (i = 0; i < n; ++i) {
            config.visitor(config, result, vertices[i].vertex, this.buildPath(vertices[i]));
          }
          return;
        }

        if (currentNode.visited) {
          continue;
        }

        if (currentNode.dist === Infinity) {
          break;
        }

        currentNode.visited = true;
        var dist = currentNode.dist;

        var path = this.buildPath(currentNode);
        var connected = config.expander(config, currentNode.vertex, path);
        n = connected.length;

        for (i = 0; i < n; ++i) {
          var neighbor = this.makeNode(connected[i].vertex);

          if (neighbor.visited) {
            continue;
          }

          var edge = connected[i].edge;
          var weight = 1;
          if (config.distance) {
            weight = config.distance(config, currentNode.vertex, neighbor.vertex, edge);
          }

          var alt = dist + weight;
          if (alt < neighbor.dist) {
            neighbor.dist = alt;
            neighbor.parent = currentNode;
            neighbor.parentEdge = edge;
            heap.push(neighbor);
          }
        }
      }
    }
  };
}
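
// A configuration sketch for the strategy above; the edge "weight" attribute
// is hypothetical. "dijkstra" requires an end vertex, and an optional
// config.distance callback supplies per-edge weights; without it, every edge
// has weight 1.
var dijkstraConfig = {
  strategy: "dijkstra",
  visitor: trackingVisitor,
  distance: function (config, vertex, neighbor, edge) {
    return edge.weight || 1;   // fall back to unit weight
  }
};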

////////////////////////////////////////////////////////////////////////////////
/// @brief implementation details for a* shortest path strategy
////////////////////////////////////////////////////////////////////////////////

function astarSearch () {
  return {
    nodes: { },

    requiresEndVertex: function () {
      return true;
    },

    makeNode: function (vertex) {
      var id = vertex._id;
      if (! this.nodes.hasOwnProperty(id)) {
        this.nodes[id] = { vertex: vertex, f: 0, g: 0, h: 0 };
      }

      return this.nodes[id];
    },

    vertexList: function (vertex) {
      var result = [ ];
      while (vertex) {
        result.push(vertex);
        vertex = vertex.parent;
      }
      return result;
    },

    buildPath: function (vertex) {
      var path = { vertices: [ vertex.vertex ], edges: [ ] };
      var v = vertex;

      while (v.parent) {
        path.vertices.unshift(v.parent.vertex);
        path.edges.unshift(v.parentEdge);
        v = v.parent;
      }
      return path;
    },

    run: function (config, result, startVertex, endVertex) {
      var maxIterations = config.maxIterations, visitCounter = 0;

      var heap = new BinaryHeap(function (node) {
        return node.f;
      });

      heap.push(this.makeNode(startVertex));

      while (heap.size() > 0) {
        if (visitCounter++ > maxIterations) {
          var err = new ArangoError();
          err.errorNum = arangodb.errors.ERROR_GRAPH_TOO_MANY_ITERATIONS.code;
          err.errorMessage = arangodb.errors.ERROR_GRAPH_TOO_MANY_ITERATIONS.message;
          throw err;
        }

        var currentNode = heap.pop();
        var i, n;

        if (currentNode.vertex._id === endVertex._id) {
          var vertices = this.vertexList(currentNode);
          if (config.order !== ArangoTraverser.PRE_ORDER) {
            vertices.reverse();
          }

          n = vertices.length;
          for (i = 0; i < n; ++i) {
            config.visitor(config, result, vertices[i].vertex, this.buildPath(vertices[i]));
          }
          return;
        }

        currentNode.closed = true;

        var path = this.buildPath(currentNode);
        var connected = config.expander(config, currentNode.vertex, path);
        n = connected.length;

        for (i = 0; i < n; ++i) {
          var neighbor = this.makeNode(connected[i].vertex);

          if (neighbor.closed) {
            continue;
          }

          var gScore = currentNode.g + 1; // + neighbor.cost;
          var beenVisited = neighbor.visited;

          if (! beenVisited || gScore < neighbor.g) {
            var edge = connected[i].edge;
            neighbor.visited = true;
            neighbor.parent = currentNode;
            neighbor.parentEdge = edge;
            if (config.distance) {
              neighbor.h = config.distance(config, neighbor.vertex, endVertex, edge);
            }
            else {
              neighbor.h = 1;
            }
            neighbor.g = gScore;
            neighbor.f = neighbor.g + neighbor.h;

            if (! beenVisited) {
              heap.push(neighbor);
            }
            else {
              heap.rescoreElement(neighbor);
            }
          }
        }
      }
    }
  };
}
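
// For the a* strategy above, config.distance serves as the heuristic from a
// vertex to the end vertex rather than as an edge weight; without it, a
// constant heuristic of 1 is used. The numeric "level" vertex attribute in
// this sketch is hypothetical.
var astarConfig = {
  strategy: "astar",
  visitor: trackingVisitor,
  distance: function (config, vertex, endVertex, edge) {
    return Math.abs(vertex.level - endVertex.level);
  }
};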

////////////////////////////////////////////////////////////////////////////////
/// @}

@ -958,7 +1220,9 @@ ArangoTraverser = function (config) {

  config.strategy = validate(config.strategy, {
    depthfirst: ArangoTraverser.DEPTH_FIRST,
    breadthfirst: ArangoTraverser.BREADTH_FIRST
    breadthfirst: ArangoTraverser.BREADTH_FIRST,
    astar: ArangoTraverser.ASTAR_SEARCH,
    dijkstra: ArangoTraverser.DIJKSTRA_SEARCH
  }, "strategy");

  config.order = validate(config.order, {

@ -1053,23 +1317,54 @@ ArangoTraverser = function (config) {
/// @brief execute the traversal
////////////////////////////////////////////////////////////////////////////////

ArangoTraverser.prototype.traverse = function (result, startVertex) {
  // check the start vertex
  if (startVertex === undefined || startVertex === null) {
    throw "invalid startVertex specified for traversal";
  }

ArangoTraverser.prototype.traverse = function (result, startVertex, endVertex) {
  // get the traversal strategy
  var strategy;
  if (this.config.strategy === ArangoTraverser.BREADTH_FIRST) {

  if (this.config.strategy === ArangoTraverser.ASTAR_SEARCH) {
    strategy = astarSearch();
  }
  else if (this.config.strategy === ArangoTraverser.DIJKSTRA_SEARCH) {
    strategy = dijkstraSearch();
  }
  else if (this.config.strategy === ArangoTraverser.BREADTH_FIRST) {
    strategy = breadthFirstSearch();
  }
  else {
    strategy = depthFirstSearch();
  }

  // check the start vertex
  if (startVertex === undefined ||
      startVertex === null ||
      typeof startVertex !== 'object') {
    var err1 = new ArangoError();
    err1.errorNum = arangodb.errors.ERROR_BAD_PARAMETER.code;
    err1.errorMessage = arangodb.errors.ERROR_BAD_PARAMETER.message +
                        ": invalid startVertex specified for traversal";
    throw err1;
  }

  if (strategy.requiresEndVertex() &&
      (endVertex === undefined ||
       endVertex === null ||
       typeof endVertex !== 'object')) {
    var err2 = new ArangoError();
    err2.errorNum = arangodb.errors.ERROR_BAD_PARAMETER.code;
    err2.errorMessage = arangodb.errors.ERROR_BAD_PARAMETER.message +
                        ": invalid endVertex specified for traversal";
    throw err2;
  }

  // run the traversal
  strategy.run(this.config, result, startVertex);
  try {
    strategy.run(this.config, result, startVertex, endVertex);
  }
  catch (err3) {
    if (typeof err3 !== "object" || ! err3._intentionallyAborted) {
      throw err3;
    }
  }
};
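
// A usage sketch for the extended signature, using the dijkstraConfig sketched
// earlier; the collection and document handles are hypothetical. Shortest-path
// strategies require the end vertex as third argument, while depth-first and
// breadth-first ignore it.
var traverser = new ArangoTraverser(dijkstraConfig);
var result = { visited: { vertices: [ ], paths: [ ] } };
traverser.traverse(result, db.v.document("v/start"), db.v.document("v/end"));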

////////////////////////////////////////////////////////////////////////////////

@ -1115,6 +1410,18 @@ ArangoTraverser.BREADTH_FIRST = 0;

ArangoTraverser.DEPTH_FIRST = 1;

////////////////////////////////////////////////////////////////////////////////
/// @brief astar search
////////////////////////////////////////////////////////////////////////////////

ArangoTraverser.ASTAR_SEARCH = 2;

////////////////////////////////////////////////////////////////////////////////
/// @brief dijkstra search
////////////////////////////////////////////////////////////////////////////////

ArangoTraverser.DIJKSTRA_SEARCH = 3;

////////////////////////////////////////////////////////////////////////////////
/// @brief pre-order traversal
////////////////////////////////////////////////////////////////////////////////

@ -1180,6 +1487,7 @@ exports.visitAllFilter = visitAllFilter;
exports.maxDepthFilter = maxDepthFilter;
exports.minDepthFilter = minDepthFilter;
exports.includeMatchingAttributesFilter = includeMatchingAttributesFilter;
exports.abortedException = abortedException;

exports.Traverser = ArangoTraverser;

@ -0,0 +1,189 @@
/*jslint indent: 2, nomen: true, maxlen: 100, sloppy: true, vars: true, white: true, plusplus: true, continue: true */
/*global exports */

////////////////////////////////////////////////////////////////////////////////
/// @brief binary min heap
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2011-2013 triagens GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is triAGENS GmbH, Cologne, Germany
///
/// @author Jan Steemann
/// @author Copyright 2011-2013, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////////////////////////
/// This file contains significant portions from the min heap published here:
/// http://github.com/bgrins/javascript-astar
/// Copyright (c) 2010, Brian Grinstead, http://briangrinstead.com
/// Freely distributable under the MIT License.
/// Includes Binary Heap (with modifications) from Marijn Haverbeke.
/// http://eloquentjavascript.net/appendix2.html
////////////////////////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////////////////////////
/// @brief constructor
////////////////////////////////////////////////////////////////////////////////

function BinaryHeap (scoreFunction) {
  this.values = [ ];
  this.scoreFunction = scoreFunction;
}

BinaryHeap.prototype = {

////////////////////////////////////////////////////////////////////////////////
/// @brief push an element into the heap
////////////////////////////////////////////////////////////////////////////////

  push: function (element) {
    this.values.push(element);
    this._sinkDown(this.values.length - 1);
  },

////////////////////////////////////////////////////////////////////////////////
/// @brief pop the min element from the heap
////////////////////////////////////////////////////////////////////////////////

  pop: function () {
    var result = this.values[0];
    var end = this.values.pop();
    if (this.values.length > 0) {
      this.values[0] = end;
      this._bubbleUp(0);
    }
    return result;
  },

////////////////////////////////////////////////////////////////////////////////
/// @brief remove a specific element from the heap
////////////////////////////////////////////////////////////////////////////////

  remove: function (node) {
    var i = this.values.indexOf(node);
    var end = this.values.pop();

    if (i !== this.values.length - 1) {
      this.values[i] = end;

      if (this.scoreFunction(end) < this.scoreFunction(node)) {
        this._sinkDown(i);
      }
      else {
        this._bubbleUp(i);
      }
    }
  },

////////////////////////////////////////////////////////////////////////////////
/// @brief return number of elements in heap
////////////////////////////////////////////////////////////////////////////////

  size: function () {
    return this.values.length;
  },

////////////////////////////////////////////////////////////////////////////////
/// @brief reposition an element in the heap
////////////////////////////////////////////////////////////////////////////////

  rescoreElement: function (node) {
    this._sinkDown(this.values.indexOf(node));
  },

////////////////////////////////////////////////////////////////////////////////
/// @brief move an element down the heap
////////////////////////////////////////////////////////////////////////////////

  _sinkDown: function (n) {
    var element = this.values[n];

    while (n > 0) {
      var parentN = Math.floor((n + 1) / 2) - 1,
          parent = this.values[parentN];

      if (this.scoreFunction(element) < this.scoreFunction(parent)) {
        this.values[parentN] = element;
        this.values[n] = parent;
        n = parentN;
      }
      else {
        break;
      }
    }
  },

////////////////////////////////////////////////////////////////////////////////
/// @brief bubble up an element
////////////////////////////////////////////////////////////////////////////////

  _bubbleUp: function (n) {
    var length = this.values.length,
        element = this.values[n],
        elemScore = this.scoreFunction(element);

    while (true) {
      var child2n = (n + 1) * 2;
      var child1n = child2n - 1;
      var swap = null;
      var child1Score;

      if (child1n < length) {
        var child1 = this.values[child1n];
        child1Score = this.scoreFunction(child1);

        if (child1Score < elemScore) {
          swap = child1n;
        }
      }

      if (child2n < length) {
        var child2 = this.values[child2n];
        var child2Score = this.scoreFunction(child2);
        if (child2Score < (swap === null ? elemScore : child1Score)) {
          swap = child2n;
        }
      }

      if (swap !== null) {
        this.values[n] = this.values[swap];
        this.values[swap] = element;
        n = swap;
      }
      else {
        break;
      }
    }
  }
};

// -----------------------------------------------------------------------------
// --SECTION-- MODULE EXPORTS
// -----------------------------------------------------------------------------

exports.BinaryHeap = BinaryHeap;

// -----------------------------------------------------------------------------
// --SECTION-- END-OF-FILE
// -----------------------------------------------------------------------------

// Local Variables:
// mode: outline-minor
// outline-regexp: "^\\(/// @brief\\|/// @addtogroup\\|// --SECTION--\\|/// @page\\|/// @\\}\\)"
// End:
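
// A minimal usage sketch for the heap above: the score function makes it a
// min-heap over whatever key it returns, and rescoreElement() repositions an
// entry after its score has been lowered. The node objects are hypothetical.
var heap = new BinaryHeap(function (node) { return node.dist; });
heap.push({ id: "a", dist: 3 });
heap.push({ id: "b", dist: 1 });
heap.pop().id;   // "b", the entry with the smallest dist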

@ -3254,6 +3254,36 @@ function FIRST_DOCUMENT () {
  return null;
}

////////////////////////////////////////////////////////////////////////////////
/// @brief return the parts of a document identifier separately
///
/// returns a document with the attributes `collection` and `key` or fails if
/// the individual parts cannot be determined.
////////////////////////////////////////////////////////////////////////////////

function PARSE_IDENTIFIER (value) {
  "use strict";

  if (TYPEWEIGHT(value) === TYPEWEIGHT_STRING) {
    var parts = value.split('/');
    if (parts.length === 2) {
      return {
        collection: parts[0],
        key: parts[1]
      };
    }
    // fall through intentional
  }
  else if (TYPEWEIGHT(value) === TYPEWEIGHT_DOCUMENT) {
    if (value.hasOwnProperty('_id')) {
      return PARSE_IDENTIFIER(value._id);
    }
    // fall through intentional
  }

  THROW(INTERNAL.errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH, "PARSE_IDENTIFIER");
}
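
// A usage sketch from arangosh (the document handle is hypothetical); the AQL
// function accepts either a handle string or a document with an _id attribute:
var stmt = db._createStatement({ query: "RETURN PARSE_IDENTIFIER('users/12345')" });
stmt.execute().toArray();
// => [ { "collection" : "users", "key" : "12345" } ]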

////////////////////////////////////////////////////////////////////////////////
/// @brief check whether a document has a specific attribute
////////////////////////////////////////////////////////////////////////////////

@ -4048,6 +4078,7 @@ exports.GRAPH_NEIGHBORS = GRAPH_NEIGHBORS;
exports.NOT_NULL = NOT_NULL;
exports.FIRST_LIST = FIRST_LIST;
exports.FIRST_DOCUMENT = FIRST_DOCUMENT;
exports.PARSE_IDENTIFIER = PARSE_IDENTIFIER;
exports.HAS = HAS;
exports.ATTRIBUTES = ATTRIBUTES;
exports.UNSET = UNSET;

@ -252,6 +252,46 @@ function dropLocalDatabases (plannedDatabases) {
    }
  }

////////////////////////////////////////////////////////////////////////////////
/// @brief clean up what's in Current/Databases for ourselves
////////////////////////////////////////////////////////////////////////////////

function cleanupCurrentDatabases () {
  var ourselves = ArangoServerState.id();

  var dropDatabaseAgency = function (payload) {
    try {
      ArangoAgency.remove("Current/Databases/" + payload.name + "/" + ourselves);
    }
    catch (err) {
      // ignore errors
    }
  };

  var all = ArangoAgency.get("Current/Databases", true);
  var currentDatabases = getByPrefix(all, "Current/Databases/", true);
  var localDatabases = getLocalDatabases();
  var name;

  for (name in currentDatabases) {
    if (currentDatabases.hasOwnProperty(name)) {
      if (! localDatabases.hasOwnProperty(name)) {
        // we found a database we don't have locally

        if (currentDatabases[name].hasOwnProperty(ourselves)) {
          // we are entered for a database that we don't have locally
          console.info("removing entry for local database '%s'", name);

          writeLocked({ part: "Current" },
                      dropDatabaseAgency,
                      [ { name: name } ]);
        }

      }
    }
  }
}

////////////////////////////////////////////////////////////////////////////////
/// @brief handle database changes
////////////////////////////////////////////////////////////////////////////////

@ -262,6 +302,7 @@ function handleDatabaseChanges (plan, current) {
  db._useDatabase("_system");
  createLocalDatabases(plannedDatabases);
  dropLocalDatabases(plannedDatabases);
  cleanupCurrentDatabases();
}

////////////////////////////////////////////////////////////////////////////////

@ -271,8 +312,8 @@ function handleDatabaseChanges (plan, current) {
function createLocalCollections (plannedCollections) {
  var ourselves = ArangoServerState.id();

  var createCollectionAgency = function (database, payload) {
    ArangoAgency.set("Current/Collections/" + database + "/" + payload.id + "/" + ourselves,
  var createCollectionAgency = function (database, shard, payload) {
    ArangoAgency.set("Current/Collections/" + database + "/" + payload.id + "/" + shard,
                     payload);
  };

@ -334,9 +375,10 @@ function createLocalCollections (plannedCollections) {
          payload.errorMessage = err2.errorMessage;
        }

        payload.DBserver = ourselves;
        writeLocked({ part: "Current" },
                    createCollectionAgency,
                    [ database, payload ]);
                    [ database, shard, payload ]);
      }
      else {
        // collection exists, now compare collection properties

@ -368,9 +410,10 @@ function createLocalCollections (plannedCollections) {
            payload.errorMessage = err3.errorMessage;
          }

          payload.DBserver = ourselves;
          writeLocked({ part: "Current" },
                      createCollectionAgency,
                      [ database, payload ]);
                      [ database, shard, payload ]);
        }
      }
    }

@ -397,9 +440,9 @@ function createLocalCollections (plannedCollections) {
function dropLocalCollections (plannedCollections) {
  var ourselves = ArangoServerState.id();

  var dropCollectionAgency = function (database, id) {
  var dropCollectionAgency = function (database, shardID, id) {
    try {
      ArangoAgency.remove("Current/Collections/" + database + "/" + id + "/" + ourselves);
      ArangoAgency.remove("Current/Collections/" + database + "/" + id + "/" + shardID);
    }
    catch (err) {
      // ignore errors

@ -446,7 +489,7 @@ function dropLocalCollections (plannedCollections) {

      writeLocked({ part: "Current" },
                  dropCollectionAgency,
                  [ database, collections[collection].planId ]);
                  [ database, collection, collections[collection].planId ]);
    }
  }
}

@ -822,6 +822,17 @@ exports.scanAppDirectory = function () {
  scanDirectory(module.appPath());
};

////////////////////////////////////////////////////////////////////////////////
/// @brief rescans the FOXX application directory
/// this function is a trampoline for scanAppDirectory
/// the shorter function name is only here to keep compatibility with the
/// client-side foxx manager
////////////////////////////////////////////////////////////////////////////////

exports.rescan = function () {
  return exports.scanAppDirectory();
};
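
// A usage sketch from arangosh; the module path used here is an assumption
// about how the Foxx manager is required:
var fm = require("org/arangodb/foxx-manager");
fm.rescan();   // same effect as fm.scanAppDirectory()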

////////////////////////////////////////////////////////////////////////////////
/// @brief mounts a FOXX application
///
@ -1869,6 +1869,90 @@ function ahuacatlFunctionsTestSuite () {
      assertEqual(expected, actual);
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief test parse identifier function
////////////////////////////////////////////////////////////////////////////////

    testParseIdentifier : function () {
      var actual;

      actual = getQueryResults("RETURN PARSE_IDENTIFIER('foo/bar')");
      assertEqual([ { collection: 'foo', key: 'bar' } ], actual);

      actual = getQueryResults("RETURN PARSE_IDENTIFIER('this-is-a-collection-name/and-this-is-an-id')");
      assertEqual([ { collection: 'this-is-a-collection-name', key: 'and-this-is-an-id' } ], actual);

      actual = getQueryResults("RETURN PARSE_IDENTIFIER('MY_COLLECTION/MY_DOC')");
      assertEqual([ { collection: 'MY_COLLECTION', key: 'MY_DOC' } ], actual);

      actual = getQueryResults("RETURN PARSE_IDENTIFIER('_users/AbC')");
      assertEqual([ { collection: '_users', key: 'AbC' } ], actual);

      actual = getQueryResults("RETURN PARSE_IDENTIFIER({ _id: 'foo/bar', value: 'baz' })");
      assertEqual([ { collection: 'foo', key: 'bar' } ], actual);

      actual = getQueryResults("RETURN PARSE_IDENTIFIER({ ignore: true, _id: '_system/VALUE', value: 'baz' })");
      assertEqual([ { collection: '_system', key: 'VALUE' } ], actual);

      actual = getQueryResults("RETURN PARSE_IDENTIFIER({ value: 123, _id: 'Some-Odd-Collection/THIS_IS_THE_KEY' })");
      assertEqual([ { collection: 'Some-Odd-Collection', key: 'THIS_IS_THE_KEY' } ], actual);
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief test parse identifier function
////////////////////////////////////////////////////////////////////////////////

    testParseIdentifierCollection : function () {
      var cn = "UnitTestsAhuacatlFunctions";

      internal.db._drop(cn);
      var cx = internal.db._create(cn);
      cx.save({ "title" : "123", "value" : 456, "_key" : "foobar" });
      cx.save({ "_key" : "so-this-is-it", "title" : "nada", "value" : 123 });

      var expected, actual;

      expected = [ { collection: cn, key: "foobar" } ];
      actual = getQueryResults("RETURN PARSE_IDENTIFIER(DOCUMENT(CONCAT(@cn, '/', @key)))", { cn: cn, key: "foobar" });
      assertEqual(expected, actual);

      expected = [ { collection: cn, key: "foobar" } ];
      actual = getQueryResults("RETURN PARSE_IDENTIFIER(DOCUMENT(CONCAT(@cn, '/', @key)))", { cn: cn, key: "foobar" });
      assertEqual(expected, actual);

      expected = [ { collection: cn, key: "foobar" } ];
      actual = getQueryResults("RETURN PARSE_IDENTIFIER(DOCUMENT(CONCAT(@cn, '/', 'foobar')))", { cn: cn });
      assertEqual(expected, actual);

      expected = [ { collection: cn, key: "foobar" } ];
      actual = getQueryResults("RETURN PARSE_IDENTIFIER(DOCUMENT([ @key ])[0])", { key: "UnitTestsAhuacatlFunctions/foobar" });
      assertEqual(expected, actual);

      expected = [ { collection: cn, key: "so-this-is-it" } ];
      actual = getQueryResults("RETURN PARSE_IDENTIFIER(DOCUMENT([ 'UnitTestsAhuacatlFunctions/so-this-is-it' ])[0])");
      assertEqual(expected, actual);

      internal.db._drop(cn);
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief test parse identifier function, error cases
////////////////////////////////////////////////////////////////////////////////

    testParseIdentifierErrors : function () {
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_NUMBER_MISMATCH.code, "RETURN PARSE_IDENTIFIER()");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_NUMBER_MISMATCH.code, "RETURN PARSE_IDENTIFIER('foo', 'bar')");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER(null)");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER(false)");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER(3)");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER(\"foo\")");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER('foo bar')");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER('foo/bar/baz')");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER([ ])");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER({ })");
      assertQueryError(errors.ERROR_QUERY_FUNCTION_ARGUMENT_TYPE_MISMATCH.code, "RETURN PARSE_IDENTIFIER({ foo: 'bar' })");
    },

////////////////////////////////////////////////////////////////////////////////
/// @brief test document function
////////////////////////////////////////////////////////////////////////////////

@ -413,15 +413,12 @@ void RestJobHandler::getJob () {
///
/// @RESTURLPARAM{type,string,required}
/// The type of jobs to delete. `type` can be:
///
/// - `all`: deletes all job results. Currently executing or queued async jobs
///   will not be stopped by this call.
///
/// - `expired`: deletes expired results. To determine the expiration status of
///   a result, pass the `stamp` URL parameter. `stamp` needs to be a UNIX
///   timestamp, and all async job results created at a lower timestamp will be
///   deleted.
///
/// - an actual job-id: in this case, the call will remove the result of the
///   specified async job. If the job is currently executing or queued, it will
///   not be aborted.

@ -158,8 +158,8 @@ ERROR_CLUSTER_COULD_NOT_REMOVE_COLLECTION_IN_CURRENT,1459,"could not remove coll
ERROR_CLUSTER_DATABASE_NAME_EXISTS,1460,"database name already exists","Will be raised when a coordinator in a cluster tries to create a database and the database name already exists."
ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE_IN_PLAN,1461,"could not create database in plan","Will be raised when a coordinator in a cluster cannot create an entry for a new database in the Plan hierarchy in the agency."
ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE,1462,"could not create database","Will be raised when a coordinator in a cluster notices that some DBServers report problems when creating databases for a new cluster wide database."
ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN,1463,"could not remove databasefrom plan","Will be raised when a coordinator in a cluster cannot remove an entry for a database in the Plan hierarchy in the agency."
ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT,1464,"could not remove databasefrom current","Will be raised when a coordinator in a cluster cannot remove an entry for a database in the Current hierarchy in the agency."
ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN,1463,"could not remove database from plan","Will be raised when a coordinator in a cluster cannot remove an entry for a database in the Plan hierarchy in the agency."
ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT,1464,"could not remove database from current","Will be raised when a coordinator in a cluster cannot remove an entry for a database in the Current hierarchy in the agency."

################################################################################
## ArangoDB query errors

@ -258,6 +258,7 @@ ERROR_GRAPH_INVALID_EDGE,1906,"invalid edge","Will be raised when an invalid edg
ERROR_GRAPH_COULD_NOT_CREATE_EDGE,1907,"could not create edge","Will be raised when the edge could not be created"
ERROR_GRAPH_COULD_NOT_CHANGE_EDGE,1908,"could not change edge","Will be raised when the edge could not be changed"
ERROR_GRAPH_TOO_MANY_ITERATIONS,1909,"too many iterations","Will be raised when too many iterations are done in a graph traversal"
ERROR_GRAPH_INVALID_FILTER_RESULT,1910,"invalid filter result","Will be raised when an invalid filter result is returned in a graph traversal"

################################################################################
## Session errors

@ -119,8 +119,8 @@ void TRI_InitialiseErrorMessages (void) {
  REG_ERROR(ERROR_CLUSTER_DATABASE_NAME_EXISTS, "database name already exists");
  REG_ERROR(ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE_IN_PLAN, "could not create database in plan");
  REG_ERROR(ERROR_CLUSTER_COULD_NOT_CREATE_DATABASE, "could not create database");
  REG_ERROR(ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN, "could not remove databasefrom plan");
  REG_ERROR(ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT, "could not remove databasefrom current");
  REG_ERROR(ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN, "could not remove database from plan");
  REG_ERROR(ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT, "could not remove database from current");
  REG_ERROR(ERROR_QUERY_KILLED, "query killed");
  REG_ERROR(ERROR_QUERY_PARSE, "%s");
  REG_ERROR(ERROR_QUERY_EMPTY, "query is empty");

@ -179,6 +179,7 @@ void TRI_InitialiseErrorMessages (void) {
  REG_ERROR(ERROR_GRAPH_COULD_NOT_CREATE_EDGE, "could not create edge");
  REG_ERROR(ERROR_GRAPH_COULD_NOT_CHANGE_EDGE, "could not change edge");
  REG_ERROR(ERROR_GRAPH_TOO_MANY_ITERATIONS, "too many iterations");
  REG_ERROR(ERROR_GRAPH_INVALID_FILTER_RESULT, "invalid filter result");
  REG_ERROR(ERROR_SESSION_UNKNOWN, "unknown session");
  REG_ERROR(ERROR_SESSION_EXPIRED, "session expired");
  REG_ERROR(SIMPLE_CLIENT_UNKNOWN_ERROR, "unknown client error");

@ -274,10 +274,10 @@ extern "C" {
/// Will be raised when a coordinator in a cluster notices that some
/// DBServers report problems when creating databases for a new cluster wide
/// database.
/// - 1463: @LIT{could not remove databasefrom plan}
/// - 1463: @LIT{could not remove database from plan}
/// Will be raised when a coordinator in a cluster cannot remove an entry for
/// a database in the Plan hierarchy in the agency.
/// - 1464: @LIT{could not remove databasefrom current}
/// - 1464: @LIT{could not remove database from current}
/// Will be raised when a coordinator in a cluster cannot remove an entry for
/// a database in the Current hierarchy in the agency.
/// - 1500: @LIT{query killed}

@ -417,6 +417,9 @@ extern "C" {
/// Will be raised when the edge could not be changed
/// - 1909: @LIT{too many iterations}
/// Will be raised when too many iterations are done in a graph traversal
/// - 1910: @LIT{invalid filter result}
/// Will be raised when an invalid filter result is returned in a graph
/// traversal
/// - 1950: @LIT{unknown session}
/// Will be raised when an invalid/unknown session id is passed to the server
/// - 1951: @LIT{session expired}

@ -1602,7 +1605,7 @@ void TRI_InitialiseErrorMessages (void);
////////////////////////////////////////////////////////////////////////////////
/// @brief 1463: ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_PLAN
///
/// could not remove databasefrom plan
/// could not remove database from plan
///
/// Will be raised when a coordinator in a cluster cannot remove an entry for a
/// database in the Plan hierarchy in the agency.

@ -1613,7 +1616,7 @@ void TRI_InitialiseErrorMessages (void);
////////////////////////////////////////////////////////////////////////////////
/// @brief 1464: ERROR_CLUSTER_COULD_NOT_REMOVE_DATABASE_IN_CURRENT
///
/// could not remove databasefrom current
/// could not remove database from current
///
/// Will be raised when a coordinator in a cluster cannot remove an entry for a
/// database in the Current hierarchy in the agency.

@ -2217,6 +2220,17 @@ void TRI_InitialiseErrorMessages (void);

#define TRI_ERROR_GRAPH_TOO_MANY_ITERATIONS (1909)

////////////////////////////////////////////////////////////////////////////////
/// @brief 1910: ERROR_GRAPH_INVALID_FILTER_RESULT
///
/// invalid filter result
///
/// Will be raised when an invalid filter result is returned in a graph
/// traversal
////////////////////////////////////////////////////////////////////////////////

#define TRI_ERROR_GRAPH_INVALID_FILTER_RESULT (1910)

////////////////////////////////////////////////////////////////////////////////
/// @brief 1950: ERROR_SESSION_UNKNOWN
///