mirror of https://gitee.com/bigwinds/arangodb
Named indices (#8370)
This commit is contained in:
parent
cdb4b46554
commit
413e90508f
@@ -1,6 +1,10 @@
 devel
 -----
 
+* added "name" property for indices
+
+  If a name is not specified on index creation, one will be auto-generated.
+
 * Under normal circumstances there should be no need to connect to a
   database server in a cluster with one of the client tools, and it is
   likely that any user operations carried out there with one of the client
@@ -9,7 +13,7 @@ devel
   The client tools arangosh, arangodump and arangorestore will now emit
   a warning when connecting with them to a database server node in a cluster.
 
-* fix compation behaviour of followers
+* fix compaction behavior of followers
 
 * added "random" masking to mask any data type, added wildcard masking
@@ -227,7 +227,8 @@ Fetches information about the index with the given _indexHandle_ and returns it.
 
 The handle of the index to look up. This can either be a fully-qualified
 identifier or the collection-specific key of the index. If the value is an
-object, its _id_ property will be used instead.
+object, its _id_ property will be used instead. Alternatively, the index
+may be looked up by name.
 
 **Examples**
 
@@ -243,6 +244,12 @@ assert.equal(result.id, index.id);
 
 const result = await collection.index(index.id.split("/")[1]);
 assert.equal(result.id, index.id);
+
+// -- or --
+
+const result = await collection.index(index.name);
+assert.equal(result.id, index.id);
+assert.equal(result.name, index.name);
 // result contains the properties of the index
 ```
 
@@ -35,6 +35,15 @@ Because the index handle is unique within the database, you can leave out the
 db._index("demo/362549736");
 ```
 
+An index may also be looked up by its name. Since names are only unique within
+a collection, rather than within the database, the lookup must also include the
+collection name.
+
+```js
+db._index("demo/primary")
+db.demo.index("primary")
+```
+
 Collection Methods
 ------------------
 
@@ -86,6 +95,10 @@ Other attributes may be necessary, depending on the index type.
 - *fulltext*: fulltext index
 - *geo*: geo index, with _one_ or _two_ attributes
 
+**name** can be a string. Index names are subject to the same character
+restrictions as collection names. If omitted, a name will be auto-generated so
+that it is unique with respect to the collection, e.g. `idx_832910498`.
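The *name* attribute added by this hunk can be sketched in arangosh as follows (the collection, field, and index names here are made up for illustration; this assumes a running ArangoDB instance):

```js
// create a test collection (hypothetical name)
db._create("demo");

// create a hash index with an explicit name
db.demo.ensureIndex({ type: "hash", fields: ["value"], name: "myHashIndex" });

// omit `name` and the server auto-generates one, e.g. `idx_832910498`
db.demo.ensureIndex({ type: "skiplist", fields: ["other"] });
```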
+
 **sparse** can be *true* or *false*.
 
 For *hash*, and *skiplist* the sparsity can be controlled, *fulltext* and *geo*

@@ -260,4 +273,3 @@ used (if you omit `colors: false` you will get nice colors in ArangoShell):
 ~db._drop("example");
 @END_EXAMPLE_ARANGOSH_OUTPUT
 @endDocuBlock IndexVerify
-
@@ -214,6 +214,15 @@ entries, and will continue to work.
 
 Existing `_modules` collections will also remain functional.
 
+### Named indices
+
+Indices now have an additional `name` field, which allows for more useful
+identifiers. System indices, like the primary and edge indices, have default
+names (`primary` and `edge`, respectively). If no `name` value is specified
+on index creation, one will be auto-generated (e.g. `idx_13820395`). The index
+name _cannot_ be changed after index creation. No two indices on the same
+collection may share the same name, but two indices on different collections
+may.
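The naming rules in this release note can be sketched in arangosh (collection and index names are illustrative; this assumes a running ArangoDB instance):

```js
db._create("demo");

// system indices carry default names; the primary index is named "primary"
db.demo.getIndexes()[0].name;

// user-defined names must be unique per collection only...
db.demo.ensureIndex({ type: "hash", fields: ["a"], name: "byA" });

// ...so the same name may be reused on a different collection
db._create("other");
db.other.ensureIndex({ type: "hash", fields: ["a"], name: "byA" });
```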
 
 Client tools
 ------------
@@ -56,3 +56,14 @@ undefined.
 This change is about making queries as the above fail with a parse error, as an
 unknown variable `key1` is accessed here, avoiding the undefined behavior. This is
 also in line with what the documentation states about variable invalidation.
+
+Miscellaneous
+-------------
+
+### Index creation
+
+In previous versions of ArangoDB, if one attempted to create an index with a
+specified `_id`, and that `_id` was already in use, the server would typically
+return the existing index with matching `_id`. This is somewhat unintuitive, as
+it would ignore whether the rest of the definition matched. This behavior has
+been changed so that the server will now return a duplicate identifier error.
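A sketch of the changed behavior in arangosh (the note above describes a duplicate `_id`; per this commit the same duplicate identifier error also applies to a reused `name`, which is what the hypothetical example below triggers):

```js
db._create("demo");
db.demo.ensureIndex({ type: "hash", fields: ["a"], name: "byA" });

// previously a duplicate identifier could silently return the existing
// index even though the definition differs; it now raises a duplicate
// identifier error
db.demo.ensureIndex({ type: "hash", fields: ["b"], name: "byA" });
```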
@@ -32,6 +32,10 @@ Indexing the system attribute *_id* is not supported for user-defined indexes.
 Manually creating an index using *_id* as an index attribute will fail with
 an error.
 
+Optionally, an index name may be specified as a string in the *name* attribute.
+Index names have the same restrictions as collection names. If no value is
+specified, one will be auto-generated.
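As a sketch of this endpoint with the *name* attribute (the collection name and payload are illustrative; `arango.POST` is arangosh's helper for raw HTTP calls against the connected server):

```js
arango.POST("/_api/index?collection=demo", {
  type: "hash",
  fields: ["value"],
  name: "myHashIndex"  // optional; auto-generated when omitted
});
```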
+
 Some indexes can be created as unique or non-unique variants. Uniqueness
 can be controlled for most indexes by specifying the *unique* flag in the
 index details. Setting it to *true* will create a unique index.

@@ -76,4 +80,3 @@ target index will not support, then an *HTTP 400* is returned.
 @RESTRETURNCODE{404}
 If *collection* is unknown, then an *HTTP 404* is returned.
 @endDocuBlock
-
@@ -496,8 +496,7 @@ void ClusterInfo::loadPlan() {
   auto planSlice = planBuilder->slice();
 
   if (!planSlice.isObject()) {
-    LOG_TOPIC(ERR, Logger::CLUSTER)
-        << "\"Plan\" is not an object in agency";
+    LOG_TOPIC(ERR, Logger::CLUSTER) << "\"Plan\" is not an object in agency";
 
     return;
   }

@@ -512,12 +511,12 @@ void ClusterInfo::loadPlan() {
     }
   }
 
-  LOG_TOPIC(TRACE, Logger::CLUSTER)
-      << "loadPlan: newPlanVersion=" << newPlanVersion;
+  LOG_TOPIC(TRACE, Logger::CLUSTER) << "loadPlan: newPlanVersion=" << newPlanVersion;
 
   if (newPlanVersion == 0) {
     LOG_TOPIC(WARN, Logger::CLUSTER)
-        << "Attention: /arango/Plan/Version in the agency is not set or not a positive number.";
+        << "Attention: /arango/Plan/Version in the agency is not set or not a "
+           "positive number.";
   }
 
   {
@@ -598,8 +597,7 @@ void ClusterInfo::loadPlan() {
       LOG_TOPIC(INFO, Logger::AGENCY)
          << "Views in the plan is not a valid json object."
          << " Views will be ignored for now and the invalid information"
-         << " will be repaired. VelocyPack: "
-         << viewsSlice.toJson();
+         << " will be repaired. VelocyPack: " << viewsSlice.toJson();
 
      continue;
    }

@@ -627,8 +625,7 @@ void ClusterInfo::loadPlan() {
       LOG_TOPIC(INFO, Logger::AGENCY)
          << "View entry is not a valid json object."
          << " The view will be ignored for now and the invalid "
-         << "information will be repaired. VelocyPack: "
-         << viewSlice.toJson();
+         << "information will be repaired. VelocyPack: " << viewSlice.toJson();
 
      continue;
    }

@@ -645,8 +642,7 @@ void ClusterInfo::loadPlan() {
       LOG_TOPIC(ERR, Logger::AGENCY)
          << "Failed to create view '" << viewId
          << "'. The view will be ignored for now and the invalid "
-         << "information will be repaired. VelocyPack: "
-         << viewSlice.toJson();
+         << "information will be repaired. VelocyPack: " << viewSlice.toJson();
      planValid = false;  // view creation failure
 
      continue;
@@ -668,8 +664,7 @@ void ClusterInfo::loadPlan() {
          << "Failed to load information for view '" << viewId
          << "': " << ex.what() << ". invalid information in Plan. The "
          << "view will be ignored for now and the invalid "
-         << "information will be repaired. VelocyPack: "
-         << viewSlice.toJson();
+         << "information will be repaired. VelocyPack: " << viewSlice.toJson();
 
      TRI_ASSERT(false);
      continue;

@@ -682,8 +677,7 @@ void ClusterInfo::loadPlan() {
          << "Failed to load information for view '" << viewId
          << ". invalid information in Plan. The view will "
          << "be ignored for now and the invalid information will "
-         << "be repaired. VelocyPack: "
-         << viewSlice.toJson();
+         << "be repaired. VelocyPack: " << viewSlice.toJson();
 
      TRI_ASSERT(false);
      continue;

@@ -757,8 +751,7 @@ void ClusterInfo::loadPlan() {
       LOG_TOPIC(INFO, Logger::AGENCY)
          << "Collections in the plan is not a valid json object."
          << " Collections will be ignored for now and the invalid "
-         << "information will be repaired. VelocyPack: "
-         << collectionsSlice.toJson();
+         << "information will be repaired. VelocyPack: " << collectionsSlice.toJson();
 
      continue;
    }
@@ -787,8 +780,7 @@ void ClusterInfo::loadPlan() {
       LOG_TOPIC(WARN, Logger::AGENCY)
          << "Collection entry is not a valid json object."
          << " The collection will be ignored for now and the invalid "
-         << "information will be repaired. VelocyPack: "
-         << collectionSlice.toJson();
+         << "information will be repaired. VelocyPack: " << collectionSlice.toJson();
 
      continue;
    }

@@ -826,8 +818,7 @@ void ClusterInfo::loadPlan() {
         if (isCoordinator) {
           // copying over index estimates from the old version of the
           // collection into the new one
-          LOG_TOPIC(TRACE, Logger::CLUSTER)
-              << "copying index estimates";
+          LOG_TOPIC(TRACE, Logger::CLUSTER) << "copying index estimates";
 
           // it is effectively safe to access _plannedCollections in
           // read-only mode here, as the only places that modify

@@ -887,12 +878,10 @@ void ClusterInfo::loadPlan() {
        // If it happens in unhealthy situations the
        // cluster should not fail.
        LOG_TOPIC(ERR, Logger::AGENCY)
-           << "Failed to load information for collection '"
-           << collectionId << "': " << ex.what()
-           << ". invalid information in plan. The "
+           << "Failed to load information for collection '" << collectionId
+           << "': " << ex.what() << ". invalid information in plan. The "
            << "collection will be ignored for now and the invalid "
-           << "information will be repaired. VelocyPack: "
-           << collectionSlice.toJson();
+           << "information will be repaired. VelocyPack: " << collectionSlice.toJson();
 
        TRI_ASSERT(false);
        continue;
@@ -905,8 +894,7 @@ void ClusterInfo::loadPlan() {
            << "Failed to load information for collection '" << collectionId
            << ". invalid information in plan. The collection will "
            << "be ignored for now and the invalid information will "
-           << "be repaired. VelocyPack: "
-           << collectionSlice.toJson();
+           << "be repaired. VelocyPack: " << collectionSlice.toJson();
 
        TRI_ASSERT(false);
        continue;

@@ -998,8 +986,7 @@ void ClusterInfo::loadCurrent() {
   auto currentSlice = currentBuilder->slice();
 
   if (!currentSlice.isObject()) {
-    LOG_TOPIC(ERR, Logger::CLUSTER)
-        << "Current is not an object!";
+    LOG_TOPIC(ERR, Logger::CLUSTER) << "Current is not an object!";
 
     return;
   }

@@ -1016,7 +1003,8 @@ void ClusterInfo::loadCurrent() {
 
   if (newCurrentVersion == 0) {
     LOG_TOPIC(WARN, Logger::CLUSTER)
-        << "Attention: /arango/Current/Version in the agency is not set or not a positive number.";
+        << "Attention: /arango/Current/Version in the agency is not set or not "
+           "a positive number.";
   }
 
   {
@@ -1025,7 +1013,8 @@ void ClusterInfo::loadCurrent() {
     if (_currentProt.isValid && newCurrentVersion <= _currentVersion) {
       LOG_TOPIC(DEBUG, Logger::CLUSTER)
           << "We already know this or a later version, do not update. "
-          << "newCurrentVersion=" << newCurrentVersion << " _currentVersion=" << _currentVersion;
+          << "newCurrentVersion=" << newCurrentVersion
+          << " _currentVersion=" << _currentVersion;
 
       return;
     }

@@ -1052,7 +1041,8 @@ void ClusterInfo::loadCurrent() {
 
       std::unordered_map<ServerID, velocypack::Slice> serverList;
 
-      for (auto const& serverSlicePair: velocypack::ObjectIterator(databaseSlicePair.value)) {
+      for (auto const& serverSlicePair :
+           velocypack::ObjectIterator(databaseSlicePair.value)) {
         serverList.emplace(serverSlicePair.key.copyString(), serverSlicePair.value);
       }
 

@@ -1069,7 +1059,8 @@ void ClusterInfo::loadCurrent() {
       auto const databaseName = databaseSlice.key.copyString();
       DatabaseCollectionsCurrent databaseCollections;
 
-      for (auto const& collectionSlice: velocypack::ObjectIterator(databaseSlice.value)) {
+      for (auto const& collectionSlice :
+           velocypack::ObjectIterator(databaseSlice.value)) {
         auto const collectionName = collectionSlice.key.copyString();
 
         auto collectionDataCurrent =
@@ -1819,17 +1810,19 @@ Result ClusterInfo::createCollectionCoordinator(  // create collection
     if (result[0].isObject()) {
       auto tres = result[0];
 
-      if (!tres.hasKey(std::vector<std::string>({AgencyCommManager::path(), "Supervision"}))) {
+      if (!tres.hasKey(std::vector<std::string>(
+              {AgencyCommManager::path(), "Supervision"}))) {
         return Result(TRI_ERROR_CLUSTER_COULD_NOT_CREATE_COLLECTION_IN_PLAN);
       }
 
       std::string errorMsg;
 
-      for (auto const& s: velocypack::ObjectIterator(tres.get(
-               std::vector<std::string>({AgencyCommManager::path(),
-                                         "Supervision", "Shards"})))) {
+      for (auto const& s :
+           velocypack::ObjectIterator(tres.get(std::vector<std::string>(
+               {AgencyCommManager::path(), "Supervision", "Shards"})))) {
         errorMsg += std::string("Shard ") + s.key.copyString();
-        errorMsg += " of prototype collection is blocked by supervision job ";
+        errorMsg +=
+            " of prototype collection is blocked by supervision job ";
         errorMsg += s.value.copyString();
       }
 

@@ -1859,14 +1852,12 @@ Result ClusterInfo::createCollectionCoordinator(  // create collection
 
   events::CreateCollection(name, TRI_ERROR_CLUSTER_COULD_NOT_CREATE_COLLECTION_IN_PLAN);
 
-  return Result(
-      TRI_ERROR_CLUSTER_COULD_NOT_CREATE_COLLECTION_IN_PLAN,  // code
-      std::string("file: ") + __FILE__ + " line: " + std::to_string(__LINE__)
-      + " HTTP code: " + std::to_string(res.httpCode())
-      + " error message: " + res.errorMessage()
-      + " error details: " + res.errorDetails()
-      + " body: " + res.body()
-  );
+  return Result(TRI_ERROR_CLUSTER_COULD_NOT_CREATE_COLLECTION_IN_PLAN,  // code
+                std::string("file: ") + __FILE__ + " line: " + std::to_string(__LINE__) +
+                    " HTTP code: " + std::to_string(res.httpCode()) +
+                    " error message: " + res.errorMessage() +
+                    " error details: " + res.errorDetails() +
+                    " body: " + res.body());
   }
 
   // Update our cache:
@@ -1979,7 +1970,9 @@ Result ClusterInfo::dropCollectionCoordinator(  // drop collection
   }
 
   if (!clones.empty()) {
-    std::string errorMsg("Collection must not be dropped while it is a sharding prototype for collection(s)");
+    std::string errorMsg(
+        "Collection must not be dropped while it is a sharding prototype for "
+        "collection(s)");
 
     for (auto const& i : clones) {
       errorMsg += std::string(" ") + i;

@@ -2267,20 +2260,18 @@ Result ClusterInfo::createViewCoordinator(  // create view
 
     return Result(                                        // result
         TRI_ERROR_CLUSTER_COULD_NOT_CREATE_VIEW_IN_PLAN,  // code
-        std::string("Precondition that view ") + name + " with ID " + viewID + " does not yet exist failed. Cannot create view."
-    );
+        std::string("Precondition that view ") + name + " with ID " + viewID +
+            " does not yet exist failed. Cannot create view.");
   }
 
   events::CreateView(name, TRI_ERROR_CLUSTER_COULD_NOT_CREATE_VIEW_IN_PLAN);
 
   return Result(                                        // result
       TRI_ERROR_CLUSTER_COULD_NOT_CREATE_VIEW_IN_PLAN,  // code
-      std::string("file: ") + __FILE__ + " line: " + std::to_string(__LINE__)
-      + " HTTP code: " + std::to_string(res.httpCode())
-      + " error message: " + res.errorMessage()
-      + " error details: " + res.errorDetails()
-      + " body: " + res.body()
-  );
+      std::string("file: ") + __FILE__ + " line: " + std::to_string(__LINE__) +
+          " HTTP code: " + std::to_string(res.httpCode()) +
+          " error message: " + res.errorMessage() +
+          " error details: " + res.errorDetails() + " body: " + res.body());
 }
 
 // Update our cache:
@@ -2318,8 +2309,8 @@ Result ClusterInfo::dropViewCoordinator(  // drop view
   if (res.errorCode() == int(arangodb::ResponseCode::PRECONDITION_FAILED)) {
     result = Result(                                            // result
         TRI_ERROR_CLUSTER_COULD_NOT_REMOVE_COLLECTION_IN_PLAN,  // FIXME COULD_NOT_REMOVE_VIEW_IN_PLAN
-        std::string("Precondition that view with ID ")+ viewID + " already exist failed. Cannot create view."
-    );
+        std::string("Precondition that view with ID ") + viewID +
+            " already exist failed. Cannot create view.");
 
     // Dump agency plan:
     auto const ag = ac.getValues("/");

@@ -2333,12 +2324,10 @@ Result ClusterInfo::dropViewCoordinator(  // drop view
   } else {
     result = Result(                                            // result
         TRI_ERROR_CLUSTER_COULD_NOT_REMOVE_COLLECTION_IN_PLAN,  // FIXME COULD_NOT_REMOVE_VIEW_IN_PLAN
-        std::string("file: ") + __FILE__ + " line: " + std::to_string(__LINE__)
-        + " HTTP code: " + std::to_string(res.httpCode())
-        + " error message: " + res.errorMessage()
-        + " error details: " + res.errorDetails()
-        + " body: " + res.body()
-    );
+        std::string("file: ") + __FILE__ + " line: " + std::to_string(__LINE__) +
+            " HTTP code: " + std::to_string(res.httpCode()) +
+            " error message: " + res.errorMessage() +
+            " error details: " + res.errorDetails() + " body: " + res.body());
   }
 }
 

@@ -2468,8 +2457,7 @@ Result ClusterInfo::setCollectionStatusCoordinator(std::string const& databaseName
 ////////////////////////////////////////////////////////////////////////////////
 Result ClusterInfo::ensureIndexCoordinator(  // create index
     std::string const& databaseName,  // database name
-    std::string const& collectionID,
-    VPackSlice const& slice, bool create,
+    std::string const& collectionID, VPackSlice const& slice, bool create,
     VPackBuilder& resultBuilder,
     double timeout  // request timeout
 ) {
@@ -2495,8 +2483,7 @@ Result ClusterInfo::ensureIndexCoordinator(  // create index
   do {
     resultBuilder.clear();
-    res = ensureIndexCoordinatorInner(  // creat index
-        databaseName, collectionID, idString, slice, create, resultBuilder, timeout
-    );
+    res = ensureIndexCoordinatorInner(  // create index
+        databaseName, collectionID, idString, slice, create, resultBuilder, timeout);
 
     // Note that this function sets the errorMsg unless it is precondition
     // failed, in which case we retry, if this times out, we need to set

@@ -2521,8 +2508,7 @@ Result ClusterInfo::ensureIndexCoordinator(  // create index
   } catch (basics::Exception const& ex) {
     res = Result(   // result
         ex.code(),  // code
-        TRI_errno_string(ex.code()) + std::string(", exception: ") + ex.what()
-    );
+        TRI_errno_string(ex.code()) + std::string(", exception: ") + ex.what());
   } catch (...) {
     res = Result(TRI_ERROR_INTERNAL);
   }

@@ -2556,10 +2542,8 @@ Result ClusterInfo::ensureIndexCoordinator(  // create index
 // is outside this function here in `ensureIndexCoordinator`.
 Result ClusterInfo::ensureIndexCoordinatorInner(  // create index
     std::string const& databaseName,  // database name
-    std::string const& collectionID,
-    std::string const& idString,
-    VPackSlice const& slice, bool create,
-    VPackBuilder& resultBuilder,
+    std::string const& collectionID, std::string const& idString,
+    VPackSlice const& slice, bool create, VPackBuilder& resultBuilder,
     double timeout  // request timeout
 ) {
   AgencyComm ac;
@@ -2605,8 +2589,7 @@ Result ClusterInfo::ensureIndexCoordinatorInner(  // create index
       TRI_ASSERT(other.isObject());
 
       if (true == arangodb::Index::Compare(slice, other)) {
-        {  // found an existing index...
-           // Copy over all elements in slice.
+        {  // found an existing index... Copy over all elements in slice.
           VPackObjectBuilder b(&resultBuilder);
           resultBuilder.add(VPackObjectIterator(other));
           resultBuilder.add("isNewlyCreated", VPackValue(false));

@@ -2614,6 +2597,14 @@ Result ClusterInfo::ensureIndexCoordinatorInner(  // create index
 
           return Result(TRI_ERROR_NO_ERROR);
         }
+
+        if (true == arangodb::Index::CompareIdentifiers(slice, other)) {
+          // found an existing index with a same identifier (i.e. name)
+          // but different definition, throw an error
+          return Result(TRI_ERROR_ARANGO_DUPLICATE_IDENTIFIER,
+                        "duplicate value for `" + arangodb::StaticStrings::IndexId +
+                            "` or `" + arangodb::StaticStrings::IndexName + "`");
+        }
       }
     }
 
@@ -2724,11 +2715,10 @@ Result ClusterInfo::ensureIndexCoordinatorInner(  // create index
 
     return Result(                                         // result
         TRI_ERROR_CLUSTER_COULD_NOT_CREATE_INDEX_IN_PLAN,  // code
-        std::string(" Failed to execute ") + trx.toJson()
-        + " ResultCode: " + std::to_string(result.errorCode())
-        + " HttpCode: " + std::to_string(result.httpCode())
-        + " " + std::string(__FILE__) + ":" + std::to_string(__LINE__)
-    );
+        std::string(" Failed to execute ") + trx.toJson() +
+            " ResultCode: " + std::to_string(result.errorCode()) +
+            " HttpCode: " + std::to_string(result.httpCode()) + " " +
+            std::string(__FILE__) + ":" + std::to_string(__LINE__));
   }
 
   // From here on we want to roll back the index creation if we run into

@@ -2836,8 +2826,8 @@ Result ClusterInfo::ensureIndexCoordinatorInner(  // create index
       if (tmpRes < 0) {  // timeout
         return Result(                  // result
             TRI_ERROR_CLUSTER_TIMEOUT,  // code
-            "Index could not be created within timeout, giving up and rolling back index creation."
-        );
+            "Index could not be created within timeout, giving up and "
+            "rolling back index creation.");
       }
 
       return Result(tmpRes);

@@ -2856,8 +2846,7 @@ Result ClusterInfo::ensureIndexCoordinatorInner(  // create index
     if (tmpRes < 0) {  // timeout
       return Result(                  // result
           TRI_ERROR_CLUSTER_TIMEOUT,  // code
-          "Timed out while trying to roll back index creation failure"
-      );
+          "Timed out while trying to roll back index creation failure");
     }
 
     return Result(tmpRes);
@ -3034,8 +3023,8 @@ Result ClusterInfo::dropIndexCoordinator( // drop index
|
||||||
|
|
||||||
return Result( // result
|
return Result( // result
|
||||||
TRI_ERROR_CLUSTER_COULD_NOT_DROP_INDEX_IN_PLAN, // code
|
TRI_ERROR_CLUSTER_COULD_NOT_DROP_INDEX_IN_PLAN, // code
|
||||||
std::string(" Failed to execute ") + trx.toJson() + " ResultCode: " + std::to_string(result.errorCode())
|
std::string(" Failed to execute ") + trx.toJson() +
|
||||||
);
|
" ResultCode: " + std::to_string(result.errorCode()));
|
||||||
}
|
}
|
||||||
|
|
||||||
// load our own cache:
|
// load our own cache:
|
||||||
|
|
|
@@ -21,7 +21,6 @@
 /// @author Max Neunhoeffer
 ////////////////////////////////////////////////////////////////////////////////

-#include "ClusterMethods.h"
 #include "Basics/NumberUtils.h"
 #include "Basics/StaticStrings.h"
 #include "Basics/StringUtils.h"
@@ -30,6 +29,7 @@
 #include "Basics/tri-strings.h"
 #include "Cluster/ClusterComm.h"
 #include "Cluster/ClusterInfo.h"
+#include "ClusterMethods.h"
 #include "Graph/Traverser.h"
 #include "Indexes/Index.h"
 #include "RestServer/TtlFeature.h"
@@ -2778,8 +2778,7 @@ std::shared_ptr<LogicalCollection> ClusterMethods::persistCollectionInAgency(
   VPackBuilder velocy = col->toVelocyPackIgnore(ignoreKeys, false, false);
   auto& dbName = col->vocbase().name();
   auto res = ci->createCollectionCoordinator(  // create collection
-      dbName, std::to_string(col->id()),
-      col->numberOfShards(),
+      dbName, std::to_string(col->id()), col->numberOfShards(),
       col->replicationFactor(), waitForSyncReplication,
       velocy.slice(),  // collection definition
       240.0            // request timeout
@@ -20,13 +20,13 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "ClusterCollection.h"
 #include "Basics/ReadLocker.h"
 #include "Basics/Result.h"
 #include "Basics/StaticStrings.h"
 #include "Basics/VelocyPackHelper.h"
 #include "Basics/WriteLocker.h"
 #include "Cluster/ClusterMethods.h"
+#include "ClusterCollection.h"
 #include "ClusterEngine/ClusterEngine.h"
 #include "ClusterEngine/ClusterIndex.h"
 #include "Indexes/Index.h"
@@ -314,6 +314,7 @@ void ClusterCollection::prepareIndexes(arangodb::velocypack::Slice indexesSlice)

   if (indexesSlice.length() == 0 && _indexes.empty()) {
     engine->indexFactory().fillSystemIndexes(_logicalCollection, indexes);
+
   } else {
     engine->indexFactory().prepareIndexes(_logicalCollection, indexesSlice, indexes);
   }
@@ -423,7 +424,8 @@ LocalDocumentId ClusterCollection::lookupKey(transaction::Methods* trx,
   THROW_ARANGO_EXCEPTION(TRI_ERROR_NOT_IMPLEMENTED);
 }

-Result ClusterCollection::read(transaction::Methods* trx, arangodb::velocypack::StringRef const& key,
+Result ClusterCollection::read(transaction::Methods* trx,
+                               arangodb::velocypack::StringRef const& key,
                                ManagedDocumentResult& result, bool) {
   return Result(TRI_ERROR_NOT_IMPLEMENTED);
 }
@@ -20,7 +20,6 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "ClusterEngine.h"
 #include "ApplicationFeatures/RocksDBOptionFeature.h"
 #include "Basics/Exceptions.h"
 #include "Basics/FileUtils.h"
@@ -32,6 +31,7 @@
 #include "Basics/VelocyPackHelper.h"
 #include "Basics/WriteLocker.h"
 #include "Basics/build.h"
+#include "ClusterEngine.h"
 #include "ClusterEngine/ClusterCollection.h"
 #include "ClusterEngine/ClusterIndexFactory.h"
 #include "ClusterEngine/ClusterRestHandlers.h"
@@ -64,12 +64,15 @@ using namespace arangodb;
 using namespace arangodb::application_features;
 using namespace arangodb::options;

+std::string const ClusterEngine::EngineName("Cluster");
+std::string const ClusterEngine::FeatureName("ClusterEngine");
+
 // fall back to the using the mock storage engine
 bool ClusterEngine::Mocking = false;

 // create the storage engine
 ClusterEngine::ClusterEngine(application_features::ApplicationServer& server)
-    : StorageEngine(server, "Cluster", "ClusterEngine",
+    : StorageEngine(server, EngineName, FeatureName,
                     std::unique_ptr<IndexFactory>(new ClusterIndexFactory())),
       _actualEngine(nullptr) {
   setOptional(true);
@@ -20,10 +20,10 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "ClusterIndex.h"
 #include "Basics/StaticStrings.h"
 #include "Basics/VelocyPackHelper.h"
 #include "ClusterEngine/ClusterEngine.h"
+#include "ClusterIndex.h"
 #include "Indexes/SimpleAttributeEqualityMatcher.h"
 #include "Indexes/SortedIndexAttributeMatcher.h"
 #include "StorageEngine/EngineSelectorFeature.h"
@@ -94,6 +94,7 @@ void ClusterIndex::toVelocyPack(VPackBuilder& builder,

   for (auto pair : VPackObjectIterator(_info.slice())) {
     if (!pair.key.isEqualString(StaticStrings::IndexId) &&
+        !pair.key.isEqualString(StaticStrings::IndexName) &&
         !pair.key.isEqualString(StaticStrings::IndexType) &&
         !pair.key.isEqualString(StaticStrings::IndexFields) &&
         !pair.key.isEqualString("selectivityEstimate") && !pair.key.isEqualString("figures") &&
@@ -269,13 +270,17 @@ bool ClusterIndex::supportsFilterCondition(

     case TRI_IDX_TYPE_SKIPLIST_INDEX:
     case TRI_IDX_TYPE_TTL_INDEX: {
-      return SortedIndexAttributeMatcher::supportsFilterCondition(
-          allIndexes, this, node, reference, itemsInIndex, estimatedItems, estimatedCost);
+      return SortedIndexAttributeMatcher::supportsFilterCondition(allIndexes, this,
+                                                                  node, reference,
+                                                                  itemsInIndex, estimatedItems,
+                                                                  estimatedCost);
     }
     case TRI_IDX_TYPE_PERSISTENT_INDEX: {
       // same for both engines
-      return SortedIndexAttributeMatcher::supportsFilterCondition(
-          allIndexes, this, node, reference, itemsInIndex, estimatedItems, estimatedCost);
+      return SortedIndexAttributeMatcher::supportsFilterCondition(allIndexes, this,
+                                                                  node, reference,
+                                                                  itemsInIndex, estimatedItems,
+                                                                  estimatedCost);
     }

     case TRI_IDX_TYPE_UNKNOWN:
@@ -324,8 +329,9 @@ bool ClusterIndex::supportsSortCondition(arangodb::aql::SortCondition const* sor
     case TRI_IDX_TYPE_PERSISTENT_INDEX: {
       if (_engineType == ClusterEngineType::MMFilesEngine ||
           _engineType == ClusterEngineType::RocksDBEngine) {
-        return SortedIndexAttributeMatcher::supportsSortCondition(
-            this, sortCondition, reference, itemsInIndex, estimatedCost, coveredAttributes);
+        return SortedIndexAttributeMatcher::supportsSortCondition(this, sortCondition, reference,
+                                                                  itemsInIndex, estimatedCost,
+                                                                  coveredAttributes);
       }
       break;
     }
@@ -20,13 +20,13 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "ClusterIndexFactory.h"
 #include "Basics/StaticStrings.h"
 #include "Basics/StringUtils.h"
 #include "Basics/VelocyPackHelper.h"
 #include "Cluster/ServerState.h"
 #include "ClusterEngine/ClusterEngine.h"
 #include "ClusterEngine/ClusterIndex.h"
+#include "ClusterIndexFactory.h"
 #include "Indexes/Index.h"
 #include "StorageEngine/EngineSelectorFeature.h"
 #include "VocBase/LogicalCollection.h"
@@ -69,8 +69,7 @@ struct DefaultIndexFactory : public arangodb::IndexTypeFactory {

   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
-                               arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
+                               arangodb::velocypack::Slice const& definition, TRI_idx_iid_t id,
                                bool  // isClusterConstructor
                                ) const override {
     auto* clusterEngine =
@@ -121,13 +120,13 @@ struct DefaultIndexFactory : public arangodb::IndexTypeFactory {
 };

 struct EdgeIndexFactory : public DefaultIndexFactory {
-  explicit EdgeIndexFactory(std::string const& type) : DefaultIndexFactory(type) {}
+  explicit EdgeIndexFactory(std::string const& type)
+      : DefaultIndexFactory(type) {}

   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     if (!isClusterConstructor) {
       // this indexes cannot be created directly
       return arangodb::Result(TRI_ERROR_INTERNAL, "cannot create edge index");
@@ -153,13 +152,13 @@ struct EdgeIndexFactory : public DefaultIndexFactory {
 };

 struct PrimaryIndexFactory : public DefaultIndexFactory {
-  explicit PrimaryIndexFactory(std::string const& type) : DefaultIndexFactory(type) {}
+  explicit PrimaryIndexFactory(std::string const& type)
+      : DefaultIndexFactory(type) {}

   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     if (!isClusterConstructor) {
       // this indexes cannot be created directly
       return arangodb::Result(TRI_ERROR_INTERNAL,
@@ -213,13 +212,14 @@ ClusterIndexFactory::ClusterIndexFactory() {
   emplace(ttlIndexFactory._type, ttlIndexFactory);
 }

-/// @brief index name aliases (e.g. "persistent" => "hash", "skiplist" => "hash")
-/// used to display storage engine capabilities
+/// @brief index name aliases (e.g. "persistent" => "hash", "skiplist" =>
+/// "hash") used to display storage engine capabilities
 std::unordered_map<std::string, std::string> ClusterIndexFactory::indexAliases() const {
   auto* ce = static_cast<ClusterEngine*>(EngineSelectorFeature::ENGINE);
   auto* ae = ce->actualEngine();
   if (!ae) {
-    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL, "no actual storage engine for ClusterIndexFactory");
+    THROW_ARANGO_EXCEPTION_MESSAGE(
+        TRI_ERROR_INTERNAL, "no actual storage engine for ClusterIndexFactory");
   }
   return ae->indexFactory().indexAliases();
 }
@@ -254,6 +254,7 @@ void ClusterIndexFactory::fillSystemIndexes(arangodb::LogicalCollection& col,
     input.openObject();
     input.add(StaticStrings::IndexType, VPackValue("primary"));
     input.add(StaticStrings::IndexId, VPackValue("0"));
+    input.add(StaticStrings::IndexName, VPackValue(StaticStrings::IndexNamePrimary));
     input.add(StaticStrings::IndexFields, VPackValue(VPackValueType::Array));
     input.add(VPackValue(StaticStrings::KeyString));
     input.close();
@@ -277,14 +278,20 @@ void ClusterIndexFactory::fillSystemIndexes(arangodb::LogicalCollection& col,
     input.add(StaticStrings::IndexType,
               VPackValue(Index::oldtypeName(Index::TRI_IDX_TYPE_EDGE_INDEX)));
     input.add(StaticStrings::IndexId, VPackValue("1"));

     input.add(StaticStrings::IndexFields, VPackValue(VPackValueType::Array));
     input.add(VPackValue(StaticStrings::FromString));

     if (ct == ClusterEngineType::MMFilesEngine) {
       input.add(VPackValue(StaticStrings::ToString));
     }

     input.close();

+    if (ct == ClusterEngineType::MMFilesEngine) {
+      input.add(StaticStrings::IndexName, VPackValue(StaticStrings::IndexNameEdge));
+    } else if (ct == ClusterEngineType::RocksDBEngine) {
+      input.add(StaticStrings::IndexName, VPackValue(StaticStrings::IndexNameEdgeFrom));
+    }
+
     input.add(StaticStrings::IndexUnique, VPackValue(false));
     input.add(StaticStrings::IndexSparse, VPackValue(false));
     input.close();
@@ -299,6 +306,7 @@ void ClusterIndexFactory::fillSystemIndexes(arangodb::LogicalCollection& col,
     input.add(StaticStrings::IndexType,
               VPackValue(Index::oldtypeName(Index::TRI_IDX_TYPE_EDGE_INDEX)));
     input.add(StaticStrings::IndexId, VPackValue("2"));
+    input.add(StaticStrings::IndexName, VPackValue(StaticStrings::IndexNameEdgeTo));
     input.add(StaticStrings::IndexFields, VPackValue(VPackValueType::Array));
     input.add(VPackValue(StaticStrings::ToString));
     input.close();
@@ -21,17 +21,17 @@
 /// @author Jan Steemann
 ////////////////////////////////////////////////////////////////////////////////

-#include "Index.h"
 #include "Aql/Ast.h"
 #include "Aql/AstNode.h"
 #include "Aql/Variable.h"
-#include "Basics/datetime.h"
 #include "Basics/Exceptions.h"
 #include "Basics/HashSet.h"
 #include "Basics/StaticStrings.h"
 #include "Basics/StringUtils.h"
 #include "Basics/VelocyPackHelper.h"
+#include "Basics/datetime.h"
 #include "Cluster/ServerState.h"
+#include "Index.h"

 #ifdef USE_IRESEARCH
 #include "IResearch/IResearchCommon.h"
@@ -104,8 +104,7 @@ bool canBeNull(arangodb::aql::AstNode const* op, arangodb::aql::AstNode const* a
   // now check if the accessed attribute is _key, _rev or _id.
   // all of these cannot be null
   auto attributeName = access->getStringRef();
-  if (attributeName == StaticStrings::KeyString ||
-      attributeName == StaticStrings::IdString ||
+  if (attributeName == StaticStrings::KeyString || attributeName == StaticStrings::IdString ||
       attributeName == StaticStrings::RevString) {
     return false;
   }
@@ -157,16 +156,42 @@ bool typeMatch(char const* type, size_t len, char const* expected) {
   return (len == ::strlen(expected)) && (::memcmp(type, expected, len) == 0);
 }

+std::string defaultIndexName(VPackSlice const& slice) {
+  auto type =
+      arangodb::Index::type(slice.get(arangodb::StaticStrings::IndexType).copyString());
+  if (type == arangodb::Index::IndexType::TRI_IDX_TYPE_PRIMARY_INDEX) {
+    return arangodb::StaticStrings::IndexNamePrimary;
+  } else if (type == arangodb::Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX) {
+    if (EngineSelectorFeature::isRocksDB()) {
+      auto fields = slice.get(arangodb::StaticStrings::IndexFields);
+      TRI_ASSERT(fields.isArray());
+      auto firstField = fields.at(0);
+      TRI_ASSERT(firstField.isString());
+      bool isFromIndex = firstField.isEqualString(arangodb::StaticStrings::FromString);
+      return isFromIndex ? arangodb::StaticStrings::IndexNameEdgeFrom
+                         : arangodb::StaticStrings::IndexNameEdgeTo;
+    }
+    return arangodb::StaticStrings::IndexNameEdge;
+  }
+
+  std::string idString = arangodb::basics::VelocyPackHelper::getStringValue(
+      slice, arangodb::StaticStrings::IndexId.c_str(),
+      std::to_string(TRI_NewTickServer()));
+  return std::string("idx_").append(idString);
+}
+
 }  // namespace

 // If the Index is on a coordinator instance the index may not access the
 // logical collection because it could be gone!

 Index::Index(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection,
+             std::string const& name,
              std::vector<std::vector<arangodb::basics::AttributeName>> const& fields,
              bool unique, bool sparse)
     : _iid(iid),
       _collection(collection),
+      _name(name),
       _fields(fields),
       _useExpansion(::hasExpansion(_fields)),
       _unique(unique),
@@ -177,6 +202,8 @@ Index::Index(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection,
 Index::Index(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection, VPackSlice const& slice)
     : _iid(iid),
       _collection(collection),
+      _name(arangodb::basics::VelocyPackHelper::getStringValue(
+          slice, arangodb::StaticStrings::IndexName, ::defaultIndexName(slice))),
       _fields(::parseFields(slice.get(arangodb::StaticStrings::IndexFields),
                             Index::allowExpansion(Index::type(
                                 slice.get(arangodb::StaticStrings::IndexType).copyString())))),
@@ -188,6 +215,12 @@ Index::Index(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection, VPackSl

 Index::~Index() {}

+void Index::name(std::string const& newName) {
+  if (_name.empty()) {
+    _name = newName;
+  }
+}
+
 size_t Index::sortWeight(arangodb::aql::AstNode const* node) {
   switch (node->type) {
     case arangodb::aql::NODE_TYPE_OPERATOR_BINARY_EQ:
@@ -380,6 +413,25 @@ bool Index::validateHandle(char const* key, size_t* split) {
 /// @brief generate a new index id
 TRI_idx_iid_t Index::generateId() { return TRI_NewTickServer(); }

+/// @brief check if two index definitions share any identifiers (_id, name)
+bool Index::CompareIdentifiers(velocypack::Slice const& lhs, velocypack::Slice const& rhs) {
+  VPackSlice lhsId = lhs.get(arangodb::StaticStrings::IndexId);
+  VPackSlice rhsId = rhs.get(arangodb::StaticStrings::IndexId);
+  if (lhsId.isString() && rhsId.isString() &&
+      arangodb::basics::VelocyPackHelper::compare(lhsId, rhsId, true) == 0) {
+    return true;
+  }
+
+  VPackSlice lhsName = lhs.get(arangodb::StaticStrings::IndexName);
+  VPackSlice rhsName = rhs.get(arangodb::StaticStrings::IndexName);
+  if (lhsName.isString() && rhsName.isString() &&
+      arangodb::basics::VelocyPackHelper::compare(lhsName, rhsName, true) == 0) {
+    return true;
+  }
+
+  return false;
+}
+
 /// @brief index comparator, used by the coordinator to detect if two index
 /// contents are the same
 bool Index::Compare(VPackSlice const& lhs, VPackSlice const& rhs) {
@@ -438,6 +490,7 @@ void Index::toVelocyPack(VPackBuilder& builder,
               arangodb::velocypack::Value(std::to_string(_iid)));
   builder.add(arangodb::StaticStrings::IndexType,
               arangodb::velocypack::Value(oldtypeName(type())));
+  builder.add(arangodb::StaticStrings::IndexName, arangodb::velocypack::Value(name()));

   builder.add(arangodb::velocypack::Value(arangodb::StaticStrings::IndexFields));
   builder.openArray();
@@ -944,14 +997,17 @@ std::ostream& operator<<(std::ostream& stream, arangodb::Index const& index) {
   return stream;
 }

-double Index::getTimestamp(arangodb::velocypack::Slice const& doc, std::string const& attributeName) const {
+double Index::getTimestamp(arangodb::velocypack::Slice const& doc,
+                           std::string const& attributeName) const {
   VPackSlice value = doc.get(attributeName);

   if (value.isString()) {
     // string value. we expect it to be YYYY-MM-DD etc.
     tp_sys_clock_ms tp;
     if (basics::parseDateTime(value.copyString(), tp)) {
-      return static_cast<double>(std::chrono::duration_cast<std::chrono::seconds>(tp.time_since_epoch()).count());
+      return static_cast<double>(
+          std::chrono::duration_cast<std::chrono::seconds>(tp.time_since_epoch())
+              .count());
     }
     // invalid date format
     // fall-through intentional
@@ -77,7 +77,7 @@ class Index {
   Index(Index const&) = delete;
   Index& operator=(Index const&) = delete;

-  Index(TRI_idx_iid_t iid, LogicalCollection& collection,
+  Index(TRI_idx_iid_t iid, LogicalCollection& collection, std::string const& name,
         std::vector<std::vector<arangodb::basics::AttributeName>> const& fields,
         bool unique, bool sparse);

@@ -113,6 +113,17 @@ class Index {
   /// @brief return the index id
   inline TRI_idx_iid_t id() const { return _iid; }

+  /// @brief return the index name
+  inline std::string const& name() const {
+    if (_name == StaticStrings::IndexNameEdgeFrom || _name == StaticStrings::IndexNameEdgeTo) {
+      return StaticStrings::IndexNameEdge;
+    }
+    return _name;
+  }
+
+  /// @brief set the name, if it is currently unset
+  void name(std::string const&);
+
   /// @brief return the index fields
   inline std::vector<std::vector<arangodb::basics::AttributeName>> const& fields() const {
     return _fields;
@@ -225,6 +236,9 @@ class Index {
   /// @brief generate a new index id
   static TRI_idx_iid_t generateId();

+  /// @brief check if two index definitions share any identifiers (_id, name)
+  static bool CompareIdentifiers(velocypack::Slice const& lhs, velocypack::Slice const& rhs);
+
   /// @brief index comparator, used by the coordinator to detect if two index
   /// contents are the same
   static bool Compare(velocypack::Slice const& lhs, velocypack::Slice const& rhs);
@@ -257,9 +271,10 @@ class Index {
   /// @brief return the selectivity estimate of the index
   /// must only be called if hasSelectivityEstimate() returns true
   ///
-  /// The extra arangodb::velocypack::StringRef is only used in the edge index as direction
-  /// attribute attribute, a Slice would be more flexible.
+  /// The extra arangodb::velocypack::StringRef is only used in the edge index
+  /// as direction attribute attribute, a Slice would be more flexible.
-  virtual double selectivityEstimate(arangodb::velocypack::StringRef const& extra = arangodb::velocypack::StringRef()) const;
+  virtual double selectivityEstimate(arangodb::velocypack::StringRef const& extra =
+                                         arangodb::velocypack::StringRef()) const;

   /// @brief update the cluster selectivity estimate
   virtual void updateClusterSelectivityEstimate(double /*estimate*/) {
@@ -385,10 +400,12 @@ class Index {
   /// @brief extracts a timestamp value from a document
   /// returns a negative value if the document does not contain the specified
   /// attribute, or the attribute does not contain a valid timestamp or date string
-  double getTimestamp(arangodb::velocypack::Slice const& doc, std::string const& attributeName) const;
+  double getTimestamp(arangodb::velocypack::Slice const& doc,
+                      std::string const& attributeName) const;

   TRI_idx_iid_t const _iid;
   LogicalCollection& _collection;
|
||||||
|
std::string _name;
|
||||||
std::vector<std::vector<arangodb::basics::AttributeName>> const _fields;
|
std::vector<std::vector<arangodb::basics::AttributeName>> const _fields;
|
||||||
bool const _useExpansion;
|
bool const _useExpansion;
|
||||||
|
|
||||||
|
|
|
@@ -21,13 +21,13 @@
 /// @author Michael Hackstein
 ////////////////////////////////////////////////////////////////////////////////
 
-#include "IndexFactory.h"
 #include "Basics/AttributeNameParser.h"
 #include "Basics/Exceptions.h"
 #include "Basics/StaticStrings.h"
 #include "Basics/StringUtils.h"
 #include "Basics/VelocyPackHelper.h"
 #include "Cluster/ServerState.h"
+#include "IndexFactory.h"
 #include "Indexes/Index.h"
 #include "RestServer/BootstrapFeature.h"
 #include "VocBase/LogicalCollection.h"
@@ -80,13 +80,12 @@ bool IndexTypeFactory::equal(arangodb::Index::IndexType type,
                              arangodb::velocypack::Slice const& lhs,
                              arangodb::velocypack::Slice const& rhs,
                              bool attributeOrderMatters) const {
-
   // unique must be identical if present
   auto value = lhs.get(arangodb::StaticStrings::IndexUnique);
 
   if (value.isBoolean()) {
-    if (arangodb::basics::VelocyPackHelper::compare(
-            value, rhs.get(arangodb::StaticStrings::IndexUnique), false)) {
+    if (arangodb::basics::VelocyPackHelper::compare(value, rhs.get(arangodb::StaticStrings::IndexUnique),
+                                                    false)) {
       return false;
     }
   }
@@ -95,8 +94,8 @@ bool IndexTypeFactory::equal(arangodb::Index::IndexType type,
   value = lhs.get(arangodb::StaticStrings::IndexSparse);
 
   if (value.isBoolean()) {
-    if (arangodb::basics::VelocyPackHelper::compare(
-            value, rhs.get(arangodb::StaticStrings::IndexSparse), false)) {
+    if (arangodb::basics::VelocyPackHelper::compare(value, rhs.get(arangodb::StaticStrings::IndexSparse),
+                                                    false)) {
       return false;
     }
   }
@@ -220,6 +219,29 @@ Result IndexFactory::enhanceIndexDefinition( // normalize definition
                    arangodb::velocypack::Value(std::to_string(id)));
   }
 
+  auto nameSlice = definition.get(StaticStrings::IndexName);
+  std::string name;
+
+  if (nameSlice.isString() && (nameSlice.getStringLength() != 0)) {
+    name = nameSlice.copyString();
+  } else {
+    // we should set the name for special types explicitly elsewhere, but just in case...
+    if (Index::type(type.copyString()) == Index::IndexType::TRI_IDX_TYPE_PRIMARY_INDEX) {
+      name = StaticStrings::IndexNamePrimary;
+    } else if (Index::type(type.copyString()) == Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX) {
+      name = StaticStrings::IndexNameEdge;
+    } else {
+      // generate a name
+      name = "idx_" + std::to_string(TRI_HybridLogicalClock());
+    }
+  }
+
+  if (!TRI_vocbase_t::IsAllowedName(false, velocypack::StringRef(name))) {
+    return Result(TRI_ERROR_ARANGO_ILLEGAL_NAME);
+  }
+
+  normalized.add(StaticStrings::IndexName, arangodb::velocypack::Value(name));
+
   return factory.normalize(normalized, definition, isCreation, vocbase);
 } catch (basics::Exception const& ex) {
   return Result(ex.code(), ex.what());
@@ -286,7 +308,8 @@ std::unordered_map<std::string, std::string> IndexFactory::indexAliases() const
 TRI_idx_iid_t IndexFactory::validateSlice(arangodb::velocypack::Slice info,
                                           bool generateKey, bool isClusterConstructor) {
   if (!info.isObject()) {
-    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER, "expecting object for index definition");
+    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER,
+                                   "expecting object for index definition");
   }
 
   TRI_idx_iid_t iid = 0;
@@ -319,34 +342,35 @@ TRI_idx_iid_t IndexFactory::validateSlice(arangodb::velocypack::Slice info,
 
 /// @brief process the fields list, deduplicate it, and add it to the json
 Result IndexFactory::processIndexFields(VPackSlice definition, VPackBuilder& builder,
-                                        size_t minFields, size_t maxField, bool create,
-                                        bool allowExpansion) {
+                                        size_t minFields, size_t maxField,
+                                        bool create, bool allowExpansion) {
   TRI_ASSERT(builder.isOpenObject());
   std::unordered_set<arangodb::velocypack::StringRef> fields;
   auto fieldsSlice = definition.get(arangodb::StaticStrings::IndexFields);
 
-  builder.add(
-    arangodb::velocypack::Value(arangodb::StaticStrings::IndexFields)
-  );
+  builder.add(arangodb::velocypack::Value(arangodb::StaticStrings::IndexFields));
   builder.openArray();
 
   if (fieldsSlice.isArray()) {
     // "fields" is a list of fields
     for (auto const& it : VPackArrayIterator(fieldsSlice)) {
       if (!it.isString()) {
-        return Result(TRI_ERROR_BAD_PARAMETER, "index field names must be strings");
+        return Result(TRI_ERROR_BAD_PARAMETER,
+                      "index field names must be strings");
       }
 
       arangodb::velocypack::StringRef f(it);
 
       if (f.empty() || (create && f == StaticStrings::IdString)) {
         // accessing internal attributes is disallowed
-        return Result(TRI_ERROR_BAD_PARAMETER, "_id attribute cannot be indexed");
+        return Result(TRI_ERROR_BAD_PARAMETER,
+                      "_id attribute cannot be indexed");
       }
 
       if (fields.find(f) != fields.end()) {
         // duplicate attribute name
-        return Result(TRI_ERROR_BAD_PARAMETER, "duplicate attribute name in index fields list");
+        return Result(TRI_ERROR_BAD_PARAMETER,
+                      "duplicate attribute name in index fields list");
       }
 
       std::vector<basics::AttributeName> temp;
@@ -359,7 +383,8 @@ Result IndexFactory::processIndexFields(VPackSlice definition, VPackBuilder& bui
 
   size_t cc = fields.size();
   if (cc == 0 || cc < minFields || cc > maxField) {
-    return Result(TRI_ERROR_BAD_PARAMETER, "invalid number of index attributes");
+    return Result(TRI_ERROR_BAD_PARAMETER,
+                  "invalid number of index attributes");
   }
 
   builder.close();
@@ -367,16 +392,11 @@ Result IndexFactory::processIndexFields(VPackSlice definition, VPackBuilder& bui
 }
 
 /// @brief process the unique flag and add it to the json
-void IndexFactory::processIndexUniqueFlag(VPackSlice definition,
-                                          VPackBuilder& builder) {
+void IndexFactory::processIndexUniqueFlag(VPackSlice definition, VPackBuilder& builder) {
   bool unique = basics::VelocyPackHelper::getBooleanValue(
-    definition, arangodb::StaticStrings::IndexUnique.c_str(), false
-  );
+      definition, arangodb::StaticStrings::IndexUnique.c_str(), false);
 
-  builder.add(
-    arangodb::StaticStrings::IndexUnique,
-    arangodb::velocypack::Value(unique)
-  );
+  builder.add(arangodb::StaticStrings::IndexUnique, arangodb::velocypack::Value(unique));
 }
 
 /// @brief process the sparse flag and add it to the json
@@ -384,38 +404,30 @@ void IndexFactory::processIndexSparseFlag(VPackSlice definition,
                                           VPackBuilder& builder, bool create) {
   if (definition.hasKey(arangodb::StaticStrings::IndexSparse)) {
     bool sparseBool = basics::VelocyPackHelper::getBooleanValue(
-      definition, arangodb::StaticStrings::IndexSparse.c_str(), false
-    );
+        definition, arangodb::StaticStrings::IndexSparse.c_str(), false);
 
-    builder.add(
-      arangodb::StaticStrings::IndexSparse,
-      arangodb::velocypack::Value(sparseBool)
-    );
+    builder.add(arangodb::StaticStrings::IndexSparse,
+                arangodb::velocypack::Value(sparseBool));
   } else if (create) {
     // not set. now add a default value
-    builder.add(
-      arangodb::StaticStrings::IndexSparse,
-      arangodb::velocypack::Value(false)
-    );
+    builder.add(arangodb::StaticStrings::IndexSparse, arangodb::velocypack::Value(false));
   }
 }
 
 /// @brief process the deduplicate flag and add it to the json
-void IndexFactory::processIndexDeduplicateFlag(VPackSlice definition,
-                                               VPackBuilder& builder) {
-  bool dup = basics::VelocyPackHelper::getBooleanValue(definition,
-                                                       "deduplicate", true);
+void IndexFactory::processIndexDeduplicateFlag(VPackSlice definition, VPackBuilder& builder) {
+  bool dup = basics::VelocyPackHelper::getBooleanValue(definition, "deduplicate", true);
   builder.add("deduplicate", VPackValue(dup));
 }
 
 /// @brief process the geojson flag and add it to the json
-void IndexFactory::processIndexGeoJsonFlag(VPackSlice definition,
-                                           VPackBuilder& builder) {
+void IndexFactory::processIndexGeoJsonFlag(VPackSlice definition, VPackBuilder& builder) {
   auto fieldsSlice = definition.get(arangodb::StaticStrings::IndexFields);
 
   if (fieldsSlice.isArray() && fieldsSlice.length() == 1) {
     // only add geoJson for indexes with a single field (with needs to be an array)
-    bool geoJson = basics::VelocyPackHelper::getBooleanValue(definition, "geoJson", false);
+    bool geoJson =
+        basics::VelocyPackHelper::getBooleanValue(definition, "geoJson", false);
 
     builder.add("geoJson", VPackValue(geoJson));
   }
@@ -451,11 +463,13 @@ Result IndexFactory::enhanceJsonIndexTtl(VPackSlice definition,
 
   VPackSlice v = definition.get(StaticStrings::IndexExpireAfter);
   if (!v.isNumber()) {
-    return Result(TRI_ERROR_BAD_PARAMETER, "expireAfter attribute must be a number");
+    return Result(TRI_ERROR_BAD_PARAMETER,
+                  "expireAfter attribute must be a number");
   }
   double d = v.getNumericValue<double>();
   if (d < 0.0) {
-    return Result(TRI_ERROR_BAD_PARAMETER, "expireAfter attribute must greater equal to zero");
+    return Result(TRI_ERROR_BAD_PARAMETER,
+                  "expireAfter attribute must greater equal to zero");
   }
   builder.add(arangodb::StaticStrings::IndexExpireAfter, v);
 
@@ -468,9 +482,8 @@ Result IndexFactory::enhanceJsonIndexTtl(VPackSlice definition,
 }
 
 /// @brief enhances the json of a geo, geo1 or geo2 index
-Result IndexFactory::enhanceJsonIndexGeo(VPackSlice definition,
-                                         VPackBuilder& builder, bool create,
-                                         int minFields, int maxFields) {
+Result IndexFactory::enhanceJsonIndexGeo(VPackSlice definition, VPackBuilder& builder,
+                                         bool create, int minFields, int maxFields) {
   Result res = processIndexFields(definition, builder, minFields, maxFields, create, false);
 
   if (res.ok()) {
@@ -21,7 +21,6 @@
 /// @author Jan Steemann
 ////////////////////////////////////////////////////////////////////////////////
 
-#include "MMFilesCollection.h"
 #include "ApplicationFeatures/ApplicationServer.h"
 #include "Aql/PlanCache.h"
 #include "Basics/Exceptions.h"
@@ -51,6 +50,7 @@
 #include "MMFiles/MMFilesLogfileManager.h"
 #include "MMFiles/MMFilesPrimaryIndex.h"
 #include "MMFiles/MMFilesTransactionState.h"
+#include "MMFilesCollection.h"
 #include "RestServer/DatabaseFeature.h"
 #include "Scheduler/Scheduler.h"
 #include "Scheduler/SchedulerFeature.h"
@@ -733,9 +733,13 @@ int MMFilesCollection::close() {
 
     if ((++tries % 10) == 0) {
       if (hasDocumentDitch) {
-        LOG_TOPIC(WARN, Logger::ENGINES) << "waiting for cleanup of document ditches for collection '" << _logicalCollection.name() << "'. has other: " << hasOtherDitch;
+        LOG_TOPIC(WARN, Logger::ENGINES)
+            << "waiting for cleanup of document ditches for collection '"
+            << _logicalCollection.name() << "'. has other: " << hasOtherDitch;
       } else {
-        LOG_TOPIC(WARN, Logger::ENGINES) << "waiting for cleanup of ditches for collection '" << _logicalCollection.name() << "'";
+        LOG_TOPIC(WARN, Logger::ENGINES)
+            << "waiting for cleanup of ditches for collection '"
+            << _logicalCollection.name() << "'";
       }
     }
 
@@ -2021,7 +2025,8 @@ Result MMFilesCollection::read(transaction::Methods* trx, VPackSlice const& key,
   return Result(TRI_ERROR_NO_ERROR);
 }
 
-Result MMFilesCollection::read(transaction::Methods* trx, arangodb::velocypack::StringRef const& key,
+Result MMFilesCollection::read(transaction::Methods* trx,
+                               arangodb::velocypack::StringRef const& key,
                                ManagedDocumentResult& result, bool lock) {
   // copy string into a vpack string
   transaction::BuilderLeaser builder(trx);
@@ -2203,7 +2208,9 @@ std::shared_ptr<Index> MMFilesCollection::createIndex(transaction::Methods& trx,
   if (idx != nullptr) {
     // We already have this index.
     if (idx->type() == arangodb::Index::TRI_IDX_TYPE_TTL_INDEX) {
-      THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER, "there can only be one ttl index per collection");
+      THROW_ARANGO_EXCEPTION_MESSAGE(
+          TRI_ERROR_BAD_PARAMETER,
+          "there can only be one ttl index per collection");
     }
     created = false;
     return idx;
@@ -2227,8 +2234,18 @@ std::shared_ptr<Index> MMFilesCollection::createIndex(transaction::Methods& trx,
   }
 
   auto other = PhysicalCollection::lookupIndex(idx->id());
+  if (!other) {
+    other = PhysicalCollection::lookupIndex(idx->name());
+  }
   if (other) {
-    return other;
+    // definition shares an identifier with an existing index with a
+    // different definition
+    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_ARANGO_DUPLICATE_IDENTIFIER,
+                                   "duplicate value for `" +
+                                       arangodb::StaticStrings::IndexId +
+                                       "` or `" +
+                                       arangodb::StaticStrings::IndexName +
+                                       "`");
   }
 
  TRI_ASSERT(idx->type() != Index::IndexType::TRI_IDX_TYPE_PRIMARY_INDEX);
@@ -21,7 +21,6 @@
 /// @author Dr. Frank Celler
 ////////////////////////////////////////////////////////////////////////////////
 
-#include "MMFilesEdgeIndex.h"
 #include "Aql/AstNode.h"
 #include "Aql/SortCondition.h"
 #include "Basics/Exceptions.h"
@@ -32,6 +31,7 @@
 #include "Indexes/SimpleAttributeEqualityMatcher.h"
 #include "MMFiles/MMFilesCollection.h"
 #include "MMFiles/MMFilesIndexLookupContext.h"
+#include "MMFilesEdgeIndex.h"
 #include "StorageEngine/TransactionState.h"
 #include "Transaction/Context.h"
 #include "Transaction/Helpers.h"
@@ -174,7 +174,7 @@ void MMFilesEdgeIndexIterator::reset() {
 }
 
 MMFilesEdgeIndex::MMFilesEdgeIndex(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection)
-    : MMFilesIndex(iid, collection,
+    : MMFilesIndex(iid, collection, StaticStrings::IndexNameEdge,
                    std::vector<std::vector<arangodb::basics::AttributeName>>(
                        {{arangodb::basics::AttributeName(StaticStrings::FromString, false)},
                         {arangodb::basics::AttributeName(StaticStrings::ToString, false)}}),
@@ -36,10 +36,10 @@ class LogicalCollection;
 
 class MMFilesIndex : public Index {
  public:
-  MMFilesIndex(TRI_idx_iid_t id, LogicalCollection& collection,
+  MMFilesIndex(TRI_idx_iid_t id, LogicalCollection& collection, std::string const& name,
                std::vector<std::vector<arangodb::basics::AttributeName>> const& attributes,
                bool unique, bool sparse)
-      : Index(id, collection, attributes, unique, sparse) {}
+      : Index(id, collection, name, attributes, unique, sparse) {}
 
   MMFilesIndex(TRI_idx_iid_t id, LogicalCollection& collection,
                arangodb::velocypack::Slice const& info)
@@ -66,7 +66,6 @@ class MMFilesIndex : public Index {
     // for mmfiles, truncating the index just unloads it
     unload();
   }
-
 };
 }  // namespace arangodb
 
@@ -21,7 +21,6 @@
 /// @author Dr. Frank Celler
 ////////////////////////////////////////////////////////////////////////////////
 
-#include "MMFilesPrimaryIndex.h"
 #include "Aql/AstNode.h"
 #include "Basics/Exceptions.h"
 #include "Basics/StaticStrings.h"
@@ -31,6 +30,7 @@
 #include "MMFiles/MMFilesCollection.h"
 #include "MMFiles/MMFilesIndexElement.h"
 #include "MMFiles/MMFilesIndexLookupContext.h"
+#include "MMFilesPrimaryIndex.h"
 #include "StorageEngine/TransactionState.h"
 #include "Transaction/Context.h"
 #include "Transaction/Helpers.h"
@@ -229,7 +229,7 @@ void MMFilesAnyIndexIterator::reset() {
 }
 
 MMFilesPrimaryIndex::MMFilesPrimaryIndex(arangodb::LogicalCollection& collection)
-    : MMFilesIndex(0, collection,
+    : MMFilesIndex(0, collection, StaticStrings::IndexNamePrimary,
                    std::vector<std::vector<arangodb::basics::AttributeName>>(
                        {{arangodb::basics::AttributeName(StaticStrings::KeyString, false)}}),
                    /*unique*/ true, /*sparse*/ false) {
@ -75,8 +75,8 @@ struct BuilderCookie : public arangodb::TransactionState::Cookie {
|
||||||
} // namespace
|
} // namespace
|
||||||
|
|
||||||
RocksDBBuilderIndex::RocksDBBuilderIndex(std::shared_ptr<arangodb::RocksDBIndex> const& wp)
|
RocksDBBuilderIndex::RocksDBBuilderIndex(std::shared_ptr<arangodb::RocksDBIndex> const& wp)
|
||||||
: RocksDBIndex(wp->id(), wp->collection(), wp->fields(), wp->unique(),
|
: RocksDBIndex(wp->id(), wp->collection(), wp->name(), wp->fields(),
|
||||||
wp->sparse(), wp->columnFamily(), wp->objectId(),
|
wp->unique(), wp->sparse(), wp->columnFamily(), wp->objectId(),
|
||||||
/*useCache*/ false),
|
/*useCache*/ false),
|
||||||
_wrapped(wp) {
|
_wrapped(wp) {
|
||||||
TRI_ASSERT(_wrapped);
|
TRI_ASSERT(_wrapped);
|
||||||
|
@ -182,9 +182,7 @@ static arangodb::Result fillIndex(RocksDBIndex& ridx, WriteBatchType& batch,
|
||||||
THROW_ARANGO_EXCEPTION(res);
|
THROW_ARANGO_EXCEPTION(res);
|
||||||
}
|
}
|
||||||
|
|
||||||
TRI_IF_FAILURE("RocksDBBuilderIndex::fillIndex") {
|
TRI_IF_FAILURE("RocksDBBuilderIndex::fillIndex") { FATAL_ERROR_EXIT(); }
|
||||||
FATAL_ERROR_EXIT();
|
|
||||||
}
|
|
||||||
|
|
||||||
uint64_t numDocsWritten = 0;
|
uint64_t numDocsWritten = 0;
|
||||||
auto state = RocksDBTransactionState::toState(&trx);
|
auto state = RocksDBTransactionState::toState(&trx);
|
||||||
|
@ -250,7 +248,8 @@ static arangodb::Result fillIndex(RocksDBIndex& ridx, WriteBatchType& batch,
|
||||||
}
|
}
|
||||||
|
|
||||||
// if an error occured drop() will be called
|
// if an error occured drop() will be called
|
||||||
LOG_TOPIC(DEBUG, Logger::ENGINES) << "SNAPSHOT CAPTURED " << numDocsWritten << " " << res.errorMessage();
|
LOG_TOPIC(DEBUG, Logger::ENGINES)
|
||||||
|
<< "SNAPSHOT CAPTURED " << numDocsWritten << " " << res.errorMessage();
|
||||||
|
|
||||||
return res;
|
return res;
|
||||||
}
|
}
|
||||||
|
@ -311,7 +310,6 @@ struct ReplayHandler final : public rocksdb::WriteBatch::Handler {
|
||||||
|
|
||||||
// The default implementation of LogData does nothing.
|
// The default implementation of LogData does nothing.
|
||||||
void LogData(const rocksdb::Slice& blob) override {
|
void LogData(const rocksdb::Slice& blob) override {
|
||||||
|
|
||||||
switch (RocksDBLogValue::type(blob)) {
|
switch (RocksDBLogValue::type(blob)) {
|
||||||
case RocksDBLogType::TrackedDocumentInsert:
|
case RocksDBLogType::TrackedDocumentInsert:
|
||||||
if (_lastObjectID == _objectId) {
|
if (_lastObjectID == _objectId) {
|
||||||
|
@ -369,8 +367,7 @@ struct ReplayHandler final : public rocksdb::WriteBatch::Handler {
|
||||||
return rocksdb::Status();
|
return rocksdb::Status();
|
||||||
}
|
}
|
||||||
|
|
||||||
rocksdb::Status DeleteRangeCF(uint32_t column_family_id,
|
rocksdb::Status DeleteRangeCF(uint32_t column_family_id, const rocksdb::Slice& begin_key,
|
||||||
const rocksdb::Slice& begin_key,
|
|
||||||
const rocksdb::Slice& end_key) override {
|
const rocksdb::Slice& end_key) override {
|
||||||
incTick(); // drop and truncate may use this
|
incTick(); // drop and truncate may use this
|
||||||
if (column_family_id == _index.columnFamily()->GetID() &&
|
if (column_family_id == _index.columnFamily()->GetID() &&
|
||||||
|
@ -502,8 +499,8 @@ Result catchup(RocksDBIndex& ridx, WriteBatchType& wb, AccessMode::Type mode,
|
||||||
}
|
}
|
||||||
|
|
||||||
LOG_TOPIC(DEBUG, Logger::ENGINES) << "WAL REPLAYED insertions: " << replay.numInserted
|
LOG_TOPIC(DEBUG, Logger::ENGINES) << "WAL REPLAYED insertions: " << replay.numInserted
|
||||||
<< "; deletions: " << replay.numRemoved << "; lastScannedTick "
|
<< "; deletions: " << replay.numRemoved
|
||||||
<< lastScannedTick;
|
<< "; lastScannedTick " << lastScannedTick;
|
||||||
|
|
||||||
   return res;
 }

@@ -552,14 +549,12 @@ arangodb::Result RocksDBBuilderIndex::fillIndexBackground(Locker& locker) {
     // unique index. we need to keep track of all our changes because we need to
     // avoid duplicate index keys. must therefore use a WriteBatchWithIndex
     rocksdb::WriteBatchWithIndex batch(cmp, 32 * 1024 * 1024);
-    res = ::fillIndex<rocksdb::WriteBatchWithIndex, RocksDBBatchedWithIndexMethods>(
-        *internal, batch, snap);
+    res = ::fillIndex<rocksdb::WriteBatchWithIndex, RocksDBBatchedWithIndexMethods>(*internal, batch, snap);
   } else {
     // non-unique index. all index keys will be unique anyway because they
     // contain the document id we can therefore get away with a cheap WriteBatch
     rocksdb::WriteBatch batch(32 * 1024 * 1024);
-    res = ::fillIndex<rocksdb::WriteBatch, RocksDBBatchedMethods>(*internal, batch,
-                                                                  snap);
+    res = ::fillIndex<rocksdb::WriteBatch, RocksDBBatchedMethods>(*internal, batch, snap);
   }

   if (res.fail()) {

@@ -578,8 +573,8 @@ arangodb::Result RocksDBBuilderIndex::fillIndexBackground(Locker& locker) {
   numScanned = 0;
   if (internal->unique()) {
     const rocksdb::Comparator* cmp = internal->columnFamily()->GetComparator();
-    // unique index. we need to keep track of all our changes because we need to
-    // avoid duplicate index keys. must therefore use a WriteBatchWithIndex
+    // unique index. we need to keep track of all our changes because we need
+    // to avoid duplicate index keys. must therefore use a WriteBatchWithIndex
     rocksdb::WriteBatchWithIndex batch(cmp, 32 * 1024 * 1024);
     res = ::catchup<rocksdb::WriteBatchWithIndex, RocksDBBatchedWithIndexMethods>(
         *internal, batch, AccessMode::Type::WRITE, scanFrom, lastScanned, numScanned);

@@ -587,10 +582,8 @@ arangodb::Result RocksDBBuilderIndex::fillIndexBackground(Locker& locker) {
     // non-unique index. all index keys will be unique anyway because they
     // contain the document id we can therefore get away with a cheap WriteBatch
     rocksdb::WriteBatch batch(32 * 1024 * 1024);
-    res = ::catchup<rocksdb::WriteBatch, RocksDBBatchedMethods>(*internal, batch,
-                                                                AccessMode::Type::WRITE,
-                                                                scanFrom, lastScanned,
-                                                                numScanned);
+    res = ::catchup<rocksdb::WriteBatch, RocksDBBatchedMethods>(
+        *internal, batch, AccessMode::Type::WRITE, scanFrom, lastScanned, numScanned);
   }

   if (res.fail()) {
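The comments in the hunks above explain why the unique-index fill path uses `rocksdb::WriteBatchWithIndex` while the non-unique path gets away with a plain `WriteBatch`: a unique index must be able to see its own pending (uncommitted) writes to reject duplicate keys. A minimal sketch of that idea, using a `std::map` as a stand-in for the indexed batch (`IndexedBatch` and `insertUnique` are illustrative names, not ArangoDB or RocksDB API):

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for rocksdb::WriteBatchWithIndex: buffered writes that remain
// readable before the batch is committed. A plain WriteBatch only appends
// operations and cannot answer contains() without committing first.
class IndexedBatch {
 public:
  void put(const std::string& key, const std::string& value) {
    pending_[key] = value;
  }
  bool contains(const std::string& key) const {
    return pending_.count(key) > 0;
  }

 private:
  std::map<std::string, std::string> pending_;
};

// Insert a key into a unique index: reject it if an equal key is already
// pending in the same batch.
bool insertUnique(IndexedBatch& batch, const std::string& key,
                  const std::string& doc) {
  if (batch.contains(key)) {
    return false;  // duplicate unique-index key
  }
  batch.put(key, doc);
  return true;
}
```

For a non-unique index every entry also carries the document id, so all stored keys are distinct by construction and no read-back of pending writes is needed.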
@@ -20,7 +20,6 @@
 /// @author Jan Christoph Uhde
 ////////////////////////////////////////////////////////////////////////////////

-#include "RocksDBCollection.h"
 #include "Aql/PlanCache.h"
 #include "Basics/ReadLocker.h"
 #include "Basics/Result.h"

@@ -36,6 +35,7 @@
 #include "Indexes/Index.h"
 #include "Indexes/IndexIterator.h"
 #include "RestServer/DatabaseFeature.h"
+#include "RocksDBCollection.h"
 #include "RocksDBEngine/RocksDBBuilderIndex.h"
 #include "RocksDBEngine/RocksDBCommon.h"
 #include "RocksDBEngine/RocksDBComparator.h"

@@ -269,10 +269,13 @@ void RocksDBCollection::prepareIndexes(arangodb::velocypack::Slice indexesSlice)
   WRITE_LOCKER(guard, _indexesLock);
   TRI_ASSERT(_indexes.empty());
   for (std::shared_ptr<Index>& idx : indexes) {
+    TRI_ASSERT(idx != nullptr);
     auto const id = idx->id();
     for (auto const& it : _indexes) {
+      TRI_ASSERT(it != nullptr);
       if (it->id() == id) {  // index is there twice
         idx.reset();
+        break;
       }
     }
   }
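The `prepareIndexes` hunk above adds null-pointer assertions and an early `break` to the duplicate scan. A self-contained model of that loop (plain `Index` struct and `dropIfDuplicate` are illustrative, not ArangoDB types):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Minimal model of the de-duplication above: a new index whose id already
// appears in the registered list is dropped (reset), and the inner scan
// stops as soon as the duplicate is found.
struct Index {
  unsigned long id;
};

void dropIfDuplicate(std::shared_ptr<Index>& idx,
                     const std::vector<std::shared_ptr<Index>>& registered) {
  for (const auto& it : registered) {
    if (it->id == idx->id) {  // index is there twice
      idx.reset();
      break;  // no need to scan further once a duplicate is found
    }
  }
}
```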
@@ -288,8 +291,9 @@ void RocksDBCollection::prepareIndexes(arangodb::velocypack::Slice indexesSlice)

   if (_indexes[0]->type() != Index::IndexType::TRI_IDX_TYPE_PRIMARY_INDEX ||
       (TRI_COL_TYPE_EDGE == _logicalCollection.type() &&
+       (_indexes.size() < 3 ||
        (_indexes[1]->type() != Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX ||
-        _indexes[2]->type() != Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX))) {
+         _indexes[2]->type() != Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX)))) {
     std::string msg =
         "got invalid indexes for collection '" + _logicalCollection.name() + "'";
     LOG_TOPIC(ERR, arangodb::Logger::ENGINES) << msg;

@@ -332,7 +336,9 @@ std::shared_ptr<Index> RocksDBCollection::createIndex(VPackSlice const& info,
   if ((idx = findIndex(info, _indexes)) != nullptr) {
     // We already have this index.
     if (idx->type() == arangodb::Index::TRI_IDX_TYPE_TTL_INDEX) {
-      THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER, "there can only be one ttl index per collection");
+      THROW_ARANGO_EXCEPTION_MESSAGE(
+          TRI_ERROR_BAD_PARAMETER,
+          "there can only be one ttl index per collection");
     }

     created = false;

@@ -358,8 +364,15 @@ std::shared_ptr<Index> RocksDBCollection::createIndex(VPackSlice const& info,
   {
     READ_LOCKER(guard, _indexesLock);
     for (auto const& other : _indexes) {  // conflicting index exists
-      if (other->id() == idx->id()) {
-        return other;  // index already exists
+      if (other->id() == idx->id() || other->name() == idx->name()) {
+        // definition shares an identifier with an existing index with a
+        // different definition
+        THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_ARANGO_DUPLICATE_IDENTIFIER,
+                                       "duplicate value for `" +
+                                           arangodb::StaticStrings::IndexId +
+                                           "` or `" +
+                                           arangodb::StaticStrings::IndexName +
+                                           "`");
       }
     }
   }
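The last hunk above is the core of the named-indices change: a new index definition may no longer reuse either the id or the name of an existing index; instead of silently returning the old index, `createIndex` now raises `TRI_ERROR_ARANGO_DUPLICATE_IDENTIFIER`. A simplified model of that conflict check (`IndexDef` and `checkConflicts` are illustrative names, not ArangoDB types):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <vector>

// Candidate index definitions carry both an id and a (now unique) name.
struct IndexDef {
  unsigned long id;
  std::string name;
};

// Reject a candidate that shares an id or name with any existing index.
void checkConflicts(const IndexDef& candidate,
                    const std::vector<IndexDef>& existing) {
  for (const auto& other : existing) {
    if (other.id == candidate.id || other.name == candidate.name) {
      throw std::runtime_error("duplicate value for `id` or `name`");
    }
  }
}
```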
@@ -752,7 +765,8 @@ bool RocksDBCollection::lookupRevision(transaction::Methods* trx, VPackSlice con
   LocalDocumentId documentId;
   revisionId = 0;
   // lookup the revision id in the primary index
-  if (!primaryIndex()->lookupRevision(trx, arangodb::velocypack::StringRef(key), documentId, revisionId)) {
+  if (!primaryIndex()->lookupRevision(trx, arangodb::velocypack::StringRef(key),
+                                      documentId, revisionId)) {
     // document not found
     TRI_ASSERT(revisionId == 0);
     return false;

@@ -769,7 +783,8 @@ bool RocksDBCollection::lookupRevision(transaction::Methods* trx, VPackSlice con
   });
 }

-Result RocksDBCollection::read(transaction::Methods* trx, arangodb::velocypack::StringRef const& key,
+Result RocksDBCollection::read(transaction::Methods* trx,
+                               arangodb::velocypack::StringRef const& key,
                                ManagedDocumentResult& result, bool) {
   LocalDocumentId const documentId = primaryIndex()->lookupKey(trx, key);
   if (documentId.isSet()) {

@@ -1331,8 +1346,8 @@ Result RocksDBCollection::updateDocument(transaction::Methods* trx,
   READ_LOCKER(guard, _indexesLock);
   for (std::shared_ptr<Index> const& idx : _indexes) {
     RocksDBIndex* rIdx = static_cast<RocksDBIndex*>(idx.get());
-    res = rIdx->update(*trx, mthds, oldDocumentId, oldDoc, newDocumentId, newDoc,
-                       options.indexOperationMode);
+    res = rIdx->update(*trx, mthds, oldDocumentId, oldDoc, newDocumentId,
+                       newDoc, options.indexOperationMode);

     if (res.fail()) {
       break;

@@ -22,7 +22,6 @@
 /// @author Michael Hackstein
 ////////////////////////////////////////////////////////////////////////////////

-#include "RocksDBEdgeIndex.h"
 #include "Aql/AstNode.h"
 #include "Aql/SortCondition.h"
 #include "Basics/Exceptions.h"

@@ -32,6 +31,7 @@
 #include "Cache/CachedValue.h"
 #include "Cache/TransactionalCache.h"
 #include "Indexes/SimpleAttributeEqualityMatcher.h"
+#include "RocksDBEdgeIndex.h"
 #include "RocksDBEngine/RocksDBCollection.h"
 #include "RocksDBEngine/RocksDBCommon.h"
 #include "RocksDBEngine/RocksDBKey.h"

@@ -83,8 +83,7 @@ void RocksDBEdgeIndexWarmupTask::run() {
 namespace arangodb {
 class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
  public:
-  RocksDBEdgeIndexLookupIterator(LogicalCollection* collection,
-                                 transaction::Methods* trx,
+  RocksDBEdgeIndexLookupIterator(LogicalCollection* collection, transaction::Methods* trx,
                                  arangodb::RocksDBEdgeIndex const* index,
                                  std::unique_ptr<VPackBuilder> keys,
                                  std::shared_ptr<cache::Cache> cache)

@@ -441,15 +440,15 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
     resetInplaceMemory();
     _keysIterator.reset();
     _lastKey = VPackSlice::nullSlice();
-    _builderIterator = VPackArrayIterator(arangodb::velocypack::Slice::emptyArraySlice());
+    _builderIterator =
+        VPackArrayIterator(arangodb::velocypack::Slice::emptyArraySlice());
   }

   /// @brief index supports rearming
   bool canRearm() const override { return true; }

   /// @brief rearm the index iterator
-  bool rearm(arangodb::aql::AstNode const* node,
-             arangodb::aql::Variable const* variable,
+  bool rearm(arangodb::aql::AstNode const* node, arangodb::aql::Variable const* variable,
              IndexIteratorOptions const& opts) override {
     TRI_ASSERT(!_index->isSorted() || opts.sorted);

@@ -470,8 +469,7 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
       return true;
     }

-    if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_IN &&
-        aap.value->isArray()) {
+    if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_IN && aap.value->isArray()) {
       // a.b IN values
       _index->fillInLookupValues(_trx, *(_keys.get()), aap.value);
       _keysIterator = VPackArrayIterator(_keys->slice());

@@ -519,11 +517,13 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
     auto end = _bounds.end();
     while (_iterator->Valid() && (cmp->Compare(_iterator->key(), end) < 0)) {
       LocalDocumentId const documentId =
-          RocksDBKey::indexDocumentId(RocksDBEntryType::EdgeIndexValue, _iterator->key());
+          RocksDBKey::indexDocumentId(RocksDBEntryType::EdgeIndexValue,
+                                      _iterator->key());

       // adding revision ID and _from or _to value
       _builder.add(VPackValue(documentId.id()));
-      arangodb::velocypack::StringRef vertexId = RocksDBValue::vertexId(_iterator->value());
+      arangodb::velocypack::StringRef vertexId =
+          RocksDBValue::vertexId(_iterator->value());
       _builder.add(VPackValuePair(vertexId.data(), vertexId.size(), VPackValueType::String));

       _iterator->Next();

@@ -574,7 +574,7 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
   arangodb::velocypack::Slice _lastKey;
 };

-}  // namespace
+}  // namespace arangodb

 // ============================= Index ====================================

@@ -590,6 +590,8 @@ RocksDBEdgeIndex::RocksDBEdgeIndex(TRI_idx_iid_t iid, arangodb::LogicalCollectio
                                    arangodb::velocypack::Slice const& info,
                                    std::string const& attr)
     : RocksDBIndex(iid, collection,
+                   ((attr == StaticStrings::FromString) ? StaticStrings::IndexNameEdgeFrom
+                                                        : StaticStrings::IndexNameEdgeTo),
                    std::vector<std::vector<AttributeName>>({{AttributeName(attr, false)}}),
                    false, false, RocksDBColumnFamily::edge(),
                    basics::VelocyPackHelper::stringUInt64(info, "objectId"),
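The last hunk above gives each of the two internal edge indexes a fixed name based on the attribute it covers (`StaticStrings::IndexNameEdgeFrom` vs. `StaticStrings::IndexNameEdgeTo`). A sketch of that selection; the literal name strings here are assumptions for illustration, ArangoDB takes the actual values from `StaticStrings`:

```cpp
#include <cassert>
#include <string>

// Pick the built-in name for one half of the edge index, depending on
// whether it covers the _from or the _to attribute. The returned literals
// are illustrative stand-ins for StaticStrings::IndexNameEdgeFrom/To.
std::string edgeIndexName(const std::string& attr) {
  return (attr == "_from") ? "edge_from" : "edge_to";
}
```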
@@ -645,8 +647,7 @@ void RocksDBEdgeIndex::toVelocyPack(VPackBuilder& builder,

 Result RocksDBEdgeIndex::insert(transaction::Methods& trx, RocksDBMethods* mthd,
                                 LocalDocumentId const& documentId,
-                                velocypack::Slice const& doc,
-                                Index::OperationMode mode) {
+                                velocypack::Slice const& doc, Index::OperationMode mode) {
   Result res;
   VPackSlice fromTo = doc.get(_directionAttr);
   TRI_ASSERT(fromTo.isString());

@@ -660,7 +661,8 @@ Result RocksDBEdgeIndex::insert(transaction::Methods& trx, RocksDBMethods* mthd,
                        ? transaction::helpers::extractToFromDocument(doc)
                        : transaction::helpers::extractFromFromDocument(doc);
   TRI_ASSERT(toFrom.isString());
-  RocksDBValue value = RocksDBValue::EdgeIndexValue(arangodb::velocypack::StringRef(toFrom));
+  RocksDBValue value =
+      RocksDBValue::EdgeIndexValue(arangodb::velocypack::StringRef(toFrom));

   // blacklist key in cache
   blackListKey(fromToRef);

@@ -682,8 +684,7 @@ Result RocksDBEdgeIndex::insert(transaction::Methods& trx, RocksDBMethods* mthd,

 Result RocksDBEdgeIndex::remove(transaction::Methods& trx, RocksDBMethods* mthd,
                                 LocalDocumentId const& documentId,
-                                velocypack::Slice const& doc,
-                                Index::OperationMode mode) {
+                                velocypack::Slice const& doc, Index::OperationMode mode) {
   Result res;

   // VPackSlice primaryKey = doc.get(StaticStrings::KeyString);

@@ -696,7 +697,8 @@ Result RocksDBEdgeIndex::remove(transaction::Methods& trx, RocksDBMethods* mthd,
                        ? transaction::helpers::extractToFromDocument(doc)
                        : transaction::helpers::extractFromFromDocument(doc);
   TRI_ASSERT(toFrom.isString());
-  RocksDBValue value = RocksDBValue::EdgeIndexValue(arangodb::velocypack::StringRef(toFrom));
+  RocksDBValue value =
+      RocksDBValue::EdgeIndexValue(arangodb::velocypack::StringRef(toFrom));

   // blacklist key in cache
   blackListKey(fromToRef);

@@ -1053,8 +1055,7 @@ void RocksDBEdgeIndex::fillLookupValue(VPackBuilder& keys,
   keys.close();
 }

-void RocksDBEdgeIndex::fillInLookupValues(transaction::Methods* trx,
-                                          VPackBuilder& keys,
+void RocksDBEdgeIndex::fillInLookupValues(transaction::Methods* trx, VPackBuilder& keys,
                                           arangodb::aql::AstNode const* values) const {
   TRI_ASSERT(values != nullptr);
   TRI_ASSERT(values->type == arangodb::aql::NODE_TYPE_ARRAY);

@@ -21,7 +21,6 @@
 /// @author Jan Steemann
 ////////////////////////////////////////////////////////////////////////////////

-#include "RocksDBIndex.h"
 #include "Basics/VelocyPackHelper.h"
 #include "Cache/CacheManagerFeature.h"
 #include "Cache/Common.h"

@@ -33,6 +32,7 @@
 #include "RocksDBEngine/RocksDBComparator.h"
 #include "RocksDBEngine/RocksDBMethods.h"
 #include "RocksDBEngine/RocksDBTransactionState.h"
+#include "RocksDBIndex.h"
 #include "StorageEngine/EngineSelectorFeature.h"
 #include "VocBase/LogicalCollection.h"
 #include "VocBase/ticks.h"

@@ -59,10 +59,11 @@ inline uint64_t ensureObjectId(uint64_t oid) {
 }  // namespace

 RocksDBIndex::RocksDBIndex(TRI_idx_iid_t id, LogicalCollection& collection,
+                           std::string const& name,
                            std::vector<std::vector<arangodb::basics::AttributeName>> const& attributes,
                            bool unique, bool sparse, rocksdb::ColumnFamilyHandle* cf,
                            uint64_t objectId, bool useCache)
-    : Index(id, collection, attributes, unique, sparse),
+    : Index(id, collection, name, attributes, unique, sparse),
       _objectId(::ensureObjectId(objectId)),
       _cf(cf),
       _cache(nullptr),
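The constructor hunk above threads the new `name` parameter from `RocksDBIndex` down to the base `Index` class. Per the changelog entry for this commit, an index name that is not specified at creation time is auto-generated. A sketch of one plausible resolution scheme; `resolveIndexName` and the `type + "_" + id` format are assumptions for illustration, not ArangoDB's actual generator:

```cpp
#include <cassert>
#include <string>

// Resolve the effective index name: a user-supplied name wins, otherwise
// derive a deterministic fallback from the index type and id (illustrative
// scheme, not the actual ArangoDB naming code).
std::string resolveIndexName(const std::string& requested,
                             const std::string& type, unsigned long id) {
  if (!requested.empty()) {
    return requested;  // user-supplied name wins
  }
  return type + "_" + std::to_string(id);
}
```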
@@ -241,10 +242,8 @@ void RocksDBIndex::afterTruncate(TRI_voc_tick_t) {

 Result RocksDBIndex::update(transaction::Methods& trx, RocksDBMethods* mthd,
                             LocalDocumentId const& oldDocumentId,
-                            velocypack::Slice const& oldDoc,
-                            LocalDocumentId const& newDocumentId,
-                            velocypack::Slice const& newDoc,
-                            Index::OperationMode mode) {
+                            velocypack::Slice const& oldDoc, LocalDocumentId const& newDocumentId,
+                            velocypack::Slice const& newDoc, Index::OperationMode mode) {
   // It is illegal to call this method on the primary index
   // RocksDBPrimaryIndex must override this method accordingly
   TRI_ASSERT(type() != TRI_IDX_TYPE_PRIMARY_INDEX);

@@ -128,7 +128,7 @@ class RocksDBIndex : public Index {
   virtual bool isPersistent() const override { return true; }

  protected:
-  RocksDBIndex(TRI_idx_iid_t id, LogicalCollection& collection,
+  RocksDBIndex(TRI_idx_iid_t id, LogicalCollection& collection, std::string const& name,
               std::vector<std::vector<arangodb::basics::AttributeName>> const& attributes,
               bool unique, bool sparse, rocksdb::ColumnFamilyHandle* cf,
               uint64_t objectId, bool useCache);

@@ -139,7 +139,9 @@ class RocksDBIndex : public Index {

   inline bool useCache() const { return (_cacheEnabled && _cachePresent); }
   void blackListKey(char const* data, std::size_t len);
-  void blackListKey(arangodb::velocypack::StringRef& ref) { blackListKey(ref.data(), ref.size()); };
+  void blackListKey(arangodb::velocypack::StringRef& ref) {
+    blackListKey(ref.data(), ref.size());
+  };

  protected:
   uint64_t _objectId;

@@ -21,7 +21,6 @@
 /// @author Michael Hackstein
 ////////////////////////////////////////////////////////////////////////////////

-#include "RocksDBIndexFactory.h"
 #include "Basics/StaticStrings.h"
 #include "Basics/StringUtils.h"
 #include "Basics/VelocyPackHelper.h"

@@ -36,6 +35,7 @@
 #include "RocksDBEngine/RocksDBPrimaryIndex.h"
 #include "RocksDBEngine/RocksDBSkiplistIndex.h"
 #include "RocksDBEngine/RocksDBTtlIndex.h"
+#include "RocksDBIndexFactory.h"
 #include "VocBase/LogicalCollection.h"
 #include "VocBase/ticks.h"
 #include "VocBase/voc-types.h"

@@ -71,8 +71,7 @@ struct EdgeIndexFactory : public DefaultIndexFactory {
   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     if (!isClusterConstructor) {
       // this indexes cannot be created directly
       return arangodb::Result(TRI_ERROR_INTERNAL, "cannot create edge index");

@@ -115,8 +114,7 @@ struct FulltextIndexFactory : public DefaultIndexFactory {
   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     index = std::make_shared<arangodb::RocksDBFulltextIndex>(id, collection, definition);

     return arangodb::Result();

@@ -150,8 +148,7 @@ struct GeoIndexFactory : public DefaultIndexFactory {
   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     index = std::make_shared<arangodb::RocksDBGeoIndex>(id, collection,
                                                         definition, "geo");

@@ -185,8 +182,7 @@ struct Geo1IndexFactory : public DefaultIndexFactory {
   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     index = std::make_shared<arangodb::RocksDBGeoIndex>(id, collection,
                                                         definition, "geo1");

@@ -221,8 +217,7 @@ struct Geo2IndexFactory : public DefaultIndexFactory {
   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     index = std::make_shared<arangodb::RocksDBGeoIndex>(id, collection,
                                                         definition, "geo2");

@@ -257,8 +252,7 @@ struct SecondaryIndexFactory : public DefaultIndexFactory {
   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     index = std::make_shared<F>(id, collection, definition);
     return arangodb::Result();
   }

@@ -284,13 +278,13 @@ struct SecondaryIndexFactory : public DefaultIndexFactory {
 };

 struct TtlIndexFactory : public DefaultIndexFactory {
-  explicit TtlIndexFactory(arangodb::Index::IndexType type) : DefaultIndexFactory(type) {}
+  explicit TtlIndexFactory(arangodb::Index::IndexType type)
+      : DefaultIndexFactory(type) {}

   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     index = std::make_shared<RocksDBTtlIndex>(id, collection, definition);
     return arangodb::Result();
   }

@@ -322,8 +316,7 @@ struct PrimaryIndexFactory : public DefaultIndexFactory {
   arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                                arangodb::LogicalCollection& collection,
                                arangodb::velocypack::Slice const& definition,
-                               TRI_idx_iid_t id,
-                               bool isClusterConstructor) const override {
+                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
     if (!isClusterConstructor) {
       // this indexes cannot be created directly
       return arangodb::Result(TRI_ERROR_INTERNAL,

@@ -382,8 +375,8 @@ RocksDBIndexFactory::RocksDBIndexFactory() {
   emplace("ttl", ttlIndexFactory);
 }

-/// @brief index name aliases (e.g. "persistent" => "hash", "skiplist" => "hash")
-/// used to display storage engine capabilities
+/// @brief index name aliases (e.g. "persistent" => "hash", "skiplist" =>
+/// "hash") used to display storage engine capabilities
 std::unordered_map<std::string, std::string> RocksDBIndexFactory::indexAliases() const {
   return std::unordered_map<std::string, std::string>{
       {"skiplist", "hash"},
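The `indexAliases` hunk above exposes a map stating that, in the RocksDB engine, "persistent" and "skiplist" indexes are backed by the same implementation as "hash". A self-contained sketch of such an alias lookup, falling back to the requested type when no alias is registered (`resolveAlias` is an illustrative helper, not ArangoDB API):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Map a requested index type to the type that actually implements it.
// Entries mirror the aliases documented in the hunk above; unknown types
// are returned unchanged.
std::string resolveAlias(const std::string& type) {
  static const std::unordered_map<std::string, std::string> aliases{
      {"skiplist", "hash"}, {"persistent", "hash"}};
  auto it = aliases.find(type);
  return (it != aliases.end()) ? it->second : type;
}
```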
@@ -21,7 +21,6 @@
 /// @author Jan Steemann
 ////////////////////////////////////////////////////////////////////////////////

-#include "RocksDBPrimaryIndex.h"
 #include "Aql/Ast.h"
 #include "Aql/AstNode.h"
 #include "Basics/Exceptions.h"

@@ -42,6 +41,7 @@
 #include "RocksDBEngine/RocksDBTransactionState.h"
 #include "RocksDBEngine/RocksDBTypes.h"
 #include "RocksDBEngine/RocksDBValue.h"
+#include "RocksDBPrimaryIndex.h"
 #include "StorageEngine/EngineSelectorFeature.h"
 #include "Transaction/Context.h"
 #include "Transaction/Helpers.h"

@@ -67,7 +67,7 @@ using namespace arangodb;
 namespace {
 std::string const lowest;            // smallest possible key
 std::string const highest = "\xFF";  // greatest possible key
-}
+}  // namespace

 // ================ Primary Index Iterators ================

@@ -75,9 +75,10 @@ namespace arangodb {

 class RocksDBPrimaryIndexEqIterator final : public IndexIterator {
  public:
-  RocksDBPrimaryIndexEqIterator(
-      LogicalCollection* collection, transaction::Methods* trx, RocksDBPrimaryIndex* index,
-      std::unique_ptr<VPackBuilder> key, bool allowCoveringIndexOptimization)
+  RocksDBPrimaryIndexEqIterator(LogicalCollection* collection,
+                                transaction::Methods* trx, RocksDBPrimaryIndex* index,
+                                std::unique_ptr<VPackBuilder> key,
+                                bool allowCoveringIndexOptimization)
       : IndexIterator(collection, trx),
         _index(index),
         _key(std::move(key)),

@@ -99,8 +100,7 @@ class RocksDBPrimaryIndexEqIterator final : public IndexIterator {
   bool canRearm() const override { return true; }

   /// @brief rearm the index iterator
-  bool rearm(arangodb::aql::AstNode const* node,
-             arangodb::aql::Variable const* variable,
+  bool rearm(arangodb::aql::AstNode const* node, arangodb::aql::Variable const* variable,
              IndexIteratorOptions const& opts) override {
     TRI_ASSERT(node != nullptr);
     TRI_ASSERT(node->type == aql::NODE_TYPE_OPERATOR_NARY_AND);

@@ -128,7 +128,8 @@ class RocksDBPrimaryIndexEqIterator final : public IndexIterator {
     }

     _done = true;
-    LocalDocumentId documentId = _index->lookupKey(_trx, arangodb::velocypack::StringRef(_key->slice()));
+    LocalDocumentId documentId =
+        _index->lookupKey(_trx, arangodb::velocypack::StringRef(_key->slice()));
     if (documentId.isSet()) {
       cb(documentId);
     }

@@ -146,7 +147,8 @@ class RocksDBPrimaryIndexEqIterator final : public IndexIterator {
     }

     _done = true;
-    LocalDocumentId documentId = _index->lookupKey(_trx, arangodb::velocypack::StringRef(_key->slice()));
+    LocalDocumentId documentId =
+        _index->lookupKey(_trx, arangodb::velocypack::StringRef(_key->slice()));
     if (documentId.isSet()) {
       cb(documentId, _key->slice());
     }

@@ -193,8 +195,7 @@ class RocksDBPrimaryIndexInIterator final : public IndexIterator {
   bool canRearm() const override { return true; }

   /// @brief rearm the index iterator
-  bool rearm(arangodb::aql::AstNode const* node,
-             arangodb::aql::Variable const* variable,
+  bool rearm(arangodb::aql::AstNode const* node, arangodb::aql::Variable const* variable,
              IndexIteratorOptions const& opts) override {
     TRI_ASSERT(node != nullptr);
     TRI_ASSERT(node->type == aql::NODE_TYPE_OPERATOR_NARY_AND);

@@ -203,7 +204,8 @@ class RocksDBPrimaryIndexInIterator final : public IndexIterator {
     TRI_ASSERT(aap.opType == arangodb::aql::NODE_TYPE_OPERATOR_BINARY_IN);

     if (aap.value->isArray()) {
-      _index->fillInLookupValues(_trx, *(_keys.get()), aap.value, opts.ascending, !_allowCoveringIndexOptimization);
+      _index->fillInLookupValues(_trx, *(_keys.get()), aap.value, opts.ascending,
+                                 !_allowCoveringIndexOptimization);
       return true;
     }

@@ -219,7 +221,8 @@ class RocksDBPrimaryIndexInIterator final : public IndexIterator {
     }

     while (limit > 0) {
-      LocalDocumentId documentId = _index->lookupKey(_trx, arangodb::velocypack::StringRef(*_iterator));
+      LocalDocumentId documentId =
+          _index->lookupKey(_trx, arangodb::velocypack::StringRef(*_iterator));
       if (documentId.isSet()) {
         cb(documentId);
         --limit;

@@ -243,7 +246,8 @@ class RocksDBPrimaryIndexInIterator final : public IndexIterator {
     }

     while (limit > 0) {
-      LocalDocumentId documentId = _index->lookupKey(_trx, arangodb::velocypack::StringRef(*_iterator));
+      LocalDocumentId documentId =
+          _index->lookupKey(_trx, arangodb::velocypack::StringRef(*_iterator));
       if (documentId.isSet()) {
         cb(documentId, *_iterator);
         --limit;

@@ -307,7 +311,9 @@ class RocksDBPrimaryIndexRangeIterator final : public IndexIterator {
   }

  public:
-  char const* typeName() const override { return "rocksdb-range-index-iterator"; }
+  char const* typeName() const override {
+    return "rocksdb-range-index-iterator";
+  }

   /// @brief Get the next limit many elements in the index
   bool next(LocalDocumentIdCallback const& cb, size_t limit) override {

@@ -398,13 +404,13 @@ class RocksDBPrimaryIndexRangeIterator final : public IndexIterator {
   rocksdb::Slice _rangeBound;
 };

-}  // namespace
+}  // namespace arangodb

 // ================ PrimaryIndex ================

 RocksDBPrimaryIndex::RocksDBPrimaryIndex(arangodb::LogicalCollection& collection,
                                          arangodb::velocypack::Slice const& info)
-    : RocksDBIndex(0, collection,
+    : RocksDBIndex(0, collection, StaticStrings::IndexNamePrimary,
                    std::vector<std::vector<arangodb::basics::AttributeName>>(
                        {{arangodb::basics::AttributeName(StaticStrings::KeyString, false)}}),
                    true, false, RocksDBColumnFamily::primary(),

@@ -502,7 +508,8 @@ LocalDocumentId RocksDBPrimaryIndex::lookupKey(transaction::Methods* trx,
 /// the case for older collections
 /// in this case the caller must fetch the revision id from the actual
 /// document
-bool RocksDBPrimaryIndex::lookupRevision(transaction::Methods* trx, arangodb::velocypack::StringRef keyRef,
+bool RocksDBPrimaryIndex::lookupRevision(transaction::Methods* trx,
+                                         arangodb::velocypack::StringRef keyRef,
                                          LocalDocumentId& documentId,
                                          TRI_voc_rid_t& revisionId) const {
   documentId.clear();

@@ -668,16 +675,14 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(
       // a.b == value
       return createEqIterator(trx, aap.attribute, aap.value);
     }
-    if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_IN &&
-        aap.value->isArray()) {
+    if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_IN && aap.value->isArray()) {
       // a.b IN array
       return createInIterator(trx, aap.attribute, aap.value, opts.ascending);
     }
     // fall-through intentional here
   }

-  auto removeCollectionFromString =
-      [this, &trx](bool isId, std::string& value) -> int {
+  auto removeCollectionFromString = [this, &trx](bool isId, std::string& value) -> int {
     if (isId) {
       char const* key = nullptr;
       size_t outLength = 0;

@@ -723,11 +728,9 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(

   auto type = aap.opType;

-  if (!(type == aql::NODE_TYPE_OPERATOR_BINARY_LE ||
-        type == aql::NODE_TYPE_OPERATOR_BINARY_LT || type == aql::NODE_TYPE_OPERATOR_BINARY_GE ||
-        type == aql::NODE_TYPE_OPERATOR_BINARY_GT ||
-        type == aql::NODE_TYPE_OPERATOR_BINARY_EQ
-        )) {
+  if (!(type == aql::NODE_TYPE_OPERATOR_BINARY_LE || type == aql::NODE_TYPE_OPERATOR_BINARY_LT ||
+        type == aql::NODE_TYPE_OPERATOR_BINARY_GE || type == aql::NODE_TYPE_OPERATOR_BINARY_GT ||
+        type == aql::NODE_TYPE_OPERATOR_BINARY_EQ)) {
     return new EmptyIndexIterator(&_collection, trx);
   }

@@ -740,11 +743,15 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(
   } else if (aap.value->isObject() || aap.value->isArray()) {
     // any array or object value is bigger than any potential key
     value = ::highest;
-  } else if (aap.value->isNullValue() || aap.value->isBoolValue() || aap.value->isIntValue()) {
+  } else if (aap.value->isNullValue() || aap.value->isBoolValue() ||
+             aap.value->isIntValue()) {
     // any null, bool or numeric value is lower than any potential key
     // keep lower bound
   } else {
-    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL, std::string("unhandled type for valNode: ") + aap.value->getTypeString());
+    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL,
+                                   std::string(
+                                       "unhandled type for valNode: ") +
+                                       aap.value->getTypeString());
   }

   // strip collection name prefix from comparison value

@@ -763,7 +770,8 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(
       lower = std::move(value);
       lowerFound = true;
     }
-  } else if (type == aql::NODE_TYPE_OPERATOR_BINARY_LE || type == aql::NODE_TYPE_OPERATOR_BINARY_LT) {
+  } else if (type == aql::NODE_TYPE_OPERATOR_BINARY_LE ||
+             type == aql::NODE_TYPE_OPERATOR_BINARY_LT) {
     // a.b < value
     if (cmpResult > 0) {
       // doc._id < collection with "bigger" name

@@ -780,7 +788,8 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(
       }
     }
     upperFound = true;
-  } else if (type == aql::NODE_TYPE_OPERATOR_BINARY_GE || type == aql::NODE_TYPE_OPERATOR_BINARY_GT) {
+  } else if (type == aql::NODE_TYPE_OPERATOR_BINARY_GE ||
+             type == aql::NODE_TYPE_OPERATOR_BINARY_GT) {
     // a.b > value
     if (cmpResult < 0) {
       // doc._id > collection with "smaller" name

@@ -868,11 +877,9 @@ IndexIterator* RocksDBPrimaryIndex::createEqIterator(transaction::Methods* trx,
   return new EmptyIndexIterator(&_collection, trx);
 }

-void RocksDBPrimaryIndex::fillInLookupValues(transaction::Methods* trx,
-                                             VPackBuilder& keys,
+void RocksDBPrimaryIndex::fillInLookupValues(transaction::Methods* trx, VPackBuilder& keys,
                                              arangodb::aql::AstNode const* values,
-                                             bool ascending,
-                                             bool isId) const {
+                                             bool ascending, bool isId) const {
   TRI_ASSERT(values != nullptr);
   TRI_ASSERT(values->type == arangodb::aql::NODE_TYPE_ARRAY);
@@ -157,6 +157,16 @@ std::shared_ptr<Index> PhysicalCollection::lookupIndex(TRI_idx_iid_t idxId) cons
   return nullptr;
 }

+std::shared_ptr<Index> PhysicalCollection::lookupIndex(std::string const& idxName) const {
+  READ_LOCKER(guard, _indexesLock);
+  for (auto const& idx : _indexes) {
+    if (idx->name() == idxName) {
+      return idx;
+    }
+  }
+  return nullptr;
+}
+
 TRI_voc_rid_t PhysicalCollection::newRevisionId() const {
   return TRI_HybridLogicalClock();
 }
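The hunk above adds the new lookup-by-name overload: a linear scan over the collection's index list under a read lock. The following is a rough standalone sketch of the same idea; the `Index` struct and free `lookupIndex` function here are simplified stand-ins for illustration, not the real ArangoDB types, and the `READ_LOCKER` guard is omitted.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical minimal stand-in for arangodb::Index: just a named handle.
struct Index {
  explicit Index(std::string n) : _name(std::move(n)) {}
  std::string const& name() const { return _name; }
  std::string _name;
};

// Sketch of the lookup-by-name added in this commit: return the first index
// whose name matches, or nullptr when no index has that name.
std::shared_ptr<Index> lookupIndex(std::vector<std::shared_ptr<Index>> const& indexes,
                                   std::string const& idxName) {
  for (auto const& idx : indexes) {
    if (idx->name() == idxName) {
      return idx;
    }
  }
  return nullptr;
}
```

Since names are unique per collection (auto-generated when not specified, per the changelog entry), a linear scan over the typically short index list is sufficient here.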
@@ -107,6 +107,11 @@ class PhysicalCollection {

   /// @brief Find index by iid
   std::shared_ptr<Index> lookupIndex(TRI_idx_iid_t) const;

+  /// @brief Find index by name
+  std::shared_ptr<Index> lookupIndex(std::string const&) const;
+
+  /// @brief get list of all indices
   std::vector<std::shared_ptr<Index>> getIndexes() const;

   void getIndexesVPack(velocypack::Builder&, unsigned flags,
@@ -190,6 +190,7 @@ LogicalCollection::LogicalCollection(TRI_vocbase_t& vocbase, VPackSlice const& i
   TRI_ASSERT(_physical != nullptr);
   // This has to be called AFTER _phyiscal and _logical are properly linked
   // together.
+
   prepareIndexes(info.get("indexes"));
 }

@@ -791,6 +792,10 @@ std::shared_ptr<Index> LogicalCollection::lookupIndex(TRI_idx_iid_t idxId) const
   return getPhysical()->lookupIndex(idxId);
 }

+std::shared_ptr<Index> LogicalCollection::lookupIndex(std::string const& idxName) const {
+  return getPhysical()->lookupIndex(idxName);
+}
+
 std::shared_ptr<Index> LogicalCollection::lookupIndex(VPackSlice const& info) const {
   if (!info.isObject()) {
     // Compatibility with old v8-vocindex.

@@ -853,7 +858,8 @@ void LogicalCollection::deferDropCollection(std::function<bool(LogicalCollection
 }

 /// @brief reads an element from the document collection
-Result LogicalCollection::read(transaction::Methods* trx, arangodb::velocypack::StringRef const& key,
+Result LogicalCollection::read(transaction::Methods* trx,
+                               arangodb::velocypack::StringRef const& key,
                                ManagedDocumentResult& result, bool lock) {
   TRI_IF_FAILURE("LogicalCollection::read") { return Result(TRI_ERROR_DEBUG); }
   return getPhysical()->read(trx, key, result, lock);
@@ -269,6 +269,9 @@ class LogicalCollection : public LogicalDataSource {
   /// @brief Find index by iid
   std::shared_ptr<Index> lookupIndex(TRI_idx_iid_t) const;

+  /// @brief Find index by name
+  std::shared_ptr<Index> lookupIndex(std::string const&) const;
+
   bool dropIndex(TRI_idx_iid_t iid);

   // SECTION: Index access (local only)
@@ -20,8 +20,8 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "Collections.h"
 #include "Basics/Common.h"
+#include "Collections.h"

 #include "Aql/Query.h"
 #include "Aql/QueryRegistry.h"
@@ -20,7 +20,6 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "Indexes.h"
 #include "Basics/Common.h"
 #include "Basics/ReadLocker.h"
 #include "Basics/StringUtils.h"

@@ -32,6 +31,7 @@
 #include "Cluster/ClusterMethods.h"
 #include "Cluster/ServerState.h"
 #include "GeneralServer/AuthenticationFeature.h"
+#include "Indexes.h"
 #include "Indexes/Index.h"
 #include "Indexes/IndexFactory.h"
 #include "Rest/HttpRequest.h"

@@ -62,20 +62,25 @@ using namespace arangodb::methods;
 Result Indexes::getIndex(LogicalCollection const* collection,
                          VPackSlice const& indexId, VPackBuilder& out) {
   // do some magic to parse the iid
-  std::string name;
-  VPackSlice id = indexId;
-  if (id.isObject() && id.hasKey(StaticStrings::IndexId)) {
-    id = id.get(StaticStrings::IndexId);
+  std::string id;  // will (eventually) be fully-qualified; "collection/identifier"
+  std::string name;  // will be just name or id (no "collection/")
+  VPackSlice idSlice = indexId;
+  if (idSlice.isObject() && idSlice.hasKey(StaticStrings::IndexId)) {
+    idSlice = idSlice.get(StaticStrings::IndexId);
   }
-  if (id.isString()) {
-    std::regex re = std::regex("^([a-zA-Z0-9\\-_]+)\\/([0-9]+)$", std::regex::ECMAScript);
-    if (std::regex_match(id.copyString(), re)) {
-      name = id.copyString();
+  if (idSlice.isString()) {
+    std::regex re = std::regex("^([a-zA-Z0-9\\-_]+)\\/([a-zA-Z0-9\\-_]+)$",
+                               std::regex::ECMAScript);
+    if (std::regex_match(idSlice.copyString(), re)) {
+      id = idSlice.copyString();
+      name = id.substr(id.find_first_of("/") + 1);
     } else {
-      name = collection->name() + "/" + id.copyString();
+      name = idSlice.copyString();
+      id = collection->name() + "/" + name;
     }
-  } else if (id.isInteger()) {
-    name = collection->name() + "/" + StringUtils::itoa(id.getUInt());
+  } else if (idSlice.isInteger()) {
+    name = StringUtils::itoa(idSlice.getUInt());
+    id = collection->name() + "/" + name;
   } else {
     return Result(TRI_ERROR_ARANGO_INDEX_NOT_FOUND);
   }

@@ -84,7 +89,8 @@ Result Indexes::getIndex(LogicalCollection const* collection,
   Result res = Indexes::getAll(collection, Index::makeFlags(), /*withHidden*/ true, tmp);
   if (res.ok()) {
     for (VPackSlice const& index : VPackArrayIterator(tmp.slice())) {
-      if (index.get(StaticStrings::IndexId).compareString(name) == 0) {
+      if (index.get(StaticStrings::IndexId).compareString(id) == 0 ||
+          index.get(StaticStrings::IndexName).compareString(name) == 0) {
         out.add(index);
         return Result();
       }

@@ -309,7 +315,8 @@ Result Indexes::ensureIndexCoordinator(arangodb::LogicalCollection const* collec
   TRI_ASSERT(collection != nullptr);
   auto& dbName = collection->vocbase().name();
   auto cid = std::to_string(collection->id());
-  auto cluster = application_features::ApplicationServer::getFeature<ClusterFeature>("Cluster");
+  auto cluster = application_features::ApplicationServer::getFeature<ClusterFeature>(
+      "Cluster");

   return ClusterInfo::instance()->ensureIndexCoordinator(  // create index
       dbName, cid, indexDef, create, resultBuilder, cluster->indexCreationTimeout()  // args

@@ -426,7 +433,8 @@ Result Indexes::ensureIndex(LogicalCollection* collection, VPackSlice const& inp
     std::string iid = tmp.slice().get(StaticStrings::IndexId).copyString();
     VPackBuilder b;
     b.openObject();
-    b.add(StaticStrings::IndexId, VPackValue(collection->name() + TRI_INDEX_HANDLE_SEPARATOR_CHR + iid));
+    b.add(StaticStrings::IndexId,
+          VPackValue(collection->name() + TRI_INDEX_HANDLE_SEPARATOR_CHR + iid));
     b.close();
     output = VPackCollection::merge(tmp.slice(), b.slice(), false);
     return res;
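The `Indexes::getIndex` hunk above relaxes the index-handle regex: the part after the slash may now be an index name (letters, digits, `-`, `_`), where it previously allowed only a numeric index id. A small self-contained sketch of that pattern follows; the helper function name is illustrative, not taken from the ArangoDB sources.

```cpp
#include <regex>
#include <string>

// Sketch of the relaxed fully-qualified index-handle check from this commit:
// "collection/identifier", where the identifier may be a numeric id or a name.
bool isFullyQualifiedHandle(std::string const& handle) {
  static std::regex const re("^([a-zA-Z0-9\\-_]+)\\/([a-zA-Z0-9\\-_]+)$",
                             std::regex::ECMAScript);
  return std::regex_match(handle, re);
}
```

With the earlier pattern (`[0-9]+` in the second group), a handle like `collection/byValue` would have fallen through to the else-branch and been prefixed again; the widened character class lets named handles match directly.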
@@ -20,8 +20,8 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "Upgrade.h"
 #include "Basics/Common.h"
+#include "Upgrade.h"

 #include "Agency/AgencyComm.h"
 #include "Basics/StringUtils.h"
@@ -20,7 +20,6 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "UpgradeTasks.h"
 #include "Agency/AgencyComm.h"
 #include "Basics/Common.h"
 #include "Basics/Exceptions.h"

@@ -31,6 +30,7 @@
 #include "Cluster/ClusterFeature.h"
 #include "Cluster/ClusterInfo.h"
 #include "Cluster/ServerState.h"
+#include "ClusterEngine/ClusterEngine.h"
 #include "GeneralServer/AuthenticationFeature.h"
 #include "Logger/Logger.h"
 #include "MMFiles/MMFilesEngine.h"

@@ -39,6 +39,7 @@
 #include "StorageEngine/EngineSelectorFeature.h"
 #include "StorageEngine/PhysicalCollection.h"
 #include "Transaction/StandaloneContext.h"
+#include "UpgradeTasks.h"
 #include "Utils/OperationOptions.h"
 #include "Utils/SingleCollectionTransaction.h"
 #include "VocBase/LogicalCollection.h"

@@ -162,7 +163,7 @@ arangodb::Result recreateGeoIndex(TRI_vocbase_t& vocbase,

 bool UpgradeTasks::upgradeGeoIndexes(TRI_vocbase_t& vocbase,
                                      arangodb::velocypack::Slice const& slice) {
-  if (EngineSelectorFeature::engineName() != "rocksdb") {
+  if (EngineSelectorFeature::engineName() != RocksDBEngine::EngineName) {
     LOG_TOPIC(DEBUG, Logger::STARTUP) << "No need to upgrade geo indexes!";
     return true;
   }

@@ -240,7 +241,8 @@ bool UpgradeTasks::addDefaultUserOther(TRI_vocbase_t& vocbase,
   VPackSlice extra = slice.get("extra");
   Result res = um->storeUser(false, user, passwd, active, VPackSlice::noneSlice());
   if (res.fail() && !res.is(TRI_ERROR_USER_DUPLICATE)) {
-    LOG_TOPIC(WARN, Logger::STARTUP) << "could not add database user " << user << ": " << res.errorMessage();
+    LOG_TOPIC(WARN, Logger::STARTUP) << "could not add database user " << user
+                                     << ": " << res.errorMessage();
   } else if (extra.isObject() && !extra.isEmptyObject()) {
     um->updateUser(user, [&](auth::User& user) {
       user.setUserData(VPackBuilder(extra));

@@ -254,7 +256,9 @@ bool UpgradeTasks::addDefaultUserOther(TRI_vocbase_t& vocbase,
       return TRI_ERROR_NO_ERROR;
     });
     if (res.fail()) {
-      LOG_TOPIC(WARN, Logger::STARTUP) << "could not set permissions for new user " << user << ": " << res.errorMessage();
+      LOG_TOPIC(WARN, Logger::STARTUP)
+          << "could not set permissions for new user " << user << ": "
+          << res.errorMessage();
     }
   }
   return true;

@@ -336,8 +340,9 @@ bool UpgradeTasks::renameReplicationApplierStateFiles(TRI_vocbase_t& vocbase,
   StorageEngine* engine = EngineSelectorFeature::ENGINE;
   std::string const path = engine->databasePath(&vocbase);

-  std::string const source = arangodb::basics::FileUtils::buildFilename(
-      path, "REPLICATION-APPLIER-STATE");
+  std::string const source =
+      arangodb::basics::FileUtils::buildFilename(path,
+                                                 "REPLICATION-APPLIER-STATE");

   if (!basics::FileUtils::isRegularFile(source)) {
     // source file does not exist

@@ -351,11 +356,14 @@ bool UpgradeTasks::renameReplicationApplierStateFiles(TRI_vocbase_t& vocbase,
   std::string const dest = arangodb::basics::FileUtils::buildFilename(
       path, "REPLICATION-APPLIER-STATE-" + std::to_string(vocbase.id()));

-  LOG_TOPIC(TRACE, Logger::STARTUP) << "copying replication applier file '" << source << "' to '" << dest << "'";
+  LOG_TOPIC(TRACE, Logger::STARTUP) << "copying replication applier file '"
+                                    << source << "' to '" << dest << "'";

   std::string error;
   if (!TRI_CopyFile(source.c_str(), dest.c_str(), error)) {
-    LOG_TOPIC(WARN, Logger::STARTUP) << "could not copy replication applier file '" << source << "' to '" << dest << "'";
+    LOG_TOPIC(WARN, Logger::STARTUP)
+        << "could not copy replication applier file '" << source << "' to '"
+        << dest << "'";
     result = false;
   }
   return Result();

@@ -48,7 +48,8 @@ struct UpgradeTasks {
   static bool createAppsIndex(TRI_vocbase_t& vocbase, velocypack::Slice const& slice);
   static bool setupAppBundles(TRI_vocbase_t& vocbase, velocypack::Slice const& slice);
   static bool persistLocalDocumentIds(TRI_vocbase_t& vocbase, velocypack::Slice const& slice);
-  static bool renameReplicationApplierStateFiles(TRI_vocbase_t& vocbase, velocypack::Slice const& slice);
+  static bool renameReplicationApplierStateFiles(TRI_vocbase_t& vocbase,
+                                                 velocypack::Slice const& slice);
 };

 }  // namespace methods
File diff suppressed because one or more lines are too long
Binary file not shown
@@ -1080,6 +1080,7 @@ if (list.length > 0) {
 <th class="collectionInfoTh">Deduplicate</th>
 <th class="collectionInfoTh">Selectivity Est.</th>
 <th class="collectionInfoTh">Fields</th>
+<th class="collectionInfoTh">Name</th>
 <th class="collectionInfoTh">Action</th>
 </tr>
 </thead>
@@ -1094,6 +1095,7 @@ if (list.length > 0) {
 <td></td>
 <td></td>
 <td></td>
+<td></td>
 <td><i class="fa fa-plus-circle" id="addIndex"></i></td>
 </tr>
 </tfoot>
@@ -1124,6 +1126,17 @@ if (list.length > 0) {
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newGeoName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Geo JSON:</th>
 <th>
@@ -1165,6 +1178,17 @@ if (list.length > 0) {
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newPersistentName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Unique:</th>
 <th>
@@ -1232,6 +1256,17 @@ if (list.length > 0) {
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newHashName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Unique:</th>
 <th>
@@ -1299,6 +1334,17 @@ if (list.length > 0) {
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newFulltextName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Min. length:</th>
 <th><input type="text" id="newFulltextMinLength" value=""/></th>
@@ -1339,6 +1385,17 @@ if (list.length > 0) {
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newSkiplistName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Unique:</th>
 <th>
@@ -1406,6 +1463,17 @@ if (list.length > 0) {
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newTtlName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Documents expire after (s):</th>
 <th><input type="text" id="newTtlExpireAfter" value=""/></th>
@@ -3650,4 +3718,4 @@ var cutByResolution = function (str) {
 </div>
 </div></script><script id="warningList.ejs" type="text/template"> <% if (warnings.length > 0) { %> <div>
 <ul> <% console.log(warnings); _.each(warnings, function(w) { console.log(w);%> <li><b><%=w.code%></b>: <%=w.message%></li> <% }); %> </ul>
-</div> <% } %> </script></head><body><nav class="navbar" style="display: none"><div class="primary"><div class="navlogo"><a class="logo big" href="#"><img id="ArangoDBLogo" class="arangodbLogo" src="img/arangodb-edition-optimized.svg"></a><a class="logo small" href="#"><img class="arangodbLogo" src="img/arangodb_logo_small.png"></a><a class="version"><span id="currentVersion"></span></a></div><div class="statmenu" id="statisticBar"></div><div class="navmenu" id="navigationBar"></div></div></nav><div id="modalPlaceholder"></div><div class="bodyWrapper" style="display: none"><div class="centralRow"><div id="navbar2" class="navbarWrapper secondary"><div class="subnavmenu" id="subNavigationBar"></div></div><div class="resizecontainer contentWrapper"><div id="loadingScreen" class="loadingScreen" style="display: none"><i class="fa fa-circle-o-notch fa-spin fa-3x fa-fw margin-bottom"></i> <span class="sr-only">Loading...</span></div><div id="content" class="centralContent"></div><footer class="footer"><div id="footerBar"></div></footer></div></div></div><div id="progressPlaceholder" style="display:none"></div><div id="spotlightPlaceholder" style="display:none"></div><div id="graphSettingsContent" style="display: none"></div><div id="filterSelectDiv" style="display:none"></div><div id="offlinePlaceholder" style="display:none"><div class="offline-div"><div class="pure-u"><div class="pure-u-1-4"></div><div class="pure-u-1-2 offline-window"><div class="offline-header"><h3>You have been disconnected from the server</h3></div><div class="offline-body"><p>The connection to the server has been lost. The server may be under heavy load.</p><p>Trying to reconnect in <span id="offlineSeconds">10</span> seconds.</p><p class="animation_state"><span><button class="button-success">Reconnect now</button></span></p></div></div><div class="pure-u-1-4"></div></div></div></div><div class="arangoFrame" style=""><div class="outerDiv"><div class="innerDiv"></div></div></div><script src="libs.js?version=1551722803883"></script><script src="app.js?version=1551722803883"></script></body></html>
+</div> <% } %> </script></head><body><nav class="navbar" style="display: none"><div class="primary"><div class="navlogo"><a class="logo big" href="#"><img id="ArangoDBLogo" class="arangodbLogo" src="img/arangodb-edition-optimized.svg"></a><a class="logo small" href="#"><img class="arangodbLogo" src="img/arangodb_logo_small.png"></a><a class="version"><span id="currentVersion"></span></a></div><div class="statmenu" id="statisticBar"></div><div class="navmenu" id="navigationBar"></div></div></nav><div id="modalPlaceholder"></div><div class="bodyWrapper" style="display: none"><div class="centralRow"><div id="navbar2" class="navbarWrapper secondary"><div class="subnavmenu" id="subNavigationBar"></div></div><div class="resizecontainer contentWrapper"><div id="loadingScreen" class="loadingScreen" style="display: none"><i class="fa fa-circle-o-notch fa-spin fa-3x fa-fw margin-bottom"></i> <span class="sr-only">Loading...</span></div><div id="content" class="centralContent"></div><footer class="footer"><div id="footerBar"></div></footer></div></div></div><div id="progressPlaceholder" style="display:none"></div><div id="spotlightPlaceholder" style="display:none"></div><div id="graphSettingsContent" style="display: none"></div><div id="filterSelectDiv" style="display:none"></div><div id="offlinePlaceholder" style="display:none"><div class="offline-div"><div class="pure-u"><div class="pure-u-1-4"></div><div class="pure-u-1-2 offline-window"><div class="offline-header"><h3>You have been disconnected from the server</h3></div><div class="offline-body"><p>The connection to the server has been lost. The server may be under heavy load.</p><p>Trying to reconnect in <span id="offlineSeconds">10</span> seconds.</p><p class="animation_state"><span><button class="button-success">Reconnect now</button></span></p></div></div><div class="pure-u-1-4"></div></div></div></div><div class="arangoFrame" style=""><div class="outerDiv"><div class="innerDiv"></div></div></div><script src="libs.js?version=1552058798750"></script><script src="app.js?version=1552058798750"></script></body></html>
Binary file not shown
@@ -12,6 +12,7 @@
 <th class="collectionInfoTh">Deduplicate</th>
 <th class="collectionInfoTh">Selectivity Est.</th>
 <th class="collectionInfoTh">Fields</th>
+<th class="collectionInfoTh">Name</th>
 <th class="collectionInfoTh">Action</th>
 </tr>
 </thead>
@@ -26,6 +27,7 @@
 <td></td>
 <td></td>
 <td></td>
+<td></td>
 <td><i class="fa fa-plus-circle" id="addIndex"></i></td>
 </tr>
 </tfoot>
@@ -75,6 +77,17 @@
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newGeoName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Geo JSON:</th>
 <th>
@@ -116,6 +129,17 @@
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newPersistentName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Unique:</th>
 <th>
@@ -183,6 +207,17 @@
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newHashName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Unique:</th>
 <th>
@@ -250,6 +285,17 @@
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newFulltextName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Min. length:</th>
 <th><input type="text" id="newFulltextMinLength" value=""/></th>
@@ -290,6 +336,17 @@
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newSkiplistName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Unique:</th>
 <th>
@@ -357,6 +414,17 @@
 </div>
 </th>
 </tr>
+<tr>
+<th class="collectionTh">Name:</th>
+<th><input type="text" id="newTtlName" value=""/></th>
+<th class="tooltipInfoTh">
+<div class="tooltipDiv">
+<a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+<span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+</a>
+</div>
+</th>
+</tr>
 <tr>
 <th class="collectionTh">Documents expire after (s):</th>
 <th><input type="text" id="newTtlExpireAfter" value=""/></th>
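The template hunks above add a per-type "Name" input, each following the same id convention (`newGeoName`, `newPersistentName`, `newHashName`, `newFulltextName`, `newSkiplistName`, `newTtlName`). A tiny sketch of that convention; the helper function is hypothetical, since the real templates hard-code each id:

```javascript
// Hypothetical helper illustrating the input-id convention used by the
// templates above: '#new' + <IndexType> + 'Name'.
function nameInputSelector(indexType) {
  return '#new' + indexType + 'Name';
}

console.log(nameInputSelector('Geo'));      // '#newGeoName'
console.log(nameInputSelector('Skiplist')); // '#newSkiplistName'
```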
|
@ -129,27 +129,34 @@
|
||||||
var sparse;
|
var sparse;
|
||||||
var deduplicate;
|
var deduplicate;
|
||||||
var background;
|
var background;
|
||||||
|
var name;
|
||||||
|
|
||||||
switch (indexType) {
|
switch (indexType) {
|
||||||
case 'Ttl':
|
case 'Ttl':
|
||||||
fields = $('#newTtlFields').val();
|
fields = $('#newTtlFields').val();
|
||||||
var expireAfter = parseInt($('#newTtlExpireAfter').val(), 10) || 0;
|
var expireAfter = parseInt($('#newTtlExpireAfter').val(), 10) || 0;
|
||||||
|
background = self.checkboxToValue('#newTtlBackground');
|
||||||
|
name = $('#newTtlName').val();
|
||||||
postParameter = {
|
postParameter = {
|
||||||
type: 'ttl',
|
type: 'ttl',
|
||||||
fields: self.stringToArray(fields),
|
fields: self.stringToArray(fields),
|
||||||
expireAfter: expireAfter
|
expireAfter: expireAfter,
|
||||||
|
inBackground: background,
|
||||||
|
name: name
|
||||||
};
|
};
|
||||||
background = self.checkboxToValue('#newTtlBackground');
|
|
||||||
break;
|
break;
|
||||||
case 'Geo':
|
case 'Geo':
|
||||||
// HANDLE ARRAY building
|
// HANDLE ARRAY building
|
||||||
fields = $('#newGeoFields').val();
|
fields = $('#newGeoFields').val();
|
||||||
|
background = self.checkboxToValue('#newGeoBackground');
|
||||||
var geoJson = self.checkboxToValue('#newGeoJson');
|
var geoJson = self.checkboxToValue('#newGeoJson');
|
||||||
|
name = $('#newGeoName').val();
|
||||||
postParameter = {
|
postParameter = {
|
||||||
type: 'geo',
|
type: 'geo',
|
||||||
fields: self.stringToArray(fields),
|
fields: self.stringToArray(fields),
|
||||||
geoJson: geoJson,
|
geoJson: geoJson,
|
||||||
inBackground: background
|
inBackground: background,
|
||||||
|
name: name
|
||||||
};
|
};
|
||||||
break;
|
break;
|
||||||
case 'Persistent':
|
case 'Persistent':
|
||||||
|
@ -158,13 +165,15 @@
|
||||||
sparse = self.checkboxToValue('#newPersistentSparse');
|
sparse = self.checkboxToValue('#newPersistentSparse');
|
||||||
deduplicate = self.checkboxToValue('#newPersistentDeduplicate');
|
deduplicate = self.checkboxToValue('#newPersistentDeduplicate');
|
||||||
background = self.checkboxToValue('#newPersistentBackground');
|
background = self.checkboxToValue('#newPersistentBackground');
|
||||||
|
name = $('#newPersistentName').val();
|
||||||
postParameter = {
|
postParameter = {
|
||||||
type: 'persistent',
|
type: 'persistent',
|
||||||
fields: self.stringToArray(fields),
|
fields: self.stringToArray(fields),
|
||||||
unique: unique,
|
unique: unique,
|
||||||
sparse: sparse,
|
sparse: sparse,
|
||||||
deduplicate: deduplicate,
|
deduplicate: deduplicate,
|
||||||
inBackground: background
|
inBackground: background,
|
||||||
|
name: name
|
||||||
};
|
};
|
||||||
break;
|
break;
|
||||||
case 'Hash':
|
case 'Hash':
|
||||||
|
@ -173,24 +182,28 @@
|
||||||
sparse = self.checkboxToValue('#newHashSparse');
|
sparse = self.checkboxToValue('#newHashSparse');
|
||||||
deduplicate = self.checkboxToValue('#newHashDeduplicate');
|
deduplicate = self.checkboxToValue('#newHashDeduplicate');
|
||||||
background = self.checkboxToValue('#newHashBackground');
|
background = self.checkboxToValue('#newHashBackground');
|
||||||
|
name = $('#newHashName').val();
|
||||||
postParameter = {
|
postParameter = {
|
||||||
type: 'hash',
|
type: 'hash',
|
||||||
fields: self.stringToArray(fields),
|
fields: self.stringToArray(fields),
|
||||||
unique: unique,
|
unique: unique,
|
||||||
sparse: sparse,
|
sparse: sparse,
|
||||||
deduplicate: deduplicate,
|
deduplicate: deduplicate,
|
||||||
inBackground: background
|
inBackground: background,
|
||||||
|
name: name
|
||||||
};
|
};
|
||||||
break;
|
break;
|
||||||
case 'Fulltext':
|
case 'Fulltext':
|
||||||
fields = $('#newFulltextFields').val();
|
fields = $('#newFulltextFields').val();
|
||||||
var minLength = parseInt($('#newFulltextMinLength').val(), 10) || 0;
|
var minLength = parseInt($('#newFulltextMinLength').val(), 10) || 0;
|
||||||
background = self.checkboxToValue('#newFulltextBackground');
|
background = self.checkboxToValue('#newFulltextBackground');
|
||||||
|
name = $('#newFulltextName').val();
|
||||||
postParameter = {
|
postParameter = {
|
||||||
type: 'fulltext',
|
type: 'fulltext',
|
||||||
fields: self.stringToArray(fields),
|
fields: self.stringToArray(fields),
|
||||||
minLength: minLength,
|
minLength: minLength,
|
||||||
inBackground: background
|
inBackground: background,
|
||||||
|
name: name
|
||||||
};
|
};
|
||||||
break;
|
break;
|
||||||
case 'Skiplist':
|
case 'Skiplist':
|
||||||
|
@ -199,13 +212,15 @@
|
||||||
sparse = self.checkboxToValue('#newSkiplistSparse');
|
sparse = self.checkboxToValue('#newSkiplistSparse');
|
||||||
deduplicate = self.checkboxToValue('#newSkiplistDeduplicate');
|
deduplicate = self.checkboxToValue('#newSkiplistDeduplicate');
|
||||||
background = self.checkboxToValue('#newSkiplistBackground');
|
background = self.checkboxToValue('#newSkiplistBackground');
|
||||||
|
name = $('#newSkiplistName').val();
|
||||||
postParameter = {
|
postParameter = {
|
||||||
type: 'skiplist',
|
type: 'skiplist',
|
||||||
fields: self.stringToArray(fields),
|
fields: self.stringToArray(fields),
|
||||||
unique: unique,
|
unique: unique,
|
||||||
sparse: sparse,
|
sparse: sparse,
|
||||||
deduplicate: deduplicate,
|
deduplicate: deduplicate,
|
||||||
inBackground: background
|
inBackground: background,
|
||||||
|
name: name
|
||||||
};
|
};
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
|
@ -430,6 +445,7 @@
|
||||||
'<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(deduplicate) + '</th>' +
|
'<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(deduplicate) + '</th>' +
|
||||||
'<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(selectivity) + '</th>' +
|
'<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(selectivity) + '</th>' +
|
||||||
'<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(fieldString) + '</th>' +
|
'<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(fieldString) + '</th>' +
|
||||||
|
'<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(v.name) + '</th>' +
|
||||||
'<th class=' + JSON.stringify(cssClass) + '>' + actionString + '</th>' +
|
'<th class=' + JSON.stringify(cssClass) + '>' + actionString + '</th>' +
|
||||||
'</tr>'
|
'</tr>'
|
||||||
);
|
);
|
||||||
|
|
|
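Each `case` above reads the form values into a single `postParameter` object, which after this change also carries `name` and `inBackground` for every index type. A minimal, framework-free sketch of that payload assembly; `buildIndexPayload` and its plain-object `form` argument are illustrative stand-ins, since the real view reads jQuery inputs instead:

```javascript
// Illustrative stand-in for the jQuery-driven code above: builds the body
// that is posted to create an index. 'form' is a plain object here.
function buildIndexPayload(indexType, form) {
  var payload = {
    type: indexType.toLowerCase(),
    // the view turns the comma-separated field list into an array
    fields: form.fields.split(',').map(function (f) { return f.trim(); }),
    inBackground: !!form.background
  };
  if (form.name) {
    // a blank name is omitted so the server auto-generates one
    payload.name = form.name;
  }
  return payload;
}

var p = buildIndexPayload('Hash', { fields: 'a, b', background: true, name: 'byValue' });
console.log(p.type, p.fields.join('+'), p.name); // hash a+b byValue
```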
@@ -630,6 +630,8 @@ ArangoCollection.prototype.getIndexes = ArangoCollection.prototype.indexes = fun
 ArangoCollection.prototype.index = function (id) {
   if (id.hasOwnProperty('id')) {
     id = id.id;
+  } else if (id.hasOwnProperty('name')) {
+    id = id.name;
   }
 
   var requestResult = this._database._connection.GET(this._database._indexurl(id, this.name()));
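The hunk above lets an index "handle" object carry either an `id` or, new with this commit, a `name`. The same normalization in isolation; `normalizeIndexHandle` is a sketch, not a real ArangoDB function:

```javascript
// Sketch of the handle normalization above: object handles may carry an
// 'id' (e.g. { id: "demo/123" }) or a 'name' (e.g. { name: "byValue" }).
function normalizeIndexHandle(handle) {
  if (handle !== null && typeof handle === 'object') {
    if (handle.hasOwnProperty('id')) {
      return handle.id;
    } else if (handle.hasOwnProperty('name')) {
      return handle.name;
    }
  }
  return handle; // strings and numbers pass through unchanged
}

console.log(normalizeIndexHandle({ id: 'demo/123' }));  // demo/123
console.log(normalizeIndexHandle({ name: 'byValue' })); // byValue
```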
@@ -138,16 +138,17 @@ ArangoCollection.prototype.index = function (id) {
 
   if (typeof id === 'object' && id.hasOwnProperty('id')) {
     id = id.id;
+  } else if (typeof id === 'object' && id.hasOwnProperty('name')) {
+    id = id.name;
   }
 
   if (typeof id === 'string') {
     var pa = ArangoDatabase.indexRegex.exec(id);
 
-    if (pa === null) {
+    if (pa === null && !isNaN(Number(id)) && Number(id) === Math.floor(Number(id))) {
       id = this.name() + '/' + id;
     }
-  }
-  else if (typeof id === 'number') {
+  } else if (typeof id === 'number') {
     // stringify the id
     id = this.name() + '/' + id;
   }
@@ -155,7 +156,7 @@ ArangoCollection.prototype.index = function (id) {
   for (i = 0; i < indexes.length; ++i) {
     var index = indexes[i];
 
-    if (index.id === id) {
+    if (index.id === id || index.name === id) {
       return index;
     }
   }
@@ -217,7 +217,7 @@ ArangoDatabase.prototype._truncate = function (name) {
 // / @brief was docuBlock IndexVerify
 // //////////////////////////////////////////////////////////////////////////////
 
-ArangoDatabase.indexRegex = /^([a-zA-Z0-9\-_]+)\/([0-9]+)$/;
+ArangoDatabase.indexRegex = /^([a-zA-Z0-9\-_]+)\/([a-zA-Z0-9\-_]+)$/;
 
 // //////////////////////////////////////////////////////////////////////////////
 // / @brief was docuBlock IndexHandle
@@ -239,6 +239,7 @@ ArangoDatabase.prototype._index = function (id) {
   }
 
   var col = this._collection(pa[1]);
+  var name = pa[2];
 
   if (col === null) {
     err = new ArangoError();
@@ -253,7 +254,7 @@ ArangoDatabase.prototype._index = function (id) {
   for (i = 0; i < indexes.length; ++i) {
     var index = indexes[i];
 
-    if (index.id === id) {
+    if (index.id === id || index.name === name) {
       return index;
     }
   }
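The regex change above is the core of name-based lookup: `indexRegex` previously required a numeric index id after the collection name, and now accepts the same identifier characters as the collection part, so `demo/byValue` parses the same way as `demo/2734752388`. This is directly checkable with the two patterns from the diff:

```javascript
// Old pattern: second capture group restricted to digits (numeric ids only).
var oldIndexRegex = /^([a-zA-Z0-9\-_]+)\/([0-9]+)$/;
// New pattern: second group widened so index names match as well.
var newIndexRegex = /^([a-zA-Z0-9\-_]+)\/([a-zA-Z0-9\-_]+)$/;

console.log(oldIndexRegex.test('demo/2734752388')); // true
console.log(oldIndexRegex.test('demo/byValue'));    // false
console.log(newIndexRegex.test('demo/byValue'));    // true

// _index() above then keeps the second capture as 'name' and matches each
// index against either its id or that name.
var pa = newIndexRegex.exec('demo/byValue');
console.log(pa[1], pa[2]); // demo byValue
```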
@@ -93,12 +93,20 @@ std::string const StaticStrings::DataSourceType("type");
 std::string const StaticStrings::IndexExpireAfter("expireAfter");
 std::string const StaticStrings::IndexFields("fields");
 std::string const StaticStrings::IndexId("id");
+std::string const StaticStrings::IndexName("name");
 std::string const StaticStrings::IndexSparse("sparse");
 std::string const StaticStrings::IndexType("type");
 std::string const StaticStrings::IndexUnique("unique");
 std::string const StaticStrings::IndexIsBuilding("isBuilding");
 std::string const StaticStrings::IndexInBackground("inBackground");
 
+// static index names
+std::string const StaticStrings::IndexNameEdge("edge");
+std::string const StaticStrings::IndexNameEdgeFrom("edge_from");
+std::string const StaticStrings::IndexNameEdgeTo("edge_to");
+std::string const StaticStrings::IndexNameInaccessible("inaccessible");
+std::string const StaticStrings::IndexNamePrimary("primary");
+
 // HTTP headers
 std::string const StaticStrings::Accept("accept");
 std::string const StaticStrings::AcceptEncoding("accept-encoding");
@@ -92,12 +92,20 @@ class StaticStrings {
   static std::string const IndexExpireAfter;  // ttl index expire value
   static std::string const IndexFields;       // index fields
   static std::string const IndexId;           // index id
+  static std::string const IndexName;         // index name
   static std::string const IndexSparse;       // index sparsity marker
   static std::string const IndexType;         // index type
   static std::string const IndexUnique;       // index uniqueness marker
   static std::string const IndexIsBuilding;   // index build in-process
   static std::string const IndexInBackground; // index in background
 
+  // static index names
+  static std::string const IndexNameEdge;
+  static std::string const IndexNameEdgeFrom;
+  static std::string const IndexNameEdgeTo;
+  static std::string const IndexNameInaccessible;
+  static std::string const IndexNamePrimary;
+
   // HTTP headers
   static std::string const Accept;
   static std::string const AcceptEncoding;
File diff suppressed because it is too large
@ -40,7 +40,9 @@ var db = require("@arangodb").db;
|
||||||
function ensureIndexSuite() {
|
function ensureIndexSuite() {
|
||||||
'use strict';
|
'use strict';
|
||||||
var cn = "UnitTestsCollectionIdx";
|
var cn = "UnitTestsCollectionIdx";
|
||||||
|
var ecn = "UnitTestsEdgeCollectionIdx";
|
||||||
var collection = null;
|
var collection = null;
|
||||||
|
var edgeCollection = null;
|
||||||
|
|
||||||
return {
|
return {
|
||||||
|
|
||||||
|
@ -51,6 +53,7 @@ function ensureIndexSuite() {
|
||||||
setUp : function () {
|
setUp : function () {
|
||||||
internal.db._drop(cn);
|
internal.db._drop(cn);
|
||||||
collection = internal.db._create(cn);
|
collection = internal.db._create(cn);
|
||||||
|
edgeCollection = internal.db._createEdgeCollection(ecn);
|
||||||
},
|
},
|
||||||
|
|
||||||
////////////////////////////////////////////////////////////////////////////////
|
////////////////////////////////////////////////////////////////////////////////
|
||||||
|
@ -61,10 +64,12 @@ function ensureIndexSuite() {
|
||||||
// we need try...catch here because at least one test drops the collection itself!
|
// we need try...catch here because at least one test drops the collection itself!
|
||||||
try {
|
try {
|
||||||
collection.drop();
|
collection.drop();
|
||||||
|
edgeCollection.drop();
|
||||||
}
|
}
|
||||||
catch (err) {
|
catch (err) {
|
||||||
}
|
}
|
||||||
collection = null;
|
collection = null;
|
||||||
|
edgeCollection = null;
|
||||||
},
|
},
|
||||||
|
|
||||||
////////////////////////////////////////////////////////////////////////////////
|
////////////////////////////////////////////////////////////////////////////////
|
||||||
|
@@ -87,10 +92,6 @@ function ensureIndexSuite() {
       assertEqual(collection.name() + "/" + id, res.id);
     },

-////////////////////////////////////////////////////////////////////////////////
-/// @brief test: ids
-////////////////////////////////////////////////////////////////////////////////
-
     testEnsureId2 : function () {
       var id = "2734752388";
       var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], id: id });
@@ -107,6 +108,132 @@ function ensureIndexSuite() {
       assertEqual(collection.name() + "/" + id, res.id);
     },

+    testEnsureId3 : function () {
+      var id = "2734752388";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], id: id });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+
+      // expect duplicate id with different definition to fail and error out
+      try {
+        collection.ensureIndex({ type: "hash", fields: [ "a", "c" ], id: id });
+        fail();
+      } catch (err) {
+        assertEqual(errors.ERROR_ARANGO_DUPLICATE_IDENTIFIER.code, err.errorNum);
+      }
+    },
+
+    testEnsureId4 : function () {
+      var id = "2734752388";
+      var name = "name";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name, id: id });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+      assertEqual(name, idx.name);
+
+      // expect duplicate id with same definition to return old index
+      idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], id: id });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+      assertEqual(name, idx.name);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: names
+////////////////////////////////////////////////////////////////////////////////
+
+    testEnsureNamePrimary : function () {
+      var res = collection.getIndexes()[0];
+
+      assertEqual("primary", res.type);
+      assertEqual("primary", res.name);
+    },
+
+    testEnsureNameEdge : function () {
+      var res = edgeCollection.getIndexes()[0];
+
+      assertEqual("primary", res.type);
+      assertEqual("primary", res.name);
+
+      res = edgeCollection.getIndexes()[1];
+
+      assertEqual("edge", res.type);
+      assertEqual("edge", res.name);
+    },
+
+    testEnsureName1 : function () {
+      var name = "byValue";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(name, idx.name);
+
+      var res = collection.getIndexes()[collection.getIndexes().length - 1];
+
+      assertEqual("skiplist", res.type);
+      assertFalse(res.unique);
+      assertEqual([ "b", "d" ], res.fields);
+      assertEqual(name, res.name);
+    },
+
+    testEnsureName2 : function () {
+      var name = "byValue";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(name, idx.name);
+
+      // expect duplicate name to fail and error out
+      try {
+        collection.ensureIndex({ type: "hash", fields: [ "a", "c" ], name: name });
+        fail();
+      } catch (err) {
+        assertEqual(errors.ERROR_ARANGO_DUPLICATE_IDENTIFIER.code, err.errorNum);
+      }
+    },
+
+    testEnsureName3 : function () {
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ]});
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual("idx_", idx.name.substr(0, 4));
+
+      var res = collection.getIndexes()[collection.getIndexes().length - 1];
+
+      assertEqual("skiplist", res.type);
+      assertFalse(res.unique);
+      assertEqual([ "b", "d" ], res.fields);
+      assertEqual("idx_", res.name.substr(0, 4));
+    },
+
+    testEnsureName4 : function () {
+      var id = "2734752388";
+      var name = "old";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name, id: id });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+      assertEqual(name, idx.name);
+
+      // expect duplicate name with same definition to return old index
+      idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+      assertEqual(name, idx.name);
+    },
+
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief test: ensure invalid type
 ////////////////////////////////////////////////////////////////////////////////
@@ -1101,4 +1228,3 @@ jsunity.run(ensureIndexSuite);
 jsunity.run(ensureIndexEdgesSuite);

 return jsunity.done();
-
@@ -122,6 +122,24 @@ function indexSuite() {
       assertEqual(id.id, idx.id);
     },

+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get index by name
+////////////////////////////////////////////////////////////////////////////////
+
+    testIndexByName : function () {
+      var id = collection.ensureGeoIndex("a");
+
+      var idx = collection.index(id.name);
+      assertEqual(id.id, idx.id);
+      assertEqual(id.name, idx.name);
+
+      var fqn = `${collection.name()}/${id.name}`;
+      idx = internal.db._index(fqn);
+      assertEqual(id.id, idx.id);
+      assertEqual(id.name, idx.name);
+    },
+
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief drop index
 ////////////////////////////////////////////////////////////////////////////////
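The behavior these tests exercise can be summarized outside of ArangoDB: an index gets an auto-generated `idx_`-prefixed name when none is supplied, a duplicate name with a different definition is rejected, and ensuring an identical definition returns the existing index. The sketch below is a minimal stand-alone model of those rules; `IndexRegistry` and its auto-naming scheme are hypothetical illustrations, not ArangoDB source.

```javascript
// Hypothetical in-memory model (NOT ArangoDB source) of the named-index rules:
// - auto-generate an "idx_<n>" name when none is given
// - reject a duplicate name that comes with a different definition
// - return the existing index when the definition (type + fields) matches
class IndexRegistry {
  constructor() {
    this.indexes = [];
    this.counter = 0;
  }

  ensureIndex(def) {
    // Same definition returns the already-existing index.
    const existing = this.indexes.find(
      (i) => i.type === def.type &&
             JSON.stringify(i.fields) === JSON.stringify(def.fields)
    );
    if (existing) {
      return existing;
    }
    const name = def.name !== undefined ? def.name : `idx_${++this.counter}`;
    if (this.indexes.some((i) => i.name === name)) {
      // mirrors ERROR_ARANGO_DUPLICATE_IDENTIFIER in the tests above
      throw new Error("duplicate identifier");
    }
    const idx = { type: def.type, fields: def.fields, name: name };
    this.indexes.push(idx);
    return idx;
  }

  // look up an index by its name
  index(name) {
    return this.indexes.find((i) => i.name === name);
  }
}
```

Under these assumptions, `ensureIndex({ type: "hash", fields: ["a"] })` yields a name starting with `idx_`, while re-ensuring an existing definition hands back the same object instead of creating a duplicate.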