mirror of https://gitee.com/bigwinds/arangodb
Named indices (#8370)
This commit is contained in:
parent
cdb4b46554
commit
413e90508f
CHANGELOG

@ -1,6 +1,10 @@
devel
-----

* added "name" property for indices

  If a name is not specified on index creation, one will be auto-generated.

* Under normal circumstances there should be no need to connect to a
  database server in a cluster with one of the client tools, and it is
  likely that any user operations carried out there with one of the client

@ -9,7 +13,7 @@ devel
  The client tools arangosh, arangodump and arangorestore will now emit
  a warning when connecting with them to a database server node in a cluster.

* fix compaction behavior of followers

* added "random" masking to mask any data type, added wildcard masking

@ -81,20 +85,20 @@ devel

* `--query.registry-ttl` is now honored in single-server mode, and cursor TTLs
  are now honored on DBServers in cluster mode

* add "TTL" index type, for optional auto-expiration of documents

* disable selection of index types "persistent" and "skiplist" in the web
  interface when using the RocksDB engine. The index types "hash", "skiplist"
  and "persistent" are just aliases of each other with the RocksDB engine,
  so there is no need to offer all of them.

* fixed JS AQL query objects with empty query strings not being recognized
  as AQL queries

* update V8 to 7.1.302.28

  New V8 behavior introduced herein:

  - ES2016 changed the default timezone of date strings to be conditional on
    whether a time part is included. The semantics were a compromise approach

@ -102,7 +106,7 @@ devel
    shipping ES5.1 default timezone semantics. This patch implements the
    new semantics, following ChakraCore and SpiderMonkey (though JSC
    implements V8's previous semantics).

* fixed JS AQL query objects with empty query strings not being recognized as AQL queries

* report run-time openssl version (for dynamically linked executables)

@ -121,7 +125,7 @@ devel

* upgraded to OpenSSL 1.1.0j

* added configurable masking of dumped data via `arangodump` tool to obfuscate
  exported sensitive data

* fixed arangoimp script for MacOSX CLI Bundle

@ -140,7 +144,7 @@ devel
* fix issue #7903: Regression on ISO8601 string compatibility in AQL

  millisecond parts of AQL date values were limited to up to 3 digits.
  Now the length of the millisecond part is unrestricted, but the
  millisecond precision is still limited to up to 3 digits.

* the RocksDB primary index can now be used by the optimizer to optimize queries

@ -150,7 +154,7 @@ devel
  sorted when sorting documents by their `_key` values.

  Previous versions of ArangoDB tried to interpret `_key` values as numeric values if
  possible and sorted by these. That previous sort strategy never used an index and
  could have caused unnecessary overhead. The new version will now use an index for
  sorting for the RocksDB engine, but may change the order in which documents are
  shown in the web UI (e.g. now a `_key` value of "10" will be shown before a `_key`

@ -227,7 +227,8 @@ Fetches information about the index with the given _indexHandle_ and returns it.

The handle of the index to look up. This can either be a fully-qualified
identifier or the collection-specific key of the index. If the value is an
object, its _id_ property will be used instead. Alternatively, the index
may be looked up by name.

**Examples**

@ -243,6 +244,12 @@ assert.equal(result.id, index.id);

const result = await collection.index(index.id.split("/")[1]);
assert.equal(result.id, index.id);

// -- or --

const result = await collection.index(index.name);
assert.equal(result.id, index.id);
assert.equal(result.name, index.name);
// result contains the properties of the index
```

@ -7,8 +7,8 @@ Learn how to use different indexes efficiently by going through the
Index Identifiers and Handles
-----------------------------

An *index handle* uniquely identifies an index in the database. It is a string and
consists of the collection name and an *index identifier* separated by a `/`. The
index identifier part is a numeric value that is auto-generated by ArangoDB.

A specific index of a collection can be accessed using its *index handle* or

@ -35,6 +35,15 @@ Because the index handle is unique within the database, you can leave out the
db._index("demo/362549736");
```

An index may also be looked up by its name. Since names are only unique within
a collection, rather than within the database, the lookup must also include the
collection name.

```js
db._index("demo/primary")
db.demo.index("primary")
```

Collection Methods
------------------

@ -86,6 +95,10 @@ Other attributes may be necessary, depending on the index type.
- *fulltext*: fulltext index
- *geo*: geo index, with _one_ or _two_ attributes

**name** can be a string. Index names are subject to the same character
restrictions as collection names. If omitted, a name will be auto-generated so
that it is unique with respect to the collection, e.g. `idx_832910498`.
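
For example (a minimal arangosh sketch; the collection `demo` and the indexed
attribute `email` are only illustrative):

```js
// create a hash index with an explicit name
db.demo.ensureIndex({ type: "hash", fields: [ "email" ], name: "email_lookup" });

// without "name", the server auto-generates one, e.g. "idx_832910498"
db.demo.ensureIndex({ type: "hash", fields: [ "age" ] });
```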

**sparse** can be *true* or *false*.

For *hash* and *skiplist*, the sparsity can be controlled; *fulltext* and *geo*

@ -98,7 +111,7 @@ object existed before the call is indicated in the return attribute
*isNewlyCreated*.

**deduplicate** can be *true* or *false* and is supported by array indexes of
type *hash* or *skiplist*. It controls whether inserting duplicate index values
from the same document into a unique array index will lead to a unique constraint
error or not. The default value is *true*, so only a single instance of each
non-unique index value will be inserted into the index per document. Trying to

@ -248,7 +261,7 @@ finds an index
So you've created an index, and since its maintenance isn't for free,
you definitely want to know whether your query can utilize it.

You can use explain to verify whether **skiplists** or **hash indexes** are
used (if you omit `colors: false` you will get nice colors in ArangoShell):

@startDocuBlockInline IndexVerify

@ -260,4 +273,3 @@ used (if you omit `colors: false` you will get nice colors in ArangoShell):
~db._drop("example");
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock IndexVerify

@ -214,6 +214,15 @@ entries, and will continue to work.

Existing `_modules` collections will also remain functional.

### Named indices

Indices now have an additional `name` field, which allows for more useful
identifiers. System indices, like the primary and edge indices, have default
names (`primary` and `edge`, respectively). If no `name` value is specified
on index creation, one will be auto-generated (e.g. `idx_13820395`). The index
name _cannot_ be changed after index creation. No two indices on the same
collection may share the same name, but two indices on different collections
may.
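
As a quick sketch of how this can be used from arangosh (assuming a
collection `demo` exists):

```js
db.demo.ensureIndex({ type: "hash", fields: [ "value" ], name: "byValue" });
db.demo.index("byValue");    // look the index up by its name
db._index("demo/byValue");   // or by its collection-qualified name
```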

Client tools
------------

@ -56,3 +56,14 @@ undefined.
This change is about making queries like the above fail with a parse error, as an
unknown variable `key1` is accessed here, avoiding the undefined behavior. This is
also in line with what the documentation states about variable invalidation.

Miscellaneous
-------------

### Index creation

In previous versions of ArangoDB, if one attempted to create an index with a
specified `_id`, and that `_id` was already in use, the server would typically
return the existing index with matching `_id`. This is somewhat unintuitive, as
it would ignore whether the rest of the definition matched. This behavior has
been changed so that the server will now return a duplicate identifier error.
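
The same duplicate-identifier check also covers index names. A hypothetical
arangosh session (collection and index names invented for illustration):

```js
db.demo.ensureIndex({ type: "hash", fields: [ "a" ], name: "myIndex" });
// a *different* definition reusing the same identifier now fails with a
// duplicate identifier error instead of returning the first index:
db.demo.ensureIndex({ type: "skiplist", fields: [ "b" ], name: "myIndex" });
```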

@ -32,6 +32,10 @@ Indexing the system attribute *_id* is not supported for user-defined indexes.
Manually creating an index using *_id* as an index attribute will fail with
an error.

Optionally, an index name may be specified as a string in the *name* attribute.
Index names have the same restrictions as collection names. If no value is
specified, one will be auto-generated.
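
A request body for this index-creation endpoint could then look like the
following sketch (the attribute values are only illustrative):

```js
{
  "type": "hash",
  "fields": [ "email" ],
  "unique": true,
  "name": "email_unique"
}
```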

Some indexes can be created as unique or non-unique variants. Uniqueness
can be controlled for most indexes by specifying the *unique* flag in the
index details. Setting it to *true* will create a unique index.

@ -76,4 +80,3 @@ target index will not support, then an *HTTP 400* is returned.
@RESTRETURNCODE{404}
If *collection* is unknown, then an *HTTP 404* is returned.
@endDocuBlock

File diff suppressed because it is too large

@ -21,7 +21,6 @@
/// @author Max Neunhoeffer
////////////////////////////////////////////////////////////////////////////////

#include "ClusterMethods.h"
#include "Basics/NumberUtils.h"
#include "Basics/StaticStrings.h"
#include "Basics/StringUtils.h"

@ -30,6 +29,7 @@
#include "Basics/tri-strings.h"
#include "Cluster/ClusterComm.h"
#include "Cluster/ClusterInfo.h"
#include "ClusterMethods.h"
#include "Graph/Traverser.h"
#include "Indexes/Index.h"
#include "RestServer/TtlFeature.h"

@ -2520,7 +2520,7 @@ Result getTtlStatisticsFromAllDBServers(TtlStatistics& out) {
    cc->asyncRequest(coordTransactionID, "server:" + *it, arangodb::rest::RequestType::GET,
                     url, body, headers, nullptr, 120.0);
  }

  // Now listen to the results:
  int count;
  int nrok = 0;

@ -2576,7 +2576,7 @@ Result getTtlPropertiesFromAllDBServers(VPackBuilder& out) {
    cc->asyncRequest(coordTransactionID, "server:" + *it, arangodb::rest::RequestType::GET,
                     url, body, headers, nullptr, 120.0);
  }

  // Now listen to the results:
  bool set = false;
  int count;

@ -2636,7 +2636,7 @@ Result setTtlPropertiesOnAllDBServers(VPackSlice const& properties, VPackBuilder
    cc->asyncRequest(coordTransactionID, "server:" + *it, arangodb::rest::RequestType::PUT,
                     url, body, headers, nullptr, 120.0);
  }

  // Now listen to the results:
  bool set = false;
  int count;

@ -2777,12 +2777,11 @@ std::shared_ptr<LogicalCollection> ClusterMethods::persistCollectionInAgency(

  VPackBuilder velocy = col->toVelocyPackIgnore(ignoreKeys, false, false);
  auto& dbName = col->vocbase().name();
  auto res = ci->createCollectionCoordinator(  // create collection
      dbName, std::to_string(col->id()), col->numberOfShards(),
      col->replicationFactor(), waitForSyncReplication,
      velocy.slice(),  // collection definition
      240.0            // request timeout
  );

  if (!res.ok()) {

@ -20,13 +20,13 @@
/// @author Simon Grätzer
////////////////////////////////////////////////////////////////////////////////

#include "ClusterCollection.h"
#include "Basics/ReadLocker.h"
#include "Basics/Result.h"
#include "Basics/StaticStrings.h"
#include "Basics/VelocyPackHelper.h"
#include "Basics/WriteLocker.h"
#include "Cluster/ClusterMethods.h"
#include "ClusterEngine/ClusterEngine.h"
#include "ClusterEngine/ClusterIndex.h"
#include "Indexes/Index.h"

@ -314,6 +314,7 @@ void ClusterCollection::prepareIndexes(arangodb::velocypack::Slice indexesSlice)

  if (indexesSlice.length() == 0 && _indexes.empty()) {
    engine->indexFactory().fillSystemIndexes(_logicalCollection, indexes);

  } else {
    engine->indexFactory().prepareIndexes(_logicalCollection, indexesSlice, indexes);
  }

@ -423,7 +424,8 @@ LocalDocumentId ClusterCollection::lookupKey(transaction::Methods* trx,
  THROW_ARANGO_EXCEPTION(TRI_ERROR_NOT_IMPLEMENTED);
}

Result ClusterCollection::read(transaction::Methods* trx,
                               arangodb::velocypack::StringRef const& key,
                               ManagedDocumentResult& result, bool) {
  return Result(TRI_ERROR_NOT_IMPLEMENTED);
}

@ -20,7 +20,6 @@
/// @author Simon Grätzer
////////////////////////////////////////////////////////////////////////////////

#include "ClusterEngine.h"
#include "ApplicationFeatures/RocksDBOptionFeature.h"
#include "Basics/Exceptions.h"
#include "Basics/FileUtils.h"

@ -32,6 +31,7 @@
#include "Basics/VelocyPackHelper.h"
#include "Basics/WriteLocker.h"
#include "Basics/build.h"
#include "ClusterEngine.h"
#include "ClusterEngine/ClusterCollection.h"
#include "ClusterEngine/ClusterIndexFactory.h"
#include "ClusterEngine/ClusterRestHandlers.h"

@ -64,12 +64,15 @@ using namespace arangodb;
using namespace arangodb::application_features;
using namespace arangodb::options;

std::string const ClusterEngine::EngineName("Cluster");
std::string const ClusterEngine::FeatureName("ClusterEngine");

// fall back to using the mock storage engine
bool ClusterEngine::Mocking = false;

// create the storage engine
ClusterEngine::ClusterEngine(application_features::ApplicationServer& server)
    : StorageEngine(server, EngineName, FeatureName,
                    std::unique_ptr<IndexFactory>(new ClusterIndexFactory())),
      _actualEngine(nullptr) {
  setOptional(true);

@ -20,10 +20,10 @@
/// @author Simon Grätzer
////////////////////////////////////////////////////////////////////////////////

#include "ClusterIndex.h"
#include "Basics/StaticStrings.h"
#include "Basics/VelocyPackHelper.h"
#include "ClusterEngine/ClusterEngine.h"
#include "Indexes/SimpleAttributeEqualityMatcher.h"
#include "Indexes/SortedIndexAttributeMatcher.h"
#include "StorageEngine/EngineSelectorFeature.h"

@ -94,6 +94,7 @@ void ClusterIndex::toVelocyPack(VPackBuilder& builder,

  for (auto pair : VPackObjectIterator(_info.slice())) {
    if (!pair.key.isEqualString(StaticStrings::IndexId) &&
        !pair.key.isEqualString(StaticStrings::IndexName) &&
        !pair.key.isEqualString(StaticStrings::IndexType) &&
        !pair.key.isEqualString(StaticStrings::IndexFields) &&
        !pair.key.isEqualString("selectivityEstimate") && !pair.key.isEqualString("figures") &&

@ -105,7 +106,7 @@ void ClusterIndex::toVelocyPack(VPackBuilder& builder,
  }
  builder.close();
}

bool ClusterIndex::isPersistent() const {
  if (_engineType == ClusterEngineType::MMFilesEngine) {
    return _indexType == Index::TRI_IDX_TYPE_PERSISTENT_INDEX;

@ -229,8 +230,8 @@ bool ClusterIndex::supportsFilterCondition(
  std::unordered_set<std::string> nonNullAttributes;
  std::size_t values = 0;
  SortedIndexAttributeMatcher::matchAttributes(this, node, reference, found,
                                               values, nonNullAttributes,
                                               /*skip evaluation (during execution)*/ false);
  estimatedItems = values;
  return !found.empty();
}

@ -267,15 +268,19 @@ bool ClusterIndex::supportsFilterCondition(
      return matcher.matchOne(this, node, reference, itemsInIndex, estimatedItems, estimatedCost);
    }

    case TRI_IDX_TYPE_SKIPLIST_INDEX:
    case TRI_IDX_TYPE_TTL_INDEX: {
      return SortedIndexAttributeMatcher::supportsFilterCondition(allIndexes, this,
                                                                  node, reference,
                                                                  itemsInIndex, estimatedItems,
                                                                  estimatedCost);
    }
    case TRI_IDX_TYPE_PERSISTENT_INDEX: {
      // same for both engines
      return SortedIndexAttributeMatcher::supportsFilterCondition(allIndexes, this,
                                                                  node, reference,
                                                                  itemsInIndex, estimatedItems,
                                                                  estimatedCost);
    }

    case TRI_IDX_TYPE_UNKNOWN:

@ -301,8 +306,8 @@ bool ClusterIndex::supportsSortCondition(arangodb::aql::SortCondition const* sor
                                        estimatedCost, coveredAttributes);
  } else if (_engineType == ClusterEngineType::RocksDBEngine) {
    return SortedIndexAttributeMatcher::supportsSortCondition(this, sortCondition, reference,
                                                              itemsInIndex, estimatedCost,
                                                              coveredAttributes);
  }
  break;
}

@ -313,19 +318,20 @@ bool ClusterIndex::supportsSortCondition(arangodb::aql::SortCondition const* sor
#ifdef USE_IRESEARCH
    case TRI_IDX_TYPE_IRESEARCH_LINK:
#endif
    case TRI_IDX_TYPE_NO_ACCESS_INDEX:
    case TRI_IDX_TYPE_EDGE_INDEX: {
      return Index::supportsSortCondition(sortCondition, reference, itemsInIndex,
                                          estimatedCost, coveredAttributes);
    }

    case TRI_IDX_TYPE_SKIPLIST_INDEX:
    case TRI_IDX_TYPE_TTL_INDEX:
    case TRI_IDX_TYPE_PERSISTENT_INDEX: {
      if (_engineType == ClusterEngineType::MMFilesEngine ||
          _engineType == ClusterEngineType::RocksDBEngine) {
        return SortedIndexAttributeMatcher::supportsSortCondition(this, sortCondition, reference,
                                                                  itemsInIndex, estimatedCost,
                                                                  coveredAttributes);
      }
      break;
    }

@ -380,8 +386,8 @@ aql::AstNode* ClusterIndex::specializeCondition(aql::AstNode* node,
      return matcher.specializeOne(this, node, reference);
    }

    case TRI_IDX_TYPE_SKIPLIST_INDEX:
    case TRI_IDX_TYPE_TTL_INDEX:
    case TRI_IDX_TYPE_PERSISTENT_INDEX: {
      return SortedIndexAttributeMatcher::specializeCondition(this, node, reference);
    }

@ -20,13 +20,13 @@
/// @author Simon Grätzer
////////////////////////////////////////////////////////////////////////////////

#include "ClusterIndexFactory.h"
#include "Basics/StaticStrings.h"
#include "Basics/StringUtils.h"
#include "Basics/VelocyPackHelper.h"
#include "Cluster/ServerState.h"
#include "ClusterEngine/ClusterEngine.h"
#include "ClusterEngine/ClusterIndex.h"
#include "Indexes/Index.h"
#include "StorageEngine/EngineSelectorFeature.h"
#include "VocBase/LogicalCollection.h"

@ -69,10 +69,9 @@ struct DefaultIndexFactory : public arangodb::IndexTypeFactory {

  arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                               arangodb::LogicalCollection& collection,
                               arangodb::velocypack::Slice const& definition, TRI_idx_iid_t id,
                               bool  // isClusterConstructor
                               ) const override {
    auto* clusterEngine =
        static_cast<arangodb::ClusterEngine*>(arangodb::EngineSelectorFeature::ENGINE);

@ -121,13 +120,13 @@ struct DefaultIndexFactory : public arangodb::IndexTypeFactory {
};

struct EdgeIndexFactory : public DefaultIndexFactory {
  explicit EdgeIndexFactory(std::string const& type)
      : DefaultIndexFactory(type) {}

  arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                               arangodb::LogicalCollection& collection,
                               arangodb::velocypack::Slice const& definition,
                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
    if (!isClusterConstructor) {
      // this index cannot be created directly
      return arangodb::Result(TRI_ERROR_INTERNAL, "cannot create edge index");

@ -153,13 +152,13 @@ struct EdgeIndexFactory : public DefaultIndexFactory {
};

struct PrimaryIndexFactory : public DefaultIndexFactory {
  explicit PrimaryIndexFactory(std::string const& type)
      : DefaultIndexFactory(type) {}

  arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
                               arangodb::LogicalCollection& collection,
                               arangodb::velocypack::Slice const& definition,
                               TRI_idx_iid_t id, bool isClusterConstructor) const override {
    if (!isClusterConstructor) {
      // this index cannot be created directly
      return arangodb::Result(TRI_ERROR_INTERNAL,

@ -213,13 +212,14 @@ ClusterIndexFactory::ClusterIndexFactory() {
  emplace(ttlIndexFactory._type, ttlIndexFactory);
}

/// @brief index name aliases (e.g. "persistent" => "hash", "skiplist" =>
/// "hash") used to display storage engine capabilities
std::unordered_map<std::string, std::string> ClusterIndexFactory::indexAliases() const {
  auto* ce = static_cast<ClusterEngine*>(EngineSelectorFeature::ENGINE);
  auto* ae = ce->actualEngine();
  if (!ae) {
    THROW_ARANGO_EXCEPTION_MESSAGE(
        TRI_ERROR_INTERNAL, "no actual storage engine for ClusterIndexFactory");
  }
  return ae->indexFactory().indexAliases();
}

@ -254,6 +254,7 @@ void ClusterIndexFactory::fillSystemIndexes(arangodb::LogicalCollection& col,
  input.openObject();
  input.add(StaticStrings::IndexType, VPackValue("primary"));
  input.add(StaticStrings::IndexId, VPackValue("0"));
  input.add(StaticStrings::IndexName, VPackValue(StaticStrings::IndexNamePrimary));
  input.add(StaticStrings::IndexFields, VPackValue(VPackValueType::Array));
  input.add(VPackValue(StaticStrings::KeyString));
  input.close();

@ -277,14 +278,20 @@ void ClusterIndexFactory::fillSystemIndexes(arangodb::LogicalCollection& col,
  input.add(StaticStrings::IndexType,
            VPackValue(Index::oldtypeName(Index::TRI_IDX_TYPE_EDGE_INDEX)));
  input.add(StaticStrings::IndexId, VPackValue("1"));

  input.add(StaticStrings::IndexFields, VPackValue(VPackValueType::Array));
  input.add(VPackValue(StaticStrings::FromString));

  if (ct == ClusterEngineType::MMFilesEngine) {
    input.add(VPackValue(StaticStrings::ToString));
  }

  input.close();

  if (ct == ClusterEngineType::MMFilesEngine) {
    input.add(StaticStrings::IndexName, VPackValue(StaticStrings::IndexNameEdge));
  } else if (ct == ClusterEngineType::RocksDBEngine) {
    input.add(StaticStrings::IndexName, VPackValue(StaticStrings::IndexNameEdgeFrom));
  }

  input.add(StaticStrings::IndexUnique, VPackValue(false));
  input.add(StaticStrings::IndexSparse, VPackValue(false));
  input.close();

@ -299,6 +306,7 @@ void ClusterIndexFactory::fillSystemIndexes(arangodb::LogicalCollection& col,
  input.add(StaticStrings::IndexType,
            VPackValue(Index::oldtypeName(Index::TRI_IDX_TYPE_EDGE_INDEX)));
  input.add(StaticStrings::IndexId, VPackValue("2"));
  input.add(StaticStrings::IndexName, VPackValue(StaticStrings::IndexNameEdgeTo));
  input.add(StaticStrings::IndexFields, VPackValue(VPackValueType::Array));
  input.add(VPackValue(StaticStrings::ToString));
  input.close();

@ -21,17 +21,17 @@
/// @author Jan Steemann
////////////////////////////////////////////////////////////////////////////////

#include "Index.h"
#include "Aql/Ast.h"
#include "Aql/AstNode.h"
#include "Aql/Variable.h"
#include "Basics/Exceptions.h"
#include "Basics/HashSet.h"
#include "Basics/StaticStrings.h"
#include "Basics/StringUtils.h"
#include "Basics/VelocyPackHelper.h"
#include "Basics/datetime.h"
#include "Cluster/ServerState.h"

#ifdef USE_IRESEARCH
#include "IResearch/IResearchCommon.h"

@ -98,14 +98,13 @@ bool canBeNull(arangodb::aql::AstNode const* op, arangodb::aql::AstNode const* a
  TRI_ASSERT(op != nullptr);
  TRI_ASSERT(access != nullptr);

  if (access->type == arangodb::aql::NODE_TYPE_ATTRIBUTE_ACCESS &&
      access->getMemberUnchecked(0)->type == arangodb::aql::NODE_TYPE_REFERENCE) {
    // a.b
    // now check if the accessed attribute is _key, _rev or _id.
    // all of these cannot be null
    auto attributeName = access->getStringRef();
    if (attributeName == StaticStrings::KeyString || attributeName == StaticStrings::IdString ||
        attributeName == StaticStrings::RevString) {
      return false;
    }

@ -157,16 +156,42 @@ bool typeMatch(char const* type, size_t len, char const* expected) {
  return (len == ::strlen(expected)) && (::memcmp(type, expected, len) == 0);
}

/// @brief derive a default name for an index from its definition
std::string defaultIndexName(VPackSlice const& slice) {
  auto type =
      arangodb::Index::type(slice.get(arangodb::StaticStrings::IndexType).copyString());
  if (type == arangodb::Index::IndexType::TRI_IDX_TYPE_PRIMARY_INDEX) {
    return arangodb::StaticStrings::IndexNamePrimary;
  } else if (type == arangodb::Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX) {
    if (EngineSelectorFeature::isRocksDB()) {
      auto fields = slice.get(arangodb::StaticStrings::IndexFields);
      TRI_ASSERT(fields.isArray());
      auto firstField = fields.at(0);
      TRI_ASSERT(firstField.isString());
      bool isFromIndex = firstField.isEqualString(arangodb::StaticStrings::FromString);
      return isFromIndex ? arangodb::StaticStrings::IndexNameEdgeFrom
                         : arangodb::StaticStrings::IndexNameEdgeTo;
    }
    return arangodb::StaticStrings::IndexNameEdge;
  }

  std::string idString = arangodb::basics::VelocyPackHelper::getStringValue(
      slice, arangodb::StaticStrings::IndexId.c_str(),
      std::to_string(TRI_NewTickServer()));
  return std::string("idx_").append(idString);
}

}  // namespace

// If the Index is on a coordinator instance the index may not access the
// logical collection because it could be gone!

Index::Index(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection,
             std::string const& name,
             std::vector<std::vector<arangodb::basics::AttributeName>> const& fields,
             bool unique, bool sparse)
    : _iid(iid),
      _collection(collection),
      _name(name),
      _fields(fields),
      _useExpansion(::hasExpansion(_fields)),
      _unique(unique),

@ -177,6 +202,8 @@ Index::Index(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection,
Index::Index(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection, VPackSlice const& slice)
    : _iid(iid),
      _collection(collection),
      _name(arangodb::basics::VelocyPackHelper::getStringValue(
          slice, arangodb::StaticStrings::IndexName, ::defaultIndexName(slice))),
      _fields(::parseFields(slice.get(arangodb::StaticStrings::IndexFields),
                            Index::allowExpansion(Index::type(
                                slice.get(arangodb::StaticStrings::IndexType).copyString())))),

@ -188,6 +215,12 @@ Index::Index(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection, VPackSl

Index::~Index() {}

void Index::name(std::string const& newName) {
  if (_name.empty()) {
    _name = newName;
  }
}

size_t Index::sortWeight(arangodb::aql::AstNode const* node) {
  switch (node->type) {
    case arangodb::aql::NODE_TYPE_OPERATOR_BINARY_EQ:

@ -380,6 +413,25 @@ bool Index::validateHandle(char const* key, size_t* split) {
/// @brief generate a new index id
TRI_idx_iid_t Index::generateId() { return TRI_NewTickServer(); }

/// @brief check if two index definitions share any identifiers (_id, name)
bool Index::CompareIdentifiers(velocypack::Slice const& lhs, velocypack::Slice const& rhs) {
  VPackSlice lhsId = lhs.get(arangodb::StaticStrings::IndexId);
  VPackSlice rhsId = rhs.get(arangodb::StaticStrings::IndexId);
  if (lhsId.isString() && rhsId.isString() &&
      arangodb::basics::VelocyPackHelper::compare(lhsId, rhsId, true) == 0) {
    return true;
  }

  VPackSlice lhsName = lhs.get(arangodb::StaticStrings::IndexName);
  VPackSlice rhsName = rhs.get(arangodb::StaticStrings::IndexName);
  if (lhsName.isString() && rhsName.isString() &&
      arangodb::basics::VelocyPackHelper::compare(lhsName, rhsName, true) == 0) {
    return true;
  }

  return false;
}

/// @brief index comparator, used by the coordinator to detect if two index
/// contents are the same
bool Index::Compare(VPackSlice const& lhs, VPackSlice const& rhs) {

@ -438,6 +490,7 @@ void Index::toVelocyPack(VPackBuilder& builder,
              arangodb::velocypack::Value(std::to_string(_iid)));
  builder.add(arangodb::StaticStrings::IndexType,
              arangodb::velocypack::Value(oldtypeName(type())));
  builder.add(arangodb::StaticStrings::IndexName, arangodb::velocypack::Value(name()));

  builder.add(arangodb::velocypack::Value(arangodb::StaticStrings::IndexFields));
  builder.openArray();

@ -944,22 +997,25 @@ std::ostream& operator<<(std::ostream& stream, arangodb::Index const& index) {
  return stream;
}

double Index::getTimestamp(arangodb::velocypack::Slice const& doc,
                           std::string const& attributeName) const {
  VPackSlice value = doc.get(attributeName);

  if (value.isString()) {
    // string value. we expect it to be YYYY-MM-DD etc.
    tp_sys_clock_ms tp;
    if (basics::parseDateTime(value.copyString(), tp)) {
      return static_cast<double>(
          std::chrono::duration_cast<std::chrono::seconds>(tp.time_since_epoch())
              .count());
    }
    // invalid date format
    // fall-through intentional
  } else if (value.isNumber()) {
    // numeric value. we take it as it is
    return value.getNumericValue<double>();
  }

  // attribute not found in document, or invalid type
  return -1.0;
}

@ -77,7 +77,7 @@ class Index {
  Index(Index const&) = delete;
  Index& operator=(Index const&) = delete;

  Index(TRI_idx_iid_t iid, LogicalCollection& collection, std::string const& name,
        std::vector<std::vector<arangodb::basics::AttributeName>> const& fields,
        bool unique, bool sparse);

@ -113,6 +113,17 @@ class Index {
  /// @brief return the index id
  inline TRI_idx_iid_t id() const { return _iid; }

  /// @brief return the index name
  inline std::string const& name() const {
    if (_name == StaticStrings::IndexNameEdgeFrom || _name == StaticStrings::IndexNameEdgeTo) {
      return StaticStrings::IndexNameEdge;
    }
    return _name;
  }

  /// @brief set the name, if it is currently unset
  void name(std::string const&);

  /// @brief return the index fields
  inline std::vector<std::vector<arangodb::basics::AttributeName>> const& fields() const {
    return _fields;

@ -225,6 +236,9 @@ class Index {
  /// @brief generate a new index id
  static TRI_idx_iid_t generateId();

  /// @brief check if two index definitions share any identifiers (_id, name)
  static bool CompareIdentifiers(velocypack::Slice const& lhs, velocypack::Slice const& rhs);

  /// @brief index comparator, used by the coordinator to detect if two index
  /// contents are the same
  static bool Compare(velocypack::Slice const& lhs, velocypack::Slice const& rhs);

@ -257,9 +271,10 @@ class Index {
  /// @brief return the selectivity estimate of the index
  /// must only be called if hasSelectivityEstimate() returns true
  ///
  /// The extra arangodb::velocypack::StringRef is only used in the edge index
  /// as direction attribute; a Slice would be more flexible.
  virtual double selectivityEstimate(arangodb::velocypack::StringRef const& extra =
                                         arangodb::velocypack::StringRef()) const;

  /// @brief update the cluster selectivity estimate
  virtual void updateClusterSelectivityEstimate(double /*estimate*/) {

@ -365,7 +380,7 @@ class Index {
  /// it is only allowed to call this method if the index contains a
  /// single attribute
  std::string const& getAttribute() const;

  /// @brief generate error result
  /// @param code the error key
  /// @param key the conflicting key

@ -385,10 +400,12 @@ class Index {
  /// @brief extracts a timestamp value from a document
  /// returns a negative value if the document does not contain the specified
  /// attribute, or the attribute does not contain a valid timestamp or date string
  double getTimestamp(arangodb::velocypack::Slice const& doc,
                      std::string const& attributeName) const;

  TRI_idx_iid_t const _iid;
  LogicalCollection& _collection;
  std::string _name;
  std::vector<std::vector<arangodb::basics::AttributeName>> const _fields;
  bool const _useExpansion;

@ -21,13 +21,13 @@
/// @author Michael Hackstein
////////////////////////////////////////////////////////////////////////////////

#include "IndexFactory.h"
#include "Basics/AttributeNameParser.h"
#include "Basics/Exceptions.h"
#include "Basics/StaticStrings.h"
#include "Basics/StringUtils.h"
#include "Basics/VelocyPackHelper.h"
#include "Cluster/ServerState.h"
#include "Indexes/Index.h"
#include "RestServer/BootstrapFeature.h"
#include "VocBase/LogicalCollection.h"

@ -56,12 +56,12 @@ struct InvalidIndexFactory : public arangodb::IndexTypeFactory {
        definition.toString());
  }

  arangodb::Result normalize(                  // normalize definition
      arangodb::velocypack::Builder&,          // normalized definition (out-param)
      arangodb::velocypack::Slice definition,  // source definition
      bool,                                    // definition for index creation
      TRI_vocbase_t const&                     // index vocbase
  ) const override {
    return arangodb::Result(
        TRI_ERROR_BAD_PARAMETER,
        std::string(

@ -75,18 +75,17 @@ InvalidIndexFactory const INVALID;
}  // namespace

namespace arangodb {

bool IndexTypeFactory::equal(arangodb::Index::IndexType type,
                             arangodb::velocypack::Slice const& lhs,
                             arangodb::velocypack::Slice const& rhs,
                             bool attributeOrderMatters) const {
  // unique must be identical if present
  auto value = lhs.get(arangodb::StaticStrings::IndexUnique);

  if (value.isBoolean()) {
    if (arangodb::basics::VelocyPackHelper::compare(value, rhs.get(arangodb::StaticStrings::IndexUnique),
                                                    false)) {
      return false;
    }
  }

@ -95,8 +94,8 @@ bool IndexTypeFactory::equal(arangodb::Index::IndexType type,
  value = lhs.get(arangodb::StaticStrings::IndexSparse);

  if (value.isBoolean()) {
    if (arangodb::basics::VelocyPackHelper::compare(value, rhs.get(arangodb::StaticStrings::IndexSparse),
                                                    false)) {
      return false;
    }
  }

@ -188,12 +187,12 @@ Result IndexFactory::emplace(std::string const& type, IndexTypeFactory const& fa
  return arangodb::Result();
}

Result IndexFactory::enhanceIndexDefinition(  // normalize definition
    velocypack::Slice const definition,       // source definition
    velocypack::Builder& normalized,          // normalized definition (out-param)
    bool isCreation,                          // definition for index creation
    TRI_vocbase_t const& vocbase              // index vocbase
) const {
  auto type = definition.get(StaticStrings::IndexType);

  if (!type.isString()) {

@ -220,6 +219,29 @@ Result IndexFactory::enhanceIndexDefinition(  // normalize definition
                   arangodb::velocypack::Value(std::to_string(id)));
  }

  auto nameSlice = definition.get(StaticStrings::IndexName);
  std::string name;

  if (nameSlice.isString() && (nameSlice.getStringLength() != 0)) {
    name = nameSlice.copyString();
  } else {
    // we should set the name for special types explicitly elsewhere, but just in case...
    if (Index::type(type.copyString()) == Index::IndexType::TRI_IDX_TYPE_PRIMARY_INDEX) {
      name = StaticStrings::IndexNamePrimary;
    } else if (Index::type(type.copyString()) == Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX) {
      name = StaticStrings::IndexNameEdge;
    } else {
      // generate a name
      name = "idx_" + std::to_string(TRI_HybridLogicalClock());
    }
  }

  if (!TRI_vocbase_t::IsAllowedName(false, velocypack::StringRef(name))) {
    return Result(TRI_ERROR_ARANGO_ILLEGAL_NAME);
  }

  normalized.add(StaticStrings::IndexName, arangodb::velocypack::Value(name));

  return factory.normalize(normalized, definition, isCreation, vocbase);
} catch (basics::Exception const& ex) {
  return Result(ex.code(), ex.what());

@ -275,8 +297,8 @@ std::shared_ptr<Index> IndexFactory::prepareIndexFromSlice(velocypack::Slice def

/// same for both storage engines
std::vector<std::string> IndexFactory::supportedIndexes() const {
  return std::vector<std::string>{"primary", "edge", "hash", "skiplist",
                                  "ttl", "persistent", "geo", "fulltext"};
}

std::unordered_map<std::string, std::string> IndexFactory::indexAliases() const {

@ -286,7 +308,8 @@ std::unordered_map<std::string, std::string> IndexFactory::indexAliases() const
TRI_idx_iid_t IndexFactory::validateSlice(arangodb::velocypack::Slice info,
                                          bool generateKey, bool isClusterConstructor) {
  if (!info.isObject()) {
    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER,
                                   "expecting object for index definition");
  }

  TRI_idx_iid_t iid = 0;

@ -319,34 +342,35 @@ TRI_idx_iid_t IndexFactory::validateSlice(arangodb::velocypack::Slice info,

/// @brief process the fields list, deduplicate it, and add it to the json
Result IndexFactory::processIndexFields(VPackSlice definition, VPackBuilder& builder,
                                        size_t minFields, size_t maxField,
                                        bool create, bool allowExpansion) {
  TRI_ASSERT(builder.isOpenObject());
  std::unordered_set<arangodb::velocypack::StringRef> fields;
  auto fieldsSlice = definition.get(arangodb::StaticStrings::IndexFields);

  builder.add(arangodb::velocypack::Value(arangodb::StaticStrings::IndexFields));
  builder.openArray();

  if (fieldsSlice.isArray()) {
    // "fields" is a list of fields
    for (auto const& it : VPackArrayIterator(fieldsSlice)) {
      if (!it.isString()) {
        return Result(TRI_ERROR_BAD_PARAMETER,
                      "index field names must be strings");
      }

      arangodb::velocypack::StringRef f(it);

      if (f.empty() || (create && f == StaticStrings::IdString)) {
        // accessing internal attributes is disallowed
        return Result(TRI_ERROR_BAD_PARAMETER, "_id attribute cannot be indexed");
      }

      if (fields.find(f) != fields.end()) {
        // duplicate attribute name
        return Result(TRI_ERROR_BAD_PARAMETER,
                      "duplicate attribute name in index fields list");
      }

      std::vector<basics::AttributeName> temp;

@ -359,7 +383,8 @@ Result IndexFactory::processIndexFields(VPackSlice definition, VPackBuilder& bui

  size_t cc = fields.size();
  if (cc == 0 || cc < minFields || cc > maxField) {
    return Result(TRI_ERROR_BAD_PARAMETER,
                  "invalid number of index attributes");
  }

  builder.close();

@ -367,16 +392,11 @@ Result IndexFactory::processIndexFields(VPackSlice definition, VPackBuilder& bui
}

/// @brief process the unique flag and add it to the json
void IndexFactory::processIndexUniqueFlag(VPackSlice definition, VPackBuilder& builder) {
  bool unique = basics::VelocyPackHelper::getBooleanValue(
      definition, arangodb::StaticStrings::IndexUnique.c_str(), false);

  builder.add(arangodb::StaticStrings::IndexUnique, arangodb::velocypack::Value(unique));
}

/// @brief process the sparse flag and add it to the json

@ -384,38 +404,30 @@ void IndexFactory::processIndexSparseFlag(VPackSlice definition,
                                          VPackBuilder& builder, bool create) {
  if (definition.hasKey(arangodb::StaticStrings::IndexSparse)) {
    bool sparseBool = basics::VelocyPackHelper::getBooleanValue(
        definition, arangodb::StaticStrings::IndexSparse.c_str(), false);

    builder.add(arangodb::StaticStrings::IndexSparse,
                arangodb::velocypack::Value(sparseBool));
  } else if (create) {
    // not set. now add a default value
    builder.add(arangodb::StaticStrings::IndexSparse, arangodb::velocypack::Value(false));
  }
}

/// @brief process the deduplicate flag and add it to the json
void IndexFactory::processIndexDeduplicateFlag(VPackSlice definition, VPackBuilder& builder) {
  bool dup = basics::VelocyPackHelper::getBooleanValue(definition, "deduplicate", true);
  builder.add("deduplicate", VPackValue(dup));
}

/// @brief process the geojson flag and add it to the json
void IndexFactory::processIndexGeoJsonFlag(VPackSlice definition, VPackBuilder& builder) {
  auto fieldsSlice = definition.get(arangodb::StaticStrings::IndexFields);

  if (fieldsSlice.isArray() && fieldsSlice.length() == 1) {
    // only add geoJson for indexes with a single field (which needs to be an array)
    bool geoJson =
        basics::VelocyPackHelper::getBooleanValue(definition, "geoJson", false);

    builder.add("geoJson", VPackValue(geoJson));
  }

@ -430,12 +442,12 @@ Result IndexFactory::enhanceJsonIndexGeneric(VPackSlice definition,
    processIndexSparseFlag(definition, builder, create);
    processIndexUniqueFlag(definition, builder);
    processIndexDeduplicateFlag(definition, builder);

    bool bck = basics::VelocyPackHelper::getBooleanValue(definition, StaticStrings::IndexInBackground,
                                                         false);
    builder.add(StaticStrings::IndexInBackground, VPackValue(bck));
  }

  return res;
}

@ -443,7 +455,7 @@ Result IndexFactory::enhanceJsonIndexGeneric(VPackSlice definition,
Result IndexFactory::enhanceJsonIndexTtl(VPackSlice definition,
                                         VPackBuilder& builder, bool create) {
  Result res = processIndexFields(definition, builder, 1, 1, create, false);

  if (res.ok()) {
    // a TTL index is always non-unique but sparse!
    builder.add(arangodb::StaticStrings::IndexUnique, arangodb::velocypack::Value(false));

@ -451,14 +463,16 @@ Result IndexFactory::enhanceJsonIndexTtl(VPackSlice definition,

    VPackSlice v = definition.get(StaticStrings::IndexExpireAfter);
    if (!v.isNumber()) {
      return Result(TRI_ERROR_BAD_PARAMETER,
                    "expireAfter attribute must be a number");
    }
    double d = v.getNumericValue<double>();
    if (d < 0.0) {
      return Result(TRI_ERROR_BAD_PARAMETER,
                    "expireAfter attribute must be greater than or equal to zero");
    }
    builder.add(arangodb::StaticStrings::IndexExpireAfter, v);

    bool bck = basics::VelocyPackHelper::getBooleanValue(definition, StaticStrings::IndexInBackground,
                                                         false);
    builder.add(StaticStrings::IndexInBackground, VPackValue(bck));

@ -468,16 +482,15 @@ Result IndexFactory::enhanceJsonIndexTtl(VPackSlice definition,
}

/// @brief enhances the json of a geo, geo1 or geo2 index
Result IndexFactory::enhanceJsonIndexGeo(VPackSlice definition, VPackBuilder& builder,
                                         bool create, int minFields, int maxFields) {
  Result res = processIndexFields(definition, builder, minFields, maxFields, create, false);

  if (res.ok()) {
    builder.add(arangodb::StaticStrings::IndexSparse, arangodb::velocypack::Value(true));
    builder.add(arangodb::StaticStrings::IndexUnique, arangodb::velocypack::Value(false));
    IndexFactory::processIndexGeoJsonFlag(definition, builder);

    bool bck = basics::VelocyPackHelper::getBooleanValue(definition, StaticStrings::IndexInBackground,
                                                         false);
    builder.add(StaticStrings::IndexInBackground, VPackValue(bck));

@ -507,7 +520,7 @@ Result IndexFactory::enhanceJsonIndexFulltext(VPackSlice definition,
  }

  builder.add("minLength", VPackValue(minWordLength));

  bool bck = basics::VelocyPackHelper::getBooleanValue(definition, StaticStrings::IndexInBackground,
                                                       false);
  builder.add(StaticStrings::IndexInBackground, VPackValue(bck));

@ -21,7 +21,6 @@
/// @author Jan Steemann
////////////////////////////////////////////////////////////////////////////////

#include "MMFilesCollection.h"
#include "ApplicationFeatures/ApplicationServer.h"
#include "Aql/PlanCache.h"
#include "Basics/Exceptions.h"

@ -51,6 +50,7 @@
#include "MMFiles/MMFilesLogfileManager.h"
#include "MMFiles/MMFilesPrimaryIndex.h"
#include "MMFiles/MMFilesTransactionState.h"
#include "MMFilesCollection.h"
#include "RestServer/DatabaseFeature.h"
#include "Scheduler/Scheduler.h"
#include "Scheduler/SchedulerFeature.h"

@ -704,7 +704,7 @@ int MMFilesCollection::close() {
    // We also have to unload the indexes.
    WRITE_LOCKER(writeLocker, _dataLock);

    READ_LOCKER_EVENTUAL(guard, _indexesLock);

    for (auto& idx : _indexes) {
      idx->unload();

@ -719,11 +719,11 @@ int MMFilesCollection::close() {
          _ditches.contains(MMFilesDitch::TRI_DITCH_DATAFILE_RENAME) ||
          _ditches.contains(MMFilesDitch::TRI_DITCH_REPLICATION) ||
          _ditches.contains(MMFilesDitch::TRI_DITCH_COMPACTION));

      if (!hasDocumentDitch && !hasOtherDitch) {
        // we can abort now
        break;
      }
    }

    // give the cleanup thread more time to clean up
    {

@ -733,9 +733,13 @@ int MMFilesCollection::close() {

      if ((++tries % 10) == 0) {
        if (hasDocumentDitch) {
          LOG_TOPIC(WARN, Logger::ENGINES)
              << "waiting for cleanup of document ditches for collection '"
              << _logicalCollection.name() << "'. has other: " << hasOtherDitch;
        } else {
          LOG_TOPIC(WARN, Logger::ENGINES)
              << "waiting for cleanup of ditches for collection '"
              << _logicalCollection.name() << "'";
        }
      }

@ -2021,7 +2025,8 @@ Result MMFilesCollection::read(transaction::Methods* trx, VPackSlice const& key,
  return Result(TRI_ERROR_NO_ERROR);
}

Result MMFilesCollection::read(transaction::Methods* trx,
                               arangodb::velocypack::StringRef const& key,
                               ManagedDocumentResult& result, bool lock) {
  // copy string into a vpack string
  transaction::BuilderLeaser builder(trx);

@ -2200,10 +2205,12 @@ std::shared_ptr<Index> MMFilesCollection::createIndex(transaction::Methods& trx,
  TRI_ASSERT(info.isObject());
  std::shared_ptr<Index> idx = PhysicalCollection::lookupIndex(info);

  if (idx != nullptr) {
    // We already have this index.
    if (idx->type() == arangodb::Index::TRI_IDX_TYPE_TTL_INDEX) {
      THROW_ARANGO_EXCEPTION_MESSAGE(
          TRI_ERROR_BAD_PARAMETER,
          "there can only be one ttl index per collection");
    }
    created = false;
    return idx;
@@ -2227,8 +2234,18 @@ std::shared_ptr<Index> MMFilesCollection::createIndex(transaction::Methods& trx,
}
auto other = PhysicalCollection::lookupIndex(idx->id());
if (!other) {
other = PhysicalCollection::lookupIndex(idx->name());
}
if (other) {
return other;
// definition shares an identifier with an existing index with a
// different definition
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_ARANGO_DUPLICATE_IDENTIFIER,
"duplicate value for `" +
arangodb::StaticStrings::IndexId +
"` or `" +
arangodb::StaticStrings::IndexName +
"`");
}
TRI_ASSERT(idx->type() != Index::IndexType::TRI_IDX_TYPE_PRIMARY_INDEX);
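The hunk above puts index ids and index names into a single uniqueness domain: before a new MMFiles index is registered, both `lookupIndex(idx->id())` and `lookupIndex(idx->name())` are consulted, and any hit is now rejected with `TRI_ERROR_ARANGO_DUPLICATE_IDENTIFIER` instead of silently returning the existing index. A standalone sketch of that rule with stub types (not ArangoDB code):

    #include <cstdint>
    #include <memory>
    #include <stdexcept>
    #include <string>
    #include <vector>

    struct IndexStub {  // hypothetical stand-in for arangodb::Index
      std::uint64_t id;
      std::string name;
    };

    // Throws if the candidate collides with an existing index on id or name.
    void ensureNoConflict(std::vector<std::shared_ptr<IndexStub>> const& existing,
                          IndexStub const& candidate) {
      for (auto const& other : existing) {
        if (other->id == candidate.id || other->name == candidate.name) {
          throw std::runtime_error("duplicate value for `id` or `name`");
        }
      }
    }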
@@ -21,7 +21,6 @@
/// @author Dr. Frank Celler
////////////////////////////////////////////////////////////////////////////////
#include "MMFilesEdgeIndex.h"
#include "Aql/AstNode.h"
#include "Aql/SortCondition.h"
#include "Basics/Exceptions.h"

@@ -32,6 +31,7 @@
#include "Indexes/SimpleAttributeEqualityMatcher.h"
#include "MMFiles/MMFilesCollection.h"
#include "MMFiles/MMFilesIndexLookupContext.h"
#include "MMFilesEdgeIndex.h"
#include "StorageEngine/TransactionState.h"
#include "Transaction/Context.h"
#include "Transaction/Helpers.h"

@@ -174,7 +174,7 @@ void MMFilesEdgeIndexIterator::reset() {
}
MMFilesEdgeIndex::MMFilesEdgeIndex(TRI_idx_iid_t iid, arangodb::LogicalCollection& collection)
: MMFilesIndex(iid, collection,
: MMFilesIndex(iid, collection, StaticStrings::IndexNameEdge,
std::vector<std::vector<arangodb::basics::AttributeName>>(
{{arangodb::basics::AttributeName(StaticStrings::FromString, false)},
{arangodb::basics::AttributeName(StaticStrings::ToString, false)}}),
@@ -36,10 +36,10 @@ class LogicalCollection;
class MMFilesIndex : public Index {
public:
MMFilesIndex(TRI_idx_iid_t id, LogicalCollection& collection,
MMFilesIndex(TRI_idx_iid_t id, LogicalCollection& collection, std::string const& name,
std::vector<std::vector<arangodb::basics::AttributeName>> const& attributes,
bool unique, bool sparse)
: Index(id, collection, attributes, unique, sparse) {}
: Index(id, collection, name, attributes, unique, sparse) {}
MMFilesIndex(TRI_idx_iid_t id, LogicalCollection& collection,
arangodb::velocypack::Slice const& info)

@@ -49,24 +49,23 @@ class MMFilesIndex : public Index {
virtual bool isHidden() const override {
return false; // do not generally hide MMFiles indexes
}
virtual bool isPersistent() const override { return false; };
virtual void batchInsert(transaction::Methods& trx,
std::vector<std::pair<LocalDocumentId, arangodb::velocypack::Slice>> const& docs,
std::shared_ptr<arangodb::basics::LocalTaskQueue> queue);
virtual Result insert(transaction::Methods& trx, LocalDocumentId const& documentId,
arangodb::velocypack::Slice const& doc, OperationMode mode) = 0;
virtual Result remove(transaction::Methods& trx, LocalDocumentId const& documentId,
arangodb::velocypack::Slice const& doc, OperationMode mode) = 0;
void afterTruncate(TRI_voc_tick_t) override {
// for mmfiles, truncating the index just unloads it
unload();
}
};
} // namespace arangodb
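The first `MMFilesIndex` constructor now accepts the index name and forwards it to the `Index` base class, so every concrete index type carries a human-readable name next to its numeric id. A compressed sketch of the forwarding pattern with hypothetical stub classes (not the real class hierarchy):

    #include <cstdint>
    #include <string>
    #include <utility>

    class IndexBase {  // stands in for arangodb::Index
     public:
      IndexBase(std::uint64_t id, std::string name)
          : _id(id), _name(std::move(name)) {}
      std::string const& name() const { return _name; }

     private:
      std::uint64_t _id;
      std::string _name;
    };

    class EdgeIndexStub : public IndexBase {  // stands in for MMFilesEdgeIndex
     public:
      explicit EdgeIndexStub(std::uint64_t id)
          : IndexBase(id, "edge") {}  // built-in types pass a fixed name
    };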
@@ -21,7 +21,6 @@
/// @author Dr. Frank Celler
////////////////////////////////////////////////////////////////////////////////
#include "MMFilesPrimaryIndex.h"
#include "Aql/AstNode.h"
#include "Basics/Exceptions.h"
#include "Basics/StaticStrings.h"

@@ -31,6 +30,7 @@
#include "MMFiles/MMFilesCollection.h"
#include "MMFiles/MMFilesIndexElement.h"
#include "MMFiles/MMFilesIndexLookupContext.h"
#include "MMFilesPrimaryIndex.h"
#include "StorageEngine/TransactionState.h"
#include "Transaction/Context.h"
#include "Transaction/Helpers.h"

@@ -229,7 +229,7 @@ void MMFilesAnyIndexIterator::reset() {
}
MMFilesPrimaryIndex::MMFilesPrimaryIndex(arangodb::LogicalCollection& collection)
: MMFilesIndex(0, collection,
: MMFilesIndex(0, collection, StaticStrings::IndexNamePrimary,
std::vector<std::vector<arangodb::basics::AttributeName>>(
{{arangodb::basics::AttributeName(StaticStrings::KeyString, false)}}),
/*unique*/ true, /*sparse*/ false) {
@@ -75,8 +75,8 @@ struct BuilderCookie : public arangodb::TransactionState::Cookie {
} // namespace
RocksDBBuilderIndex::RocksDBBuilderIndex(std::shared_ptr<arangodb::RocksDBIndex> const& wp)
: RocksDBIndex(wp->id(), wp->collection(), wp->fields(), wp->unique(),
wp->sparse(), wp->columnFamily(), wp->objectId(),
: RocksDBIndex(wp->id(), wp->collection(), wp->name(), wp->fields(),
wp->unique(), wp->sparse(), wp->columnFamily(), wp->objectId(),
/*useCache*/ false),
_wrapped(wp) {
TRI_ASSERT(_wrapped);

@@ -111,7 +111,7 @@ Result RocksDBBuilderIndex::insert(transaction::Methods& trx, RocksDBMethods* mt
ctx = ptr.get();
trx.state()->cookie(this, std::move(ptr));
}
// do not track document more than once
if (ctx->tracked.find(documentId.id()) == ctx->tracked.end()) {
ctx->tracked.insert(documentId.id());

@@ -182,10 +182,8 @@ static arangodb::Result fillIndex(RocksDBIndex& ridx, WriteBatchType& batch,
THROW_ARANGO_EXCEPTION(res);
}
TRI_IF_FAILURE("RocksDBBuilderIndex::fillIndex") {
FATAL_ERROR_EXIT();
}
TRI_IF_FAILURE("RocksDBBuilderIndex::fillIndex") { FATAL_ERROR_EXIT(); }
uint64_t numDocsWritten = 0;
auto state = RocksDBTransactionState::toState(&trx);
RocksDBTransactionCollection* trxColl = trx.resolveTrxCollection();

@@ -250,12 +248,13 @@ static arangodb::Result fillIndex(RocksDBIndex& ridx, WriteBatchType& batch,
}
// if an error occurred drop() will be called
LOG_TOPIC(DEBUG, Logger::ENGINES) << "SNAPSHOT CAPTURED " << numDocsWritten << " " << res.errorMessage();
LOG_TOPIC(DEBUG, Logger::ENGINES)
<< "SNAPSHOT CAPTURED " << numDocsWritten << " " << res.errorMessage();
return res;
}
arangodb::Result RocksDBBuilderIndex::fillIndexForeground() {
RocksDBIndex* internal = _wrapped.get();
TRI_ASSERT(internal != nullptr);

@@ -311,7 +310,6 @@ struct ReplayHandler final : public rocksdb::WriteBatch::Handler {
// The default implementation of LogData does nothing.
void LogData(const rocksdb::Slice& blob) override {
switch (RocksDBLogValue::type(blob)) {
case RocksDBLogType::TrackedDocumentInsert:
if (_lastObjectID == _objectId) {

@@ -320,8 +318,8 @@ struct ReplayHandler final : public rocksdb::WriteBatch::Handler {
Index::OperationMode::normal);
numInserted++;
}
break;
case RocksDBLogType::TrackedDocumentRemove:
if (_lastObjectID == _objectId) {
auto pair = RocksDBLogValue::trackedDocument(blob);

@@ -330,7 +328,7 @@ struct ReplayHandler final : public rocksdb::WriteBatch::Handler {
numRemoved++;
}
break;
default: // ignore
_lastObjectID = 0;
break;

@@ -369,10 +367,9 @@ struct ReplayHandler final : public rocksdb::WriteBatch::Handler {
return rocksdb::Status();
}
rocksdb::Status DeleteRangeCF(uint32_t column_family_id,
const rocksdb::Slice& begin_key,
rocksdb::Status DeleteRangeCF(uint32_t column_family_id, const rocksdb::Slice& begin_key,
const rocksdb::Slice& end_key) override {
incTick(); // drop and truncate may use this
if (column_family_id == _index.columnFamily()->GetID() &&
RocksDBKey::objectId(begin_key) == _objectId &&
RocksDBKey::objectId(end_key) == _objectId) {

@@ -479,7 +476,7 @@ Result catchup(RocksDBIndex& ridx, WriteBatchType& wb, AccessMode::Type mode,
res = replay.tmpRes;
break;
}
commitLambda(batch.sequence);
if (res.fail()) {
break;

@@ -502,8 +499,8 @@ Result catchup(RocksDBIndex& ridx, WriteBatchType& wb, AccessMode::Type mode,
}
LOG_TOPIC(DEBUG, Logger::ENGINES) << "WAL REPLAYED insertions: " << replay.numInserted
<< "; deletions: " << replay.numRemoved << "; lastScannedTick "
<< lastScannedTick;
<< "; deletions: " << replay.numRemoved
<< "; lastScannedTick " << lastScannedTick;
return res;
}

@@ -529,7 +526,7 @@ void RocksDBBuilderIndex::Locker::unlock() {
// Background index filler task
arangodb::Result RocksDBBuilderIndex::fillIndexBackground(Locker& locker) {
TRI_ASSERT(locker.isLocked());
arangodb::Result res;
RocksDBIndex* internal = _wrapped.get();
TRI_ASSERT(internal != nullptr);

@@ -538,28 +535,26 @@ arangodb::Result RocksDBBuilderIndex::fillIndexBackground(Locker& locker) {
rocksdb::DB* rootDB = engine->db()->GetRootDB();
rocksdb::Snapshot const* snap = rootDB->GetSnapshot();
engine->disableWalFilePruning(true);
auto scope = scopeGuard([&] {
engine->disableWalFilePruning(false);
if (snap) {
rootDB->ReleaseSnapshot(snap);
}
});
locker.unlock();
if (internal->unique()) {
const rocksdb::Comparator* cmp = internal->columnFamily()->GetComparator();
// unique index. we need to keep track of all our changes because we need to
// avoid duplicate index keys. must therefore use a WriteBatchWithIndex
rocksdb::WriteBatchWithIndex batch(cmp, 32 * 1024 * 1024);
res = ::fillIndex<rocksdb::WriteBatchWithIndex, RocksDBBatchedWithIndexMethods>(
*internal, batch, snap);
res = ::fillIndex<rocksdb::WriteBatchWithIndex, RocksDBBatchedWithIndexMethods>(*internal, batch, snap);
} else {
// non-unique index. all index keys will be unique anyway because they
// contain the document id we can therefore get away with a cheap WriteBatch
rocksdb::WriteBatch batch(32 * 1024 * 1024);
res = ::fillIndex<rocksdb::WriteBatch, RocksDBBatchedMethods>(*internal, batch,
snap);
res = ::fillIndex<rocksdb::WriteBatch, RocksDBBatchedMethods>(*internal, batch, snap);
}
if (res.fail()) {

@@ -578,8 +573,8 @@ arangodb::Result RocksDBBuilderIndex::fillIndexBackground(Locker& locker) {
numScanned = 0;
if (internal->unique()) {
const rocksdb::Comparator* cmp = internal->columnFamily()->GetComparator();
// unique index. we need to keep track of all our changes because we need to
// avoid duplicate index keys. must therefore use a WriteBatchWithIndex
// unique index. we need to keep track of all our changes because we need
// to avoid duplicate index keys. must therefore use a WriteBatchWithIndex
rocksdb::WriteBatchWithIndex batch(cmp, 32 * 1024 * 1024);
res = ::catchup<rocksdb::WriteBatchWithIndex, RocksDBBatchedWithIndexMethods>(
*internal, batch, AccessMode::Type::WRITE, scanFrom, lastScanned, numScanned);

@@ -587,23 +582,21 @@ arangodb::Result RocksDBBuilderIndex::fillIndexBackground(Locker& locker) {
// non-unique index. all index keys will be unique anyway because they
// contain the document id we can therefore get away with a cheap WriteBatch
rocksdb::WriteBatch batch(32 * 1024 * 1024);
res = ::catchup<rocksdb::WriteBatch, RocksDBBatchedMethods>(*internal, batch,
AccessMode::Type::WRITE,
scanFrom, lastScanned,
numScanned);
res = ::catchup<rocksdb::WriteBatch, RocksDBBatchedMethods>(
*internal, batch, AccessMode::Type::WRITE, scanFrom, lastScanned, numScanned);
}
if (res.fail()) {
return res;
}
scanFrom = lastScanned;
} while (maxCatchups-- > 0 && numScanned > 5000);
if (!locker.lock()) { // acquire exclusive collection lock
return res.reset(TRI_ERROR_LOCK_TIMEOUT);
}
scanFrom = lastScanned;
if (internal->unique()) {
const rocksdb::Comparator* cmp = internal->columnFamily()->GetComparator();

@@ -621,6 +614,6 @@ arangodb::Result RocksDBBuilderIndex::fillIndexBackground(Locker& locker) {
scanFrom, lastScanned,
numScanned);
}
return res;
}
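fillIndexBackground picks the RocksDB batch flavour by uniqueness: a unique index builds through a `WriteBatchWithIndex`, whose own pending writes remain readable so duplicate keys can be detected, while a non-unique index (whose keys embed the document id and are therefore unique by construction) gets away with a plain, cheaper `WriteBatch`. A minimal sketch of that choice, assuming a working RocksDB installation; the 32 MB reservation mirrors the code above:

    #include <cstddef>
    #include <rocksdb/comparator.h>
    #include <rocksdb/utilities/write_batch_with_index.h>
    #include <rocksdb/write_batch.h>

    void prepareBatch(bool uniqueIndex) {
      constexpr std::size_t kReserved = 32 * 1024 * 1024;
      if (uniqueIndex) {
        // indexed batch: pending writes stay visible for duplicate-key checks
        rocksdb::WriteBatchWithIndex batch(rocksdb::BytewiseComparator(), kReserved);
        (void)batch;  // filled and committed by the real builder
      } else {
        // plain batch: no read-your-own-writes needed
        rocksdb::WriteBatch batch(kReserved);
        (void)batch;
      }
    }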
@@ -20,7 +20,6 @@
/// @author Jan Christoph Uhde
////////////////////////////////////////////////////////////////////////////////
#include "RocksDBCollection.h"
#include "Aql/PlanCache.h"
#include "Basics/ReadLocker.h"
#include "Basics/Result.h"

@@ -36,6 +35,7 @@
#include "Indexes/Index.h"
#include "Indexes/IndexIterator.h"
#include "RestServer/DatabaseFeature.h"
#include "RocksDBCollection.h"
#include "RocksDBEngine/RocksDBBuilderIndex.h"
#include "RocksDBEngine/RocksDBCommon.h"
#include "RocksDBEngine/RocksDBComparator.h"

@@ -269,10 +269,13 @@ void RocksDBCollection::prepareIndexes(arangodb::velocypack::Slice indexesSlice)
WRITE_LOCKER(guard, _indexesLock);
TRI_ASSERT(_indexes.empty());
for (std::shared_ptr<Index>& idx : indexes) {
TRI_ASSERT(idx != nullptr);
auto const id = idx->id();
for (auto const& it : _indexes) {
TRI_ASSERT(it != nullptr);
if (it->id() == id) { // index is there twice
idx.reset();
break;
}
}

@@ -288,8 +291,9 @@ void RocksDBCollection::prepareIndexes(arangodb::velocypack::Slice indexesSlice)
if (_indexes[0]->type() != Index::IndexType::TRI_IDX_TYPE_PRIMARY_INDEX ||
(TRI_COL_TYPE_EDGE == _logicalCollection.type() &&
(_indexes[1]->type() != Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX ||
_indexes[2]->type() != Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX))) {
(_indexes.size() < 3 ||
(_indexes[1]->type() != Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX ||
_indexes[2]->type() != Index::IndexType::TRI_IDX_TYPE_EDGE_INDEX)))) {
std::string msg =
"got invalid indexes for collection '" + _logicalCollection.name() + "'";
LOG_TOPIC(ERR, arangodb::Logger::ENGINES) << msg;
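The added `_indexes.size() < 3` guard keeps the edge-collection sanity check from indexing past the end of a short index vector: the expected layout is `_indexes[0]` primary plus the two edge indexes at positions 1 and 2. A stub-level rendering of the corrected predicate (assuming an edge collection; types invented for illustration):

    #include <vector>

    enum class IdxType { Primary, Edge, Other };  // stub

    bool edgeIndexLayoutBroken(std::vector<IdxType> const& indexes) {
      return indexes.empty() || indexes[0] != IdxType::Primary ||
             indexes.size() < 3 ||  // the new bounds guard
             indexes[1] != IdxType::Edge || indexes[2] != IdxType::Edge;
    }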
@@ -332,10 +336,12 @@ std::shared_ptr<Index> RocksDBCollection::createIndex(VPackSlice const& info,
if ((idx = findIndex(info, _indexes)) != nullptr) {
// We already have this index.
if (idx->type() == arangodb::Index::TRI_IDX_TYPE_TTL_INDEX) {
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER, "there can only be one ttl index per collection");
THROW_ARANGO_EXCEPTION_MESSAGE(
TRI_ERROR_BAD_PARAMETER,
"there can only be one ttl index per collection");
}
created = false;
return idx;
}
}

@@ -358,8 +364,15 @@ std::shared_ptr<Index> RocksDBCollection::createIndex(VPackSlice const& info,
{
READ_LOCKER(guard, _indexesLock);
for (auto const& other : _indexes) { // conflicting index exists
if (other->id() == idx->id()) {
return other; // index already exists
if (other->id() == idx->id() || other->name() == idx->name()) {
// definition shares an identifier with an existing index with a
// different definition
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_ARANGO_DUPLICATE_IDENTIFIER,
"duplicate value for `" +
arangodb::StaticStrings::IndexId +
"` or `" +
arangodb::StaticStrings::IndexName +
"`");
}
}
}
@@ -382,7 +395,7 @@ std::shared_ptr<Index> RocksDBCollection::createIndex(VPackSlice const& info,
VPackBuilder builder;
builder.openObject();
for (auto const& pair : VPackObjectIterator(VPackSlice(value.data()))) {
if (pair.key.isEqualString("indexes")) { // append new index
VPackArrayBuilder arrGuard(&builder, "indexes");
builder.add(VPackArrayIterator(pair.value));
buildIdx->toVelocyPack(builder, Index::makeFlags(Index::Serialize::Internals));

@@ -411,7 +424,7 @@ std::shared_ptr<Index> RocksDBCollection::createIndex(VPackSlice const& info,
}
TRI_ASSERT(res.fail() || locker.isLocked()); // always lock to avoid inconsistencies
locker.lock();
// Step 5. cleanup
if (res.ok()) {
{

@@ -752,7 +765,8 @@ bool RocksDBCollection::lookupRevision(transaction::Methods* trx, VPackSlice con
LocalDocumentId documentId;
revisionId = 0;
// lookup the revision id in the primary index
if (!primaryIndex()->lookupRevision(trx, arangodb::velocypack::StringRef(key), documentId, revisionId)) {
if (!primaryIndex()->lookupRevision(trx, arangodb::velocypack::StringRef(key),
documentId, revisionId)) {
// document not found
TRI_ASSERT(revisionId == 0);
return false;

@@ -769,7 +783,8 @@ bool RocksDBCollection::lookupRevision(transaction::Methods* trx, VPackSlice con
});
}
Result RocksDBCollection::read(transaction::Methods* trx, arangodb::velocypack::StringRef const& key,
Result RocksDBCollection::read(transaction::Methods* trx,
arangodb::velocypack::StringRef const& key,
ManagedDocumentResult& result, bool) {
LocalDocumentId const documentId = primaryIndex()->lookupKey(trx, key);
if (documentId.isSet()) {

@@ -1331,8 +1346,8 @@ Result RocksDBCollection::updateDocument(transaction::Methods* trx,
READ_LOCKER(guard, _indexesLock);
for (std::shared_ptr<Index> const& idx : _indexes) {
RocksDBIndex* rIdx = static_cast<RocksDBIndex*>(idx.get());
res = rIdx->update(*trx, mthds, oldDocumentId, oldDoc, newDocumentId, newDoc,
options.indexOperationMode);
res = rIdx->update(*trx, mthds, oldDocumentId, oldDoc, newDocumentId,
newDoc, options.indexOperationMode);
if (res.fail()) {
break;
@@ -22,7 +22,6 @@
/// @author Michael Hackstein
////////////////////////////////////////////////////////////////////////////////
#include "RocksDBEdgeIndex.h"
#include "Aql/AstNode.h"
#include "Aql/SortCondition.h"
#include "Basics/Exceptions.h"

@@ -32,6 +31,7 @@
#include "Cache/CachedValue.h"
#include "Cache/TransactionalCache.h"
#include "Indexes/SimpleAttributeEqualityMatcher.h"
#include "RocksDBEdgeIndex.h"
#include "RocksDBEngine/RocksDBCollection.h"
#include "RocksDBEngine/RocksDBCommon.h"
#include "RocksDBEngine/RocksDBKey.h"

@@ -83,8 +83,7 @@ void RocksDBEdgeIndexWarmupTask::run() {
namespace arangodb {
class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
public:
RocksDBEdgeIndexLookupIterator(LogicalCollection* collection,
transaction::Methods* trx,
RocksDBEdgeIndexLookupIterator(LogicalCollection* collection, transaction::Methods* trx,
arangodb::RocksDBEdgeIndex const* index,
std::unique_ptr<VPackBuilder> keys,
std::shared_ptr<cache::Cache> cache)

@@ -114,9 +113,9 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
}
char const* typeName() const override { return "edge-index-iterator"; }
bool hasExtra() const override { return true; }
/// @brief we provide a method to provide the index attribute values
/// while scanning the index
bool hasCovering() const override { return true; }

@@ -441,25 +440,25 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
resetInplaceMemory();
_keysIterator.reset();
_lastKey = VPackSlice::nullSlice();
_builderIterator = VPackArrayIterator(arangodb::velocypack::Slice::emptyArraySlice());
_builderIterator =
VPackArrayIterator(arangodb::velocypack::Slice::emptyArraySlice());
}
/// @brief index supports rearming
bool canRearm() const override { return true; }
/// @brief rearm the index iterator
bool rearm(arangodb::aql::AstNode const* node,
arangodb::aql::Variable const* variable,
bool rearm(arangodb::aql::AstNode const* node, arangodb::aql::Variable const* variable,
IndexIteratorOptions const& opts) override {
TRI_ASSERT(!_index->isSorted() || opts.sorted);
TRI_ASSERT(node != nullptr);
TRI_ASSERT(node->type == aql::NODE_TYPE_OPERATOR_NARY_AND);
TRI_ASSERT(node->numMembers() == 1);
AttributeAccessParts aap(node->getMember(0), variable);
TRI_ASSERT(aap.attribute->stringEquals(_index->_directionAttr));
_keys->clear();
if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_EQ) {

@@ -468,10 +467,9 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
_keysIterator = VPackArrayIterator(_keys->slice());
reset();
return true;
}
if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_IN &&
aap.value->isArray()) {
}
if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_IN && aap.value->isArray()) {
// a.b IN values
_index->fillInLookupValues(_trx, *(_keys.get()), aap.value);
_keysIterator = VPackArrayIterator(_keys->slice());
@@ -519,11 +517,13 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
auto end = _bounds.end();
while (_iterator->Valid() && (cmp->Compare(_iterator->key(), end) < 0)) {
LocalDocumentId const documentId =
RocksDBKey::indexDocumentId(RocksDBEntryType::EdgeIndexValue, _iterator->key());
RocksDBKey::indexDocumentId(RocksDBEntryType::EdgeIndexValue,
_iterator->key());
// adding revision ID and _from or _to value
_builder.add(VPackValue(documentId.id()));
arangodb::velocypack::StringRef vertexId = RocksDBValue::vertexId(_iterator->value());
arangodb::velocypack::StringRef vertexId =
RocksDBValue::vertexId(_iterator->value());
_builder.add(VPackValuePair(vertexId.data(), vertexId.size(), VPackValueType::String));
_iterator->Next();

@@ -574,7 +574,7 @@ class RocksDBEdgeIndexLookupIterator final : public IndexIterator {
arangodb::velocypack::Slice _lastKey;
};
} // namespace
} // namespace arangodb
// ============================= Index ====================================

@@ -590,6 +590,8 @@ RocksDBEdgeIndex::RocksDBEdgeIndex(TRI_idx_iid_t iid, arangodb::LogicalCollectio
arangodb::velocypack::Slice const& info,
std::string const& attr)
: RocksDBIndex(iid, collection,
((attr == StaticStrings::FromString) ? StaticStrings::IndexNameEdgeFrom
: StaticStrings::IndexNameEdgeTo),
std::vector<std::vector<AttributeName>>({{AttributeName(attr, false)}}),
false, false, RocksDBColumnFamily::edge(),
basics::VelocyPackHelper::stringUInt64(info, "objectId"),
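In the RocksDB engine each direction of the edge index registers under its own name, chosen by a plain ternary on the indexed attribute (`_from` versus `_to`). A stub-level equivalent of that selection; the literal name strings here are assumed placeholders, the real values live behind `StaticStrings::IndexNameEdgeFrom`/`IndexNameEdgeTo`:

    #include <string>

    std::string edgeIndexName(std::string const& attr) {
      // hypothetical literals standing in for the StaticStrings constants
      return attr == "_from" ? std::string("edge_from") : std::string("edge_to");
    }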
@@ -645,8 +647,7 @@ void RocksDBEdgeIndex::toVelocyPack(VPackBuilder& builder,
Result RocksDBEdgeIndex::insert(transaction::Methods& trx, RocksDBMethods* mthd,
LocalDocumentId const& documentId,
velocypack::Slice const& doc,
Index::OperationMode mode) {
velocypack::Slice const& doc, Index::OperationMode mode) {
Result res;
VPackSlice fromTo = doc.get(_directionAttr);
TRI_ASSERT(fromTo.isString());

@@ -660,7 +661,8 @@ Result RocksDBEdgeIndex::insert(transaction::Methods& trx, RocksDBMethods* mthd,
? transaction::helpers::extractToFromDocument(doc)
: transaction::helpers::extractFromFromDocument(doc);
TRI_ASSERT(toFrom.isString());
RocksDBValue value = RocksDBValue::EdgeIndexValue(arangodb::velocypack::StringRef(toFrom));
RocksDBValue value =
RocksDBValue::EdgeIndexValue(arangodb::velocypack::StringRef(toFrom));
// blacklist key in cache
blackListKey(fromToRef);

@@ -682,8 +684,7 @@ Result RocksDBEdgeIndex::insert(transaction::Methods& trx, RocksDBMethods* mthd,
Result RocksDBEdgeIndex::remove(transaction::Methods& trx, RocksDBMethods* mthd,
LocalDocumentId const& documentId,
velocypack::Slice const& doc,
Index::OperationMode mode) {
velocypack::Slice const& doc, Index::OperationMode mode) {
Result res;
// VPackSlice primaryKey = doc.get(StaticStrings::KeyString);

@@ -696,7 +697,8 @@ Result RocksDBEdgeIndex::remove(transaction::Methods& trx, RocksDBMethods* mthd,
? transaction::helpers::extractToFromDocument(doc)
: transaction::helpers::extractFromFromDocument(doc);
TRI_ASSERT(toFrom.isString());
RocksDBValue value = RocksDBValue::EdgeIndexValue(arangodb::velocypack::StringRef(toFrom));
RocksDBValue value =
RocksDBValue::EdgeIndexValue(arangodb::velocypack::StringRef(toFrom));
// blacklist key in cache
blackListKey(fromToRef);

@@ -728,7 +730,7 @@ IndexIterator* RocksDBEdgeIndex::iteratorForCondition(
transaction::Methods* trx, arangodb::aql::AstNode const* node,
arangodb::aql::Variable const* reference, IndexIteratorOptions const& opts) {
TRI_ASSERT(!isSorted() || opts.sorted);
TRI_ASSERT(node != nullptr);
TRI_ASSERT(node->type == aql::NODE_TYPE_OPERATOR_NARY_AND);
TRI_ASSERT(node->numMembers() == 1);

@@ -1041,7 +1043,7 @@ IndexIterator* RocksDBEdgeIndex::createInIterator(transaction::Methods* trx,
fillInLookupValues(trx, *(keys.get()), valNode);
return new RocksDBEdgeIndexLookupIterator(&_collection, trx, this, std::move(keys), _cache);
}
void RocksDBEdgeIndex::fillLookupValue(VPackBuilder& keys,
arangodb::aql::AstNode const* value) const {
keys.openArray(true);

@@ -1053,8 +1055,7 @@ void RocksDBEdgeIndex::fillLookupValue(VPackBuilder& keys,
keys.close();
}
void RocksDBEdgeIndex::fillInLookupValues(transaction::Methods* trx,
VPackBuilder& keys,
void RocksDBEdgeIndex::fillInLookupValues(transaction::Methods* trx, VPackBuilder& keys,
arangodb::aql::AstNode const* values) const {
TRI_ASSERT(values != nullptr);
TRI_ASSERT(values->type == arangodb::aql::NODE_TYPE_ARRAY);
@@ -21,7 +21,6 @@
/// @author Jan Steemann
////////////////////////////////////////////////////////////////////////////////
#include "RocksDBIndex.h"
#include "Basics/VelocyPackHelper.h"
#include "Cache/CacheManagerFeature.h"
#include "Cache/Common.h"

@@ -33,6 +32,7 @@
#include "RocksDBEngine/RocksDBComparator.h"
#include "RocksDBEngine/RocksDBMethods.h"
#include "RocksDBEngine/RocksDBTransactionState.h"
#include "RocksDBIndex.h"
#include "StorageEngine/EngineSelectorFeature.h"
#include "VocBase/LogicalCollection.h"
#include "VocBase/ticks.h"

@@ -59,10 +59,11 @@ inline uint64_t ensureObjectId(uint64_t oid) {
} // namespace
RocksDBIndex::RocksDBIndex(TRI_idx_iid_t id, LogicalCollection& collection,
std::string const& name,
std::vector<std::vector<arangodb::basics::AttributeName>> const& attributes,
bool unique, bool sparse, rocksdb::ColumnFamilyHandle* cf,
uint64_t objectId, bool useCache)
: Index(id, collection, attributes, unique, sparse),
: Index(id, collection, name, attributes, unique, sparse),
_objectId(::ensureObjectId(objectId)),
_cf(cf),
_cache(nullptr),

@@ -241,10 +242,8 @@ void RocksDBIndex::afterTruncate(TRI_voc_tick_t) {
Result RocksDBIndex::update(transaction::Methods& trx, RocksDBMethods* mthd,
LocalDocumentId const& oldDocumentId,
velocypack::Slice const& oldDoc,
LocalDocumentId const& newDocumentId,
velocypack::Slice const& newDoc,
Index::OperationMode mode) {
velocypack::Slice const& oldDoc, LocalDocumentId const& newDocumentId,
velocypack::Slice const& newDoc, Index::OperationMode mode) {
// It is illegal to call this method on the primary index
// RocksDBPrimaryIndex must override this method accordingly
TRI_ASSERT(type() != TRI_IDX_TYPE_PRIMARY_INDEX);
@@ -124,11 +124,11 @@ class RocksDBIndex : public Index {
virtual RocksDBCuckooIndexEstimator<uint64_t>* estimator() { return nullptr; }
virtual void setEstimator(std::unique_ptr<RocksDBCuckooIndexEstimator<uint64_t>>) {}
virtual void recalculateEstimates() {}
virtual bool isPersistent() const override { return true; }
protected:
RocksDBIndex(TRI_idx_iid_t id, LogicalCollection& collection,
RocksDBIndex(TRI_idx_iid_t id, LogicalCollection& collection, std::string const& name,
std::vector<std::vector<arangodb::basics::AttributeName>> const& attributes,
bool unique, bool sparse, rocksdb::ColumnFamilyHandle* cf,
uint64_t objectId, bool useCache);

@@ -139,7 +139,9 @@ class RocksDBIndex : public Index {
inline bool useCache() const { return (_cacheEnabled && _cachePresent); }
void blackListKey(char const* data, std::size_t len);
void blackListKey(arangodb::velocypack::StringRef& ref) { blackListKey(ref.data(), ref.size()); };
void blackListKey(arangodb::velocypack::StringRef& ref) {
blackListKey(ref.data(), ref.size());
};
protected:
uint64_t _objectId;
@@ -21,7 +21,6 @@
/// @author Michael Hackstein
////////////////////////////////////////////////////////////////////////////////
#include "RocksDBIndexFactory.h"
#include "Basics/StaticStrings.h"
#include "Basics/StringUtils.h"
#include "Basics/VelocyPackHelper.h"

@@ -36,6 +35,7 @@
#include "RocksDBEngine/RocksDBPrimaryIndex.h"
#include "RocksDBEngine/RocksDBSkiplistIndex.h"
#include "RocksDBEngine/RocksDBTtlIndex.h"
#include "RocksDBIndexFactory.h"
#include "VocBase/LogicalCollection.h"
#include "VocBase/ticks.h"
#include "VocBase/voc-types.h"

@@ -57,7 +57,7 @@ struct DefaultIndexFactory : public arangodb::IndexTypeFactory {
arangodb::Index::IndexType const _type;
explicit DefaultIndexFactory(arangodb::Index::IndexType type) : _type(type) {}
bool equal(arangodb::velocypack::Slice const& lhs,
arangodb::velocypack::Slice const& rhs) const override {
return arangodb::IndexTypeFactory::equal(_type, lhs, rhs, true);

@@ -71,8 +71,7 @@ struct EdgeIndexFactory : public DefaultIndexFactory {
arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& definition,
TRI_idx_iid_t id,
bool isClusterConstructor) const override {
TRI_idx_iid_t id, bool isClusterConstructor) const override {
if (!isClusterConstructor) {
// these indexes cannot be created directly
return arangodb::Result(TRI_ERROR_INTERNAL, "cannot create edge index");

@@ -115,8 +114,7 @@ struct FulltextIndexFactory : public DefaultIndexFactory {
arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& definition,
TRI_idx_iid_t id,
bool isClusterConstructor) const override {
TRI_idx_iid_t id, bool isClusterConstructor) const override {
index = std::make_shared<arangodb::RocksDBFulltextIndex>(id, collection, definition);
return arangodb::Result();

@@ -150,8 +148,7 @@ struct GeoIndexFactory : public DefaultIndexFactory {
arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& definition,
TRI_idx_iid_t id,
bool isClusterConstructor) const override {
TRI_idx_iid_t id, bool isClusterConstructor) const override {
index = std::make_shared<arangodb::RocksDBGeoIndex>(id, collection,
definition, "geo");

@@ -185,8 +182,7 @@ struct Geo1IndexFactory : public DefaultIndexFactory {
arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& definition,
TRI_idx_iid_t id,
bool isClusterConstructor) const override {
TRI_idx_iid_t id, bool isClusterConstructor) const override {
index = std::make_shared<arangodb::RocksDBGeoIndex>(id, collection,
definition, "geo1");

@@ -221,8 +217,7 @@ struct Geo2IndexFactory : public DefaultIndexFactory {
arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& definition,
TRI_idx_iid_t id,
bool isClusterConstructor) const override {
TRI_idx_iid_t id, bool isClusterConstructor) const override {
index = std::make_shared<arangodb::RocksDBGeoIndex>(id, collection,
definition, "geo2");

@@ -257,8 +252,7 @@ struct SecondaryIndexFactory : public DefaultIndexFactory {
arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& definition,
TRI_idx_iid_t id,
bool isClusterConstructor) const override {
TRI_idx_iid_t id, bool isClusterConstructor) const override {
index = std::make_shared<F>(id, collection, definition);
return arangodb::Result();
}
@@ -284,13 +278,13 @@ struct SecondaryIndexFactory : public DefaultIndexFactory {
};
struct TtlIndexFactory : public DefaultIndexFactory {
explicit TtlIndexFactory(arangodb::Index::IndexType type) : DefaultIndexFactory(type) {}
explicit TtlIndexFactory(arangodb::Index::IndexType type)
: DefaultIndexFactory(type) {}
arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& definition,
TRI_idx_iid_t id,
bool isClusterConstructor) const override {
TRI_idx_iid_t id, bool isClusterConstructor) const override {
index = std::make_shared<RocksDBTtlIndex>(id, collection, definition);
return arangodb::Result();
}

@@ -322,8 +316,7 @@ struct PrimaryIndexFactory : public DefaultIndexFactory {
arangodb::Result instantiate(std::shared_ptr<arangodb::Index>& index,
arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& definition,
TRI_idx_iid_t id,
bool isClusterConstructor) const override {
TRI_idx_iid_t id, bool isClusterConstructor) const override {
if (!isClusterConstructor) {
// these indexes cannot be created directly
return arangodb::Result(TRI_ERROR_INTERNAL,

@@ -381,13 +374,13 @@ RocksDBIndexFactory::RocksDBIndexFactory() {
emplace("skiplist", skiplistIndexFactory);
emplace("ttl", ttlIndexFactory);
}
/// @brief index name aliases (e.g. "persistent" => "hash", "skiplist" => "hash")
/// used to display storage engine capabilities
/// @brief index name aliases (e.g. "persistent" => "hash", "skiplist" =>
/// "hash") used to display storage engine capabilities
std::unordered_map<std::string, std::string> RocksDBIndexFactory::indexAliases() const {
return std::unordered_map<std::string, std::string>{
{ "skiplist", "hash" },
{ "persistent", "hash" },
{"skiplist", "hash"},
{"persistent", "hash"},
};
}
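`indexAliases()` is how the engine advertises that, under RocksDB, `skiplist` and `persistent` are mere aliases of `hash`: a requested type is looked up in the map and falls back to itself when absent. A standalone illustration of resolving through such a map:

    #include <iostream>
    #include <string>
    #include <unordered_map>

    int main() {
      std::unordered_map<std::string, std::string> const aliases{
          {"skiplist", "hash"}, {"persistent", "hash"}};
      for (std::string const requested : {"skiplist", "persistent", "geo"}) {
        auto it = aliases.find(requested);
        std::cout << requested << " -> "
                  << (it != aliases.end() ? it->second : requested) << "\n";
      }
    }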
@@ -21,7 +21,6 @@
/// @author Jan Steemann
////////////////////////////////////////////////////////////////////////////////
#include "RocksDBPrimaryIndex.h"
#include "Aql/Ast.h"
#include "Aql/AstNode.h"
#include "Basics/Exceptions.h"

@@ -42,6 +41,7 @@
#include "RocksDBEngine/RocksDBTransactionState.h"
#include "RocksDBEngine/RocksDBTypes.h"
#include "RocksDBEngine/RocksDBValue.h"
#include "RocksDBPrimaryIndex.h"
#include "StorageEngine/EngineSelectorFeature.h"
#include "Transaction/Context.h"
#include "Transaction/Helpers.h"

@@ -65,9 +65,9 @@
using namespace arangodb;
namespace {
std::string const lowest; // smallest possible key
std::string const highest = "\xFF"; // greatest possible key
}
} // namespace
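The pair of sentinels relies on byte-wise string comparison: the empty string orders before every key, and a single 0xFF byte orders after every plausible `_key` value (which is ASCII). A tiny self-check of that assumption:

    #include <cassert>
    #include <string>

    int main() {
      std::string const lowest;            // "" sorts before any key
      std::string const highest = "\xFF";  // 0xFF sorts after any ASCII key
      std::string const someKey = "abc123";
      assert(lowest < someKey && someKey < highest);
    }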
// ================ Primary Index Iterators ================

@@ -75,9 +75,10 @@ namespace arangodb {
class RocksDBPrimaryIndexEqIterator final : public IndexIterator {
public:
RocksDBPrimaryIndexEqIterator(
LogicalCollection* collection, transaction::Methods* trx, RocksDBPrimaryIndex* index,
std::unique_ptr<VPackBuilder> key, bool allowCoveringIndexOptimization)
RocksDBPrimaryIndexEqIterator(LogicalCollection* collection,
transaction::Methods* trx, RocksDBPrimaryIndex* index,
std::unique_ptr<VPackBuilder> key,
bool allowCoveringIndexOptimization)
: IndexIterator(collection, trx),
_index(index),
_key(std::move(key)),

@@ -94,13 +95,12 @@ class RocksDBPrimaryIndexEqIterator final : public IndexIterator {
}
char const* typeName() const override { return "primary-index-eq-iterator"; }
/// @brief index supports rearming
bool canRearm() const override { return true; }
/// @brief rearm the index iterator
bool rearm(arangodb::aql::AstNode const* node,
arangodb::aql::Variable const* variable,
bool rearm(arangodb::aql::AstNode const* node, arangodb::aql::Variable const* variable,
IndexIteratorOptions const& opts) override {
TRI_ASSERT(node != nullptr);
TRI_ASSERT(node->type == aql::NODE_TYPE_OPERATOR_NARY_AND);

@@ -128,7 +128,8 @@ class RocksDBPrimaryIndexEqIterator final : public IndexIterator {
}
_done = true;
LocalDocumentId documentId = _index->lookupKey(_trx, arangodb::velocypack::StringRef(_key->slice()));
LocalDocumentId documentId =
_index->lookupKey(_trx, arangodb::velocypack::StringRef(_key->slice()));
if (documentId.isSet()) {
cb(documentId);
}

@@ -146,7 +147,8 @@ class RocksDBPrimaryIndexEqIterator final : public IndexIterator {
}
_done = true;
LocalDocumentId documentId = _index->lookupKey(_trx, arangodb::velocypack::StringRef(_key->slice()));
LocalDocumentId documentId =
_index->lookupKey(_trx, arangodb::velocypack::StringRef(_key->slice()));
if (documentId.isSet()) {
cb(documentId, _key->slice());
}

@@ -188,13 +190,12 @@ class RocksDBPrimaryIndexInIterator final : public IndexIterator {
}
char const* typeName() const override { return "primary-index-in-iterator"; }
/// @brief index supports rearming
bool canRearm() const override { return true; }
/// @brief rearm the index iterator
bool rearm(arangodb::aql::AstNode const* node,
arangodb::aql::Variable const* variable,
bool rearm(arangodb::aql::AstNode const* node, arangodb::aql::Variable const* variable,
IndexIteratorOptions const& opts) override {
TRI_ASSERT(node != nullptr);
TRI_ASSERT(node->type == aql::NODE_TYPE_OPERATOR_NARY_AND);

@@ -203,7 +204,8 @@ class RocksDBPrimaryIndexInIterator final : public IndexIterator {
TRI_ASSERT(aap.opType == arangodb::aql::NODE_TYPE_OPERATOR_BINARY_IN);
if (aap.value->isArray()) {
_index->fillInLookupValues(_trx, *(_keys.get()), aap.value, opts.ascending, !_allowCoveringIndexOptimization);
_index->fillInLookupValues(_trx, *(_keys.get()), aap.value, opts.ascending,
!_allowCoveringIndexOptimization);
return true;
}

@@ -219,7 +221,8 @@ class RocksDBPrimaryIndexInIterator final : public IndexIterator {
}
while (limit > 0) {
LocalDocumentId documentId = _index->lookupKey(_trx, arangodb::velocypack::StringRef(*_iterator));
LocalDocumentId documentId =
_index->lookupKey(_trx, arangodb::velocypack::StringRef(*_iterator));
if (documentId.isSet()) {
cb(documentId);
--limit;

@@ -243,7 +246,8 @@ class RocksDBPrimaryIndexInIterator final : public IndexIterator {
}
while (limit > 0) {
LocalDocumentId documentId = _index->lookupKey(_trx, arangodb::velocypack::StringRef(*_iterator));
LocalDocumentId documentId =
_index->lookupKey(_trx, arangodb::velocypack::StringRef(*_iterator));
if (documentId.isSet()) {
cb(documentId, *_iterator);
--limit;

@@ -307,7 +311,9 @@ class RocksDBPrimaryIndexRangeIterator final : public IndexIterator {
}
public:
char const* typeName() const override { return "rocksdb-range-index-iterator"; }
char const* typeName() const override {
return "rocksdb-range-index-iterator";
}
/// @brief Get the next limit many elements in the index
bool next(LocalDocumentIdCallback const& cb, size_t limit) override {

@@ -398,13 +404,13 @@ class RocksDBPrimaryIndexRangeIterator final : public IndexIterator {
rocksdb::Slice _rangeBound;
};
} // namespace
} // namespace arangodb
// ================ PrimaryIndex ================
RocksDBPrimaryIndex::RocksDBPrimaryIndex(arangodb::LogicalCollection& collection,
arangodb::velocypack::Slice const& info)
: RocksDBIndex(0, collection,
: RocksDBIndex(0, collection, StaticStrings::IndexNamePrimary,
std::vector<std::vector<arangodb::basics::AttributeName>>(
{{arangodb::basics::AttributeName(StaticStrings::KeyString, false)}}),
true, false, RocksDBColumnFamily::primary(),

@@ -502,7 +508,8 @@ LocalDocumentId RocksDBPrimaryIndex::lookupKey(transaction::Methods* trx,
/// the case for older collections
/// in this case the caller must fetch the revision id from the actual
/// document
bool RocksDBPrimaryIndex::lookupRevision(transaction::Methods* trx, arangodb::velocypack::StringRef keyRef,
bool RocksDBPrimaryIndex::lookupRevision(transaction::Methods* trx,
arangodb::velocypack::StringRef keyRef,
LocalDocumentId& documentId,
TRI_voc_rid_t& revisionId) const {
documentId.clear();

@@ -628,8 +635,8 @@ bool RocksDBPrimaryIndex::supportsFilterCondition(
std::size_t values = 0;
SortedIndexAttributeMatcher::matchAttributes(this, node, reference, found,
values, nonNullAttributes,
/*skip evaluation (during execution)*/ false);
estimatedItems = values;
return !found.empty();
}

@@ -639,8 +646,8 @@ bool RocksDBPrimaryIndex::supportsSortCondition(arangodb::aql::SortCondition con
size_t itemsInIndex, double& estimatedCost,
size_t& coveredAttributes) const {
return SortedIndexAttributeMatcher::supportsSortCondition(this, sortCondition, reference,
itemsInIndex, estimatedCost,
coveredAttributes);
}
/// @brief creates an IndexIterator for the given Condition

@@ -667,17 +674,15 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(
if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_EQ) {
// a.b == value
return createEqIterator(trx, aap.attribute, aap.value);
}
if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_IN &&
aap.value->isArray()) {
}
if (aap.opType == aql::NODE_TYPE_OPERATOR_BINARY_IN && aap.value->isArray()) {
// a.b IN array
return createInIterator(trx, aap.attribute, aap.value, opts.ascending);
}
// fall-through intentional here
}
auto removeCollectionFromString =
[this, &trx](bool isId, std::string& value) -> int {
auto removeCollectionFromString = [this, &trx](bool isId, std::string& value) -> int {
if (isId) {
char const* key = nullptr;
size_t outLength = 0;

@@ -723,28 +728,30 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(
auto type = aap.opType;
if (!(type == aql::NODE_TYPE_OPERATOR_BINARY_LE ||
type == aql::NODE_TYPE_OPERATOR_BINARY_LT || type == aql::NODE_TYPE_OPERATOR_BINARY_GE ||
type == aql::NODE_TYPE_OPERATOR_BINARY_GT ||
type == aql::NODE_TYPE_OPERATOR_BINARY_EQ
)) {
if (!(type == aql::NODE_TYPE_OPERATOR_BINARY_LE || type == aql::NODE_TYPE_OPERATOR_BINARY_LT ||
type == aql::NODE_TYPE_OPERATOR_BINARY_GE || type == aql::NODE_TYPE_OPERATOR_BINARY_GT ||
type == aql::NODE_TYPE_OPERATOR_BINARY_EQ)) {
return new EmptyIndexIterator(&_collection, trx);
}
TRI_ASSERT(aap.attribute->type == aql::NODE_TYPE_ATTRIBUTE_ACCESS);
bool const isId = (aap.attribute->stringEquals(StaticStrings::IdString));
std::string value; // empty string == lower bound
if (aap.value->isStringValue()) {
value = aap.value->getString();
} else if (aap.value->isObject() || aap.value->isArray()) {
// any array or object value is bigger than any potential key
value = ::highest;
} else if (aap.value->isNullValue() || aap.value->isBoolValue() || aap.value->isIntValue()) {
} else if (aap.value->isNullValue() || aap.value->isBoolValue() ||
aap.value->isIntValue()) {
// any null, bool or numeric value is lower than any potential key
// keep lower bound
} else {
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL, std::string("unhandled type for valNode: ") + aap.value->getTypeString());
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL,
std::string(
"unhandled type for valNode: ") +
aap.value->getTypeString());
}
// strip collection name prefix from comparison value

@@ -763,7 +770,8 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(
lower = std::move(value);
lowerFound = true;
}
} else if (type == aql::NODE_TYPE_OPERATOR_BINARY_LE || type == aql::NODE_TYPE_OPERATOR_BINARY_LT) {
} else if (type == aql::NODE_TYPE_OPERATOR_BINARY_LE ||
type == aql::NODE_TYPE_OPERATOR_BINARY_LT) {
// a.b < value
if (cmpResult > 0) {
// doc._id < collection with "bigger" name

@@ -780,7 +788,8 @@ IndexIterator* RocksDBPrimaryIndex::iteratorForCondition(
}
}
upperFound = true;
} else if (type == aql::NODE_TYPE_OPERATOR_BINARY_GE || type == aql::NODE_TYPE_OPERATOR_BINARY_GT) {
} else if (type == aql::NODE_TYPE_OPERATOR_BINARY_GE ||
type == aql::NODE_TYPE_OPERATOR_BINARY_GT) {
// a.b > value
if (cmpResult < 0) {
// doc._id > collection with "smaller" name

@@ -868,11 +877,9 @@ IndexIterator* RocksDBPrimaryIndex::createEqIterator(transaction::Methods* trx,
return new EmptyIndexIterator(&_collection, trx);
}
void RocksDBPrimaryIndex::fillInLookupValues(transaction::Methods* trx,
VPackBuilder& keys,
void RocksDBPrimaryIndex::fillInLookupValues(transaction::Methods* trx, VPackBuilder& keys,
arangodb::aql::AstNode const* values,
bool ascending,
bool isId) const {
bool ascending, bool isId) const {
TRI_ASSERT(values != nullptr);
TRI_ASSERT(values->type == arangodb::aql::NODE_TYPE_ARRAY);
@@ -157,6 +157,16 @@ std::shared_ptr<Index> PhysicalCollection::lookupIndex(TRI_idx_iid_t idxId) cons
return nullptr;
}
std::shared_ptr<Index> PhysicalCollection::lookupIndex(std::string const& idxName) const {
READ_LOCKER(guard, _indexesLock);
for (auto const& idx : _indexes) {
if (idx->name() == idxName) {
return idx;
}
}
return nullptr;
}
TRI_voc_rid_t PhysicalCollection::newRevisionId() const {
return TRI_HybridLogicalClock();
}
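The new name overload of `lookupIndex` mirrors the id-based one: a linear scan over `_indexes` while holding a read lock. A self-contained approximation with standard-library types (C++17, invented stubs):

    #include <memory>
    #include <shared_mutex>
    #include <string>
    #include <vector>

    struct IndexStub {  // stub for arangodb::Index
      std::string name;
    };

    class CollectionStub {
     public:
      std::shared_ptr<IndexStub> lookupIndex(std::string const& idxName) const {
        std::shared_lock<std::shared_mutex> guard(_indexesLock);  // READ_LOCKER analogue
        for (auto const& idx : _indexes) {
          if (idx->name == idxName) {
            return idx;
          }
        }
        return nullptr;  // unknown name
      }

     private:
      mutable std::shared_mutex _indexesLock;
      std::vector<std::shared_ptr<IndexStub>> _indexes;
    };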
@@ -107,6 +107,11 @@ class PhysicalCollection {
/// @brief Find index by iid
std::shared_ptr<Index> lookupIndex(TRI_idx_iid_t) const;
/// @brief Find index by name
std::shared_ptr<Index> lookupIndex(std::string const&) const;
/// @brief get list of all indices
std::vector<std::shared_ptr<Index>> getIndexes() const;
void getIndexesVPack(velocypack::Builder&, unsigned flags,
@@ -190,6 +190,7 @@ LogicalCollection::LogicalCollection(TRI_vocbase_t& vocbase, VPackSlice const& i
TRI_ASSERT(_physical != nullptr);
// This has to be called AFTER _physical and _logical are properly linked
// together.
prepareIndexes(info.get("indexes"));
}

@@ -791,6 +792,10 @@ std::shared_ptr<Index> LogicalCollection::lookupIndex(TRI_idx_iid_t idxId) const
return getPhysical()->lookupIndex(idxId);
}
std::shared_ptr<Index> LogicalCollection::lookupIndex(std::string const& idxName) const {
return getPhysical()->lookupIndex(idxName);
}
std::shared_ptr<Index> LogicalCollection::lookupIndex(VPackSlice const& info) const {
if (!info.isObject()) {
// Compatibility with old v8-vocindex.

@@ -853,7 +858,8 @@ void LogicalCollection::deferDropCollection(std::function<bool(LogicalCollection
}
/// @brief reads an element from the document collection
Result LogicalCollection::read(transaction::Methods* trx, arangodb::velocypack::StringRef const& key,
Result LogicalCollection::read(transaction::Methods* trx,
arangodb::velocypack::StringRef const& key,
ManagedDocumentResult& result, bool lock) {
TRI_IF_FAILURE("LogicalCollection::read") { return Result(TRI_ERROR_DEBUG); }
return getPhysical()->read(trx, key, result, lock);
@@ -269,6 +269,9 @@ class LogicalCollection : public LogicalDataSource {
/// @brief Find index by iid
std::shared_ptr<Index> lookupIndex(TRI_idx_iid_t) const;
/// @brief Find index by name
std::shared_ptr<Index> lookupIndex(std::string const&) const;
bool dropIndex(TRI_idx_iid_t iid);
// SECTION: Index access (local only)
@@ -20,8 +20,8 @@
/// @author Simon Grätzer
////////////////////////////////////////////////////////////////////////////////
#include "Collections.h"
#include "Basics/Common.h"
#include "Collections.h"
#include "Aql/Query.h"
#include "Aql/QueryRegistry.h"
@@ -20,7 +20,6 @@
/// @author Simon Grätzer
////////////////////////////////////////////////////////////////////////////////
#include "Indexes.h"
#include "Basics/Common.h"
#include "Basics/ReadLocker.h"
#include "Basics/StringUtils.h"

@@ -32,6 +31,7 @@
#include "Cluster/ClusterMethods.h"
#include "Cluster/ServerState.h"
#include "GeneralServer/AuthenticationFeature.h"
#include "Indexes.h"
#include "Indexes/Index.h"
#include "Indexes/IndexFactory.h"
#include "Rest/HttpRequest.h"

@@ -62,20 +62,25 @@ using namespace arangodb::methods;
Result Indexes::getIndex(LogicalCollection const* collection,
VPackSlice const& indexId, VPackBuilder& out) {
// do some magic to parse the iid
std::string name;
VPackSlice id = indexId;
if (id.isObject() && id.hasKey(StaticStrings::IndexId)) {
id = id.get(StaticStrings::IndexId);
std::string id; // will (eventually) be fully-qualified; "collection/identifier"
std::string name; // will be just name or id (no "collection/")
VPackSlice idSlice = indexId;
if (idSlice.isObject() && idSlice.hasKey(StaticStrings::IndexId)) {
idSlice = idSlice.get(StaticStrings::IndexId);
}
if (id.isString()) {
std::regex re = std::regex("^([a-zA-Z0-9\\-_]+)\\/([0-9]+)$", std::regex::ECMAScript);
if (std::regex_match(id.copyString(), re)) {
name = id.copyString();
if (idSlice.isString()) {
std::regex re = std::regex("^([a-zA-Z0-9\\-_]+)\\/([a-zA-Z0-9\\-_]+)$",
std::regex::ECMAScript);
if (std::regex_match(idSlice.copyString(), re)) {
id = idSlice.copyString();
name = id.substr(id.find_first_of("/") + 1);
} else {
name = collection->name() + "/" + id.copyString();
name = idSlice.copyString();
id = collection->name() + "/" + name;
}
} else if (id.isInteger()) {
name = collection->name() + "/" + StringUtils::itoa(id.getUInt());
} else if (idSlice.isInteger()) {
name = StringUtils::itoa(idSlice.getUInt());
id = collection->name() + "/" + name;
} else {
return Result(TRI_ERROR_ARANGO_INDEX_NOT_FOUND);
}
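The loosened regex now accepts `collection/<name>` handles in addition to the old purely numeric `collection/<id>` form, and the split result feeds both the fully-qualified `id` and the bare `name`. A standalone probe of the same pattern (illustrative handle values):

    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
      std::regex const re("^([a-zA-Z0-9\\-_]+)\\/([a-zA-Z0-9\\-_]+)$",
                          std::regex::ECMAScript);
      for (std::string handle : {"users/123456", "users/byEmail", "byEmail"}) {
        std::smatch m;
        if (std::regex_match(handle, m, re)) {
          std::cout << handle << " -> collection=" << m[1]
                    << " index=" << m[2] << "\n";
        } else {
          std::cout << handle << " -> bare name/id, prefixed later\n";
        }
      }
    }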
@ -84,7 +89,8 @@ Result Indexes::getIndex(LogicalCollection const* collection,
|
|||
Result res = Indexes::getAll(collection, Index::makeFlags(), /*withHidden*/ true, tmp);
|
||||
if (res.ok()) {
|
||||
for (VPackSlice const& index : VPackArrayIterator(tmp.slice())) {
|
||||
if (index.get(StaticStrings::IndexId).compareString(name) == 0) {
|
||||
if (index.get(StaticStrings::IndexId).compareString(id) == 0 ||
|
||||
index.get(StaticStrings::IndexName).compareString(name) == 0) {
|
||||
out.add(index);
|
||||
return Result();
|
||||
}
|
||||
|
@ -259,11 +265,11 @@ static Result EnsureIndexLocal(arangodb::LogicalCollection* collection,
|
|||
VPackSlice const& definition, bool create,
|
||||
VPackBuilder& output) {
|
||||
TRI_ASSERT(collection != nullptr);
|
||||
|
||||
|
||||
Result res;
|
||||
bool created = false;
|
||||
std::shared_ptr<arangodb::Index> idx;
|
||||
|
||||
|
||||
READ_LOCKER(readLocker, collection->vocbase()._inventoryLock);
|
||||
|
||||
if (create) {
|
||||
|
@ -309,10 +315,11 @@ Result Indexes::ensureIndexCoordinator(arangodb::LogicalCollection const* collec
|
|||
TRI_ASSERT(collection != nullptr);
|
||||
auto& dbName = collection->vocbase().name();
|
||||
auto cid = std::to_string(collection->id());
|
||||
auto cluster = application_features::ApplicationServer::getFeature<ClusterFeature>("Cluster");
|
||||
auto cluster = application_features::ApplicationServer::getFeature<ClusterFeature>(
|
||||
"Cluster");
|
||||
|
||||
return ClusterInfo::instance()->ensureIndexCoordinator( // create index
|
||||
dbName, cid, indexDef, create, resultBuilder, cluster->indexCreationTimeout() // args
|
||||
return ClusterInfo::instance()->ensureIndexCoordinator( // create index
|
||||
dbName, cid, indexDef, create, resultBuilder, cluster->indexCreationTimeout() // args
|
||||
);
|
||||
}
|
||||
|
||||
|
@@ -333,8 +340,8 @@ Result Indexes::ensureIndex(LogicalCollection* collection, VPackSlice const& inp
   TRI_ASSERT(collection);
   VPackBuilder normalized;
   StorageEngine* engine = EngineSelectorFeature::ENGINE;
   auto res = engine->indexFactory().enhanceIndexDefinition(  // normalize definition
       input, normalized, create, collection->vocbase()  // args
   );

   if (!res.ok()) {
@@ -426,7 +433,8 @@ Result Indexes::ensureIndex(LogicalCollection* collection, VPackSlice const& inp
   std::string iid = tmp.slice().get(StaticStrings::IndexId).copyString();
   VPackBuilder b;
   b.openObject();
-  b.add(StaticStrings::IndexId, VPackValue(collection->name() + TRI_INDEX_HANDLE_SEPARATOR_CHR + iid));
+  b.add(StaticStrings::IndexId,
+        VPackValue(collection->name() + TRI_INDEX_HANDLE_SEPARATOR_CHR + iid));
   b.close();
   output = VPackCollection::merge(tmp.slice(), b.slice(), false);
   return res;
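The definition handed back to the client always carries the fully-qualified id, and with this commit a name as well. A hedged arangosh sketch of the returned attributes (values hypothetical):

    var idx = db.users.ensureIndex({ type: "skiplist", fields: ["b", "d"], name: "byValue" });
    // idx.id   -> "users/12345"  (fully-qualified)
    // idx.name -> "byValue"      (auto-generated, e.g. "idx_...", if omitted)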
@@ -558,8 +566,8 @@ arangodb::Result Indexes::drop(LogicalCollection* collection, VPackSlice const&
   TRI_ASSERT(collection);
   auto& databaseName = collection->vocbase().name();

   return ClusterInfo::instance()->dropIndexCoordinator(  // drop index
       databaseName, std::to_string(collection->id()), iid, 0.0  // args
   );
 #endif
   } else {
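Since drop goes through the same handle parsing, dropping by name should work like lookup; a hedged sketch (names hypothetical):

    db.users.dropIndex("users/byValue");  // by qualified name
    db.users.dropIndex("byValue");        // by bare name
    db.users.dropIndex(12345);            // by numeric id, as before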
@@ -20,8 +20,8 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

+#include "Upgrade.h"
 #include "Basics/Common.h"
-#include "Upgrade.h"

 #include "Agency/AgencyComm.h"
 #include "Basics/StringUtils.h"
@@ -20,7 +20,6 @@
 /// @author Simon Grätzer
 ////////////////////////////////////////////////////////////////////////////////

-#include "UpgradeTasks.h"
 #include "Agency/AgencyComm.h"
 #include "Basics/Common.h"
 #include "Basics/Exceptions.h"
@@ -31,6 +30,7 @@
 #include "Cluster/ClusterFeature.h"
 #include "Cluster/ClusterInfo.h"
 #include "Cluster/ServerState.h"
+#include "ClusterEngine/ClusterEngine.h"
 #include "GeneralServer/AuthenticationFeature.h"
 #include "Logger/Logger.h"
 #include "MMFiles/MMFilesEngine.h"
@@ -39,6 +39,7 @@
 #include "StorageEngine/EngineSelectorFeature.h"
 #include "StorageEngine/PhysicalCollection.h"
 #include "Transaction/StandaloneContext.h"
+#include "UpgradeTasks.h"
 #include "Utils/OperationOptions.h"
 #include "Utils/SingleCollectionTransaction.h"
 #include "VocBase/LogicalCollection.h"
@@ -162,7 +163,7 @@ arangodb::Result recreateGeoIndex(TRI_vocbase_t& vocbase,

 bool UpgradeTasks::upgradeGeoIndexes(TRI_vocbase_t& vocbase,
                                      arangodb::velocypack::Slice const& slice) {
-  if (EngineSelectorFeature::engineName() != "rocksdb") {
+  if (EngineSelectorFeature::engineName() != RocksDBEngine::EngineName) {
     LOG_TOPIC(DEBUG, Logger::STARTUP) << "No need to upgrade geo indexes!";
     return true;
   }
@@ -240,21 +241,24 @@ bool UpgradeTasks::addDefaultUserOther(TRI_vocbase_t& vocbase,
     VPackSlice extra = slice.get("extra");
     Result res = um->storeUser(false, user, passwd, active, VPackSlice::noneSlice());
     if (res.fail() && !res.is(TRI_ERROR_USER_DUPLICATE)) {
-      LOG_TOPIC(WARN, Logger::STARTUP) << "could not add database user " << user << ": " << res.errorMessage();
+      LOG_TOPIC(WARN, Logger::STARTUP) << "could not add database user " << user
+                                       << ": " << res.errorMessage();
     } else if (extra.isObject() && !extra.isEmptyObject()) {
       um->updateUser(user, [&](auth::User& user) {
         user.setUserData(VPackBuilder(extra));
         return TRI_ERROR_NO_ERROR;
       });
     }

     res = um->updateUser(user, [&](auth::User& entry) {
       entry.grantDatabase(vocbase.name(), auth::Level::RW);
       entry.grantCollection(vocbase.name(), "*", auth::Level::RW);
       return TRI_ERROR_NO_ERROR;
     });
     if (res.fail()) {
-      LOG_TOPIC(WARN, Logger::STARTUP) << "could not set permissions for new user " << user << ": " << res.errorMessage();
+      LOG_TOPIC(WARN, Logger::STARTUP)
+          << "could not set permissions for new user " << user << ": "
+          << res.errorMessage();
     }
   }
   return true;
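For comparison, the same bootstrap grants can be issued from arangosh with the bundled users module; a hedged sketch (user name and database hypothetical):

    var users = require("@arangodb/users");
    users.save("alice", "secret");                      // create the user
    users.grantDatabase("alice", "mydb", "rw");         // database-level access
    users.grantCollection("alice", "mydb", "*", "rw");  // every collection in it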
@@ -332,30 +336,34 @@ bool UpgradeTasks::renameReplicationApplierStateFiles(TRI_vocbase_t& vocbase,
   if (EngineSelectorFeature::engineName() == MMFilesEngine::EngineName) {
     return true;
   }

   StorageEngine* engine = EngineSelectorFeature::ENGINE;
   std::string const path = engine->databasePath(&vocbase);

-  std::string const source = arangodb::basics::FileUtils::buildFilename(
-      path, "REPLICATION-APPLIER-STATE");
+  std::string const source =
+      arangodb::basics::FileUtils::buildFilename(path,
+                                                 "REPLICATION-APPLIER-STATE");

   if (!basics::FileUtils::isRegularFile(source)) {
     // source file does not exist
     return true;
   }

   bool result = true;

   // copy file REPLICATION-APPLIER-STATE to REPLICATION-APPLIER-STATE-<id>
   Result res = basics::catchToResult([&vocbase, &path, &source, &result]() -> Result {
     std::string const dest = arangodb::basics::FileUtils::buildFilename(
         path, "REPLICATION-APPLIER-STATE-" + std::to_string(vocbase.id()));

-    LOG_TOPIC(TRACE, Logger::STARTUP) << "copying replication applier file '" << source << "' to '" << dest << "'";
+    LOG_TOPIC(TRACE, Logger::STARTUP) << "copying replication applier file '"
+                                      << source << "' to '" << dest << "'";

     std::string error;
     if (!TRI_CopyFile(source.c_str(), dest.c_str(), error)) {
-      LOG_TOPIC(WARN, Logger::STARTUP) << "could not copy replication applier file '" << source << "' to '" << dest << "'";
+      LOG_TOPIC(WARN, Logger::STARTUP)
+          << "could not copy replication applier file '" << source << "' to '"
+          << dest << "'";
       result = false;
     }
     return Result();
@@ -48,7 +48,8 @@ struct UpgradeTasks {
   static bool createAppsIndex(TRI_vocbase_t& vocbase, velocypack::Slice const& slice);
   static bool setupAppBundles(TRI_vocbase_t& vocbase, velocypack::Slice const& slice);
   static bool persistLocalDocumentIds(TRI_vocbase_t& vocbase, velocypack::Slice const& slice);
-  static bool renameReplicationApplierStateFiles(TRI_vocbase_t& vocbase, velocypack::Slice const& slice);
+  static bool renameReplicationApplierStateFiles(TRI_vocbase_t& vocbase,
+                                                 velocypack::Slice const& slice);
 };

 }  // namespace methods
File diff suppressed because one or more lines are too long
Binary file not shown.
@@ -1080,6 +1080,7 @@ if (list.length > 0) {
           <th class="collectionInfoTh">Deduplicate</th>
           <th class="collectionInfoTh">Selectivity Est.</th>
           <th class="collectionInfoTh">Fields</th>
+          <th class="collectionInfoTh">Name</th>
           <th class="collectionInfoTh">Action</th>
         </tr>
       </thead>
@@ -1094,6 +1095,7 @@ if (list.length > 0) {
           <td></td>
           <td></td>
           <td></td>
+          <td></td>
           <td><i class="fa fa-plus-circle" id="addIndex"></i></td>
         </tr>
       </tfoot>
@@ -1124,6 +1126,17 @@ if (list.length > 0) {
               </div>
             </th>
           </tr>
+          <tr>
+            <th class="collectionTh">Name:</th>
+            <th><input type="text" id="newGeoName" value=""/></th>
+            <th class="tooltipInfoTh">
+              <div class="tooltipDiv">
+                <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+                  <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+                </a>
+              </div>
+            </th>
+          </tr>
           <tr>
             <th class="collectionTh">Geo JSON:</th>
             <th>
@@ -1165,6 +1178,17 @@ if (list.length > 0) {
               </div>
             </th>
           </tr>
+          <tr>
+            <th class="collectionTh">Name:</th>
+            <th><input type="text" id="newPersistentName" value=""/></th>
+            <th class="tooltipInfoTh">
+              <div class="tooltipDiv">
+                <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+                  <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+                </a>
+              </div>
+            </th>
+          </tr>
           <tr>
             <th class="collectionTh">Unique:</th>
             <th>
@@ -1232,6 +1256,17 @@ if (list.length > 0) {
               </div>
             </th>
           </tr>
+          <tr>
+            <th class="collectionTh">Name:</th>
+            <th><input type="text" id="newHashName" value=""/></th>
+            <th class="tooltipInfoTh">
+              <div class="tooltipDiv">
+                <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+                  <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+                </a>
+              </div>
+            </th>
+          </tr>
          <tr>
            <th class="collectionTh">Unique:</th>
            <th>
@@ -1299,6 +1334,17 @@ if (list.length > 0) {
               </div>
             </th>
           </tr>
+          <tr>
+            <th class="collectionTh">Name:</th>
+            <th><input type="text" id="newFulltextName" value=""/></th>
+            <th class="tooltipInfoTh">
+              <div class="tooltipDiv">
+                <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+                  <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+                </a>
+              </div>
+            </th>
+          </tr>
           <tr>
             <th class="collectionTh">Min. length:</th>
             <th><input type="text" id="newFulltextMinLength" value=""/></th>
@@ -1339,6 +1385,17 @@ if (list.length > 0) {
               </div>
             </th>
           </tr>
+          <tr>
+            <th class="collectionTh">Name:</th>
+            <th><input type="text" id="newSkiplistName" value=""/></th>
+            <th class="tooltipInfoTh">
+              <div class="tooltipDiv">
+                <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+                  <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+                </a>
+              </div>
+            </th>
+          </tr>
           <tr>
             <th class="collectionTh">Unique:</th>
             <th>
@@ -1406,6 +1463,17 @@ if (list.length > 0) {
               </div>
             </th>
           </tr>
+          <tr>
+            <th class="collectionTh">Name:</th>
+            <th><input type="text" id="newTtlName" value=""/></th>
+            <th class="tooltipInfoTh">
+              <div class="tooltipDiv">
+                <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+                  <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+                </a>
+              </div>
+            </th>
+          </tr>
           <tr>
             <th class="collectionTh">Documents expire after (s):</th>
             <th><input type="text" id="newTtlExpireAfter" value=""/></th>
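Each of these inputs feeds the optional "name" attribute of the index-creation request; a hedged sketch of the body the dialog ends up posting (collection and values hypothetical):

    // what the dialog POSTs to /_api/index?collection=users
    { "type": "ttl", "fields": ["createdAt"], "expireAfter": 3600, "name": "expireSessions" }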
@@ -3650,4 +3718,4 @@ var cutByResolution = function (str) {
 </div>
</div></script><script id="warningList.ejs" type="text/template"> <% if (warnings.length > 0) { %> <div>
<ul> <% console.log(warnings); _.each(warnings, function(w) { console.log(w);%> <li><b><%=w.code%></b>: <%=w.message%></li> <% }); %> </ul>
</div> <% } %> </script></head><body><nav class="navbar" style="display: none"><div class="primary"><div class="navlogo"><a class="logo big" href="#"><img id="ArangoDBLogo" class="arangodbLogo" src="img/arangodb-edition-optimized.svg"></a><a class="logo small" href="#"><img class="arangodbLogo" src="img/arangodb_logo_small.png"></a><a class="version"><span id="currentVersion"></span></a></div><div class="statmenu" id="statisticBar"></div><div class="navmenu" id="navigationBar"></div></div></nav><div id="modalPlaceholder"></div><div class="bodyWrapper" style="display: none"><div class="centralRow"><div id="navbar2" class="navbarWrapper secondary"><div class="subnavmenu" id="subNavigationBar"></div></div><div class="resizecontainer contentWrapper"><div id="loadingScreen" class="loadingScreen" style="display: none"><i class="fa fa-circle-o-notch fa-spin fa-3x fa-fw margin-bottom"></i> <span class="sr-only">Loading...</span></div><div id="content" class="centralContent"></div><footer class="footer"><div id="footerBar"></div></footer></div></div></div><div id="progressPlaceholder" style="display:none"></div><div id="spotlightPlaceholder" style="display:none"></div><div id="graphSettingsContent" style="display: none"></div><div id="filterSelectDiv" style="display:none"></div><div id="offlinePlaceholder" style="display:none"><div class="offline-div"><div class="pure-u"><div class="pure-u-1-4"></div><div class="pure-u-1-2 offline-window"><div class="offline-header"><h3>You have been disconnected from the server</h3></div><div class="offline-body"><p>The connection to the server has been lost. The server may be under heavy load.</p><p>Trying to reconnect in <span id="offlineSeconds">10</span> seconds.</p><p class="animation_state"><span><button class="button-success">Reconnect now</button></span></p></div></div><div class="pure-u-1-4"></div></div></div></div><div class="arangoFrame" style=""><div class="outerDiv"><div class="innerDiv"></div></div></div><script src="libs.js?version=1551722803883"></script><script src="app.js?version=1551722803883"></script></body></html>
</div> <% } %> </script></head><body><nav class="navbar" style="display: none"><div class="primary"><div class="navlogo"><a class="logo big" href="#"><img id="ArangoDBLogo" class="arangodbLogo" src="img/arangodb-edition-optimized.svg"></a><a class="logo small" href="#"><img class="arangodbLogo" src="img/arangodb_logo_small.png"></a><a class="version"><span id="currentVersion"></span></a></div><div class="statmenu" id="statisticBar"></div><div class="navmenu" id="navigationBar"></div></div></nav><div id="modalPlaceholder"></div><div class="bodyWrapper" style="display: none"><div class="centralRow"><div id="navbar2" class="navbarWrapper secondary"><div class="subnavmenu" id="subNavigationBar"></div></div><div class="resizecontainer contentWrapper"><div id="loadingScreen" class="loadingScreen" style="display: none"><i class="fa fa-circle-o-notch fa-spin fa-3x fa-fw margin-bottom"></i> <span class="sr-only">Loading...</span></div><div id="content" class="centralContent"></div><footer class="footer"><div id="footerBar"></div></footer></div></div></div><div id="progressPlaceholder" style="display:none"></div><div id="spotlightPlaceholder" style="display:none"></div><div id="graphSettingsContent" style="display: none"></div><div id="filterSelectDiv" style="display:none"></div><div id="offlinePlaceholder" style="display:none"><div class="offline-div"><div class="pure-u"><div class="pure-u-1-4"></div><div class="pure-u-1-2 offline-window"><div class="offline-header"><h3>You have been disconnected from the server</h3></div><div class="offline-body"><p>The connection to the server has been lost. The server may be under heavy load.</p><p>Trying to reconnect in <span id="offlineSeconds">10</span> seconds.</p><p class="animation_state"><span><button class="button-success">Reconnect now</button></span></p></div></div><div class="pure-u-1-4"></div></div></div></div><div class="arangoFrame" style=""><div class="outerDiv"><div class="innerDiv"></div></div></div><script src="libs.js?version=1552058798750"></script><script src="app.js?version=1552058798750"></script></body></html>
Binary file not shown.
@@ -12,6 +12,7 @@
       <th class="collectionInfoTh">Deduplicate</th>
       <th class="collectionInfoTh">Selectivity Est.</th>
       <th class="collectionInfoTh">Fields</th>
+      <th class="collectionInfoTh">Name</th>
       <th class="collectionInfoTh">Action</th>
     </tr>
   </thead>
@@ -26,6 +27,7 @@
       <td></td>
       <td></td>
       <td></td>
+      <td></td>
       <td><i class="fa fa-plus-circle" id="addIndex"></i></td>
     </tr>
   </tfoot>
@@ -75,6 +77,17 @@
           </div>
         </th>
       </tr>
+      <tr>
+        <th class="collectionTh">Name:</th>
+        <th><input type="text" id="newGeoName" value=""/></th>
+        <th class="tooltipInfoTh">
+          <div class="tooltipDiv">
+            <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+              <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+            </a>
+          </div>
+        </th>
+      </tr>
       <tr>
         <th class="collectionTh">Geo JSON:</th>
         <th>
@@ -116,6 +129,17 @@
           </div>
         </th>
       </tr>
+      <tr>
+        <th class="collectionTh">Name:</th>
+        <th><input type="text" id="newPersistentName" value=""/></th>
+        <th class="tooltipInfoTh">
+          <div class="tooltipDiv">
+            <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+              <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+            </a>
+          </div>
+        </th>
+      </tr>
       <tr>
         <th class="collectionTh">Unique:</th>
         <th>
@@ -183,6 +207,17 @@
           </div>
         </th>
       </tr>
+      <tr>
+        <th class="collectionTh">Name:</th>
+        <th><input type="text" id="newHashName" value=""/></th>
+        <th class="tooltipInfoTh">
+          <div class="tooltipDiv">
+            <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+              <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+            </a>
+          </div>
+        </th>
+      </tr>
       <tr>
         <th class="collectionTh">Unique:</th>
         <th>
@@ -250,6 +285,17 @@
           </div>
         </th>
       </tr>
+      <tr>
+        <th class="collectionTh">Name:</th>
+        <th><input type="text" id="newFulltextName" value=""/></th>
+        <th class="tooltipInfoTh">
+          <div class="tooltipDiv">
+            <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+              <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+            </a>
+          </div>
+        </th>
+      </tr>
       <tr>
         <th class="collectionTh">Min. length:</th>
         <th><input type="text" id="newFulltextMinLength" value=""/></th>
@@ -290,6 +336,17 @@
           </div>
         </th>
       </tr>
+      <tr>
+        <th class="collectionTh">Name:</th>
+        <th><input type="text" id="newSkiplistName" value=""/></th>
+        <th class="tooltipInfoTh">
+          <div class="tooltipDiv">
+            <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+              <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+            </a>
+          </div>
+        </th>
+      </tr>
       <tr>
         <th class="collectionTh">Unique:</th>
         <th>
@@ -357,6 +414,17 @@
           </div>
         </th>
       </tr>
+      <tr>
+        <th class="collectionTh">Name:</th>
+        <th><input type="text" id="newTtlName" value=""/></th>
+        <th class="tooltipInfoTh">
+          <div class="tooltipDiv">
+            <a class="index-tooltip" data-toggle="tooltip" data-placement="left" title="Index name. If left blank, a name will be auto-generated. Example: byValue">
+              <span rel="tooltip" class="arangoicon icon_arangodb_info"></span>
+            </a>
+          </div>
+        </th>
+      </tr>
       <tr>
         <th class="collectionTh">Documents expire after (s):</th>
         <th><input type="text" id="newTtlExpireAfter" value=""/></th>
@@ -129,27 +129,34 @@
       var sparse;
       var deduplicate;
       var background;
+      var name;

       switch (indexType) {
         case 'Ttl':
           fields = $('#newTtlFields').val();
           var expireAfter = parseInt($('#newTtlExpireAfter').val(), 10) || 0;
+          background = self.checkboxToValue('#newTtlBackground');
+          name = $('#newTtlName').val();
           postParameter = {
             type: 'ttl',
             fields: self.stringToArray(fields),
-            expireAfter: expireAfter
+            expireAfter: expireAfter,
+            inBackground: background,
+            name: name
           };
-          background = self.checkboxToValue('#newTtlBackground');
           break;
         case 'Geo':
           // HANDLE ARRAY building
           fields = $('#newGeoFields').val();
           background = self.checkboxToValue('#newGeoBackground');
           var geoJson = self.checkboxToValue('#newGeoJson');
+          name = $('#newGeoName').val();
           postParameter = {
             type: 'geo',
             fields: self.stringToArray(fields),
             geoJson: geoJson,
-            inBackground: background
+            inBackground: background,
+            name: name
           };
           break;
         case 'Persistent':
@@ -158,13 +165,15 @@
           sparse = self.checkboxToValue('#newPersistentSparse');
           deduplicate = self.checkboxToValue('#newPersistentDeduplicate');
           background = self.checkboxToValue('#newPersistentBackground');
+          name = $('#newPersistentName').val();
           postParameter = {
             type: 'persistent',
             fields: self.stringToArray(fields),
             unique: unique,
             sparse: sparse,
             deduplicate: deduplicate,
-            inBackground: background
+            inBackground: background,
+            name: name
           };
           break;
         case 'Hash':
@@ -173,24 +182,28 @@
           sparse = self.checkboxToValue('#newHashSparse');
           deduplicate = self.checkboxToValue('#newHashDeduplicate');
           background = self.checkboxToValue('#newHashBackground');
+          name = $('#newHashName').val();
           postParameter = {
             type: 'hash',
             fields: self.stringToArray(fields),
             unique: unique,
             sparse: sparse,
             deduplicate: deduplicate,
-            inBackground: background
+            inBackground: background,
+            name: name
           };
           break;
         case 'Fulltext':
           fields = $('#newFulltextFields').val();
           var minLength = parseInt($('#newFulltextMinLength').val(), 10) || 0;
           background = self.checkboxToValue('#newFulltextBackground');
+          name = $('#newFulltextName').val();
           postParameter = {
             type: 'fulltext',
             fields: self.stringToArray(fields),
             minLength: minLength,
-            inBackground: background
+            inBackground: background,
+            name: name
           };
           break;
         case 'Skiplist':
@@ -199,13 +212,15 @@
           sparse = self.checkboxToValue('#newSkiplistSparse');
           deduplicate = self.checkboxToValue('#newSkiplistDeduplicate');
           background = self.checkboxToValue('#newSkiplistBackground');
+          name = $('#newSkiplistName').val();
           postParameter = {
             type: 'skiplist',
             fields: self.stringToArray(fields),
             unique: unique,
             sparse: sparse,
             deduplicate: deduplicate,
-            inBackground: background
+            inBackground: background,
+            name: name
           };
           break;
       }
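The same definition the dialog builds can of course be sent from arangosh directly; a hedged sketch (collection and values hypothetical):

    db.users.ensureIndex({
      type: 'hash',
      fields: ['a', 'b'],
      unique: false,
      sparse: false,
      deduplicate: true,
      inBackground: true,
      name: 'byAB'
    });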
@@ -430,6 +445,7 @@
         '<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(deduplicate) + '</th>' +
         '<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(selectivity) + '</th>' +
         '<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(fieldString) + '</th>' +
+        '<th class=' + JSON.stringify(cssClass) + '>' + arangoHelper.escapeHtml(v.name) + '</th>' +
         '<th class=' + JSON.stringify(cssClass) + '>' + actionString + '</th>' +
         '</tr>'
       );
@@ -31,7 +31,7 @@
 var internal = require('internal');
 var arangosh = require('@arangodb/arangosh');
 var engine = null;

 function getEngine(db) {
   if (engine === null) {
     try {
@@ -304,8 +304,8 @@ ArangoCollection.prototype.name = function () {
 // //////////////////////////////////////////////////////////////////////////////

 ArangoCollection.prototype.status = function () {
   if (this._status === null ||
       this._status === ArangoCollection.STATUS_UNLOADING ||
       this._status === ArangoCollection.STATUS_UNLOADED) {
     this._status = null;
     this.refresh();
@@ -476,7 +476,7 @@ ArangoCollection.prototype.drop = function (options) {
     requestResult = this._database._connection.DELETE(this._baseurl());
   }

   if (requestResult !== null
       && requestResult !== undefined
       && requestResult.error === true
       && requestResult.errorNum !== internal.errors.ERROR_ARANGO_DATA_SOURCE_NOT_FOUND.code) {
@@ -518,16 +518,16 @@ ArangoCollection.prototype.truncate = function (options) {

   arangosh.checkRequestResult(requestResult);
   this._status = null;

   if (!options.compact) {
     return;
   }

   // fetch storage engine type
   var engine = getEngine(this._database);

   if (engine === 'mmfiles') {
     try {
       // after we are done with the truncation, we flush the WAL to move out all
       // remove operations
       this._database._connection.PUT(this._prefixurl('/_admin/wal/flush?waitForSync=true&waitForCollector=true&maxWaitTime=5'), null);
@@ -630,6 +630,8 @@ ArangoCollection.prototype.getIndexes = ArangoCollection.prototype.indexes = fun
 ArangoCollection.prototype.index = function (id) {
   if (id.hasOwnProperty('id')) {
     id = id.id;
+  } else if (id.hasOwnProperty('name')) {
+    id = id.name;
   }

   var requestResult = this._database._connection.GET(this._database._indexurl(id, this.name()));
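So the client helper now accepts an index object, a qualified handle, or a plain name; a hedged sketch (names hypothetical):

    var idx = db.users.ensureIndex({ type: 'hash', fields: ['value'], name: 'byValue' });
    db.users.index(idx);                  // object: resolved via its "id"
    db.users.index({ name: 'byValue' });  // object without id: resolved via "name"
    db.users.index('byValue');            // plain string name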
@@ -73,7 +73,7 @@ var simple = require('@arangodb/simple-query');
 var ArangoError = require('@arangodb').ArangoError;
 var ArangoDatabase = require('@arangodb/arango-database').ArangoDatabase;

 ArangoCollection.prototype.shards = function (detailed) {
   let base = ArangoClusterInfo.getCollectionInfo(require('internal').db._name(), this.name());
   if (detailed) {
@@ -138,16 +138,17 @@ ArangoCollection.prototype.index = function (id) {

   if (typeof id === 'object' && id.hasOwnProperty('id')) {
     id = id.id;
+  } else if (typeof id === 'object' && id.hasOwnProperty('name')) {
+    id = id.name;
   }

   if (typeof id === 'string') {
     var pa = ArangoDatabase.indexRegex.exec(id);

-    if (pa === null) {
+    if (pa === null && !isNaN(Number(id)) && Number(id) === Math.floor(Number(id))) {
       id = this.name() + '/' + id;
     }
-  }
-  else if (typeof id === 'number') {
+  } else if (typeof id === 'number') {
     // stringify the id
     id = this.name() + '/' + id;
   }
@@ -155,7 +156,7 @@ ArangoCollection.prototype.index = function (id) {
   for (i = 0; i < indexes.length; ++i) {
     var index = indexes[i];

-    if (index.id === id) {
+    if (index.id === id || index.name === id) {
       return index;
     }
   }
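Note the interplay with the guard above: a purely numeric string is still qualified into an id handle, while any other string survives untouched and can match on index.name. A hedged sketch:

    db.users.index('12345');    // becomes 'users/12345', matched by id
    db.users.index('byValue');  // left as-is, matched against index.name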
@@ -217,7 +217,7 @@ ArangoDatabase.prototype._truncate = function (name) {
 // / @brief was docuBlock IndexVerify
 // //////////////////////////////////////////////////////////////////////////////

-ArangoDatabase.indexRegex = /^([a-zA-Z0-9\-_]+)\/([0-9]+)$/;
+ArangoDatabase.indexRegex = /^([a-zA-Z0-9\-_]+)\/([a-zA-Z0-9\-_]+)$/;

 // //////////////////////////////////////////////////////////////////////////////
 // / @brief was docuBlock IndexHandle
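The widened second capture group is what makes name-based handles parse at all; a hedged sketch of its behavior:

    ArangoDatabase.indexRegex.test('users/12345');    // true, numeric id
    ArangoDatabase.indexRegex.test('users/byValue');  // true, name (previously false)
    ArangoDatabase.indexRegex.test('byValue');        // false, no collection part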
@@ -239,6 +239,7 @@ ArangoDatabase.prototype._index = function (id) {
   }

   var col = this._collection(pa[1]);
+  var name = pa[2];

   if (col === null) {
     err = new ArangoError();
@@ -253,7 +254,7 @@ ArangoDatabase.prototype._index = function (id) {
   for (i = 0; i < indexes.length; ++i) {
     var index = indexes[i];

-    if (index.id === id) {
+    if (index.id === id || index.name === name) {
       return index;
     }
   }
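db._index applies the same dual matching at the database level; a hedged sketch (handles hypothetical):

    db._index('users/byValue');  // resolved via index.name
    db._index('users/12345');    // resolved via index.id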
@@ -93,12 +93,20 @@ std::string const StaticStrings::DataSourceType("type");
 std::string const StaticStrings::IndexExpireAfter("expireAfter");
 std::string const StaticStrings::IndexFields("fields");
 std::string const StaticStrings::IndexId("id");
+std::string const StaticStrings::IndexName("name");
 std::string const StaticStrings::IndexSparse("sparse");
 std::string const StaticStrings::IndexType("type");
 std::string const StaticStrings::IndexUnique("unique");
 std::string const StaticStrings::IndexIsBuilding("isBuilding");
 std::string const StaticStrings::IndexInBackground("inBackground");
+
+// static index names
+std::string const StaticStrings::IndexNameEdge("edge");
+std::string const StaticStrings::IndexNameEdgeFrom("edge_from");
+std::string const StaticStrings::IndexNameEdgeTo("edge_to");
+std::string const StaticStrings::IndexNameInaccessible("inaccessible");
+std::string const StaticStrings::IndexNamePrimary("primary");

 // HTTP headers
 std::string const StaticStrings::Accept("accept");
 std::string const StaticStrings::AcceptEncoding("accept-encoding");
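These reserved names surface through the normal index APIs; a hedged arangosh sketch:

    db._create('users');
    db.users.getIndexes()[0].name;   // 'primary'
    db._createEdgeCollection('knows');
    db.knows.getIndexes().map(function (i) { return i.name; });  // ['primary', 'edge']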
@@ -92,12 +92,20 @@ class StaticStrings {
   static std::string const IndexExpireAfter;   // ttl index expire value
   static std::string const IndexFields;        // index fields
   static std::string const IndexId;            // index id
+  static std::string const IndexName;          // index name
   static std::string const IndexSparse;        // index sparsity marker
   static std::string const IndexType;          // index type
   static std::string const IndexUnique;        // index uniqueness marker
   static std::string const IndexIsBuilding;    // index build in-process
   static std::string const IndexInBackground;  // index in background
+
+  // static index names
+  static std::string const IndexNameEdge;
+  static std::string const IndexNameEdgeFrom;
+  static std::string const IndexNameEdgeTo;
+  static std::string const IndexNameInaccessible;
+  static std::string const IndexNamePrimary;

   // HTTP headers
   static std::string const Accept;
   static std::string const AcceptEncoding;
File diff suppressed because it is too large
@@ -40,7 +40,9 @@ var db = require("@arangodb").db;
 function ensureIndexSuite() {
   'use strict';
   var cn = "UnitTestsCollectionIdx";
+  var ecn = "UnitTestsEdgeCollectionIdx";
   var collection = null;
+  var edgeCollection = null;

   return {
@@ -51,6 +53,7 @@ function ensureIndexSuite() {
     setUp : function () {
       internal.db._drop(cn);
       collection = internal.db._create(cn);
+      edgeCollection = internal.db._createEdgeCollection(ecn);
     },

     ////////////////////////////////////////////////////////////////////////////////
|
@ -61,10 +64,12 @@ function ensureIndexSuite() {
|
|||
// we need try...catch here because at least one test drops the collection itself!
|
||||
try {
|
||||
collection.drop();
|
||||
edgeCollection.drop();
|
||||
}
|
||||
catch (err) {
|
||||
}
|
||||
collection = null;
|
||||
edgeCollection = null;
|
||||
},
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
@@ -87,10 +92,6 @@ function ensureIndexSuite() {
       assertEqual(collection.name() + "/" + id, res.id);
     },

-    ////////////////////////////////////////////////////////////////////////////////
-    /// @brief test: ids
-    ////////////////////////////////////////////////////////////////////////////////
-
     testEnsureId2 : function () {
       var id = "2734752388";
       var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], id: id });
@@ -107,6 +108,132 @@ function ensureIndexSuite() {
       assertEqual(collection.name() + "/" + id, res.id);
     },

+    testEnsureId3 : function () {
+      var id = "2734752388";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], id: id });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+
+      // expect duplicate id with different definition to fail and error out
+      try {
+        collection.ensureIndex({ type: "hash", fields: [ "a", "c" ], id: id });
+        fail();
+      } catch (err) {
+        assertEqual(errors.ERROR_ARANGO_DUPLICATE_IDENTIFIER.code, err.errorNum);
+      }
+    },
+
+    testEnsureId4 : function () {
+      var id = "2734752388";
+      var name = "name";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name, id: id });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+      assertEqual(name, idx.name);
+
+      // expect duplicate id with same definition to return old index
+      idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], id: id });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+      assertEqual(name, idx.name);
+    },
+
+    ////////////////////////////////////////////////////////////////////////////////
+    /// @brief test: names
+    ////////////////////////////////////////////////////////////////////////////////
+
+    testEnsureNamePrimary : function () {
+      var res = collection.getIndexes()[0];
+
+      assertEqual("primary", res.type);
+      assertEqual("primary", res.name);
+    },
+
+    testEnsureNameEdge : function () {
+      var res = edgeCollection.getIndexes()[0];
+
+      assertEqual("primary", res.type);
+      assertEqual("primary", res.name);
+
+      res = edgeCollection.getIndexes()[1];
+
+      assertEqual("edge", res.type);
+      assertEqual("edge", res.name);
+    },
+
+    testEnsureName1 : function () {
+      var name = "byValue";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(name, idx.name);
+
+      var res = collection.getIndexes()[collection.getIndexes().length - 1];
+
+      assertEqual("skiplist", res.type);
+      assertFalse(res.unique);
+      assertEqual([ "b", "d" ], res.fields);
+      assertEqual(name, res.name);
+    },
+
+    testEnsureName2 : function () {
+      var name = "byValue";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(name, idx.name);
+
+      // expect duplicate name to fail and error out
+      try {
+        collection.ensureIndex({ type: "hash", fields: [ "a", "c" ], name: name });
+        fail();
+      } catch (err) {
+        assertEqual(errors.ERROR_ARANGO_DUPLICATE_IDENTIFIER.code, err.errorNum);
+      }
+    },
+
+    testEnsureName3 : function () {
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ]});
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual("idx_", idx.name.substr(0, 4));
+
+      var res = collection.getIndexes()[collection.getIndexes().length - 1];
+
+      assertEqual("skiplist", res.type);
+      assertFalse(res.unique);
+      assertEqual([ "b", "d" ], res.fields);
+      assertEqual("idx_", res.name.substr(0, 4));
+    },
+
+    testEnsureName4 : function () {
+      var id = "2734752388";
+      var name = "old";
+      var idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name, id: id });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+      assertEqual(name, idx.name);
+
+      // expect duplicate name with same definition to return the old index
+      idx = collection.ensureIndex({ type: "skiplist", fields: [ "b", "d" ], name: name });
+      assertEqual("skiplist", idx.type);
+      assertFalse(idx.unique);
+      assertEqual([ "b", "d" ], idx.fields);
+      assertEqual(collection.name() + "/" + id, idx.id);
+      assertEqual(name, idx.name);
+    },
+
     ////////////////////////////////////////////////////////////////////////////////
     /// @brief test: ensure invalid type
     ////////////////////////////////////////////////////////////////////////////////
@@ -1101,4 +1228,3 @@ jsunity.run(ensureIndexSuite);
 jsunity.run(ensureIndexEdgesSuite);

 return jsunity.done();
@@ -122,6 +122,24 @@ function indexSuite() {
       assertEqual(id.id, idx.id);
     },

+    ////////////////////////////////////////////////////////////////////////////////
+    /// @brief test: get index by name
+    ////////////////////////////////////////////////////////////////////////////////
+
+    testIndexByName : function () {
+      var id = collection.ensureGeoIndex("a");
+
+      var idx = collection.index(id.name);
+      assertEqual(id.id, idx.id);
+      assertEqual(id.name, idx.name);
+
+      var fqn = `${collection.name()}/${id.name}`;
+      require('internal').print(fqn);
+      idx = internal.db._index(fqn);
+      assertEqual(id.id, idx.id);
+      assertEqual(id.name, idx.name);
+    },
+
     ////////////////////////////////////////////////////////////////////////////////
     /// @brief drop index
     ////////////////////////////////////////////////////////////////////////////////