mirror of https://gitee.com/bigwinds/arangodb

Bug fix/mini issues (#2878)

* ignore some return codes when closing zip files and do not report them
* hide a mostly useless debug message
* clear the basic authentication cache after deleting or updating users; otherwise deleted/changed users can still access the database!
* adjust wording
* added notes about mmfiles-specific parameters
* updated CHANGELOG and documentation

This commit is contained in:
parent 582d3e5fd1
commit 33a7b8b853

CHANGELOG | 40

@@ -4,16 +4,52 @@ devel

* ui: now allows editing the default access level for collections in the
  _system database for all users except the root user.


v3.2.0 (2017-07-20)
-------------------

* fixed UI issues

* fixed multi-threading issues in Pregel

* fixed Foxx resilience

* added command-line option `--javascript.allow-admin-execute`

  This option controls whether user-defined JavaScript code may be executed
  on the server by sending it via HTTP to the API endpoint `/_admin/execute`
  with an authenticated user account.
  The default value is `false`, which disables the execution of user-defined
  code. This is also the recommended setting for production. In test
  environments, it may be convenient to turn the option on in order to send
  arbitrary setup or teardown commands for execution on the server.


v3.2.beta6 (2017-07-18)
-----------------------

* various bugfixes


v3.2.beta5 (2017-07-16)
-----------------------

* numerous bugfixes


v3.2.beta4 (2017-07-04)
-----------------------

* ui: fixed document view _from and _to linking issue for special characters

-* added function `db._parse(query)` for parse an AQL query and return information about it
+* added function `db._parse(query)` for parsing an AQL query and returning information about it

* fixed one medium priority and two low priority security user interface
  issues found by owasp zap.

* ui: added index deduplicate options

-* ui: fixed renaming of collections based on rocksdb storage engine
+* ui: fixed renaming of collections for the rocksdb storage engine

* documentation and js fixes for secondaries


@@ -445,6 +445,19 @@ This option only has an effect if the query cache mode is set to either *on* or
*demand*.

### JavaScript code execution

`--javascript.allow-admin-execute`

This option controls whether user-defined JavaScript code may be executed
on the server by sending it via HTTP to the API endpoint `/_admin/execute`
with an authenticated user account.
The default value is *false*, which disables the execution of user-defined
code. This is also the recommended setting for production. In test
environments, it may be convenient to turn the option on in order to send
arbitrary setup or teardown commands for execution on the server.
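
As a rough sketch (not part of the original change), the endpoint can be
exercised from arangosh once the server has been started with the option
enabled; this assumes the `arango` connection object that arangosh provides
and an authenticated session:

    // hypothetical smoke test for /_admin/execute; requires a server started
    // with --javascript.allow-admin-execute true
    var result = arango.POST("/_admin/execute", "return 6 * 7;");

    require("internal").print(result); // expected to print 42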

### V8 contexts

`--javascript.v8-contexts number`


@@ -84,6 +84,7 @@ to the [naming conventions](../NamingConventions/README.md).
  slightly faster than regular collections because ArangoDB does not
  enforce any synchronization to disk and does not calculate any CRC
  checksums for datafiles (as there are no datafiles).
  This option is meaningful for the MMFiles storage engine only.

- *keyOptions* (optional): additional options for key generation. If
  specified, then *keyOptions* should be a JSON array containing the


@@ -113,3 +113,18 @@ Command-line options changed

  the minimum number of V8 contexts to create at startup can be configured via
  the new startup option `--javascript.v8-contexts-minimum`.

* added command-line option `--javascript.allow-admin-execute`

  This option controls whether user-defined JavaScript code may be executed
  on the server by sending it via HTTP to the API endpoint `/_admin/execute`
  with an authenticated user account.
  The default value is `false`, which disables the execution of user-defined
  code. This is also the recommended setting for production. In test
  environments, it may be convenient to turn the option on in order to send
  arbitrary setup or teardown commands for execution on the server.

  The introduction of this option changes the default behavior of ArangoDB 3.2:
  3.2 now by default disables the execution of JavaScript code via this API,
  whereas earlier versions allowed it. To restore the old behavior, it is
  necessary to set the option to `true`.
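
For example, to restore the old behavior, the option can be passed when
starting the server (the database directory below is only a placeholder):

    arangod --javascript.allow-admin-execute true /path/to/database-directory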


@@ -16,5 +16,13 @@ then the return value you produce will be returned as content type
*true*, the result will be a JSON object describing the return value
directly, otherwise a string produced by JSON.stringify will be
returned.

Note that this API endpoint will only be present if the server was
started with the option `--javascript.allow-admin-execute true`.

The default value of this option is `false`, which disables the execution of
user-defined code and disables this API endpoint entirely.
This is also the recommended setting for production.

@endDocuBlock
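
To make the two return modes above concrete, here is a hedged arangosh sketch
(the `returnAsJSON` parameter name is an assumption inferred from the
description above, and the endpoint only exists when the server runs with
`--javascript.allow-admin-execute true`):

    // result described directly as a JSON object
    var asJson = arango.POST("/_admin/execute?returnAsJSON=true",
                             "return { answer: 42 };");

    // result rendered as a string produced by JSON.stringify
    var asString = arango.POST("/_admin/execute",
                               "return { answer: 42 };");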


@@ -11,16 +11,19 @@ The name of the collection.

@RESTDESCRIPTION
In addition to the above, the result will always contain the
-*waitForSync*, *doCompact*, *journalSize*, and *isVolatile* attributes.
+*waitForSync* attribute, and the *doCompact*, *journalSize*,
+and *isVolatile* attributes for the MMFiles storage engine.
This is achieved by forcing a load of the underlying collection.

- *waitForSync*: If *true*, then creating, changing or removing
  documents will wait until the data has been synchronized to disk.

- *doCompact*: Whether or not the collection will be compacted.
  This option is only present for the MMFiles storage engine.

- *journalSize*: The maximal size setting for journals / datafiles
  in bytes.
  This option is only present for the MMFiles storage engine.

- *keyOptions*: JSON object which contains key generation options:
  - *type*: specifies the type of the key generator. The currently

@@ -34,6 +37,7 @@ This is achieved by forcing a load of the underlying collection.
- *isVolatile*: If *true*, then the collection data will be
  kept in memory only and ArangoDB will not write or sync the data
  to disk.
  This option is only present for the MMFiles storage engine.

In a cluster setup, the result will also contain the following attributes:
- *numberOfShards*: the number of shards of the collection.
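
A sketch of fetching these properties over the REST API, in the style of the
arangosh examples used elsewhere in this documentation (the collection name
is illustrative):

    var cn = "products";
    db._drop(cn);
    db._create(cn);

    var response = logCurlRequest('GET', "/_api/collection/" + cn + "/properties");

    assert(response.code === 200);
    // with MMFiles the body also contains doCompact, journalSize and
    // isVolatile; other engines only guarantee waitForSync
    logJsonResponse(response);
    db._drop(cn);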


@@ -13,10 +13,12 @@ document create, update, replace or removal operation. (default: false)

@RESTBODYPARAM{doCompact,boolean,optional,}
whether or not the collection will be compacted (default is *true*)
This option is meaningful for the MMFiles storage engine only.

@RESTBODYPARAM{journalSize,integer,optional,int64}
The maximal size of a journal or datafile in bytes. The value
must be at least `1048576` (1 MiB). (The default is a configuration parameter.)
This option is meaningful for the MMFiles storage engine only.

@RESTBODYPARAM{isSystem,boolean,optional,}
If *true*, create a system collection. In this case *collection-name*

@@ -36,6 +38,7 @@ checksums for datafiles (as there are no datafiles). This option
should therefore be used for cache-type collections only, and not
for data that cannot be re-created otherwise.
(The default is *false*.)
This option is meaningful for the MMFiles storage engine only.

@RESTBODYPARAM{keyOptions,object,optional,JSF_post_api_collection_opts}
additional options for key generation. If specified, then *keyOptions*

@@ -67,7 +70,7 @@ The following values for *type* are valid:
- *3*: edges collection

@RESTBODYPARAM{indexBuckets,integer,optional,int64}
-The: number of buckets into which indexes using a hash
+The number of buckets into which indexes using a hash
table are split. The default is 16 and this number has to be a
power of 2 and less than or equal to 1024.

@@ -79,6 +82,7 @@ example, 64 might be a sensible value for a collection with 100
value, but other index types might follow in future ArangoDB versions.
Changes (see below) are applied when the collection is loaded the next
time.
This option is meaningful for the MMFiles storage engine only.
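
As an illustrative sketch (the values are arbitrary, not recommendations), a
create-collection request using the MMFiles-specific parameters above could
look like this:

    var body = {
      name: "cache",
      isVolatile: true,          // keep data in memory only (MMFiles only)
      journalSize: 4 * 1048576,  // 4 MiB journals/datafiles (MMFiles only)
      indexBuckets: 64,          // power of 2, at most 1024 (MMFiles only)
      doCompact: false           // disable compaction (MMFiles only)
    };

    var response = logCurlRequest('POST', "/_api/collection", body);

    assert(response.code === 200);
    logJsonResponse(response);
    db._drop("cache");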

@RESTBODYPARAM{numberOfShards,integer,optional,int64}
(The default is *1*): in a cluster, this value determines the


@@ -1,133 +1,133 @@

@startDocuBlock JSF_post_api_index_hash
@brief creates a hash index

@RESTHEADER{POST /_api/index#hash, Create hash index}

@RESTQUERYPARAMETERS

@RESTQUERYPARAM{collection-name,string,required}
The collection name.

@RESTBODYPARAM{type,string,required,string}
must be equal to *"hash"*.

@RESTBODYPARAM{fields,array,required,string}
an array of attribute paths.

@RESTBODYPARAM{unique,boolean,required,}
if *true*, then create a unique index.

@RESTBODYPARAM{sparse,boolean,required,}
if *true*, then create a sparse index.

@RESTBODYPARAM{deduplicate,boolean,optional,boolean}
if *false*, the deduplication of array values is turned off.

@RESTDESCRIPTION
**NOTE** Swagger examples won't work due to the anchor.

Creates a hash index for the collection *collection-name* if it
does not already exist. The call expects an object containing the index
details.

In a sparse index, all documents that do not contain at least one of the
specified index attributes (i.e. *fields*), or that have a value of *null* in
any of the specified index attributes, will be excluded from the index. Such
documents will not be indexed, and will not be taken into account for
uniqueness checks if the *unique* flag is set.

In a non-sparse index, these documents will be indexed (for non-present
indexed attributes, a value of *null* will be used) and will be taken into
account for uniqueness checks if the *unique* flag is set.

**Note**: unique indexes on non-shard keys are not supported in a cluster.

@RESTRETURNCODES

@RESTRETURNCODE{200}
If the index already exists, then an *HTTP 200* is returned.

@RESTRETURNCODE{201}
If the index does not already exist and could be created, then an *HTTP 201*
is returned.

@RESTRETURNCODE{400}
If the collection already contains documents and you try to create a unique
hash index in such a way that there are documents violating the uniqueness,
then an *HTTP 400* is returned.

@RESTRETURNCODE{404}
If the *collection-name* is unknown, then an *HTTP 404* is returned.

@EXAMPLES

Creating a unique constraint

@EXAMPLE_ARANGOSH_RUN{RestIndexCreateNewUniqueConstraint}
    var cn = "products";
    db._drop(cn);
    db._create(cn);

    var url = "/_api/index?collection=" + cn;
    var body = {
      type: "hash",
      unique: true,
      fields: [ "a", "b" ]
    };

    var response = logCurlRequest('POST', url, body);

    assert(response.code === 201);

    logJsonResponse(response);
  ~ db._drop(cn);
@END_EXAMPLE_ARANGOSH_RUN

Creating a non-unique hash index

@EXAMPLE_ARANGOSH_RUN{RestIndexCreateNewHashIndex}
    var cn = "products";
    db._drop(cn);
    db._create(cn);

    var url = "/_api/index?collection=" + cn;
    var body = {
      type: "hash",
      unique: false,
      fields: [ "a", "b" ]
    };

    var response = logCurlRequest('POST', url, body);

    assert(response.code === 201);

    logJsonResponse(response);
  ~ db._drop(cn);
@END_EXAMPLE_ARANGOSH_RUN

Creating a sparse index

@EXAMPLE_ARANGOSH_RUN{RestIndexCreateSparseHashIndex}
    var cn = "products";
    db._drop(cn);
    db._create(cn);

    var url = "/_api/index?collection=" + cn;
    var body = {
      type: "hash",
      unique: false,
      sparse: true,
      fields: [ "a" ]
    };

    var response = logCurlRequest('POST', url, body);

    assert(response.code === 201);

    logJsonResponse(response);
  ~ db._drop(cn);
@END_EXAMPLE_ARANGOSH_RUN
@endDocuBlock


@@ -1,110 +1,110 @@

@startDocuBlock JSF_post_api_index_skiplist
@brief creates a skip-list index

@RESTHEADER{POST /_api/index#skiplist, Create skip list}

@RESTQUERYPARAMETERS

@RESTQUERYPARAM{collection-name,string,required}
The collection name.

@RESTBODYPARAM{type,string,required,string}
must be equal to *"skiplist"*.

@RESTBODYPARAM{fields,array,required,string}
an array of attribute paths.

@RESTBODYPARAM{unique,boolean,required,}
if *true*, then create a unique index.

@RESTBODYPARAM{sparse,boolean,required,}
if *true*, then create a sparse index.

@RESTBODYPARAM{deduplicate,boolean,optional,boolean}
if *false*, the deduplication of array values is turned off.

@RESTDESCRIPTION

Creates a skip-list index for the collection *collection-name*, if
it does not already exist. The call expects an object containing the index
details.

In a sparse index, all documents that do not contain at least one of the
specified index attributes (i.e. *fields*), or that have a value of *null* in
any of the specified index attributes, will be excluded from the index. Such
documents will not be indexed, and will not be taken into account for
uniqueness checks if the *unique* flag is set.

In a non-sparse index, these documents will be indexed (for non-present
indexed attributes, a value of *null* will be used) and will be taken into
account for uniqueness checks if the *unique* flag is set.

**Note**: unique indexes on non-shard keys are not supported in a cluster.

@RESTRETURNCODES

@RESTRETURNCODE{200}
If the index already exists, then an *HTTP 200* is returned.

@RESTRETURNCODE{201}
If the index does not already exist and could be created, then an *HTTP 201*
is returned.

@RESTRETURNCODE{400}
If the collection already contains documents and you try to create a unique
skip-list index in such a way that there are documents violating the
uniqueness, then an *HTTP 400* is returned.

@RESTRETURNCODE{404}
If the *collection-name* is unknown, then an *HTTP 404* is returned.

@EXAMPLES

Creating a skiplist index

@EXAMPLE_ARANGOSH_RUN{RestIndexCreateNewSkiplist}
    var cn = "products";
    db._drop(cn);
    db._create(cn);

    var url = "/_api/index?collection=" + cn;
    var body = {
      type: "skiplist",
      unique: false,
      fields: [ "a", "b" ]
    };

    var response = logCurlRequest('POST', url, body);

    assert(response.code === 201);

    logJsonResponse(response);
  ~ db._drop(cn);
@END_EXAMPLE_ARANGOSH_RUN

Creating a sparse skiplist index

@EXAMPLE_ARANGOSH_RUN{RestIndexCreateSparseSkiplist}
    var cn = "products";
    db._drop(cn);
    db._create(cn);

    var url = "/_api/index?collection=" + cn;
    var body = {
      type: "skiplist",
      unique: false,
      sparse: true,
      fields: [ "a" ]
    };

    var response = logCurlRequest('POST', url, body);

    assert(response.code === 201);

    logJsonResponse(response);
  ~ db._drop(cn);
@END_EXAMPLE_ARANGOSH_RUN
@endDocuBlock


@@ -19,6 +19,7 @@ to the [naming conventions](../NamingConventions/README.md).
  configuration parameter: The maximal
  size of a journal or datafile. Note that this also limits the maximal
  size of a single object. Must be at least 1MB.
  This option is meaningful for the MMFiles storage engine only.

* *isSystem* (optional, default is *false*): If *true*, create a
  system collection. In this case *collection-name* should start with

@@ -34,6 +35,21 @@ to the [naming conventions](../NamingConventions/README.md).
  slightly faster than regular collections because ArangoDB does not
  enforce any synchronization to disk and does not calculate any CRC
  checksums for datafiles (as there are no datafiles).
  This option is meaningful for the MMFiles storage engine only.

* *indexBuckets* (optional, default is *16*): The number of buckets
  into which indexes using a hash table are split. The default is 16 and
  this number has to be a power of 2 and less than or equal to 1024.

  For very large collections one should increase this to avoid long pauses
  when the hash table has to be initially built or resized, since buckets
  are resized individually and can be initially built in parallel. For
  example, 64 might be a sensible value for a collection with 100
  000 000 documents. Currently, only the edge index respects this
  value, but other index types might follow in future ArangoDB versions.
  Changes (see below) are applied when the collection is loaded the next
  time.
  This option is meaningful for the MMFiles storage engine only.
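
A brief arangosh sketch of the *indexBuckets* option (the value 64 is just
the example figure from the paragraph above):

    // create a collection with more index buckets (meaningful under MMFiles)
    var coll = db._create("bigProducts", { indexBuckets: 64 });

    // properties() reports the effective value
    require("internal").print(coll.properties().indexBuckets); // 64
    db._drop("bigProducts");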

* *keyOptions* (optional): additional options for key generation. If
  specified, then *keyOptions* should be a JSON array containing the


@@ -9,10 +9,12 @@ Returns an object containing all collection properties.
  after the data was synced to disk.

* *journalSize*: The size of the journal in bytes.
  This option is meaningful for the MMFiles storage engine only.

* *isVolatile*: If *true*, then the collection data will be
  kept in memory only and ArangoDB will not write or sync the data
  to disk.
  This option is meaningful for the MMFiles storage engine only.

* *keyOptions* (optional): additional options for key generation. This is
  a JSON array containing the following attributes (note: some of the

@@ -31,6 +33,7 @@ Returns an object containing all collection properties.
* *indexBuckets*: number of buckets into which indexes using a hash
  table are split. The default is 16 and this number has to be a
  power of 2 and less than or equal to 1024.
  This option is meaningful for the MMFiles storage engine only.

  For very large collections one should increase this to avoid long pauses
  when the hash table has to be initially built or resized, since buckets

@@ -57,9 +60,11 @@ one or more of the following attribute(s):
  after the data was synced to disk.

* *journalSize*: The size of the journal in bytes.
  This option is meaningful for the MMFiles storage engine only.

* *indexBuckets*: See above; changes are only applied when the
  collection is loaded the next time.
  This option is meaningful for the MMFiles storage engine only.
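
A hedged arangosh sketch of changing these properties (the values are
illustrative; *indexBuckets* changes only take effect when the collection is
loaded the next time, as noted above):

    var coll = db._collection("products");

    // both settings are meaningful for the MMFiles storage engine only
    coll.properties({
      journalSize: 8 * 1048576,  // affects newly created journals/datafiles
      indexBuckets: 32           // applied on next load of the collection
    });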

*Note*: it is not possible to change the journal size after the journal or
datafile has been created. Changing this parameter will only affect newly


@@ -504,6 +504,9 @@ Result AuthInfo::updateUser(std::string const& user,
    TRI_ASSERT(!it->second.key().empty());
    func(it->second);
    data = it->second.toVPackBuilder();
    // must also clear the basic cache here because the secret may be invalid now
    // if the password was changed
    _authBasicCache.clear();
  }

  Result r = UpdateUser(data.slice());

@@ -590,6 +593,9 @@ Result AuthInfo::removeUser(std::string const& user) {
  Result res = removeUserInternal(it->second);
  if (res.ok()) {
    _authInfo.erase(it);
    // must also clear the basic cache here because the secret is invalid now
    _authBasicCache.clear();

    reloadAllUsers();
  }
  return res;

@@ -610,6 +616,7 @@ Result AuthInfo::removeAllUsers() {
  {  // do not get into race conditions with loadFromDB
    MUTEX_LOCKER(locker, _loadFromDBLock);
    _authInfo.clear();
    _authBasicCache.clear();
    _outdated = true;
  }
  reloadAllUsers();


@@ -271,8 +271,8 @@ AuthUserEntry AuthUserEntry::fromDocument(VPackSlice const& slice) {
      }  // if

    } else {
-     LOG_TOPIC(INFO, arangodb::Logger::CONFIG)
-         << "Deprecation Warning: Update access rights for user '"
+     LOG_TOPIC(DEBUG, arangodb::Logger::CONFIG)
+         << "updating deprecated access rights struct for user '"
          << userSlice.copyString() << "'";
      VPackValueLength length;
      char const* value = obj.value.getString(length);


@@ -469,7 +469,7 @@ int TRI_CreateRecursiveDirectory(char const* path, long& systemError,
  char* copy;
  char* p;
  char* s;

  int res = TRI_ERROR_NO_ERROR;
  p = s = copy = TRI_DuplicateString(path);

@@ -486,7 +486,8 @@ int TRI_CreateRecursiveDirectory(char const* path, long& systemError,
      *p = '\0';
      res = TRI_CreateDirectory(copy, systemError, systemErrorStr);

-     if ((res == TRI_ERROR_FILE_EXISTS) || (res == TRI_ERROR_NO_ERROR)) {
+     if (res == TRI_ERROR_FILE_EXISTS || res == TRI_ERROR_NO_ERROR) {
        systemErrorStr.clear();
        res = TRI_ERROR_NO_ERROR;
        *p = TRI_DIR_SEPARATOR_CHAR;
        s = p + 1;

@@ -499,16 +500,20 @@ int TRI_CreateRecursiveDirectory(char const* path, long& systemError,
    p++;
  }

- if (((res == TRI_ERROR_FILE_EXISTS) || (res == TRI_ERROR_NO_ERROR)) &&
+ if ((res == TRI_ERROR_FILE_EXISTS || res == TRI_ERROR_NO_ERROR) &&
      (p - s > 0)) {
    res = TRI_CreateDirectory(copy, systemError, systemErrorStr);

    if (res == TRI_ERROR_FILE_EXISTS) {
      systemErrorStr.clear();
      res = TRI_ERROR_NO_ERROR;
    }
  }

  TRI_Free(TRI_CORE_MEM_ZONE, copy);

  TRI_ASSERT(res != TRI_ERROR_FILE_EXISTS);

  return res;
}

@@ -519,12 +524,11 @@ int TRI_CreateRecursiveDirectory(char const* path, long& systemError,
int TRI_CreateDirectory(char const* path, long& systemError,
                        std::string& systemErrorStr) {
  TRI_ERRORBUF;
- int res;

  // reset error flag
  TRI_set_errno(TRI_ERROR_NO_ERROR);

- res = TRI_MKDIR(path, 0777);
+ int res = TRI_MKDIR(path, 0777);

  if (res == TRI_ERROR_NO_ERROR) {
    return res;

@@ -534,7 +538,7 @@ int TRI_CreateDirectory(char const* path, long& systemError,
  TRI_SYSTEM_ERROR();
  res = errno;
  if (res != TRI_ERROR_NO_ERROR) {
-   systemErrorStr = std::string("Failed to create directory [") + path + "] " +
+   systemErrorStr = std::string("failed to create directory '") + path + "': " +
                     TRI_GET_ERRORBUF;
    systemError = res;


@@ -34,6 +34,19 @@
#include "Zip/iowin32.h"
#endif

static char const* translateError(int err) {
  switch (err) {
    // UNZ_OK and UNZ_EOF have the same numeric value...
    case UNZ_OK: return "no error";
    case UNZ_END_OF_LIST_OF_FILE: return "end of list of file";
    case UNZ_PARAMERROR: return "parameter error";
    case UNZ_BADZIPFILE: return "bad zip file";
    case UNZ_INTERNALERROR: return "internal error";
    case UNZ_CRCERROR: return "crc error";
    default: return "unknown error";
  }
}

////////////////////////////////////////////////////////////////////////////////
/// @brief extracts the current file
////////////////////////////////////////////////////////////////////////////////

@@ -170,7 +183,7 @@ static int ExtractCurrentFile(unzFile uf, void* buffer, size_t const bufferSize,
    fout = fopen(fullPath, "wb");
  }

- if (fout == NULL) {
+ if (fout == nullptr) {
    errorMessage = std::string("failed to open file '") +
                   fullPath + "' for writing: " + strerror(errno);
    TRI_Free(TRI_CORE_MEM_ZONE, fullPath);

@@ -206,7 +219,12 @@ static int ExtractCurrentFile(unzFile uf, void* buffer, size_t const bufferSize,
  }

  int ret = unzCloseCurrentFile(uf);
- if (ret < 0) {
+ if (ret < 0 && ret != UNZ_PARAMERROR) {
+   // we must ignore UNZ_PARAMERROR here.
+   // this error is returned if some of the internal zip file structs are not
+   // properly set up, but this is not a real error here.
+   // we do want to catch CRC errors here though
    errorMessage = std::string("cannot read from zip file: ") + translateError(ret);
    return TRI_ERROR_CANNOT_WRITE_FILE;
  }

@@ -228,7 +246,7 @@ static int UnzipFile(unzFile uf, void* buffer, size_t const bufferSize,

  err = unzGetGlobalInfo64(uf, &gi);
  if (err != UNZ_OK) {
-   errorMessage = "Failed to get info: " + std::to_string(err);
+   errorMessage = "failed to get info: " + std::to_string(err);
    return TRI_ERROR_INTERNAL;
  }

@@ -451,7 +469,7 @@ int TRI_UnzipFile(char const* filename, char const* outPath,
#endif
  if (uf == nullptr) {
    TRI_Free(TRI_UNKNOWN_MEM_ZONE, buffer);
-   errorMessage = std::string("unable to open zip file ") + filename;
+   errorMessage = std::string("unable to open zip file '") + filename + "'";
    return TRI_ERROR_INTERNAL;
  }