
Merge branch 'devel' of https://github.com/arangodb/arangodb into devel

This commit is contained in:
Kaveh Vahedipour 2016-05-09 08:58:39 +02:00
commit c8dca04054
83 changed files with 1544 additions and 3973 deletions

View File

@ -315,10 +315,12 @@ else ()
option(USE_OPTIMIZE_FOR_ARCHITECTURE "try to determine CPU architecture" ON)
if (USE_OPTIMIZE_FOR_ARCHITECTURE)
include(OptimizeForArchitecture)
OptimizeForArchitecture()
if (NOT USE_OPTIMIZE_FOR_ARCHITECTURE)
# mop: core2 (merom) is our absolute minimum!
SET(TARGET_ARCHITECTURE "merom")
endif ()
include(OptimizeForArchitecture)
OptimizeForArchitecture()
set(BASE_FLAGS "${Vc_ARCHITECTURE_FLAGS} ${BASE_FLAGS}")
endif ()

View File

@ -44,30 +44,30 @@ not not 1 // true
- Strings are converted to their numeric equivalent if the string contains a
valid representation of a number. Whitespace at the start and end of the string
is allowed. String values that do not contain any valid representation of a number
will be converted to *null*.
will be converted to the number *0*.
- An empty array is converted to *0*, an array with one member is converted into the
result of `TO_NUMBER()` for its sole member. An array with two or more members is
converted to *null*.
- An object / document is converted to *null*.
converted to the number *0*.
- An object / document is converted to the number *0*.
A unary plus will also try to cast to a number, but `TO_NUMBER()` is the preferred way:
```js
+'5' // 5
+[8] // 8
+[8,9] // null
+{} // null
+[8,9] // 0
+{} // 0
```
An unary minus works likewise, except that a numeric value is also negated:
A unary minus works likewise, except that a numeric value is also negated:
```js
-'5' // -5
-[8] // -8
-[8,9] // null
-{} // null
-[8,9] // 0
-{} // 0
```
- *TO_STRING(value)*: Takes an input *value* of any type and converts it
into a string value as follows:
- *null* is converted to the string *"null"*
- *null* is converted to the empty string *""*
- *false* is converted to the string *"false"*, *true* to the string *"true"*
- Numbers are converted to their string representations. This can also be a
scientific notation: `TO_STRING(0.0000002) // "2e-7"`
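These casting rules can be sketched in plain JavaScript. The `toNumber` helper below is a hypothetical illustration of the documented 3.0 behavior of `TO_NUMBER()`, not ArangoDB's actual implementation:

```js
// Hypothetical sketch of the TO_NUMBER() casting rules described above
// (ArangoDB 3.0 semantics); not the server implementation.
function toNumber(value) {
  if (value === null || value === false) return 0;
  if (value === true) return 1;
  if (typeof value === "number") return Number.isFinite(value) ? value : 0; // NaN/Infinity -> 0
  if (typeof value === "string") {
    const n = Number(value);        // leading/trailing whitespace is tolerated
    return Number.isNaN(n) ? 0 : n; // invalid representations -> 0
  }
  if (Array.isArray(value)) {
    if (value.length === 0) return 0;                  // empty array -> 0
    if (value.length === 1) return toNumber(value[0]); // sole member decides
    return 0;                                          // two or more members -> 0
  }
  return 0; // objects / documents -> 0
}

toNumber(" 5 ");   // 5
toNumber("abc");   // 0 (was null before 3.0)
toNumber([8]);     // 8
toNumber([8, 9]);  // 0 (was null before 3.0)
```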

View File

@ -198,42 +198,39 @@ Some example arithmetic operations:
-15
+9.99
The arithmetic operators accept operands of any type. This behavior has changed in
ArangoDB 2.3. Passing non-numeric values to an arithmetic operator is now allowed.
Any non-numeric operands will be casted to numbers implicitly by the operator,
without making the query abort.
The arithmetic operators accept operands of any type. Passing non-numeric values to an
arithmetic operator will cast the operands to numbers using the type casting rules
applied by the `TO_NUMBER` function:
The *conversion to a numeric value* works as follows:
- `null` will be converted to `0`
- `false` will be converted to `0`, true will be converted to `1`
- a valid numeric value remains unchanged, but NaN and Infinity will be converted to `null`
- a valid numeric value remains unchanged, but NaN and Infinity will be converted to `0`
- string values are converted to a number if they contain a valid string representation
of a number. Any whitespace at the start or the end of the string is ignored. Strings
with any other contents are converted to `null`
with any other contents are converted to the number `0`
- an empty array is converted to `0`, an array with one member is converted to the numeric
representation of its sole member. Arrays with more members are converted
to `null`
- objects / documents are converted to `null`
representation of its sole member. Arrays with more members are converted to the number
`0`.
- objects / documents are converted to the number `0`.
If the conversion to a number produces a value of `null` for one of the operands,
the result of the whole arithmetic operation will also be `null`. An arithmetic operation
that produces an invalid value, such as `1 / 0` will also produce a value of `null`.
An arithmetic operation that produces an invalid value, such as `1 / 0`, will also produce
a result value of `0`.
Here are a few examples:
1 + "a" // null
1 + "a" // 1
1 + "99" // 100
1 + null // 1
null + 1 // 1
3 + [ ] // 3
24 + [ 2 ] // 26
24 + [ 2, 4 ] // null
24 + [ 2, 4 ] // 24
25 - null // 25
17 - true // 16
23 * { } // null
23 * { } // 0
5 * [ 7 ] // 35
24 / "12" // 2
1 / 0 // null
1 / 0 // 0
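The coercion above can be mimicked with a small JavaScript sketch (hypothetical `add`/`div` helpers, not ArangoDB code): operands are cast with `TO_NUMBER()`-style rules, and an invalid result such as `1 / 0` collapses to `0`.

```js
// Hypothetical sketch of AQL 3.0 arithmetic coercion; not ArangoDB code.
function toNumber(v) {
  if (v === null || v === false) return 0;
  if (v === true) return 1;
  if (typeof v === "number") return Number.isFinite(v) ? v : 0;
  if (typeof v === "string") { const n = Number(v); return Number.isNaN(n) ? 0 : n; }
  if (Array.isArray(v)) return v.length === 0 ? 0 : v.length === 1 ? toNumber(v[0]) : 0;
  return 0; // objects / documents
}
// an operation whose result is not a valid number (e.g. 1 / 0) yields 0
const valid = (n) => (Number.isFinite(n) ? n : 0);
const add = (a, b) => valid(toNumber(a) + toNumber(b));
const div = (a, b) => valid(toNumber(a) / toNumber(b));

add(1, "a");    // 1
add(1, "99");   // 100
div(24, "12");  // 2
div(1, 0);      // 0
```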
!SUBSUBSECTION Ternary operator

View File

@ -78,30 +78,6 @@ The default is *false*.
@startDocuBlock keep_alive_timeout
!SUBSECTION Default API compatibility
`--server.default-api-compatibility`
This option can be used to determine the API compatibility of the ArangoDB
server. It expects an ArangoDB version number as an integer, calculated as
follows:
*10000 \* major + 100 \* minor (example: *10400* for ArangoDB 1.4)*
The value of this option will have an influence on some API return values
when the HTTP client used does not send any compatibility information.
In most cases it will be sufficient to not set this option explicitly but to
keep the default value. However, in case an "old" ArangoDB client is used
that does not send any compatibility information and that cannot handle the
responses of the current version of ArangoDB, it might be reasonable to set
the option to an old version number to improve compatibility with older
clients.
!SUBSECTION Hide Product header

View File

@ -14,7 +14,6 @@ The pattern for collection directory names was changed in 3.0 to include a rando
id component at the end. The new pattern is `collection-<id>-<random>`, where `<id>`
is the collection id and `<random>` is a random number. Previous versions of ArangoDB
used a pattern `collection-<id>` without the random number.
!SECTION Edges and edges attributes
@ -36,14 +35,456 @@ referenced by `_from` and `_to` values may be dropped and re-created later. Any
`_from` and `_to` values of edges pointing to such dropped collection are unaffected
by the drop operation now.
!SECTION Documents
Documents (in contrast to edges) cannot contain the attributes `_from` or `_to` on the
main level in ArangoDB 3.0. These attributes will be automatically removed when saving
documents (i.e. non-edges). `_from` and `_to` can be still used in sub-objects inside
documents.
The `_from` and `_to` attributes will of course be preserved and are still required when
saving edges.
!SECTION AQL
!SUBSECTION Edges handling
When updating or replacing edges via AQL, any modifications to the `_from` and `_to`
attributes of edges were ignored by previous versions of ArangoDB, without signaling
any errors. This was due to the `_from` and `_to` attributes being immutable in earlier
versions of ArangoDB.
From 3.0 on, the `_from` and `_to` attributes of edges are mutable, so any AQL queries that
modify the `_from` or `_to` attribute values of edges will attempt to actually change these
attributes. Clients should be aware of this change and should review their queries that
modify edges to rule out unintended side-effects.
Additionally, when completely replacing the data of existing edges via the AQL `REPLACE`
operation, it is now required to specify values for the `_from` and `_to` attributes,
as `REPLACE` requires the entire new document to be specified. If either `_from` or `_to`
are missing from the replacement document, a `REPLACE` operation will fail.
!SUBSECTION Typecasting functions
The type casting applied by the `TO_NUMBER()` AQL function has changed as follows:
- string values that do not contain a valid numeric value are now converted to the number
`0`. In previous versions of ArangoDB such string values were converted to the value
`null`.
- array values with more than 1 member are now converted to the number `0`. In previous
versions of ArangoDB such arrays were converted to the value `null`.
- objects / documents are now converted to the number `0`. In previous versions of ArangoDB
objects / documents were converted to the value `null`.
Additionally, the `TO_STRING()` AQL function now converts `null` values into an empty string
(`""`) instead of the string `"null"`.
!SUBSECTION Attribute names and parameters
Previous versions of ArangoDB had some trouble with attribute names that contained the dot
symbol (`.`). Some code parts in AQL used the dot symbol to split an attribute name into
sub-components, so an attribute named `a.b` was not completely distinguishable from an
attribute `a` with a sub-attribute `b`. This inconsistent behavior sometimes allowed "hacks"
to work such as passing sub-attributes in a bind parameter as follows:
```
FOR doc IN collection
FILTER doc.@name == 1
RETURN doc
```
If the bind parameter `@name` contained the dot symbol (e.g. `@name` = `a.b`), it was unclear
whether this should trigger sub-attribute access (i.e. `doc.a.b`) or an access to an attribute
with exactly the specified name (i.e. `doc."a.b"`).
ArangoDB 3.0 now handles attribute names containing the dot symbol properly, and sending a
bind parameter `@name` = `a.b` will now always trigger an access to the attribute `doc."a.b"`,
not the sub-attribute `b` of `a` in `doc`.
For users that used the "hack" of passing bind parameters containing dot symbol to access
sub-attributes, ArangoDB 3.0 allows specifying the attribute name parts as an array of strings,
e.g. `@name` = `[ "a", "b" ]`, which will be resolved to the sub-attribute access `doc.a.b`
when the query is executed.
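The difference between a literal attribute name containing a dot and an array of name parts can be sketched with a small JavaScript helper (hypothetical, for illustration only):

```js
// Resolves a bind-parameter-style attribute reference against a document:
// a string is treated as one literal attribute name (3.0 behavior, even if
// it contains dots), an array of strings as a path of sub-attributes.
function resolveAttribute(doc, name) {
  if (Array.isArray(name)) {
    // [ "a", "b" ] -> doc.a.b
    return name.reduce((obj, key) => (obj == null ? undefined : obj[key]), doc);
  }
  return doc[name]; // "a.b" -> doc["a.b"], never doc.a.b
}

const doc = { "a.b": 1, a: { b: 2 } };
resolveAttribute(doc, "a.b");      // 1 (literal attribute "a.b")
resolveAttribute(doc, ["a", "b"]); // 2 (sub-attribute access doc.a.b)
```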
!SUBSECTION Arithmetic operators
As the arithmetic operations in AQL implicitly convert their operands to numeric values using
`TO_NUMBER()`, their casting behavior has also changed as described above.
Some examples of the changed behavior:
- `"foo" + 1` produces `1` now. In previous versions this produced `null`.
- `[ 1, 2 ] + 1` produces `1`. In previous versions this produced `null`.
- `1 + "foo" + 1` produces `2` now. In previous versions this produced `1`.
!SUBSECTION Keywords
`LIKE` is now a keyword in AQL. Using `LIKE` in either case as an attribute or collection
name in AQL queries now requires quoting.
!SUBSECTION Subqueries
Queries with subqueries that contain data-modification operations such as `INSERT`,
`UPDATE`, `REPLACE`, `UPSERT` or `REMOVE` will now refuse to execute if the collection
affected by the subquery's data-modification operation is read-accessed in an outer scope
of the query.
For example, the following query will refuse to execute as the collection `myCollection`
is modified in the subquery but also read-accessed in the outer scope:
```
FOR doc IN myCollection
LET changes = (
FOR what IN myCollection
FILTER what.value == 1
REMOVE what IN myCollection
)
RETURN doc
```
It is still possible to write to collections from which data is read in the same query,
e.g.
```
FOR doc IN myCollection
FILTER doc.value == 1
REMOVE doc IN myCollection
```
and to modify data in different collections via subqueries.
!SUBSECTION Other changes
The AQL optimizer rule "merge-traversal-filter" that already existed in 3.0 was renamed to
"optimize-traversals". This should be of no relevance to client applications except if
they programmatically look for applied optimizer rules in the explain output of AQL queries.
!SECTION JavaScript API changes
The following incompatible changes have been made to the JavaScript API in ArangoDB 3.0:
!SUBSECTION Edges API
When completely replacing an edge via a collection's `replace()` function the replacing
edge data now needs to contain the `_from` and `_to` attributes for the new edge. Previous
versions of ArangoDB did not require the edge data to contain `_from` and `_to` attributes
when replacing an edge, since `_from` and `_to` values were immutable for existing edges.
For example, the following call worked in ArangoDB 2.8 but will fail in 3.0:
```js
db.edgeCollection.replace("myKey", { value: "test" });
```
To make this work in ArangoDB 3.0, `_from` and `_to` need to be added to the replacement
data:
```js
db.edgeCollection.replace("myKey", { _from: "myVertexCollection/1", _to: "myVertexCollection/2", value: "test" });
```
Note that this only affects the `replace()` function but not `update()`, which will
only update the specified attributes of the edge and leave all others intact.
Additionally, the functions `edges()`, `outEdges()` and `inEdges()` with an array of edge
ids will now make the edge ids unique before returning the connected edges. This is probably
desired anyway, as results will be returned only once per distinct input edge id. However,
it may break client applications that rely on the old behavior.
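The deduplication amounts to making the input ids unique before the lookup. A minimal JavaScript sketch, using a hypothetical `Map`-based index and `lookupEdges` helper (not the server implementation):

```js
// 3.0 behavior sketch: input ids are made unique before the lookup, so each
// distinct id contributes its connected edges to the result only once.
function lookupEdges(edgeIndex, ids) {
  const unique = [...new Set(ids)]; // drop duplicate input ids
  return unique.flatMap((id) => edgeIndex.get(id) || []);
}

const index = new Map([
  ["v/1", [{ _id: "e/1" }]],
  ["v/2", [{ _id: "e/2" }]],
]);
lookupEdges(index, ["v/1", "v/1", "v/2"]); // two edges, not three
```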
!SUBSECTION Collection API
!SUBSUBSECTION Example matching
The collection functions `byExampleHash()` and `byExampleSkiplist()` have been removed in 3.0.
Their functionality is provided by the collection's `byExample()` function, which will automatically
use a suitable index if present.
The collection function `byConditionSkiplist()` has been removed in 3.0. The same functionality
can be achieved by issuing an AQL query with the target condition, which will automatically use
a suitable index if present.
!SUBSUBSECTION Revision id handling
The `exists()` method of a collection now throws an exception when the specified document
exists but its revision id does not match the revision id specified. Previous versions of
ArangoDB simply returned `false` if either no document existed with the specified key or
when the revision id did not match. It was therefore impossible to distinguish these two
cases from the return value alone. 3.0 corrects this. Additionally, `exists()` in previous
versions always returned a boolean if only the document key was given. 3.0 now returns the
document's meta-data, which includes the document's current revision id.
Given there is a document with key `test` in collection `myCollection`, then the behavior
of 3.0 is as follows:
```js
/* test if document exists. this returned true in 2.8 */
db.myCollection.exists("test");
{
"_key" : "test",
"_id" : "myCollection/test",
"_rev" : "9758059"
}
/* test if document exists. this returned true in 2.8 */
db.myCollection.exists({ _key: "test" });
{
"_key" : "test",
"_id" : "myCollection/test",
"_rev" : "9758059"
}
/* test if document exists. this also returned false in 2.8 */
db.myCollection.exists("foo");
false
/* test if document with a given revision id exists. this returned true in 2.8 */
db.myCollection.exists({ _key: "test", _rev: "9758059" });
{
"_key" : "test",
"_id" : "myCollection/test",
"_rev" : "9758059"
}
/* test if document with a given revision id exists. this returned false in 2.8 */
db.myCollection.exists({ _key: "test", _rev: "1234" });
JavaScript exception: ArangoError 1200: conflict
```
!SUBSUBSECTION Cap constraints
The cap constraints feature has been removed. This change has led to the removal of the
collection operations `first()` and `last()`, which were internally based on data from
cap constraints.
As cap constraints have been removed in ArangoDB 3.0 it is not possible to create an
index of type "cap" with a collection's `ensureIndex()` function. The dedicated function
`ensureCapConstraint()` has also been removed from the collection API.
!SUBSUBSECTION Undocumented APIs
The undocumented functions `BY_EXAMPLE_HASH()`, `BY_EXAMPLE_SKIPLIST()` and
`BY_CONDITION_SKIPLIST()` have been removed. These functions were always hidden and not
intended to be part of the public JavaScript API for collections.
!SECTION HTTP API changes
!SUBSECTION CRUD operations
The following incompatible changes have been made to the HTTP API in ArangoDB 3.0:
!SUBSUBSECTION General
The HTTP insert operations for single documents and edges (POST `/_api/document`) do
not support the URL parameter "createCollection" anymore. In previous versions of
ArangoDB this parameter could be used to automatically create a collection upon
insertion of the first document. It is now required that the target collection already
exists when using this API, otherwise it will return an HTTP 404 error.
Collections can still be created easily via a separate call to POST `/_api/collection`
as before.
The "location" HTTP header returned by ArangoDB when inserting a new document or edge
now always contains the database name. This was also the default behavior in previous
versions of ArangoDB, but it could be overridden by clients sending the HTTP header
`x-arango-version: 1.4` in the request. Clients can continue to send this header to
ArangoDB 3.0, but the header will not influence the location response headers produced
by ArangoDB 3.0 anymore.
Additionally, the CRUD operation APIs no longer return an "error" attribute with a
value of "false" in the response body when an operation succeeds.
!SUBSUBSECTION Revision id handling
The operations for updating, replacing and removing documents can optionally check the
revision number of the document to be updated, replaced or removed so the caller can
ensure the operation works on a specific version of the document and there are no
lost updates.
Previous versions of ArangoDB allowed passing the revision id of the previous document
either in the HTTP header `If-Match` or in the URL parameter `rev`. For example,
removing a document with a specific revision id could be achieved as follows:
```
curl -X DELETE \
"http://127.0.0.1:8529/_api/document/myCollection/myKey?rev=123"
```
ArangoDB 3.0 does not support passing the revision id via the "rev" URL parameter
anymore. Instead the previous revision id must be passed in the HTTP header `If-Match`,
e.g.
```
curl -X DELETE \
--header "If-Match: '123'" \
"http://127.0.0.1:8529/_api/document/myCollection/myKey"
```
The URL parameter "policy" was also usable in previous versions of ArangoDB to
control revision handling. Using it was redundant with specifying the expected revision
id via the "rev" parameter or the "If-Match" HTTP header, and therefore support for the "policy"
parameter was removed in 3.0.
In order to check for a previous revision id when updating, replacing or removing
documents please use the `If-Match` HTTP header as described above. When no revision
check is required the HTTP header can be omitted, and the operations will work on the
current revision of the document, regardless of its revision id.
!SUBSECTION All documents API
The HTTP API for retrieving the ids, keys or URLs of all documents from a collection
was previously located at GET `/_api/document?collection=...`. This API was moved to
PUT `/_api/simple/all-keys` and is now executed as an AQL query.
The name of the collection must now be passed in the HTTP request body instead of in
the request URL. The same is true for the "type" parameter, which controls the type of
the result to be created.
Calls to the previous API can be translated as follows:
- old: GET `/_api/document?collection=<collection>&type=<type>` without HTTP request body
- 3.0: PUT `/_api/simple/all-keys` with HTTP request body `{"name":"<collection>","type":"id"}`
The result format of this API has also changed slightly. In previous versions calls to
the API returned a JSON object with a `documents` attribute. As the functionality is
based on AQL internally in 3.0, the API now returns a JSON object with a `result` attribute.
!SUBSECTION Edges API
!SUBSUBSECTION CRUD operations
The APIs for documents and edges have been unified in ArangoDB 3.0. The CRUD operations
for documents and edges are now handled by the same endpoint at `/_api/document`. For
CRUD operations there is no distinction anymore between documents and edges API-wise.
That means CRUD operations concerning edges need to be sent to the HTTP endpoint
`/_api/document` instead of `/_api/edge`. Sending requests to `/_api/edge` will
result in an HTTP 404 error in 3.0. The following methods are available at
`/_api/document` for documents and edges:
- HTTP POST: insert new document or edge
- HTTP GET: fetch an existing document or edge
- HTTP PUT: replace an existing document or edge
- HTTP PATCH: partially update an existing document or edge
- HTTP DELETE: remove an existing document or edge
When completely replacing an edge via HTTP PUT please note that the replacing edge
data now needs to contain the `_from` and `_to` attributes for the edge. Previous
versions of ArangoDB did not require sending `_from` and `_to` when replacing edges,
as `_from` and `_to` values were immutable for existing edges.
The `_from` and `_to` attributes of edges now also need to be present inside the
edges objects sent to the server:
```
curl -X POST \
--data '{"value":1,"_from":"myVertexCollection/1","_to":"myVertexCollection/2"}' \
"http://127.0.0.1:8529/_api/document?collection=myEdgeCollection"
```
Previous versions of ArangoDB required the `_from` and `_to` attributes of edges to be
sent separately in the URL parameters `from` and `to`:
```
curl -X POST \
--data '{"value":1}' \
"http://127.0.0.1:8529/_api/edge?collection=e&from=myVertexCollection/1&to=myVertexCollection/2"
```
!SUBSUBSECTION Querying connected edges
The REST API for querying connected edges at GET `/_api/edges/<collection>` will now
make the edge ids unique before returning the connected edges. This is probably desired anyway
as results will now be returned only once per distinct input edge id. However, it may break
client applications that rely on the old behavior.
!SUBSUBSECTION Graph API
Some data-modification operations in the named graphs API at `/_api/gharial` now return either
HTTP 202 (Accepted) or HTTP 201 (Created) if the operation succeeds. Which status code is returned
depends on the `waitForSync` attribute of the affected collection. In previous versions some
of these operations returned HTTP 200 regardless of the `waitForSync` value.
!SUBSECTION Simple queries API
The REST routes PUT `/_api/simple/first` and `/_api/simple/last` have been removed
entirely. These APIs were responsible for returning the first-inserted and
last-inserted documents in a collection. This feature was built on cap constraints
internally, which have been removed in 3.0.
Calling one of these endpoints in 3.0 will result in an HTTP 404 error.
!SUBSECTION Indexes API
It is not supported in 3.0 to create an index of type `cap` (cap constraint),
as the cap constraints feature has been removed. Calling the index creation
endpoint HTTP API POST `/_api/index?collection=...` with an index type `cap` will
therefore result in an HTTP 400 error.
!SUBSECTION Log entries API
The REST route HTTP GET `/_admin/log` is now accessible from within all databases. In
previous versions of ArangoDB, this route was accessible from within the `_system`
database only, and an HTTP 403 (Forbidden) was thrown by the server for any access
from within another database.
!SUBSECTION Figures API
The REST route HTTP GET `/_api/collection/<collection>/figures` will not return the
following result attributes as they became meaningless in 3.0:
- shapefiles.count
- shapes.fileSize
- shapes.count
- shapes.size
- attributes.count
- attributes.size
!SUBSECTION Databases and Collections APIs
When creating a database via the API POST `/_api/database`, ArangoDB will now always
return the HTTP status code 202 (created) if the operation succeeds. Previous versions
of ArangoDB returned HTTP 202 as well, but this behavior was changeable by sending an
HTTP header `x-arango-version: 1.4`. When sending this header, previous versions of
ArangoDB returned an HTTP status code 200 (ok). Clients can still send this header to
ArangoDB 3.0 but this will not influence the HTTP status code produced by ArangoDB.
The "location" header produced by ArangoDB 3.0 will now always contain the database
name. This was also the default in previous versions of ArangoDB, but the behavior
could be overridden by sending the HTTP header `x-arango-version: 1.4`. Clients can
still send the header, but this will not make the database name in the "location"
response header disappear.
!SUBSECTION Replication APIs
The URL parameter "failOnUnknown" was removed from the REST API GET `/_api/replication/dump`.
This parameter controlled whether dumping or replicating edges should fail if one
of the vertex collections linked in the edge's `_from` or `_to` attributes was not
present anymore. In this case the `_from` and `_to` values could not be translated into
meaningful ids anymore.
There were two ways for handling this:
- setting `failOnUnknown` to `true` caused the HTTP request to fail, leaving error
handling to the user
- setting `failOnUnknown` to `false` caused the HTTP request to continue, translating
the collection name part in the `_from` or `_to` value to `_unknown`.
In ArangoDB 3.0 this parameter is obsolete, as `_from` and `_to` are stored as self-contained
string values all the time, so they cannot get invalid when referenced collections are
dropped.
!SUBSECTION Undocumented APIs
The following undocumented HTTP REST endpoints have been removed from ArangoDB's REST
API:
- `/_open/cerberus` and `/_system/cerberus`: these endpoints were intended for some
ArangoDB-internal applications only
- PUT `/_api/simple/by-example-hash`, PUT `/_api/simple/by-example-skiplist` and
PUT `/_api/simple/by-condition-skiplist`: these methods were documented in early
versions of ArangoDB but have been marked as not intended to be called by end
users since ArangoDB version 2.3. These methods should not have been part of any
ArangoDB manual since version 2.4.
- `/_api/structure`: an unfinished API for data format and type checks, superseded
by Foxx.
!SECTION Command-line options
@ -146,17 +587,18 @@ The syslog-related options `--log.application` and `--log.facility` have been re
They are superseded by the more general `--log.output` option which can also handle
syslog targets.
!SECTION HTTP API changes
!SUBSECTION Removed other options
The REST route HTTP GET `/_admin/log` is now accessible from within all databases. In
previous versions of ArangoDB, this route was accessible from within the `_system`
database only, and an HTTP 403 (Forbidden) was thrown by the server for any access
from within another database.
The undocumented HTTP REST endpoints `/_open/cerberus` and `/_system/cerberus` have
been removed. These endpoints have been used by some ArangoDB-internal applications
and were not part of ArangoDB's public API.
The option `--server.default-api-compatibility` was present in earlier versions of
ArangoDB to control various aspects of the server behavior, e.g. HTTP return codes
or the format of HTTP "location" headers. Client applications could send an HTTP
header "x-arango-version" with a version number to request the server behavior of
a certain ArangoDB version.
This option was only honored in a handful of cases (described above) and was removed
in 3.0 because the server behaviors it controlled had already changed before
ArangoDB 2.0. This should have left enough time for client applications
to adapt to the new behavior, making the option superfluous in 3.0.
!SECTION ArangoShell and client tools
@ -168,7 +610,6 @@ and all client tools use these APIs.
In order to connect to earlier versions of ArangoDB with the client tools, an older
version of the client tools needs to be kept installed.
!SUBSECTION Command-line options changed
For all client tools, the option `--server.disable-authentication` was renamed to
@ -178,16 +619,17 @@ is the opposite of the previous `--server.disable-authentication`.
The command-line option `--quiet` was removed from all client tools except arangosh
because it had no effect in those tools.
!SUBSECTION Arangobench
In order to make its purpose more apparent, the former `arangob` client tool has
been renamed to `arangobench` in 3.0.
!SECTION Miscellaneous changes
The checksum calculation algorithm for the `collection.checksum()` method and its
corresponding REST API has changed in 3.0. Checksums calculated in 3.0 will differ
from checksums calculated with 2.8 or before.
corresponding REST API GET `/_api/collection/<collection>/checksum` has changed in 3.0.
Checksums calculated in 3.0 will differ from checksums calculated with 2.8 or before.
The ArangoDB server in 3.0 does not read a file `ENDPOINTS` containing a list of
additional endpoints on startup. In 2.8 this file was automatically read if present
in the database directory.

View File

@ -19,8 +19,14 @@ An array of additional vertex collections.
@RESTRETURNCODES
@RESTRETURNCODE{201}
Is returned if the graph could be created and waitForSync is enabled
for the `_graphs` collection. The response body contains the
graph configuration that has been stored.
@RESTRETURNCODE{202}
Is returned if the graph could be created. The body contains the
Is returned if the graph could be created and waitForSync is disabled
for the `_graphs` collection. The response body contains the
graph configuration that has been stored.
@RESTRETURNCODE{409}

View File

@ -19,8 +19,13 @@ dropped if they are not used in other graphs.
@RESTRETURNCODES
@RESTRETURNCODE{201}
Is returned if the graph could be dropped and waitForSync is enabled
for the `_graphs` collection.
@RESTRETURNCODE{202}
Returned if the graph could be dropped.
Returned if the graph could be dropped and waitForSync is disabled
for the `_graphs` collection.
@RESTRETURNCODE{404}
Returned if no graph with this name could be found.

View File

@ -29,8 +29,13 @@ One or many edge collections that can contain target vertices.
@RESTRETURNCODES
@RESTRETURNCODE{201}
Returned if the definition could be added successfully and
waitForSync is enabled for the `_graphs` collection.
@RESTRETURNCODE{202}
Returned if the definition could be added successfully.
Returned if the definition could be added successfully and
waitForSync is disabled for the `_graphs` collection.
@RESTRETURNCODE{400}
Returned if the definition could not be added, the edge collection

View File

@ -26,8 +26,8 @@ One or many edge collections that can contain target vertices.
@RESTRETURNCODES
@RESTRETURNCODE{200}
Returned if the edge definition could be replaced.
@RESTRETURNCODE{201}
Returned if the request was successful and waitForSync is true.
@RESTRETURNCODE{202}
Returned if the request was successful but waitForSync is false.

View File

@ -24,8 +24,13 @@ Collection will only be dropped if it is not used in other graphs.
@RESTRETURNCODES
@RESTRETURNCODE{201}
Returned if the edge definition could be removed from the graph
and waitForSync is true.
@RESTRETURNCODE{202}
Returned if the edge definition could be removed from the graph.
Returned if the edge definition could be removed from the graph and
waitForSync is false.
@RESTRETURNCODE{400}
Returned if no edge definition with this name is found in the graph.

View File

@ -34,8 +34,8 @@ The body has to be the JSON object to be stored.
@RESTRETURNCODES
@RESTRETURNCODE{200}
Returned if the edge could be replaced.
@RESTRETURNCODE{201}
Returned if the request was successful and waitForSync is true.
@RESTRETURNCODE{202}
Returned if the request was successful but waitForSync is false.

View File

@ -14,8 +14,13 @@ The name of the graph.
@RESTRETURNCODES
@RESTRETURNCODE{201}
Returned if the edge collection could be added successfully and
waitForSync is true.
@RESTRETURNCODE{202}
Returned if the edge collection could be added successfully.
Returned if the edge collection could be added successfully and
waitForSync is false.
@RESTRETURNCODE{404}
Returned if no graph with this name could be found.

View File

@ -23,8 +23,9 @@ Collection will only be dropped if it is not used in other graphs.
@RESTRETURNCODES
@RESTRETURNCODE{200}
Returned if the vertex collection was removed from the graph successfully.
@RESTRETURNCODE{201}
Returned if the vertex collection was removed from the graph successfully
and waitForSync is true.
@RESTRETURNCODE{202}
Returned if the request was successful but waitForSync is false.

View File

@ -263,129 +263,6 @@ BOOST_AUTO_TEST_CASE (tst_json_string2) {
FREE_BUFFER
}
////////////////////////////////////////////////////////////////////////////////
/// @brief test string reference value
////////////////////////////////////////////////////////////////////////////////
BOOST_AUTO_TEST_CASE (tst_json_string_reference) {
INIT_BUFFER
const char* data = "The Quick Brown Fox";
char copy[64];
memset(copy, 0, sizeof(copy));
memcpy(copy, data, strlen(data));
TRI_json_t* json = TRI_CreateStringReferenceJson(TRI_UNKNOWN_MEM_ZONE, copy, strlen(copy));
BOOST_CHECK_EQUAL(true, TRI_IsStringJson(json));
STRINGIFY
BOOST_CHECK_EQUAL("\"The Quick Brown Fox\"", STRING_VALUE);
FREE_BUFFER
FREE_JSON
// freeing JSON should not affect our string
BOOST_CHECK_EQUAL("The Quick Brown Fox", copy);
json = TRI_CreateStringReferenceJson(TRI_UNKNOWN_MEM_ZONE, copy, strlen(copy));
BOOST_CHECK_EQUAL(true, TRI_IsStringJson(json));
// modify the string we're referring to
copy[0] = '*';
copy[1] = '/';
copy[2] = '+';
copy[strlen(copy) - 1] = '!';
sb = TRI_CreateStringBuffer(TRI_UNKNOWN_MEM_ZONE);
STRINGIFY
BOOST_CHECK_EQUAL("\"*/+ Quick Brown Fo!\"", STRING_VALUE);
FREE_BUFFER
BOOST_CHECK_EQUAL("*/+ Quick Brown Fo!", copy);
FREE_JSON
}
////////////////////////////////////////////////////////////////////////////////
/// @brief test string reference value
////////////////////////////////////////////////////////////////////////////////
BOOST_AUTO_TEST_CASE (tst_json_string_reference2) {
INIT_BUFFER
const char* data1 = "The first Brown Fox";
const char* data2 = "The second Brown Fox";
char copy1[64];
char copy2[64];
TRI_json_t* json;
size_t len1 = strlen(data1);
size_t len2 = strlen(data2);
memset(copy1, 0, sizeof(copy1));
memcpy(copy1, data1, len1);
memset(copy2, 0, sizeof(copy2));
memcpy(copy2, data2, len2);
json = TRI_CreateObjectJson(TRI_UNKNOWN_MEM_ZONE);
TRI_Insert3ObjectJson(TRI_UNKNOWN_MEM_ZONE, json, "first",
TRI_CreateStringReferenceJson(TRI_UNKNOWN_MEM_ZONE, copy1, strlen(copy1)));
TRI_Insert3ObjectJson(TRI_UNKNOWN_MEM_ZONE, json, "second",
TRI_CreateStringReferenceJson(TRI_UNKNOWN_MEM_ZONE, copy2, len2));
BOOST_CHECK_EQUAL(true, TRI_IsObjectJson(json));
STRINGIFY
BOOST_CHECK_EQUAL("{\"first\":\"The first Brown Fox\",\"second\":\"The second Brown Fox\"}", STRING_VALUE);
FREE_BUFFER
FREE_JSON
// freeing JSON should not affect our string
BOOST_CHECK_EQUAL("The first Brown Fox", copy1);
BOOST_CHECK_EQUAL("The second Brown Fox", copy2);
json = TRI_CreateObjectJson(TRI_UNKNOWN_MEM_ZONE);
TRI_Insert3ObjectJson(TRI_UNKNOWN_MEM_ZONE, json, "first",
TRI_CreateStringReferenceJson(TRI_UNKNOWN_MEM_ZONE, copy1, strlen(copy1)));
TRI_Insert3ObjectJson(TRI_UNKNOWN_MEM_ZONE, json, "second",
TRI_CreateStringReferenceJson(TRI_UNKNOWN_MEM_ZONE, copy2, len2));
BOOST_CHECK_EQUAL(true, TRI_IsObjectJson(json));
// modify the string we're referring to
copy1[0] = '*';
copy1[1] = '/';
copy1[2] = '+';
copy1[len1 - 1] = '!';
copy2[0] = '*';
copy2[1] = '/';
copy2[2] = '+';
copy2[len2 - 1] = '!';
BOOST_CHECK_EQUAL("*/+ first Brown Fo!", copy1);
BOOST_CHECK_EQUAL("*/+ second Brown Fo!", copy2);
sb = TRI_CreateStringBuffer(TRI_UNKNOWN_MEM_ZONE);
STRINGIFY
BOOST_CHECK_EQUAL("{\"first\":\"*/+ first Brown Fo!\",\"second\":\"*/+ second Brown Fo!\"}", STRING_VALUE);
FREE_BUFFER
// freeing JSON should not affect our string
BOOST_CHECK_EQUAL("*/+ first Brown Fo!", copy1);
BOOST_CHECK_EQUAL("*/+ second Brown Fo!", copy2);
FREE_JSON
}
////////////////////////////////////////////////////////////////////////////////
/// @brief test string value (escaped)
////////////////////////////////////////////////////////////////////////////////

View File

@ -1,137 +0,0 @@
# coding: utf-8
require 'rspec'
require 'arangodb.rb'
describe ArangoDB do
################################################################################
## general tests
################################################################################
context "checking compatibility features:" do
it "tests the compatibility value when no header is set" do
doc = ArangoDB.get("/_admin/echo", :headers => { })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(30000)
end
it "tests the compatibility value when a broken header is set" do
versions = [ "1", "1.", "-1.3", "-1.3.", "x.4", "xx", "", " ", ".", "foobar", "foobar1.3", "xx1.4" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(30000)
end
end
it "tests the compatibility value when a valid header is set" do
versions = [ "1.3.0", "1.3", "1.3-devel", "1.3.1", "1.3.99", "10300", "10303" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10300)
end
end
it "tests the compatibility value when a valid header is set" do
versions = [ "1.4.0", "1.4.1", "1.4.2", "1.4.0-devel", "1.4.0-beta2", " 1.4", "1.4 ", " 1.4.0", " 1.4.0 ", "10400", "10401", "10499" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10400)
end
end
it "tests the compatibility value when a valid header is set" do
versions = [ "1.5.0", "1.5.1", "1.5.2", "1.5.0-devel", "1.5.0-beta2", " 1.5", "1.5 ", " 1.5.0", " 1.5.0 ", "10500", "10501", "10599" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10500)
end
end
it "tests the compatibility value when a valid header is set" do
versions = [ "2.0.0", "2.0.0-devel", "2.0.0-alpha", "2.0", " 2.0", "2.0 ", " 2.0.0", " 2.0.0 ", "20000", "20000 ", "20099" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(20000)
end
end
it "tests the compatibility value when a valid header is set" do
versions = [ "2.1.0", "2.1.0-devel", "2.1.0-alpha", "2.1", " 2.1", "2.1 ", " 2.1.0", " 2.1.0 ", "20100", "20100 ", "20199" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(20100)
end
end
it "tests the compatibility value when a valid header is set" do
versions = [ "2.2.0", "2.2.0-devel", "2.2.0-alpha", "2.2", " 2.2", "2.2 ", " 2.2.0", " 2.2.0 ", "20200", "20200 ", "20299" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(20200)
end
end
it "tests the compatibility value when a too low version is set" do
versions = [ "0.0", "0.1", "0.2", "0.9", "1.0", "1.1", "1.2" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10300)
end
end
it "tests the compatibility value when a too high version is set" do
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => "2.4" })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(20400)
end
end
end
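The spec deleted above pins down how an `x-arango-version` header maps to the integer compatibility value: `"1.4"` becomes `10400`, already-numeric values pass through, unparseable input falls back to the server default, and anything below the 1.3 minimum is clamped up. A minimal re-implementation sketch (an assumption for illustration, not the server's actual parser):

```ruby
# Assumed normalization of the "x-arango-version" header into the integer
# compatibility value the deleted spec expects: "1.4" -> 10400,
# "10400" -> 10400, broken input (e.g. "foobar1.3") -> the default
# (30000 here, matching the spec), versions below 1.3 clamped to 10300.
DEFAULT_COMPATIBILITY = 30000
MINIMUM_COMPATIBILITY = 10300

def compatibility_value(header)
  s = header.to_s.strip
  v = if s =~ /\A(\d+)\.(\d+)/        # "major.minor", e.g. " 1.4.0-devel "
        $1.to_i * 10000 + $2.to_i * 100
      elsif s =~ /\A\d{5,}\z/         # already numeric, e.g. "10400"
        s.to_i
      else
        return DEFAULT_COMPATIBILITY  # broken or missing header
      end
  v < MINIMUM_COMPATIBILITY ? MINIMUM_COMPATIBILITY : v
end
```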

View File

@ -67,28 +67,6 @@ describe ArangoDB do
ArangoDB.delete(api + "/#{name}")
end
it "creates a new database, old return code" do
body = "{\"name\" : \"#{name}\" }"
doc = ArangoDB.log_post("#{prefix}-create", api, :body => body, :headers => { "X-Arango-Version" => "1.4" })
doc.code.should eq(200)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
response = doc.parsed_response
response["result"].should eq(true)
response["error"].should eq(false)
end
it "creates a new database, new return code" do
body = "{\"name\" : \"#{name}\" }"
doc = ArangoDB.log_post("#{prefix}-create", api, :body => body, :headers => { "X-Arango-Version" => "1.5" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
response = doc.parsed_response
response["result"].should eq(true)
response["error"].should eq(false)
end
it "creates a new database" do
body = "{\"name\" : \"#{name}\" }"
doc = ArangoDB.log_post("#{prefix}-create", api, :body => body)

View File

@ -159,7 +159,7 @@ describe ArangoDB do
it "creating a new document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -230,7 +230,7 @@ describe ArangoDB do
it "creating a new document complex body, setting compatibility header " do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"Wo\\\"rld\" }"
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -314,7 +314,7 @@ describe ArangoDB do
it "creating a new umlaut document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"öäüÖÄÜßあ寿司\" }"
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -404,7 +404,7 @@ describe ArangoDB do
it "creating a new not normalized umlaut document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"Grüß Gott.\" }"
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -487,7 +487,7 @@ describe ArangoDB do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"some stuff\" : \"goes here\", \"_key\" : \"#{@key}\" }"
doc = ArangoDB.log_post("#{prefix}-existing-id", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}-existing-id", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -585,7 +585,7 @@ describe ArangoDB do
it "creating a new document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}-accept", cmd, :body => body)
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -649,7 +649,7 @@ describe ArangoDB do
it "creating a new document, waitForSync URL param = false, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}&waitForSync=false"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept-sync-false", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}-accept-sync-false", cmd, :body => body)
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -713,7 +713,7 @@ describe ArangoDB do
it "creating a new document, waitForSync URL param = true, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}&waitForSync=true"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept-sync-true", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}-accept-sync-true", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -792,7 +792,7 @@ describe ArangoDB do
it "creating a new document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-named-collection", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc = ArangoDB.log_post("#{prefix}-named-collection", cmd, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")

View File

@ -1,504 +0,0 @@
# coding: utf-8
require 'rspec'
require 'arangodb.rb'
describe ArangoDB do
api = "/_api/structure"
prefix = "structures"
context "dealing with structured documents" do
def create_structure (prefix, body)
cmd = "/_api/document?collection=_structures"
doc = ArangoDB.log_post(prefix, cmd, :body => body)
return doc
end
def insert_structure1 (prefix)
# one numeric attribute "number"
structure = '{ "_key" :"UnitTestsCollectionDocuments", "attributes": { ' +
'"number": { ' +
' "type": "number", ' +
' "formatter": { ' +
' "default": { "args": { "decPlaces": 4, "decSeparator": ".", "thouSeparator": "," }, "module": "@arangodb/formatter", "do": "formatFloat" }, ' +
' "de": { "args": { "decPlaces": 4, "decSeparator": ",", "thouSeparator": "." }, "module": "@arangodb/formatter", "do": "formatFloat" } ' +
' }, ' +
' "parser": { ' +
' "default": { "args": {"decPlaces": 4, "decSeparator": ".", "thouSeparator": "," }, "module": "@arangodb/formatter", "do": "parseFloat" }, ' +
' "de": { "args": {"decPlaces": 4,"decSeparator": ",","thouSeparator": "." },"module": "@arangodb/formatter","do": "parseFloat" } ' +
' }, ' +
' "validators": [ { "module": "@arangodb/formatter", "do": "validateNotNull" } ]' +
' }, ' +
'"aString": { ' +
' "type": "string" ' +
'}' +
'}' +
'}'
return create_structure(prefix, structure);
end
def insert_structure2 (prefix)
# one numeric array attribute "numbers"
structure = '{ "_key" :"UnitTestsCollectionDocuments", "attributes": { ' +
'"numbers": { ' +
' "type": "number_list_type" ' +
'}},' +
'"arrayTypes": { ' +
' "number_list_type": { ' +
' "type": "number", ' +
' "formatter": { ' +
' "default": { "args": { "decPlaces": 4, "decSeparator": ".", "thouSeparator": "," }, "module": "@arangodb/formatter", "do": "formatFloat" }, ' +
' "de": { "args": { "decPlaces": 4, "decSeparator": ",", "thouSeparator": "." }, "module": "@arangodb/formatter", "do": "formatFloat" } ' +
' }, ' +
' "parser": { ' +
' "default": { "args": {"decPlaces": 4, "decSeparator": ".", "thouSeparator": "," }, "module": "@arangodb/formatter", "do": "parseFloat" }, ' +
' "de": { "args": {"decPlaces": 4,"decSeparator": ",","thouSeparator": "." },"module": "@arangodb/formatter","do": "parseFloat" } ' +
' }, ' +
' "validators": [ { "module": "@arangodb/formatter", "do": "validateNotNull" } ]' +
' } ' +
'}' +
'}'
return create_structure(prefix, structure);
end
def insert_structure3 (prefix)
# one object attribute "myObject"
structure = '{ "_key" :"UnitTestsCollectionDocuments", "attributes": { ' +
'"myObject": { ' +
' "type": "object_type", ' +
' "validators": [ { "module": "@arangodb/formatter", "do": "validateNotNull" } ]' +
'}},' +
'"objectTypes": { ' +
' "object_type": { ' +
' "attributes": { ' +
' "aNumber": { ' +
' "type": "number", ' +
' "formatter": { ' +
' "default": { "args": { "decPlaces": 4, "decSeparator": ".", "thouSeparator": "," }, "module": "@arangodb/formatter", "do": "formatFloat" }, ' +
' "de": { "args": { "decPlaces": 4, "decSeparator": ",", "thouSeparator": "." }, "module": "@arangodb/formatter", "do": "formatFloat" } ' +
' }, ' +
' "parser": { ' +
' "default": { "args": {"decPlaces": 4, "decSeparator": ".", "thouSeparator": "," }, "module": "@arangodb/formatter", "do": "parseFloat" }, ' +
' "de": { "args": {"decPlaces": 4,"decSeparator": ",","thouSeparator": "." },"module": "@arangodb/formatter","do": "parseFloat" } ' +
' }, ' +
' "validators": [ { "module": "@arangodb/formatter", "do": "validateNotNull" } ]' +
' }, ' +
' "aString": { ' +
' "type": "string"' +
' }' +
' }' +
' }' +
'}' +
'}'
return create_structure(prefix, structure);
end
def insert_structured_doc (prefix, api, collection, doc, lang, waitForSync, format)
cmd = api + "?collection=" + collection + "&lang=" + lang + "&waitForSync=" + waitForSync + "&format=" + format;
return ArangoDB.log_post(prefix, cmd, :body => doc)
end
def replace_structured_doc (prefix, api, id, doc, lang, waitForSync, format, args = {})
cmd = api + "/" + id + "?lang=" + lang + "&waitForSync=" + waitForSync + "&format=" + format;
return ArangoDB.log_put(prefix, cmd, :body => doc, :headers => args[:headers])
end
def update_structured_doc (prefix, api, id, doc, lang, waitForSync, format, args = {})
cmd = api + "/" + id + "?lang=" + lang + "&waitForSync=" + waitForSync + "&format=" + format;
return ArangoDB.log_patch(prefix, cmd, :body => doc, :headers => args[:headers])
end
def get_doc (prefix, api, id, lang, format, args = {})
cmd = api + "/" + id + "?lang=" + lang + "&format=" + format;
return ArangoDB.log_get(prefix, cmd, args)
end
def delete_doc (prefix, api, id, args = {})
cmd = api + "/" + id;
return ArangoDB.log_delete(prefix, cmd, args)
end
def head_doc (prefix, api, id, args = {})
cmd = api + "/" + id;
return ArangoDB.log_head(prefix, cmd, args)
end
before do
@cn = "UnitTestsCollectionDocuments"
ArangoDB.drop_collection(@cn)
@cid = ArangoDB.create_collection(@cn, false)
cmd = "/_api/document/_structures/" + @cn
ArangoDB.delete(cmd)
end
after do
ArangoDB.drop_collection(@cn)
end
################################################################################
## creates documents with invalid types
################################################################################
it "insert a document" do
p = "#{prefix}-create-1"
insert_structure1(p);
body = '{ "number" : "1234.5" }';
doc = insert_structured_doc(p, api, @cn, body, "en", "false", "false")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
body = '{ "_key" : "a_key", "number" : "99.5" }';
doc = insert_structured_doc(p, api, @cn, body, "en", "true", "true")
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
doc.parsed_response['_key'].should eq("a_key")
# insert same key (error)
body = '{ "_key" : "a_key", "number" : "99.5" }';
doc = insert_structured_doc(p, api, @cn, body, "en", "true", "true")
doc.code.should eq(400)
end
it "insert not valid document" do
p = "#{prefix}-create-2"
insert_structure1(p);
body = '{ }';
doc = insert_structured_doc(p, api, @cn, body, "en", "true", "true")
doc.code.should eq(400)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(true)
doc.parsed_response['code'].should eq(400)
doc.parsed_response['errorNum'].should eq(1)
end
it "insert document in unknown collection" do
p = "#{prefix}-create-3"
insert_structure1(p);
body = '{ }';
cmd = api + "?collection=egal&lang=en&waitForSync=true&format=true";
doc = ArangoDB.log_post(p, cmd, :body => body)
doc.code.should eq(404)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(true)
doc.parsed_response['code'].should eq(404)
doc.parsed_response['errorNum'].should eq(1203)
end
################################################################################
## create and get objects
################################################################################
it "insert a document with other language" do
p = "#{prefix}-get-1"
insert_structure1(p);
body = '{ "number" : "1.234,50" }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
doc3 = get_doc(p, api, id, "en", "false")
doc3.code.should eq(200)
doc3.headers['content-type'].should eq("application/json; charset=utf-8")
doc3.parsed_response['_id'].should eq(id)
doc3.parsed_response['number'].should eq(1234.5)
doc2 = get_doc(p, api, id, "en", "true")
doc2.code.should eq(200)
doc2.headers['content-type'].should eq("application/json; charset=utf-8")
doc2.parsed_response['_id'].should eq(id)
doc2.parsed_response['number'].should eq("1,234.5000")
end
it "insert a document with an array attribute" do
p = "#{prefix}-get-2"
insert_structure2(p);
body = '{ "numbers" : [ "1.234,50", "99,99" ] }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
doc3 = get_doc(p, api, id, "en", "false")
doc3.code.should eq(200)
doc3.headers['content-type'].should eq("application/json; charset=utf-8")
doc3.parsed_response['_id'].should eq(id)
doc3.parsed_response['numbers'][0].should eq(1234.5)
doc3.parsed_response['numbers'][1].should eq(99.99)
doc2 = get_doc(p, api, id, "en", "true")
doc2.code.should eq(200)
doc2.headers['content-type'].should eq("application/json; charset=utf-8")
doc2.parsed_response['_id'].should eq(id)
doc2.parsed_response['numbers'][0].should eq("1,234.5000")
doc2.parsed_response['numbers'][1].should eq("99.9900")
end
it "insert a document with an object attribute" do
p = "#{prefix}-get-3"
insert_structure3(p);
body = '{ "myObject" : { "aNumber":"1.234,50", "aString":"str" } }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
doc3 = get_doc(p, api, id, "en", "false")
doc3.code.should eq(200)
doc3.headers['content-type'].should eq("application/json; charset=utf-8")
doc3.parsed_response['_id'].should eq(id)
doc3.parsed_response['myObject']['aNumber'].should eq(1234.5)
doc3.parsed_response['myObject']['aString'].should eq("str")
doc2 = get_doc(p, api, id, "en", "true")
doc2.code.should eq(200)
doc2.headers['content-type'].should eq("application/json; charset=utf-8")
doc2.parsed_response['_id'].should eq(id)
doc2.parsed_response['myObject']['aNumber'].should eq("1,234.5000")
doc2.parsed_response['myObject']['aString'].should eq("str")
end
################################################################################
## get objects with If-None-Match and If-Match
################################################################################
it "get a document with If-None-Match" do
p = "#{prefix}-get2-1"
insert_structure1(p);
body = '{ "number" : "1.234,50" }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
rev = doc.parsed_response['_rev']
match = {}
match['If-None-Match'] = '007';
doc = get_doc(p, api, id, "en", "false", :headers => match)
doc.code.should eq(200)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['_id'].should eq(id)
doc.parsed_response['number'].should eq(1234.5)
match = {}
match['If-None-Match'] = rev;
doc = get_doc(p, api, id, "en", "false", :headers => match)
doc.code.should eq(304)
end
it "get a document with If-Match" do
p = "#{prefix}-get2-2"
insert_structure1(p);
body = '{ "number" : "1.234,50" }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
rev = doc.parsed_response['_rev']
match = {}
match['If-Match'] = '007';
doc = get_doc(p, api, id, "en", "false", :headers => match)
doc.code.should eq(412)
match = {}
match['If-Match'] = rev;
doc = get_doc(p, api, id, "en", "false", :headers => match)
doc.code.should eq(200)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['_id'].should eq(id)
doc.parsed_response['number'].should eq(1234.5)
end
################################################################################
## get objects header with If-None-Match and If-Match
################################################################################
it "get a document header with If-None-Match" do
p = "#{prefix}-head-1"
insert_structure1(p);
body = '{ "number" : "1.234,50" }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
rev = doc.parsed_response['_rev']
match = {}
match['If-None-Match'] = '007';
doc = head_doc(p, api, id, :headers => match)
doc.code.should eq(200)
match = {}
match['If-None-Match'] = rev;
doc = head_doc(p, api, id, :headers => match)
doc.code.should eq(304)
end
it "get a document header with If-Match" do
p = "#{prefix}-head-2"
insert_structure1(p);
body = '{ "number" : "1.234,50" }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
rev = doc.parsed_response['_rev']
match = {}
match['If-Match'] = '007';
doc = head_doc(p, api, id, :headers => match)
doc.code.should eq(412)
match = {}
match['If-Match'] = rev;
doc = head_doc(p, api, id, :headers => match)
doc.code.should eq(200)
end
################################################################################
## replace documents
################################################################################
it "replace a document" do
p = "#{prefix}-put-1"
insert_structure1(p);
body = '{ "number" : "1.234,50" }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
key = doc.parsed_response['_key']
# replace
doc = replace_structured_doc(p, api, id, body, "de", "false", "true")
doc.code.should eq(202)
id = doc.parsed_response['_id']
rev = doc.parsed_response['_rev']
# replace with wrong _rev
body = '{ "_key":"' + key + '", "_id":"' + id + '", "_rev":"error", "number" : "234,50" }';
doc = replace_structured_doc(p, api, id, body, "de", "false", "true&policy=error")
doc.code.should eq(400)
# replace with last _rev
body = '{ "_key":"' + key + '", "_id":"' + id + '", "_rev":"' + rev + '", "number" : "234,50" }';
doc = replace_structured_doc(p, api, id, body, "de", "true", "true&policy=error")
doc.code.should eq(201)
end
################################################################################
## patch documents
################################################################################
it "patch a document" do
p = "#{prefix}-patch-1"
insert_structure1(p);
body = '{ "number" : "1.234,50", "aString":"str" }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
key = doc.parsed_response['_key']
# patch string
body = '{ "aString":"new string" }';
doc = update_structured_doc(p, api, id, body, "de", "false", "true")
doc.code.should eq(202)
id = doc.parsed_response['_id']
rev = doc.parsed_response['_rev']
doc3 = get_doc(p, api, id, "en", "true")
doc3.code.should eq(200)
doc3.headers['content-type'].should eq("application/json; charset=utf-8")
doc3.parsed_response['_id'].should eq(id)
doc3.parsed_response['number'].should eq("1,234.5000")
doc3.parsed_response['aString'].should eq("new string")
# patch number to null (error)
body = '{ "number" : null }';
doc = update_structured_doc(p, api, id, body, "de", "false", "true")
doc.code.should eq(400)
end
################################################################################
## delete documents
################################################################################
it "delete a document" do
p = "#{prefix}-delete-1"
insert_structure1(p);
body = '{ "number" : "1.234,50", "aString":"str" }';
doc = insert_structured_doc(p, api, @cn, body, "de", "false", "true")
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
id = doc.parsed_response['_id']
key = doc.parsed_response['_key']
# delete
doc = delete_doc(p, api, id)
doc.code.should eq(202)
end
end
end
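The deleted spec above round-trips locale-formatted numbers: a German `"1.234,50"` is stored as `1234.5` and rendered back in English as `"1,234.5000"` (four decimal places). A self-contained sketch of that parse/format pair (an illustrative re-implementation, not the `@arangodb/formatter` module the spec references):

```ruby
# Locale tables for the two languages the spec exercises; an assumption
# for illustration, mirroring the decSeparator/thouSeparator arguments
# in the structure definitions above.
LOCALES = {
  "de" => { dec: ",", thou: "." },
  "en" => { dec: ".", thou: "," }
}

# "1.234,50" (de) -> 1234.5: drop grouping separators, normalize the
# decimal separator to ".", then convert.
def parse_float(str, lang)
  l = LOCALES.fetch(lang)
  str.delete(l[:thou]).sub(l[:dec], ".").to_f
end

# 1234.5 (en) -> "1,234.5000": fixed decimal places, thousands grouping.
def format_float(num, lang, dec_places = 4)
  l = LOCALES.fetch(lang)
  int, frac = format("%.#{dec_places}f", num).split(".")
  int = int.reverse.scan(/\d{1,3}/).join(l[:thou]).reverse  # group by 3
  "#{int}#{l[:dec]}#{frac}"
end
```

This covers only the positive-number cases the spec tests; the real formatter module also handles validation hooks (`validateNotNull`) that are out of scope here.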

View File

@ -133,7 +133,7 @@ Node& Node::operator= (Node const& rhs) {
_children = rhs._children;
return *this;
}
#include <iostream>
// Comparison with slice
bool Node::operator== (VPackSlice const& rhs) const {
if (rhs.isObject()) {

View File

@ -70,7 +70,7 @@ inline HttpHandler::status_t RestAgencyPrivHandler::reportMethodNotAllowed() {
generateError(GeneralResponse::ResponseCode::METHOD_NOT_ALLOWED, 405);
return HttpHandler::status_t(HANDLER_DONE);
}
#include <iostream>
HttpHandler::status_t RestAgencyPrivHandler::execute() {
try {
VPackBuilder result;

View File

@ -29,7 +29,6 @@
#include "Basics/StringUtils.h"
#include "Basics/VelocyPackHelper.h"
#include <velocypack/Buffer.h>
#include <velocypack/Iterator.h>
#include <velocypack/Slice.h>
@ -37,7 +36,6 @@
#include <ctime>
#include <iomanip>
#include <iostream>
using namespace arangodb::consensus;
using namespace arangodb::basics;

View File

@ -37,12 +37,6 @@
using namespace arangodb;
using namespace arangodb::aql;
/// @brief construct a document
AqlValue::AqlValue(TRI_doc_mptr_t const* mptr) {
_data.pointer = mptr->vpack();
setType(AqlValueType::VPACK_SLICE_POINTER);
}
/// @brief hashes the value
uint64_t AqlValue::hash(arangodb::AqlTransaction* trx, uint64_t seed) const {
switch (type()) {
@ -158,8 +152,8 @@ AqlValue AqlValue::at(int64_t position, bool& mustDestroy,
mustDestroy = false;
switch (type()) {
case VPACK_SLICE_POINTER:
case VPACK_INLINE:
doCopy = false;
case VPACK_INLINE:
// fall-through intentional
case VPACK_MANAGED: {
VPackSlice s(slice());
@ -170,7 +164,7 @@ AqlValue AqlValue::at(int64_t position, bool& mustDestroy,
position = n + position;
}
if (position >= 0 && position < n) {
if (doCopy || s.byteSize() < sizeof(_data.internal)) {
if (doCopy) {
mustDestroy = true;
return AqlValue(s.at(position));
}
@ -234,15 +228,56 @@ AqlValue AqlValue::getKeyAttribute(arangodb::AqlTransaction* trx,
mustDestroy = false;
switch (type()) {
case VPACK_SLICE_POINTER:
case VPACK_INLINE:
doCopy = false;
case VPACK_INLINE:
// fall-through intentional
case VPACK_MANAGED: {
VPackSlice s(slice());
if (s.isObject()) {
VPackSlice found = Transaction::extractKeyFromDocument(s);
if (!found.isNone()) {
if (doCopy || found.byteSize() < sizeof(_data.internal)) {
if (doCopy) {
mustDestroy = true;
return AqlValue(found);
}
// return a reference to an existing slice
return AqlValue(found.begin());
}
}
// fall-through intentional
break;
}
case DOCVEC:
case RANGE: {
// will return null
break;
}
}
// default is to return null
return AqlValue(arangodb::basics::VelocyPackHelper::NullValue());
}
/// @brief get the _id attribute from an object/document
AqlValue AqlValue::getIdAttribute(arangodb::AqlTransaction* trx,
bool& mustDestroy, bool doCopy) const {
mustDestroy = false;
switch (type()) {
case VPACK_SLICE_POINTER:
doCopy = false;
case VPACK_INLINE:
// fall-through intentional
case VPACK_MANAGED: {
VPackSlice s(slice());
if (s.isObject()) {
VPackSlice found = Transaction::extractIdFromDocument(s);
if (found.isCustom()) {
// _id as a custom type needs special treatment
mustDestroy = true;
return AqlValue(trx->extractIdString(trx->resolver(), found, s));
}
if (!found.isNone()) {
if (doCopy) {
mustDestroy = true;
return AqlValue(found);
}
@ -270,15 +305,15 @@ AqlValue AqlValue::getFromAttribute(arangodb::AqlTransaction* trx,
mustDestroy = false;
switch (type()) {
case VPACK_SLICE_POINTER:
case VPACK_INLINE:
doCopy = false;
case VPACK_INLINE:
// fall-through intentional
case VPACK_MANAGED: {
VPackSlice s(slice());
if (s.isObject()) {
VPackSlice found = Transaction::extractFromFromDocument(s);
if (!found.isNone()) {
if (doCopy || found.byteSize() < sizeof(_data.internal)) {
if (doCopy) {
mustDestroy = true;
return AqlValue(found);
}
@ -306,15 +341,15 @@ AqlValue AqlValue::getToAttribute(arangodb::AqlTransaction* trx,
mustDestroy = false;
switch (type()) {
case VPACK_SLICE_POINTER:
case VPACK_INLINE:
doCopy = false;
case VPACK_INLINE:
// fall-through intentional
case VPACK_MANAGED: {
VPackSlice s(slice());
if (s.isObject()) {
VPackSlice found = Transaction::extractToFromDocument(s);
if (!found.isNone()) {
if (doCopy || found.byteSize() < sizeof(_data.internal)) {
if (doCopy) {
mustDestroy = true;
return AqlValue(found);
}
@ -343,20 +378,20 @@ AqlValue AqlValue::get(arangodb::AqlTransaction* trx,
mustDestroy = false;
switch (type()) {
case VPACK_SLICE_POINTER:
case VPACK_INLINE:
doCopy = false;
case VPACK_INLINE:
// fall-through intentional
case VPACK_MANAGED: {
VPackSlice s(slice());
if (s.isObject()) {
VPackSlice found(s.resolveExternal().get(name));
VPackSlice found(s.get(name));
if (found.isCustom()) {
// _id needs special treatment
mustDestroy = true;
return AqlValue(trx->extractIdString(s));
}
if (!found.isNone()) {
if (doCopy || found.byteSize() < sizeof(_data.internal)) {
if (doCopy) {
mustDestroy = true;
return AqlValue(found);
}
@ -385,20 +420,20 @@ AqlValue AqlValue::get(arangodb::AqlTransaction* trx,
mustDestroy = false;
switch (type()) {
case VPACK_SLICE_POINTER:
case VPACK_INLINE:
doCopy = false;
case VPACK_INLINE:
// fall-through intentional
case VPACK_MANAGED: {
VPackSlice s(slice());
if (s.isObject()) {
VPackSlice found(s.resolveExternal().get(names, true));
VPackSlice found(s.get(names, true));
if (found.isCustom()) {
// _id needs special treatment
mustDestroy = true;
return AqlValue(trx->extractIdString(s));
}
if (!found.isNone()) {
if (doCopy || found.byteSize() < sizeof(_data.internal)) {
if (doCopy) {
mustDestroy = true;
return AqlValue(found);
}
@ -427,7 +462,7 @@ bool AqlValue::hasKey(arangodb::AqlTransaction* trx,
case VPACK_SLICE_POINTER:
case VPACK_INLINE:
case VPACK_MANAGED: {
VPackSlice s(slice().resolveExternal());
VPackSlice s(slice());
return (s.isObject() && s.hasKey(name));
}
case DOCVEC:
@ -841,14 +876,14 @@ VPackSlice AqlValue::slice() const {
case VPACK_INLINE: {
VPackSlice s(&_data.internal[0]);
if (s.isExternal()) {
s = VPackSlice(s.getExternal());
s = s.resolveExternal();
}
return s;
}
case VPACK_MANAGED: {
VPackSlice s(_data.buffer->data());
if (s.isExternal()) {
s = VPackSlice(s.getExternal());
s = s.resolveExternal();
}
return s;
}

View File

@ -29,6 +29,7 @@
#include "Aql/Range.h"
#include "Aql/types.h"
#include "Basics/VelocyPackHelper.h"
#include "VocBase/document-collection.h"
#include <velocypack/Buffer.h>
#include <velocypack/Builder.h>
@ -94,12 +95,21 @@ struct AqlValue final {
}
// construct from document
explicit AqlValue(TRI_doc_mptr_t const* mptr);
explicit AqlValue(TRI_doc_mptr_t const* mptr) {
_data.pointer = mptr->vpack();
setType(AqlValueType::VPACK_SLICE_POINTER);
TRI_ASSERT(VPackSlice(_data.pointer).isObject());
TRI_ASSERT(!VPackSlice(_data.pointer).isExternal());
}
// construct from pointer
explicit AqlValue(uint8_t const* pointer) {
_data.pointer = pointer;
// we must get rid of Externals first here, because all
// methods that use VPACK_SLICE_POINTER expect its contents
// to be non-Externals
_data.pointer = VPackSlice(pointer).resolveExternals().begin();
setType(AqlValueType::VPACK_SLICE_POINTER);
TRI_ASSERT(!VPackSlice(_data.pointer).isExternal());
}
// construct from docvec, taking over its ownership
@ -110,7 +120,9 @@ struct AqlValue final {
// construct boolean value type
explicit AqlValue(bool value) {
initFromSlice(value ? arangodb::basics::VelocyPackHelper::TrueValue() : arangodb::basics::VelocyPackHelper::FalseValue());
VPackSlice slice(value ? arangodb::basics::VelocyPackHelper::TrueValue() : arangodb::basics::VelocyPackHelper::FalseValue());
memcpy(_data.internal, slice.begin(), slice.byteSize());
setType(AqlValueType::VPACK_INLINE);
}
// construct from std::string
@ -213,6 +225,9 @@ struct AqlValue final {
/// @brief get the _key attribute from an object/document
AqlValue getKeyAttribute(arangodb::AqlTransaction* trx,
bool& mustDestroy, bool copy) const;
/// @brief get the _id attribute from an object/document
AqlValue getIdAttribute(arangodb::AqlTransaction* trx,
bool& mustDestroy, bool copy) const;
/// @brief get the _from attribute from an object/document
AqlValue getFromAttribute(arangodb::AqlTransaction* trx,
bool& mustDestroy, bool copy) const;
@ -310,12 +325,10 @@ struct AqlValue final {
}
/// @brief initializes value from a slice
void initFromSlice(arangodb::velocypack::Slice const& slice) {
void initFromSlice(arangodb::velocypack::Slice slice) {
if (slice.isExternal()) {
// recursively resolve externals
_data.pointer = slice.resolveExternals().start();
setType(AqlValueType::VPACK_SLICE_POINTER);
return;
slice = slice.resolveExternals();
}
arangodb::velocypack::ValueLength length = slice.byteSize();
if (length < sizeof(_data.internal)) {

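The `initFromSlice` change above resolves externals first and then applies a small-value optimization: byte sequences that fit into the fixed internal buffer are stored inline, larger ones are heap-managed. A simplified stand-in (not the actual `AqlValue` class) showing that size-based split:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustration of the small-value optimization: payloads shorter than
// the 16-byte internal buffer are copied inline (VPACK_INLINE-like);
// anything larger goes into a heap-allocated buffer (VPACK_MANAGED-like).
struct SmallValue {
  uint8_t internal[16];
  std::vector<uint8_t>* buffer = nullptr;  // used only for large values
  bool inlined = false;

  void init(uint8_t const* data, size_t length) {
    if (length < sizeof(internal)) {
      std::memcpy(internal, data, length);  // fits: store inline
      inlined = true;
    } else {
      buffer = new std::vector<uint8_t>(data, data + length);
      inlined = false;
    }
  }
  ~SmallValue() { delete buffer; }
};
```
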
View File

@ -43,6 +43,8 @@ AttributeAccessor::AttributeAccessor(
if (_attributeParts.size() == 1) {
if (attributeParts[0] == StaticStrings::KeyString) {
_type = EXTRACT_KEY;
} else if (attributeParts[0] == StaticStrings::IdString) {
_type = EXTRACT_ID;
} else if (attributeParts[0] == StaticStrings::FromString) {
_type = EXTRACT_FROM;
} else if (attributeParts[0] == StaticStrings::ToString) {
@ -76,6 +78,8 @@ AqlValue AttributeAccessor::get(arangodb::AqlTransaction* trx,
switch (_type) {
case EXTRACT_KEY:
return argv->getValueReference(startPos, regs[i]).getKeyAttribute(trx, mustDestroy, true);
case EXTRACT_ID:
return argv->getValueReference(startPos, regs[i]).getIdAttribute(trx, mustDestroy, true);
case EXTRACT_FROM:
return argv->getValueReference(startPos, regs[i]).getFromAttribute(trx, mustDestroy, true);
case EXTRACT_TO:

View File

@ -52,6 +52,7 @@ class AttributeAccessor {
private:
enum AccessorType {
EXTRACT_KEY,
EXTRACT_ID,
EXTRACT_FROM,
EXTRACT_TO,
EXTRACT_SINGLE,

View File

@ -90,15 +90,10 @@ void CalculationBlock::fillBlockWithReference(AqlItemBlock* result) {
// care of correct freeing:
auto a = result->getValueReference(i, _inRegs[0]);
try {
TRI_IF_FAILURE("CalculationBlock::fillBlockWithReference") {
THROW_ARANGO_EXCEPTION(TRI_ERROR_DEBUG);
}
result->setValue(i, _outReg, a);
} catch (...) {
a.destroy();
throw;
TRI_IF_FAILURE("CalculationBlock::fillBlockWithReference") {
THROW_ARANGO_EXCEPTION(TRI_ERROR_DEBUG);
}
result->setValue(i, _outReg, a);
}
}

View File

@ -130,7 +130,7 @@ AqlItemBlock* EnumerateListBlock::getSome(size_t, size_t atMost) {
for (size_t j = 0; j < toSend; j++) {
if (j > 0) {
// re-use already copied aqlvalues
// re-use already copied AqlValues
for (RegisterId i = 0; i < cur->getNrRegs(); i++) {
res->setValue(j, i, res->getValueReference(0, i));
// Note that if this throws, all values will be

View File

@ -210,6 +210,32 @@ int QueryList::kill(TRI_voc_tick_t id) {
return TRI_ERROR_NO_ERROR;
}
/// @brief kills all currently running queries
uint64_t QueryList::killAll(bool silent) {
uint64_t killed = 0;
WRITE_LOCKER(writeLocker, _lock);
for (auto& it : _current) {
auto entry = it.second;
const_cast<arangodb::aql::Query*>(entry->query)->killed(true);
++killed;
std::string queryString(entry->query->queryString(),
entry->query->queryLength());
if (silent) {
LOG(TRACE) << "killing AQL query " << entry->query->id() << " '" << queryString << "'";
} else {
LOG(WARN) << "killing AQL query " << entry->query->id() << " '" << queryString << "'";
}
}
return killed;
}
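The new `killAll` walks the map of running queries under a write lock, marks each one killed, and counts the hits. A reduced sketch of that pattern, with `Query`/`QueryList` as simplified stand-ins for the real classes:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <mutex>
#include <string>

struct Query {
  uint64_t id;
  std::string text;
  bool killed = false;
};

struct QueryList {
  std::mutex lock;                   // stands in for the ReadWriteLock
  std::map<uint64_t, Query*> current;

  // Mark every currently running query as killed; return the count.
  uint64_t killAll() {
    uint64_t killed = 0;
    std::lock_guard<std::mutex> guard(lock);  // WRITE_LOCKER equivalent
    for (auto& it : current) {
      it.second->killed = true;
      ++killed;
    }
    return killed;
  }
};
```
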
/// @brief get the list of currently running queries
std::vector<QueryEntryCopy> QueryList::listCurrent() {

View File

@ -150,6 +150,9 @@ class QueryList {
/// @brief kills a query
int kill(TRI_voc_tick_t);
/// @brief kills all currently running queries
uint64_t killAll(bool silent);
/// @brief return the list of running queries
std::vector<QueryEntryCopy> listCurrent();

View File

@ -92,8 +92,8 @@ bool AgencyCallback::executeEmpty() {
result = _cb(VPackSlice::noneSlice());
}
CONDITION_LOCKER(locker, _cv);
if (_useCv) {
CONDITION_LOCKER(locker, _cv);
_cv.signal();
}
return result;
@ -107,8 +107,8 @@ bool AgencyCallback::execute(std::shared_ptr<VPackBuilder> newData) {
result = _cb(newData->slice());
}
CONDITION_LOCKER(locker, _cv);
if (_useCv) {
CONDITION_LOCKER(locker, _cv);
_cv.signal();
}
return result;
@ -137,10 +137,12 @@ void AgencyCallback::executeByCallbackOrTimeout(double maxTimeout) {
compareBuilder = _lastData;
}
_useCv = true;
CONDITION_LOCKER(locker, _cv);
locker.wait(static_cast<uint64_t>(maxTimeout * 1000000.0));
_useCv = false;
{
CONDITION_LOCKER(locker, _cv);
_useCv = true;
locker.wait(static_cast<uint64_t>(maxTimeout * 1000000.0));
_useCv = false;
}
if (!_lastData || _lastData->slice().equals(compareBuilder->slice())) {
LOG(DEBUG) << "Waiting done and nothing happened. Refetching to be sure";

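The `AgencyCallback` fix above takes the condition-variable lock only while `_useCv` is consulted or flipped, and scopes the timed wait so the flag is set and cleared under the same locked region. A sketch of that waiter/signaller pattern using the standard library (names are illustrative):

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// The waiter holds the lock for the whole set-flag / wait / clear-flag
// sequence; the signaller notifies only while holding the same lock and
// only when the flag says a waiter is actually registered.
struct Waiter {
  std::mutex mtx;
  std::condition_variable cv;
  bool useCv = false;       // is anybody currently waiting?
  bool dataArrived = false;

  // corresponds to executeByCallbackOrTimeout()
  bool waitFor(std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> lock(mtx);  // CONDITION_LOCKER
    useCv = true;
    bool ok = cv.wait_for(lock, timeout, [this] { return dataArrived; });
    useCv = false;
    return ok;
  }

  // corresponds to execute(): signal only if a waiter is registered
  void signal() {
    std::lock_guard<std::mutex> lock(mtx);
    dataArrived = true;
    if (useCv) {
      cv.notify_one();
    }
  }
};
```
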
View File

@ -144,10 +144,7 @@ void AgencyPrecondition::toVelocyPack(VPackBuilder& builder) const {
std::string AgencyWriteTransaction::toJson() const {
VPackBuilder builder;
{
VPackArrayBuilder guard(&builder);
toVelocyPack(builder);
}
toVelocyPack(builder);
return builder.toJson();
}
@ -177,10 +174,7 @@ void AgencyWriteTransaction::toVelocyPack(VPackBuilder& builder) const {
std::string AgencyReadTransaction::toJson() const {
VPackBuilder builder;
{
VPackArrayBuilder guard(&builder);
toVelocyPack(builder);
}
toVelocyPack(builder);
return builder.toJson();
}
@ -189,12 +183,9 @@ std::string AgencyReadTransaction::toJson() const {
//////////////////////////////////////////////////////////////////////////////
void AgencyReadTransaction::toVelocyPack(VPackBuilder& builder) const {
VPackArrayBuilder guard(&builder);
{
VPackArrayBuilder guard2(&builder);
for (std::string const& key: keys) {
builder.add(VPackValue(key));
}
VPackArrayBuilder guard2(&builder);
for (std::string const& key: keys) {
builder.add(VPackValue(key));
}
}
@ -225,7 +216,6 @@ AgencyCommResult::AgencyCommResult()
_message(),
_body(),
_values(),
_index(0),
_statusCode(0),
_connected(false) {}
@ -328,7 +318,6 @@ void AgencyCommResult::clear() {
_location = "";
_message = "";
_body = "";
_index = 0;
_statusCode = 0;
}
@ -658,7 +647,7 @@ bool AgencyComm::tryInitializeStructure() {
builder.add(VPackValue("Sync"));
{
VPackObjectBuilder c(&builder);
builder.add("LatestID", VPackValue("1"));
builder.add("LatestID", VPackValue(1));
addEmptyVPackObject("Problems", builder);
builder.add("UserVersion", VPackValue(1));
addEmptyVPackObject("ServerStates", builder);
@ -1566,76 +1555,69 @@ bool AgencyComm::unlockWrite(std::string const& key, double timeout) {
/// @brief get unique id
////////////////////////////////////////////////////////////////////////////////
AgencyCommResult AgencyComm::uniqid(std::string const& key, uint64_t count,
double timeout) {
static int const maxTries = 10;
uint64_t AgencyComm::uniqid(uint64_t count, double timeout) {
static int const maxTries = 1000000;
// this is pretty much forever, but we simply cannot continue at all
// if we do not get a unique id from the agency.
int tries = 0;
AgencyCommResult result;
while (tries++ < maxTries) {
result.clear();
result = getValues(key, false);
uint64_t oldValue = 0;
if (result.httpCode() ==
(int)arangodb::GeneralResponse::ResponseCode::NOT_FOUND) {
while (tries++ < maxTries) {
result = getValues2("Sync/LatestID");
if (!result.successful()) {
usleep(500000);
continue;
}
VPackSlice oldSlice = result.slice()[0].get(std::vector<std::string>(
{prefixStripped(), "Sync", "LatestID"}));
if (!(oldSlice.isSmallInt() || oldSlice.isUInt())) {
LOG(WARN) << "Sync/LatestID in agency is not an unsigned integer, fixing...";
try {
VPackBuilder builder;
builder.add(VPackValue(0));
// create the key on the fly
setValue(key, builder.slice(), 0.0);
tries--;
setValue("Sync/LatestID", builder.slice(), 0.0);
continue;
} catch (...) {
// Could not build local key. Try again
}
continue;
}
if (!result.successful()) {
return result;
}
result.parse("", false);
std::shared_ptr<VPackBuilder> oldBuilder;
std::map<std::string, AgencyCommResultEntry>::iterator it =
result._values.begin();
// If we get here, slice is pointing to an unsigned integer, which
// is the value in the agency.
oldValue = 0;
try {
if (it != result._values.end()) {
// steal the velocypack
oldBuilder.swap((*it).second._vpack);
} else {
oldBuilder->add(VPackValue(0));
}
} catch (...) {
return AgencyCommResult();
oldValue = oldSlice.getUInt();
}
catch (...) {
}
VPackSlice oldSlice = oldBuilder->slice();
uint64_t const oldValue = arangodb::basics::VelocyPackHelper::stringUInt64(oldSlice) + count;
uint64_t const newValue = oldValue + count;
VPackBuilder newBuilder;
try {
newBuilder.add(VPackValue(newValue));
} catch (...) {
return AgencyCommResult();
usleep(500000);
continue;
}
result.clear();
result = casValue(key, oldSlice, newBuilder.slice(), 0.0, timeout);
result = casValue("Sync/LatestID", oldSlice, newBuilder.slice(),
0.0, timeout);
if (result.successful()) {
result._index = oldValue + 1;
break;
}
// The cas did not work, simply try again!
}
return result;
return oldValue;
}
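The rewritten `uniqid` above is a read/compare-and-swap retry loop against the agency's `Sync/LatestID` key: read the old counter, attempt a CAS to `old + count`, and retry if another coordinator won the race. The same retry structure, sketched with a `std::atomic` standing in for the agency (the `reserveIds` helper is hypothetical):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Stand-in for the agency's Sync/LatestID key; the atomic CAS models
// the semantics of AgencyComm::casValue.
static std::atomic<uint64_t> latestId{0};

// Reserve `count` consecutive ids and return the value before the
// reservation (the caller may use oldValue+1 .. oldValue+count).
uint64_t reserveIds(uint64_t count) {
  for (;;) {
    uint64_t oldValue = latestId.load();
    uint64_t newValue = oldValue + count;
    if (latestId.compare_exchange_weak(oldValue, newValue)) {
      return oldValue;
    }
    // The CAS did not work, simply try again!
  }
}
```
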
////////////////////////////////////////////////////////////////////////////////
@ -1841,11 +1823,56 @@ AgencyCommResult AgencyComm::sendTransactionWithFailover(
std::string url(buildUrl());
url += "/write";
url += transaction.isWriteTransaction() ? "/write" : "/read";
return sendWithFailover(arangodb::GeneralRequest::RequestType::POST,
VPackBuilder builder;
{
VPackArrayBuilder guard(&builder);
transaction.toVelocyPack(builder);
}
AgencyCommResult result = sendWithFailover(
arangodb::GeneralRequest::RequestType::POST,
timeout == 0.0 ? _globalConnectionOptions._requestTimeout : timeout, url,
transaction.toJson(), false);
builder.slice().toJson(), false);
try {
result.setVPack(VPackParser::fromJson(result.body().c_str()));
if (transaction.isWriteTransaction()) {
if (!result.slice().isObject() ||
!result.slice().get("results").isArray()) {
result._statusCode = 500;
return result;
}
if (result.slice().get("results").length() != 1) {
result._statusCode = 500;
return result;
}
} else {
if (!result.slice().isArray()) {
result._statusCode = 500;
return result;
}
if (result.slice().length() != 1) {
result._statusCode = 500;
return result;
}
}
result._body.clear();
result._statusCode = 200;
} catch(std::exception &e) {
LOG(ERR) << "Error transforming result. " << e.what();
result.clear();
} catch(...) {
LOG(ERR) << "Error transforming result. Out of memory";
result.clear();
}
return result;
}
////////////////////////////////////////////////////////////////////////////////
@ -2061,15 +2088,8 @@ AgencyCommResult AgencyComm::send(
result._message = response->getHttpReturnMessage();
basics::StringBuffer& sb = response->getBody();
result._body = std::string(sb.c_str(), sb.length());
result._index = 0;
result._statusCode = response->getHttpReturnCode();
bool found = false;
std::string lastIndex = response->getHeaderField("x-etcd-index", found);
if (found) {
result._index = arangodb::basics::StringUtils::uint64(lastIndex);
}
LOG(TRACE) << "request to agency returned status code " << result._statusCode
<< ", message: '" << result._message << "', body: '"
<< result._body << "'";

View File

@ -215,8 +215,10 @@ private:
struct AgencyTransaction {
virtual std::string toJson() const = 0;
virtual void toVelocyPack(arangodb::velocypack::Builder& builder) const = 0;
virtual ~AgencyTransaction() {
}
virtual bool isWriteTransaction() const = 0;
};
struct AgencyWriteTransaction : public AgencyTransaction {
@ -237,7 +239,7 @@ struct AgencyWriteTransaction : public AgencyTransaction {
/// @brief converts the transaction to velocypack
//////////////////////////////////////////////////////////////////////////////
void toVelocyPack(arangodb::velocypack::Builder& builder) const;
void toVelocyPack(arangodb::velocypack::Builder& builder) const override final;
//////////////////////////////////////////////////////////////////////////////
/// @brief converts the transaction to json
@ -270,6 +272,13 @@ struct AgencyWriteTransaction : public AgencyTransaction {
AgencyWriteTransaction() = default;
//////////////////////////////////////////////////////////////////////////////
/// @brief return type of transaction
//////////////////////////////////////////////////////////////////////////////
bool isWriteTransaction() const override final {
return true;
}
};
struct AgencyReadTransaction : public AgencyTransaction {
@ -284,7 +293,7 @@ struct AgencyReadTransaction : public AgencyTransaction {
/// @brief converts the transaction to velocypack
//////////////////////////////////////////////////////////////////////////////
void toVelocyPack(arangodb::velocypack::Builder& builder) const;
void toVelocyPack(arangodb::velocypack::Builder& builder) const override final;
//////////////////////////////////////////////////////////////////////////////
/// @brief converts the transaction to json
@ -300,12 +309,27 @@ struct AgencyReadTransaction : public AgencyTransaction {
keys.push_back(key);
}
//////////////////////////////////////////////////////////////////////////////
/// @brief shortcut to create a transaction with more than one operation
//////////////////////////////////////////////////////////////////////////////
explicit AgencyReadTransaction(std::vector<std::string>&& k)
: keys(k) {
}
//////////////////////////////////////////////////////////////////////////////
/// @brief default constructor
//////////////////////////////////////////////////////////////////////////////
AgencyReadTransaction() = default;
//////////////////////////////////////////////////////////////////////////////
/// @brief return type of transaction
//////////////////////////////////////////////////////////////////////////////
bool isWriteTransaction() const override final {
return false;
}
};
struct AgencyCommResult {
@ -342,12 +366,6 @@ struct AgencyCommResult {
int httpCode() const;
//////////////////////////////////////////////////////////////////////////////
/// @brief extract the "index" attribute from the result
//////////////////////////////////////////////////////////////////////////////
uint64_t index() const { return _index; }
//////////////////////////////////////////////////////////////////////////////
/// @brief extract the error code from the result
//////////////////////////////////////////////////////////////////////////////
@ -418,7 +436,6 @@ struct AgencyCommResult {
std::string _realBody;
std::map<std::string, AgencyCommResultEntry> _values;
uint64_t _index;
int _statusCode;
bool _connected;
};
@ -635,7 +652,7 @@ class AgencyComm {
/// @brief get unique id
//////////////////////////////////////////////////////////////////////////////
AgencyCommResult uniqid(std::string const&, uint64_t, double);
uint64_t uniqid(uint64_t, double);
//////////////////////////////////////////////////////////////////////////////
/// @brief registers a callback on a key

View File

@ -371,11 +371,16 @@ void ClusterFeature::start() {
result.slice()[0].get(std::vector<std::string>(
{AgencyComm::prefixStripped(), "Sync", "HeartbeatIntervalMs"}));
if (HeartbeatIntervalMs.isUInt()) {
_heartbeatInterval = HeartbeatIntervalMs.getUInt();
if (HeartbeatIntervalMs.isInteger()) {
try {
_heartbeatInterval = HeartbeatIntervalMs.getUInt();
LOG(INFO) << "using heartbeat interval value '" << _heartbeatInterval
<< " ms' from agency";
}
catch (...) {
// Ignore if it is not a small int or uint
}
LOG(INFO) << "using heartbeat interval value '" << _heartbeatInterval
<< " ms' from agency";
}
}

File diff suppressed because it is too large

View File

@ -27,6 +27,7 @@
#include "Basics/Common.h"
#include "Basics/JsonHelper.h"
#include "Basics/VelocyPackHelper.h"
#include "Basics/Mutex.h"
#include "Basics/ReadWriteLock.h"
#include "Cluster/AgencyComm.h"
@ -35,6 +36,8 @@
#include "VocBase/vocbase.h"
#include <velocypack/Slice.h>
#include <velocypack/Iterator.h>
#include <velocypack/velocypack-aliases.h>
struct TRI_json_t;
@ -55,7 +58,7 @@ class CollectionInfo {
public:
CollectionInfo();
explicit CollectionInfo(struct TRI_json_t*);
CollectionInfo(std::shared_ptr<VPackBuilder>, VPackSlice);
CollectionInfo(CollectionInfo const&);
@ -72,29 +75,11 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
int replicationFactor () const {
TRI_json_t* const node
= arangodb::basics::JsonHelper::getObjectElement(_json,
"replicationFactor");
if (TRI_IsNumberJson(node)) {
return (int) (node->_value._number);
if (!_slice.isObject()) {
return 1;
}
return 1;
}
//////////////////////////////////////////////////////////////////////////////
/// @brief returns the replication quorum
//////////////////////////////////////////////////////////////////////////////
int replicationQuorum () const {
TRI_json_t* const node
= arangodb::basics::JsonHelper::getObjectElement(_json,
"replicationQuorum");
if (TRI_IsNumberJson(node)) {
return (int) (node->_value._number);
}
return 1;
return arangodb::basics::VelocyPackHelper::getNumericValue<TRI_voc_size_t>(
_slice, "replicationFactor", 1);
}
//////////////////////////////////////////////////////////////////////////////
@ -102,7 +87,7 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
bool empty() const {
return (nullptr == _json); //|| (id() == 0);
return _slice.isNone();
}
//////////////////////////////////////////////////////////////////////////////
@ -110,7 +95,14 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
TRI_voc_cid_t id() const {
return arangodb::basics::JsonHelper::stringUInt64(_json, "id");
if (!_slice.isObject()) {
return 0;
}
VPackSlice idSlice = _slice.get("id");
if (idSlice.isString()) {
return arangodb::basics::VelocyPackHelper::stringUInt64(idSlice);
}
return 0;
}
//////////////////////////////////////////////////////////////////////////////
@ -118,7 +110,10 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
std::string id_as_string() const {
return arangodb::basics::JsonHelper::getStringValue(_json, "id", "");
if (!_slice.isObject()) {
return std::string("");
}
return arangodb::basics::VelocyPackHelper::getStringValue(_slice, "id", "");
}
//////////////////////////////////////////////////////////////////////////////
@ -126,7 +121,10 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
std::string name() const {
return arangodb::basics::JsonHelper::getStringValue(_json, "name", "");
if (!_slice.isObject()) {
return std::string("");
}
return arangodb::basics::VelocyPackHelper::getStringValue(_slice, "name", "");
}
//////////////////////////////////////////////////////////////////////////////
@ -134,8 +132,11 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
TRI_col_type_e type() const {
return (TRI_col_type_e)arangodb::basics::JsonHelper::getNumericValue<int>(
_json, "type", (int)TRI_COL_TYPE_UNKNOWN);
if (!_slice.isObject()) {
return TRI_COL_TYPE_UNKNOWN;
}
return (TRI_col_type_e)arangodb::basics::VelocyPackHelper::getNumericValue<int>(
_slice, "type", (int)TRI_COL_TYPE_UNKNOWN);
}
//////////////////////////////////////////////////////////////////////////////
@ -143,9 +144,12 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
TRI_vocbase_col_status_e status() const {
if (!_slice.isObject()) {
return TRI_VOC_COL_STATUS_CORRUPTED;
}
return (TRI_vocbase_col_status_e)
arangodb::basics::JsonHelper::getNumericValue<int>(
_json, "status", (int)TRI_VOC_COL_STATUS_CORRUPTED);
arangodb::basics::VelocyPackHelper::getNumericValue<int>(
_slice, "status", (int)TRI_VOC_COL_STATUS_CORRUPTED);
}
//////////////////////////////////////////////////////////////////////////////
@ -161,7 +165,7 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
bool deleted() const {
return arangodb::basics::JsonHelper::getBooleanValue(_json, "deleted",
return arangodb::basics::VelocyPackHelper::getBooleanValue(_slice, "deleted",
false);
}
@ -170,7 +174,7 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
bool doCompact() const {
return arangodb::basics::JsonHelper::getBooleanValue(_json, "doCompact",
return arangodb::basics::VelocyPackHelper::getBooleanValue(_slice, "doCompact",
false);
}
@ -179,7 +183,7 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
bool isSystem() const {
return arangodb::basics::JsonHelper::getBooleanValue(_json, "isSystem",
return arangodb::basics::VelocyPackHelper::getBooleanValue(_slice, "isSystem",
false);
}
@ -188,7 +192,7 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
bool isVolatile() const {
return arangodb::basics::JsonHelper::getBooleanValue(_json, "isVolatile",
return arangodb::basics::VelocyPackHelper::getBooleanValue(_slice, "isVolatile",
false);
}
@ -196,24 +200,22 @@ class CollectionInfo {
/// @brief returns the indexes
//////////////////////////////////////////////////////////////////////////////
TRI_json_t const* getIndexes() const {
return arangodb::basics::JsonHelper::getObjectElement(_json, "indexes");
VPackSlice const getIndexes() const {
if (_slice.isNone()) {
return VPackSlice();
}
return _slice.get("indexes");
}
//////////////////////////////////////////////////////////////////////////////
/// @brief returns a copy of the key options
/// the caller is responsible for freeing it
//////////////////////////////////////////////////////////////////////////////
TRI_json_t* keyOptions() const {
TRI_json_t const* keyOptions =
arangodb::basics::JsonHelper::getObjectElement(_json, "keyOptions");
if (keyOptions != nullptr) {
return TRI_CopyJson(TRI_UNKNOWN_MEM_ZONE, keyOptions);
VPackSlice const keyOptions() const {
if (_slice.isNone()) {
return VPackSlice();
}
return nullptr;
return _slice.get("keyOptions");
}
//////////////////////////////////////////////////////////////////////////////
@ -221,12 +223,10 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
bool allowUserKeys() const {
TRI_json_t const* keyOptions =
arangodb::basics::JsonHelper::getObjectElement(_json, "keyOptions");
if (keyOptions != nullptr) {
return arangodb::basics::JsonHelper::getBooleanValue(
keyOptions, "allowUserKeys", true);
VPackSlice keyOptionsSlice = keyOptions();
if (!keyOptionsSlice.isNone()) {
return arangodb::basics::VelocyPackHelper::getBooleanValue(
keyOptionsSlice, "allowUserKeys", true);
}
return true; // the default value
@ -237,7 +237,7 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
bool waitForSync() const {
return arangodb::basics::JsonHelper::getBooleanValue(_json, "waitForSync",
return arangodb::basics::VelocyPackHelper::getBooleanValue(_slice, "waitForSync",
false);
}
@ -246,8 +246,8 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
TRI_voc_size_t journalSize() const {
return arangodb::basics::JsonHelper::getNumericValue<TRI_voc_size_t>(
_json, "journalSize", 0);
return arangodb::basics::VelocyPackHelper::getNumericValue<TRI_voc_size_t>(
_slice, "journalSize", 0);
}
//////////////////////////////////////////////////////////////////////////////
@ -255,8 +255,11 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
uint32_t indexBuckets() const {
return arangodb::basics::JsonHelper::getNumericValue<uint32_t>(
_json, "indexBuckets", 1);
if (!_slice.isObject()) {
return 1;
}
return arangodb::basics::VelocyPackHelper::getNumericValue<uint32_t>(
_slice, "indexBuckets", 1);
}
//////////////////////////////////////////////////////////////////////////////
@ -264,9 +267,19 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
std::vector<std::string> shardKeys() const {
TRI_json_t* const node =
arangodb::basics::JsonHelper::getObjectElement(_json, "shardKeys");
return arangodb::basics::JsonHelper::stringArray(node);
std::vector<std::string> shardKeys;
if (_slice.isNone()) {
return shardKeys;
}
auto shardKeysSlice = _slice.get("shardKeys");
if (shardKeysSlice.isArray()) {
for (auto const& shardKey: VPackArrayIterator(shardKeysSlice)) {
shardKeys.push_back(shardKey.copyString());
}
}
return shardKeys;
}
//////////////////////////////////////////////////////////////////////////////
@ -274,15 +287,18 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
bool usesDefaultShardKeys() const {
TRI_json_t* const node =
arangodb::basics::JsonHelper::getObjectElement(_json, "shardKeys");
if (TRI_LengthArrayJson(node) != 1) {
if (_slice.isNone()) {
return false;
}
TRI_json_t* firstKey = TRI_LookupArrayJson(node, 0);
TRI_ASSERT(TRI_IsStringJson(firstKey));
auto shardKeysSlice = _slice.get("shardKeys");
if (!shardKeysSlice.isArray() || shardKeysSlice.length() != 1) {
return false;
}
auto firstElement = shardKeysSlice.at(0);
TRI_ASSERT(firstElement.isString());
std::string shardKey =
arangodb::basics::JsonHelper::getStringValue(firstKey, "");
arangodb::basics::VelocyPackHelper::getStringValue(firstElement, "");
return shardKey == TRI_VOC_ATTRIBUTE_KEY;
}
@ -302,22 +318,18 @@ class CollectionInfo {
return res;
}
res.reset(new ShardMap());
TRI_json_t* const node =
arangodb::basics::JsonHelper::getObjectElement(_json, "shards");
if (node != nullptr && TRI_IsObjectJson(node)) {
size_t len = TRI_LengthVector(&node->_value._objects);
for (size_t i = 0; i < len; i += 2) {
auto key =
static_cast<TRI_json_t*>(TRI_AtVector(&node->_value._objects, i));
auto value = static_cast<TRI_json_t*>(
TRI_AtVector(&node->_value._objects, i + 1));
if (TRI_IsStringJson(key) && TRI_IsArrayJson(value)) {
ShardID shard = arangodb::basics::JsonHelper::getStringValue(key, "");
std::vector<ServerID> servers =
arangodb::basics::JsonHelper::stringArray(value);
if (shard != "") {
(*res).insert(make_pair(shard, servers));
auto shardsSlice = _slice.get("shards");
if (shardsSlice.isObject()) {
for (auto const& shardSlice: VPackObjectIterator(shardsSlice)) {
if (shardSlice.key.isString() && shardSlice.value.isArray()) {
ShardID shard = shardSlice.key.copyString();
std::vector<ServerID> servers;
for (auto const& serverSlice: VPackArrayIterator(shardSlice.value)) {
servers.push_back(serverSlice.copyString());
}
(*res).insert(make_pair(shardSlice.key.copyString(), servers));
}
}
}
@ -333,11 +345,13 @@ class CollectionInfo {
//////////////////////////////////////////////////////////////////////////////
int numberOfShards() const {
TRI_json_t* const node =
arangodb::basics::JsonHelper::getObjectElement(_json, "shards");
if (_slice.isNone()) {
return 0;
}
auto shardsSlice = _slice.get("shards");
if (TRI_IsObjectJson(node)) {
return (int)(TRI_LengthVector(&node->_value._objects) / 2);
if (shardsSlice.isObject()) {
return shardsSlice.length();
}
return 0;
}
@@ -346,10 +360,16 @@
/// @brief returns the json
//////////////////////////////////////////////////////////////////////////////
TRI_json_t const* getJson() const { return _json; }
std::shared_ptr<VPackBuilder> const getVPack() const { return _vpack; }
//////////////////////////////////////////////////////////////////////////////
/// @brief returns the slice
//////////////////////////////////////////////////////////////////////////////
VPackSlice const getSlice() const { return _slice; }
private:
TRI_json_t* _json;
std::shared_ptr<VPackBuilder> _vpack;
VPackSlice _slice;
// Only to protect the cache:
mutable Mutex _mutex;
@@ -583,13 +603,6 @@ class ClusterInfo {
std::vector<DatabaseID> listDatabases(bool = false);
//////////////////////////////////////////////////////////////////////////////
/// @brief (re-)load the information about planned collections from the agency
/// Usually one does not have to call this directly.
//////////////////////////////////////////////////////////////////////////////
void loadPlannedCollections();
//////////////////////////////////////////////////////////////////////////////
/// @brief (re-)load the information about our plan
/// Usually one does not have to call this directly.
@@ -598,11 +611,11 @@ class ClusterInfo {
void loadPlan();
//////////////////////////////////////////////////////////////////////////////
/// @brief (re-)load the information about current databases
/// @brief (re-)load the information about current state
/// Usually one does not have to call this directly.
//////////////////////////////////////////////////////////////////////////////
void loadCurrentDatabases();
void loadCurrent();
//////////////////////////////////////////////////////////////////////////////
/// @brief ask about a collection
@@ -906,15 +919,16 @@ class ClusterInfo {
ProtectionData _coordinatorsProt;
std::shared_ptr<VPackBuilder> _plan;
std::shared_ptr<VPackBuilder> _current;
std::unordered_map<DatabaseID, VPackSlice> _plannedDatabases; // from Plan/Databases
ProtectionData _planProt;
std::unordered_map<DatabaseID,
std::unordered_map<ServerID, struct TRI_json_t*>>
std::unordered_map<ServerID, VPackSlice>>
_currentDatabases; // from Current/Databases
ProtectionData _currentDatabasesProt;
ProtectionData _currentProt;
// We need information about collections, again we have
// data from Plan and from Current.
@@ -926,7 +940,6 @@ class ClusterInfo {
// The Plan state:
AllCollections _plannedCollections; // from Plan/Collections/
ProtectionData _plannedCollectionsProt;
std::unordered_map<CollectionID,
std::shared_ptr<std::vector<std::string>>>
_shards; // from Plan/Collections/

View File

@@ -113,9 +113,6 @@ void HeartbeatThread::runDBServer() {
// convert timeout to seconds
double const interval = (double)_interval / 1000.0 / 1000.0;
// value of Sync/Commands/my-id at startup
uint64_t lastCommandIndex = getLastCommandIndex();
std::function<bool(VPackSlice const& result)> updatePlan = [&](
VPackSlice const& result) {
if (!result.isNumber()) {
@@ -170,23 +167,23 @@
break;
}
{
// send an initial GET request to Sync/Commands/my-id
AgencyCommResult result =
_agency.getValues("Sync/Commands/" + _myId, false);
if (result.successful()) {
handleStateChange(result, lastCommandIndex);
}
}
if (isStopping()) {
break;
}
if (--currentCount == 0) {
currentCount = currentCountStart;
// send an initial GET request to Sync/Commands/my-id
LOG(TRACE) << "Looking at Sync/Commands/" + _myId;
AgencyCommResult result =
_agency.getValues2("Sync/Commands/" + _myId);
if (result.successful()) {
handleStateChange(result);
}
if (isStopping()) {
break;
}
LOG(TRACE) << "Refetching Current/Version...";
AgencyCommResult res = _agency.getValues2("Current/Version");
if (!res.successful()) {
@@ -286,9 +283,6 @@ void HeartbeatThread::runCoordinator() {
// last value of current which we have noticed:
uint64_t lastCurrentVersionNoticed = 0;
// value of Sync/Commands/my-id at startup
uint64_t lastCommandIndex = getLastCommandIndex();
setReady();
while (!isStopping()) {
@@ -303,28 +297,20 @@
break;
}
{
// send an initial GET request to Sync/Commands/my-id
AgencyCommResult result =
_agency.getValues("Sync/Commands/" + _myId, false);
AgencyReadTransaction trx(std::vector<std::string>({
_agency.prefix() + "Plan/Version",
_agency.prefix() + "Current/Version",
_agency.prefix() + "Sync/Commands/" + _myId,
_agency.prefix() + "Sync/UserVersion"}));
AgencyCommResult result = _agency.sendTransactionWithFailover(trx);
if (result.successful()) {
handleStateChange(result, lastCommandIndex);
}
}
if (!result.successful()) {
LOG(WARN) << "Heartbeat: Could not read from agency!";
} else {
LOG(TRACE) << "Looking at Sync/Commands/" + _myId;
if (isStopping()) {
break;
}
handleStateChange(result);
bool shouldSleep = true;
// get the current version of the Plan
AgencyCommResult result = _agency.getValues2("Plan/Version");
if (result.successful()) {
VPackSlice versionSlice
= result.slice()[0].get(std::vector<std::string>(
{_agency.prefixStripped(), "Plan", "Version"}));
@@ -345,15 +331,8 @@
}
}
}
}
result.clear();
result = _agency.getValues2("Sync/UserVersion");
if (result.successful()) {
velocypack::Slice slice =
VPackSlice slice =
result.slice()[0].get(std::vector<std::string>(
{_agency.prefixStripped(), "Sync", "UserVersion"}));
@@ -395,14 +374,9 @@
}
}
}
}
result = _agency.getValues2("Current/Version");
if (result.successful()) {
VPackSlice versionSlice
= result.slice()[0].get(std::vector<std::string>(
{_agency.prefixStripped(), "Plan", "Version"}));
versionSlice = result.slice()[0].get(std::vector<std::string>(
{_agency.prefixStripped(), "Plan", "Version"}));
if (versionSlice.isInteger()) {
uint64_t currentVersion = 0;
@@ -419,19 +393,17 @@
}
}
if (shouldSleep) {
double remain = interval - (TRI_microtime() - start);
double remain = interval - (TRI_microtime() - start);
// sleep for a while if appropriate, on some systems usleep does not
// like arguments greater than 1000000
while (remain > 0.0) {
if (remain >= 0.5) {
usleep(500000);
remain -= 0.5;
} else {
usleep((unsigned long)(remain * 1000.0 * 1000.0));
remain = 0.0;
}
// sleep for a while if appropriate, on some systems usleep does not
// like arguments greater than 1000000
while (remain > 0.0) {
if (remain >= 0.5) {
usleep(500000);
remain -= 0.5;
} else {
usleep((unsigned long)(remain * 1000.0 * 1000.0));
remain = 0.0;
}
}
@@ -477,39 +449,6 @@ void HeartbeatThread::removeDispatchedJob(ServerJobResult result) {
_condition.signal();
}
////////////////////////////////////////////////////////////////////////////////
/// @brief fetch the index id of the value of Sync/Commands/my-id from the
/// agency this index value is determined initially and it is passed to the
/// watch command (we're waiting for an entry with a higher id)
////////////////////////////////////////////////////////////////////////////////
uint64_t HeartbeatThread::getLastCommandIndex() {
// get the initial command state
AgencyCommResult result = _agency.getValues("Sync/Commands/" + _myId, false);
if (result.successful()) {
result.parse("Sync/Commands/", false);
std::map<std::string, AgencyCommResultEntry>::iterator it =
result._values.find(_myId);
if (it != result._values.end()) {
// found something
LOG(TRACE) << "last command index was: '" << (*it).second._index << "'";
return (*it).second._index;
}
}
if (result._index > 0) {
// use the value returned in header X-Etcd-Index
return result._index;
}
// nothing found. this is not an error
return 0;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief handles a plan version change, coordinator case
/// this is triggered if the heartbeat thread finds a new plan version number
@@ -648,7 +587,8 @@ bool HeartbeatThread::handlePlanChangeCoordinator(uint64_t currentPlanVersion) {
////////////////////////////////////////////////////////////////////////////////
/// @brief handles a plan version change, DBServer case
/// this is triggered if the heartbeat thread finds a new plan version number
/// this is triggered if the heartbeat thread finds a new plan version number,
/// and every few heartbeats if the Current/Version has changed.
////////////////////////////////////////////////////////////////////////////////
bool HeartbeatThread::syncDBServerStatusQuo() {
@@ -716,21 +656,12 @@ bool HeartbeatThread::syncDBServerStatusQuo() {
/// notified about this particular change again).
////////////////////////////////////////////////////////////////////////////////
bool HeartbeatThread::handleStateChange(AgencyCommResult& result,
uint64_t& lastCommandIndex) {
result.parse("Sync/Commands/", false);
std::map<std::string, AgencyCommResultEntry>::const_iterator it =
result._values.find(_myId);
if (it != result._values.end()) {
lastCommandIndex = (*it).second._index;
std::string command = "";
VPackSlice const slice = it->second._vpack->slice();
if (slice.isString()) {
command = slice.copyString();
}
bool HeartbeatThread::handleStateChange(AgencyCommResult& result) {
VPackSlice const slice = result.slice()[0].get(
std::vector<std::string>({ AgencyComm::prefixStripped(), "Sync",
"Commands", _myId }));
if (slice.isString()) {
std::string command = slice.copyString();
ServerState::StateEnum newState = ServerState::stringToState(command);
if (newState != ServerState::STATE_UNDEFINED) {

View File

@@ -128,13 +128,7 @@ class HeartbeatThread : public Thread {
/// @brief handles a state change
//////////////////////////////////////////////////////////////////////////////
bool handleStateChange(AgencyCommResult&, uint64_t&);
//////////////////////////////////////////////////////////////////////////////
/// @brief fetch the last value of Sync/Commands/my-id from the agency
//////////////////////////////////////////////////////////////////////////////
uint64_t getLastCommandIndex();
bool handleStateChange(AgencyCommResult&);
//////////////////////////////////////////////////////////////////////////////
/// @brief sends the current server's state to the agency

View File

@@ -608,15 +608,13 @@ static void JS_UniqidAgency(v8::FunctionCallbackInfo<v8::Value> const& args) {
TRI_V8_TRY_CATCH_BEGIN(isolate);
v8::HandleScope scope(isolate);
if (args.Length() < 1 || args.Length() > 3) {
TRI_V8_THROW_EXCEPTION_USAGE("uniqid(<key>, <count>, <timeout>)");
if (args.Length() > 2) {
TRI_V8_THROW_EXCEPTION_USAGE("uniqid(<count>, <timeout>)");
}
std::string const key = TRI_ObjectToString(args[0]);
uint64_t count = 1;
if (args.Length() > 1) {
count = TRI_ObjectToUInt64(args[1], true);
if (args.Length() > 0) {
count = TRI_ObjectToUInt64(args[0], true);
}
if (count < 1 || count > 10000000) {
@@ -624,18 +622,14 @@ static void JS_UniqidAgency(v8::FunctionCallbackInfo<v8::Value> const& args) {
}
double timeout = 0.0;
if (args.Length() > 2) {
timeout = TRI_ObjectToDouble(args[2]);
if (args.Length() > 1) {
timeout = TRI_ObjectToDouble(args[1]);
}
AgencyComm comm;
AgencyCommResult result = comm.uniqid(key, count, timeout);
uint64_t result = comm.uniqid(count, timeout);
if (!result.successful() || result._index == 0) {
THROW_AGENCY_EXCEPTION(result);
}
std::string const value = StringUtils::itoa(result._index);
std::string const value = StringUtils::itoa(result);
TRI_V8_RETURN_STD_STRING(value);
TRI_V8_TRY_CATCH_END
@@ -770,8 +764,6 @@ static void JS_GetCollectionInfoClusterInfo(
v8::Number::New(isolate, ci->journalSize()));
result->Set(TRI_V8_ASCII_STRING("replicationFactor"),
v8::Number::New(isolate, ci->replicationFactor()));
result->Set(TRI_V8_ASCII_STRING("replicationQuorum"),
v8::Number::New(isolate, ci->replicationQuorum()));
std::vector<std::string> const& sks = ci->shardKeys();
v8::Handle<v8::Array> shardKeys = v8::Array::New(isolate, (int)sks.size());
@@ -792,7 +784,7 @@ static void JS_GetCollectionInfoClusterInfo(
}
result->Set(TRI_V8_ASCII_STRING("shards"), shardIds);
v8::Handle<v8::Value> indexes = TRI_ObjectJson(isolate, ci->getIndexes());
v8::Handle<v8::Value> indexes = TRI_VPackToV8(isolate, ci->getIndexes());
result->Set(TRI_V8_ASCII_STRING("indexes"), indexes);
TRI_V8_RETURN(result);
@@ -1072,7 +1064,7 @@ static void JS_UniqidClusterInfo(
TRI_V8_THROW_EXCEPTION_PARAMETER("<count> is invalid");
}
uint64_t value = ClusterInfo::instance()->uniqid();
uint64_t value = ClusterInfo::instance()->uniqid(count);
if (value == 0) {
TRI_V8_THROW_EXCEPTION_MESSAGE(TRI_ERROR_INTERNAL,

View File

@@ -193,8 +193,7 @@ bool HttpCommTask::processRead() {
// header is too large
HttpResponse response(
GeneralResponse::ResponseCode::REQUEST_HEADER_FIELDS_TOO_LARGE,
getCompatibility());
GeneralResponse::ResponseCode::REQUEST_HEADER_FIELDS_TOO_LARGE);
// we need to close the connection, because there is no way we
// know what to remove and then continue
@@ -223,8 +222,7 @@ bool HttpCommTask::processRead() {
LOG(ERR) << "cannot generate request";
// internal server error
HttpResponse response(GeneralResponse::ResponseCode::SERVER_ERROR,
getCompatibility());
HttpResponse response(GeneralResponse::ResponseCode::SERVER_ERROR);
// we need to close the connection, because there is no way we
// know how to remove the body and then continue
@@ -242,8 +240,7 @@ bool HttpCommTask::processRead() {
if (_httpVersion != GeneralRequest::ProtocolVersion::HTTP_1_0 &&
_httpVersion != GeneralRequest::ProtocolVersion::HTTP_1_1) {
HttpResponse response(
GeneralResponse::ResponseCode::HTTP_VERSION_NOT_SUPPORTED,
getCompatibility());
GeneralResponse::ResponseCode::HTTP_VERSION_NOT_SUPPORTED);
// we need to close the connection, because there is no way we
// know what to remove and then continue
@@ -258,8 +255,7 @@ bool HttpCommTask::processRead() {
if (_fullUrl.size() > 16384) {
HttpResponse response(
GeneralResponse::ResponseCode::REQUEST_URI_TOO_LONG,
getCompatibility());
GeneralResponse::ResponseCode::REQUEST_URI_TOO_LONG);
// we need to close the connection, because there is no way we
// know what to remove and then continue
@@ -343,8 +339,7 @@ bool HttpCommTask::processRead() {
// bad request, method not allowed
HttpResponse response(
GeneralResponse::ResponseCode::METHOD_NOT_ALLOWED,
getCompatibility());
GeneralResponse::ResponseCode::METHOD_NOT_ALLOWED);
// we need to close the connection, because there is no way we
// know what to remove and then continue
@@ -377,8 +372,7 @@ bool HttpCommTask::processRead() {
LOG(TRACE) << "cannot serve request - server is inactive";
HttpResponse response(
GeneralResponse::ResponseCode::SERVICE_UNAVAILABLE,
getCompatibility());
GeneralResponse::ResponseCode::SERVICE_UNAVAILABLE);
// we need to close the connection, because there is no way we
// know what to remove and then continue
@@ -489,8 +483,6 @@ bool HttpCommTask::processRead() {
// authenticate
// .............................................................................
auto const compatibility = _request->compatibility();
GeneralResponse::ResponseCode authResult =
_server->handlerFactory()->authenticateRequest(_request);
@@ -499,15 +491,15 @@ bool HttpCommTask::processRead() {
if (authResult == GeneralResponse::ResponseCode::OK || isOptionsRequest) {
// handle HTTP OPTIONS requests directly
if (isOptionsRequest) {
processCorsOptions(compatibility);
processCorsOptions();
} else {
processRequest(compatibility);
processRequest();
}
}
// not found
else if (authResult == GeneralResponse::ResponseCode::NOT_FOUND) {
HttpResponse response(authResult, compatibility);
HttpResponse response(authResult);
response.setContentType(StaticStrings::MimeTypeJson);
response.body()
@@ -525,7 +517,7 @@ bool HttpCommTask::processRead() {
// forbidden
else if (authResult == GeneralResponse::ResponseCode::FORBIDDEN) {
HttpResponse response(authResult, compatibility);
HttpResponse response(authResult);
response.setContentType(StaticStrings::MimeTypeJson);
response.body()
@@ -542,8 +534,7 @@ bool HttpCommTask::processRead() {
// not authenticated
else {
HttpResponse response(GeneralResponse::ResponseCode::UNAUTHORIZED,
compatibility);
HttpResponse response(GeneralResponse::ResponseCode::UNAUTHORIZED);
if (sendWwwAuthenticateHeader()) {
std::string realm =
"basic realm=\"" +
@@ -712,8 +703,7 @@ bool HttpCommTask::checkContentLength(bool expectContentLength) {
if (bodyLength < 0) {
// bad request, body length is < 0. this is a client error
HttpResponse response(GeneralResponse::ResponseCode::LENGTH_REQUIRED,
getCompatibility());
HttpResponse response(GeneralResponse::ResponseCode::LENGTH_REQUIRED);
resetState(true);
handleResponse(&response);
@@ -735,8 +725,7 @@ bool HttpCommTask::checkContentLength(bool expectContentLength) {
// request entity too large
HttpResponse response(
GeneralResponse::ResponseCode::REQUEST_ENTITY_TOO_LARGE,
getCompatibility());
GeneralResponse::ResponseCode::REQUEST_ENTITY_TOO_LARGE);
resetState(true);
handleResponse(&response);
@@ -779,8 +768,8 @@ void HttpCommTask::fillWriteBuffer() {
/// @brief handles CORS options
////////////////////////////////////////////////////////////////////////////////
void HttpCommTask::processCorsOptions(uint32_t compatibility) {
HttpResponse response(GeneralResponse::ResponseCode::OK, compatibility);
void HttpCommTask::processCorsOptions() {
HttpResponse response(GeneralResponse::ResponseCode::OK);
response.setHeaderNC(StaticStrings::Allow, StaticStrings::CorsMethods);
@@ -817,7 +806,7 @@ void HttpCommTask::processCorsOptions(uint32_t compatibility) {
/// @brief processes a request
////////////////////////////////////////////////////////////////////////////////
void HttpCommTask::processRequest(uint32_t compatibility) {
void HttpCommTask::processRequest() {
// check for deflate
bool found;
std::string const& acceptEncoding =
@@ -846,8 +835,7 @@ void HttpCommTask::processRequest(uint32_t compatibility) {
if (handler == nullptr) {
LOG(TRACE) << "no handler is known, giving up";
HttpResponse response(GeneralResponse::ResponseCode::NOT_FOUND,
compatibility);
HttpResponse response(GeneralResponse::ResponseCode::NOT_FOUND);
clearRequest();
handleResponse(&response);
@@ -886,8 +874,7 @@ void HttpCommTask::processRequest(uint32_t compatibility) {
}
if (ok) {
HttpResponse response(GeneralResponse::ResponseCode::ACCEPTED,
compatibility);
HttpResponse response(GeneralResponse::ResponseCode::ACCEPTED);
if (jobId > 0) {
// return the job id we just created
@@ -906,8 +893,7 @@ void HttpCommTask::processRequest(uint32_t compatibility) {
}
if (!ok) {
HttpResponse response(GeneralResponse::ResponseCode::SERVER_ERROR,
compatibility);
HttpResponse response(GeneralResponse::ResponseCode::SERVER_ERROR);
handleResponse(&response);
}
}
@@ -983,18 +969,6 @@ bool HttpCommTask::sendWwwAuthenticateHeader() const {
return !found;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief get request compatibility
////////////////////////////////////////////////////////////////////////////////
int32_t HttpCommTask::getCompatibility() const {
if (_request != nullptr) {
return _request->compatibility();
}
return GeneralRequest::MIN_COMPATIBILITY;
}
bool HttpCommTask::setup(Scheduler* scheduler, EventLoop loop) {
bool ok = SocketTask::setup(scheduler, loop);

View File

@@ -124,13 +124,13 @@ class HttpCommTask : public SocketTask, public RequestStatisticsAgent {
/// @brief handles CORS options
//////////////////////////////////////////////////////////////////////////////
void processCorsOptions(uint32_t compatibility);
void processCorsOptions();
//////////////////////////////////////////////////////////////////////////////
/// @brief processes a request
//////////////////////////////////////////////////////////////////////////////
void processRequest(uint32_t compatibility);
void processRequest();
//////////////////////////////////////////////////////////////////////////////
/// @brief clears the request object
@@ -154,12 +154,6 @@ class HttpCommTask : public SocketTask, public RequestStatisticsAgent {
bool sendWwwAuthenticateHeader() const;
//////////////////////////////////////////////////////////////////////////////
/// @brief get request compatibility
//////////////////////////////////////////////////////////////////////////////
int32_t getCompatibility() const;
protected:
bool setup(Scheduler* scheduler, EventLoop loop) override;

View File

@@ -185,8 +185,7 @@ HttpHandler::status_t HttpHandler::executeFull() {
}
if (status._status != HANDLER_ASYNC && _response == nullptr) {
_response = new HttpResponse(GeneralResponse::ResponseCode::SERVER_ERROR,
GeneralRequest::MIN_COMPATIBILITY);
_response = new HttpResponse(GeneralResponse::ResponseCode::SERVER_ERROR);
}
requestStatisticsAgentSetRequestEnd();
@@ -245,14 +244,6 @@ void HttpHandler::createResponse(GeneralResponse::ResponseCode code) {
delete _response;
_response = nullptr;
int32_t apiCompatibility;
if (_request != nullptr) {
apiCompatibility = _request->compatibility();
} else {
apiCompatibility = GeneralRequest::MIN_COMPATIBILITY;
}
// create a "standard" (standalone) Http response
_response = new HttpResponse(code, apiCompatibility);
_response = new HttpResponse(code);
}

View File

@@ -61,12 +61,10 @@ class MaintenanceHandler : public HttpHandler {
////////////////////////////////////////////////////////////////////////////////
HttpHandlerFactory::HttpHandlerFactory(std::string const& authenticationRealm,
int32_t minCompatibility,
bool allowMethodOverride,
context_fptr setContext,
void* setContextData)
: _authenticationRealm(authenticationRealm),
_minCompatibility(minCompatibility),
_allowMethodOverride(allowMethodOverride),
_setContext(setContext),
_setContextData(setContextData),
@@ -141,8 +139,7 @@ std::string HttpHandlerFactory::authenticationRealm(
HttpRequest* HttpHandlerFactory::createRequest(ConnectionInfo const& info,
char const* ptr, size_t length) {
HttpRequest* request = new HttpRequest(info, ptr, length, _minCompatibility,
_allowMethodOverride);
HttpRequest* request = new HttpRequest(info, ptr, length, _allowMethodOverride);
if (request != nullptr) {
setRequestContext(request);

View File

@@ -79,7 +79,7 @@ class HttpHandlerFactory {
/// @brief constructs a new handler factory
//////////////////////////////////////////////////////////////////////////////
HttpHandlerFactory(std::string const&, int32_t, bool, context_fptr, void*);
HttpHandlerFactory(std::string const&, bool, context_fptr, void*);
HttpHandlerFactory(HttpHandlerFactory const&) = delete;
HttpHandlerFactory& operator=(HttpHandlerFactory const&) = delete;
@@ -149,14 +149,6 @@ class HttpHandlerFactory {
std::string _authenticationRealm;
//////////////////////////////////////////////////////////////////////////////
/// @brief minimum compatibility
/// the value is an ArangoDB version number in the following format:
/// 10000 * major + 100 * minor (e.g. 10400 for ArangoDB 1.4)
//////////////////////////////////////////////////////////////////////////////
int32_t _minCompatibility;
//////////////////////////////////////////////////////////////////////////////
/// @brief allow overriding HTTP request method with custom headers
//////////////////////////////////////////////////////////////////////////////

View File

@@ -516,7 +516,7 @@ int Syncer::createIndex(VPackSlice const& slice) {
std::string cnameString = getCName(slice);
// TODO
// Backwards compatibiltiy. old check to nullptr, new is empty string
// Backwards compatibility. old check to nullptr, new is empty string
// Other api does not know yet.
char const* cname = nullptr;
if (!cnameString.empty()) {
@@ -575,7 +575,7 @@ int Syncer::dropIndex(arangodb::velocypack::Slice const& slice) {
std::string cnameString = getCName(slice);
// TODO
// Backwards compatibiltiy. old check to nullptr, new is empty string
// Backwards compatibility. old check to nullptr, new is empty string
// Other api does not know yet.
char const* cname = nullptr;
if (!cnameString.empty()) {

View File

@@ -132,7 +132,7 @@ HttpHandler::status_t RestBatchHandler::execute() {
LOG(TRACE) << "part header is: " << std::string(headerStart, headerLength);
HttpRequest* request =
new HttpRequest(_request->connectionInfo(), headerStart, headerLength,
_request->compatibility(), false);
false);
if (request == nullptr) {
generateError(GeneralResponse::ResponseCode::SERVER_ERROR,

View File

@@ -22,15 +22,18 @@
#include "RestServer/BootstrapFeature.h"
#include "Logger/Logger.h"
#include "Aql/QueryList.h"
#include "Cluster/AgencyComm.h"
#include "Cluster/ClusterInfo.h"
#include "Cluster/ServerState.h"
#include "HttpServer/HttpHandlerFactory.h"
#include "Logger/Logger.h"
#include "Rest/GeneralResponse.h"
#include "Rest/Version.h"
#include "RestServer/DatabaseFeature.h"
#include "RestServer/DatabaseServerFeature.h"
#include "V8Server/V8DealerFeature.h"
#include "VocBase/server.h"
using namespace arangodb;
using namespace arangodb::application_features;
@@ -148,3 +151,35 @@ void BootstrapFeature::start() {
<< ") is ready for business. Have fun!";
}
void BootstrapFeature::stop() {
auto server = ApplicationServer::getFeature<DatabaseServerFeature>("DatabaseServer");
TRI_server_t* s = server->SERVER;
// notify all currently running queries about the shutdown
if (ServerState::instance()->isCoordinator()) {
std::vector<TRI_voc_tick_t> ids = TRI_GetIdsCoordinatorDatabaseServer(s, true);
for (auto& id : ids) {
TRI_vocbase_t* vocbase = TRI_UseByIdCoordinatorDatabaseServer(s, id);
if (vocbase != nullptr) {
vocbase->_queries->killAll(true);
TRI_ReleaseVocBase(vocbase);
}
}
} else {
std::vector<std::string> names;
int res = TRI_GetDatabaseNamesServer(s, names);
if (res == TRI_ERROR_NO_ERROR) {
for (auto& name : names) {
TRI_vocbase_t* vocbase = TRI_UseDatabaseServer(s, name.c_str());
if (vocbase != nullptr) {
vocbase->_queries->killAll(true);
TRI_ReleaseVocBase(vocbase);
}
}
}
}
}

View File

@@ -32,6 +32,7 @@ class BootstrapFeature final : public application_features::ApplicationFeature {
public:
void start() override final;
void stop() override final;
};
}

View File

@@ -79,7 +79,6 @@ RestServerFeature::RestServerFeature(
: ApplicationFeature(server, "RestServer"),
_keepAliveTimeout(300.0),
_authenticationRealm(authenticationRealm),
_defaultApiCompatibility(Version::getNumericServerVersion()),
_allowMethodOverride(false),
_authentication(true),
_authenticationUnixSockets(true),
@@ -104,10 +103,6 @@ void RestServerFeature::collectOptions(
std::shared_ptr<ProgramOptions> options) {
options->addSection("server", "Server features");
options->addHiddenOption("--server.default-api-compatibility",
"default API compatibility version",
new Int32Parameter(&_defaultApiCompatibility));
options->addOption("--server.authentication",
"enable or disable authentication for ALL client requests",
new BooleanParameter(&_authentication));
@@ -140,12 +135,6 @@ void RestServerFeature::collectOptions(
}
void RestServerFeature::validateOptions(std::shared_ptr<ProgramOptions>) {
if (_defaultApiCompatibility < HttpRequest::MIN_COMPATIBILITY) {
LOG(FATAL) << "invalid value for --server.default-api-compatibility. "
"minimum allowed value is "
<< HttpRequest::MIN_COMPATIBILITY;
FATAL_ERROR_EXIT();
}
}
static TRI_vocbase_t* LookupDatabaseFromRequest(HttpRequest* request,
@@ -197,15 +186,12 @@ void RestServerFeature::prepare() {
}
void RestServerFeature::start() {
LOG(DEBUG) << "using default API compatibility: "
<< (long int)_defaultApiCompatibility;
_jobManager.reset(new AsyncJobManager(ClusterCommRestCallback));
_httpOptions._vocbase = DatabaseFeature::DATABASE->vocbase();
_handlerFactory.reset(new HttpHandlerFactory(
_authenticationRealm, _defaultApiCompatibility, _allowMethodOverride,
_authenticationRealm, _allowMethodOverride,
&SetRequestContext, DatabaseServerFeature::SERVER));
defineHandlers();

View File

@@ -52,7 +52,6 @@ class RestServerFeature final
private:
double _keepAliveTimeout;
std::string const _authenticationRealm;
int32_t _defaultApiCompatibility;
bool _allowMethodOverride;
bool _authentication;
bool _authenticationUnixSockets;

View File

@@ -136,6 +136,7 @@ static void createBabiesError(VPackBuilder& builder,
builder.openObject();
builder.add("error", VPackValue(true));
builder.add("errorNum", VPackValue(errorCode));
builder.add("errorMessage", VPackValue(TRI_errno_string(errorCode)));
builder.close();
}
@@ -764,6 +765,40 @@ VPackSlice Transaction::extractKeyFromDocument(VPackSlice slice) {
return slice.get(StaticStrings::KeyString);
}
//////////////////////////////////////////////////////////////////////////////
/// @brief quick access to the _id attribute in a database document
/// the document must have at least two attributes, and _id is supposed to
/// be the second one
/// note that this may return a Slice of type Custom!
//////////////////////////////////////////////////////////////////////////////
VPackSlice Transaction::extractIdFromDocument(VPackSlice slice) {
if (slice.isExternal()) {
slice = slice.resolveExternal();
}
TRI_ASSERT(slice.isObject());
// a regular document must have at least the three attributes
// _key, _id and _rev (in this order). _id must be the second attribute
TRI_ASSERT(slice.length() >= 2);
uint8_t const* p = slice.begin() + slice.findDataOffset(slice.head());
if (*p == basics::VelocyPackHelper::KeyAttribute) {
// skip over _key
++p;
// skip over _key value
p += VPackSlice(p).byteSize();
if (*p == basics::VelocyPackHelper::IdAttribute) {
// the + 1 is required so that we can skip over the attribute name
// and point to the attribute value
return VPackSlice(p + 1);
}
}
// fall back to the regular lookup method
return slice.get(StaticStrings::IdString);
}
//////////////////////////////////////////////////////////////////////////////
/// @brief quick access to the _from attribute in a database document
/// the document must have at least five attributes: _key, _id, _from, _to
@@ -3030,9 +3065,7 @@ std::shared_ptr<Index> Transaction::indexForCollectionCoordinator(
name.c_str(), _vocbase->_name);
}
TRI_json_t const* json = (*collectionInfo).getIndexes();
auto indexBuilder = arangodb::basics::JsonHelper::toVelocyPack(json);
VPackSlice const slice = indexBuilder->slice();
VPackSlice const slice = (*collectionInfo).getIndexes();
if (slice.isArray()) {
for (auto const& v : VPackArrayIterator(slice)) {
@@ -3094,9 +3127,7 @@ std::vector<std::shared_ptr<Index>> Transaction::indexesForCollectionCoordinator
name.c_str(), _vocbase->_name);
}
TRI_json_t const* json = collectionInfo->getIndexes();
auto indexBuilder = arangodb::basics::JsonHelper::toVelocyPack(json);
VPackSlice const slice = indexBuilder->slice();
VPackSlice const slice = collectionInfo->getIndexes();
if (slice.isArray()) {
size_t const n = static_cast<size_t>(slice.length());

View File

@@ -308,6 +308,15 @@ class Transaction {
//////////////////////////////////////////////////////////////////////////////
static VPackSlice extractKeyFromDocument(VPackSlice);
//////////////////////////////////////////////////////////////////////////////
/// @brief quick access to the _id attribute in a database document
/// the document must have at least two attributes, and _id is supposed to
/// be the second one
/// note that this may return a Slice of type Custom!
//////////////////////////////////////////////////////////////////////////////
static VPackSlice extractIdFromDocument(VPackSlice);
//////////////////////////////////////////////////////////////////////////////
/// @brief quick access to the _from attribute in a database document

View File

@@ -163,8 +163,7 @@ void WorkMonitor::vpackHandler(VPackBuilder* b, WorkDescription* desc) {
////////////////////////////////////////////////////////////////////////////////
void WorkMonitor::sendWorkOverview(uint64_t taskId, std::string const& data) {
auto response = std::make_unique<HttpResponse>(GeneralResponse::ResponseCode::OK,
GeneralRequest::MIN_COMPATIBILITY);
auto response = std::make_unique<HttpResponse>(GeneralResponse::ResponseCode::OK);
response->setContentType(StaticStrings::MimeTypeJson);
TRI_AppendString2StringBuffer(response->body().stringBuffer(), data.c_str(),

View File

@@ -128,7 +128,7 @@ class v8_action_t : public TRI_action_t {
result.isValid = true;
result.response =
new HttpResponse(GeneralResponse::ResponseCode::NOT_FOUND, request->compatibility());
new HttpResponse(GeneralResponse::ResponseCode::NOT_FOUND);
return result;
}
@@ -488,11 +488,6 @@ static v8::Handle<v8::Object> RequestCppToV8(v8::Isolate* isolate,
TRI_GET_GLOBAL_STRING(CookiesKey);
req->ForceSet(CookiesKey, cookiesObject);
// determine API compatibility version
int32_t compatibility = request->compatibility();
TRI_GET_GLOBAL_STRING(CompatibilityKey);
req->ForceSet(CompatibilityKey, v8::Integer::New(isolate, compatibility));
return req;
}
@@ -502,8 +497,7 @@ static v8::Handle<v8::Object> RequestCppToV8(v8::Isolate* isolate,
static HttpResponse* ResponseV8ToCpp(v8::Isolate* isolate,
TRI_v8_global_t const* v8g,
v8::Handle<v8::Object> const res,
uint32_t compatibility) {
v8::Handle<v8::Object> const res) {
GeneralResponse::ResponseCode code = GeneralResponse::ResponseCode::OK;
TRI_GET_GLOBAL_STRING(ResponseCodeKey);
@@ -513,7 +507,7 @@ static HttpResponse* ResponseV8ToCpp(v8::Isolate* isolate,
(int)(TRI_ObjectToDouble(res->Get(ResponseCodeKey))));
}
auto response = std::make_unique<HttpResponse>(code, compatibility);
auto response = std::make_unique<HttpResponse>(code);
TRI_GET_GLOBAL_STRING(ContentTypeKey);
if (res->Has(ContentTypeKey)) {
@@ -722,7 +716,7 @@ static TRI_action_result_t ExecuteActionVocbase(
result.canceled = false;
HttpResponse* response =
new HttpResponse(GeneralResponse::ResponseCode::SERVER_ERROR, request->compatibility());
new HttpResponse(GeneralResponse::ResponseCode::SERVER_ERROR);
if (errorMessage.empty()) {
errorMessage = TRI_errno_string(errorCode);
}
@@ -736,8 +730,7 @@ static TRI_action_result_t ExecuteActionVocbase(
else if (tryCatch.HasCaught()) {
if (tryCatch.CanContinue()) {
HttpResponse* response = new HttpResponse(GeneralResponse::ResponseCode::SERVER_ERROR,
request->compatibility());
HttpResponse* response = new HttpResponse(GeneralResponse::ResponseCode::SERVER_ERROR);
response->body().appendText(TRI_StringifyV8Exception(isolate, &tryCatch));
result.response = response;
@@ -750,7 +743,7 @@ static TRI_action_result_t ExecuteActionVocbase(
else {
result.response =
ResponseV8ToCpp(isolate, v8g, res, request->compatibility());
ResponseV8ToCpp(isolate, v8g, res);
}
return result;

View File

@@ -1305,9 +1305,6 @@ static void JS_PropertiesVocbaseCol(
result->Set(
TRI_V8_ASCII_STRING("replicationFactor"),
v8::Number::New(isolate, static_cast<double>(c->replicationFactor())));
result->Set(
TRI_V8_ASCII_STRING("replicationQuorum"),
v8::Number::New(isolate, static_cast<double>(c->replicationQuorum())));
TRI_V8_RETURN(result);
}

View File

@@ -871,7 +871,6 @@ static void CreateCollectionCoordinator(
uint64_t numberOfShards = 1;
std::vector<std::string> shardKeys;
uint64_t replicationFactor = 1;
uint64_t replicationQuorum = 1;
// default shard key
shardKeys.push_back("_key");
@@ -950,10 +949,6 @@ static void CreateCollectionCoordinator(
if (p->Has(TRI_V8_ASCII_STRING("replicationFactor"))) {
replicationFactor = TRI_ObjectToUInt64(p->Get(TRI_V8_ASCII_STRING("replicationFactor")), false);
}
if (p->Has(TRI_V8_ASCII_STRING("replicationQuorum"))) {
replicationQuorum = TRI_ObjectToUInt64(p->Get(TRI_V8_ASCII_STRING("replicationQuorum")), false);
}
}
if (numberOfShards == 0 || numberOfShards > 1000) {
@@ -964,10 +959,6 @@ static void CreateCollectionCoordinator(
TRI_V8_THROW_EXCEPTION_PARAMETER("invalid replicationFactor");
}
if (replicationQuorum == 0 || replicationQuorum > replicationFactor) {
TRI_V8_THROW_EXCEPTION_PARAMETER("invalid replicationQuorum");
}
if (shardKeys.empty() || shardKeys.size() > 8) {
TRI_V8_THROW_EXCEPTION_PARAMETER("invalid number of shard keys");
}
@@ -1062,7 +1053,6 @@ static void CreateCollectionCoordinator(
("journalSize", Value(parameters.maximalSize()))
("indexBuckets", Value(parameters.indexBuckets()))
("replicationFactor", Value(replicationFactor))
("replicationQuorum", Value(replicationQuorum))
("keyOptions", Value(ValueType::Object))
("type", Value("traditional"))
("allowUserKeys", Value(allowUserKeys))
@@ -1115,7 +1105,7 @@ static void CreateCollectionCoordinator(
if (myerrno != TRI_ERROR_NO_ERROR) {
TRI_V8_THROW_EXCEPTION_MESSAGE(myerrno, errorMsg);
}
ci->loadPlannedCollections();
ci->loadPlan();
std::shared_ptr<CollectionInfo> c = ci->getCollection(databaseName, cid);
TRI_vocbase_col_t* newcoll = CoordinatorCollection(vocbase, *c);
@@ -1286,9 +1276,7 @@ static void GetIndexesCoordinator(
v8::Handle<v8::Array> ret = v8::Array::New(isolate);
std::shared_ptr<VPackBuilder> tmp =
arangodb::basics::JsonHelper::toVelocyPack(c->getIndexes());
VPackSlice slice = tmp->slice();
VPackSlice slice = c->getIndexes();
if (slice.isArray()) {
uint32_t j = 0;

View File

@@ -894,12 +894,13 @@ VocbaseCollectionInfo::VocbaseCollectionInfo(CollectionInfo const& other)
std::string const name = other.name();
memset(_name, 0, sizeof(_name));
memcpy(_name, name.c_str(), name.size());
VPackSlice keyOptionsSlice(other.keyOptions());
std::unique_ptr<TRI_json_t> otherOpts(other.keyOptions());
if (otherOpts != nullptr) {
std::shared_ptr<arangodb::velocypack::Builder> builder =
arangodb::basics::JsonHelper::toVelocyPack(otherOpts.get());
_keyOptions = builder->steal();
if (!keyOptionsSlice.isNone()) {
VPackBuilder builder;
builder.add(keyOptionsSlice);
_keyOptions = builder.steal();
}
}

View File

@@ -1712,7 +1712,7 @@ void TRI_EnableDeadlockDetectionDatabasesServer(TRI_server_t* server) {
////////////////////////////////////////////////////////////////////////////////
std::vector<TRI_voc_tick_t> TRI_GetIdsCoordinatorDatabaseServer(
TRI_server_t* server) {
TRI_server_t* server, bool includeSystem) {
std::vector<TRI_voc_tick_t> v;
{
auto unuser(server->_databasesProtector.use());
@@ -1722,7 +1722,7 @@ std::vector<TRI_voc_tick_t> TRI_GetIdsCoordinatorDatabaseServer(
TRI_vocbase_t* vocbase = p.second;
TRI_ASSERT(vocbase != nullptr);
if (!TRI_EqualString(vocbase->_name, TRI_VOC_SYSTEM_DATABASE)) {
if (includeSystem || !TRI_EqualString(vocbase->_name, TRI_VOC_SYSTEM_DATABASE)) {
v.emplace_back(vocbase->_id);
}
}

View File

@@ -160,10 +160,10 @@ void TRI_EnableDeadlockDetectionDatabasesServer(TRI_server_t*);
////////////////////////////////////////////////////////////////////////////////
/// @brief get the ids of all local coordinator databases
/// the caller is responsible for freeing the result
////////////////////////////////////////////////////////////////////////////////
std::vector<TRI_voc_tick_t> TRI_GetIdsCoordinatorDatabaseServer(TRI_server_t*);
std::vector<TRI_voc_tick_t> TRI_GetIdsCoordinatorDatabaseServer(TRI_server_t*,
bool includeSystem = false);
////////////////////////////////////////////////////////////////////////////////
/// @brief drops an existing coordinator database

View File

@@ -167,10 +167,6 @@ function parseBodyForCreateCollection (req, res) {
if (body.hasOwnProperty("replicationFactor")) {
r.parameters.replicationFactor = body.replicationFactor || "";
}
if (body.hasOwnProperty("replicationQuorum")) {
r.parameters.replicationQuorum = body.replicationQuorum || "";
}
}
return r;

File diff suppressed because it is too large

View File

@@ -4739,7 +4739,7 @@ ArangoDatabase.prototype._create = function (name, properties, type) {
[ "waitForSync", "journalSize", "isSystem", "isVolatile",
"doCompact", "keyOptions", "shardKeys", "numberOfShards",
"distributeShardsLike", "indexBuckets", "id",
"replicationFactor", "replicationQuorum" ].forEach(function(p) {
"replicationFactor" ].forEach(function(p) {
if (properties.hasOwnProperty(p)) {
body[p] = properties[p];
}

View File

@@ -304,7 +304,7 @@ ArangoDatabase.prototype._create = function (name, properties, type) {
[ "waitForSync", "journalSize", "isSystem", "isVolatile",
"doCompact", "keyOptions", "shardKeys", "numberOfShards",
"distributeShardsLike", "indexBuckets", "id",
"replicationFactor", "replicationQuorum" ].forEach(function(p) {
"replicationFactor" ].forEach(function(p) {
if (properties.hasOwnProperty(p)) {
body[p] = properties[p];
}

View File

@@ -261,8 +261,6 @@ var helpArangoCollection = arangosh.createHelpHeadline("ArangoCollection help")
' <keepNull>) ' + "\n" +
' remove(<id>) delete document ' + "\n" +
' exists(<id>) checks whether a document exists ' + "\n" +
' first() first inserted/updated document ' + "\n" +
' last() last inserted/updated document ' + "\n" +
' ' + "\n" +
'Attributes: ' + "\n" +
' _database database object ' + "\n" +

View File

@@ -315,7 +315,7 @@ ArangoDatabase.prototype._create = function (name, properties, type) {
[ "waitForSync", "journalSize", "isSystem", "isVolatile",
"doCompact", "keyOptions", "shardKeys", "numberOfShards",
"distributeShardsLike", "indexBuckets", "id",
"replicationFactor", "replicationQuorum" ].forEach(function(p) {
"replicationFactor" ].forEach(function(p) {
if (properties.hasOwnProperty(p)) {
body[p] = properties[p];
}

View File

@@ -0,0 +1,150 @@
/*jshint globalstrict:false, strict:false, sub: true */
/*global fail, assertEqual */
////////////////////////////////////////////////////////////////////////////////
/// @brief very quick test for basic functionality
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2016-2016 ArangoDB GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is ArangoDB GmbH, Cologne, Germany
///
/// @author Max Neunhoeffer
/// @author Copyright 2016, ArangoDB GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
var jsunity = require("jsunity");
var arangodb = require("@arangodb");
var ERRORS = arangodb.errors;
var db = arangodb.db;
////////////////////////////////////////////////////////////////////////////////
/// @brief test attributes
////////////////////////////////////////////////////////////////////////////////
function QuickieSuite () {
'use strict';
return {
////////////////////////////////////////////////////////////////////////////////
/// @brief set up
////////////////////////////////////////////////////////////////////////////////
setUp : function () {
},
////////////////////////////////////////////////////////////////////////////////
/// @brief tear down
////////////////////////////////////////////////////////////////////////////////
tearDown : function () {
},
////////////////////////////////////////////////////////////////////////////////
/// @brief quickly create a collection and do some operations:
////////////////////////////////////////////////////////////////////////////////
testACollection: function () {
try {
db._drop("UnitTestCollection");
}
catch (e) {
}
// Create a collection:
var c = db._create("UnitTestCollection", {numberOfShards:2});
// Do a bunch of operations:
var r = c.insert({"Hallo":12});
var d = c.document(r._key);
assertEqual(12, d.Hallo);
c.replace(r._key, {"Hallo":13});
d = c.document(r._key);
assertEqual(13, d.Hallo);
c.update(r._key, {"Hallo":14});
d = c.document(r._key);
assertEqual(14, d.Hallo);
c.remove(r._key);
try {
d = c.document(r._key);
fail();
}
catch (e) {
assertEqual(ERRORS.ERROR_ARANGO_DOCUMENT_NOT_FOUND.code, e.errorNum);
}
// Drop the collection again:
c.drop();
},
////////////////////////////////////////////////////////////////////////////////
/// @brief quickly create a database and a collection and do some operations:
////////////////////////////////////////////////////////////////////////////////
testADatabase: function () {
try {
db._dropDatabase("UnitTestDatabase");
}
catch (e) {
}
db._createDatabase("UnitTestDatabase");
db._useDatabase("UnitTestDatabase");
// Create a collection:
var c = db._create("UnitTestCollection", {numberOfShards:2});
// Do a bunch of operations:
var r = c.insert({"Hallo":12});
var d = c.document(r._key);
assertEqual(12, d.Hallo);
c.replace(r._key, {"Hallo":13});
d = c.document(r._key);
assertEqual(13, d.Hallo);
c.update(r._key, {"Hallo":14});
d = c.document(r._key);
assertEqual(14, d.Hallo);
c.remove(r._key);
try {
d = c.document(r._key);
fail();
}
catch (e) {
assertEqual(ERRORS.ERROR_ARANGO_DOCUMENT_NOT_FOUND.code, e.errorNum);
}
// Drop the collection again:
c.drop();
// Drop the database again:
db._useDatabase("_system");
db._dropDatabase("UnitTestDatabase");
}
};
}
////////////////////////////////////////////////////////////////////////////////
/// @brief executes the test suite
////////////////////////////////////////////////////////////////////////////////
jsunity.run(QuickieSuite);
return jsunity.done();

View File

@@ -288,102 +288,6 @@ ArangoCollection.prototype.any = function () {
return this.ANY();
};
////////////////////////////////////////////////////////////////////////////////
/// @brief was docuBlock documentsCollectionFirst
////////////////////////////////////////////////////////////////////////////////
ArangoCollection.prototype.first = function (count) {
var cluster = require("@arangodb/cluster");
if (cluster.isCoordinator()) {
var dbName = require("internal").db._name();
var shards = cluster.shardList(dbName, this.name());
if (shards.length !== 1) {
var err = new ArangoError();
err.errorNum = internal.errors.ERROR_CLUSTER_UNSUPPORTED.code;
err.errorMessage = "operation is not supported in sharded collections";
throw err;
}
var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };
var shard = shards[0];
ArangoClusterComm.asyncRequest("put",
"shard:" + shard,
dbName,
"/_api/simple/first",
JSON.stringify({
collection: shard,
count: count
}),
{ },
options);
var results = cluster.wait(coord, shards);
if (results.length) {
var body = JSON.parse(results[0].body);
return body.result || null;
}
}
else {
return this.FIRST(count);
}
return null;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief was docuBlock documentsCollectionLast
////////////////////////////////////////////////////////////////////////////////
ArangoCollection.prototype.last = function (count) {
var cluster = require("@arangodb/cluster");
if (cluster.isCoordinator()) {
var dbName = require("internal").db._name();
var shards = cluster.shardList(dbName, this.name());
if (shards.length !== 1) {
var err = new ArangoError();
err.errorNum = internal.errors.ERROR_CLUSTER_UNSUPPORTED.code;
err.errorMessage = "operation is not supported in sharded collections";
throw err;
}
var coord = { coordTransactionID: ArangoClusterInfo.uniqid() };
var options = { coordTransactionID: coord.coordTransactionID, timeout: 360 };
var shard = shards[0];
ArangoClusterComm.asyncRequest("put",
"shard:" + shard,
dbName,
"/_api/simple/last",
JSON.stringify({
collection: shard,
count: count
}),
{ },
options);
var results = cluster.wait(coord, shards);
if (results.length) {
var body = JSON.parse(results[0].body);
return body.result || null;
}
}
else {
return this.LAST(count);
}
return null;
};
////////////////////////////////////////////////////////////////////////////////
/// @brief was docuBlock collectionFirstExample
////////////////////////////////////////////////////////////////////////////////

View File

@@ -79,9 +79,8 @@ ArangoStatement.prototype.execute = function () {
opts.cache = this._cache;
}
}
var result = AQL_EXECUTE(this._query, this._bindVars, opts);
return new GeneralArrayCursor(result.json, 0, null, result);
};

View File

@@ -136,7 +136,23 @@ function aqlVPackExternalsTestSuite () {
}
},
testExternalAttributeAccess: function () {
let coll = db._collection(collName);
let ecoll = db._collection(edgeColl);
coll.truncate();
ecoll.truncate();
coll.insert({ _key: "a", w: 1});
coll.insert({ _key: "b", w: 2});
coll.insert({ _key: "c", w: 3});
ecoll.insert({ _key: "a", _from: coll.name() + "/a", _to: coll.name() + "/b", w: 1});
ecoll.insert({ _key: "b", _from: coll.name() + "/b", _to: coll.name() + "/c", w: 2});
const query = `FOR x,y,p IN 1..10 OUTBOUND '${collName}/a' ${edgeColl} SORT x._key, y._key RETURN p.vertices[*].w`;
const cursor = db._query(query);
assertEqual([ 1, 2 ], cursor.next());
assertEqual([ 1, 2, 3 ], cursor.next());
}
};
}

View File

@@ -445,22 +445,6 @@ int TRI_InitStringCopyJson(TRI_memory_zone_t* zone, TRI_json_t* result,
return TRI_ERROR_NO_ERROR;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief creates a string reference object with given length
////////////////////////////////////////////////////////////////////////////////
TRI_json_t* TRI_CreateStringReferenceJson(TRI_memory_zone_t* zone,
char const* value, size_t length) {
TRI_json_t* result =
static_cast<TRI_json_t*>(TRI_Allocate(zone, sizeof(TRI_json_t), false));
if (result != nullptr) {
InitStringReference(result, value, length);
}
return result;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief initializes a string reference object
////////////////////////////////////////////////////////////////////////////////

View File

@@ -121,13 +121,6 @@ void TRI_InitStringJson(TRI_json_t*, char*, size_t);
int TRI_InitStringCopyJson(TRI_memory_zone_t*, TRI_json_t*, char const*,
size_t);
////////////////////////////////////////////////////////////////////////////////
/// @brief creates a string reference object with given length
////////////////////////////////////////////////////////////////////////////////
TRI_json_t* TRI_CreateStringReferenceJson(TRI_memory_zone_t*, char const* value,
size_t length);
////////////////////////////////////////////////////////////////////////////////
/// @brief creates a string reference object
////////////////////////////////////////////////////////////////////////////////
@@ -306,12 +299,6 @@ TRI_json_t* TRI_CopyJson(TRI_memory_zone_t*, TRI_json_t const*);
TRI_json_t* TRI_JsonString(TRI_memory_zone_t*, char const* text);
////////////////////////////////////////////////////////////////////////////////
/// @brief parses a json file and returns error message
////////////////////////////////////////////////////////////////////////////////
TRI_json_t* TRI_JsonFile(TRI_memory_zone_t*, char const* path, char** error);
////////////////////////////////////////////////////////////////////////////////
/// @brief default deleter for TRI_json_t
/// this can be used to put a TRI_json_t with TRI_UNKNOWN_MEM_ZONE into an

View File

@@ -542,73 +542,3 @@ TRI_json_t* TRI_JsonString (TRI_memory_zone_t* zone, char const* text) {
return object;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief parses a json file
////////////////////////////////////////////////////////////////////////////////
TRI_json_t* TRI_JsonFile (TRI_memory_zone_t* zone, char const* path, char** error) {
FILE* in;
TRI_json_t* value;
int c;
struct yyguts_t * yyg;
yyscan_t scanner;
value = static_cast<TRI_json_t*>(TRI_Allocate(zone, sizeof(TRI_json_t), false));
if (value == nullptr) {
// out of memory
return nullptr;
}
in = fopen(path, "rb");
if (in == nullptr) {
TRI_Free(zone, value);
return nullptr;
}
// init as a JSON null object so the memory in value is initialized
TRI_InitNullJson(value);
yylex_init(&scanner);
yyg = (struct yyguts_t*) scanner;
yyextra._memoryZone = zone;
yyin = in;
c = yylex(scanner);
if (! ParseValue(scanner, value, c)) {
TRI_FreeJson(zone, value);
value = nullptr;
}
else {
c = yylex(scanner);
if (c != END_OF_FILE) {
TRI_FreeJson(zone, value);
value = nullptr;
}
}
if (error != nullptr) {
if (yyextra._message != nullptr) {
*error = TRI_DuplicateString(yyextra._message);
}
else {
*error = nullptr;
}
}
yylex_destroy(scanner);
fclose(in);
return value;
}
// Local Variables:
// mode: C
// mode: outline-minor
// outline-regexp: "^\\(/// @brief\\|/// {@inheritDoc}\\|/// @addtogroup\\|// --SECTION--\\|/// @\\}\\)"
// End:

View File

@@ -32,8 +32,6 @@
using namespace arangodb;
using namespace arangodb::basics;
int32_t const GeneralRequest::MIN_COMPATIBILITY = 10300L;
static std::string const EMPTY_STR = "";
std::string GeneralRequest::translateVersion(ProtocolVersion version) {

View File

@@ -38,9 +38,6 @@ class GeneralRequest {
GeneralRequest(GeneralRequest const&) = delete;
GeneralRequest& operator=(GeneralRequest const&) = delete;
public:
static int32_t const MIN_COMPATIBILITY;
public:
// VSTREAM_CRED: This method is used for sending Authentication
// request,i.e; username and password.
@@ -83,10 +80,8 @@ class GeneralRequest {
static RequestType findRequestType(char const*, size_t const);
public:
GeneralRequest(ConnectionInfo const& connectionInfo,
int32_t defaultApiCompatibility)
explicit GeneralRequest(ConnectionInfo const& connectionInfo)
: _version(ProtocolVersion::UNKNOWN),
_defaultApiCompatibility(defaultApiCompatibility),
_connectionInfo(connectionInfo),
_clientTaskId(0),
_requestContext(nullptr),
@@ -102,8 +97,6 @@ class GeneralRequest {
std::string const& protocol() const { return _protocol; }
void setProtocol(std::string const& protocol) { _protocol = protocol; }
virtual int32_t compatibility() = 0;
ConnectionInfo const& connectionInfo() const { return _connectionInfo; }
void setConnectionInfo(ConnectionInfo const& connectionInfo) {
_connectionInfo = connectionInfo;
@@ -172,7 +165,6 @@ class GeneralRequest {
protected:
ProtocolVersion _version;
std::string _protocol;
int32_t _defaultApiCompatibility;
// connection info
ConnectionInfo _connectionInfo;

View File

@@ -415,9 +415,8 @@ GeneralResponse::ResponseCode GeneralResponse::responseCode(int code) {
}
}
GeneralResponse::GeneralResponse(ResponseCode responseCode,
uint32_t compatibility)
: _responseCode(responseCode), _apiCompatibility(compatibility) {}
GeneralResponse::GeneralResponse(ResponseCode responseCode)
: _responseCode(responseCode) {}
std::string const& GeneralResponse::header(std::string const& key) const {
std::string k = StringUtils::tolower(key);

View File

@@ -100,7 +100,7 @@ class GeneralResponse {
static ResponseCode responseCode(int);
public:
GeneralResponse(ResponseCode, uint32_t);
explicit GeneralResponse(ResponseCode);
virtual ~GeneralResponse() {}
public:
@@ -132,7 +132,6 @@ class GeneralResponse {
protected:
ResponseCode _responseCode;
uint32_t const _apiCompatibility;
std::unordered_map<std::string, std::string> _headers;
};
}

View File

@@ -47,10 +47,8 @@ std::string const HttpRequest::MULTI_PART_CONTENT_TYPE = "multipart/form-data";
HttpRequest::HttpRequest(ConnectionInfo const& connectionInfo,
char const* header, size_t length,
int32_t defaultApiCompatibility,
bool allowMethodOverride)
: GeneralRequest(connectionInfo, defaultApiCompatibility),
: GeneralRequest(connectionInfo),
_contentLength(0),
_header(nullptr),
_allowMethodOverride(allowMethodOverride) {
@@ -66,68 +64,6 @@ HttpRequest::~HttpRequest() {
delete[] _header;
}
int32_t HttpRequest::compatibility() {
int32_t result = _defaultApiCompatibility;
bool found;
std::string const& apiVersion = header("x-arango-version", found);
if (!found) {
return result;
}
char const* a = apiVersion.c_str();
char const* p = a;
char const* e = a + apiVersion.size();
// read major version
uint32_t major = 0;
while (p < e && *p >= '0' && *p <= '9') {
major = major * 10 + (*p - '0');
++p;
}
if (p != a && (*p == '.' || *p == '-' || p == e)) {
if (major >= 10000) {
// version specified as "10400"
if (*p == '\0') {
result = major;
if (result < MIN_COMPATIBILITY) {
result = MIN_COMPATIBILITY;
} else {
// set patch-level to 0
result /= 100L;
result *= 100L;
}
return result;
}
}
a = ++p;
// read minor version
uint32_t minor = 0;
while (p < e && *p >= '0' && *p <= '9') {
minor = minor * 10 + (*p - '0');
++p;
}
if (p != a && (*p == '.' || *p == '-' || p == e)) {
result = (int32_t)(minor * 100L + major * 10000L);
}
}
if (result < MIN_COMPATIBILITY) {
result = MIN_COMPATIBILITY;
}
return result;
}
void HttpRequest::parseHeader(size_t length) {
char* start = _header;
char* end = start + length;

View File

@@ -45,12 +45,9 @@ class HttpRequest : public GeneralRequest {
static std::string const MULTI_PART_CONTENT_TYPE;
public:
HttpRequest(ConnectionInfo const&, char const*, size_t, int32_t, bool);
HttpRequest(ConnectionInfo const&, char const*, size_t, bool);
~HttpRequest();
public:
int32_t compatibility() override;
public:
// HTTP protocol version is 1.0
bool isHttp10() const { return _version == ProtocolVersion::HTTP_1_0; }

View File

@@ -34,8 +34,8 @@ using namespace arangodb::basics;
std::string const HttpResponse::BATCH_ERROR_HEADER = "x-arango-errors";
bool HttpResponse::HIDE_PRODUCT_HEADER = false;
HttpResponse::HttpResponse(ResponseCode code, uint32_t compatibility)
: GeneralResponse(code, compatibility),
HttpResponse::HttpResponse(ResponseCode code)
: GeneralResponse(code),
_isHeadResponse(false),
_isChunked(false),
_body(TRI_UNKNOWN_MEM_ZONE, false),
@@ -118,7 +118,7 @@ size_t HttpResponse::bodySize() const {
}
void HttpResponse::writeHeader(StringBuffer* output) {
bool const capitalizeHeaders = (_apiCompatibility >= 20100);
bool const capitalizeHeaders = true;
output->appendText(TRI_CHAR_LENGTH_PAIR("HTTP/1.1 "));
output->appendText(responseString(_responseCode));

View File

@@ -36,7 +36,7 @@ class HttpResponse : public GeneralResponse {
static std::string const BATCH_ERROR_HEADER;
public:
HttpResponse(ResponseCode code, uint32_t compatibility);
explicit HttpResponse(ResponseCode code);
public:
bool isHeadResponse() const { return _isHeadResponse; }

View File

@@ -4,7 +4,7 @@ set -e
mkdir -p build-debian
cd build-debian
cmake -DASM_OPTIMIZATIONS=Off -DETCDIR=/etc -DCMAKE_INSTALL_PREFIX=/usr -DVARDIR=/var ..
cmake -DCMAKE_BUILD_TYPE=Release -DUSE_OPTIMIZE_FOR_ARCHITECTURE=Off -DETCDIR=/etc -DCMAKE_INSTALL_PREFIX=/usr -DVARDIR=/var ..
make -j12
cpack -G DEB --verbose
cd ..

View File

@@ -1,2 +1,6 @@
#!/bin/bash
curl -X POST http://localhost:4001/_api/agency/read -d '[["/"]]' | jq .
if [ "$*" == "" ] ; then
curl -s -X POST http://localhost:4001/_api/agency/read -d '[["/"]]' | jq .
else
curl -s -X POST http://localhost:4001/_api/agency/read -d '[["/"]]' | jq $*
fi

scripts/quickieTest.sh Executable file
View File

@@ -0,0 +1,5 @@
#!/bin/bash
scripts/unittest shell_server --test js/common/tests/shell/shell-quickie.js
scripts/unittest shell_server --test js/common/tests/shell/shell-quickie.js --cluster true
scripts/unittest shell_client --test js/common/tests/shell/shell-quickie.js
scripts/unittest shell_client --test js/common/tests/shell/shell-quickie.js --cluster true

View File

@@ -6,7 +6,6 @@ if [ -z "$XTERMOPTIONS" ] ; then
XTERMOPTIONS="--geometry=80x43"
fi
if [ ! -d arangod ] || [ ! -d arangosh ] || [ ! -d UnitTests ] ; then
echo Must be started in the main ArangoDB source directory.
exit 1