mirror of https://gitee.com/bigwinds/arangodb
Merge branch '1.1' of github.com:triAGENS/ArangoDB into 1.1
commit 6d53970f90

CHANGELOG (10 changed lines)
@@ -1,6 +1,16 @@
 v1.1.beta3 (XXXX-XX-XX)
 -----------------------

+* added collection type label to web interface
+
+* fixed issue #290: the web interface now disallows creating non-edges in edge collections
+  when creating collections via the web interface, the collection type must also be
+  specified (default is document collection)
+
+* fixed issue #289: tab-completion does not insert any spaces
+
+* fixed issue #282: fix escaping in web interface
+
 * made AQL function NOT_NULL take any number of arguments. Will now return its
   first argument that is not null, or null if all arguments are null. This is downwards
   compatible.

@@ -80,6 +80,7 @@ WIKI = \
 JSModules \
 Key-Value \
 NamingConventions \
+NewFeatures11 \
 RefManual \
 RestDocument \
 RestEdge \

@@ -23,6 +23,9 @@ The HTML and PDF versions of the manual can be found
 Please contact @EXTREF_S{http://www.arangodb.org/connect,us} if you
 have any questions.

+New Features in ArangoDB 1.1 {#NewFeatures11}
+=============================================
+
 Upgrading to ArangoDB 1.1 {#ArangoDBUpgrading}
 ==============================================

@@ -0,0 +1,339 @@

New Features in ArangoDB 1.1 {#NewFeatures11}
=============================================

## Batch requests

ArangoDB 1.1 provides a new REST API for batch requests at `/_api/batch`.

Clients can use the API to send multiple requests to ArangoDB at once. They can
package multiple requests into just one aggregated request.

ArangoDB will then unpack the aggregated request and process the contained requests
one by one. When done, it will send an aggregated response to the client, which the
client can then unpack to get the list of individual responses.

Using the batch request API may save network overhead because it reduces the
number of HTTP requests and responses that clients and ArangoDB need to exchange.
This may be especially important if the network is slow or if the individual
requests are small and the network overhead per request would be significant.

It should be noted that packing multiple individual requests into one aggregate
request on the client side introduces some overhead itself. The same is true
for unpacking the aggregate request and assembling the responses on the server side.
Using batch requests may still be beneficial in many cases, but it should be obvious
that they should only be used when they replace a considerable number of
individual requests.

For more information see @ref HttpBatch.
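
To illustrate, here is a minimal sketch of what an aggregated request could look like
on the wire, assuming the multipart conventions documented in @ref HttpBatch (the
boundary value and the two wrapped requests are made up for this example):

    > curl -X POST http://localhost:8529/_api/batch \
        --header "Content-Type: multipart/form-data; boundary=XXXsubpartXXX" \
        --data-binary @- << 'EOF'
    --XXXsubpartXXX
    Content-Type: application/x-arango-batchpart

    GET /_api/version HTTP/1.1

    --XXXsubpartXXX
    Content-Type: application/x-arango-batchpart

    GET /_api/collection HTTP/1.1

    --XXXsubpartXXX--
    EOF

The response arrives in the same multipart envelope, with one part per wrapped request.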

## More fine-grained control of sync behavior

ArangoDB stores all document data in memory-mapped files. When adding new documents,
updating existing documents or deleting documents, these modifications are appended at
the end of the currently used memory-mapped datafile.

It is configurable whether ArangoDB should respond directly and synchronise the
changes to disk asynchronously, or whether it should force the synchronisation before
responding. The parameter to control this is named `waitForSync` and can be set at a
per-collection level.

Often, synchronisation is not required at the collection level, but at the operation
level. ArangoDB 1.1 tries to improve on this by providing extra parameters for the REST
and JavaScript _document_ and _edge_ modification operations.

This parameter can be used to force synchronisation for operations that work on
collections that have `waitForSync` set to `false`.

The following REST API methods support the parameter `waitForSync` to force
synchronisation:

* `POST /_api/document`: adding a document
* `POST /_api/edge`: adding an edge
* `PATCH /_api/document`: partially update a document
* `PATCH /_api/edge`: partially update an edge
* `PUT /_api/document`: replace a document
* `PUT /_api/edge`: replace an edge
* `DELETE /_api/document`: delete a document
* `DELETE /_api/edge`: delete an edge

If the `waitForSync` parameter is omitted or set to `false`, the collection-level
synchronisation behavior will be applied. Setting the parameter to `true`
will force synchronisation.

The following JavaScript methods also support forcing synchronisation:

* save()
* update()
* replace()
* delete()

Force synchronisation of a save operation:

    > db.users.save({"name":"foo"}, true);

If the second parameter is omitted or set to `false`, the collection-level
synchronisation behavior will be applied. Setting the parameter to `true`
will force synchronisation.
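
The same applies to the REST API. A sketch of forcing synchronisation for a single
insert into a collection that has `waitForSync` set to `false` (collection name and
document body are examples):

    > curl -X POST --data '{"name":"foo"}' \
        "http://localhost:8529/_api/document?collection=users&waitForSync=true"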

## Synchronisation of shape data

ArangoDB 1.1 provides an option `--database.force-sync-shapes` that controls whether
shape data (information about document attribute names and attribute value types)
should be synchronised to disk directly after each write, or whether synchronisation is
allowed to happen asynchronously.
The latter option allows ArangoDB to return faster from operations that involve new
document shapes.

In ArangoDB 1.0, shape information was always synchronised to disk, and users did not
have any options. The default value of `--database.force-sync-shapes` in ArangoDB 1.1
is `true`, so it is fully compatible with ArangoDB 1.0.
However, in ArangoDB 1.1 the direct synchronisation can be turned off by setting the
value to `false`. Direct synchronisation of shape data will then be disabled for
collections that have a `waitForSync` value of `false`.
Shape data will always be synchronised directly for collections that have a `waitForSync`
value of `true`.

Still, ArangoDB 1.1 may need to perform less synchronisation when it writes shape data
(attribute names and attribute value types of collection documents).

Users may benefit if they save documents with many different structures (in terms of
document attribute names and attribute value types) in the same collection. If only
a small number of distinct document shapes is used, the effect will not be noticeable.
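
For example, to allow asynchronous shape synchronisation on a test server (a sketch;
the database path is an example):

    > bin/arangod --database.force-sync-shapes false /tmp/vocbase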

## Collection types

In ArangoDB 1.1, collections are now explicitly typed:

- regular documents go into _document_-only collections,
- and edges go into _edge_ collections.

In 1.0, collections were untyped, and edges and documents could be mixed in the same collection.
Whether or not a collection was to be treated as an _edge_ or _document_ collection was
decided at runtime by looking at the prefix used (e.g. `db.xxx` vs. `edges.xxx`).

The explicit collection types used in ArangoDB allow users to query the collection type at
runtime and make decisions based on the type:

    arangosh> db.users.type();

Extra JavaScript functions have been introduced to create collections:

    arangosh> db._createDocumentCollection("users");
    arangosh> db._createEdgeCollection("relationships");

The "traditional" functions are still available:

    arangosh> db._create("users");
    arangosh> edges._create("relationships");

The ArangoDB web interface also allows the explicit creation of _edge_
collections.
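
A sketch of querying the types of the two collections created above (the collection
names are examples, and the numeric codes shown are an assumption based on ArangoDB's
internal numbering, where document collections are type 2 and edge collections type 3):

    arangosh> db.users.type();            // 2 (document collection, assumed code)
    arangosh> db.relationships.type();    // 3 (edge collection, assumed code)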

## Support for partial updates

The REST API for documents now offers the HTTP PATCH method to partially update
documents. A partial update allows specifying only the attributes to change instead
of the full document. Internally, it will merge the supplied attributes into the
existing document.

Completely overwriting/replacing entire documents is still available via the HTTP PUT
method, as in ArangoDB 1.0.
In _arangosh_, the partial update method is named _update_, and the previously existing
_replace_ method still performs a replacement of the entire document as before.

This call will update just the `active` attribute of the document `user`. All other
attributes will remain unmodified. The document revision number will of course be updated,
as updating creates a new revision:

    arangosh> db.users.update(user, { "active" : false });

By contrast, the `replace` method will replace the entire existing document with the data
supplied. All other attributes will be removed. Replacing will also create a new revision:

    arangosh> db.users.replace(user, { "active" : false });

For more information, please check @ref JS_UpdateVocbaseCol and @ref JS_ReplaceVocbaseCol.
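
The REST equivalent of the `update()` call above might look like this (the document
handle `users/12345` is a made-up example):

    > curl -X PATCH --data '{"active":false}' \
        http://localhost:8529/_api/document/users/12345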

## AQL

The following functions have been added or extended in the ArangoDB Query Language
(AQL) in ArangoDB 1.1:

- `MERGE_RECURSIVE()`: new function that merges documents recursively. In particular, it will merge
  sub-attributes, a functionality not provided by the previously existing `MERGE()` function.
- `NOT_NULL()`: now works with any number of arguments and returns the first non-null argument.
  If all arguments are `null`, the function will return `null`, too.
- `FIRST_LIST()`: new function that returns the first argument that is a list, and `null`
  if none of the arguments are lists.
- `FIRST_DOCUMENT()`: new function that returns the first argument that is a document, and `null`
  if none of the arguments are documents.
- `TO_LIST()`: converts the argument into a list.
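
A few sketches of these functions in stand-alone AQL queries (the expected results in
the trailing comments follow from the semantics described above and are assumptions,
as is the use of a bare RETURN for illustration):

    RETURN NOT_NULL(null, "fallback", 42)                     /* "fallback" */
    RETURN MERGE_RECURSIVE({ "a" : { "b" : 1 } },
                           { "a" : { "c" : 2 } })             /* { "a" : { "b" : 1, "c" : 2 } } */
    RETURN FIRST_LIST("not a list", [ 1, 2 ])                 /* [ 1, 2 ] */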

## Endpoints

ArangoDB can now listen for incoming connections on one or many "endpoints" of different
types. In ArangoDB lingo, an endpoint is the combination of a protocol and some
configuration for it.

The currently supported protocol types are:

- `tcp`: for unencrypted connections over TCP/IP
- `ssl`: for secure connections using SSL over TCP/IP
- `unix`: for connections over Unix domain sockets

You should note that the data transferred inside the protocol is still HTTP, regardless
of the chosen protocol. The endpoint protocol can thus be understood as the envelope
that all HTTP communication is shipped inside.

To specify an endpoint, ArangoDB 1.1 introduces a new option `--server.endpoint`. The
values accepted by this option have the following specification syntax:

- `tcp://host:port` (HTTP over IPv4)
- `tcp://[host]:port` (HTTP over IPv6)
- `ssl://host:port` (HTTP over SSL-encrypted IPv4)
- `ssl://[host]:port` (HTTP over SSL-encrypted IPv6)
- `unix://path/to/socket` (HTTP over Unix domain socket)

### TCP endpoints

The configuration options for the `tcp` endpoint type are a hostname/IP address and an
optional port number. If the port is omitted, the default port number of 8529 is used.

To make the server listen to connections coming in for IP 192.168.173.13 on TCP/IP port 8529:

    > bin/arangod --server.endpoint tcp://192.168.173.13:8529

To make the server listen to connections coming in for IP 127.0.0.1 on TCP/IP port 999:

    > bin/arangod --server.endpoint tcp://127.0.0.1:999

### SSL endpoints

SSL endpoints can be used for secure, encrypted connections to ArangoDB. The connection is
secured using SSL. SSL is computationally intensive, so using it will result in an
(unavoidable) performance degradation when compared to plain-text requests.

The configuration options for the `ssl` endpoint type are the same as for `tcp` endpoints.

To make the server listen to SSL connections coming in for IP 192.168.173.13 on TCP/IP port 8529:

    > bin/arangod --server.endpoint ssl://192.168.173.13:8529

As multiple endpoints can be configured, ArangoDB can serve SSL and non-SSL requests in
parallel, provided they use different ports:

    > bin/arangod --server.endpoint tcp://192.168.173.13:8529 --server.endpoint ssl://192.168.173.13:8530

### Unix domain socket endpoints

The `unix` endpoint type can only be used if clients are on the same host as the _arangod_ server.
Connections will then be established using a Unix domain socket, which is backed by a socket descriptor
file. This type of connection should be slightly more efficient than TCP/IP.

The configuration option for a `unix` endpoint type is the socket descriptor filename.

To make the server use a Unix domain socket with filename `/var/run/arango.sock`:

    > bin/arangod --server.endpoint unix:///var/run/arango.sock
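
The client tools accept the same endpoint specification syntax via their
`--server.endpoint` option (see the upgrading notes below), so connecting _arangosh_
over this socket might look like:

    > bin/arangosh --server.endpoint unix:///var/run/arango.sock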

## Blueprints API

Blueprints is a property graph model interface with provided implementations.
Databases that implement the Blueprints interfaces automatically support
Blueprints-enabled applications (@EXTREF{http://tinkerpop.com/,http://tinkerpop.com}).

For more information please refer to @ref HttpBluePrints.

## Server statistics

ArangoDB 1.1 allows querying the server status via REST API methods.

The following methods are available:

- `GET /_admin/connection-statistics`: provides connection statistics
- `GET /_admin/request-statistics`: provides request statistics

Both methods return the current figures and historical values. The historical
figures are aggregated. They can be used to monitor the current server status as
well as to get an overview of how the figures have developed over time and to look
for trends.

The ArangoDB web interface uses these APIs to provide charts with the
server connection statistics figures. It has a new tab "Statistics" for this purpose.

For more information on the APIs, please refer to @ref HttpSystemConnectionStatistics
and @ref HttpSystemRequestStatistics.
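
For example, fetching the connection statistics from the command line (localhost and
the default port are assumed):

    > curl http://localhost:8529/_admin/connection-statistics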

## Improved HTTP request handling

### Error codes

ArangoDB 1.1 handles malformed HTTP requests better than ArangoDB 1.0 did.
When it encounters an invalid HTTP request, it might answer with HTTP status
codes that ArangoDB 1.0 did not use:

- `HTTP 411 Length Required` will be returned for requests that have a negative
  value in their `Content-Length` HTTP header.
- `HTTP 413 Request Entity Too Large` will be returned for requests that are too big.
  The maximum size is 512 MB at the moment.
- `HTTP 431 Request Header Field Too Large` will be returned for requests with too
  long HTTP headers. The maximum size per header field is 1 MB at the moment.

For requests that are not completely shipped from the client to the server, the
server will allow the client 90 seconds before closing the dangling connection.

If the `Content-Length` HTTP header in an incoming request is set and contains a
value that is less than the length of the HTTP body sent, the server will return
an `HTTP 400 Bad Request`.

### Keep-Alive

In version 1.1, ArangoDB will behave as follows when it comes to HTTP keep-alive:

- if a client sends a `Connection: close` HTTP header, the server will close the connection as
  requested
- if a client sends a `Connection: keep-alive` HTTP header, the server will not close the
  connection but keep it alive as requested
- if a client does not send any `Connection` HTTP header, the server will assume _keep-alive_
  if the request was an HTTP/1.1 request, and _close_ if the request was an HTTP/1.0 request
- dangling keep-alive connections will be closed automatically by the server after a configurable
  number of seconds. To adjust the value, use the new server option `--server.keep-alive-timeout`.
- keep-alive can be turned off in ArangoDB by setting `--server.keep-alive-timeout` to a value of `0`.
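
For example, to close idle keep-alive connections after 30 seconds (the endpoint and
database path are examples):

    > bin/arangod --server.endpoint tcp://127.0.0.1:8529 --server.keep-alive-timeout 30 /tmp/vocbase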

### Configurable backlog

ArangoDB 1.1 adds an option `--server.backlog-size` to configure the system backlog size.
The backlog size controls the maximum number of queued connections and is used by the listen()
system call.

The default value in ArangoDB is 10; the maximum value is platform-dependent.
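
For example, to raise the backlog (the value 128 is arbitrary; the operating system
may silently cap it):

    > bin/arangod --server.backlog-size 128 /tmp/vocbase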

## Using V8 options

To use arbitrary options the V8 engine provides, ArangoDB 1.1 introduces a new startup
option `--javascript.v8-options`.
All options that shall be passed to V8 without being interpreted by ArangoDB can be put
inside this option. ArangoDB itself will ignore these options and will let V8 handle them.
It is up to V8 to validate these options and to complain about invalid ones. In case of
invalid options, V8 may refuse to start, which will also abort the startup of ArangoDB.

To get a list of all options that the V8 engine in the currently used version of ArangoDB
supports, you can use the value `--help`, which will just be passed through to V8:

    > bin/arangod --javascript.v8-options "--help" /tmp/voctest

## Smaller hash indexes

Some internal structures have been adjusted in ArangoDB 1.1 so that hash index entries
consume considerably less memory.

Installations may benefit if they use unique or non-unique hash indexes on collections.

## arangoimp

_arangoimp_ now allows specifying the end-of-line (EOL) character of the input file.
This allows better support for files created on Windows systems with `\r\n` EOLs.

_arangoimp_ also supports importing input files in TSV format. TSV is a simple separated
format like CSV, but with the tab character as the separator, no quoting for values,
and thus no support for line breaks inside the values.
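
Importing a TSV file might then look like this (the file and collection names are
examples; consult the _arangoimp_ help for the exact name of the EOL option):

    > bin/arangoimp --file users.tsv --type tsv --collection users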

@@ -10,6 +10,12 @@ The following list contains changes in ArangoDB 1.1 that are not 100% downwards-
 Existing users of ArangoDB 1.0 should read the list carefully and make sure they have undertaken all
 necessary steps and precautions before upgrading from ArangoDB 1.0 to ArangoDB 1.1.

+## New dependencies
+
+As ArangoDB 1.1 supports SSL connections, ArangoDB can only be built on servers with the OpenSSL
+library installed. OpenSSL is not bundled with ArangoDB and must be installed separately.
+
+
 ## Database directory version check and upgrade

 Starting with ArangoDB 1.1, _arangod_ will perform a database version check at startup.

@@ -109,16 +115,24 @@ The following _arangod_ startup options have been removed in ArangoDB 1.1:
 - `--server.require-keep-alive`
 - `--server.secure-require-keep-alive`

-In version 1.1, The server will now behave as follows automatically which should be more
+In version 1.1, the server will behave as follows automatically which should be more
 conforming to the HTTP standard:
 - if a client sends a `Connection: close` HTTP header, the server will close the connection as
   requested
 - if a client sends a `Connection: keep-alive` HTTP header, the server will not close the
   connection but keep it alive as requested
 - if a client does not send any `Connection` HTTP header, the server will assume _keep-alive_
-  if the request was an HTTP/1.1 request, and "close" if the request was an HTTP/1.0 request
+  if the request was an HTTP/1.1 request, and _close_ if the request was an HTTP/1.0 request
 - dangling keep-alive connections will be closed automatically by the server after a configurable
-  amount of seconds. To adjust the value, use the new server option `--server.keep-alive-timeout`
+  amount of seconds. To adjust the value, use the new server option `--server.keep-alive-timeout`.
+- Keep-alive can be turned off in ArangoDB by setting `--server.keep-alive-timeout` to a value of `0`.
+
+As ArangoDB 1.1 will use keep-alive by default for incoming HTTP/1.1 requests without a
+`Connection` header, using ArangoDB 1.1 from a browser will likely result in the same connection
+being re-used. This may be unintuitive because requests from a browser to ArangoDB will
+effectively be serialised, not parallelised. To conduct parallel requests from a browser, you
+should either set `--server.keep-alive-timeout` to a value of `0`, or make your browser send
+`Connection: close` HTTP headers with its requests.


 ## Start / stop scripts

@@ -163,9 +177,13 @@ If one of the documents contains either a `_from` or a `_to` attribute, the coll
 _edge_ collection. Otherwise, the collection is marked as a _document_ collection.

 This distinction is important because edges can only be created in _edge_ collections starting
-with 1.1. Client code may need to be adjusted to work with ArangoDB 1.1 if it tries to insert
+with 1.1. User code may need to be adjusted to work with ArangoDB 1.1 if it tries to insert
 edges into _document_-only collections.

+User code must also be adjusted if it uses the `ArangoEdges` or `ArangoEdgesCollection` objects
+that were present in ArangoDB 1.0 on the server. This only affects user code that was intended
+to be run on the server, directly in ArangoDB. The `ArangoEdges` or `ArangoEdgesCollection`
+objects were not exposed to _arangosh_ or any other clients.
+
 ## arangoimp / arangosh

@@ -187,3 +205,9 @@ to the _arangod_ server. If no password is given on the command line, _arangoimp
 will interactively prompt for a password.
 If no username is specified on the command line, the default user _root_ will be used but there
 will still be a password prompt.
+
+
+## Removed functionality
+
+In 1.0, there were unfinished REST APIs available at the `/_admin/config` URL suffix.
+These APIs were stubs only and have been removed in ArangoDB 1.1.

UPGRADING (131 changed lines)

@@ -1,130 +1 @@
-Changes that should be considered when installing or upgrading ArangoDB
------------------------------------------------------------------------
-
-
-* Starting the server
-
-  Starting with ArangoDB 1.1, arangod will perform a database version check at startup.
-  It will look for a file named "VERSION" in its database directory. If the file is not
-  present (it will not be present in an ArangoDB 1.0 database), arangod in version 1.1
-  will refuse to start and ask the user to run the script "arango-upgrade" first.
-  If the VERSION file is present but is from a non-matching version of ArangoDB, arangod
-  will also refuse to start and ask the user to run the upgrade script first.
-  This procedure shall ensure that users have full control over when they perform any
-  updates/upgrades of their data, and do not risk running an incompatible server/database
-  state tandem.
-
-  ArangoDB users are asked to run arango-upgrade when upgrading from one version of
-  ArangoDB to a higher version (e.g. from 1.0 to 1.1), but also after pulling the latest
-  ArangoDB source code while staying in the same minor version (e.g. when updating from
-  1.1-beta1 to 1.1-beta2).
-
-  When installing ArangoDB from scratch, users should also run arango-upgrade once to
-  initialise their database directory with some system collections that ArangoDB requires.
-  When not run, arangod will refuse to start as mentioned before.
-
-
-  The startup options "--port", "--server.port", "--server.http-port", and
-  "--server.admin-port" have all been removed for arangod in version 1.1.
-  All these options have been replaced by the new "--server.endpoint" option.
-  This option allows to specify protocol, hostname and port the server should use for
-  incoming connections.
-  The "--server.endpoint" option must be specified on server start, otherwise arangod
-  will refuse to start.
-
-  The server can be bound to one or multiple endpoints at once. The following endpoint
-  specification sytnax is currently supported:
-  - tcp://host:port or http@tcp://host:port (HTTP over IPv4)
-  - tcp://[host]:port or http@tcp://[host]:port (HTTP over IPv6)
-  - ssl://host:port or http@tcp://host:port (HTTP over SSL-encrypted IPv4)
-  - ssl://[host]:port or http@tcp://[host]:port (HTTP over SSL-encrypted IPv6)
-  - unix://path/to/socket or http@unix:///path/to/socket (HTTP over UNIX socket)
-
-  An example value for the option is --server.endpoint tcp://127.0.0.1:8529. This will
-  make the server listen to request coming in on IP address 127.0.0.1 on port 8529, and
-  that use HTTP over TCP/IPv4.
-
-
-  The arangod startup options "--server.require-keep-alive" and
-  "--server.secure-require-keep-alive" have been removed in 1.1. The server will now
-  behave as follows which should be more conforming to the HTTP standard:
-  * if a client sends a "Connection: close" header, the server will close the
-    connection as request
-  * if a client sends a "Connection: keep-alive" header, the server will not
-    close the connection but keep it alive as requested
-  * if a client does not send any "Connection" header, the server will assume
-    "keep-alive" if the request was an HTTP/1.1 request, and "close" if the
-    request was an HTTP/1.0 request
-  * dangling keep-alive connections will be closed automatically by the server after
-    a configurable amount of seconds. To adjust the value, use the new server option
-    "--server.keep-alive-timeout"
-
-
-* Start / stop scripts
-
-  The user has changed from "arango" to "arangodb", the start script name has changed from
-  "arangod" to "arangodb", the database directory has changed from "/var/arangodb" to
-  "/var/lib/arangodb" to be compliant with various Linux policies.
-
-
-* Collection types
-
-  In 1.1, we have introduced types for collections: regular documents go into document
-  collections, and edges go into edge collections. The prefixing (db.xxx vs. edges.xxx)
-  works slightly different in 1.1: edges.xxx can still be used to access collections,
-  however, it will not determine the type of existing collections anymore. In 1.0, you
-  could write edges.xxx.something and xxx was automatically treated as an edge collection.
-  As collections know and save their type in ArangoDB 1.1, this might work slightly
-  differently.
-
-  In 1.1, edge collections can still be created via edges._create() as in 1.0, but
-  a new method was also introduced that uses the db object: db._createEdgeCollection().
-  To create document collections, the following methods are available: db._create()
-  as in 1.0, and additionally there is now db._createDocumentCollection().
-
-  Collections in 1.1 are now either document or edge collections, but the two concepts
-  can not be mixed in the same collection. arango-upgrade will determine the types of
-  existing collections from 1.0 once on upgrade, based on the inspection of the first 50
-  documents in the collection. If one of them contains either a _from or a _to attribute,
-  the collection is made an edge collection, otherwise, the colleciton is marked as a
-  document collection.
-  This distinction is important because edges can only be created in edge collections
-  starting with 1.1. Client code may need to be adjusted for 1.1 to insert edges only
-  into "pure" edge collections in 1.1.
-
-
-* Authorization
-
-  Starting from 1.1, arangod may be started with authentication turned on.
-  When turned on, all requests incoming to arangod via the HTTP interface must carry an
-  HTTP authorization header with a valid username and password in order to be processed.
-  Clients sending requests without HTTP autorization headers or with invalid usernames/
-  passwords will be rejected by arangod with an HTTP 401 error.
-
-  arango-upgrade will create a default user "root" with an empty password when run
-  initially.
-
-  To turn authorization off, the server can be started with the command line option
-  --server.disable-authentication true. Of course this configuration can also be stored
-  in a configuration file.
-
-
-* arangoimp / arangosh
-
-  The parameters "connect-timeout" and "request-timeout" for arangosh and arangoimp have
-  been renamed to "--server.connect-timeout" and "--server.request-timeout".
-
-  The parameter "--server" has been removed for both arangoimp and arangosh.
-  To specify a server to connect to, the client tools now provide an option
-  "--server.endpoint". This option can be used to specify the protocol, hostname and port
-  for the connection. The default endpoint that is used when none is specified is
-  tcp://127.0.0.1:8529. For more information on the endpoint specification syntax, see
-  above.
-
-  The options "--server.username" and "--server.password" have been added for arangoimp
-  and arangosh in order to use authorization from these client tools, too.
-  These options can be used to specify the username and password when connectiong via
-  client tools to the arangod server. If no password is given on the command line,
-  arangoimp and arangosh will interactively prompt for a password.
-  If no username is specified on the command line, the default user "root" will be
-  used but there will still be a password prompt.
+Please refer to Documentation/Manual/Upgrading.md

@@ -86,40 +86,40 @@ BOOST_AUTO_TEST_CASE (tst_mersenne_int31) {
 ////////////////////////////////////////////////////////////////////////////////

 BOOST_AUTO_TEST_CASE (tst_mersenne_seed) {
-  TRI_SeedMersenneTwister((uint32_t) 0);
-  BOOST_CHECK_EQUAL((uint32_t) 2357136044, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 2546248239, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 3071714933, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 0UL);
+  BOOST_CHECK_EQUAL((uint32_t) 2357136044UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 2546248239UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 3071714933UL, TRI_Int32MersenneTwister());

-  TRI_SeedMersenneTwister((uint32_t) 1);
-  BOOST_CHECK_EQUAL((uint32_t) 1791095845, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 4282876139, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 3093770124, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 1UL);
+  BOOST_CHECK_EQUAL((uint32_t) 1791095845UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 4282876139UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 3093770124UL, TRI_Int32MersenneTwister());

-  TRI_SeedMersenneTwister((uint32_t) 2);
-  BOOST_CHECK_EQUAL((uint32_t) 1872583848, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 794921487, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 111352301, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 2UL);
+  BOOST_CHECK_EQUAL((uint32_t) 1872583848UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 794921487UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 111352301UL, TRI_Int32MersenneTwister());

-  TRI_SeedMersenneTwister((uint32_t) 23);
-  BOOST_CHECK_EQUAL((uint32_t) 2221777491, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 2873750246, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 4067173416, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 23UL);
+  BOOST_CHECK_EQUAL((uint32_t) 2221777491UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 2873750246UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 4067173416UL, TRI_Int32MersenneTwister());

-  TRI_SeedMersenneTwister((uint32_t) 42);
-  BOOST_CHECK_EQUAL((uint32_t) 1608637542, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 3421126067, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 4083286876, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 42UL);
+  BOOST_CHECK_EQUAL((uint32_t) 1608637542UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 3421126067UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 4083286876UL, TRI_Int32MersenneTwister());

-  TRI_SeedMersenneTwister((uint32_t) 458735);
-  BOOST_CHECK_EQUAL((uint32_t) 1537542272, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 4131475792, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 2280116031, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 458735UL);
+  BOOST_CHECK_EQUAL((uint32_t) 1537542272UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 4131475792UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 2280116031UL, TRI_Int32MersenneTwister());

-  TRI_SeedMersenneTwister((uint32_t) 395568682893);
-  BOOST_CHECK_EQUAL((uint32_t) 2297195664, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 2381406737, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 4184846092, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 395568682893UL);
+  BOOST_CHECK_EQUAL((uint32_t) 2297195664UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 2381406737UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 4184846092UL, TRI_Int32MersenneTwister());
 }

 ////////////////////////////////////////////////////////////////////////////////

@@ -127,26 +127,26 @@ BOOST_AUTO_TEST_CASE (tst_mersenne_seed) {
 ////////////////////////////////////////////////////////////////////////////////

 BOOST_AUTO_TEST_CASE (tst_mersenne_reseed) {
-  TRI_SeedMersenneTwister((uint32_t) 23);
-  BOOST_CHECK_EQUAL((uint32_t) 2221777491, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 2873750246, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 4067173416, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 23UL);
+  BOOST_CHECK_EQUAL((uint32_t) 2221777491UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 2873750246UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 4067173416UL, TRI_Int32MersenneTwister());

   // re-seed with same value and compare
-  TRI_SeedMersenneTwister((uint32_t) 23);
-  BOOST_CHECK_EQUAL((uint32_t) 2221777491, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 2873750246, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 4067173416, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 23UL);
+  BOOST_CHECK_EQUAL((uint32_t) 2221777491UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 2873750246UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 4067173416UL, TRI_Int32MersenneTwister());

   // seed with different value
-  TRI_SeedMersenneTwister((uint32_t) 458735);
-  BOOST_CHECK_EQUAL((uint32_t) 1537542272, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 458735UL);
+  BOOST_CHECK_EQUAL((uint32_t) 1537542272UL, TRI_Int32MersenneTwister());

   // re-seed with original value and compare
-  TRI_SeedMersenneTwister((uint32_t) 23);
-  BOOST_CHECK_EQUAL((uint32_t) 2221777491, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 2873750246, TRI_Int32MersenneTwister());
-  BOOST_CHECK_EQUAL((uint32_t) 4067173416, TRI_Int32MersenneTwister());
+  TRI_SeedMersenneTwister((uint32_t) 23UL);
+  BOOST_CHECK_EQUAL((uint32_t) 2221777491UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 2873750246UL, TRI_Int32MersenneTwister());
+  BOOST_CHECK_EQUAL((uint32_t) 4067173416UL, TRI_Int32MersenneTwister());
 }

 ////////////////////////////////////////////////////////////////////////////////

@@ -124,7 +124,7 @@ static bool AllocateTable (TRI_hasharray_t* array, size_t numElements) {
   }

   // position array directly on a cache line boundary
-  offset = ((uint64_t) data) % CACHE_LINE_SIZE;
+  offset = ((intptr_t) data) % CACHE_LINE_SIZE;

   if (offset == 0) {
     // we're already on a cache line boundary

@@ -134,7 +134,7 @@ static bool AllocateTable (TRI_hasharray_t* array, size_t numElements) {
     // move to start of a cache line
     table = data + (CACHE_LINE_SIZE - offset);
   }
-  assert(((uint64_t) table) % CACHE_LINE_SIZE == 0);
+  assert(((intptr_t) table) % CACHE_LINE_SIZE == 0);

   array->_data = data;
   array->_table = table;

@@ -353,7 +353,7 @@ void TRI_InitSkipList (TRI_skiplist_t* skiplist, size_t elementSize,
   // ..........................................................................
   skiplist->_base._maxHeight = maximumHeight;
   if (maximumHeight > SKIPLIST_ABSOLUTE_MAX_HEIGHT) {
-    LOG_ERROR("Invalid maximum height for skiplist", TRI_ERROR_INTERNAL);
+    LOG_ERROR("Invalid maximum height for skiplist");
     assert(false);
   }

@@ -1413,7 +1413,7 @@ void TRI_InitSkipListMulti (TRI_skiplist_multi_t* skiplist,
   // ..........................................................................
   skiplist->_base._maxHeight = maximumHeight;
   if (maximumHeight > SKIPLIST_ABSOLUTE_MAX_HEIGHT) {
-    LOG_ERROR("Invalid maximum height for skiplist", TRI_ERROR_INTERNAL);
+    LOG_ERROR("Invalid maximum height for skiplist");
     assert(false);
   }

@@ -352,7 +352,7 @@ static bool CheckCollection (TRI_collection_t* collection) {
       collection->_lastError = datafile->_lastError;
       stop = true;

-      LOG_ERROR("cannot rename sealed log-file to %s, this should not happen: %s", filename, TRI_errno());
+      LOG_ERROR("cannot rename sealed log-file to %s, this should not happen: %s", filename, TRI_last_error());

       break;
     }

@@ -248,7 +248,7 @@ static bool Compactifier (TRI_df_marker_t const* marker, void* data, TRI_datafil
     TRI_READ_UNLOCK_DOCUMENTS_INDEXES_PRIMARY_COLLECTION(primary);

     if (deleted) {
-      LOG_TRACE("found a stale document: %lu", d->_did);
+      LOG_TRACE("found a stale document: %llu", d->_did);
       return true;
     }

@@ -256,7 +256,7 @@ static bool Compactifier (TRI_df_marker_t const* marker, void* data, TRI_datafil
     res = CopyDocument(sim, marker, &result, &fid);

     if (res != TRI_ERROR_NO_ERROR) {
-      LOG_FATAL("cannot write compactor file: ", TRI_last_error());
+      LOG_FATAL("cannot write compactor file: %s", TRI_last_error());
       return false;
     }

@@ -277,7 +277,7 @@ static bool Compactifier (TRI_df_marker_t const* marker, void* data, TRI_datafil
       dfi->_numberDead += 1;
       dfi->_sizeDead += marker->_size - markerSize;

-      LOG_DEBUG("found a stale document after copying: %lu", d->_did);
+      LOG_DEBUG("found a stale document after copying: %llu", d->_did);
       TRI_WRITE_UNLOCK_DATAFILES_DOC_COLLECTION(primary);

       return true;

@@ -302,7 +302,7 @@ static bool Compactifier (TRI_df_marker_t const* marker, void* data, TRI_datafil
     res = CopyDocument(sim, marker, &result, &fid);

     if (res != TRI_ERROR_NO_ERROR) {
-      LOG_FATAL("cannot write compactor file: ", TRI_last_error());
+      LOG_FATAL("cannot write compactor file: %s", TRI_last_error());
       return false;
     }

@@ -169,7 +169,7 @@ static int TruncateDatafile (TRI_datafile_t* datafile, TRI_voc_size_t vocSize) {
   res = TRI_UNMMFile(datafile->_data, datafile->_maximalSize, &(datafile->_fd), &(datafile->_mmHandle));

   if (res < 0) {
-    LOG_ERROR("munmap failed with: %s", res);
+    LOG_ERROR("munmap failed with: %d", res);
     return res;
   }

@@ -514,7 +514,7 @@ static TRI_datafile_t* OpenDatafile (char const* filename, bool ignoreErrors) {
   if (res != TRI_ERROR_NO_ERROR) {
     TRI_set_errno(res);
     close(fd);
-    LOG_ERROR("cannot memory map file '%s': '%s'", filename, res);
+    LOG_ERROR("cannot memory map file '%s': '%d'", filename, res);
     return NULL;
   }

@@ -626,7 +626,7 @@ TRI_datafile_t* TRI_CreateDatafile (char const* filename, TRI_voc_size_t maximal
     // remove empty file
     TRI_UnlinkFile(filename);

-    LOG_ERROR("cannot memory map file '%s': '%s'", filename, res);
+    LOG_ERROR("cannot memory map file '%s': '%d'", filename, res);
     return NULL;
   }

@@ -1009,7 +1009,7 @@ bool TRI_CloseDatafile (TRI_datafile_t* datafile) {
   res = TRI_UNMMFile(datafile->_data, datafile->_maximalSize, &(datafile->_fd), &(datafile->_mmHandle));

   if (res != TRI_ERROR_NO_ERROR) {
-    LOG_ERROR("munmap failed with: %s", res);
+    LOG_ERROR("munmap failed with: %d", res);
     datafile->_state = TRI_DF_STATE_WRITE_ERROR;
     datafile->_lastError = res;
     return false;

@@ -2439,12 +2439,12 @@ static int FillIndex (TRI_document_collection_t* collection, TRI_index_t* idx) {
         ++inserted;

         if (inserted % 10000 == 0) {
-          LOG_DEBUG("indexed %ld documents of collection %lu", inserted, (unsigned long) primary->base._cid);
+          LOG_DEBUG("indexed %lu documents of collection %lu", (unsigned long) inserted, (unsigned long) primary->base._cid);
         }
       }

       if (scanned % 10000 == 0) {
-        LOG_TRACE("scanned %ld of %ld datafile entries of collection %lu", scanned, n, (unsigned long) primary->base._cid);
+        LOG_TRACE("scanned %ld of %ld datafile entries of collection %lu", (unsigned long) scanned, (unsigned long) n, (unsigned long) primary->base._cid);
       }
     }
   }

@@ -3312,14 +3312,14 @@ static TRI_index_t* CreateGeoIndexDocumentCollection (TRI_document_collection_t*
   if (location != NULL) {
     idx = TRI_CreateGeo1Index(&sim->base, location, loc, geoJson, constraint, ignoreNull);

-    LOG_TRACE("created geo-index for location '%s': %d",
+    LOG_TRACE("created geo-index for location '%s': %ld",
               location,
               (unsigned long) loc);
   }
   else if (longitude != NULL && latitude != NULL) {
     idx = TRI_CreateGeo2Index(&sim->base, latitude, lat, longitude, lon, constraint, ignoreNull);

-    LOG_TRACE("created geo-index for location '%s': %d, %d",
+    LOG_TRACE("created geo-index for location '%s': %ld, %ld",
               location,
               (unsigned long) lat,
               (unsigned long) lon);

Binary file not shown.

@@ -230,6 +230,7 @@ html.busy, html.busy * {
 .hoverClass:hover {
   background-color: #696969 !important;
   border-bottom: 1px solid #696969;
+  margin-top: -1px;
   color: white !important;
 }

@@ -549,14 +550,14 @@ form {
 }

 #formatJSONyesno {
-  margin-top: -40px;
+  margin-top: -50px;
   float:right;
   margin-right: 120px;
   color: #797979;
 }

 #aqlinfo {
-  line-height: 150%;
+  line-height: 250%;
 }

 #submitQuery {

@@ -565,19 +566,26 @@ form {
 }
 #refreshShell{
   padding-bottom: 3px;
-  margin-top: -10px;
+  margin-top: -6px;
   width: 9.5% !important;
 }

 #queryOutput a, .queryError, .querySuccess {
   font-size: 0.9em !important;
   font-family: "courier";
+  padding-left: 10px;
+  padding-top: 10px !important;
 }

+#queryOutput pre {
+  padding-left: 10px;
+  padding-top: 10px;
+}
+
 #queryOutput {
   margin-bottom: 5px;
   height:35%;
+  padding-top: 10px;
   overflow-y: auto;
   border: 1px solid black;
   background: white;

@@ -587,10 +595,13 @@ form {
 }

 #queryContent {
+  padding-top: 10px;
   height: 45%;
   font-family: "courier";
   width: 100%;
   resize: vertical;
+  padding-left: 10px;
+  margin-top: 2px;
 }

 #avocshWindow {

@@ -626,6 +637,7 @@ form {
   height: 30px;
   background-color: white;
   margin-right: 0.5%;
+  padding-left: 10px;
 }

 .avocshSuccess {

@@ -707,7 +719,7 @@ form {

 #menue-right {
   padding-top:11px;
-  padding-right:14px;
+  padding-right:10px;
   height:40px;
   width:auto;
   float:right;

@@ -826,6 +838,12 @@ form {
 #logTableID_info {
 }

+#logTableID, #warnLogTableID, #critLogTableID, #infoLogTableID, #debugLogTableID {
+  border-left: 1px solid #AAAAAA;
+  border-right: 1px solid #AAAAAA;
+}
+
+
 .ui-dialog {
   height:100px;
   font-size: 0.8em;

@@ -834,9 +852,9 @@ form {
 .fg-toolbar {
   font-size: 0.7em;
   border: 1px solid #D3D3D3 !important;
-  // -webkit-box-shadow: inset 0 0 1px 1px #f6f6f6;
-  // -moz-box-shadow: inset 0 0 1px 1px #f6f6f6;
-  // box-shadow: inset 0 0 1px 1px #f6f6f6;
+  -webkit-box-shadow: inset 0 0 1px 1px #f6f6f6;
+  -moz-box-shadow: inset 0 0 1px 1px #f6f6f6;
+  box-shadow: inset 0 0 1px 1px #f6f6f6;
 }

 /*

@@ -848,29 +866,16 @@ form {
   background: #e3e3e3;
   padding: 9px 0 8px;
   border: 1px solid #bbb;
-  // -webkit-box-shadow: inset 0 0 1px 1px #f6f6f6;
-  // -moz-box-shadow: inset 0 0 1px 1px #f6f6f6;
-  // box-shadow: inset 0 0 1px 1px #f6f6f6;
   color: #333;
   font: 0.7em Verdana,Arial,sans-serif;
   line-height: 1;
   text-align: center;
-  text-shadow: 0 1px 0 #fff;
   width: 90px;
 }
 #menue button.minimal:hover {
-  background: #d9d9d9;
-  // -webkit-box-shadow: inset 0 0 1px 1px #eaeaea;
-  // -moz-box-shadow: inset 0 0 1px 1px #eaeaea;
-  // box-shadow: inset 0 0 1px 1px #eaeaea;
-  color: #222;
+  /* background: #d9d9d9; */
+  /* color: #222; */
   cursor: pointer; }
-#menue button.minimal:active {
-  background: #d0d0d0;
-  // -webkit-box-shadow: inset 0 0 1px 1px #e3e3e3;
-  // -moz-box-shadow: inset 0 0 1px 1px #e3e3e3;
-  // box-shadow: inset 0 0 1px 1px #e3e3e3;
-  color: #000; }
 /*
 ##############################################################################
 ### CollectionView subView Buttons

@@ -909,7 +914,7 @@ form {
 }
 
 #queryForm {
-  margin-top: -5px;
+  margin-top: -3px;
 }
 
 #queryView {
@@ -962,3 +967,42 @@ form {
 #iInfo {
   font-size: 0.8em;
 }
+
+#shellInfo {
+  line-height: 200%;
+}
+
+#formatJSONyesno {
+  padding-top: 5px !important
+}
+
+@media screen
+and (-webkit-min-device-pixel-ratio:0)
+{
+  #submitAvoc {
+    padding-top: 4px !important;
+    height: 30px !important;
+  }
+
+  #refreshShell {
+    margin-top: -4px !important;
+    margin-right: 2px !important;
+  }
+
+  #refrehShellTextRefresh {
+    line-height:230% !important;
+  }
+
+  #formatshellJSONyesno {
+    padding-top: 5px;
+  }
+}
+
+.leftCell {
+  border-left: 1px solid #D3D3D3 !important;
+}
+
+.rightCell {
+  border-right: 1px solid #D3D3D3 !important;
+}
+
@@ -10,30 +10,30 @@
 @import "css/jquery-ui-1.8.19.custom.css";
 @import "css/jquery.dataTables_themeroller.css";
 </style>
-<script type="text/javascript" language="javascript" src="js/jquery-1.7.2.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.snippet.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.qtip.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.flot.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.flot.stack.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.flot.resize.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.flot.crosshair.js"></script>
-<script type="text/javascript" language="javascript" src="js/json2.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.sha256.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.layout-latest.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.address-1.4.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.dataTables.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery-ui-1.8.18.custom.min.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.autogrow.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.jeditable.js"></script>
-<script type="text/javascript" language="javascript" src="js/jquery.jeditable.autogrow.js"></script>
-<script type="text/javascript" language="javascript" src="js/master.js"></script>
-<script type="text/javascript" language="javascript" src="js/avocsh1.js"></script>
-<script type="text/javascript" language="javascript" src="js/print.js"></script>
-<script type="text/javascript" language="javascript" src="js/errors.js"></script>
-<script type="text/javascript" language="javascript" src="js/modules/simple-query-basics.js"></script>
-<script type="text/javascript" language="javascript" src="js/modules/simple-query.js"></script>
-<script type="text/javascript" language="javascript" src="js/modules/statement-basics.js"></script>
-<script type="text/javascript" language="javascript" src="js/client.js"></script>
+<script type="text/javascript" src="js/jquery-1.7.2.js"></script>
+<script type="text/javascript" src="js/jquery.snippet.js"></script>
+<script type="text/javascript" src="js/jquery.qtip.js"></script>
+<script type="text/javascript" src="js/jquery.flot.js"></script>
+<script type="text/javascript" src="js/jquery.flot.stack.js"></script>
+<script type="text/javascript" src="js/jquery.flot.resize.js"></script>
+<script type="text/javascript" src="js/jquery.flot.crosshair.js"></script>
+<script type="text/javascript" src="js/json2.js"></script>
+<script type="text/javascript" src="js/jquery.sha256.js"></script>
+<script type="text/javascript" src="js/jquery.layout-latest.js"></script>
+<script type="text/javascript" src="js/jquery.address-1.4.js"></script>
+<script type="text/javascript" src="js/jquery.dataTables.js"></script>
+<script type="text/javascript" src="js/jquery-ui-1.8.18.custom.min.js"></script>
+<script type="text/javascript" src="js/jquery.autogrow.js"></script>
+<script type="text/javascript" src="js/jquery.jeditable.js"></script>
+<script type="text/javascript" src="js/jquery.jeditable.autogrow.js"></script>
+<script type="text/javascript" src="js/master.js"></script>
+<script type="text/javascript" src="js/avocsh1.js"></script>
+<script type="text/javascript" src="js/print.js"></script>
+<script type="text/javascript" src="js/errors.js"></script>
+<script type="text/javascript" src="js/modules/simple-query-basics.js"></script>
+<script type="text/javascript" src="js/modules/simple-query.js"></script>
+<script type="text/javascript" src="js/modules/statement-basics.js"></script>
+<script type="text/javascript" src="js/client.js"></script>
 <style>
 a:link {color: #797979; text-decoration: none;}
 a:visited {color: #797979; text-decoration: none;}
@@ -47,7 +47,7 @@
 <div id="menue" class="ui-layout-north">
 
 <div id="menue-left">
-<a href="/_admin/html/index.html"><img src="../html/media/images/ArangoDB_Logo.png"></a>
+<a href="/_admin/html/index.html"><img src="../html/media/images/ArangoDB_Logo.png" alt="ArangoDB Logo"></a>
 </div>
 
 <div id="menue-right">
@@ -56,8 +56,8 @@
 <button class="minimal" id="AvocSH">Shell</button>
 <button class="minimal" id="Logs">Logs</button>
 <button class="minimal" id="Status">Statistics</button>
-<a href="http://www.arangodb.org/manuals/" target="_blank">
-<img class="externalLink" src="../html/media/icons/expand16icon.png" alt=""></img>
+<a href="http://www.arangodb.org/manuals/current" target="_blank">
+<img class="externalLink" src="../html/media/icons/expand16icon.png" alt="External Link Icon"></img>
 <button class="minimal" id="Documentation">Documentation</button></a>
 </div>
 
@@ -86,6 +86,7 @@
 <th></th>
 <th>ID</th>
 <th>Name</th>
+<th>Type</th>
 <th>Status</th>
 <th>Size Documents</th>
 <th>Number of Documents</th>
@@ -130,13 +131,13 @@
 <td class="longTD">If true then the data is synchronised to disk before returning from a create or update of an document.
 </td>
 </tr>
-<tr><td>isSystem:</td><td>
-<form action="#" id="isSystemForm">
-<input type="radio" name="isSystem" value=true>yes</input>
-<input type="radio" name="isSystem" value=false checked>no</input>
+<tr><td>type:</td><td>
+<form action="#" id="typeForm">
+<input type="radio" name="type" value="2" checked>document</input>
+<input type="radio" name="type" value="3">edge</input>
 </form>
 </td>
-<td class="longTD">If true, create a system collection. In this case collection-name should start with an underscore.
+<td class="longTD">The type of the collection to create.
 </td>
 </table>
 
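For context: the two radio values above match the numeric collection-type codes the server expects when creating a collection. A minimal sketch of that mapping in C, assuming ArangoDB's usual TRI_col_type_e naming (the enum and constant names below are not part of this diff):

    /* Sketch of the numeric values the new radio buttons submit; the
       constant names follow ArangoDB's TRI_col_type_e convention and
       are an assumption here, not quoted from this commit. */
    typedef enum {
      TRI_COL_TYPE_DOCUMENT = 2,  /* "document": the form's default */
      TRI_COL_TYPE_EDGE     = 3   /* "edge" */
    } collection_type_e;
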
@@ -437,7 +438,7 @@
 <input type="radio" name="formatshellJSONyesno" value=true checked>yes</input>
 <input type="radio" name="formatshellJSONyesno" value=false>no</input>
 </form>
-<a href="http://www.arangodb.org/manuals/UserManual.html" target="_blank" style="font-size:0.8em">ArangoDB Shell - click for more information</a>
+<a href="http://www.arangodb.org/manuals/current/UserManualArangosh.html" target="_blank" style="font-size:0.8em">ArangoDB Shell - click for more information</a><br></br>
 </div>
 
 <div id="queryView" style="display: none">
@@ -453,7 +454,7 @@
 <form id="queryForm" method="post" onsubmit="return false">
 <textarea placeholder="Type in your query..." class="editBox" id="queryContent"></textarea><br>
 <button class="minimal" id="submitQuery">Execute</button>
-<a style="font-size:0.8em;" id="aqlinfo" href="http://www.arangodb.org/manuals/Aql.html" target="_blank">ArangoDB Query Language - click for more information</a>
+<a style="font-size:0.8em;" id="aqlinfo" href="http://www.arangodb.org/manuals/current/Aql.html" target="_blank">ArangoDB Query Language - click for more information</a>
 <br></br>
 </form>
 <form action="#" id="formatJSONyesno" style="font-size:0.8em;">

File diff suppressed because it is too large

@@ -282,6 +282,21 @@ void TRI_FreeBufferLogging (TRI_vector_t* buffer);
 /// @{
 ////////////////////////////////////////////////////////////////////////////////
 
+////////////////////////////////////////////////////////////////////////////////
+/// @brief macro that validates printf() style call arguments
+/// the printf() call contained will never be executed but is just there to
+/// enable compile-time error check. it will be optimised away after that
+////////////////////////////////////////////////////////////////////////////////
+
+#ifdef TRI_ENABLE_LOGGER
+
+#define LOG_ARG_CHECK(...) \
+  if (false) { \
+    printf(__VA_ARGS__); \
+  } \
+
+#endif
+
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief logs fatal errors
 ////////////////////////////////////////////////////////////////////////////////
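The new LOG_ARG_CHECK macro relies on a common trick: a printf() call placed behind if (false) is never executed, yet the compiler still type-checks its format string against the variadic arguments, and the optimiser drops the dead call. A self-contained sketch of the same idea outside ArangoDB (ARG_CHECK and MY_LOG are hypothetical names, not ArangoDB's):

    #include <stdbool.h>
    #include <stdio.h>

    /* The printf() call sits in dead code, so the compiler checks the
       format string against the arguments, but no call survives. */
    #define ARG_CHECK(...)   \
      if (false) {           \
        printf(__VA_ARGS__); \
      }

    #define MY_LOG(...)               \
      do {                            \
        ARG_CHECK(__VA_ARGS__)        \
        fprintf(stderr, __VA_ARGS__); \
      } while (0)

    int main (void) {
      MY_LOG("answer: %d\n", 42);   /* compiles cleanly */
      /* MY_LOG("answer: %d\n", "x");  would raise a -Wformat warning */
      return 0;
    }

The same pattern is threaded through each of the logging macros below, so every LOG_* call gets its format arguments checked at compile time.
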
@@ -290,6 +305,7 @@ void TRI_FreeBufferLogging (TRI_vector_t* buffer);
 
 #define LOG_FATAL(...) \
   do { \
+    LOG_ARG_CHECK(__VA_ARGS__) \
     if (TRI_IsHumanLogging() && TRI_IsFatalLogging()) { \
       TRI_Log(__FUNCTION__, __FILE__, __LINE__, TRI_LOG_LEVEL_FATAL, TRI_LOG_SEVERITY_HUMAN, __VA_ARGS__); \
     } \
@@ -309,6 +325,7 @@ void TRI_FreeBufferLogging (TRI_vector_t* buffer);
 
 #define LOG_ERROR(...) \
   do { \
+    LOG_ARG_CHECK(__VA_ARGS__) \
     if (TRI_IsHumanLogging() && TRI_IsErrorLogging()) { \
       TRI_Log(__FUNCTION__, __FILE__, __LINE__, TRI_LOG_LEVEL_ERROR, TRI_LOG_SEVERITY_HUMAN, __VA_ARGS__); \
     } \
@@ -330,6 +347,7 @@ void TRI_FreeBufferLogging (TRI_vector_t* buffer);
 
 #define LOG_WARNING(...) \
   do { \
+    LOG_ARG_CHECK(__VA_ARGS__) \
     if (TRI_IsHumanLogging() && TRI_IsWarningLogging()) { \
       TRI_Log(__FUNCTION__, __FILE__, __LINE__, TRI_LOG_LEVEL_WARNING, TRI_LOG_SEVERITY_HUMAN, __VA_ARGS__); \
     } \
@@ -351,6 +369,7 @@ void TRI_FreeBufferLogging (TRI_vector_t* buffer);
 
 #define LOG_INFO(...) \
   do { \
+    LOG_ARG_CHECK(__VA_ARGS__) \
     if (TRI_IsHumanLogging() && TRI_IsInfoLogging()) { \
       TRI_Log(__FUNCTION__, __FILE__, __LINE__, TRI_LOG_LEVEL_INFO, TRI_LOG_SEVERITY_HUMAN, __VA_ARGS__); \
     } \
@@ -372,6 +391,7 @@ void TRI_FreeBufferLogging (TRI_vector_t* buffer);
 
 #define LOG_DEBUG(...) \
   do { \
+    LOG_ARG_CHECK(__VA_ARGS__) \
     if (TRI_IsHumanLogging() && TRI_IsDebugLogging(__FILE__)) { \
       TRI_Log(__FUNCTION__, __FILE__, __LINE__, TRI_LOG_LEVEL_DEBUG, TRI_LOG_SEVERITY_HUMAN, __VA_ARGS__); \
     } \
@@ -391,6 +411,7 @@ void TRI_FreeBufferLogging (TRI_vector_t* buffer);
 
 #define LOG_TRACE(...) \
   do { \
+    LOG_ARG_CHECK(__VA_ARGS__) \
     if (TRI_IsHumanLogging() && TRI_IsTraceLogging(__FILE__)) { \
       TRI_Log(__FUNCTION__, __FILE__, __LINE__, TRI_LOG_LEVEL_TRACE, TRI_LOG_SEVERITY_HUMAN, __VA_ARGS__); \
     } \

@@ -198,15 +198,16 @@ static char** AttemptedCompletion (char const* text, int start, int end) {
   if (result != 0 && result[0] != 0 && result[1] == 0) {
     size_t n = strlen(result[0]);
 
-    if (result[0][n-1] == ')') {
-      result[0][n-1] = '\0';
-
-#if RL_READLINE_VERSION >= 0x0500
-      rl_completion_suppress_append = 1;
-#endif
+    if (result[0][n - 1] == ')') {
+      result[0][n - 1] = '\0';
     }
   }
 
+#if RL_READLINE_VERSION >= 0x0500
+  // issue #289
+  rl_completion_suppress_append = 1;
+#endif
+
   return result;
 }
 
@@ -250,6 +251,9 @@ V8LineEditor::V8LineEditor (v8::Handle<v8::Context> context, std::string const&
 
 bool V8LineEditor::open (const bool autoComplete) {
   if (autoComplete) {
+    // issue #289: do not append a space after completion
+    rl_completion_append_character = '\0';
+
     rl_attempted_completion_function = AttemptedCompletion;
     rl_completer_word_break_characters = WordBreakCharacters;
 
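The two changes above work together to fix issue #289: setting rl_completion_append_character to '\0' stops GNU readline from appending a space after every completed word, while rl_completion_suppress_append = 1 (available in readline 5.0 and later, hence the version guard) suppresses the append for the current completion inside the completion function itself. A minimal standalone sketch of the same setup, with a hypothetical fixed command list standing in for ArangoDB's V8 symbol lookup:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <readline/readline.h>

    /* Hypothetical completions; the real code enumerates V8 symbols. */
    static const char *commands[] = { "help()", "exit", NULL };

    static char *generator (const char *text, int state) {
      static int index;
      if (state == 0) index = 0;
      while (commands[index] != NULL) {
        const char *name = commands[index++];
        if (strncmp(name, text, strlen(text)) == 0) {
          return strdup(name);
        }
      }
      return NULL;
    }

    static char **completer (const char *text, int start, int end) {
      (void) start; (void) end;
      char **result = rl_completion_matches(text, generator);
    #if RL_READLINE_VERSION >= 0x0500
      // same idea as the patch: never tack a space onto a completion
      rl_completion_suppress_append = 1;
    #endif
      return result;
    }

    int main (void) {
      rl_completion_append_character = '\0';  /* no space after completion */
      rl_attempted_completion_function = completer;
      char *line = readline("demo> ");
      if (line != NULL) { printf("got: %s\n", line); free(line); }
      return 0;
    }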