diff --git a/CHANGELOG b/CHANGELOG
index d722ecafb1..2897874fd2 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,6 +1,16 @@
v1.1.beta3 (XXXX-XX-XX)
-----------------------
+* added collection type label to web interface
+
+* fixed issue #290: the web interface now disallows creating non-edges in edge collections;
+ when creating collections via the web interface, the collection type must also be
+ specified (default is document collection)
+
+* fixed issue #289: tab-completion does not insert any spaces
+
+* fixed issue #282: fix escaping in web interface
+
* made AQL function NOT_NULL take any number of arguments. Will now return its
first argument that is not null, or null if all arguments are null. This is downwards
compatible.
diff --git a/Documentation/Makefile.files b/Documentation/Makefile.files
index 08c739285f..b841e33558 100644
--- a/Documentation/Makefile.files
+++ b/Documentation/Makefile.files
@@ -80,6 +80,7 @@ WIKI = \
JSModules \
Key-Value \
NamingConventions \
+ NewFeatures11 \
RefManual \
RestDocument \
RestEdge \
diff --git a/Documentation/Manual/Home.md b/Documentation/Manual/Home.md
index bc3fb7416c..08d50350db 100644
--- a/Documentation/Manual/Home.md
+++ b/Documentation/Manual/Home.md
@@ -23,6 +23,9 @@ The HTML and PDF versions of the manual can be found
Please contact @EXTREF_S{http://www.arangodb.org/connect,us} if you
have any questions.
+New Features in ArangoDB 1.1 {#NewFeatures11}
+=============================================
+
Upgrading to ArangoDB 1.1 {#ArangoDBUpgrading}
==============================================
diff --git a/Documentation/Manual/NewFeatures11.md b/Documentation/Manual/NewFeatures11.md
new file mode 100644
index 0000000000..3646023180
--- /dev/null
+++ b/Documentation/Manual/NewFeatures11.md
@@ -0,0 +1,339 @@
+New Features in ArangoDB 1.1 {#NewFeatures11}
+=============================================
+
+## Batch requests
+
+ArangoDB 1.1 provides a new REST API for batch requests at `/_api/batch`.
+
+Clients can use the API to send multiple requests to ArangoDB at once, by packaging
+them into a single aggregated request.
+
+ArangoDB will then unpack the aggregated request and process the contained requests
+one by one. When done, it will send an aggregated response to the client, which the
+client can then unpack to get the list of individual responses.
+
+Using the batch request API may save network overhead because it reduces the
+number of HTTP requests and responses that clients and ArangoDB need to exchange.
+This may be especially important if the network is slow or if the individual
+requests are small and the network overhead per request would be significant.
+
+It should be noted that packing multiple individual requests into one aggregate
+request on the client side introduces some overhead itself. The same is true
+for the aggregate request unpacking and assembling on the server side. Using
+batch requests may still be beneficial in many cases, but they should only be used
+when they replace a considerable number of individual requests.
+
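+As an illustrative sketch, such an aggregated request could be sent with `curl` like
+this (the file name `batch.txt` and the boundary value are placeholders; the exact
+content types and the format of the individual parts are described in @ref HttpBatch):
+
+ > curl -X POST -H "Content-Type: multipart/form-data; boundary=XXXsubpartXXX" --data-binary @batch.txt "http://localhost:8529/_api/batch"
+
+The hypothetical file `batch.txt` would contain the individual requests, each wrapped
+in its own part and separated by the chosen boundary.
+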
+For more information see @ref HttpBatch.
+
+
+## More fine grained control of sync behavior
+
+ArangoDB stores all document data in memory-mapped files. When adding new documents,
+updating existing documents or deleting documents, these modifications are appended at
+the end of the currently used memory-mapped datafile.
+
+It is configurable whether ArangoDB should respond directly and synchronise the
+changes to disk asynchronously, or whether it should force the synchronisation before
+responding. The parameter to control this is named `waitForSync` and can be set on a
+per-collection level.
+
+Often, synchronisation is not required at the collection level, but at the operation level.
+ArangoDB 1.1 improves on this by providing an extra `waitForSync` parameter for the REST
+and Javascript _document_ and _edge_ modification operations.
+
+This parameter can be used to force synchronisation for individual operations on
+collections that have `waitForSync` set to `false`.
+
+The following REST API methods support the parameter `waitForSync` to force
+synchronisation:
+
+* `POST /_api/document`: adding a document
+* `POST /_api/edge`: adding an edge
+* `PATCH /_api/document`: partially update a document
+* `PATCH /_api/edge`: partially update an edge
+* `PUT /_api/document`: replace a document
+* `PUT /_api/edge`: replace an edge
+* `DELETE /_api/document`: delete a document
+* `DELETE /_api/edge`: delete an edge
+
+If the `waitForSync` parameter is omitted or set to `false`, the collection-level
+synchronisation behavior will be applied. Setting the parameter to `true`
+will force synchronisation.
+
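+For example, to insert a document and force synchronisation for just this operation
+(assuming a collection named `users` and the default endpoint), the `waitForSync`
+URL parameter can be appended like this:
+
+ > curl -X POST --data '{"name":"foo"}' "http://localhost:8529/_api/document?collection=users&waitForSync=true"
+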
+The following Javascript methods support forcing synchronisation, too:
+* save()
+* update()
+* replace()
+* delete()
+
+Force synchronisation of a save operation:
+
+ > db.users.save({"name":"foo"}, true);
+
+If the second parameter is omitted or set to `false`, the collection-level
+synchronisation behavior will be applied. Setting the parameter to `true`
+will force synchronisation.
+
+
+## Synchronisation of shape data
+
+ArangoDB 1.1 provides an option `--database.force-sync-shapes` that controls whether
+shape data (information about document attribute names and attribute value types)
+should be synchronised to disk directly after each write, or if synchronisation is
+allowed to happen asynchronously.
+The latter option allows ArangoDB to return faster from operations that involve new
+document shapes.
+
+In ArangoDB 1.0, shape information was always synchronised to disk, and users did not
+have a choice. The default value of `--database.force-sync-shapes` in ArangoDB 1.1
+is `true`, so it is fully compatible with ArangoDB 1.0.
+However, in ArangoDB 1.1 the direct synchronisation can be turned off by setting the
+value to `false`. Direct synchronisation of shape data will then be disabled for
+collections that have a `waitForSync` value of `false`.
+Shape data will always be synchronised directly for collections that have a `waitForSync`
+value of `true`.
+
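+For example, to start the server with direct synchronisation of shape data turned off
+(using the same example database directory as in the other examples of this manual):
+
+ > bin/arangod --database.force-sync-shapes false /tmp/voctest
+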
+Still, ArangoDB 1.1 may need to perform less synchronisation when it writes shape data
+(attribute names and attribute value types of collection documents).
+
+Users may benefit if they save documents with many different structures (in terms of
+document attribute names and attribute value types) in the same collection. If only
+a few distinct document shapes are used, the effect will not be noticeable.
+
+
+## Collection types
+
+In ArangoDB 1.1, collections are now explicitly typed:
+- regular documents go into _document_-only collections,
+- and edges go into _edge_ collections.
+
+In 1.0, collections were untyped, and edges and documents could be mixed in the same collection.
+Whether or not a collection was to be treated as an _edge_ or _document_ collection was
+decided at runtime by looking at the prefix used (e.g. `db.xxx` vs. `edges.xxx`).
+
+The explicit collection types used in ArangoDB 1.1 allow users to query the collection type at
+runtime and make decisions based on it:
+
+ arangosh> db.users.type();
+
+Extra Javascript functions have been introduced to create collections:
+
+ arangosh> db._createDocumentCollection("users");
+ arangosh> db._createEdgeCollection("relationships");
+
+The "traditional" functions are still available:
+
+ arangosh> db._create("users");
+ arangosh> edges._create("relationships");
+
+The ArangoDB web interface also allows the explicit creation of _edge_
+collections.
+
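+As the type is now stored with each collection, user code can also branch on it, for
+example as in the following sketch (the concrete values returned by `type()` are
+described in the reference manual):
+
+ arangosh> if (db.users.type() != db.relationships.type()) { print("different collection types"); }
+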
+
+## Support for partial updates
+
+The REST API for documents now offers the HTTP PATCH method to partially update
+documents. A partial update allows specifying only the attributes to change instead
+of the full document. Internally, it will merge the supplied attributes into the
+existing document.
+
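+As a sketch, a partial update via the REST API could look like this (`<document-handle>`
+is a placeholder for the `_id` of an existing document, and the default endpoint is
+assumed):
+
+ > curl -X PATCH --data '{"active":false}' "http://localhost:8529/_api/document/<document-handle>"
+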
+Completely overwriting/replacing entire documents is still available via the HTTP PUT
+method, as it was in ArangoDB 1.0.
+In _arangosh_, the partial update method is named _update_, and the previously existing
+_replace_ method still performs a replacement of the entire document as before.
+
+This call will update just the `active` attribute of the document `user`. All other
+attributes will remain unmodified. The document revision number will of course be updated,
+as updating creates a new revision:
+
+ arangosh> db.users.update(user, { "active" : false });
+
+In contrast, the `replace` method will replace the entire existing document with the data
+supplied. All other attributes will be removed. Replacing will also create a new revision:
+
+ arangosh> db.users.replace(user, { "active" : false });
+
+For more information, please check @ref JS_UpdateVocbaseCol and @ref JS_ReplaceVocbaseCol.
+
+
+## AQL
+
+The following functions have been added or extended in the ArangoDB Query Language
+(AQL) in ArangoDB 1.1:
+- `MERGE_RECURSIVE()`: new function that merges documents recursively. In particular, it will merge
+ sub-attributes, a functionality not provided by the previously existing `MERGE()` function.
+- `NOT_NULL()`: now works with any number of arguments and returns the first non-null argument.
+ If all arguments are `null`, the function will return `null`, too.
+- `FIRST_LIST()`: new function that returns the first argument that is a list, and `null`
+ if none of the arguments are lists.
+- `FIRST_DOCUMENT()`: new function that returns the first argument that is a document, and `null`
+ if none of the arguments are documents.
+- `TO_LIST()`: converts the argument into a list.
+
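+As an illustration, the following AQL expressions are a sketch of the semantics
+described above (each line is a separate query):
+
+ RETURN NOT_NULL(null, "foo", "bar")
+ RETURN MERGE_RECURSIVE({ "a" : { "b" : 1 } }, { "a" : { "c" : 2 } })
+
+The first query would return `"foo"` (the first non-null argument), and the second
+would merge the sub-attributes, producing `{ "a" : { "b" : 1, "c" : 2 } }`.
+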
+
+## Endpoints
+
+ArangoDB can now listen for incoming connections on one or many "endpoints" of different
+types. In ArangoDB lingo, an endpoint is the combination of a protocol and some
+configuration for it.
+
+The currently supported protocol types are:
+- `tcp`: for unencrypted connections over TCP/IP
+- `ssl`: for secure connections using SSL over TCP/IP
+- `unix`: for connections over Unix domain sockets
+
+Note that the data transferred inside the protocol is still HTTP, regardless of the
+chosen endpoint type. The endpoint protocol can thus be understood as the envelope
+that all HTTP communication is shipped inside.
+
+To specify an endpoint, ArangoDB 1.1 introduces a new option `--server.endpoint`. The
+values accepted by this option have the following specification syntax:
+- `tcp://host:port` (HTTP over IPv4)
+- `tcp://[host]:port` (HTTP over IPv6)
+- `ssl://host:port` (HTTP over SSL-encrypted IPv4)
+- `ssl://[host]:port` (HTTP over SSL-encrypted IPv6)
+- `unix://path/to/socket` (HTTP over Unix domain socket)
+
+### TCP endpoints
+
+The configuration options for the `tcp` endpoint type are a hostname or IP address and an
+optional port number. If the port is omitted, the default port number of 8529 is used.
+
+To make the server listen to connections coming in for IP 192.168.173.13 on TCP/IP port 8529:
+
+ > bin/arangod --server.endpoint tcp://192.168.173.13:8529
+
+To make the server listen to connections coming in for IP 127.0.0.1 on TCP/IP port 999:
+
+ > bin/arangod --server.endpoint tcp://127.0.0.1:999
+
+### SSL endpoints
+
+SSL endpoints can be used for secure, encrypted connections to ArangoDB. SSL is
+computationally intensive, so using it will result in an (unavoidable) performance
+degradation compared to plain-text requests.
+
+The configuration options for the `ssl` endpoint type are the same as for `tcp` endpoints.
+
+To make the server listen to SSL connections coming in for IP 192.168.173.13 on TCP/IP port 8529:
+
+ > bin/arangod --server.endpoint ssl://192.168.173.13:8529
+
+As multiple endpoints can be configured, ArangoDB can serve SSL and non-SSL requests in
+parallel, provided they use different ports:
+
+ > bin/arangod --server.endpoint tcp://192.168.173.13:8529 --server.endpoint ssl://192.168.173.13:8530
+
+### Unix domain socket endpoints
+
+The `unix` endpoint type can only be used if clients are on the same host as the _arangod_ server.
+Connections will then be established using a Unix domain socket, which is backed by a socket descriptor
+file. This type of connection should be slightly more efficient than TCP/IP.
+
+The configuration option for a `unix` endpoint type is the socket descriptor filename.
+
+To make the server use a Unix domain socket with filename `/var/run/arango.sock`:
+
+ > bin/arangod --server.endpoint unix:///var/run/arango.sock
+
+
+## Blueprints API
+
+Blueprints is a property graph model interface with provided implementations.
+Databases that implement the Blueprints interfaces automatically support
+Blueprints-enabled applications (@EXTREF{http://tinkerpop.com/,http://tinkerpop.com}).
+
+For more information please refer to @ref HttpBluePrints.
+
+
+## Server statistics
+
+ArangoDB 1.1 allows querying the server status via REST API methods.
+
+The following methods are available:
+- `GET /_admin/connection-statistics`: provides connection statistics
+- `GET /_admin/request-statistics`: provides request statistics
+
+Both methods return the current figures and aggregated historical values. They can
+be used to monitor the current server status as well as to get an overview of how
+the figures developed over time and to spot trends.
+
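+For example, the connection statistics can be retrieved with a plain HTTP GET request
+(assuming the default endpoint):
+
+ > curl "http://localhost:8529/_admin/connection-statistics"
+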
+The ArangoDB web interface uses these APIs to provide charts of the server connection
+statistics figures. It has a new "Statistics" tab for this purpose.
+
+For more information on the APIs, please refer to @ref HttpSystemConnectionStatistics
+and @ref HttpSystemRequestStatistics.
+
+
+## Improved HTTP request handling
+
+### Error codes
+
+ArangoDB 1.1 handles malformed HTTP requests better than ArangoDB 1.0 did.
+When it encounters an invalid HTTP request, it may answer with HTTP status
+codes that ArangoDB 1.0 did not use:
+- `HTTP 411 Length Required` will be returned for requests that have a negative
+ value in their `Content-Length` HTTP header.
+- `HTTP 413 Request Entity Too Large` will be returned for requests that are too big. The
+ maximum request size is 512 MB at the moment.
+- `HTTP 431 Request Header Field Too Large` will be returned for requests with overly
+ long HTTP headers. The maximum size per header field is 1 MB at the moment.
+
+For requests that are not completely shipped from the client to the server, the
+server will allow the client 90 seconds to complete the request before closing the
+dangling connection.
+
+If the `Content-Length` HTTP header in an incoming request is set and contains a
+value that is less than the length of the HTTP body sent, the server will return
+an `HTTP 400 Bad Request`.
+
+### Keep-Alive
+
+In version 1.1, ArangoDB will behave as follows when it comes to HTTP keep-alive:
+- if a client sends a `Connection: close` HTTP header, the server will close the connection as
+ requested
+- if a client sends a `Connection: keep-alive` HTTP header, the server will not close the
+ connection but keep it alive as requested
+- if a client does not send any `Connection` HTTP header, the server will assume _keep-alive_
+ if the request was an HTTP/1.1 request, and _close_ if the request was an HTTP/1.0 request
+- dangling keep-alive connections will be closed automatically by the server after a configurable
+ amount of seconds. To adjust the value, use the new server option `--server.keep-alive-timeout`.
+- Keep-alive can be turned off in ArangoDB by setting `--server.keep-alive-timeout` to a value of `0`.
+
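+For example, to close dangling keep-alive connections after 120 seconds (an arbitrary
+value chosen for illustration), the server could be started like this:
+
+ > bin/arangod --server.endpoint tcp://127.0.0.1:8529 --server.keep-alive-timeout 120
+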
+### Configurable backlog
+
+ArangoDB 1.1 adds an option `--server.backlog-size` to configure the system backlog size.
+The backlog size controls the maximum number of queued connections and is used by the listen()
+system call.
+
+The default value in ArangoDB is 10; the maximum value is platform-dependent.
+
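+For example, to request a backlog size of 128 queued connections (whether this value
+is actually honored depends on the operating system):
+
+ > bin/arangod --server.endpoint tcp://127.0.0.1:8529 --server.backlog-size 128
+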
+
+## Using V8 options
+
+To use arbitrary options the V8 engine provides, ArangoDB 1.1 introduces a new startup
+option `--javascript.v8-options`.
+All options that should be passed to V8 without being interpreted by ArangoDB can be put
+inside this option. ArangoDB itself will ignore these options and let V8 handle them;
+it is also up to V8 to complain about invalid options. In that case, V8 may refuse to
+start, which will also abort the startup of ArangoDB.
+
+To get a list of all options that the V8 engine in the currently used version of ArangoDB
+supports, you can use the value `--help`, which will just be passed through to V8:
+
+ > bin/arangod --javascript.v8-options "--help" /tmp/voctest
+
+
+## Smaller hash indexes
+
+Some internal structures have been adjusted in ArangoDB 1.1 so that hash index entries
+consume considerably less memory.
+
+Installations may benefit if they use unique or non-unique hash indexes on collections.
+
+
+## arangoimp
+
+_arangoimp_ now allows specifying the end-of-line (EOL) character of the input file.
+This allows better support for files created on Windows systems with `\r\n` EOLs.
+
+_arangoimp_ also supports importing input files in TSV format. TSV is a simple separated
+format similar to CSV, but with the tab character as the separator, no quoting of values,
+and thus no support for line breaks inside values.
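+
+A TSV import could then be run along the following lines. Note that the option names
+used here (`--file`, `--type`, `--collection`) are assumptions for illustration only;
+please check `arangoimp --help` for the exact option names:
+
+ > bin/arangoimp --file data.tsv --type tsv --collection users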
diff --git a/Documentation/Manual/Upgrading.md b/Documentation/Manual/Upgrading.md
index 099ce6d8e0..7d525e20a2 100644
--- a/Documentation/Manual/Upgrading.md
+++ b/Documentation/Manual/Upgrading.md
@@ -10,6 +10,12 @@ The following list contains changes in ArangoDB 1.1 that are not 100% downwards-
Existing users of ArangoDB 1.0 should read the list carefully and make sure they have undertaken all
necessary steps and precautions before upgrading from ArangoDB 1.0 to ArangoDB 1.1.
+## New dependencies
+
+As ArangoDB 1.1 supports SSL connections, ArangoDB can only be built on servers with the OpenSSL
+library installed. The OpenSSL library is not bundled with ArangoDB and must be installed separately.
+
+
## Database directory version check and upgrade
Starting with ArangoDB 1.1, _arangod_ will perform a database version check at startup.
@@ -109,16 +115,24 @@ The following _arangod_ startup options have been removed in ArangoDB 1.1:
- `--server.require-keep-alive`
- `--server.secure-require-keep-alive`
-In version 1.1, The server will now behave as follows automatically which should be more
+In version 1.1, the server will automatically behave as follows, which should be more
conforming to the HTTP standard:
- if a client sends a `Connection: close` HTTP header, the server will close the connection as
requested
- if a client sends a `Connection: keep-alive` HTTP header, the server will not close the
connection but keep it alive as requested
- if a client does not send any `Connection` HTTP header, the server will assume _keep-alive_
- if the request was an HTTP/1.1 request, and "close" if the request was an HTTP/1.0 request
+ if the request was an HTTP/1.1 request, and _close_ if the request was an HTTP/1.0 request
- dangling keep-alive connections will be closed automatically by the server after a configurable
- amount of seconds. To adjust the value, use the new server option `--server.keep-alive-timeout`
+ amount of seconds. To adjust the value, use the new server option `--server.keep-alive-timeout`.
+- Keep-alive can be turned off in ArangoDB by setting `--server.keep-alive-timeout` to a value of `0`.
+
+As ArangoDB 1.1 will use keep-alive by default for incoming HTTP/1.1 requests without a
+`Connection` header, using ArangoDB 1.1 from a browser will likely result in the same connection
+being re-used. This may be unintuitive because requests from a browser to ArangoDB will
+effectively be serialised, not parallelised. To conduct parallel requests from a browser, you
+should either set `--server.keep-alive-timeout` to a value of `0`, or make your browser send
+`Connection: close` HTTP headers with its requests.
## Start / stop scripts
@@ -163,9 +177,13 @@ If one of the documents contains either a `_from` or a `_to` attribute, the coll
_edge_ collection. Otherwise, the collection is marked as a _document_ collection.
This distinction is important because edges can only be created in _edge_ collections starting
-with 1.1. Client code may need to be adjusted to work with ArangoDB 1.1 if it tries to insert
+with 1.1. User code may need to be adjusted to work with ArangoDB 1.1 if it tries to insert
edges into _document_-only collections.
+User code must also be adjusted if it uses the `ArangoEdges` or `ArangoEdgesCollection` objects
+that were present in ArangoDB 1.0 on the server. This only affects user code that was intended
+to be run on the server, directly in ArangoDB. The `ArangoEdges` and `ArangoEdgesCollection`
+objects were not exposed to _arangosh_ or any other clients.
## arangoimp / arangosh
@@ -186,4 +204,10 @@ These options can be used to specify the username and password when connecting v
to the _arangod_ server. If no password is given on the command line, _arangoimp_ and _arangosh_
will interactively prompt for a password.
If no username is specified on the command line, the default user _root_ will be used but there
-will still be a password prompt.
+will still be a password prompt.
+
+
+## Removed functionality
+
+In 1.0, there were unfinished REST APIs available at the `/_admin/config` URL suffix.
+These APIs were stubs only and have been removed in ArangoDB 1.1.
diff --git a/UPGRADING b/UPGRADING
index 210464af94..c64db8c469 100644
--- a/UPGRADING
+++ b/UPGRADING
@@ -1,130 +1 @@
-Changes that should be considered when installing or upgrading ArangoDB
------------------------------------------------------------------------
-
-* Starting the server
-
- Starting with ArangoDB 1.1, arangod will perform a database version check at startup.
- It will look for a file named "VERSION" in its database directory. If the file is not
- present (it will not be present in an ArangoDB 1.0 database), arangod in version 1.1
- will refuse to start and ask the user to run the script "arango-upgrade" first.
- If the VERSION file is present but is from a non-matching version of ArangoDB, arangod
- will also refuse to start and ask the user to run the upgrade script first.
- This procedure shall ensure that users have full control over when they perform any
- updates/upgrades of their data, and do not risk running an incompatible server/database
- state tandem.
-
- ArangoDB users are asked to run arango-upgrade when upgrading from one version of
- ArangoDB to a higher version (e.g. from 1.0 to 1.1), but also after pulling the latest
- ArangoDB source code while staying in the same minor version (e.g. when updating from
- 1.1-beta1 to 1.1-beta2).
-
- When installing ArangoDB from scratch, users should also run arango-upgrade once to
- initialise their database directory with some system collections that ArangoDB requires.
- When not run, arangod will refuse to start as mentioned before.
-
-
- The startup options "--port", "--server.port", "--server.http-port", and
- "--server.admin-port" have all been removed for arangod in version 1.1.
- All these options have been replaced by the new "--server.endpoint" option.
- This option allows to specify protocol, hostname and port the server should use for
- incoming connections.
- The "--server.endpoint" option must be specified on server start, otherwise arangod
- will refuse to start.
-
- The server can be bound to one or multiple endpoints at once. The following endpoint
- specification sytnax is currently supported:
- - tcp://host:port or http@tcp://host:port (HTTP over IPv4)
- - tcp://[host]:port or http@tcp://[host]:port (HTTP over IPv6)
- - ssl://host:port or http@tcp://host:port (HTTP over SSL-encrypted IPv4)
- - ssl://[host]:port or http@tcp://[host]:port (HTTP over SSL-encrypted IPv6)
- - unix://path/to/socket or http@unix:///path/to/socket (HTTP over UNIX socket)
-
- An example value for the option is --server.endpoint tcp://127.0.0.1:8529. This will
- make the server listen to request coming in on IP address 127.0.0.1 on port 8529, and
- that use HTTP over TCP/IPv4.
-
-
- The arangod startup options "--server.require-keep-alive" and
- "--server.secure-require-keep-alive" have been removed in 1.1. The server will now
- behave as follows which should be more conforming to the HTTP standard:
- * if a client sends a "Connection: close" header, the server will close the
- connection as request
- * if a client sends a "Connection: keep-alive" header, the server will not
- close the connection but keep it alive as requested
- * if a client does not send any "Connection" header, the server will assume
- "keep-alive" if the request was an HTTP/1.1 request, and "close" if the
- request was an HTTP/1.0 request
- * dangling keep-alive connections will be closed automatically by the server after
- a configurable amount of seconds. To adjust the value, use the new server option
- "--server.keep-alive-timeout"
-
-
-* Start / stop scripts
-
- The user has changed from "arango" to "arangodb", the start script name has changed from
- "arangod" to "arangodb", the database directory has changed from "/var/arangodb" to
- "/var/lib/arangodb" to be compliant with various Linux policies.
-
-
-* Collection types
-
- In 1.1, we have introduced types for collections: regular documents go into document
- collections, and edges go into edge collections. The prefixing (db.xxx vs. edges.xxx)
- works slightly different in 1.1: edges.xxx can still be used to access collections,
- however, it will not determine the type of existing collections anymore. In 1.0, you
- could write edges.xxx.something and xxx was automatically treated as an edge collection.
- As collections know and save their type in ArangoDB 1.1, this might work slightly
- differently.
-
- In 1.1, edge collections can still be created via edges._create() as in 1.0, but
- a new method was also introduced that uses the db object: db._createEdgeCollection().
- To create document collections, the following methods are available: db._create()
- as in 1.0, and additionally there is now db._createDocumentCollection().
-
- Collections in 1.1 are now either document or edge collections, but the two concepts
- can not be mixed in the same collection. arango-upgrade will determine the types of
- existing collections from 1.0 once on upgrade, based on the inspection of the first 50
- documents in the collection. If one of them contains either a _from or a _to attribute,
- the collection is made an edge collection, otherwise, the colleciton is marked as a
- document collection.
- This distinction is important because edges can only be created in edge collections
- starting with 1.1. Client code may need to be adjusted for 1.1 to insert edges only
- into "pure" edge collections in 1.1.
-
-
-* Authorization
-
- Starting from 1.1, arangod may be started with authentication turned on.
- When turned on, all requests incoming to arangod via the HTTP interface must carry an
- HTTP authorization header with a valid username and password in order to be processed.
- Clients sending requests without HTTP autorization headers or with invalid usernames/
- passwords will be rejected by arangod with an HTTP 401 error.
-
- arango-upgrade will create a default user "root" with an empty password when run
- initially.
-
- To turn authorization off, the server can be started with the command line option
- --server.disable-authentication true. Of course this configuration can also be stored
- in a configuration file.
-
-
-* arangoimp / arangosh
-
- The parameters "connect-timeout" and "request-timeout" for arangosh and arangoimp have
- been renamed to "--server.connect-timeout" and "--server.request-timeout".
-
- The parameter "--server" has been removed for both arangoimp and arangosh.
- To specify a server to connect to, the client tools now provide an option
- "--server.endpoint". This option can be used to specify the protocol, hostname and port
- for the connection. The default endpoint that is used when none is specified is
- tcp://127.0.0.1:8529. For more information on the endpoint specification syntax, see
- above.
-
- The options "--server.username" and "--server.password" have been added for arangoimp
- and arangosh in order to use authorization from these client tools, too.
- These options can be used to specify the username and password when connectiong via
- client tools to the arangod server. If no password is given on the command line,
- arangoimp and arangosh will interactively prompt for a password.
- If no username is specified on the command line, the default user "root" will be
- used but there will still be a password prompt.
-
+Please refer to Documentation/Manual/Upgrading.md
diff --git a/UnitTests/Philadelphia/mersenne-test.cpp b/UnitTests/Philadelphia/mersenne-test.cpp
index d32b0e7265..843e1cdb5e 100644
--- a/UnitTests/Philadelphia/mersenne-test.cpp
+++ b/UnitTests/Philadelphia/mersenne-test.cpp
@@ -86,40 +86,40 @@ BOOST_AUTO_TEST_CASE (tst_mersenne_int31) {
////////////////////////////////////////////////////////////////////////////////
BOOST_AUTO_TEST_CASE (tst_mersenne_seed) {
- TRI_SeedMersenneTwister((uint32_t) 0);
- BOOST_CHECK_EQUAL((uint32_t) 2357136044, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 2546248239, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 3071714933, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 0UL);
+ BOOST_CHECK_EQUAL((uint32_t) 2357136044UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 2546248239UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 3071714933UL, TRI_Int32MersenneTwister());
- TRI_SeedMersenneTwister((uint32_t) 1);
- BOOST_CHECK_EQUAL((uint32_t) 1791095845, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 4282876139, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 3093770124, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 1UL);
+ BOOST_CHECK_EQUAL((uint32_t) 1791095845UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 4282876139UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 3093770124UL, TRI_Int32MersenneTwister());
- TRI_SeedMersenneTwister((uint32_t) 2);
- BOOST_CHECK_EQUAL((uint32_t) 1872583848, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 794921487, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 111352301, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 2UL);
+ BOOST_CHECK_EQUAL((uint32_t) 1872583848UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 794921487UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 111352301UL, TRI_Int32MersenneTwister());
- TRI_SeedMersenneTwister((uint32_t) 23);
- BOOST_CHECK_EQUAL((uint32_t) 2221777491, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 2873750246, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 4067173416, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 23UL);
+ BOOST_CHECK_EQUAL((uint32_t) 2221777491UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 2873750246UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 4067173416UL, TRI_Int32MersenneTwister());
- TRI_SeedMersenneTwister((uint32_t) 42);
- BOOST_CHECK_EQUAL((uint32_t) 1608637542, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 3421126067, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 4083286876, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 42UL);
+ BOOST_CHECK_EQUAL((uint32_t) 1608637542UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 3421126067UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 4083286876UL, TRI_Int32MersenneTwister());
- TRI_SeedMersenneTwister((uint32_t) 458735);
- BOOST_CHECK_EQUAL((uint32_t) 1537542272, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 4131475792, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 2280116031, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 458735UL);
+ BOOST_CHECK_EQUAL((uint32_t) 1537542272UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 4131475792UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 2280116031UL, TRI_Int32MersenneTwister());
- TRI_SeedMersenneTwister((uint32_t) 395568682893);
- BOOST_CHECK_EQUAL((uint32_t) 2297195664, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 2381406737, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 4184846092, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 395568682893UL);
+ BOOST_CHECK_EQUAL((uint32_t) 2297195664UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 2381406737UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 4184846092UL, TRI_Int32MersenneTwister());
}
////////////////////////////////////////////////////////////////////////////////
@@ -127,26 +127,26 @@ BOOST_AUTO_TEST_CASE (tst_mersenne_seed) {
////////////////////////////////////////////////////////////////////////////////
BOOST_AUTO_TEST_CASE (tst_mersenne_reseed) {
- TRI_SeedMersenneTwister((uint32_t) 23);
- BOOST_CHECK_EQUAL((uint32_t) 2221777491, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 2873750246, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 4067173416, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 23UL);
+ BOOST_CHECK_EQUAL((uint32_t) 2221777491UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 2873750246UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 4067173416UL, TRI_Int32MersenneTwister());
// re-seed with same value and compare
- TRI_SeedMersenneTwister((uint32_t) 23);
- BOOST_CHECK_EQUAL((uint32_t) 2221777491, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 2873750246, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 4067173416, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 23UL);
+ BOOST_CHECK_EQUAL((uint32_t) 2221777491UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 2873750246UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 4067173416UL, TRI_Int32MersenneTwister());
// seed with different value
- TRI_SeedMersenneTwister((uint32_t) 458735);
- BOOST_CHECK_EQUAL((uint32_t) 1537542272, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 458735UL);
+ BOOST_CHECK_EQUAL((uint32_t) 1537542272UL, TRI_Int32MersenneTwister());
// re-seed with original value and compare
- TRI_SeedMersenneTwister((uint32_t) 23);
- BOOST_CHECK_EQUAL((uint32_t) 2221777491, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 2873750246, TRI_Int32MersenneTwister());
- BOOST_CHECK_EQUAL((uint32_t) 4067173416, TRI_Int32MersenneTwister());
+ TRI_SeedMersenneTwister((uint32_t) 23UL);
+ BOOST_CHECK_EQUAL((uint32_t) 2221777491UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 2873750246UL, TRI_Int32MersenneTwister());
+ BOOST_CHECK_EQUAL((uint32_t) 4067173416UL, TRI_Int32MersenneTwister());
}
////////////////////////////////////////////////////////////////////////////////
diff --git a/arangod/HashIndex/hasharray.c b/arangod/HashIndex/hasharray.c
index 06bfc50792..6edbe484db 100755
--- a/arangod/HashIndex/hasharray.c
+++ b/arangod/HashIndex/hasharray.c
@@ -124,7 +124,7 @@ static bool AllocateTable (TRI_hasharray_t* array, size_t numElements) {
}
// position array directly on a cache line boundary
- offset = ((uint64_t) data) % CACHE_LINE_SIZE;
+ offset = ((intptr_t) data) % CACHE_LINE_SIZE;
if (offset == 0) {
// we're already on a cache line boundary
@@ -134,7 +134,7 @@ static bool AllocateTable (TRI_hasharray_t* array, size_t numElements) {
// move to start of a cache line
table = data + (CACHE_LINE_SIZE - offset);
}
- assert(((uint64_t) table) % CACHE_LINE_SIZE == 0);
+ assert(((intptr_t) table) % CACHE_LINE_SIZE == 0);
array->_data = data;
array->_table = table;
diff --git a/arangod/SkipLists/skiplist.c b/arangod/SkipLists/skiplist.c
index c457fe7680..58341c3047 100755
--- a/arangod/SkipLists/skiplist.c
+++ b/arangod/SkipLists/skiplist.c
@@ -353,7 +353,7 @@ void TRI_InitSkipList (TRI_skiplist_t* skiplist, size_t elementSize,
// ..........................................................................
skiplist->_base._maxHeight = maximumHeight;
if (maximumHeight > SKIPLIST_ABSOLUTE_MAX_HEIGHT) {
- LOG_ERROR("Invalid maximum height for skiplist", TRI_ERROR_INTERNAL);
+ LOG_ERROR("Invalid maximum height for skiplist");
assert(false);
}
@@ -1413,7 +1413,7 @@ void TRI_InitSkipListMulti (TRI_skiplist_multi_t* skiplist,
// ..........................................................................
skiplist->_base._maxHeight = maximumHeight;
if (maximumHeight > SKIPLIST_ABSOLUTE_MAX_HEIGHT) {
- LOG_ERROR("Invalid maximum height for skiplist", TRI_ERROR_INTERNAL);
+ LOG_ERROR("Invalid maximum height for skiplist");
assert(false);
}
diff --git a/arangod/VocBase/collection.c b/arangod/VocBase/collection.c
index 444630e5ea..522847f43c 100644
--- a/arangod/VocBase/collection.c
+++ b/arangod/VocBase/collection.c
@@ -352,7 +352,7 @@ static bool CheckCollection (TRI_collection_t* collection) {
collection->_lastError = datafile->_lastError;
stop = true;
- LOG_ERROR("cannot rename sealed log-file to %s, this should not happen: %s", filename, TRI_errno());
+ LOG_ERROR("cannot rename sealed log-file to %s, this should not happen: %s", filename, TRI_last_error());
break;
}
diff --git a/arangod/VocBase/compactor.c b/arangod/VocBase/compactor.c
index d4353cb59f..213ee4882c 100644
--- a/arangod/VocBase/compactor.c
+++ b/arangod/VocBase/compactor.c
@@ -248,7 +248,7 @@ static bool Compactifier (TRI_df_marker_t const* marker, void* data, TRI_datafil
TRI_READ_UNLOCK_DOCUMENTS_INDEXES_PRIMARY_COLLECTION(primary);
if (deleted) {
- LOG_TRACE("found a stale document: %lu", d->_did);
+ LOG_TRACE("found a stale document: %llu", d->_did);
return true;
}
@@ -256,7 +256,7 @@ static bool Compactifier (TRI_df_marker_t const* marker, void* data, TRI_datafil
res = CopyDocument(sim, marker, &result, &fid);
if (res != TRI_ERROR_NO_ERROR) {
- LOG_FATAL("cannot write compactor file: ", TRI_last_error());
+ LOG_FATAL("cannot write compactor file: %s", TRI_last_error());
return false;
}
@@ -277,7 +277,7 @@ static bool Compactifier (TRI_df_marker_t const* marker, void* data, TRI_datafil
dfi->_numberDead += 1;
dfi->_sizeDead += marker->_size - markerSize;
- LOG_DEBUG("found a stale document after copying: %lu", d->_did);
+ LOG_DEBUG("found a stale document after copying: %llu", d->_did);
TRI_WRITE_UNLOCK_DATAFILES_DOC_COLLECTION(primary);
return true;
@@ -302,7 +302,7 @@ static bool Compactifier (TRI_df_marker_t const* marker, void* data, TRI_datafil
res = CopyDocument(sim, marker, &result, &fid);
if (res != TRI_ERROR_NO_ERROR) {
- LOG_FATAL("cannot write compactor file: ", TRI_last_error());
+ LOG_FATAL("cannot write compactor file: %s", TRI_last_error());
return false;
}
diff --git a/arangod/VocBase/datafile.c b/arangod/VocBase/datafile.c
index 1b5c46c9df..f0dfd7cd5e 100644
--- a/arangod/VocBase/datafile.c
+++ b/arangod/VocBase/datafile.c
@@ -169,7 +169,7 @@ static int TruncateDatafile (TRI_datafile_t* datafile, TRI_voc_size_t vocSize) {
res = TRI_UNMMFile(datafile->_data, datafile->_maximalSize, &(datafile->_fd), &(datafile->_mmHandle));
if (res < 0) {
- LOG_ERROR("munmap failed with: %s", res);
+ LOG_ERROR("munmap failed with: %d", res);
return res;
}
@@ -514,7 +514,7 @@ static TRI_datafile_t* OpenDatafile (char const* filename, bool ignoreErrors) {
if (res != TRI_ERROR_NO_ERROR) {
TRI_set_errno(res);
close(fd);
- LOG_ERROR("cannot memory map file '%s': '%s'", filename, res);
+ LOG_ERROR("cannot memory map file '%s': '%d'", filename, res);
return NULL;
}
@@ -626,7 +626,7 @@ TRI_datafile_t* TRI_CreateDatafile (char const* filename, TRI_voc_size_t maximal
// remove empty file
TRI_UnlinkFile(filename);
- LOG_ERROR("cannot memory map file '%s': '%s'", filename, res);
+ LOG_ERROR("cannot memory map file '%s': '%d'", filename, res);
return NULL;
}
@@ -1009,7 +1009,7 @@ bool TRI_CloseDatafile (TRI_datafile_t* datafile) {
res = TRI_UNMMFile(datafile->_data, datafile->_maximalSize, &(datafile->_fd), &(datafile->_mmHandle));
if (res != TRI_ERROR_NO_ERROR) {
- LOG_ERROR("munmap failed with: %s", res);
+ LOG_ERROR("munmap failed with: %d", res);
datafile->_state = TRI_DF_STATE_WRITE_ERROR;
datafile->_lastError = res;
return false;
diff --git a/arangod/VocBase/document-collection.c b/arangod/VocBase/document-collection.c
index 66f56cdf64..269f0e5626 100644
--- a/arangod/VocBase/document-collection.c
+++ b/arangod/VocBase/document-collection.c
@@ -2439,12 +2439,12 @@ static int FillIndex (TRI_document_collection_t* collection, TRI_index_t* idx) {
++inserted;
if (inserted % 10000 == 0) {
- LOG_DEBUG("indexed %ld documents of collection %lu", inserted, (unsigned long) primary->base._cid);
+ LOG_DEBUG("indexed %lu documents of collection %lu", (unsigned long) inserted, (unsigned long) primary->base._cid);
}
}
if (scanned % 10000 == 0) {
- LOG_TRACE("scanned %ld of %ld datafile entries of collection %lu", scanned, n, (unsigned long) primary->base._cid);
+ LOG_TRACE("scanned %ld of %ld datafile entries of collection %lu", (unsigned long) scanned, (unsigned long) n, (unsigned long) primary->base._cid);
}
}
}
@@ -3312,14 +3312,14 @@ static TRI_index_t* CreateGeoIndexDocumentCollection (TRI_document_collection_t*
if (location != NULL) {
idx = TRI_CreateGeo1Index(&sim->base, location, loc, geoJson, constraint, ignoreNull);
- LOG_TRACE("created geo-index for location '%s': %d",
+ LOG_TRACE("created geo-index for location '%s': %ld",
location,
(unsigned long) loc);
}
else if (longitude != NULL && latitude != NULL) {
idx = TRI_CreateGeo2Index(&sim->base, latitude, lat, longitude, lon, constraint, ignoreNull);
- LOG_TRACE("created geo-index for location '%s': %d, %d",
+ LOG_TRACE("created geo-index for location '%s': %ld, %ld",
location,
(unsigned long) lat,
(unsigned long) lon);
diff --git a/html/admin/css/.jquery-ui-1.7.2.custom.css.swp b/html/admin/css/.jquery-ui-1.7.2.custom.css.swp
deleted file mode 100644
index ba9b23000a..0000000000
Binary files a/html/admin/css/.jquery-ui-1.7.2.custom.css.swp and /dev/null differ
diff --git a/html/admin/css/layout.css b/html/admin/css/layout.css
index 2f700a9761..895daf5b40 100644
--- a/html/admin/css/layout.css
+++ b/html/admin/css/layout.css
@@ -229,7 +229,8 @@ html.busy, html.busy * {
.hoverClass:hover {
background-color: #696969 !important;
- border-bottom: 1px solid #696969;
+ border-bottom: 1px solid #696969;
+ margin-top: -1px;
color: white !important;
}
@@ -549,14 +550,14 @@ form {
}
#formatJSONyesno {
- margin-top: -40px;
+ margin-top: -50px;
float:right;
margin-right: 120px;
color: #797979;
}
#aqlinfo {
- line-height: 150%;
+ line-height: 250%;
}
#submitQuery {
@@ -565,19 +566,26 @@ form {
}
#refreshShell{
padding-bottom: 3px;
- margin-top: -10px;
+ margin-top: -6px;
width: 9.5% !important;
}
#queryOutput a, .queryError, .querySuccess {
font-size: 0.9em !important;
font-family: "courier";
+ padding-left: 10px;
+ padding-top: 10px !important;
}
+#queryOutput pre {
+ padding-left: 10px;
+ padding-top: 10px;
+}
#queryOutput {
margin-bottom: 5px;
- height:35%;
+ height:35%;
+ padding-top: 10px;
overflow-y: auto;
border: 1px solid black;
background: white;
@@ -587,10 +595,13 @@ form {
}
#queryContent {
+ padding-top: 10px;
height: 45%;
font-family: "courier";
width: 100%;
resize: vertical;
+ padding-left: 10px;
+ margin-top: 2px;
}
#avocshWindow {
@@ -626,6 +637,7 @@ form {
height: 30px;
background-color: white;
margin-right: 0.5%;
+ padding-left: 10px;
}
.avocshSuccess {
@@ -707,7 +719,7 @@ form {
#menue-right {
padding-top:11px;
- padding-right:14px;
+ padding-right:10px;
height:40px;
width:auto;
float:right;
@@ -826,6 +838,12 @@ form {
#logTableID_info {
}
+#logTableID, #warnLogTableID, #critLogTableID, #infoLogTableID, #debugLogTableID {
+ border-left: 1px solid #AAAAAA;
+ border-right: 1px solid #AAAAAA;
+}
+
+
.ui-dialog {
height:100px;
font-size: 0.8em;
@@ -834,9 +852,9 @@ form {
.fg-toolbar {
font-size: 0.7em;
border: 1px solid #D3D3D3 !important;
-// -webkit-box-shadow: inset 0 0 1px 1px #f6f6f6;
-// -moz-box-shadow: inset 0 0 1px 1px #f6f6f6;
-// box-shadow: inset 0 0 1px 1px #f6f6f6;
+ -webkit-box-shadow: inset 0 0 1px 1px #f6f6f6;
+ -moz-box-shadow: inset 0 0 1px 1px #f6f6f6;
+ box-shadow: inset 0 0 1px 1px #f6f6f6;
}
/*
@@ -848,29 +866,16 @@ form {
background: #e3e3e3;
padding: 9px 0 8px;
border: 1px solid #bbb;
-// -webkit-box-shadow: inset 0 0 1px 1px #f6f6f6;
-// -moz-box-shadow: inset 0 0 1px 1px #f6f6f6;
-// box-shadow: inset 0 0 1px 1px #f6f6f6;
color: #333;
font: 0.7em Verdana,Arial,sans-serif;
line-height: 1;
text-align: center;
- text-shadow: 0 1px 0 #fff;
width: 90px;
}
#menue button.minimal:hover {
- background: #d9d9d9;
-// -webkit-box-shadow: inset 0 0 1px 1px #eaeaea;
-// -moz-box-shadow: inset 0 0 1px 1px #eaeaea;
-// box-shadow: inset 0 0 1px 1px #eaeaea;
- color: #222;
+/* background: #d9d9d9; */
+/* color: #222; */
cursor: pointer; }
-#menue button.minimal:active {
- background: #d0d0d0;
-// -webkit-box-shadow: inset 0 0 1px 1px #e3e3e3;
-// -moz-box-shadow: inset 0 0 1px 1px #e3e3e3;
-// box-shadow: inset 0 0 1px 1px #e3e3e3;
- color: #000; }
/*
##############################################################################
### CollectionView subView Buttons
@@ -909,7 +914,7 @@ form {
}
#queryForm {
- margin-top: -5px;
+ margin-top: -3px;
}
#queryView {
@@ -962,3 +967,42 @@ form {
#iInfo {
font-size: 0.8em;
}
+
+#shellInfo {
+ line-height: 200%;
+}
+
+#formatJSONyesno {
+ padding-top: 5px !important;
+}
+
+@media screen
+ and (-webkit-min-device-pixel-ratio:0)
+{
+ #submitAvoc {
+ padding-top: 4px !important;
+ height: 30px !important;
+ }
+
+ #refreshShell {
+ margin-top: -4px !important;
+ margin-right: 2px !important;
+ }
+
+ #refrehShellTextRefresh {
+ line-height:230% !important;
+ }
+
+ #formatshellJSONyesno {
+ padding-top: 5px;
+ }
+}
+
+.leftCell {
+ border-left: 1px solid #D3D3D3 !important;
+}
+
+.rightCell {
+ border-right: 1px solid #D3D3D3 !important;
+}
+
diff --git a/html/admin/index.html b/html/admin/index.html
index 8ef6514c8d..b5613c31ca 100644
--- a/html/admin/index.html
+++ b/html/admin/index.html
@@ -10,30 +10,30 @@
@import "css/jquery-ui-1.8.19.custom.css";
@import "css/jquery.dataTables_themeroller.css";
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+