
Merge branch 'devel' of https://github.com/arangodb/arangodb into devel

This commit is contained in:
jsteemann 2016-05-20 16:00:10 +02:00
commit cdb728f807
37 changed files with 1449 additions and 237 deletions

.gitignore

@@ -108,15 +108,11 @@ js/apps/system/_admin/aardvark/APP/node_modules/*
 js/apps/system/_admin/aardvark/APP/frontend/build/app.js
 js/apps/system/_admin/aardvark/APP/frontend/build/app.js.gz
-js/apps/system/_admin/aardvark/APP/frontend/build/app.min.js
-js/apps/system/_admin/aardvark/APP/frontend/build/extra-minified.css
 js/apps/system/_admin/aardvark/APP/frontend/build/extra.css
 js/apps/system/_admin/aardvark/APP/frontend/build/extra.css.gz
 js/apps/system/_admin/aardvark/APP/frontend/build/index.html
 js/apps/system/_admin/aardvark/APP/frontend/build/libs.js
 js/apps/system/_admin/aardvark/APP/frontend/build/libs.js.gz
-js/apps/system/_admin/aardvark/APP/frontend/build/libs.min.js
-js/apps/system/_admin/aardvark/APP/frontend/build/style-minified.css
 js/apps/system/_admin/aardvark/APP/frontend/build/style.css
 js/apps/system/_admin/aardvark/APP/frontend/build/scripts.html.part


@@ -1,5 +1,7 @@
 !CHAPTER ARM
+
+Currently ARM Linux is *unsupported*; the initialization of the BOOST lockfree queue doesn't work.
 The ArangoDB packages for ARM require the kernel to allow unaligned memory access.
 How the kernel handles unaligned memory access is configurable at runtime by
 checking and adjusting the contents of `/proc/cpu/alignment`.


@@ -22,27 +22,24 @@ is available that describes how to compile ArangoDB from source on Ubuntu.

 Verify that your system contains

-* the GNU C/C++ compilers "gcc" and "g++" and the standard C/C++ libraries, with support
-  for C++11. You will need version gcc 4.9.0 or higher. For "clang" and "clang++",
-  you will need at least version 3.6.
-* the GNU autotools (autoconf, automake)
+* git (to obtain the sources)
+* a modern C/C++ compiler capable of C++11, including full regex support:
+  * GNU "gcc" and "g++" version 4.9.0 or higher
+  * "clang" and "clang++" version 3.6 or higher
+  * Visual C++ 2015 [(see the "compiling under windows" cookbook for more details)](/cookbook/CompilingUnderWindows30.html)
+* cmake
 * GNU make
-* the GNU scanner generator FLEX, at least version 2.3.35
-* the GNU parser generator BISON, at least version 2.4
-* Python, version 2 or 3
+* Python, version 2, in order to use gyp for V8
 * the OpenSSL library, version 1.0.1g or higher (development package)
-* the GNU readline library (development package)
-* Go, at least version 1.4.1
+* jemalloc or tcmalloc development packages
+* the GNU scanner generator FLEX, at least version 2.3.35 (optional)
+* the GNU parser generator BISON, at least version 2.4 (optional)

 Most Linux systems already supply RPMs or DPKGs for these packages.

-Some distributions, for example Ubuntu 12.04 or Centos 5, provide only very out-dated
-versions of compilers, FLEX, BISON, and/or the V8 engine. In that case you need to compile
+Some older distributions, for example Ubuntu 12.04 or Centos 5, provide only very out-dated
+versions of compilers, FLEX and BISON. In that case you need to compile
 newer versions of the programs and/or libraries.

-When compiling with special configure options, you may need the following extra libraries:
-
-* the Boost test framework library (only when using configure option `--enable-maintainer-mode`)
-
 !SUBSECTION Download the Source

 Download the latest source using ***git***:
@@ -62,42 +59,38 @@ any changes, you can speed up cloning substantially by using the *--single-branch* option.

 Switch into the ArangoDB directory

     unix> cd ArangoDB
+    unix> mkdir build
+    unix> cd build

-In order to generate the configure script, execute
+In order to generate the build environment please execute

-    unix> make setup
+    unix> cmake ..

-This will call aclocal, autoheader, automake, and autoconf in the correct order.
-
-!SUBSECTION Configure
-
-In order to configure the build environment please execute
-
-    unix> ./configure
-
 to setup the makefiles. This will check the various system characteristics and
-installed libraries.
+installed libraries. If you installed the compiler in a non-standard location, you may need to specify it:

-Please note that it may be required to set the *--host* and *--target* variables
-when running the configure command. For example, if you compile on MacOS, you
-should add the following options to the configure command:
+    cmake -DCMAKE_C_COMPILER=/opt/bin/gcc -DCMAKE_CXX_COMPILER=/opt/bin/g++ ..

-    --host=x86_64-apple-darwin --target=x86_64-apple-darwin
+If you compile on MacOS, you should add the following options to the cmake command:

-The host and target values for other architectures vary.
+    cmake .. -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_OSX_DEPLOYMENT_TARGET=10.11

-If you also plan to make changes to the source code of ArangoDB, add the
-following option to the *configure* command: *--enable-maintainer-mode*. Using
-this option, you can make changes to the lexer and parser files and some other
-source files that will generate other files. Enabling this option will add extra
-dependencies to BISON, FLEX, and PYTHON. These external tools then need to be
-available in the correct versions on your system.
+If you also plan to make changes to the source code of ArangoDB, you should compile with the `Debug` target;
+the `Debug` target enables additional sanity checks etc. which would slow down production binaries.

-The following configuration options exist:
+Other options valuable for development:
+
+    -DARANGODB_ENABLE_MAINTAINER_MODE
+
+Needed if you plan to make changes to the AQL language (which is implemented using the lexer and parser
+files `arangod/Aql/grammar.y` and `arangod/Aql/tokens.ll`); your system has to contain the tools FLEX and BISON.
+
+    -DARANGODB_ENABLE_BACKTRACE
+
+(requires the maintainer mode) If you want to have C++ stacktraces attached to your exceptions.
+This can be useful to locate the place where an exception or an assertion was thrown more quickly.

-`--enable-relative`
-
-This will make relative paths be used in the compiled binaries and
 scripts. It allows to run ArangoDB from the compile directory directly, without the
 need for a *make install* command and specifying much configuration parameters.

 When used, you can start ArangoDB using this command:
@@ -106,73 +99,21 @@ When used, you can start ArangoDB using this command:

 ArangoDB will then automatically use the configuration from file *etc/relative/arangod.conf*.

-`--enable-all-in-one-etcd`
-
-This tells the build system to use the bundled version of ETCD. This is the
-default and recommended.
-
-`--enable-internal-go`
-
-This tells the build system to use Go binaries located in the 3rdParty
-directory. Note that ArangoDB does not ship with Go binaries, and that the Go
-binaries must be copied into this directory manually.
-
-`--enable-maintainer-mode`
-
-This tells the build system to use BISON and FLEX to regenerate the parser and
-scanner files. If disabled, the supplied files will be used so you cannot make
-changes to the parser and scanner files. You need at least BISON 2.4.1 and FLEX
-2.5.35. This option also allows you to make changes to the error messages file,
-which is converted to js and C header files using Python. You will need Python 2
-or 3 for this. Furthermore, this option enables additional test cases to be
-executed in a *make unittests* run. You also need to install the Boost test
-framework for this.
-
-Additionally, turning on the maintainer mode will turn on a lot of assertions in
-the code.
-
-`--enable-failure-tests`
+    -DUSE_FAILURE_TESTS

 This option activates additional code in the server that intentionally makes the
 server crash or misbehave (e.g. by pretending the system ran out of
 memory). This option is useful for writing tests.

-`--enable-v8-debug`
-
-Builds a debug version of the V8 library. This is useful only when working on
-the V8 integration inside ArangoDB.
-
-`--enable-tcmalloc`
-
-Links arangod and the client tools against the tcmalloc library installed on the
-system. Note that when this option is set, a tcmalloc library must be present
-and exposed under the name `libtcmalloc`, `libtcmalloc_minimal` or
-`libtcmalloc_debug`.
-
-!SUBSECTION Compiling Go
-
-Users F21 and duralog told us that some systems don't provide an up-to-date
-version of go. This seems to be the case for at least Ubuntu 12 and 13. To
-install go on these systems, you may follow the instructions provided
-[here](http://blog.labix.org/2013/06/15/in-flight-deb-packages-of-go). For
-other systems, you may follow the instructions
-[here](http://golang.org/doc/install).
-
-To make ArangoDB use a specific version of go, you may copy the go binaries into
-the 3rdParty/go-32 or 3rdParty/go-64 directories of ArangoDB (depending on your
-architecture), and then tell ArangoDB to use this specific go version by using
-the *--enable-internal-go* configure option.
-
-User duralog provided the following script to pull the latest release
-version of go into the ArangoDB source directory and build it:
-
-    cd ArangoDB
-    hg clone -u release https://code.google.com/p/go 3rdParty/go-64 && \
-      cd 3rdParty/go-64/src && \
-      ./all.bash
-    # now that go is installed, run your configure with --enable-internal-go
-    ./configure --enable-internal-go
+By default the libc allocator is chosen. If your system offers jemalloc, it will be
+preferred over tcmalloc and the system allocator.
+
+!SUBSUBSECTION shared memory
+
+Gyp is used as makefile generator by V8. Gyp requires shared memory to be available,
+which may not be the case if you e.g. compile in a chroot. You can make it available like this:
+
+    none /opt/chroots/ubuntu_precise_x64/dev/shm tmpfs rw,nosuid,nodev,noexec 0 2
+    devpts /opt/chroots/ubuntu_precise_x64/dev/pts devpts gid=5,mode=620 0 0

 !SUBSECTION Compile
@@ -200,10 +141,10 @@ to IP address 127.0.0.1. You should see the startup messages similar to the
 following:

 ```
-2013-10-14T12:47:29Z [29266] INFO ArangoDB xxx ...
-2013-10-14T12:47:29Z [29266] INFO using endpoint 'tcp://127.0.0.1:8529' for non-encrypted requests
-2013-10-14T12:47:30Z [29266] INFO Authentication is turned off
-2013-10-14T12:47:30Z [29266] INFO ArangoDB (version xxx) is ready for business. Have fun!
+2016-06-01T12:47:29Z [29266] INFO ArangoDB xxx ...
+2016-06-01T12:47:29Z [29266] INFO using endpoint 'tcp://127.0.0.1:8529' for non-encrypted requests
+2016-06-01T12:47:30Z [29266] INFO Authentication is turned on
+2016-06-01T12:47:30Z [29266] INFO ArangoDB (version xxx) is ready for business. Have fun!
 ```
If it fails with a message about the database directory, please make sure the

@@ -233,28 +174,19 @@ From time to time there will be bigger structural changes in ArangoDB, which may
 render the old Makefiles invalid. Should this be the case and `make` complains
 about missing files etc., the following commands should fix it:

-    unix> rm -rf lib/*/.deps arangod/*/.deps arangosh/*/.deps Makefile
-    unix> make setup
-    unix> ./configure <your configure options go here>
+    unix> rm -f CMakeCache.txt
+    unix> cmake ..
     unix> make

 In order to reset everything and also recompile all 3rd party libraries, issue
 the following commands:

-    unix> make superclean
     unix> git checkout -- .
-    unix> make setup
-    unix> ./configure <your configure options go here>
-    unix> make
+    unix> cd ..; rm -rf build; mkdir build; cd build

 This will clean up ArangoDB and the 3rd party libraries, and rebuild everything.

-If you forgot your previous configure options, you can look them up with
-
-    unix> head config.log
-
-before issuing `make superclean` (as `make superclean` also removes the file `config.log`).
-
 Sometimes you can get away with the less intrusive commands.

 !SUBSECTION Install


@@ -323,8 +323,9 @@ it may break client applications that rely on the old behavior.

 !SUBSECTION Databases API

-`_listDatabases()` has been renamed to `_databases()` (making it consistent with `_collections()`)
-`_listEndpoints()` has been renamed to `_endpoints()` (making it consistent with `_collections()`)
+The `_listDatabases()` function of the `db` object has been renamed to `_databases()`, making it
+consistent with the `_collections()` function. Also the `_listEndpoints()` function has been
+renamed to `_endpoints()`.

 !SUBSECTION Collection API
@@ -589,6 +590,43 @@ could be overridden by sending the HTTP header `x-arango-version: 1.4`. Clients can
 still send the header, but this will not make the database name in the "location"
 response header disappear.

+The result format for querying all collections via the API GET `/_api/collection`
+has been changed.
+
+Previous versions of ArangoDB returned an object with an attribute named `collections`
+and an attribute named `names`. Both contained all available collections, but
+`collections` contained the collections as an array, and `names` contained the
+collections again, contained in an object in which the attribute names were the
+collection names, e.g.
+
+```
+{
+  "collections": [
+    {"id":"5874437","name":"test","isSystem":false,"status":3,"type":2},
+    {"id":"17343237","name":"something","isSystem":false,"status":3,"type":2},
+    ...
+  ],
+  "names": {
+    "test": {"id":"5874437","name":"test","isSystem":false,"status":3,"type":2},
+    "something": {"id":"17343237","name":"something","isSystem":false,"status":3,"type":2},
+    ...
+  }
+}
+```
+
+This result structure was redundant, and therefore has been simplified to just
+
+```
+{
+  "result": [
+    {"id":"5874437","name":"test","isSystem":false,"status":3,"type":2},
+    {"id":"17343237","name":"something","isSystem":false,"status":3,"type":2},
+    ...
+  ]
+}
+```
+
+in ArangoDB 3.0.
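A client that has to work against both the 2.x and the 3.0 response shape can simply branch on the attribute names. A minimal Python sketch (illustrative only, not part of this commit; the response bodies are abbreviated versions of the examples above):

```python
import json

# Hypothetical 2.x-style response: "collections" array plus redundant "names" object.
old_response = json.loads("""
{
  "collections": [
    {"id": "5874437", "name": "test", "isSystem": false, "status": 3, "type": 2}
  ],
  "names": {
    "test": {"id": "5874437", "name": "test", "isSystem": false, "status": 3, "type": 2}
  }
}
""")

# Hypothetical 3.0-style response: a single "result" array.
new_response = json.loads("""
{
  "result": [
    {"id": "5874437", "name": "test", "isSystem": false, "status": 3, "type": 2}
  ]
}
""")

def collections(body):
    """Return the collection list from either response format."""
    if "result" in body:           # ArangoDB 3.0
        return body["result"]
    return body["collections"]     # ArangoDB 2.x

assert collections(old_response) == collections(new_response)
```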
!SUBSECTION Replication APIs

The URL parameter "failOnUnknown" was removed from the REST API GET `/_api/replication/dump`.

@@ -771,7 +809,7 @@ to adapt to the new behavior, making the option superfluous in 3.0.

 !SECTION Web Admin Interface

 The JavaScript shell has been removed from ArangoDB's web interface. The functionality
-it provided is still fully available in the ArangoShell (arangosh) binary shipped
+the shell provided is still fully available in the ArangoShell (arangosh) binary shipped
 with ArangoDB.

 !SECTION ArangoShell and client tools


@@ -14,7 +14,7 @@ ArangoDB

 Master: [![Build Status](https://secure.travis-ci.org/arangodb/arangodb.png?branch=master)](http://travis-ci.org/arangodb/arangodb)

-Slack: ![ArangoDB-Logo](http://slack.arangodb.com/badge.svg)
+Slack: [![ArangoDB-Logo](http://slack.arangodb.com/badge.svg)](https://slack.arangodb.com)

 ArangoDB is a multi-model, open-source database with flexible data models for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions. Use ACID transactions if you require them. Scale horizontally with a few mouse clicks.


@@ -140,14 +140,14 @@ struct AqlValue final {
     }
     if (length < sizeof(_data.internal) - 1) {
       // short string... can store it inline
-      _data.internal[0] = 0x40 + length;
+      _data.internal[0] = static_cast<uint8_t>(0x40 + length);
       memcpy(_data.internal + 1, value, length);
       setType(AqlValueType::VPACK_INLINE);
     } else if (length <= 126) {
       // short string... cannot store inline, but we don't need to
       // create a full-featured Builder object here
       _data.buffer = new arangodb::velocypack::Buffer<uint8_t>(length + 1);
-      _data.buffer->push_back(0x40 + length);
+      _data.buffer->push_back(static_cast<char>(0x40 + length));
       _data.buffer->append(value, length);
       setType(AqlValueType::VPACK_MANAGED);
     } else {
@@ -169,14 +169,14 @@ struct AqlValue final {
     size_t const length = value.size();
     if (length < sizeof(_data.internal) - 1) {
       // short string... can store it inline
-      _data.internal[0] = 0x40 + length;
+      _data.internal[0] = static_cast<uint8_t>(0x40 + length);
       memcpy(_data.internal + 1, value.c_str(), value.size());
       setType(AqlValueType::VPACK_INLINE);
     } else if (length <= 126) {
       // short string... cannot store inline, but we don't need to
       // create a full-featured Builder object here
       _data.buffer = new arangodb::velocypack::Buffer<uint8_t>(length + 1);
-      _data.buffer->push_back(0x40 + length);
+      _data.buffer->push_back(static_cast<char>(0x40 + length));
       _data.buffer->append(value.c_str(), length);
       setType(AqlValueType::VPACK_MANAGED);
     } else {
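The `0x40 + length` byte that these constructors write is the VelocyPack short-string type tag: one byte encoding both "this is a string" and its length, which is why the branch above cuts off at 126. A Python sketch of that encoding (illustrative, not ArangoDB code):

```python
def encode_short_string(value: bytes) -> bytes:
    """VelocyPack-style short string: tag byte 0x40 + length, then raw bytes.

    0x40 encodes the empty string and 0xBE a 126-byte string; anything
    longer needs the separate long-string type, as in the else-branch above.
    """
    if len(value) > 126:
        raise ValueError("needs the long-string encoding")
    return bytes([0x40 + len(value)]) + value

assert encode_short_string(b"test") == b"\x44test"  # tag 0x40 + 4
```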


@@ -95,6 +95,13 @@ class Parser {
   /// @brief adjust the current parse position
   inline void increaseOffset(size_t offset) { _offset += offset; }

+  /// @brief adjust the current parse position
+  inline void decreaseOffset(int offset) {
+    _offset -= static_cast<size_t>(offset);
+  }
+
+  /// @brief adjust the current parse position
+  inline void decreaseOffset(size_t offset) { _offset -= offset; }
+
   /// @brief fill the output buffer with a fragment of the query
   void fillBuffer(char* result, size_t length) {
     memcpy(result, _buffer, length);

File diff suppressed because it is too large


@@ -561,6 +561,8 @@ namespace arangodb {
   /* now push the character back into the input stream and return a T_NOT token */
   BEGIN(INITIAL);
   yyless(0);
+  /* must decrement offset by one character as we're pushing the char back onto the stack */
+  yyextra->decreaseOffset(1);
   return T_NOT;
 }
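The scanner fix above keeps the parser's byte offset in sync when a lookahead character is un-read via `yyless(0)`. The same bookkeeping in a toy Python scanner (illustrative sketch; the class and method names are made up):

```python
class Scanner:
    """Toy scanner: greedily reads '!' plus one lookahead character to
    recognize '!='. If the lookahead does not match, the character is pushed
    back and the running offset decremented, mirroring
    yyless(0) + yyextra->decreaseOffset(1) above."""

    def __init__(self, text):
        self.text = text
        self.offset = 0  # current parse position, like Parser::_offset

    def _advance(self):
        if self.offset >= len(self.text):
            return ""
        ch = self.text[self.offset]
        self.offset += 1
        return ch

    def _pushback(self):
        # un-reading a character must also rewind the offset,
        # otherwise reported error positions drift by one
        self.offset -= 1

    def next_token(self):
        ch = self._advance()
        if ch == "!":
            nxt = self._advance()  # greedy lookahead
            if nxt == "=":
                return "T_NE"
            if nxt:
                self._pushback()
            return "T_NOT"
        return ch

s = Scanner("!x")
assert s.next_token() == "T_NOT"
assert s.offset == 1  # "x" is still unread
```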


@@ -975,7 +975,33 @@ static void CreateCollectionCoordinator(
   std::vector<std::string> dbServers;

-  if (distributeShardsLike.empty()) {
+  bool done = false;
+  if (!distributeShardsLike.empty()) {
+    CollectionNameResolver resolver(vocbase);
+    TRI_voc_cid_t otherCid =
+        resolver.getCollectionIdCluster(distributeShardsLike);
+    if (otherCid != 0) {
+      std::string otherCidString
+          = arangodb::basics::StringUtils::itoa(otherCid);
+      std::shared_ptr<CollectionInfo> collInfo =
+          ci->getCollection(databaseName, otherCidString);
+      if (!collInfo->empty()) {
+        auto shards = collInfo->shardIds();
+        auto shardList = ci->getShardList(otherCidString);
+        for (auto const& s : *shardList) {
+          auto it = shards->find(s);
+          if (it != shards->end()) {
+            for (auto const& s : it->second) {
+              dbServers.push_back(s);
+            }
+          }
+        }
+        done = true;
+      }
+    }
+  }
+
+  if (!done) {
     // fetch list of available servers in cluster, and shuffle them randomly
     dbServers = ci->getCurrentDBServers();
@@ -985,23 +1011,6 @@ static void CreateCollectionCoordinator(
     }
     random_shuffle(dbServers.begin(), dbServers.end());
-  } else {
-    CollectionNameResolver resolver(vocbase);
-    TRI_voc_cid_t otherCid =
-        resolver.getCollectionIdCluster(distributeShardsLike);
-    std::string otherCidString = arangodb::basics::StringUtils::itoa(otherCid);
-    std::shared_ptr<CollectionInfo> collInfo =
-        ci->getCollection(databaseName, otherCidString);
-    auto shards = collInfo->shardIds();
-    auto shardList = ci->getShardList(otherCidString);
-    for (auto const& s : *shardList) {
-      auto it = shards->find(s);
-      if (it != shards->end()) {
-        for (auto const& s : it->second) {
-          dbServers.push_back(s);
-        }
-      }
-    }
   }

   // now create the shards
@@ -1013,17 +1022,23 @@ static void CreateCollectionCoordinator(
     for (uint64_t j = 0; j < replicationFactor; ++j) {
       std::string candidate;
       size_t count2 = 0;
+      bool found = true;
       do {
         candidate = dbServers[count++];
         if (count >= dbServers.size()) {
          count = 0;
         }
         if (++count2 == dbServers.size() + 1) {
-          TRI_V8_THROW_EXCEPTION_PARAMETER("replicationFactor too large");
+          LOG(WARN) << "createCollectionCoordinator: replicationFactor is "
+                       "too large for the number of DBservers";
+          found = false;
+          break;
         }
       } while (std::find(serverIds.begin(), serverIds.end(), candidate) !=
                serverIds.end());
-      serverIds.push_back(candidate);
+      if (found) {
+        serverIds.push_back(candidate);
+      }
     }

     // determine shard id
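The changed placement loop amounts to: round-robin over the DB servers, skip servers already chosen for this shard, and, once every server has been tried, log a warning and place fewer copies instead of throwing. A Python paraphrase (illustrative sketch, not the actual implementation):

```python
import logging

def assign_replicas(db_servers, replication_factor):
    """Pick one distinct server per replica, round-robin over db_servers.

    Mirrors the loop above: when replication_factor exceeds the number of
    available servers, a warning is logged and fewer copies are placed,
    rather than raising an error.
    """
    server_ids = []
    count = 0
    for _ in range(replication_factor):
        count2 = 0
        found = True
        while True:
            candidate = db_servers[count]
            count = (count + 1) % len(db_servers)
            count2 += 1
            if count2 == len(db_servers) + 1:
                logging.warning("replicationFactor is too large "
                                "for the number of DBservers")
                found = False
                break
            if candidate not in server_ids:
                break
        if found:
            server_ids.append(candidate)
    return server_ids

assert assign_replicas(["s1", "s2"], 2) == ["s1", "s2"]
# a third copy cannot be placed on two servers; it is dropped with a warning
assert assign_replicas(["s1", "s2"], 3) == ["s1", "s2"]
```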


@@ -32,7 +32,7 @@ endpoint = tcp://127.0.0.1:8529
 # reuse-address = false

 # disable authentication for the admin frontend
-authentication = false
+authentication = true

 # number of server threads, 0 to use them all
 # threads = 4


@@ -1,5 +1,5 @@
 [server]
-authentication = false
+authentication = true
 endpoint = tcp://0.0.0.0:8529

 [javascript]

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Binary file not shown. Before: 661 B. After: 6.5 KiB.

Binary file not shown. Before: 2.3 KiB. After: 10 KiB.


@@ -362,6 +362,10 @@
       window.App.notificationList.add({title: title, content: content, info: info, type: 'error'});
     },

+    arangoWarning: function (title, content, info) {
+      window.App.notificationList.add({title: title, content: content, info: info, type: 'warning'});
+    },
+
     hideArangoNotifications: function() {
       $.noty.clearQueue();
       $.noty.closeAll();


@@ -167,6 +167,9 @@
         data.numberOfShards = object.shards;
         data.shardKeys = object.keys;
       }
+      if (object.replicationFactor) {
+        data.replicationFactor = JSON.parse(object.replicationFactor);
+      }

       $.ajax({
         cache: false,


@@ -74,6 +74,28 @@
         </th>
       </tr>

+      <% if (figuresData.numberOfShards) { %>
+        <tr>
+          <th class="collectionInfoTh2">Shards:</th>
+          <th class="collectionInfoTh">
+            <div class="modal-text"><%=figuresData.numberOfShards%></div>
+          </th>
+          <th class="collectionInfoTh">
+          </th>
+        </tr>
+      <% } %>
+
+      <% if (figuresData.replicationFactor) { %>
+        <tr>
+          <th class="collectionInfoTh2">Replication factor:</th>
+          <th class="collectionInfoTh">
+            <div class="modal-text"><%=figuresData.replicationFactor%></div>
+          </th>
+          <th class="collectionInfoTh">
+          </th>
+        </tr>
+      <% } %>
+
       <tr>
         <th class="collectionInfoTh2">Index buckets:</th>
         <th class="collectionInfoTh">


@@ -508,7 +508,7 @@
         "new-replication-factor",
         "Replication factor",
         "",
-        "Numeric value. Default is '1'. Description: TODO",
+        "Numeric value. Must be at least 1. Description: TODO",
         "",
         false,
         [


@@ -622,13 +622,13 @@
     }

     if (self.server !== "-local-") {
+      url = self.serverInfo.endpoint + "/_admin/aardvark/statistics/cluster";
       urlParams += "&type=short&DBserver=" + self.serverInfo.target;

       if (! self.history.hasOwnProperty(self.server)) {
         self.history[self.server] = {};
       }
     }
-
-    console.log(url);

     $.ajax(
       url + urlParams,


@@ -317,11 +317,20 @@
       var template;
       if (typeof templateName === 'string') {
         template = templateEngine.createTemplate(templateName);
-        $(".createModalDialog .modal-body").html(template.render({
-          content: tableContent,
-          advancedContent: advancedContent,
-          info: extraInfo
-        }));
+        if (divID) {
+          $('#' + divID + " .createModalDialog .modal-body").html(template.render({
+            content: tableContent,
+            advancedContent: advancedContent,
+            info: extraInfo
+          }));
+        }
+        else {
+          $("#modalPlaceholder .createModalDialog .modal-body").html(template.render({
+            content: tableContent,
+            advancedContent: advancedContent,
+            info: extraInfo
+          }));
+        }
       }
       else {
         var counter = 0;


@@ -1,6 +1,6 @@
 /*jshint browser: true */
 /*jshint unused: false */
-/*global Backbone, templateEngine, $, window, noty */
+/*global frontendConfig, Backbone, templateEngine, $, window, noty */
 (function () {
   "use strict";

@@ -16,6 +16,15 @@
       this.collection.bind("add", this.renderNotifications.bind(this));
       this.collection.bind("remove", this.renderNotifications.bind(this));
       this.collection.bind("reset", this.renderNotifications.bind(this));
+
+      // TODO save user property if check should be enabled/disabled
+      window.setTimeout(function() {
+        if (frontendConfig.authenticationEnabled === false) {
+          window.arangoHelper.arangoWarning(
+            "Warning", "Authentication is disabled. Do not use this setup in production mode."
+          );
+        }
+      }, 2000);
     },

     notificationItem: templateEngine.createTemplate("notificationItem.ejs"),

@@ -66,6 +75,10 @@
         }
       }];
     }
+    else if (latestModel.get('type') === 'warning') {
+      time = 20000;
+    }

     $.noty.clearQueue();
     $.noty.closeAll();


@@ -8,7 +8,8 @@
 .accordion-heading {
-  padding-top: 15px;
+  padding-bottom: 20px;
+  padding-top: 25px;

   a {
     border: 1px solid $c-accordion-heading;


@@ -5,7 +5,6 @@
   text-align: left;
   width: 20% !important;

-  select,
   textarea {
     margin-top: 10px;
   }


@@ -35,9 +35,18 @@
 }

 .select2-drop-active {
+  border: 2px solid $c-info;
+  border-top: 0;
+  margin-top: -2px;
+  width: 452px !important;
   z-index: 9999999;
 }

+.select2-results,
+.select2-no-results {
+  font-weight: 100;
+}
+
 .modal-tabbar {
   border-bottom: 1px solid $c-darker-grey;
 }
@@ -49,12 +58,27 @@
   font-weight: 300;
   max-height: 410px;

+  input {
+    height: 20px;
+  }
+
+  select {
+    height: 33px;
+  }
+
+  .select2-container-multi.select2-container-active {
+    .select2-choices {
+      border: 2px solid $c-info;
+    }
+  }
+
   .select2-choices {
     background-image: none !important;
-    border: 1px solid $c-dark-grey;
+    border: 2px solid $c-content-border;
     border-radius: 3px;
     -webkit-box-shadow: none;
     box-shadow: none;
+    width: 448px;

     input {
       @extend %inputs;
@@ -269,11 +293,12 @@
   }

   select {
-    width: 450px;
+    margin-top: 0;
+    width: 452px;
   }

   .collectionTh {
-    height: 50px;
+    height: 55px;
   }

   .tab-content {
@@ -301,8 +326,7 @@
     color: $c-white;
     font-size: 9pt;
     font-weight: 100;
-    margin-bottom: 5px;
-    margin-top: -7px;
+    margin-top: -9px;
     padding-left: 5px;
     padding-right: 5px;
     position: absolute;
@@ -411,7 +435,6 @@
 .modal table tr,
 .thBorderBottom {
-  border-bottom: 1px solid $c-modal-table-border-bottom !important;
 }

 .modal-delete-confirmation {


@@ -4104,7 +4104,6 @@ input.gv-radio-button {
   text-align: left;
   width: 20% !important; }
   .collectionTh input,
-  .collectionTh select,
   .collectionTh textarea {
     margin-top: 10px; }
View File
@@ -53,7 +53,8 @@ var getReadableName = function(name) {
 var getStorage = function() {
   var c = db._collection("_apps");
   if (c === null) {
-    c = db._create("_apps", {isSystem: true});
+    c = db._create("_apps", {isSystem: true, replicationFactor: 1,
+      distributeShardsLike: "_graphs"});
     c.ensureIndex({ type: "hash", fields: [ "mount" ], unique: true });
   }
   return c;
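The `getStorage()` change above only adds creation options (`replicationFactor`, `distributeShardsLike`); the surrounding get-or-create pattern is unchanged. A minimal sketch of that pattern, using a stubbed `db` object (the real `db` exists only inside ArangoDB), shows why repeated calls are safe:

```javascript
// Stub standing in for arangosh's `db`; hypothetical, for illustration only.
var collections = {};
var db = {
  _collection: function (name) { return collections[name] || null; },
  _create: function (name, props) {
    collections[name] = { name: name, properties: props, indexes: [] };
    return collections[name];
  }
};

function getStorage() {
  var c = db._collection("_apps");
  if (c === null) {
    // In a cluster, replicationFactor/distributeShardsLike keep this system
    // collection's shards co-located with _graphs (the change in the diff).
    c = db._create("_apps", { isSystem: true, replicationFactor: 1,
                              distributeShardsLike: "_graphs" });
    c.indexes.push({ type: "hash", fields: ["mount"], unique: true });
  }
  return c;
}

var first = getStorage();
var second = getStorage();
console.log(first === second);  // creation happens only on the first call
```

The same co-location options recur in the `_statistics` and upgrade-task hunks later in this commit.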
View File
@@ -2,7 +2,7 @@
 /*global fail, assertEqual, assertTrue, assertNotEqual */
 ////////////////////////////////////////////////////////////////////////////////
-/// @brief test the unique constraint
+/// @brief test the hash index
 ///
 /// @file
 ///
@@ -304,7 +304,6 @@ function HashIndexSuite() {
   };
 }

-
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief executes the test suites
 ////////////////////////////////////////////////////////////////////////////////
View File
@@ -33,7 +33,6 @@ var internal = require("internal");
 var errors = internal.errors;
 var testHelper = require("@arangodb/test-helper").Helper;

-
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief test suite: basics
 ////////////////////////////////////////////////////////////////////////////////
@@ -551,6 +550,142 @@ function getIndexesSuite() {
       assertEqual("fulltext", idx.type);
       assertFalse(idx.unique);
       assertEqual([ "value" ], idx.fields);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get unique rocksdb index
+////////////////////////////////////////////////////////////////////////////////
+
+    testGetRocksDBUnique1 : function () {
+      collection.ensureIndex({ type: "rocksdb", unique: true, fields: ["value"] });
+      var res = collection.getIndexes();
+      assertEqual(2, res.length);
+      var idx = res[1];
+      assertEqual("rocksdb", idx.type);
+      assertTrue(idx.unique);
+      assertFalse(idx.sparse);
+      assertEqual([ "value" ], idx.fields);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get unique rocksdb index
+////////////////////////////////////////////////////////////////////////////////
+
+    testGetRocksDBUnique2 : function () {
+      collection.ensureIndex({ type: "rocksdb", unique: true, fields: ["value1", "value2"] });
+      var res = collection.getIndexes();
+      assertEqual(2, res.length);
+      var idx = res[1];
+      assertEqual("rocksdb", idx.type);
+      assertTrue(idx.unique);
+      assertFalse(idx.sparse);
+      assertEqual([ "value1", "value2" ], idx.fields);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get unique rocksdb index
+////////////////////////////////////////////////////////////////////////////////
+
+    testGetSparseRocksDBUnique1 : function () {
+      collection.ensureIndex({ type: "rocksdb", unique: true, fields: ["value"], sparse: true });
+      var res = collection.getIndexes();
+      assertEqual(2, res.length);
+      var idx = res[1];
+      assertEqual("rocksdb", idx.type);
+      assertTrue(idx.unique);
+      assertTrue(idx.sparse);
+      assertEqual([ "value" ], idx.fields);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get unique rocksdb index
+////////////////////////////////////////////////////////////////////////////////
+
+    testGetSparseRocksDBUnique2 : function () {
+      collection.ensureIndex({ type: "rocksdb", unique: true, fields: ["value1", "value2"], sparse: true });
+      var res = collection.getIndexes();
+      assertEqual(2, res.length);
+      var idx = res[1];
+      assertEqual("rocksdb", idx.type);
+      assertTrue(idx.unique);
+      assertTrue(idx.sparse);
+      assertEqual([ "value1", "value2" ], idx.fields);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get non-unique rocksdb index
+////////////////////////////////////////////////////////////////////////////////
+
+    testGetRocksDBNonUnique1 : function () {
+      collection.ensureIndex({ type: "rocksdb", fields: ["value"] });
+      var res = collection.getIndexes();
+      assertEqual(2, res.length);
+      var idx = res[1];
+      assertEqual("rocksdb", idx.type);
+      assertFalse(idx.unique);
+      assertFalse(idx.sparse);
+      assertEqual([ "value" ], idx.fields);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get non-unique rocksdb index
+////////////////////////////////////////////////////////////////////////////////
+
+    testGetRocksDBNonUnique2 : function () {
+      collection.ensureIndex({ type: "rocksdb", fields: ["value1", "value2"] });
+      var res = collection.getIndexes();
+      assertEqual(2, res.length);
+      var idx = res[1];
+      assertEqual("rocksdb", idx.type);
+      assertFalse(idx.unique);
+      assertFalse(idx.sparse);
+      assertEqual([ "value1", "value2" ], idx.fields);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get non-unique rocksdb index
+////////////////////////////////////////////////////////////////////////////////
+
+    testGetSparseRocksDBNonUnique1 : function () {
+      collection.ensureIndex({ type: "rocksdb", fields: ["value"], sparse: true });
+      var res = collection.getIndexes();
+      assertEqual(2, res.length);
+      var idx = res[1];
+      assertEqual("rocksdb", idx.type);
+      assertFalse(idx.unique);
+      assertTrue(idx.sparse);
+      assertEqual([ "value" ], idx.fields);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test: get non-unique rocksdb index
+////////////////////////////////////////////////////////////////////////////////
+
+    testGetSparseRocksDBNonUnique2 : function () {
+      collection.ensureIndex({ type: "rocksdb", fields: ["value1", "value2"], sparse: true });
+      var res = collection.getIndexes();
+      assertEqual(2, res.length);
+      var idx = res[1];
+      assertEqual("rocksdb", idx.type);
+      assertFalse(idx.unique);
+      assertTrue(idx.sparse);
+      assertEqual([ "value1", "value2" ], idx.fields);
     }
   };
View File
@@ -0,0 +1,331 @@
/*jshint globalstrict:false, strict:false */
/*global fail, assertEqual, assertTrue, assertNotEqual */
////////////////////////////////////////////////////////////////////////////////
/// @brief test the rocksdb index
///
/// @file
///
/// DISCLAIMER
///
/// Copyright 2010-2012 triagens GmbH, Cologne, Germany
///
/// Licensed under the Apache License, Version 2.0 (the "License");
/// you may not use this file except in compliance with the License.
/// You may obtain a copy of the License at
///
/// http://www.apache.org/licenses/LICENSE-2.0
///
/// Unless required by applicable law or agreed to in writing, software
/// distributed under the License is distributed on an "AS IS" BASIS,
/// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
/// See the License for the specific language governing permissions and
/// limitations under the License.
///
/// Copyright holder is triAGENS GmbH, Cologne, Germany
///
/// @author Dr. Frank Celler, Lucas Dohmen
/// @author Copyright 2012, triAGENS GmbH, Cologne, Germany
////////////////////////////////////////////////////////////////////////////////
var jsunity = require("jsunity");
var internal = require("internal");
////////////////////////////////////////////////////////////////////////////////
/// @brief test suite: creation
////////////////////////////////////////////////////////////////////////////////
function RocksDBIndexSuite() {
'use strict';
var cn = "UnitTestsCollectionRocksDB";
var collection = null;
return {
////////////////////////////////////////////////////////////////////////////////
/// @brief set up
////////////////////////////////////////////////////////////////////////////////
setUp : function () {
internal.db._drop(cn);
collection = internal.db._create(cn);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief tear down
////////////////////////////////////////////////////////////////////////////////
tearDown : function () {
// try...catch is necessary as some tests delete the collection itself!
try {
collection.unload();
collection.drop();
}
catch (err) {
}
collection = null;
internal.wait(0.0);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test: index creation
////////////////////////////////////////////////////////////////////////////////
testCreation : function () {
var idx = collection.ensureIndex({ type: "rocksdb", fields: ["a"] });
var id = idx.id;
assertNotEqual(0, id);
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(false, idx.sparse);
assertEqual(["a"], idx.fields);
assertEqual(true, idx.isNewlyCreated);
idx = collection.ensureIndex({ type: "rocksdb", fields: ["a"] });
assertEqual(id, idx.id);
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(false, idx.sparse);
assertEqual(["a"], idx.fields);
assertEqual(false, idx.isNewlyCreated);
idx = collection.ensureIndex({ type: "rocksdb", fields: ["a"], sparse: true });
assertNotEqual(id, idx.id);
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(true, idx.sparse);
assertEqual(["a"], idx.fields);
assertEqual(true, idx.isNewlyCreated);
id = idx.id;
idx = collection.ensureIndex({ type: "rocksdb", fields: ["a"], sparse: true });
assertEqual(id, idx.id);
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(true, idx.sparse);
assertEqual(["a"], idx.fields);
assertEqual(false, idx.isNewlyCreated);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test: permuted attributes
////////////////////////////////////////////////////////////////////////////////
testCreationPermuted : function () {
var idx = collection.ensureIndex({ type: "rocksdb", fields: ["a", "b"] });
var id = idx.id;
assertNotEqual(0, id);
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(["a","b"], idx.fields);
assertEqual(true, idx.isNewlyCreated);
idx = collection.ensureIndex({ type: "rocksdb", fields: ["a", "b"] });
assertEqual(id, idx.id);
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(["a","b"], idx.fields);
assertEqual(false, idx.isNewlyCreated);
idx = collection.ensureIndex({ type: "rocksdb", fields: ["b", "a"] });
assertNotEqual(id, idx.id);
id = idx.id;
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(["b","a"], idx.fields);
assertEqual(true, idx.isNewlyCreated);
idx = collection.ensureIndex({ type: "rocksdb", fields: ["b", "a"] });
assertEqual(id, idx.id);
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(["b","a"], idx.fields);
assertEqual(false, idx.isNewlyCreated);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test: documents
////////////////////////////////////////////////////////////////////////////////
testUniqueDocuments : function () {
var idx = collection.ensureIndex({ type: "rocksdb", fields: ["a", "b"] });
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(["a","b"], idx.fields.sort());
assertEqual(true, idx.isNewlyCreated);
collection.save({ a : 1, b : 1 });
collection.save({ a : 1, b : 1 });
collection.save({ a : 1 });
collection.save({ a : 1 });
collection.save({ a : null, b : 1 });
collection.save({ a : null, b : 1 });
collection.save({ c : 1 });
collection.save({ c : 1 });
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test: documents
////////////////////////////////////////////////////////////////////////////////
testUniqueDocumentsSparseIndex : function () {
var idx = collection.ensureIndex({ type: "rocksdb", fields: ["a", "b"], sparse: true });
assertEqual("rocksdb", idx.type);
assertEqual(false, idx.unique);
assertEqual(["a","b"], idx.fields);
assertEqual(true, idx.isNewlyCreated);
collection.save({ a : 1, b : 1 });
collection.save({ a : 1, b : 1 });
collection.save({ a : 1 });
collection.save({ a : 1 });
collection.save({ a : null, b : 1 });
collection.save({ a : null, b : 1 });
collection.save({ c : 1 });
collection.save({ c : 1 });
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test: combination of indexes
////////////////////////////////////////////////////////////////////////////////
testMultiIndexViolation1 : function () {
collection.ensureIndex({ type: "rocksdb", fields: ["a"], unique: true });
collection.ensureIndex({ type: "rocksdb", fields: ["b"] });
collection.save({ a : "test1", b : 1});
try {
collection.save({ a : "test1", b : 1});
fail();
}
catch (err1) {
}
var doc1 = collection.save({ a : "test2", b : 1});
assertTrue(doc1._key !== "");
try {
collection.save({ a : "test1", b : 1});
fail();
}
catch (err2) {
}
var doc2 = collection.save({ a : "test3", b : 1});
assertTrue(doc2._key !== "");
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test: combination of indexes
////////////////////////////////////////////////////////////////////////////////
testMultiIndexViolationSparse1 : function () {
collection.ensureIndex({ type: "rocksdb", fields: ["a"], unique: true, sparse: true });
collection.ensureIndex({ type: "rocksdb", fields: ["b"], sparse: true });
collection.save({ a : "test1", b : 1});
try {
collection.save({ a : "test1", b : 1});
fail();
}
catch (err1) {
}
var doc1 = collection.save({ a : "test2", b : 1});
assertTrue(doc1._key !== "");
try {
collection.save({ a : "test1", b : 1});
fail();
}
catch (err2) {
}
var doc2 = collection.save({ a : "test3", b : 1});
assertTrue(doc2._key !== "");
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test: combination of indexes
////////////////////////////////////////////////////////////////////////////////
testMultiIndexViolation2 : function () {
collection.ensureIndex({ type: "rocksdb", fields: ["a"], unique: true });
collection.ensureIndex({ type: "rocksdb", fields: ["b"] });
collection.save({ a : "test1", b : 1});
try {
collection.save({ a : "test1", b : 1});
fail();
}
catch (err1) {
}
var doc1 = collection.save({ a : "test2", b : 1});
assertTrue(doc1._key !== "");
try {
collection.save({ a : "test1", b : 1});
fail();
}
catch (err2) {
}
var doc2 = collection.save({ a : "test3", b : 1});
assertTrue(doc2._key !== "");
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test: combination of indexes
////////////////////////////////////////////////////////////////////////////////
testMultiIndexViolationSparse2 : function () {
collection.ensureIndex({ type: "rocksdb", fields: ["a"], unique: true, sparse: true });
collection.ensureIndex({ type: "rocksdb", fields: ["b"], sparse: true });
collection.save({ a : "test1", b : 1});
try {
collection.save({ a : "test1", b : 1});
fail();
}
catch (err1) {
}
var doc1 = collection.save({ a : "test2", b : 1});
assertTrue(doc1._key !== "");
try {
collection.save({ a : "test1", b : 1});
fail();
}
catch (err2) {
}
var doc2 = collection.save({ a : "test3", b : 1});
assertTrue(doc2._key !== "");
}
};
}
////////////////////////////////////////////////////////////////////////////////
/// @brief executes the test suite
////////////////////////////////////////////////////////////////////////////////
jsunity.run(RocksDBIndexSuite);
return jsunity.done();
View File
@@ -75,11 +75,11 @@ function startReadingQuery (endpoint, collName, timeout) {
   }
   var count = 0;
   while (true) {
-    if (++count > 5) {
-      console.error("startReadingQuery: Read transaction did not begin. Giving up after 5 tries");
+    if (++count > 15) {
+      console.error("startReadingQuery: Read transaction did not begin. Giving up after 10 tries");
       return false;
     }
-    require("internal").wait(0.2);
+    require("internal").wait(1);
     r = request({ url: url + "/_api/query/current", method: "GET" });
     if (r.status !== 200) {
       console.error("startReadingQuery: Bad response from /_api/query/current",
@@ -106,7 +106,6 @@ function startReadingQuery (endpoint, collName, timeout) {
       }
     }
     console.info("startReadingQuery: Did not find query.", r);
-    require("internal").wait(0.5, false);
   }
 }
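The two hunks above trade many short sleeps for fewer, longer ones (15 probes at 1 s instead of 5 at 0.2 s, plus a dropped extra wait). The bounded-retry shape can be sketched as plain JavaScript; `pollUntil` is a hypothetical helper for illustration, not ArangoDB code, and the real loop sleeps via `require("internal").wait(1)` between probes:

```javascript
// Probe until `probe()` succeeds or maxTries attempts are exhausted.
function pollUntil(probe, maxTries) {
  var count = 0;
  while (true) {
    if (++count > maxTries) {
      return { ok: false, tries: count - 1 };  // gave up after maxTries probes
    }
    if (probe()) {
      return { ok: true, tries: count };
    }
    // a real implementation would sleep here between probes
  }
}

// Succeeds on the fourth probe:
var attempts = 0;
var result = pollUntil(function () { attempts++; return attempts === 4; }, 15);
console.log(result.ok, result.tries);
```

Note the bound is checked before the probe, so `maxTries` is the exact number of attempts made.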
View File
@@ -54,7 +54,9 @@ function createStatisticsCollection (name) {
   var r = null;

   try {
-    r = db._create(name, { isSystem: true, waitForSync: false });
+    r = db._create(name, { isSystem: true, waitForSync: false,
+                           replicationFactor: 1,
+                           distributeShardsLike: "_graphs" });
   }
   catch (err) {
   }
View File
@@ -164,6 +164,69 @@ function ahuacatlParseTestSuite () {
       assertEqual([ ], getParameters(getParseResults("/* @nada */ return /* @@nada */ /*@@nada*/ 1 /*@nada*/")));
     },

+////////////////////////////////////////////////////////////////////////////////
+/// @brief test string parsing
+////////////////////////////////////////////////////////////////////////////////
+
+    testStrings : function () {
+      function getRoot(query) { return getParseResults(query).ast[0]; }
+
+      var returnNode = getRoot("return 'abcdef'").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("value", returnNode.subNodes[0].type);
+      assertEqual("abcdef", returnNode.subNodes[0].value);
+
+      returnNode = getRoot("return 'abcdef ghi'").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("value", returnNode.subNodes[0].type);
+      assertEqual("abcdef ghi", returnNode.subNodes[0].value);
+
+      returnNode = getRoot("return 'abcd\"\\'ab\\nc'").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("value", returnNode.subNodes[0].type);
+      assertEqual("abcd\"'ab\nc", returnNode.subNodes[0].value);
+
+      returnNode = getRoot("return '\\'abcd\"\\'ab\nnc'").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("value", returnNode.subNodes[0].type);
+      assertEqual("'abcd\"'ab\nnc", returnNode.subNodes[0].value);
+
+      returnNode = getRoot("return \"abcdef\"").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("value", returnNode.subNodes[0].type);
+      assertEqual("abcdef", returnNode.subNodes[0].value);
+
+      returnNode = getRoot("return \"abcdef ghi\"").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("value", returnNode.subNodes[0].type);
+      assertEqual("abcdef ghi", returnNode.subNodes[0].value);
+
+      returnNode = getRoot("return \"abcd\\\"\\'ab\\nc\"").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("value", returnNode.subNodes[0].type);
+      assertEqual("abcd\"'ab\nc", returnNode.subNodes[0].value);
+
+      returnNode = getRoot("return \"\\'abcd\\\"\\'ab\nnc\"").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("value", returnNode.subNodes[0].type);
+      assertEqual("'abcd\"'ab\nnc", returnNode.subNodes[0].value);
+    },
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief test string parsing
+////////////////////////////////////////////////////////////////////////////////
+
+    testStringsAfterNot : function () {
+      function getRoot(query) { return getParseResults(query).ast[0]; }
+
+      var returnNode = getRoot("return NOT ('abc' == 'def')").subNodes[0];
+      assertEqual("return", returnNode.type);
+      assertEqual("unary not", returnNode.subNodes[0].type);
+      assertEqual("compare ==", returnNode.subNodes[0].subNodes[0].type);
+      assertEqual("abc", returnNode.subNodes[0].subNodes[0].subNodes[0].value);
+      assertEqual("def", returnNode.subNodes[0].subNodes[0].subNodes[1].value);
+    },
+
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief test too many collections
 ////////////////////////////////////////////////////////////////////////////////
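The expected values in these parser tests are doubly escaped: once by the JavaScript source and once by AQL's own string rules. A hypothetical unescape helper (not the actual AQL lexer) reproduces what the tests assert for quoted literals:

```javascript
// Hypothetical helper, for illustration only: unescape the body of an AQL
// string literal the way the tests above expect the parser to.
function unescapeAqlString(body) {
  var out = "";
  for (var i = 0; i < body.length; i++) {
    var c = body[i];
    if (c === "\\" && i + 1 < body.length) {
      var next = body[++i];
      if (next === "n") { out += "\n"; }        // \n becomes a newline
      else if (next === "t") { out += "\t"; }   // \t becomes a tab
      else { out += next; }                     // \', \" and \\ drop the backslash
    } else {
      out += c;
    }
  }
  return out;
}

// The literal body inside getRoot("return 'abcd\"\\'ab\\nc'") above:
var value = unescapeAqlString("abcd\"\\'ab\\nc");
console.log(JSON.stringify(value));
```

At the JavaScript level, `\\'` is a literal backslash-quote handed to the parser, while `\n` inside the query string is already a real newline, which is why `testStrings` expects both escaped and raw newlines.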
View File
@@ -501,28 +501,28 @@
 ////////////////////////////////////////////////////////////////////////////////
-/// @brief setupSessions
+/// @brief setupGraphs
 ///
-/// set up the collection _sessions
+/// set up the collection _graphs
 ////////////////////////////////////////////////////////////////////////////////

 addTask({
-  name: "setupSessions",
-  description: "setup _sessions collection",
+  name: "setupGraphs",
+  description: "setup _graphs collection",

   mode: [ MODE_PRODUCTION, MODE_DEVELOPMENT ],
   cluster: [ CLUSTER_NONE, CLUSTER_COORDINATOR_GLOBAL ],
   database: [ DATABASE_INIT, DATABASE_UPGRADE ],

   task: function () {
-    return createSystemCollection("_sessions", {
-      waitForSync: false,
-      journalSize: 4 * 1024 * 1024
+    return createSystemCollection("_graphs", {
+      waitForSync : false,
+      journalSize: 1024 * 1024,
+      replicationFactor: 1
     });
   }
 });

 ////////////////////////////////////////////////////////////////////////////////
 /// @brief setupUsers
 ///
@@ -541,7 +541,9 @@
     return createSystemCollection("_users", {
       waitForSync : false,
      shardKeys: [ "user" ],
-      journalSize: 4 * 1024 * 1024
+      journalSize: 4 * 1024 * 1024,
+      replicationFactor: 1,
+      distributeShardsLike: "_graphs"
     });
   }
 });
@@ -685,23 +687,25 @@
 });

 ////////////////////////////////////////////////////////////////////////////////
-/// @brief setupGraphs
+/// @brief setupSessions
 ///
-/// set up the collection _graphs
+/// set up the collection _sessions
 ////////////////////////////////////////////////////////////////////////////////

 addTask({
-  name: "setupGraphs",
-  description: "setup _graphs collection",
+  name: "setupSessions",
+  description: "setup _sessions collection",

   mode: [ MODE_PRODUCTION, MODE_DEVELOPMENT ],
   cluster: [ CLUSTER_NONE, CLUSTER_COORDINATOR_GLOBAL ],
   database: [ DATABASE_INIT, DATABASE_UPGRADE ],

   task: function () {
-    return createSystemCollection("_graphs", {
-      waitForSync : false,
-      journalSize: 1024 * 1024
+    return createSystemCollection("_sessions", {
+      waitForSync: false,
+      journalSize: 4 * 1024 * 1024,
+      replicationFactor: 1,
+      distributeShardsLike: "_graphs"
     });
   }
 });
@@ -792,7 +796,9 @@
   task: function () {
     return createSystemCollection("_modules", {
-      journalSize: 1024 * 1024
+      journalSize: 1024 * 1024,
+      replicationFactor: 1,
+      distributeShardsLike: "_graphs"
     });
   }
 });
@@ -814,7 +820,9 @@
   task: function () {
     // needs to be big enough for assets
     return createSystemCollection("_routing", {
-      journalSize: 8 * 1024 * 1024
+      journalSize: 8 * 1024 * 1024,
+      replicationFactor: 1,
+      distributeShardsLike: "_graphs"
     });
   }
 });
@@ -985,7 +993,9 @@
   task: function () {
     return createSystemCollection("_aqlfunctions", {
-      journalSize: 2 * 1024 * 1024
+      journalSize: 2 * 1024 * 1024,
+      replicationFactor: 1,
+      distributeShardsLike: "_graphs"
     });
   }
 });
@@ -1101,7 +1111,9 @@
     var name = "_frontend";
     var result = createSystemCollection(name, {
       waitForSync: false,
-      journalSize: 1024 * 1024
+      journalSize: 1024 * 1024,
+      replicationFactor: 1,
+      distributeShardsLike: "_graphs"
     });

     return result;
@@ -1165,7 +1177,9 @@
   task: function () {
     return createSystemCollection("_queues", {
-      journalSize: 1024 * 1024
+      journalSize: 1024 * 1024,
+      replicationFactor: 1,
+      distributeShardsLike: "_graphs"
     });
   }
 });
@@ -1186,7 +1200,9 @@
   task: function () {
     return createSystemCollection("_jobs", {
-      journalSize: 4 * 1024 * 1024
+      journalSize: 4 * 1024 * 1024,
+      replicationFactor: 1,
+      distributeShardsLike: "_graphs"
     });
   }
 });