Merge branch 'devel' of github.com:triAGENS/ArangoDB into devel
commit 288f174011
CHANGELOG
@@ -24,6 +24,18 @@ v2.4.0 (XXXX-XX-XX)
+* prevent Foxx queues from permanently writing to the journal even when
+  server is idle
+
+* fixed AQL COLLECT statement with INTO clause, which copied more variables
+  than v2.2 and thus led to too much memory consumption.
+  This deals with #1107.
+
+* fixed AQL COLLECT statement; this concerned every COLLECT statement:
+  previously, only the first group had access to the values of the variables
+  set before the COLLECT statement. This deals with #1127.
+
+* fixed some AQL internals, where sometimes too many items were
+  fetched from upstream in the presence of a LIMIT clause. This should
+  generally improve performance.
+
 v2.3.0 (2014-11-18)
 -------------------

@@ -228,7 +228,18 @@ contains the group value.

 The second form does the same as the first form, but additionally introduces a
 variable (specified by *groups*) that contains all elements that fell into the
-group. Specifying the *INTO* clause is optional-
+group. This works as follows: The *groups* variable is a list containing
+as many elements as there are in the group. Each member of that list is
+a JSON object in which the value of every variable that is defined in the
+AQL query is bound to the corresponding attribute. Note that this considers
+all variables that are defined before the *COLLECT* statement, but not those on
+the top level (outside of any *FOR*), unless the *COLLECT* statement is itself
+on the top level, in which case all variables are taken. Furthermore, note
+that the optimizer may move *LET* statements out of *FOR* statements to
+improve performance. In a future version of ArangoDB we plan to make it
+configurable exactly which variables' values are copied into the *groups*
+variable, since excessive copying can have a negative impact on performance.
+Specifying the *INTO* clause is optional.

 ```
 FOR u IN users
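
As an illustration of the *INTO* clause described in this hunk, here is a minimal arangosh sketch (the collection "users" and its attributes are assumptions for illustration):

    // assumes a collection "users" with documents like { name: "...", city: "..." }
    var byCity = db._query(
      "FOR u IN users " +
      "  COLLECT city = u.city INTO g " +
      "  RETURN { city: city, members: g }"
    ).toArray();
    // each member of g is an object binding the variables defined before
    // the COLLECT, e.g. { u: { name: "...", city: "..." } }
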
@@ -68,7 +68,7 @@ use

     make jslint

 to find out whether all of your files comply with jslint. This is required to make continuous integration work smoothly.

-if you want to add new / new patterns, edit js/Makefile.files
+if you want to add new files / patterns to this make target, edit js/Makefile.files

 Use standalone for your js file
 -------------------------------

@@ -88,7 +88,7 @@ Dependencies

 Filename conventions
 ====================
-Special patterns in filenames are used to select tests to be executed or skipped depending no parameters:
+Special patterns in filenames are used to select tests to be executed or skipped depending on parameters:

 -cluster
 --------
@@ -121,9 +121,10 @@ There are several major places where unittests live:

 - UnitTests/HttpInterface
 - UnitTests/Basics
 - UnitTests/Geo
-- js/server/tests
-- js/common/tests
+- js/server/tests (runnable on the server)
+- js/common/tests (runnable on the server & via arangosh)
 - js/common/test-data
+- js/client/tests (runnable via arangosh)
 - /js/apps/system/aardvark/test

@@ -134,10 +135,14 @@ TODO: which tests are these?

 jsUnity on arangod
 ------------------
 you can engage single tests when running arangod with console like this:

     require("jsunity").runTest("js/server/tests/aql-queries-simple.js");

+jsUnity via arangosh
+--------------------
+
+arangosh is similar; however, you can only run tests which are intended to be run via arangosh:
+
+    require("jsunity").runTest("js/client/tests/shell-client.js");
@@ -200,6 +205,10 @@ Javascript framework

 Invoked like that:

     scripts/run scripts/unittest.js all

+Calling it without parameters like this:
+
+    scripts/run scripts/unittest.js
+
+will give you an extensive usage help which we won't duplicate here.

 Choosing facility
 _________________

@@ -216,8 +225,11 @@ Passing Options
 _______________

 Options are passed in as one JSON; please note that formatting blanks may cause problems.
+Different facilities may take different options. The above mentioned usage output contains
+the full detail.

-so a commandline for running a single test using valgrind could look like this:
+A commandline for running a single test (i.e. with the facility 'single_server') using
+valgrind could look like this:

     scripts/run scripts/unittest.js single_server \
         '{"test":"js/server/tests/aql-escaping.js",'\

@@ -230,7 +242,7 @@ so a commandline for running a single test using valgrind could look like this:

 - we specify the test to execute
 - we specify some arangod arguments which increase the server performance
-- we specify to run using valgrind
+- we specify to run using valgrind (this is supported by all facilities)
 - we specify some valgrind commandline arguments

 Running a single unittestsuite
@@ -724,6 +724,7 @@ Json AqlValue::extractListMember (triagens::arango::AqlTransaction* trx,
           auto vecCollection = (*it)->getDocumentCollection(0);
           return (*it)->getValue(p - totalSize, 0).toJson(trx, vecCollection);
         }
         totalSize += (*it)->size();
       }
+      break; // fall-through to returning null
     }

@@ -762,6 +763,8 @@ AqlValue AqlValue::CreateFromBlocks (triagens::arango::AqlTransaction* trx,
       for (RegisterId j = 0; j < n; ++j) {
         if (variableNames[j][0] != '\0') {
           // temporaries don't have a name and won't be included
+          // Variables from depth 0 are excluded, too, unless the
+          // COLLECT statement is on level 0 as well.
           values.set(variableNames[j].c_str(), current->getValue(i, j).toJson(trx, current->getDocumentCollection(j)));
         }
       }
@@ -78,12 +78,14 @@ AggregatorGroup::~AggregatorGroup () {
 void AggregatorGroup::initialize (size_t capacity) {
   TRI_ASSERT(capacity > 0);

+  groupValues.clear();
+  collections.clear();
   groupValues.reserve(capacity);
   collections.reserve(capacity);

   for (size_t i = 0; i < capacity; ++i) {
-    groupValues[i] = AqlValue();
-    collections[i] = nullptr;
+    groupValues.emplace_back();
+    collections.push_back(nullptr);
   }
 }

@@ -92,7 +94,8 @@ void AggregatorGroup::reset () {
     delete (*it);
   }
   groupBlocks.clear();
-  groupValues[0].erase(); // FIXMEFIXME why only 0???
+  groupValues[0].erase(); // only need to erase [0], because we have
+                          // only copies of references anyway
 }

 void AggregatorGroup::addValues (AqlItemBlock const* src,
@@ -716,7 +719,8 @@ AqlItemBlock* EnumerateCollectionBlock::getSome (size_t, // atLeast,
   }

   if (_buffer.empty()) {
-    if (! ExecutionBlock::getBlock(DefaultBatchSize, DefaultBatchSize)) {
+    size_t toFetch = (std::min)(DefaultBatchSize, atMost);
+    if (! ExecutionBlock::getBlock(toFetch, toFetch)) {
      _done = true;
      return nullptr;
    }

@@ -797,7 +801,8 @@ size_t EnumerateCollectionBlock::skipSome (size_t atLeast, size_t atMost) {

   while (skipped < atLeast) {
     if (_buffer.empty()) {
-      if (! getBlock(DefaultBatchSize, DefaultBatchSize)) {
+      size_t toFetch = (std::min)(DefaultBatchSize, atMost);
+      if (! getBlock(toFetch, toFetch)) {
        _done = true;
        return skipped;
      }

@@ -1168,7 +1173,8 @@ AqlItemBlock* IndexRangeBlock::getSome (size_t atLeast,
     // try again!

     if (_buffer.empty()) {
-      if (! ExecutionBlock::getBlock(DefaultBatchSize, DefaultBatchSize)
+      size_t toFetch = (std::min)(DefaultBatchSize, atMost);
+      if (! ExecutionBlock::getBlock(toFetch, toFetch)
           || (! initIndex())) {
         _done = true;
         return nullptr;

@@ -1274,7 +1280,8 @@ size_t IndexRangeBlock::skipSome (size_t atLeast,

   while (skipped < atLeast) {
     if (_buffer.empty()) {
-      if (! ExecutionBlock::getBlock(DefaultBatchSize, DefaultBatchSize)
+      size_t toFetch = (std::min)(DefaultBatchSize, atMost);
+      if (! ExecutionBlock::getBlock(toFetch, toFetch)
           || (! initIndex())) {
         _done = true;
         return skipped;

@@ -1726,7 +1733,8 @@ AqlItemBlock* EnumerateListBlock::getSome (size_t, size_t atMost) {
     // try again!

     if (_buffer.empty()) {
-      if (! ExecutionBlock::getBlock(DefaultBatchSize, DefaultBatchSize)) {
+      size_t toFetch = (std::min)(DefaultBatchSize, atMost);
+      if (! ExecutionBlock::getBlock(toFetch, toFetch)) {
        _done = true;
        return nullptr;
      }

@@ -1847,7 +1855,8 @@ size_t EnumerateListBlock::skipSome (size_t atLeast, size_t atMost) {

   while (skipped < atLeast) {
     if (_buffer.empty()) {
-      if (! ExecutionBlock::getBlock(DefaultBatchSize, DefaultBatchSize)) {
+      size_t toFetch = (std::min)(DefaultBatchSize, atMost);
+      if (! ExecutionBlock::getBlock(toFetch, toFetch)) {
        _done = true;
        return skipped;
      }
@@ -2073,7 +2082,7 @@ AqlItemBlock* CalculationBlock::getSome (size_t atLeast,
                                          size_t atMost) {

   unique_ptr<AqlItemBlock> res(ExecutionBlock::getSomeWithoutRegisterClearout(
-      DefaultBatchSize, DefaultBatchSize));
+      atLeast, atMost));

   if (res.get() == nullptr) {
     return nullptr;

@@ -2341,6 +2350,10 @@ bool FilterBlock::hasMore () {
   }

   if (_buffer.empty()) {
+    // QUESTION: Is this sensible? Asking whether there is more might
+    // trigger an expensive fetching operation, even if later on only
+    // a single document is needed due to a LIMIT...
+    // However, how should we know this here?
     if (! getBlock(DefaultBatchSize, DefaultBatchSize)) {
       _done = true;
       return false;
@@ -2395,13 +2408,17 @@ AggregateBlock::AggregateBlock (ExecutionEngine* engine,
   }

   // iterate over all our variables
-  for (auto it = en->getRegisterPlan()->varInfo.begin();
-       it != en->getRegisterPlan()->varInfo.end(); ++it) {
+  for (auto& vi : en->getRegisterPlan()->varInfo) {
+    if (vi.second.depth > 0 || en->getDepth() == 1) {
+      // Do not keep variables from depth 0, unless we are depth 1 ourselves
+      // (which means no FOR in which we are contained)

       // find variable in the global variable map
-      auto itVar = en->_variableMap.find((*it).first);
+      auto itVar = en->_variableMap.find(vi.first);

       if (itVar != en->_variableMap.end()) {
-        _variableNames[(*it).second.registerId] = (*itVar).second;
+        _variableNames[vi.second.registerId] = (*itVar).second;
       }
     }
   }
 }
@@ -2585,6 +2602,15 @@ void AggregateBlock::emitGroup (AqlItemBlock const* cur,
                                 AqlItemBlock* res,
                                 size_t row) {

+  if (row > 0) {
+    // re-use already copied aqlvalues
+    for (RegisterId i = 0; i < cur->getNrRegs(); i++) {
+      res->setValue(row, i, res->getValue(0, i));
+      // Note: if this throws, then all values will be deleted
+      // properly since the first one is.
+    }
+  }
+
   size_t i = 0;
   for (auto it = _aggregateRegisters.begin(); it != _aggregateRegisters.end(); ++it) {
     // FIXME: can throw:
@@ -3369,7 +3395,7 @@ void UpdateBlock::work (std::vector<AqlItemBlock*>& blocks) {
           TRI_json_t* old = TRI_JsonShapedJson(_collection->documentCollection()->getShaper(), &shapedJson);

           if (old != nullptr) {
-            TRI_json_t* patchedJson = TRI_MergeJson(TRI_UNKNOWN_MEM_ZONE, old, json.json(), ep->_options.nullMeansRemove);
+            TRI_json_t* patchedJson = TRI_MergeJson(TRI_UNKNOWN_MEM_ZONE, old, json.json(), ep->_options.nullMeansRemove, ep->_options.mergeArrays);
             TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, old);

             if (patchedJson != nullptr) {
@@ -2043,22 +2043,35 @@ ExecutionNode* AggregateNode::clone (ExecutionPlan* plan,
 ////////////////////////////////////////////////////////////////////////////////

 struct UserVarFinder : public WalkerWorker<ExecutionNode> {
-  UserVarFinder () {};
+  UserVarFinder (int mindepth) : mindepth(mindepth), depth(-1) {};
   ~UserVarFinder () {};
   std::vector<Variable const*> userVars;
+  int mindepth;   // minimal depth to consider
+  int depth;

   bool enterSubquery (ExecutionNode*, ExecutionNode*) override final {
     return false;
   }

-  bool before (ExecutionNode* en) override final {
+  void after (ExecutionNode* en) override final {
+    if (en->getType() == ExecutionNode::SINGLETON) {
+      depth = 0;
+    }
+    else if (en->getType() == ExecutionNode::ENUMERATE_COLLECTION ||
+             en->getType() == ExecutionNode::INDEX_RANGE ||
+             en->getType() == ExecutionNode::ENUMERATE_LIST ||
+             en->getType() == ExecutionNode::AGGREGATE) {
+      depth += 1;
+    }
+    // Now depth is set correctly for this node.
+    if (depth >= mindepth) {
       auto vars = en->getVariablesSetHere();
       for (auto v : vars) {
         if (v->isUserDefined()) {
           userVars.push_back(v);
         }
       }
-    return false;
+    }
   }
 };
@@ -2069,10 +2082,18 @@ std::vector<Variable const*> AggregateNode::getVariablesUsedHere () const {
   }
   if (_outVariable != nullptr) {
     // Here we have to find all user defined variables in this query
-    // amonst our dependencies:
-    UserVarFinder finder;
+    // amongst our dependencies:
+    UserVarFinder finder(1);
     auto myselfasnonconst = const_cast<AggregateNode*>(this);
     myselfasnonconst->walk(&finder);
+    if (finder.depth == 1) {
+      // we are toplevel, let's run again with mindepth = 0
+      finder.userVars.clear();
+      finder.mindepth = 0;
+      finder.depth = -1;
+      finder.reset();
+      myselfasnonconst->walk(&finder);
+    }
     for (auto x : finder.userVars) {
       v.insert(x);
     }
@@ -251,6 +251,9 @@ ModificationOptions ExecutionPlan::createOptions (AstNode const* node) {
           // nullMeansRemove is the opposite of keepNull
           options.nullMeansRemove = value->isFalse();
         }
+        else if (strcmp(name, "mergeArrays") == 0) {
+          options.mergeArrays = value->isTrue();
+        }
       }
     }
   }
@@ -36,6 +36,7 @@ ModificationOptions::ModificationOptions (Json const& json) {
   ignoreErrors = JsonHelper::getBooleanValue(array.json(), "ignoreErrors", false);
   waitForSync = JsonHelper::getBooleanValue(array.json(), "waitForSync", false);
   nullMeansRemove = JsonHelper::getBooleanValue(array.json(), "nullMeansRemove", false);
+  mergeArrays = JsonHelper::getBooleanValue(array.json(), "mergeArrays", false);
 }

 void ModificationOptions::toJson (triagens::basics::Json& json,

@@ -44,7 +45,8 @@ void ModificationOptions::toJson (triagens::basics::Json& json,
   flags = Json(Json::Array, 3)
     ("ignoreErrors", Json(ignoreErrors))
     ("waitForSync", Json(waitForSync))
-    ("nullMeansRemove", Json(nullMeansRemove));
+    ("nullMeansRemove", Json(nullMeansRemove))
+    ("mergeArrays", Json(mergeArrays));

   json ("modificationFlags", flags);
 }

@@ -53,7 +53,8 @@ namespace triagens {
       ModificationOptions ()
         : ignoreErrors(false),
           waitForSync(false),
-          nullMeansRemove(false) {
+          nullMeansRemove(false),
+          mergeArrays(false) {
       }

       void toJson (triagens::basics::Json& json, TRI_memory_zone_t* zone) const;

@@ -65,6 +66,7 @@ namespace triagens {
       bool ignoreErrors;
       bool waitForSync;
       bool nullMeansRemove;
+      bool mergeArrays;

   };
[File diff suppressed because it is too large]
@@ -1051,6 +1051,7 @@ int modifyDocumentOnCoordinator (
                  bool waitForSync,
                  bool isPatch,
                  bool keepNull,    // only counts for isPatch == true
+                 bool mergeArrays, // only counts for isPatch == true
                  TRI_json_t* json,
                  map<string, string> const& headers,
                  triagens::rest::HttpResponse::HttpResponseCode& responseCode,

@@ -1115,6 +1116,12 @@ int modifyDocumentOnCoordinator (
     if (! keepNull) {
       revstr += "&keepNull=false";
     }
+    if (mergeArrays) {
+      revstr += "&mergeArrays=true";
+    }
+    else {
+      revstr += "&mergeArrays=false";
+    }
   }
   else {
     reqType = triagens::rest::HttpRequest::HTTP_REQUEST_PUT;
@@ -177,6 +177,7 @@ namespace triagens {
                  bool waitForSync,
                  bool isPatch,
                  bool keepNull,    // only counts for isPatch == true
+                 bool mergeArrays, // only counts for isPatch == true
                  TRI_json_t* json,
                  std::map<std::string, std::string> const& headers,
                  triagens::rest::HttpResponse::HttpResponseCode& responseCode,
@@ -1202,6 +1202,12 @@ bool RestDocumentHandler::replaceDocument () {
 /// from the existing document that are contained in the patch document with an
 /// attribute value of *null*.
 ///
+/// @RESTQUERYPARAM{mergeArrays,boolean,optional}
+/// Controls whether arrays (not lists) will be merged if present in both the
+/// existing and the patch document. If set to *false*, the value in the
+/// patch document will overwrite the existing document's value. If set to
+/// *true*, arrays will be merged. The default is *true*.
+///
 /// @RESTQUERYPARAM{waitForSync,boolean,optional}
 /// Wait until document has been synced to disk.
 ///
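
A short sketch of the behaviour documented above, as seen from arangosh (collection name and document contents are assumptions; note that in 2.x terminology an "array" is a JSON object, while a "list" is what JavaScript calls an array):

    db.example.save({ _key: "doc", login: { user: "jane" } });

    // default (mergeArrays=true): object values are merged recursively
    db.example.update("example/doc", { login: { passwd: "secret" } });
    // login is now { user: "jane", passwd: "secret" }

    // mergeArrays=false: the patch value overwrites the existing value
    db.example.update("example/doc", { login: { passwd: "secret" } }, { mergeArrays: false });
    // login is now { passwd: "secret" }
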
@@ -1410,6 +1416,7 @@ bool RestDocumentHandler::modifyDocument (bool isPatch) {
   if (isPatch) {
     // patching an existing document
     bool nullMeansRemove;
+    bool mergeArrays;
     bool found;
     char const* valueStr = _request->value("keepNull", found);
     if (! found || StringUtils::boolean(valueStr)) {

@@ -1421,6 +1428,15 @@ bool RestDocumentHandler::modifyDocument (bool isPatch) {
       nullMeansRemove = true;
     }

+    valueStr = _request->value("mergeArrays", found);
+    if (! found || StringUtils::boolean(valueStr)) {
+      // the default is true
+      mergeArrays = true;
+    }
+    else {
+      mergeArrays = false;
+    }
+
     // read the existing document
     TRI_doc_mptr_copy_t oldDocument;

@@ -1471,7 +1487,7 @@ bool RestDocumentHandler::modifyDocument (bool isPatch) {
       }
     }

-    TRI_json_t* patchedJson = TRI_MergeJson(TRI_UNKNOWN_MEM_ZONE, old, json, nullMeansRemove);
+    TRI_json_t* patchedJson = TRI_MergeJson(TRI_UNKNOWN_MEM_ZONE, old, json, nullMeansRemove, mergeArrays);
     TRI_FreeJson(shaper->_memoryZone, old);
     TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, json);
@@ -1574,13 +1590,17 @@ bool RestDocumentHandler::modifyDocumentCoordinator (
   string resultBody;

   bool keepNull = true;
-  if (! strcmp(_request->value("keepNull"),"false")) {
+  if (! strcmp(_request->value("keepNull"), "false")) {
     keepNull = false;
   }
+  bool mergeArrays = true;
+  if (TRI_EqualString(_request->value("mergeArrays"), "false")) {
+    mergeArrays = false;
+  }

   int error = triagens::arango::modifyDocumentOnCoordinator(
     dbname, collname, key, rev, policy, waitForSync, isPatch,
-    keepNull, json, headers, responseCode, resultHeaders, resultBody);
+    keepNull, mergeArrays, json, headers, responseCode, resultHeaders, resultBody);

   if (error != TRI_ERROR_NO_ERROR) {
     generateTransactionError(collname, error);
@@ -91,6 +91,7 @@ struct InsertOptions {
 struct UpdateOptions {
   bool overwrite = false;
   bool keepNull = true;
+  bool mergeArrays = true;
   bool waitForSync = false;
   bool silent = false;
 };
@@ -701,6 +702,7 @@ static v8::Handle<v8::Value> ModifyVocbaseColCoordinator (
                   bool waitForSync,
                   bool isPatch,
                   bool keepNull,    // only counts if isPatch==true
+                  bool mergeArrays, // only counts if isPatch==true
                   bool silent,
                   v8::Arguments const& argv) {
   v8::HandleScope scope;

@@ -734,7 +736,7 @@ static v8::Handle<v8::Value> ModifyVocbaseColCoordinator (

   error = triagens::arango::modifyDocumentOnCoordinator(
     dbname, collname, key, rev, policy, waitForSync, isPatch,
-    keepNull, json, headers, responseCode, resultHeaders, resultBody);
+    keepNull, mergeArrays, json, headers, responseCode, resultHeaders, resultBody);
   // Note that the json has been freed inside!

   if (error != TRI_ERROR_NO_ERROR) {

@@ -875,6 +877,7 @@ static v8::Handle<v8::Value> ReplaceVocbaseCol (bool useCollection,
                   options.waitForSync,
                   false, // isPatch
                   true,  // keepNull, does not matter
+                  false, // mergeArrays, does not matter
                   options.silent,
                   argv));
 }
@@ -1081,7 +1084,7 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (bool useCollection,
   TRI_v8_global_t* v8g = static_cast<TRI_v8_global_t*>(v8::Isolate::GetCurrent()->GetData());

   if (argLength < 2 || argLength > 5) {
-    TRI_V8_EXCEPTION_USAGE(scope, "update(<document>, <data>, {overwrite: booleanValue, keepNull: booleanValue, waitForSync: booleanValue})");
+    TRI_V8_EXCEPTION_USAGE(scope, "update(<document>, <data>, {overwrite: booleanValue, keepNull: booleanValue, mergeArrays: booleanValue, waitForSync: booleanValue})");
   }

   if (argLength > 2) {

@@ -1094,6 +1097,9 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (bool useCollection,
     if (optionsObject->Has(v8g->KeepNullKey)) {
       options.keepNull = TRI_ObjectToBoolean(optionsObject->Get(v8g->KeepNullKey));
     }
+    if (optionsObject->Has(v8g->MergeArraysKey)) {
+      options.mergeArrays = TRI_ObjectToBoolean(optionsObject->Get(v8g->MergeArraysKey));
+    }
     if (optionsObject->Has(v8g->WaitForSyncKey)) {
       options.waitForSync = TRI_ObjectToBoolean(optionsObject->Get(v8g->WaitForSyncKey));
     }

@@ -1160,6 +1166,7 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (bool useCollection,
                   options.waitForSync,
                   true, // isPatch
                   options.keepNull,
+                  options.mergeArrays,
                   options.silent,
                   argv));
 }

@@ -1226,7 +1233,7 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (bool useCollection,
     }
   }

-  TRI_json_t* patchedJson = TRI_MergeJson(TRI_UNKNOWN_MEM_ZONE, old, json, ! options.keepNull);
+  TRI_json_t* patchedJson = TRI_MergeJson(TRI_UNKNOWN_MEM_ZONE, old, json, ! options.keepNull, options.mergeArrays);
   TRI_FreeJson(zone, old);
   TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, json);
@@ -786,7 +786,7 @@ class KeySpace {
       TRI_V8_EXCEPTION(scope, TRI_ERROR_OUT_OF_MEMORY);
     }

-    TRI_json_t* merged = TRI_MergeJson(TRI_UNKNOWN_MEM_ZONE, found->json, other, nullMeansRemove);
+    TRI_json_t* merged = TRI_MergeJson(TRI_UNKNOWN_MEM_ZONE, found->json, other, nullMeansRemove, false);
     TRI_FreeJson(TRI_UNKNOWN_MEM_ZONE, other);

     if (merged == nullptr) {
@@ -1174,6 +1174,7 @@ ArangoCollection.prototype.replace = function (id, data, overwrite, waitForSync)
 /// @param id the id of the document
 /// @param overwrite (optional) a boolean value or a json object
 /// @param keepNull (optional) determines if null values should be saved or not
+/// @param mergeArrays (optional) whether or not array values should be merged
 /// @param waitForSync (optional) a boolean value
 /// @example update("example/996280832675", { a : 1, c : 2} )
 /// @example update("example/996280832675", { a : 1, c : 2, x: null}, true, true, true)

@@ -1212,6 +1213,11 @@ ArangoCollection.prototype.update = function (id, data, overwrite, keepNull, waitForSync)
     }
     params = "?keepNull=" + options.keepNull;

+    if (! options.hasOwnProperty("mergeArrays")) {
+      options.mergeArrays = true;
+    }
+    params += "&mergeArrays=" + options.mergeArrays;
+
     if (options.hasOwnProperty("overwrite") && options.overwrite) {
       params += "&policy=last";
     }
@@ -758,6 +758,10 @@ ArangoDatabase.prototype._update = function (id, data, overwrite, keepNull, waitForSync)
     options.keepNull = true;
   }
   params = "?keepNull=" + options.keepNull;
+  if (! options.hasOwnProperty("mergeArrays")) {
+    options.mergeArrays = true;
+  }
+  params += "&mergeArrays=" + options.mergeArrays;

   if (options.hasOwnProperty("overwrite") && options.overwrite) {
     params += "&policy=last";
@@ -1173,6 +1173,7 @@ ArangoCollection.prototype.replace = function (id, data, overwrite, waitForSync)
 /// @param id the id of the document
 /// @param overwrite (optional) a boolean value or a json object
 /// @param keepNull (optional) determines if null values should be saved or not
+/// @param mergeArrays (optional) whether or not array values should be merged
 /// @param waitForSync (optional) a boolean value
 /// @example update("example/996280832675", { a : 1, c : 2} )
 /// @example update("example/996280832675", { a : 1, c : 2, x: null}, true, true, true)

@@ -1211,6 +1212,11 @@ ArangoCollection.prototype.update = function (id, data, overwrite, keepNull, waitForSync)
     }
     params = "?keepNull=" + options.keepNull;

+    if (! options.hasOwnProperty("mergeArrays")) {
+      options.mergeArrays = true;
+    }
+    params += "&mergeArrays=" + options.mergeArrays;
+
     if (options.hasOwnProperty("overwrite") && options.overwrite) {
       params += "&policy=last";
     }
@@ -757,6 +757,10 @@ ArangoDatabase.prototype._update = function (id, data, overwrite, keepNull, waitForSync)
     options.keepNull = true;
   }
   params = "?keepNull=" + options.keepNull;
+  if (! options.hasOwnProperty("mergeArrays")) {
+    options.mergeArrays = true;
+  }
+  params += "&mergeArrays=" + options.mergeArrays;

   if (options.hasOwnProperty("overwrite") && options.overwrite) {
     params += "&policy=last";
@@ -99,6 +99,11 @@ generator.addState('ideas', {

 Some states take additional information: Entities need to know which repository they are contained in (via `containedIn`) and repositories need to know which entities they contain (via `contains`).

+States can also have a superstate. This is done by setting `superstate` to the name of the state that should act as the superstate (as a string). The superstate is provided to the service via a third parameter in its action; it is an object with a key called `superstate` whose value depends on the superstate's type, as sketched after this list:
+
+* If the superstate is an entity, it has a key `entity` where the value is the entity and a key `repository` which is the Foxx.Repository in which the entity is saved.
+* If the superstate is a repository, it has a key `repository` which contains the Foxx.Repository.
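
A hypothetical sketch of how a superstate might be declared and consumed, based on the description above (the state name, the `action` key and the exact option layout are assumptions, not part of this commit):

    generator.addState('idea', {
      parameterized: true,
      superstate: 'ideas',

      action: function (req, res, opts) {
        // for a repository superstate, opts.superstate.repository
        // contains the Foxx.Repository of the superstate
        var ideas = opts.superstate.repository;
        res.json(ideas.byId(req.params('id')).forClient());
      }
    });
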
 ### Entity

 An entity can be `parameterized` (by setting its attribute `parameterized` to `true`) which means that there is not only one state of that type, but there can be an arbitrary amount – each of them is identified by a parameter. This is usually the case with entities that are stored in a repository.
@@ -32,9 +32,9 @@
     }

     route = controller[verb](url, action)
+      .onlyIf(relation.condition)
       .errorResponse(VertexNotFound, 404, 'The vertex could not be found')
       .errorResponse(ConditionNotFulfilled, 403, 'The condition could not be fulfilled')
-      .onlyIf(relation.condition)
       .summary(relation.summary)
       .notes(relation.notes);
[File diff suppressed because one or more lines are too long]
@@ -6,7 +6,7 @@
     "coffee-script": "1.7.1",
     "decimal": "0.0.2",
     "docco": "0.6.3",
-    "foxx_generator": "^0.5.0",
+    "foxx_generator": "0.5.1",
     "htmlparser2": "3.7.2",
     "iced-coffee-script": "1.7.1-f",
     "joi": "4.9.0",
@@ -454,6 +454,7 @@ function cleanupDBDirectories(options) {
         // print("deleted " + cleanupDirectories[i]);
       }
     }
+    cleanupDirectories = [];
   }
 }
@@ -86,7 +86,7 @@ function optimizerIndexesTestSuite () {

       var plan = AQL_EXPLAIN(query).plan;
       var indexes = 0;
-      var nodeTypes = plan.nodes.map(function(node) {
+      plan.nodes.map(function(node) {
         if (node.type === "IndexRangeNode") {
           ++indexes;
         }

@@ -109,7 +109,7 @@ function optimizerIndexesTestSuite () {

       var plan = AQL_EXPLAIN(query).plan;
       var indexes = 0;
-      var nodeTypes = plan.nodes.map(function(node) {
+      plan.nodes.map(function(node) {
         if (node.type === "IndexRangeNode") {
           ++indexes;
         }

@@ -132,7 +132,7 @@ function optimizerIndexesTestSuite () {

       var plan = AQL_EXPLAIN(query).plan;
       var indexes = 0;
-      var nodeTypes = plan.nodes.map(function(node) {
+      plan.nodes.map(function(node) {
         if (node.type === "IndexRangeNode") {
           ++indexes;
         }
@@ -40,7 +40,8 @@
 static TRI_json_t* MergeRecursive (TRI_memory_zone_t* zone,
                                    TRI_json_t const* lhs,
                                    TRI_json_t const* rhs,
-                                   bool nullMeansRemove) {
+                                   bool nullMeansRemove,
+                                   bool mergeArrays) {
   TRI_json_t* result = TRI_CopyJson(zone, lhs);

   if (result == nullptr) {

@@ -65,7 +66,7 @@ static TRI_json_t* MergeRecursive (TRI_memory_zone_t* zone,
       // existing array does not have the attribute => append new attribute
       if (value->_type == TRI_JSON_ARRAY) {
         TRI_json_t* empty = TRI_CreateArrayJson(zone);
-        TRI_json_t* merged = MergeRecursive(zone, empty, value, nullMeansRemove);
+        TRI_json_t* merged = MergeRecursive(zone, empty, value, nullMeansRemove, mergeArrays);
         TRI_Insert3ArrayJson(zone, result, key->_value._string.data, merged);

         TRI_FreeJson(zone, empty);

@@ -76,8 +77,8 @@ static TRI_json_t* MergeRecursive (TRI_memory_zone_t* zone,
       }
       else {
         // existing array already has the attribute => replace attribute
-        if (lhsValue->_type == TRI_JSON_ARRAY && value->_type == TRI_JSON_ARRAY) {
-          TRI_json_t* merged = MergeRecursive(zone, lhsValue, value, nullMeansRemove);
+        if (lhsValue->_type == TRI_JSON_ARRAY && value->_type == TRI_JSON_ARRAY && mergeArrays) {
+          TRI_json_t* merged = MergeRecursive(zone, lhsValue, value, nullMeansRemove, mergeArrays);
           TRI_ReplaceArrayJson(zone, result, key->_value._string.data, merged);
           TRI_FreeJson(zone, merged);
         }

@@ -732,13 +733,14 @@ bool TRI_HasDuplicateKeyJson (TRI_json_t const* object) {
 TRI_json_t* TRI_MergeJson (TRI_memory_zone_t* zone,
                            TRI_json_t const* lhs,
                            TRI_json_t const* rhs,
-                           bool nullMeansRemove) {
+                           bool nullMeansRemove,
+                           bool mergeArrays) {
   TRI_json_t* result;

   TRI_ASSERT(lhs->_type == TRI_JSON_ARRAY);
   TRI_ASSERT(rhs->_type == TRI_JSON_ARRAY);

-  result = MergeRecursive(zone, lhs, rhs, nullMeansRemove);
+  result = MergeRecursive(zone, lhs, rhs, nullMeansRemove, mergeArrays);

   return result;
 }
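
For readers skimming this hunk, a JavaScript sketch of the merge semantics that MergeRecursive implements (a rough model, not the authoritative C code; "array" again means a JSON object in 2.x terminology):

    function mergeRecursive (lhs, rhs, nullMeansRemove, mergeArrays) {
      var result = {}, key;
      for (key in lhs) {
        if (lhs.hasOwnProperty(key)) { result[key] = lhs[key]; }
      }
      for (key in rhs) {
        if (! rhs.hasOwnProperty(key)) { continue; }
        if (rhs[key] === null && nullMeansRemove) {
          delete result[key]; // null removes the attribute
        }
        else if (mergeArrays &&
                 result[key] !== null && typeof result[key] === 'object' &&
                 rhs[key] !== null && typeof rhs[key] === 'object') {
          // both sides are objects: merge them recursively
          result[key] = mergeRecursive(result[key], rhs[key], nullMeansRemove, mergeArrays);
        }
        else {
          result[key] = rhs[key]; // otherwise the patch value wins
        }
      }
      return result;
    }
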
@@ -137,6 +137,7 @@ bool TRI_HasDuplicateKeyJson (TRI_json_t const*);
 TRI_json_t* TRI_MergeJson (TRI_memory_zone_t*,
                            TRI_json_t const*,
                            TRI_json_t const*,
+                           bool,
                            bool);

 ////////////////////////////////////////////////////////////////////////////////
@@ -92,6 +92,7 @@ TRI_v8_global_s::TRI_v8_global_s (v8::Isolate* isolate)
     KeyOptionsKey(),
     LengthKey(),
     LifeTimeKey(),
+    MergeArraysKey(),
     NameKey(),
     OperationIDKey(),
     ParametersKey(),

@@ -176,6 +177,7 @@ TRI_v8_global_s::TRI_v8_global_s (v8::Isolate* isolate)
   KeyOptionsKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("keyOptions"));
   LengthKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("length"));
   LifeTimeKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("lifeTime"));
+  MergeArraysKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("mergeArrays"));
   NameKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("name"));
   OperationIDKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("operationID"));
   OverwriteKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("overwrite"));
@@ -555,6 +555,12 @@ typedef struct TRI_v8_global_s {

   v8::Persistent<v8::String> LifeTimeKey;

+  ////////////////////////////////////////////////////////////////////////////////
+  /// @brief "mergeArrays" key name
+  ////////////////////////////////////////////////////////////////////////////////
+
+  v8::Persistent<v8::String> MergeArraysKey;
+
   ////////////////////////////////////////////////////////////////////////////////
   /// @brief "name" key
   ////////////////////////////////////////////////////////////////////////////////