
Merge branch 'devel' of https://github.com/arangodb/arangodb into devel

# Conflicts:
#	arangosh/Import/ImportHelper.cpp
This commit is contained in:
Simon Grätzer 2017-03-04 02:08:27 +01:00
commit dfd3cc3869
41 changed files with 576 additions and 151 deletions

View File

@ -1,6 +1,30 @@
devel
-----
* added `--translate` option for arangoimp to translate attribute names from
the input files to attribute names expected by ArangoDB
The `--translate` option can be specified multiple times (once per translation
to be executed). The following example renames the "id" column from the input
file to "_key", and the "from" column to "_from", and the "to" column to "_to":
arangoimp --type csv --file data.csv --translate "id=_key" --translate "from=_from" --translate "to=_to"
`--translate` works for CSV and TSV inputs only.
* fixed issue #2350
* fixed issue #2349
* fixed issue #2346
* fixed issue #2342
* change default string truncation length from 80 characters to 256 characters for
`print`/`printShell` functions in ArangoShell and arangod. This will emit longer
prefixes of string values before truncating them with `...`, which is helpful
for debugging.
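As a rough illustration (an interactive arangosh session is assumed), the new
default behaves like this:
var s = Array(400).join("x"); // a 399-character string
print(s); // now prints roughly the first 256 characters followed by `...`, instead of only 80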
* always validate incoming JSON HTTP requests for duplicate attribute names
Incoming JSON data with duplicate attribute names will now be rejected as
@ -12,6 +36,10 @@ devel
* allow passing own `graphql-sync` module instance to Foxx GraphQL router
* arangoexport can now export to csv format
* arangoimp: fixed issue #2214
v3.2.alpha2 (2017-02-20)
------------------------
@ -81,7 +109,20 @@ v3.2.alpha1 (2017-02-05)
* generated Foxx services now use swagger tags
v3.1.12 (XXXX-XX-XX)
v3.1.13 (XXXX-XX-XX)
--------------------
* fixed issue #2342
* changed thread handling to queue only user requests on coordinator
* use exponential backoff when waiting for collection locks
* repair short name server lookup in cluster in the case of a removed
server
v3.1.12 (2017-02-28)
--------------------
* disable shell color escape sequences on Windows

View File

@ -239,6 +239,25 @@ An example command line to execute the TSV import is:
> arangoimp --file "data.tsv" --type tsv --collection "users"
### Attribute Name Translation
For the CSV and TSV input formats, attribute names can be translated automatically.
This is useful in case the import file has different attribute names than those
that should be used in ArangoDB.
A common use case is to rename an "id" column from the input file to "_key", as
expected by ArangoDB. To do this, specify the following translation when
invoking arangoimp:
> arangoimp --file "data.csv" --type csv --translate "id=_key"
Other common cases are to rename columns in the input file to *_from* and *_to*:
> arangoimp --file "data.csv" --type csv --translate "from=_from" --translate "to=_to"
The *translate* option can be specified multiple times. The source attribute name
and the target attribute name must be separated by a *=*.
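As a sketch of the end-to-end effect, assume a hypothetical *data.csv* whose header
line is *id,from,to* and an existing edge collection named *edges*:
> arangoimp --file "data.csv" --type csv --collection "edges" --translate "id=_key" --translate "from=_from" --translate "to=_to"
A row such as *1,users/a,users/b* is then stored with the attributes *_key*, *_from*
and *_to* instead of *id*, *from* and *to*.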
### Importing into an Edge Collection
arangoimp can also be used to import data into an existing edge collection.

View File

@ -10,7 +10,7 @@ Homebrew
--------
If you are using [homebrew](http://brew.sh/),
then you can install the ArangoDB using *brew* as follows:
then you can install the latest released stable version of ArangoDB using *brew* as follows:
brew install arangodb

View File

@ -39,6 +39,9 @@ Returned if the user can be added by the server
If the JSON representation is malformed or mandatory data is missing
from the request.
@RESTRETURNCODE{409}
Returned if a user with the same name already exists.
@EXAMPLES
@EXAMPLE_ARANGOSH_RUN{RestCreateUser}
@ -63,7 +66,8 @@ from the request.
@RESTHEADER{PUT /_api/user/{user}/database/{dbname}, Grant or revoke database access}
@RESTBODYPARAM{grant,string,required,string}
Use "rw" to grant access right and "none" to revoke.
Use "rw" to grant read and write access rights, or "ro" to
grant read-only access right. To revoke access rights, use "none".
@RESTURLPARAMETERS
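An illustrative arangosh sketch of the call (user and database names are invented,
the body follows the description above):
var body = JSON.stringify({ grant: "ro" }); // "rw" grants read/write access, "none" revokes it
db._connection.PUT("/_api/user/" + encodeURIComponent("admin@myapp") + "/database/reports", body);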
@ -81,8 +85,8 @@ REST call.
@RESTRETURNCODES
@RESTRETURNCODE{201}
Returned if the user can be added by the server
@RESTRETURNCODE{200}
Returned if the access permissions were changed successfully.
@RESTRETURNCODE{400}
If the JSON representation is malformed or mandatory data is missing
@ -125,20 +129,26 @@ Fetch the list of databases available to the specified *user*. You
need permission to the *_system* database in order to execute this
REST call.
The call will return a JSON object with the per-database access
privileges for the specified user. The *result* object will contain
the databases names as object keys, and the associated privileges
for the database as values.
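For illustration, the *result* object could look like the following (database names
and access levels are invented):
{ "_system" : "rw", "reports" : "ro" }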
@RESTRETURNCODES
@RESTRETURNCODE{200}
Returned if the list of available databases can be returned.
@RESTRETURNCODE{400}
If the access privileges aren't right etc.
If the access privileges are not right etc.
@EXAMPLES
@EXAMPLE_ARANGOSH_RUN{RestFetchUserDatabaseList}
var users = require("@arangodb/users");
var theUser="anotherAdmin@secapp";
users.save(theUser, "secret")
users.save(theUser, "secret");
users.grantDatabase(theUser, "_system", "rw");
var url = "/_api/user/" + theUser + "/database/";
var response = logCurlRequest('GET', url);
@ -177,9 +187,10 @@ An optional JSON object with arbitrary extra data about the user.
@RESTDESCRIPTION
Replaces the data of an existing user. The name of an existing user
must be specified in user. You can only change the password of your
self. You need access to the *_system* database to change the
*active* flag.
must be specified in *user*. When authentication is turned on in the
server, only users that have read and write permissions for the *_system*
database can change other users' data. Additionally, a user can change
his/her own data.
@RESTRETURNCODES
@ -237,9 +248,10 @@ An optional JSON object with arbitrary extra data about the user.
@RESTDESCRIPTION
Partially updates the data of an existing user. The name of an existing
user must be specified in *user*. You can only change the password of your
self. You need access to the *_system* database to change the
*active* flag.
user must be specified in *user*. When authentication is turned on in the
server, only users that have read and write permissions for the *_system*
database can change other users' data. Additionally, a user can change
his/her own data.
@RESTRETURNCODES

View File

@ -2,6 +2,8 @@
I'm using the latest ArangoDB of the respective release series:
- [ ] 2.8
- [ ] 3.0
- [ ] 3.1
- [ ] 3.2 pre-releases: ___
- [ ] self-compiled devel branch
On this operating system:

View File

@ -56,10 +56,10 @@ INFOFILE="/tmp/ArangoDB-CLI.info.$$"
echo "ArangoDB server has been started"
echo ""
echo "The database directory is located at"
echo " '${ROOTDIR}@INC_CPACK_ARANGO_DATA_DIR@'"
echo " '${HOME}@INC_CPACK_ARANGO_DATA_DIR@'"
echo ""
echo "The log file is located at"
echo " '${ROOTDIR}@INC_CPACK_ARANGO_LOG_DIR@/arangod.log'"
echo " '${HOME}@INC_CPACK_ARANGO_LOG_DIR@/arangod.log'"
echo ""
echo "You can access the server using a browser at 'http://127.0.0.1:8529/'"
echo "or start the ArangoDB shell"

View File

@ -265,6 +265,54 @@ describe ArangoDB do
response["error"].should eq(false)
end
it "creates a new database with two users, using 'user' attribute" do
body = "{\"name\" : \"#{name}\", \"users\": [ { \"user\": \"admin\", \"password\": \"secret\", \"extra\": { \"gender\": \"m\" } }, { \"user\": \"foxx\", \"active\": false } ] }"
doc = ArangoDB.log_post("#{prefix}-create-users", api, :body => body)
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
response = doc.parsed_response
response["result"].should eq(true)
response["error"].should eq(false)
# list of databases should include the new database
doc = ArangoDB.log_get("#{prefix}-create-users", api)
doc.code.should eq(200)
result = doc.parsed_response["result"]
result.should include("_system")
result.should include(name)
# retrieve information about new database
doc = ArangoDB.log_get("#{prefix}-create-users", "/_db/#{name}" + api + "/current")
doc.code.should eq(200)
result = doc.parsed_response["result"]
result["name"].should eq(name)
result["path"].should be_kind_of(String)
result["isSystem"].should eq(false)
# retrieve information about user "admin"
doc = ArangoDB.log_get("#{prefix}-create-users", "/_db/_system/_api/user/admin")
doc.code.should eq(200)
result = doc.parsed_response
result["user"].should eq("admin")
result["active"].should eq(true)
result["extra"]["gender"].should eq("m")
# retrieve information about user "foxx"
doc = ArangoDB.log_get("#{prefix}-create-users", "/_db/_system/_api/user/foxx")
doc.code.should eq(200)
result = doc.parsed_response
result["user"].should eq("foxx")
result["active"].should eq(false)
doc = ArangoDB.log_delete("#{prefix}-create-users", api + "/#{name}")
doc.code.should eq(200)
response = doc.parsed_response
response["result"].should eq(true)
response["error"].should eq(false)
end
it "creates a new database with an invalid user object" do
body = "{\"name\" : \"#{name}\", \"users\": [ { } ] }"
doc = ArangoDB.log_post("#{prefix}-create-users-missing", api, :body => body)

View File

@ -215,10 +215,10 @@ describe ArangoDB do
doc = ArangoDB.log_post("#{prefix}-add", api, :body => body)
doc.code.should eq(400)
doc.code.should eq(409)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(true)
doc.parsed_response['code'].should eq(400)
doc.parsed_response['code'].should eq(409)
doc.parsed_response['errorNum'].should eq(1702)
end
end

View File

@ -179,8 +179,7 @@ SortedCollectBlock::SortedCollectBlock(ExecutionEngine* engine,
}
_aggregateRegisters.emplace_back(
std::make_pair((*itOut).second.registerId, reg));
_currentGroup.aggregators.emplace_back(
std::move(Aggregator::fromTypeString(_trx, p.second.second)));
_currentGroup.aggregators.emplace_back(Aggregator::fromTypeString(_trx, p.second.second));
}
TRI_ASSERT(_aggregateRegisters.size() == en->_aggregateVariables.size());
TRI_ASSERT(_aggregateRegisters.size() == _currentGroup.aggregators.size());
@ -711,7 +710,7 @@ int HashedCollectBlock::getOrSkipSome(size_t atLeast, size_t atMost,
// no aggregate registers. this means we'll only count the number of
// items
if (en->_count) {
aggregateValues->emplace_back(std::move(std::make_unique<AggregatorLength>(_trx, 1)));
aggregateValues->emplace_back(std::make_unique<AggregatorLength>(_trx, 1));
}
} else {
// we do have aggregate registers. create them as empty AqlValues
@ -720,8 +719,7 @@ int HashedCollectBlock::getOrSkipSome(size_t atLeast, size_t atMost,
// initialize aggregators
size_t j = 0;
for (auto const& r : en->_aggregateVariables) {
aggregateValues->emplace_back(
std::move(Aggregator::fromTypeString(_trx, r.second.second)));
aggregateValues->emplace_back(Aggregator::fromTypeString(_trx, r.second.second));
aggregateValues->back()->reduce(
GetValueForRegister(cur, _pos, _aggregateRegisters[j].second));
++j;

View File

@ -31,6 +31,7 @@
#include <velocypack/Options.h>
#include <velocypack/Slice.h>
#include <velocypack/Validator.h>
#include <velocypack/HexDump.h>
#include <velocypack/velocypack-aliases.h>
#include <memory>
@ -60,6 +61,10 @@ inline std::size_t validateAndCount(char const* vpStart,
} while (vpStart != vpEnd);
return numPayloads - 1;
} catch (std::exception const& e) {
VPackSlice slice(vpStart);
VPackHexDump dump(slice);
LOG_TOPIC(DEBUG, Logger::COMMUNICATION)
<< "len: " << std::distance(vpStart, vpEnd) << " - " << dump ;
throw std::runtime_error(
std::string("error during validation of incoming VPack: ") + e.what());
}

View File

@ -1073,27 +1073,35 @@ std::shared_ptr<VPackBuilder> RestImportHandler::createVelocyPackObject(
}
TRI_ASSERT(keys.isArray());
VPackValueLength const n = keys.length();
VPackValueLength const m = values.length();
if (n != m) {
VPackArrayIterator itKeys(keys);
VPackArrayIterator itValues(values);
if (itKeys.size() != itValues.size()) {
errorMsg = positionize(lineNumber) + "wrong number of JSON values (got " +
std::to_string(m) + ", expected " + std::to_string(n) + ")";
std::to_string(itValues.size()) + ", expected " + std::to_string(itKeys.size()) + ")";
THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER, errorMsg);
}
auto result = std::make_shared<VPackBuilder>();
result->openObject();
for (size_t i = 0; i < n; ++i) {
VPackSlice const key = keys.at(i);
VPackSlice const value = values.at(i);
while (itKeys.valid()) {
TRI_ASSERT(itValues.valid());
VPackSlice const key = itKeys.value();
VPackSlice const value = itValues.value();
if (key.isString() && !value.isNone() && !value.isNull()) {
std::string tmp = key.copyString();
result->add(tmp, value);
VPackValueLength l;
char const* p = key.getString(l);
result->add(p, l, value);
}
itKeys.next();
itValues.next();
}
result->close();
return result;

View File

@ -403,7 +403,7 @@ static v8::Handle<v8::Object> RequestCppToV8(v8::Isolate* isolate,
TRI_GET_GLOBAL_STRING(RequestTypeKey);
TRI_GET_GLOBAL_STRING(RequestBodyKey);
auto set_request_body_json_or_vpack = [&]() {
auto setRequestBodyJsonOrVPack = [&]() {
if (rest::ContentType::JSON == request->contentType()) {
auto httpreq = dynamic_cast<HttpRequest*>(request);
if (httpreq == nullptr) {
@ -424,36 +424,32 @@ static v8::Handle<v8::Object> RequestCppToV8(v8::Isolate* isolate,
req->ForceSet(RequestBodyKey, TRI_V8_STD_STRING(jsonString));
headers["content-length"] = StringUtils::itoa(jsonString.size());
headers["content-type"] = StaticStrings::MimeTypeJson;
} else {
throw std::logic_error("unhandled request type");
}
};
for (auto const& it : headers) {
headerFields->ForceSet(TRI_V8_STD_STRING(it.first),
TRI_V8_STD_STRING(it.second));
}
// copy request type
switch (request->requestType()) {
case rest::RequestType::POST: {
TRI_GET_GLOBAL_STRING(PostConstant);
req->ForceSet(RequestTypeKey, PostConstant);
set_request_body_json_or_vpack();
setRequestBodyJsonOrVPack();
break;
}
case rest::RequestType::PUT: {
TRI_GET_GLOBAL_STRING(PutConstant);
req->ForceSet(RequestTypeKey, PutConstant);
set_request_body_json_or_vpack();
setRequestBodyJsonOrVPack();
break;
}
case rest::RequestType::PATCH: {
TRI_GET_GLOBAL_STRING(PatchConstant);
req->ForceSet(RequestTypeKey, PatchConstant);
set_request_body_json_or_vpack();
setRequestBodyJsonOrVPack();
break;
}
case rest::RequestType::OPTIONS: {
@ -478,6 +474,11 @@ static v8::Handle<v8::Object> RequestCppToV8(v8::Isolate* isolate,
break;
}
}
for (auto const& it : headers) {
headerFields->ForceSet(TRI_V8_STD_STRING(it.first),
TRI_V8_STD_STRING(it.second));
}
// copy request parameter
v8::Handle<v8::Object> valuesObject = v8::Object::New(isolate);
@ -562,7 +563,11 @@ static void ResponseV8ToCpp(v8::Isolate* isolate, TRI_v8_global_t const* v8g,
}
switch (response->transportType()) {
case Endpoint::TransportType::HTTP:
response->setContentType(contentType);
if (autoContent) {
response->setContentType(rest::ContentType::JSON);
} else {
response->setContentType(contentType);
}
break;
case Endpoint::TransportType::VPP:
@ -1111,7 +1116,13 @@ static void JS_RawRequestBody(v8::FunctionCallbackInfo<v8::Value> const& args) {
case Endpoint::TransportType::HTTP: {
auto httpRequest = static_cast<arangodb::HttpRequest*>(e->Value());
if (httpRequest != nullptr) {
std::string bodyStr = httpRequest->body();
std::string bodyStr;
if (rest::ContentType::VPACK == request->contentType()) {
VPackSlice slice = request->payload();
bodyStr = slice.toJson();
} else {
bodyStr = httpRequest->body();
}
V8Buffer* buffer =
V8Buffer::New(isolate, bodyStr.c_str(), bodyStr.size());

View File

@ -398,7 +398,11 @@ void ExportFeature::writeCollectionBatch(int fd, VPackArrayIterator it, std::str
}
value = std::regex_replace(value, std::regex("\""), "\"\"");
value = std::regex_replace(value, std::regex(","), "\",");
if (value.find(",") != std::string::npos || value.find("\"\"") != std::string::npos) {
value = "\"" + value;
value.append("\"");
}
}
line.append(value);
}

View File

@ -98,6 +98,10 @@ void ImportFeature::collectOptions(
options->addOption("--convert",
"convert the strings 'null', 'false', 'true' and strings containing numbers into non-string types (csv and tsv only)",
new BooleanParameter(&_convert));
options->addOption("--translate",
"translate an attribute name (use as --translate \"from=to\", for csv and tsv only)",
new VectorParameter<StringParameter>(&_translations));
std::unordered_set<std::string> types = {"document", "edge"};
std::vector<std::string> typesVector(types.begin(), types.end());
@ -171,6 +175,21 @@ void ImportFeature::validateOptions(
LOG_TOPIC(WARN, arangodb::Logger::FIXME) << "capping --batch-size value to " << MaxBatchSize;
_chunkSize = MaxBatchSize;
}
for (auto const& it : _translations) {
auto parts = StringUtils::split(it, "=");
if (parts.size() != 2) {
LOG_TOPIC(FATAL, arangodb::Logger::FIXME) << "invalid translation '" << it << "'";
FATAL_ERROR_EXIT();
}
StringUtils::trimInPlace(parts[0]);
StringUtils::trimInPlace(parts[1]);
if (parts[0].empty() || parts[1].empty()) {
LOG_TOPIC(FATAL, arangodb::Logger::FIXME) << "invalid translation '" << it << "'";
FATAL_ERROR_EXIT();
}
}
}
void ImportFeature::start() {
@ -249,6 +268,18 @@ void ImportFeature::start() {
ih.setRowsToSkip(static_cast<size_t>(_rowsToSkip));
ih.setOverwrite(_overwrite);
ih.useBackslash(_useBackslash);
std::unordered_map<std::string, std::string> translations;
for (auto const& it : _translations) {
auto parts = StringUtils::split(it, "=");
TRI_ASSERT(parts.size() == 2); // already validated before
StringUtils::trimInPlace(parts[0]);
StringUtils::trimInPlace(parts[1]);
translations.emplace(parts[0], parts[1]);
}
ih.setTranslations(translations);
// quote
if (_quote.length() <= 1) {

View File

@ -56,14 +56,14 @@ class ImportFeature final : public application_features::ApplicationFeature,
bool _createCollection;
std::string _createCollectionType;
std::string _typeImport;
std::vector<std::string> _translations;
bool _overwrite;
std::string _quote;
std::string _separator;
bool _progress;
std::string _onDuplicateAction;
uint64_t _rowsToSkip;
private:
int* _result;
};
}

View File

@ -156,6 +156,7 @@ ImportHelper::ImportHelper(httpclient::SimpleHttpClient* client,
_rowsRead(0),
_rowOffset(0),
_rowsToSkip(0),
_keyColumn(-1),
_onDuplicateAction("error"),
_collectionName(),
_lineBuffer(TRI_UNKNOWN_MEM_ZONE),
@ -249,10 +250,15 @@ bool ImportHelper::importDelimited(std::string const& collectionName,
_errorMessage = TRI_LAST_ERROR_STR;
return false;
} else if (n == 0) {
// we have read the entire file
// now have the CSV parser parse an additional new line so it
// will definitely process the last line of the input data if
// it did not end with a newline
TRI_ParseCsvString(&parser, "\n", 1);
break;
}
totalRead += (int64_t)n;
totalRead += static_cast<int64_t>(n);
reportProgress(totalLength, totalRead, nextProgress);
TRI_ParseCsvString(&parser, buffer, n);
@ -353,7 +359,7 @@ bool ImportHelper::importJson(std::string const& collectionName,
checkedFront = true;
}
totalRead += (int64_t)n;
totalRead += static_cast<int64_t>(n);
reportProgress(totalLength, totalRead, nextProgress);
if (_outputBuffer.length() > _maxUploadSize) {
@ -481,18 +487,26 @@ void ImportHelper::addField(char const* field, size_t fieldLength, size_t row,
_lineBuffer.appendChar(',');
}
if (row == 0 + _rowsToSkip || escaped) {
if (row == _rowsToSkip && fieldLength > 0) {
// translate field
auto it = _translations.find(std::string(field, fieldLength));
if (it != _translations.end()) {
field = (*it).second.c_str();
fieldLength = (*it).second.size();
}
}
if (_keyColumn == -1 && row == _rowsToSkip && fieldLength == 4 && memcmp(field, "_key", 4) == 0) {
_keyColumn = column;
}
if (row == _rowsToSkip || escaped || _keyColumn == static_cast<decltype(_keyColumn)>(column)) {
// head line or escaped value
_lineBuffer.appendJsonEncoded(field, fieldLength);
return;
}
if (!_convert) {
_lineBuffer.appendJsonEncoded(field, fieldLength);
return;
}
if (*field == '\0') {
if (*field == '\0' || fieldLength == 0) {
// do nothing
_lineBuffer.appendText(TRI_CHAR_LENGTH_PAIR("null"));
return;
@ -508,50 +522,60 @@ void ImportHelper::addField(char const* field, size_t fieldLength, size_t row,
return;
}
if (IsInteger(field, fieldLength)) {
// integer value
// conversion might fail with out-of-range error
try {
if (fieldLength > 8) {
// long integer numbers might be problematic. check if we get out of
// range
(void) std::stoll(std::string(
field,
fieldLength)); // this will fail if the number cannot be converted
if (_convert) {
if (IsInteger(field, fieldLength)) {
// integer value
// conversion might fail with out-of-range error
try {
if (fieldLength > 8) {
// long integer numbers might be problematic. check if we get out of
// range
(void) std::stoll(std::string(
field,
fieldLength)); // this will fail if the number cannot be converted
}
int64_t num = StringUtils::int64(field, fieldLength);
_lineBuffer.appendInteger(num);
} catch (...) {
// conversion failed
_lineBuffer.appendJsonEncoded(field, fieldLength);
}
} else if (IsDecimal(field, fieldLength)) {
// double value
// conversion might fail with out-of-range error
try {
std::string tmp(field, fieldLength);
size_t pos = 0;
double num = std::stod(tmp, &pos);
if (pos == fieldLength) {
bool failed = (num != num || num == HUGE_VAL || num == -HUGE_VAL);
if (!failed) {
_lineBuffer.appendDecimal(num);
return;
}
}
// NaN, +inf, -inf
// fall-through to appending the number as a string
} catch (...) {
// conversion failed
// fall-through to appending the number as a string
}
int64_t num = StringUtils::int64(field, fieldLength);
_lineBuffer.appendInteger(num);
} catch (...) {
// conversion failed
_lineBuffer.appendChar('"');
_lineBuffer.appendText(field, fieldLength);
_lineBuffer.appendChar('"');
} else {
_lineBuffer.appendJsonEncoded(field, fieldLength);
}
} else if (IsDecimal(field, fieldLength)) {
// double value
// conversion might fail with out-of-range error
try {
std::string tmp(field, fieldLength);
size_t pos = 0;
double num = std::stod(tmp, &pos);
if (pos == fieldLength) {
bool failed = (num != num || num == HUGE_VAL || num == -HUGE_VAL);
if (!failed) {
_lineBuffer.appendDecimal(num);
return;
}
}
// NaN, +inf, -inf
// fall-through to appending the number as a string
} catch (...) {
// conversion failed
// fall-through to appending the number as a string
}
_lineBuffer.appendChar('"');
_lineBuffer.appendText(field, fieldLength);
_lineBuffer.appendChar('"');
} else {
_lineBuffer.appendJsonEncoded(field, fieldLength);
if (IsInteger(field, fieldLength) || IsDecimal(field, fieldLength)) {
// numeric value. don't convert
_lineBuffer.appendText(field, fieldLength);
} else {
// non-numeric value
_lineBuffer.appendJsonEncoded(field, fieldLength);
}
}
}
@ -661,7 +685,7 @@ void ImportHelper::sendCsvBuffer() {
if (!checkCreateCollection()) {
return;
}
std::unordered_map<std::string, std::string> headerFields;
std::string url("/_api/import?" + getCollectionUrlPart() + "&line=" +
StringUtils::itoa(_rowOffset) + "&details=true&onDuplicate=" +
@ -676,7 +700,7 @@ void ImportHelper::sendCsvBuffer() {
if (_firstChunk && _overwrite) {
url += "&overwrite=true";
}
_firstChunk = false;
std::unique_ptr<SimpleHttpResult> result(_client->request(

View File

@ -138,6 +138,10 @@ class ImportHelper {
_createCollectionType = value;
}
void setTranslations(std::unordered_map<std::string, std::string> const& translations) {
_translations = translations;
}
//////////////////////////////////////////////////////////////////////////////
/// @brief whether or not to overwrite existing data in the collection
//////////////////////////////////////////////////////////////////////////////
@ -263,6 +267,8 @@ class ImportHelper {
size_t _rowOffset;
size_t _rowsToSkip;
int64_t _keyColumn;
std::string _onDuplicateAction;
std::string _collectionName;
std::string _fromCollectionPrefix;
@ -271,6 +277,8 @@ class ImportHelper {
arangodb::basics::StringBuffer _outputBuffer;
std::string _firstLine;
std::unordered_map<std::string, std::string> _translations;
bool _hasError;
std::string _errorMessage;

View File

@ -1,9 +1,4 @@
set(W_INSTALL_FILES "${PROJECT_SOURCE_DIR}/Installation/Windows/")
if (${USE_ENTERPRISE})
set(CPACK_PACKAGE_NAME "ArangoDB3e")
else()
set(CPACK_PACKAGE_NAME "ArangoDB3")
endif()
set(CPACK_NSIS_DISPLAY_NAME, ${ARANGODB_DISPLAY_NAME})
set(CPACK_NSIS_HELP_LINK ${ARANGODB_HELP_LINK})
@ -101,6 +96,9 @@ add_custom_target(package-arongodb-client-nsis
list(APPEND PACKAGES_LIST package-arongodb-client-nsis)
add_custom_target(copy_client_nsis_package
COMMAND ${CMAKE_COMMAND} -E copy ${ARANGODB_CLIENT_PACKAGE_FILE_NAME}.exe ${PACKAGE_TARGET_DIR})
add_custom_target(copy_nsis_packages
COMMAND ${CMAKE_COMMAND} -E copy ${CPACK_PACKAGE_FILE_NAME}.exe ${PACKAGE_TARGET_DIR})

View File

@ -57,6 +57,11 @@ elseif ("${PACKAGING}" STREQUAL "Bundle")
include(packages/bundle)
include(packages/tar)
elseif (MSVC)
if (${USE_ENTERPRISE})
set(CPACK_PACKAGE_NAME "ArangoDB3e")
else()
set(CPACK_PACKAGE_NAME "ArangoDB3")
endif()
if (CMAKE_CL_64)
SET(ARANGODB_PACKAGE_ARCHITECTURE "win64")
else ()

View File

@ -283,8 +283,11 @@ function put_api_permission (req, res) {
if (json.grant === 'rw' || json.grant === 'ro') {
doc = users.grantDatabase(user, dbname, json.grant);
} else {
} else if (json.grant === 'none' || json.grant === '') {
doc = users.revokeDatabase(user, dbname, json.grant);
} else {
actions.resultBad(req, res, arangodb.ERROR_HTTP_BAD_PARAMETER, "invalid grant type");
return;
}
users.reload();

View File

@ -126,9 +126,21 @@ function post_api_database (req, res) {
var i;
for (i = 0; i < users.length; ++i) {
var user = users[i];
if (typeof user !== 'object' ||
!user.hasOwnProperty('username') ||
typeof (user.username) !== 'string') {
if (typeof user !== 'object') {
// bad username
actions.resultBad(req, res, arangodb.ERROR_HTTP_BAD_PARAMETER);
return;
}
var name;
if (user.hasOwnProperty('username')) {
name = user.username;
} else if (user.hasOwnProperty('user')) {
name = user.user;
}
if (typeof name !== 'string') {
// bad username
actions.resultBad(req, res, arangodb.ERROR_HTTP_BAD_PARAMETER);
return;
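With the change above, entries in the *users* array may specify the user name either
as *username* or as *user*. A hedged arangosh sketch of a request that is now accepted
(attribute values mirror the new test case, the database name is invented):
var body = { name: "example", users: [ { user: "admin", password: "secret" }, { user: "foxx", active: false } ] };
db._connection.POST("/_api/database", JSON.stringify(body));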

File diff suppressed because one or more lines are too long

View File

@ -2747,4 +2747,4 @@ var cutByResolution = function (str) {
</div>
<div id="workMonitorContent" class="innerContent">
</div></script></head><body><nav class="navbar" style="display: none"><div class="primary"><div class="navlogo"><a class="logo big" href="#"><img id="ArangoDBLogo" class="arangodbLogo" src="img/arangodb-edition-optimized.svg"></a><a class="logo small" href="#"><img class="arangodbLogo" src="img/arangodb_logo_small.png"></a><a class="version"><span id="currentVersion"></span></a></div><div class="statmenu" id="statisticBar"></div><div class="navmenu" id="navigationBar"></div></div></nav><div id="modalPlaceholder"></div><div class="bodyWrapper" style="display: none"><div class="centralRow"><div id="navbar2" class="navbarWrapper secondary"><div class="subnavmenu" id="subNavigationBar"></div></div><div class="resizecontainer contentWrapper"><div id="loadingScreen" class="loadingScreen" style="display: none"><i class="fa fa-circle-o-notch fa-spin fa-3x fa-fw margin-bottom"></i> <span class="sr-only">Loading...</span></div><div id="content" class="centralContent"></div><footer class="footer"><div id="footerBar"></div></footer></div></div></div><div id="progressPlaceholder" style="display:none"></div><div id="spotlightPlaceholder" style="display:none"></div><div id="graphSettingsContent" style="display: none"></div><div id="offlinePlaceholder" style="display:none"><div class="offline-div"><div class="pure-u"><div class="pure-u-1-4"></div><div class="pure-u-1-2 offline-window"><div class="offline-header"><h3>You have been disconnected from the server</h3></div><div class="offline-body"><p>The connection to the server has been lost. The server may be under heavy load.</p><p>Trying to reconnect in <span id="offlineSeconds">10</span> seconds.</p><p class="animation_state"><span><button class="button-success">Reconnect now</button></span></p></div></div><div class="pure-u-1-4"></div></div></div></div><div class="arangoFrame" style=""><div class="outerDiv"><div class="innerDiv"></div></div></div><script src="libs.js?version=1487326471860"></script><script src="app.js?version=1487326471860"></script></body></html>
</div></script></head><body><nav class="navbar" style="display: none"><div class="primary"><div class="navlogo"><a class="logo big" href="#"><img id="ArangoDBLogo" class="arangodbLogo" src="img/arangodb-edition-optimized.svg"></a><a class="logo small" href="#"><img class="arangodbLogo" src="img/arangodb_logo_small.png"></a><a class="version"><span id="currentVersion"></span></a></div><div class="statmenu" id="statisticBar"></div><div class="navmenu" id="navigationBar"></div></div></nav><div id="modalPlaceholder"></div><div class="bodyWrapper" style="display: none"><div class="centralRow"><div id="navbar2" class="navbarWrapper secondary"><div class="subnavmenu" id="subNavigationBar"></div></div><div class="resizecontainer contentWrapper"><div id="loadingScreen" class="loadingScreen" style="display: none"><i class="fa fa-circle-o-notch fa-spin fa-3x fa-fw margin-bottom"></i> <span class="sr-only">Loading...</span></div><div id="content" class="centralContent"></div><footer class="footer"><div id="footerBar"></div></footer></div></div></div><div id="progressPlaceholder" style="display:none"></div><div id="spotlightPlaceholder" style="display:none"></div><div id="graphSettingsContent" style="display: none"></div><div id="offlinePlaceholder" style="display:none"><div class="offline-div"><div class="pure-u"><div class="pure-u-1-4"></div><div class="pure-u-1-2 offline-window"><div class="offline-header"><h3>You have been disconnected from the server</h3></div><div class="offline-body"><p>The connection to the server has been lost. The server may be under heavy load.</p><p>Trying to reconnect in <span id="offlineSeconds">10</span> seconds.</p><p class="animation_state"><span><button class="button-success">Reconnect now</button></span></p></div></div><div class="pure-u-1-4"></div></div></div></div><div class="arangoFrame" style=""><div class="outerDiv"><div class="innerDiv"></div></div></div><script src="libs.js?version=1488384275033"></script><script src="app.js?version=1488384275033"></script></body></html>

View File

@ -78,7 +78,6 @@
},
parse: function (res) {
console.log(res);
if (!res.error) {
return res.graphs;
}

View File

@ -52,6 +52,7 @@
},
hideSmartGraphOptions: function () {
$('#tab-smartGraph').parent().remove();
$('#row_general-numberOfShards').show();
$('#smartGraphInfo').hide();
$('#row_new-numberOfShards').hide();
@ -640,10 +641,12 @@
};
}
} else {
if ($('#general-numberOfShards').val().length > 0) {
newCollectionObject.options = {
numberOfShards: $('#general-numberOfShards').val()
};
if (frontendConfig.isCluster) {
if ($('#general-numberOfShards').val().length > 0) {
newCollectionObject.options = {
numberOfShards: $('#general-numberOfShards').val()
};
}
}
}

View File

@ -1163,6 +1163,10 @@ function runArangoImp (options, instanceInfo, what) {
if (what.separator !== undefined) {
args['separator'] = what.separator;
}
if (what.convert !== undefined) {
args['convert'] = what.convert ? 'true' : 'false';
}
return executeAndWait(ARANGOIMP_BIN, toArgv(args), options);
}
@ -3227,6 +3231,23 @@ const impTodos = [{
create: 'true',
separator: ';',
backslash: true
}, {
id: 'csvnoconvert',
data: makePathUnix('js/common/test-data/import/import-noconvert.csv'),
coll: 'UnitTestsImportCsvNoConvert',
type: 'csv',
create: 'true',
separator: ',',
convert: true,
backslash: true
}, {
id: 'csvnoeol',
data: makePathUnix('js/common/test-data/import/import-noeol.csv'),
coll: 'UnitTestsImportCsvNoEol',
type: 'csv',
create: 'true',
separator: ',',
backslash: true
}, {
id: 'tsv1',
data: makePathUnix('js/common/test-data/import/import-1.tsv'),

View File

@ -236,12 +236,12 @@ exports.permission = function (username, key) {
if (key === undefined || key === null) {
uri = '_api/user/' + encodeURIComponent(username)
+ '/permission';
+ '/database';
requestResult = db._connection.GET(uri);
} else {
uri = '_api/user/' + encodeURIComponent(username)
+ '/permission/' + encodeURIComponent(key);
+ '/database/' + encodeURIComponent(key);
requestResult = db._connection.GET(uri);
}
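With the endpoint fixed to */database*, the client call keeps its signature; a minimal
usage sketch from arangosh (user name invented, return values abbreviated):
var users = require("@arangodb/users");
users.permission("admin@myapp"); // per-database rights, e.g. { "_system" : "rw" }
users.permission("admin@myapp", "_system"); // right for a single database, e.g. "rw"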

View File

@ -1460,7 +1460,7 @@ global.DEFINE_MODULE('internal', (function () {
output('\n');
}
printShell.limitString = 80;
printShell.limitString = 256;
// //////////////////////////////////////////////////////////////////////////////
// / @brief flatten

View File

@ -0,0 +1,17 @@
"value1","value2"
1,null
2,false
3,true
4,1
5,2
6,3
7,a
8,b
9, a
10,-1
11,-.5
12,3.566
13,0
14,
15, c
16, 1

View File

@ -0,0 +1,4 @@
"value1","value2"
a,b
c,d
e,f

View File

@ -1,5 +1,5 @@
/*jshint globalstrict:false, strict:false */
/*global assertEqual, fail */
/*global assertEqual, assertTrue, assertFalse, fail */
////////////////////////////////////////////////////////////////////////////////
/// @brief test the users management
@ -253,8 +253,126 @@ function UsersSuite () {
testReload : function () {
users.reload();
}
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test invalid grants
////////////////////////////////////////////////////////////////////////////////
testInvalidGrants : function () {
var username = "users-1";
var passwd = "passwd";
users.save(username, passwd);
assertEqual(username, c.firstExample({ user: username }).user);
[ "foo", "bar", "baz", "w", "wx", "_system" ].forEach(function(type) {
try {
users.grantDatabase(username, "_system", type);
fail();
} catch (err) {
assertTrue(err.errorNum === ERRORS.ERROR_BAD_PARAMETER.code ||
err.errorNum === ERRORS.ERROR_HTTP_BAD_PARAMETER.code);
}
});
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test grant
////////////////////////////////////////////////////////////////////////////////
testGrantExisting : function () {
var username = "users-1";
var passwd = "passwd";
users.save(username, passwd);
assertEqual(username, c.firstExample({ user: username }).user);
users.grantDatabase(username, "_system", "rw");
// cannot really test something here as grantDatabase() does not return anything
// but if it did not throw an exception, this is already a success!
var result = users.permission(username);
assertTrue(result.hasOwnProperty("_system"));
assertEqual("rw", result._system);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test grant non-existing user
////////////////////////////////////////////////////////////////////////////////
testGrantNonExisting1 : function () {
try {
users.grantDatabase("this user does not exist", "_system", "rw");
fail();
} catch (err) {
assertEqual(ERRORS.ERROR_USER_NOT_FOUND.code, err.errorNum);
}
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test grant non-existing user
////////////////////////////////////////////////////////////////////////////////
testGrantNonExisting2 : function () {
try {
users.grantDatabase("users-1", "_system", "rw");
fail();
} catch (err) {
assertEqual(ERRORS.ERROR_USER_NOT_FOUND.code, err.errorNum);
}
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test grant change
////////////////////////////////////////////////////////////////////////////////
testGrantChange : function () {
var username = "users-1";
var passwd = "passwd";
users.save(username, passwd);
assertEqual(username, c.firstExample({ user: username }).user);
users.grantDatabase(username, "_system", "rw");
// cannot really test something here as grantDatabase() does not return anything
// but if it did not throw an exception, this is already a success!
var result = users.permission(username);
assertTrue(result.hasOwnProperty("_system"));
assertEqual("rw", result._system);
users.grantDatabase(username, "_system", "ro");
result = users.permission(username);
assertTrue(result.hasOwnProperty("_system"));
assertEqual("ro", result._system);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test grant/revoke
////////////////////////////////////////////////////////////////////////////////
testGrantRevoke : function () {
var username = "users-1";
var passwd = "passwd";
users.save(username, passwd);
assertEqual(username, c.firstExample({ user: username }).user);
users.grantDatabase(username, "_system", "rw");
// cannot really test something here as grantDatabase() does not return anything
// but if it did not throw an exception, this is already a success!
var result = users.permission(username);
assertTrue(result.hasOwnProperty("_system"));
assertEqual("rw", result._system);
users.revokeDatabase(username, "_system");
result = users.permission(username);
assertFalse(result.hasOwnProperty("_system"));
assertEqual({}, result);
}
};
}

View File

@ -1816,6 +1816,9 @@ function arangoErrorToHttpCode (num) {
case arangodb.ERROR_ARANGO_DUPLICATE_NAME:
case arangodb.ERROR_ARANGO_DUPLICATE_IDENTIFIER:
case arangodb.ERROR_USER_DUPLICATE:
case arangodb.ERROR_GRAPH_DUPLICATE:
case arangodb.ERROR_TASK_DUPLICATE_ID:
case arangodb.ERROR_SERVICE_MOUNTPOINT_CONFLICT:
return exports.HTTP_CONFLICT;
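For example, creating a user that already exists (*ERROR_USER_DUPLICATE*, errorNum 1702)
is now reported over HTTP as 409 Conflict instead of 400, matching the adjusted spec above.
A small arangosh sketch (user name invented):
var users = require("@arangodb/users");
users.save("dupuser", "secret");
users.save("dupuser", "secret"); // throws ERROR_USER_DUPLICATE; over HTTP this now maps to 409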

View File

@ -497,9 +497,10 @@ exports.revokeDatabase = function (username, database) {
}
let databases = user.databases || {};
databases[database] = 'none';
databases[database] = undefined;
delete databases[database];
users.update(user, { databases: databases }, false, false);
users.update(user, { databases: databases }, { keepNull: false, mergeObjects: false });
// not exports.reload() as this is an abstract method...
require('@arangodb/users').reload();
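The observable effect, mirroring the new testGrantRevoke case above: after revoking,
the database no longer appears in the user's permissions at all instead of being
reported as "none". A brief arangosh sketch (user name invented):
var users = require("@arangodb/users");
users.save("someuser", "secret");
users.grantDatabase("someuser", "_system", "rw");
users.revokeDatabase("someuser", "_system");
users.permission("someuser"); // the "_system" key is gone, e.g. {}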

View File

@ -42,6 +42,8 @@
db._drop("UnitTestsImportCsv3");
db._drop("UnitTestsImportCsv4");
db._drop("UnitTestsImportCsv5");
db._drop("UnitTestsImportCsvNoConvert");
db._drop("UnitTestsImportCsvNoEol");
db._drop("UnitTestsImportTsv1");
db._drop("UnitTestsImportTsv2");
db._drop("UnitTestsImportVertex");

View File

@ -42,6 +42,8 @@
db._drop("UnitTestsImportCsv3");
db._drop("UnitTestsImportCsv4");
db._drop("UnitTestsImportCsv5");
db._drop("UnitTestsImportCsvNoConvert");
db._drop("UnitTestsImportCsvNoEol");
db._drop("UnitTestsImportTsv1");
db._drop("UnitTestsImportTsv2");
db._drop("UnitTestsImportVertex");

View File

@ -285,6 +285,49 @@ function importTestSuite () {
assertEqual(expected, actual);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test csv import without converting
////////////////////////////////////////////////////////////////////////////////
testCsvImportNoConvert : function () {
var expected = [
{ value1: 1 },
{ value1: 2, value2: false },
{ value1: 3, value2: true },
{ value1: 4, value2: 1 },
{ value1: 5, value2: 2 },
{ value1: 6, value2: 3 },
{ value1: 7, value2: "a" },
{ value1: 8, value2: "b" },
{ value1: 9, value2: " a" },
{ value1: 10, value2: -1 },
{ value1: 11, value2: -0.5 },
{ value1: 12, value2: 3.566 },
{ value1: 13, value2: 0 },
{ value1: 14 },
{ value1: 15, value2: " c" },
{ value1: 16, value2: " 1" }
];
var actual = getQueryResults("FOR i IN UnitTestsImportCsvNoConvert SORT i.value1 RETURN i");
assertEqual(expected, actual);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test csv import without trailing eol
////////////////////////////////////////////////////////////////////////////////
testCsvImportNoEol : function () {
var expected = [
{ value1: "a", value2: "b" },
{ value1: "c", value2: "d" },
{ value1: "e", value2: "f" }
];
var actual = getQueryResults("FOR i IN UnitTestsImportCsvNoEol SORT i.value1 RETURN i");
assertEqual(expected, actual);
},
////////////////////////////////////////////////////////////////////////////////
/// @brief test tsv import
////////////////////////////////////////////////////////////////////////////////

View File

@ -292,14 +292,12 @@ function ReplicationSuite() {
function (state) {
// wait for slave applier to have started and detect the mess
internal.wait(5, false);
// slave should have failed
assertFalse(replication.applier.state().state.running);
return true;
return replication.applier.state().state.running;
},
function (state) {
// slave should have failed
assertFalse(replication.applier.state().state.running);
// data loss on slave!
assertTrue(db._collection(cn).count() < 25);
},

View File

@ -179,17 +179,11 @@ typedef long suseconds_t;
#undef free
#endif
////////////////////////////////////////////////////////////////////////////////
/// @brief helper macro for calculating strlens for static strings at
/// a compile-time (unless compiled with fno-builtin-strlen etc.)
////////////////////////////////////////////////////////////////////////////////
#define TRI_CHAR_LENGTH_PAIR(value) (value), strlen(value)
////////////////////////////////////////////////////////////////////////////////
/// @brief assert
////////////////////////////////////////////////////////////////////////////////
#ifndef TRI_ASSERT
#ifdef ARANGODB_ENABLE_MAINTAINER_MODE
@ -211,12 +205,8 @@ typedef long suseconds_t;
#endif
////////////////////////////////////////////////////////////////////////////////
/// @brief aborts program execution, returning an error code
///
/// if backtraces are enabled, a backtrace will be printed before
////////////////////////////////////////////////////////////////////////////////
#define FATAL_ERROR_EXIT(...) \
do { \
TRI_LogBacktrace(); \
@ -226,12 +216,8 @@ typedef long suseconds_t;
exit(EXIT_FAILURE); \
} while (0)
////////////////////////////////////////////////////////////////////////////////
/// @brief aborts program execution, calling std::abort
///
/// if backtraces are enabled, a backtrace will be printed before
////////////////////////////////////////////////////////////////////////////////
#define FATAL_ERROR_ABORT(...) \
do { \
TRI_LogBacktrace(); \
@ -251,7 +237,6 @@ inline void ADB_WindowsExitFunction(int exitCode, void* data) {}
// --SECTIONS-- deferred execution
// -----------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////
/// Use in a function (or scope) as:
/// TRI_DEFER( <ONE_STATEMENT> );
/// and the statement will be called regardless if the function throws or
@ -262,8 +247,6 @@ inline void ADB_WindowsExitFunction(int exitCode, void* data) {}
/// appearance.
/// The idea to this is from
/// http://blog.memsql.com/c-error-handling-with-auto/
////////////////////////////////////////////////////////////////////////////////
#define TOKEN_PASTE_WRAPPED(x, y) x##y
#define TOKEN_PASTE(x, y) TOKEN_PASTE_WRAPPED(x, y)

View File

@ -315,12 +315,14 @@ void Logger::log(char const* function, char const* file, long int line,
size_t offset = out.str().size() - message.size();
auto msg = std::make_unique<LogMessage>(level, topicId, out.str(), offset);
bool const isDirectLogLevel = (level == LogLevel::FATAL || level == LogLevel::ERR || level == LogLevel::WARN);
// now either queue or output the message
if (_threaded && !isDirectLogLevel) {
if (_threaded) {
try {
_loggingThread->log(msg);
bool const isDirectLogLevel = (level == LogLevel::FATAL || level == LogLevel::ERR || level == LogLevel::WARN);
if (isDirectLogLevel) {
_loggingThread->flush();
}
return;
} catch (...) {
// fall-through to non-threaded logging