
Merge branch 'spdvpk' of ssh://github.com/ArangoDB/ArangoDB into spdvpk

Max Neunhoeffer 2016-04-08 10:28:12 +02:00
commit 1d4abd16a6
53 changed files with 767 additions and 726 deletions


@@ -3,8 +3,8 @@ ArangoDB Maintainers manual
=========================== ===========================
This file contains documentation about the build process, documentation generation means, unittests - in short - if you want to hack parts of arangod this could be interesting for you. This file contains documentation about the build process, documentation generation means, unittests - in short - if you want to hack parts of arangod this could be interesting for you.
Configure CMake
========= =====
* *--enable-relative* - relative mode so you can run without make install * *--enable-relative* - relative mode so you can run without make install
* *--enable-maintainer-mode* - generate lex/yacc files * *--enable-maintainer-mode* - generate lex/yacc files
* *--with-backtrace* - add backtraces to native code asserts & exceptions * *--with-backtrace* - add backtraces to native code asserts & exceptions
@@ -26,8 +26,8 @@ At runtime arangod needs to be started with these options:
--javascript.v8-options="--gdbjit_dump" --javascript.v8-options="--gdbjit_dump"
--javascript.v8-options="--gdbjit_full" --javascript.v8-options="--gdbjit_full"
Debugging the Make process Debugging the build process
-------------------------- ---------------------------
If the compile fails for no obvious reason, appending 'verbose=' adds more output. For some reason V8 uses VERBOSE=1 for the same effect. If the compile fails for no obvious reason, appending 'verbose=' adds more output. For some reason V8 uses VERBOSE=1 for the same effect.
Runtime Runtime
@@ -53,7 +53,8 @@ A sample version to help working with the arangod rescue console may look like t
}; };
print = internal.print; print = internal.print;
__________________________________________________________________________________________________________ HINT: You shouldn't lean on these variables in your foxx services.
______________________________________________________________________________________________________
JSLint JSLint
====== ======
@@ -63,19 +64,19 @@ Make target
----------- -----------
use use
make gitjslint ./utils/gitjslint.sh
to lint your modified files. to lint your modified files.
make jslint ./utils/jslint.sh
to find out whether all of your files comply with jslint. This is required to make continuous integration work smoothly. to find out whether all of your files comply with jslint. This is required to make continuous integration work smoothly.
if you want to add new files / patterns to this make target, edit js/Makefile.files if you want to add new files / patterns to this make target, edit the respective shell scripts.
To avoid committing non-linted code, add **.git/hooks/pre-commit** with: To avoid committing non-linted code, add **.git/hooks/pre-commit** with:
make gitjslint ./utils/jslint.sh
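A minimal sketch of such a hook (a suggestion, not part of the repository; it assumes the lint script lives at `./utils/jslint.sh` relative to the checkout root, and skips the check when that script is absent):

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit sketch: run the lint script and
# abort the commit when it reports errors. The script path is an
# assumption about the checkout layout; when the script is not
# present the hook does nothing, so it never blocks unrelated repos.
LINT=./utils/jslint.sh
if [ -x "$LINT" ]; then
  "$LINT" || { echo "jslint failed - commit aborted" >&2; exit 1; }
fi
```

Remember to make the hook executable (`chmod +x .git/hooks/pre-commit`), otherwise git silently ignores it.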
Use jslint standalone for your js file Use jslint standalone for your js file
@@ -83,15 +84,17 @@ Use jslint standalone for your js file
If you want to find errors in your js file, jslint is very handy - like a compiler is for C/C++. If you want to find errors in your js file, jslint is very handy - like a compiler is for C/C++.
You can invoke it like this: You can invoke it like this:
bin/arangosh --jslint js/server/modules/@arangodb/testing.js bin/arangosh --jslint js/client/modules/@arangodb/testing.js
__________________________________________________________________________________________________________ _____________________________________________________________________________________________________
ArangoDB Unittesting Framework ArangoDB Unittesting Framework
============================== ==============================
Dependencies Dependencies
------------ ------------
* Ruby, rspec, httparty, boost_test (compile time) * *Ruby*, *rspec*, *httparty*; to install the required dependencies run:
cd UnitTests/HttpInterface; bundler
* boost_test (compile time)
Filename conventions Filename conventions
@@ -141,7 +144,7 @@ There are several major places where unittests live:
HttpInterface - RSpec Client Tests HttpInterface - RSpec Client Tests
--------------------------------- ----------------------------------
These tests work on the plain RESTful interface of arangodb, and thus also test invalid HTTP requests and so forth, plus check error handling in the server. These tests work on the plain RESTful interface of arangodb, and thus also test invalid HTTP requests and so forth, plus check error handling in the server.
@@ -158,50 +161,14 @@ arangosh is similar, however, you can only run tests which are intended to be ra
require("jsunity").runTest("js/client/tests/shell-client.js"); require("jsunity").runTest("js/client/tests/shell-client.js");
mocha tests
-----------
All tests with -spec in their names are using the [mochajs.org](https://mochajs.org) framework.
jasmine tests jasmine tests
------------- -------------
Jasmine tests cover testing the UI components of aardvark
Jasmine tests cover two important use cases:
- testing the UI components of aardvark
Invocation methods
==================
Make-targets
------------
Most of the tests can be invoked via the main Makefile: (UnitTests/Makefile.unittests)
- unittests
- unittests-brief
- unittests-verbose
- unittests-recovery
- unittests-config
- unittests-boost
- unittests-single
- unittests-shell-server
- unittests-shell-server-only
- unittests-shell-server-aql
- unittests-shell-client-readonly
- unittests-shell-client
- unittests-http-server
- unittests-ssl-server
- unittests-import
- unittests-replication
- unittests-replication-server
- unittests-replication-http
- unittests-replication-data
- unittests-upgrade
- unittests-dfdb
- unittests-foxx-manager
- unittests-dump
- unittests-arangob
- unittests-authentication
- unittests-authentication-parameters
Javascript framework Javascript framework
-------------------- --------------------
@@ -225,7 +192,6 @@ Available choices include:
- *all*: (calls multiple) This target is utilized by most of the jenkins builds invoking unit tests. - *all*: (calls multiple) This target is utilized by most of the jenkins builds invoking unit tests.
- *single_client*: (see Running a single unittestsuite) - *single_client*: (see Running a single unittestsuite)
- *single_server*: (see Running a single unittestsuite) - *single_server*: (see Running a single unittestsuite)
- *single_localserver*: (see Running a single unittestsuite)
- many more - call without arguments for more details. - many more - call without arguments for more details.
Passing Options Passing Options
@@ -248,7 +214,7 @@ syntax --option value --sub:option value. Using Valgrind could look like this:
- we specify the test to execute - we specify the test to execute
- we specify some arangod arguments via --extraargs which increase the server performance - we specify some arangod arguments via --extraargs which increase the server performance
- we specify to run using valgrind (this is supported by all facilities - we specify to run using valgrind (this is supported by all facilities)
- we specify some valgrind commandline arguments - we specify some valgrind commandline arguments
Running a single unittestsuite Running a single unittestsuite
@@ -266,7 +232,7 @@ Testing a single rspec test:
scripts/unittest http_server --test api-users-spec.rb scripts/unittest http_server --test api-users-spec.rb
**scripts/unittest** is mostly only a wrapper; the backend functionality lives in: **scripts/unittest** is mostly only a wrapper; the backend functionality lives in:
**js/server/modules/@arangodb/testing.js** **js/client/modules/@arangodb/testing.js**
Running foxx tests with a fake foxx Repo Running foxx tests with a fake foxx Repo
---------------------------------------- ----------------------------------------
@@ -292,8 +258,6 @@ arangod commandline arguments
bin/arangod /tmp/dataUT --javascript.unit-tests="js/server/tests/aql-escaping.js" --no-server bin/arangod /tmp/dataUT --javascript.unit-tests="js/server/tests/aql-escaping.js" --no-server
make unittest
js/common/modules/loadtestrunner.js js/common/modules/loadtestrunner.js
__________________________________________________________________________________________________________ __________________________________________________________________________________________________________
@@ -339,7 +303,7 @@ These commands for `-c` mean:
If you don't specify them via -c you can also use them in an interactive manner. If you don't specify them via -c you can also use them in an interactive manner.
__________________________________________________________________________________________________________ ______________________________________________________________________________________________________
Documentation Documentation
============= =============
@@ -351,7 +315,7 @@ Dependencies to build documentation:
https://pypi.python.org/pypi/setuptools https://pypi.python.org/pypi/setuptools
Download setuptools zip file, extract to any folder, use bundled python 2.6 to install: Download setuptools zip file, extract to any folder to install:
python ez_install.py python ez_install.py
@@ -361,7 +325,7 @@ Dependencies to build documentation:
https://github.com/triAGENS/markdown-pp/ https://github.com/triAGENS/markdown-pp/
Checkout the code with Git, use bundled python 2.6 to install: Checkout the code with Git, use your system python to install:
python setup.py install python setup.py install
@@ -389,8 +353,8 @@ Dependencies to build documentation:
Generate users documentation Generate users documentation
============================ ============================
If you've edited REST-Documentation, first invoke `make swagger`.
If you've edited examples, see below how to regenerate them. If you've edited examples, see below how to regenerate them.
If you've edited REST-Documentation, first invoke `./utils/generateSwagger.sh`.
Run the `make` command in `arangodb/Documentation/Books` to generate it. Run the `make` command in `arangodb/Documentation/Books` to generate it.
The documentation will be generated into `arangodb/Documentation/Books/books/Users` - The documentation will be generated into `arangodb/Documentation/Books/books/Users` -
use your favourite browser to read it. use your favourite browser to read it.
@@ -441,24 +405,23 @@ Generate an ePub:
Where to add new... Where to add new...
------------------- -------------------
- js/action/api/* - markdown comments in source with execution section - Documentation/DocuBlocks/* - markdown comments with execution section
- Documentation/Books/Users/SUMMARY.md - index of all sub documentations - Documentation/Books/Users/SUMMARY.md - index of all sub documentations
- Documentation/Scripts/generateSwaggerApi.py - list of all sections to be adjusted if
generate generate
-------- --------
- `./scripts/generateExamples --onlyThisOne geoIndexSelect` will only produce one example - *geoIndexSelect* - `./utils/generateExamples.sh --onlyThisOne geoIndexSelect` will only produce one example - *geoIndexSelect*
- `./scripts/generateExamples --onlyThisOne 'MOD.*'` will only produce the examples matching that regex; Note that - `./utils/generateExamples.sh --onlyThisOne 'MOD.*'` will only produce the examples matching that regex; Note that
examples with enumerations in their name may build on others in their series - so you should generate the whole group. examples with enumerations in their name may build on others in their series - so you should generate the whole group.
- `./scripts/generateExamples --server.endpoint tcp://127.0.0.1:8529` will utilize an existing arangod instead of starting a new one. - `./utils/generateExamples.sh --server.endpoint tcp://127.0.0.1:8529` will utilize an existing arangod instead of starting a new one.
This seriously cuts down the execution time. This seriously cuts down the execution time.
- alternatively you can use generateExamples (i.e. on windows since the make target is not portable) like this: - you can use generateExamples like this:
`./scripts/generateExamples `./utils/generateExamples.sh \
--server.endpoint 'tcp://127.0.0.1:8529' --server.endpoint 'tcp://127.0.0.1:8529' \
--withPython 3rdParty/V8-4.3.61/third_party/python_26/python26.exe --withPython C:/tools/python2/python.exe \
--onlyThisOne 'MOD.*'` --onlyThisOne 'MOD.*'`
- `./Documentation/Scripts/allExamples.sh` generates a file where you can inspect all examples for readability. - `./Documentation/Scripts/allExamples.sh` generates a file where you can inspect all examples for readability.
- `make swagger` - on top level to generate the documentation interactively with the server; you may use - `./utils/generateSwagger.sh` - on top level to generate the documentation interactively with the server; you may use
[the swagger editor](https://github.com/swagger-api/swagger-editor) to revalidate whether [the swagger editor](https://github.com/swagger-api/swagger-editor) to revalidate whether
*js/apps/system/_admin/aardvark/APP/api-docs.json* is accurate. *js/apps/system/_admin/aardvark/APP/api-docs.json* is accurate.
- `cd Documentation/Books; make` - to generate the HTML documentation - `cd Documentation/Books; make` - to generate the HTML documentation
@@ -480,18 +443,17 @@ Read / use the documentation
arangod Example tool arangod Example tool
==================== ====================
`make example` picks examples from the source code documentation, executes them, and creates a transcript including their results. `./utils/generateExamples.sh` picks examples from the code documentation, executes them, and creates a transcript including their results.
*Hint: Windows users may use ./scripts/generateExamples for this purpose*
Here is how it works in detail: Here is how it works in detail:
- all files ending with *.cpp*, *.js* and *.mdpp* are searched. - all *Documentation/DocuBlocks/*.md* and *Documentation/Books/*.mdpp* files are searched.
- all lines inside of source code starting with '///' are matched, as well as all lines in .mdpp files. - all lines inside of source code starting with '///' are matched, as well as all lines in .mdpp files.
- an example start is marked with *@EXAMPLE_ARANGOSH_OUTPUT* or *@EXAMPLE_ARANGOSH_RUN* - an example start is marked with *@EXAMPLE_ARANGOSH_OUTPUT* or *@EXAMPLE_ARANGOSH_RUN*
- the example is named by the string provided in brackets after the above key - the example is named by the string provided in brackets after the above key
- the output is written to `Documentation/Examples/<name>.generated` - the output is written to `Documentation/Examples/<name>.generated`
- examples end with *@END_EXAMPLE_[OUTPUT|RUN]* - examples end with *@END_EXAMPLE_[OUTPUT|RUN]*
- all code in between is executed as javascript in the **arangosh** while talking to a valid **arangod**. You may inspect the - all code in between is executed as javascript in the **arangosh** while talking to a valid **arangod**.
generated js code in `/tmp/arangosh.examples.js` You may inspect the generated js code in `/tmp/arangosh.examples.js`
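The marker convention above can be illustrated with a tiny made-up docublock; the file name and its contents are invented for illustration, and the name between the braces is what `--onlyThisOne` matches on:

```shell
# Sketch of the marker convention: a made-up docublock is written to a
# temporary file and the example name is extracted from the opening
# marker, the way the generator selects examples by name.
cat > /tmp/exampleBlock.txt <<'EOF'
/// @EXAMPLE_ARANGOSH_OUTPUT{geoIndexSelect}
/// db.demo.ensureIndex({ type: "geo", fields: [ "loc" ] });
/// @END_EXAMPLE_OUTPUT
EOF
# pull the example name out of the @EXAMPLE_ARANGOSH_OUTPUT{...} marker
name=$(sed -n 's/.*@EXAMPLE_ARANGOSH_OUTPUT{\([^}]*\)}.*/\1/p' /tmp/exampleBlock.txt)
echo "$name"   # prints: geoIndexSelect
```

The generator would then run the `///` lines through arangosh and write the transcript to `Documentation/Examples/geoIndexSelect.generated`.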
OUTPUT and RUN specifics OUTPUT and RUN specifics
--------------------------- ---------------------------
@@ -535,7 +497,7 @@ sortable naming scheme so they're executed in sequence. Using `<modulename>_<seq
Swagger integration Swagger integration
=================== ===================
`make swagger` scans the sourcecode, and generates swagger output. `./utils/generateSwagger.sh` scans the documentation, and generates swagger output.
It scans for all documentationblocks containing `@RESTHEADER`. It scans for all documentationblocks containing `@RESTHEADER`.
It is a prerequisite for integrating these blocks into the gitbook documentation. It is a prerequisite for integrating these blocks into the gitbook documentation.


@@ -316,6 +316,7 @@ BOOST_AUTO_TEST_CASE (tst_geo1000) {
gcmass(1009,list1,5, 53245966); gcmass(1009,list1,5, 53245966);
list1 = GeoIndex_ReadCursor(gcr,5); list1 = GeoIndex_ReadCursor(gcr,5);
gcmass(1010,list1,4, 86589238); gcmass(1010,list1,4, 86589238);
GeoIndex_CursorFree(gcr);
MyFree(gi); MyFree(gi);
} }
@@ -365,6 +366,8 @@ gcp.longitude= 25.5;
gcr = GeoIndex_NewCursor(gi,&gcp); gcr = GeoIndex_NewCursor(gi,&gcp);
list1 = GeoIndex_ReadCursor(gcr,1); list1 = GeoIndex_ReadCursor(gcr,1);
icheck(11,1,list1->length); icheck(11,1,list1->length);
GeoIndex_CoordinatesFree(list1);
GeoIndex_CursorFree(gcr);
gcp.latitude = 89.9; gcp.latitude = 89.9;
gcp.longitude = -180.0; gcp.longitude = -180.0;
gcp.data = ix + 64; gcp.data = ix + 64;
@@ -394,6 +397,7 @@ gcp.latitude = 89.9;
gcp.longitude = -180.0; gcp.longitude = -180.0;
gcp.data = ix + 64; gcp.data = ix + 64;
GeoIndex_insert(gi,&gcp); GeoIndex_insert(gi,&gcp);
GeoIndex_CoordinatesFree(list1);
list1 = GeoIndex_NearestCountPoints(gi,&gcp,1); list1 = GeoIndex_NearestCountPoints(gi,&gcp,1);
gccheck(13,list1, 1,"AAAAAAAAAAAAAAAABAAAAAAAA"); gccheck(13,list1, 1,"AAAAAAAAAAAAAAAABAAAAAAAA");
gicheck(14,gi); gicheck(14,gi);
@@ -541,37 +545,37 @@ MyFree(gi);
/* in some chaotic ways */ /* in some chaotic ways */
BOOST_AUTO_TEST_CASE (tst_geo70) { BOOST_AUTO_TEST_CASE (tst_geo70) {
gi=GeoIndex_new(); gi=GeoIndex_new();
gcp.latitude = 0.0; gcp.latitude = 0.0;
gcp.longitude = 40.0; gcp.longitude = 40.0;
gcp.data = &ix[4]; gcp.data = &ix[4];
i = GeoIndex_insert(gi,&gcp); i = GeoIndex_insert(gi,&gcp);
icheck(70,0,i); icheck(70,0,i);
gcp.data = &ix[5]; gcp.data = &ix[5];
i = GeoIndex_remove(gi,&gcp); i = GeoIndex_remove(gi,&gcp);
icheck(71,-1,i); icheck(71,-1,i);
gcp.longitude = 40.000001; gcp.longitude = 40.000001;
gcp.data = &ix[4]; gcp.data = &ix[4];
i = GeoIndex_remove(gi,&gcp); i = GeoIndex_remove(gi,&gcp);
icheck(72,-1,i); icheck(72,-1,i);
gcp.latitude = 0.0000000001; gcp.latitude = 0.0000000001;
gcp.longitude = 40.0; gcp.longitude = 40.0;
i = GeoIndex_remove(gi,&gcp); i = GeoIndex_remove(gi,&gcp);
icheck(73,-1,i); icheck(73,-1,i);
gcp.latitude = 0.0; gcp.latitude = 0.0;
i = GeoIndex_remove(gi,&gcp); i = GeoIndex_remove(gi,&gcp);
icheck(74,0,i); icheck(74,0,i);
i = GeoIndex_remove(gi,&gcp); i = GeoIndex_remove(gi,&gcp);
icheck(75,-1,i); icheck(75,-1,i);
for(j=1;j<=8;j++) for(j=1;j<=8;j++)
{ {
gcp.latitude = 0.0; gcp.latitude = 0.0;
lo=j; lo=j;
lo=lo*10; lo=lo*10;
@@ -579,73 +583,76 @@ for(j=1;j<=8;j++)
gcp.data = &ix[j]; gcp.data = &ix[j];
i = GeoIndex_insert(gi,&gcp); i = GeoIndex_insert(gi,&gcp);
icheck(76,0,i); icheck(76,0,i);
} }
gcp.latitude = 0.0; gcp.latitude = 0.0;
gcp.longitude= 25.5; gcp.longitude= 25.5;
list1 = GeoIndex_NearestCountPoints(gi,&gcp,1); list1 = GeoIndex_NearestCountPoints(gi,&gcp,1);
icheck(77,1,list1->length); icheck(77,1,list1->length);
dcheck(78,0.0,list1->coordinates[0].latitude,0.0); dcheck(78,0.0,list1->coordinates[0].latitude,0.0);
dcheck(79,30.0,list1->coordinates[0].longitude,0.0); dcheck(79,30.0,list1->coordinates[0].longitude,0.0);
pcheck(80,&ix[3],(char *)list1->coordinates[0].data); pcheck(80,&ix[3],(char *)list1->coordinates[0].data);
gcp.longitude= 24.5; gcp.longitude= 24.5;
list1 = GeoIndex_NearestCountPoints(gi,&gcp,1); GeoIndex_CoordinatesFree(list1);
icheck(81,1,list1->length); list1 = GeoIndex_NearestCountPoints(gi,&gcp,1);
dcheck(82,0.0,list1->coordinates[0].latitude,0.0); icheck(81,1,list1->length);
dcheck(83,20.0,list1->coordinates[0].longitude,0.0); dcheck(82,0.0,list1->coordinates[0].latitude,0.0);
pcheck(84,&ix[2],(char *)list1->coordinates[0].data); dcheck(83,20.0,list1->coordinates[0].longitude,0.0);
pcheck(84,&ix[2],(char *)list1->coordinates[0].data);
GeoIndex_CoordinatesFree(list1);
gcp.latitude = 1.0; gcp.latitude = 1.0;
gcp.longitude = 40.0; gcp.longitude = 40.0;
gcp.data = &ix[14]; gcp.data = &ix[14];
i = GeoIndex_insert(gi,&gcp); i = GeoIndex_insert(gi,&gcp);
icheck(85,0,i); icheck(85,0,i);
gcp.longitude = 8000.0; gcp.longitude = 8000.0;
i = GeoIndex_insert(gi,&gcp); i = GeoIndex_insert(gi,&gcp);
icheck(86,-3,i); icheck(86,-3,i);
gcp.latitude = 800.0; gcp.latitude = 800.0;
gcp.longitude = 80.0; gcp.longitude = 80.0;
i = GeoIndex_insert(gi,&gcp); i = GeoIndex_insert(gi,&gcp);
icheck(86,-3,i); icheck(86,-3,i);
gcp.latitude = 800.0; gcp.latitude = 800.0;
gcp.longitude = 80.0; gcp.longitude = 80.0;
i = GeoIndex_remove(gi,&gcp); i = GeoIndex_remove(gi,&gcp);
icheck(87,-3,i); icheck(87,-3,i);
gcp.latitude = 1.0; gcp.latitude = 1.0;
gcp.longitude = 40.0; gcp.longitude = 40.0;
gcp.data = &ix[14]; gcp.data = &ix[14];
i = GeoIndex_remove(gi,&gcp); i = GeoIndex_remove(gi,&gcp);
icheck(88,0,i); icheck(88,0,i);
for(j=1;j<10;j++) for(j=1;j<10;j++)
{ {
gcp.latitude = 0.0; gcp.latitude = 0.0;
gcp.longitude = 40.0; gcp.longitude = 40.0;
gcp.data = &ix[20+j]; gcp.data = &ix[20+j];
i = GeoIndex_insert(gi,&gcp); i = GeoIndex_insert(gi,&gcp);
icheck(89,0,i); icheck(89,0,i);
} }
for(j=1;j<10;j++) for(j=1;j<10;j++)
{ {
gcp.latitude = 0.0; gcp.latitude = 0.0;
gcp.longitude = 40.0; gcp.longitude = 40.0;
gcp.data = &ix[20+j]; gcp.data = &ix[20+j];
i = GeoIndex_remove(gi,&gcp); i = GeoIndex_remove(gi,&gcp);
icheck(90,0,i); icheck(90,0,i);
} }
gcp.latitude = 0.0; gcp.latitude = 0.0;
gcp.longitude= 35.5; gcp.longitude= 35.5;
list1 = GeoIndex_NearestCountPoints(gi,&gcp,1); list1 = GeoIndex_NearestCountPoints(gi,&gcp,1);
icheck(91,1,list1->length); icheck(91,1,list1->length);
dcheck(92,0.0,list1->coordinates[0].latitude,0.0); dcheck(92,0.0,list1->coordinates[0].latitude,0.0);
dcheck(93,40.0,list1->coordinates[0].longitude,0.0); dcheck(93,40.0,list1->coordinates[0].longitude,0.0);
pcheck(94,&ix[4],(char *)list1->coordinates[0].data); pcheck(94,&ix[4],(char *)list1->coordinates[0].data);
GeoIndex_CoordinatesFree(list1);
list1 = GeoIndex_NearestCountPoints(gi,&gcp,10); list1 = GeoIndex_NearestCountPoints(gi,&gcp,10);
gccheck(95,list1, 8,"OPBAAAAAAAAAAAAAAAAAAAAAA"); gccheck(95,list1, 8,"OPBAAAAAAAAAAAAAAAAAAAAAA");
@@ -890,6 +897,7 @@ BOOST_AUTO_TEST_CASE (tst_geo200) {
} }
} }
} }
GeoIndex_CoordinatesFree(list1);
list1 = GeoIndex_PointsWithinRadius(gi,&gcp1,13000.0); list1 = GeoIndex_PointsWithinRadius(gi,&gcp1,13000.0);
if(list1->length==5) if(list1->length==5)


@@ -51,8 +51,7 @@ class ApplicationAgency : virtual public arangodb::rest::ApplicationFeature {
public: public:
ApplicationAgency(ApplicationEndpointServer*); explicit ApplicationAgency(ApplicationEndpointServer*);
~ApplicationAgency(); ~ApplicationAgency();


@@ -145,7 +145,6 @@ Ast::~Ast() {}
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////
TRI_json_t* Ast::toJson(TRI_memory_zone_t* zone, bool verbose) const { TRI_json_t* Ast::toJson(TRI_memory_zone_t* zone, bool verbose) const {
#warning Deprecated
TRI_json_t* json = TRI_CreateArrayJson(zone); TRI_json_t* json = TRI_CreateArrayJson(zone);
if (json == nullptr) { if (json == nullptr) {


@@ -472,7 +472,6 @@ struct CoordinatorInstanciator : public WalkerWorker<ExecutionNode> {
EngineInfo const& info, Collection* collection, EngineInfo const& info, Collection* collection,
QueryId& connectedId, std::string const& shardId, QueryId& connectedId, std::string const& shardId,
TRI_json_t* jsonPlan) { TRI_json_t* jsonPlan) {
#warning still Json inplace. Needs to be fixed
// create a JSON representation of the plan // create a JSON representation of the plan
Json result(Json::Object); Json result(Json::Object);


@@ -1,3 +1,4 @@
////////////////////////////////////////////////////////////////////////////////
/// @brief Infrastructure for ExecutionPlans /// @brief Infrastructure for ExecutionPlans
/// ///
/// DISCLAIMER /// DISCLAIMER


@@ -220,7 +220,6 @@ ExecutionPlan* ExecutionPlan::clone(Query const& query) {
arangodb::basics::Json ExecutionPlan::toJson(Ast* ast, TRI_memory_zone_t* zone, arangodb::basics::Json ExecutionPlan::toJson(Ast* ast, TRI_memory_zone_t* zone,
bool verbose) const { bool verbose) const {
#warning Remove this
// TODO // TODO
VPackBuilder b; VPackBuilder b;
_root->toVelocyPack(b, verbose); _root->toVelocyPack(b, verbose);


@@ -226,7 +226,7 @@ struct AgencyTransaction {
////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////
/// @brief shortcut to create a transaction with one operation /// @brief shortcut to create a transaction with one operation
////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////
AgencyTransaction(AgencyOperation operation) { explicit AgencyTransaction(AgencyOperation const& operation) {
operations.push_back(operation); operations.push_back(operation);
} }
}; };


@@ -375,11 +375,10 @@ bool ApplicationCluster::open() {
AgencyComm comm; AgencyComm comm;
AgencyCommResult result; AgencyCommResult result;
bool success;
do { do {
AgencyCommLocker locker("Current", "WRITE"); AgencyCommLocker locker("Current", "WRITE");
success = locker.successful(); bool success = locker.successful();
if (success) { if (success) {
VPackBuilder builder; VPackBuilder builder;
try { try {


@@ -677,6 +677,7 @@ int createDocumentOnCoordinator(
} }
responseCode = res.answer_code; responseCode = res.answer_code;
TRI_ASSERT(res.answer != nullptr);
auto parsedResult = res.answer->toVelocyPack(&VPackOptions::Defaults); auto parsedResult = res.answer->toVelocyPack(&VPackOptions::Defaults);
resultBody.swap(parsedResult); resultBody.swap(parsedResult);
return TRI_ERROR_NO_ERROR; return TRI_ERROR_NO_ERROR;
@@ -708,6 +709,7 @@ int createDocumentOnCoordinator(
} }
resultMap.emplace(res.shardID, tmpBuilder); resultMap.emplace(res.shardID, tmpBuilder);
} else { } else {
TRI_ASSERT(res.answer != nullptr);
resultMap.emplace(res.shardID, resultMap.emplace(res.shardID,
res.answer->toVelocyPack(&VPackOptions::Defaults)); res.answer->toVelocyPack(&VPackOptions::Defaults));
auto resultHeaders = res.answer->headers(); auto resultHeaders = res.answer->headers();
@@ -1250,7 +1252,7 @@ int getFilteredDocumentsOnCoordinator(
size_t resCount = TRI_LengthArrayJson(documents); size_t resCount = TRI_LengthArrayJson(documents);
for (size_t k = 0; k < resCount; ++k) { for (size_t k = 0; k < resCount; ++k) {
try { try {
TRI_json_t* element = TRI_LookupArrayJson(documents, k); TRI_json_t const* element = TRI_LookupArrayJson(documents, k);
std::string id = arangodb::basics::JsonHelper::checkAndGetStringValue( std::string id = arangodb::basics::JsonHelper::checkAndGetStringValue(
element, TRI_VOC_ATTRIBUTE_ID); element, TRI_VOC_ATTRIBUTE_ID);
auto tmpBuilder = basics::JsonHelper::toVelocyPack(element); auto tmpBuilder = basics::JsonHelper::toVelocyPack(element);


@@ -398,7 +398,7 @@ static inline node_t** FollowersNodes(void* data) {
uint8_t numAllocated = *head; uint8_t numAllocated = *head;
uint8_t* keys = (uint8_t*)(head + 2); // numAllocated + numEntries uint8_t* keys = (uint8_t*)(head + 2); // numAllocated + numEntries
return (node_t**)(uint8_t*)((keys + numAllocated) + Padding(numAllocated)); return reinterpret_cast<node_t**>(keys + numAllocated + Padding(numAllocated));
} }
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////
@@ -424,7 +424,7 @@ static inline node_t** FollowersNodesPos(void* data, uint32_t numAllocated) {
uint8_t* head = (uint8_t*)data; uint8_t* head = (uint8_t*)data;
uint8_t* keys = (uint8_t*)(head + 2); // numAllocated + numEntries uint8_t* keys = (uint8_t*)(head + 2); // numAllocated + numEntries
return (node_t**)(uint8_t*)((keys + numAllocated) + Padding(numAllocated)); return reinterpret_cast<node_t**>(keys + numAllocated + Padding(numAllocated));
} }
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////


@@ -1968,6 +1968,9 @@ int GeoIndex_remove(GeoIndex* gi, GeoCoordinate* c) {
/* user when the results of a search are finished with */ /* user when the results of a search are finished with */
/* =================================================== */ /* =================================================== */
void GeoIndex_CoordinatesFree(GeoCoordinates* clist) { void GeoIndex_CoordinatesFree(GeoCoordinates* clist) {
if (clist == nullptr) {
return;
}
TRI_Free(TRI_UNKNOWN_MEM_ZONE, clist->coordinates); TRI_Free(TRI_UNKNOWN_MEM_ZONE, clist->coordinates);
TRI_Free(TRI_UNKNOWN_MEM_ZONE, clist->distances); TRI_Free(TRI_UNKNOWN_MEM_ZONE, clist->distances);
TRI_Free(TRI_UNKNOWN_MEM_ZONE, clist); TRI_Free(TRI_UNKNOWN_MEM_ZONE, clist);
@@ -1993,10 +1996,7 @@ typedef struct {
} hpot; // pot for putting on the heap } hpot; // pot for putting on the heap
bool hpotcompare(hpot a, hpot b) { bool hpotcompare(hpot a, hpot b) {
if (a.dist > b.dist) return (a.dist > b.dist);
return true;
else
return false;
} }
typedef struct { typedef struct {
@@ -2036,17 +2036,20 @@ GeoFix makedist(GeoPot* pot, GeoDetailedPoint* gd) {
GeoCursor* GeoIndex_NewCursor(GeoIndex* gi, GeoCoordinate* c) { GeoCursor* GeoIndex_NewCursor(GeoIndex* gi, GeoCoordinate* c) {
GeoIx* gix; GeoIx* gix;
GeoCr* gcr;
hpot hp; hpot hp;
if (c->longitude < -180.0) return NULL; if (c->longitude < -180.0) return nullptr;
if (c->longitude > 180.0) return NULL; if (c->longitude > 180.0) return nullptr;
if (c->latitude < -90.0) return NULL; if (c->latitude < -90.0) return nullptr;
if (c->latitude > 90.0) return NULL; if (c->latitude > 90.0) return nullptr;
gix = (GeoIx*)gi; gix = (GeoIx*)gi;
gcr = static_cast<GeoCr*>( GeoCr* gcr = nullptr;
TRI_Allocate(TRI_UNKNOWN_MEM_ZONE, sizeof(GeoCr), false));
if (gcr == NULL) { try {
gcr = new GeoCr;
}
catch (...) { }
if (gcr == nullptr) {
return (GeoCursor*)gcr; return (GeoCursor*)gcr;
} }
gcr->Ix = gix; gcr->Ix = gix;
@@ -2145,13 +2148,11 @@ GeoCoordinates* GeoIndex_ReadCursor(GeoCursor* gc, int count) {
} }
void GeoIndex_CursorFree(GeoCursor* gc) { void GeoIndex_CursorFree(GeoCursor* gc) {
GeoCr* cr; if (gc == nullptr) {
if (gc == NULL) {
return; return;
} }
cr = (GeoCr*)gc; GeoCr* cr = reinterpret_cast<GeoCr*>(gc);
TRI_Free(TRI_UNKNOWN_MEM_ZONE, cr); delete cr;
return;
} }
/* =================================================== */ /* =================================================== */


@@ -360,7 +360,8 @@ bool ApplicationEndpointServer::createSslContext() {
// set options // set options
SSL_CTX_set_options(_sslContext, (long)_sslOptions); SSL_CTX_set_options(_sslContext, (long)_sslOptions);
LOG(INFO) << "using SSL options: " << _sslOptions; std::string sslOptions = stringifySslOptions(_sslOptions);
LOG(INFO) << "using SSL options: " << sslOptions;
if (!_sslCipherList.empty()) { if (!_sslCipherList.empty()) {
if (SSL_CTX_set_cipher_list(_sslContext, _sslCipherList.c_str()) != 1) { if (SSL_CTX_set_cipher_list(_sslContext, _sslCipherList.c_str()) != 1) {
@@ -455,3 +456,243 @@ bool ApplicationEndpointServer::createSslContext() {
return true; return true;
} }
std::string ApplicationEndpointServer::stringifySslOptions(uint64_t opts) const {
std::string result;
#ifdef SSL_OP_MICROSOFT_SESS_ID_BUG
if (opts & SSL_OP_MICROSOFT_SESS_ID_BUG) {
result.append(", SSL_OP_MICROSOFT_SESS_ID_BUG");
}
#endif
#ifdef SSL_OP_NETSCAPE_CHALLENGE_BUG
if (opts & SSL_OP_NETSCAPE_CHALLENGE_BUG) {
result.append(", SSL_OP_NETSCAPE_CHALLENGE_BUG");
}
#endif
#ifdef SSL_OP_LEGACY_SERVER_CONNECT
if (opts & SSL_OP_LEGACY_SERVER_CONNECT) {
result.append(", SSL_OP_LEGACY_SERVER_CONNECT");
}
#endif
#ifdef SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG
if (opts & SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG) {
result.append(", SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG");
}
#endif
#ifdef SSL_OP_TLSEXT_PADDING
if (opts & SSL_OP_TLSEXT_PADDING) {
result.append(", SSL_OP_TLSEXT_PADDING");
}
#endif
#ifdef SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER
if (opts & SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER) {
result.append(", SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER");
}
#endif
#ifdef SSL_OP_SAFARI_ECDHE_ECDSA_BUG
if (opts & SSL_OP_SAFARI_ECDHE_ECDSA_BUG) {
result.append(", SSL_OP_SAFARI_ECDHE_ECDSA_BUG");
}
#endif
#ifdef SSL_OP_SSLEAY_080_CLIENT_DH_BUG
if (opts & SSL_OP_SSLEAY_080_CLIENT_DH_BUG) {
result.append(", SSL_OP_SSLEAY_080_CLIENT_DH_BUG");
}
#endif
#ifdef SSL_OP_TLS_D5_BUG
if (opts & SSL_OP_TLS_D5_BUG) {
result.append(", SSL_OP_TLS_D5_BUG");
}
#endif
#ifdef SSL_OP_TLS_BLOCK_PADDING_BUG
if (opts & SSL_OP_TLS_BLOCK_PADDING_BUG) {
result.append(", SSL_OP_TLS_BLOCK_PADDING_BUG");
}
#endif
#ifdef SSL_OP_MSIE_SSLV2_RSA_PADDING
if (opts & SSL_OP_MSIE_SSLV2_RSA_PADDING) {
result.append(", SSL_OP_MSIE_SSLV2_RSA_PADDING");
}
#endif
#ifdef SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG
if (opts & SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG) {
result.append(", SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG");
}
#endif
#ifdef SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS
if (opts & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS) {
result.append(", SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS");
}
#endif
#ifdef SSL_OP_NO_QUERY_MTU
if (opts & SSL_OP_NO_QUERY_MTU) {
result.append(", SSL_OP_NO_QUERY_MTU");
}
#endif
#ifdef SSL_OP_COOKIE_EXCHANGE
if (opts & SSL_OP_COOKIE_EXCHANGE) {
result.append(", SSL_OP_COOKIE_EXCHANGE");
}
#endif
#ifdef SSL_OP_NO_TICKET
if (opts & SSL_OP_NO_TICKET) {
result.append(", SSL_OP_NO_TICKET");
}
#endif
#ifdef SSL_OP_CISCO_ANYCONNECT
if (opts & SSL_OP_CISCO_ANYCONNECT) {
result.append(", SSL_OP_CISCO_ANYCONNECT");
}
#endif
#ifdef SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION
if (opts & SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION) {
result.append(", SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION");
}
#endif
#ifdef SSL_OP_NO_COMPRESSION
if (opts & SSL_OP_NO_COMPRESSION) {
result.append(", SSL_OP_NO_COMPRESSION");
}
#endif
#ifdef SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION
if (opts & SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION) {
result.append(", SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION");
}
#endif
#ifdef SSL_OP_SINGLE_ECDH_USE
if (opts & SSL_OP_SINGLE_ECDH_USE) {
result.append(", SSL_OP_SINGLE_ECDH_USE");
}
#endif
#ifdef SSL_OP_SINGLE_DH_USE
if (opts & SSL_OP_SINGLE_DH_USE) {
result.append(", SSL_OP_SINGLE_DH_USE");
}
#endif
#ifdef SSL_OP_EPHEMERAL_RSA
if (opts & SSL_OP_EPHEMERAL_RSA) {
result.append(", SSL_OP_EPHEMERAL_RSA");
}
#endif
#ifdef SSL_OP_CIPHER_SERVER_PREFERENCE
if (opts & SSL_OP_CIPHER_SERVER_PREFERENCE) {
result.append(", SSL_OP_CIPHER_SERVER_PREFERENCE");
}
#endif
#ifdef SSL_OP_TLS_ROLLBACK_BUG
if (opts & SSL_OP_TLS_ROLLBACK_BUG) {
result.append(", SSL_OP_TLS_ROLLBACK_BUG");
}
#endif
#ifdef SSL_OP_NO_SSLv2
if (opts & SSL_OP_NO_SSLv2) {
result.append(", SSL_OP_NO_SSLv2");
}
#endif
#ifdef SSL_OP_NO_SSLv3
if (opts & SSL_OP_NO_SSLv3) {
result.append(", SSL_OP_NO_SSLv3");
}
#endif
#ifdef SSL_OP_NO_TLSv1
if (opts & SSL_OP_NO_TLSv1) {
result.append(", SSL_OP_NO_TLSv1");
}
#endif
#ifdef SSL_OP_NO_TLSv1_2
if (opts & SSL_OP_NO_TLSv1_2) {
result.append(", SSL_OP_NO_TLSv1_2");
}
#endif
#ifdef SSL_OP_NO_TLSv1_1
if (opts & SSL_OP_NO_TLSv1_1) {
result.append(", SSL_OP_NO_TLSv1_1");
}
#endif
#ifdef SSL_OP_NO_DTLSv1
if (opts & SSL_OP_NO_DTLSv1) {
result.append(", SSL_OP_NO_DTLSv1");
}
#endif
#ifdef SSL_OP_NO_DTLSv1_2
if (opts & SSL_OP_NO_DTLSv1_2) {
result.append(", SSL_OP_NO_DTLSv1_2");
}
#endif
#ifdef SSL_OP_NO_SSL_MASK
if (opts & SSL_OP_NO_SSL_MASK) {
result.append(", SSL_OP_NO_SSL_MASK");
}
#endif
#ifdef SSL_OP_PKCS1_CHECK_1
if (opts & SSL_OP_PKCS1_CHECK_1) {
result.append(", SSL_OP_PKCS1_CHECK_1");
}
#endif
#ifdef SSL_OP_PKCS1_CHECK_2
if (opts & SSL_OP_PKCS1_CHECK_2) {
result.append(", SSL_OP_PKCS1_CHECK_2");
}
#endif
#ifdef SSL_OP_NETSCAPE_CA_DN_BUG
if (opts & SSL_OP_NETSCAPE_CA_DN_BUG) {
result.append(", SSL_OP_NETSCAPE_CA_DN_BUG");
}
#endif
#ifdef SSL_OP_NETSCAPE_DEMO_CIPHER_CHANGE_BUG
if (opts & SSL_OP_NETSCAPE_DEMO_CIPHER_CHANGE_BUG) {
result.append(", SSL_OP_NETSCAPE_DEMO_CIPHER_CHANGE_BUG");
}
#endif
#ifdef SSL_OP_CRYPTOPRO_TLSEXT_BUG
if (opts & SSL_OP_CRYPTOPRO_TLSEXT_BUG) {
result.append(", SSL_OP_CRYPTOPRO_TLSEXT_BUG");
}
#endif
if (result.empty()) {
return result;
}
// strip initial comma
return result.substr(2);
}
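The new stringifySslOptions() above follows a common bitmask-to-string pattern: test each known flag, append ", NAME" for every set bit, then strip the leading separator once at the end. A minimal self-contained sketch of that pattern, using made-up OPT_* flag values in place of the real SSL_OP_* macros:

```cpp
#include <cstdint>
#include <string>

// Hypothetical flag values, standing in for the SSL_OP_* macros.
constexpr uint64_t OPT_NO_SSLV2 = 0x01;
constexpr uint64_t OPT_NO_SSLV3 = 0x02;
constexpr uint64_t OPT_NO_COMPRESSION = 0x04;

// Build a comma-separated list of the names of all set flags,
// mirroring the shape of stringifySslOptions(): append ", NAME"
// for each set bit, then strip the initial ", " at the end.
std::string stringifyOptions(uint64_t opts) {
  std::string result;
  if (opts & OPT_NO_SSLV2) {
    result.append(", OPT_NO_SSLV2");
  }
  if (opts & OPT_NO_SSLV3) {
    result.append(", OPT_NO_SSLV3");
  }
  if (opts & OPT_NO_COMPRESSION) {
    result.append(", OPT_NO_COMPRESSION");
  }
  if (result.empty()) {
    return result;
  }
  // strip initial comma and space
  return result.substr(2);
}
```

Appending-then-stripping avoids tracking a "first element" flag across the dozens of #ifdef'd branches.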

View File

@@ -94,6 +94,7 @@ class ApplicationEndpointServer : public ApplicationFeature {
 private:
   bool createSslContext();
+  std::string stringifySslOptions(uint64_t opts) const;

 protected:
   //////////////////////////////////////////////////////////////////////////////

View File

@@ -533,7 +533,7 @@ bool HttpCommTask::processRead() {
       static std::string const wwwAuthenticate = "www-authenticate";

       if (sendWwwAuthenticateHeader()) {
-        std::string const realm =
+        static std::string const realm =
             "basic realm=\"" +
             _server->handlerFactory()->authenticationRealm(_request) + "\"";

@@ -774,7 +774,7 @@ void HttpCommTask::fillWriteBuffer() {
 ////////////////////////////////////////////////////////////////////////////////

 void HttpCommTask::processCorsOptions(uint32_t compatibility) {
-  std::string const allowedMethods = "DELETE, GET, HEAD, PATCH, POST, PUT";
+  static std::string const allowedMethods = "DELETE, GET, HEAD, PATCH, POST, PUT";

   HttpResponse response(GeneralResponse::ResponseCode::OK, compatibility);

View File

@@ -220,7 +220,12 @@ int SkiplistIndex::insert(arangodb::Transaction*, TRI_doc_mptr_t const* doc,
                           bool) {
   std::vector<TRI_index_element_t*> elements;

-  int res = fillElement(elements, doc);
+  int res;
+  try {
+    res = fillElement(elements, doc);
+  } catch (...) {
+    res = TRI_ERROR_OUT_OF_MEMORY;
+  }

   if (res != TRI_ERROR_NO_ERROR) {
     for (auto& it : elements) {

@@ -239,11 +244,6 @@ int SkiplistIndex::insert(arangodb::Transaction*, TRI_doc_mptr_t const* doc,
   for (size_t i = 0; i < count; ++i) {
     res = _skiplistIndex->insert(elements[i]);

-    if (res == TRI_ERROR_ARANGO_UNIQUE_CONSTRAINT_VIOLATED && !_unique) {
-      // We ignore unique_constraint violated if we are not unique
-      res = TRI_ERROR_NO_ERROR;
-    }
-
     if (res != TRI_ERROR_NO_ERROR) {
       TRI_index_element_t::freeElement(elements[i]);
       // Note: this element is freed already

@@ -255,6 +255,10 @@ int SkiplistIndex::insert(arangodb::Transaction*, TRI_doc_mptr_t const* doc,
         // No need to free elements[j] skiplist has taken over already
       }

+      if (res == TRI_ERROR_ARANGO_UNIQUE_CONSTRAINT_VIOLATED && !_unique) {
+        // We ignore unique_constraint violated if we are not unique
+        res = TRI_ERROR_NO_ERROR;
+      }
       break;
     }
   }

@@ -269,7 +273,12 @@ int SkiplistIndex::remove(arangodb::Transaction*, TRI_doc_mptr_t const* doc,
                           bool) {
   std::vector<TRI_index_element_t*> elements;

-  int res = fillElement(elements, doc);
+  int res;
+  try {
+    res = fillElement(elements, doc);
+  } catch (...) {
+    res = TRI_ERROR_OUT_OF_MEMORY;
+  }

   if (res != TRI_ERROR_NO_ERROR) {
     for (auto& it : elements) {
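The SkiplistIndex changes above wrap the fill step in try/catch so that an exception (typically an allocation failure) is converted into an error code, letting the existing cleanup path run unconditionally. A stand-alone sketch of that exception-to-error-code pattern, with hypothetical error codes and a fillElements() stand-in:

```cpp
#include <new>
#include <vector>

// Hypothetical error codes standing in for TRI_ERROR_*.
constexpr int ERROR_NO_ERROR = 0;
constexpr int ERROR_OUT_OF_MEMORY = 3;

// Stand-in for fillElement(): may throw (e.g. std::bad_alloc) while
// growing the vector, but callers in the index code expect an error code.
int fillElements(std::vector<int>& elements, bool fail) {
  if (fail) {
    throw std::bad_alloc();
  }
  elements.push_back(1);
  return ERROR_NO_ERROR;
}

// The pattern from the patched insert()/remove(): convert any exception
// escaping the fill step into an error code, so the cleanup code below
// runs no matter how the fill step failed.
int insertDocument(bool fail) {
  std::vector<int> elements;
  int res;
  try {
    res = fillElements(elements, fail);
  } catch (...) {
    res = ERROR_OUT_OF_MEMORY;
  }
  if (res != ERROR_NO_ERROR) {
    elements.clear();  // cleanup path still runs
    return res;
  }
  return ERROR_NO_ERROR;
}
```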

View File

@@ -573,6 +573,7 @@ bool RestDocumentHandler::deleteDocument() {
     search = builder.slice();
   } else {
     try {
+      TRI_ASSERT(_request != nullptr);
      builderPtr = _request->toVelocyPack(transactionContext->getVPackOptions());
     } catch (...) {
       // If an error occurs here the body is not parsable. Fail with bad parameter

@@ -636,6 +637,7 @@ bool RestDocumentHandler::readManyDocuments() {
     return false;
   }

+  TRI_ASSERT(_request != nullptr);
   auto builderPtr = _request->toVelocyPack(transactionContext->getVPackOptions());
   VPackSlice search = builderPtr->slice();

View File

@@ -107,7 +107,7 @@ void RestJobHandler::putJob() {
   _response = response;

   // plus a new header
-  static std::string xArango = "x-arango-async-id";
+  static std::string const xArango = "x-arango-async-id";
   _response->setHeaderNC(xArango, value);
 }

View File

@@ -402,14 +402,14 @@ void RestReplicationHandler::handleCommandLoggerState() {
   VPackBuilder builder;
   builder.add(VPackValue(VPackValueType::Object));  // Base

-  arangodb::wal::LogfileManagerState const&& s =
+  arangodb::wal::LogfileManagerState const s =
       arangodb::wal::LogfileManager::instance()->state();

-  std::string const lastTickString(StringUtils::itoa(s.lastTick));
-
   // "state" part
   builder.add("state", VPackValue(VPackValueType::Object));
   builder.add("running", VPackValue(true));
-  builder.add("lastLogTick", VPackValue(lastTickString));
+  builder.add("lastLogTick", VPackValue(std::to_string(s.lastCommittedTick)));
+  builder.add("lastUncommittedLogTick", VPackValue(std::to_string(s.lastAssignedTick)));
   builder.add("totalEvents", VPackValue(s.numEvents));
   builder.add("time", VPackValue(s.timeString));
   builder.close();

@@ -813,7 +813,7 @@ void RestReplicationHandler::handleTrampolineCoordinator() {

 void RestReplicationHandler::handleCommandLoggerFollow() {
   // determine start and end tick
-  arangodb::wal::LogfileManagerState state =
+  arangodb::wal::LogfileManagerState const state =
       arangodb::wal::LogfileManager::instance()->state();

   TRI_voc_tick_t tickStart = 0;
   TRI_voc_tick_t tickEnd = UINT64_MAX;

@@ -933,7 +933,7 @@ void RestReplicationHandler::handleCommandLoggerFollow() {
   if (res == TRI_ERROR_NO_ERROR) {
     bool const checkMore = (dump._lastFoundTick > 0 &&
-                            dump._lastFoundTick != state.lastDataTick);
+                            dump._lastFoundTick != state.lastCommittedTick);

     // generate the result
     size_t const length = TRI_LengthStringBuffer(dump._buffer);

@@ -954,7 +954,7 @@ void RestReplicationHandler::handleCommandLoggerFollow() {
                            StringUtils::itoa(dump._lastFoundTick));
     _response->setHeaderNC(TRI_REPLICATION_HEADER_LASTTICK,
-                           StringUtils::itoa(state.lastTick));
+                           StringUtils::itoa(state.lastCommittedTick));
     _response->setHeaderNC(TRI_REPLICATION_HEADER_ACTIVE, "true");

@@ -991,10 +991,10 @@ void RestReplicationHandler::handleCommandLoggerFollow() {

 void RestReplicationHandler::handleCommandDetermineOpenTransactions() {
   // determine start and end tick
-  arangodb::wal::LogfileManagerState state =
+  arangodb::wal::LogfileManagerState const state =
       arangodb::wal::LogfileManager::instance()->state();

   TRI_voc_tick_t tickStart = 0;
-  TRI_voc_tick_t tickEnd = state.lastDataTick;
+  TRI_voc_tick_t tickEnd = state.lastCommittedTick;

   bool found;
   std::string const& value1 = _request->value("from", found);

@@ -1100,12 +1100,12 @@ void RestReplicationHandler::handleCommandInventory() {
   // "state"
   builder.add("state", VPackValue(VPackValueType::Object));

-  arangodb::wal::LogfileManagerState const&& s =
+  arangodb::wal::LogfileManagerState const s =
       arangodb::wal::LogfileManager::instance()->state();

   builder.add("running", VPackValue(true));
-  auto logTickString = std::to_string(s.lastTick);
-  builder.add("lastLogTick", VPackValue(logTickString));
+  builder.add("lastLogTick", VPackValue(std::to_string(s.lastCommittedTick)));
+  builder.add("lastUncommittedLogTick", VPackValue(std::to_string(s.lastAssignedTick)));
   builder.add("totalEvents", VPackValue(s.numEvents));
   builder.add("time", VPackValue(s.timeString));

@@ -3195,6 +3195,9 @@ void RestReplicationHandler::handleCommandSync() {
   config._includeSystem = includeSystem;
   config._verbose = verbose;

+  // wait until all data in current logfile got synced
+  arangodb::wal::LogfileManager::instance()->waitForSync(5.0);
+
   InitialSyncer syncer(_vocbase, &config, restrictCollections, restrictType,
                        verbose);

View File

@@ -2102,7 +2102,13 @@ bool Transaction::supportsFilterCondition(
     arangodb::aql::Variable const* reference, size_t itemsInIndex,
     size_t& estimatedItems, double& estimatedCost) {

-  return indexHandle.getIndex()->supportsFilterCondition(
+  auto idx = indexHandle.getIndex();
+  if (nullptr == idx) {
+    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER,
+                                   "The index id cannot be empty.");
+  }
+
+  return idx->supportsFilterCondition(
       condition, reference, itemsInIndex, estimatedItems, estimatedCost);
 }

@@ -2116,7 +2122,12 @@ std::vector<std::vector<arangodb::basics::AttributeName>>
 Transaction::getIndexFeatures(IndexHandle const& indexHandle, bool& isSorted,
                               bool& isSparse) {

-  std::shared_ptr<arangodb::Index> idx = indexHandle.getIndex();
+  auto idx = indexHandle.getIndex();
+  if (nullptr == idx) {
+    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER,
+                                   "The index id cannot be empty.");
+  }
   isSorted = idx->isSorted();
   isSparse = idx->sparse();
   return idx->fields();

@@ -2184,7 +2195,6 @@ std::shared_ptr<OperationCursor> Transaction::indexScanForCondition(
     arangodb::aql::Ast* ast, arangodb::aql::AstNode const* condition,
     arangodb::aql::Variable const* var, uint64_t limit, uint64_t batchSize,
     bool reverse) {
-#warning TODO Who checks if indexId is valid and is used for this collection?

   if (ServerState::instance()->isCoordinator()) {
     // The index scan is only available on DBServers and Single Server.

@@ -2200,6 +2210,10 @@ std::shared_ptr<OperationCursor> Transaction::indexScanForCondition(
   IndexIteratorContext ctxt(_vocbase, resolver());

   auto idx = indexId.getIndex();
+  if (nullptr == idx) {
+    THROW_ARANGO_EXCEPTION_MESSAGE(TRI_ERROR_BAD_PARAMETER,
+                                   "The index id cannot be empty.");
+  }

   std::unique_ptr<IndexIterator> iterator(idx->iteratorForCondition(this, &ctxt, ast, condition, var, reverse));

@@ -2223,7 +2237,6 @@ std::shared_ptr<OperationCursor> Transaction::indexScan(
     std::string const& collectionName, CursorType cursorType,
     IndexHandle const& indexId, VPackSlice const search, uint64_t skip,
     uint64_t limit, uint64_t batchSize, bool reverse) {
-#warning TODO Who checks if indexId is valid and is used for this collection?
   // For now we assume indexId is the iid part of the index.

   if (ServerState::instance()->isCoordinator()) {

View File

@@ -51,14 +51,15 @@ static void JS_StateLoggerReplication(
   TRI_V8_TRY_CATCH_BEGIN(isolate);
   v8::HandleScope scope(isolate);

-  arangodb::wal::LogfileManagerState s =
+  arangodb::wal::LogfileManagerState const s =
       arangodb::wal::LogfileManager::instance()->state();

   v8::Handle<v8::Object> result = v8::Object::New(isolate);
   v8::Handle<v8::Object> state = v8::Object::New(isolate);
   state->Set(TRI_V8_ASCII_STRING("running"), v8::True(isolate));
-  state->Set(TRI_V8_ASCII_STRING("lastLogTick"), V8TickId(isolate, s.lastTick));
+  state->Set(TRI_V8_ASCII_STRING("lastLogTick"), V8TickId(isolate, s.lastCommittedTick));
+  state->Set(TRI_V8_ASCII_STRING("lastUncommittedLogTick"), V8TickId(isolate, s.lastAssignedTick));
   state->Set(TRI_V8_ASCII_STRING("totalEvents"),
             v8::Number::New(isolate, (double)s.numEvents));
   state->Set(TRI_V8_ASCII_STRING("time"), TRI_V8_STD_STRING(s.timeString));

View File

@@ -562,6 +562,8 @@ class KeySpace {
     }
   }

+  TRI_ASSERT(dest != nullptr);
+
   if (!TRI_IsArrayJson(dest->json)) {
     TRI_V8_THROW_EXCEPTION(TRI_ERROR_INTERNAL);
   }

View File

@@ -2620,6 +2620,7 @@ static void MapGetVocBase(v8::Local<v8::String> const name,
       if (collection != nullptr && collection->_cid == 0) {
         delete collection;
+        collection = nullptr;
         TRI_V8_RETURN(v8::Handle<v8::Value>());
       }
     }

View File

@@ -828,7 +828,7 @@ TRI_collection_t* TRI_CreateCollection(
   // create collection structure
   if (collection == nullptr) {
     try {
-      TRI_collection_t* tmp = new TRI_collection_t(parameters);
+      TRI_collection_t* tmp = new TRI_collection_t(vocbase, parameters);
       collection = tmp;
     } catch (std::exception&) {
       collection = nullptr;

View File

@@ -294,11 +294,13 @@ struct TRI_collection_t {
   TRI_collection_t(TRI_collection_t const&) = delete;
   TRI_collection_t& operator=(TRI_collection_t const&) = delete;

-  TRI_collection_t()
-      : _tickMax(0), _state(TRI_COL_STATE_WRITE), _lastError(0) {}
+  TRI_collection_t() = delete;

-  explicit TRI_collection_t(arangodb::VocbaseCollectionInfo const& info)
-      : _info(info), _tickMax(0), _state(TRI_COL_STATE_WRITE), _lastError(0) {}
+  explicit TRI_collection_t(TRI_vocbase_t* vocbase)
+      : _vocbase(vocbase), _tickMax(0), _state(TRI_COL_STATE_WRITE), _lastError(0) {}
+
+  TRI_collection_t(TRI_vocbase_t* vocbase, arangodb::VocbaseCollectionInfo const& info)
+      : _info(info), _vocbase(vocbase), _tickMax(0), _state(TRI_COL_STATE_WRITE), _lastError(0) {}

   ~TRI_collection_t() = default;

View File

@@ -68,8 +68,9 @@ using namespace arangodb::basics;

 ////////////////////////////////////////////////////////////////////////////////
 /// @brief create a document collection
 ////////////////////////////////////////////////////////////////////////////////

-TRI_document_collection_t::TRI_document_collection_t()
-    : _lock(),
+TRI_document_collection_t::TRI_document_collection_t(TRI_vocbase_t* vocbase)
+    : TRI_collection_t(vocbase),
+      _lock(),
       _nextCompactionStartIndex(0),
       _lastCompactionStatus(nullptr),
       _useSecondaryIndexes(true),

@@ -1209,7 +1210,7 @@ TRI_document_collection_t* TRI_CreateDocumentCollection(
   // first create the document collection
   TRI_document_collection_t* document;
   try {
-    document = new TRI_document_collection_t();
+    document = new TRI_document_collection_t(vocbase);
   } catch (std::exception&) {
     document = nullptr;
   }

@@ -1753,7 +1754,7 @@ TRI_document_collection_t* TRI_OpenDocumentCollection(TRI_vocbase_t* vocbase,
   // first open the document collection
   TRI_document_collection_t* document = nullptr;
   try {
-    document = new TRI_document_collection_t();
+    document = new TRI_document_collection_t(vocbase);
   } catch (std::exception&) {
   }

@@ -3707,7 +3708,7 @@ int TRI_document_collection_t::remove(arangodb::Transaction* trx,
   TRI_ASSERT(marker == nullptr);

   // get the header pointer of the previous revision
-  TRI_doc_mptr_t* oldHeader;
+  TRI_doc_mptr_t* oldHeader = nullptr;
   VPackSlice key;
   if (slice.isString()) {
     key = slice;

@@ -3720,6 +3721,7 @@ int TRI_document_collection_t::remove(arangodb::Transaction* trx,
     return res;
   }

+  TRI_ASSERT(oldHeader != nullptr);
   prevRev = oldHeader->revisionIdAsSlice();
   previous = *oldHeader;

View File

@@ -36,6 +36,8 @@
 #include "VocBase/voc-types.h"
 #include "Wal/Marker.h"

+struct TRI_vocbase_t;
+
 namespace arangodb {
 class EdgeIndex;
 class Index;

@@ -90,7 +92,7 @@ struct TRI_doc_collection_info_t {
 ////////////////////////////////////////////////////////////////////////////////

 struct TRI_document_collection_t : public TRI_collection_t {
-  TRI_document_collection_t();
+  explicit TRI_document_collection_t(TRI_vocbase_t* vocbase);

   ~TRI_document_collection_t();

View File

@@ -1666,6 +1666,7 @@ TRI_vocbase_col_t* TRI_CreateCollectionVocBase(
   VPackBuilder builder;
   {
     VPackObjectBuilder b(&builder);
+    // note: cid may be modified by this function call
     collection =
         CreateCollection(vocbase, parameters, cid, writeMarker, builder);
   }

View File

@@ -1224,21 +1224,21 @@ char* CollectorThread::nextFreeMarkerPosition(
         goto leave;
       }

-      // must rotate the existing journal. now update its stats
-      if (cache->lastFid > 0) {
-        auto& dfi = createDfi(cache, cache->lastFid);
-        document->_datafileStatistics.increaseUncollected(cache->lastFid,
-                                                          dfi.numberUncollected);
-        // and reset afterwards
-        dfi.numberUncollected = 0;
-      }
-
       // journal is full, close it and sync
       LOG_TOPIC(DEBUG, Logger::COLLECTOR) << "closing full journal '" << datafile->getName(datafile)
                                           << "'";
       TRI_CloseDatafileDocumentCollection(document, i, false);
     }

+    // must rotate the existing journal. now update its stats
+    if (cache->lastFid > 0) {
+      auto& dfi = getDfi(cache, cache->lastFid);
+      document->_datafileStatistics.increaseUncollected(cache->lastFid,
+                                                        dfi.numberUncollected);
+      // and reset afterwards
+      dfi.numberUncollected = 0;
+    }
+
     datafile =
         TRI_CreateDatafileDocumentCollection(document, tick, targetSize, false);

@@ -1256,6 +1256,9 @@ char* CollectorThread::nextFreeMarkerPosition(
       THROW_ARANGO_EXCEPTION(res);
     }

+    cache->lastDatafile = datafile;
+    cache->lastFid = datafile->_fid;
+
   }  // next iteration

 leave:
leave: leave:

View File

@@ -938,6 +938,42 @@ int LogfileManager::flush(bool waitForSync, bool waitForCollector,
   return res;
 }
////////////////////////////////////////////////////////////////////////////////
/// wait until all changes to the current logfile are synced
////////////////////////////////////////////////////////////////////////////////
bool LogfileManager::waitForSync(double maxWait) {
TRI_ASSERT(!_inRecovery);
double const end = TRI_microtime() + maxWait;
TRI_voc_tick_t lastAssignedTick = 0;
while (true) {
// fill the state
LogfileManagerState state;
_slots->statistics(state.lastAssignedTick, state.lastCommittedTick, state.lastCommittedDataTick, state.numEvents);
if (lastAssignedTick == 0) {
// get last assigned tick only once
lastAssignedTick = state.lastAssignedTick;
}
// now compare last committed tick with first lastAssigned tick that we got
if (state.lastCommittedTick >= lastAssignedTick) {
// everything was already committed
return true;
}
// not everything was committed yet. wait a bit
usleep(10000);
if (TRI_microtime() >= end) {
// time's up!
return false;
}
}
}
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief re-inserts a logfile back into the inventory only
 ////////////////////////////////////////////////////////////////////////////////

@@ -1656,7 +1692,7 @@ LogfileManagerState LogfileManager::state() {
   LogfileManagerState state;

   // now fill the state
-  _slots->statistics(state.lastTick, state.lastDataTick, state.numEvents);
+  _slots->statistics(state.lastAssignedTick, state.lastCommittedTick, state.lastCommittedDataTick, state.numEvents);
   state.timeString = getTimeString();

   return state;
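The waitForSync() function added in this file is a poll-until-deadline loop: capture a target once, then repeatedly re-check progress against it, sleeping briefly between checks, until the target is reached or the deadline passes. A generic sketch of that loop shape, using std::chrono and std::this_thread in place of TRI_microtime()/usleep (pollUntil and its condition callback are illustrative names, not part of the codebase):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll `condition` until it holds or `maxWaitSeconds` elapse.
// `condition` stands in for the "lastCommittedTick >= lastAssignedTick"
// check in LogfileManager::waitForSync().
bool pollUntil(std::function<bool()> condition, double maxWaitSeconds) {
  auto const end = std::chrono::steady_clock::now() +
                   std::chrono::duration<double>(maxWaitSeconds);
  while (true) {
    if (condition()) {
      // everything was already committed
      return true;
    }
    // not done yet. wait a bit (the original sleeps 10 ms via usleep)
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    if (std::chrono::steady_clock::now() >= end) {
      // time's up!
      return false;
    }
  }
}
```

Capturing the target tick only once (as waitForSync() does with lastAssignedTick) matters: polling against a moving target could wait forever under continuous write load.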

View File

@@ -66,8 +66,9 @@ struct LogfileRange {
 typedef std::vector<LogfileRange> LogfileRanges;

 struct LogfileManagerState {
-  TRI_voc_tick_t lastTick;
-  TRI_voc_tick_t lastDataTick;
+  TRI_voc_tick_t lastAssignedTick;
+  TRI_voc_tick_t lastCommittedTick;
+  TRI_voc_tick_t lastCommittedDataTick;
   uint64_t numEvents;
   std::string timeString;
 };

@@ -405,6 +406,12 @@ class LogfileManager : public rest::ApplicationFeature {
   int flush(bool, bool, bool);

+  //////////////////////////////////////////////////////////////////////////////
+  /// wait until all changes to the current logfile are synced
+  //////////////////////////////////////////////////////////////////////////////
+
+  bool waitForSync(double);
+
   //////////////////////////////////////////////////////////////////////////////
   /// @brief re-inserts a logfile back into the inventory only
   //////////////////////////////////////////////////////////////////////////////

View File

@@ -68,11 +68,14 @@ Slots::~Slots() { delete[] _slots; }
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief get the statistics of the slots
 ////////////////////////////////////////////////////////////////////////////////

-void Slots::statistics(Slot::TickType& lastTick, Slot::TickType& lastDataTick,
+void Slots::statistics(Slot::TickType& lastAssignedTick,
+                       Slot::TickType& lastCommittedTick,
+                       Slot::TickType& lastCommittedDataTick,
                        uint64_t& numEvents) {
   MUTEX_LOCKER(mutexLocker, _lock);
-  lastTick = _lastCommittedTick;
-  lastDataTick = _lastCommittedDataTick;
+  lastAssignedTick = _lastAssignedTick;
+  lastCommittedTick = _lastCommittedTick;
+  lastCommittedDataTick = _lastCommittedDataTick;
   numEvents = _numEvents;
 }

View File

@@ -95,7 +95,7 @@ class Slots {
   /// @brief get the statistics of the slots
   //////////////////////////////////////////////////////////////////////////////

-  void statistics(Slot::TickType&, Slot::TickType&, uint64_t&);
+  void statistics(Slot::TickType&, Slot::TickType&, Slot::TickType&, uint64_t&);

   //////////////////////////////////////////////////////////////////////////////
   /// @brief execute a flush operation

View File

@@ -59,8 +59,10 @@ void SynchronizerThread::beginShutdown() {
 void SynchronizerThread::signalSync() {
   CONDITION_LOCKER(guard, _condition);
-  ++_waiting;
-  _condition.signal();
+  if (++_waiting == 1) {
+    // only signal once
+    _condition.signal();
+  }
 }

 ////////////////////////////////////////////////////////////////////////////////
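The signalSync() change above coalesces wakeups: only the transition from zero to one pending request signals the condition variable; further requests piggyback on the wakeup already in flight, and the worker drains the whole batch at once. A minimal counter-only model of that decision (SyncSignaler is a hypothetical type sketching the `++_waiting == 1` logic under a plain mutex, without a real condition variable):

```cpp
#include <mutex>

struct SyncSignaler {
  std::mutex lock;
  int waiting = 0;      // pending sync requests, like _waiting
  int signalsSent = 0;  // how many wakeups were actually issued

  // models the patched signalSync(): only the 0 -> 1 transition signals
  void signalSync() {
    std::lock_guard<std::mutex> guard(lock);
    if (++waiting == 1) {
      // only signal once per batch of requests
      ++signalsSent;
    }
  }

  // models the synchronizer thread draining all pending requests in one pass
  int drain() {
    std::lock_guard<std::mutex> guard(lock);
    int n = waiting;
    waiting = 0;
    return n;
  }
};
```

Three back-to-back signalSync() calls before the worker runs produce a single wakeup but still account for all three requests.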

View File

@@ -11,7 +11,7 @@ window.arangoDocument = Backbone.Collection.extend({
       cache: false,
       type: 'DELETE',
       contentType: "application/json",
-      url: "/_api/edge/" + colid + "/" + docid,
+      url: "/_api/edge/" + encodeURIComponent(colid) + "/" + encodeURIComponent(docid),
       success: function () {
         callback(false);
       },
@@ -25,7 +25,7 @@ window.arangoDocument = Backbone.Collection.extend({
       cache: false,
       type: 'DELETE',
       contentType: "application/json",
-      url: "/_api/document/" + colid + "/" + docid,
+      url: "/_api/document/" + encodeURIComponent(colid) + "/" + encodeURIComponent(docid),
       success: function () {
         callback(false);
       },
@@ -116,7 +116,7 @@ window.arangoDocument = Backbone.Collection.extend({
     $.ajax({
       cache: false,
       type: "GET",
-      url: "/_api/edge/" + colid +"/"+ docid,
+      url: "/_api/edge/" + encodeURIComponent(colid) +"/"+ encodeURIComponent(docid),
       contentType: "application/json",
       processData: false,
       success: function(data) {
@@ -134,7 +134,7 @@ window.arangoDocument = Backbone.Collection.extend({
     $.ajax({
       cache: false,
       type: "GET",
-      url: "/_api/document/" + colid +"/"+ docid,
+      url: "/_api/document/" + encodeURIComponent(colid) +"/"+ encodeURIComponent(docid),
       contentType: "application/json",
       processData: false,
       success: function(data) {
@@ -150,7 +150,7 @@ window.arangoDocument = Backbone.Collection.extend({
     $.ajax({
       cache: false,
       type: "PUT",
-      url: "/_api/edge/" + colid + "/" + docid,
+      url: "/_api/edge/" + encodeURIComponent(colid) + "/" + encodeURIComponent(docid),
       data: model,
       contentType: "application/json",
       processData: false,
@@ -166,7 +166,7 @@ window.arangoDocument = Backbone.Collection.extend({
     $.ajax({
       cache: false,
       type: "PUT",
-      url: "/_api/document/" + colid + "/" + docid,
+      url: "/_api/document/" + encodeURIComponent(colid) + "/" + encodeURIComponent(docid),
       data: model,
       contentType: "application/json",
       processData: false,
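The repeated fix in these hunks is the same one: collection ids and document keys may contain characters that are meaningful inside a URL path (`/`, spaces, non-ASCII), so both path segments must be percent-encoded before the request URL is built. A minimal sketch (the helper name is illustrative):

```javascript
// Build a document REST URL with both path segments percent-encoded,
// mirroring the fix above. (documentUrl is an illustrative helper name.)
function documentUrl(colid, docid) {
  return "/_api/document/" + encodeURIComponent(colid) + "/" + encodeURIComponent(docid);
}

documentUrl("myCollection", "a/b");   // "/_api/document/myCollection/a%2Fb"
documentUrl("myCollection", "täst");  // "/_api/document/myCollection/t%C3%A4st"
```

Without the encoding, a key such as `a/b` would be interpreted as an extra path segment and route to the wrong API endpoint.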


@@ -258,7 +258,17 @@
       });
     }
     this.documentView.colid = colid;
-    this.documentView.docid = docid;
+    var doc = window.location.hash.split("/")[2];
+    var test = (doc.split("%").length - 1) % 3;
+    if (decodeURI(doc) !== doc && test !== 0) {
+      doc = decodeURIComponent(doc);
+    }
+    this.documentView.docid = doc;
     this.documentView.render();
     var callback = function(error, type) {
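The router hunk above has to cope with hash fragments that may arrive either still percent-encoded or already decoded. Its heuristic decodes only when `decodeURI` would change the string and the number of `%` characters is not a multiple of three. A standalone sketch of that heuristic (the function name is illustrative):

```javascript
// Illustrative wrapper around the heuristic above: decode a document id taken
// from the location hash only if it still looks percent-encoded.
function normalizeDocId(doc) {
  var test = (doc.split("%").length - 1) % 3;
  if (decodeURI(doc) !== doc && test !== 0) {
    doc = decodeURIComponent(doc);
  }
  return doc;
}

normalizeDocId("t%C3%A4st"); // "täst" — two '%' signs, and decodeURI changes it
normalizeDocId("a%2Fb");     // unchanged — decodeURI leaves reserved escapes like %2F intact
```

Note that `decodeURI` never touches escapes of reserved characters (`%2F`, `%3F`, ...), which is what keeps keys containing an encoded slash from being decoded prematurely.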


@@ -585,6 +585,7 @@
       var from = $('.modal-body #new-edge-from-attr').last().val();
       var to = $('.modal-body #new-edge-to').last().val();
       var key = $('.modal-body #new-edge-key-attr').last().val();
+      var url;
       var callback = function(error, data) {
@@ -593,7 +594,15 @@
         }
         else {
           window.modalView.hide();
-          window.location.hash = "collection/" + data;
+          data = data.split('/');
+          try {
+            url = "collection/" + data[0] + '/' + data[1];
+            decodeURI(url);
+          } catch (ex) {
+            url = "collection/" + data[0] + '/' + encodeURIComponent(data[1]);
+          }
+          window.location.hash = url;
         }
       }.bind(this);
@@ -608,6 +617,7 @@
     addDocument: function() {
       var collid = window.location.hash.split("/")[1];
       var key = $('.modal-body #new-document-key-attr').last().val();
+      var url;
       var callback = function(error, data) {
         if (error) {
@@ -615,7 +625,16 @@
         }
         else {
           window.modalView.hide();
-          window.location.hash = "collection/" + data;
+          data = data.split('/');
+          try {
+            url = "collection/" + data[0] + '/' + data[1];
+            decodeURI(url);
+          } catch (ex) {
+            url = "collection/" + data[0] + '/' + encodeURIComponent(data[1]);
+          }
+          window.location.hash = url;
         }
       }.bind(this);
@@ -862,7 +881,18 @@
     clicked: function (event) {
       var self = event.currentTarget;
-      window.App.navigate("collection/" + this.collection.collectionID + "/" + $(self).attr("id").substr(4), true);
+      var url, doc = $(self).attr("id").substr(4);
+      try {
+        url = "collection/" + this.collection.collectionID + '/' + doc;
+        decodeURI(doc);
+      } catch (ex) {
+        url = "collection/" + this.collection.collectionID + '/' + encodeURIComponent(doc);
+      }
+      //window.App.navigate(url, true);
+      window.location.hash = url;
     },
     drawTable: function() {
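The handlers above use `decodeURI` as a validity probe: it throws a `URIError` on malformed escape sequences (for example a trailing `%`), in which case the key is percent-encoded before being placed into the location hash. A standalone sketch of the pattern (the helper name is illustrative):

```javascript
// Illustrative sketch of the try/catch pattern above: probe the key with
// decodeURI and fall back to encoding it when the probe throws URIError.
function collectionHash(collectionId, doc) {
  var url;
  try {
    decodeURI(doc); // throws URIError on malformed escapes such as "100%"
    url = "collection/" + collectionId + "/" + doc;
  } catch (ex) {
    url = "collection/" + collectionId + "/" + encodeURIComponent(doc);
  }
  return url;
}

collectionHash("c", "abc");  // "collection/c/abc"
collectionHash("c", "100%"); // "collection/c/100%25"
```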


@@ -47,7 +47,6 @@
   "ERROR_HTTP_CORRUPTED_JSON" : { "code" : 600, "message" : "invalid JSON object" },
   "ERROR_HTTP_SUPERFLUOUS_SUFFICES" : { "code" : 601, "message" : "superfluous URL suffices" },
   "ERROR_ARANGO_ILLEGAL_STATE" : { "code" : 1000, "message" : "illegal state" },
-  "ERROR_ARANGO_SHAPER_FAILED" : { "code" : 1001, "message" : "could not shape document" },
   "ERROR_ARANGO_DATAFILE_SEALED" : { "code" : 1002, "message" : "datafile sealed" },
   "ERROR_ARANGO_UNKNOWN_COLLECTION_TYPE" : { "code" : 1003, "message" : "unknown type" },
   "ERROR_ARANGO_READ_ONLY" : { "code" : 1004, "message" : "read only" },

(File diff suppressed because it is too large.)


@@ -205,7 +205,7 @@ function ahuacatlFailureSuite () {
     testReturnBlock : function () {
       internal.debugSetFailAt("ReturnBlock::getSome");
-      assertFailingQuery("FOR year IN [ 2010, 2011, 2012 ] LET quarters = ((FOR q IN [ 'jhaskdjhjkasdhkjahsd', 2, 3, 4 ] RETURN q)) RETURN 'kljhasdjkhaskjdhaskjdhasd'");
+      assertFailingQuery("FOR year IN [ 2010, 2011, 2012 ] LET quarters = ((FOR q IN [ 'jhaskdjhjkasdhkjahsd', 2, 3, 4 ] RETURN CONCAT('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', q))) RETURN LENGTH(quarters)");
     },
 ////////////////////////////////////////////////////////////////////////////////
@@ -255,9 +255,10 @@ function ahuacatlFailureSuite () {
     testSortBlock5 : function () {
       internal.debugSetFailAt("SortBlock::doSortingNext2");
-      assertFailingQuery("FOR i IN " + c.name() + " COLLECT key = i._key SORT key RETURN key");
-      assertFailingQuery("FOR i IN " + c.name() + " COLLECT key = i.value SORT key RETURN key");
-      assertFailingQuery("FOR i IN " + c.name() + " COLLECT key = i.value2 SORT key RETURN key");
+      // we need values that are >= 16 bytes long
+      assertFailingQuery("FOR i IN " + c.name() + " COLLECT key = i._key SORT CONCAT('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', key) RETURN key");
+      assertFailingQuery("FOR i IN " + c.name() + " COLLECT key = i.value SORT CONCAT('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', key) RETURN key");
+      assertFailingQuery("FOR i IN " + c.name() + " COLLECT key = i.value2 SORT CONCAT('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', key) RETURN key");
     },
 ////////////////////////////////////////////////////////////////////////////////
@@ -546,7 +547,7 @@ function ahuacatlFailureSuite () {
       assertFailingQuery("FOR i IN " + c.name() + " FILTER 1 IN i.value[*] RETURN i");
     },
-    testIndexNodeSkiplist9 : function () {
+    testIndexNodeSkiplist6 : function () {
       c.ensureSkiplist("value");
       internal.debugSetFailAt("SkiplistIndex::accessFitsIndex");
       assertFailingQuery("FOR i IN " + c.name() + " FILTER i.value == 1 RETURN i");


@@ -4144,10 +4144,7 @@ function transactionServerFailuresSuite () {
 ////////////////////////////////////////////////////////////////////////////////
     testInsertServerFailuresEmpty : function () {
-      var failures = [ "InsertDocumentNoLegend",
-                       "InsertDocumentNoLegendExcept",
-                       "InsertDocumentNoMarker",
-                       "InsertDocumentNoMarkerExcept",
+      var failures = [
                        "InsertDocumentNoHeader",
                        "InsertDocumentNoHeaderExcept",
                        "InsertDocumentNoLock",
@@ -4170,7 +4167,7 @@ function transactionServerFailuresSuite () {
           fail();
         }
         catch (err) {
-          assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum);
+          assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum, f);
         }
         assertEqual(0, c.count());
@@ -4182,10 +4179,7 @@ function transactionServerFailuresSuite () {
 ////////////////////////////////////////////////////////////////////////////////
     testInsertServerFailuresNonEmpty : function () {
-      var failures = [ "InsertDocumentNoLegend",
-                       "InsertDocumentNoLegendExcept",
-                       "InsertDocumentNoMarker",
-                       "InsertDocumentNoMarkerExcept",
+      var failures = [
                        "InsertDocumentNoHeader",
                        "InsertDocumentNoHeaderExcept",
                        "InsertDocumentNoLock",
@@ -4211,7 +4205,7 @@ function transactionServerFailuresSuite () {
           fail();
         }
         catch (err) {
-          assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum);
+          assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum, f);
         }
         assertEqual(1, c.count());
@@ -4224,10 +4218,7 @@ function transactionServerFailuresSuite () {
 ////////////////////////////////////////////////////////////////////////////////
     testInsertServerFailuresConstraint : function () {
-      var failures = [ "InsertDocumentNoLegend",
-                       "InsertDocumentNoLegendExcept",
-                       "InsertDocumentNoMarker",
-                       "InsertDocumentNoMarkerExcept",
+      var failures = [
                        "InsertDocumentNoHeader",
                        "InsertDocumentNoHeaderExcept",
                        "InsertDocumentNoLock" ];
@@ -4247,7 +4238,7 @@ function transactionServerFailuresSuite () {
           fail();
         }
         catch (err) {
-          assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum);
+          assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum, f);
         }
         assertEqual(1, c.count());
@@ -4260,10 +4251,7 @@ function transactionServerFailuresSuite () {
 ////////////////////////////////////////////////////////////////////////////////
     testInsertServerFailuresMulti : function () {
-      var failures = [ "InsertDocumentNoLegend",
-                       "InsertDocumentNoLegendExcept",
-                       "InsertDocumentNoMarker",
-                       "InsertDocumentNoMarkerExcept",
+      var failures = [
                        "InsertDocumentNoHeader",
                        "InsertDocumentNoHeaderExcept",
                        "InsertDocumentNoLock",
@@ -4295,10 +4283,10 @@ function transactionServerFailuresSuite () {
         });
       }
       catch (err) {
-        assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum);
+        assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum, f);
       }
-      assertEqual(0, c.count());
+      assertEqual(0, c.count(), f);
       });
     },
@@ -4421,8 +4409,7 @@ function transactionServerFailuresSuite () {
 ////////////////////////////////////////////////////////////////////////////////
     testUpdateServerFailuresNonEmpty : function () {
-      var failures = [ "UpdateDocumentNoLegend",
-                       "UpdateDocumentNoLegendExcept",
+      var failures = [
                        "UpdateDocumentNoMarker",
                        "UpdateDocumentNoMarkerExcept",
                        "UpdateDocumentNoLock",
@@ -4448,7 +4435,7 @@ function transactionServerFailuresSuite () {
           fail();
         }
         catch (err) {
-          assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum);
+          assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum, f);
         }
         assertEqual(1, c.count());
@@ -4462,8 +4449,7 @@ function transactionServerFailuresSuite () {
 ////////////////////////////////////////////////////////////////////////////////
     testUpdateServerFailuresMulti : function () {
-      var failures = [ "UpdateDocumentNoLegend",
-                       "UpdateDocumentNoLegendExcept",
+      var failures = [
                        "UpdateDocumentNoMarker",
                        "UpdateDocumentNoMarkerExcept",
                        "UpdateDocumentNoLock",
@@ -4500,12 +4486,12 @@ function transactionServerFailuresSuite () {
         });
       }
       catch (err) {
-        assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum);
+        assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum, f);
       }
       assertEqual(10, c.count());
       for (i = 0; i < 10; ++i) {
-        assertEqual(i, c.document("test" + i).a);
+        assertEqual(i, c.document("test" + i).a, f);
       }
       });
     },
@@ -4515,8 +4501,7 @@ function transactionServerFailuresSuite () {
 ////////////////////////////////////////////////////////////////////////////////
     testUpdateServerFailuresMultiUpdate : function () {
-      var failures = [ "UpdateDocumentNoLegend",
-                       "UpdateDocumentNoLegendExcept",
+      var failures = [
                        "UpdateDocumentNoMarker",
                        "UpdateDocumentNoMarkerExcept",
                        "UpdateDocumentNoLock",
@@ -4558,10 +4543,10 @@ function transactionServerFailuresSuite () {
         assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum);
       }
-      assertEqual(10, c.count());
+      assertEqual(10, c.count(), f);
       for (i = 0; i < 10; ++i) {
-        assertEqual(i, c.document("test" + i).a);
-        assertEqual(undefined, c.document("test" + i).b);
+        assertEqual(i, c.document("test" + i).a, f);
+        assertEqual(undefined, c.document("test" + i).b, f);
       }
       });
     },
@@ -4611,8 +4596,7 @@ function transactionServerFailuresSuite () {
 ////////////////////////////////////////////////////////////////////////////////
     testMixedServerFailures : function () {
-      var failures = [ "UpdateDocumentNoLegend",
-                       "UpdateDocumentNoLegendExcept",
+      var failures = [
                        "UpdateDocumentNoMarker",
                        "UpdateDocumentNoMarkerExcept",
                        "UpdateDocumentNoLock",
@@ -4623,10 +4607,6 @@ function transactionServerFailuresSuite () {
                        "RemoveDocumentNoLock",
                        "RemoveDocumentNoOperation",
                        "RemoveDocumentNoOperationExcept",
-                       "InsertDocumentNoLegend",
-                       "InsertDocumentNoLegendExcept",
-                       "InsertDocumentNoMarker",
-                       "InsertDocumentNoMarkerExcept",
                        "InsertDocumentNoHeader",
                        "InsertDocumentNoHeaderExcept",
                        "InsertDocumentNoLock",
@@ -4674,13 +4654,13 @@ function transactionServerFailuresSuite () {
         });
       }
       catch (err) {
-        assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum);
+        assertEqual(internal.errors.ERROR_DEBUG.code, err.errorNum, f);
       }
       assertEqual(100, c.count());
       for (i = 0; i < 100; ++i) {
-        assertEqual(i, c.document("test" + i).a);
-        assertEqual(undefined, c.document("test" + i).b);
+        assertEqual(i, c.document("test" + i).a, f);
+        assertEqual(undefined, c.document("test" + i).b, f);
       }
       });
     },
@@ -4936,92 +4916,6 @@ function transactionServerFailuresSuite () {
       testHelper.waitUnload(c);
       assertEqual(100, c.count());
-    },
-////////////////////////////////////////////////////////////////////////////////
-/// @brief test: cannot write attribute marker for trx
-////////////////////////////////////////////////////////////////////////////////
-    testNoAttributeMarker : function () {
-      internal.debugClearFailAt();
-      db._drop(cn);
-      c = db._create(cn);
-      var i;
-      for (i = 0; i < 100; ++i) {
-        c.save({ _key: "test" + i, a: i });
-      }
-      assertEqual(100, c.count());
-      internal.wal.flush(true, true);
-      try {
-        TRANSACTION({
-          collections: {
-            write: [ cn ],
-          },
-          action: function () {
-            var i;
-            for (i = 100; i < 200; ++i) {
-              c.save({ _key: "test" + i, a: i });
-            }
-            internal.debugSetFailAt("ShaperWriteAttributeMarker");
-            c.save({ _key: "test100", newAttribute: "foo" });
-          }
-        });
-        fail();
-      }
-      catch (err) {
-        assertEqual(internal.errors.ERROR_ARANGO_SHAPER_FAILED.code, err.errorNum);
-      }
-      assertEqual(100, c.count());
-      internal.debugClearFailAt();
-    },
-////////////////////////////////////////////////////////////////////////////////
-/// @brief test: cannot write shape marker for trx
-////////////////////////////////////////////////////////////////////////////////
-    testNoShapeMarker : function () {
-      internal.debugClearFailAt();
-      db._drop(cn);
-      c = db._create(cn);
-      var i;
-      for (i = 0; i < 100; ++i) {
-        c.save({ _key: "test" + i, a: i });
-      }
-      assertEqual(100, c.count());
-      internal.wal.flush(true, true);
-      try {
-        TRANSACTION({
-          collections: {
-            write: [ cn ],
-          },
-          action: function () {
-            var i;
-            for (i = 100; i < 200; ++i) {
-              c.save({ _key: "test" + i, a: i });
-            }
-            internal.debugSetFailAt("ShaperWriteShapeMarker");
-            c.save({ _key: "test100", newAttribute: "foo", reallyNew: "foo" });
-          }
-        });
-        fail();
-      }
-      catch (err) {
-        assertEqual(internal.errors.ERROR_ARANGO_SHAPER_FAILED.code, err.errorNum);
-      }
-      assertEqual(100, c.count());
-      internal.debugClearFailAt();
     }
 };
@@ -5049,4 +4943,3 @@ jsunity.run(transactionConstraintsSuite);
 return jsunity.done();


@@ -55,7 +55,6 @@ ERROR_HTTP_SUPERFLUOUS_SUFFICES,601,"superfluous URL suffices","Will be raised w
 ################################################################################
 ERROR_ARANGO_ILLEGAL_STATE,1000,"illegal state","Internal error that will be raised when the datafile is not in the required state."
-ERROR_ARANGO_SHAPER_FAILED,1001,"could not shape document","Internal error that will be raised when the shaper encountered a problem."
 ERROR_ARANGO_DATAFILE_SEALED,1002,"datafile sealed","Internal error that will be raised when trying to write to a datafile."
 ERROR_ARANGO_UNKNOWN_COLLECTION_TYPE,1003,"unknown type","Internal error that will be raised when an unknown collection type is encountered."
 ERROR_ARANGO_READ_ONLY,1004,"read only","Internal error that will be raised when trying to write to a read-only datafile or collection."


@@ -43,7 +43,6 @@ void TRI_InitializeErrorMessages () {
   REG_ERROR(ERROR_HTTP_CORRUPTED_JSON, "invalid JSON object");
   REG_ERROR(ERROR_HTTP_SUPERFLUOUS_SUFFICES, "superfluous URL suffices");
   REG_ERROR(ERROR_ARANGO_ILLEGAL_STATE, "illegal state");
-  REG_ERROR(ERROR_ARANGO_SHAPER_FAILED, "could not shape document");
   REG_ERROR(ERROR_ARANGO_DATAFILE_SEALED, "datafile sealed");
   REG_ERROR(ERROR_ARANGO_UNKNOWN_COLLECTION_TYPE, "unknown type");
   REG_ERROR(ERROR_ARANGO_READ_ONLY, "read only");


@@ -83,8 +83,6 @@
 /// - 1000: @LIT{illegal state}
 ///   Internal error that will be raised when the datafile is not in the
 ///   required state.
-/// - 1001: @LIT{could not shape document}
-///   Internal error that will be raised when the shaper encountered a problem.
 /// - 1002: @LIT{datafile sealed}
 ///   Internal error that will be raised when trying to write to a datafile.
 /// - 1003: @LIT{unknown type}
@@ -1010,16 +1008,6 @@ void TRI_InitializeErrorMessages ();
 #define TRI_ERROR_ARANGO_ILLEGAL_STATE (1000)
-////////////////////////////////////////////////////////////////////////////////
-/// @brief 1001: ERROR_ARANGO_SHAPER_FAILED
-///
-/// could not shape document
-///
-/// Internal error that will be raised when the shaper encountered a problem.
-////////////////////////////////////////////////////////////////////////////////
-#define TRI_ERROR_ARANGO_SHAPER_FAILED (1001)
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief 1002: ERROR_ARANGO_DATAFILE_SEALED
 ///


@@ -194,16 +194,13 @@ Endpoint* Endpoint::factory(const Endpoint::EndpointType type,
   }
   std::string copy = unifiedForm(specification);
-  std::string prefix = "http";
   TransportType protocol = TransportType::HTTP;
   if (StringUtils::isPrefix(copy, "http+")) {
     protocol = TransportType::HTTP;
-    prefix = "http+";
     copy = copy.substr(5);
   } else if (StringUtils::isPrefix(copy, "vpp+")) {
     protocol = TransportType::VPP;
-    prefix = "vpp+";
     copy = copy.substr(4);
   } else {
     // invalid protocol


@@ -119,25 +119,6 @@ std::vector<std::string> EndpointList::all() const {
   return result;
 }
-////////////////////////////////////////////////////////////////////////////////
-/// @brief return all endpoints with a certain prefix
-////////////////////////////////////////////////////////////////////////////////
-std::map<std::string, Endpoint*> EndpointList::getByPrefix(
-    std::string const& prefix) const {
-  std::map<std::string, Endpoint*> result;
-  for (auto& it : _endpoints) {
-    std::string const& key = it.first;
-    if (StringUtils::isPrefix(key, prefix)) {
-      result[key] = it.second;
-    }
-  }
-  return result;
-}
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief return all endpoints with a certain encryption type
 ////////////////////////////////////////////////////////////////////////////////


@@ -48,9 +48,6 @@ class EndpointList {
   bool hasSsl() const;
   void dump() const;
- private:
-  std::map<std::string, Endpoint*> getByPrefix(std::string const&) const;
 private:
   std::map<std::string, Endpoint*> _endpoints;
 };


@@ -273,6 +273,6 @@ std::string const& GeneralRequest::value(std::string const& key, bool& found) co
 void GeneralRequest::setArrayValue(char* key, size_t length, char const* value) {
   std::string keyStr(key, length);
-  _arrayValues[key].emplace_back(value);
+  _arrayValues[keyStr].emplace_back(value);
 }


@@ -450,13 +450,11 @@ void GeneralResponse::setHeader(std::string const& key,
                                 std::string const& value) {
   std::string k = StringUtils::tolower(key);
-  _headers[key] = value;
+  _headers[k] = value;
 }
 void GeneralResponse::setHeaderNC(std::string const& key,
                                   std::string const& value) {
-  std::string k = StringUtils::tolower(key);
   _headers[key] = value;
 }
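The setHeader fix above stores the header under the lower-cased key instead of the original one: HTTP header names are case-insensitive, so the map key must be normalized, while setHeaderNC ("no conversion") intentionally skips the normalization for callers that already pass lower-case names. A JavaScript analog of the corrected behavior (the class is illustrative, not the C++ API):

```javascript
// Illustrative analog of the corrected GeneralResponse behavior: setHeader
// normalizes the key to lower case; setHeaderNC stores the key verbatim and
// therefore expects callers to pass an already lower-cased name.
class HeaderMap {
  constructor() { this._headers = {}; }
  setHeader(key, value)   { this._headers[key.toLowerCase()] = value; }
  setHeaderNC(key, value) { this._headers[key] = value; }
  get(key)                { return this._headers[key.toLowerCase()]; }
}

const h = new HeaderMap();
h.setHeader("Content-Type", "text/plain");
h.get("CONTENT-TYPE"); // "text/plain" — lookup succeeds regardless of case
```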


@@ -750,6 +750,3 @@ std::shared_ptr<VPackBuilder> HttpRequest::toVelocyPack(
   return parser.steal();
 }
-TRI_json_t* HttpRequest::toJson(char** errmsg) {
-  return TRI_Json2String(TRI_UNKNOWN_MEM_ZONE, body().c_str(), errmsg);
-}


@@ -28,7 +28,6 @@
 #include "Rest/GeneralRequest.h"
 #include "Basics/StringBuffer.h"
-#include "Basics/json.h"
 #include "Endpoint/ConnectionInfo.h"
 namespace arangodb {
@@ -74,9 +73,6 @@ class HttpRequest : public GeneralRequest {
   std::shared_ptr<arangodb::velocypack::Builder> toVelocyPack(
       arangodb::velocypack::Options const*);
-  // the request body as TRI_json_t*
-  TRI_json_t* toJson(char**);
   using GeneralRequest::setHeader;
 private:


@@ -632,7 +632,7 @@ void SimpleHttpClient::processHeader() {
   }
   // end of header found
-  if (*ptr == '\r' || *ptr == '\0') {
+  if (*ptr == '\r' || *ptr == '\n' || *ptr == '\0') {
     size_t len = pos - ptr;
     _readBufferOffset += len + 1;
     ptr += len + 1;
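The processHeader change above makes the client accept a bare `\n` as a header-line terminator in addition to `\r` (CRLF): some servers send LF-only line endings. A small JavaScript sketch of the same tolerance when splitting a header block (the parser is illustrative, not the client's actual code):

```javascript
// Illustrative header-block parser tolerating both CRLF and bare LF line
// endings — the same leniency the SimpleHttpClient fix introduces.
function parseHeaders(raw) {
  var headers = {};
  raw.split(/\r?\n/).forEach(function (line) {
    var idx = line.indexOf(":");
    if (idx > 0) {
      headers[line.slice(0, idx).trim().toLowerCase()] = line.slice(idx + 1).trim();
    }
  });
  return headers;
}

parseHeaders("Content-Length: 2\r\nServer: test");
parseHeaders("Content-Length: 2\nServer: test"); // same result with bare LF
```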


@@ -808,7 +808,8 @@ static void JS_Download(v8::FunctionCallbackInfo<v8::Value> const& args) {
       SimpleHttpClient client(connection.get(), timeout, false);
       client.setSupportDeflate(false);
-      client.setExposeArangoDB(false);
+      // security by obscurity won't work. Github requires a useragent nowadays.
+      client.setExposeArangoDB(true);
       v8::Handle<v8::Object> result = v8::Object::New(isolate);