
Merge remote-tracking branch 'origin/devel' into 1.4

This commit is contained in:
Frank Celler 2013-10-29 11:18:27 +01:00
commit ebca4c601e
68 changed files with 1443 additions and 328 deletions

View File

@ -1,11 +1,70 @@
v1.4.0-rc1 (2013-10-29)
-----------------------
* issue #643: Some minor corrections and a link to "Downloads" by @frankmayer
* fixed issue #648: /batch API is missing from Web Interface API Documentation (Swagger)
* fixed issue #647: Icon tooltips missing
* fixed issue #646: index creation in web interface
* fixed issue #645: Allow jumping from edge to linked vertices
* merged PR for issue #643: Some minor corrections and a link to "Downloads"
* fixed issue #642: Completion of error handling
* fixed issue #639: compiling v1.4 on maverick produces warnings on -Wstrict-null-sentinel
* fixed issue #634: Web interface bug: Escape does not always propagate
* fixed issue #620: added startup option `--server.default-api-compatibility`
This introduces the following changes to the ArangoDB server and clients:
- the server provides a new startup option `--server.default-api-compatibility`.
This option can be used to determine the compatibility of (some) server API
return values. The value for this parameter is a server version number,
calculated as follows: `10000 * major + 100 * minor` (e.g. `10400` for ArangoDB
1.4). The default value is `10400` (1.4), the minimum allowed value is `10300`
(1.3).
When setting this option to a value lower than the current server version,
the server might respond with old-style results, increasing compatibility
with older (non-up-to-date) clients.
- the server will on each incoming request check for an HTTP header
`x-arango-version`. Clients can optionally set this header to the API
version number they support. For example, if a client sends the HTTP header
`x-arango-version: 10300`, the server will pick this up and might send ArangoDB
1.3-style responses in some situations.
Using either the startup parameter or the HTTP header (or both) allows
running "old" clients with newer versions of ArangoDB, without having to adjust
the clients too much.
- the `location` headers returned by the server for the APIs `/_api/document/...`
and `/_api/collection/...` will have different values depending on the used API
version. If the API compatibility is `10300`, the `location` headers returned
will look like this:
location: /_api/document/....
whereas when an API compatibility of `10400` or higher is used, the `location`
headers will look like this:
location: /_db/<database name>/_api/document/...
Please note that even with these compatibility settings, old API versions may not
be supported forever by the server.
* fixed issue #643: Some minor corrections and a link to "Downloads" by @frankmayer
* started issue #642: Completion of error handling
* fixed issue #639: compiling v1.4 on maverick produces warnings on
-Wstrict-null-sentinel
* fixed issue #621: Standard Config needs to be fixed
* added function to manage indexes (web interface)
* improved server shutdown time by signalling shutdown to applicationserver,

View File

@ -453,6 +453,26 @@ The following command-line options have been added for _arangod_ in ArangoDB 1.4
let certain types of requests pass. Enabling this option may impose a security
risk, so it should only be used in very controlled environments.
The default value for this option is `false` (no method overriding allowed).
- `--server.default-api-compatibility`: this option can be used to determine the
compatibility of (some) server API return values. The value for this parameter is
a server version number calculated as follows: `10000 * major + 100 * minor`
(e.g. `10400` for ArangoDB 1.4). The default value is `10400` (1.4), the minimum
allowed value is `10300` (1.3).
When setting this option to a value lower than the current server version,
the server might respond with old-style results, increasing compatibility
with older (non-up-to-date) clients. In ArangoDB 1.4.0, this option
mainly affects the style of the returned `location` headers:
When set to `10300`, the returned `location` headers will not include the
database name. When set to `10400` or higher, the `location` headers returned
will also include the database name.
The default value for this option is `10400` (ArangoDB 1.4), so ArangoDB will
return the new-style location headers (including database name) by default.
If you use a non-1.4 compatible ArangoDB client driver, you may set this option
to make ArangoDB return the old-style headers (see the sketch after this list).
* `--scheduler.maximal-queue-size`: limits the size of the asynchronous request
execution queue. Please have a look at @ref NewFeatures14Async for more details.
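As a quick illustration of the value format only (a sketch, not the server's actual
parsing code): the compatibility number is derived from the major and minor version,
with `10300` (1.3) as the lower bound.

    # sketch: how a version maps to the compatibility value described above
    def api_compatibility(major, minor)
      value = 10000 * major + 100 * minor
      value < 10300 ? 10300 : value   # 10300 (1.3) is the minimum allowed value
    end

    api_compatibility(1, 3)   # => 10300
    api_compatibility(1, 4)   # => 10400 (the default in ArangoDB 1.4)
    api_compatibility(2, 0)   # => 20000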

View File

@ -7,6 +7,15 @@ Upgrading to ArangoDB 1.4 {#Upgrading14}
Upgrading {#Upgrading14Introduction}
====================================
1.4 is currently beta; please do not use it in production.
Please read the following sections if you upgrade from a pre-1.4 version of ArangoDB
to ArangoDB 1.4.
ArangoDB 1.4 comes with a few changes, some of which are not 100% compatible with
ArangoDB 1.3. The incompatibilities are mainly due to the introduction of the multiple
databases feature and to some changes inside Foxx.
Filesystem layout changes {#Upgrading14FileSystem}
--------------------------------------------------
@ -191,6 +200,52 @@ the same IP address, so ArangoDB will try to bind to the same address twice
Other obvious bind problems at startup may be caused by ports being used by
other programs, or IP addresses changing.
Problem: Server returns different `location` headers than in 1.3
-----------------------------------------------------------------
ArangoDB 1.4 by default will return `location` HTTP headers that contain the
database name too. This is a consequence of potentially having multiple databases
in the same server instance.
For example, when creating a new document, ArangoDB 1.3 returned an HTTP
response with a `location` header like this:
location: /_api/document/<collection name>/<document key>
In contrast, ArangoDB 1.4 will return a `location` header like this by default:
location: /_db/<database name>/_api/document/<collection name>/<document key>
This may not be compatible with pre-1.4 clients that rely on the old format
of the `location` header.
One obvious workaround is to upgrade the client driver to the newest
version. If that cannot be done or if the newest version of the client driver is
not ready for ArangoDB 1.4, the server provides a startup option that can be
used to increase compatibility with old clients:
--server.default-api-compatibility
If this option is not set, ArangoDB will use the current server version and
assume all clients are compatible. It will then also send the new-style
location headers.
Setting this value to an older version number will make the server try to keep
the API compatible to older versions where possible. For example, to send the
old (pre-1.4) style location headers, set the value to `10300` (1.3) as follows:
--server.default-api-compatibility 10300
The server will then return the old-style `location` headers.
Another way to fix the `location` header issue is to make the client send API
compatibility information itself. This can be achieved by sending an extra HTTP
header `x-arango-version` along with a client request. For example, sending the
following header in a request will make ArangoDB return the old style `location`
headers too:
x-arango-version: 10300
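For illustration, here is a minimal RSpec-style sketch of the header-based workaround,
following the conventions of ArangoDB's own HTTP test suite (the `ArangoDB` helper from
`arangodb.rb`, the collection name `UnitTestsCompat`, the `create_collection` helper and
the `_system` database are assumptions of this sketch):

    # coding: utf-8
    require 'rspec'
    require './arangodb.rb'   # assumed test helper, as used by the suite's other specs

    describe ArangoDB do
      it "returns old-style location headers to clients sending x-arango-version 1.3" do
        cn = "UnitTestsCompat"           # hypothetical collection name
        ArangoDB.create_collection(cn)   # assumed helper

        cmd  = "/_api/document?collection=#{cn}"
        body = "{ \"Hallo\" : \"World\" }"

        # pre-1.4 client: send the compatibility header, get an old-style location back
        doc = ArangoDB.log_post("compat-13", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
        doc.headers['location'].should eq("/_api/document/#{doc.parsed_response['_id']}")

        # 1.4 client (or no header at all): the database name is included
        doc = ArangoDB.log_post("compat-14", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
        doc.headers['location'].should eq("/_db/_system/_api/document/#{doc.parsed_response['_id']}")

        ArangoDB.drop_collection(cn)
      end
    end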
Problem: Find out the storage location of a database / collection
-----------------------------------------------------------------
@ -215,8 +270,8 @@ can use a Bash script like this:
The above script should print out the names of all databases and collections
with their corresponding directory names.
Problem: AQL user-function does not work anymore
------------------------------------------------
Problem: AQL user-functions do not work anymore
-----------------------------------------------
The namespace resolution operator for AQL user-defined functions has changed from `:`
to `::`. Names of user-defined functions need to be adjusted in AQL queries.

View File

@ -43,6 +43,7 @@ import sys, os, json, string
files = {
"js/actions/api-aqlfunction.js" : "aqlfunction",
"arangod/RestHandler/RestBatchHandler.cpp" : "batch",
"js/actions/api-collection.js" : "collection",
"js/actions/api-cursor.js" : "cursor",
"js/actions/api-database.js" : "database",

View File

@ -123,6 +123,10 @@ Command-Line Options for arangod {#CommandLineArangod}
@anchor CommandLineArangoKeepAliveTimeout
@copydetails triagens::rest::ApplicationEndpointServer::_keepAliveTimeout
@CLEARPAGE
@anchor CommandLineArangoDefaultApiCompatibility
@copydetails triagens::rest::ApplicationEndpointServer::_defaultApiCompatibility
@CLEARPAGE
@anchor CommandLineArangoAllowMethodOverride
@copydetails triagens::rest::ApplicationEndpointServer::_allowMethodOverride

View File

@ -20,6 +20,8 @@ TOC {#CommandLineTOC}
- @ref CommandLineArangoDisableAuthentication "server.disable-authentication"
- @ref CommandLineArangoAuthenticateSystemOnly "server.authenticate-system-only"
- @ref CommandLineArangoKeepAliveTimeout "server.keep-alive-timeout"
- @ref CommandLineArangoDefaultApiCompatibility "server.default-api-compatibility"
- @ref CommandLineArangoAllowMethodOverride "server.allow-method-override"
- @ref CommandLineArangoDisableReplicationLogger "server.disable-replication-logger"
- @ref CommandLineArangoDisableReplicationApplier "server.disable-replication-applier"
- @ref CommandLineArangoKeyFile "server.keyfile"

View File

@ -396,6 +396,11 @@ The Request Object
The `request` object inherits several attributes from the underlying Actions:
* `compatibility`: an integer specifying the compatibility version sent by the
client (in request header `x-arango-version`). If the client does not send this
header, ArangoDB will set this to the minimum compatible version number. The
value is 10000 * major + 100 * minor (e.g. `10400` for ArangoDB version 1.4); see the sketch after this list.
* `user`: the name of the current ArangoDB user. This will be populated only
if authentication is turned on, and will be `null` otherwise.
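The `/_admin/echo` action returns the request object it received, so this attribute can be
checked from a client; a minimal sketch in the style of the compatibility spec added by this
commit (using the suite's `ArangoDB` test client):

    # coding: utf-8
    require 'rspec'
    require './arangodb.rb'

    describe ArangoDB do
      it "exposes the client's API compatibility to actions" do
        # /_admin/echo returns the request object, including 'compatibility'
        doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => "1.3" })
        doc.parsed_response['compatibility'].should eq(10300)

        # without the header, the server default (10400 for ArangoDB 1.4) applies
        doc = ArangoDB.get("/_admin/echo", :headers => { })
        doc.parsed_response['compatibility'].should eq(10400)
      end
    end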

View File

@ -0,0 +1,95 @@
# coding: utf-8
require 'rspec'
require './arangodb.rb'
describe ArangoDB do
################################################################################
## general tests
################################################################################
context "checking compatibility features:" do
it "tests the compatibility value when no header is set" do
doc = ArangoDB.get("/_admin/echo", :headers => { })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10400)
end
it "tests the compatibility value when a broken header is set" do
versions = [ "1", "1.", "-1.3", "-1.3.", "x.4", "xx", "", " ", ".", "foobar", "foobar1.3", "xx1.4" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10400)
end
end
it "tests the compatibility value when a valid header is set" do
versions = [ "1.3.0", "1.3", "1.3-devel", "1.3.1", "1.3.99", "10300", "10303" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10300)
end
end
it "tests the compatibility value when a valid header is set" do
versions = [ "1.4.0", "1.4.1", "1.4.2", "1.4.0-devel", "1.4.0-beta2", " 1.4", "1.4 ", " 1.4.0", " 1.4.0 ", "10400", "10401", "10499" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10400)
end
end
it "tests the compatibility value when a too low version is set" do
versions = [ "0.0", "0.1", "0.2", "0.9", "1.0", "1.1", "1.2" ]
versions.each do|value|
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => value })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10300)
end
end
it "tests the compatibility value when a too high version is set" do
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => "1.5" })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(10500)
end
it "tests the compatibility value when a too high version is set" do
doc = ArangoDB.get("/_admin/echo", :headers => { "x-arango-version" => "2.0" })
doc.code.should eq(200)
compatibility = doc.parsed_response['compatibility']
compatibility.should be_kind_of(Integer)
compatibility.should eq(20000)
end
end
end

View File

@ -101,7 +101,40 @@ describe ArangoDB do
it "creating a new document" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
match = didRegex.match(did)
match[1].should eq("#{@cn}")
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a new document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -134,7 +167,47 @@ describe ArangoDB do
it "creating a new document complex body" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"Wo\\\"rld\" }"
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
match = didRegex.match(did)
match[1].should eq("#{@cn}")
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
cmd = "/_api/document/#{did}"
doc = ArangoDB.log_get("#{prefix}-complex", cmd)
doc.code.should eq(200)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['Hallo'].should eq('Wo"rld')
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a new document complex body, setting compatibility header " do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"Wo\\\"rld\" }"
doc = ArangoDB.log_post("#{prefix}", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -174,7 +247,53 @@ describe ArangoDB do
it "creating a new umlaut document" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"öäüÖÄÜßあ寿司\" }"
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
match = didRegex.match(did)
match[1].should eq("#{@cn}")
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
cmd = "/_api/document/#{did}"
doc = ArangoDB.log_get("#{prefix}-umlaut", cmd)
doc.code.should eq(200)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
newBody = doc.body()
newBody = newBody.sub!(/^.*"Hallo":"([^"]*)".*$/, '\1')
newBody.should eq("\\u00F6\\u00E4\\u00FC\\u00D6\\u00C4\\u00DC\\u00DF\\u3042\\u5BFF\\u53F8")
doc.parsed_response['Hallo'].should eq('öäüÖÄÜßあ寿司')
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a new umlaut document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"öäüÖÄÜßあ寿司\" }"
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -220,7 +339,53 @@ describe ArangoDB do
it "creating a new not normalized umlaut document" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"Gru\\u0308\\u00DF Gott.\" }"
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
match = didRegex.match(did)
match[1].should eq("#{@cn}")
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
cmd = "/_api/document/#{did}"
doc = ArangoDB.log_get("#{prefix}-umlaut", cmd)
doc.code.should eq(200)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
newBody = doc.body()
newBody = newBody.sub!(/^.*"Hallo":"([^"]*)".*$/, '\1')
newBody.should eq("Gr\\u00FC\\u00DF Gott.")
doc.parsed_response['Hallo'].should eq('Grüß Gott.')
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a new not normalized umlaut document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"Gru\\u0308\\u00DF Gott.\" }"
doc = ArangoDB.log_post("#{prefix}-umlaut", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -263,7 +428,6 @@ describe ArangoDB do
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a document with an existing id" do
@key = "a_new_key"
@ -271,7 +435,42 @@ describe ArangoDB do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"some stuff\" : \"goes here\", \"_key\" : \"#{@key}\" }"
doc = ArangoDB.log_post("#{prefix}-existing-id", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-existing-id", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
did.should eq("#{@cn}/#{@key}")
match = didRegex.match(did)
match[1].should eq("#{@cn}")
location.should eq("/_api/document/#{did}")
ArangoDB.delete("/_api/document/#{@cn}/#{@key}")
end
it "creating a document with an existing id, setting compatibility header" do
@key = "a_new_key"
ArangoDB.delete("/_api/document/#{@cn}/#{@key}")
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"some stuff\" : \"goes here\", \"_key\" : \"#{@key}\" }"
doc = ArangoDB.log_post("#{prefix}-existing-id", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -338,7 +537,40 @@ describe ArangoDB do
it "creating a new document" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-accept", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
match = didRegex.match(did)
match[1].should eq("#{@cn}")
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a new document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -371,7 +603,40 @@ describe ArangoDB do
it "creating a new document, waitForSync URL param = false" do
cmd = "/_api/document?collection=#{@cn}&waitForSync=false"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept-sync-false", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-accept-sync-false", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
match = didRegex.match(did)
match[1].should eq("#{@cn}")
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a new document, waitForSync URL param = false, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}&waitForSync=false"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept-sync-false", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -404,7 +669,40 @@ describe ArangoDB do
it "creating a new document, waitForSync URL param = true" do
cmd = "/_api/document?collection=#{@cn}&waitForSync=true"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept-sync-true", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-accept-sync-true", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
match = didRegex.match(did)
match[1].should eq("#{@cn}")
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a new document, waitForSync URL param = true, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}&waitForSync=true"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-accept-sync-true", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -448,11 +746,44 @@ describe ArangoDB do
after do
ArangoDB.drop_collection(@cn)
end
it "creating a new document" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-named-collection", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-named-collection", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
match = didRegex.match(did)
match[1].should eq("#{@cn}")
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "creating a new document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-named-collection", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(201)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
@ -499,7 +830,7 @@ describe ArangoDB do
it "returns an error if collection is unknown" do
cmd = "/_api/document?collection=#{@cn}"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-unknown-collection-name", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-create-collection", cmd, :body => body)
doc.code.should eq(404)
doc.parsed_response['error'].should eq(true)
@ -507,11 +838,40 @@ describe ArangoDB do
doc.parsed_response['code'].should eq(404)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
end
it "create the collection and the document" do
cmd = "/_api/document?collection=#{@cn}&createCollection=true"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-create-collection", cmd, :body => body)
doc = ArangoDB.log_post("#{prefix}-create-collection", cmd, :body => body, :headers => { "x-arango-version" => "1.3" })
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")
doc.parsed_response['error'].should eq(false)
etag = doc.headers['etag']
etag.should be_kind_of(String)
location = doc.headers['location']
location.should be_kind_of(String)
rev = doc.parsed_response['_rev']
rev.should be_kind_of(String)
did = doc.parsed_response['_id']
did.should be_kind_of(String)
etag.should eq("\"#{rev}\"")
location.should eq("/_api/document/#{did}")
ArangoDB.delete(location)
ArangoDB.size_collection(@cn).should eq(0)
end
it "create the collection and the document, setting compatibility header" do
cmd = "/_api/document?collection=#{@cn}&createCollection=true"
body = "{ \"Hallo\" : \"World\" }"
doc = ArangoDB.log_post("#{prefix}-create-collection", cmd, :body => body, :headers => { "x-arango-version" => "1.4" })
doc.code.should eq(202)
doc.headers['content-type'].should eq("application/json; charset=utf-8")

View File

@ -2,6 +2,7 @@
test -d logs || mkdir logs
rspec --color --format d \
api-compatibility-spec.rb \
api-http-spec.rb \
api-admin-spec.rb \
api-aqlfunction-spec.rb \

View File

@ -670,6 +670,7 @@ TRI_associative_pointer_t* TRI_CreateFunctionsAql (void) {
REGISTER_FUNCTION("MERGE", "MERGE", true, false, "a,a|+", NULL);
REGISTER_FUNCTION("MERGE_RECURSIVE", "MERGE_RECURSIVE", true, false, "a,a|+", NULL);
REGISTER_FUNCTION("DOCUMENT", "DOCUMENT", false, false, "h,sl", NULL);
REGISTER_FUNCTION("DOCUMENT_HANDLE", "DOCUMENT_HANDLE", false, false, "sl", NULL);
REGISTER_FUNCTION("MATCHES", "MATCHES", true, false, ".,l|b", NULL);
REGISTER_FUNCTION("UNSET", "UNSET", true, false, "a,sl|+", NULL);
REGISTER_FUNCTION("KEEP", "KEEP", true, false, "a,sl|+", NULL);

View File

@ -547,7 +547,7 @@ int ContinuousSyncer::startTransaction (TRI_json_t const* json) {
}
}
res = TRI_BeginTransaction(trx, getHint(totalOperations), TRI_TRANSACTION_TOP_LEVEL);
res = TRI_BeginTransaction(trx, getHint((const size_t) totalOperations), TRI_TRANSACTION_TOP_LEVEL);
if (res != TRI_ERROR_NO_ERROR) {
TRI_FreeTransaction(trx);

View File

@ -76,7 +76,83 @@ RestBatchHandler::~RestBatchHandler () {
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
/// {@inheritDoc}
/// @brief executes a batch request
///
/// @RESTHEADER{POST /_api/batch,executes a batch request}
///
/// @RESTBODYPARAM{body,string,required}
/// The multipart batch request, consisting of the envelope and the individual
/// batch parts.
///
/// @RESTDESCRIPTION
/// Executes a batch request. A batch request can contain any number of
/// other requests that can be sent to ArangoDB in isolation. The benefit of
/// using batch requests is that batching requests requires fewer client/server
/// roundtrips than when sending isolated requests.
///
/// All parts of a batch request are executed serially on the server. The
/// server will return the results of all parts in a single response when all
/// parts are finished.
///
/// Technically, a batch request is a multipart HTTP request, with
/// content-type `multipart/form-data`. A batch request consists of an
/// envelope and the individual batch part actions. Batch part actions
/// are "regular" HTTP requests, including full header and an optional body.
/// Multiple batch parts are separated by a boundary identifier. The
/// boundary identifier is declared in the batch envelope. The MIME content-type
/// for each individual batch part must be `application/x-arango-batchpart`.
///
/// The response sent by the server will be an `HTTP 200` response, with an
/// error summary header `x-arango-errors`. This header contains the number of
/// batch parts that failed with an HTTP error code of at least 400.
///
/// The response sent by the server is a multipart response, too. It contains
/// the individual HTTP responses for all batch parts, including the full HTTP
/// result header (with status code and other potential headers) and an
/// optional result body. The individual batch parts in the result are
/// separated using the same boundary value as specified in the request.
///
/// The order of batch parts in the response will be the same as in the
/// original client request. Clients can additionally use the `Content-Id`
/// MIME header in a batch part to define an individual id for each batch part.
/// The server will return this id in the batch part responses, too.
///
/// @RESTRETURNCODES
///
/// @RESTRETURNCODE{200}
/// is returned if the batch was received successfully. HTTP 200 is returned
/// even if one or multiple batch part actions failed.
///
/// @RESTRETURNCODE{400}
/// is returned if the batch envelope is malformed or incorrectly formatted.
/// This code will also be returned if the content-type of the overall batch
/// request or the individual MIME parts is not as expected.
///
/// @RESTRETURNCODE{405}
/// is returned when an invalid HTTP method is used.
///
/// @EXAMPLES
///
/// @EXAMPLE_ARANGOSH_RUN{RestBatch1}
/// var parts = [
/// "Content-Type: application/x-arango-batchpart\r\nContent-Id: myId1\r\n\r\nGET /_api/version HTTP/1.1\r\n",
/// "Content-Type: application/x-arango-batchpart\r\nContent-Id: myId2\r\n\r\nDELETE /_api/collection/products HTTP/1.1\r\n",
/// "Content-Type: application/x-arango-batchpart\r\nContent-Id: someId\r\n\r\nPOST /_api/collection/products HTTP/1.1\r\n\r\n{ \"name\": \"products\" }\r\n",
/// "Content-Type: application/x-arango-batchpart\r\nContent-Id: nextId\r\n\r\nGET /_api/collection/products/figures HTTP/1.1\r\n",
/// "Content-Type: application/x-arango-batchpart\r\nContent-Id: otherId\r\n\r\nDELETE /_api/collection/products HTTP/1.1\r\n"
/// ];
/// var boundary = "SomeBoundaryValue";
/// var headers = { "Content-Type" : "multipart/form-data; boundary=" + boundary };
/// var body = "--" + boundary + "\r\n" +
/// parts.join("\r\n" + "--" + boundary + "\r\n") +
/// "--" + boundary + "--\r\n";
///
/// var response = logCurlRequestRaw('POST', '/_api/batch', body, headers);
///
/// assert(response.code === 200);
///
/// logRawResponse(response);
/// @END_EXAMPLE_ARANGOSH_RUN
////////////////////////////////////////////////////////////////////////////////
Handler::status_e RestBatchHandler::execute() {
@ -130,13 +206,14 @@ Handler::status_e RestBatchHandler::execute() {
const size_t partLength = helper.foundLength;
const char* headerStart = partStart;
char* bodyStart = NULL;
char* bodyStart = 0;
size_t headerLength = 0;
size_t bodyLength = 0;
// assume Windows linebreak \r\n\r\n as delimiter
char* p = strstr((char*) headerStart, "\r\n\r\n");
if (p != NULL) {
if (p != 0 && p + 4 <= partEnd) {
headerLength = p - partStart;
bodyStart = p + 4;
bodyLength = partEnd - bodyStart;
@ -144,7 +221,8 @@ Handler::status_e RestBatchHandler::execute() {
else {
// test Unix linebreak
p = strstr((char*) headerStart, "\n\n");
if (p != NULL) {
if (p != 0 && p + 2 <= partEnd) {
headerLength = p - partStart;
bodyStart = p + 2;
bodyLength = partEnd - bodyStart;
@ -157,7 +235,7 @@ Handler::status_e RestBatchHandler::execute() {
// set up request object for the part
LOGGER_TRACE("part header is " << string(headerStart, headerLength));
HttpRequest* request = new HttpRequest(_request->connectionInfo(), headerStart, headerLength, false);
HttpRequest* request = new HttpRequest(_request->connectionInfo(), headerStart, headerLength, _request->compatibility(), false);
if (request == 0) {
generateError(HttpResponse::SERVER_ERROR, TRI_ERROR_OUT_OF_MEMORY);
@ -180,7 +258,6 @@ Handler::status_e RestBatchHandler::execute() {
request->setHeader("authorization", 13, authorization.c_str());
}
HttpHandler* handler = _server->createHandler(request);
if (! handler) {
@ -221,6 +298,7 @@ Handler::status_e RestBatchHandler::execute() {
}
HttpResponse* partResponse = handler->getResponse();
if (partResponse == 0) {
delete handler;
generateError(HttpResponse::BAD, TRI_ERROR_INTERNAL, "could not create a response for batch part request");

View File

@ -2862,10 +2862,12 @@ void RestReplicationHandler::handleCommandApplierSetConfig () {
}
value = JsonHelper::getArrayElement(json, "database");
if (config._database != 0) {
// free old value
TRI_FreeString(TRI_CORE_MEM_ZONE, config._database);
}
if (JsonHelper::isString(value)) {
if (config._database != 0) {
TRI_FreeString(TRI_CORE_MEM_ZONE, config._database);
}
config._database = TRI_DuplicateString2Z(TRI_CORE_MEM_ZONE, value->_value._string.data, value->_value._string.length - 1);
}
else {

View File

@ -200,7 +200,15 @@ void RestVocbaseBaseHandler::generate20x (const HttpResponse::HttpResponseCode r
// in these cases we do not return etag nor location
_response->setHeader("etag", 4, "\"" + rev + "\"");
// handle does not need to be RFC 2047-encoded
_response->setHeader("location", 8, string("/_db/" + _request->databaseName() + DOCUMENT_PATH + "/" + handle));
if (_request->compatibility() < 10400L) {
// pre-1.4-location header (e.g. /_api/document/xyz)
_response->setHeader("location", 8, string(DOCUMENT_PATH + "/" + handle));
}
else {
// 1.4-style location header (e.g. /_db/dbname/_api/document/xyz)
_response->setHeader("location", 8, string("/_db/" + _request->databaseName() + DOCUMENT_PATH + "/" + handle));
}
}
// _id and _key are safe and do not need to be JSON-encoded

View File

@ -828,10 +828,17 @@ bool ApplicationV8::prepareV8Instance (const size_t i) {
if (vocbase != 0) {
vocbase->_state = 2;
TRI_JoinThread(&vocbase->_synchroniser);
TRI_JoinThread(&vocbase->_compactor);
int res = TRI_ERROR_NO_ERROR;
res |= TRI_JoinThread(&vocbase->_synchroniser);
res |= TRI_JoinThread(&vocbase->_compactor);
vocbase->_state = 3;
TRI_JoinThread(&vocbase->_cleanup);
res |= TRI_JoinThread(&vocbase->_cleanup);
if (res != TRI_ERROR_NO_ERROR) {
LOG_ERROR("unable to join database threads for database '%s'", vocbase->_name);
}
}
}

View File

@ -624,6 +624,11 @@ static HttpResponse* ExecuteActionVocbase (TRI_vocbase_t* vocbase,
}
req->Set(v8g->CookiesKey, cookiesObject);
// determine API compatibility version
int32_t compatibility = request->compatibility();
req->Set(v8g->CompatibilityKey, v8::Integer::New(compatibility));
// execute the callback
v8::Handle<v8::Object> res = v8::Object::New();

View File

@ -1480,6 +1480,7 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (const bool useCollection,
v8::Handle<v8::Value> err = TRI_ParseDocumentOrDocumentHandle(resolver, col, key, rid, argv[0]);
if (! err.IsEmpty()) {
FREE_STRING(TRI_CORE_MEM_ZONE, key);
return scope.Close(v8::ThrowException(err));
}
@ -1492,6 +1493,7 @@ static v8::Handle<v8::Value> UpdateVocbaseCol (const bool useCollection,
if (! argv[1]->IsObject() || argv[1]->IsArray()) {
// we're only accepting "real" object documents
FREE_STRING(TRI_CORE_MEM_ZONE, key);
TRI_V8_EXCEPTION(scope, TRI_ERROR_ARANGO_DOCUMENT_TYPE_INVALID);
}

View File

@ -798,9 +798,14 @@ int TRI_StopReplicationApplier (TRI_replication_applier_t* applier,
res = StopApplier(applier, resetError);
TRI_WriteUnlockReadWriteLock(&applier->_statusLock);
// join the thread without the status lock (otherwise it would propbably not join)
TRI_JoinThread(&applier->_thread);
// join the thread without the status lock (otherwise it would probably not join)
if (res == TRI_ERROR_NO_ERROR) {
res = TRI_JoinThread(&applier->_thread);
}
else {
// keep original error code
TRI_JoinThread(&applier->_thread);
}
SetTerminateFlag(applier, false);
@ -1169,6 +1174,7 @@ void TRI_InitConfigurationReplicationApplier (TRI_replication_applier_configurat
config->_database = NULL;
config->_username = NULL;
config->_password = NULL;
config->_requestTimeout = 300.0;
config->_connectTimeout = 10.0;
config->_maxConnectRetries = 100;

View File

@ -1959,13 +1959,15 @@ int TRI_StartServer (TRI_server_t* server,
////////////////////////////////////////////////////////////////////////////////
int TRI_StopServer (TRI_server_t* server) {
int res;
// set shutdown flag
TRI_LockMutex(&server->_createLock);
server->_shutdown = true;
TRI_UnlockMutex(&server->_createLock);
// stop dbm thread
TRI_JoinThread(&server->_databaseManager);
res = TRI_JoinThread(&server->_databaseManager);
CloseDatabases(server);
@ -1976,7 +1978,7 @@ int TRI_StopServer (TRI_server_t* server) {
WriteShutdownInfo(server);
TRI_DestroyLockFile(server->_lockFilename);
return TRI_ERROR_NO_ERROR;
return res;
}
////////////////////////////////////////////////////////////////////////////////

View File

@ -1454,6 +1454,7 @@ TRI_vocbase_t* TRI_OpenVocBase (TRI_server_t* server,
void TRI_DestroyVocBase (TRI_vocbase_t* vocbase) {
TRI_vector_pointer_t collections;
int res;
size_t i;
TRI_InitVectorPointer(&collections, TRI_UNKNOWN_MEM_ZONE);
@ -1484,17 +1485,30 @@ void TRI_DestroyVocBase (TRI_vocbase_t* vocbase) {
#ifdef TRI_SKIPLIST_EX
// wait for the index garbage collector to finish what ever it is doing
TRI_JoinThread(&vocbase->_indexGC);
res = TRI_JoinThread(&vocbase->_indexGC);
if (res != TRI_ERROR_NO_ERROR) {
LOG_ERROR("unable to join indexgc thread: %s", TRI_errno_string(res));
}
#endif
// wait until synchroniser and compactor are finished
TRI_JoinThread(&vocbase->_synchroniser);
res = TRI_JoinThread(&vocbase->_synchroniser);
if (res != TRI_ERROR_NO_ERROR) {
LOG_ERROR("unable to join synchroniser thread: %s", TRI_errno_string(res));
}
TRI_LockCondition(&vocbase->_compactorCondition);
TRI_SignalCondition(&vocbase->_compactorCondition);
TRI_UnlockCondition(&vocbase->_compactorCondition);
TRI_JoinThread(&vocbase->_compactor);
res = TRI_JoinThread(&vocbase->_compactor);
if (res != TRI_ERROR_NO_ERROR) {
LOG_ERROR("unable to join compactor thread: %s", TRI_errno_string(res));
}
// this will signal the cleanup thread to do one last iteration
vocbase->_state = 3;
@ -1502,7 +1516,12 @@ void TRI_DestroyVocBase (TRI_vocbase_t* vocbase) {
TRI_LockCondition(&vocbase->_cleanupCondition);
TRI_SignalCondition(&vocbase->_cleanupCondition);
TRI_UnlockCondition(&vocbase->_cleanupCondition);
TRI_JoinThread(&vocbase->_cleanup);
res = TRI_JoinThread(&vocbase->_cleanup);
if (res != TRI_ERROR_NO_ERROR) {
LOG_ERROR("unable to join cleanup thread: %s", TRI_errno_string(res));
}
// free replication
TRI_FreeReplicationApplier(vocbase->_replicationApplier);

View File

@ -46,8 +46,14 @@ var API = "_api/collection";
/// @brief return a prefixed URL
////////////////////////////////////////////////////////////////////////////////
function databasePrefix (url) {
return "/_db/" + arangodb.db._name() + url;
function databasePrefix (req, url) {
if (req.hasOwnProperty('compatibility') && req.compatibility < 10400) {
// pre 1.4-style location response (e.g. /_api/collection/xyz)
return url;
}
// 1.4-style location response (e.g. /_db/dbname/_api/collection/xyz)
return "/_db/" + encodeURIComponent(arangodb.db._name()) + url;
}
////////////////////////////////////////////////////////////////////////////////
@ -275,7 +281,6 @@ function post_api_collection (req, res) {
}
var result = {};
var headers = {};
result.id = collection._id;
result.name = collection.name();
@ -286,7 +291,9 @@ function post_api_collection (req, res) {
result.type = collection.type();
result.keyOptions = collection.keyOptions;
headers.location = databasePrefix("/" + API + "/" + result.name);
var headers = {
location: databasePrefix(req, "/" + API + "/" + result.name)
};
actions.resultOk(req, res, actions.HTTP_OK, result, headers);
}
@ -785,7 +792,9 @@ function get_api_collection (req, res) {
if (req.suffix.length === 1) {
result = collectionRepresentation(collection, false, false, false);
headers = { location : databasePrefix("/" + API + "/" + collection.name()) };
headers = {
location : databasePrefix(req, "/" + API + "/" + collection.name())
};
actions.resultOk(req, res, actions.HTTP_OK, result, headers);
return;
}
@ -828,7 +837,9 @@ function get_api_collection (req, res) {
else if (sub === "figures") {
result = collectionRepresentation(collection, true, true, true);
headers = { location : databasePrefix("/" + API + "/" + collection.name() + "/figures") };
headers = {
location : databasePrefix(req, "/" + API + "/" + collection.name() + "/figures")
};
actions.resultOk(req, res, actions.HTTP_OK, result, headers);
}
@ -838,7 +849,9 @@ function get_api_collection (req, res) {
else if (sub === "count") {
result = collectionRepresentation(collection, true, true, false);
headers = { location : databasePrefix("/" + API + "/" + collection.name() + "/count") };
headers = {
location : databasePrefix(req, "/" + API + "/" + collection.name() + "/count")
};
actions.resultOk(req, res, actions.HTTP_OK, result, headers);
}
@ -848,7 +861,9 @@ function get_api_collection (req, res) {
else if (sub === "properties") {
result = collectionRepresentation(collection, true, false, false);
headers = { location : databasePrefix("/" + API + "/" + collection.name() + "/properties") };
headers = {
location : databasePrefix(req, "/" + API + "/" + collection.name() + "/properties")
};
actions.resultOk(req, res, actions.HTTP_OK, result, headers);
}
@ -858,7 +873,9 @@ function get_api_collection (req, res) {
else if (sub === "parameter") {
result = collectionRepresentation(collection, true, false, false);
headers = { location : databasePrefix("/" + API + "/" + collection.name() + "/parameter") };
headers = {
location : databasePrefix(req, "/" + API + "/" + collection.name() + "/parameter")
};
actions.resultOk(req, res, actions.HTTP_OK, result, headers);
}

View File

@ -463,6 +463,11 @@ function saveDocument(req, res, collection, document) {
"Etag" : doc._rev,
"location" : "/_api/structure/" + doc._id
};
if (req.hasOwnProperty('compatibility') && req.compatibility >= 10400) {
// 1.4+ style location header
headers.location = "/_db/" + encodeURIComponent(arangodb.db._name()) + headers.location;
}
var returnCode = waitForSync ? actions.HTTP_CREATED : actions.HTTP_ACCEPTED;
@ -480,9 +485,9 @@ function replaceDocument(req, res, collection, oldDocument, newDocument) {
var waitForSync = getWaitForSync(req, collection);
var overwrite = getOverwritePolicy(req);
if (!overwrite &&
undefined !== newDocument._rev &&
oldDocument._rev !== newDocument._rev) {
if (! overwrite &&
undefined !== newDocument._rev &&
oldDocument._rev !== newDocument._rev) {
resultError(req, res, actions.HTTP_BAD,
arangodb.ERROR_FAILED,
"wrong version");

View File

@ -6,6 +6,10 @@
"path": "api-docs/aqlfunction.{format}",
"description": "aqlfunction API"
},
{
"path": "api-docs/batch.{format}",
"description": "batch API"
},
{
"path": "api-docs/collection.{format}",
"description": "collection API"

View File

@ -0,0 +1,42 @@
{
"basePath": "/",
"swaggerVersion": "1.1",
"apiVersion": "0.1",
"apis": [
{
"operations": [
{
"errorResponses": [
{
"reason": "is returned if the batch was received successfully. HTTP 200 is returned even if one or multiple batch part actions failed. ",
"code": "200"
},
{
"reason": "is returned if the batch envelope is malformed or incorrectly formatted. This code will also be returned if the content-type of the overall batch request or the individual MIME parts is not as expected. ",
"code": "400"
},
{
"reason": "is returned when an invalid HTTP method is used. ",
"code": "405"
}
],
"parameters": [
{
"dataType": "String",
"paramType": "body",
"required": "true",
"name": "body",
"description": "The multipart batch request, consisting of the envelope and the individual batch parts. "
}
],
"notes": "Executes a batch request. A batch request can contain any number of other requests that can be sent to ArangoDB in isolation. The benefit of using batch requests is that batching requests requires less client/server roundtrips than when sending isolated requests. <br><br>All parts of a batch request are executed serially on the server. The server will return the results of all parts in a single response when all parts are finished. <br><br>Technically, a batch request is a multipart HTTP request, with content-type <em>multipart/form-data</em>. A batch request consists of an envelope and the individual batch part actions. Batch part actions are \"regular\" HTTP requests, including full header and an optional body. Multiple batch parts are separated by a boundary identifier. The boundary identifier is declared in the batch envelope. The MIME content-type for each individual batch part must be <em>application/x-arango-batchpart</em>. <br><br>The response sent by the server will be an <em>HTTP 200</em> response, with an error summary header <em>x-arango-errors</em>. This header contains the number of batch parts that failed with an HTTP error code of at least 400. <br><br>The response sent by the server is a multipart response, too. It contains the individual HTTP responses for all batch parts, including the full HTTP result header (with status code and other potential headers) and an optional result body. The individual batch parts in the result are seperated using the same boundary value as specified in the request. <br><br>The order of batch parts in the response will be the same as in the original client request. Client can additionally use the <em>Content-Id</em> MIME header in a batch part to define an individual id for each batch part. The server will return this id is the batch part responses, too. <br><br>",
"summary": "executes a batch request",
"httpMethod": "POST",
"examples": "<br><br><pre><code class=\"json\" >unix> curl -X POST --header 'Content-Type: multipart/form-data; boundary=SomeBoundaryValue' --data @- --dump - http://localhost:8529/_api/batch\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: myId1\r\n\r\nGET /_api/version HTTP/1.1\r\n\r\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: myId2\r\n\r\nDELETE /_api/collection/products HTTP/1.1\r\n\r\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: someId\r\n\r\nPOST /_api/collection/products HTTP/1.1\r\n\r\n{ \"name\": \"products\" }\r\n\r\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: nextId\r\n\r\nGET /_api/collection/products/figures HTTP/1.1\r\n\r\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: otherId\r\n\r\nDELETE /_api/collection/products HTTP/1.1\r\n--SomeBoundaryValue--\r\n\n\nHTTP/1.1 200 OK\ncontent-type: multipart/form-data; boundary=SomeBoundaryValue\nx-arango-errors: 1\n\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: myId1\r\n\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=utf-8\r\ncontent-length: 41\r\n\r\n{\"server\":\"arango\",\"version\":\"1.4.devel\"}\r\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: myId2\r\n\r\nHTTP/1.1 404 Not Found\r\ncontent-type: application/json; charset=utf-8\r\ncontent-length: 88\r\n\r\n{\"error\":true,\"code\":404,\"errorNum\":1203,\"errorMessage\":\"unknown collection 'products'\"}\r\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: someId\r\n\r\nHTTP/1.1 200 OK\r\nlocation: /_db/_system/_api/collection/products\r\ncontent-type: application/json; charset=utf-8\r\ncontent-length: 137\r\n\r\n{\"id\":\"330952107\",\"name\":\"products\",\"waitForSync\":false,\"isVolatile\":false,\"isSystem\":false,\"status\":3,\"type\":2,\"error\":false,\"code\":200}\r\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: nextId\r\n\r\nHTTP/1.1 200 OK\r\nlocation: /_db/_system/_api/collection/products/figures\r\ncontent-type: application/json; charset=utf-8\r\ncontent-length: 526\r\n\r\n{\"id\":\"330952107\",\"name\":\"products\",\"doCompact\":true,\"isVolatile\":false,\"isSystem\":false,\"journalSize\":1048576,\"keyOptions\":{\"type\":\"traditional\",\"allowUserKeys\":true},\"waitForSync\":false,\"count\":0,\"figures\":{\"alive\":{\"count\":0,\"size\":0},\"dead\":{\"count\":0,\"size\":0,\"deletion\":0},\"datafiles\":{\"count\":0,\"fileSize\":0},\"journals\":{\"count\":0,\"fileSize\":0},\"compactors\":{\"count\":0,\"fileSize\":0},\"shapefiles\":{\"count\":1,\"fileSize\":2097152},\"shapes\":{\"count\":6},\"attributes\":{\"count\":0}},\"status\":3,\"type\":2,\"error\":false,\"code\":200}\r\n--SomeBoundaryValue\r\nContent-Type: application/x-arango-batchpart\r\nContent-Id: otherId\r\n\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=utf-8\r\ncontent-length: 43\r\n\r\n{\"id\":\"330952107\",\"error\":false,\"code\":200}\r\n--SomeBoundaryValue--\n\n</code></pre><br>",
"nickname": "executesABatchRequest"
}
],
"path": "/_api/batch"
}
]
}

View File

@ -17,7 +17,7 @@
},
{
"name" : "Sequences",
"value": "/* Returns the sequence of integers between 2010 and 2013 (including) */FOR year IN 2010..2013\n RETURN year"
"value": "/* Returns the sequence of integers between 2010 and 2013 (including) */\nFOR year IN 2010..2013\n RETURN year"
},
{
"name" : "Bind parameters",

View File

@ -18,6 +18,15 @@
cursor: default !important;
}
.docLink {
}
.docPreview {
text-align: right;
float: right;
margin-right: -17px !important;
}
#collectionPrev, #collectionNext{
cursor: pointer;
}
@ -70,18 +79,18 @@
color: #8AA050;
}
#deleteRow {
.deleteAttribute {
color: #B30000;
}
#editSecondRow, #editFirstRow {
.editSecondAttribute, .editFirstAttribute {
color: #444444;
font-size: 16px;
float:right;
padding-top: 4px;
}
#addRow, #deleteRow {
.addAttribute, .deleteAttribute {
font-size: 20px;
float:right;
padding-top: 2px;
@ -114,13 +123,13 @@ table.dataTable thead th {
cursor: default !important;
}
#editFirstRow {
.editFirstAttribute {
float:right;
text-align:right;
margin-right: -17px !important;
}
#editSecondRow {
.editSecondAttribute {
float:right;
text-align:right;
margin-right: -30px !important;

View File

@ -17,6 +17,15 @@ window.arangoHelper = {
'$id' : true
};
},
fixTooltips: function (selector, placement) {
$(selector).tooltip({
placement: placement,
hide: false,
show: false
});
},
removeNotifications: function () {
$.gritter.removeAll();
this.lastNotificationMessage = null;
@ -201,9 +210,4 @@ window.arangoHelper = {
return typeof v;
}
};

View File

@ -125,7 +125,7 @@ window.arangoDocument = Backbone.Collection.extend({
result = true;
},
error: function(data) {
result = false;
result = false;
}
});
return result;
@ -145,7 +145,7 @@ window.arangoDocument = Backbone.Collection.extend({
result = true;
},
error: function(data) {
result = false;
result = false;
}
});
return result;

View File

@ -22,8 +22,7 @@
<th>type</th>
<th>&nbsp;</th>
<th>
<!--<button class="enabled" id="addRow"><img id="addDocumentLine" class="plusIcon" src="img/plus_icon.png"></button>--!>
<a id="addRow"><span class="glyphicon glyphicon-plus-sign"></span></a>
<a class="addAttribute"><span class="glyphicon glyphicon-plus-sign" title="Add attribute"></span></a>
</th>
</tr>
</thead>

View File

@ -7,8 +7,8 @@
<ul><li class="enabled"><a id="filterCollection"><span class="glyphicon glyphicon-filter" title="Filter collection"></span></a></li></ul>
<ul><li class="enabled"><a id="importCollection"><span class="arangoicon arango-icon-import" title="Upload documents"></span></a></li></ul>
<ul>
<li class="enabled"><a id="collectionPrev"><span class="glyphicon glyphicon-chevron-left"></span></a></li>
<li class="enabled"><a id="collectionNext"><span class="glyphicon glyphicon-chevron-right"></span></a></li>
<li class="enabled"><a id="collectionPrev"><span class="glyphicon glyphicon-chevron-left" title="Previous collection"></span></a></li>
<li class="enabled"><a id="collectionNext"><span class="glyphicon glyphicon-chevron-right" title="Next collection"></span></a></li>
</ul>
</div>
@ -64,7 +64,7 @@
<th class="collectionTh">Type:</th>
<th class="">
<select id="newIndexType">
<option value="Cap">Cap Constraints</option>
<option value="Cap">Cap Constraint</option>
<option value="Geo">Geo Index</option>
<option value="Hash">Hash Index</option>
<option value="Fulltext">Fulltext Index</option>

View File

@ -3,7 +3,7 @@
<% var appInfos = attributes.app.split(":"); %>
<h5 class="applicationName"><%= appInfos[1] %><%= attributes.isSystem ? " (system)" : "" %><%= appInfos[0] === "dev" ? " (dev)" : ""%></h5>
<div class="pull-right">
<span class="glyphicon glyphicon-info-sign" alt="Show API documentation" title="Show API documentation"></span>
<span class="glyphicon glyphicon-info-sign" title="Show API documentation"></span>
</div>
<img src="/_admin/aardvark/foxxes/thumbnail/<%=attributes.app %>" alt="icon" class="foxxIcon"/>
<p class="foxxDescription">

View File

@ -1,5 +1,5 @@
/*jslint indent: 2, nomen: true, maxlen: 100, sloppy: true, vars: true, white: true, plusplus: true */
/*global Backbone, EJS, $, window, _ */
/*global Backbone, EJS, $, window, arangoHelper, _ */
window.ApplicationsView = Backbone.View.extend({
el: '#content',
@ -120,6 +120,8 @@ window.ApplicationsView = Backbone.View.extend({
v.toggle("active", self._showActive);
v.toggle("inactive", self._showInactive);
});
arangoHelper.fixTooltips(".glyphicon", "left");
return this;
}
});

View File

@ -3,7 +3,6 @@
var collectionInfoView = Backbone.View.extend({
el: '#modalPlaceholder',
chart: null,
initialize: function () {
},
@ -22,6 +21,7 @@ var collectionInfoView = Backbone.View.extend({
$('#infoTab a').click(function (e) {
e.preventDefault();
$(this).tab('show');
$(this).focus();
});
return this;
@ -29,11 +29,6 @@ var collectionInfoView = Backbone.View.extend({
events: {
"hidden #show-collection" : "hidden"
},
listenKey: function(e) {
if (e.keyCode === 13) {
this.saveModifiedCollection();
}
},
hidden: function () {
window.App.navigate("#collections", {trigger: true});
},
@ -47,7 +42,7 @@ var collectionInfoView = Backbone.View.extend({
return;
}
$('#show-collection-name').text("Collection: "+this.myCollection.name);
$('#show-collection-name').text("Collection: " + this.myCollection.name);
$('#show-collection-id').text(this.myCollection.id);
$('#show-collection-type').text(this.myCollection.type);
$('#show-collection-status').text(this.myCollection.status);
@ -59,51 +54,12 @@ var collectionInfoView = Backbone.View.extend({
//remove
this.index = window.arangoCollectionsStore.getIndex(this.options.colId, true);
this.fillLoadedModal(this.data);
//this.convertFigures(this.data);
//this.renderFigures();
}
},
renderFigures: function () {
var self = this;
// prevent some d3-internal races with a timeout
window.setTimeout(function () {
var chart = nv.models.pieChart()
.x(function(d) { return d.label; })
.y(function(d) { return d.value; })
.showLabels(true);
nv.addGraph(function() {
d3.select(".modal-body-right svg")
.datum(self.convertFigures())
.transition().duration(1200)
.call(chart);
return chart;
});
return chart;
}, 500);
},
convertFigures: function () {
var self = this;
var collValues = [];
if (self.data && self.data.figures) {
$.each(self.data.figures, function(k,v) {
collValues.push({
"label" : k,
"value" : v.count
});
});
}
return [{
key: "Collections Status",
values: collValues
}];
},
roundNumber: function(number, n) {
var faktor;
faktor = Math.pow(10,n);
var returnVal = (Math.round(number * faktor) / faktor);
var factor;
factor = Math.pow(10,n);
var returnVal = (Math.round(number * factor) / factor);
return returnVal;
},
appendFigures: function () {
@ -122,7 +78,7 @@ var collectionInfoView = Backbone.View.extend({
'<th class="'+cssClass+'">Datafiles</th>'+
'<th class="'+cssClass+'">'+this.data.figures.datafiles.count+'</th>'+
'<th class="'+cssClass+'">'+
this.roundNumber(this.data.figures.datafiles.fileSize / 1024 / 1024 , 2)+
this.roundNumber(this.data.figures.datafiles.fileSize / 1024 / 1024, 2)+
'</th>'+
'<th class="tooltipInfoTh '+cssClass+'">'+
'<a class="modalInfoTooltips" title="Number of active datafiles.">'+
@ -133,7 +89,7 @@ var collectionInfoView = Backbone.View.extend({
'<th class="'+cssClass+'">Journals</th>'+
'<th class="'+cssClass+'">'+this.data.figures.journals.count+'</th>'+
'<th class="'+cssClass+'">'+
this.roundNumber(this.data.figures.journals.fileSize / 1024 / 1024 , 2)+
this.roundNumber(this.data.figures.journals.fileSize / 1024 / 1024, 2)+
'</th>'+
'<th class="tooltipInfoTh '+cssClass+'">'+
'<a class="modalInfoTooltips" title="Number of journal files.">'+
@ -144,7 +100,7 @@ var collectionInfoView = Backbone.View.extend({
'<th class="'+cssClass+'">Compactors</th>'+
'<th class="'+cssClass+'">'+this.data.figures.compactors.count+'</th>'+
'<th class="'+cssClass+'">'+
this.roundNumber(this.data.figures.compactors.fileSize / 1024 / 1024 , 2)+
this.roundNumber(this.data.figures.compactors.fileSize / 1024 / 1024, 2)+
'</th>'+
'<th class="tooltipInfoTh '+cssClass+'">'+
'<a class="modalInfoTooltips" title="Number of compactor files.">'+
@ -155,7 +111,7 @@ var collectionInfoView = Backbone.View.extend({
'<th class="'+cssClass+'">Shape files</th>'+
'<th class="'+cssClass+'">'+this.data.figures.shapefiles.count+'</th>'+
'<th class="'+cssClass+'">'+
this.roundNumber(this.data.figures.shapefiles.fileSize / 1024 / 1024 , 2)+
this.roundNumber(this.data.figures.shapefiles.fileSize / 1024 / 1024, 2)+
'</th>'+
'<th class="tooltipInfoTh '+cssClass+'">'+
'<a class="modalInfoTooltips" title="Number of shape files.">'+
@ -267,7 +223,7 @@ var collectionInfoView = Backbone.View.extend({
$('#show-collection-sync').text('true');
}
var calculatedSize = data.journalSize / 1024 / 1024;
$('#show-collection-size').text(this.roundNumber(calculatedSize,2));
$('#show-collection-size').text(this.roundNumber(calculatedSize, 2));
$('#show-collection-rev').text(this.revision.revision);
this.appendIndex();

View File

@ -1,5 +1,5 @@
/*jslint indent: 2, nomen: true, maxlen: 100, sloppy: true, vars: true, white: true, plusplus: true */
/*global require, exports, Backbone, EJS, window, setTimeout, clearTimeout, $*/
/*global require, exports, Backbone, EJS, window, setTimeout, clearTimeout, arangoHelper, $*/
var collectionsView = Backbone.View.extend({
el: '#content',
@ -41,17 +41,9 @@ var collectionsView = Backbone.View.extend({
}, this);
//append info icon for loaded collections
/*
$('.loaded').parent().prev().append(
'<i class="icon-info-sign show-info-view" alt="Show collection properties"'+
'title="Show collection properties"></i>'
);
$('.unloaded').parent().prev().append(
'<i class="icon-info-sign disabled-info-view" alt="disabled"'+
'title="disabled"></i>'
);*/
$('.loaded').parent().prev().append(
'<span class="glyphicon glyphicon-info-sign spanInfo ICON" alt="Collection properties"</span>'
'<span class="glyphicon glyphicon-info-sign spanInfo ICON" ' +
'title="Show collection properties"</span>'
);
$('.unloaded').parent().prev().append(
'<span class="glyphicon glyphicon-info-sign spanDisabled ICON" alt="disabled"</span>'
@ -63,6 +55,7 @@ var collectionsView = Backbone.View.extend({
$('#searchInput').val('');
$('#searchInput').val(val);
arangoHelper.fixTooltips(".glyphicon, .arangoicon", "left");
return this;
},

View File

@ -209,6 +209,8 @@ var dashboardView = Backbone.View.extend({
self.updateNOW = true;
$(this.el).html(this.template.text);
this.getReplicationStatus();
arangoHelper.fixTooltips(".glyphicon", "top");
var counter = 1;
@ -634,10 +636,10 @@ var dashboardView = Backbone.View.extend({
'<li class="statClient" id="' + figure.identifier + '">' +
'<div class="boxHeader"><h6 class="dashboardH6">' + figure.name +
'</h6>'+
'<i class="icon-remove db-hide" value="'+figure.identifier+'"></i>' +
'<i class="icon-info-sign db-info" value="'+figure.identifier+
'" title="'+figure.description+'"></i>' +
'<i class="icon-zoom-in db-zoom" value="'+figure.identifier+'"></i>' +
'<i class="icon-remove db-hide" value="' + figure.identifier + '"></i>' +
'<i class="icon-info-sign db-info" value="' + figure.identifier +
'" title="' + figure.description + '"></i>' +
'<i class="icon-zoom-in db-zoom" value="' + figure.identifier + '"></i>' +
'</div>' +
'<div class="statChart" id="' + figure.identifier + 'Chart"><svg class="svgClass"/></div>' +
'</li>'
@ -650,10 +652,7 @@ var dashboardView = Backbone.View.extend({
figure.name + '</label></a></li>'
);
//tooltips small charts
$('.db-info').tooltip({
placement: "top",
delay: {show: 100, hide: 100}
});
arangoHelper.fixTooltips(".db-info", "top");
}
});
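
Several views in this commit replace their inline `$(...).tooltip({...})` calls with a shared `arangoHelper.fixTooltips(selector, placement)` helper. The helper's implementation is not part of the hunks shown here; the following is only a minimal sketch of what such a helper might look like, assuming jQuery and the Bootstrap tooltip plugin are loaded (the delay values are an assumption):

    // sketch only -- the real arangoHelper implementation is not shown in this diff
    window.arangoHelper = window.arangoHelper || {};

    arangoHelper.fixTooltips = function (selector, placement) {
      // (re)bind Bootstrap tooltips for all elements matching the selector
      $(selector).tooltip({
        placement: placement,
        delay: { show: 100, hide: 100 }   // assumed delays
      });
    };

    // usage, as in the views above:
    // arangoHelper.fixTooltips(".glyphicon, .arangoicon", "top");
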

View File

@ -7,23 +7,24 @@ var documentView = Backbone.View.extend({
colid: 0,
docid: 0,
currentKey: 0,
documentCache: { },
init: function () {
this.initTable();
},
events: {
"click #saveDocument" : "saveDocument",
"click #addDocumentLine" : "addLine",
"click #addRow" : "addLine",
"click #documentTableID #deleteRow" : "deleteLine",
"click #sourceView" : "sourceView",
"click #editFirstRow" : "editFirst",
"click #documentTableID tr" : "clicked",
"click #editSecondRow" : "editSecond",
"keydown .sorting_1" : "listenKey",
"keydown #documentviewMain" : "listenGlobalKey",
"blur #documentviewMain textarea" : "checkFocus"
"click #saveDocument" : "saveDocument",
"click #addDocumentLine" : "addLine",
"click .addAttribute" : "addLine",
"click #documentTableID .deleteAttribute" : "deleteLine",
"click #sourceView" : "sourceView",
"click #editFirstAttribute" : "editFirst",
"click #documentTableID tr" : "clicked",
"click .editSecondAttribute" : "editSecond",
"keydown .sorting_1" : "listenKey",
"keydown #documentviewMain" : "listenGlobalKey",
"blur #documentviewMain textarea" : "checkFocus"
},
checkFocus: function(e) {
@ -71,6 +72,9 @@ var documentView = Backbone.View.extend({
this.drawTable();
}
}
arangoHelper.fixTooltips(".glyphicon", "left");
arangoHelper.fixTooltips(".docLink", "top");
},
clicked: function (a) {
var self = a.currentTarget;
@ -111,7 +115,7 @@ var documentView = Backbone.View.extend({
result = window.arangoDocumentStore.saveDocument(this.colid, this.docid, model);
if (result === true) {
arangoHelper.arangoNotification('Document saved');
$('#addRow').removeClass('disabledBtn');
$('.addAttribute').removeClass('disabledBtn');
$('td').removeClass('validateError');
}
else if (result === false) {
@ -124,7 +128,7 @@ var documentView = Backbone.View.extend({
result = window.arangoDocumentStore.saveEdge(this.colid, this.docid, model);
if (result === true) {
arangoHelper.arangoNotification('Edge saved');
$('#addRow').removeClass('disabledBtn');
$('.addAttribute').removeClass('disabledBtn');
$('td').removeClass('validateError');
}
else if (result === false) {
@ -138,56 +142,92 @@ var documentView = Backbone.View.extend({
'<div class="breadcrumb">'+
'<a href="#" class="activeBread">Collections</a>'+
' > '+
'<a class="activeBread" href="#collection/'+name[1]+'/documents/1">'+name[1]+'</a>'+
'<a class="activeBread" href="#collection/' + name[1] + '/documents/1">' + name[1] + '</a>'+
' > '+
'<a class="disabledBread">'+name[2]+'</a>'+
'<a class="disabledBread">' + name[2] + '</a>'+
'</div>'
);
},
getLinkedDoc: function (handle) {
var self = this;
if (! self.documentCache.hasOwnProperty(handle)) {
$.ajax({
cache: false,
type: "GET",
async: false,
url: "/_api/document/" + handle,
contentType: "application/json",
processData: false,
success: function(data) {
self.documentCache[handle] = data;
},
error: function(data) {
self.documentCache[handle] = null;
}
});
}
return self.documentCache[handle];
},
drawTable: function () {
var self = this;
/* $(self.table).dataTable().fnAddData([
'<div class="notwriteable"></div>',
'<div class="notwriteable"></div>',
'<a class="add" class="notwriteable" id="addDocumentLine"> </a>',
'<div class="notwriteable"></div>',
'<div class="notwriteable"></div>',
'<button class="enabled" id="addRow"><img id="addDocumentLine"'+
'class="plusIcon" src="img/plus_icon.png"></button>'
]);*/
$.each(window.arangoDocumentStore.models[0].attributes, function(key, value) {
if (arangoHelper.isSystemAttribute(key)) {
var preview = "";
var html;
if (key === "_from" || key === "_to") {
var linkedDoc = self.getLinkedDoc(value);
if (linkedDoc !== null && linkedDoc !== undefined) {
preview = '<span class="docPreview glyphicon glyphicon-info-sign" title="' +
self.escaped(JSON.stringify(linkedDoc)) + '"></span>';
html = '<a href="#collection/' + value +
'" class="docLink" title="Go to document">' + self.escaped(value) +
'</a>';
}
else {
html = self.escaped(value);
}
}
else {
html = self.value2html(value, true);
}
$(self.table).dataTable().fnAddData([
key,
'',
self.value2html(value, true),
preview,
html,
JSON.stringify(value, null, 4),
"",
""
]);
}
else {
$(self.table).dataTable().fnAddData(
[
key,
'<a id="editFirstRow"><span class="glyphicon glyphicon-edit"></span></a>',
self.value2html(value),
JSON.stringify(value, null, 4),
'<a id="editSecondRow"><span class="glyphicon glyphicon-edit"></span></a>',
'<a id="deleteRow"><span class="glyphicon glyphicon-minus-sign"></span></a>'
$(self.table).dataTable().fnAddData([
key,
'<a class="editFirstAttribute"><span class="glyphicon glyphicon-edit"></span></a>',
self.value2html(value),
JSON.stringify(value, null, 4),
'<a class="editSecondAttribute"><span class="glyphicon glyphicon-edit"></span></a>',
'<a class="deleteAttribute"><span class="glyphicon glyphicon-minus-sign" ' +
'title="Delete attribute"></span></a>'
]);
}
});
this.makeEditable();
$(this.table).dataTable().fnSort([ [0,'asc'] ]);
$(this.table).dataTable().fnSort([ [0, 'asc'] ]);
},
addLine: function (event) {
if ($('#addRow').hasClass('disabledBtn') === true) {
if ($('.addAttribute').hasClass('disabledBtn') === true) {
return;
}
$('#addRow').addClass('disabledBtn');
$('.addAttribute').addClass('disabledBtn');
//event.stopPropagation();
var randomKey = arangoHelper.getRandomToken();
var self = this;
@ -195,11 +235,12 @@ var documentView = Backbone.View.extend({
$(this.table).dataTable().fnAddData(
[
self.currentKey,
'<a id="editFirstRow"><span class="glyphicon glyphicon-edit"></span></a>',
'<a class="editFirstAttribute"><span class="glyphicon glyphicon-edit"></span></a>',
this.value2html("editme"),
JSON.stringify("editme"),
'<a id="editSecondRow"><span class="glyphicon glyphicon-edit"></span></a>',
'<a id="deleteRow"><span class="glyphicon glyphicon-minus-sign"></span></a>'
'<a class="editSecondAttribute"><span class="glyphicon glyphicon-edit"></span></a>',
'<a class="deleteAttribute"><span class="glyphicon glyphicon-minus-sign" ' +
'title="Delete attribute"></span></a>'
]
);
this.makeEditable();
@ -211,6 +252,8 @@ var documentView = Backbone.View.extend({
return;
}
});
arangoHelper.fixTooltips(".glyphicon", "left");
},
deleteLine: function (a) {
@ -289,7 +332,7 @@ var documentView = Backbone.View.extend({
var i = 0;
$('.writeable', documentEditTable.fnGetNodes() ).each(function () {
var aPos = documentEditTable.fnGetPosition(this);
if ( i === 1) {
if (i === 1) {
$(this).removeClass('writeable');
i = 0;
}
@ -359,7 +402,7 @@ var documentView = Backbone.View.extend({
$.each(data, function(key, val) {
if (val[0] === currentKey2) {
$('#documentTableID').dataTable().fnDeleteRow(key);
$('#addRow').removeClass('disabledBtn');
$('.addAttribute').removeClass('disabledBtn');
}
});
}

View File

@ -335,10 +335,7 @@ var documentsView = Backbone.View.extend({
if (doctype === 'edge') {
$('#edgeCreateModal').modal('show');
$('.modalTooltips').tooltip({
placement: "left",
delay: {show: 3000, hide: 100}
});
arangoHelper.fixTooltips(".modalTooltips", "left");
}
else {
var result = window.arangoDocumentStore.createTypeDocument(collid);
@ -534,13 +531,15 @@ var documentsView = Backbone.View.extend({
+ value.attributes.key
+ '</div>',
/* '<button class="enabled" id="deleteDoc">'
+ '<img src="img/icon_delete.png" width="16" height="16"></button>'*/
'<a id="deleteDoc"><span class="glyphicon glyphicon-minus-sign" data-original-title="'
+'Add a document"></span><a>'
]
+'Delete document" title="Delete document"></span><a>'
]
);
});
// we added some icons, so we need to fix their tooltips
arangoHelper.fixTooltips(".glyphicon, .arangoicon", "top");
$(".prettify").snippet("javascript", {
style: "nedit",
menu: false,
@ -548,6 +547,7 @@ var documentsView = Backbone.View.extend({
transparent: true,
showNum: false
});
}
this.totalPages = window.arangoDocumentsStore.totalPages;
this.currentPage = window.arangoDocumentsStore.currentPage;
@ -582,11 +582,8 @@ var documentsView = Backbone.View.extend({
$('.modalImportTooltips').tooltip({
placement: "left"
});
$('.glyphicon, .arangoicon').tooltip({
placement: "top",
delay: {show: 3000, hide: 100}
});
arangoHelper.fixTooltips(".glyphicon, .arangoicon", "top");
return this;
},
@ -672,6 +669,7 @@ var documentsView = Backbone.View.extend({
resetIndexForms: function () {
$('#indexHeader input').val('').prop("checked", false);
$('#newIndexType').val('Cap').prop('selected',true);
this.selectIndexType();
},
stringToArray: function (fieldString) {
var fields = [];
@ -802,14 +800,13 @@ var documentsView = Backbone.View.extend({
var actionString = '';
$.each(this.index.indexes, function(k,v) {
if (v.type === 'primary' || v.type === 'edge') {
actionString = '<span class="glyphicon glyphicon-ban-circle" ' +
'data-original-title="No action"></span>';
}
else {
actionString = '<span class="deleteIndex glyphicon glyphicon-minus-sign" ' +
'data-original-title="Delete index"></span>';
'data-original-title="Delete index" title="Delete index"></span>';
}
if (v.fields !== undefined) {
@ -830,6 +827,8 @@ var documentsView = Backbone.View.extend({
'</tr>'
);
});
arangoHelper.fixTooltips("deleteIndex", "left");
}
}
});

View File

@ -1,5 +1,5 @@
/*jslint indent: 2, nomen: true, maxlen: 100, sloppy: true, vars: true, white: true, plusplus: true */
/*global Backbone, $, window, EJS, _ */
/*global Backbone, $, window, EJS, arangoHelper, _ */
window.FoxxActiveView = Backbone.View.extend({
tagName: 'li',
@ -60,6 +60,7 @@ window.FoxxActiveView = Backbone.View.extend({
if (this._show) {
$(this.el).html(this.template.render(this.model));
}
return $(this.el);
}
});

View File

@ -3,7 +3,6 @@
var FoxxInstalledListView = Backbone.View.extend({
el: '#content',
//template: new EJS({url: 'js/templates/foxxListView.ejs'}),
template: new EJS({url: 'js/templates/applicationsView.ejs'}),
events: {

View File

@ -20,7 +20,7 @@ window.foxxMountView = Backbone.View.extend({
return this;
},
events: {
"hidden #install-foxx" : "hidden",
"hidden #install-foxx" : "hidden",
"click #cancel" : "hideModal",
"click #install" : "install"
},

View File

@ -165,10 +165,7 @@ var queryView = Backbone.View.extend({
}
});
$('.queryTooltips').tooltip({
placement: "top",
delay: {show: 3000, hide: 100}
});
arangoHelper.fixTooltips(".queryTooltips, .glyphicon", "top");
$('#aqlEditor .ace_text-input').focus();
$.gritter.removeAll();

View File

@ -674,6 +674,33 @@ function DOCUMENT (collection, id) {
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief get a document by its unique id, or multiple documents by their unique ids
////////////////////////////////////////////////////////////////////////////////
function DOCUMENT_HANDLE (id) {
"use strict";
if (TYPEWEIGHT(id) === TYPEWEIGHT_LIST) {
var result = [ ], i;
for (i = 0; i < id.length; ++i) {
try {
result.push(INTERNAL.db._document(id[i]));
}
catch (e1) {
}
}
return result;
}
try {
return INTERNAL.db._document(id);
}
catch (e2) {
return null;
}
}
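
For illustration of the `DOCUMENT_HANDLE` helper added above (the module path and the `users` collection below are assumptions, not part of this commit): a single handle resolves to the document or to `null`, while a list of handles resolves to a list that silently skips handles which cannot be found.

    // illustration only; module path and collection name are hypothetical
    var aql = require("org/arangodb/ahuacatl");

    aql.DOCUMENT_HANDLE("users/12345");
    // -> the document, or null if it cannot be found

    aql.DOCUMENT_HANDLE([ "users/12345", "users/does-not-exist" ]);
    // -> [ <document for users/12345> ]  (missing handles are skipped)
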
////////////////////////////////////////////////////////////////////////////////
/// @brief get all documents from the specified collection
////////////////////////////////////////////////////////////////////////////////
@ -3923,6 +3950,7 @@ exports.GET_INDEX = GET_INDEX;
exports.DOCUMENT_MEMBER = DOCUMENT_MEMBER;
exports.LIST = LIST;
exports.DOCUMENT = DOCUMENT;
exports.DOCUMENT_HANDLE = DOCUMENT_HANDLE;
exports.GET_DOCUMENTS = GET_DOCUMENTS;
exports.GET_DOCUMENTS_INCREMENTAL_INIT = GET_DOCUMENTS_INCREMENTAL_INIT;
exports.GET_DOCUMENTS_INCREMENTAL_CONT = GET_DOCUMENTS_INCREMENTAL_CONT;

View File

@ -357,7 +357,7 @@ HttpHandler::status_e RestAdminLogHandler::execute () {
(sortAscending ? LidCompareAsc : LidCompareDesc));
for (size_t i = 0; i < length; ++i) {
TRI_log_buffer_t* buf = (TRI_log_buffer_t*) TRI_AtVector(&clean, offset + i);
TRI_log_buffer_t* buf = (TRI_log_buffer_t*) TRI_AtVector(&clean, (size_t) (offset + i));
uint32_t l = 0;
switch (buf->_level) {

View File

@ -372,7 +372,7 @@ void RestJobHandler::getJob () {
char const* value = _request->value("count", found);
if (found) {
count = StringUtils::uint64(value);
count = (size_t) StringUtils::uint64(value);
}
vector<AsyncJobResult::IdType> ids;

View File

@ -404,7 +404,7 @@ namespace triagens {
}
for (uint64_t j = 0; j < oldAlloc; j++) {
if (!_desc.isEmptyElement(oldTable[j])) {
if (! _desc.isEmptyElement(oldTable[j])) {
addNewElement(oldTable[j]);
}
}

View File

@ -194,26 +194,28 @@ bool Thread::start (ConditionVariable * finishedCondition) {
/// @brief stops the thread
////////////////////////////////////////////////////////////////////////////////
void Thread::stop () {
int Thread::stop () {
if (_running != 0) {
LOG_TRACE("trying to cancel (aka stop) the thread '%s'", _name.c_str());
TRI_StopThread(&_thread);
return TRI_StopThread(&_thread);
}
return TRI_ERROR_NO_ERROR;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief joins the thread
////////////////////////////////////////////////////////////////////////////////
void Thread::join () {
TRI_JoinThread(&_thread);
int Thread::join () {
return TRI_JoinThread(&_thread);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief stops and joins the thread
////////////////////////////////////////////////////////////////////////////////
void Thread::shutdown () {
int Thread::shutdown () {
size_t const MAX_TRIES = 10;
size_t const WAIT = 10000;
@ -226,11 +228,14 @@ void Thread::shutdown () {
}
if (_running != 0) {
LOG_TRACE("trying to cancel (aka stop) the thread '%s'", _name.c_str());
TRI_StopThread(&_thread);
int res = TRI_StopThread(&_thread);
if (res != TRI_ERROR_NO_ERROR) {
LOG_ERROR("unable to stop thread %s", _name.c_str());
}
}
TRI_JoinThread(&_thread);
return TRI_JoinThread(&_thread);
}
////////////////////////////////////////////////////////////////////////////////

View File

@ -156,19 +156,19 @@ namespace triagens {
/// @brief stops the thread
////////////////////////////////////////////////////////////////////////////////
void stop ();
int stop ();
////////////////////////////////////////////////////////////////////////////////
/// @brief joins the thread
////////////////////////////////////////////////////////////////////////////////
void join ();
int join ();
////////////////////////////////////////////////////////////////////////////////
/// @brief stops and joins the thread
////////////////////////////////////////////////////////////////////////////////
void shutdown ();
int shutdown ();
////////////////////////////////////////////////////////////////////////////////
/// @brief send signal to thread

View File

@ -1647,6 +1647,7 @@ int TRI_Crc32File (char const* path, uint32_t* crc) {
void* buffer;
int bufferSize;
int res;
int res2;
*crc = TRI_InitialCrc32();
@ -1684,11 +1685,17 @@ int TRI_Crc32File (char const* path, uint32_t* crc) {
break;
}
}
fclose(fin);
TRI_Free(TRI_UNKNOWN_MEM_ZONE, buffer);
res2 = fclose(fin);
if (res2 != TRI_ERROR_NO_ERROR && res2 != EOF) {
if (res == TRI_ERROR_NO_ERROR) {
res = res2;
}
// otherwise keep original error
}
*crc = TRI_FinalCrc32(*crc);
return res;

View File

@ -75,16 +75,16 @@
/// @brief initialises a new mutex
////////////////////////////////////////////////////////////////////////////////
void TRI_InitMutex (TRI_mutex_t* mutex) {
pthread_mutex_init(mutex, 0);
int TRI_InitMutex (TRI_mutex_t* mutex) {
return pthread_mutex_init(mutex, 0);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief destroys a mutex
////////////////////////////////////////////////////////////////////////////////
void TRI_DestroyMutex (TRI_mutex_t* mutex) {
pthread_mutex_destroy(mutex);
int TRI_DestroyMutex (TRI_mutex_t* mutex) {
return pthread_mutex_destroy(mutex);
}
////////////////////////////////////////////////////////////////////////////////

View File

@ -46,20 +46,28 @@
/// @brief initialises a new mutex
////////////////////////////////////////////////////////////////////////////////
void TRI_InitMutex (TRI_mutex_t* mutex) {
int TRI_InitMutex (TRI_mutex_t* mutex) {
mutex->_mutex = CreateMutex(NULL, FALSE, NULL);
if (mutex->_mutex == NULL) {
LOG_FATAL_AND_EXIT("cannot create the mutex");
}
return TRI_ERROR_NO_ERROR;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief destroys a mutex
////////////////////////////////////////////////////////////////////////////////
void TRI_DestroyMutex (TRI_mutex_t* mutex) {
CloseHandle(mutex->_mutex);
int TRI_DestroyMutex (TRI_mutex_t* mutex) {
if (CloseHandle(mutex->_mutex) == 0) {
DWORD result = GetLastError();
LOG_FATAL_AND_EXIT("locks-win32.c:TRI_DestroyMutex:could not destroy the mutex -->%d",result);
}
return TRI_ERROR_NO_ERROR;
}
////////////////////////////////////////////////////////////////////////////////
@ -83,7 +91,6 @@ void TRI_LockMutex (TRI_mutex_t* mutex) {
DWORD result = WaitForSingleObject(mutex->_mutex, INFINITE);
switch (result) {
case WAIT_ABANDONED: {
LOG_FATAL_AND_EXIT("locks-win32.c:TRI_LockMutex:could not lock the condition --> WAIT_ABANDONED");
}
@ -101,7 +108,6 @@ void TRI_LockMutex (TRI_mutex_t* mutex) {
result = GetLastError();
LOG_FATAL_AND_EXIT("locks-win32.c:TRI_LockMutex:could not lock the condition --> WAIT_FAILED - reason -->%d",result);
}
}
}

View File

@ -97,13 +97,13 @@ extern "C" {
/// implements mutual exclusion. For details see www.wikipedia.org.
////////////////////////////////////////////////////////////////////////////////
void TRI_InitMutex (TRI_mutex_t*);
int TRI_InitMutex (TRI_mutex_t*);
////////////////////////////////////////////////////////////////////////////////
/// @brief destroyes a mutex
/// @brief destroys a mutex
////////////////////////////////////////////////////////////////////////////////
void TRI_DestroyMutex (TRI_mutex_t*);
int TRI_DestroyMutex (TRI_mutex_t*);
////////////////////////////////////////////////////////////////////////////////
/// @}

View File

@ -30,6 +30,7 @@
#endif
#include "BasicsC/logging.h"
#include "BasicsC/shell-colors.h"
#ifdef TRI_ENABLE_SYSLOG
#define SYSLOG_NAMES
@ -58,6 +59,12 @@
/// @{
////////////////////////////////////////////////////////////////////////////////
typedef enum {
APPENDER_TYPE_FILE,
APPENDER_TYPE_SYSLOG
}
TRI_log_appender_type_e;
////////////////////////////////////////////////////////////////////////////////
/// @brief base structure for log appenders
////////////////////////////////////////////////////////////////////////////////
@ -66,9 +73,12 @@ typedef struct TRI_log_appender_s {
void (*log) (struct TRI_log_appender_s*, TRI_log_level_e, TRI_log_severity_e, char const* msg, size_t length);
void (*reopen) (struct TRI_log_appender_s*);
void (*close) (struct TRI_log_appender_s*);
char* _contentFilter; // an optional content filter for log messages
TRI_log_severity_e _severityFilter; // appender will care only about message with a specific severity. set to TRI_LOG_SEVERITY_UNKNOWN to catch all
bool _consume; // whether or not the appender will consume the message (true) or let it through to other appenders (false)
char* (*details) (struct TRI_log_appender_s*);
char* _contentFilter; // an optional content filter for log messages
TRI_log_severity_e _severityFilter; // appender will care only about message with a specific severity. set to TRI_LOG_SEVERITY_UNKNOWN to catch all
TRI_log_appender_type_e _type;
bool _consume; // whether or not the appender will consume the message (true) or let it through to other appenders (false)
}
TRI_log_appender_t;
@ -594,6 +604,7 @@ static bool WriteStderr (char const* line, ssize_t len) {
// if write() fails, we do not care
n = TRI_WRITE(STDERR_FILENO, "\n", 1);
if (n <= 0) {
return false;
}
@ -1466,6 +1477,35 @@ static void LogAppenderFile_Log (TRI_log_appender_t* appender,
if (fd < 0) {
return;
}
if (level == TRI_LOG_LEVEL_FATAL) {
// a fatal error. always print this on stderr, too.
size_t i;
fprintf(stderr, TRI_SHELL_COLOR_RED "%s" TRI_SHELL_COLOR_RESET "\n", msg);
// this function is already called when the appenders lock is held
// no need to lock it again
for (i = 0; i < Appenders._length; ++i) {
TRI_log_appender_t* a;
char* details;
a = Appenders._buffer[i];
details = a->details(appender);
if (details != NULL) {
fprintf(stderr, "%s\n", details);
TRI_Free(TRI_CORE_MEM_ZONE, details);
}
}
if (self->_filename == NULL &&
(fd == STDOUT_FILENO || fd == STDERR_FILENO)) {
// the logfile is either stdout or stderr. no need to print the message again
return;
}
}
escaped = TRI_EscapeControlsCString(TRI_UNKNOWN_MEM_ZONE, msg, length, &escapedLength, true);
@ -1551,6 +1591,28 @@ static void LogAppenderFile_Close (TRI_log_appender_t* appender) {
TRI_Free(TRI_CORE_MEM_ZONE, self);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief provide details about the logfile appender
////////////////////////////////////////////////////////////////////////////////
static char* LogAppenderFile_Details (TRI_log_appender_t* appender) {
log_appender_file_t* self;
char buffer[1024];
self = (log_appender_file_t*) appender;
if (self->_filename != NULL &&
self->_fd != STDOUT_FILENO &&
self->_fd != STDERR_FILENO) {
snprintf(buffer, sizeof(buffer), "More error details may be provided in the logfile '%s'", self->_filename);
return TRI_DuplicateStringZ(TRI_CORE_MEM_ZONE, buffer);
}
return NULL;
}
////////////////////////////////////////////////////////////////////////////////
/// @}
////////////////////////////////////////////////////////////////////////////////
@ -1588,6 +1650,7 @@ TRI_log_appender_t* TRI_CreateLogAppenderFile (char const* filename,
return NULL;
}
appender->base._type = APPENDER_TYPE_FILE;
appender->base._contentFilter = NULL;
appender->base._severityFilter = severityFilter;
appender->base._consume = consume;
@ -1604,13 +1667,13 @@ TRI_log_appender_t* TRI_CreateLogAppenderFile (char const* filename,
// logging to stdout
if (TRI_EqualString(filename, "+")) {
appender->_filename = NULL;
appender->_fd = 1;
appender->_fd = STDOUT_FILENO;
}
// logging to stderr
else if (TRI_EqualString(filename, "-")) {
appender->_filename = NULL;
appender->_fd = 2;
appender->_fd = STDERR_FILENO;
}
// logging to file
@ -1635,9 +1698,10 @@ TRI_log_appender_t* TRI_CreateLogAppenderFile (char const* filename,
}
// set methods
appender->base.log = LogAppenderFile_Log;
appender->base.reopen = LogAppenderFile_Reopen;
appender->base.close = LogAppenderFile_Close;
appender->base.log = LogAppenderFile_Log;
appender->base.reopen = LogAppenderFile_Reopen;
appender->base.close = LogAppenderFile_Close;
appender->base.details = LogAppenderFile_Details;
// create lock
TRI_InitSpin(&appender->_lock);
@ -1775,6 +1839,18 @@ static void LogAppenderSyslog_Close (TRI_log_appender_t* appender) {
#endif
////////////////////////////////////////////////////////////////////////////////
/// @brief provide details about the syslog appender
////////////////////////////////////////////////////////////////////////////////
#ifdef TRI_ENABLE_SYSLOG
static char* LogAppenderSyslog_Details (TRI_log_appender_t* appender) {
return TRI_DuplicateStringZ(TRI_CORE_MEM_ZONE, "More error details may be provided in the syslog");
}
#endif
////////////////////////////////////////////////////////////////////////////////
/// @}
////////////////////////////////////////////////////////////////////////////////
@ -1817,6 +1893,7 @@ TRI_log_appender_t* TRI_CreateLogAppenderSyslog (char const* name,
return NULL;
}
appender->base._type = APPENDER_TYPE_SYSLOG;
appender->base._contentFilter = NULL;
appender->base._severityFilter = severityFilter;
appender->base._consume = consume;
@ -1825,6 +1902,7 @@ TRI_log_appender_t* TRI_CreateLogAppenderSyslog (char const* name,
appender->base.log = LogAppenderSyslog_Log;
appender->base.reopen = LogAppenderSyslog_Reopen;
appender->base.close = LogAppenderSyslog_Close;
appender->base.details = LogAppenderSyslog_Details;
if (contentFilter != NULL) {
if (NULL == (appender->base._contentFilter = TRI_DuplicateStringZ(TRI_CORE_MEM_ZONE, contentFilter))) {
@ -1956,6 +2034,7 @@ bool TRI_ShutdownLogging (bool clearBuffers) {
TRI_SignalCondition(&LogCondition);
TRI_UnlockCondition(&LogCondition);
// ignore all errors here as we cannot log them anywhere...
TRI_JoinThread(&LoggingThread);
TRI_DestroyMutex(&LogMessageQueueLock);
TRI_DestroyVector(&LogMessageQueue);

View File

@ -197,24 +197,24 @@ bool TRI_StartThread (TRI_thread_t* thread,
/// @brief tries to stop a thread
////////////////////////////////////////////////////////////////////////////////
void TRI_StopThread (TRI_thread_t* thread) {
pthread_cancel(*thread);
int TRI_StopThread (TRI_thread_t* thread) {
return pthread_cancel(*thread);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief detaches a thread
////////////////////////////////////////////////////////////////////////////////
void TRI_DetachThread (TRI_thread_t* thread) {
pthread_detach(*thread);
int TRI_DetachThread (TRI_thread_t* thread) {
return pthread_detach(*thread);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief waits for a thread to finish
////////////////////////////////////////////////////////////////////////////////
void TRI_JoinThread (TRI_thread_t* thread) {
pthread_join(*thread, 0);
int TRI_JoinThread (TRI_thread_t* thread) {
return pthread_join(*thread, 0);
}
////////////////////////////////////////////////////////////////////////////////

View File

@ -74,7 +74,7 @@ thread_data_t;
static DWORD __stdcall ThreadStarter (void* data) {
thread_data_t* d;
d = data;
d = (thread_data_t*) data;
d->starter(d->_data);
TRI_Free(TRI_CORE_MEM_ZONE, d);
@ -148,7 +148,11 @@ bool TRI_StartThread (TRI_thread_t* thread, const char* name, void (*starter)(v
DWORD threadId;
thread_data_t* d;
d = TRI_Allocate(TRI_CORE_MEM_ZONE, sizeof(thread_data_t), false);
d = (thread_data_t*) TRI_Allocate(TRI_CORE_MEM_ZONE, sizeof(thread_data_t), false);
if (d == NULL) {
return false;
}
d->starter = starter;
d->_data = data;
@ -170,35 +174,61 @@ bool TRI_StartThread (TRI_thread_t* thread, const char* name, void (*starter)(v
return true;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief attempts to stop/terminate a thread
////////////////////////////////////////////////////////////////////////////////
void TRI_StopThread(TRI_thread_t* thread) {
TerminateThread(thread,0);
int TRI_StopThread (TRI_thread_t* thread) {
if (TerminateThread(thread, 0) == 0) {
DWORD result = GetLastError();
LOG_ERROR("threads-win32.c:TRI_StopThread:could not stop thread -->%d",result);
return TRI_ERROR_INTERNAL;
}
return TRI_ERROR_NO_ERROR;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief detaches a thread
////////////////////////////////////////////////////////////////////////////////
void TRI_DetachThread(TRI_thread_t* thread) {
int TRI_DetachThread (TRI_thread_t* thread) {
// TODO: no native implementation
return TRI_ERROR_NO_ERROR;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief waits for a thread to finish
////////////////////////////////////////////////////////////////////////////////
void TRI_JoinThread (TRI_thread_t* thread) {
WaitForSingleObject(*thread, INFINITE);
int TRI_JoinThread (TRI_thread_t* thread) {
DWORD result = WaitForSingleObject(*thread, INFINITE);
switch (result) {
case WAIT_ABANDONED: {
LOG_FATAL_AND_EXIT("threads-win32.c:TRI_JoinThread:could not join thread --> WAIT_ABANDONED");
}
case WAIT_OBJECT_0: {
// everything ok
break;
}
case WAIT_TIMEOUT: {
LOG_FATAL_AND_EXIT("threads-win32.c:TRI_JoinThread:could not joint thread --> WAIT_TIMEOUT");
}
case WAIT_FAILED: {
result = GetLastError();
LOG_FATAL_AND_EXIT("threads-win32.c:TRI_JoinThread:could not join thread --> WAIT_FAILED - reason -->%d",result);
}
}
return TRI_ERROR_NO_ERROR;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sends a signal to a thread
////////////////////////////////////////////////////////////////////////////////
@ -208,7 +238,6 @@ bool TRI_SignalThread (TRI_thread_t* thread, int signum) {
return false;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief checks if this thread is the thread passed as a parameter
////////////////////////////////////////////////////////////////////////////////

View File

@ -104,31 +104,31 @@ TRI_tid_t TRI_CurrentThreadId (void);
/// @brief starts a thread
////////////////////////////////////////////////////////////////////////////////
bool TRI_StartThread (TRI_thread_t* thread, char const* name, void (*starter)(void*), void* data);
bool TRI_StartThread (TRI_thread_t*, char const*, void (*starter)(void*), void* data);
////////////////////////////////////////////////////////////////////////////////
/// @brief tries to stop a thread
////////////////////////////////////////////////////////////////////////////////
void TRI_StopThread (TRI_thread_t* thread);
int TRI_StopThread (TRI_thread_t*);
////////////////////////////////////////////////////////////////////////////////
/// @brief detaches a thread
////////////////////////////////////////////////////////////////////////////////
void TRI_DetachThread (TRI_thread_t* thread);
int TRI_DetachThread (TRI_thread_t*);
////////////////////////////////////////////////////////////////////////////////
/// @brief waits for a thread to finish
////////////////////////////////////////////////////////////////////////////////
void TRI_JoinThread (TRI_thread_t*);
int TRI_JoinThread (TRI_thread_t*);
////////////////////////////////////////////////////////////////////////////////
/// @brief sends a signal to the thread
////////////////////////////////////////////////////////////////////////////////
bool TRI_SignalThread (TRI_thread_t* thread, int signal);
bool TRI_SignalThread (TRI_thread_t*, int);
////////////////////////////////////////////////////////////////////////////////
/// @brief checks if we are the thread

View File

@ -39,7 +39,7 @@
#include "HttpServer/HttpServer.h"
#include "HttpServer/HttpsServer.h"
#include "Logger/Logger.h"
#include "Rest/OperationMode.h"
#include "Rest/Version.h"
#include "Scheduler/ApplicationScheduler.h"
using namespace triagens::basics;
@ -101,6 +101,7 @@ ApplicationEndpointServer::ApplicationEndpointServer (ApplicationServer* applica
_httpPort(),
_endpoints(),
_keepAliveTimeout(300.0),
_defaultApiCompatibility(0),
_allowMethodOverride(false),
_backlogSize(10),
_httpsKeyfile(),
@ -111,6 +112,8 @@ ApplicationEndpointServer::ApplicationEndpointServer (ApplicationServer* applica
_sslCipherList(""),
_sslContext(0),
_rctx() {
_defaultApiCompatibility = Version::getNumericServerVersion();
}
////////////////////////////////////////////////////////////////////////////////
@ -218,8 +221,9 @@ void ApplicationEndpointServer::setupOptions (map<string, ProgramOptionsDescript
options[ApplicationServer::OPTIONS_SERVER + ":help-admin"]
("server.allow-method-override", &_allowMethodOverride, "allow HTTP method override using special headers")
("server.keep-alive-timeout", &_keepAliveTimeout, "keep-alive timeout in seconds")
("server.backlog-size", &_backlogSize, "listen backlog size")
("server.default-api-compatibility", &_defaultApiCompatibility, "default API compatibility version (e.g. 10300)")
("server.keep-alive-timeout", &_keepAliveTimeout, "keep-alive timeout in seconds")
;
options[ApplicationServer::OPTIONS_SSL]
@ -265,6 +269,9 @@ bool ApplicationEndpointServer::parsePhase2 (ProgramOptions& options) {
}
}
if (_defaultApiCompatibility < 10300L) {
LOGGER_FATAL_AND_EXIT("invalid value for --server.default-api-compatibility. minimum allowed value is 10300");
}
// and return
return true;
@ -545,9 +552,12 @@ bool ApplicationEndpointServer::prepare () {
_endpointList.dump();
_handlerFactory = new HttpHandlerFactory(_authenticationRealm,
_defaultApiCompatibility,
_allowMethodOverride,
_setContext,
_contextData);
LOGGER_INFO("using default API compatibility: " << _defaultApiCompatibility);
return true;
}

View File

@ -404,6 +404,30 @@ namespace triagens {
double _keepAliveTimeout;
////////////////////////////////////////////////////////////////////////////////
/// @brief default API compatibility
///
/// @CMDOPT{\--server.default-api-compatibility}
///
/// This option can be used to determine the API compatibility of the ArangoDB
/// server. It expects an ArangoDB version number as an integer, calculated as
/// follows:
///
/// `10000 * major + 100 * minor` (example: `10400` for ArangoDB 1.4)
///
/// The value of this option will have an influence on some API return values
/// when the HTTP client used does not send any compatibility information.
///
/// In most cases it will be sufficient to not set this option explicitly but to
/// keep the default value. However, in case an "old" ArangoDB client is used
/// that does not send any compatibility information and that cannot handle the
/// responses of the current version of ArangoDB, it might be reasonable to set
/// the option to an old version number to improve compatibility with older
/// clients.
////////////////////////////////////////////////////////////////////////////////
int32_t _defaultApiCompatibility;
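
The encoding documented above is straightforward to compute; a minimal sketch (the helper name is hypothetical and not part of the codebase):

    // minimal sketch of the version encoding described above
    function toNumericVersion (major, minor) {
      return 10000 * major + 100 * minor;
    }

    toNumericVersion(1, 3);   // 10300 -- the minimum allowed value
    toNumericVersion(1, 4);   // 10400 -- the default for ArangoDB 1.4
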
////////////////////////////////////////////////////////////////////////////////
/// @brief allow HTTP method override via custom headers?
///

View File

@ -56,10 +56,12 @@ using namespace std;
////////////////////////////////////////////////////////////////////////////////
HttpHandlerFactory::HttpHandlerFactory (std::string const& authenticationRealm,
int32_t minCompatibility,
bool allowMethodOverride,
context_fptr setContext,
void* setContextData)
: _authenticationRealm(authenticationRealm),
_minCompatibility(minCompatibility),
_allowMethodOverride(allowMethodOverride),
_setContext(setContext),
_setContextData(setContextData),
@ -72,6 +74,7 @@ HttpHandlerFactory::HttpHandlerFactory (std::string const& authenticationRealm,
HttpHandlerFactory::HttpHandlerFactory (HttpHandlerFactory const& that)
: _authenticationRealm(that._authenticationRealm),
_minCompatibility(that._minCompatibility),
_allowMethodOverride(that._allowMethodOverride),
_setContext(that._setContext),
_setContextData(that._setContextData),
@ -88,6 +91,7 @@ HttpHandlerFactory::HttpHandlerFactory (HttpHandlerFactory const& that)
HttpHandlerFactory& HttpHandlerFactory::operator= (HttpHandlerFactory const& that) {
if (this != &that) {
_authenticationRealm = that._authenticationRealm;
_minCompatibility = that._minCompatibility;
_allowMethodOverride = that._allowMethodOverride;
_setContext = that._setContext;
_setContextData = that._setContextData;
@ -184,7 +188,7 @@ HttpRequest* HttpHandlerFactory::createRequest (ConnectionInfo const& info,
}
#endif
HttpRequest* request = new HttpRequest(info, ptr, length, _allowMethodOverride);
HttpRequest* request = new HttpRequest(info, ptr, length, _minCompatibility, _allowMethodOverride);
if (request != 0) {
setRequestContext(request);

View File

@ -126,6 +126,7 @@ namespace triagens {
////////////////////////////////////////////////////////////////////////////////
HttpHandlerFactory (std::string const&,
int32_t,
bool,
context_fptr,
void*);
@ -248,6 +249,14 @@ namespace triagens {
string _authenticationRealm;
////////////////////////////////////////////////////////////////////////////////
/// @brief minimum compatibility
/// the value is an ArangoDB version number in the following format:
/// 10000 * major + 100 * minor (e.g. 10400 for ArangoDB 1.4)
////////////////////////////////////////////////////////////////////////////////
int32_t _minCompatibility;
////////////////////////////////////////////////////////////////////////////////
/// @brief allow overriding HTTP request method with custom headers
////////////////////////////////////////////////////////////////////////////////

View File

@ -188,7 +188,28 @@ static int forkProcess (string const& workingDirectory, string& current) {
}
}
// DO NOT close the standard file descriptors
// we're a daemon so there won't be a terminal attached
// close the standard file descriptors and re-open them mapped to /dev/null
int fd = open("/dev/null", O_RDWR | O_CREAT, 0644);
if (fd < 0) {
LOG_FATAL_AND_EXIT("cannot open /dev/null");
}
if (dup2(fd, STDIN_FILENO) < 0) {
LOG_FATAL_AND_EXIT("cannot re-map stdin to /dev/null");
}
if (dup2(fd, STDOUT_FILENO) < 0) {
LOG_FATAL_AND_EXIT("cannot re-map stdout to /dev/null");
}
if (dup2(fd, STDERR_FILENO) < 0) {
LOG_FATAL_AND_EXIT("cannot re-map stderr to /dev/null");
}
close(fd);
return 0;
}

View File

@ -59,6 +59,7 @@ static char const* EMPTY_STR = "";
HttpRequest::HttpRequest (ConnectionInfo const& info,
char const* header,
size_t length,
int32_t defaultApiCompatibility,
bool allowMethodOverride)
: _requestPath(EMPTY_STR),
_headers(5),
@ -78,6 +79,7 @@ HttpRequest::HttpRequest (ConnectionInfo const& info,
_user(),
_requestContext(0),
_isRequestContextOwner(false),
_defaultApiCompatibility(defaultApiCompatibility),
_allowMethodOverride(allowMethodOverride) {
// copy request - we will destroy/rearrange the content to compute the
@ -92,32 +94,6 @@ HttpRequest::HttpRequest (ConnectionInfo const& info,
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief http request constructor
////////////////////////////////////////////////////////////////////////////////
HttpRequest::HttpRequest ()
: _requestPath(EMPTY_STR),
_headers(1),
_values(1),
_arrayValues(1),
_cookies(1),
_contentLength(0),
_body(0),
_bodySize(0),
_freeables(),
_connectionInfo(),
_type(HTTP_REQUEST_ILLEGAL),
_prefix(),
_suffix(),
_version(HTTP_UNKNOWN),
_databaseName(),
_user(),
_requestContext(0),
_isRequestContextOwner(false),
_allowMethodOverride(false) {
}
////////////////////////////////////////////////////////////////////////////////
/// @brief destructor
////////////////////////////////////////////////////////////////////////////////
@ -524,16 +500,6 @@ int HttpRequest::setBody (char const* newBody,
return TRI_ERROR_NO_ERROR;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the request body as TRI_json_t*
////////////////////////////////////////////////////////////////////////////////
TRI_json_t* HttpRequest::toJson (char** errmsg) {
TRI_json_t* json = TRI_Json2String(TRI_UNKNOWN_MEM_ZONE, body(), errmsg);
return json;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief sets a header field
////////////////////////////////////////////////////////////////////////////////
@ -568,6 +534,82 @@ void HttpRequest::setHeader (char const* key,
}
}
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the request body as TRI_json_t*
////////////////////////////////////////////////////////////////////////////////
TRI_json_t* HttpRequest::toJson (char** errmsg) {
TRI_json_t* json = TRI_Json2String(TRI_UNKNOWN_MEM_ZONE, body(), errmsg);
return json;
}
////////////////////////////////////////////////////////////////////////////////
/// @brief determine version compatibility
////////////////////////////////////////////////////////////////////////////////
int32_t HttpRequest::compatibility () {
int32_t result = _defaultApiCompatibility;
bool found;
char const* apiVersion = header("x-arango-version", found);
if (! found) {
return result;
}
static const int32_t minCompatibility = 10300L;
char const* p = apiVersion;
// read major version
while (*p >= '0' && *p <= '9') {
++p;
}
if ((*p == '.' || *p == '-' || *p == '\0') && p != apiVersion) {
int32_t major = TRI_Int32String2(apiVersion, (p - apiVersion));
if (major >= 10000) {
// version specified as "10400"
if (*p == '\0') {
result = major;
if (result < minCompatibility) {
result = minCompatibility;
}
else {
// set patch-level to 0
result /= 100L;
result *= 100L;
}
return result;
}
}
apiVersion = ++p;
// read minor version
while (*p >= '0' && *p <= '9') {
++p;
}
if ((*p == '.' || *p == '-' || *p == '\0') && p != apiVersion) {
int32_t minor = TRI_Int32String2(apiVersion, (p - apiVersion));
result = (int32_t) (minor * 100L + major * 10000L);
}
}
if (result < minCompatibility) {
// minimum value
result = minCompatibility;
}
return result;
}
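
The parsing above accepts either an already numeric value (e.g. `10400`) or a dotted version (e.g. `1.4` or `1.4.0-rc1`) in the `x-arango-version` header, and never returns less than 10300. A JavaScript re-statement of these rules, for illustration only (this is not code from the repository):

    // illustration only -- mirrors the C++ rules above, including the 10300 floor
    var MIN_COMPATIBILITY = 10300;

    function parseApiVersion (header, serverDefault) {
      if (header === undefined || header === null) {
        return serverDefault;
      }
      var result;
      if (/^\d{5,}$/.test(header)) {
        // already numeric, e.g. "10400"; strip the patch level
        result = Math.floor(parseInt(header, 10) / 100) * 100;
      }
      else {
        var match = /^(\d+)\.(\d+)/.exec(header);   // e.g. "1.4" or "1.4.0-rc1"
        if (match === null) {
          return serverDefault;
        }
        result = 10000 * parseInt(match[1], 10) + 100 * parseInt(match[2], 10);
      }
      return result < MIN_COMPATIBILITY ? MIN_COMPATIBILITY : result;
    }

    parseApiVersion("1.3", 10400);      // 10300
    parseApiVersion("10400", 10400);    // 10400
    parseApiVersion(undefined, 10400);  // 10400 (server default)
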
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the protocol
////////////////////////////////////////////////////////////////////////////////

View File

@ -97,15 +97,6 @@ namespace triagens {
public:
////////////////////////////////////////////////////////////////////////////////
/// @brief http request constructor
///
/// Constructs a http request given nothing. You can add the values, the header
/// information, and path information afterwards.
////////////////////////////////////////////////////////////////////////////////
HttpRequest ();
////////////////////////////////////////////////////////////////////////////////
/// @brief http request constructor
///
@ -120,6 +111,7 @@ namespace triagens {
HttpRequest (ConnectionInfo const&,
char const*,
size_t,
int32_t,
bool);
////////////////////////////////////////////////////////////////////////////////
@ -134,12 +126,6 @@ namespace triagens {
public:
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the request body as TRI_json_t*
////////////////////////////////////////////////////////////////////////////////
TRI_json_t* toJson (char**);
////////////////////////////////////////////////////////////////////////////////
/// @brief returns the protocol
////////////////////////////////////////////////////////////////////////////////
@ -460,6 +446,18 @@ namespace triagens {
void setHeader (char const* key, size_t keyLength, char const* value);
////////////////////////////////////////////////////////////////////////////////
/// @brief determine version compatibility
////////////////////////////////////////////////////////////////////////////////
int32_t compatibility ();
////////////////////////////////////////////////////////////////////////////////
/// @brief gets the request body as TRI_json_t*
////////////////////////////////////////////////////////////////////////////////
TRI_json_t* toJson (char**);
// -----------------------------------------------------------------------------
// --SECTION-- public static methods
// -----------------------------------------------------------------------------
@ -689,6 +687,14 @@ namespace triagens {
bool _isRequestContextOwner;
////////////////////////////////////////////////////////////////////////////////
/// @brief default API compatibility
/// the value is an ArangoDB version number in the following format:
/// 10000 * major + 100 * minor (e.g. 10400 for ArangoDB 1.4)
////////////////////////////////////////////////////////////////////////////////
int32_t _defaultApiCompatibility;
////////////////////////////////////////////////////////////////////////////////
/// @brief whether or not overriding the HTTP method via custom headers
/// (x-http-method, x-method-override or x-http-method-override) is allowed

View File

@ -32,6 +32,7 @@
#endif
#include "BasicsC/common.h"
#include "BasicsC/conversions.h"
#include "Basics/Common.h"
#include "Basics/Utf8Helper.h"
#include "BasicsC/json.h"
@ -77,6 +78,35 @@ void Version::initialise () {
Values["repository-version"] = getRepositoryVersion();
}
////////////////////////////////////////////////////////////////////////////////
/// @brief get numeric server version
////////////////////////////////////////////////////////////////////////////////
int32_t Version::getNumericServerVersion () {
char const* apiVersion = TRI_VERSION;
char const* p = apiVersion;
// read major version
while (*p >= '0' && *p <= '9') {
++p;
}
assert(*p == '.');
int32_t major = TRI_Int32String2(apiVersion, (p - apiVersion));
apiVersion = ++p;
// read minor version
while (*p >= '0' && *p <= '9') {
++p;
}
assert((*p == '.' || *p == '-' || *p == '\0') && p != apiVersion);
int32_t minor = TRI_Int32String2(apiVersion, (p - apiVersion));
return (int32_t) (minor * 100L + major * 10000L);
}
////////////////////////////////////////////////////////////////////////////////
/// @brief get server version
////////////////////////////////////////////////////////////////////////////////

View File

@ -28,8 +28,7 @@
#ifndef TRIAGENS_REST_VERSION_H
#define TRIAGENS_REST_VERSION_H 1
#include <string>
#include <map>
#include "Basics/Common.h"
// -----------------------------------------------------------------------------
// --SECTION-- forward declarations
@ -88,6 +87,12 @@ namespace triagens {
static void initialise ();
////////////////////////////////////////////////////////////////////////////////
/// @brief get numeric server version
////////////////////////////////////////////////////////////////////////////////
static int32_t getNumericServerVersion ();
////////////////////////////////////////////////////////////////////////////////
/// @brief get server version
////////////////////////////////////////////////////////////////////////////////

View File

@ -299,6 +299,7 @@ namespace triagens {
_writeBuffer.appendText("Connection: Close\r\n");
}
_writeBuffer.appendText("User-Agent: ArangoDB\r\n");
_writeBuffer.appendText("X-Arango-Version: 1.4\r\n");
_writeBuffer.appendText("Accept-Encoding: deflate\r\n");
// do basic authorization

View File

@ -63,6 +63,7 @@ TRI_v8_global_s::TRI_v8_global_s (v8::Isolate* isolate)
BodyKey(),
ClientKey(),
CodeKey(),
CompatibilityKey(),
ContentTypeKey(),
DatabaseKey(),
DoCompactKey(),
@ -126,6 +127,7 @@ TRI_v8_global_s::TRI_v8_global_s (v8::Isolate* isolate)
BodyKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("body"));
ClientKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("client"));
CodeKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("code"));
CompatibilityKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("compatibility"));
ContentTypeKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("contentType"));
CookiesKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("cookies"));
DatabaseKey = v8::Persistent<v8::String>::New(isolate, TRI_V8_SYMBOL("database"));

View File

@ -347,6 +347,12 @@ typedef struct TRI_v8_global_s {
v8::Persistent<v8::String> CodeKey;
////////////////////////////////////////////////////////////////////////////////
/// @brief "compatibility" key name
////////////////////////////////////////////////////////////////////////////////
v8::Persistent<v8::String> CompatibilityKey;
////////////////////////////////////////////////////////////////////////////////
/// @brief "contentType" key name
////////////////////////////////////////////////////////////////////////////////