mirror of https://gitee.com/bigwinds/arangodb
Doc - add duplicate check in the build script (#5897)
This commit is contained in:
parent
0bf38a1c8b
commit
bf32c4e7e1

@ -23,7 +23,7 @@ Solution
EDGES can simply be replaced by a call to the AQL traversal.

#### No options
**No options**

The syntax is slightly different, but the mapping should be simple:
@ -35,14 +35,14 @@ The syntax is slightly different but mapping should be simple:

[..] FOR v, e IN OUTBOUND @startId @@edgeCollection RETURN e
```

#### Using EdgeExamples
**Using EdgeExamples**

Examples have to be transformed into AQL filter statements.
For details on how to do this, read the GRAPH_VERTICES section
in [Migrating GRAPH_* Functions from 2.8 or earlier to 3.0](MigratingGraphFunctionsTo3.md).
Apply these filters on the edge variable `e`.
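For instance, an edge example such as `{label: 'friend'}` becomes a `FILTER` on `e` (a sketch; the attribute name `label` follows the later examples and is illustrative):

```
[..] FOR v, e IN OUTBOUND @startId @@edgeCollection
  FILTER e.label == 'friend'
  RETURN e
```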

#### Option includeVertices
**Option includeVertices**

In order to include the vertices, you just use the vertex variable `v` as well:
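A sketch of the resulting query (same traversal as above):

```
[..] FOR v, e IN OUTBOUND @startId @@edgeCollection RETURN {edge: e, vertex: v}
```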
@ -62,7 +62,7 @@ The NEIGHBORS is a breadth-first-search on the graph with a global unique check

Due to syntax changes, the vertex collection of the start vertex no longer has to be given.
You may have to adjust bindParameters for this query.

#### No options
**No options**

The default options just returned the neighbors' `_id` values.
@ -76,7 +76,7 @@ The default options did just return the neighbors `_id` value.

NOTE: The direction cannot be given as a bindParameter any more; it has to be hard-coded in the query.

#### Using edgeExamples
**Using edgeExamples**

Examples have to be transformed into AQL filter statements.
For details on how to do this, read the GRAPH_VERTICES section
@ -109,7 +109,7 @@ FILTER e.label == 'friend'

RETURN DISTINCT n._id
```

#### Option includeData
**Option includeData**

If you want to include the data, simply return the complete document instead of only the `_id` value.
@ -126,7 +126,7 @@ If you want to include the data simply return the complete document instead of o

This function computes all paths of the entire edge collection (with a given minDepth and maxDepth). As you can imagine, this feature is extremely expensive and should never be used.
However, paths can again be replaced by an AQL traversal.

#### No options
**No options**
By default, paths of length 0 to 10 are returned, and circles are not followed.

```
@ -138,7 +138,7 @@ FOR start IN @@vertexCollection

FOR v, e, p IN 0..10 OUTBOUND start @@edgeCollection RETURN {source: start, destination: v, edges: p.edges, vertices: p.vertices}
```

#### followCycles
**followCycles**

If this option is set, we have to modify the options of the traversal by modifying the `uniqueEdges` property:
@ -151,7 +151,7 @@ FOR start IN @@vertexCollection

FOR v, e, p IN 0..10 OUTBOUND start @@edgeCollection OPTIONS {uniqueEdges: 'none'} RETURN {source: start, destination: v, edges: p.edges, vertices: p.vertices}
```

#### minDepth and maxDepth
**minDepth and maxDepth**

If these options are set, we have to give the parameters directly before the direction.
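With hypothetical bounds 2 and 4, the traversal from above becomes:

```
FOR start IN @@vertexCollection
  FOR v, e, p IN 2..4 OUTBOUND start @@edgeCollection RETURN {source: start, destination: v, edges: p.edges, vertices: p.vertices}
```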
@ -24,16 +24,16 @@ Graph functions covered in this recipe:

Solution 1: Quick and Dirty (not recommended)
---------------------------------------------

### When to use this solution
**When to use this solution**

I am not willing to invest a lot of time into the upgrade process and i am
I am not willing to invest a lot of time into the upgrade process and I am
willing to surrender some performance in favor of less effort.
Some constellations may not work with this solution due to the nature of
user-defined functions.
Especially check for AQL queries that do both modifications
and `GRAPH_*` functions.

### Registering user-defined functions
**Registering user-defined functions**

This step has to be executed once on ArangoDB for every database we are using.
@ -46,13 +46,13 @@ graphs._registerCompatibilityFunctions();

This registers all old `GRAPH_*` functions as user-defined functions again, with the prefix `arangodb::`.
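In arangosh this amounts to the following (a sketch based on the 3.0 general-graph module; the hunk above shows the `_registerCompatibilityFunctions` call):

```
var graphs = require("@arangodb/general-graph");
graphs._registerCompatibilityFunctions();
```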
### Modify the application code
**Modify the application code**

Next we have to go through our application code and replace all calls to `GRAPH_*` by `arangodb::GRAPH_*`.
Now run a testrun of our application and check if it worked.
Perform a test run of the application and check if it worked.
If it worked, we are ready to go.

### Important Information
**Important Information**

The user-defined functions will call translated subqueries (as described in Solution 2).
The optimizer does not know anything about these subqueries beforehand and cannot optimize the whole plan.
@ -62,14 +62,14 @@ a "really" translated query may work while the user-defined function work around

Solution 2: Translating the queries (recommended)
-------------------------------------------------

### When to use this solution
**When to use this solution**

I am willing to invest some time in my queries in order to get
maximum performance, full query optimization and better
control of my queries, without being forced into the old layout
any more.

### Before you start
**Before you start**

If you are using `vertexExamples` which are not only `_id` strings, do not skip
the GRAPH_VERTICES section, because it will describe how to translate them to
@ -90,9 +90,9 @@ FOR start GRAPH_VERTICES(@graph, @myExample)

For all functions other than GRAPH_VERTICES, we will only explain the transformation for a single input document's `_id`.

### Options used everywhere
**Options used everywhere**

#### Option edgeCollectionRestriction
**Option edgeCollectionRestriction**

In order to use edge collection restrictions, we just use the feature that the traverser
can walk over a list of edge collections directly. So the edgeCollectionRestrictions
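The restriction then becomes an explicit, comma-separated list of edge collections in the traversal (collection names are illustrative):

```
[..] FOR v, e IN OUTBOUND @startId edges1, edges2 RETURN e
```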
@ -108,7 +108,7 @@ just form this list (exampleGraphEdges):

Note: The `@graphName` bindParameter is not used anymore and probably has to be removed from the query.

#### Option includeData
**Option includeData**

If we use the option includeData, we simply return the object directly instead of only the `_id`.
@ -122,7 +122,7 @@ Example GRAPH_EDGES:

[..] FOR v, e IN ANY @startId GRAPH @graphName RETURN DISTINCT e
```

#### Option direction
**Option direction**

The direction has to be placed before the start id.
Note: the direction has to be written as a word; it cannot be handed in via a bindParameter
@ -136,7 +136,7 @@ anymore:

[..] FOR v, e IN INBOUND @startId GRAPH @graphName RETURN DISTINCT e._id
```

#### Options minDepth, maxDepth
**Options minDepth, maxDepth**

If we use the options minDepth and maxDepth (both default to 1 if not set), we can simply
put them in front of the direction part of the traversal statement.
@ -151,7 +151,7 @@ Example GRAPH_EDGES:

[..] FOR v, e IN 2..4 ANY @startId GRAPH @graphName RETURN DISTINCT e._id
```

#### Option maxIteration
**Option maxIteration**

The option `maxIterations` is removed without replacement.
Your queries are now bound by main memory, not by an arbitrary number of iterations.
@ -165,7 +165,7 @@ There we have three possibilities:

2. The example is `null` or `{}`.
3. The example is a non-empty object or an array.

#### Example is '_id' string
**Example is '_id' string**

This is the easiest replacement. In this case we simply replace the function with a call to `DOCUMENT`:
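A sketch, following the OLD/NEW style of the other examples:

```
// OLD: GRAPH_VERTICES(@graphName, @startId)
RETURN DOCUMENT(@startId)
```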
@ -181,7 +181,7 @@ NOTE: The `@graphName` is not required anymore, we may have to adjust bindParame

The AQL graph features can work with an id directly; there is no need to call `DOCUMENT` first if we just need it to find a starting point.
#### Example is `null` or the empty object
**Example is `null` or the empty object**

This case means we use all documents from the graph.
Here we first have to know the vertex collections of the graph.
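Assuming the graph has the (illustrative) vertex collections `verticesA` and `verticesB`, iterating over all of them can be sketched with the `UNION` mentioned below:

```
FOR start IN UNION(
  (FOR v IN verticesA RETURN v),
  (FOR v IN verticesB RETURN v)
)
  [..]
```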
@ -225,7 +225,7 @@ collections are actually relevant as this `UNION` is a rather expensive operatio

If we use the option `vertexCollectionRestriction` in the original query, the `UNION` has to be formed
by the collections in this restriction instead of ALL collections.

#### Example is a non-empty object
**Example is a non-empty object**

First we follow the instructions for the empty object above.
In this section we will just focus on a single collection `vertices`; the UNION for multiple collections
@ -248,7 +248,7 @@ Example:

[..]
```

#### Example is an array
**Example is an array**

The transformation is almost identical to that for a single non-empty object.
For each element in the array we create the filter conditions and then we
@ -270,7 +270,7 @@ For each element in the array we create the filter conditions and than we

GRAPH_EDGES can simply be replaced by a call to the AQL traversal.

#### No options
**No options**

The default options used the direction `ANY` and returned a distinct result of the edges.
Also, they only returned the edges' `_id` values.
@ -283,7 +283,7 @@ Also it did just return the edges `_id` value.

[..] FOR v, e IN ANY @startId GRAPH @graphName RETURN DISTINCT e._id
```

#### Option edgeExamples
**Option edgeExamples**

See `GRAPH_VERTICES` on how to transform examples to AQL FILTER. Apply the filter on the edge variable `e`.
@ -291,7 +291,7 @@ See `GRAPH_VERTICES` on how to transform examples to AQL FILTER. Apply the filte

GRAPH_NEIGHBORS is a breadth-first search on the graph with a global uniqueness check for vertices, so we can replace it by an AQL traversal with these options.

#### No options
**No options**

The default options used the direction `ANY` and returned a distinct result of the neighbors.
Also, they only returned the neighbors' `_id` values.
@ -304,16 +304,16 @@ Also it did just return the neighbors `_id` value.

[..] FOR n IN ANY @startId GRAPH @graphName OPTIONS {bfs: true, uniqueVertices: 'global'} RETURN n
```

#### Option neighborExamples
**Option neighborExamples**

See `GRAPH_VERTICES` on how to transform examples to AQL FILTER. Apply the filter on the neighbor variable `n`.

#### Option edgeExamples
**Option edgeExamples**

See `GRAPH_VERTICES` on how to transform examples to AQL FILTER. Apply the filter on the edge variable `e`.

However, this is a bit more complicated as it interferes with the global uniqueness check.
For edgeExamples it is sufficent when any edge pointing to the neighbor matches the filter. Using `{uniqueVertices: 'global'}` first picks any edge randomly. Then it checks against this edge only.
For edgeExamples it is sufficient when any edge pointing to the neighbor matches the filter. Using `{uniqueVertices: 'global'}` first picks any edge randomly. Then it checks against this edge only.
If we know there are no vertex pairs with multiple edges between them, we can use the simple variant, which is safe:

```
@ -334,7 +334,7 @@ If there may be multiple edges between the same pair of vertices we have to make

[..] FOR n, e IN ANY @startId GRAPH @graphName OPTIONS {bfs: true} FILTER e.label == 'friend' RETURN DISTINCT n._id
```

#### Option vertexCollectionRestriction
**Option vertexCollectionRestriction**

If we use the vertexCollectionRestriction, we have to post-filter the neighbors based on their collection. Therefore we can make use of the function `IS_SAME_COLLECTION`:
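A sketch, restricting the neighbors to the collection `vertices`:

```
[..] FOR n IN ANY @startId GRAPH @graphName OPTIONS {bfs: true, uniqueVertices: 'global'}
  FILTER IS_SAME_COLLECTION('vertices', n)
  RETURN n._id
```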
@ -398,7 +398,7 @@ This function computes all paths of the entire graph (with a given minDepth and

However, paths can again be replaced by an AQL traversal.
Assume we only have one vertex collection `vertices` again.

#### No options
**No options**
By default, paths of length 0 to 10 are returned, and circles are not followed.

```
@ -410,7 +410,7 @@ FOR start IN vertices

FOR v, e, p IN 0..10 OUTBOUND start GRAPH 'graph' RETURN {source: start, destination: v, edges: p.edges, vertices: p.vertices}
```

#### followCycles
**followCycles**

If this option is set, we have to modify the options of the traversal by modifying the `uniqueEdges` property:
@ -428,7 +428,7 @@ FOR v, e, p IN 0..10 OUTBOUND start GRAPH 'graph' OPTIONS {uniqueEdges: 'none'}

This feature involves several full-collection scans and therefore is extremely expensive.
If you really need it, you can transform it with the help of `ATTRIBUTES`, `KEEP` and `ZIP`.

#### Start with single _id
**Start with single _id**

```
// OLD
@ -445,7 +445,7 @@ FILTER LENGTH(shared) > 1 // Return them only if they share an attribute

RETURN ZIP([left._id], [KEEP(right, shared)]) // Build the result
```

#### Start with vertexExamples
**Start with vertexExamples**

Again we assume we only have a single collection `vertices`.
We have to transform the examples into filters. Iterate
@ -480,7 +480,7 @@ FOR left IN vertices

A shortest path computation is now done via the new SHORTEST_PATH AQL statement.

#### No options
**No options**

```
// OLD
@ -500,7 +500,7 @@ RETURN { // We rebuild the old format

}
```

#### Options weight and defaultWeight
**Options weight and defaultWeight**

The new AQL SHORTEST_PATH offers the options `weightAttribute` and `defaultWeight`.
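A sketch of a weighted shortest path (the attribute name `distance` is illustrative):

```
FOR v, e IN OUTBOUND SHORTEST_PATH @startId TO @targetId GRAPH @graphName
  OPTIONS {weightAttribute: 'distance', defaultWeight: 1}
  RETURN v
```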
@ -767,7 +767,7 @@ Path data (shortened):

The first and second vertex of the nth path are connected by the first edge
(`p[n].vertices[0]` ⟝ `p[n].edges[0]` → `p[n].vertices[1]`) and so on. This
structure might actually be more convenient to process compared to a tree-like
structure. Note that the edge documents are also included, in constrast to the
structure. Note that the edge documents are also included, in contrast to the
removed graph traversal function.

Contact us via our social channels if you need further help.
@ -10,7 +10,7 @@ Solution

--------

ArangoDB, like many other open-source projects nowadays, is standing on the shoulders of giants.
This gives us a solid foundation to bring you a uniq feature set, but it introduces a lot of
This gives us a solid foundation to bring you a unique feature set, but it introduces a lot of
dependencies that need to be in place in order to compile ArangoDB.

Since build infrastructures are very different depending on the target OS, choose your target
Binary file not shown.
Before Width: | Height: | Size: 79 KiB |
Binary file not shown.
Before Width: | Height: | Size: 124 KiB |
Binary file not shown.
Before Width: | Height: | Size: 56 KiB |
@ -8,7 +8,7 @@ The `ArangoTemplate` class is the default implementation of the operations inter

# Repositories

## Introduction
## Introduction to repositories

Spring Data Commons provides a composable repository infrastructure which Spring Data ArangoDB is built on. It allows for interface-based composition of repositories consisting of provided default implementations for certain interfaces (like `CrudRepository`) and custom implementations for other methods.
|

# Mapping

## Introduction
## Introduction to mapping

In this section we will describe the features and conventions for mapping Java objects to documents and how to override those conventions with annotation-based mapping metadata.
@ -4,6 +4,8 @@ Durability Configuration

Global Configuration
--------------------

**Pre-setting on database creation**

There are global configuration values for durability, which can be adjusted by
specifying the following configuration options:
@ -22,32 +24,7 @@ synchronize data in RocksDB's write-ahead logs to disk. Automatic syncs will

only be performed for not-yet synchronized data, and only for operations that
have been executed without the *waitForSync* attribute.

Per-collection configuration
----------------------------

You can also configure the durability behavior on a per-collection basis.
Use the ArangoDB shell to change these properties.

@startDocuBlock collectionProperties

Per-operation configuration
---------------------------

Many data-modification operations and also ArangoDB's transactions allow you to specify
a *waitForSync* attribute which, when set, ensures the operation data has been
synchronized to disk when the operation returns.

Disk-Usage Configuration (MMFiles engine)
-----------------------------------------

The amount of disk space used by the MMFiles engine is determined by a few configuration
options.

Global Configuration
--------------------
**Adjusting at run-time**

The total amount of disk storage required by the MMFiles engine is determined by the size of
the write-ahead logfiles plus the sizes of the collection journals and datafiles.
@ -79,8 +56,36 @@ are is determined by the following global configuration value:

@startDocuBlock databaseMaximalJournalSize

Per-collection configuration
----------------------------
**Pre-setting during collection creation**

You can also configure the durability behavior on a per-collection basis.
Use the ArangoDB shell to change these properties.

@startDocuBlock collectionProperties

**Adjusting at run-time**

The journal size can also be adjusted on a per-collection level using the collection's
*properties* method.

Per-operation configuration
---------------------------

Many data-modification operations and also ArangoDB's transactions allow you to specify
a *waitForSync* attribute which, when set, ensures the operation data has been
synchronized to disk when the operation returns.

Disk-Usage Configuration (MMFiles engine)
-----------------------------------------

The amount of disk space used by the MMFiles engine is determined by a few configuration
options.
@ -163,8 +163,8 @@ console.timeEnd

Stops a timer created by a call to *time* and logs the time elapsed.

console.timeEnd
---------------
console.trace
-------------

`console.trace()`
@ -20,7 +20,7 @@ The following methods exist on the collection object (returned by *db.name*):

*Indexes*

* [collection.dropIndex(index)](../../Indexing/WorkingWithIndexes.md#dropping-an-index)
* [collection.dropIndex(index)](../../Indexing/WorkingWithIndexes.md#dropping-an-index-via-a-collection-handle)
* [collection.ensureIndex(description)](../../Indexing/WorkingWithIndexes.md#creating-an-index)
* [collection.getIndexes(name)](../../Indexing/WorkingWithIndexes.md#listing-all-indexes-of-a-collection)
* [collection.index(index)](../../Indexing/WorkingWithIndexes.md#index-identifiers-and-handles)
@ -18,7 +18,7 @@ The following methods exists on the *_db* object:

*Indexes*

* [db._index(index)](../../Indexing/WorkingWithIndexes.md#fetching-an-index-by-handle)
* [db._dropIndex(index)](../../Indexing/WorkingWithIndexes.md#dropping-an-index)
* [db._dropIndex(index)](../../Indexing/WorkingWithIndexes.md#dropping-an-index-via-a-database-handle)

*Properties*
@ -22,7 +22,7 @@ with 3 _Agents_, and two single server instances.

We will assume that all processes run on the same machine (127.0.0.1). Such a scenario
should be used for testing only.

### Agency
### Local Test Agency

To start up an _Agency_ you first have to activate it. This is done by providing
the option `--agency.activate true`.
@ -67,7 +67,7 @@ arangod --server.endpoint tcp://0.0.0.0:5003 \

--database.directory agent3 &
```

### Single Server Instances
### Single Server Test Instances

To start the two single server instances, you can use the following commands:
@ -121,7 +121,7 @@ If we use:

then the commands you have to use are reported in the following subparagraphs.

### Agency
### Agency

On 192.168.1.1:
@ -23,7 +23,7 @@ In this paragraph we will include commands to manually start a Cluster with 3 _A

We will assume that all processes run on the same machine (127.0.0.1). Such a scenario
should be used for testing only.

### Agency
### Local Test Agency

To start up an _Agency_ you first have to activate it. This is done by providing
the option `--agency.activate true`.
@ -68,7 +68,7 @@ arangod --server.endpoint tcp://0.0.0.0:5003 \

--database.directory agent3 &
```

### DBServers and Coordinators
### Local Test DBServers and Coordinators

These two roles share a common set of relevant options. First you should specify
the role using `--cluster.my-role`. This can either be `PRIMARY` (a database server)
|

then the commands you have to use are reported in the following subparagraphs.

### Agency
### Agency

On 192.168.1.1:
@ -117,7 +117,7 @@ regardless of the value of this attribute.

### Dropping an index
### Dropping an index via a collection handle
<!-- arangod/V8Server/v8-vocindex.cpp -->
@ -207,7 +207,7 @@ Returns the index with *index-handle* or null if no such index exists.

@endDocuBlock IndexHandle

### Dropping an index
### Dropping an index via a database handle
<!-- js/server/modules/@arangodb/arango-database.js -->
@ -174,7 +174,7 @@ RETURN { found: OLD, updated: NEW }

A more detailed description of `UPSERT` can be found here:
http://jsteemann.github.io/blog/2015/03/27/preview-of-the-upsert-command/

### Miscellaneous changes
### Miscellaneous AQL changes

When errors occur inside AQL user functions, the error message will now contain a stacktrace,
indicating the line of code in which the error occurred. This should make debugging AQL user functions
@ -403,7 +403,7 @@ If the query cache is operated in `demand` mode, it can be controlled per query

if the cache should be checked for a result.

### Miscellaneous changes
### Miscellaneous AQL changes

### Optimizer
@ -361,8 +361,8 @@ Authorization

Read more in the [overview](../Administration/ManagingUsers/README.md).

Foxx
----
Foxx and authorization
----------------------

* the [cookie session transport](../Foxx/Reference/Sessions/Transports/Cookie.md) now supports all options supported by the [cookie method of the response object](../Foxx/Reference/Routers/Response.md#cookie).
@ -401,7 +401,7 @@ The `@arangodb/request` response object now stores the parsed JSON response

body in a property `json` instead of `body` when the request was made using the
`json` option. The `body` instead contains the response body as a string.

### Edges API
### JavaScript Edges API

When completely replacing an edge via a collection's `replace()` function, the replacing
edge data now needs to contain the `_from` and `_to` attributes for the new edge. Previous
@ -447,7 +447,7 @@ The collection function `byConditionSkiplist()` has been removed in 3.0. The sam

can be achieved by issuing an AQL query with the target condition, which will automatically use
a suitable index if present.

#### Revision id handling
#### JavaScript Revision id handling

The `exists()` method of a collection now throws an exception when the specified document
exists but its revision id does not match the revision id specified. Previous versions of
@ -609,7 +609,7 @@ based on AQL internally in 3.0, the API now returns a JSON object with a `result

### Edges API

#### CRUD operations
#### CRUD operations on edges

The APIs for documents and edges have been unified in ArangoDB 3.0. The CRUD operations
for documents and edges are now handled by the same endpoint at `/_api/document`. For
|

be created. Previous versions of ArangoDB also checked this attribute, but additionally
looked for an attribute `username` if the `user` attribute did not exist.

### Undocumented APIs
### Undocumented HTTP APIs

The following undocumented HTTP REST endpoints have been removed from ArangoDB's REST
API:
@ -16,13 +16,13 @@ Components

### Replication Logger

#### Purpose
**Purpose**

The _replication logger_ will write all data-modification operations into the
_write-ahead log_. This log may then be read by clients to replay any data
modification on a different server.

#### Checking the state
**Checking the state**

To query the current state of the _logger_, use the *state* command:
@ -73,7 +73,7 @@ and maximum tick values per logfile:

### Replication Applier

#### Purpose
**Purpose**

The purpose of the _replication applier_ is to read data from a master database's
event log and apply it locally. The _applier_ will check the master database
@ -281,6 +281,7 @@ function book-check-markdown-leftovers()

function check-dangling-anchors()
{
    rm -rf /tmp/tags/
    echo "${STD_COLOR}##### checking for dangling anchors${RESET}"
    find books/ -name '*.html' | while IFS= read -r htmlf; do
        fn=$(basename "${htmlf}")
@ -289,6 +290,30 @@ function check-dangling-anchors()

        grep '<h. ' < "${htmlf}" | \
            sed -e 's;.*id=";;' -e 's;".*;;' > "/tmp/tags/${dir}/${fn}"
    done

    fail=0
    rm -f /tmp/failduplicatetags.txt
    find /tmp/tags -type f | while IFS= read -r htmlf; do
        sort "${htmlf}" | grep -v ^$ > /tmp/sorted.txt
        sort -u "${htmlf}" | grep -v ^$ > /tmp/sortedunique.txt
        if test "$(comm -3 /tmp/sorted.txt /tmp/sortedunique.txt | wc -l)" -ne 0; then
            echo "${ERR_COLOR}"
            echo "in ${htmlf}: "
            comm -3 /tmp/sorted.txt /tmp/sortedunique.txt
            echo "${RESET}"
            touch /tmp/failduplicatetags.txt
        fi
    done

    rm -f /tmp/sorted.txt /tmp/sortedunique.txt
    if test -f /tmp/failduplicatetags.txt; then
        echo "${ERR_COLOR}"
        echo "duplicate anchors detected - see above"
        echo "${RESET}"
        rm -f /tmp/failduplicatetags.txt
        exit 1
    fi

    rm -f /tmp/anchorlist.txt

    echo "${STD_COLOR}##### fetching anchors from generated http files${RESET}"
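The new duplicate check rests on comparing a sorted anchor list with its deduplicated version via `comm -3`; a minimal sketch with made-up anchor ids:

```shell
# Sample anchor ids, with 'intro' duplicated (hypothetical data)
printf 'intro\nsetup\nintro\n' > /tmp/demo_tags.txt
sort /tmp/demo_tags.txt | grep -v '^$' > /tmp/demo_sorted.txt
sort -u /tmp/demo_tags.txt | grep -v '^$' > /tmp/demo_sortedunique.txt
# comm -3 suppresses lines common to both files, so any remaining
# output line is an anchor id that occurred more than once
comm -3 /tmp/demo_sorted.txt /tmp/demo_sortedunique.txt
```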
@ -361,17 +386,21 @@ function check-dangling-anchors()

function book-check-images-referenced()
{
    NAME="$1"
    set +e
    find "${NAME}" -name \*.png | while IFS= read -r image; do
        baseimage=$(basename "$image")
        if ! grep -Rq "${baseimage}" "${NAME}"; then
            echo "${ERR_COLOR}"
            echo "$image is not used!"
            echo "${RESET}"
            exit "1"
        fi
    done
    set -e
    echo "${STD_COLOR}##### checking for unused image files ${NAME}${RESET}"
    ERRORS=$(find "${NAME}" -name '*.png' | while IFS= read -r image; do
        baseimage=$(basename "$image")
        if ! grep -Rq "${baseimage}" "${NAME}"; then
            printf "\n${image}"
        fi
    done
    )
    if test "$(printf "${ERRORS}" | wc -l)" -gt 0; then
        echo "${ERR_COLOR}";
        echo "the following images are not referenced by any page: "
        echo "${ERRORS}"
        echo "${RESET}";
        exit 1;
    fi
}

function build-book-symlinks()
File diff suppressed because one or more lines are too long
@ -932,7 +932,7 @@ def restreplybody(cargo, r=Regexen()):

    if restReplyBodyParam == None:
        # https://github.com/swagger-api/swagger-ui/issues/1430
        # once this is solved we can skip this:
        operation['description'] += '\n#### HTTP ' + currentReturnCode + '\n'
        operation['description'] += '\n**HTTP ' + currentReturnCode + '**\n'
        operation['description'] += "*A json document with these Properties is returned:*\n"
        operation['responses'][currentReturnCode]['x-description-offset'] = len(operation['description'])