Doc - Dangling interbook checker [New] (#5735)
This commit is contained in: parent edc31ff7dd, commit 60bc6a948c
@@ -33,7 +33,7 @@ Rule of thumb is, the closer the UDF is to your final `RETURN` statement
(or maybe even inside it), the better.

When used in clusters, UDFs are always executed on the
[coordinator](../../Manual/Scalability/Architecture.html).
[coordinator](../../Manual/Scalability/Cluster/Architecture.html).

Using UDFs in clusters may result in a higher resource allocation
in terms of used V8 contexts and server threads. If you run out

@@ -60,6 +60,6 @@ Once it is in the `_aqlfunctions` collection, it is available on all
coordinators without additional effort.

Keep in mind that system collections are excluded from dumps created with
[arangodump](../../Manual/Administration/Arangodump.html) by default.
[arangodump](../../Manual/Programs/Arangodump/index.html) by default.
To include AQL UDF in a dump, the dump needs to be started with
the option *--include-system-collections true*.
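To illustrate the option mentioned above, a minimal sketch (the endpoint and the `dump` output directory are assumptions):

```bash
# Include system collections such as _aqlfunctions so that
# registered AQL UDFs become part of the dump.
arangodump \
  --server.endpoint tcp://localhost:8529 \
  --include-system-collections true \
  --output-directory dump
```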
@@ -78,7 +78,7 @@ Match documents where the **attribute-name** exists in the document
and is of the specified type.

- *attribute-name* - the path of the attribute to exist in the document
- *analyzer* - string with the analyzer used, i.e. *"text_en"* or [one of the other available string analyzers](../../../Manual/Views/ArangoSearch/Analyzers.html)
- *analyzer* - string with the analyzer used, i.e. *"text_en"* or [one of the other available string analyzers](../../Manual/Views/ArangoSearch/Analyzers.html)
- *type* - data type as string; one of:
- **bool**
- **boolean**

@@ -101,7 +101,7 @@ The phrase can be expressed as an arbitrary number of *phraseParts* separated by
- *attribute-name* - the path of the attribute to compare against in the document
- *phrasePart* - a string to search in the token stream; may consist of several words; will be split using the specified *analyzer*
- *skipTokens* number of words or tokens to treat as wildcards
- *analyzer* - string with the analyzer used, i.e. *"text_en"* or [one of the other available string analyzers](../../../Manual/Views/ArangoSearch/Analyzers.html)
- *analyzer* - string with the analyzer used, i.e. *"text_en"* or [one of the other available string analyzers](../../Manual/Views/ArangoSearch/Analyzers.html)

### STARTS_WITH()

@@ -121,7 +121,7 @@ The resulting Array can i.e. be used in subsequent `FILTER` statements with the
This can be used to better understand how the specific analyzer is going to behave.

- *input* string to tokenize
- *analyzer* [one of the available string analyzers](../../../Manual/Views/ArangoSearch/Analyzers.html)
- *analyzer* [one of the available string analyzers](../../Manual/Views/ArangoSearch/Analyzers.html)

#### Filtering examples
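As a quick way to try `TOKENS()` interactively, a hedged sketch over the HTTP cursor API (server address, sample string and analyzer name are assumptions):

```bash
# Tokenize a sample string with the text_en analyzer via the AQL cursor API.
curl -s -X POST http://localhost:8529/_api/cursor \
  -d '{"query": "RETURN TOKENS(\"ArangoDB loves search\", \"text_en\")"}'
```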
@@ -46,14 +46,14 @@ your ArangoDB 3.0 distribution!):
arangorestore --server.endpoint tcp://localhost:8530 --input-directory dump

to import your data into your new ArangoDB 3.0 instance. See
[this page](../../Manual/Administration/Arangorestore.html)
[this page](../../Manual/Programs/Arangorestore/index.html)
for details on the available command line options. If your ArangoDB 3.0
instance is a cluster, then simply use one of the coordinators as
`--server.endpoint`.

That is it, your data is migrated.

### Controling the number of shards and the replication factor
### Controlling the number of shards and the replication factor

This procedure works for all four combinations of single server and cluster
for source and destination respectively. If the target is a single server

@@ -70,7 +70,7 @@ use replication factor 1 for all collections. If the source was a
single server, the same will happen, additionally, `arangorestore`
will always create collections with just a single shard.

There are essentially 3 ways to change this behaviour:
There are essentially 3 ways to change this behavior:

1. The first is to create the collections explicitly on the
ArangoDB 3.0 cluster, and then set the `--create-collection false` flag.
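To illustrate option 1 above, a minimal sketch, assuming the collections were already created on the cluster and a coordinator is reachable at the given endpoint:

```bash
# Collections were created up front with the desired numberOfShards and
# replicationFactor, so arangorestore must not (re)create them.
arangorestore \
  --server.endpoint tcp://coordinator1:8529 \
  --input-directory dump \
  --create-collection false
```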
@@ -27,7 +27,7 @@ https://dcos.io/docs/1.7/usage/tutorials/arangodb/

To understand how ArangoDB ensures that it is highly available make sure to read the cluster documentation here:

[ArangoDB Architecture Documentation](../..//Manual/Scalability/Architecture.html)
[ArangoDB Architecture Documentation](../../Manual/Scalability/Cluster/Architecture.html)

### Deploy a load balancer for the coordinators
@@ -9,7 +9,8 @@ You've already built a custom version of ArangoDB and want to run it. Possibly i
Solution
--------

First, you need to build your own version of ArangoDB. If you haven't done so already, have a look at any of the [Compiling](README.md) recipes.
First, you need to build your own version of ArangoDB. If you haven't done so
already, have a look at any of the [Compiling](README.md) recipes.

This recipe assumes you're in the root directory of the ArangoDB distribution and compiling has successfully finished.

@@ -31,9 +32,18 @@ bin/arangod \

This part shows how to run your custom build with the config and data from a pre-existing stable installation.

**BEWARE** ArangoDB's developers may change the db file format and after running with a changed file format, there may be no way back. Alternatively you can run your build in isolation and [dump](../../Manual/Administration/Arangodump.html) and [restore](../../Manual/Administration/Arangorestore.html) the data from the stable to your custom build.
{% hint 'danger' %}
ArangoDB's developers may change the db file format and after running with a
changed file format, there may be no way back. Alternatively you can run your
build in isolation and [dump](../../Manual/Programs/Arangodump/index.html) and
[restore](../../Manual/Programs/Arangorestore/index.html) the data from the
stable to your custom build.
{% endhint %}

When running like this, you must run the db as the arangod user (the default installed by the package) in order to have write access to the log, database directory etc. Running as root will likely mess up the file permissions - good luck fixing that!
When running like this, you must run the db as the arangod user (the default
installed by the package) in order to have write access to the log, database
directory etc. Running as root will likely mess up the file permissions - good
luck fixing that!

```bash
# become root first
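# Illustrative sketch of the dump-and-restore route from the hint above;
# the endpoints, ports and directory name are assumptions, not part of the recipe.
# Take a dump from the stable installation ...
arangodump --server.endpoint tcp://localhost:8529 --output-directory /tmp/stable-dump
# ... and load it into the isolated custom build listening on another port.
arangorestore --server.endpoint tcp://localhost:8530 --input-directory /tmp/stable-dump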
@@ -56,15 +56,17 @@ while(cursor.hasNext()) {
config.known; // Returns the result of type name: counter. In arangosh this will print out complete result
```

To execute this script accordingly replace db.v and db.e with your collections (v is vertices, e is edges) and write it to a file: (e.g.) traverse.js
To execute this script accordingly replace db.v and db.e with your collections
(v is vertices, e is edges) and write it to a file, e.g. traverse.js,
then execute it in arangosh:

```
cat traverse.js | arangosh
```

If you want to use it in production you should have a look at the Foxx framework which allows you to store and execute this script on server side and make it accessible via your own API:
[Foxx](../../../2.8/Foxx/index.html)
If you want to use it in production you should have a look at the Foxx framework which allows
you to store and execute this script on server side and make it accessible via your own API:
[Foxx](../../Manual/Foxx/index.html)

Comment
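Alternatively, assuming the same file name, arangosh can be pointed at the script directly; a hedged sketch:

```bash
# Run the traversal script non-interactively (add connection options as needed).
arangosh --javascript.execute traverse.js
```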
@@ -50,7 +50,7 @@ Clients should thus not send the extra header when they have strict durability
requirements or if they rely on result of the sent operation for further actions.

The maximum number of queued tasks is determined by the startup option
*-scheduler.maximal-queue-size*. If more than this number of tasks are already queued,
*--server.maximal-queue-size*. If more than this number of tasks are already queued,
the server will reject the request with an HTTP 500 error.

Finally, please note that it is not possible to cancel such a

@@ -80,7 +80,7 @@ response to the client instantly and thus finish this HTTP-request.
The server will execute the tasks from the queue asynchronously as fast
as possible, while clients can continue to do other work.
If the server queue is full (i.e. contains as many tasks as specified by the
option ["--scheduler.maximal-queue-size"](../../Manual/Administration/Configuration/Communication.html)),
option ["--server.maximal-queue-size"](../../Manual/Programs/Arangod/Options.html#server-options)),
then the request will be rejected instantly with an *HTTP 500* (internal
server error) response.
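A minimal sketch of both header values against a local server (the collection name and document body are made-up examples):

```bash
# Fire and forget: the server answers with HTTP 202 and queues the operation.
curl -s -D - -o /dev/null -X POST http://localhost:8529/_api/document/mycollection \
  -H 'x-arango-async: true' -d '{"value": 1}'

# Queue it, but keep the result so it can be fetched later via /_api/job/<id>.
curl -s -D - -o /dev/null -X POST http://localhost:8529/_api/document/mycollection \
  -H 'x-arango-async: store' -d '{"value": 1}'
```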
@@ -198,7 +198,7 @@ HTTP layer:
*HTTP/1.1* will get an *HTTP 505* (HTTP version not supported) error in return.
* ArangoDB will reject client requests with a negative value in the
*Content-Length* request header with *HTTP 411* (Length Required).
* Arangodb doesn't support POST with *transfer-encoding: chunked* which forbids
* ArangoDB doesn't support POST with *transfer-encoding: chunked* which forbids
the *Content-Length* header above.
* the maximum URL length accepted by ArangoDB is 16K. Incoming requests with
longer URLs will be rejected with an *HTTP 414* (Request-URI too long) error.

@@ -213,7 +213,7 @@ HTTP layer:
complete its request. If the client does not send the remaining body data
within this time, ArangoDB will close the connection. Clients should avoid
sending such malformed requests as this will block one tcp connection,
and may lead to a temporary filedescriptor leak.
and may lead to a temporary file descriptor leak.
* when clients send a body or a *Content-Length* value bigger than the maximum
allowed value (512 MB), ArangoDB will respond with *HTTP 413* (Request Entity
Too Large).
@@ -260,7 +260,7 @@ ArangoDB will set the following headers in the response:
* `access-control-allow-credentials`: will be set to `false` by default.
For details on when it will be set to `true` see the next section on cookies.

* `access-control-allow-headers`: will be set to the exect value of the
* `access-control-allow-headers`: will be set to the exact value of the
request's `access-control-request-headers` header or omitted if no such
header was sent in the request.

@@ -275,7 +275,7 @@ ArangoDB will set the following headers in the response:
* `access-control-expose-headers`: will be set to a list of response headers used
by the ArangoDB HTTP API.

* `access-control-max-age`: will be set to an implementation-specifc value.
* `access-control-max-age`: will be set to an implementation-specific value.

### Actual request

@@ -294,7 +294,7 @@ ArangoDB will add the following headers to the response:
When making CORS requests to endpoints of Foxx services, the value of the
`access-control-expose-headers` header will instead be set to a list of
response headers used in the response itself (but not including the
`access-control-` headers). Note that [Foxx services may override this behaviour](../../Manual/Foxx/Cors).
`access-control-` headers). Note that [Foxx services may override this behavior](../../Manual/Foxx/Reference/Cors.html).

### Cookies and authentication
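A hedged sketch of such a preflight against a local server, to inspect the `access-control-*` response headers described above (origin and requested method/headers are placeholders):

```bash
# Send a CORS preflight request and show only the response headers.
curl -s -D - -o /dev/null -X OPTIONS http://localhost:8529/_api/version \
  -H 'Origin: http://example.com' \
  -H 'Access-Control-Request-Method: GET' \
  -H 'Access-Control-Request-Headers: authorization'
```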
@@ -15,7 +15,7 @@ There is a job that can restore this property safely. However, while the
job is running,
- the `replicationFactor` *must not be changed* for any affected collection or
prototype collection (i.e. set in `distributeShardsLike`, including
[SmartGraphs](../../Manual/Graphs/SmartGraphs/)),
[SmartGraphs](../../Manual/Graphs/SmartGraphs/index.html)),
- *neither should shards be moved* of one of those prototypes
- and shutdown of DBServers *should be avoided*
during the repairs. Also only one repair job should run at any given time.

@@ -160,7 +160,7 @@ this:
If something is to be repaired, the response will have the property
`collections` with an entry `<db>/<collection>` for each collection which
has to be repaired. Each collection also as a separate `error` property
which will be `true` iff an error occured for this collection (and `false`
which will be `true` iff an error occurred for this collection (and `false`
otherwise). If `error` is `true`, the properties `errorNum` and
`errorMessage` will also be set, and in some cases also `errorDetails`
with additional information on how to handle a specific error.

@@ -170,7 +170,7 @@ with additional information on how to handle a specific error.
As this job possibly has to move a lot of data around, it can take a while
depending on the size of the affected collections. So this should *not
be called synchronously*, but only via
[Async Results](../../HTTP/AsyncResultsManagement): i.e., set the
[Async Results](../../HTTP/AsyncResultsManagement/index.html): i.e., set the
header `x-arango-async: store` to put the job into background and get
its results later. Otherwise the request will most probably result in a
timeout and the response will be lost! The job will still continue unless

@@ -224,5 +224,5 @@ $ wget --method=PUT -qSO - http://localhost:8529/_api/job/152223973119118 | jq .
```

The final response will look like the response of the `GET` call.
If an error occured the response should contain details on how to proceed.
If an error occurred the response should contain details on how to proceed.
If in doubt, ask as on Slack: https://arangodb.com/community/
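A hedged sketch of the background pattern described above; the repair endpoint and HTTP method are placeholders, only the `x-arango-async: store` header and the `/_api/job` retrieval are taken from this page:

```bash
# Put the repair job into the background; "<repair-endpoint>" and the HTTP
# method are placeholders. The job id is reported in the
# x-arango-async-id response header.
wget --method=POST --header='x-arango-async: store' -qSO - \
  http://localhost:8529/<repair-endpoint> 2>&1 | grep -i x-arango-async-id

# Fetch the job result later, using the reported id instead of <job-id>.
wget --method=PUT -qSO - http://localhost:8529/_api/job/<job-id> | jq .
```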
@@ -6,7 +6,7 @@ Replication

This is an introduction to ArangoDB's HTTP replication interface.
The replication architecture and components are described in more details in
[Replication](../../Manual/Administration/Replication/index.html).
[Replication](../../Manual/Architecture/Replication/index.html).

The HTTP replication interface serves four main purposes:
- fetch initial data from a server (e.g. for a backup, or for the initial synchronization
@@ -234,5 +234,5 @@ Note that while a slave has only partly executed a transaction from the master,
a write lock on the collections involved in the transaction.

You may also want to check the master and slave states via the HTTP APIs
(see [HTTP Interface for Replication](../../../../HTTP/Replications/index.html)).
(see [HTTP Interface for Replication](../../../HTTP/Replications/index.html)).
@@ -233,5 +233,5 @@ Note that while a slave has only partly executed a transaction from the master,
a write lock on the collections involved in the transaction.

You may also want to check the master and slave states via the HTTP APIs
(see [HTTP Interface for Replication](../../../../HTTP/Replications/index.html)).
(see [HTTP Interface for Replication](../../../HTTP/Replications/index.html)).
@@ -2,7 +2,7 @@ Geo Queries
===========

{% hint 'warning' %}
It is recommended to use AQL instead, see [**Geo functions**](../../../AQL/Functions/Geo.html).
It is recommended to use AQL instead, see [**Geo functions**](../../../../AQL/Functions/Geo.html).
{% endhint %}

The ArangoDB allows to select documents based on geographic coordinates. In

@@ -62,5 +62,5 @@ Related topics
--------------

Other ArangoDB geographic features are described in:
- [AQL Geo functions](../../../AQL/Functions/Geo.html)
- [AQL Geo functions](../../../../AQL/Functions/Geo.html)
- [Geo-Spatial indexes](../../../Indexing/Geo.md)
@@ -10,4 +10,4 @@ While Foxx is primarily designed to be used to access the database itself, Arang
[Scripts](Scripts.md) can be used to perform one-off tasks, which can also be scheduled to be performed asynchronously using the built-in job queue.

Finally, Foxx services can be installed and managed over the Web-UI or through
ArangoDB's [HTTP API](../../HTTP/Foxx/Management.html).
ArangoDB's [HTTP API](../../../HTTP/Foxx/Management.html).
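A minimal sketch of that HTTP API, assuming a local server on the default endpoint:

```bash
# List the Foxx services installed in the current database.
curl -s http://localhost:8529/_api/foxx
```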
@@ -161,7 +161,7 @@ and [Arangorestore](../Programs/Arangorestore/README.md) to restore a backup int
Managing graphs
---------------

By default you should use [the interface your driver provides to manage graphs](../HTTP/Gharial/Management.html).
By default you should use [the interface your driver provides to manage graphs](../../HTTP/Gharial/Management.html).

This is i.e. documented [in Graphs-Section of the ArangoDB Java driver](https://github.com/arangodb/arangodb-java-driver#graphs).

@@ -315,7 +315,6 @@ Cookbook examples
The above referenced chapters describe the various APIs of ArangoDBs graph engine with small examples. Our cookbook has some more real life examples:

- [Traversing a graph in full depth](../../Cookbook/Graph/FulldepthTraversal.html)
- [Using an example vertex with the java driver](../../Cookbook/Graph/JavaDriverGraphExampleVertex.html)
- [Retrieving documents from ArangoDB without knowing the structure](../../Cookbook/UseCases/JavaDriverBaseDocument.html)
- [Using a custom visitor from node.js](../../Cookbook/Graph/CustomVisitorFromNodeJs.html)
- [AQL Example Queries on an Actors and Movies Database](../../Cookbook/Graph/ExampleActorsAndMovies.html)
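The same management operations are also reachable directly over HTTP; a hedged sketch, assuming a local server on the default endpoint:

```bash
# List all named graphs of the current database via the Gharial API.
curl -s http://localhost:8529/_api/gharial
```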
@@ -42,7 +42,7 @@ Communication Layer

ArangoDB up to 3.0 used [libev](http://software.schmorp.de/pkg/libev.html) for
the communication layer. ArangoDB starting from 3.1 uses
[Boost ASIO](www.boost.org).
[Boost ASIO](https://www.boost.org).

Starting with ArangoDB 3.1 we begin to provide the VelocyStream Protocol (vst) as
a addition to the established http protocol.
@@ -150,6 +150,7 @@ function ppbook-precheck-bad-headings()
function ppbook-check-html-link()
{
NAME="$1"
MSG="$2"
echo "${STD_COLOR}##### checking for invalid HTML links in ${NAME}${RESET}"
echo "${ALLBOOKS}" | tr " " "\n" | sed -e 's;^;/;' -e 's;$;/;' > /tmp/books.regex

@@ -165,7 +166,7 @@ function ppbook-check-html-link()
if test "$(wc -l < /tmp/relative_html_links.txt)" -gt 0; then
echo "${ERR_COLOR}"
echo "Found links to .html files inside of the document! use <foo>.md instead!"
echo
echo "${MSG}"
cat /tmp/relative_html_links.txt
echo "${RESET}"
exit 1
@@ -266,21 +267,6 @@ function book-check-markdown-leftovers()
exit 1
fi

set +e
ERRORS=$(find "books/${NAME}" -name '*.html' \
-exec grep '<a href=".*\.md#*.*"' {} \; \
-print | \
grep -v https:// | \
grep -v http://)
set -e
if test "$(echo -n "${ERRORS}" | wc -l)" -gt 0; then
echo "${ERR_COLOR}"
echo "found unconverted markdown links: "
echo "${ERRORS}"
echo "${RESET}"
exit 1
fi

set +e
ERRORS=$(find "books/${NAME}" -name '*.html' -exec grep '\[.*\](.*[\.html|\.md|http|#.*])' {} \; -print)
set -e
@@ -303,11 +289,25 @@ function check-dangling-anchors()
grep '<h. ' < "${htmlf}" | \
sed -e 's;.*id=";;' -e 's;".*;;' > "/tmp/tags/${dir}/${fn}"
done
rm -f /tmp/anchorlist.txt

echo "${STD_COLOR}##### fetching anchors from generated http files${RESET}"
grep -R "a href.*#" books/ | \
grep -v -E "(styles/header\\.js|/app\\.js|class=\"navigation|https*://|href=\"#\")" | \
sed 's;\(.*\.html\):.*a href="\(.*\)#\(.*\)">.*</a>.*;\1,\2,\3;' | grep -v " " > /tmp/anchorlist.txt
for file in $(find books -name \*.html); do
# - strip of the menu
# - then the page tail.
# - remove links to external pages
cat $file | \
sed -r -n -e '/normal markdown-section/,${p}'| \
sed -r -n -e '/.*id="page-footer".*/q;p' | \
grep '<a href="' | \
grep -v 'target="_blank"' | \
sed -e 's;.*href=";;' -e 's;".*;;' > /tmp/thisdoc.txt
# Links with anchors:
cat /tmp/thisdoc.txt |grep '#' | sed "s;\(.*\)#\(.*\);${file},\1,\2;" >> /tmp/anchorlist.txt
# links without anchors:
cat /tmp/thisdoc.txt |grep -v '#' | sed "s;\(.*\);${file},\1,;" >> /tmp/anchorlist.txt

done

echo "${STD_COLOR}##### cross checking anchors${RESET}"
NO=0
@@ -321,19 +321,19 @@ function check-dangling-anchors()
if test -z "$FN"; then
FN="$SFN"
else
SFNP=$(sed 's;/[a-zA-Z0-9]*.html;;' <<< "$SFN")
SFNP=$(sed 's;/[a-zA-Z0-9.-]*.html;;' <<< "$SFN")
FN="${SFNP}/${FN}"
fi
if test -d "$FN"; then
FN="${FN}index.html"
fi
if test -n "$ANCHOR"; then
if test ! -f "/tmp/tags/${FN}"; then
echo "${ERR_COLOR}"
echo "File referenced by ${i} doesn't exist."
NO=$((NO + 1))
echo "${RESET}"
else
if test -n "$ANCHOR"; then
if grep -q "^$ANCHOR$" "/tmp/tags/$FN"; then
true
else
@@ -658,7 +658,7 @@ function build-books()
done

for book in ${ALLBOOKS}; do
ppbook-check-html-link "${book}"
ppbook-check-html-link "${book}" ""
done

check-docublocks ""
@@ -772,6 +772,7 @@ case "$VERB" in
build-book "$NAME"
check-docublocks "some of the above errors may be because of referenced books weren't rebuilt."
check-dangling-anchors "some of the above errors may be because of referenced books weren't rebuilt."
ppbook-check-html-link "${NAME}" "some of the above errors may be because of referenced books weren't rebuilt."
;;
check-book)
check-summary "${NAME}"

@@ -780,8 +781,7 @@ case "$VERB" in
ppbook-check-directory-link "${NAME}"
book-check-images-referenced "${NAME}"
book-check-markdown-leftovers "${NAME}"
check-docublocks "some of the above errors may be because of referenced books weren't rebuilt."
check-dangling-anchors "some of the above errors may be because of referenced books weren't rebuilt."
check-dangling-anchors "${NAME}" "some of the above errors may be because of referenced books weren't rebuilt."
;;
build-dist-books)
build-dist-books
@@ -39,7 +39,7 @@ contains the distance between the given point and the document in meters.
Note: the *near* simple query function is **deprecated** as of ArangoDB 2.6.
The function may be removed in future versions of ArangoDB. The preferred
way for retrieving documents from a collection using the near operator is
to use the AQL *NEAR* function in an [AQL query](../../AQL/Functions/Geo.html) as follows:
to use the AQL *NEAR* function in an AQL query as follows:

```js
FOR doc IN NEAR(@@collection, @latitude, @longitude, @limit)
@@ -28,7 +28,7 @@ contains the distance between the given point and the document in meters.
Note: the *within* simple query function is **deprecated** as of ArangoDB 2.6.
The function may be removed in future versions of ArangoDB. The preferred
way for retrieving documents from a collection using the within operator is
to use the AQL *WITHIN* function in an [AQL query](../../AQL/Functions/Geo.html) as follows:
to use the AQL *WITHIN* function in an AQL query as follows:

```
FOR doc IN WITHIN(@@collection, @latitude, @longitude, @radius, @distanceAttributeName)
@@ -77,6 +77,7 @@ be driver or other utilities which shouldn't be directly in sync to the ArangoDB
The maintainer of the respective component can alter the documentation, and once a good point in
time is reached, it can be sync'ed over via `Documentation/Scripts/fetchRefs.sh`, which spiders
the `SUMMARY.md` files of all books, creates a clone of the external resource, adds a `don't edit this here` note to the files, and copies them over.
Use your *github username* as first parameter to clone using HTTP + authentification, or `git` if you want to use ssh+key for authentification

The syntax of the `SUMMARY.md` integration are special comment lines that contain `git` in them in a semicolon separated value list:
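For example (run from the repository root; the username is a placeholder):

```bash
# Clone/update the external repositories over HTTPS, authenticating with a GitHub account:
Documentation/Scripts/fetchRefs.sh <your-github-username>

# ... or use ssh+key authentication instead:
Documentation/Scripts/fetchRefs.sh git
```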
@@ -27,6 +27,9 @@ for book in ${ALLBOOKS}; do
git pull --all
)
else
if test "${GITAUTH}" == "git"; then
AUTHREPO=$(echo "${AUTHREPO}" | sed -e "s;github.com/;github.com:;" -e "s;https://;;" )
fi
git clone "${AUTHREPO}" "${CODIR}"
fi