
Doc - Added Kubernetes documentation (#5617)

This commit is contained in:
Luigi Servini 2018-06-15 12:37:45 +02:00 committed by sleto-it
parent 3bbc89ee96
commit e9699a38ba
16 changed files with 1458 additions and 0 deletions


@ -0,0 +1,19 @@
<!-- don't edit here, it's from https://github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Authentication
The ArangoDB Kubernetes Operator will by default create ArangoDB deployments
that require authentication to access the database.
It uses a single JWT secret (stored in a Kubernetes secret)
to provide *super-user* access between all servers of the deployment
as well as access from the ArangoDB Operator to the deployment.
To disable authentication, set `spec.auth.jwtSecretName` to `None`.
Initially the deployment is accessible through the web user interface and
APIs, using the user `root` with an empty password.
Make sure to change this password immediately after starting the deployment!
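For illustration, a minimal deployment sketch with authentication disabled (the resource name is a placeholder):

```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-no-auth"   # hypothetical name
spec:
  mode: Cluster
  auth:
    jwtSecretName: None
```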
## See also
- [Secure connections (TLS)](./Tls.md)


@ -0,0 +1,41 @@
<!-- don't edit here, it's from https://github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Configuration & secrets
An ArangoDB cluster has lots of configuration options.
Some will be supported directly in the ArangoDB Operator,
others will have to be specified separately.
## Built-in options
All built-in options are passed to ArangoDB servers via command-line
arguments configured in the Pod spec.
## Other configuration options
All command-line options of `arangod` (and `arangosync`) are available
by adding options to the `spec.<group>.args` list of a group
of servers.
These arguments are added to the command line created for these servers.
## Secrets
The ArangoDB cluster needs several secrets such as JWT tokens,
TLS certificates, and so on.
All these secrets are stored as Kubernetes Secrets and passed to
the applicable Pods as files, mapped into the Pod's filesystem.
The name of the secret is specified in the custom resource.
For example:
```yaml
apiVersion: "cluster.arangodb.com/v1alpha"
kind: "Cluster"
metadata:
  name: "example-arangodb-cluster"
spec:
  mode: Cluster
  auth:
    jwtSecretName: <name-of-JWT-token-secret>
```


@ -0,0 +1,208 @@
<!-- don't edit here, it's from https://github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# ArangoDeploymentReplication Custom Resource
The ArangoDB Replication Operator creates and maintains ArangoDB
`arangosync` configurations in a Kubernetes cluster, given a replication specification.
This replication specification is a `CustomResource` following
a `CustomResourceDefinition` created by the operator.
Example minimal replication definition for two ArangoDB clusters with sync enabled in the same Kubernetes cluster:
```yaml
apiVersion: "replication.database.arangodb.com/v1alpha"
kind: "ArangoDeploymentReplication"
metadata:
  name: "replication-from-a-to-b"
spec:
  source:
    deploymentName: cluster-a
    auth:
      keyfileSecretName: cluster-a-sync-auth
  destination:
    deploymentName: cluster-b
```
This definition results in:
- the arangosync `SyncMaster` in deployment `cluster-b` is called to configure a synchronization
from the syncmasters in `cluster-a` to the syncmasters in `cluster-b`,
using the client authentication certificate stored in `Secret` `cluster-a-sync-auth`.
To access `cluster-a`, the JWT secret found in the deployment of `cluster-a` is used.
To access `cluster-b`, the JWT secret found in the deployment of `cluster-b` is used.
Example replication definition for replicating from a source that is outside the current Kubernetes cluster
to a destination that is in the same Kubernetes cluster:
```yaml
apiVersion: "replication.database.arangodb.com/v1alpha"
kind: "ArangoDeploymentReplication"
metadata:
  name: "replication-from-a-to-b"
spec:
  source:
    endpoint: ["https://163.172.149.229:31888", "https://51.15.225.110:31888", "https://51.15.229.133:31888"]
    auth:
      keyfileSecretName: cluster-a-sync-auth
    tls:
      caSecretName: cluster-a-sync-ca
  destination:
    deploymentName: cluster-b
```
This definition results in:
- the arangosync `SyncMaster` in deployment `cluster-b` is called to configure a synchronization
from the syncmasters located at the given list of endpoint URLs to the syncmasters in `cluster-b`,
using the client authentication certificate stored in `Secret` `cluster-a-sync-auth`.
To access `cluster-a`, the keyfile (containing a client authentication certificate) is used.
To access `cluster-b`, the JWT secret found in the deployment of `cluster-b` is used.
## Specification reference
Below you'll find all settings of the `ArangoDeploymentReplication` custom resource.
### `spec.source.deploymentName: string`
This setting specifies the name of an `ArangoDeployment` resource that runs a cluster
with sync enabled.
This cluster is configured as the replication source.
### `spec.source.endpoint: []string`
This setting specifies zero or more master endpoint URLs of the source cluster.
Use this setting if the source cluster is not running inside a Kubernetes cluster
that is reachable from the Kubernetes cluster the `ArangoDeploymentReplication` resource is deployed in.
Specifying this setting and `spec.source.deploymentName` at the same time is not allowed.
### `spec.source.auth.keyfileSecretName: string`
This setting specifies the name of a `Secret` containing a client authentication certificate called `tls.keyfile` used to authenticate
with the SyncMaster at the specified source.
If `spec.source.auth.userSecretName` has not been set,
the client authentication certificate found in the secret with this name is also used to configure
the synchronization and fetch the synchronization status.
This setting is required.
### `spec.source.auth.userSecretName: string`
This setting specifies the name of a `Secret` containing a `username` & `password` used to authenticate
with the SyncMaster at the specified source in order to configure synchronization and fetch synchronization status.
The user identified by the username must have write access in the `_system` database of the source ArangoDB cluster.
### `spec.source.tls.caSecretName: string`
This setting specifies the name of a `Secret` containing a TLS CA certificate `ca.crt` used to verify
the TLS connection created by the SyncMaster at the specified source.
This setting is required, unless `spec.source.deploymentName` has been set.
### `spec.destination.deploymentName: string`
This setting specifies the name of an `ArangoDeployment` resource that runs a cluster
with sync enabled.
This cluster is configured as the replication destination.
### `spec.destination.endpoint: []string`
This setting specifies zero or more master endpoint URLs of the destination cluster.
Use this setting if the destination cluster is not running inside a Kubernetes cluster
that is reachable from the Kubernetes cluster the `ArangoDeploymentReplication` resource is deployed in.
Specifying this setting and `spec.destination.deploymentName` at the same time is not allowed.
### `spec.destination.auth.keyfileSecretName: string`
This setting specifies the name of a `Secret` containing a client authentication certificate called `tls.keyfile` used to authenticate
with the SyncMaster at the specified destination.
If `spec.destination.auth.userSecretName` has not been set,
the client authentication certificate found in the secret with this name is also used to configure
the synchronization and fetch the synchronization status.
This setting is required, unless `spec.destination.deploymentName` or `spec.destination.auth.userSecretName` has been set.
Specifying this setting and `spec.destination.auth.userSecretName` at the same time is not allowed.
### `spec.destination.auth.userSecretName: string`
This setting specifies the name of a `Secret` containing a `username` & `password` used to authenticate
with the SyncMaster at the specified destination in order to configure synchronization and fetch synchronization status.
The user identified by the username must have write access in the `_system` database of the destination ArangoDB cluster.
Specifying this setting and `spec.destination.auth.keyfileSecretName` at the same time is not allowed.
### `spec.destination.tls.caSecretName: string`
This setting specifies the name of a `Secret` containing a TLS CA certificate `ca.crt` used to verify
the TLS connection created by the SyncMaster at the specified destination.
This setting is required, unless `spec.destination.deploymentName` has been set.
## Authentication details
The authentication settings in an `ArangoDeploymentReplication` resource are used for two distinct purposes.
The first use is the authentication of the syncmasters at the destination with the syncmasters at the source.
This is always done using a client authentication certificate which is found in a `tls.keyfile` field
in a secret identified by `spec.source.auth.keyfileSecretName`.
The second use is the authentication of the ArangoDB Replication operator with the syncmasters at the source
or destination. These connections are made to configure synchronization, stop configuration and fetch the status
of the configuration.
The method used for this authentication is derived as follows (where `X` is either `source` or `destination`):
- If `spec.X.userSecretName` is set, the username + password found in the `Secret` identified by this name is used.
- If `spec.X.keyfileSecretName` is set, the client authentication certificate (keyfile) found in the `Secret` identified by this name is used.
- If `spec.X.deploymentName` is set, the JWT secret found in the deployment is used.
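As an illustration, a basic-auth `Secret` for `spec.X.auth.userSecretName` could be created from a manifest like the following (the name and credentials are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster-a-sync-user   # placeholder name
type: Opaque
stringData:
  username: replication-user  # must have write access to _system
  password: replace-me
```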
## Creating client authentication certificate keyfiles
The client authentication certificates needed for the `Secrets` identified by `spec.source.auth.keyfileSecretName` & `spec.destination.auth.keyfileSecretName`
are normal ArangoDB keyfiles that can be created by the `arangosync create client-auth keyfile` command.
In order to do so, you must have access to the client authentication CA of the source/destination.
If the client authentication CA at the source/destination also contains a private key (`ca.key`), the ArangoDeployment operator
can be used to create such a keyfile for you, without the need to have `arangosync` installed locally.
Read the following paragraphs for instructions on how to do that.
## Creating and using access packages
An access package is a YAML file that contains:
- A client authentication certificate, wrapped in a `Secret` in a `tls.keyfile` data field.
- A TLS certificate authority public key, wrapped in a `Secret` in a `ca.crt` data field.
The format of the access package is such that it can be inserted into a Kubernetes cluster using the standard `kubectl` tool.
To create an access package that can be used to authenticate with the ArangoDB SyncMasters of an `ArangoDeployment`,
add a name of a non-existing `Secret` to the `spec.sync.externalAccess.accessPackageSecretNames` field of the `ArangoDeployment`.
In response, a `Secret` is created in that Kubernetes cluster, with the given name, containing an `accessPackage.yaml` data field
that holds a Kubernetes resource specification that can be inserted into the other Kubernetes cluster.
The process for creating and using an access package for authentication at the source cluster is as follows:
- Edit the `ArangoDeployment` resource of the source cluster, setting `spec.sync.externalAccess.accessPackageSecretNames` to `["my-access-package"]`.
- Wait for the `ArangoDeployment` operator to create a `Secret` named `my-access-package`.
- Extract the access package from the Kubernetes source cluster using:
```bash
kubectl get secret my-access-package --template='{{index .data "accessPackage.yaml"}}' | base64 -D > accessPackage.yaml
```
- Insert the secrets found in the access package in the Kubernetes destination cluster using:
```bash
kubectl apply -f accessPackage.yaml
```
As a result, the destination Kubernetes cluster will have two additional `Secrets`. One contains a client authentication certificate
formatted as a keyfile; the other contains the public key of the TLS CA certificate of the source cluster.


@ -0,0 +1,379 @@
<!-- don't edit here, it's from https://github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# ArangoDeployment Custom Resource
The ArangoDB Deployment Operator creates and maintains ArangoDB deployments
in a Kubernetes cluster, given a deployment specification.
This deployment specification is a `CustomResource` following
a `CustomResourceDefinition` created by the operator.
Example minimal deployment definition of an ArangoDB database cluster:
```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-arangodb-cluster"
spec:
  mode: Cluster
```
Example more elaborate deployment definition:
```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-arangodb-cluster"
spec:
  mode: Cluster
  environment: Production
  agents:
    count: 3
    args:
      - --log.level=debug
    resources:
      requests:
        storage: 8Gi
    storageClassName: ssd
  dbservers:
    count: 5
    resources:
      requests:
        storage: 80Gi
    storageClassName: ssd
  coordinators:
    count: 3
  image: "arangodb/arangodb:3.3.4"
```
## Specification reference
Below you'll find all settings of the `ArangoDeployment` custom resource.
Several settings are for various groups of servers. These are indicated
with `<group>` where `<group>` can be any of:
- `agents` for all agents of a `Cluster` or `ActiveFailover` pair.
- `dbservers` for all dbservers of a `Cluster`.
- `coordinators` for all coordinators of a `Cluster`.
- `single` for all single servers of a `Single` instance or `ActiveFailover` pair.
- `syncmasters` for all syncmasters of a `Cluster`.
- `syncworkers` for all syncworkers of a `Cluster`.
### `spec.mode: string`
This setting specifies the type of deployment you want to create.
Possible values are:
- `Cluster` (default) Full cluster. Defaults to 3 agents, 3 dbservers & 3 coordinators.
- `ActiveFailover` Active-failover single pair. Defaults to 3 agents and 2 single servers.
- `Single` Single server only (note this does not provide high availability or reliability).
This setting cannot be changed after the deployment has been created.
### `spec.environment: string`
This setting specifies the type of environment in which the deployment is created.
Possible values are:
- `Development` (default) This value optimizes the deployment for development
use. It is possible to run a deployment on a small number of nodes (e.g. minikube).
- `Production` This value optimizes the deployment for production use.
It puts required affinity constraints on all pods to avoid agents & dbservers
from running on the same machine.
### `spec.image: string`
This setting specifies the docker image to use for all ArangoDB servers.
In a `development` environment this setting defaults to `arangodb/arangodb:latest`.
For `production` environments this is a required setting without a default value.
It is highly recommended to use an explicit version (not `latest`) for production
environments.
### `spec.imagePullPolicy: string`
This setting specifies the pull policy for the docker image to use for all ArangoDB servers.
Possible values are:
- `IfNotPresent` (default) to pull only when the image is not found on the node.
- `Always` to always pull the image before using it.
### `spec.storageEngine: string`
This setting specifies the type of storage engine used for all servers
in the cluster.
Possible values are:
- `MMFiles` To use the MMFiles storage engine.
- `RocksDB` (default) To use the RocksDB storage engine.
This setting cannot be changed after the cluster has been created.
### `spec.rocksdb.encryption.keySecretName`
This setting specifies the name of a kubernetes `Secret` that contains
an encryption key used for encrypting all data stored by ArangoDB servers.
When an encryption key is used, encryption of the data in the cluster is enabled;
without it, encryption is disabled.
The default value is empty.
This requires the Enterprise version.
The encryption key cannot be changed after the cluster has been created.
The secret specified by this setting must have a data field named `key` containing
an encryption key that is exactly 32 bytes long.
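As a sketch, a suitable key can be generated like this (the secret name is illustrative; the `kubectl` step is shown as a comment only):

```shell
# Generate exactly 32 random bytes for the 'key' data field.
head -c 32 /dev/urandom > key
# Verify the length; prints 32.
wc -c < key
# Then wrap it in a Secret, e.g.:
#   kubectl create secret generic enc-key --from-file=key=key
```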
### `spec.externalAccess.type: string`
This setting specifies the type of `Service` that will be created to provide
access to the ArangoDB deployment from outside the Kubernetes cluster.
Possible values are:
- `None` To limit access to applications running inside the Kubernetes cluster.
- `LoadBalancer` To create a `Service` of type `LoadBalancer` for the ArangoDB deployment.
- `NodePort` To create a `Service` of type `NodePort` for the ArangoDB deployment.
- `Auto` (default) To create a `Service` of type `LoadBalancer` and fall back to a `Service` of type `NodePort` when the
`LoadBalancer` is not assigned an IP address.
### `spec.externalAccess.loadBalancerIP: string`
This setting specifies the IP address used by the `LoadBalancer` to expose the ArangoDB deployment on.
This setting is used when `spec.externalAccess.type` is set to `LoadBalancer` or `Auto`.
If you do not specify this setting, an IP will be chosen automatically by the load-balancer provisioner.
### `spec.externalAccess.nodePort: int`
This setting specifies the port used to expose the ArangoDB deployment on.
This setting is used when `spec.externalAccess.type` is set to `NodePort` or `Auto`.
If you do not specify this setting, a random port will be chosen automatically.
### `spec.auth.jwtSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
the JWT token used for accessing all ArangoDB servers.
When no name is specified, it defaults to `<deployment-name>-jwt`.
To disable authentication, set this value to `None`.
If you specify a name of a `Secret`, that secret must have the token
in a data field named `token`.
If you specify a name of a `Secret` that does not exist, a random token is created
and stored in a `Secret` with the given name.
Changing a JWT token results in stopping the entire cluster
and restarting it.
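For illustration, a pre-created JWT `Secret` could look like this (the name and token value are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-arangodb-cluster-jwt   # placeholder name
type: Opaque
stringData:
  token: replace-with-a-long-random-string
```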
### `spec.tls.caSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
a standard CA certificate + private key used to sign certificates for individual
ArangoDB servers.
When no name is specified, it defaults to `<deployment-name>-ca`.
To disable TLS, set this value to `None`.
If you specify a name of a `Secret` that does not exist, a self-signed CA certificate + key is created
and stored in a `Secret` with the given name.
The specified `Secret` must contain the following data fields:
- `ca.crt` PEM encoded public key of the CA certificate
- `ca.key` PEM encoded private key of the CA certificate
### `spec.tls.altNames: []string`
This setting specifies a list of alternate names that will be added to all generated
certificates. These names can be DNS names or email addresses.
The default value is empty.
### `spec.tls.ttl: duration`
This setting specifies the time to live of all generated
server certificates.
The default value is `2160h` (about 3 months).
When the server certificate is about to expire, it will be automatically replaced
by a new one and the affected server will be restarted.
Note: The time to live of the CA certificate (when created automatically)
will be set to 10 years.
### `spec.sync.enabled: bool`
This setting enables/disables support for data center 2 data center
replication in the cluster. When enabled, the cluster will contain
a number of `syncmaster` & `syncworker` servers.
The default value is `false`.
### `spec.sync.externalAccess.type: string`
This setting specifies the type of `Service` that will be created to provide
access to the ArangoSync syncMasters from outside the Kubernetes cluster.
Possible values are:
- `None` To limit access to applications running inside the Kubernetes cluster.
- `LoadBalancer` To create a `Service` of type `LoadBalancer` for the ArangoSync SyncMasters.
- `NodePort` To create a `Service` of type `NodePort` for the ArangoSync SyncMasters.
- `Auto` (default) To create a `Service` of type `LoadBalancer` and fall back to a `Service` of type `NodePort` when the
`LoadBalancer` is not assigned an IP address.
Note that when you specify a value of `None`, a `Service` will still be created, but of type `ClusterIP`.
### `spec.sync.externalAccess.loadBalancerIP: string`
This setting specifies the IP used for the LoadBalancer to expose the ArangoSync SyncMasters on.
This setting is used when `spec.sync.externalAccess.type` is set to `LoadBalancer` or `Auto`.
If you do not specify this setting, an IP will be chosen automatically by the load-balancer provisioner.
### `spec.sync.externalAccess.nodePort: int`
This setting specifies the port used to expose the ArangoSync SyncMasters on.
This setting is used when `spec.sync.externalAccess.type` is set to `NodePort` or `Auto`.
If you do not specify this setting, a random port will be chosen automatically.
### `spec.sync.externalAccess.masterEndpoint: []string`
This setting specifies the master endpoint(s) advertised by the ArangoSync SyncMasters.
If not set, this setting defaults to:
- If `spec.sync.externalAccess.loadBalancerIP` is set, it defaults to `https://<load-balancer-ip>:8629`.
- Otherwise it defaults to `https://<sync-service-dns-name>:8629`.
### `spec.sync.externalAccess.accessPackageSecretNames: []string`
This setting specifies the names of zero or more `Secrets` that will be created by the deployment
operator containing "access packages". An access package contains those `Secrets` that are needed
to access the SyncMasters of this `ArangoDeployment`.
By removing a name from this setting, the corresponding `Secret` is also deleted.
Note that to remove all access packages, leave an empty array in place (`[]`).
Completely removing the setting results in not modifying the list.
See [the `ArangoDeploymentReplication` specification](./DeploymentReplicationResource.md) for more information
on access packages.
### `spec.sync.auth.jwtSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
the JWT token used for accessing all ArangoSync master servers.
When not specified, the `spec.auth.jwtSecretName` value is used.
If you specify a name of a `Secret` that does not exist, a random token is created
and stored in a `Secret` with the given name.
### `spec.sync.auth.clientCASecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
a PEM encoded CA certificate used for client certificate verification
in all ArangoSync master servers.
This is a required setting when `spec.sync.enabled` is `true`.
The default value is empty.
### `spec.sync.mq.type: string`
This setting sets the type of message queue used by ArangoSync.
Possible values are:
- `Direct` (default) for direct HTTP connections between the 2 data centers.
### `spec.sync.tls.caSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
a standard CA certificate + private key used to sign certificates for individual
ArangoSync master servers.
When no name is specified, it defaults to `<deployment-name>-sync-ca`.
If you specify a name of a `Secret` that does not exist, a self-signed CA certificate + key is created
and stored in a `Secret` with the given name.
The specified `Secret` must contain the following data fields:
- `ca.crt` PEM encoded public key of the CA certificate
- `ca.key` PEM encoded private key of the CA certificate
### `spec.sync.tls.altNames: []string`
This setting specifies a list of alternate names that will be added to all generated
certificates. These names can be DNS names or email addresses.
The default value is empty.
### `spec.sync.monitoring.tokenSecretName: string`
This setting specifies the name of a kubernetes `Secret` that contains
the bearer token used for accessing all monitoring endpoints of all ArangoSync
servers.
When not specified, no monitoring token is used.
The default value is empty.
### `spec.ipv6.forbidden: bool`
This setting prevents the use of IPv6 addresses by ArangoDB servers.
The default is `false`.
### `spec.<group>.count: number`
This setting specifies the number of servers to start for the given group.
For the agent group, this value must be a positive, odd number.
The default value is `3` for all groups except `single` (there the default is `1`
for `spec.mode: Single` and `2` for `spec.mode: ActiveFailover`).
For the `syncworkers` group, it is highly recommended to use the same number
as for the `dbservers` group.
### `spec.<group>.args: [string]`
This setting specifies additional commandline arguments passed to all servers of this group.
The default value is an empty array.
### `spec.<group>.resources.requests.cpu: cpuUnit`
This setting specifies the amount of CPU requested by each server of this group.
See https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ for details.
### `spec.<group>.resources.requests.memory: memoryUnit`
This setting specifies the amount of memory requested by each server of this group.
See https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ for details.
### `spec.<group>.resources.requests.storage: storageUnit`
This setting specifies the amount of storage required for each server of this group.
The default value is `8Gi`.
This setting is not available for group `coordinators`, `syncmasters` & `syncworkers`
because servers in these groups do not need persistent storage.
### `spec.<group>.serviceAccountName: string`
This setting specifies the `serviceAccountName` for the `Pods` created
for each server of this group.
An alternative `ServiceAccount` is typically used to separate access rights.
The ArangoDB deployments do not require any special rights.
### `spec.<group>.storageClassName: string`
This setting specifies the `storageClass` for the `PersistentVolume`s created
for each server of this group.
This setting is not available for group `coordinators`, `syncmasters` & `syncworkers`
because servers in these groups do not need persistent storage.
### `spec.<group>.tolerations: [Toleration]`
This setting specifies the `tolerations` for the `Pod`s created
for each server of this group.
By default, suitable tolerations are set for the following keys with the `NoExecute` effect:
- `node.kubernetes.io/not-ready`
- `node.kubernetes.io/unreachable`
- `node.alpha.kubernetes.io/unreachable` (will be removed in a future version)
For more information on tolerations, consult the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).
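A sketch of adding a custom toleration for the `dbservers` group (the values are illustrative, not defaults):

```yaml
spec:
  dbservers:
    tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 300   # evict pods 5 minutes after the node becomes unreachable
```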


@ -0,0 +1,129 @@
<!-- don't edit here, it's from https://github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Configuring your driver for ArangoDB access
In this chapter you'll learn how to configure a driver for accessing
an ArangoDB deployment in Kubernetes.
The exact methods to configure a driver are specific to that driver.
## Database endpoint(s)
The endpoint(s) (or URLs) to communicate with is the most important
parameter you need to configure in your driver.
Finding the right endpoints depends on whether your client application is running in
the same Kubernetes cluster as the ArangoDB deployment or not.
### Client application in same Kubernetes cluster
If your client application is running in the same Kubernetes cluster as
the ArangoDB deployment, you should configure your driver to use the
following endpoint:
```text
https://<deployment-name>.<namespace>.svc:8529
```
Only if your deployment has set `spec.tls.caSecretName` to `None`, should
you use `http` instead of `https`.
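The in-cluster endpoint can be assembled from the deployment name and namespace; a minimal sketch (the names are placeholders):

```shell
# Hypothetical deployment name and namespace.
DEPLOYMENT=example-arangodb-cluster
NAMESPACE=default
# Kubernetes service DNS name of the deployment, on the standard ArangoDB port.
ENDPOINT="https://${DEPLOYMENT}.${NAMESPACE}.svc:8529"
echo "$ENDPOINT"
# prints: https://example-arangodb-cluster.default.svc:8529
```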
### Client application outside Kubernetes cluster
If your client application is running outside the Kubernetes cluster in which
the ArangoDB deployment is running, your driver endpoint depends on the
external-access configuration of your ArangoDB deployment.
If the external-access of the ArangoDB deployment is of type `LoadBalancer`,
then use the IP address of that `LoadBalancer` like this:
```text
https://<load-balancer-ip>:8529
```
If the external-access of the ArangoDB deployment is of type `NodePort`,
then use the IP address(es) of the `Nodes` of the Kubernetes cluster,
combined with the `NodePort` that is used by the external-access service.
For example:
```text
https://<kubernetes-node-1-ip>:30123
```
You can find the type of external-access by inspecting the external-access `Service`.
To do so, run the following command:
```bash
kubectl get service -n <namespace-of-deployment> <deployment-name>-ea
```
The output looks like this:
```bash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
example-simple-cluster-ea LoadBalancer 10.106.175.38 192.168.10.208 8529:31890/TCP 1s app=arangodb,arango_deployment=example-simple-cluster,role=coordinator
```
In this case the external-access is of type `LoadBalancer` with a load-balancer IP address
of `192.168.10.208`.
This results in an endpoint of `https://192.168.10.208:8529`.
## TLS settings
As mentioned before, the ArangoDB deployment managed by the ArangoDB operator
will use a secure (TLS) connection unless you set `spec.tls.caSecretName` to `None`
in your `ArangoDeployment`.
When using a secure connection, you can choose whether to verify the server certificates
provided by the ArangoDB servers.
If you want to verify these certificates, configure your driver with the CA certificate
stored in a Kubernetes `Secret` in the same namespace as the `ArangoDeployment`.
The name of this `Secret` is stored in the `spec.tls.caSecretName` setting of
the `ArangoDeployment`. If you don't set this setting explicitly, it will be
set automatically.
Then fetch the CA secret using the following command (or use a Kubernetes client library to fetch it):
```bash
kubectl get secret -n <namespace> <secret-name> --template='{{index .data "ca.crt"}}' | base64 -D > ca.crt
```
This results in a file called `ca.crt` containing a PEM-encoded x509 CA certificate.
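Note that `-D` is the macOS/BSD spelling of the decode flag; GNU `base64` on Linux uses `-d`. A small self-contained decode example:

```shell
# Decode a base64 payload the way the kubectl pipeline above does.
# GNU coreutils uses -d for decode (-D on macOS/BSD).
echo 'aGVsbG8=' | base64 -d
# prints: hello
```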
## Query requests
For most client requests made by a driver, it does not matter if there is any kind
of load-balancer between your client application and the ArangoDB deployment.
{% hint 'info' %}
Note that even a simple `Service` of type `ClusterIP` already behaves as a load-balancer.
{% endhint %}
The exception to this is cursor related requests made to an ArangoDB `Cluster` deployment.
The coordinator that handles an initial query request (one that results in a `Cursor`)
saves some in-memory state if the result of the query
is too big to be transferred back in the response of the initial request.
Follow-up requests have to be made to fetch the remaining data.
These follow-up requests must be handled by the same coordinator to which the initial
request was made.
As soon as there is a load-balancer between your client application and the ArangoDB cluster,
it is uncertain which coordinator will actually handle the follow-up request.
To resolve this uncertainty, make sure to run your client application in the same
Kubernetes cluster and synchronize your endpoints before making the
initial query request.
This will result in the use (by the driver) of internal DNS names of all coordinators.
A follow-up request can then be sent to exactly the same coordinator.
If your client application is running outside the Kubernetes cluster this is much harder
to solve.
The easiest way to work around it, is by making sure that the query results are small
enough.
When that is not feasible, it is also possible to resolve this
when the internal DNS names of your Kubernetes cluster are exposed to your client application
and the resulting IP addresses are routable from your client application.
To expose internal DNS names of your Kubernetes cluster, you can use [CoreDNS](https://coredns.io).


@ -0,0 +1,11 @@
<!-- don't edit here, it's from https://github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Metrics
The ArangoDB Kubernetes Operator (`kube-arangodb`) exposes metrics of
its operations in a format that is compatible with [Prometheus](https://prometheus.io).
The metrics are exposed through HTTPS on port `8528` under path `/metrics`.
Look at [examples/metrics](https://github.com/arangodb/kube-arangodb/tree/master/examples/metrics)
for examples of `Services` and `ServiceMonitors` you can use to integrate
with Prometheus through the [Prometheus-Operator by CoreOS](https://github.com/coreos/prometheus-operator).
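Purely as an illustrative sketch (the `Service` name and the selector label below are assumptions, not taken from the linked examples; check your operator `Deployment` manifest for the labels actually set on its `Pods`), a `Service` exposing the metrics port could look like:

```yaml
# Hypothetical Service for scraping the operator's metrics endpoint.
# The selector label is an assumption; verify it against your deployment.
kind: Service
apiVersion: v1
metadata:
  name: arango-operator-metrics
spec:
  selector:
    app: arango-deployment-operator
  ports:
    - name: metrics
      protocol: TCP
      port: 8528
      targetPort: 8528
```

Remember that the endpoint is served over HTTPS, so configure your Prometheus scrape job accordingly.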

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# ArangoDB Kubernetes Operator
The ArangoDB Kubernetes Operator (`kube-arangodb`) is a set of operators
that you deploy in your Kubernetes cluster to:
- Manage deployments of the ArangoDB database
- Provide `PersistentVolumes` on local storage of your nodes for optimal storage performance.
- Configure ArangoDB Datacenter to Datacenter replication
Each of these uses involves a different custom resource.
- Use an [`ArangoDeployment` resource](./DeploymentResource.md) to
create an ArangoDB database deployment.
- Use an [`ArangoLocalStorage` resource](./StorageResource.md) to
provide local `PersistentVolumes` for optimal I/O performance.
- Use an [`ArangoDeploymentReplication` resource](./DeploymentReplicationResource.md) to
configure ArangoDB Datacenter to Datacenter replication.
Continue with [Using the ArangoDB Kubernetes Operator](./Usage.md)
to learn how to install the ArangoDB Kubernetes operator and create
your first deployment.

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Scaling
The ArangoDB Kubernetes Operator supports up and down scaling of
the number of dbservers & coordinators.
Currently it is not possible to change the number of
agents of a cluster.
To scale up or down, change the number of servers in the custom
resource.
E.g. change `spec.dbservers.count` from `3` to `4`.
Then apply the updated resource using:
```bash
kubectl apply -f yourCustomResourceFile.yaml
```
Inspect the status of the custom resource to monitor
the progress of the scaling operation.
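For instance, scaling the dbservers of a cluster from 3 to 4 amounts to a one-line change in the resource (a minimal sketch; the deployment name below is illustrative):

```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-cluster"
spec:
  mode: Cluster
  dbservers:
    count: 4   # increased from 3; the operator adds one dbserver
```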

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Services and load balancer
The ArangoDB Kubernetes Operator will create services that can be used to
reach the ArangoDB servers from inside the Kubernetes cluster.
By default, the ArangoDB Kubernetes Operator will also create an additional
service to reach the ArangoDB deployment from outside the Kubernetes cluster.
For exposing the ArangoDB deployment to the outside, there are 2 options:
- Using a `NodePort` service. This will expose the deployment on a specific port (in the default `NodePort` range of 30000-32767)
  on all nodes of the Kubernetes cluster.
- Using a `LoadBalancer` service. This will expose the deployment on a load-balancer
that is provisioned by the Kubernetes cluster.
The `LoadBalancer` option is the most convenient, but not all Kubernetes clusters
are able to provision a load-balancer. Therefore we offer a third (and default) option: `Auto`.
In this option, the ArangoDB Kubernetes Operator tries to create a `LoadBalancer`
service. It then waits for up to a minute for the Kubernetes cluster to provision
a load-balancer for it. If that has not happened after a minute, the service
is replaced by a service of type `NodePort`.
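As a sketch (the deployment name is illustrative, and this assumes the `externalAccess.type` field accepts the values described in this section), the external access behavior is selected in the resource like this:

```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-cluster"
spec:
  mode: Cluster
  externalAccess:
    type: Auto   # one of Auto (default), LoadBalancer, NodePort, None
```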
To inspect the created service, run:
```bash
kubectl get services <deployment-name>-ea
```
To use the ArangoDB servers from outside the Kubernetes cluster
you have to add another service as explained below.
## Services
If you do not want the ArangoDB Kubernetes Operator to create an external-access
service for you, set `spec.externalAccess.type` to `None`.
If you want to create external access services manually, follow the instructions below.
### Single server
For a single server deployment, the operator creates a single
`Service` named `<deployment-name>`. This service has a normal cluster IP
address.
### Full cluster
For a full cluster deployment, the operator creates two `Services`.
- `<deployment-name>-int` a headless `Service` intended to provide
DNS names for all pods created by the operator.
It selects all ArangoDB & ArangoSync servers in the cluster.
- `<deployment-name>` a normal `Service` that selects only the coordinators
of the cluster. This `Service` is configured with `ClientIP` session
affinity. This is needed for cursor requests, since they are bound to
a specific coordinator.
When the coordinators are asked to provide endpoints of the cluster
(e.g. when calling `client.SynchronizeEndpoints()` in the go driver)
the DNS names of the individual `Pods` will be returned
(`<pod>.<deployment-name>-int.<namespace>.svc`)
### Full cluster with DC2DC
For a full cluster with datacenter replication deployment,
the same `Services` are created as for a Full cluster, with the following
additions:
- `<deployment-name>-sync` a normal `Service` that selects only the syncmasters
of the cluster.
## Load balancer
If you want full control of the `Services` needed to access the ArangoDB deployment
from outside your Kubernetes cluster, set `spec.externalAccess.type` of the `ArangoDeployment` to `None`
and create a `Service` as specified below.
Create a `Service` of type `LoadBalancer` or `NodePort`, depending on your
Kubernetes deployment.
This service should select:
- `arango_deployment: <deployment-name>`
- `role: coordinator`
The following example yields a service of type `LoadBalancer` with a specific
load balancer IP address.
With this service, the ArangoDB cluster can now be reached on `https://1.2.3.4:8529`.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: arangodb-cluster-exposed
spec:
  selector:
    arango_deployment: arangodb-cluster
    role: coordinator
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
  ports:
    - protocol: TCP
      port: 8529
      targetPort: 8529
```
The following example yields a service of type `NodePort` with the ArangoDB
cluster exposed on port 30529 of all nodes of the Kubernetes cluster.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: arangodb-cluster-exposed
spec:
  selector:
    arango_deployment: arangodb-cluster
    role: coordinator
  type: NodePort
  ports:
    - protocol: TCP
      port: 8529
      targetPort: 8529
      nodePort: 30529
```

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Storage
An ArangoDB cluster relies heavily on fast persistent storage.
The ArangoDB Kubernetes Operator uses `PersistentVolumeClaims` to deliver
the storage to Pods that need them.
## Storage configuration
In the `ArangoDeployment` resource, one can specify the type of storage
used by groups of servers using the `spec.<group>.storageClassName`
setting.
This is an example of a `Cluster` deployment that stores its agent & dbserver
data on `PersistentVolumes` that use the `my-local-ssd` `StorageClass`:
```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "cluster-using-local-ssd"
spec:
  mode: Cluster
  agents:
    storageClassName: my-local-ssd
  dbservers:
    storageClassName: my-local-ssd
```
The amount of storage needed is configured using the
`spec.<group>.resources.requests.storage` setting.
Note that configuring storage is done per group of servers.
It is not possible to configure storage per individual
server.
This is an example of a `Cluster` deployment that requests volumes of 80GB
for every dbserver, resulting in a total storage capacity of 240GB (with 3 dbservers).
```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "cluster-using-local-ssd"
spec:
  mode: Cluster
  dbservers:
    resources:
      requests:
        storage: 80Gi
```
## Local storage
For optimal performance, ArangoDB should be configured with locally attached
SSD storage.
The easiest way to accomplish this is to deploy an
[`ArangoLocalStorage` resource](./StorageResource.md).
The ArangoDB Storage Operator will use it to provide `PersistentVolumes` for you.
This is an example of an `ArangoLocalStorage` resource that will result in
`PersistentVolumes` created on any node of the Kubernetes cluster
under the directory `/mnt/big-ssd-disk`.
```yaml
apiVersion: "storage.arangodb.com/v1alpha"
kind: "ArangoLocalStorage"
metadata:
  name: "example-arangodb-storage"
spec:
  storageClass:
    name: my-local-ssd
  localPath:
    - /mnt/big-ssd-disk
```
Note that using local storage requires `VolumeScheduling` to be enabled in your
Kubernetes cluster. On Kubernetes 1.10 this is enabled by default; on version
1.9 you have to enable it with a `--feature-gates` setting.
### Manually creating `PersistentVolumes`
The alternative is to create `PersistentVolumes` manually, for all servers that
need persistent storage (single, agents & dbservers).
E.g. for a `Cluster` with 3 agents and 5 dbservers, you must create 8 volumes.
Note that each volume must have a capacity that is equal to or higher than the
capacity needed for each server.
To select the correct node, add a required node-affinity annotation as shown
in the example below.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume-agent-1
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
        "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            { "matchExpressions": [
                { "key": "kubernetes.io/hostname",
                  "operator": "In",
                  "values": ["node-1"]
                }
              ]
            }
          ]
        }
      }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd1
```
For Kubernetes 1.9 and up, you should create a `StorageClass` which is configured
to bind volumes on their first use, as shown in the example below.
This ensures that the Kubernetes scheduler takes all constraints on a `Pod`
into consideration before binding the volume to a claim.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# ArangoLocalStorage Custom Resource
The ArangoDB Storage Operator creates and maintains ArangoDB
storage resources in a Kubernetes cluster, given a storage specification.
This storage specification is a `CustomResource` following
a `CustomResourceDefinition` created by the operator.
Example minimal storage definition:
```yaml
apiVersion: "storage.arangodb.com/v1alpha"
kind: "ArangoLocalStorage"
metadata:
  name: "example-arangodb-storage"
spec:
  storageClass:
    name: my-local-ssd
  localPath:
    - /mnt/big-ssd-disk
```
This definition results in:
- a `StorageClass` called `my-local-ssd`
- the dynamic provisioning of `PersistentVolumes` with
  a local volume on a node, where each local volume is created
  in a sub-directory of `/mnt/big-ssd-disk`.
- the dynamic cleanup of `PersistentVolumes` (created by
  the operator) after they are released.
The provisioned volumes will have a capacity that matches
the requested capacity of volume claims.
## Specification reference
Below you'll find all settings of the `ArangoLocalStorage` custom resource.
### `spec.storageClass.name: string`
This setting specifies the name of the storage class that
the created `PersistentVolumes` will use.
If empty, this field defaults to the name of the `ArangoLocalStorage`
object.
If a `StorageClass` with given name does not yet exist, it
will be created.
### `spec.storageClass.isDefault: bool`
This setting specifies whether the created `StorageClass` will
be marked as the default storage class (default: `false`).
### `spec.localPath: stringList`
This setting specifies one or more local directories
(on the nodes) in which persistent volumes are created.
### `spec.nodeSelector: nodeSelector`
This setting specifies which nodes the operator will
provision persistent volumes on.
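Putting these settings together, here is a sketch of a resource that only provisions volumes on SSD-equipped nodes (the `disktype: ssd` label is an assumed example; use whatever labels your nodes actually carry):

```yaml
apiVersion: "storage.arangodb.com/v1alpha"
kind: "ArangoLocalStorage"
metadata:
  name: "example-arangodb-storage"
spec:
  storageClass:
    name: my-local-ssd
    isDefault: false
  localPath:
    - /mnt/big-ssd-disk
  # Assumed node label: restrict provisioning to nodes with local SSDs.
  nodeSelector:
    disktype: ssd
```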

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Secure connections (TLS)
The ArangoDB Kubernetes Operator will by default create ArangoDB deployments
that use secure TLS connections.
It uses a single CA certificate (stored in a Kubernetes secret) and
one certificate per ArangoDB server (stored in a Kubernetes secret per server).
To disable TLS, set `spec.tls.caSecretName` to `None`.
## Install CA certificate
If the CA certificate is self-signed, it will not be trusted by browsers,
until you install it in the local operating system or browser.
This process differs per operating system.
To do so, you first have to fetch the CA certificate from its Kubernetes
secret.
```bash
kubectl get secret <deploy-name>-ca --template='{{index .data "ca.crt"}}' | base64 -D > ca.crt
```
Note that `base64 -D` is the macOS spelling of the decode flag; on Linux use `base64 -d` instead.
### Windows
To install a CA certificate in Windows, follow the
[procedure described here](http://wiki.cacert.org/HowTo/InstallCAcertRoots).
### MacOS
To install a CA certificate in MacOS, run:
```bash
sudo /usr/bin/security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
```
To uninstall a CA certificate in MacOS, run:
```bash
sudo /usr/bin/security remove-trusted-cert -d ca.crt
```
### Linux
To install a CA certificate in Linux, on Ubuntu, run:
```bash
sudo cp ca.crt /usr/local/share/ca-certificates/<some-name>.crt
sudo update-ca-certificates
```
## See also
- [Authentication](./Authentication.md)

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Troubleshooting
While Kubernetes and the ArangoDB Kubernetes operator will automatically
resolve a lot of issues, there are always cases where human attention
is needed.
This chapter gives you tips & tricks to help you troubleshoot deployments.
## Where to look
In Kubernetes all resources can be inspected using `kubectl` using either
the `get` or `describe` command.
To get all details of the resource (both specification & status),
run the following command:
```bash
kubectl get <resource-type> <resource-name> -n <namespace> -o yaml
```
For example, to get the entire specification and status
of an `ArangoDeployment` resource named `my-arango` in the `default` namespace,
run:
```bash
kubectl get ArangoDeployment my-arango -n default -o yaml
# or shorter
kubectl get arango my-arango -o yaml
```
Several types of resources (including all ArangoDB custom resources) support
events. These events show what happened to the resource over time.
To show the events (and most important resource data) of a resource,
run the following command:
```bash
kubectl describe <resource-type> <resource-name> -n <namespace>
```
## Getting logs
Another invaluable source of information is the log of containers being run
in Kubernetes.
These logs are accessible through the `Pods` that group these containers.
To fetch the logs of the default container running in a `Pod`, run:
```bash
kubectl logs <pod-name> -n <namespace>
# or with follow option to keep inspecting logs while they are written
kubectl logs <pod-name> -n <namespace> -f
```
To inspect the logs of a specific container in a `Pod`, add `-c <container-name>`.
You can find the names of the containers in the `Pod`, using `kubectl describe pod ...`.
{% hint 'info' %}
Note that the ArangoDB operators are being deployed themselves as a Kubernetes `Deployment`
with 2 replicas. This means that you will have to fetch the logs of 2 `Pods` running
those replicas.
{% endhint %}
## What if
### The `Pods` of a deployment stay in `Pending` state
There are two common causes for this.
1) The `Pods` cannot be scheduled because there are not enough nodes available.
This is usually only the case with a `spec.environment` setting that has a value of `Production`.
Solution:
Add more nodes.
1) There are no `PersistentVolumes` available to be bound to the `PersistentVolumeClaims`
created by the operator.
Solution:
Use `kubectl get persistentvolumes` to inspect the available `PersistentVolumes`
and if needed, use the [`ArangoLocalStorage` operator](./StorageResource.md) to provision `PersistentVolumes`.
### When restarting a `Node`, the `Pods` scheduled on that node remain in `Terminating` state
When a `Node` no longer makes regular calls to the Kubernetes API server, it is
marked as not available. Depending on specific settings in your `Pods`, Kubernetes
will at some point decide to terminate the `Pod`. As long as the `Node` is not
completely removed from the Kubernetes API server, Kubernetes will try to use
the `Node` itself to terminate the `Pod`.
The `ArangoDeployment` operator recognizes this condition and will try to replace those
`Pods` with `Pods` on different nodes. The exact behavior differs per type of server.
### What happens when a `Node` with local data is broken
When a `Node` with `PersistentVolumes` hosted on that `Node` is broken and
cannot be repaired, the data in those `PersistentVolumes` is lost.
If an `ArangoDeployment` of type `Single` was using one of those `PersistentVolumes`,
the database is lost and must be restored from a backup.
If an `ArangoDeployment` of type `ActiveFailover` or `Cluster` was using one of
those `PersistentVolumes`, it depends on the type of server that was using the volume.
- If an `Agent` was using the volume, it can be repaired as long as 2 other agents are still healthy.
- If a `DBServer` was using the volume, and the replication factor of all database
  collections is 2 or higher, and the remaining dbservers are still healthy,
  the cluster will duplicate the remaining replicas to
  bring the number of replicas back to the original number.
- If a `DBServer` was using the volume, and the replication factor of a database
collection is 1 and happens to be stored on that dbserver, the data is lost.
- If a single server of an `ActiveFailover` deployment was using the volume, and the
other single server is still healthy, the other single server will become leader.
After replacing the failed single server, the new follower will synchronize with
the leader.

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Upgrading
The ArangoDB Kubernetes Operator supports upgrading an ArangoDB deployment from
one version to the next.
## Upgrade an ArangoDB deployment
To upgrade a cluster, change the version by changing
the `spec.image` setting and then apply the updated
custom resource using:
```bash
kubectl apply -f yourCustomResourceFile.yaml
```
The ArangoDB operator will perform a sequential upgrade
of all servers in your deployment. Only one server is upgraded
at a time.
For patch level upgrades (e.g. 3.3.9 to 3.3.10) each server
is stopped and restarted with the new version.
For minor level upgrades (e.g. 3.3.9 to 3.4.0) each server
is stopped, then the new version is started with `--database.auto-upgrade`,
and once that has finished, the new version is started with the normal arguments.
The process for major level upgrades depends on the specific version.
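As a sketch, the version change for the patch-level example above amounts to a single line in the resource (the deployment name is illustrative):

```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-cluster"
spec:
  mode: Cluster
  image: arangodb/arangodb:3.3.10   # previously arangodb/arangodb:3.3.9
```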
## Upgrade the operator itself
To update the ArangoDB Kubernetes Operator itself to a new version,
update the image version of the deployment resource
and apply it using:
```bash
kubectl apply -f examples/yourUpdatedDeployment.yaml
```
## See also
- [Scaling](./Scaling.md)

<!-- don't edit here, its from https://@github.com/arangodb/kube-arangodb.git / docs/Manual/ -->
# Using the ArangoDB Kubernetes Operator
## Installation
The ArangoDB Kubernetes Operator needs to be installed in your Kubernetes
cluster first.
To do so, run (replace `<version>` with the version of the operator that you want to install):
```bash
export URLPREFIX=https://raw.githubusercontent.com/arangodb/kube-arangodb/<version>/manifests
kubectl apply -f $URLPREFIX/crd.yaml
kubectl apply -f $URLPREFIX/arango-deployment.yaml
```
To use `ArangoLocalStorage` resources, also run:
```bash
kubectl apply -f $URLPREFIX/arango-storage.yaml
```
To use `ArangoDeploymentReplication` resources, also run:
```bash
kubectl apply -f $URLPREFIX/arango-deployment-replication.yaml
```
You can find the latest release of the ArangoDB Kubernetes Operator
[in the kube-arangodb repository](https://github.com/arangodb/kube-arangodb/releases/latest).
## ArangoDB deployment creation
Once the operator is running, you can create your ArangoDB database deployment
by creating an `ArangoDeployment` custom resource and deploying it into your
Kubernetes cluster.
For example (all examples can be found [in the kube-arangodb repository](https://github.com/arangodb/kube-arangodb/tree/master/examples)):
```bash
kubectl apply -f examples/simple-cluster.yaml
```
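For reference, a minimal cluster resource of the kind used in that example looks roughly like this (a sketch; the actual file in the repository may differ, and the name is illustrative):

```yaml
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-simple-cluster"
spec:
  mode: Cluster
```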
## Deployment removal
To remove an existing ArangoDB deployment, delete the custom
resource. The operator will then delete all created resources.
For example:
```bash
kubectl delete -f examples/simple-cluster.yaml
```
**Note that this will also delete all data in your ArangoDB deployment!**
If you want to keep your data, make sure to create a backup before removing the deployment.
## Operator removal
To remove the entire ArangoDB Kubernetes Operator, remove all
clusters first and then remove the operator by running:
```bash
kubectl delete deployment arango-deployment-operator
# If `ArangoLocalStorage` operator is installed
kubectl delete deployment -n kube-system arango-storage-operator
# If `ArangoDeploymentReplication` operator is installed
kubectl delete deployment arango-deployment-replication-operator
```
## See also
- [Driver configuration](./DriverConfiguration.md)
- [Scaling](./Scaling.md)
- [Upgrading](./Upgrading.md)

* [Processes](Deployment/Distributed.md)
* [Docker](Deployment/Docker.md)
* [Multiple Datacenters](Deployment/DC2DC.md)
* [Kubernetes](Deployment/Kubernetes/README.md)
  * [Using the Operator](Deployment/Kubernetes/Usage.md)
  * [Deployment Resource Reference](Deployment/Kubernetes/DeploymentResource.md)
  * [Driver Configuration](Deployment/Kubernetes/DriverConfiguration.md)
  * [Authentication](Deployment/Kubernetes/Authentication.md)
  * [Scaling](Deployment/Kubernetes/Scaling.md)
  * [Upgrading](Deployment/Kubernetes/Upgrading.md)
  * [ArangoDB Configuration & Secrets](Deployment/Kubernetes/ConfigAndSecrets.md)
  * [Metrics](Deployment/Kubernetes/Metrics.md)
  * [Services & Load balancer](Deployment/Kubernetes/ServicesAndLoadBalancer.md)
  * [Deployment Replication Resource Reference](Deployment/Kubernetes/DeploymentReplicationResource.md)
  * [Storage](Deployment/Kubernetes/Storage.md)
  * [Storage Resource](Deployment/Kubernetes/StorageResource.md)
  * [TLS](Deployment/Kubernetes/Tls.md)
  * [Troubleshooting](Deployment/Kubernetes/Troubleshooting.md)
#
* [Administration](Administration/README.md)
  * [Web Interface](Administration/WebInterface/README.md)