ArangoDB v3.13 is under development and not yet released. This documentation is not final and may be incomplete.
ArangoDB Server Options
The startup options of the arangod executable
Usage: arangod [<options>]
To list the commonly used startup options with a description of each option, run the server executable in a command line with the --help (or -h) option:
arangod --help
To list all available startup options and their descriptions, use:
arangod --help-all
You can specify the database directory for the server as a positional (unnamed) parameter:
arangod /path/to/datadir
You can also be explicit by using a named parameter:
arangod --database.directory /path/to/datadir
All other startup options need to be passed as named parameters, using two hyphens (--), followed by the option name, an equals sign (=) or a space, and the option value. The value needs to be wrapped in double quote marks (") if it contains whitespace characters. Extra whitespace around = is allowed:
arangod --database.directory = "/path with spaces/to/datadir"
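Startup options can equally be placed in a configuration file. A minimal sketch (the file path and values here are placeholders, not defaults):

```ini
# arangod.conf -- option names drop the leading "--";
# the part before the dot becomes the section name
[database]
directory = /var/lib/arangodb3
```

Such a file is passed to the server via the --configuration option, e.g. arangod --configuration /path/to/arangod.conf.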
See Configuration if you want to translate startup options to configuration file settings and to learn more about startup options in general.
See Fetch Current Configuration Options if you want to query the arangod server for the current settings at runtime.
General
--check-configuration
Type: boolean
Check the configuration and exit.
This is a command, no value needs to be specified. The process terminates after executing the command.
--config
Type: string
The configuration file or “none”.
--configuration
Type: string
The configuration file or “none”.
--console
Type: boolean
Start the server with a JavaScript emergency console.
This option can be specified without a value to enable it.
In this exclusive emergency mode, all networking and HTTP interfaces of the server are disabled. No requests can be made to the server in this mode, and the only way to work with the server in this mode is by using the emergency console.
The server cannot be started in this mode if it is already running in this or another mode.
--daemon
Type: boolean
Start the server as a daemon (background process). Requires --pid-file to be set.
This option can be specified without a value to enable it.
--default-language
Deprecated in: v3.10.0
Type: string
An ISO-639 language code. You can only set this option once, when initializing the database.
The default language is used for sorting and comparing strings. The language value is a two-letter language code (ISO-639), or a two-letter language code followed by a two-letter country code (ISO-3166). For example: de, en, en_US, en_UK.
The default is the system locale of the platform.
--default-language-check
Introduced in: v3.8.0
Type: boolean
Check if --icu-language / --default-language matches the stored language.
This option can be specified without a value to enable it.
Default: true
--define
Type: string…
Define a value for a @key@ entry in the configuration file, using the syntax "key=value".
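As a sketch of how this substitution works (the key name and paths here are made up for illustration): a configuration file can contain a placeholder key that --define fills in at startup.

```ini
# arangod.conf snippet -- @DATADIR@ is a user-chosen placeholder key
[database]
directory = @DATADIR@
```

Starting the server with arangod --configuration arangod.conf --define "DATADIR=/var/lib/arangodb3" expands @DATADIR@ before the option is applied.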
--dump-dependencies
Type: boolean
Dump the dependency graph of the feature phases (internal) and exit.
This is a command, no value needs to be specified. The process terminates after executing the command.
--dump-options
Type: boolean
Dump all available startup options in JSON format and exit.
This is a command, no value needs to be specified. The process terminates after executing the command.
--experimental-vector-index
Introduced in: v3.12.4
Type: boolean
Turn on the experimental vector index feature.
This option can be specified without a value to enable it.
If this feature is enabled, downgrading from this version is no longer possible.
--fortune
Type: boolean
Show a fortune cookie on startup.
This option can be specified without a value to enable it.
--gid
Type: string
Switch to this group ID after reading configuration files.
The name (identity) of the group to run the server as.
If you don’t specify this option, the server does not attempt to change its GID, so that the GID the server runs as is the primary group of the user who started the server.
If you specify this option, the server changes its GID after opening ports and reading configuration files, but before accepting connections or opening other files (such as recovery files).
--honor-nsswitch
Type: boolean
Allow hostname lookup configuration via /etc/nsswitch.conf if on Linux/glibc.
This option can be specified without a value to enable it.
--hund
Type: boolean
Make ArangoDB bark on startup.
This option can be specified without a value to enable it.
--icu-language
Introduced in: v3.9.1
Type: string
An ICU locale ID to set a language and optionally additional properties that affect string comparisons and sorting. You can only set this option once, when initializing the database.
With this option, you can get the sorting and comparing order exactly as it is defined in the ICU standard. The language value can be a two-letter language code (ISO-639), a two-letter language code followed by a two-letter country code (ISO-3166), or any other valid ICU locale definition. For example: de, en, en_US, en_UK, de_AT@collation=phonebook.
For the Swedish language (sv), for instance, the correct ICU-based sorting order for letters is 'a','A','b','B','z','Z','å','Ä','ö','Ö'. To get this order, use --icu-language sv. If you use --default-language sv instead, the sorting order is "A", "a", "B", "b", "Z", "z", "å", "Ä", "Ö", "ö".
Note: You can use only one of the language options, either --icu-language or --default-language. Setting both of them results in an error.
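For example, to initialize a new deployment with Swedish ICU collation (the data directory path is a placeholder):

```
# valid: exactly one language option, set once when initializing the database
arangod --icu-language sv /var/lib/arangodb3-new

# invalid: combining both language options results in an error
# arangod --icu-language sv --default-language sv /var/lib/arangodb3-new
```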
--log
Deprecated in: v3.5.0
Type: string…
Set the topic-specific log level, using --log level for the general topic or --log topic=level for the specified topic (can be specified multiple times). Available log levels: fatal, error, warning, info, debug, trace.
Default: info
--pid-file
Type: string
The name of the process ID file to use if the server runs as a daemon.
--supervisor
Type: boolean
Start the server in supervisor mode. Requires --pid-file to be set.
This option can be specified without a value to enable it.
Runs an arangod process as supervisor with another arangod process as child, which acts as the server. In the event that the server unexpectedly terminates due to an internal error, the supervisor automatically restarts the server. Enabling this option implies that the server runs as a daemon.
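Putting the related options together, a daemonized supervisor start might look like this (all paths are placeholders):

```
arangod --supervisor \
        --pid-file /var/run/arangodb3/arangod.pid \
        --working-directory /var/tmp \
        --database.directory /var/lib/arangodb3
```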
--uid
Type: string
Switch to this user ID after reading the configuration files.
The name (identity) of the user to run the server as.
If you don’t specify this option, the server does not attempt to change its UID, so that the UID used by the server is the same as the UID of the user who started the server.
If you specify this option, the server changes its UID after opening ports and reading configuration files, but before accepting connections or opening other files (such as recovery files). This is useful if the server must be started with raised privileges (in certain environments) but security considerations require that these privileges are dropped once the server has started work.
Note: You cannot use this option to bypass operating system security. In general, this option (and the related --gid) can lower privileges but not raise them.
--use-splice-syscall
Introduced in: v3.9.4
Type: boolean
Use the splice() syscall for file copying (may not be supported on all filesystems).
This option can be specified without a value to enable it.
Default: true
While the syscall is generally available since Linux 2.6.x, the underlying filesystem must also support the splice operation. This is not true for some encrypted filesystems (e.g. ecryptfs), on which splice() calls can fail.
You can set the --use-splice-syscall startup option to false to use a less efficient, but more portable file copying method instead, which should work on all filesystems.
--version
Type: boolean
Print the version and other related information, then exit.
This is a command, no value needs to be specified. The process terminates after executing the command.
--version-json
Introduced in: v3.9.0
Type: boolean
Print the version and other related information in JSON format, then exit.
This is a command, no value needs to be specified. The process terminates after executing the command.
--working-directory
Type: string
The working directory in daemon mode.
Default: /var/tmp
agency
--agency.activate
Type: boolean
Activate the Agency.
This option can be specified without a value to enable it.
Effective on Agents only.
--agency.compaction-keep-size
Type: uint64
Keep as many Agency log entries before compaction point.
Default: 50000
Effective on Agents only.
--agency.compaction-step-size
Type: uint64
The step size between state machine compactions.
Default: 1000
Effective on Agents only.
--agency.disaster-recovery-id
Type: string
Specify the ID for this agent. WARNING: This is a dangerous option, for disaster recovery only!
Effective on Agents only.
--agency.election-timeout-max
Type: double
The maximum timeout before an Agent calls for a new election (in seconds).
Default: 5
Effective on Agents only.
--agency.election-timeout-min
Type: double
The minimum timeout before an Agent calls for a new election (in seconds).
Default: 1
Effective on Agents only.
--agency.endpoint
Type: string…
The Agency endpoints.
Effective on Agents only.
--agency.max-append-size
Type: uint64
The maximum size of an appendEntries document (number of log entries).
Default: 250
Effective on Agents only.
--agency.my-address
Type: string
Which address to advertise to the outside.
Effective on Agents only.
--agency.pool-size
Deprecated in: v3.11.0
Type: uint64
The number of Agents in the pool.
Default: 1
Effective on Agents only.
--agency.size
Type: uint64
The number of Agents.
Default: 1
Effective on Agents only.
--agency.supervision
Type: boolean
Perform ArangoDB cluster supervision.
This option can be specified without a value to enable it.
Effective on Agents only.
--agency.supervision-delay-add-follower
Introduced in: v3.9.6, v3.10.2
Type: uint64
The delay in supervision, before an AddFollower job is executed (in seconds).
Effective on Agents only.
--agency.supervision-delay-failed-follower
Introduced in: v3.9.6, v3.10.2
Type: uint64
The delay in supervision, before a FailedFollower job is executed (in seconds).
Effective on Agents only.
--agency.supervision-expired-servers-grace-period
Introduced in: v3.12.4
Type: double
The supervision time after which a server is removed from the agency if it no longer sends heartbeats (in seconds).
Default: 3600
Effective on Agents only.
--agency.supervision-failed-leader-adds-follower
Introduced in: v3.9.7, v3.10.2
Type: boolean
Flag indicating whether or not the FailedLeader job adds a new follower.
This option can be specified without a value to enable it.
Default: true
Effective on Agents only.
--agency.supervision-frequency
Type: double
The ArangoDB cluster supervision frequency (in seconds).
Default: 1
Effective on Agents only.
--agency.supervision-grace-period
Type: double
The supervision time after which a server is considered to have failed (in seconds).
Default: 10
Effective on Agents only.
A value of 10 seconds is recommended for regular cluster deployments.
--agency.supervision-ok-threshold
Type: double
The supervision time after which a server is considered to be bad (in seconds).
Default: 5
Effective on Agents only.
--agency.wait-for-sync
Type: boolean
Wait for hard disk syncs on every persistence call (required in production).
This option can be specified without a value to enable it.
Default: true
Effective on Agents only.
arangosearch
--arangosearch.columns-cache-limit
Introduced in: v3.9.5
Type: uint64
The limit (in bytes) for ArangoSearch columns cache (0 = no caching).
--arangosearch.columns-cache-only-leader
Introduced in: v3.10.6
Type: boolean
Cache ArangoSearch columns only for leader shards.
This option can be specified without a value to enable it.
--arangosearch.commit-threads
Type: uint32
The upper limit to the allowed number of commit threads (0 = auto-detect).
The option value must fall in the range [1..4 * NumberOfCores]. Set it to 0 to automatically choose a sensible number based on the number of cores in the system.
--arangosearch.commit-threads-idle
Deprecated in: v3.11.6, v3.12.0
Type: uint32
The upper limit to the allowed number of idle threads to use for commit tasks (0 = auto-detect).
The option value must fall in the range [1..arangosearch.commit-threads]. Set it to 0 to automatically choose a sensible number based on the number of cores in the system.
--arangosearch.consolidation-threads
Type: uint32
The upper limit to the allowed number of consolidation threads (0 = auto-detect).
The option value must fall in the range [1..arangosearch.consolidation-threads]. Set it to 0 to automatically choose a sensible number based on the number of cores in the system.
--arangosearch.consolidation-threads-idle
Deprecated in: v3.11.6, v3.12.0
Type: uint32
The upper limit to the allowed number of idle threads to use for consolidation tasks (0 = auto-detect).
--arangosearch.default-parallelism
Introduced in: v3.11.6, v3.12.0
Type: uint32
The default parallelism for ArangoSearch queries.
Default: 1
--arangosearch.execution-threads-limit
Introduced in: v3.11.6, v3.12.0
Type: uint32
The maximum number of threads that can be used to process ArangoSearch indexes during a SEARCH operation of a query.
--arangosearch.fail-queries-on-out-of-sync
Introduced in: v3.9.4
Type: boolean
Whether retrieval queries on out-of-sync View links and inverted indexes should fail.
This option can be specified without a value to enable it.
If set to true, any data retrieval queries on out-of-sync links/indexes fail with the error ‘collection/view is out of sync’ (error code 1481).
If set to false, queries on out-of-sync links/indexes are answered normally, but the returned data may be incomplete.
--arangosearch.skip-recovery
Introduced in: v3.9.4
Type: string…
Skip the data recovery for the specified View link or inverted index on startup. The value for this option needs to have the format ‘
--arangosearch.threads
Deprecated in: v3.7.5
Type: uint32
The exact number of threads to use for asynchronous tasks (0 = auto-detect).
From version 3.7.5 on, you should set the commit and consolidation thread counts separately via the following options instead:
--arangosearch.commit-threads
--arangosearch.consolidation-threads
If either --arangosearch.commit-threads or --arangosearch.consolidation-threads is set, then --arangosearch.threads and --arangosearch.threads-limit are ignored. If only the legacy options are set, then the commit and consolidation thread counts are calculated as follows:
- Maximum: the smaller value out of --arangosearch.threads and --arangosearch.threads-limit, divided by 2, but at least 1.
- Minimum: the maximum divided by 2, but at least 1.
--arangosearch.threads-limit
Deprecated in: v3.7.5
Type: uint32
The upper limit to the auto-detected number of threads to use for asynchronous tasks (0 = use default).
From version 3.7.5 on, you should set the commit and consolidation thread counts separately via the following options instead:
--arangosearch.commit-threads
--arangosearch.consolidation-threads
If either --arangosearch.commit-threads or --arangosearch.consolidation-threads is set, then --arangosearch.threads and --arangosearch.threads-limit are ignored. If only the legacy options are set, then the commit and consolidation thread counts are calculated as follows:
- Maximum: the smaller value out of --arangosearch.threads and --arangosearch.threads-limit, divided by 2, but at least 1.
- Minimum: the maximum divided by 2, but at least 1.
async-registry
--async-registry.cleanup-timeout
Type: uint64
The timeout (in seconds) between async-registry garbage collection sweeps.
Default: 1
Each thread that is involved in the async-registry needs to garbage collect its finished async function calls regularly. This option controls how often this happens (in seconds). This can be performance-relevant because each involved thread acquires a lock.
audit
--audit.hostname
Type: string
The server name to be used in audit log messages. By default, the system hostname is used.
--audit.max-entry-length
Introduced in: v3.8.0, v3.7.9
Type: uint32
The maximum length of a log entry (in bytes).
Default: 134217728
You can use this option to limit the maximum line length for individual audit log messages that are written into audit logs by arangod. Any audit log messages longer than the specified value are truncated and the suffix ... is appended to them.
The default value is 128 MB, which is very high and should effectively provide downward compatibility with previous arangod versions, which did not restrict the maximum size of audit log messages.
--audit.output
Type: string…
Specifies the target(s) of the audit log.
To write to a file, use file://<filename> with a relative or absolute file path.
To write to a syslog server, use syslog://<facility> or syslog://<facility>/<application-name>.
You can specify this option multiple times to configure the output for multiple targets.
Any occurrence of $PID inside a filename is replaced with the actual process ID at runtime. This allows you to log to process-specific files, e.g. --audit.output file:///var/log/arangod.log.$PID. Note that your terminal may require the dollar sign to be escaped.
--audit.queue
Introduced in: v3.8.0
Type: boolean
Queue audit log messages before writing them out (can reduce latency).
This option can be specified without a value to enable it.
With this option, you can control whether audit log messages are submitted to a queue and written to disk in batches, or written to disk directly without being queued.
Queueing audit log entries may be beneficial for latency, but can lead to unqueued messages being lost in case of a crash or power loss. Setting this option to false mimics the behavior of version 3.7 and before, where audit log messages were not queued but written in a blocking fashion.
--audit.write-log-level
Introduced in: v3.9.0
Type: boolean
Add the log level of the audit event (TRACE, INFO, DEBUG, WARNING, ERROR) to audit log messages.
This option can be specified without a value to enable it.
With this option, you can control whether the log level is shown in the audit log messages. If you omit it or set it to false, the log level is not shown in messages:
2016-10-03 15:47:26 | server1 | audit-authentication | n/a | database1 | 127.0.0.1:61528 | http basic | credentials wrong | /_api/version
If you set the option to true, the log level is included in the messages:
2016-10-03 15:47:26 | INFO | server1 | audit-authentication | n/a | database1 | 127.0.0.1:61528 | http basic | credentials wrong | /_api/version
backup
--backup.api-enabled
Type: string
Whether the Hot Backup API is enabled (true) or not (false), or only enabled for superuser JWT (jwt).
Default: true
--backup.files-per-batch
Introduced in: v3.8.8, v3.9.4
Type: uint64
Define how many files rclone should process in one call when uploading or downloading Hot Backups.
Default: 100
Effective on DB-Servers and Single Servers only.
--backup.local-path-prefix
Type: string
Restrict any backup target to the given path prefix.
Default: /
--backup.number-parallel-remote-copies
Introduced in: v3.11.4
Type: uint32
The number of remote-to-remote copy operations to run in parallel when uploading/downloading Hot Backups (0 = use system-specific default of 100)
Effective on DB-Servers and Single Servers only.
This is to speed up the incremental upload process. Note that each instance needs approximately 7 to 10 MB of RAM in addition to arangod. The default should normally be OK for most situations.
cache
--cache.acceleration-factor-for-edge-compression
Introduced in: v3.11.2
Type: uint32
The acceleration factor for the LZ4 compression of in-memory edge cache entries.
Default: 1
Effective on DB-Servers and Single Servers only.
This value controls the LZ4-internal acceleration factor for the LZ4 compression. Higher values typically yield less compression in exchange for faster compression and decompression speeds. An increase of 1 commonly leads to a compression speed increase of 3%, and could slightly increase decompression speed.
--cache.high-water-multiplier
Introduced in: v3.12.0
Type: double
The multiplier to be used for calculating the in-memory cache’s effective memory usage limit.
Default: 0.56
Effective on DB-Servers and Single Servers only.
This value controls the cache’s effective memory usage limit. The user-defined memory limit (i.e. --cache.size) is multiplied with this value to create the effective memory limit, from which on the cache tries to free up memory by evicting the oldest entries.
--cache.ideal-lower-fill-ratio
Introduced in: v3.12.0
Type: double
The lower bound fill ratio value for a cache table.
Default: 0.08
Effective on DB-Servers and Single Servers only.
Cache tables with a fill ratio lower than this value will be shrunk by the cache rebalancer.
--cache.ideal-upper-fill-ratio
Introduced in: v3.12.0
Type: double
The upper bound fill ratio value for a cache table.
Default: 0.33
Effective on DB-Servers and Single Servers only.
Cache tables with a fill ratio higher than this value will be inflated in size by the cache rebalancer.
--cache.max-cache-value-size
Introduced in: v3.12.2
Type: uint64
The maximum payload size of an individual cache value (excluding the size of the key).
Default: 4194304
Effective on DB-Servers and Single Servers only.
--cache.max-spare-memory-usage
Introduced in: v3.12.0
Type: uint64
The maximum memory usage for spare tables in the in-memory cache.
Default: 67108864
Effective on DB-Servers and Single Servers only.
--cache.min-value-size-for-edge-compression
Introduced in: v3.11.2
Type: uint64
The size threshold (in bytes) from which on payloads in the edge index cache transparently get LZ4-compressed.
Default: 1073741824
Effective on DB-Servers and Single Servers only.
By transparently compressing values in the in-memory edge index cache, more data can be held in memory than without compression. Storing compressed values can increase CPU usage for the on-the-fly compression and decompression. In case compression is undesired, you can set this option to a very high value, which effectively disables it. To use compression, set the option to a value that is lower than medium-to-large average payload sizes. It is normally not that useful to compress values that are smaller than 100 bytes.
--cache.rebalancing-interval
Type: uint64
The time between cache rebalancing attempts (in microseconds). The minimum value is 500000 (0.5 seconds).
Default: 2000000
The server uses a cache system which pools memory across many different cache tables. In order to provide intelligent internal memory management, the system periodically reclaims memory from caches which are used less often and reallocates it to caches which get more activity.
--cache.size
Type: uint64
The global size limit for all caches (in bytes).
Default: dynamic (e.g. 7735568384)
The global caching system, all caches, and all the data contained therein are constrained to this limit.
If there is less than 4 GiB of RAM in the system, the default value is 256 MiB. If there is more, the default is (system RAM size - 2 GiB) * 0.25.
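The default calculation above can be sketched as a small function (a plain restatement of the rule, not ArangoDB code):

```python
GIB = 1024 ** 3
MIB = 1024 ** 2

def default_cache_size(system_ram_bytes: int) -> int:
    """Default for --cache.size as described above:
    256 MiB if the system has less than 4 GiB of RAM,
    otherwise (system RAM size - 2 GiB) * 0.25."""
    if system_ram_bytes < 4 * GIB:
        return 256 * MIB
    return int((system_ram_bytes - 2 * GIB) * 0.25)

# e.g. a machine with 32 GiB of RAM:
print(default_cache_size(32 * GIB))  # 8053063680 bytes (7.5 GiB)
```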
cluster
--cluster.agency-endpoint
Type: string…
Agency endpoint(s) to connect to.
Effective on Coordinators and DB-Servers only.
You can specify this option multiple times to let the server use a cluster of Agency servers. Endpoints have the following pattern:
tcp://ipv4-address:port - TCP/IP endpoint, using IPv4
tcp://[ipv6-address]:port - TCP/IP endpoint, using IPv6
ssl://ipv4-address:port - TCP/IP endpoint, using IPv4, SSL encryption
ssl://[ipv6-address]:port - TCP/IP endpoint, using IPv6, SSL encryption
You must specify at least one endpoint or ArangoDB refuses to start. It is recommended to specify at least two endpoints, so that ArangoDB has an alternative endpoint if one of them becomes unavailable:
--cluster.agency-endpoint tcp://192.168.1.1:4001 --cluster.agency-endpoint tcp://192.168.1.2:4002 ...
--cluster.api-jwt-policy
Introduced in: v3.8.0
Type: string
Controls the access permissions required for accessing /_admin/cluster REST APIs (jwt-all = JWT required to access all operations, jwt-write = JWT required for POST/PUT/DELETE operations, jwt-compat = 3.7 compatibility mode)
Default: jwt-compat
Possible values: “jwt-all”, “jwt-compat”, “jwt-write”
Effective on Coordinators only.
The possible values for the option are:
jwt-all: requires a valid JWT for all accesses to /_admin/cluster and its sub-routes. If you use this configuration, the Cluster and Nodes sections of the web interface are disabled, as they rely on the ability to read data from several cluster APIs.
jwt-write: requires a valid JWT for write accesses (all HTTP methods except GET) to /_admin/cluster. You can use this setting to allow privileged users to read data from the cluster APIs, but not to do any modifications. Modifications (carried out by write accesses) are then only possible by requests with a valid JWT. All existing permission checks for the cluster API routes are still in effect with this setting, meaning that read operations without a valid JWT may still require dedicated other permissions (as in v3.7).
jwt-compat: no additional access checks are in place for the cluster APIs. However, all existing permission checks for the cluster API routes are still in effect with this setting, meaning that all operations may still require dedicated other permissions (as in v3.7).
The default value is jwt-compat, which means that this option does not cause any extra JWT checks compared to v3.7.
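Under jwt-write, a read can succeed with regular user credentials while a modification needs a valid JWT. A hedged sketch (endpoint, credentials, and token are placeholders):

```
# read access: regular authentication suffices
curl -u user:password http://localhost:8529/_admin/cluster/health

# write access: requires a valid JWT
curl -H "Authorization: bearer $JWT" \
     -X POST http://localhost:8529/_admin/cluster/rebalance
```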
--cluster.connectivity-check-interval
Introduced in: v3.11.4
Type: uint32
The interval (in seconds) in which cluster-internal connectivity checks are performed.
Default: 3600
Effective on Coordinators and DB-Servers only.
Setting this option to a value greater than zero makes Coordinators and DB-Servers run periodic connectivity checks with approximately the specified frequency. The first connectivity check is carried out approximately 15 seconds after server start. Note that a random delay is added to the interval on each server, so that different servers do not execute their connectivity checks all at the same time. Setting this option to a value of zero disables these connectivity checks.
--cluster.create-waits-for-sync-replication
Type: boolean
Let the active Coordinator wait for all replicas to create collections.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and DB-Servers only.
--cluster.default-replication-factor
Type: uint32
The default replication factor for non-system collections.
Default: 1
Effective on Coordinators only.
If you don’t set this option, it defaults to the value of the --cluster.min-replication-factor option. If set, the value must be between the values of --cluster.min-replication-factor and --cluster.max-replication-factor.
Note that you can still adjust the replication factor per collection. This value is only the default value used for new collections if no replication factor is specified when creating a collection.
Warning: If you use multiple Coordinators, use the same value on all Coordinators.
--cluster.failed-write-concern-status-code
Introduced in: v3.11.0
Type: uint32
The HTTP status code to send if a shard has not enough in-sync replicas to fulfill the write concern.
Default: 403
Possible values: 403, 503
Effective on DB-Servers only.
The default behavior is to return an HTTP 403 Forbidden status code. You can set the option to 503 to return a 503 Service Unavailable instead.
--cluster.force-one-shard
Type: boolean
Force the OneShard mode for all new collections.
This option can be specified without a value to enable it.
Effective on Coordinators and DB-Servers only.
If set to true, forces the cluster into creating all future collections with only a single shard and using the same DB-Server as these collections’ shard leader. All collections created this way are eligible for specific AQL query optimizations that can improve query performance and provide advanced transactional guarantees.
Warning: Use the same value on all Coordinators and all DB-Servers!
--cluster.index-create-timeout
Type: double
The amount of time (in seconds) the Coordinator waits for an index to be created before giving up.
Default: 259200
Effective on Coordinators only.
--cluster.max-number-of-move-shards
Introduced in: v3.9.0
Type: uint32
The number of shards to be moved per rebalance operation. If set to 0, no shards are moved.
Default: 10
Effective on Coordinators only.
This option limits the maximum number of move shards operations that can be made when the Rebalance Shards button is clicked in the web interface. For backwards compatibility, the default value is 10. A value of 0 disables the button.
--cluster.max-number-of-shards
Type: uint32
The maximum number of shards that can be configured when creating new collections (0 = unrestricted).
Default: 1000
Effective on Coordinators only.
If you change the value of this setting and restart the servers, no changes are applied to existing collections that would violate the new setting.
Warning: If you use multiple Coordinators, use the same value on all Coordinators.
--cluster.max-replication-factor
Type: uint32
The maximum replication factor for new collections (0 = unrestricted).
Default: 10
Effective on Coordinators only.
If you change the value of this setting and restart the servers, no changes are applied to existing collections that would violate the new setting.
Warning: If you use multiple Coordinators, use the same value on all Coordinators.
--cluster.min-replication-factor
Type: uint32
The minimum replication factor for new collections.
Default: 1
Effective on Coordinators only.
If you change the value of this setting and restart the servers, no changes are applied to existing collections that would violate the new setting.
Warning: If you use multiple Coordinators, use the same value on all Coordinators.
--cluster.my-address
Type: string
This server’s endpoint for cluster-internal communication.
Effective on Coordinators and DB-Servers only.
If specified, the endpoint needs to be in one of the following formats:
tcp://ipv4-address:port - TCP/IP endpoint, using IPv4
tcp://[ipv6-address]:port - TCP/IP endpoint, using IPv6
ssl://ipv4-address:port - TCP/IP endpoint, using IPv4, SSL encryption
ssl://[ipv6-address]:port - TCP/IP endpoint, using IPv6, SSL encryption
If you don’t specify an endpoint, the server looks up its internal endpoint address in the Agency. If no endpoint can be found in the Agency for the server’s ID, ArangoDB refuses to start.
Examples
Listen only on the interface with the address 192.168.1.1:
--cluster.my-address tcp://192.168.1.1:8530
Listen on all IPv4 and IPv6 addresses, which are configured on port 8530:
--cluster.my-address ssl://[::]:8530
--cluster.my-advertised-endpoint
Type: string
This server’s advertised endpoint for external communication (optional, e.g. an external IP address or load balancer).
Effective on Coordinators and DB-Servers only.
If specified, the endpoint needs to be in one of the following formats:
tcp://ipv4-address:port - TCP/IP endpoint, using IPv4
tcp://[ipv6-address]:port - TCP/IP endpoint, using IPv6
ssl://ipv4-address:port - TCP/IP endpoint, using IPv4, SSL encryption
ssl://[ipv6-address]:port - TCP/IP endpoint, using IPv6, SSL encryption
If you don’t specify an advertised endpoint, no external endpoint is advertised.
Examples
If an external interface is available to this server, you can specify it to communicate with external software / drivers:
--cluster.my-advertised-endpoint tcp://some.public.place:8530
All endpoint specifications apply.
--cluster.my-role
Type: string
This server’s role.
Show details
For a cluster, the possible values are DBSERVER (backend data server) and COORDINATOR (frontend server for external and application access).
--cluster.no-heartbeat-delay-before-shutdown
Introduced in: v3.12.4
Type: double
The delay (in seconds) before shutting down a Coordinator if no heartbeat can be sent. Set to 0 to deactivate this shutdown.
Default: 1800
Effective on Coordinators and DB-Servers only.
Show details
If you set this option to a value greater than zero, a Coordinator that cannot send a heartbeat to the Agency for the specified time shuts itself down. This is necessary to prevent a Coordinator from surviving longer than the Agency supervision waits before it removes the Coordinator from the Agency metadata. Without this mechanism, a Coordinator could still be running and committing transactions, which could, for example, render Hot Backups inconsistent.
--cluster.require-persisted-id
Type: boolean
If set to true, then the instance only starts if a UUID file is found in the database directory on startup. This ensures that the instance is started using an already existing database directory and not a new one. For the first start, you must either create the UUID file manually or set the option to false for the initial startup.
This option can be specified without a value to enable it.
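A first start and subsequent restarts could therefore be sketched like this (the data directory path is illustrative):

```shell
# First start: the database directory is new, so no UUID file exists yet
arangod --database.directory /var/lib/arangodb3 --cluster.require-persisted-id false

# Later starts: refuse to run against a directory without a persisted UUID file
arangod --database.directory /var/lib/arangodb3 --cluster.require-persisted-id true
```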
--cluster.resign-leadership-on-shutdown
Type: boolean
Create a resign-leadership job for this DB-Server on shutdown.
This option can be specified without a value to enable it.
Effective on DB-Servers only.
--cluster.shard-synchronization-attempt-timeout
Introduced in: v3.9.2
Type: double
The timeout (in seconds) for every shard synchronization attempt. Running into the timeout does not lead to a synchronization failure, but continues the synchronization shortly after. Setting a timeout can help to split the replication of large shards into smaller chunks and release snapshots on the leader earlier.
Default: 1200
Show details
Warning: If you use multiple DB-Servers, use the same value on all DB-Servers.
--cluster.synchronous-replication-timeout-factor
Type: double
All synchronous replication timeouts are multiplied by this factor.
Default: 1
Show details
Warning: If you use multiple DB-Servers, use the same value on all DB-Servers.
--cluster.synchronous-replication-timeout-maximum
Introduced in: v3.8.0
Type: double
All synchronous replication timeouts are at most this value (in seconds).
Default: 3600
Show details
Warning: This option should generally remain untouched and only be changed with great care!
Extend or shorten the timeouts for the internal synchronous replication mechanism between DB-Servers. All such timeouts are affected by this change.
Warning: If you use multiple DB-Servers, use the same value on all DB-Servers.
--cluster.synchronous-replication-timeout-minimum
Type: double
All synchronous replication timeouts are at least this value (in seconds).
Default: 900
Show details
Warning: This option should generally remain untouched and only be changed with great care!
The minimum timeout in seconds for the internal synchronous replication mechanism between DB-Servers. If replication requests are slow, but the servers are otherwise healthy, timeouts can cause followers to be dropped unnecessarily, resulting in costly resync operations. Increasing this value may help avoid such resyncs. Conversely, decreasing it may cause more resyncs, while lowering the latency of individual write operations.
Warning: If you use multiple DB-Servers, use the same value on all DB-Servers.
--cluster.synchronous-replication-timeout-per-4k
Type: double
All synchronous replication timeouts are increased by this amount per 4096 bytes (in seconds).
Default: 0.1
Show details
Warning: If you use multiple DB-Servers, use the same value on all DB-Servers.
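The interplay of the factor, per-4k, minimum, and maximum options above can be illustrated with a small calculation. This is a sketch only: the base timeout of 30 seconds is a made-up value for illustration, and the exact formula used internally is not documented here.

```shell
# Illustrative only: clamp a size-dependent replication timeout into [min, max].
# per4k:  --cluster.synchronous-replication-timeout-per-4k
# factor: --cluster.synchronous-replication-timeout-factor
# min/max: the -minimum and -maximum options; 30 is a hypothetical base timeout.
awk -v bytes=81920000 -v per4k=0.1 -v factor=1 -v min=900 -v max=3600 'BEGIN {
  t = (30 + per4k * bytes / 4096) * factor
  if (t < min) t = min
  if (t > max) t = max
  print t   # 81920000 bytes => 30 + 2000 = 2030 seconds, within [900, 3600]
}'
```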
--cluster.system-replication-factor
Type: uint32
The default replication factor for system collections.
Default: 2
Effective on Coordinators only.
Show details
Warning: If you use multiple Coordinators, use the same value on all Coordinators.
--cluster.upgrade
Type: string
Perform a cluster upgrade if necessary (auto = perform an upgrade and shut down only if --database.auto-upgrade true is set, disable = ignore --database.auto-upgrade and never perform an upgrade, force = ignore --database.auto-upgrade and always perform an upgrade and shut down, online = always perform an upgrade but don’t shut down).
Default: auto
Possible values: “auto”, “disable”, “force”, “online”
Effective on Coordinators only.
--cluster.write-concern
Type: uint32
The global default write concern used for writes to new collections.
Default: 1
Effective on Coordinators only.
Show details
This value is used as the default write concern for databases, which in turn is used as the default for collections.
Warning: If you use multiple Coordinators, use the same value on all Coordinators.
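Because the Coordinator-effective defaults above must agree across Coordinators, they are usually passed identically on every Coordinator. A sketch with example values:

```shell
# Pass the same values on every Coordinator (values shown are examples only)
arangod --cluster.min-replication-factor 2 \
        --cluster.max-replication-factor 5 \
        --cluster.system-replication-factor 3 \
        --cluster.write-concern 2
```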
database
--database.auto-upgrade
Type: boolean
Perform a database upgrade if necessary.
This option can be specified without a value to enable it.
Show details
If you specify this option, then the server performs a database upgrade instead of starting normally.
A database upgrade first compares the version number stored in the VERSION file in the database directory with the current server version.
If the version number found in the database directory is higher than that of the server, the server considers this an unintentional downgrade and warns about it. Using the server in these conditions is neither recommended nor supported.
If the version number found in the database directory is lower than that of the server, the server checks whether there are any upgrade tasks to perform. It then executes all required upgrade tasks and prints the status. If one of the upgrade tasks fails, the server exits with an error. Re-starting the server with the upgrade option again triggers the upgrade check and execution until the problem is fixed.
Whether or not you specify this option, the server always performs a version check on startup. If you run the server with a non-matching version number in the VERSION file, the server refuses to start.
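A typical upgrade run therefore consists of one start with the option set, followed by a normal start (the data directory path is illustrative):

```shell
# Run the upgrade tasks; the process exits after the upgrade completes
arangod --database.directory /var/lib/arangodb3 --database.auto-upgrade true

# Then start the server normally again
arangod --database.directory /var/lib/arangodb3
```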
--database.auto-upgrade-full-compaction
Type: boolean
Perform a full RocksDB compaction after database upgrade.
This option can be specified without a value to enable it.
Show details
If this option is specified together with --database.auto-upgrade, the server will perform a full RocksDB compaction after the database upgrade has completed successfully but before shutting down.
This performs a complete compaction of all column families with both changeLevel and compactBottomMostLevel options enabled, which can help optimize the database files after an upgrade.
The server will exit with an error code if the compaction fails.
--database.check-version
Type: boolean
Check the version of the database and exit.
This is a command, no value needs to be specified. The process terminates after executing the command.
--database.default-replication-version
Introduced in: v3.12.0
Experimental
Type: string
The default replication version. It can be overwritten when creating a new database. Possible values: 1, 2.
Default: 1
Possible values: “1”
--database.directory
Type: string
The path to the database directory.
Default: /var/lib/arangodb3
Show details
This defines the location where all data of a server is stored.
Make sure the directory is writable by the arangod process. You should further
not use a database directory which is provided by a network filesystem such as
NFS. The reason is that networked filesystems might cause inconsistencies when
there are multiple parallel readers or writers or they lack features required by
arangod, e.g. flock()
.
--database.extended-names
Introduced in: v3.9.0
Experimental
Type: boolean
Allow most UTF-8 characters in the names of databases, collections, Views, and indexes. Once in use, this option cannot be turned off again.
This option can be specified without a value to enable it.
Default: true
--database.ignore-datafile-errors
Type: boolean
Load collections even if datafiles may contain errors.
This option can be specified without a value to enable it.
--database.init-database
Type: boolean
Initialize an empty database.
This is a command, no value needs to be specified. The process terminates after executing the command.
--database.io-heartbeat
Introduced in: v3.8.7, v3.9.2
Type: boolean
Perform I/O heartbeat to test the underlying volume.
This option can be specified without a value to enable it.
Default: true
--database.max-databases
Introduced in: v3.12.0
Type: uint64
The maximum number of databases that can exist in parallel.
Default: 18446744073709552000
Show details
If the maximum number of databases is reached, no additional databases can be created in the deployment. In order to create additional databases, other databases need to be removed first.
--database.password
Type: string
The initial password of the root user.
--database.required-directory-state
Type: string
The required state of the database directory at startup (non-existing: the database directory must not exist, existing: the database directory must exist, empty: the database directory must exist but be empty, populated: the database directory must exist and contain specific files already, any: any state is allowed).
Default: any
Possible values: “any”, “empty”, “existing”, “non-existing”, “populated”
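For example, a provisioning script could fail fast if it would accidentally re-initialize a fresh directory by requiring an already populated one (path is illustrative):

```shell
# Refuse to start unless the database directory already contains data files
arangod --database.directory /var/lib/arangodb3 \
        --database.required-directory-state populated
```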
--database.restore-admin
Type: boolean
Reset the admin users and set a new password.
This is a command, no value needs to be specified. The process terminates after executing the command.
--database.upgrade-check
Type: boolean
Skip the database upgrade if set to false.
This option can be specified without a value to enable it.
Default: true
--database.wait-for-sync
Type: boolean
The default waitForSync behavior. Can be overwritten when creating a collection.
This option can be specified without a value to enable it.
dump
--dump.max-batch-size
Introduced in: v3.12.0
Type: uint64
Maximum batch size value (in bytes) that can be used in a dump.
Default: 1073741824
Effective on DB-Servers and Single Servers only.
Show details
Each batch in a dump can grow to at most this size.
--dump.max-docs-per-batch
Introduced in: v3.12.0
Type: uint64
Maximum number of documents per batch that can be used in a dump.
Default: 1000000
Effective on DB-Servers and Single Servers only.
Show details
Each batch in a dump can contain at most this many documents.
--dump.max-memory-usage
Introduced in: v3.12.0
Type: uint64
Maximum memory usage (in bytes) to be used by all ongoing dumps.
Default: dynamic (e.g. 6188454707)
Effective on DB-Servers and Single Servers only.
Show details
The approximate per-server maximum allowed memory usage value for all ongoing dump actions combined.
--dump.max-parallelism
Introduced in: v3.12.0
Type: uint64
Maximum parallelism that can be used in a dump.
Default: 8
Effective on DB-Servers and Single Servers only.
Show details
Each dump action on a server can use at most this many parallel threads. Note that end users can still start multiple dump actions that run in parallel.
encryption
--encryption.key-generator
Type: string
A program providing the encryption key on stdout. If set, encryption at rest is enabled.
Show details
The program must output 32 bytes of data on the standard output and exit.
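A minimal key-generator program could look like the following sketch. The only contract stated above is that it writes exactly 32 bytes to stdout and exits; the script itself and its path are hypothetical:

```shell
#!/bin/sh
# Hypothetical generator for --encryption.key-generator:
# write exactly 32 bytes of key material to stdout, then exit.
head -c 32 /dev/urandom
```

It would then be referenced as --encryption.key-generator /usr/local/bin/arango-keygen.sh (hypothetical path).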
--encryption.keyfile
Type: string
The path to the file that contains the encryption key. Must contain 32 bytes of data. If set, encryption at rest is enabled.
Show details
You must secure the encryption key file so that only arangodump, arangorestore, and arangod can access it. You should also ensure that the file is not readable if someone steals your hardware, for example, by encrypting /mytmpfs or creating an in-memory file-system under /mytmpfs.
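Creating and protecting a keyfile could look like this sketch (paths are illustrative):

```shell
# Generate a 32-byte key and restrict access before pointing arangod at it
head -c 32 /dev/urandom > /secure/arangodb.keyfile
chmod 600 /secure/arangodb.keyfile
arangod --encryption.keyfile /secure/arangodb.keyfile
```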
foxx
--foxx.allow-install-from-remote
Introduced in: v3.8.5
Type: boolean
Allow installing Foxx apps from remote URLs other than GitHub.
This option can be specified without a value to enable it.
Effective on Coordinators and Single Servers only.
--foxx.api
Type: boolean
Whether to enable the Foxx management REST APIs.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
--foxx.enable
Introduced in: v3.10.5
Type: boolean
Enable Foxx.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
Show details
If set to false, access to any custom Foxx services in the deployment will be forbidden. Access to ArangoDB’s built-in web interface will still be possible though.
Note: When setting this option to false, the management API for Foxx services will automatically be disabled as well. This is the same as manually setting the startup option --foxx.api false.
--foxx.force-update-on-startup
Type: boolean
Ensure that all Foxx services are synchronized before completing the startup sequence.
This option can be specified without a value to enable it.
Effective on Coordinators and Single Servers only.
Show details
If set to true, all Foxx services in all databases are synchronized between multiple Coordinators during the startup sequence. This ensures that all Foxx services are up-to-date when a Coordinator reports itself as ready.
If the option is set to false (i.e. no waiting), the Coordinator completes the startup sequence faster, and the Foxx services are propagated lazily. Until the initialization procedure has completed for the local Foxx apps, any request to a Foxx app is responded to with an HTTP 500 error and the message "waiting for initialization of Foxx services in this database". This can cause an unavailability window for Foxx services on Coordinator startup for the initial requests to Foxx apps until the app propagation has completed.
If you don’t use Foxx, you should set this option to false to benefit from a faster Coordinator startup. Deployments relying on Foxx apps being available as soon as a Coordinator is integrated or responding should set this option to true.
The option only has an effect for cluster setups. On single servers, all Foxx apps are available from the very beginning.
Note: ArangoDB 3.8 changed the default value to false for this option. In previous versions, this option had a default value of true.
--foxx.queues
Type: boolean
Enable or disable Foxx queues.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
Show details
If set to true, the Foxx queues are available and jobs in the queues are executed asynchronously.
If set to false, the queue manager is disabled and any jobs are prevented from being processed, which may reduce CPU load a bit.
--foxx.queues-poll-interval
Type: double
The poll interval for the Foxx queue manager (in seconds)
Default: 1
Effective on Coordinators and Single Servers only.
Show details
Lower values lead to more immediate and more frequent Foxx queue job execution, but make the queue thread wake up and query the queues more often. If set to a low value, the queue thread might cause CPU load.
If you don’t use Foxx queues much, then you may increase this value to make the queues thread wake up less.
--foxx.store
Type: boolean
Whether to enable the Foxx store in the web interface.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
http
--http.compress-response-threshold
Introduced in: v3.12.0
Type: uint64
The HTTP response body size from which on responses are transparently compressed in case the client asks for it.
Show details
Automatically compress outgoing HTTP responses with the deflate or gzip compression format, in case the client request advertises support for this. Compression will only happen for HTTP/1.1 and HTTP/2 connections, if the size of the uncompressed response body exceeds the threshold value controlled by this startup option, and if the response body size after compression is less than the original response body size. Using the value 0 disables the automatic response compression.
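For example, with a hypothetical threshold of 16 KiB, a client that advertises compression support (e.g. curl with --compressed) would receive compressed bodies only for responses larger than that:

```shell
arangod --http.compress-response-threshold 16384

# The client must send Accept-Encoding (curl's --compressed does this)
curl --compressed http://localhost:8529/_api/version
```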
--http.handle-content-encoding-for-unauthenticated-requests
Introduced in: v3.12.0
Type: boolean
Handle Content-Encoding headers for unauthenticated requests.
This option can be specified without a value to enable it.
Show details
If the option is set to true, the server will automatically uncompress incoming HTTP requests with the Content-Encodings gzip and deflate, even if the request is not authenticated.
--http.keep-alive-timeout
Type: double
The keep-alive timeout for HTTP connections (in seconds).
Default: 300
Show details
Idle keep-alive connections are closed by the server automatically when the timeout is reached. A keep-alive timeout value of 0 disables the keep-alive feature entirely.
--http.permanently-redirect-root
Type: boolean
Whether to use a permanent or temporary redirect.
This option can be specified without a value to enable it.
Default: true
--http.redirect-root-to
Type: string
Redirect of the root URL.
Default: /_admin/aardvark/index.html
--http.return-queue-time-header
Introduced in: v3.9.0
Type: boolean
Whether to return the x-arango-queue-time-seconds header in all responses.
This option can be specified without a value to enable it.
Default: true
Show details
The value contained in this header indicates the current queueing/dequeuing time for requests in the scheduler (in seconds). Client applications and drivers can use this value to control the server load and also react on overload.
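A client could inspect this header to back off under load; a sketch using curl (endpoint choice is illustrative):

```shell
arangod --http.return-queue-time-header true

# Print the current scheduler queueing time reported by the server
curl -sI http://localhost:8529/_api/version | grep -i x-arango-queue-time-seconds
```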
--http.trusted-origin
Type: string…
The trusted origin URLs for CORS requests with credentials.
javascript
--javascript.allow-admin-execute
Type: boolean
For testing purposes, allow /_admin/execute. Never enable this option in production!
This option can be specified without a value to enable it.
Effective on Coordinators and Single Servers only.
Show details
You can use this option to control whether user-defined JavaScript code is allowed to be executed on the server by sending HTTP requests to the /_admin/execute API endpoint with an authenticated user account.
The default value is false, which disables the execution of user-defined code. This is also the recommended setting for production. In test environments, it may be convenient to turn the option on in order to send arbitrary setup or teardown commands for execution on the server.
--javascript.allow-external-process-control
Type: boolean
Allow the execution and control of external processes from within JavaScript actions.
This option can be specified without a value to enable it.
Effective on Coordinators and Single Servers only.
--javascript.allow-port-testing
Type: boolean
Allow the testing of ports from within JavaScript actions.
This option can be specified without a value to enable it.
Effective on Coordinators and Single Servers only.
--javascript.app-path
Type: string
The directory for Foxx applications.
Default: /var/lib/arangodb3-apps
Effective on Coordinators and Single Servers only.
--javascript.copy-installation
Type: boolean
Copy the contents of javascript.startup-directory on first start.
This option can be specified without a value to enable it.
Effective on Coordinators and Single Servers only.
Show details
This option is intended to be useful for rolling upgrades. If you set it to true, you can upgrade the underlying ArangoDB packages without influencing the running arangod instance.
Setting this value only makes sense if you use ArangoDB outside of a container solution like Docker or Kubernetes.
--javascript.enabled
Type: boolean
Enable the V8 JavaScript engine.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
Show details
By default, the V8 engine is enabled on single servers and Coordinators. It is disabled by default on Agents and DB-Servers.
It is also possible to turn off the V8 engine on single servers and Coordinators to reduce the footprint of ArangoDB. However, turning the V8 engine off on single servers or Coordinators automatically renders certain functionality unavailable or dysfunctional. The affected functionality includes JavaScript transactions, Foxx, AQL user-defined functions, the built-in web interface, and some server APIs.
--javascript.endpoints-allowlist
Type: string…
Endpoints that can be connected to via the @arangodb/request
module in JavaScript actions.
Effective on Coordinators and Single Servers only.
--javascript.endpoints-denylist
Type: string…
Endpoints that cannot be connected to via the @arangodb/request
module in JavaScript actions (if not in the allowlist).
Effective on Coordinators and Single Servers only.
--javascript.environment-variables-allowlist
Type: string…
Environment variables that are accessible in JavaScript.
Effective on Coordinators and Single Servers only.
--javascript.environment-variables-denylist
Type: string…
Environment variables that are inaccessible in JavaScript (if not in the allowlist).
Effective on Coordinators and Single Servers only.
--javascript.files-allowlist
Type: string…
Filesystem paths that are accessible from within JavaScript actions.
Effective on Coordinators and Single Servers only.
--javascript.gc-frequency
Type: double
Time-based garbage collection frequency for JavaScript objects (each x seconds).
Default: 60
Effective on Coordinators and Single Servers only.
Show details
This option is useful to keep garbage collection running even in periods with few or no requests.
--javascript.gc-interval
Type: uint64
Request-based garbage collection interval for JavaScript objects (each x requests).
Default: 2000
Effective on Coordinators and Single Servers only.
--javascript.harden
Type: boolean
Disable access to JavaScript functions in the internal module: getPid() and logLevel().
This option can be specified without a value to enable it.
Effective on Coordinators and Single Servers only.
--javascript.module-directory
Type: string…
Additional paths containing JavaScript modules.
Effective on Coordinators and Single Servers only.
--javascript.script
Type: string…
Run the script and exit.
--javascript.script-parameter
Type: string…
Script parameter.
--javascript.startup-directory
Type: string
A path to the directory containing the JavaScript startup scripts.
Default: /usr/share/arangodb3/js
Effective on Coordinators and Single Servers only.
--javascript.startup-options-allowlist
Type: string…
Startup options whose names match this regular expression are allowed and exposed to JavaScript.
Effective on Coordinators and Single Servers only.
--javascript.startup-options-denylist
Type: string…
Startup options whose names match this regular expression are not exposed (if not in the allowlist) to JavaScript actions.
Effective on Coordinators and Single Servers only.
--javascript.tasks
Introduced in: v3.8.0
Type: boolean
Enable JavaScript tasks.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
--javascript.transactions
Introduced in: v3.8.0
Type: boolean
Enable JavaScript transactions.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
--javascript.user-defined-functions
Introduced in: v3.10.4
Type: boolean
Enable JavaScript user-defined functions (UDFs) in AQL queries.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
--javascript.v8-contexts
Type: uint64
The maximum number of V8 contexts that are created for executing JavaScript actions.
Effective on Coordinators and Single Servers only.
Show details
More contexts allow executing more JavaScript actions in parallel, provided that there are also enough threads available. Note that each V8 context uses a substantial amount of memory and requires periodic CPU processing time for garbage collection.
This option configures the maximum number of V8 contexts that can be used in parallel. On server start, only as many V8 contexts are created as are configured by the --javascript.v8-contexts-minimum option. The actual number of available V8 contexts may vary between --javascript.v8-contexts-minimum and --javascript.v8-contexts at runtime. When there are unused V8 contexts that linger around, the server’s garbage collector thread automatically deletes them.
--javascript.v8-contexts-max-age
Type: double
The maximum age for each V8 context (in seconds) before it is disposed.
Default: 60
Effective on Coordinators and Single Servers only.
Show details
If both --javascript.v8-contexts-max-invocations and --javascript.v8-contexts-max-age are set, then the context is destroyed when either of the specified threshold values is reached.
--javascript.v8-contexts-max-invocations
Type: uint64
The maximum number of invocations for each V8 context before it is disposed (0 = unlimited).
Effective on Coordinators and Single Servers only.
--javascript.v8-contexts-minimum
Type: uint64
The minimum number of V8 contexts to keep available for executing JavaScript actions.
Effective on Coordinators and Single Servers only.
Show details
The actual number of V8 contexts never drops below this value, but it may go up as high as specified by the --javascript.v8-contexts option.
When there are unused V8 contexts that linger around and the number of V8 contexts is greater than --javascript.v8-contexts-minimum, the server’s garbage collector thread automatically deletes them.
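Tuning the context pool might look like this sketch: keep a small warm pool and allow growth under load (values are examples):

```shell
# Keep at least 2 contexts warm, allow up to 16 in parallel
arangod --javascript.v8-contexts-minimum 2 --javascript.v8-contexts 16
```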
--javascript.v8-max-heap
Type: uint64
The maximal heap size (in MiB).
Default: 3072
--javascript.v8-options
Type: string…
Options to pass to V8.
Show details
You can optionally pass arguments to the V8 JavaScript engine. The V8 engine runs with the default settings unless you explicitly specify them. The options are forwarded to the V8 engine, which parses them on its own. Passing invalid options may result in an error being printed on stderr and the option being ignored.
You need to pass the options as one string, with V8 option names being prefixed with two hyphens. Multiple options need to be separated by whitespace. To get a list of all available V8 options, you can use the value "--help" as follows:
--javascript.v8-options="--help"
Another example of specific V8 options being set at startup:
--javascript.v8-options="--log --no-logfile-per-isolate --logfile=v8.log"
The names and features of usable options depend on the version of V8 being used, and they might change in the future if a different version of V8 is used in ArangoDB. Not all options offered by V8 are sensible to use in the context of ArangoDB. Only use specific options if you are sure that they are not harmful to regular database operation.
log
--log.api-enabled
Type: string
Whether the log API is enabled (true) or not (false), or only enabled for the superuser (jwt).
Default: true
Show details
Credentials are not written to log files. Nevertheless, some logged data might be sensitive depending on the context of the deployment. For example, if request logging is switched on, user requests and corresponding data might end up in log files. Therefore, a certain care with log files is recommended.
Since the database server offers an API to control logging and query logging data, this API has to be secured properly. By default, the API is accessible for admin users (administrative access to the _system database).
However, you can restrict it further to the superuser or disable it altogether:
true: The /_admin/log API is accessible for admin users.
jwt: The /_admin/log API is accessible for the superuser only (authentication with JWT superuser token and empty username).
false: The /_admin/log API is not accessible at all.
--log.color
Type: boolean
Use colors for TTY logging.
This option can be specified without a value to enable it.
Default: dynamic (e.g. true)
--log.escape-control-chars
Introduced in: v3.9.0
Type: boolean
Escape control characters in log messages.
This option can be specified without a value to enable it.
Default: true
Show details
This option applies to the control characters that have hex codes below \x20, and also the character DEL with hex code \x7f.
If you set this option to false, control characters are retained when they have a visible representation, and replaced with a space character in case they do not have a visible representation. For example, the control character \n is visible, so a \n is displayed in the log. In contrast, the control character BEL is not visible, so a space is displayed instead.
If you set this option to true, the hex code for the character is displayed, for example, the BEL character is displayed as \x07.
The default value for this option is true to ensure compatibility with previous versions.
A side effect of turning off the escaping is that it reduces the CPU overhead for the logging. However, this is only noticeable if logging is set to a very verbose level (e.g. debug or trace).
--log.escape-unicode-chars
Introduced in: v3.9.0
Type: boolean
Escape Unicode characters in log messages.
This option can be specified without a value to enable it.
Show details
If you set this option to false, Unicode characters are retained and written to the log as-is. For example, 犬 is logged as 犬.
If you set this option to true, any Unicode characters are escaped, and the hex codes for all Unicode characters are logged instead. For example, 犬 is logged as \u72AC.
The default value for this option is set to false for compatibility with previous versions.
A side effect of turning off the escaping is that it reduces the CPU overhead for the logging. However, this is only noticeable if logging is set to a very verbose level (e.g. debug or trace).
--log.file
Type: string
Shortcut for --log.output file://<filename>
Default: -
--log.file-group
Type: string
The group to use for a new log file. The user must be a member of this group.
--log.file-mode
Type: string
The mode to use for a new log file. The umask is applied as well.
--log.force-direct
Type: boolean
Do not start a separate thread for logging.
This option can be specified without a value to enable it.
Show details
You can use this option to disable logging in an extra logging thread. If set to true, any log messages are immediately printed in the thread that triggered the log message. This is non-optimal for performance but can aid debugging. If set to false, log messages are handed off to an extra logging thread, which asynchronously writes the log messages.
--log.foreground-tty
Type: boolean
Also log to TTY if backgrounded.
This option can be specified without a value to enable it.
--log.hostname
Introduced in: v3.8.0
Type: string
The hostname to use in log messages. Leave empty for none, use “auto” to automatically determine a hostname.
Show details
You can specify a hostname to be logged at the beginning of each log message (for regular logging) or inside the hostname attribute (for JSON-based logging).
The default value is an empty string, meaning no hostname is logged.
If you set this option to auto, the hostname is automatically determined.
--log.ids
Type: boolean
Log unique message IDs.
This option can be specified without a value to enable it.
Default: true
Show details
Each log invocation in the ArangoDB source code contains a unique log ID, which can be used to quickly find the location in the source code that produced a specific log message.
Log IDs are printed as 5-digit hexadecimal identifiers in square brackets between the log level and the log topic:
2020-06-22T21:16:48Z [39028] INFO [144fe] {general} using storage engine 'rocksdb'
(where 144fe is the log ID).
--log.in-memory
Introduced in: v3.8.0
Type: boolean
Use an in-memory log appender which can be queried via the API and web interface.
This option can be specified without a value to enable it.
Default: true
Show details
You can use this option to toggle storing log messages in memory, from which they can be consumed via the /_admin/log HTTP API and via the web interface.
By default, this option is turned on, so log messages are consumable via the API and web interface. Turning this option off disables that functionality, saves a bit of memory for the in-memory log buffers, and prevents potential log information leakage via these means.
--log.in-memory-level
Type: string
Use an in-memory log appender only for this log level and higher.
Default: info
Possible values: “debug”, “err”, “error”, “fatal”, “info”, “trace”, “warn”, “warning”
Show details
You can use this option to control which log messages are preserved in memory (in case --log.in-memory is enabled).
The default value is info, meaning all log messages of types info, warning, error, and fatal are stored in-memory by an instance. By setting this option to warning, only warning, error, and fatal log messages are preserved in memory, and by setting the option to error, only error and fatal messages are kept.
This option is useful because the number of in-memory log messages is limited to the latest 2048 messages, and these slots are shared between informational, warning, and error messages by default.
--log.keep-logrotate
Type: boolean
Keep the old log file after receiving a SIGHUP.
This option can be specified without a value to enable it.
--log.level
Type: string…
Set the topic-specific log level, using --log.level level for the general topic or --log.level topic=level for the specified topic (can be specified multiple times).
Available log levels: fatal, error, warning, info, debug, trace.
Available log topics: all, agency, agencycomm, agencystore, aql, arangosearch, audit-authentication, audit-authorization, audit-collection, audit-database, audit-document, audit-hotbackup, audit-service, audit-view, authentication, authorization, backup, bench, cache, cluster, communication, config, crash, deprecation, development, dump, engines, flush, general, graphs, heartbeat, httpclient, libiresearch, license, maintenance, memory, queries, rep-state, rep-wal, replication, replication2, requests, restore, rocksdb, security, ssl, startup, statistics, supervision, syscall, threads, trx, ttl, v8, validation, views.
Default: info
Show details
ArangoDB’s log output is grouped by topics. --log.level can be specified multiple times at startup, for as many topics as needed. The log verbosity and output files can be adjusted per log topic.
arangod --log.level all=warning --log.level queries=trace --log.level startup=trace
This sets a global log level of warning and two topic-specific levels (trace for the queries and startup topics). Note that --log.level warning does not set a log level globally for all existing topics, but only for the general topic. Use the pseudo-topic all to set a global log level.
The same in a configuration file:
[log]
level = all=warning
level = queries=trace
level = startup=trace
The available log levels are:
fatal: Only log fatal errors.
error: Only log errors.
warning: Only log warnings and errors.
info: Log information messages, warnings, and errors.
debug: Log debug and information messages, warnings, and errors.
trace: Log trace, debug, and information messages, warnings, and errors.
Note that the debug and trace levels are very verbose.
Some relevant log topics available in ArangoDB 3 are:
agency: Information about the cluster Agency.
performance: Performance-related messages.
queries: Executed AQL queries, slow queries.
replication: Replication-related information.
requests: HTTP requests.
startup: Information about server startup and shutdown.
threads: Information about threads.
You can adjust the log levels at runtime via the PUT /_admin/log/level HTTP API endpoint.
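A minimal sketch of such a runtime adjustment with curl; the host, port, and credentials are assumptions and need to be adapted to your deployment:

```shell
# Set the log level for two topics at runtime via the
# PUT /_admin/log/level HTTP API (assumed endpoint and credentials).
curl -X PUT http://localhost:8529/_admin/log/level \
  -u root: \
  -d '{"queries": "TRACE", "startup": "INFO"}'

# Retrieve the current log levels:
curl http://localhost:8529/_admin/log/level -u root:
```

The change takes effect immediately and does not require a server restart.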
Audit logging (Enterprise Edition): The server logs all audit events by default. Low priority events, such as statistics operations, are logged with the debug log level. To keep such events from cluttering the log, set the appropriate log topics to the info log level.
--log.line-number
Type: boolean
Include the function name, file name, and line number of the source code that issues the log message. Format: [func@FileName.cpp:123]
This option can be specified without a value to enable it.
--log.max-entry-length
Type: uint32
The maximum length of a log entry (in bytes).
Default: 134217728
Show details
Note: This option does not include audit log messages. See --audit.max-entry-length instead.
Any log messages longer than the specified value are truncated and the suffix ... is added to them.
The purpose of this option is to shorten long log messages in case there is not a lot of space for log files, and to keep rogue log messages from overusing resources.
The default value is 128 MB, which is very high and should effectively mean downward compatibility with previous arangod versions, which did not restrict the maximum size of log messages.
--log.max-queued-entries
Introduced in: v3.10.12, v3.11.5, v3.12.0
Type: uint32
Upper limit of log entries that are queued in a background thread.
Default: 16384
Show details
Log entries are pushed on a queue for asynchronous writing unless you enable the --log.force-direct startup option. If you use a slow log output (e.g. syslog), the queue might grow and eventually overflow.
You can configure the upper bound of the queue with this option. If the queue is full, log entries are written synchronously until the queue has space again.
--log.output
Type: string…
Log destination(s), e.g. file:///path/to/file (any occurrence of $PID is replaced with the process ID).
Show details
This option allows you to direct the global or per-topic log messages to different outputs. The output definition can be one of the following:
- for stdout
+ for stderr
syslog://<syslog-facility>
syslog://<syslog-facility>/<application-name>
file://<relative-or-absolute-path>
To set up a per-topic output configuration, use --log.output <topic>=<definition>:
--log.output queries=file://queries.log
The above example logs query-related messages to the file queries.log.
You can specify the option multiple times in order to configure the output for different log topics:
--log.level queries=trace --log.output queries=file:///queries.log --log.level requests=info --log.output requests=file:///requests.log
The above example logs all query-related messages to the file queries.log and HTTP requests with a level of info or higher to the file requests.log.
Any occurrence of $PID in the log output value is replaced at runtime with the actual process ID. This enables logging to process-specific files:
--log.output 'file://arangod.log.$PID'
Note that the dollar sign may need extra escaping when specified in a command-line shell such as Bash.
If you specify --log.file-mode <octalvalue>, then any newly created log file uses octalvalue as its file mode. Please note that the umask value is applied as well.
If you specify --log.file-group <name>, then any newly created log file tries to use <name> as the group name. Note that you have to be a member of that group. Otherwise, the group ownership is not changed.
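As a sketch, the file-related options can be combined like this; the log path and group name are illustrative assumptions:

```shell
# Write the log to a dedicated file with explicit permissions and group
# ownership (illustrative path and group; the user running arangod must
# be a member of the "arangodb" group).
arangod \
  --log.output file:///var/log/arangod/arangod.log \
  --log.file-mode 0644 \
  --log.file-group arangodb
```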
The old --log.file option is still available for convenience. It is a shortcut for the more general option --log.output file://filename.
The old --log.requests-file option is still available. It is a shortcut for the more general option --log.output requests=file://...
To change the log levels for the specified output, you can add a comma-separated list of topics with their respective level after the output definition, separated by a semicolon:
--log.output file:///path/to/file;queries=trace,requests=info
--log.output -;all=error
--log.performance
Deprecated in: v3.5.0
Type: boolean
Shortcut for --log.level performance=trace.
This option can be specified without a value to enable it.
--log.prefix
Type: string
Prefix log message with this string.
Show details
Example: arangod ... --log.prefix "-->"
2020-07-23T09:46:03Z --> [17493] INFO ...
--log.process
Introduced in: v3.8.0
Type: boolean
Show the process identifier (PID) in log messages.
This option can be specified without a value to enable it.
Default: true
--log.recording-api-enabled
Type: string
Whether the recording API is enabled (true) or not (false), or only enabled for the superuser (jwt).
Default: true
Show details
The /_admin/server/api-calls and /_admin/server/aql-queries endpoints provide access to recorded API calls and AQL queries respectively. They are referred to as the recording API.
Since this data might be sensitive depending on the context of the deployment, these endpoints need to be properly secured. By default, the recording API is accessible for admin users (users with administrative access to the _system database). However, you can restrict it further to the superuser or disable it altogether:
true: The recording API is accessible for admin users.
jwt: The recording API is accessible for the superuser only (authentication with a JWT superuser token and an empty username).
false: The recording API is not accessible at all.
Whether API calls and AQL queries are recorded is independent of this option. It is controlled by the --server.api-call-recording and --server.aql-query-recording startup options.
--log.request-parameters
Type: boolean
Include full URLs and HTTP request parameters in trace logs.
This option can be specified without a value to enable it.
Default: true
--log.role
Type: boolean
Log the server role.
This option can be specified without a value to enable it.
Show details
If you set this option to true, log messages contain a single character with the server’s role. The roles are:
U: Undefined / unclear (used at startup)
S: Single server
C: Coordinator
P: Primary / DB-Server
A: Agent
--log.shorten-filenames
Type: boolean
Shorten filenames in log output (use with --log.line-number).
This option can be specified without a value to enable it.
Default: true
--log.structured-param
Introduced in: v3.10.0
Type: string…
Toggle the usage of the log category parameter in structured log messages.
Show details
Some log messages can be displayed together with additional information in a structured form. The following parameters are available:
database: The name of the database.
username: The name of the user.
queryid: The ID of the AQL query (on DB-Servers only).
url: The endpoint path.
The format to enable or disable a parameter is <parameter>=<bool>, or <parameter> to enable it. You can specify the option multiple times to configure multiple parameters:
arangod --log.structured-param database=true --log.structured-param url --log.structured-param username=false
You can adjust the parameter settings at runtime using the /_admin/log/structured HTTP API.
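A minimal sketch of such a runtime adjustment, assuming a local server and root credentials:

```shell
# Enable the database and url parameters and disable username in
# structured log messages via PUT /_admin/log/structured.
curl -X PUT http://localhost:8529/_admin/log/structured \
  -u root: \
  -d '{"database": true, "url": true, "username": false}'
```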
--log.thread
Type: boolean
Show the thread identifier in log messages.
This option can be specified without a value to enable it.
Default: true
--log.thread-name
Type: boolean
Show thread name in log messages.
This option can be specified without a value to enable it.
--log.time-format
Type: string
The time format to use in logs.
Default: utc-datestring-micros
Possible values: “local-datestring”, “timestamp”, “timestamp-micros”, “timestamp-millis”, “uptime”, “uptime-micros”, “uptime-millis”, “utc-datestring”, “utc-datestring-micros”, “utc-datestring-millis”
Show details
Overview of the different formats:
Format | Example | Description |
---|---|---|
timestamp | 1553766923000 | Unix timestamps, in seconds |
timestamp-millis | 1553766923000.123 | Unix timestamps, in seconds, with millisecond precision |
timestamp-micros | 1553766923000.123456 | Unix timestamps, in seconds, with microsecond precision |
uptime | 987654 | seconds since server start |
uptime-millis | 987654.123 | seconds since server start, with millisecond precision |
uptime-micros | 987654.123456 | seconds since server start, with microsecond precision |
utc-datestring | 2019-03-28T09:55:23Z | UTC-based date and time in format YYYY-MM-DDTHH:MM:SSZ |
utc-datestring-millis | 2019-03-28T09:55:23.123Z | like utc-datestring, but with millisecond precision |
utc-datestring-micros | 2019-03-28T09:55:23.123456Z | like utc-datestring, but with microsecond precision |
local-datestring | 2019-03-28T10:55:23 | local date and time in format YYYY-MM-DDTHH:MM:SS |
--log.use-json-format
Introduced in: v3.8.0
Type: boolean
Use JSON as output format for logging.
This option can be specified without a value to enable it.
Show details
You can use this option to switch the log output to the JSON format. Each log message then produces a separate line with JSON-encoded log data, which can be consumed by other applications.
The object attributes produced for each log message are:
Key | Value |
---|---|
time | date/time of log message, in format specified by --log.time-format |
prefix | only emitted if --log.prefix is set |
pid | process id, only emitted if --log.process is set |
tid | thread id, only emitted if --log.thread is set |
thread | thread name, only emitted if --log.thread-name is set |
role | server role (1 character), only emitted if --log.role is set |
level | log level (e.g. "WARN" , "INFO" ) |
file | source file name of log message, only emitted if --log.line-number is set |
line | source file line of log message, only emitted if --log.line-number is set |
function | source file function name, only emitted if --log.line-number is set |
topic | log topic name |
id | log id (5 digit hexadecimal string), only emitted if --log.ids is set |
hostname | hostname if --log.hostname is set |
message | the actual log message payload |
--log.use-local-time
Deprecated in: v3.5.0
Type: boolean
Use the local timezone instead of UTC.
This option can be specified without a value to enable it.
Show details
This option is deprecated. Use --log.time-format local-datestring instead.
--log.use-microtime
Deprecated in: v3.5.0
Type: boolean
Use Unix timestamps in seconds with microsecond precision.
This option can be specified without a value to enable it.
Show details
This option is deprecated. Use --log.time-format timestamp-micros instead.
network
--network.compress-request-threshold
Introduced in: v3.12.0
Type: uint64
The HTTP request body size from which on cluster-internal requests are transparently compressed.
Default: 200
Effective on Coordinators and DB-Servers only.
Show details
Automatically compress outgoing HTTP requests in cluster-internal traffic with the deflate, gzip, or lz4 compression format. Compression only happens if the size of the uncompressed request body exceeds the threshold value controlled by this startup option, and if the request body size after compression is less than the original request body size. Using the value 0 disables the automatic compression.
--network.compression-method
Introduced in: v3.12.0
Type: string
The compression method used for cluster-internal requests.
Default: none
Possible values: “auto”, “deflate”, “gzip”, “lz4”, “none”
Effective on Coordinators and DB-Servers only.
Show details
Setting this option to 'none' disables compression for cluster-internal requests.
To enable compression for cluster-internal requests, set this option to 'deflate', 'gzip', 'lz4', or 'auto'.
The 'deflate' and 'gzip' compression methods are general purpose but have significant CPU overhead for performing the compression work. The 'lz4' compression method compresses slightly worse but has much lower CPU overhead.
The 'auto' compression method uses 'deflate' by default, and 'lz4' for requests whose size is at least 3 times the configured threshold size.
The compression method only matters if --network.compress-request-threshold is set to a value greater than zero. If the threshold is set to 0, no compression is performed.
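Putting both options together, a sketch of enabling adaptive compression (the threshold value is an illustrative assumption):

```shell
# Compress cluster-internal requests larger than 4096 bytes, letting
# the server pick 'deflate' or 'lz4' depending on the request size.
arangod \
  --network.compression-method auto \
  --network.compress-request-threshold 4096
```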
--network.idle-connection-ttl
Type: uint64
The default time-to-live of idle connections for cluster-internal communication (in milliseconds).
Default: 120000
--network.io-threads
Type: uint32
The number of network I/O threads for cluster-internal communication.
Default: 2
--network.max-open-connections
Type: uint64
The maximum number of open TCP connections for cluster-internal communication per endpoint
Default: 1024
--network.max-requests-in-flight
Introduced in: v3.8.0
Type: uint64
The number of internal requests that can be in flight at a given point in time.
Default: 65536
--network.protocol
Deprecated in: v3.9.0
Type: string
The network protocol to use for cluster-internal communication.
Possible values: “”, “h2”, “http”, “http2”
--network.verify-hosts
Type: boolean
Verify peer certificates when using TLS in cluster-internal communication.
This option can be specified without a value to enable it.
query
--query.allow-collections-in-expressions
Introduced in: v3.8.0
Deprecated in: v3.9.0
Type: boolean
Allow full collections to be used in AQL expressions.
This option can be specified without a value to enable it.
Show details
If set to true, using collection names in arbitrary places in AQL expressions is allowed, although using collection names like this is very likely unintended.
For example, consider the following query:
FOR doc IN collection RETURN collection
Here, the collection name is collection, and its usage in the FOR loop is intended and valid. However, collection is also used in the RETURN statement, which is legal but potentially unintended. It should likely be RETURN doc or RETURN doc.someAttribute instead. Otherwise, the entire collection is materialized and returned as many times as there are documents in the collection. This can take a long time and even lead to out-of-memory crashes in the worst case.
If you set the option to false, such unintentional usage of collection names in queries is prohibited and instead makes the query fail with error 1568 (“collection used as expression operand”).
The default value of the option was true in v3.8, meaning that potentially unintended usage of collection names in queries was still allowed. In v3.9, the default value changed to false. The option is also deprecated from v3.9.0 on and will be removed in future versions. From then on, unintended usage of collection names is always disallowed.
--query.cache-entries
Type: uint64
The maximum number of results in query result cache per database.
Default: 128
Show details
If a query is eligible for caching and the number of items in the database’s query cache is equal to this threshold value, another cached query result is removed from the cache.
This option only has an effect if the query cache mode is set to either on or demand.
--query.cache-entries-max-size
Type: uint64
The maximum cumulated size of results in the query result cache per database (in bytes).
Default: 268435456
Show details
When a query result is inserted into the query results cache, it is checked if the total size of cached results would exceed this value, and if so, another cached query result is removed from the cache before a new one is inserted.
This option only has an effect if the query cache mode is set to either on or demand.
--query.cache-entry-max-size
Type: uint64
The maximum size of an individual result entry in query result cache (in bytes).
Default: 16777216
Show details
Query results are only eligible for caching if their size does not exceed this setting’s value.
--query.cache-include-system-collections
Type: boolean
Whether to include system collection queries in the query result cache.
This option can be specified without a value to enable it.
Show details
Not storing these results is normally beneficial if you use the query results cache, as queries on system collections are internal to ArangoDB and use space in the query results cache unnecessarily.
--query.cache-mode
Type: string
The mode for the AQL query result cache. Can be “on”, “off”, or “demand”.
Default: off
Show details
Toggles the AQL query results cache behavior. The possible values are:
off: do not use the query results cache
on: always use the query results cache, except for queries that have their cache attribute set to false
demand: use the query results cache only for queries that have their cache attribute set to true
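With the cache mode set to demand, individual queries can opt in through the cache attribute of the cursor HTTP API. A sketch, assuming a local server, root credentials, and a collection named collection:

```shell
# Opt a single query into the results cache (demand mode).
curl -X POST http://localhost:8529/_db/_system/_api/cursor \
  -u root: \
  -d '{"query": "FOR d IN collection RETURN d.name", "cache": true}'
```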
--query.collection-logger-all-slow-queries
Introduced in: v3.12.2
Type: boolean
Whether or not to include all slow queries in query collection logging.
This option can be specified without a value to enable it.
Default: true
--query.collection-logger-cleanup-interval
Introduced in: v3.12.2
Type: uint64
The interval (in milliseconds) in which query information is purged from the _queries system collection.
Default: 600000
--query.collection-logger-enabled
Introduced in: v3.12.2
Type: boolean
Whether or not to enable logging of queries to a system collection.
This option can be specified without a value to enable it.
--query.collection-logger-include-system-database
Introduced in: v3.12.2
Type: boolean
Whether or not to include _system database queries in query collection logging.
This option can be specified without a value to enable it.
--query.collection-logger-max-buffered-queries
Introduced in: v3.12.2
Type: uint64
The maximum number of queries to buffer for query collection logging.
Default: 4096
--query.collection-logger-probability
Introduced in: v3.12.2
Type: double
The probability with which queries are included in query collection logging.
Default: 0.1
--query.collection-logger-push-interval
Introduced in: v3.12.2
Type: uint64
The interval (in milliseconds) in which query information is flushed to the _queries system collection.
Default: 3000
--query.collection-logger-retention-time
Introduced in: v3.12.2
Type: double
The time duration (in seconds) for which queries are kept in the _queries system collection before they are purged.
Default: 28800
--query.fail-on-warning
Type: boolean
Whether AQL queries should fail with errors even for recoverable warnings.
This option can be specified without a value to enable it.
Show details
If set to true, AQL queries that produce warnings are instantly aborted and throw an exception. This option can be set to catch obvious issues with AQL queries early.
If set to false, AQL queries that produce warnings are not aborted and return the warnings along with the query results.
You can override the option for each individual AQL query via the failOnWarning attribute.
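A sketch of such a per-query override via the cursor HTTP API, assuming a local server and root credentials:

```shell
# Let this query return its warnings instead of failing, regardless of
# the --query.fail-on-warning default (1 / 0 produces a warning in AQL).
curl -X POST http://localhost:8529/_db/_system/_api/cursor \
  -u root: \
  -d '{"query": "RETURN 1 / 0", "options": {"failOnWarning": false}}'
```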
--query.global-memory-limit
Introduced in: v3.8.0
Type: uint64
The memory threshold for all AQL queries combined (in bytes, 0 = no limit).
Default: dynamic (e.g. 26802703319)
Show details
You can use this option to set a limit on the combined estimated memory usage of all AQL queries (in bytes). If this option has a value of 0, then no global memory limit is in place. This is also the default value and the same behavior as in version 3.7 and older.
If you set this option to a value greater than zero, then the total memory usage of all AQL queries is limited approximately to the configured value. The limit is enforced by each server node in a cluster independently, i.e. it can be set separately for Coordinators, DB-Servers etc. The memory usage of a query that runs on multiple servers in parallel is not summed up, but tracked separately on each server.
If a memory allocation in a query would lead to the violation of the configured global memory limit, then the query is aborted with error code 32 (“resource limit exceeded”).
The global memory limit is approximate, in the same fashion as the per-query memory limit exposed by the --query.memory-limit option. The global memory tracking has a granularity of 32 KiB chunks.
If both --query.global-memory-limit and --query.memory-limit are set, you must set the former at least as high as the latter.
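For example, a sketch of a sizing that satisfies this constraint (the byte values are illustrative assumptions):

```shell
# Allow each query up to 512 MiB, with all queries on this server node
# combined capped at 2 GiB. The global limit is >= the per-query limit.
arangod \
  --query.memory-limit 536870912 \
  --query.global-memory-limit 2147483648
```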
--query.log-failed
Introduced in: v3.9.5, v3.10.2
Type: boolean
Whether to log failed AQL queries.
This option can be specified without a value to enable it.
Effective on Coordinators, Agents and Single Servers only.
Show details
If set to true, all failed AQL queries are logged to the server log. You can use this option during development, or to catch unexpected failed queries in production.
--query.log-memory-usage-threshold
Introduced in: v3.9.5, v3.10.2
Type: uint64
Log queries that have a peak memory usage larger than this threshold.
Default: 1073741824
Effective on Coordinators, Agents and Single Servers only.
Show details
A warning is logged if queries exceed the specified threshold. This is useful for finding queries that use a large amount of memory.
--query.max-artifact-log-length
Introduced in: v3.9.5, v3.10.2
Type: uint64
The maximum length of query strings and bind parameter values in logs before they get truncated.
Default: 4096
Effective on Coordinators, Agents and Single Servers only.
Show details
This option allows you to truncate overly long query strings and bind parameter values to a reasonable length in log files.
--query.max-collections-per-query
Introduced in: v3.10.7
Type: uint64
The maximum number of collections/shards that can be used in one AQL query.
Default: 2048
--query.max-dnf-condition-members
Introduced in: v3.11.0
Type: uint64
The maximum number of OR sub-nodes in the internal representation of an AQL FILTER condition.
Default: 786432
Effective on Coordinators and Single Servers only.
Show details
You can use this option to limit the computation time and memory usage when converting complex AQL FILTER conditions into the internal DNF (disjunctive normal form) format. FILTER conditions with many logical branches (AND, OR, NOT) can take a large amount of processing time and memory, and this startup option caps both for such conditions.
Once the threshold value is reached during the DNF conversion of a FILTER condition, the conversion is aborted, and the query continues with a simplified internal representation of the condition, which cannot be used for index lookups.
--query.max-nodes-per-callstack
Introduced in: v3.9.0
Type: uint64
The maximum number of execution nodes on the callstack before splitting the remaining nodes into a separate thread.
Default: 250
--query.max-parallelism
Type: uint64
The maximum number of threads to use for a single query; the actual query execution may use fewer depending on various factors.
Default: 4
--query.max-query-async-prefetch-slots
Introduced in: v3.12.0
Type: uint64
The maximum per-query number of slots available for asynchronous prefetching inside any AQL query.
Default: 32
Effective on Coordinators, DB-Servers and Single Servers only.
--query.max-runtime
Type: double
The runtime threshold for AQL queries (in seconds, 0 = no limit).
Show details
Sets a default maximum runtime for AQL queries.
The default value is 0, meaning that the runtime of AQL queries is not limited. If you set it to any positive value, it restricts the runtime of all AQL queries, unless you override it with the maxRuntime query option on a per-query basis.
If a query exceeds the configured runtime, it is killed on the next occasion when the query checks its own status. Killing is best-effort, so it is not guaranteed that a query runs no longer than exactly the configured amount of time.
Warning: This option affects all queries in all databases, including queries issued for administration and database-internal purposes.
--query.max-total-async-prefetch-slots
Introduced in: v3.12.0
Type: uint64
The maximum total number of slots available for asynchronous prefetching across all AQL queries.
Default: 256
Effective on Coordinators, DB-Servers and Single Servers only.
--query.memory-limit
Type: uint64
The memory threshold per AQL query (in bytes, 0 = no limit).
Default: dynamic (e.g. 19853854311)
Show details
The default maximum amount of memory (in bytes) that a single AQL query can use. When a single AQL query reaches the specified limit value, the query is aborted with a resource limit exceeded exception. In a cluster, the memory accounting is done per server, so the limit value is effectively a memory limit per query per server node.
Some operations, namely calls to AQL functions and their intermediate results, are not properly tracked.
You can override the limit by setting the memoryLimit option for individual queries when running them. Overriding the per-query limit value is only possible if the --query.memory-limit-override option is set to true.
The default per-query memory limit value in version 3.8 and later depends on the amount of available RAM. In version 3.7 and older, the default value was 0, meaning “unlimited”.
The default values are:
Available memory: 0 (0MiB) Limit: 0 unlimited, %mem: n/a
Available memory: 134217728 (128MiB) Limit: 33554432 (32MiB), %mem: 25.0
Available memory: 268435456 (256MiB) Limit: 67108864 (64MiB), %mem: 25.0
Available memory: 536870912 (512MiB) Limit: 201326592 (192MiB), %mem: 37.5
Available memory: 805306368 (768MiB) Limit: 402653184 (384MiB), %mem: 50.0
Available memory: 1073741824 (1024MiB) Limit: 603979776 (576MiB), %mem: 56.2
Available memory: 2147483648 (2048MiB) Limit: 1288490189 (1228MiB), %mem: 60.0
Available memory: 4294967296 (4096MiB) Limit: 2576980377 (2457MiB), %mem: 60.0
Available memory: 8589934592 (8192MiB) Limit: 5153960755 (4915MiB), %mem: 60.0
Available memory: 17179869184 (16384MiB) Limit: 10307921511 (9830MiB), %mem: 60.0
Available memory: 25769803776 (24576MiB) Limit: 15461882265 (14745MiB), %mem: 60.0
Available memory: 34359738368 (32768MiB) Limit: 20615843021 (19660MiB), %mem: 60.0
Available memory: 42949672960 (40960MiB) Limit: 25769803776 (24576MiB), %mem: 60.0
Available memory: 68719476736 (65536MiB) Limit: 41231686041 (39321MiB), %mem: 60.0
Available memory: 103079215104 (98304MiB) Limit: 61847529063 (58982MiB), %mem: 60.0
Available memory: 137438953472 (131072MiB) Limit: 82463372083 (78643MiB), %mem: 60.0
Available memory: 274877906944 (262144MiB) Limit: 164926744167 (157286MiB), %mem: 60.0
Available memory: 549755813888 (524288MiB) Limit: 329853488333 (314572MiB), %mem: 60.0
You can set a global memory limit for the total memory used by all AQL queries that currently execute via the --query.global-memory-limit option.
From ArangoDB 3.8 on, the per-query memory tracking has a granularity of 32 KiB chunks. That means a very low memory limit value such as 1 (e.g. for testing) may not make a query fail if the total memory allocations in the query don’t exceed 32 KiB. The effective lowest memory limit value that can be enforced is thus 32 KiB. Memory limit values higher than 32 KiB are checked whenever the total memory allocations cross a 32 KiB boundary.
--query.memory-limit-override
Introduced in: v3.8.0
Type: boolean
Allow increasing the per-query memory limits for individual queries.
This option can be specified without a value to enable it.
Default: true
Show details
You can use this option to control whether individual AQL queries can increase their memory limit via the memoryLimit query option. This is the default, so a query that increases its memory limit is allowed to use more memory than the --query.memory-limit startup option value.
If the option is set to false, individual queries can only lower their maximum allowed memory usage but not increase it.
--query.optimizer-max-plans
Type: uint64
The maximum number of query plans to create for a query.
Default: 128
Show details
You can control how many different query execution plans the AQL query optimizer generates at most for any given AQL query with this option. Normally, the AQL query optimizer generates a single execution plan per AQL query, but there are some cases in which it creates multiple competing plans.
More plans can lead to better optimized queries. However, plan creation has its costs. The more plans are created and shipped through the optimization pipeline, the more time is spent in the optimizer. You can lower the number to make the optimizer stop creating additional plans when it has already created enough plans.
Note that this setting controls the default maximum number of plans to create.
The value can still be adjusted on a per-query basis by setting the
maxNumberOfPlans
attribute for individual queries.
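For example, to make the optimizer stop earlier by default (a sketch with an example value; individual queries can still override it via their maxNumberOfPlans attribute):

```shell
# Stop the optimizer after at most 32 candidate plans per query.
arangod --query.optimizer-max-plans=32
```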
--query.optimizer-rules
Type: string…
Enable or disable specific optimizer rules by default. Specify the rule name prefixed with -
for disabling, or +
for enabling.
Show details
You can use this option to selectively enable or disable AQL query optimizer rules by default. You can specify the option multiple times.
For example, to turn off the rules use-indexes-for-sort
and
reduce-extraction-to-projection
by default, use the following:
--query.optimizer-rules "-use-indexes-for-sort" --query.optimizer-rules "-reduce-extraction-to-projection"
The purpose of this startup option is to be able to enable potential future experimental optimizer rules, which may be shipped in a disabled-by-default state.
--query.parallelize-traversals
Type: boolean
Whether to enable traversal parallelization.
This option can be specified without a value to enable it.
Default: true
--query.plan-cache-invalidation-time
Introduced in: v3.12.4
Type: double
The time in seconds after which a query plan is invalidated in the query plan cache.
Default: 900
--query.plan-cache-max-entries
Introduced in: v3.12.4
Type: uint64
The maximum number of plans in query plan cache per database.
Default: 128
--query.plan-cache-max-entry-size
Introduced in: v3.12.4
Type: uint64
The maximum size of an individual entry in the query plan cache in each database.
Default: 2097152
--query.plan-cache-max-memory-usage
Introduced in: v3.12.4
Type: uint64
The maximum allowed memory usage for the query plan cache in each database.
Default: 8388608
--query.registry-ttl
Type: double
The default time-to-live of cursors and query snippets (in seconds). If set to 0 or lower, the value defaults to 30 for single server instances and 600 for Coordinator instances.
--query.require-with
Introduced in: v3.7.11, v3.8.0
Type: boolean
Whether AQL queries should require the WITH collection-name
clause even on single servers (enable this option to remove the behavior difference between single servers and clusters).
This option can be specified without a value to enable it.
Show details
If set to true
, AQL queries in single server
mode also require WITH
clauses in AQL queries where a cluster installation
would require them.
The option is set to false
by default, but you can turn it on in single
servers to remove this behavior difference between single servers and clusters,
making a later transition from single server to cluster easier.
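A minimal sketch of enabling cluster-like WITH requirements on a single server:

```shell
# Queries that read from collections via traversals etc. now need
# explicit WITH clauses, just like in a cluster.
arangod --query.require-with=true
```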
--query.slow-streaming-threshold
Type: double
The threshold for slow streaming AQL queries (in seconds).
Default: 10
Show details
You can control after what execution time streaming AQL queries are considered “slow” with this option. It exists to give streaming queries a separate, potentially higher timeout value than for regular queries. Streaming queries are often executed in lockstep with application data processing logic, which then also accounts for the queries’ runtime. It is thus expected that the lifetime of streaming queries is longer than for regular queries.
--query.slow-threshold
Type: double
The threshold for slow AQL queries (in seconds).
Default: 10
Show details
You can control after what execution time an AQL query is considered “slow” with this option. Any slow queries that exceed the specified execution time are logged when they are finished.
You can turn off the tracking of slow queries entirely by setting the option
--query.tracking
to false
.
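The two slow-query thresholds can be tuned independently; a sketch with example values:

```shell
# Consider regular queries slow after 5 seconds and streaming queries
# slow after 30 seconds; keep query tracking enabled so slow queries
# are logged when they finish (example values).
arangod \
  --query.slow-threshold=5 \
  --query.slow-streaming-threshold=30 \
  --query.tracking=true
```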
--query.smart-joins
Type: boolean
Whether to enable the SmartJoins query optimization.
This option can be specified without a value to enable it.
Default: true
--query.tracking
Type: boolean
Whether to track queries.
This option can be specified without a value to enable it.
Default: true
--query.tracking-slow-queries
Type: boolean
Whether to track slow queries.
This option can be specified without a value to enable it.
Default: true
--query.tracking-with-bindvars
Type: boolean
Whether to track the bind variables of AQL queries.
This option can be specified without a value to enable it.
Default: true
Show details
If set to true
, then the bind variables are
tracked and shown for all running and slow AQL queries. This also enables the
display of bind variable values in the list of cached AQL query results. This
option only has an effect if --query.tracking
is set to true
or if the query
results cache is used.
You can disable tracking and displaying bind variable values by setting the
option to false
.
--query.tracking-with-datasources
Type: boolean
Whether to track data sources of AQL queries.
This option can be specified without a value to enable it.
--query.tracking-with-querystring
Type: boolean
Whether to track the query string.
This option can be specified without a value to enable it.
Default: true
random
--random.generator
Type: uint32
The random number generator to use (1 = MERSENNE, 2 = RANDOM, 3 = URANDOM, 4 = COMBINED). The options 2, 3, and 4 are deprecated and will be removed in a future version.
Default: 1
Possible values: 1, 2, 3, 4
Show details
1: a pseudo-random number generator using an implementation of the Mersenne Twister MT19937 algorithm
2: use a blocking random (or pseudo-random) number generator
3: use the non-blocking random (or pseudo-random) number generator supplied by the operating system
4: a combination of the blocking random number generator and the Mersenne Twister
rclone
--rclone.argument
Introduced in: v3.9.11, v3.10.7, v3.11.1
Type: string…
Prepend custom arguments to rclone.
Show details
You can add custom arguments to invocations of rclone, which is called for uploading and downloading Hot Backups. For example, you can enable debug logging to a separate file on startup as follows:
arangod --rclone.argument "--log-level=DEBUG" --rclone.argument "--log-file=rclone.log" ...
If you use the ArangoDB Starter, you can utilize the ARANGODB_SERVER_DIR
environment variable that it sets to generate separate log files for every
cluster node, like --all.rclone.argument="--log-file=@ARANGODB_SERVER_DIR@/rclone.log".
--rclone.executable
Type: string
Path to the rclone executable used for uploading and downloading of Hot Backups.
replication
--replication.auto-repair-revision-trees
Introduced in: v3.10.6
Type: boolean
Whether to automatically repair revision trees of shards after too many shard synchronization failures.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers only.
--replication.auto-start
Type: boolean
Enable or disable the automatic start of replication appliers.
This option can be specified without a value to enable it.
Default: true
--replication.connect-timeout
Type: double
The default timeout value for replication connection attempts (in seconds).
Default: 10
--replication.max-parallel-tailing-invocations
Type: uint64
The maximum number of concurrently allowed WAL tailing invocations (0 = unlimited).
--replication.quick-keys-limit
Type: uint64
Limit at which ‘quick’ calls to the replication keys API return only the document count for the second run.
Default: 1000000
--replication.request-timeout
Type: double
The default timeout value for replication requests (in seconds).
Default: 600
--replication.sync-by-revision
Type: boolean
Whether to use the newer revision-based replication protocol.
This option can be specified without a value to enable it.
Default: true
rocksdb
--rocksdb.allow-fallocate
Type: boolean
Whether to allow RocksDB to use fallocate calls. If disabled, fallocate calls are bypassed and no pre-allocation is done.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
Show details
Preallocation is turned on by default, but you can
turn it off for operating system versions that are known to have issues with it.
This option only has an effect on operating systems that support
fallocate
.
--rocksdb.auto-fill-index-caches-on-startup
Introduced in: v3.9.6, v3.10.2
Type: boolean
Whether to automatically fill the in-memory index caches with entries from edge indexes and cache-enabled persistent indexes on server startup.
This option can be specified without a value to enable it.
Effective on DB-Servers and Single Servers only.
Show details
Enabling this option may cause additional CPU and
I/O load. You can limit how many index filling operations can execute
concurrently with the --rocksdb.max-concurrent-index-fill-tasks
startup
option.
--rocksdb.auto-flush-check-interval
Introduced in: v3.10.5
Type: double
The interval (in seconds) in which auto-flushes of WAL and column family data is executed.
Default: 1800
Effective on DB-Servers and Single Servers only.
--rocksdb.auto-flush-min-live-wal-files
Introduced in: v3.10.5
Type: uint64
The minimum number of live WAL files that triggers an auto-flush of WAL and column family data.
Default: 20
Effective on DB-Servers and Single Servers only.
--rocksdb.auto-refill-index-caches-on-followers
Introduced in: v3.10.5
Type: boolean
Whether or not to automatically (re-)fill the in-memory index caches on followers as well.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers and Single Servers only.
Show details
Set this to false
to only (re-)fill in-memory
index caches on leaders and save memory on followers.
Note that the value of this option should be identical for all DB-Servers.
--rocksdb.auto-refill-index-caches-on-modify
Introduced in: v3.9.6, v3.10.2
Type: boolean
Whether to automatically (re-)fill the in-memory index caches with entries from edge indexes and cache-enabled persistent indexes on insert/update/replace/remove operations by default.
This option can be specified without a value to enable it.
Effective on DB-Servers and Single Servers only.
Show details
When documents are added, modified, or removed, these changes are tracked and a background thread tries to update the index caches accordingly if the feature is enabled, by adding new, updating existing, or deleting and refilling cache entries.
You can enable the feature for individual INSERT
, UPDATE
, REPLACE
, and
REMOVE
operations in AQL queries, for individual document API requests that
insert, update, replace, or remove single or multiple documents, as well
as enable it by default using this startup option.
The background refilling is done on a best-effort basis and is not guaranteed to succeed, for example, if there is no memory available for the cache subsystem, or during cache grow/shrink operations. A background thread is used so that foreground write operations are not slowed down significantly. The refilling may still cause additional I/O activity to look up data from the storage engine to repopulate the cache.
--rocksdb.auto-refill-index-caches-queue-capacity
Introduced in: v3.9.6, v3.10.2
Type: uint64
How many changes can be queued at most for automatically refilling the index caches.
Default: 131072
Effective on DB-Servers and Single Servers only.
Show details
This option restricts how many cache entries the background thread for (re-)filling the in-memory index caches can queue at most. This limits the memory usage for the case of the background thread being slower than other operations that invalidate cache entries of edge indexes or cache-enabled persistent indexes.
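The auto-refill and auto-fill options work together; a sketch combining them with example values:

```shell
# Refill index caches on writes, pre-fill them on startup, allow a larger
# refill queue, and run two fill tasks concurrently (example values).
arangod \
  --rocksdb.auto-refill-index-caches-on-modify=true \
  --rocksdb.auto-refill-index-caches-queue-capacity=262144 \
  --rocksdb.auto-fill-index-caches-on-startup=true \
  --rocksdb.max-concurrent-index-fill-tasks=2
```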
--rocksdb.blob-compression-type
Introduced in: v3.11.0
Experimental
Type: string
The compression algorithm to use for blob data in the documents column family. Requires --rocksdb.enable-blob-files
.
Default: lz4
Possible values: “lz4”, “lz4hc”, “none”, “snappy”
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.blob-file-size
Introduced in: v3.11.0
Experimental
Type: uint64
The size limit for blob files in the documents column family (in bytes). Requires --rocksdb.enable-blob-files
.
Default: 1073741824
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.blob-file-starting-level
Introduced in: v3.12.6
Experimental
Type: uint32
The level from which on to use blob files in the documents column family. Requires --rocksdb.enable-blob-files
.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.blob-garbage-collection-age-cutoff
Introduced in: v3.11.0
Experimental
Type: double
The age cutoff for garbage collecting blob files in the documents column family (percentage value from 0 to 1 determines how many blob files are garbage collected during compaction). Requires --rocksdb.enable-blob-files
and --rocksdb.enable-blob-garbage-collection
.
Default: 0.25
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.blob-garbage-collection-force-threshold
Introduced in: v3.11.0
Experimental
Type: double
The garbage ratio threshold for scheduling targeted compactions for the oldest blob files in the documents column family (percentage value between 0 and 1). Requires --rocksdb.enable-blob-files
and --rocksdb.enable-blob-garbage-collection
.
Default: 1
Effective on DB-Servers, Agents and Single Servers only.
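The blob-related options depend on each other; a sketch enabling the experimental blob storage with example values:

```shell
# Experimental: store documents larger than 1 KiB in blob files and
# garbage-collect blobs once half of a file's blobs are garbage
# (example values, requires --rocksdb.enable-blob-files).
arangod \
  --rocksdb.enable-blob-files=true \
  --rocksdb.min-blob-size=1024 \
  --rocksdb.enable-blob-garbage-collection=true \
  --rocksdb.blob-garbage-collection-age-cutoff=0.5
```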
--rocksdb.block-align-data-blocks
Type: boolean
If enabled, data blocks are aligned on the lesser of page size and block size.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
This may waste some memory but may reduce the number of cross-page I/O operations.
--rocksdb.block-cache-estimated-entry-charge
Introduced in: v3.12.6
Experimental
Type: uint64
The estimated charge of cache entries (in bytes) for the hyper-clock cache.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.block-cache-jemalloc-allocator
Introduced in: v3.11.0
Experimental
Type: boolean
Use jemalloc-based memory allocator for RocksDB block cache.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
The jemalloc-based memory allocator for the RocksDB block cache will also exclude the block cache contents from coredumps, potentially making generated coredumps a lot smaller. In order to use this option, the executable needs to be compiled with jemalloc support (which is the default on Linux).
--rocksdb.block-cache-shard-bits
Type: int64
The number of shard bits to use for the block cache (-1 = default value).
Default: -1
Effective on DB-Servers, Agents and Single Servers only.
Show details
The number of bits used to shard the block cache to allow concurrent operations. To keep individual shards at a reasonable size (i.e. at least 512 KiB), keep this value to at most log2(block-cache-size / 512 KiB). The default is derived from the block cache size accordingly (block-cache-size / 2^19 shards).
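The relationship between cache size and shard bits can be sketched with shell arithmetic: for a hypothetical 8 GiB block cache, the largest shard-bits value that still keeps every shard at 512 KiB or more is 14 (2^14 shards of 512 KiB each).

```shell
# Find the maximum shard bits so that block-cache-size / 2^bits >= 512 KiB.
cache=$((8 * 1024 * 1024 * 1024))   # 8 GiB block cache (example value)
min_shard=$((512 * 1024))           # 512 KiB minimum shard size
bits=0
while [ $((cache >> (bits + 1))) -ge "$min_shard" ]; do
  bits=$((bits + 1))
done
echo "$bits"   # 14
```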
--rocksdb.block-cache-size
Type: uint64
The size of block cache (in bytes).
Default: dynamic (e.g. 9282682060
)
Effective on DB-Servers, Agents and Single Servers only.
Show details
This is the maximum size of the block cache in
bytes. Increasing this value may improve performance. If there is more than
4 GiB of RAM in the system, the default value is
(system RAM size - 2GiB) * 0.3
.
For systems with less RAM, the default values are:
- 512 MiB for systems with between 2 and 4 GiB of RAM.
- 256 MiB for systems with between 1 and 2 GiB of RAM.
- 128 MiB for systems with less than 1 GiB of RAM.
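For example, on a hypothetical machine with 16 GiB of RAM, the default block cache size works out to roughly 4.2 GiB:

```shell
# Default block cache size for systems with more than 4 GiB of RAM:
# (system RAM size - 2 GiB) * 0.3, computed here with integer arithmetic.
ram=$((16 * 1024 * 1024 * 1024))    # 16 GiB of RAM (example value)
cache=$(( (ram - 2 * 1024 * 1024 * 1024) * 3 / 10 ))
echo "$cache"   # 4509715660 bytes, about 4.2 GiB
```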
--rocksdb.block-cache-type
Introduced in: v3.12.6
Type: string
The block cache type to use (note: the ‘hyper-clock’ cache type is experimental).
Default: lru
Possible values: “hyper-clock”, “lru”
--rocksdb.bloom-filter-bits-per-key
Introduced in: v3.10.3
Type: double
The average number of bits to use per key in a Bloom filter.
Default: 10
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.cache-index-and-filter-blocks
Type: boolean
If enabled, index and filter blocks are cached in the RocksDB block cache and count towards its quota.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
Show details
If you set this option to true
, RocksDB tracks
all loaded index and filter blocks in the block cache, so that they count
towards RocksDB’s block cache memory limit.
If you set this option to false
, the memory usage for index and filter blocks
is not accounted for.
The default value of --rocksdb.cache-index-and-filter-blocks
was false
in
versions before 3.10, and was changed to true
from version 3.10 onwards.
To improve stability of memory usage and avoid untracked memory allocations by
RocksDB, it is recommended to set this option to true
. Note that tracking
index and filter blocks leaves less room for other data in the block cache, so
in case servers have unused RAM capacity available, it may be useful to increase
the overall size of the block cache.
--rocksdb.cache-index-and-filter-blocks-with-high-priority
Type: boolean
If enabled and --rocksdb.cache-index-and-filter-blocks
is also enabled, cache index and filter blocks with high priority, making index and filter blocks less likely to be evicted than data blocks.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.checksum-type
Introduced in: v3.10.0
Type: string
The checksum type to use for table files.
Default: xxHash64
Possible values: “XXH3”, “crc32c”, “xxHash”, “xxHash64”
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.compaction-read-ahead-size
Type: uint64
If non-zero, bigger reads are performed when doing compaction. If you run RocksDB on spinning disks, you should set this to at least 2 MB. That way, RocksDB’s compaction does sequential instead of random reads.
Default: 8388608
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.compaction-style
Introduced in: v3.10.0
Type: string
The compaction style which is used to pick the next file(s) to be compacted (note: all styles except ’level’ are experimental).
Default: level
Possible values: “fifo”, “level”, “none”, “universal”
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.compression-type
Introduced in: v3.10.0
Type: string
The compression algorithm to use within RocksDB.
Default: lz4
Possible values: “lz4”, “lz4hc”, “none”, “snappy”
--rocksdb.create-sha-files
Type: boolean
Whether to enable the generation of sha256 files for each .sst file.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers and Single Servers only.
--rocksdb.debug-logging
Type: boolean
Whether to enable RocksDB debug logging.
This option can be specified without a value to enable it.
Effective on DB-Servers and Single Servers only.
Show details
If set to true
, enables verbose logging of
RocksDB’s actions into the logfile written by ArangoDB (if the
--rocksdb.use-file-logging
option is off), or RocksDB’s own log (if the
--rocksdb.use-file-logging
option is on).
This option is turned off by default, but you can enable it for debugging RocksDB internals and performance.
--rocksdb.delayed-write-rate
Type: uint64
Limit the write rate to the database (in bytes per second) when writing to the last allowed memtable (if more than 3 memtables are allowed), or if a certain number of level-0 files is surpassed and writes need to be slowed down.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.dynamic-level-bytes
Type: boolean
Whether to determine the number of bytes for each level dynamically to minimize space amplification.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
Show details
If set to true
, the amount of data in each level
of the LSM tree is determined dynamically to minimize the space amplification.
Otherwise, the level sizes are fixed. The dynamic sizing allows RocksDB to
maintain a well-structured LSM tree regardless of total data size.
--rocksdb.enable-blob-cache
Introduced in: v3.12.6
Experimental
Type: boolean
Enable caching of blobs in the block cache for the documents column family.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.enable-blob-files
Introduced in: v3.11.0
Experimental
Type: boolean
Enable blob files for the documents column family.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.enable-blob-garbage-collection
Introduced in: v3.11.0
Experimental
Type: boolean
Enable blob garbage collection during compaction in the documents column family. Requires --rocksdb.enable-blob-files
.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.enable-index-compression
Introduced in: v3.10.0
Type: boolean
Enable index compression.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.enable-pipelined-write
Type: boolean
If enabled, use a two stage write queue for WAL writes and memtable writes.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.enable-statistics
Type: boolean
Whether RocksDB statistics should be enabled.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.encryption-gen-internal-key
Type: boolean
Generate an internal encryption-at-rest key.
This option can be specified without a value to enable it.
--rocksdb.encryption-hardware-acceleration
Introduced in: v3.8.0
Type: boolean
Use Intel intrinsics-based encryption, requiring a CPU with the AES-NI instruction set. If turned off, then OpenSSL is used, which may use hardware-accelerated encryption, too.
This option can be specified without a value to enable it.
Default: true
--rocksdb.encryption-key-generator
Type: string
A program providing the encryption key on stdout. If set, encryption is enabled.
--rocksdb.encryption-key-rotation
Type: boolean
Allow encryption key rotation.
This option can be specified without a value to enable it.
--rocksdb.encryption-keyfile
Type: string
A file containing an encryption key. If set, encryption is enabled.
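ArangoDB's encryption at rest expects a 32-byte (AES-256) key. A keyfile could be created and used as follows; the path is a hypothetical example:

```shell
# Create a random 32-byte key and start the server with encryption
# at rest enabled (keyfile path is an example).
head -c 32 /dev/urandom > /secure/arangodb.keyfile
chmod 600 /secure/arangodb.keyfile
arangod --rocksdb.encryption-keyfile=/secure/arangodb.keyfile
```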
--rocksdb.encryption-keyfolder
Type: string
A folder containing all possible user encryption keys. All keys are used to decrypt the internal keystore.
--rocksdb.enforce-block-cache-size-limit
Type: boolean
If enabled, strictly enforces the block cache size limit.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
Whether the maximum size of the RocksDB block cache is strictly enforced. You can set this option to limit the memory usage of the block cache to at most the specified size. If inserting a data block into the cache would exceed the cache’s capacity, the data block is not inserted. If disabled, a data block may still get inserted into the cache. It is evicted later, but the cache may temporarily grow beyond its capacity limit.
The default value for --rocksdb.enforce-block-cache-size-limit
was false
before version 3.10, but was changed to true
from version 3.10 onwards.
To improve stability of memory usage and prevent exceeding the block cache
capacity limit (as configurable via --rocksdb.block-cache-size
), it is
recommended to set this option to true
.
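The two recommendations above can be combined at startup; a sketch with an example cache size:

```shell
# Strictly cap the block cache at 4 GiB and account index and filter
# blocks against that budget, for stable memory usage (example value).
arangod \
  --rocksdb.block-cache-size=4294967296 \
  --rocksdb.enforce-block-cache-size-limit=true \
  --rocksdb.cache-index-and-filter-blocks=true
```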
--rocksdb.exclusive-writes
Deprecated in: v3.8.0
Type: boolean
If enabled, writes are exclusive. This allows the RocksDB engine to mimic the collection locking behavior of the now-removed MMFiles storage engine, but inhibits concurrent write operations.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
This option allows you to make all writes to the RocksDB storage exclusive and therefore avoid write-write conflicts.
This option was introduced to open a way to upgrade from the legacy MMFiles storage engine to the RocksDB storage engine without modifying client application code. You should avoid enabling this option, as the use of exclusive locks on collections introduces a noticeable throughput penalty.
Note: The MMFiles engine was removed and this option is a stopgap measure only. This option is thus deprecated, and will be removed in a future version.
--rocksdb.force-legacy-comparator
Introduced in: v3.12.2
Type: boolean
If set to true
, forces a new database directory to use the legacy sorting method. This is only for testing. Don’t use.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.format-version
Introduced in: v3.10.0
Type: uint32
The table format version to use inside RocksDB.
Default: 5
Possible values: 3, 4, 5, 6
Effective on DB-Servers, Agents and Single Servers only.
Show details
Note that format version 6 can only be read by RocksDB versions >= 8.6.0. Switching to format version 6 thus makes the database files incompatible with ArangoDB versions that ship with a lower RocksDB version, in case of downgrading.
--rocksdb.intermediate-commit-count
Type: uint64
An intermediate commit is performed automatically when this number of operations is reached in a transaction, and a new transaction is started.
Default: 1000000
--rocksdb.intermediate-commit-size
Type: uint64
An intermediate commit is performed automatically when a transaction has accumulated operations of this size (in bytes), and a new transaction is started.
Default: 536870912
--rocksdb.level0-compaction-trigger
Type: int64
The number of level-0 files that triggers a compaction.
Default: 2
Effective on DB-Servers, Agents and Single Servers only.
Show details
Compaction of level-0 to level-1 is triggered when this many files exist in level-0. If you set this option to a higher number, it may help bulk writes at the expense of slowing down reads.
--rocksdb.level0-slowdown-trigger
Type: int64
The number of level-0 files that triggers a write slowdown
Default: 16
Effective on DB-Servers, Agents and Single Servers only.
Show details
When this many files accumulate in level-0, writes
are slowed down to --rocksdb.delayed-write-rate
to allow compaction to
catch up.
--rocksdb.level0-stop-trigger
Type: int64
The number of level-0 files that triggers a full write stop
Default: 256
Effective on DB-Servers, Agents and Single Servers only.
Show details
When this many files accumulate in level-0, writes are stopped to allow compaction to catch up.
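The three level-0 thresholds form a ladder (compact at 2 files, slow down at 16, stop at 256 by default). A bulk-load-oriented tuning might raise them together; the values below are examples:

```shell
# Tolerate more level-0 files before compacting, slowing down, and
# stopping writes, trading read performance for bulk write throughput.
arangod \
  --rocksdb.level0-compaction-trigger=8 \
  --rocksdb.level0-slowdown-trigger=32 \
  --rocksdb.level0-stop-trigger=512
```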
--rocksdb.limit-open-files-at-startup
Type: boolean
Limit the amount of .sst files RocksDB inspects at startup, in order to reduce the startup I/O operations.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.max-background-jobs
Type: int32
The maximum number of concurrent background jobs (compactions and flushes).
Default: dynamic (e.g. 8
)
Effective on DB-Servers, Agents and Single Servers only.
Show details
The jobs are submitted to the low priority thread pool. The default value is the number of processors in the system.
--rocksdb.max-bytes-for-level-base
Type: uint64
If not using dynamic level sizes, this controls the maximum total data size for level-1 of the LSM tree.
Default: 268435456
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.max-bytes-for-level-multiplier
Type: double
If not using dynamic level sizes, the maximum number of bytes for level L of the LSM tree can be calculated as max-bytes-for-level-base * (max-bytes-for-level-multiplier ^ (L-1))
Default: 10
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.max-concurrent-index-fill-tasks
Introduced in: v3.9.6, v3.10.2
Type: uint64
The maximum number of index fill tasks that can run concurrently on server startup.
Default: dynamic (e.g. 1
)
Effective on DB-Servers and Single Servers only.
Show details
The lower this number, the lower the impact of the index cache filling, but the longer it takes to complete.
--rocksdb.max-parallel-compactions
Type: uint64
The maximum number of parallel compactions jobs.
Default: 2
--rocksdb.max-subcompactions
Type: uint32
The maximum number of concurrent sub-jobs for a background compaction.
Default: 4
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.max-total-wal-size
Type: uint64
The maximum total size of WAL files that force a flush of stale column families.
Default: 268435456
Effective on DB-Servers, Agents and Single Servers only.
Show details
When reached, force a flush of all column families whose data is backed by the oldest WAL files. If you set this option to a low value, regular flushing of column family data from memtables is triggered, so that WAL files can be moved to the archive.
If you set this option to a high value, regular flushing is avoided but may prevent WAL files from being moved to the archive and being removed.
--rocksdb.max-transaction-size
Type: uint64
The transaction size limit (in bytes).
Default: 18446744073709551615
Show details
Transactions store all keys and values in RAM, so large transactions run the risk of causing out-of-memory situations. This setting allows you to ensure that it does not happen by limiting the size of any individual transaction. Transactions whose operations would consume more RAM than this threshold value are aborted automatically with error 32 (“resource limit exceeded”).
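The transaction size limit interacts with the intermediate commit options; a sketch with example values:

```shell
# Cap any single transaction at 512 MiB, and let large standalone
# operations auto-commit every 100,000 operations or 128 MiB
# (example values).
arangod \
  --rocksdb.max-transaction-size=536870912 \
  --rocksdb.intermediate-commit-count=100000 \
  --rocksdb.intermediate-commit-size=134217728
```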
--rocksdb.max-write-buffer-number
Type: uint64
The maximum number of write buffers that build up in memory (default: number of column families + 2 = 13 write buffers). You can only increase the number.
Default: 13
Effective on DB-Servers, Agents and Single Servers only.
Show details
If this number is reached before the buffers can be flushed, writes are slowed or stalled.
--rocksdb.max-write-buffer-number-definitions
Introduced in: v3.8.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the definitions column family
--rocksdb.max-write-buffer-number-documents
Introduced in: v3.8.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the documents column family
--rocksdb.max-write-buffer-number-edge
Introduced in: v3.8.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the edge column family
--rocksdb.max-write-buffer-number-fulltext
Introduced in: v3.8.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the fulltext column family
--rocksdb.max-write-buffer-number-geo
Introduced in: v3.8.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the geo column family
--rocksdb.max-write-buffer-number-mdi
Introduced in: v3.12.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the mdi column family
--rocksdb.max-write-buffer-number-mdi-prefixed
Introduced in: v3.12.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the mdi-prefixed column family
--rocksdb.max-write-buffer-number-primary
Introduced in: v3.8.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the primary column family
--rocksdb.max-write-buffer-number-replicated-logs
Introduced in: v3.8.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the replicated-logs column family
--rocksdb.max-write-buffer-number-vector
Introduced in: v3.12.4
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the vector column family
--rocksdb.max-write-buffer-number-vpack
Introduced in: v3.8.0
Type: uint64
If non-zero, overrides the value of --rocksdb.max-write-buffer-number
for the vpack column family
--rocksdb.max-write-buffer-size-to-maintain
Type: int64
The maximum size of immutable write buffers that build up in memory per column family. Larger values mean that more in-memory data can be used for transaction conflict checking (-1 = use automatic default value, 0 = do not keep immutable flushed write buffers, which is the default and usually correct).
Effective on DB-Servers, Agents and Single Servers only.
Show details
The default value 0
restores the memory usage
pattern of version 3.6. This makes RocksDB not keep any flushed immutable
write-buffers in memory.
--rocksdb.min-blob-size
Introduced in: v3.11.0
Experimental
Type: uint64
The size threshold for storing documents in blob files (in bytes, 0 = store all documents in blob files). Requires --rocksdb.enable-blob-files.
Default: 256
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.min-write-buffer-number-to-merge
Type: uint64
The minimum number of write buffers that are merged together before writing to storage.
Default: dynamic (e.g. 1
)
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.minimum-disk-free-bytes
Introduced in: v3.8.0
Type: uint64
The minimum number of free disk bytes for considering the server healthy in health checks (0 = disable the check).
Default: 16777216
Effective on DB-Servers and Single Servers only.
--rocksdb.minimum-disk-free-percent
Introduced in: v3.8.0
Type: double
The minimum percentage of free disk space for considering the server healthy in health checks (0 = disable the check).
Default: 0.01
Effective on DB-Servers and Single Servers only.
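Taken together, the two disk-free options above describe a simple health predicate. A minimal sketch, assuming the percentage default of 0.01 is interpreted as a fraction (i.e. 1%); the function and parameter names are illustrative, not ArangoDB source code:

```python
# Sketch of the disk-free health check described by the two options above.
# Names are illustrative, not ArangoDB source code.
def disk_considered_healthy(free_bytes: int, total_bytes: int,
                            min_free_bytes: int = 16777216,   # --rocksdb.minimum-disk-free-bytes
                            min_free_fraction: float = 0.01   # --rocksdb.minimum-disk-free-percent
                            ) -> bool:
    # A value of 0 disables the respective check.
    if min_free_bytes and free_bytes < min_free_bytes:
        return False
    if min_free_fraction and free_bytes / total_bytes < min_free_fraction:
        return False
    return True

GiB = 1024 ** 3
print(disk_considered_healthy(1 * GiB, 10 * GiB))       # True: 10% free
print(disk_considered_healthy(8 * 1024 ** 2, 10 * GiB))  # False: below the 16 MiB minimum
```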
--rocksdb.num-levels
Type: uint64
The number of levels for the database in the LSM tree.
Default: 7
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.num-threads-priority-high
Introduced in: v3.8.5
Type: uint32
The number of threads for high priority operations (e.g. flush).
Effective on DB-Servers, Agents and Single Servers only.
Show details
It is recommended to set this equal to max-background-flushes. The default value is number of processors / 2.
--rocksdb.num-threads-priority-low
Type: uint32
The number of threads for low priority operations (e.g. compaction).
Effective on DB-Servers, Agents and Single Servers only.
Show details
The default value is
number of processors / 2
.
--rocksdb.num-uncompressed-levels
Type: uint64
The number of levels that do not use compression in the LSM tree.
Default: 2
Effective on DB-Servers, Agents and Single Servers only.
Show details
Levels above the default of 2
use
compression to reduce the disk space requirements for storing data in these
levels.
--rocksdb.optimize-filters-for-hits
Type: boolean
Whether the implementation should optimize the filters mainly for cases where keys are found rather than also optimize for keys missed. You can enable the option if you know that there are very few misses or the performance in the case of misses is not important for your application.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.optimize-filters-for-memory
Introduced in: v3.12.6
Type: boolean
Optimize RocksDB bloom filters to reduce internal memory fragmentation.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.partition-files-for-documents
Introduced in: v3.12.0
Experimental
Type: boolean
If enabled, the document data for different collections/shards will end up in different .sst files.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
Show details
Enabling this option will make RocksDB’s compaction write the document data for different collections/shards into different .sst files. Otherwise the document data from different collections/shards can be mixed and written into the same .sst files.
Enabling this option usually has the benefit of making the RocksDB compaction more efficient when a lot of different collections/shards are written to in parallel. The disadvantage of enabling this option is that there can be more .sst files than when the option is turned off, and the disk space used by these .sst files can be higher than if there are fewer .sst files (this is because there is some per-.sst file overhead). In particular on deployments with many collections/shards this can lead to a very high number of .sst files, with the potential of outgrowing the maximum number of file descriptors the ArangoDB process can open. Thus the option should only be enabled on deployments with a limited number of collections/shards.
--rocksdb.partition-files-for-edge-index
Introduced in: v3.12.0
Experimental
Type: boolean
If enabled, the index data for different edge indexes will end up in different .sst files.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
Enabling this option will make RocksDB’s compaction write the edge index data for different edge collections/shards into different .sst files. Otherwise the edge index data from different edge collections/shards can be mixed and written into the same .sst files.
Enabling this option usually has the benefit of making the RocksDB compaction more efficient when a lot of different edge collections/shards are written to in parallel. The disadvantage of enabling this option is that there can be more .sst files than when the option is turned off, and the disk space used by these .sst files can be higher than if there are fewer .sst files (this is because there is some per-.sst file overhead). In particular on deployments with many edge collections/shards this can lead to a very high number of .sst files, with the potential of outgrowing the maximum number of file descriptors the ArangoDB process can open. Thus the option should only be enabled on deployments with a limited number of edge collections/shards.
--rocksdb.partition-files-for-mdi-index
Introduced in: v3.12.0
Experimental
Type: boolean
If enabled, the index data for different mdi indexes will end up in different .sst files.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
Enabling this option will make RocksDB’s compaction write the persistent index data for different mdi indexes (also indexes from different collections/shards) into different .sst files. Otherwise the persistent index data from different collections/shards/indexes can be mixed and written into the same .sst files.
Enabling this option usually has the benefit of making the RocksDB compaction more efficient when a lot of different collections/shards/indexes are written to in parallel. The disadvantage of enabling this option is that there can be more .sst files than when the option is turned off, and the disk space used by these .sst files can be higher than if there are fewer .sst files (this is because there is some per-.sst file overhead). In particular on deployments with many collections/shards/indexes this can lead to a very high number of .sst files, with the potential of outgrowing the maximum number of file descriptors the ArangoDB process can open. Thus the option should only be enabled on deployments with a limited number of collections/shards/indexes.
--rocksdb.partition-files-for-persistent-index
Introduced in: v3.12.0
Experimental
Type: boolean
If enabled, the index data for different persistent indexes will end up in different .sst files.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
Enabling this option will make RocksDB’s compaction write the persistent index data for different persistent indexes (also indexes from different collections/shards) into different .sst files. Otherwise the persistent index data from different collections/shards/indexes can be mixed and written into the same .sst files.
Enabling this option usually has the benefit of making the RocksDB compaction more efficient when a lot of different collections/shards/indexes are written to in parallel. The disadvantage of enabling this option is that there can be more .sst files than when the option is turned off, and the disk space used by these .sst files can be higher than if there are fewer .sst files (this is because there is some per-.sst file overhead). In particular on deployments with many collections/shards/indexes this can lead to a very high number of .sst files, with the potential of outgrowing the maximum number of file descriptors the ArangoDB process can open. Thus the option should only be enabled on deployments with a limited number of collections/shards/indexes.
--rocksdb.partition-files-for-primary-index
Introduced in: v3.12.0
Experimental
Type: boolean
If enabled, the primary index data for different collections/shards will end up in different .sst files.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
Enabling this option will make RocksDB’s compaction write the primary index data for different collections/shards into different .sst files. Otherwise the primary index data from different collections/shards can be mixed and written into the same .sst files.
Enabling this option usually has the benefit of making the RocksDB compaction more efficient when a lot of different collections/shards are written to in parallel. The disadvantage of enabling this option is that there can be more .sst files than when the option is turned off, and the disk space used by these .sst files can be higher than if there are fewer .sst files (this is because there is some per-.sst file overhead). In particular on deployments with many collections/shards this can lead to a very high number of .sst files, with the potential of outgrowing the maximum number of file descriptors the ArangoDB process can open. Thus the option should only be enabled on deployments with a limited number of collections/shards.
--rocksdb.partition-files-for-vector-index
Introduced in: v3.12.4
Experimental
Type: boolean
If enabled, the index data for different vector indexes will end up in different .sst files.
This option can be specified without a value to enable it.
Effective on DB-Servers and Single Servers only.
Show details
Enabling this option makes RocksDB’s compaction write the index data for different vector indexes (also indexes from different collections/shards) into different .sst files. Otherwise, the index data from different collections/shards/indexes can be mixed and written into the same .sst files.
Enabling this option usually has the benefit of making the RocksDB compaction more efficient when a lot of different collections/shards/indexes are written to in parallel. The disadvantage of enabling this option is that there can be more .sst files than when the option is disabled, and the disk space used by these .sst files can be higher than if there are fewer .sst files because there is some overhead per .sst file. For deployments with many collections/shards/indexes in particular, this can lead to a very high number of .sst files, with the potential of outgrowing the maximum number of file descriptors the ArangoDB process can open. The option should thus only be enabled for deployments with a limited number of collections/shards/indexes.
--rocksdb.pending-compactions-slowdown-trigger
Introduced in: v3.8.5
Type: uint64
The number of pending compaction bytes that triggers a write slowdown.
Default: 1073741824
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.pending-compactions-stop-trigger
Introduced in: v3.8.5
Type: uint64
The number of pending compaction bytes that triggers a full write stop.
Default: 34359738368
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.periodic-compaction-ttl
Introduced in: v3.9.3
Type: uint64
Time-to-live (in seconds) for periodic compaction of .sst files, based on the file age (0 = no periodic compaction).
Default: 86400
Effective on DB-Servers, Agents and Single Servers only.
Show details
The default value from RocksDB is ~30 days. To
avoid periodic auto-compaction and the I/O caused by it, you can set this
option to 0
.
--rocksdb.pin-l0-filter-and-index-blocks-in-cache
Type: boolean
If enabled and --rocksdb.cache-index-and-filter-blocks
is also enabled, filter and index blocks are pinned and only evicted from cache when the table reader is freed.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.pin-top-level-index-and-filter
Type: boolean
If enabled and --rocksdb.cache-index-and-filter-blocks
is also enabled, the top-level index of partitioned filter and index blocks are pinned and only evicted from cache when the table reader is freed.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.prepopulate-blob-cache
Introduced in: v3.12.6
Experimental
Type: boolean
Pre-populate the blob cache on flushes.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.prepopulate-block-cache
Introduced in: v3.10.0
Type: boolean
Pre-populate block cache on flushes.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.recycle-log-file-num
Type: uint64
If enabled, keep a pool of log files around for recycling.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.reserve-file-metadata-memory
Introduced in: v3.11.0
Type: boolean
Account for .sst file metadata memory in the block cache.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.reserve-table-builder-memory
Introduced in: v3.10.0
Type: boolean
Account for table building memory in block cache.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.reserve-table-reader-memory
Introduced in: v3.10.0
Type: boolean
Account for table reader memory in block cache.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.sync-delay-threshold
Type: uint64
The threshold for self-observation of WAL disk syncs (in milliseconds, 0 = no warnings). Any WAL disk sync longer ago than this threshold triggers a warning.
Default: 5000
Effective on DB-Servers and Single Servers only.
--rocksdb.sync-interval
Type: uint64
The interval for automatic, non-requested disk syncs (in milliseconds, 0 = turn automatic syncing off).
Default: 100
Effective on DB-Servers and Single Servers only.
Show details
Automatic synchronization of data from RocksDB’s
write-ahead logs to disk is only performed for not-yet synchronized data, and
only for operations that have been executed without the waitForSync
attribute.
--rocksdb.table-block-size
Type: uint64
The approximate size (in bytes) of the user data packed per block for uncompressed data.
Default: 16384
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.target-file-size-base
Type: uint64
Per-file target file size for compaction (in bytes). The actual target file size for each level is --rocksdb.target-file-size-base
multiplied by --rocksdb.target-file-size-multiplier
^ (level - 1).
Default: 67108864
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.target-file-size-multiplier
Type: uint64
The multiplier for --rocksdb.target-file-size-base. A value of 1 means that files in different levels have the same size.
Default: 1
Effective on DB-Servers, Agents and Single Servers only.
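The relationship between the two options above can be illustrated with a small calculation. This is a sketch of the stated formula; the helper function is not part of arangod:

```python
# Sketch of the per-level target file size formula described above
# (names are illustrative, not part of the arangod API).
def target_file_size(level: int,
                     base: int = 67108864,        # --rocksdb.target-file-size-base default (64 MiB)
                     multiplier: int = 1) -> int:  # --rocksdb.target-file-size-multiplier default
    """Target .sst file size (in bytes) for a given LSM level (level >= 1)."""
    return base * multiplier ** (level - 1)

# With the defaults (multiplier = 1), every level targets 64 MiB files:
print(target_file_size(1))  # 67108864
print(target_file_size(3))  # 67108864

# With a multiplier of 2, deeper levels target larger files:
print(target_file_size(3, multiplier=2))  # 268435456
```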
--rocksdb.throttle
Type: boolean
Enable write-throttling.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers and Single Servers only.
Show details
If enabled, dynamically throttles the ingest rate of writes if necessary to reduce chances of compactions getting too far behind and blocking incoming writes.
--rocksdb.throttle-frequency
Introduced in: v3.8.5
Type: uint64
The frequency for write-throttle calculations (in milliseconds).
Default: 1000
Effective on DB-Servers and Single Servers only.
Show details
If the throttling is enabled, it recalculates a new maximum ingestion rate with this frequency.
--rocksdb.throttle-lower-bound-bps
Introduced in: v3.8.5
Type: uint64
The lower bound for throttle’s write bandwidth (in bytes per second).
Default: 10485760
Effective on DB-Servers and Single Servers only.
--rocksdb.throttle-max-write-rate
Introduced in: v3.8.5
Type: uint64
The maximum write rate enforced by throttle (in bytes per second, 0 = unlimited).
Effective on DB-Servers and Single Servers only.
Show details
The actual write rate established by the throttling is the minimum of this value and the value that the regular throttle calculation produces, i.e. this option can be used to set a fixed upper bound on the write rate.
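The effective rate described above is simply the minimum of the configured cap and the throttle's own calculation. A hedged sketch with illustrative names:

```python
# Sketch of the effective write rate described above: the minimum of the
# configured maximum and the throttle's own calculation.
# Names are illustrative, not ArangoDB source code.
def effective_write_rate(configured_max: int, calculated_rate: int) -> int:
    if configured_max == 0:   # 0 means no fixed upper bound
        return calculated_rate
    return min(configured_max, calculated_rate)

print(effective_write_rate(0, 50_000_000))           # 50000000 (unlimited cap)
print(effective_write_rate(20_000_000, 50_000_000))  # 20000000 (cap applies)
```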
--rocksdb.throttle-scaling-factor
Introduced in: v3.8.5
Type: uint64
The adaptiveness scaling factor for write-throttle calculations.
Default: 17
Effective on DB-Servers and Single Servers only.
Show details
There is normally no need to change this value.
--rocksdb.throttle-slots
Introduced in: v3.8.5
Type: uint64
The number of historic metrics to use for throttle value calculation.
Default: 120
Effective on DB-Servers and Single Servers only.
Show details
If throttling is enabled, this parameter controls the number of previous intervals to use for throttle value calculation.
--rocksdb.throttle-slow-down-writes-trigger
Introduced in: v3.8.5
Type: uint64
The number of level 0 files whose payload is not considered in throttle calculations when penalizing the presence of L0 files.
Default: 8
Effective on DB-Servers and Single Servers only.
Show details
There is normally no need to change this value.
--rocksdb.total-write-buffer-size
Type: uint64
The maximum total size of in-memory write buffers (0 = unbounded).
Default: dynamic (e.g. 12376909414
)
Effective on DB-Servers, Agents and Single Servers only.
Show details
The total amount of data to build up in all in-memory buffers (backed by log files). You can use this option together with the block cache size configuration option to limit memory usage.
If set to 0
, the memory usage is not limited.
If set to a value larger than 0
, this caps memory usage for write buffers but
may have an effect on performance. If there is more than 4 GiB of RAM in the
system, the default value is (system RAM size - 2 GiB) * 0.5
.
For systems with less RAM, the default values are:
- 512 MiB for systems with between 1 and 4 GiB of RAM.
- 256 MiB for systems with less than 1 GiB of RAM.
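The default-value rules above can be sketched as follows. The exact boundary handling at 1 GiB and 4 GiB is an assumption, and the function name is made up:

```python
# Illustrative computation of the dynamic default for
# --rocksdb.total-write-buffer-size, per the tiers described above.
GiB = 1024 ** 3
MiB = 1024 ** 2

def default_total_write_buffer_size(ram_bytes: int) -> int:
    if ram_bytes > 4 * GiB:
        # More than 4 GiB of RAM: (system RAM size - 2 GiB) * 0.5
        return int((ram_bytes - 2 * GiB) * 0.5)
    if ram_bytes >= 1 * GiB:
        return 512 * MiB   # between 1 and 4 GiB of RAM
    return 256 * MiB       # less than 1 GiB of RAM

print(default_total_write_buffer_size(32 * GiB))  # 16106127360 (15 GiB)
```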
--rocksdb.transaction-lock-stripes
Introduced in: v3.9.2
Type: uint64
The number of lock stripes to use for transaction locks.
Default: dynamic (e.g. 16
)
Effective on DB-Servers, Agents and Single Servers only.
Show details
You can control the number of lock stripes to use for RocksDB’s transaction lock manager with this option. You can use higher values to reduce a potential contention in the lock manager.
The option defaults to the number of available cores, but is increased to a
value of 16
if the number of cores is lower.
--rocksdb.transaction-lock-timeout
Deprecated in: v3.12.6
Type: int64
If positive, specifies the wait timeout in milliseconds when a transaction attempts to lock a document. A negative value is not recommended as it can lead to deadlocks (0 = no waiting, < 0 = no timeout). This option is deprecated since the lock timeout is controlled internally for different cases.
Default: 1000
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.use-direct-io-for-flush-and-compaction
Type: boolean
Use O_DIRECT for writing files for flush and compaction.
This option can be specified without a value to enable it.
--rocksdb.use-direct-reads
Type: boolean
Use O_DIRECT for reading files.
This option can be specified without a value to enable it.
--rocksdb.use-file-logging
Type: boolean
Use a file-based logger for RocksDB’s own logs.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
Show details
If set to true
, enables writing of RocksDB’s own
informational log files into RocksDB’s database directory.
This option is turned off by default, but you can enable it for debugging RocksDB internals and performance.
--rocksdb.use-fsync
Type: boolean
Whether to use fsync calls when writing to disk (set to false for issuing fdatasync calls only).
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.use-io_uring
Introduced in: v3.12.0
Type: boolean
Check for existence of io_uring at startup and use it if available. Should be set to false only to opt out of using io_uring.
This option can be specified without a value to enable it.
Default: true
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.verify-sst
Introduced in: v3.11.0
Type: boolean
Verify the validity of .sst files present in the engine-rocksdb
directory on startup.
This is a command, no value needs to be specified. The process terminates after executing the command.
Effective on DB-Servers, Agents and Single Servers only.
Show details
If set to true
, during startup, all .sst files
in the engine-rocksdb
folder in the database directory are checked for
potential corruption and errors. The server process stops after the check and
returns an exit code of 0
if the validation was successful, or a non-zero
exit code if there is an error in any of the .sst files.
--rocksdb.wal-archive-size-limit
Deprecated in: v3.12.0
Type: uint64
The maximum total size (in bytes) of archived WAL files to keep on the leader (0 = unlimited).
Effective on DB-Servers and Single Servers only.
Show details
A value of 0
does not restrict the size of the
archive, so the leader removes archived WAL files when there are no replication
clients needing them. Any non-zero value restricts the size of the WAL files
archive to about the specified value and triggers WAL archive file deletion once
the threshold is reached. You can use this to get rid of archived WAL files in
a disk size-constrained environment.
Note: The value is only a threshold, so the archive may get bigger than
the configured value until the background thread actually deletes files from
the archive. Also note that deletion from the archive only kicks in after
--rocksdb.wal-file-timeout-initial
seconds have elapsed after server start.
Archived WAL files are normally deleted automatically after a short while when there is no follower attached that may read from the archive. However, when there are followers attached that may read from the archive, WAL files normally remain in the archive until their contents have been streamed to the followers. If there are slow followers that cannot catch up, the WAL file archive grows over time.
You can use this option to force the deletion of WAL files from the archive even if there are followers attached that may want to read the archive. If the option is set and a leader deletes files from the archive that followers still need, this aborts the replication on the followers. Followers can restart the replication by doing a resync, but they may not be able to catch up if WAL file deletion happens too early.
Thus it is best to leave this option at its default value of 0
except in cases
when disk size is very constrained and no replication is used.
--rocksdb.wal-directory
Type: string
Absolute path for RocksDB WAL files. If not set, a subdirectory journals
inside the database directory is used.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.wal-file-timeout
Type: double
The timeout after which unused WAL files are deleted (in seconds).
Default: 10
Effective on DB-Servers and Single Servers only.
Show details
Data of ongoing transactions is stored in RAM. Transactions that get too big (in terms of number of operations involved or the total size of data created or modified by the transaction) are committed automatically. Effectively, this means that big user transactions are split into multiple smaller RocksDB transactions that are committed individually. The entire user transaction does not necessarily have ACID properties in this case.
--rocksdb.wal-file-timeout-initial
Type: double
The initial timeout (in seconds) after which unused WAL files deletion kicks in after server start.
Default: 60
Effective on DB-Servers and Single Servers only.
Show details
If you decrease the value, the server starts the removal of obsolete WAL files earlier after server start. This is useful in testing environments that are space-restricted and do not require keeping much WAL file data at all.
--rocksdb.wal-recovery-skip-corrupted
Type: boolean
Skip corrupted records in WAL recovery.
This option can be specified without a value to enable it.
Effective on DB-Servers, Agents and Single Servers only.
--rocksdb.write-buffer-size
Type: uint64
The amount of data to build up in memory before converting to a sorted on-disk file (0 = disabled).
Default: 67108864
Effective on DB-Servers, Agents and Single Servers only.
Show details
The amount of data to build up in each in-memory buffer (backed by a log file) before closing the buffer and queuing it to be flushed to standard storage. Larger values than the default may improve performance, especially for bulk loads.
server
--server.allow-use-database
Type: boolean
Allow changing the database in REST actions. Only needed internally for unit tests.
This option can be specified without a value to enable it.
--server.api-call-recording
Type: boolean
Whether to record recent API calls for debugging purposes.
This option can be specified without a value to enable it.
Default: true
--server.api-recording-memory-limit
Type: uint64
Size limit for the list of API call records.
Default: 26214400
--server.aql-query-recording
Type: boolean
Whether to record recent AQL queries for debugging purposes.
This option can be specified without a value to enable it.
Default: true
--server.aql-recording-memory-limit
Type: uint64
Size limit for the list of AQL query records.
Default: 26214400
--server.authentication
Type: boolean
Whether to use authentication for all client requests.
This option can be specified without a value to enable it.
Default: true
Show details
You can set this option to false
to turn off
authentication on the server-side, so that all clients can execute any action
without authorization and privilege checks. You should only do this if you bind
the server to localhost
to not expose it to the public internet.
--server.authentication-system-only
Type: boolean
Use HTTP authentication only for requests to /_api and /_admin endpoints.
This option can be specified without a value to enable it.
Default: true
Show details
If you set this option to true
, then HTTP
authentication is only required for requests going to URLs starting with /_
,
but not for other endpoints. You can thus use this option to expose custom APIs
of Foxx microservices without HTTP authentication to the outside world, but
prevent unauthorized access of ArangoDB APIs and the admin interface.
Note that checking the URL is performed after any database name prefix has been
removed. That means, if the request URL is /_db/_system/myapp/myaction
, the
URL /myapp/myaction
is checked for the /_
prefix.
Authentication still needs to be enabled for the server via
--server.authentication
in order for HTTP authentication to be forced for the
ArangoDB APIs and the web interface. Only setting
--server.authentication-system-only
is not enough.
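The prefix check described above, database-name stripping included, can be sketched like this (the function is illustrative, not ArangoDB source code):

```python
# Sketch of the URL check described above: an optional /_db/<database-name>
# prefix is removed before testing for the /_ prefix.
# Names are illustrative, not ArangoDB source code.
import re

def requires_authentication(url: str) -> bool:
    # Strip an optional /_db/<database-name> prefix first.
    stripped = re.sub(r"^/_db/[^/]+", "", url)
    return stripped.startswith("/_")

print(requires_authentication("/_db/_system/myapp/myaction"))  # False: Foxx route
print(requires_authentication("/_db/_system/_api/version"))    # True: ArangoDB API
print(requires_authentication("/_admin/status"))               # True: admin endpoint
```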
--server.authentication-timeout
Type: double
The timeout for the authentication cache (in seconds, 0 = indefinitely).
--server.authentication-unix-sockets
Type: boolean
Whether to use authentication for requests via UNIX domain sockets.
This option can be specified without a value to enable it.
Default: true
Show details
If you set this option to false
, authentication
for requests coming in via UNIX domain sockets is turned off on the server-side.
Clients located on the same host as the ArangoDB server can use UNIX domain
sockets to connect to the server without authentication. Requests coming in by
other means (e.g. TCP/IP) are not affected by this option.
--server.cluster-metrics-timeout
Introduced in: v3.10.0
Type: uint32
Cluster metrics polling timeout (in seconds).
--server.count-descriptors-interval
Introduced in: v3.11.0
Type: uint64
The interval (in milliseconds) at which the number of open file descriptors for the process is determined (0 = disable counting).
Default: 60000
--server.descriptors-minimum
Introduced in: v3.12.0
Type: uint64
The minimum number of file descriptors needed to start (0 = no minimum).
Default: 8192
--server.early-connections
Introduced in: v3.10.0
Type: boolean
Allow requests to a limited set of APIs early during the server startup.
This option can be specified without a value to enable it.
--server.endpoint
Type: string…
Endpoint for client requests (e.g. http://127.0.0.1:8529
, or https://192.168.1.1:8529
)
Default: tcp://0.0.0.0:8529
Show details
You can specify this option multiple times to let the ArangoDB server listen for incoming requests on multiple endpoints.
The endpoints are normally specified either in ArangoDB’s configuration file or
on the command-line with --server.endpoint
. ArangoDB supports different types
of endpoints:
- tcp://ipv4-address:port - TCP/IP endpoint, using IPv4
- tcp://[ipv6-address]:port - TCP/IP endpoint, using IPv6
- ssl://ipv4-address:port - TCP/IP endpoint, using IPv4, SSL encryption
- ssl://[ipv6-address]:port - TCP/IP endpoint, using IPv6, SSL encryption
- unix:///path/to/socket - Unix domain socket endpoint
You can use http://
as an alias for tcp://
, and https://
as an alias for
ssl://
.
If a TCP/IP endpoint is specified without a port number, then the default port (8529) is used.
If you use SSL-encrypted endpoints, you must also supply the path to a server
certificate using the --ssl.keyfile
option.
arangod --server.endpoint tcp://127.0.0.1:8529 \
--server.endpoint ssl://127.0.0.1:8530 \
--ssl.keyfile server.pem /tmp/data-dir
...
2022-11-07T10:39:30Z [1] INFO [6ea38] {general} using endpoint 'http+ssl://0.0.0.0:8530' for ssl-encrypted requests
2022-11-07T10:39:30Z [1] INFO [6ea38] {general} using endpoint 'http+tcp://0.0.0.0:8529' for non-encrypted requests
2022-11-07T10:39:31Z [1] INFO [cf3f4] {general} ArangoDB (version 3.10.0 [linux]) is ready for business. Have fun!
On one specific ethernet interface, each port can only be bound
exactly once. You can look up your available interfaces using the ifconfig
command on Linux. The general names of the
interfaces differ between operating systems and the hardware they run on.
However, every host typically has a so-called loopback interface, which is a
virtual interface. By convention, it always has the address 127.0.0.1
(IPv4)
or ::1
(IPv6), and can only be reached from the very same host. Ethernet
interfaces usually have names like eth0
, wlan0
, eth1:17
, le0
.
To find out which services already use ports (so ArangoDB can’t bind them
anymore), you can use the netstat
command. It behaves a little differently on
each platform; run it with -lnpt
on Linux for valuable information.
ArangoDB can also do a so-called broadcast bind using tcp://0.0.0.0:8529
.
This way, it is reachable on all interfaces of the host. This may be useful on
development systems that frequently change their network setup, like laptops.
ArangoDB can also listen to IPv6 link-local addresses via adding the zone ID
to the IPv6 address in the form [ipv6-link-local-address%zone-id]
. However,
what you probably want instead is to bind to a local IPv6 address. Local IPv6
addresses start with fd
. If you only see a fe80:
IPv6 address in your
interface configuration but no IPv6 address starting with fd
, your interface
has no local IPv6 address assigned. You can read more about IPv6 link-local
addresses here: https://en.wikipedia.org/wiki/Link-local_address#IPv6.
To bind to a link-local and local IPv6 address, run ifconfig
or equivalent
command. The command lists all interfaces and assigned IP addresses. The
link-local address may be fe80::6257:18ff:fe82:3ec6%eth0
(IPv6 address plus
interface name). A local IPv6 address may be fd12:3456::789a
.
To bind ArangoDB to it, start arangod
with
--server.endpoint tcp://[fe80::6257:18ff:fe82:3ec6%eth0]:8529
.
You can use telnet
to test the connection.
--server.ensure-whitespace-metrics-format
Introduced in: v3.10.6
Type: boolean
Set to true
to ensure whitespace between the exported metric value and the preceding token (metric name or labels) in the metrics output.
This option can be specified without a value to enable it.
Default: true
Show details
Using the whitespace characters in the output may be required to make the metrics output compatible with some processing tools, although Prometheus itself doesn’t need it.
--server.export-metrics-api
Type: boolean
Whether to enable the metrics API.
This option can be specified without a value to enable it.
Default: true
--server.export-read-write-metrics
Type: boolean
Whether to enable metrics for document reads and writes.
This option can be specified without a value to enable it.
Show details
Enabling this option exposes the following
additional metrics via the GET /_admin/metrics/v2
endpoint:
arangodb_document_writes_total
arangodb_document_writes_replication_total
arangodb_document_insert_time
arangodb_document_read_time
arangodb_document_update_time
arangodb_document_replace_time
arangodb_document_remove_time
arangodb_collection_truncates_total
arangodb_collection_truncates_replication_total
arangodb_collection_truncate_time
--server.export-shard-usage-metrics
Introduced in: v3.12.0
Type: string
Whether or not to export shard usage metrics.
Default: disabled
Possible values: “disabled”, “enabled-per-shard”, “enabled-per-shard-per-user”
Effective on DB-Servers only.
Show details
This option can be used to make DB-Servers export detailed shard usage metrics.
By default, this option is set to
disabled
so that no shard usage metrics are exported. Set the option to
enabled-per-shard
to make DB-Servers collect per-shard usage metrics whenever a shard is accessed. Set this option to
enabled-per-shard-per-user
to make DB-Servers collect usage metrics per shard and per user whenever a shard is accessed.
Note that enabling shard usage metrics can produce a lot of metrics if there are many shards and/or users in the system.
--server.gid
Type: string
Switch to this group ID after reading configuration files.
--server.harden
Type: boolean
Lock down REST APIs that reveal version information or server internals for non-admin users.
This option can be specified without a value to enable it.
--server.io-threads
Type: uint64
The number of threads used to handle I/O.
Default: dynamic (e.g. 2
)
--server.jwt-secret
Deprecated in: v3.3.22, v3.4.2
Type: string
The secret to use when doing JWT authentication.
--server.jwt-secret-folder
Type: string
A folder containing one or more JWT secret files to use for JWT authentication.
Show details
Files are sorted alphabetically, the first secret is used for signing + verifying JWT tokens (active secret), and all other secrets are only used to validate incoming JWT tokens (passive secrets). Only one secret needs to verify a JWT token for it to be accepted.
You can reload JWT secrets from disk without restarting the server or the nodes
of a cluster deployment via the POST /_admin/server/jwt
HTTP API endpoint.
You can use this feature to roll out new JWT secrets throughout a cluster.
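Because the active secret is simply the alphabetically first file in the folder, a common convention is to give the files sortable name prefixes. A sketch (the folder path and file names are hypothetical):

```shell
# Create a secrets folder in which "00-active" sorts first and therefore
# becomes the signing (active) secret; the other file is only used to
# validate incoming tokens (passive secret).
mkdir -p /tmp/jwt-secrets
printf 'new-secret' > /tmp/jwt-secrets/00-active
printf 'old-secret' > /tmp/jwt-secrets/10-passive

# The file that sorts first, i.e. the one used for signing:
ls /tmp/jwt-secrets | sort | head -n 1   # prints: 00-active

# Then start the server with:
#   arangod --server.jwt-secret-folder /tmp/jwt-secrets
```

To roll a secret, you would add the new secret under a name that sorts first, reload via POST /_admin/server/jwt on every node, and later remove the old file.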
--server.jwt-secret-keyfile
Type: string
A file containing the JWT secret to use when doing JWT authentication.
Show details
ArangoDB uses JSON Web Tokens to authenticate requests. Using this option lets you specify a JWT secret stored in a file. The secret must be at most 64 bytes long.
Warning: Avoid whitespace characters in the secret because they may get trimmed, leading to authentication problems:
- Character Tabulation (\t, U+0009)
- End of Line (\n, U+000A)
- Line Tabulation (\v, U+000B)
- Form Feed (\f, U+000C)
- Carriage Return (\r, U+000D)
- Space (U+0020)
- Next Line (U+0085)
- No-Break Space (U+00A0)
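A whitespace-free secret of the maximum allowed length (64 bytes) can be generated with the OpenSSL command-line tool, for example. A sketch; the file path is arbitrary:

```shell
# 32 random bytes, hex-encoded = 64 ASCII characters, no whitespace:
openssl rand -hex 32 > /tmp/arangodb-jwt-secret

# Then start the server with:
#   arangod --server.jwt-secret-keyfile /tmp/arangodb-jwt-secret
```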
In single server setups, ArangoDB generates a secret if none is specified.
In cluster deployments which have authentication enabled, a secret must be set consistently across all cluster nodes so they can talk to each other.
ArangoDB also supports an --server.jwt-secret
option to pass the secret
directly (without a file). However, this is discouraged for security
reasons.
You can reload JWT secrets from disk without restarting the server or the nodes
of a cluster deployment via the POST /_admin/server/jwt
HTTP API endpoint.
You can use this feature to roll out new JWT secrets throughout a cluster.
--server.license-check-interval
Type: uint64
Sets the license check update interval in seconds (maximum 120 seconds).
Default: 120
--server.license-disk-usage-grace-period-readonly
Type: uint64
The warning duration until the server enters read-only mode when exceeding the dataset size limit (in seconds). The maximum is 2 days.
Default: 172800
--server.license-disk-usage-grace-period-shutdown
Type: uint64
The read-only duration until the server shuts down when exceeding the dataset size limit (in seconds). The maximum is 2 days.
Default: 172800
--server.license-disk-usage-limit
Type: uint64
Sets the disk usage limit in bytes (maximum 100 GiB).
Default: 107374182400
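The defaults of the license-related options above are round values in larger units; a quick sanity check with shell arithmetic (assuming binary units for the disk limit):

```shell
# 100 GiB expressed in bytes (the --server.license-disk-usage-limit default):
echo $(( 100 * 1024 * 1024 * 1024 ))   # 107374182400

# 2 days in seconds (the grace-period defaults):
echo $(( 2 * 24 * 3600 ))              # 172800
```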
--server.license-disk-usage-update-interval
Type: uint64
Sets the disk usage update interval in seconds (maximum 700 minutes).
Default: 42000
--server.maintenance-actions-block
Type: int32
The minimum number of seconds finished actions block duplicates.
Default: 2
Effective on DB-Servers only.
--server.maintenance-actions-linger
Type: int32
The minimum number of seconds finished actions remain in the deque.
Default: 3600
Effective on DB-Servers only.
--server.maintenance-slow-threads
Introduced in: v3.8.3
Type: uint32
The maximum number of threads available for slow maintenance actions (long SynchronizeShard and long EnsureIndex).
Default: dynamic (e.g. 1
)
Effective on DB-Servers only.
--server.maintenance-threads
Type: uint32
The maximum number of threads available for maintenance actions.
Default: dynamic (e.g. 3
)
Effective on DB-Servers only.
--server.maximal-number-sync-shard-actions
Introduced in: v3.12.5
Type: uint64
The maximum number of SynchronizeShard actions which may be queued at any given time.
Default: 32
Effective on DB-Servers only.
--server.maximal-queue-size
Type: uint64
The size of the priority 3 FIFO.
Default: 4096
Show details
You can specify the maximum size of the queue for asynchronous task execution. If the queue already contains this many tasks, new tasks are rejected until other tasks are popped from the queue. Setting this value may help prevent an instance from being overloaded or from running out of memory if the queue is filled up faster than the server can process requests.
--server.maximal-threads
Type: uint64
The maximum number of request handling threads to run (0 = use system-specific default of 32)
Show details
This option determines the maximum number of
request processing threads the server is allowed to start for request handling.
If this number of threads is already running, arangod does not start further
threads for request handling. The default value is
max(32, 2 * available cores)
, so twice the number of CPU cores, but at least
32 threads.
The actual number of request processing threads is adjusted dynamically at
runtime and is between --server.minimal-threads
and
--server.maximal-threads
.
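The dynamic default described above, max(32, 2 * available cores), can be reproduced with a small shell sketch (getconf _NPROCESSORS_ONLN reports the number of available cores on Linux):

```shell
# Compute the default for --server.maximal-threads:
# twice the number of CPU cores, but at least 32.
CORES=$(getconf _NPROCESSORS_ONLN)
THREADS=$(( CORES * 2 > 32 ? CORES * 2 : 32 ))
echo "$THREADS"
```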
--server.minimal-threads
Type: uint64
The minimum number of request handling threads to run.
Default: 4
Show details
This option determines the minimum number of request processing threads the server starts and always keeps around.
--server.ongoing-low-priority-multiplier
Introduced in: v3.8.0
Type: double
Controls the number of low priority requests that can be ongoing at a given point in time, relative to the maximum number of request handling threads.
Default: 4
Show details
There are some countermeasures built into Coordinators to prevent a cluster from being overwhelmed by too many concurrently executing requests.
If a request is executed on a Coordinator but needs to wait for some operation on a DB-Server, the operating system thread executing the request can often postpone execution on the Coordinator, put the request to one side and do something else in the meantime. When the response from the DB-Server arrives, another worker thread continues the work. This is a form of asynchronous implementation, which is great to achieve better thread utilization and enhance throughput.
On the other hand, this runs the risk that work is started on new requests faster than old ones can be finished off. Before version 3.8, this could overwhelm the cluster over time, and lead to out-of-memory situations and other unwanted side effects. For example, it could lead to excessive latency for individual requests.
There is a limit as to how many requests coming from the low priority queue
(most client requests are of this type) can be executed concurrently.
The default value for this is 4 times as many as there are scheduler threads
(see --server.minimal-threads and --server.maximal-threads), which is good
for most workloads. Requests in excess of this are not started but remain on
the scheduler’s input queue (see --server.maximal-queue-size).
Very occasionally, 4 is already too much. You would notice this if the latency for individual requests is already too high because the system tries to execute too many of them at the same time (for example, if they fight for resources).
On the other hand, in rare cases it is possible that throughput can be improved by increasing the value, if latency is not a big issue and all requests essentially spend their time waiting, so that a high concurrency is acceptable. This increases memory usage, though.
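As a sketch of the sizing described above: with the default multiplier of 4 and a hypothetical deployment running 64 scheduler threads, the cap on concurrently executing low-priority requests works out as follows:

```shell
MULTIPLIER=4          # --server.ongoing-low-priority-multiplier default
SCHEDULER_THREADS=64  # hypothetical --server.maximal-threads value
echo $(( MULTIPLIER * SCHEDULER_THREADS ))   # 256 concurrent low-priority requests
```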
--server.options-api
Introduced in: v3.12.0
Type: string
The policy for exposing the options API.
Default: jwt
Possible values: “admin”, “disabled”, “jwt”, “public”
--server.prio1-size
Type: uint64
The size of the priority 1 FIFO.
Default: 4096
--server.prio2-size
Introduced in: v3.8.0
Type: uint64
The size of the priority 2 FIFO.
Default: 4096
--server.rest-server
Type: boolean
Start a REST server.
This option can be specified without a value to enable it.
Default: true
--server.scheduler
Introduced in: v3.12.1
Type: string
The scheduler type to use.
Default: supervised
Possible values: “supervised”, “threadpools”
--server.scheduler-queue-size
Type: uint64
The number of simultaneously queued requests inside the scheduler.
Default: 4096
--server.session-timeout
Introduced in: v3.9.0
Type: double
The lifetime for tokens (in seconds) that can be obtained from the POST /_open/auth
endpoint. Used by the web interface for JWT-based sessions.
Default: 3600
Effective on Coordinators and Single Servers only.
Show details
The web interface uses JWT for authentication. However, sessions are renewed automatically as long as you regularly interact with the web interface in your browser. You are not logged out while actively using it.
--server.statistics
Type: boolean
Whether to enable statistics gathering and statistics APIs.
This option can be specified without a value to enable it.
Default: true
Show details
If you set this option to false
, then ArangoDB’s
statistics gathering is turned off. Statistics gathering causes regular
background CPU activity, memory usage, and writes to the storage engine, so
using this option to turn statistics off might relieve heavily-loaded instances
a bit.
A side effect of setting this option to false
is that no statistics are
shown in the dashboard of ArangoDB’s web interface, and that the REST API for
server statistics at /_admin/statistics
returns HTTP 404.
--server.statistics-all-databases
Introduced in: v3.8.0
Type: boolean
Provide cluster statistics in the web interface for all databases.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators only.
--server.statistics-history
Type: boolean
Whether to store statistics in the database.
This option can be specified without a value to enable it.
Default: dynamic (e.g. true
)
Show details
If you set this option to false
, then ArangoDB’s
statistics gathering is turned off. Statistics gathering causes regular
background CPU activity, memory usage, and writes to the storage engine, so
using this option to turn statistics off might relieve heavily-loaded instances
a bit.
If set to false
, no statistics are shown in the dashboard of ArangoDB’s
web interface, but the current statistics are available and can be queried
using the REST API for server statistics at /_admin/statistics
.
This is less intrusive than setting the --server.statistics
option to
false
.
--server.storage-engine
Type: string
The storage engine type (note that the MMFiles engine is unavailable since v3.7.0 and cannot be used anymore).
Default: auto
Possible values: “auto”, “rocksdb”
Show details
ArangoDB’s storage engine is based on RocksDB, see http://rocksdb.org. It is the only available engine from ArangoDB v3.7 onwards.
The storage engine type needs to be the same for an entire deployment.
Live switching of storage engines on already installed systems isn’t supported.
Configuring the wrong engine (not matching the previously used one) results
in the server refusing to start. You may use auto
to let ArangoDB choose the
previously used one.
--server.support-info-api
Introduced in: v3.9.0
Type: string
The policy for exposing the support info and also the telemetrics API.
Default: admin
Possible values: “admin”, “disabled”, “jwt”, “public”
--server.telemetrics-api
Introduced in: v3.11.0
Type: boolean
Whether to enable the telemetrics API.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators, DB-Servers and Single Servers only.
--server.telemetrics-api-max-requests
Introduced in: v3.11.0
Type: uint64
The maximum number of requests from arangosh that the telemetrics API responds to without rate-limiting.
Default: 3
Effective on Coordinators and Single Servers only.
Show details
This option limits requests from the arangosh to the telemetrics API, but not any other requests to the API.
Requests to the telemetrics API are counted per 2-hour interval and then reset. This means that after a period of at most 2 hours, the telemetrics API becomes usable again.
The purpose of this option is to keep a deployment from being overwhelmed by too many telemetrics requests issued by arangosh instances that are used for batch processing.
--server.uid
Type: string
Switch to this user ID after reading configuration files.
--server.unavailability-queue-fill-grade
Type: double
The queue fill grade from which onwards the server is considered unavailable because of an overload (ratio, 0 = disable)
Default: 0.75
Show details
You can use this option to set a high-watermark for the scheduler’s queue fill grade, from which onwards the server starts reporting unavailability via its availability API.
This option has a consequence for the /_admin/server/availability
REST API
only, which is often called by load-balancers and other availability probing
systems.
The /_admin/server/availability
REST API returns HTTP 200 if the fill
grade of the scheduler’s queue is below the configured value, or HTTP 503 if
the fill grade is equal to or above it. This can be used to flag a server as
unavailable in case it is already highly loaded.
The default value for this option is 0.75
since version 3.8, i.e. 75%.
To prevent sending more traffic to an already overloaded server, it can be
sensible to reduce the value, for example, down to 0.5. This means that
instances with a queue that is filled to more than 50% of its maximum capacity
would return HTTP 503 instead of HTTP 200 when their availability API is probed.
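As a sketch of the relationship, assuming the fill grade is measured against the scheduler's maximum queue size: with the default --server.maximal-queue-size of 4096 and the default fill grade of 0.75, the availability API starts reporting unavailability at the following queue length:

```shell
QUEUE_SIZE=4096   # --server.maximal-queue-size default
# Fill grade 0.75 expressed as integer arithmetic (3/4):
echo $(( QUEUE_SIZE * 3 / 4 ))   # 3072 queued requests
```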
--server.validate-utf8-strings
Type: boolean
Perform UTF-8 string validation for incoming JSON and VelocyPack data.
This option can be specified without a value to enable it.
Default: true
ssl
--ssl.cafile
Type: string
The CA file used for secure connections.
Show details
You can use this option to specify a file with CA certificates that are sent to the client whenever the server requests a client certificate. If you specify a file, the server only accepts client requests with certificates issued by these CAs. Do not specify this option if you want clients to be able to connect without specific certificates.
The certificates in the file must be PEM-formatted.
--ssl.cipher-list
Type: string
The SSL ciphers to use. See the OpenSSL documentation.
Default: HIGH:!EXPORT:!aNULL@STRENGTH
Show details
You can use this option to restrict the server to certain SSL ciphers only, and to define the relative usage preference of SSL ciphers.
The format of the option’s value is documented in the OpenSSL documentation.
To check which ciphers are available on your platform, you may use the following shell command:
> openssl ciphers -v
ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1
ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
DHE-RSA-CAMELLIA256-SHA SSLv3 Kx=DH Au=RSA Enc=Camellia(256)
Mac=SHA1
...
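To preview exactly which ciphers this option's default value selects on your system, and in which order of preference, you can pass the cipher string to openssl ciphers (assuming the OpenSSL command-line tool is installed):

```shell
# List the ciphers matched by ArangoDB's default cipher string:
openssl ciphers -v 'HIGH:!EXPORT:!aNULL@STRENGTH'
```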
--ssl.ecdh-curve
Type: string
The SSL ECDH curve, see the output of “openssl ecparam -list_curves”.
Default: prime256v1
--ssl.keyfile
Type: string
The path to a PEM file (server certificate + private key) to use for secure connections.
Show details
If you use TLS/SSL encryption by binding the
server to an ssl://
endpoint (e.g. --server.endpoint ssl://127.0.0.1:8529
),
you must use this option to specify the filename of the server’s private key.
The file must be PEM-formatted and contain both the certificate and the
server’s private key.
You can generate a keyfile using OpenSSL as follows:
# create private key in file "server.key"
openssl genpkey -out server.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -aes-128-cbc
# create certificate signing request (csr) in file "server.csr"
openssl req -new -key server.key -out server.csr
# copy away original private key to "server.key.org"
cp server.key server.key.org
# remove passphrase from the private key
openssl rsa -in server.key.org -out server.key
# sign the csr with the key, creates certificate PEM file "server.crt"
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
# combine certificate and key into single PEM file "server.pem"
cat server.crt server.key > server.pem
You may use certificates issued by a Certificate Authority or self-signed certificates. Self-signed certificates can be created by a tool of your choice. When using OpenSSL for creating the self-signed certificate, the above commands should create a valid keyfile with a structure like this:
-----BEGIN CERTIFICATE-----
(base64 encoded certificate)
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
(base64 encoded private key)
-----END RSA PRIVATE KEY-----
For further information please check the manuals of the tools you use to create the certificate.
--ssl.options
Type: uint64
The SSL connection options. See the OpenSSL documentation.
Default: 2147485776
Show details
You can use this option to set various SSL-related options. Individual option values must be combined using bitwise OR.
Which options are available on your platform is determined by the OpenSSL version you use. The list of options available on your platform might be retrieved by the following shell command:
> grep "#define SSL_OP_.*" /usr/include/openssl/ssl.h
#define SSL_OP_MICROSOFT_SESS_ID_BUG 0x00000001L
#define SSL_OP_NETSCAPE_CHALLENGE_BUG 0x00000002L
#define SSL_OP_LEGACY_SERVER_CONNECT 0x00000004L
#define SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG 0x00000008L
#define SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG 0x00000010L
#define SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER 0x00000020L
...
A description of the options can be found online in the OpenSSL documentation: http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html
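The individual SSL_OP_* values are combined with bitwise OR. A sketch using SSL_OP_LEGACY_SERVER_CONNECT from the listing above plus a second, hypothetical flag value:

```shell
OPT_LEGACY_SERVER_CONNECT=$(( 0x00000004 ))  # from openssl/ssl.h (see above)
OPT_OTHER=$(( 0x00020000 ))                  # hypothetical second SSL_OP_* value
COMBINED=$(( OPT_LEGACY_SERVER_CONNECT | OPT_OTHER ))
echo "$COMBINED"   # 131076 -- pass as: arangod --ssl.options 131076
```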
--ssl.prefer-http1-in-alpn
Type: boolean
Lets the server prefer HTTP/1.1 over HTTP/2 in ALPN protocol negotiations.
This option can be specified without a value to enable it.
--ssl.protocol
Type: uint64
The SSL protocol (1 = SSLv2 (unsupported), 2 = SSLv2 or SSLv3 (negotiated), 3 = SSLv3, 4 = TLSv1, 5 = TLSv1.2, 6 = TLSv1.3, 9 = generic TLS (negotiated))
Default: 9
Possible values: 1, 2, 3, 4, 5, 6, 9
Show details
Use this option to specify the default encryption protocol to be used. The default value is 9 (generic TLS), which allows the negotiation of the TLS version between the client and the server, dynamically choosing the highest mutually supported version of TLS.
Note that SSLv2 is unsupported as of version 3.4, because of the inherent security vulnerabilities in this protocol. Selecting SSLv2 as protocol aborts the startup.
--ssl.require-peer-certificate
Type: boolean
Require a peer certificate from the client before connecting.
This option can be specified without a value to enable it.
--ssl.server-name-indication
Type: string…
Use a different server keyfile and certificate if the client indicates a specific server name. Format: SERVERNAME=KEYFILENAME
Show details
Sometimes, it is desirable to have the same server use different server keys and certificates when it is contacted under different names. This is what the TLS “server name” extension is for. See https://en.wikipedia.org/wiki/Server_Name_Indication for details. With this extension, the client can choose a server name, and the server can, using this information during the TLS handshake, use different server keys and certificate chains.
You can specify the option multiple times. Each value must be a string in the
format SERVERNAME=KEYFILENAME
. Replace SERVERNAME
by a server name and
KEYFILENAME
by the file name of the key file to be used for that server name.
The format of the keyfile is identical to the one used for the --ssl.keyfile
option. By default, the keyfile from the --ssl.keyfile option is used. The
server only switches to an alternative keyfile if a server name given with
--ssl.server-name-indication exactly matches the server name from the
handshake.
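In a configuration file, the option can be repeated once per server name. A sketch (all host names and file paths below are hypothetical):

```ini
[ssl]
keyfile = /etc/arangodb3/server.pem
server-name-indication = example.org=/etc/arangodb3/example-org.pem
server-name-indication = internal.example.org=/etc/arangodb3/internal.pem
```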
--ssl.session-cache
Type: boolean
Enable the session cache for connections.
This option can be specified without a value to enable it.
tcp
--tcp.backlog-size
Type: uint64
Specify the size of the backlog for the listen
system call.
Default: 64
Show details
The maximum value is platform-dependent.
Specifying a value higher than SOMAXCONN (as defined in the system headers)
may result in a warning on server start. The actual value used by listen
may also be silently truncated on some platforms (this happens inside the
listen system call).
--tcp.reuse-address
Type: boolean
Try to reuse TCP port(s).
This option can be specified without a value to enable it.
Default: true
Show details
If you set this option to true
, the socket
option SO_REUSEADDR
is set on all server endpoints, which is the default.
If you set this option to false
, it is possible that it takes up to a minute
after a server has terminated until it is possible for a new server to use the
same endpoint again.
Note: This can be a security risk because it might be possible for another process to bind to the same address and port, possibly hijacking network traffic.
temp
--temp.intermediate-results-capacity
Introduced in: v3.10.0
Experimental
Type: uint64
The maximum capacity (in bytes) to use for ephemeral, intermediate results on disk (0 = unlimited).
--temp.intermediate-results-encryption
Introduced in: v3.10.0
Experimental
Type: boolean
Encrypt ephemeral, intermediate results on disk.
This option can be specified without a value to enable it.
--temp.intermediate-results-encryption-hardware-acceleration
Introduced in: v3.10.0
Experimental
Type: boolean
Use Intel intrinsics-based encryption, requiring a CPU with the AES-NI instruction set. If turned off, then OpenSSL is used, which may use hardware-accelerated encryption, too.
This option can be specified without a value to enable it.
Default: true
--temp.intermediate-results-path
Introduced in: v3.10.0
Experimental
Type: string
The path for storing ephemeral, intermediate results on disk (empty = not used).
Show details
Queries can store intermediate and final results temporarily on disk if a specified threshold is exceeded, to decrease the memory usage. Specify a path to a directory for the temporary data to activate the spillover feature. The directory must not be located underneath the instance’s database directory.
The threshold value to start spilling data onto disk is either a number of rows
produced by a query or an amount of memory used in bytes, which you can set as
query options (spillOverThresholdNumRows
and spillOverThresholdMemoryUsage
).
Note: This feature is experimental and is turned off by default. Also, the query results are still built up entirely in memory on Coordinators and single servers for non-streaming queries. To avoid the buildup of the entire query result in RAM, use a streaming query.
--temp.intermediate-results-spillover-threshold-memory-usage
Introduced in: v3.10.0
Experimental
Type: uint64
The memory usage threshold (in bytes) after which a spillover from RAM to disk happens for intermediate results (threshold per query executor).
Default: 134217728
--temp.intermediate-results-spillover-threshold-num-rows
Introduced in: v3.10.0
Experimental
Type: uint64
The number of result rows after which a spillover from RAM to disk happens for intermediate results (threshold per query executor).
Default: 5000000
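The two spillover defaults above correspond to round values; a quick check with shell arithmetic:

```shell
# 128 MiB in bytes (spillover memory-usage threshold default):
echo $(( 128 * 1024 * 1024 ))   # 134217728

# 5 million rows (spillover row-count threshold default):
echo $(( 5 * 1000 * 1000 ))     # 5000000
```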
--temp.path
Type: string
The path for temporary files.
Show details
ArangoDB uses the path for storing temporary files, for extracting data from uploaded zip files (e.g. for Foxx services), and other things.
Ideally, the temporary path is set to an instance-specific subdirectory of the operating system’s temporary directory. To avoid data loss, the temporary path should not overlap with any directories that contain important data, for example, the instance’s database directory.
If you set the temporary path to the same directory as the instance’s database directory, a startup error is logged and the startup is aborted.
transaction
--transaction.streaming-idle-timeout
Introduced in: v3.8.0
Type: double
The idle timeout (in seconds) for Stream Transactions.
Default: 60
Effective on Coordinators and Single Servers only.
Show details
Stream Transactions automatically expire after this period when no further operations are posted into them. Posting an operation into a non-expired Stream Transaction resets the transaction’s timeout to the configured idle timeout.
--transaction.streaming-lock-timeout
Type: double
The lock timeout (in seconds) in case of parallel access to the same Stream Transaction.
Default: 8
--transaction.streaming-max-transaction-size
Introduced in: v3.12.0
Type: uint64
The maximum transaction size (in bytes) for Stream Transactions.
Default: 536870912
Effective on DB-Servers and Single Servers only.
ttl
--ttl.frequency
Type: uint64
The frequency (in milliseconds) for the TTL background thread invocation (0 = turn the TTL background thread off entirely).
Default: 30000
Show details
The lower this value, the more frequently the TTL background thread kicks in and scans all available TTL indexes for expired documents, and the earlier the expired documents are actually removed.
--ttl.max-collection-removes
Type: uint64
The maximum number of documents to remove per collection in each invocation of the TTL thread.
Default: 100000
Show details
You can configure this value separately from the total removal amount so that the per-collection time window for locking and potential write-write conflicts can be reduced.
--ttl.max-total-removes
Type: uint64
The maximum number of documents to remove per invocation of the TTL thread.
Default: 1000000
Show details
In order to avoid “random” load spikes by the background thread suddenly kicking in and removing a lot of documents at once, you can cap the number of to-be-removed documents per thread invocation.
The TTL background thread goes back to sleep once it has removed the configured number of documents in one iteration. If more candidate documents are left for removal, they are removed in subsequent runs of the background thread.
web-interface
--web-interface.proxy-request-check
Type: boolean
Enable proxy request checking.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.
--web-interface.trusted-proxy
Type: string…
The list of proxies to trust (can be IP or network). Make sure --web-interface.proxy-request-check
is enabled.
Effective on Coordinators and Single Servers only.
--web-interface.version-check
Type: boolean
Alert the user if new versions are available.
This option can be specified without a value to enable it.
Default: true
Effective on Coordinators and Single Servers only.