
updated CHANGELOG (#3690)

This commit is contained in:
Jan 2017-11-14 00:37:51 +01:00 committed by Frank Celler
parent caa5145482
commit 80c214ff67
1 changed file with 104 additions and 43 deletions

CHANGELOG

@@ -1,9 +1,9 @@
devel
-----
-* add readonly mode rest API
+* add readonly mode REST API
-* allow compilation of ArangoDB with g++7
+* allow compilation of ArangoDB source code with g++7
* AQL: during a traversal, if a vertex is not found, it will not print an ERROR to the log,
but will register a warning at the query and continue with a NULL value.
@@ -11,35 +11,11 @@ devel
vertex, which is perfectly valid, but it may be an issue in the data model, so users
can directly see it on the query now and do not have to check the LOG output "by accident".
* UI: fixed event cleanup in cluster shards view
-* fixed issue #3618: Inconsistent behavior of OR statement with object bind parameters
-* potential fix for issue #3562: Document WITHIN_RECTANGLE not found
-* increase default maximum number of V8 contexts to at least 16 if not
-explicitly configured otherwise. the mode for determining the actual maximum
-value of V8 contexts is unchanged and works as follows:
-- if explicitly set, the value of the configuration option `--javascript.v8-contexts`
-is used as the maximum number of V8 contexts
-- when the option is not set, the maximum number of V8 contexts is determined
-by the configuration option `--server.threads` if that option is set. if
-`--server.threads` is not set, then the maximum number of V8 contexts is the
-server's reported hardware concurrency (number of processors visible
-to the arangod process). if that would result in a maximum value of 16 in
-any of these two cases, then the maximum value will be increased to 16.
-* fixed issue #3447: ArangoError 1202: AQL: NotFound: (while executing) when
-updating collection
-* potential fix for issue #3581: Unexpected "rocksdb unique constraint
-violated" with unique hash index
-* fix agency precondition check for complex objects
-* introduce `enforceReplicationFactor`: An optional parameter controlling
-if the server should bail out during collection creation if there are not
-enough DBServers available for the desired `replicationFactor`.
+* introduce `enforceReplicationFactor` attribute for creating collections:
+this optional parameter controls if the coordinator should bail out during collection
+creation if there are not enough DBServers available for the desired `replicationFactor`.
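The check the new attribute enables can be sketched as follows. This is an illustrative model of the described behavior, not ArangoDB's actual coordinator code; the function and parameter names are made up for the example:

```python
def check_replication_factor(replication_factor, available_dbservers,
                             enforce_replication_factor=True):
    """Return True if collection creation may proceed.

    Mirrors the described rule: with enforcement on, creation bails out
    when fewer DBServers are available than the desired replicationFactor.
    """
    if enforce_replication_factor and available_dbservers < replication_factor:
        return False  # bail out: not enough DBServers for the desired factor
    return True       # enough servers, or enforcement disabled
```

With enforcement disabled, creation proceeds even when too few DBServers are up, and the missing replicas can only be filled in once more servers become available.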
* fixed issue #3516: Show execution time in arangosh
@@ -66,15 +42,15 @@ devel
* make AQL `DISTINCT` not change the order of the results it is applied on
* enable JEMalloc background thread for purging and returning unused memory
back to the operating system (Linux only)
* incremental transfer of initial collection data now can handle partial
responses for a chunk, allowing the leader/master to send smaller chunks
(in terms of HTTP response size) and limit memory usage.
this optimization is only active if client applications send the "offset" parameter
in their requests to PUT `/_api/replication/keys/<id>?type=docs`
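A client-side loop over such partial responses might look roughly like this. `fetch_part(offset)` stands in for one request to PUT `/_api/replication/keys/<id>?type=docs` with the "offset" parameter; everything else in the sketch is illustrative:

```python
def collect_chunk(fetch_part, chunk_size):
    """Assemble one chunk of initial collection data from partial responses.

    fetch_part(offset) returns the next partial response (a list of
    documents), which may be shorter than the remainder of the chunk.
    """
    docs = []
    offset = 0
    while offset < chunk_size:
        part = fetch_part(offset)
        if not part:             # leader sent nothing more for this chunk
            break
        docs.extend(part)
        offset += len(part)      # resume where the partial response ended
    return docs
```

The leader can thus keep each HTTP response small while the follower still receives the complete chunk.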
* initial creation of shards for cluster collections is now faster with
-replicationFactor values bigger than 1. this is achieved by an optimization
+`replicationFactor` values bigger than 1. this is achieved by an optimization
for the case when the collection on the leader is still empty
* potential fix for issue #3517: several "filesystem full" errors in logs
@@ -84,7 +60,7 @@ devel
* show C++ function name of call site in ArangoDB log output
-This requires option `--log.line-number` to be set to *true*
+this requires option `--log.line-number` to be set to *true*
* UI: added word wrapping to query editor
@@ -120,7 +96,7 @@ devel
* removed `--compat28` parameter from arangodump and replication API
-old Arango versions will no longer be supported by these tools.
+older ArangoDB versions will no longer be supported by these tools.
* increase the recommended value for `/proc/sys/vm/max_map_count` to a value
eight times as high as the previous recommended value. Increasing the
@@ -133,19 +109,64 @@ devel
WARNING {memory} execute 'sudo sysctl -w "vm.max_map_count=512000"'
-v3.2.7 (XXXX-XX-XX)
+v3.2.7 (2017-11-14)
-------------------
* fixed some undefined behavior in some internal value caches for AQL GatherNodes
and SortNodes, which could have led to sorted results being effectively not
correctly sorted.
* improved the speed of shardDistribution in cluster by moving it to C++. It is now guaranteed
to return after ~2 seconds even if the entire cluster is unresponsive.
* make the replication applier for the RocksDB engine start automatically after a
restart of the server if the applier was configured with its `autoStart` property
set to `true`. previously the replication appliers were only automatically restarted
at server start for the MMFiles engine.
* enable JEMalloc background thread for purging and returning unused memory
back to the operating system (Linux only)
* fixed arangodump batch size adaptivity in cluster mode and increased the default batch size
for arangodump.
these changes speed up arangodump in a cluster context
* smart graphs now return a proper inventory in response to replication inventory
requests
* fixed issue #3618: Inconsistent behavior of OR statement with object bind parameters
* only users with read/write rights on the "_system" database can now execute
"_admin/shutdown" as well as modify properties of the write-ahead log (WAL)
* increase default maximum number of V8 contexts to at least 16 if not explicitly
configured otherwise.
the procedure for determining the actual maximum value of V8 contexts is unchanged
apart from the value `16` and works as follows:
- if explicitly set, the value of the configuration option `--javascript.v8-contexts`
is used as the maximum number of V8 contexts
- when the option is not set, the maximum number of V8 contexts is determined
by the configuration option `--server.threads` if that option is set. if
`--server.threads` is not set, then the maximum number of V8 contexts is the
server's reported hardware concurrency (number of processors visible
to the arangod process). if that would result in a maximum value of less than 16
in any of these two cases, then the maximum value will be increased to 16.
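The decision procedure above can be sketched as a small function. The parameter names merely mirror the `--javascript.v8-contexts` and `--server.threads` startup options; the function itself is an illustrative model, not ArangoDB's implementation:

```python
def max_v8_contexts(v8_contexts=None, server_threads=None,
                    hardware_concurrency=8):
    """Compute the maximum number of V8 contexts per the described rule."""
    if v8_contexts is not None:
        return v8_contexts       # explicit setting is used as-is, never raised
    # otherwise derive from --server.threads, falling back to the
    # server's reported hardware concurrency
    derived = server_threads if server_threads is not None else hardware_concurrency
    return max(derived, 16)      # raise derived values below 16 up to 16
```

Note that only the two derived cases are raised to 16; an explicitly configured value is honored even if it is smaller.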
* fixed issue #3447: ArangoError 1202: AQL: NotFound: (while executing) when
updating collection
* potential fix for issue #3581: Unexpected "rocksdb unique constraint
violated" with unique hash index
* fixed geo index optimizer rule for geo indexes with a single (array of coordinates)
attribute.
* improved the speed of the shards overview in cluster (API endpoint `/_api/cluster/shardDistribution`)
It is now guaranteed to return after ~2 seconds even if the entire cluster is unresponsive.
* fix agency precondition check for complex objects
this fixes issues with several CAS operations in the agency
* several fixes for agency restart and shutdown
* the cluster-internal representation of planned collection objects is now more
lightweight than before, using less memory and not allocating any cache for indexes
etc.
* fixed issue #3403: How to kill long running AQL queries with the browser console's
AQL (display issue)
@@ -153,14 +174,54 @@ v3.2.7 (XXXX-XX-XX)
* fixed issue #3549: server reading ENGINE config file fails on common standard
newline character
+* UI: fixed error notifications for collection modifications
* several improvements for the truncate operation on collections:
* the timeout for the truncate operation was increased in cluster mode in
order to prevent too frequent "could not truncate collection" errors
* after a truncate operation, collections in MMFiles still used disk space.
to reclaim the disk space used by a truncated collection, the truncate actions
in the web interface and from the ArangoShell now issue an extra WAL flush
command (in cluster mode, this command is also propagated to all servers).
the WAL flush allows all servers to write out any pending operations into the
datafiles of the truncated collection. afterwards, a final journal rotate
command is sent, which enables the compaction to entirely remove all datafiles
and journals for the truncated collection, so that all disk space can be
reclaimed
* for MMFiles a special method will be called after a truncate operation so that
all indexes of the collection can free most of their memory. previously some
indexes (hash and skiplist indexes) partially kept already allocated memory
in order to avoid future memory allocations
* after a truncate operation in the RocksDB engine, an additional compaction
will be triggered for the truncated collection. this compaction removes all
deletions from the key space so that follow-up scans over the collection's key
range do not have to filter out lots of already-removed values
These changes make truncate operations potentially more time-consuming than before,
but allow for memory/disk space savings afterwards.
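Driven over the HTTP API, the MMFiles sequence described above (truncate, WAL flush, journal rotate) amounts to three requests. The endpoint paths follow ArangoDB's documented HTTP API, but the helper itself is a hypothetical sketch that only lists the requests rather than sending them:

```python
def truncate_and_reclaim(collection):
    """Return the (method, path) requests for the truncate-and-reclaim
    sequence on the MMFiles engine, in order."""
    return [
        # 1. truncate the collection
        ("PUT", f"/_api/collection/{collection}/truncate"),
        # 2. flush the WAL so pending operations reach the datafiles
        ("PUT", "/_admin/wal/flush?waitForSync=true&waitForCollector=true"),
        # 3. rotate the journal so the compaction can remove all
        #    datafiles and journals of the truncated collection
        ("PUT", f"/_api/collection/{collection}/rotate"),
    ]
```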
* enable JEMalloc background threads for purging and returning unused memory
back to the operating system (Linux only)
JEMalloc will create its background threads on demand. The number of background
threads is capped by the number of CPUs or active arenas. The background threads run
periodically and purge unused memory pages, allowing memory to be returned to the
operating system.
This change will make the arangod process create several additional threads.
It is accompanied by an increased `TasksMax` value in the systemd service configuration
file for the arangodb3 service.
* upgraded bundled V8 engine to bugfix version v5.7.492.77
-the upgrade fixes a memory leak in upstream V8 described in
+this upgrade fixes a memory leak in upstream V8 described in
https://bugs.chromium.org/p/v8/issues/detail?id=5945 that will result in memory
chunks only getting uncommitted but not unmapped
-* UI: fixed error notifications for collection modifications
v3.2.6 (2017-10-26)
-------------------