Hopefully this will make people less likely to use feed_command: it hides bugs.
At least two have already been found:
1. msgpackparse() shows "internal error: hash_add()" when a map contains
duplicate keys, though it still works correctly. Currently silenced.
2. ttimeoutlen was misspelled, so the option was not being set when expected.
The test somehow still passed. Currently fixed.
It appears that the msgpack library in use cannot parse back a message created
by msgpack_rpc_from_object() if the nesting level is too high, so log_server_msg
now checks the return value of msgpack_unpack_next(). Also, the error message
from server_notifications_spec.lua is not readable if something goes wrong
(though at least it no longer crashes when parsing deeply nested structures).
log_server_msg() in the test reports
[msgpack-rpc] nvim -> client(1) [error] "parse error"
This removes some stack overflows in the new test for deeply nested variables.
Instead of crashing in vim_to_object/msgpack_rpc_from_object/etc., it now
crashes in clear_tv with a stack overflow.
This ought to prevent stack overflow, but I do not see it actually working:
*lua* code crashes with a stack overflow when trying to deserialize msgpack from
Neovim, while Neovim is fine even if the nesting level is increased 100x (though
the test becomes very slow); not sure how a recursive function survives this. So
it looks like there are currently only two positive effects:
1. NULL lists are returned as empty lists (#4596).
2. Functional tests are slightly faster. Very slightly. Checked with a Release
build on the test/functional/eval tests, since benchmarking a Debug build is not
very useful.
Otherwise it is impossible to determine which test failed the sanitizer/valgrind
check. The return value of the test/functional/helpers.lua module was changed so
that tests which do not provide an after_each function to hook in the new check
will automatically fail.
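A minimal sketch of that pattern, with hypothetical names (the real helpers.lua
attaches many more helpers, and check_logs here only stands in for whatever the
actual post-test check does):

    -- The module returns a function instead of a plain table, so a test file
    -- has to hand over busted's after_each in order to get the helpers at all.
    local module = {}

    -- ... the usual helper functions are attached to `module` here ...

    local function check_logs()
      -- hypothetical check: inspect sanitizer/valgrind logs written by the
      -- previous test and error() out if anything suspicious is found
    end

    return function(after_each)
      if after_each ~= nil then
        after_each(check_logs)  -- run the check after every test in the file
      end
      return module
    end

Test files then obtain the helpers with
require('test.functional.helpers')(after_each); per the change above, specs
that do not pass after_each fail automatically.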
Sanity API checks previously made by the python-client in the api-python travis
target were converted to lua and now live in this repository. This will simplify
making breaking changes to the API, as it will no longer be necessary to send
parallel PRs to the python-client.
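For illustration, a hedged sketch of what one such lua-side sanity check might
look like (helper names follow the existing functional-test helpers; the exact
check is made up for this example):

    local helpers = require('test.functional.helpers')(after_each)
    local clear, nvim, eq = helpers.clear, helpers.nvim, helpers.eq

    describe('api sanity', function()
      before_each(clear)

      it('round-trips a variable through the API', function()
        -- illustrative check: set a variable over the API and read it back
        nvim('set_var', 'sanity_var', {1, 2, 'three'})
        eq({1, 2, 'three'}, nvim('get_var', 'sanity_var'))
      end)
    end)

Keeping these checks in lua means they run as part of the normal functional-test
suite instead of requiring a separate travis target.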