Leftover change from acb7c82 (fix leak when an API function is
incorrectly called with a list). These initializations are now never
used and cause warnings in static analysis.
Problems:
- Disables cross-compiling (the alternative: keeping two hash implementations
which would need to be kept in sync with each other).
- Puts code-specific name literals into CMakeLists.txt.
- The workaround for lua's lack of bidirectional pipe communication is rather
ugly.
Removes all kinds of problems with sorting, provides a ready-to-use function
list representation for genvimvim.lua, and does not require specifying the
function name twice (VimL function name (string) + f_ function name).
Without this, the "cd scripts/.." might change to another dir (since
CDPATH is consulted before the local path), and then NEOVIM_SOURCE_DIR
might end up being "/somewhere/else\n/somewhere/else" (since "cd"
already prints the dir in that case).
Closes https://github.com/neovim/neovim/pull/5213.
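One CDPATH-proof way to compute the source directory (a minimal sketch, not
necessarily the exact fix that was applied):

    # Unsetting CDPATH ensures `cd` resolves the relative path locally and
    # prints nothing, so the command substitution captures only the `pwd` output.
    unset CDPATH
    NEOVIM_SOURCE_DIR="$(cd "$(dirname "$0")/.." && pwd)"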
It is the wrong thing to do: it makes valid variable values be treated
incorrectly. In
XDG_DATA_HOME='/home/$foo/.local/share'
`$foo` should be treated literally and not expanded to the value of the `foo`
environment variable.
Also makes option_expand not try to expand too-long strings even if these
too-long strings are default values. Previously it assumed that default values
should always be expanded. Also does not try to expand NULL should it be the
default value, just in case.
Fixes #4961
If a user has multiple remotes set for neovim/neovim, then
find_get_remote was returning 'remote1\nremote2\n', which breaks
anything trying to use it. Since we're just using this remote to fetch
from, any one will do.
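Since any matching remote works for fetching, picking one can be as simple as
the following (a sketch; the actual find_get_remote implementation may differ):

    # List remotes whose URL mentions neovim/neovim and keep only the first name.
    git remote -v | grep 'neovim/neovim' | awk '{print $1}' | head -n 1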
It's acceptable for `git describe --tags --exact-match …` to fail, since
all runtime-update commits are untagged. All that matters is that we
get a tag when one exists.
Therefore, ignore the failure status of the `git describe` call and rely
on the captured output instead.
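In shell that looks roughly like this (a sketch; variable names are
illustrative, not the script's actual ones):

    # `git describe` exits non-zero for untagged commits; ignore that and use
    # whatever output was captured (empty when no exact tag exists).
    vim_tag="$(git describe --tags --exact-match "${vim_commit}" 2>/dev/null || true)"
    if [ -n "${vim_tag}" ]; then
      echo "Commit ${vim_commit} corresponds to tag ${vim_tag}"
    fi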
There are a total of 5 shell scripts in the Neovim source tree.
All but runtime/macros/less.sh had warnings/errors when run through
ShellCheck (http://www.shellcheck.net/).
This commit fixes all warnings/errors and also changes the shebang to
"#!/bin/sh" when possible (this was not possible for vim-patch.sh
because it uses many bashisms).
The shellcheck errors that were fixed are:
SC2068: Double quote array expansions to avoid re-splitting elements.
SC2086: Double quote to prevent globbing and word splitting.
SC2124: Assigning an array to a string! Assign as array, or use *
instead of @ to concatenate.
SC2155: Declare and assign separately to avoid masking return values.
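For example, SC2086 and SC2068 are both fixed by quoting (a generic
illustration, not a hunk from these scripts; `run_tests` is a made-up name):

    # SC2086: unquoted, the value undergoes word splitting and globbing.
    cd $NEOVIM_SOURCE_DIR
    # Quoted, the value is passed through exactly as given.
    cd "$NEOVIM_SOURCE_DIR"

    # SC2068: quote "$@" so the arguments keep their original word boundaries.
    run_tests "$@"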
Make get_vim_sources fetch the whole repository (it's not THAT big) so we can
pick up all the patches. The regexp didn't pick up the NA patches if there was a
comma after NA, so I extended it (arbitrary text is now allowed after NA, so
someone can write a comment there; this should not lead to confusion).
* Calling "vim-patch.sh -p" on a checked-out branch already created with
"-p" will re-use the branch and append commits.
* Fetch upstream/master before checking out branch on first call of "-p".
* Reverted creation of commit in submit step ("-s") to previous behavior:
Create an empty commit with correct commit message when "-p" is called.
* Submitting a pull request with "-s" will create a correct pull request
message even if multiple patches have been ported in one single branch
with "-p".
When calling "vim-patch.sh -s" on a checked-out branch created with
"vim-patch.sh -p", create commit from staged changes, push to origin,
create pull request (using hub), and clean up patch files.
This will avoid confusion if additional `it()` blocks are added.
(`setup()` only runs once per `describe()` block, whereas `before_each()`
runs before each `it()`).
Any patch may contain mixed encodings, so we must process them as byte
arrays. E.g. with stock `sed` on OS X, patch
8a94d873aa8c753a8522ea86a049bdf2abd0c507 causes this error:
sed: RE error: illegal byte sequence
To avoid that, set LC_ALL=C.
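For reference, forcing the C locale looks like this (a sketch; the actual sed
expression used on the patches differs):

    # LC_ALL=C makes sed treat the patch as plain bytes instead of trying to
    # interpret it as UTF-8 text.
    LC_ALL=C sed -e 's/old/new/' "${patch_file}"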
Also remove redundant *.patch creation from review_pr().
What works:
1. ShaDa file dumping: header, registers, jump list, history, search patterns,
substitute strings, variables.
2. ShaDa file reading: registers, global marks, variables.
Most of this was not tested.
TODO:
1. Merging.
2. Reading history, local marks, jump and buffer lists.
3. Documentation update.
4. Converting some data from &encoding.
5. Safer variant of dumping viminfo (dump to temporary file then rename).
6. Removing old viminfo code (currently masked with `#if 0` in a ShaDa file for
reference).
API functions exposed via msgpack-rpc now fall into two categories:
- async functions, which are executed as soon as the request is parsed
- sync functions, which are invoked in the nvim main loop when processing the
`K_EVENT` special key
Only a few functions which can be safely executed in any context are marked as
async.
* Link to the commit details on GitHub for a tagged version.
* Non-tagged patches (runtime updates) are still linked to
Google Code because they are identified by a Mercurial commit hash.
Fixes the handling of the initial input lines of a test script by simply
skipping all initial empty lines.
Helped-by: Florian Walch <florian@fwalch.com>
Suggested-by: Florian Walch <florian@fwalch.com>
Add a check to see if a string contains ], which can result in
cases where wrapping the string in [[...]] breaks. Use [=[...]=] instead
for those strings.
Use [=[...]=] for insert() and expect().
- Check for mercurial before using it
- Make 'Merging patches...' wiki page easier to copy
- Use `basename` instead of assuming the user is running vim-patch.sh
from the repo root
- Appease shellcheck by quoting path variables
- Remove unneeded variable quoting inside [[ ]] blocks
- Don't unconditionally 'exit 1'
'-h' and '--help' are both recognized options, so the current behavior is
misleading.
* Remove 'test' prefix from test names.
* Ask if existing spec files should be overwritten.
* Fix for legacy tests with no initial buffer content (e.g. test_signs).
- Call the compiler from CMake instead of the lua script to generate a
preprocessor file - allows for better/earlier error detection if
the compiler fails
- Preprocessor files are saved along with the headers as .i files
- Accept preprocessor lines with trailing chars after # as is
the case in Clang/Windows
- The fourth argument to gendeclarations.lua is now the path to
the preprocessor output file
- Expose more logging control from the log.c module (get log stream and omit
newlines)
- Remove logging from the generated functions in msgpack-gen.lua
- Refactor channel.c/helpers.c to log every msgpack-rpc payload using
msgpack_object_print (a helper function from msgpack.h)
- Remove the api_stringify function; it was only useful for logging
msgpack-rpc, which is now handled by msgpack_object_print.
Since all API functions now run immediately after a msgpack-rpc request is
parsed by libuv callbacks, a mechanism was added to override this behavior and
allow certain functions to run in the Nvim main loop.
The mechanism is simple: any API function tagged with FUNC_ATTR_DEFERRED (a
"dummy" attribute used only by msgpack-gen.lua) will be called when the Nvim
main loop receives a K_EVENT key.
Implementing this mechanism required some restructuring of the msgpack-rpc
modules, especially the msgpack-gen.lua script.
Now that the lua client is available, python/lupa are no longer necessary to run
the functional tests. The helper functions previously defined in
run-functional-tests.py were adapted to test/functional/helpers.lua.
The 'lupa' python package provides a simple way to seamlessly integrate lua and
python code.
This commit replaces vroom with a python script that exposes the 'neovim'
package to a lua state, and invokes busted to run functional tests. This is a
temporary solution that will enable writing functional tests using lua/busted
until a lua client library is available.
The reason for dropping vroom is flexibility: Lua/busted has a nice DSL-style
syntax while also providing the customization power of a full programming
language. Another reason is to use a single framework for unit/functional tests.
Two other changes were performed in this commit:
- Instead of "gcc-unittest/gcc-ia32", the travis builds for gcc are now
identified by "gcc/gcc-32". They will run unit/functional tests for both 64
and 32 bits.
- Old integration tests (in src/nvim/testdir) are now run by the 'oldtest' target
Adapt gendeclarations.lua/msgpack-gen.lua to allow the `ArrayOf(...)` and
`DictionaryOf(...)` types in function headers. These are simple macros that
expand to Array and Dictionary respectively, but the information is kept in the
metadata object, which is useful for building clients in statically typed
languages.
Instead of building all metadata from msgpack-gen.lua, we now merge the
generated part with manual information (such as types and features). The
metadata is accessible through the api method `vim_get_api_info`.
This was done to simplify the generator while also increasing flexibility (by
being able to add more metadata).
Enhance msgpack-gen.lua to extract custom api type codes from the ObjectType
enum in api/private/defs.h. The type information is made available from the api
metadata, and clients can use it to correctly serialize/deserialize these types
using the msgpack EXT type.
A new method is now exposed via msgpack-rpc: "get_api_metadata". This method has
the same job as the old method '0': it returns an object with API metadata for
use by generators.
There's one difference in the return value though: instead of returning a
string containing another serialized msgpack document, the metadata object is
returned directly (a separate deserialization step by clients is not required).
Use Map(String, rpc_method_handler_fn) for storing/retrieving rpc method
handlers in msgpack_rpc_init and msgpack_rpc_dispatch.
Also refactor serialization/validation functions in the
msgpack_rpc.c/msgpack_rpc_helpers.c modules to accept the new STR and BIN types.
Using msgpack v5 will let nvim be more compatible with msgpack libraries for
other platforms.
This also replaces "raw" references with "bin", which is the new name for the
msgpack binary data type.
The :diffsplit command used to include some flag value twice. If I had been
using bitwise OR it would have been OK, but I had addition here. Changed to use
bitwise OR.
To simplify modification/inclusion of continuous integration targets, this
removes travis.sh (which contained a big if statement) in favor of multiple
scripts under the new '.ci' directory.
Also changed the default log level to INFO so developers won't end up with big
log files without asking explicitly (DLOG statements were placed in really
"hot" code).
- Initialize variables before validating the argument count to remove the possibility of
freeing uninitialized pointers
- Set the error when the argument count validation fails
This simplifies the generated msgpack_rpc_dispatch() function, separates the
code for each RPC method more clearly and allows easy implementation of
alternative dispatching methods (e.g. string method id dispatch).
Stop forcing some platform settings that are really intended to be used
for Travis CI. Under other systems, like Arch Linux, it prevents
dependencies from being correctly located.
This is how API dispatching worked before this commit:
- The generated `msgpack_rpc_dispatch` function receives a `msgpack_packer`
argument.
- The response is incrementally built while validating/calling the API.
- Return values/errors are also packed into the `msgpack_packer` while the
final response is being calculated.
Now the `msgpack_packer` argument is no longer provided, and the
`msgpack_rpc_dispatch` function returns `Object`/`Error` values to
`msgpack_rpc_call`, which will use those values to build the response in a
single pass.
This was done because the new `channel_send_call` function created the
possibility of having recursive API invocations, and this wasn't possible when
sharing a single `msgpack_sbuffer` across call frames (it was shared implicitly
through the `msgpack_packer` instance).
Since we only start to build the response when the necessary information has
been computed, it's now safe to share a single `msgpack_sbuffer` instance
across all channels and API invocations.
Some other changes also had to be performed:
- Handling of the metadata discovery was moved to `msgpack_rpc_call`
- Expose more types as subtypes of `Object`; this was required to forward the
return value from `msgpack_rpc_dispatch` to `msgpack_rpc_call`
- Added more helper macros for casting API types to `Object`
Modify gendeclarations.lua to check if the generated non-static declaration
header changed before rewriting it with a new version. This is to prevent
unnecessary rebuilds of modules that depend on modules that had private changes.
- The 'stripdecls.py' script replaces declarations in all headers with includes
of the generated headers.
`ag '#\s*if(?!ndef NEOVIM_).*((?!#\s*endif).*\n)*#ifdef INCLUDE_GENERATED'`
was used for this.
- Add and integrate gendeclarations.lua into the build system to generate the
required includes.
- Add -Wno-unused-function
- Made a bunch of old-style definitions ANSI
This adds a requirement: all type and structure definitions must be present
before the INCLUDE_GENERATED_DECLARATIONS-protected include.
Warning: mch_expandpath (path.h.generated.h) was moved manually. So far it is
the only exception.
I hadn't spotted that the `sh -e` command line was being used. I *think* this
is what's causing the exit 0 line not to run. Pray for success.
It's a real shame I can't test this locally, what a mess.
Run only on push to branch coverity-scan. We can use a cron script to do
this 4 times a week (that's our allowance).
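A cron entry along these lines would stay within that allowance (a hypothetical
sketch; the path and schedule are illustrative):

    # Push master to the coverity-scan branch at 03:00 on Sun/Mon/Wed/Fri,
    # i.e. 4 Coverity runs per week.
    0 3 * * 0,1,3,5  cd /path/to/neovim && git push origin master:coverity-scan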
NOTE: possible future improvements are:
1. Fold the build matrix item into another short one so we don't overburden
travis. It's a little less clear but it should be nicer on the
infrastructure.
2. Change the security token; one can do that from the coverity admin page.
3. Don't do the naive `make depend`, but use the prebuilt libraries.
It interferes with development activities by breaking your build in the
middle of a refactor. Instead, let's enable -Werror on the Travis CI
builds via a TRAVIS_CI_BUILD option.
- Add an 'expect' utility script that can run simple API tests using clients
developed for any platform.
- Extend travis build matrix to run API tests using the python client and
valgrind.
This script can be used to write API tests without having to manage nvim's
lifetime:
- It starts a single nvim instance listening on a known socket
- Invokes the test runner, which should connect to NEOVIM_LISTEN_ADDRESS
- The nvim instance started by the script provides a `BeforeEachTest` function,
which should be called before each test to reset nvim to a clean state.
- It takes care of shutting down nvim once the tests are finished.
As explained
[here](https://github.com/neovim/neovim/pull/737#issuecomment-43941520), it's
not possible to fully reset nvim to its initial state, but the `BeforeEachTest`
function should be enough for most test cases. Tests requiring a fully clean
nvim instance should take care of starting/stopping nvim.
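Roughly, the lifecycle the script manages looks like this (a simplified sketch,
not the script itself; the exact nvim invocation and socket path are
assumptions):

    # Start a single nvim instance listening on a known socket (hypothetical path).
    export NEOVIM_LISTEN_ADDRESS=/tmp/nvim-api-test.sock
    nvim -u NONE &                      # assumed invocation
    nvim_pid=$!

    # Run the test runner given on the command line; it connects to
    # NEOVIM_LISTEN_ADDRESS and calls BeforeEachTest before every test.
    "$@"
    status=$?

    # Shut nvim down once the tests are finished.
    kill "${nvim_pid}"
    exit "${status}"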
- Leave src as include dir (for includes to recognize 'nvim/' prefix).
- Change subdirectory from src to src/nvim.
- Fix msgpack generation.
- Fix some other paths to new locations.
- Split functions into multiple files in the 'api' subdirectory
- Move/Add more types in the 'api/defs.h' header
- Add more prototypes
- Refactor scripts/msgpack-gen.lua
- Move msgpack modules to 'os' subdirectory
- Build targeting 32-bit with travis
- Code in `before_install`/`after_success` was moved to travis.sh since it
provides greater flexibility for detecting the build matrix environment. This
improves the build speed since we now install only what's necessary.
- Now clint has a dedicated travis worker
Dependencies are now hosted in a GitHub repository, which brings two advantages:
- Improved build time with travis since we no longer have to build each
dependency
- Less chance of build errors due to external servers being down, since GitHub
is now the single point of failure
This adds a lua script which parses the contents of 'api.h' into a metadata
table. After that, it will generate:
- A msgpack blob for the metadata table. This msgpack object contains everything
scripting engines need to generate their own wrappers for the remote API.
- The `msgpack_rpc_dispatch` function, which takes care of validating msgpack
requests, converting arguments to C types and passing control to the
appropriate 'api.h' function. The result is then serialized back to msgpack
and returned to the client.
This approach was used because:
- It automatically modifies `msgpack_rpc_dispatch` to reflect API changes.
- Scripting engines that generate remote call wrappers using the msgpack
metadata will also adapt automatically to API changes
It appears the llvm.org/apt/ repository isn't always reliable. So let's
use the release tarball instead. Also, make using 3.4 conditional, so
we can use clang 3.3 if things still manage to go awry in the
future. Note: using 3.3 means that we won't get leak detection.
I left the logic for using llvm.org/apt/, just in case we want to try using
it again sometime.
This achieves several goals:
* Less reliance on scripts so we have better portability to Windows
(though we still have a ways to go for proper Windows support).
Luajit, luarocks, moonscript, and busted are all installed via CMake
now.
* Trying to make use of pkg-config to get the correct libraries. The
latest libuv is still broken in this regard, but we'll at least be in
a position to use it.
* Allow the use of Ninja or make. The former runs faster in many
environments, and automatically makes use of parallel builds.
This also allows for system installed dependencies--though not through
the Makefile just yet--and adds support for FreeBSD.
This also makes us build libuv and luajit as static libraries only, since
we're only concerned about having static libraries for our bundled
dependencies.
- Valgrind configuration removed
- Fix errors reported by the undefined behavior sanitizer
- Travis will now run two build steps:
- A normal build of a shared library for unit testing (in parallel with gcc)
- A clang build with some sanitizers enabled for integration testing.
After these changes travis will run much faster, while providing valgrind-like
error detection.
Tests will be written using the [moonscript](http://moonscript.org/) language,
a lua 'dialect' that is whitespace-significant and has a syntax similar to
coffeescript. The test framework used is [busted](http://olivinelabs.com/busted/),
a bdd framework for lua/moonscript.
Luajit has a nice ffi module, which lets lua programs link shared libraries and
call their functions without writing any C code.
To take advantage of this fact for testing C functions, a new target was added
to CMakeLists.txt, which compiles neovim as a shared library that is loaded by
the process running the tests.
This commit adds necessary code for downloading and installing a lua package
manager (luarocks) locally. It wasn't added as a subtree because there are quite
a few blobs in its source tree.
Now it checks for the existence of curl after
failing to find wget.
Note that I ended up removing the quotes around $url
when referencing it in the call to wget, since URLs can't have spaces
anyway, and the correct quoting was messy.
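The resulting download logic is roughly the following (a sketch, not the script
verbatim; variable names are illustrative, and $url is left unquoted as
described above):

    if which wget > /dev/null 2>&1; then
      # Prefer wget when it is available.
      wget -O "$out_file" $url
    elif which curl > /dev/null 2>&1; then
      # Fall back to curl when wget is missing.
      curl -L -o "$out_file" $url
    else
      echo "Neither wget nor curl is available" >&2
      exit 1
    fi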
To test, I did
rm -r .deps
make clean
make cmake
make
And it worked.