loop->children might have been a linked list because it used to be
modified in place while being looped over. However, the loops that exist
now instead schedule events to be processed later, outside of the loop,
so this can no longer happen.
When a linked list is otherwise useful, it is better to use
lib/queue_defs.h, which defines an _intrusive_ linked list (i.e. it
doesn't need to do allocations for list items like klist does).
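
For illustration, a minimal sketch of what "intrusive" means here (generic
code, not the actual queue_defs.h macros): the link node is embedded in the
item itself, so putting an item on a list requires no extra allocation.

    #include <stddef.h>

    // Generic illustration; the real macros live in lib/queue_defs.h.
    typedef struct IntrusiveNode {
      struct IntrusiveNode *prev, *next;
    } IntrusiveNode;

    typedef struct {
      int fd;               // payload
      IntrusiveNode node;   // the link lives inside the item: no separate list cell
    } Watcher;

    // Recover the containing item from a pointer to its embedded node.
    #define NODE_DATA(ptr, type, field) \
      ((type *)((char *)(ptr) - offsetof(type, field)))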
Coverity warns about a possible null pointer dereference in the `memcpy`
call in `kv_concat_len`. The `memcpy` follows `kv_ensure_space` which
(re)allocates the `items` pointer if the vector's capacity is not large
enough to contain all of the items being appended. The only way `items`
would be NULL at this point is if `capacity` were mistakenly set to some
large number without `items` ever having been set in the first place.
This should not happen when using the kvec API, so if this condition is
ever false it is a bug, which the `assert` will catch.
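
A simplified sketch of the guarded macro (the exact body in kvec.h may differ
slightly):

    #include <assert.h>
    #include <string.h>

    #define kv_concat_len(v, data, len) \
      do { \
        if ((len) > 0) { \
          kv_ensure_space(v, len);    /* may (re)allocate (v).items */ \
          assert((v).items != NULL);  /* nonzero capacity implies a live allocation */ \
          memcpy((v).items + (v).size, (data), sizeof((v).items[0]) * (len)); \
          (v).size += (len); \
        } \
      } while (0)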
FUNC_ATTR_* should only be used in .c files with generated headers.
Defining FUNC_ATTR_* as empty in headers causes misuses of them to be
silently ignored. Instead don't define them by default, and only define
them as empty after a .c file has included its generated header.
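
Roughly, the intended pattern looks like this (the file names and the exact
wiring in the header generator are illustrative, not a description of the
actual build machinery):

    /* foo.h: must not use FUNC_ATTR_*; since they are now undefined here, a
     * misuse fails to compile instead of being silently ignored. */
    char *find_thing(const char *name);

    /* foo.c */
    #ifdef INCLUDE_GENERATED_DECLARATIONS
    # include "foo.c.generated.h"  /* generated declarations carry the real
                                      attributes; only after this point are
                                      FUNC_ATTR_* defined (as empty) */
    #endif

    char *find_thing(const char *name)
      FUNC_ATTR_NONNULL_ALL FUNC_ATTR_WARN_UNUSED_RESULT  /* expand to nothing here */
    {
      return xstrdup(name);
    }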
We already have an extensive suite of static analysis tools we use,
which causes a fair bit of redundancy, as we get duplicate warnings. PVS
is also prone to false positives, which creates a lot of work to
identify and disable them.
This removes the previous restriction that nvim_buf_set_extmark()
could not be used to highlight arbitrary multi-line regions.
The problem can be summarized as follows: let's assume an extmark with a
hl_group is placed covering the region (5,0) to (50,0). Now consider
what happens if nvim needs to redraw a window covering the lines 20-30.
It needs to be able to ask the marktree what extmarks cover this region,
even if they don't begin or end here.
Therefore the marktree needs to be augmented with information about what
marks cover a point, not just what marks begin or end there. To do this,
we augment each node with a field "intersect", which is the set of ids
of marks which overlap this node, but only if the id is not already part
of the set of any parent. This ensures the number of nodes that need to
be explicitly marked grows only logarithmically with the total number of
nodes (and thus with the number of overlapping marks).
Thus we can quickly iterate over all marks which overlap any query
position by looking up which leaf node contains that position. Then we
only need to consider all "start" marks within that leaf node, plus the
"intersect" sets of that node and all its parents.
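
A minimal sketch of that query (the type and helper names are illustrative,
not the actual marktree API):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct { int row, col; } ExamplePos;
    typedef struct { uint64_t id; ExamplePos start, end; } ExampleMark;
    typedef struct ExampleNode {
      struct ExampleNode *parent;
      int n;                      // marks starting in this node
      ExampleMark key[32];
      int n_intersect;            // ids overlapping this node but not any parent
      uint64_t intersect[32];
    } ExampleNode;

    static bool pos_le(ExamplePos a, ExamplePos b)
    {
      return a.row < b.row || (a.row == b.row && a.col <= b.col);
    }

    // Collect ids of all marks overlapping `pos`, given the leaf containing it.
    static size_t marks_overlapping(const ExampleNode *leaf, ExamplePos pos,
                                    uint64_t *out)
    {
      size_t len = 0;
      // 1. marks starting in this leaf whose range covers pos
      for (int i = 0; i < leaf->n; i++) {
        if (pos_le(leaf->key[i].start, pos) && pos_le(pos, leaf->key[i].end)) {
          out[len++] = leaf->key[i].id;
        }
      }
      // 2. the "intersect" sets of the leaf and all of its ancestors
      for (const ExampleNode *node = leaf; node != NULL; node = node->parent) {
        for (int j = 0; j < node->n_intersect; j++) {
          out[len++] = node->intersect[j];
        }
      }
      return len;
    }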
Now, the major source of complexity is that the tree restructuring
operations (to ensure that each node has T-1 <= size <= 2*T-1) also need
to update these sets. If a full inner node is split in two, one of the
new parents might start to completely overlap some ranges, and their ids
will need to be moved from its children's sets to its own set.
Similarly, if two undersized nodes get joined into one, the joined node
might no longer completely overlap some ranges, and the children which
still do need to have those ids in their sets instead. And then there
are the pivots! Yes, the pivot operations, when a child gets moved from
one parent to another.
This involves two redesigns of the map.c implementation:
1. Change of macro style and code organization
The old khash.h and map.c implementation used huge #define blocks with a
lot of backslash line continuations.
This instead uses the "implementation file" .c.h pattern. Such a file is
meant to be included multiple times, with different macros set prior to
inclusion as parameters. We already use this pattern e.g. for
eval/typval_encode.c.h to implement different typval encoders reusing a
similar structure.
We can structure this code into two parts: one that only depends on the
key type and is enough to implement sets, and one which depends on both
key and value to implement maps (as a wrapper around sets, with an added
value[] array).
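
As a sketch (file and macro names are illustrative, not the actual map.c
parameters), the key-only layer is instantiated once per key type like this:

    /* map.c (sketch): instantiate the key-only "set" layer by including an
     * implementation file with parameter macros set beforehand. */
    #define KEY_T int
    #define NAME(x) x##_int
    #include "set_impl.c.h"   /* emits Set_int, set_put_int(), set_has_int(), ... */
    #undef KEY_T
    #undef NAME

    #define KEY_T cstr_t
    #define NAME(x) x##_cstr
    #include "set_impl.c.h"   /* emits Set_cstr, set_put_cstr(), ... */
    #undef KEY_T
    #undef NAME

    /* The key+value "map" layer is then a thin wrapper around a set of keys
     * plus a parallel value[] array indexed by the slot the set hands back. */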
2. Separate the main hash buckets from the key / value arrays
Change the hash buckets to only contain an index into separate key /
value arrays.
This is a common pattern in modern, state of the art hashmap
implementations. Even though this leads to one more allocated array, it
is often a net reduction of memory consumption. Consider key+value
consuming at least 12 bytes per pair; on average, we will have twice as
many buckets as items.
Thus, per item:
    old implementation: 2*12       = 24 bytes
    new implementation: 1*12 + 2*4 = 20 bytes
And the difference gets bigger with larger items.
One might think we have pulled a fast one here, as wouldn't the average size of
the new key/value arrays be 1.5 slots per item due to amortized growth?
But remember, these arrays are fully dense, and thus the accessed memory,
measured in _cache lines_, the unit which actually matters, will be the
fully used memory but just rounded up to the nearest cache line
boundary.
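
In outline, the layout looks like this (illustrative, not the actual map.c
structs):

    #include <stdint.h>

    typedef struct {
      uint32_t *hash;      // buckets: 0 = empty, else 1-based index into keys/vals
      uint32_t n_buckets;  // kept at roughly twice n_items, hence the 2*4 bytes above
      uint32_t n_items;    // keys[0..n_items) and vals[0..n_items) are fully dense
      int *keys;           // dense key array, in insertion order (until a delete)
      uint64_t *vals;      // parallel dense value array (absent for a Set);
                           // int key + uint64_t value = the 12 bytes per pair above
    } ExampleMap;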
This has some other interesting properties, such as that an insert-only
set/map will be fully ordered by insertion. Preserving this ordering in
the face of deletions is more tricky, though. As we currently don't use
ordered maps, the "delete" operation maintains compactness of the item
arrays in the simplest way, by breaking the ordering. It would be
possible to implement an order-preserving delete, although at some cost,
like allowing the items array to become non-dense until the next rehash.
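
Reusing the ExampleMap sketch above, the compacting (order-breaking) delete is
essentially:

    // Remove the item at dense index `idx` by moving the last item into the hole.
    // Illustrative: the real code must also redirect the bucket that pointed at
    // `last` and clear the bucket of the removed key.
    static void example_del(ExampleMap *m, uint32_t idx)
    {
      uint32_t last = m->n_items - 1;
      if (idx != last) {
        m->keys[idx] = m->keys[last];  // this move is what breaks insertion order
        m->vals[idx] = m->vals[last];
      }
      m->n_items--;
    }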
Finally, in the face of these two major changes, all code used from khash.h
has been integrated into map.c and friends. Given the heavy edits it
makes no sense to "layer" the code into a vendored and a wrapper part.
Rather, the layered cake follows the specialization depth: code shared
for all maps, code specialized to a key type (and its equivalence
relation), and finally code specialized to value+key type.
This reduces the total number of khash_t instantiations from 22 to 8.
Make the khash internal functions take the size of values as a runtime
parameter. This is abstracted with typesafe Map containers which
are still specialized for both key and value type.
Introduce `Set(key)` type for when there is no value.
Refactor shada.c to use Map/Set instead of khash directly.
This requires the `map_ref` operation to be more flexible: return
pointers to both key and value, plus an indicator for new_item.
As a bonus, `map_key` is now redundant.
Instead of Map(cstr_t, FileMarks), use a pointer map as the FileMarks struct is
humongous.
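
A hypothetical usage sketch of the more flexible map_ref (the signature and
the `file_marks_map`/`fname` names are assumptions based on the description
above, not the actual shada.c code):

    bool new_item = false;
    cstr_t *key = NULL;
    // hypothetical signature: returns the value slot, hands back the key slot
    // and whether the entry was newly inserted
    ptr_t *val = map_ref(cstr_t, ptr_t)(&file_marks_map, fname, &key, &new_item);
    if (new_item) {
      *key = xstrdup(fname);                  // the map owns its own copy of the key
      *val = xcalloc(1, sizeof(FileMarks));   // pointer map: the huge struct lives elsewhere
    }
    FileMarks *fm = *val;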
Make `event_strings` actually work like an intern pool instead of wtf it
was doing before.
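
For reference, "intern pool" here means: store each distinct string once and
always hand back the stored pointer. A sketch with a hypothetical set helper:

    // Illustrative only; set_put_ref_example() stands in for whatever map.h provides.
    const char *intern(Set(cstr_t) *pool, const char *name)
    {
      bool new_item = false;
      const char **slot = set_put_ref_example(pool, name, &new_item);
      if (new_item) {
        *slot = xstrdup(name);  // the pool keeps its own copy
      }
      return *slot;             // every caller gets the same pooled pointer
    }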
libnvim couldn't be easily used in C++ due to the use of reserved keywords.
Additionally, add explicit casts to *alloc function calls used in inline
functions, as C++ doesn't allow implicit conversions from void pointers.
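
For example, an inline helper in a public header now spells the conversion out
(the helper here is illustrative):

    #include <string.h>

    // Implicit conversion from void * is valid C but an error in C++, hence the cast.
    static inline char *copy_string(const char *s, size_t len)
    {
      char *p = (char *)xmalloc(len + 1);  // explicit cast keeps C++ consumers happy
      memcpy(p, s, len);
      p[len] = '\0';
      return p;
    }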