bcachefs: serialize on cached key in early bucket allocator

bcachefs had a transient bug where freespace_initialized was not
properly being set, which led to unexpected use of the early bucket
allocator at runtime. That issue has since been fixed, but its
existence uncovered a coherency issue in the early bucket allocation
code that is somewhat related to how uncached iterators deal with
the key cache.

The problem itself manifests as occasional failure of generic/113
due to corruption, often seen as a duplicate backpointer or multiple
data types per-bucket error. The immediate cause of the error is a
racing bucket allocation along the lines of the following sequence:

- Task 1 selects key A in bch2_bucket_alloc_early() and schedules out.
- Task 2 selects the same key A, but proceeds to complete the
  allocation and associated I/O, after which it releases the
  open_bucket.
- Task 1 resumes with key A, but does not recognize that the bucket is
  now allocated because the open_bucket was removed from the hash
  when it was released in the previous step.

This generally shouldn't happen because the allocating task updates
the alloc btree key before releasing the bucket. It is not sufficient
in this particular instance, however, because an uncached iterator
for a cached btree doesn't actually lock the key cache slot when no
key exists in the cache for the given position. Thus, since the
allocation side only updates the cached key, nothing serializes the
uncached iters against it: multiple tasks can stumble across the same
alloc key and duplicate the bucket allocation as described above.
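
Not bcachefs code, but the shape of the bug in miniature: a minimal
userspace sketch of the same check-then-act race, where the "is it
free?" test and the allocation are not serialized per bucket. All
names here (bucket, table, alloc_task) are made up for illustration.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct bucket { int free; };

static struct bucket table[4] = { { 1 }, { 1 }, { 1 }, { 1 } };

/* Scan for a free bucket with nothing serializing per-bucket access:
 * the check and the allocation are not atomic, much like the uncached
 * lookup described above. */
static void *alloc_task(void *arg)
{
	for (int i = 0; i < 4; i++) {
		if (table[i].free) {		/* check: bucket looks free */
			usleep(1000);		/* window for the other task to run */
			table[i].free = 0;	/* act: both tasks can get here for i == 0 */
			printf("task %ld allocated bucket %d\n", (long)arg, i);
			return NULL;
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, alloc_task, (void *)1L);
	pthread_create(&b, NULL, alloc_task, (void *)2L);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;	/* typically prints bucket 0 twice: a duplicate allocation */
}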

This is something that probably needs a longer-term fix in the
iterator code. As a short-term fix, close the race through explicit
use of a cached iterator for likely allocation candidates. We don't
want to scan the btree with a cached iterator because that would
unnecessarily pollute the cache. This mitigates cache pollution by
primarily scanning the tree with an uncached iterator, but closes
the race by creating a key cache entry for any prospective slot
prior to the bucket allocation attempt (also similar to how
_alloc_freelist() works via try_alloc_bucket()). This survives many
iterations of generic/113 on a kernel hacked to always use the early
bucket allocator.
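
The shape of the fix, again as a hedged userspace sketch rather than
bcachefs code: keep the cheap, unserialized scan to find a candidate,
but take a per-bucket lock and re-check the state before actually
allocating. In the patch below, the role of the per-bucket lock is
played by the key cache entry obtained with BTREE_ITER_CACHED, and
the re-check is the second bch2_alloc_to_v4()/BCH_DATA_free test; the
names here (bucket, table, try_alloc, alloc_bucket) are made up.

#include <pthread.h>
#include <stdbool.h>

struct bucket {
	pthread_mutex_t lock;	/* stands in for the key cache slot */
	int free;
};

static struct bucket table[4] = {
	{ PTHREAD_MUTEX_INITIALIZER, 1 }, { PTHREAD_MUTEX_INITIALIZER, 1 },
	{ PTHREAD_MUTEX_INITIALIZER, 1 }, { PTHREAD_MUTEX_INITIALIZER, 1 },
};

/* Serialize on one candidate bucket and re-check it before allocating. */
static bool try_alloc(int i)
{
	bool ok;

	pthread_mutex_lock(&table[i].lock);
	ok = table[i].free;		/* re-check under the per-bucket lock */
	if (ok)
		table[i].free = 0;
	pthread_mutex_unlock(&table[i].lock);
	return ok;
}

static int alloc_bucket(void)
{
	/* Cheap, unserialized scan picks a likely candidate... */
	for (int i = 0; i < 4; i++) {
		if (!table[i].free)
			continue;
		/* ...but the allocation itself is serialized and re-checked. */
		if (try_alloc(i))
			return i;
	}
	return -1;
}

int main(void)
{
	return alloc_bucket() >= 0 ? 0 : 1;
}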

Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Author: Brian Foster, 2023-11-01 15:02:44 -04:00
Committer: Kent Overstreet
parent f82755e4e8
commit 385a82f62a

@@ -399,12 +399,22 @@ bch2_bucket_alloc_early(struct btree_trans *trans,
 					struct bucket_alloc_state *s,
 					struct closure *cl)
 {
-	struct btree_iter iter;
-	struct bkey_s_c k;
+	struct btree_iter iter, citer;
+	struct bkey_s_c k, ck;
 	struct open_bucket *ob = NULL;
 	u64 alloc_start = max_t(u64, ca->mi.first_bucket, ca->new_fs_bucket_idx);
 	u64 alloc_cursor = max(alloc_start, READ_ONCE(ca->alloc_cursor));
 	int ret;
+
+	/*
+	 * Scan with an uncached iterator to avoid polluting the key cache. An
+	 * uncached iter will return a cached key if one exists, but if not
+	 * there is no other underlying protection for the associated key cache
+	 * slot. To avoid racing bucket allocations, look up the cached key slot
+	 * of any likely allocation candidate before attempting to proceed with
+	 * the allocation. This provides proper exclusion on the associated
+	 * bucket.
+	 */
 again:
 	for_each_btree_key_norestart(trans, iter, BTREE_ID_alloc, POS(ca->dev_idx, alloc_cursor),
 			   BTREE_ITER_SLOTS, k, ret) {
@@ -419,13 +429,25 @@ again:
 			continue;
 
 		a = bch2_alloc_to_v4(k, &a_convert);
-
 		if (a->data_type != BCH_DATA_free)
 			continue;
 
+		/* now check the cached key to serialize concurrent allocs of the bucket */
+		ck = bch2_bkey_get_iter(trans, &citer, BTREE_ID_alloc, k.k->p, BTREE_ITER_CACHED);
+		ret = bkey_err(ck);
+		if (ret)
+			break;
+
+		a = bch2_alloc_to_v4(ck, &a_convert);
+		if (a->data_type != BCH_DATA_free)
+			goto next;
+
 		s->buckets_seen++;
 
 		ob = __try_alloc_bucket(trans->c, ca, k.k->p.offset, watermark, a, s, cl);
+next:
+		citer.path->preserve = false;
+		bch2_trans_iter_exit(trans, &citer);
 		if (ob)
 			break;
 	}