weixin_39593469
2020-12-08 19:02

Head tracker has unsafe API wrt concurrency

Description

Functions like head: https://github.com/sigp/lighthouse/blob/3cfd70d7fd290a9f293d6d869bade90aba6c95ae/beacon_node/beacon_chain/src/beacon_chain.rs#L475-L485

or heads: https://github.com/sigp/lighthouse/blob/3cfd70d7fd290a9f293d6d869bade90aba6c95ae/beacon_node/beacon_chain/src/beacon_chain.rs#L509-L513

or head_info: https://github.com/sigp/lighthouse/blob/3cfd70d7fd290a9f293d6d869bade90aba6c95ae/beacon_node/beacon_chain/src/beacon_chain.rs#L488-L506

expose an API that is unsafe with respect to concurrent modifications of the head tracker.

Version

3cfd70d7fd290a9f293d6d869bade90aba6c95ae

Present Behaviour

Fork pruning deletes entries from the head tracker in a separate thread. A head may therefore be removed from the head tracker while it is still being used by another thread, which is a race condition.

Expected Behaviour

Functions like head(), heads(), and head_info() should hold a read lock on the head tracker for as long as its contents are in use, by returning lock guards instead of simply copying Hash256 values.

Steps to resolve

Return lock guards from the mentioned functions and fix all the resulting compilation errors.
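
As a rough illustration of the proposed change, here is a minimal sketch assuming parking_lot's RwLock and a simplified stand-in for the real HeadTracker type (not Lighthouse's actual API):

```rust
use parking_lot::{RwLock, RwLockReadGuard};
use std::collections::HashMap;

// Simplified stand-ins for the real types; illustrative only.
type Hash256 = [u8; 32];
type Slot = u64;

pub struct HeadTracker(RwLock<HashMap<Hash256, Slot>>);

impl HeadTracker {
    // Current-style API: copies the entries out, so another thread can
    // prune a head while the caller is still acting on its stale copy.
    pub fn heads_copied(&self) -> Vec<(Hash256, Slot)> {
        self.0.read().iter().map(|(root, slot)| (*root, *slot)).collect()
    }

    // Proposed-style API: the returned guard keeps the read lock held for
    // as long as the caller uses it, so pruning (which needs the write
    // lock) cannot remove heads out from under the caller.
    pub fn heads(&self) -> RwLockReadGuard<'_, HashMap<Hash256, Slot>> {
        self.0.read()
    }
}
```

Callers would then need to drop the guard promptly, since holding it blocks the write access that pruning needs.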

This question originates from the open-source project: sigp/lighthouse

4 replies

  • weixin_39628945 5 months ago

    The Problem

    While investigating this I've realised that the concurrency issues go beyond just the head tracker, and that "fixing the head tracker" alone is not feasible or desirable.

    The two main places where we update the head tracker are:

    1. When importing blocks (import_block) -- we add the new block to the head tracker and remove its parent (which may or may not be present; the parent could be an older, non-head block):

    https://github.com/sigp/lighthouse/blob/c9596fcf0ee3a17bfb1df48e23ed6339c3791457/beacon_node/beacon_chain/src/beacon_chain.rs#L1586-L1587

    2. When pruning (prune_abandoned_forks) -- we delete all the heads that were pruned from the database.

    https://github.com/sigp/lighthouse/blob/c9596fcf0ee3a17bfb1df48e23ed6339c3791457/beacon_node/beacon_chain/src/migrate.rs#L243-L245

    The issue Adam highlighted is definitely a problem: a beacon node could interleave updating a head (via block import) with pruning heads, which could leave a head in the head tracker whose parent (and abandoned ancestors) have been pruned from the database. The real issue, though, is concurrent database access, as the following possible execution shows:

    1. Start processing a block B2 that has parent B1.
    2. Before processing of B2 completes, pruning deletes B1 and its associated data from the on-disk database.
    3. Processing of B2 completes, which results in B2 and its state being stored in the database.

    End result: the invariant that a block being in the DB implies its parent is also in the DB is violated (because B2 is in the DB but B1 isn't). The atomicity of the individual writes of (2) and (3) does nothing to help us, because the writes are only related semantically by this invariant, and don't actually conflict.
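
    A toy reproduction of this trace (plain Rust, not Lighthouse code; the names and sleeps are only there to force the problematic interleaving):

    ```rust
    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};
    use std::thread;
    use std::time::Duration;

    fn main() {
        // Toy "database": starts out containing B1, the parent of B2.
        let db = Arc::new(Mutex::new(HashSet::from(["B1"])));

        let import = {
            let db = Arc::clone(&db);
            thread::spawn(move || {
                // (1) Start processing B2: its parent B1 is present right now.
                if db.lock().unwrap().contains("B1") {
                    // ...long block processing, during which no lock is held...
                    thread::sleep(Duration::from_millis(50));
                    // (3) Processing completes: store B2 (an atomic write).
                    db.lock().unwrap().insert("B2");
                }
            })
        };
        let prune = {
            let db = Arc::clone(&db);
            thread::spawn(move || {
                // (2) Pruning deletes B1 while B2 is still being processed.
                thread::sleep(Duration::from_millis(10));
                db.lock().unwrap().remove("B1");
            })
        };
        import.join().unwrap();
        prune.join().unwrap();

        // End state: B2 is in the DB but its parent B1 is not, even though
        // every individual read and write above was atomic.
        let db = db.lock().unwrap();
        println!("B2 in DB: {}, B1 in DB: {}", db.contains("B2"), db.contains("B1"));
    }
    ```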

    Using a Big Lock

    A simple solution would be to wrap block importing and pruning in one big lock, so that interleaving of their internal steps becomes impossible, something like:

    
    import_block:
      - Lock A
      - Write new block to database
      - Write block to head tracker
      - Unlock A
    
    prune_abandoned_heads:
      - Lock A
      - Delete abandoned heads from database
      - Delete abandoned heads from the head tracker
      - Unlock A
    

    This prevents the consistency issue by making the execution trace above impossible. However, it is very coarse-grained and carries substantial risk: no blocks can be imported while pruning runs, so if pruning takes an extended period the node stalls until it completes.
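
    In Rust terms, this big-lock approach might look roughly like the following (a minimal sketch with hypothetical types; the real code paths are import_block in beacon_chain.rs and the pruning logic in migrate.rs):

    ```rust
    use std::sync::{Arc, Mutex};

    // Hypothetical container for everything "Lock A" protects.
    struct ChainData { /* database handle + head tracker */ }

    #[derive(Clone)]
    struct Chain(Arc<Mutex<ChainData>>);

    impl Chain {
        fn import_block(&self /* , block: ... */) {
            let _a = self.0.lock().unwrap(); // Lock A
            // write the new block to the database
            // write the block to the head tracker
        }                                    // Unlock A (guard dropped)

        fn prune_abandoned_heads(&self) {
            let _a = self.0.lock().unwrap(); // Lock A
            // delete abandoned heads from the database
            // delete abandoned heads from the head tracker
        }                                    // Unlock A (guard dropped)
    }
    ```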

    The coarse-grained locking is the issue here, and prohibits perfectly safe executions like this:

    1. Start processing a block on the canonical chain (or some viable fork), B
    2. Start pruning the database, deleting lots of non-viable forks, but no data related to B
    3. Finish processing B, store it in the database

    Other Solutions

    I thought a bit about a data-structure for head-tracking that allows each head to be locked/mutated/deleted individually, but this still doesn't solve the database consistency issue.

    The database consistency issue might be best addressed by using a read-write transaction that fails if the data it read has been deleted by the time it commits. This sort of transaction isn't available in LevelDB, but might be in LMDB/RocksDB (which raises the question of #483). Another complication is caching: if the transaction doesn't need to read the parent block/state because it is cached, then it might not conflict "correctly" (though I think that is easy to work around). More research and thought is required, and this comment is getting long enough... TBC
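
    As a rough illustration of the conflict-detecting idea, here is a minimal sketch that re-checks the parent at commit time, using an in-memory map and invented names rather than any real LevelDB/LMDB/RocksDB API:

    ```rust
    use std::collections::HashMap;
    use std::sync::Mutex;

    type Hash256 = [u8; 32];

    struct Db {
        blocks: Mutex<HashMap<Hash256, Vec<u8>>>,
    }

    #[derive(Debug)]
    enum CommitError {
        ParentPruned,
    }

    impl Db {
        /// Commit `block` under `root` only if `parent_root` is still present.
        /// The re-check and the write happen under a single lock, so a prune
        /// cannot slip in between them; a real implementation would rely on
        /// the database's own conflict-detecting transaction instead of a Mutex.
        fn commit_block_if_parent_exists(
            &self,
            root: Hash256,
            parent_root: Hash256,
            block: Vec<u8>,
        ) -> Result<(), CommitError> {
            let mut blocks = self.blocks.lock().unwrap();
            if !blocks.contains_key(&parent_root) {
                return Err(CommitError::ParentPruned);
            }
            blocks.insert(root, block);
            Ok(())
        }
    }
    ```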

  • weixin_39628945 5 months ago

    It's worth noting that these concurrency issues are quite unlikely to occur in practice: usually if a head is about to be pruned, then block proposers have stopped producing blocks on top of it. But this isn't guaranteed, and the consequences (being unable to prune your DB and having to resync) are quite severe.

  • weixin_39924179 5 months ago

    Thanks for the analysis. I agree that this is an important issue that must eventually be addressed, but I don't think we should aim to complete it before the October security reviews, for the following reasons:

    • It's not trivial and we are tight on resources prior to audit.
    • It's an edge case that we're unlikely to see occur often.

    As such, I'm going to change this to A1.

  • weixin_39628945 5 months ago

    I think a manifestation of this bug was observed on Spadina, where it broke the pruning algorithm. The flow seems to be:

    1. Pruning starts, deletes blocks + states from the database (transaction commits)
    2. Process shuts down due to Ctrl-C
    3. The head tracker is left un-updated
    4. Process restarts, head tracker contains deleted head
    5. All subsequent pruning runs fail because the pruned heads still listed in the head tracker can no longer be found in the database
