dongwei3120
2016-11-08 06:12
39 views

HDFS: excluding datanodes in AddBlockRequestProto

I am implementing datanode failover for writes in HDFS, so that HDFS can still write a block when the first datanode in the block's pipeline fails.

The algorithm is: first, the failed node is identified; then, a new block is requested. The HDFS protobuf API provides ExcludeNodes, which I use to tell the namenode not to allocate the new block on those nodes. failedDatanodes holds the identified failed datanodes, and the logs confirm they are correct.

req := &hdfs.AddBlockRequestProto{
    Src:           proto.String(bw.src),
    ClientName:    proto.String(bw.clientName),
    ExcludeNodes:  failedDatanodes,
}

But the namenode still allocates the block to the failed datanodes.

Does anyone know why? Did I miss anything here? Thank you.


1 answer

  • dougou1943 2016-11-11 18:29
    Accepted answer

    I found the solution: first abandon the block, then request a new one. In the previous design, the newly requested block could not replace the old one.
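
    A minimal sketch of that flow, assuming the same generated protobuf bindings as in the question (hdfs.AbandonBlockRequestProto and hdfs.AddBlockRequestProto) and assuming bw.block holds the LocatedBlockProto returned by the earlier addBlock call; the field names follow the ClientNamenodeProtocol definitions, so adjust them to your generated code:

    // Abandon the partially written block so the namenode forgets it.
    abandonReq := &hdfs.AbandonBlockRequestProto{
        B:      bw.block.GetB(),             // ExtendedBlockProto to abandon (assumed to be cached in bw)
        Src:    proto.String(bw.src),
        Holder: proto.String(bw.clientName),
    }
    // ...send the abandonBlock RPC to the namenode here...

    // Then request a replacement block, excluding the failed datanodes.
    addReq := &hdfs.AddBlockRequestProto{
        Src:          proto.String(bw.src),
        ClientName:   proto.String(bw.clientName),
        ExcludeNodes: failedDatanodes,
    }
    // ...send the addBlock RPC; the namenode can now place the block on healthy datanodes...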

