dongwei3120 2016-11-08 06:12
46 views
Accepted

HDFS: excluding datanodes in AddBlockRequestProto

I am implementing datanode failover for writes in HDFS, so that HDFS can still write a block when the first datanode in the block's pipeline fails.

The algorithm is: first, the failed datanode is identified; then, a new block is requested. The HDFS protobuf API provides an excludeNodes field, which I use to tell the namenode not to allocate the new block on those nodes. failedDatanodes holds the identified failed datanodes, and the logs confirm they are correct.

// Ask the namenode for a new block, placed away from the failed datanodes.
req := &hdfs.AddBlockRequestProto{
    Src:          proto.String(bw.src),
    ClientName:   proto.String(bw.clientName),
    ExcludeNodes: failedDatanodes,
}
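
For context, ExcludeNodes is a repeated DatanodeInfoProto field, so failedDatanodes is a slice of the datanode entries taken from the previously located block. A rough sketch of how it might be populated (block and failedIndex are hypothetical names standing in for however the failure was detected):

var failedDatanodes []*hdfs.DatanodeInfoProto
for i, loc := range block.GetLocs() { // block: the LocatedBlockProto from the earlier addBlock call
    if i == failedIndex { // failedIndex: position of the datanode that failed during the write
        failedDatanodes = append(failedDatanodes, loc)
    }
}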

But the namenode still allocates the block on the failed datanodes.

Does anyone know why? Did I miss anything here? Thank you.


1 answer

  • dougou1943 2016-11-11 18:29

    I found the solution: first abandon the block, then request the new block. In my previous design, the newly requested block could not replace the old one. A rough sketch of this two-step flow follows below.

    This answer was accepted by the asker as the best answer.
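
    For anyone hitting the same problem, here is a rough sketch of the abandon-then-request flow under the same assumptions as the question's snippet. failedBlock stands for the *hdfs.ExtendedBlockProto of the block being given up, failedDatanodes for the excluded nodes, and namenode for the client's namenode RPC connection (for example, the Execute helper in colinmarc/hdfs); adapt these names to however bw issues its other RPCs:

    // 1. Tell the namenode to abandon the block whose write pipeline failed.
    abandonReq := &hdfs.AbandonBlockRequestProto{
        B:      failedBlock, // ExtendedBlockProto of the block being given up
        Src:    proto.String(bw.src),
        Holder: proto.String(bw.clientName),
    }
    abandonResp := &hdfs.AbandonBlockResponseProto{}
    if err := namenode.Execute("abandonBlock", abandonReq, abandonResp); err != nil {
        return err
    }

    // 2. Only now request the replacement block, excluding the failed datanodes.
    addReq := &hdfs.AddBlockRequestProto{
        Src:          proto.String(bw.src),
        ClientName:   proto.String(bw.clientName),
        ExcludeNodes: failedDatanodes,
    }
    addResp := &hdfs.AddBlockResponseProto{}
    if err := namenode.Execute("addBlock", addReq, addResp); err != nil {
        return err
    }
    newBlock := addResp.GetBlock() // LocatedBlockProto for the replacement block; continue the write with it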
