medianDepth for new feature initialization

In the image update:

// Get new features
if(filterState.fsm_.getValidCount() < startDetectionTh_*mtState::nMax_){
  // Compute the median depth parameters for each camera, using the state features.
  std::array<double, mtState::nCam_> medianDepthParameters;
  if(maxUncertaintyToDepthRatioForDepthInitialization_ > 0){
    filterState.getMedianDepthParameters(initDepth_, &medianDepthParameters, maxUncertaintyToDepthRatioForDepthInitialization_);
  } else {
    medianDepthParameters.fill(initDepth_);
  }
  // ... new features are then detected and initialized with these depth parameters ...
}
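For reference, here is a minimal standalone sketch of what a per-camera median depth computation could look like (a hypothetical helper for illustration, not rovio's actual getMedianDepthParameters, which additionally filters features by their uncertainty-to-depth ratio):

#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Hypothetical helper: median depth per camera from the depths of currently
// valid features, with a fallback to a default depth for cameras that have
// no usable features.
template<std::size_t nCam>
std::array<double, nCam> computeMedianDepths(
    const std::array<std::vector<double>, nCam>& featureDepths,
    double defaultDepth) {
  std::array<double, nCam> medians;
  for (std::size_t c = 0; c < nCam; ++c) {
    std::vector<double> depths = featureDepths[c];  // copy: nth_element reorders
    if (depths.empty()) {
      medians[c] = defaultDepth;
    } else {
      auto mid = depths.begin() + depths.size() / 2;
      std::nth_element(depths.begin(), mid, depths.end());
      medians[c] = *mid;
    }
  }
  return medians;
}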

Two things would give a better initialization estimate when a new frame loses too many features (sketched below):
- Compute medianDepthParameters on every frame rather than only on frames where the number of tracked features is low, since frames with many tracked features give a better initialization estimate.
- Assume some smoothness in the motion and use the previous frame's median for the current frame's depth initialization.
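A minimal sketch of this strategy, assuming the image update runs once per frame (all names here are illustrative, not the rovio API): the cached medians are refreshed on every frame that still tracks enough features, and frames that suddenly lose features fall back to the last good values.

#include <array>
#include <cstddef>

// Hypothetical cache of the last reliable per-camera median depth parameters.
template<std::size_t nCam>
class MedianDepthCache {
 public:
  explicit MedianDepthCache(double initDepth) { lastGoodMedians_.fill(initDepth); }

  // Call once per image update, for every frame (not only low-feature frames).
  void update(const std::array<double, nCam>& currentMedians,
              int validCount, int minValidCount) {
    if (validCount >= minValidCount) {
      // Frame with many tracked features: its medians are trustworthy.
      lastGoodMedians_ = currentMedians;
    }
    // Otherwise keep the previous frame's medians (motion smoothness assumption).
  }

  // Use these when initializing the depth of newly detected features.
  const std::array<double, nCam>& mediansForInitialization() const {
    return lastGoodMedians_;
  }

 private:
  std::array<double, nCam> lastGoodMedians_;
};

With such a cache, even when filterState.fsm_.getValidCount() drops sharply, new features are initialized from the previous well-tracked frame's medians instead of from the current degenerate feature set, breaking the divergence cycle described below.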

This handles the vicious cycle where most features are suddenly lost in the current frame, so its median depth is not a good initializer; that leads to bad depth initialization for all new features, which makes most of them fail tracking in the subsequent imgUpdate, and so on until the filter diverges. This happened, for example, when I was about to land close to the ground. I tested this strategy and it works better than the one we currently have.

This question comes from the open-source project ethz-asl/rovio.
