[Solved] HigherHRNet reproduction issue: tensorboard Error
if not cfg.MULTIPROCESSING_DISTRIBUTED or (
        cfg.MULTIPROCESSING_DISTRIBUTED
        and args.rank % ngpus_per_node == 0
):
    dump_input = torch.rand(
        (1, 3, cfg.DATASET.INPUT_SIZE, cfg.DATASET.INPUT_SIZE)
    )
    # writer_dict['writer'].add_graph(model, (dump_input, ))
    # logger.info(get_model_summary(model, dump_input, verbose=cfg.VERBOSE))
Solution: as shown in the snippet above, commenting out the following line in dist_train.py resolves the error:

writer_dict['writer'].add_graph(model, (dump_input, ))
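If you would rather keep graph logging when it happens to work, a minimal sketch of a defensive alternative is to wrap the call in try/except so a failed graph trace is reported and skipped instead of crashing training. `safe_add_graph` is a hypothetical helper name, not part of HigherHRNet; the stand-in writer below only mimics the failure and is not the real SummaryWriter.

```python
def safe_add_graph(writer, model, dump_input):
    """Try to export the model graph to TensorBoard; skip on failure.

    `writer` is expected to behave like torch.utils.tensorboard.SummaryWriter.
    Returns True if the graph was written, False if add_graph raised.
    """
    try:
        writer.add_graph(model, (dump_input,))
        return True
    except Exception as exc:
        # add_graph relies on tracing and commonly fails on models whose
        # forward returns lists/dicts, as reported for HigherHRNet.
        print(f"Skipping add_graph: {exc}")
        return False


# Demo with a stand-in writer whose add_graph always raises,
# mimicking the tracing error seen here.
class FailingWriter:
    def add_graph(self, model, inputs):
        raise RuntimeError("tracing failed")

assert safe_add_graph(FailingWriter(), None, None) is False
```

In dist_train.py you would call `safe_add_graph(writer_dict['writer'], model, dump_input)` in place of the commented-out line; the rest of training proceeds either way.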