weixin_39713763
2021-01-08 13:22

AssertionError: Either no mask or multiple masks found for the ID

Hello milesial! Thanks for your code. When I trained with the first dataset I downloaded from the web, there was no problem. But when I train with my own processed CT images I run into this problem: AssertionError: Either no mask or multiple masks found for the ID + name of image.

Following issue #122, I changed the image names to 1image, 2image, 3image, ..., but the same error is still reported. How can I solve this problem? I look forward to your reply.

This question comes from the open-source project: milesial/Pytorch-UNet


11 replies

  • DavidLee97 1 month ago

    Today, I solved the problem.

    In dataset.py:

    line 46: mask_file = glob(self.masks_dir + idx + self.mask_suffix + '.*')
    line 47: img_file = glob(self.imgs_dir + idx + '.*')

    The value of self.mask_suffix should be '_mask'.
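    The lookup those two lines perform can be sketched as follows (a minimal, self-contained sketch; the temporary directory and the '025' ID stand in for the real data/masks/ layout):

    ```python
    import os
    import tempfile
    from glob import glob

    # Illustrative setup: a throwaway masks directory containing one mask
    # named "<id><suffix>.png", mimicking the expected layout.
    masks_dir = tempfile.mkdtemp() + os.sep
    mask_suffix = '_mask'
    idx = '025'
    open(masks_dir + idx + mask_suffix + '.png', 'w').close()

    # The lookup from dataset.py: the '.*' wildcard matches any extension,
    # and the assertion demands exactly one match per ID.
    mask_file = glob(masks_dir + idx + mask_suffix + '.*')
    assert len(mask_file) == 1, \
        f'Either no mask or multiple masks found for the ID {idx}: {mask_file}'
    ```

    If mask_suffix does not match how the mask files are actually named, the glob returns an empty list and the assertion fires with `[]`, exactly as in the error above.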

  • weixin_39792751 3 months ago

    Hi, a few questions:
    - Are you on the latest version? This should have been fixed in https://github.com/milesial/Pytorch-UNet/pull/187
    - Can you put here the full error, including the ID and the mask names?
    - You said you renamed your images; did you also rename your masks to match?

  • weixin_39713763 3 months ago

    Thank you very much for your reply.

    (1) I read issue #187 and learned about your changes to the code in 'dataset.py'.

    (2) At the beginning, I named both the images in img_file and the masks in mask_file 1images.png, 2images.png, ..., but it reported the same error. Later, I renamed the pictures 001.png, 002.png, 003.png, ..., but I ran into the same problem:

    ```
    Traceback (most recent call last):
      File "train.py", line 200, in <module>
        val_percent=args.val / 100)
      File "train.py", line 76, in train_net
        for batch in train_loader:
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
        return self._process_data(data)
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
        data.reraise()
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/_utils.py", line 369, in reraise
        raise self.exc_type(msg)
    AssertionError: Caught AssertionError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 256, in __getitem__
        return self.dataset[self.indices[idx]]
      File "/home/QY/QY/segmentation/unet_code/unet_pytorch/utils/dataset.py", line 50, in __getitem__
        f'Either no mask or multiple masks found for the ID {idx}: {mask_file}'
    AssertionError: Either no mask or multiple masks found for the ID 025: []
    ```

    (3) This afternoon I solved the problem by the following method: I found that my pictures were all 4-channel, so I converted them all to 1 channel and 8 bits. I hope this method can help others. Although I can now use this dataset to start training the network, I don't think I have solved the problem fundamentally, so I still have a question: when I converted all the pictures to 3 channels and also changed the channels in the code to 3 (RGB), the following problem appeared:

    ```
    python train.py
    INFO: Using device cuda
    INFO: Network:
            3 input channels
            1 output channels (classes)
            Bilinear upscaling
    INFO: Creating dataset with 27 examples
    INFO: Starting training:
            Epochs:          5
            Batch size:      1
            Learning rate:   0.1
            Training size:   25
            Validation size: 2
            Checkpoints:     True
            Device:          cuda
            Images scaling:  0.5

    Epoch 1/5:   0%|          | 0/25 [00:00<?, ?img/s]
    Traceback (most recent call last):
      File "train.py", line 200, in <module>
        val_percent=args.val / 100)
      File "train.py", line 91, in train_net
        loss = criterion(masks_pred, true_masks)  # compute the loss between the network output and the labels
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 601, in forward
        reduction=self.reduction)
      File "/home/QY/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py", line 2098, in binary_cross_entropy_with_logits
        raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
    ValueError: Target size (torch.Size([1, 3, 512, 512])) must be the same as input size (torch.Size([1, 1, 512, 512]))
    ```

    Is this problem caused by a discrepancy between the bit depth of the input pictures and the required bit depth?
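    The channel conversion described in (3) can be sketched with Pillow (a minimal sketch: an in-memory image stands in for the real CT slices, and Pillow is an assumption about the tooling rather than part of the original post):

    ```python
    from PIL import Image

    # Collapse a 4-channel (RGBA) image to single-channel 8-bit grayscale,
    # as described above. Mode 'L' is Pillow's one-channel, 8-bit format.
    rgba = Image.new('RGBA', (16, 16), (120, 60, 30, 255))
    gray = rgba.convert('L')

    assert rgba.getbands() == ('R', 'G', 'B', 'A')  # four channels in
    assert gray.getbands() == ('L',)                # one channel out
    ```

    Note that the Target size / input size ValueError above is about the mask, not the image bit depth: the target has 3 channels ([1, 3, 512, 512]) while the network outputs 1 ([1, 1, 512, 512]), so the mask was still being loaded with 3 channels.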

  • weixin_39792751 3 months ago
    • About the filename error, I'm glad you were able to figure it out. It is just about the glob in dataset.py and has nothing to do with the number of channels, as the files are not even read at this stage (if you have an image 025.png, you have to have a mask 025.png (or whatever extension) in the mask folder). https://github.com/milesial/Pytorch-UNet/blob/84f8392b619940bd542dc670761a0a7a1357001d/utils/dataset.py#L46

    • Make sure your masks are loaded as greyscale and 1 channel if you wish to train with 1 output channel. https://github.com/milesial/Pytorch-UNet/issues/149 https://github.com/milesial/Pytorch-UNet/issues/164 https://github.com/milesial/Pytorch-UNet/issues/113
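    That greyscale requirement can be checked with a few lines of Pillow (a sketch; the helper name and the in-memory stand-in mask are illustrative, not part of the repo):

    ```python
    from PIL import Image

    def is_single_channel(mask: Image.Image) -> bool:
        """True if the mask is loaded as 1-channel greyscale (e.g. mode 'L')."""
        return len(mask.getbands()) == 1

    # A 3-channel RGB mask would trip the target-size mismatch seen above;
    # converting it with .convert('L') makes it a valid 1-channel target.
    rgb_mask = Image.new('RGB', (8, 8))
    assert not is_single_channel(rgb_mask)
    assert is_single_channel(rgb_mask.convert('L'))
    ```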

  • weixin_39713763 3 months ago

    Thanks for your reply.

    1. Actually, I'm sure my image 025.png has a corresponding mask 025.png, but this problem still occurs: Either no mask or multiple masks found for the ID 025: []
    2. I read issues #169, #149, and #113, and I learned that my input should be 1 channel because my output is 2 classes.
    3. Also, I have changed my images and masks to 1-channel grayscale, and these images can successfully be fed into the network for training and validation. However, I keep hitting the same problem: the loss value stays the same and the validation Dice coefficient stays close to 0 (1e-8). I looked at issue #173; maybe there is something wrong with my dataset? Do I need to preprocess my data? (My dataset is MR images.)

  • weixin_39792751 3 months ago
    1. Does the mask show up when you run `ls data/masks/025.*`?
    2. Yes, your masks should be 1 channel.
    3. You can merge that issue with #173 and try answering my comment https://github.com/milesial/Pytorch-UNet/issues/173#issuecomment-664591317
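    Beyond checking a single ID by hand, a short script can list every image that lacks a same-stem mask (a sketch; the helper name and the assumption that masks share the image's stem follow the matching rule described above):

    ```python
    import os
    import tempfile

    def find_unmatched(imgs_dir, masks_dir):
        """Return image filenames whose stem has no mask with the same stem."""
        mask_stems = {os.path.splitext(f)[0] for f in os.listdir(masks_dir)}
        return sorted(f for f in os.listdir(imgs_dir)
                      if os.path.splitext(f)[0] not in mask_stems)

    # Illustrative layout: 025.png has a mask, 026.png does not.
    imgs = tempfile.mkdtemp()
    masks = tempfile.mkdtemp()
    for name in ('025.png', '026.png'):
        open(os.path.join(imgs, name), 'w').close()
    open(os.path.join(masks, '025.png'), 'w').close()

    assert find_unmatched(imgs, masks) == ['026.png']
    ```

    Running this over the real imgs/ and masks/ folders would surface any ID that will trigger the "no mask found" assertion before training starts.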
  • weixin_39713763 3 months ago

    1. I can make sure there is a corresponding mask 025.png in the training data folder. But since I deleted the data I couldn't use before, I'm not sure whether the corresponding mask 025.png was present during training; I'll recheck it later. Can you leave the issue open for a while?
    2. I'm preprocessing my dataset (MR images) at the moment, and I'll merge this issue into #173 for ongoing discussion. Thank you very much!

  • weixin_39608132 3 months ago

    I am using the Carvana dataset, so the naming format and channels of the dataset should not be causing any problem! However, I am also getting a lot of errors like the one below.

    Could you tell me what I have to do? My errors are given below:

    ```
    runfile('/home/mostafiz/Desktop/MSc/Pytorch/Pytorch-UNet-master/train.py', wdir='/home/mostafiz/Desktop/MSc/Pytorch/Pytorch-UNet-master')
    Reloaded modules: dice_loss, eval, unet.unet_parts, unet.unet_model, unet, utils.dataset
    INFO: Using device cpu
    INFO: Network:
            3 input channels
            1 output channels (classes)
            Bilinear upscaling
    INFO: Creating dataset with 5088 examples
    INFO: Starting training:
            Epochs:          5
            Batch size:      1
            Learning rate:   0.0001
            Training size:   4580
            Validation size: 508
            Checkpoints:     True
            Device:          cpu
            Images scaling:  0.5

    Epoch 1/5:   0%|          | 0/4580 [00:00<?, ?img/s]
    Traceback (most recent call last):
      File "/home/mostafiz/Desktop/MSc/Pytorch/Pytorch-UNet-master/train.py", line 177, in <module>
        train_net(net=net,
      File "/home/mostafiz/Desktop/MSc/Pytorch/Pytorch-UNet-master/train.py", line 69, in train_net
        for batch in train_loader:
      File "/home/mostafiz/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
        data = self._next_data()
      File "/home/mostafiz/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
        return self._process_data(data)
      File "/home/mostafiz/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
        data.reraise()
      File "/home/mostafiz/anaconda3/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
        raise self.exc_type(msg)
    AssertionError: Caught AssertionError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/mostafiz/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/mostafiz/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/mostafiz/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/mostafiz/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataset.py", line 257, in __getitem__
        return self.dataset[self.indices[idx]]
      File "/home/mostafiz/Desktop/MSc/Pytorch/Pytorch-UNet-master/utils/dataset.py", line 50, in __getitem__
        assert len(mask_file) == 1, \
    AssertionError: Either no mask or multiple masks found for the ID b98c63cd6102_13: []
    ```

  • weixin_39713763 3 months ago

    As milesial said, your images (in the 'imgs' folder) and masks (in the 'masks' folder) need to match. For example, if you have an image 025.png (or whatever extension), you have to have a mask 025.png (or whatever extension) in the mask folder.

    But you can also modify the matching rules; see these two lines in dataset.py:

    line 46: mask_file = glob(self.masks_dir + idx + self.mask_suffix + '.*')
    line 47: img_file = glob(self.imgs_dir + idx + '.*')
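    As one concrete example of such a modified rule (a sketch with an illustrative temporary directory and a made-up naming scheme): if the masks were named `<id>images.png`, setting the suffix to 'images' would make the same glob succeed.

    ```python
    import os
    import tempfile
    from glob import glob

    # Suppose masks are named '1images.png', '2images.png', ... rather than
    # matching the image stems exactly. Using mask_suffix = 'images' lets
    # the existing glob in dataset.py pick them up unchanged.
    masks_dir = tempfile.mkdtemp() + os.sep
    open(masks_dir + '1images.png', 'w').close()

    mask_suffix = 'images'
    idx = '1'
    mask_file = glob(masks_dir + idx + mask_suffix + '.*')
    assert len(mask_file) == 1
    ```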

  • weixin_39608132 3 months ago

    I have done everything, following every rule in the different issues (same problems), and yes, I have downloaded the actual dataset, and it contains the same names for images and masks.
