weixin_39715834
2020-12-01 22:56

How can I modify your LapSRN_WGAN code for same-size image processing?

Hello. Could you please help me modify your LapSRN_WGAN code for same-size image processing, without upsampling the input image? I want to use LapSRN_WGAN to process images at their original size.

Should I modify this line of code in https://github.com/twtygqyy/pytorch-LapSRN/blob/master/lapsrn_wgan.py?

Change " nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=4, stride=2, padding=1, bias=False)", to nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=1, padding=1, bias=False),

Or maybe I can just remove this layer?
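As a sanity check on that proposal (an editorial sketch, not code from the repository), the standard output-size formulas for PyTorch's Conv2d and ConvTranspose2d show that a Conv2d with kernel_size=4, stride=1, padding=1 would actually shrink the image by one pixel per spatial dimension, while kernel_size=3 preserves the size:

```python
# Output-size formulas used by PyTorch's nn.Conv2d and nn.ConvTranspose2d
# (dilation=1, output_padding=0). Pure arithmetic, no torch required.

def conv2d_out(h, kernel_size, stride, padding):
    return (h + 2 * padding - kernel_size) // stride + 1

def conv_transpose2d_out(h, kernel_size, stride, padding):
    return (h - 1) * stride - 2 * padding + kernel_size

h = 64  # hypothetical input height/width

# The original layer, ConvTranspose2d(k=4, s=2, p=1), doubles the size.
print(conv_transpose2d_out(h, 4, 2, 1))  # 128

# The proposed replacement, Conv2d(k=4, s=1, p=1), loses one pixel.
print(conv2d_out(h, 4, 1, 1))  # 63

# Conv2d with k=3, s=1, p=1 keeps the spatial size unchanged.
print(conv2d_out(h, 3, 1, 1))  # 64
```

So if the layer is replaced rather than removed, kernel_size=3 with padding=1 is the size-preserving choice.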

This question comes from the open-source project: twtygqyy/pytorch-LapSRN


7 replies

  • weixin_39715834 5 months ago

    I have already used your VDSR model in the same manner as in item 2. Unfortunately, the quality of the restored images is not acceptable. Could you advise a method to improve the perceptual quality of the restored images? [attached images: bad_samples_230, reconst_230, PSNR 30.8107676076]

    Maybe I should change the kernel size or the number of layers in the VDSR model, or perhaps the learning-rate adjustment method?

    Or maybe I should start training LapSRN_WGAN in the same manner?

  • weixin_39917437 5 months ago

    -pinigin I would suggest using a GAN instead of an MSE loss for training the network; it usually works better for this task. Check out http://hi.cs.waseda.ac.jp/~iizuka/projects/completion/en/

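The GAN suggestion above can be illustrated with the Wasserstein-GAN objective that gives LapSRN_WGAN its name (a minimal sketch in plain Python; the repository's actual training loop uses PyTorch tensors and a critic network, which are not shown here):

```python
# Minimal sketch of the WGAN losses. `critic_real` and `critic_fake` are
# assumed to be lists of critic scores for real and generated images.

def wgan_critic_loss(critic_real, critic_fake):
    # The critic maximizes E[D(real)] - E[D(fake)],
    # i.e. it minimizes the negative of that quantity.
    mean = lambda xs: sum(xs) / len(xs)
    return mean(critic_fake) - mean(critic_real)

def wgan_generator_loss(critic_fake):
    # The generator maximizes E[D(fake)], i.e. minimizes -E[D(fake)].
    mean = lambda xs: sum(xs) / len(xs)
    return -mean(critic_fake)

print(wgan_critic_loss([1.0, 3.0], [0.0, 2.0]))  # -1.0
print(wgan_generator_loss([0.0, 2.0]))           # -1.0
```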
  • weixin_39917437 5 months ago

    Hi -pinigin, you are right, but you will have to change the size of the input training samples. Check my VDSR repo, where input and output of the same size are used for training.

  • weixin_39715834 5 months ago

    Thank you, I understand. I changed the size of the input image. But I don't understand what to do with the nn.ConvTranspose2d layer: just remove it, or replace it with a plain Conv2d?

  • weixin_39917437 5 months ago

    -pinigin you can just remove the nn.ConvTranspose2d layers.

  • weixin_39715834 5 months ago

    May I ask you something? If you do not mind, I would like your expert opinion. Can I use LapSRN_WGAN for image inpainting? I want to eliminate artifacts from grayscale images. Can LapSRN_WGAN reduce or even eliminate scratches and other artifacts from a grayscale photo? What would you advise?

  • weixin_39917437 5 months ago

    Hi -pinigin. Of course the code can be modified for image in-painting. Basically there are two steps you need to do: 1. Remove the deconvolutional layers and keep the input and output the same shape; you can check my VDSR repo for reference. 2. Create input and output training datasets. I guess the input could be the image with a mask, and the output should be the raw image.

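Step 2 of the reply above, building masked-input / raw-output training pairs, can be sketched as follows. This is a hypothetical example: the function name, hole placement, and fill value are assumptions for illustration, not taken from the repository.

```python
import numpy as np

def make_inpainting_pair(image, hole=(16, 16, 32, 32), fill=0.0):
    """Build one (input, target) training pair for in-painting.

    `image` is a 2-D grayscale array in [0, 1]. The input is a copy with a
    rectangular hole masked out; the target is the untouched raw image.
    `hole` is (row, col, height, width) of the simulated damaged region.
    """
    y, x, h, w = hole
    masked = image.copy()
    masked[y:y + h, x:x + w] = fill  # blank out the "damaged" region
    return masked, image

rng = np.random.default_rng(0)
img = rng.random((64, 64))           # stand-in for one grayscale photo
inp, target = make_inpainting_pair(img)

print(inp.shape == target.shape)     # True: same-size input and output
print(inp[16, 16])                   # 0.0 (a masked pixel)
```

The network then learns to map the masked input back to the raw target, which is the same-shape setup the reply describes.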