https://pytorch.org/tutorials/advanced/neural_style_tutorial.html?highlight=neural%20transfer
This is the link to the neural style transfer tutorial, which contains the following passage:
Loading the Images
Now we will import the style and content images. The original PIL images have values between 0 and 255, but when transformed into torch tensors, their values are converted to be between 0 and 1. The images also need to be resized to have the same dimensions. An important detail to note is that neural networks from the torch library are trained with tensor values ranging from 0 to 1. If you try to feed the networks with 0 to 255 tensor images, then the activated feature maps will be unable to sense the intended content and style. However, pre-trained networks from the Caffe library are trained with 0 to 255 tensor images.
Networks from the torch library are trained on tensor values in [0., 1.], while VGG19 (the Caffe version) was trained on [0, 255]. So should the image tensor passed to VGG be converted from [0., 1.] to [0, 255]? But later the tutorial says:
Additionally, VGG networks are trained on images with each channel normalized by mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]. We will use them to normalize the image before sending it into the network.
Each channel of the input to the VGG network should be normalized with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225], which suggests the tensor entering VGG is still in [0., 1.], not [0, 255].
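For concreteness, here is a minimal sketch of the per-channel normalization the tutorial describes, using numpy in place of a torch tensor (the tiny 2x2 image is made up for illustration). The arithmetic shows why these mean/std values pair naturally with [0., 1.] inputs:

```python
import numpy as np

# Hypothetical 2x2 RGB image as floats in [0., 1.],
# standing in for a torch tensor of shape (C, H, W).
img = np.array([
    [[1.0, 0.5], [0.0, 0.25]],   # R channel
    [[1.0, 0.5], [0.0, 0.25]],   # G channel
    [[1.0, 0.5], [0.0, 0.25]],   # B channel
])

mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

# Per-channel normalization: (x - mean) / std, broadcast over H and W.
normalized = (img - mean) / std

# A white pixel (1.0) in the red channel maps to
# (1.0 - 0.485) / 0.229, roughly 2.25 -- a sensible scale.
# Had the input been in [0, 255], the same pixel (255) would map
# to over 1000, so these statistics only fit [0., 1.] inputs.
print(normalized[0, 0, 0])
```

This is only a numeric illustration of the normalization step, not the tutorial's actual code, which uses `torchvision.transforms`.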
I don't have a computer science background; I hope someone can clear this up.