Dataset preparation, using x2 as an example
Training data: each original image serves as the high-resolution image of size (h, w). It is first downsampled to (h/2, w/2) and then bicubic-upsampled back to (h, w) to produce the low-resolution image. The network operates on the Y channel only, and patch_size defaults to 33 during training.
In other words, the two images have the same resolution, so the network's input and output sizes are identical; only the sharpness differs.
During evaluation and prediction, the network input is the whole image.
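A minimal sketch of this data pipeline, assuming Pillow is available; the file name 'butterfly.png' and the fixed patch location are only illustrative:

```python
import numpy as np
from PIL import Image

scale = 2
hr = Image.open('butterfly.png').convert('YCbCr')  # hypothetical input file
w, h = (hr.width // scale) * scale, (hr.height // scale) * scale
hr = hr.crop((0, 0, w, h))  # make the dimensions divisible by the scale

# downsample, then bicubic-upsample back: the LR image keeps the HR size
lr = hr.resize((w // scale, h // scale), resample=Image.BICUBIC)
lr = lr.resize((w, h), resample=Image.BICUBIC)

# the network sees only the Y channel, cropped into 33x33 patches
hr_y = np.asarray(hr.split()[0], dtype=np.float32) / 255.0
lr_y = np.asarray(lr.split()[0], dtype=np.float32) / 255.0
lr_patch, hr_patch = lr_y[:33, :33], hr_y[:33, :33]  # one aligned training pair
```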
Network model:
Refer to the paper and to the SRCNN network-structure visualization. The main changes relative to SRCNN, summarized in the paper's figure, are:
1) Remove the bicubic interpolation pre-processing so the input goes straight into the feature-extraction layer, and add a deconvolution layer at the end of the network instead.
2) Replace the feature mapping layer with a shrinking-mapping-expanding structure for speed.
3) FSRCNN is a deeper network with a lower computational cost.
4) Except for the deconvolution layer, FSRCNN's layers can share trained parameters across 2x, 3x, and 4x, which speeds up both training and testing.
(The paper's figures compare the two networks on speed first, then on reconstruction results.)
The last layer is a deconvolution layer that performs the upscaling, so the parameters of all earlier layers can be shared; switching to a different upscaling factor only requires fine-tuning the deconvolution layer, whose stride differs per factor (see the sketch below).
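A minimal sketch of this layout, assuming the paper's default sizes (d=56, s=12, m=4) and the kernel pattern used by common FSRCNN implementations; the class name and test input are illustrative:

```python
import torch
from torch import nn

class FSRCNNSketch(nn.Module):
    def __init__(self, scale=2, d=56, s=12, m=4):
        super().__init__()
        layers = [nn.Conv2d(1, d, kernel_size=5, padding=2), nn.PReLU(d),  # feature extraction
                  nn.Conv2d(d, s, kernel_size=1), nn.PReLU(s)]             # shrinking
        for _ in range(m):                                                 # mapping
            layers += [nn.Conv2d(s, s, kernel_size=3, padding=1), nn.PReLU(s)]
        layers += [nn.Conv2d(s, d, kernel_size=1), nn.PReLU(d)]            # expanding
        self.body = nn.Sequential(*layers)
        # the only scale-dependent layer: its stride equals the scale factor
        self.deconv = nn.ConvTranspose2d(d, 1, kernel_size=9, stride=scale,
                                         padding=9 // 2, output_padding=scale - 1)

    def forward(self, x):
        return self.deconv(self.body(x))

x = torch.rand(1, 1, 33, 33)
print(FSRCNNSketch(scale=2)(x).shape)  # torch.Size([1, 1, 66, 66])
```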
Sub-Pixel Convolutional Neural Network
1. Sub-pixel convolution layer
The sub-pixel convolution layer is the layer that enlarges the image; FSRCNN uses a deconvolution layer for the same job.
Concretely, for 2x upscaling the second-to-last layer produces 2² = 4 channels (per output channel); those 4 channels are rearranged into a 2×2 block of pixels, which enlarges the second-to-last layer's feature map by a factor of 2 in each dimension.
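In PyTorch this rearrangement is exactly `nn.PixelShuffle`, which maps (N, C·r², H, W) to (N, C, rH, rW); a quick check:

```python
import torch
from torch import nn

ps = nn.PixelShuffle(2)
x = torch.arange(16.0).reshape(1, 4, 2, 2)  # 4 channels, 2x2 spatial
y = ps(x)
print(y.shape)  # torch.Size([1, 1, 4, 4]) -- each 4-channel pixel became a 2x2 block
```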
2. Network architecture
Three conv2d layers plus one pixel shuffle layer (sketched below).
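A minimal sketch of this architecture; the 5-3-3 kernels, tanh activations, and 64/32 channel widths follow the sub-pixel paper's settings, while the class name and test shapes are illustrative:

```python
import torch
from torch import nn

class SubPixelSketch(nn.Module):
    def __init__(self, scale=2, num_channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_channels, 64, kernel_size=5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.Tanh(),
            # scale**2 channels per output channel feed the pixel shuffle
            nn.Conv2d(32, num_channels * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.body(x)

x = torch.rand(1, 1, 33, 33)
print(SubPixelSketch(scale=2)(x).shape)  # torch.Size([1, 1, 66, 66])
```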
RDN: Residual Dense Network for Image Super-Resolution
1. Network architecture
EDSR uses residual blocks, while SRDenseNet uses dense blocks with skip connections. RDN combines the two to further strengthen feature extraction and fusion; the overall structure, and the residual dense block it stacks, can be read directly from the code below.
RDN model:
```python
import argparse

import torch
from torch import nn
from torchinfo import summary
from torchviz import make_dot
import netron
import tensorwatch as tw


class DenseLayer(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(DenseLayer, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=3 // 2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # dense connectivity: concatenate the input with the new feature maps
        return torch.cat([x, self.relu(self.conv(x))], 1)

class RDB(nn.Module):
    def __init__(self, in_channels, growth_rate, num_layers):
        super(RDB, self).__init__()
        self.layers = nn.Sequential(
            *[DenseLayer(in_channels + growth_rate * i, growth_rate) for i in range(num_layers)])

        # local feature fusion: a 1x1 conv back down to growth_rate channels;
        # the residual addition in forward() assumes in_channels == growth_rate,
        # which holds below because num_features == growth_rate == 64
        self.lff = nn.Conv2d(in_channels + growth_rate * num_layers, growth_rate, kernel_size=1)

    def forward(self, x):
        return x + self.lff(self.layers(x))  # local residual learning

class RDN(nn.Module):
    def __init__(self, scale_factor, num_channels, num_features, growth_rate, num_blocks, num_layers):
        super(RDN, self).__init__()
        self.G0 = num_features
        self.G = growth_rate
        self.D = num_blocks
        self.C = num_layers

        # shallow feature extraction
        self.sfe1 = nn.Conv2d(num_channels, num_features, kernel_size=3, padding=3 // 2)
        self.sfe2 = nn.Conv2d(num_features, num_features, kernel_size=3, padding=3 // 2)

        # residual dense blocks
        self.rdbs = nn.ModuleList([RDB(self.G0, self.G, self.C)])
        for _ in range(self.D - 1):
            self.rdbs.append(RDB(self.G, self.G, self.C))

        # global feature fusion
        self.gff = nn.Sequential(
            nn.Conv2d(self.G * self.D, self.G0, kernel_size=1),
            nn.Conv2d(self.G0, self.G0, kernel_size=3, padding=3 // 2)
        )

        # up-sampling
        assert 2 <= scale_factor <= 4
        if scale_factor == 2 or scale_factor == 4:
            self.upscale = []
            for _ in range(scale_factor // 2):
                self.upscale.extend([nn.Conv2d(self.G0, self.G0 * (2 ** 2), kernel_size=3, padding=3 // 2),
                                     nn.PixelShuffle(2)])
            self.upscale = nn.Sequential(*self.upscale)
        else:
            self.upscale = nn.Sequential(
                nn.Conv2d(self.G0, self.G0 * (scale_factor ** 2), kernel_size=3, padding=3 // 2),
                nn.PixelShuffle(scale_factor)
            )

        self.output = nn.Conv2d(self.G0, num_channels, kernel_size=3, padding=3 // 2)
    def forward(self, x):
        sfe1 = self.sfe1(x)
        sfe2 = self.sfe2(sfe1)

        x = sfe2
        local_features = []
        for i in range(self.D):
            x = self.rdbs[i](x)
            local_features.append(x)

        x = self.gff(torch.cat(local_features, 1)) + sfe1  # global residual learning
        x = self.upscale(x)
        x = self.output(x)
        return x

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--num-features', type=int, default=64)
    parser.add_argument('--growth-rate', type=int, default=64)
    parser.add_argument('--num-blocks', type=int, default=16)
    parser.add_argument('--num-layers', type=int, default=8)
    parser.add_argument('--scale', type=int, default=4)
    parser.add_argument('--patch-size', type=int, default=32)
    parser.add_argument('--lr', type=float, default=1e-4)
    parser.add_argument('--batch-size', type=int, default=16)
    parser.add_argument('--num-epochs', type=int, default=800)
    parser.add_argument('--num-workers', type=int, default=8)
    parser.add_argument('--seed', type=int, default=123)
    args = parser.parse_args()

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # device = 'cpu'
    modelviz = RDN(scale_factor=args.scale,
                   num_channels=3,
                   num_features=args.num_features,
                   growth_rate=args.growth_rate,
                   num_blocks=args.num_blocks,
                   num_layers=args.num_layers).to(device)

    # print the model structure
    h, w, c = 20, 20, 3
    print(modelviz)
    summary(modelviz, input_size=(8, c, h, w), col_names=["kernel_size", "output_size", "num_params", "mult_adds"])
    for p in modelviz.parameters():
        if p.requires_grad:
            print(p.shape)

    # create an input and inspect the output
    input = torch.rand(8, c, h, w).to(device)
    out = modelviz(input)
    print(out.shape)

    # 1. visualize with torchviz
    g = make_dot(out)
    g.view()  # saves a pdf in the current directory and opens it
    # g.render(filename='netStructure/myNetModel', view=False, format='pdf')  # save the pdf to a given path without opening it

    # 2. save as a .pt file and visualize with netron
    torch.save(modelviz, "modelviz.pt")
    modelData = 'modelviz.pt'
    netron.start(modelData)

    # 3. visualize with tensorwatch
    print(tw.model_stats(modelviz, (8, c, h, w)))
    # tw.draw_model(modelviz, input)
```
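One caveat worth noting about this implementation: the local residual addition in `RDB` only type-checks when `num_features == growth_rate` (both 64 in the defaults above), because `lff` projects back to `growth_rate` channels while the skip branch carries `in_channels`; with other settings the addition would fail on a channel mismatch.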