This project uses U-Net, a network architecture that is well known in the image segmentation field.
U-Net is a deep learning network that improves on the fully convolutional network (FCN).
It has two stages: downsampling (the encoder, which extracts features) and upsampling (the decoder, which restores resolution). The network gets its name from its U-shaped structure.
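To make the two stages concrete, here is an illustrative PaddlePaddle sketch of one downsampling block and one upsampling block. The layer choices mirror the summary shown later in this section; the actual model definition, including how (or whether) encoder features are reused in the decoder, lives in model.py.

import paddle
import paddle.nn as nn

# One encoder (downsampling) step: two Conv-BN-ReLU layers followed by pooling
def down_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2D(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2D(out_ch),
        nn.ReLU(),
        nn.Conv2D(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2D(out_ch),
        nn.ReLU(),
        nn.MaxPool2D(kernel_size=2),
    )

# One decoder (upsampling) step: upsample, reduce channels with a 1x1 conv,
# then two ConvTranspose-BN-ReLU layers
def up_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Upsample(scale_factor=2),
        nn.Conv2D(in_ch, out_ch, kernel_size=1),
        nn.Conv2DTranspose(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2D(out_ch),
        nn.ReLU(),
        nn.Conv2DTranspose(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2D(out_ch),
        nn.ReLU(),
    )

# Quick shape check: one encoder step halves the resolution and lifts 3 -> 16 channels
x = paddle.randn([1, 3, 160, 160])
print(down_block(3, 16)(x).shape)   # [1, 16, 80, 80]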
The code is adapted, with some changes, from the example code in the official documentation of the PaddlePaddle open-source framework:
It can be downloaded through the following channels:
This project uses the Oxford-IIIT Pet dataset from the original example.
It contains pet photos and the corresponding label data:
the pet images are under /images,
and the label data are under /annotations/trimaps.
For more details, see the official PaddlePaddle documentation.
Create the folder /resources/Oxford-IIIT Pet/images in the project and place all of the original images there.
Create the folder /resources/Oxford-IIIT Pet/masks in the project and place all of the label images there.
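If you prefer to set these folders up from a script, here is a minimal sketch. The dataset_root path is an assumption about where the downloaded archive was extracted; adjust it to your layout.

import os
import shutil

# Assumed location of the extracted dataset download (adjust to where you unpacked it)
dataset_root = "./Oxford-IIIT Pet"
resources_path = "./resources/Oxford-IIIT Pet"

os.makedirs(resources_path + "/images", exist_ok=True)
os.makedirs(resources_path + "/masks", exist_ok=True)

# Copy the pet photos and the trimap labels into the project folders,
# skipping any hidden helper files that may ship with the archive
for name in os.listdir(dataset_root + "/images"):
    if name.endswith(".jpg") and not name.startswith("."):
        shutil.copy(dataset_root + "/images/" + name, resources_path + "/images/" + name)
for name in os.listdir(dataset_root + "/annotations/trimaps"):
    if name.endswith(".png") and not name.startswith("."):
        shutil.copy(dataset_root + "/annotations/trimaps/" + name, resources_path + "/masks/" + name)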
The pet images in the dataset are in JPG format, so use tool_jpg2png.py to convert them all to PNG:
import os
from PIL import Image

# Directory that holds the original pet images
resources_path = "./resources/Oxford-IIIT Pet"
origin_images_path = resources_path + "/images"

img_name_list = os.listdir(origin_images_path)
for img_name in img_name_list:
    if img_name.endswith("jpg"):
        # Re-save each JPG as PNG, then delete the original JPG
        with Image.open(origin_images_path + '/' + img_name) as tp:
            tp.save(origin_images_path + '/' + img_name[:-3] + 'png')
        os.remove(origin_images_path + '/' + img_name)
The network structure is defined in model.py.
It is set up to resemble the structure shown in the U-Net diagram; the details are as follows:
-----------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
=============================================================================
Conv2D-1 [[1, 3, 160, 160]] [1, 16, 160, 160] 448
BatchNorm2D-1 [[1, 16, 160, 160]] [1, 16, 160, 160] 64
ReLU-1 [[1, 16, 160, 160]] [1, 16, 160, 160] 0
Conv2D-2 [[1, 16, 160, 160]] [1, 16, 160, 160] 2,320
BatchNorm2D-2 [[1, 16, 160, 160]] [1, 16, 160, 160] 64
ReLU-2 [[1, 16, 160, 160]] [1, 16, 160, 160] 0
MaxPool2D-1 [[1, 16, 160, 160]] [1, 16, 80, 80] 0
Conv2D-3 [[1, 16, 80, 80]] [1, 32, 80, 80] 4,640
BatchNorm2D-3 [[1, 32, 80, 80]] [1, 32, 80, 80] 128
ReLU-3 [[1, 32, 80, 80]] [1, 32, 80, 80] 0
Conv2D-4 [[1, 32, 80, 80]] [1, 32, 80, 80] 9,248
BatchNorm2D-4 [[1, 32, 80, 80]] [1, 32, 80, 80] 128
ReLU-4 [[1, 32, 80, 80]] [1, 32, 80, 80] 0
MaxPool2D-2 [[1, 32, 80, 80]] [1, 32, 40, 40] 0
Conv2D-5 [[1, 32, 40, 40]] [1, 64, 40, 40] 18,496
BatchNorm2D-5 [[1, 64, 40, 40]] [1, 64, 40, 40] 256
ReLU-5 [[1, 64, 40, 40]] [1, 64, 40, 40] 0
Conv2D-6 [[1, 64, 40, 40]] [1, 64, 40, 40] 36,928
BatchNorm2D-6 [[1, 64, 40, 40]] [1, 64, 40, 40] 256
ReLU-6 [[1, 64, 40, 40]] [1, 64, 40, 40] 0
MaxPool2D-3 [[1, 64, 40, 40]] [1, 64, 20, 20] 0
Conv2D-7 [[1, 64, 20, 20]] [1, 128, 20, 20] 73,856
BatchNorm2D-7 [[1, 128, 20, 20]] [1, 128, 20, 20] 512
ReLU-7 [[1, 128, 20, 20]] [1, 128, 20, 20] 0
Conv2D-8 [[1, 128, 20, 20]] [1, 128, 20, 20] 147,584
BatchNorm2D-8 [[1, 128, 20, 20]] [1, 128, 20, 20] 512
ReLU-8 [[1, 128, 20, 20]] [1, 128, 20, 20] 0
MaxPool2D-4 [[1, 128, 20, 20]] [1, 128, 10, 10] 0
Conv2D-9 [[1, 128, 10, 10]] [1, 256, 10, 10] 295,168
BatchNorm2D-9 [[1, 256, 10, 10]] [1, 256, 10, 10] 1,024
ReLU-9 [[1, 256, 10, 10]] [1, 256, 10, 10] 0
Conv2D-10 [[1, 256, 10, 10]] [1, 256, 10, 10] 590,080
BatchNorm2D-10 [[1, 256, 10, 10]] [1, 256, 10, 10] 1,024
ReLU-10 [[1, 256, 10, 10]] [1, 256, 10, 10] 0
Upsample-1 [[1, 256, 10, 10]] [1, 256, 20, 20] 0
Conv2D-11 [[1, 256, 20, 20]] [1, 128, 20, 20] 32,896
Conv2DTranspose-1 [[1, 128, 20, 20]] [1, 128, 20, 20] 147,584
BatchNorm2D-11 [[1, 128, 20, 20]] [1, 128, 20, 20] 512
ReLU-11 [[1, 128, 20, 20]] [1, 128, 20, 20] 0
Conv2DTranspose-2 [[1, 128, 20, 20]] [1, 128, 20, 20] 147,584
BatchNorm2D-12 [[1, 128, 20, 20]] [1, 128, 20, 20] 512
ReLU-12 [[1, 128, 20, 20]] [1, 128, 20, 20] 0
Upsample-2 [[1, 128, 20, 20]] [1, 128, 40, 40] 0
Conv2D-12 [[1, 128, 40, 40]] [1, 64, 40, 40] 8,256
Conv2DTranspose-3 [[1, 64, 40, 40]] [1, 64, 40, 40] 36,928
BatchNorm2D-13 [[1, 64, 40, 40]] [1, 64, 40, 40] 256
ReLU-13 [[1, 64, 40, 40]] [1, 64, 40, 40] 0
Conv2DTranspose-4 [[1, 64, 40, 40]] [1, 64, 40, 40] 36,928
BatchNorm2D-14 [[1, 64, 40, 40]] [1, 64, 40, 40] 256
ReLU-14 [[1, 64, 40, 40]] [1, 64, 40, 40] 0
Upsample-3 [[1, 64, 40, 40]] [1, 64, 80, 80] 0
Conv2D-13 [[1, 64, 80, 80]] [1, 32, 80, 80] 2,080
Conv2DTranspose-5 [[1, 32, 80, 80]] [1, 32, 80, 80] 9,248
BatchNorm2D-15 [[1, 32, 80, 80]] [1, 32, 80, 80] 128
ReLU-15 [[1, 32, 80, 80]] [1, 32, 80, 80] 0
Conv2DTranspose-6 [[1, 32, 80, 80]] [1, 32, 80, 80] 9,248
BatchNorm2D-16 [[1, 32, 80, 80]] [1, 32, 80, 80] 128
ReLU-16 [[1, 32, 80, 80]] [1, 32, 80, 80] 0
Upsample-4 [[1, 32, 80, 80]] [1, 32, 160, 160] 0
Conv2D-14 [[1, 32, 160, 160]] [1, 16, 160, 160] 528
Conv2DTranspose-7 [[1, 16, 160, 160]] [1, 16, 160, 160] 2,320
BatchNorm2D-17 [[1, 16, 160, 160]] [1, 16, 160, 160] 64
ReLU-17 [[1, 16, 160, 160]] [1, 16, 160, 160] 0
Conv2DTranspose-8 [[1, 16, 160, 160]] [1, 16, 160, 160] 2,320
BatchNorm2D-18 [[1, 16, 160, 160]] [1, 16, 160, 160] 64
ReLU-18 [[1, 16, 160, 160]] [1, 16, 160, 160] 0
Conv2D-15 [[1, 16, 160, 160]] [1, 4, 160, 160] 68
=============================================================================
Total params: 1,620,644
Trainable params: 1,614,756
Non-trainable params: 5,888
-----------------------------------------------------------------------------
Input size (MB): 0.29
Forward/backward pass size (MB): 91.31
Params size (MB): 6.18
Estimated Total Size (MB): 97.78
-----------------------------------------------------------------------------
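The table above is the kind of report produced by paddle.summary. Assuming model.py exposes a network class (called UNet here, which is an assumption about the class name), it can be regenerated like this:

import paddle
from model import UNet   # assumed class name exported by model.py

net = UNet()             # 3-channel input, 4-channel output, per the table above
paddle.summary(net, (1, 3, 160, 160))   # a batch of one 160x160 RGB image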
Adjust the training parameters to suit your hardware, then run the train.py script.
The trained model is saved in the output folder.
$ python train.py
# Epoch 1/15
# step 30/416 [=>............................] - loss: 0.9846 - ETA: 5:49 - 907ms/step
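The core of train.py is roughly the following high-level training loop. This is only a sketch: the UNet class name, the PetDataset wrapper, the optimizer, and the batch size are assumptions, so adjust them to match the actual script.

import paddle
from model import UNet            # assumed class name in model.py
from dataset import PetDataset    # hypothetical Dataset wrapping the images/ and masks/ folders

net = UNet()
model = paddle.Model(net)
model.prepare(
    optimizer=paddle.optimizer.Adam(learning_rate=0.001, parameters=net.parameters()),
    loss=paddle.nn.CrossEntropyLoss(axis=1),   # per-pixel classification over the 4 output channels
)
# Batch size and epoch count should be tuned to the available GPU/CPU memory
model.fit(PetDataset("train"), epochs=15, batch_size=32, verbose=1)
model.save("output/unet")         # weights end up in the output folder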
Run the predict.py script to predict the first two samples in the test set.
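predict.py follows the same pattern in reverse. A minimal sketch (class names, paths, and the data layout are assumptions, matching the training sketch above) that loads the saved weights and predicts the first two test samples might look like this:

import numpy as np
import paddle
from model import UNet            # assumed class name in model.py
from dataset import PetDataset    # hypothetical Dataset class, same assumption as above

net = UNet()
model = paddle.Model(net)
model.prepare()
model.load("output/unet")         # weights written by train.py via model.save

test_dataset = PetDataset("test") # hypothetical "test" split
for i in range(2):                # first two samples of the test set
    image, label = test_dataset[i]                                # image assumed to be float32 CHW
    logits = model.predict_batch([np.expand_dims(image, 0)])[0]   # (1, 4, 160, 160) per the summary
    mask = np.argmax(logits, axis=1)[0]                           # per-pixel class index, (160, 160)
    print(mask.shape, np.unique(mask))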
The results look decent.
Thanks for reading.