Previously we studied backbone deep learning models that improve accuracy by making networks deeper or wider (LeNet, AlexNet, VGGNet, GoogLeNet, ResNet). Here we look at one of the models designed instead to make networks faster and more lightweight: SqueezeNet. It achieves accuracy comparable to AlexNet on ImageNet while using roughly 50x fewer parameters.
The main ideas of SqueezeNet (the three strategies from the original paper) are:
1. Replace most 3x3 convolutions with 1x1 convolutions, which have 9x fewer parameters per filter.
2. Reduce the number of input channels fed to the remaining 3x3 filters, using 1x1 "squeeze" layers.
3. Downsample late in the network, so convolution layers operate on large activation maps, which helps preserve accuracy.
The first two strategies are packaged into the "fire module": a 1x1 squeeze layer followed by two parallel "expand" layers (1x1 and 3x3) whose outputs are concatenated along the channel axis.
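The parameter saving can be quantified with a quick back-of-the-envelope calculation (a sketch; the layer sizes below match the first fire module of the network implemented later, which sees 96 input channels):

```python
# Parameter count (weights + biases) of a Conv2D layer with k x k kernels,
# c_in input channels and c_out output filters:
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out

# A fire module: 1x1 squeeze layer, then parallel 1x1 and 3x3 expand layers.
def fire_params(c_in, s, e):
    squeeze = conv_params(1, c_in, s)  # 1x1 squeeze: c_in -> s channels
    expand1 = conv_params(1, s, e)     # 1x1 expand path: s -> e channels
    expand3 = conv_params(3, s, e)     # 3x3 expand path: s -> e channels
    return squeeze + expand1 + expand3

# First fire module: 96 input channels, 16 squeeze filters, 64 + 64 expand filters
print(fire_params(96, 16, 64))   # -> 11920
# A plain 3x3 convolution with the same 96 -> 128 channel mapping:
print(conv_params(3, 96, 128))   # -> 110720, roughly 9x more parameters
```

Because the 3x3 filters only ever see the 16 squeezed channels instead of all 96, the module gets the same 128-channel output for a fraction of the parameters.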
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from tensorflow.keras import Model


class FireBlock(Model):
    """Fire module: 1x1 squeeze layer, then parallel 1x1 and 3x3 expand layers."""
    def __init__(self, s_filters_num, e_filters_num):
        super().__init__()
        # Squeeze: 1x1 convolutions reduce the channels fed to the 3x3 filters
        self.squeeze = Conv2D(filters=s_filters_num, kernel_size=1, padding='same', activation='relu')
        # Expand: parallel 1x1 and 3x3 convolutions
        self.expand_1 = Conv2D(filters=e_filters_num, kernel_size=1, padding='same', activation='relu')
        self.expand_3 = Conv2D(filters=e_filters_num, kernel_size=3, padding='same', activation='relu')

    def call(self, x):
        x = self.squeeze(x)
        x1 = self.expand_1(x)
        x2 = self.expand_3(x)
        # Concatenate the two expand paths along the channel axis
        y = tf.concat([x1, x2], -1)
        return y


class SqueezeNet(Model):
    def __init__(self):
        super().__init__()
        self.c1 = Conv2D(filters=96, kernel_size=7, padding='same', strides=2, activation='relu')
        self.p1 = MaxPooling2D(pool_size=(3, 3), strides=2)
        self.f1 = FireBlock(16, 64)
        self.f2 = FireBlock(16, 64)
        self.f3 = FireBlock(32, 128)
        self.p2 = MaxPooling2D(pool_size=(3, 3), strides=2)
        self.f4 = FireBlock(32, 128)
        self.f5 = FireBlock(48, 192)
        self.f6 = FireBlock(48, 192)
        self.f7 = FireBlock(64, 256)
        self.p3 = MaxPooling2D(pool_size=(3, 3), strides=2)
        self.f8 = FireBlock(64, 256)
        # Final 1x1 convolution maps to the 1000 ImageNet classes; global average
        # pooling then replaces fully connected layers
        self.c4 = Conv2D(filters=1000, kernel_size=1, padding='same', activation='relu')
        self.p4 = GlobalAveragePooling2D()

    def call(self, x):
        x = self.c1(x)
        x = self.p1(x)
        x = self.f1(x)
        x = self.f2(x)
        x = self.f3(x)
        x = self.p2(x)
        x = self.f4(x)
        x = self.f5(x)
        x = self.f6(x)
        x = self.f7(x)
        x = self.p3(x)
        x = self.f8(x)
        x = self.c4(x)
        y = self.p4(x)
        return y


model = SqueezeNet()
The above is SqueezeNet implemented with TensorFlow 2.4. After loading a dataset and compiling the model (model.compile()), it can be trained with model.fit(). My laptop's GPU is not powerful enough for a full run, so I only did a short trial run to confirm that the model trains.
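As a quick sanity check before training (a standalone sketch, assuming TensorFlow 2.x is installed), the fire module can be run on a random tensor: with 'same' padding the spatial size is preserved, and the output should have 2 * e_filters_num channels because the two expand paths are concatenated:

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D


class FireBlock(tf.keras.Model):
    """Minimal fire module: 1x1 squeeze, then parallel 1x1 and 3x3 expand paths."""
    def __init__(self, s_filters_num, e_filters_num):
        super().__init__()
        self.squeeze = Conv2D(s_filters_num, 1, padding='same', activation='relu')
        self.expand_1 = Conv2D(e_filters_num, 1, padding='same', activation='relu')
        self.expand_3 = Conv2D(e_filters_num, 3, padding='same', activation='relu')

    def call(self, x):
        x = self.squeeze(x)
        # Concatenate the two expand paths along the channel axis
        return tf.concat([self.expand_1(x), self.expand_3(x)], -1)


# A 55x55x96 feature map, roughly what the first fire module sees after c1/p1
x = tf.random.normal((1, 55, 55, 96))
y = FireBlock(16, 64)(x)
print(y.shape)  # (1, 55, 55, 128): spatial size kept, 64 + 64 = 128 channels
```

If the channel count does not come out to twice e_filters_num, the concatenation axis is usually the culprit.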