• GoogLeNet 08


    1. Development

    In 1989, Yann LeCun proposed a convolutional neural network trained with backpropagation, called LeNet.

    In 1998, Yann LeCun proposed an improved convolutional neural network, also trained with backpropagation, called LeNet-5.

    AlexNet was the winning network of the ILSVRC 2012 (ImageNet Large Scale Visual Recognition Challenge) competition, raising classification accuracy from the previous 70%+ to 80%+. It was designed by Hinton and his student Alex Krizhevsky. Deep learning began to develop rapidly after that year.


    VGG was proposed in 2014 by the well-known Visual Geometry Group (VGG) at the University of Oxford. That year it took first place in the ImageNet Localization Task and second place in the Classification Task. [about 138 million parameters]

    GoogLeNet was proposed in 2014 by a team at Google and won first place in the Classification Task of that year's ImageNet competition. [Paper: Going deeper with convolutions] [just over 6 million parameters]

    ResNet was proposed in 2015 by Microsoft Research and won first place in that year's ImageNet classification task.

    2. GoogLeNet

    2.1 Key features

    • Introduces the Inception module, which fuses feature information at different scales
    • Uses 1x1 convolution kernels for dimensionality reduction and feature mapping
    • Adds two auxiliary classifiers to help training
    • Drops the fully connected layers in favor of average pooling, which greatly reduces the number of model parameters (see the sketch after this list)
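
    As a rough illustration of the last bullet (my own sketch, not from the original post): the final feature map produced by inception_5b in the implementation below is 7x7 with 1024 channels, so flattening it straight into a 1000-class dense layer costs far more parameters than averaging the spatial dimensions first.

    # Toy parameter count (assumes the 7x7x1024 feature map of inception_5b and 1000 classes)
    flatten_then_dense = 7 * 7 * 1024 * 1000 + 1000   # flatten every spatial position: ~50.2M parameters
    avgpool_then_dense = 1024 * 1000 + 1000            # average-pool to one 1024-d vector first: ~1.0M parameters
    print(flatten_then_dense, avgpool_then_dense)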

    2.2 The Inception structure

    (1) It reduces the number of parameters.

    (2) Smaller kernels capture fine-grained detail features, while larger kernels capture larger-scale structure.

    The left-hand structure is the original (naive) Inception module: several convolution kernels of different sizes scan the input in parallel and their outputs are concatenated.

    The right-hand structure adds dimension reduction: a 1x1 convolution is stacked in front of the larger kernels to shrink the channel dimension before the more expensive convolutions.
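
    To make point (1) concrete, here is a small calculation I added, using the 5x5 branch of inception_3a from the implementation below, which sees a 192-channel input (biases ignored):

    # Weight count of the 5x5 branch in inception_3a (192 input channels)
    direct_5x5  = 5 * 5 * 192 * 32                     # plain 5x5 conv with 32 filters: 153,600 weights
    reduced_5x5 = 1 * 1 * 192 * 16 + 5 * 5 * 16 * 32   # 1x1 reduce to 16 channels, then 5x5: 15,872 weights
    print(direct_5x5, reduced_5x5)                     # nearly a 10x reduction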

    2.3 Auxiliary classifiers

    Two auxiliary classifiers are attached to intermediate Inception outputs (after inception_4a and inception_4d in the implementation below). They give the earlier layers additional gradient signal during training; in the implementation below their softmax outputs are combined with the main output as a weighted sum.

    3. GoogLeNet implementation

    3.1 Inception implementation

    from tensorflow import keras
    import tensorflow as tf
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    class Inception(keras.layers.Layer):
        def __init__(self, ch1x1, ch3x3red, ch3x3, ch5x5red, ch5x5, pool_proj, **kwargs):
            super().__init__(**kwargs)
            # Branch 1: 1x1 convolution
            self.branch1 = keras.layers.Conv2D(ch1x1, kernel_size=1, activation='relu')
            # Branch 2: 1x1 reduction followed by a 3x3 convolution
            self.branch2 = keras.Sequential([
                keras.layers.Conv2D(ch3x3red, kernel_size=1, activation='relu'),
                keras.layers.Conv2D(ch3x3, kernel_size=3, padding='SAME', activation='relu')
            ])
            # Branch 3: 1x1 reduction followed by a 5x5 convolution
            self.branch3 = keras.Sequential([
                keras.layers.Conv2D(ch5x5red, kernel_size=1, activation='relu'),
                keras.layers.Conv2D(ch5x5, kernel_size=5, padding='SAME', activation='relu')
            ])
            # Branch 4: 3x3 max pooling followed by a 1x1 projection
            self.branch4 = keras.Sequential([
                keras.layers.MaxPool2D(pool_size=3, strides=1, padding='SAME'),
                keras.layers.Conv2D(pool_proj, kernel_size=1, activation='relu')
            ])

        def call(self, inputs, **kwargs):
            branch1 = self.branch1(inputs)
            branch2 = self.branch2(inputs)
            branch3 = self.branch3(inputs)
            branch4 = self.branch4(inputs)
            # Concatenate the four branches along the channel axis
            outputs = keras.layers.concatenate([branch1, branch2, branch3, branch4])
            return outputs
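
    A quick sanity check I added (not in the original post): the concatenated output should have ch1x1 + ch3x3 + ch5x5 + pool_proj channels, e.g. 64 + 128 + 32 + 32 = 256 for the inception_3a configuration.

    # Apply the layer to a dummy tensor shaped like the inception_3a input
    dummy = tf.random.normal((1, 28, 28, 192))
    out = Inception(64, 96, 128, 16, 32, 32)(dummy)
    print(out.shape)   # expected: (1, 28, 28, 256)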

    3.2 Auxiliary output structure

    # Define the auxiliary output (classifier) structure
    class InceptionAux(keras.layers.Layer):
        def __init__(self, num_classes, **kwargs):
            super().__init__(**kwargs)
            self.average_pool = keras.layers.AvgPool2D(pool_size=5, strides=3)
            self.conv = keras.layers.Conv2D(128, kernel_size=1, activation='relu')
            self.fc1 = keras.layers.Dense(1024, activation='relu')
            self.fc2 = keras.layers.Dense(num_classes)
            self.softmax = keras.layers.Softmax()

        def call(self, inputs, **kwargs):
            x = self.average_pool(inputs)
            x = self.conv(x)
            x = keras.layers.Flatten()(x)
            x = keras.layers.Dropout(rate=0.5)(x)
            x = self.fc1(x)
            x = keras.layers.Dropout(rate=0.5)(x)
            x = self.fc2(x)
            x = self.softmax(x)
            return x
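
    As a usage sketch I added: in the full network the auxiliary heads are fed 14x14 feature maps (512 channels after inception_4a), so the shapes work out as follows:

    # 14x14 -> AvgPool(5, stride 3) -> 4x4 -> 1x1 conv (128 ch) -> flatten (2048) -> 1024 -> num_classes
    dummy = tf.random.normal((1, 14, 14, 512))
    aux_out = InceptionAux(num_classes=10)(dummy)
    print(aux_out.shape)   # expected: (1, 10)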

    3.3 The full GoogLeNet model

    def GoogLeNet(im_height=224, im_width=224, class_num=1000, aux_logits=False):
        input_image = keras.layers.Input(shape=(im_height, im_width, 3), dtype='float32')
        x = keras.layers.Conv2D(64, kernel_size=7, strides=2, padding='SAME', activation='relu')(input_image)
        # Note on MaxPool2D: padding='SAME' rounds the output size up, e.g. 224/2 = 112;
        # padding='VALID' would give (224 - (3 - 1)) / 2 = 111.
        x = keras.layers.MaxPool2D(pool_size=3, strides=2, padding='SAME')(x)
        x = keras.layers.Conv2D(64, kernel_size=1, strides=1, padding='SAME', activation='relu')(x)
        x = keras.layers.Conv2D(192, kernel_size=3, strides=1, padding='SAME', activation='relu')(x)
        x = keras.layers.MaxPool2D(pool_size=3, strides=2, padding='SAME')(x)
        x = Inception(64, 96, 128, 16, 32, 32, name='inception_3a')(x)
        x = Inception(128, 128, 192, 32, 96, 64, name='inception_3b')(x)
        x = keras.layers.MaxPool2D(pool_size=3, strides=2, padding='SAME')(x)
        x = Inception(192, 96, 208, 16, 48, 64, name='inception_4a')(x)
        if aux_logits:
            # First auxiliary classifier, fed by inception_4a
            aux1 = InceptionAux(class_num, name='aux_1')(x)
        x = Inception(160, 112, 224, 24, 64, 64, name='inception_4b')(x)
        x = Inception(128, 128, 256, 24, 64, 64, name='inception_4c')(x)
        x = Inception(112, 144, 288, 32, 64, 64, name='inception_4d')(x)
        if aux_logits:
            # Second auxiliary classifier, fed by inception_4d
            aux2 = InceptionAux(class_num, name='aux_2')(x)
        x = Inception(256, 160, 320, 32, 128, 128, name='inception_4e')(x)
        x = keras.layers.MaxPool2D(pool_size=3, strides=2, padding='SAME')(x)
        x = Inception(256, 160, 320, 32, 128, 128, name='inception_5a')(x)
        x = Inception(384, 192, 384, 48, 128, 128, name='inception_5b')(x)
        x = keras.layers.AvgPool2D(pool_size=7, strides=1)(x)
        x = keras.layers.Flatten()(x)
        x = keras.layers.Dropout(rate=0.4)(x)
        x = keras.layers.Dense(class_num)(x)
        aux3 = keras.layers.Softmax(name='aux_3')(x)
        if aux_logits:
            # Merge the main output with the two auxiliary outputs as a weighted sum
            aux = aux1 * 0.2 + aux2 * 0.3 + aux3 * 0.5
            model = keras.models.Model(inputs=input_image, outputs=aux)
        else:
            model = keras.models.Model(inputs=input_image, outputs=aux3)
        return model
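
    A minimal smoke test I added for the builder: even with aux_logits=True the model exposes a single output, because the three softmax outputs are merged into one weighted tensor.

    # Build a 10-class version with the auxiliary classifiers enabled
    model = GoogLeNet(im_height=224, im_width=224, class_num=10, aux_logits=True)
    pred = model(tf.random.normal((1, 224, 224, 3)))
    print(pred.shape)   # expected: (1, 10)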

    3.4 Data generation

    train_dir = './training/training/'
    valid_dir = './validation/validation/'

    # Image data generator with augmentation for the training set
    train_datagen = keras.preprocessing.image.ImageDataGenerator(
        rescale=1. / 255,
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        vertical_flip=True,
        fill_mode='nearest'
    )

    height = 224
    width = 224
    channels = 3
    batch_size = 32
    num_classes = 10

    train_generator = train_datagen.flow_from_directory(train_dir,
                                                        target_size=(height, width),
                                                        batch_size=batch_size,
                                                        shuffle=True,
                                                        seed=7,
                                                        class_mode='categorical')

    # The validation set is only rescaled, not augmented
    valid_datagen = keras.preprocessing.image.ImageDataGenerator(
        rescale=1. / 255
    )
    valid_generator = valid_datagen.flow_from_directory(valid_dir,
                                                        target_size=(height, width),
                                                        batch_size=batch_size,
                                                        shuffle=True,
                                                        seed=7,
                                                        class_mode='categorical')

    print(train_generator.samples)
    print(valid_generator.samples)
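
    Assuming the training and validation folders follow the one-sub-folder-per-class layout that flow_from_directory expects, a single batch can be inspected like this (my own check, not part of the original post):

    # Pull one augmented batch and inspect its shape and the class mapping
    x_batch, y_batch = next(train_generator)
    print(x_batch.shape, y_batch.shape)    # expected with 10 class folders: (32, 224, 224, 3) (32, 10)
    print(train_generator.class_indices)   # folder name -> label index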

    3.5 Training

    googlenet = GoogLeNet(class_num=10)
    googlenet.summary()
    googlenet.compile(optimizer='adam',
                      loss='categorical_crossentropy',
                      metrics=['acc'])
    history = googlenet.fit(train_generator,
                            steps_per_epoch=train_generator.samples // batch_size,
                            epochs=10,
                            validation_data=valid_generator,
                            validation_steps=valid_generator.samples // batch_size)
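
    Since pandas and matplotlib are already imported in section 3.1, the training curves can be plotted from the returned history object (my addition; the 'acc'/'val_acc' keys follow from metrics=['acc'] above):

    def plot_history(history):
        # Accuracy curves for training and validation
        pd.DataFrame(history.history)[['acc', 'val_acc']].plot()
        plt.xlabel('epoch')
        plt.show()
        # Loss curves for training and validation
        pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
        plt.xlabel('epoch')
        plt.show()

    plot_history(history)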

  • Original article: https://blog.csdn.net/peng_258/article/details/132747430