Convolutional Neural Networks (CNN): Breast Cancer Detection


        This post walks through a deep-learning application for breast cancer detection. It first gives a brief introduction to the medical background, then surveys the publicly available breast cancer datasets, and finally describes the network implementation, the results, and a performance analysis. Convolutional neural networks (CNNs) have already been applied to cancer screening, but CNN-based models share two weaknesses: instability and a strong dependence on the training data. When a model is deployed, the training and test data are assumed to be drawn from the same distribution. This can be a problem in medical imaging, where factors such as camera settings or the age of the chemical stain vary between facilities and hospitals and affect image color. These variations may be invisible to the human eye, yet they can shift the features a CNN relies on and degrade model performance. It is therefore important to develop robust algorithms that can adapt to differences between domains.
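As a rough illustration of one such robustness measure (not part of this post's pipeline, and a simplification of proper stain-normalization methods), per-channel standardization maps images with different global color casts onto a common scale:

```python
import numpy as np

def standardize_channels(img):
    """Zero-mean, unit-variance normalization per color channel.

    A crude stand-in for stain normalization: it removes global
    per-channel color casts, though not local staining differences.
    """
    img = img.astype(np.float64)
    mean = img.mean(axis=(0, 1), keepdims=True)        # per-channel mean
    std = img.std(axis=(0, 1), keepdims=True) + 1e-8   # per-channel std
    return (img - mean) / std

# two copies of the same (synthetic) patch with different color casts
rng = np.random.default_rng(0)
base = rng.uniform(0, 200, size=(50, 50, 3))
tinted = base * 1.2 + 15.0  # simulated brighter, differently exposed scan

# after standardization the two versions coincide (up to float rounding)
print(np.abs(standardize_channels(base) - standardize_channels(tinted)).max())
```

Because the tint here is an affine per-channel change, standardization cancels it exactly; real inter-hospital stain variation is more complex, which is why dedicated stain-normalization methods exist.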


    I. Preparing the Environment

    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
        tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
        tf.config.set_visible_devices([gpu0], "GPU")

    import matplotlib.pyplot as plt
    import os, PIL, pathlib
    import numpy as np
    import pandas as pd
    import warnings
    from tensorflow import keras

    warnings.filterwarnings("ignore")  # suppress warning messages
    plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
    plt.rcParams['axes.unicode_minus'] = False    # display minus signs correctly

    II. Loading the Data

    1. Load the data

    import pathlib

    data_dir = "./26-data"
    data_dir = pathlib.Path(data_dir)
    image_count = len(list(data_dir.glob('*/*')))
    print("Total number of images:", image_count)

    Total number of images: 13403

     

    batch_size = 16
    img_height = 50
    img_width = 50

    """
    For a detailed introduction to image_dataset_from_directory(), see:
    https://mtyjkh.blog.csdn.net/article/details/117018789
    """
    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
        data_dir,
        validation_split=0.2,
        subset="training",
        seed=12,
        image_size=(img_height, img_width),
        batch_size=batch_size)

    Found 13403 files belonging to 2 classes. Using 10723 files for training.

    1. """
    2. 关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
    3. """
    4. val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    5. data_dir,
    6. validation_split=0.2,
    7. subset="validation",
    8. seed=12,
    9. image_size=(img_height, img_width),
    10. batch_size=batch_size)

    Found 13403 files belonging to 2 classes. Using 2680 files for validation.

     

    class_names = train_ds.class_names
    print(class_names)

    ['0', '1']

     2. Inspect the data

    for image_batch, labels_batch in train_ds:
        print(image_batch.shape)
        print(labels_batch.shape)
        break

    (16, 50, 50, 3)
    (16,)

    3. Configure the dataset

    • shuffle(): shuffles the data; for a detailed introduction see https://zhuanlan.zhihu.com/p/42417456
    • prefetch(): prefetches data to speed up execution; my previous two posts explain it in detail.
    • cache(): caches the dataset in memory to speed up execution
    AUTOTUNE = tf.data.AUTOTUNE

    def train_preprocessing(image, label):
        return (image / 255.0, label)

    train_ds = (
        train_ds.cache()
        .shuffle(1000)
        .map(train_preprocessing)  # the preprocessing function can be set here
        # .batch(batch_size)       # batch_size was already set in image_dataset_from_directory
        .prefetch(buffer_size=AUTOTUNE)
    )

    val_ds = (
        val_ds.cache()
        .shuffle(1000)
        .map(train_preprocessing)  # the preprocessing function can be set here
        # .batch(batch_size)       # batch_size was already set in image_dataset_from_directory
        .prefetch(buffer_size=AUTOTUNE)
    )
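Note that shuffle(1000) does not shuffle the whole dataset at once: it keeps a buffer of 1000 elements and samples randomly from that buffer as elements stream through. A minimal pure-Python sketch of this buffered shuffling (illustrative only, not the tf.data implementation):

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=None):
    """Yield elements in a buffered-shuffle order, as tf.data.Dataset.shuffle does:
    fill a buffer, then repeatedly emit a random buffer slot and refill it."""
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            idx = rng.randrange(buffer_size)
            yield buffer[idx]
            buffer[idx] = item
    rng.shuffle(buffer)   # drain the remaining buffer in random order
    yield from buffer

shuffled = list(buffered_shuffle(range(10), buffer_size=4, seed=12))
print(shuffled)  # a permutation of 0..9, but only "locally" shuffled
```

This is why a buffer smaller than the dataset gives only approximate shuffling; with 13403 images, a buffer of 1000 mixes nearby files but not the whole dataset.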

    4. Visualize the data
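The visualization code did not survive in this post. A minimal sketch that tiles one batch of images into a single grid (pure NumPy; `plt.imshow(grid)` would then display it) might look like this; the `make_grid` helper and the random stand-in batch are my own illustration, not the original code:

```python
import numpy as np

def make_grid(images, rows, cols):
    """Tile a batch of images of shape (N, H, W, C) into one (rows*H, cols*W, C) grid."""
    n, h, w, c = images.shape
    assert rows * cols >= n, "grid too small for the batch"
    grid = np.zeros((rows * h, cols * w, c), dtype=images.dtype)
    for i in range(n):
        r, col = divmod(i, cols)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = images[i]
    return grid

batch = np.random.rand(16, 50, 50, 3)   # stand-in for one batch from train_ds
grid = make_grid(batch, rows=4, cols=4)
print(grid.shape)  # (200, 200, 3)
```

With the real pipeline, one would take a batch via `next(iter(train_ds))`, build the grid, and show it with matplotlib.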

     

    III. Building the Model

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), padding="same", activation="relu",
                               input_shape=[img_width, img_height, 3]),
        tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax")
    ])
    model.summary()
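The shapes work out as follows: with 50x50 inputs, the "same"-padded convolutions keep the spatial size, and each 2x2 max-pool halves it (floor division), so the Flatten layer sees 6 x 6 x 16 = 576 features. The arithmetic can be checked by hand:

```python
def pool_out(size, pool=2):
    """Output size of a MaxPooling2D layer with pool_size=pool and the default
    stride (equal to pool): floor division of the spatial size."""
    return size // pool

size = 50           # input height/width; 'same' convolutions leave it unchanged
for _ in range(3):  # three MaxPooling2D layers in the model
    size = pool_out(size)
print(size, size * size * 16)  # 6 576
```

This is the small fully connected input that keeps the parameter count of the final Dense layer modest.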

     IV. Compiling the Model

    model.compile(optimizer="adam",
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
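sparse_categorical_crossentropy takes integer class labels (here 0/1) rather than one-hot vectors; for a single sample it is simply the negative log of the probability the softmax assigned to the true class. A hand computation, with a hypothetical softmax output chosen for illustration:

```python
import math

def sparse_cce(y_true, y_pred):
    """Per-sample sparse categorical cross-entropy:
    -log(probability assigned to the true integer class)."""
    return -math.log(y_pred[y_true])

probs = [0.2, 0.8]  # hypothetical softmax output for one patch
loss_if_cancer = sparse_cce(1, probs)  # true class is '1'
loss_if_normal = sparse_cce(0, probs)  # true class is '0'
print(round(loss_if_cancer, 4), round(loss_if_normal, 4))
```

A confident wrong prediction (probability 0.2 for the true class) is penalized far more heavily than a confident right one, which is what drives the gradient during training.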

    V. Training the Model

    from tensorflow.keras.callbacks import ModelCheckpoint, Callback, EarlyStopping, ReduceLROnPlateau, LearningRateScheduler

    NO_EPOCHS = 100
    PATIENCE = 5
    VERBOSE = 1

    # dynamic learning-rate schedule
    annealer = LearningRateScheduler(lambda x: 1e-3 * 0.99 ** (x + NO_EPOCHS))
    # early stopping
    earlystopper = EarlyStopping(monitor='loss', patience=PATIENCE, verbose=VERBOSE)
    # save the weights with the best validation accuracy
    checkpointer = ModelCheckpoint('best_model.h5',
                                   monitor='val_accuracy',
                                   verbose=VERBOSE,
                                   save_best_only=True,
                                   save_weights_only=True)

    # the fit call was missing from the original listing; it is reconstructed here
    # from the callbacks above and the history used in the evaluation section
    train_model = model.fit(train_ds,
                            validation_data=val_ds,
                            epochs=NO_EPOCHS,
                            verbose=VERBOSE,
                            callbacks=[annealer, earlystopper, checkpointer])
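LearningRateScheduler evaluates its lambda on the epoch index, so the schedule above decays geometrically; note that the (x + NO_EPOCHS) offset means training starts at 1e-3 * 0.99**100, roughly 3.7e-4, rather than at 1e-3. The first few values can be computed directly:

```python
NO_EPOCHS = 100

def lr_schedule(epoch):
    """The same schedule as the LearningRateScheduler lambda above."""
    return 1e-3 * 0.99 ** (epoch + NO_EPOCHS)

for epoch in [0, 1, 50, 99]:
    print(epoch, lr_schedule(epoch))
```

Whether the offset is intentional or a typo for 0.99 ** x is not stated in the post; either way the schedule is monotonically decreasing.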
    Epoch 1/100
    671/671 [==============================] - 37s 52ms/step - loss: 0.5644 - accuracy: 0.7007 - val_loss: 0.5268 - val_accuracy: 0.7228
    Epoch 00001: val_accuracy improved from -inf to 0.72276, saving model to best_model.h5
    Epoch 2/100
    671/671 [==============================] - 8s 12ms/step - loss: 0.4430 - accuracy: 0.8062 - val_loss: 0.4252 - val_accuracy: 0.8317
    Epoch 00002: val_accuracy improved from 0.72276 to 0.83172, saving model to best_model.h5
    ... (epochs 3-81 omitted; val_accuracy climbed gradually to 0.90746) ...
    Epoch 82/100
    671/671 [==============================] - 7s 11ms/step - loss: 0.2135 - accuracy: 0.9132 - val_loss: 0.2306 - val_accuracy: 0.9082
    Epoch 00082: val_accuracy improved from 0.90746 to 0.90821, saving model to best_model.h5
    ... (epochs 83-99 omitted; val_accuracy did not improve from 0.90821) ...
    Epoch 100/100
    671/671 [==============================] - 7s 11ms/step - loss: 0.1972 - accuracy: 0.9191 - val_loss: 0.2718 - val_accuracy: 0.8884
    Epoch 00100: val_accuracy did not improve from 0.90821

    VI. Evaluating the Model

    1. Accuracy and loss curves

    acc = train_model.history['accuracy']
    val_acc = train_model.history['val_accuracy']
    loss = train_model.history['loss']
    val_loss = train_model.history['val_loss']

    epochs_range = range(len(acc))

    plt.figure(figsize=(12, 4))
    plt.subplot(1, 2, 1)
    plt.plot(epochs_range, acc, label='Training Accuracy')
    plt.plot(epochs_range, val_acc, label='Validation Accuracy')
    plt.legend(loc='lower right')
    plt.title('Training and Validation Accuracy')

    plt.subplot(1, 2, 2)
    plt.plot(epochs_range, loss, label='Training Loss')
    plt.plot(epochs_range, val_loss, label='Validation Loss')
    plt.legend(loc='upper right')
    plt.title('Training and Validation Loss')
    plt.show()

     

    2. Confusion matrix

    from sklearn.metrics import confusion_matrix
    import seaborn as sns
    import pandas as pd

    # helper that plots a confusion matrix
    def plot_cm(labels, predictions):
        # build the confusion matrix
        conf_numpy = confusion_matrix(labels, predictions)
        # convert the matrix to a DataFrame
        conf_df = pd.DataFrame(conf_numpy, index=class_names, columns=class_names)
        plt.figure(figsize=(8, 7))
        sns.heatmap(conf_df, annot=True, fmt="d", cmap="BuPu")
        plt.title('Confusion matrix', fontsize=15)
        plt.ylabel('True label', fontsize=14)
        plt.xlabel('Predicted label', fontsize=14)

    val_pre = []
    val_label = []
    for images, labels in val_ds:  # a subset of the validation data (.take(1)) is enough for the confusion matrix
        for image, label in zip(images, labels):
            # add a batch dimension to the image
            img_array = tf.expand_dims(image, 0)
            # predict the class of the image with the model
            prediction = model.predict(img_array)
            val_pre.append(class_names[np.argmax(prediction)])
            val_label.append(class_names[label])

    plot_cm(val_label, val_pre)

     

    3. Metric-by-metric evaluation

     

    from sklearn import metrics

    def test_accuracy_report(model):
        print(metrics.classification_report(val_label, val_pre, target_names=class_names))
        score = model.evaluate(val_ds, verbose=0)
        print('Loss function: %s, accuracy:' % score[0], score[1])

    test_accuracy_report(model)
                         precision    recall  f1-score   support

    breast cancer cells       0.86      0.92      0.89      1339
           normal cells       0.92      0.86      0.88      1341

               accuracy                           0.89      2680
              macro avg       0.89      0.89      0.89      2680
           weighted avg       0.89      0.89      0.89      2680

    Loss function: 0.27176907658576965, accuracy: 0.8884328603744507
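The precision/recall columns of the report can be reproduced directly from confusion-matrix counts. The post does not print the raw matrix, so the counts below are hypothetical, chosen to match the "breast cancer cells" row (support 1339):

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from raw confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical counts for the 'breast cancer cells' class (tp + fn = 1339 = its support)
tp, fp, fn = 1232, 201, 107
precision, recall, f1 = prf(tp, fp, fn)
print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.86 0.92 0.89
```

Because false negatives (missed cancers) are more costly than false positives in screening, recall on the cancer class (0.92 here) is usually the metric to watch, not overall accuracy.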

    >- This article is a learning-log post for the [🔗365-day deep learning training camp](https://mp.weixin.qq.com/s/k-vYaC8l7uxX51WoypLkTw)
    >- Reference article: [🔗100 deep learning examples - CNN weather recognition | Day 5](https://mtyjkh.blog.csdn.net/article/details/117186183)
     

  • Original article: https://blog.csdn.net/qq_21402983/article/details/126452518