Deep Learning Day 14: Heart Disease Prediction with an RNN


🍨 This post is a learning-record entry for the [🔗365天深度学习训练营] (365-Day Deep Learning Training Camp)
🍖 Original author: [K同学啊 | 接辅导、项目定制]

Objectives:

1. Load the dataset from local files;
2. Understand how a recurrent neural network (RNN) is built;
3. Reach 87% accuracy on the test set.

I. Environment

• Language: Python 3.7
• IDE: PyCharm
• Deep learning framework: TensorFlow 2.4.1
• Dataset: private dataset

II. Preparation

1. Set up the GPU

    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        tf.config.experimental.set_memory_growth(gpus[0], True)  # allocate GPU memory on demand
        tf.config.set_visible_devices([gpus[0]], "GPU")

    # Print the GPU list to confirm a GPU is available
    print(gpus)

Depending on your hardware, you can train on either the GPU or the CPU. If a GPU is available, the code prints:

    [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

Because the CUDA version installed on this machine does not match the TensorFlow version, the CPU build of TensorFlow was installed instead, so the output above does not appear.
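As a quick sanity check (a minimal sketch, independent of the tutorial code), you can print the TensorFlow version and the devices it actually sees; on a CPU-only install the GPU list is simply empty:

    import tensorflow as tf

    # Print the installed TensorFlow version and every visible device;
    # an empty GPU list confirms training will fall back to the CPU.
    print(tf.__version__)
    print("GPUs:", tf.config.list_physical_devices("GPU"))
    print("CPUs:", tf.config.list_physical_devices("CPU"))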

2. Import the data

The dataset used in this project is not included in any public collection, so you need to place the data file in your own working directory and set the corresponding path for use in the rest of the walkthrough.

Run the following code:

    import pandas as pd

    df = pd.read_csv("./data/heart.csv")
    print(df)

which produces:

    age sex cp trestbps chol fbs ... exang oldpeak slope ca thal target
    0    63   1  3      145  233   1 ...     0     2.3     0  0    1      1
    1    37   1  2      130  250   0 ...     0     3.5     0  0    2      1
    2    41   0  1      130  204   0 ...     0     1.4     2  0    2      1
    3    56   1  1      120  236   0 ...     0     0.8     2  0    2      1
    4    57   0  0      120  354   0 ...     1     0.6     2  0    2      1
    ..  ...  .. ..      ...  ...  .. ...   ...     ...    .. ..  ...    ...
    298  57   0  0      140  241   0 ...     1     0.2     1  0    3      0
    299  45   1  3      110  264   0 ...     0     1.2     1  0    3      0
    300  68   1  0      144  193   1 ...     0     3.4     1  2    3      0
    301  57   1  0      130  131   0 ...     1     1.2     1  1    3      0
    302  57   0  1      130  236   0 ...     0     0.0     1  1    2      0

    [303 rows x 14 columns]

3. Inspect the data

Because this dataset is tabular, we need to check it for missing values, outliers, and other bad entries. Start with missing values:

    dfnull = df.isnull().sum()
    print(dfnull)

which outputs:

    age         0
    sex         0
    cp          0
    trestbps    0
    chol        0
    fbs         0
    restecg     0
    thalach     0
    exang       0
    oldpeak     0
    slope       0
    ca          0
    thal        0
    target      0
    dtype: int64

As shown, the dataset contains no missing values.
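Two more cheap sanity checks are worth running on a table like this (a hedged sketch; the original post stops at the null check): duplicated rows, and the per-column summary statistics, which make implausible values easy to spot:

    # Count exact duplicate rows, then eyeball the value ranges;
    # describe() reports min/max/quartiles for every numeric column.
    print(df.duplicated().sum())
    print(df.describe())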

III. Data Preprocessing

1. Split the dataset

On the relationship between the test set and the validation set:

1. The validation set takes no part in gradient descent during training, i.e. it never drives the model's parameter updates;
2. Broadly speaking, though, the validation set does participate in "manual tuning": after each epoch we look at the validation results and adjust the hyperparameters accordingly;
3. In that sense the validation set is involved in training, but it does not make the model overfit to it directly.

The split itself:

    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    X = df.iloc[:, :-1]
    y = df.iloc[:, -1]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=1)
    print(X_train.shape, y_train.shape)

which prints:

    (272, 13) (272,)

Here:

• X = df.iloc[:,:-1]: selects all rows and every column except the last of the DataFrame df as the feature matrix X;
• y = df.iloc[:,-1]: selects all rows of the last column as the target variable y;
• X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=1): splits the features and target into a training set and a test set; test_size=0.1 reserves 10% of the data for testing, and random_state=1 fixes the random seed so the split is identical on every run.
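One optional refinement, not used in the original code: for a classification target it is common to pass stratify=y so that both partitions keep the same class proportions as the full dataset. A hypothetical variant of the split above:

    # stratify=y preserves the ratio of target classes in both splits,
    # which matters for a small test set like this 10% one (31 rows).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.1, random_state=1, stratify=y)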

2. Standardization

    sc = StandardScaler()
    X_train = sc.fit_transform(X_train)
    X_test = sc.transform(X_test)
    X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
    X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

Here:

• sc = StandardScaler(): instantiates the StandardScaler class, used to standardize the features;
• X_train = sc.fit_transform(X_train): fits the scaler on the training features and transforms them so that each feature has zero mean and unit variance;
• X_test = sc.transform(X_test): applies the same transformation to the test features, reusing the mean and variance learned from the training set;
• X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1): reshapes the training features into a three-dimensional array of shape (samples, timesteps, features), which is the input format the SimpleRNN layer below expects;
• X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1): reshapes the test features into the same three-dimensional format.
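A quick way to confirm the scaling worked (a minimal check, assuming the code above has already run): per-feature means should be close to 0 and standard deviations close to 1 on the training set:

    import numpy as np

    # Drop the trailing channel axis and compute statistics over samples;
    # the printed values should be ~0 and ~1 respectively.
    print(np.round(X_train.squeeze(-1).mean(axis=0), 6))
    print(np.round(X_train.squeeze(-1).std(axis=0), 6))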

IV. Build the Network

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, LSTM, SimpleRNN

    model = Sequential()
    model.add(SimpleRNN(200, input_shape=(13, 1), activation='relu'))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()

which prints:

    Model: "sequential"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    simple_rnn (SimpleRNN)       (None, 200)               40400
    _________________________________________________________________
    dense (Dense)                (None, 100)               20100
    _________________________________________________________________
    dense_1 (Dense)              (None, 1)                 101
    =================================================================
    Total params: 60,601
    Trainable params: 60,601
    Non-trainable params: 0
    _________________________________________________________________
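The parameter counts can be verified by hand: a SimpleRNN layer has units × (units + input_dim + 1) parameters, so 200 × (200 + 1 + 1) = 40,400; the first Dense layer has 200 × 100 + 100 = 20,100; the output layer has 100 × 1 + 1 = 101; together these give the reported total of 60,601.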

Here we call tf.keras.layers.SimpleRNN, whose prototype is:

    tf.keras.layers.SimpleRNN(units, activation='tanh', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', return_sequences=False)

where:

• units: a positive integer, the dimensionality of the output space (the number of units);
• activation: the activation function, defaulting to 'tanh'; it can be a built-in activation or a custom one;
• use_bias: boolean, whether the layer uses a bias vector, defaulting to True;
• kernel_initializer: initializer for the input weight matrix, defaulting to 'glorot_uniform' (also known as Xavier initialization);
• recurrent_initializer: initializer for the weight matrix of the recurrent connections, defaulting to 'orthogonal';
• bias_initializer: initializer for the bias vector, defaulting to 'zeros';
• return_sequences: boolean, whether to return the full output sequence, defaulting to False. If True, the layer returns the output at every timestep; if False, only the output at the last timestep.
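To make return_sequences concrete, here is a hedged sketch (not part of the original model) of stacking two recurrent layers: the first must return its full output sequence so the second has a sequence to consume:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, SimpleRNN

    # Hypothetical stacked variant: the first RNN emits (None, 13, 64),
    # the second collapses the sequence to (None, 32).
    stacked = Sequential([
        SimpleRNN(64, input_shape=(13, 1), return_sequences=True),
        SimpleRNN(32),
        Dense(1, activation='sigmoid'),
    ])
    stacked.summary()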

V. Compile the Model

The model is compiled with the following code:

    opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
    model.compile(loss='binary_crossentropy',
                  optimizer=opt,
                  metrics="accuracy")

VI. Train the Model

Training is run with the following code:

    epochs = 100
    history = model.fit(X_train, y_train,
                        epochs=epochs,
                        batch_size=128,
                        validation_data=(X_test, y_test),
                        verbose=1)

Running this produces the following output (abridged):

    Epoch 1/100
    3/3 [==============================] - 1s 119ms/step - loss: 0.6851 - accuracy: 0.5728 - val_loss: 0.6811 - val_accuracy: 0.6452
    Epoch 2/100
    3/3 [==============================] - 0s 9ms/step - loss: 0.6831 - accuracy: 0.5796 - val_loss: 0.6795 - val_accuracy: 0.6452
    Epoch 3/100
    3/3 [==============================] - 0s 9ms/step - loss: 0.6829 - accuracy: 0.5784 - val_loss: 0.6779 - val_accuracy: 0.6452
    Epoch 4/100
    3/3 [==============================] - 0s 8ms/step - loss: 0.6815 - accuracy: 0.5812 - val_loss: 0.6762 - val_accuracy: 0.6452
    Epoch 5/100
    3/3 [==============================] - 0s 11ms/step - loss: 0.6796 - accuracy: 0.5947 - val_loss: 0.6745 - val_accuracy: 0.6452
    ...
    Epoch 98/100
    3/3 [==============================] - 0s 8ms/step - loss: 0.5889 - accuracy: 0.8125 - val_loss: 0.5319 - val_accuracy: 0.9032
    Epoch 99/100
    3/3 [==============================] - 0s 9ms/step - loss: 0.5901 - accuracy: 0.8096 - val_loss: 0.5299 - val_accuracy: 0.9032
    Epoch 100/100
    3/3 [==============================] - 0s 8ms/step - loss: 0.5876 - accuracy: 0.8105 - val_loss: 0.5279 - val_accuracy: 0.9032

Final training result: val_accuracy = 90.32%, which exceeds the 87% target.

VII. Model Evaluation

1. Loss and Accuracy Curves

    import matplotlib.pyplot as plt

    acc = history.history['accuracy']
    val_acc = history.history['val_accuracy']
    loss = history.history['loss']
    val_loss = history.history['val_loss']

    epochs_range = range(epochs)

    plt.figure(figsize=(14, 4))
    plt.subplot(1, 2, 1)
    plt.plot(epochs_range, acc, label='Training Accuracy-Adam')
    plt.plot(epochs_range, val_acc, label='Validation Accuracy-Adam')
    plt.legend(loc='lower right')
    plt.title('Training and Validation Accuracy')

    plt.subplot(1, 2, 2)
    plt.plot(epochs_range, loss, label='Training Loss-Adam')
    plt.plot(epochs_range, val_loss, label='Validation Loss-Adam')
    plt.legend(loc='upper right')
    plt.title('Training and Validation Loss')
    plt.show()

The resulting visualization: [figure: training and validation accuracy (left) and loss (right) over the 100 epochs]
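Beyond the curves, the final test metrics can be reported directly; a minimal sketch, assuming model, X_test, and y_test from the code above:

    # evaluate() returns [loss, accuracy] given the compile() call used earlier.
    loss, acc = model.evaluate(X_test, y_test, verbose=0)
    print(f"test loss: {loss:.4f}, test accuracy: {acc:.4f}")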

VIII. Personal Reflections

This project uses an RNN to predict heart disease from a given CSV file. Because CSV data is tabular, missing values are a real possibility, which is an obvious difference from the image data used in earlier lessons, so the table needs some processing before it is fed into the network. Since this was my first time working with tabular data, I only implemented the missing-value check; in future work I plan to add outlier handling and other data-cleaning steps.
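As a starting point for that outlier handling, a common first pass (a sketch, not something this post implements) is an IQR-based count per column:

    # Count, per column, values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR];
    # columns with large counts deserve a closer look before modeling.
    q1, q3 = df.quantile(0.25), df.quantile(0.75)
    iqr = q3 - q1
    outliers = ((df < q1 - 1.5 * iqr) | (df > q3 + 1.5 * iqr)).sum()
    print(outliers)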

• Original article: https://blog.csdn.net/m0_51359915/article/details/136810568