• DL GRU: predicting the latest SSE Composite Index by regression with a GRU on the June 2022 index dataset


DL GRU: predicting the latest SSE Composite Index by regression with a GRU in the PyTorch framework, on the June 2022 index dataset

Table of Contents

Predicting the latest SSE Composite Index by regression with a GRU in the PyTorch framework, on the June 2022 index dataset

# 0. Preliminary dataset preparation

# 1. Read the dataset

# 2. Data preprocessing

# 2.1. Data cleaning

# 2.2. Standardize the date format

# 2.3. Define y_train

# 2.4. Construct the time-series matrix dataset: rebuild the training set from y so it respects temporal order

# 2.5. Apply Z-score standardization to the training set

# 2.6. Convert the training set from DataFrame to tensor

# 3. Model training

# 3.1. Build the model: define the GRU model, optimizer, and loss function

# 3.2. Train the model: save checkpoints during training

# 3.3. Normalize the label data separately

# 3.4. Predict with the GRU model: run the trained GRU model on the test set

# 3.5. Model evaluation



Predicting the latest SSE Composite Index by regression with a GRU in the PyTorch framework, on the June 2022 index dataset

# 0. Preliminary dataset preparation

# Dataset download: 上证指数(000001)历史交易数据_股票行情_网易财经 (SSE Composite Index (000001) historical trading data, NetEase Finance stock quotes)

# 1. Read the dataset

    (7700, 11)

| Date | Code | Name | Close | High | Low | Open | Prev Close | Change | Change % | Volume | Turnover |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1990/12/19 | '000001 | 上证指数 | 99.98 | 99.98 | 95.79 | 96.05 | None | None | None | 1260 | 494000 |
| 1990/12/20 | '000001 | 上证指数 | 104.39 | 104.39 | 99.98 | 104.3 | 99.98 | 4.41 | 4.4109 | 197 | 84000 |
| 1990/12/21 | '000001 | 上证指数 | 109.13 | 109.13 | 103.73 | 109.07 | 104.39 | 4.74 | 4.5407 | 28 | 16000 |
| 1990/12/24 | '000001 | 上证指数 | 114.55 | 114.55 | 109.13 | 113.57 | 109.13 | 5.42 | 4.9666 | 32 | 31000 |
| 1990/12/25 | '000001 | 上证指数 | 120.25 | 120.25 | 114.55 | 120.09 | 114.55 | 5.7 | 4.976 | 15 | 6000 |
| 1990/12/26 | '000001 | 上证指数 | 125.27 | 125.27 | 120.25 | 125.27 | 120.25 | 5.02 | 4.1746 | 100 | 53000 |
| 1990/12/27 | '000001 | 上证指数 | 125.28 | 125.28 | 125.27 | 125.27 | 125.27 | 0.01 | 0.008 | 66 | 104000 |
| 1990/12/28 | '000001 | 上证指数 | 126.45 | 126.45 | 125.28 | 126.39 | 125.28 | 1.17 | 0.9339 | 108 | 88000 |
| 1990/12/31 | '000001 | 上证指数 | 127.61 | 127.61 | 126.48 | 126.56 | 126.45 | 1.16 | 0.9174 | 78 | 60000 |
| 1991/1/2 | '000001 | 上证指数 | 128.84 | 128.84 | 127.61 | 127.61 | 127.61 | 1.23 | 0.9639 | 91 | 59000 |
| 1991/1/3 | '000001 | 上证指数 | 130.14 | 130.14 | 128.84 | 128.84 | 128.84 | 1.3 | 1.009 | 141 | 93000 |
| 1991/1/4 | '000001 | 上证指数 | 131.44 | 131.44 | 130.14 | 131.27 | 130.14 | 1.3 | 0.9989 | 420 | 261000 |
| 1991/1/7 | '000001 | 上证指数 | 132.06 | 132.06 | 131.45 | 131.99 | 131.44 | 0.62 | 0.4717 | 217 | 141000 |
| 1991/1/8 | '000001 | 上证指数 | 132.68 | 132.68 | 132.06 | 132.62 | 132.06 | 0.62 | 0.4695 | 2926 | 1806000 |
| 1991/1/9 | '000001 | 上证指数 | 133.34 | 133.34 | 132.68 | 133.3 | 132.68 | 0.66 | 0.4974 | 5603 | 3228000 |
| 1991/1/10 | '000001 | 上证指数 | 133.97 | 133.97 | 133.34 | 133.93 | 133.34 | 0.63 | 0.4725 | 9990 | 5399000 |
| 1991/1/11 | '000001 | 上证指数 | 134.6 | 134.61 | 134.51 | 134.61 | 133.97 | 0.63 | 0.4703 | 13327 | 7115000 |
| 1991/1/14 | '000001 | 上证指数 | 134.67 | 135.19 | 134.11 | 134.11 | 134.6 | 0.07 | 0.052 | 12530 | 6883000 |
| 1991/1/15 | '000001 | 上证指数 | 134.74 | 134.74 | 134.19 | 134.21 | 134.67 | 0.07 | 0.052 | 1446 | 1010000 |
| 1991/1/16 | '000001 | 上证指数 | 134.24 | 134.74 | 134.14 | 134.19 | 134.74 | -0.5 | -0.3711 | 509 | 270000 |
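A minimal sketch of the read step, assuming the NetEase CSV export is GBK-encoded and keeps the Chinese column names shown above (the file name is hypothetical):

```python
import pandas as pd

# NetEase (网易财经) CSV exports are typically GBK-encoded, an assumption here
df = pd.read_csv('000001_sse_index.csv', encoding='gbk')
print(df.shape)     # (7700, 11) in the original run
print(df.head(20))  # the rows shown in the table above
```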

# 2. Data preprocessing

# 2.1. Data cleaning
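A minimal sketch of this step, assuming cleaning here just means converting the literal 'None' entries (visible on the first trading day above) to NaN and dropping those rows:

```python
import numpy as np

# Replace 'None' strings with NaN, drop incomplete rows, and reindex
df = df.replace('None', np.nan).dropna().reset_index(drop=True)
```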

# 2.2. Standardize the date format

Use the strptime() function to convert the dates to %Y-%m-%d format.
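A minimal sketch (column name 日期 as in the table above):

```python
from datetime import datetime

# Parse '1990/12/19'-style dates and re-emit them as '1990-12-19'
df['日期'] = df['日期'].apply(
    lambda s: datetime.strptime(s, '%Y/%m/%d').strftime('%Y-%m-%d'))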

# 2.3. Define y_train

    y_train shape: (7200,)
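A sketch of this step, assuming y is the close-price column (收盘价) and the first 7200 rows form the training portion, consistent with the cut_train_test: 7700 7200 output in section 3.4:

```python
cut_point = 7200                         # train/test split point (from the log)
y = df['收盘价'].astype(float).values    # close price is the regression target
y_train = y[:cut_point]
print('y_train shape:', y_train.shape)   # (7200,)
```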

# 2.4. Construct the time-series matrix dataset: rebuild the training set from y so it respects temporal order

data_all_train shape: (7190, 11)
data_all_train
        label_0    label_1    label_2  ...    label_8    label_9          y
0       99.9800   104.3900   109.1300  ...   127.6100   128.8400   130.1400
1      104.3900   109.1300   114.5500  ...   128.8400   130.1400   131.4400
2      109.1300   114.5500   120.2500  ...   130.1400   131.4400   132.0600
3      114.5500   120.2500   125.2700  ...   131.4400   132.0600   132.6800
4      120.2500   125.2700   125.2800  ...   132.0600   132.6800   133.3400
...         ...        ...        ...  ...        ...        ...        ...
7185  2870.3422  2868.4587  2875.4176  ...  2846.5473  2836.8036  2846.2217
7186  2868.4587  2875.4176  2898.5760  ...  2836.8036  2846.2217  2852.3512
7187  2875.4176  2898.5760  2883.7378  ...  2846.2217  2852.3512  2915.4311
7188  2898.5760  2883.7378  2867.9237  ...  2852.3512  2915.4311  2921.3980
7189  2883.7378  2867.9237  2813.7654  ...  2915.4311  2921.3980  2923.3711
[7190 rows x 11 columns]
| | label_0 | label_1 | label_2 | label_3 | label_4 | label_5 | label_6 | label_7 | label_8 | label_9 | y |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 99.98 | 104.39 | 109.13 | 114.55 | 120.25 | 125.27 | 125.28 | 126.45 | 127.61 | 128.84 | 130.14 |
| 1 | 104.39 | 109.13 | 114.55 | 120.25 | 125.27 | 125.28 | 126.45 | 127.61 | 128.84 | 130.14 | 131.44 |
| 2 | 109.13 | 114.55 | 120.25 | 125.27 | 125.28 | 126.45 | 127.61 | 128.84 | 130.14 | 131.44 | 132.06 |
| 3 | 114.55 | 120.25 | 125.27 | 125.28 | 126.45 | 127.61 | 128.84 | 130.14 | 131.44 | 132.06 | 132.68 |
| 4 | 120.25 | 125.27 | 125.28 | 126.45 | 127.61 | 128.84 | 130.14 | 131.44 | 132.06 | 132.68 | 133.34 |
| 5 | 125.27 | 125.28 | 126.45 | 127.61 | 128.84 | 130.14 | 131.44 | 132.06 | 132.68 | 133.34 | 133.97 |
| 6 | 125.28 | 126.45 | 127.61 | 128.84 | 130.14 | 131.44 | 132.06 | 132.68 | 133.34 | 133.97 | 134.6 |
| 7 | 126.45 | 127.61 | 128.84 | 130.14 | 131.44 | 132.06 | 132.68 | 133.34 | 133.97 | 134.6 | 134.67 |
| 8 | 127.61 | 128.84 | 130.14 | 131.44 | 132.06 | 132.68 | 133.34 | 133.97 | 134.6 | 134.67 | 134.74 |
| 9 | 128.84 | 130.14 | 131.44 | 132.06 | 132.68 | 133.34 | 133.97 | 134.6 | 134.67 | 134.74 | 134.24 |
| 10 | 130.14 | 131.44 | 132.06 | 132.68 | 133.34 | 133.97 | 134.6 | 134.67 | 134.74 | 134.24 | 134.25 |
| 11 | 131.44 | 132.06 | 132.68 | 133.34 | 133.97 | 134.6 | 134.67 | 134.74 | 134.24 | 134.25 | 134.24 |
| 12 | 132.06 | 132.68 | 133.34 | 133.97 | 134.6 | 134.67 | 134.74 | 134.24 | 134.25 | 134.24 | 134.24 |
| 13 | 132.68 | 133.34 | 133.97 | 134.6 | 134.67 | 134.74 | 134.24 | 134.25 | 134.24 | 134.24 | 133.72 |
| 14 | 133.34 | 133.97 | 134.6 | 134.67 | 134.74 | 134.24 | 134.25 | 134.24 | 134.24 | 133.72 | 133.17 |
| 15 | 133.97 | 134.6 | 134.67 | 134.74 | 134.24 | 134.25 | 134.24 | 134.24 | 133.72 | 133.17 | 132.61 |
| 16 | 134.6 | 134.67 | 134.74 | 134.24 | 134.25 | 134.24 | 134.24 | 133.72 | 133.17 | 132.61 | 132.05 |
| 17 | 134.67 | 134.74 | 134.24 | 134.25 | 134.24 | 134.24 | 133.72 | 133.17 | 132.61 | 132.05 | 131.46 |
| 18 | 134.74 | 134.24 | 134.25 | 134.24 | 134.24 | 133.72 | 133.17 | 132.61 | 132.05 | 131.46 | 130.95 |
| 19 | 134.24 | 134.25 | 134.24 | 134.24 | 133.72 | 133.17 | 132.61 | 132.05 | 131.46 | 130.95 | 130.44 |
| 20 | 134.25 | 134.24 | 134.24 | 133.72 | 133.17 | 132.61 | 132.05 | 131.46 | 130.95 | 130.44 | 129.97 |
| 21 | 134.24 | 134.24 | 133.72 | 133.17 | 132.61 | 132.05 | 131.46 | 130.95 | 130.44 | 129.97 | 129.51 |
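A minimal sketch of the window construction: each sample takes 10 consecutive closes as features label_0 … label_9 and the next close as target y, which reproduces the table head above (the helper function itself is an assumption):

```python
import pandas as pd

def build_window_dataset(series, window=10):
    # Row i: [series[i], ..., series[i+window-1]] as features, series[i+window] as y
    rows = [series[i:i + window + 1] for i in range(len(series) - window)]
    cols = [f'label_{i}' for i in range(window)] + ['y']
    return pd.DataFrame(rows, columns=cols)

data_all_train = build_window_dataset(y_train, window=10)
print('data_all_train shape:', data_all_train.shape)  # 7200 - 10 = 7190 rows
```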

# 2.5. Apply Z-score standardization to the training set

data_all_tr2arr_mean: 1964.7695519269184
data_all_tr2arr_std: 1068.4654234837196
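A sketch of the Z-score step, assuming a single global mean and standard deviation computed over the entire training matrix (matching the two scalars printed above):

```python
data_all_tr2arr = data_all_train.values
data_all_tr2arr_mean = data_all_tr2arr.mean()   # ~1964.7696
data_all_tr2arr_std = data_all_tr2arr.std()     # ~1068.4654
data_all_tr2arr_normal = (data_all_tr2arr - data_all_tr2arr_mean) / data_all_tr2arr_std
```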

# 2.6. Convert the training set from DataFrame to tensor

train_loader:
<torch.utils.data.dataloader.DataLoader object at 0x0000014BB5A68AC8>
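A sketch of the conversion, splitting the normalized matrix into features and target and wrapping them in a DataLoader; the batch size is an assumption, and shuffling is off to preserve temporal order:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

X = torch.tensor(data_all_tr2arr_normal[:, :-1], dtype=torch.float32)  # (7190, 10)
t = torch.tensor(data_all_tr2arr_normal[:, -1:], dtype=torch.float32)  # (7190, 1)

train_loader = DataLoader(TensorDataset(X, t), batch_size=64, shuffle=False)
print('train_loader:\n', train_loader)
```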

# 3. Model training

# 3.1. Build the model: define the GRU model, optimizer, and loss function

The model is a GRU followed by a fully connected layer, with hidden_size=64.
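A minimal sketch of such a model with hidden_size=64 as stated; the single GRU layer, the optimizer choice, and the learning rate are assumptions:

```python
import torch
import torch.nn as nn

class GRUModel(nn.Module):
    def __init__(self, input_size=1, hidden_size=64, output_size=1):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # x: (batch, seq_len=10, input_size=1)
        out, _ = self.gru(x)
        return self.fc(out[:, -1, :])   # predict from the last time step

model = GRUModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
```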

# 3.2. Train the model: save checkpoints during training

1 tensor(0.3308, grad_fn=<MseLossBackward>)
2 tensor(0.1350, grad_fn=<MseLossBackward>)
3 tensor(0.0127, grad_fn=<MseLossBackward>)
4 tensor(0.0110, grad_fn=<MseLossBackward>)
5 tensor(0.0114, grad_fn=<MseLossBackward>)
6 tensor(0.0099, grad_fn=<MseLossBackward>)
7 tensor(0.0222, grad_fn=<MseLossBackward>)
8 tensor(0.0130, grad_fn=<MseLossBackward>)
9 tensor(0.0150, grad_fn=<MseLossBackward>)
10 tensor(0.0133, grad_fn=<MseLossBackward>)
11 tensor(0.0057, grad_fn=<MseLossBackward>)
12 tensor(0.0163, grad_fn=<MseLossBackward>)
13 tensor(0.0216, grad_fn=<MseLossBackward>)
14 tensor(0.0193, grad_fn=<MseLossBackward>)
15 tensor(0.0333, grad_fn=<MseLossBackward>)
16 tensor(0.0146, grad_fn=<MseLossBackward>)
17 tensor(0.0118, grad_fn=<MseLossBackward>)
18 tensor(0.0052, grad_fn=<MseLossBackward>)
19 tensor(0.0046, grad_fn=<MseLossBackward>)
20 tensor(0.0033, grad_fn=<MseLossBackward>)
21 tensor(0.0078, grad_fn=<MseLossBackward>)
22 tensor(0.0088, grad_fn=<MseLossBackward>)
23 tensor(0.0049, grad_fn=<MseLossBackward>)
24 tensor(0.0085, grad_fn=<MseLossBackward>)
25 tensor(0.0044, grad_fn=<MseLossBackward>)
26 tensor(0.0034, grad_fn=<MseLossBackward>)
27 tensor(0.0050, grad_fn=<MseLossBackward>)
28 tensor(0.0070, grad_fn=<MseLossBackward>)
29 tensor(0.0072, grad_fn=<MseLossBackward>)
30 tensor(0.0065, grad_fn=<MseLossBackward>)
31 tensor(0.0037, grad_fn=<MseLossBackward>)
32 tensor(0.0054, grad_fn=<MseLossBackward>)
33 tensor(0.0033, grad_fn=<MseLossBackward>)
34 tensor(0.0314, grad_fn=<MseLossBackward>)
35 tensor(0.0035, grad_fn=<MseLossBackward>)
36 tensor(0.0063, grad_fn=<MseLossBackward>)
37 tensor(0.0080, grad_fn=<MseLossBackward>)
38 tensor(0.0028, grad_fn=<MseLossBackward>)
39 tensor(0.0068, grad_fn=<MseLossBackward>)
40 tensor(0.0040, grad_fn=<MseLossBackward>)
41 tensor(0.0021, grad_fn=<MseLossBackward>)
42 tensor(0.0031, grad_fn=<MseLossBackward>)
43 tensor(0.0017, grad_fn=<MseLossBackward>)
44 tensor(0.0040, grad_fn=<MseLossBackward>)
45 tensor(0.0025, grad_fn=<MseLossBackward>)
46 tensor(0.0018, grad_fn=<MseLossBackward>)
47 tensor(0.0041, grad_fn=<MseLossBackward>)
48 tensor(0.0025, grad_fn=<MseLossBackward>)
49 tensor(0.0013, grad_fn=<MseLossBackward>)
50 tensor(0.0034, grad_fn=<MseLossBackward>)
51 tensor(0.0014, grad_fn=<MseLossBackward>)
52 tensor(0.0045, grad_fn=<MseLossBackward>)
53 tensor(0.0051, grad_fn=<MseLossBackward>)
54 tensor(0.0036, grad_fn=<MseLossBackward>)
55 tensor(0.0019, grad_fn=<MseLossBackward>)
56 tensor(0.0046, grad_fn=<MseLossBackward>)
57 tensor(0.0032, grad_fn=<MseLossBackward>)
58 tensor(0.0033, grad_fn=<MseLossBackward>)
59 tensor(0.0033, grad_fn=<MseLossBackward>)
60 tensor(0.0025, grad_fn=<MseLossBackward>)
61 tensor(0.0021, grad_fn=<MseLossBackward>)
62 tensor(0.0021, grad_fn=<MseLossBackward>)
63 tensor(0.0036, grad_fn=<MseLossBackward>)
64 tensor(0.0018, grad_fn=<MseLossBackward>)
65 tensor(0.0075, grad_fn=<MseLossBackward>)
66 tensor(0.0074, grad_fn=<MseLossBackward>)
67 tensor(0.0010, grad_fn=<MseLossBackward>)
68 tensor(0.0018, grad_fn=<MseLossBackward>)
69 tensor(0.0039, grad_fn=<MseLossBackward>)
70 tensor(0.0009, grad_fn=<MseLossBackward>)
71 tensor(0.0035, grad_fn=<MseLossBackward>)
72 tensor(0.0035, grad_fn=<MseLossBackward>)
73 tensor(0.0011, grad_fn=<MseLossBackward>)
74 tensor(0.0047, grad_fn=<MseLossBackward>)
75 tensor(0.0020, grad_fn=<MseLossBackward>)
76 tensor(0.0008, grad_fn=<MseLossBackward>)
77 tensor(0.0019, grad_fn=<MseLossBackward>)
78 tensor(0.0019, grad_fn=<MseLossBackward>)
79 tensor(0.0025, grad_fn=<MseLossBackward>)
80 tensor(0.0013, grad_fn=<MseLossBackward>)
81 tensor(0.0023, grad_fn=<MseLossBackward>)
82 tensor(0.0028, grad_fn=<MseLossBackward>)
83 tensor(0.0020, grad_fn=<MseLossBackward>)
84 tensor(0.0017, grad_fn=<MseLossBackward>)
85 tensor(0.0010, grad_fn=<MseLossBackward>)
86 tensor(0.0011, grad_fn=<MseLossBackward>)
87 tensor(0.0048, grad_fn=<MseLossBackward>)
88 tensor(0.0008, grad_fn=<MseLossBackward>)
89 tensor(0.0008, grad_fn=<MseLossBackward>)
90 tensor(0.0015, grad_fn=<MseLossBackward>)
91 tensor(0.0024, grad_fn=<MseLossBackward>)
92 tensor(0.0036, grad_fn=<MseLossBackward>)
93 tensor(0.0030, grad_fn=<MseLossBackward>)
94 tensor(0.0017, grad_fn=<MseLossBackward>)
95 tensor(0.0005, grad_fn=<MseLossBackward>)
96 tensor(0.0014, grad_fn=<MseLossBackward>)
97 tensor(0.0037, grad_fn=<MseLossBackward>)
98 tensor(0.0048, grad_fn=<MseLossBackward>)
99 tensor(0.0022, grad_fn=<MseLossBackward>)
100 tensor(0.0006, grad_fn=<MseLossBackward>)
101 tensor(0.0005, grad_fn=<MseLossBackward>)
102 tensor(0.0027, grad_fn=<MseLossBackward>)
103 tensor(0.0015, grad_fn=<MseLossBackward>)
104 tensor(0.0014, grad_fn=<MseLossBackward>)
105 tensor(0.0029, grad_fn=<MseLossBackward>)
106 tensor(0.0011, grad_fn=<MseLossBackward>)
107 tensor(0.0082, grad_fn=<MseLossBackward>)
108 tensor(0.0017, grad_fn=<MseLossBackward>)
109 tensor(0.0034, grad_fn=<MseLossBackward>)
110 tensor(0.0010, grad_fn=<MseLossBackward>)
111 tensor(0.0015, grad_fn=<MseLossBackward>)
112 tensor(0.0017, grad_fn=<MseLossBackward>)
113 tensor(0.0016, grad_fn=<MseLossBackward>)
114 tensor(0.0006, grad_fn=<MseLossBackward>)
115 tensor(0.0023, grad_fn=<MseLossBackward>)
116 tensor(0.0006, grad_fn=<MseLossBackward>)
117 tensor(0.0018, grad_fn=<MseLossBackward>)
118 tensor(0.0013, grad_fn=<MseLossBackward>)
119 tensor(0.0016, grad_fn=<MseLossBackward>)
120 tensor(0.0007, grad_fn=<MseLossBackward>)
121 tensor(0.0007, grad_fn=<MseLossBackward>)
122 tensor(0.0043, grad_fn=<MseLossBackward>)
123 tensor(0.0038, grad_fn=<MseLossBackward>)
124 tensor(0.0011, grad_fn=<MseLossBackward>)
125 tensor(0.0025, grad_fn=<MseLossBackward>)
126 tensor(0.0013, grad_fn=<MseLossBackward>)
127 tensor(0.0005, grad_fn=<MseLossBackward>)
128 tensor(0.0013, grad_fn=<MseLossBackward>)
129 tensor(0.0021, grad_fn=<MseLossBackward>)
130 tensor(0.0011, grad_fn=<MseLossBackward>)
131 tensor(0.0034, grad_fn=<MseLossBackward>)
132 tensor(0.0022, grad_fn=<MseLossBackward>)
133 tensor(0.0019, grad_fn=<MseLossBackward>)
134 tensor(0.0020, grad_fn=<MseLossBackward>)
135 tensor(0.0009, grad_fn=<MseLossBackward>)
136 tensor(0.0100, grad_fn=<MseLossBackward>)
137 tensor(0.0009, grad_fn=<MseLossBackward>)
138 tensor(0.0012, grad_fn=<MseLossBackward>)
139 tensor(0.0009, grad_fn=<MseLossBackward>)
140 tensor(0.0003, grad_fn=<MseLossBackward>)
141 tensor(0.0007, grad_fn=<MseLossBackward>)
142 tensor(0.0017, grad_fn=<MseLossBackward>)
143 tensor(0.0027, grad_fn=<MseLossBackward>)
144 tensor(0.0149, grad_fn=<MseLossBackward>)
145 tensor(0.0027, grad_fn=<MseLossBackward>)
146 tensor(0.0024, grad_fn=<MseLossBackward>)
147 tensor(0.0013, grad_fn=<MseLossBackward>)
148 tensor(0.0011, grad_fn=<MseLossBackward>)
149 tensor(0.0006, grad_fn=<MseLossBackward>)
150 tensor(0.0008, grad_fn=<MseLossBackward>)
save success! F:\File_Python\……\20220627_models/RNN_GRU_Model_300_150.pkl
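A sketch of a training loop that would produce a log like the one above and save the result; the epoch count follows the log, while the save directory and the practice of printing the last batch loss per epoch are assumptions:

```python
import os
import torch

for epoch in range(1, 151):
    for batch_x, batch_t in train_loader:
        pred = model(batch_x.unsqueeze(-1))   # (batch, 10) -> (batch, 10, 1)
        loss = criterion(pred, batch_t)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(epoch, loss)                        # e.g. "1 tensor(0.3308, ...)"

save_dir = '20220627_models'                  # hypothetical directory
os.makedirs(save_dir, exist_ok=True)
save_path = os.path.join(save_dir, 'RNN_GRU_Model_300_150.pkl')
torch.save(model, save_path)
print('save success!', save_path)
```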

# 3.3. Normalize the label data separately

y2arr_normal:
[-1.74529705 -1.74116964 -1.73673337 ...  1.26852879  1.29623048
  1.32378233]
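A sketch of this step, applying the same Z-score convention to the full label series on its own (its own mean and std, not the training-matrix statistics):

```python
y2arr = y                                            # full close-price series
y2arr_normal = (y2arr - y2arr.mean()) / y2arr.std()
print('y2arr_normal:\n', y2arr_normal)
```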

# 3.4. Predict with the GRU model: run the trained GRU model on the test set

     cut_train_test: 7700 7200
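A sketch of the prediction step, assuming the test windows are built and normalized exactly as in sections 2.4 and 2.5, and that predictions are mapped back to index points with the label statistics (test_windows_normal is a hypothetical name):

```python
import torch

model.eval()
with torch.no_grad():
    # test_windows_normal: (n_test, 10) normalized windows over the test range
    test_X = torch.tensor(test_windows_normal, dtype=torch.float32).unsqueeze(-1)
    pred_normal = model(test_X).squeeze(-1).numpy()

# Undo the normalization to recover index points
pred = pred_normal * y2arr.std() + y2arr.mean()
```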

# 3.5. Model evaluation

cut_train_test: 7700 7400
RNN_GRU_Model_300 R2 value: 0.8737662561295777
RNN_GRU_Model_300 MAE value: 48.39948391799124
RNN_GRU_Model_300 MSE value: 3773.501360880409
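A sketch of computing these metrics with scikit-learn, where y_true is the actual close series over the evaluated range and pred comes from the previous step:

```python
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

print('RNN_GRU_Model_300 R2 value: ', r2_score(y_true, pred))
print('RNN_GRU_Model_300 MAE value:', mean_absolute_error(y_true, pred))
print('RNN_GRU_Model_300 MSE value:', mean_squared_error(y_true, pred))
```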

# Comparing the actual vs. predicted curves
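A sketch of the plotting code, using the names from the previous steps:

```python
import matplotlib.pyplot as plt

plt.plot(y_true, label='actual')
plt.plot(pred, label='predicted')
plt.title('SSE Composite Index: actual vs. predicted')
plt.legend()
plt.show()
```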

     

• Original post: https://blog.csdn.net/qq_41185868/article/details/125493308