• Implementing linear_regression with PyTorch


Implementing linear regression with PyTorch

    # import necessary packages
    import torch
    import torch.nn as nn
    import numpy as np
    import matplotlib.pyplot as plt
    
    # Set the hyperparameters.
    input_size = 1
    output_size = 1
    num_epochs = 60
    learning_rate = 0.001
    
    # Define a toy dataset.
    x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168], 
                        [9.779], [6.182], [7.59], [2.167], [7.042], 
                        [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)
    
    y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573], 
                        [3.366], [2.596], [2.53], [1.221], [2.827], 
                        [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)
    
    # Confirm the data shape.
    print(x_train.shape, y_train.shape)
    
    (15, 1) (15, 1)
    
    # Linear regression model
    model = nn.Linear(input_size, output_size)
    
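    For one input feature and one output, `nn.Linear(1, 1)` simply holds a scalar weight and bias and computes `y = x @ W.T + b`. A minimal standalone sketch (separate from the training script) verifying this:

    ```python
    import torch
    import torch.nn as nn

    model = nn.Linear(1, 1)  # in_features=1, out_features=1

    # nn.Linear stores weight with shape (out_features, in_features)
    # and bias with shape (out_features,)
    assert model.weight.shape == (1, 1)
    assert model.bias.shape == (1,)

    # The forward pass is an affine map: x @ W.T + b
    x = torch.tensor([[2.0]])
    manual = x @ model.weight.T + model.bias
    assert torch.allclose(model(x), manual)
    ```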
    # Loss and optimizer
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    
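    Under the hood, `nn.MSELoss` averages the squared errors over all elements, and each SGD step updates every parameter as `param ← param − lr · param.grad`. A small standalone sketch illustrating both (the tensors here are made-up toy values, not the training data above):

    ```python
    import torch
    import torch.nn as nn

    model = nn.Linear(1, 1)
    criterion = nn.MSELoss()

    x = torch.tensor([[1.0], [2.0]])
    y = torch.tensor([[2.0], [4.0]])

    # MSELoss is the mean of the elementwise squared errors
    loss = criterion(model(x), y)
    manual_loss = ((model(x) - y) ** 2).mean()
    assert torch.allclose(loss, manual_loss)

    # One SGD step: param <- param - lr * param.grad
    lr = 0.001
    loss.backward()
    with torch.no_grad():
        expected_w = model.weight - lr * model.weight.grad
        torch.optim.SGD(model.parameters(), lr=lr).step()
        assert torch.allclose(model.weight, expected_w)
    ```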
    # Train the model
    for epoch in range(num_epochs):
        # Convert numpy arrays to torch tensors
        inputs = torch.from_numpy(x_train)
        targets = torch.from_numpy(y_train)
    
        # Forward pass
        outputs = model(inputs)
        loss = criterion(outputs, targets)
    
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    
        # Set an output counter
        if (epoch+1) % 5 == 0:
            print('Epoch [{}/{}], loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))
    
    # Plot the graph
    predicted = model(torch.from_numpy(x_train)).detach().numpy()
    plt.plot(x_train, y_train, 'ro', label='Original data')
    plt.plot(x_train, predicted, label='Fitted line')
    plt.legend()
    plt.show()
    
    Epoch [5/60], loss: 7.1598
    Epoch [10/60], loss: 3.0717
    Epoch [15/60], loss: 1.4154
    Epoch [20/60], loss: 0.7443
    Epoch [25/60], loss: 0.4722
    Epoch [30/60], loss: 0.3618
    Epoch [35/60], loss: 0.3169
    Epoch [40/60], loss: 0.2985
    Epoch [45/60], loss: 0.2909
    Epoch [50/60], loss: 0.2876
    Epoch [55/60], loss: 0.2861
    Epoch [60/60], loss: 0.2853
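    As a sanity check on the converged loss above: one-variable least squares has a closed-form optimum, so the slope and intercept that SGD is approaching can be computed directly, e.g. with `np.linalg.lstsq`:

    ```python
    import numpy as np

    x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
                        [9.779], [6.182], [7.59], [2.167], [7.042],
                        [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)
    y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
                        [3.366], [2.596], [2.53], [1.221], [2.827],
                        [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)

    # Append a column of ones so lstsq fits slope and intercept together
    A = np.hstack([x_train, np.ones_like(x_train)])
    sol, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    w, b = sol.ravel()

    # The optimal MSE lower-bounds what SGD can reach (here below the
    # final training loss of ~0.2853)
    mse = float(np.mean((A @ sol - y_train) ** 2))
    print(f"closed-form slope {w:.4f}, intercept {b:.4f}, mse {mse:.4f}")
    ```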
    

    (Figure: the original data points and the fitted line produced by the plotting code above)

    # Save the model checkpoint
    torch.save(model.state_dict(), 'model_param.ckpt')
    torch.save(model, 'model.ckpt')
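    The first call saves only the parameters (a state_dict), so restoring requires rebuilding the architecture first; the second pickles the whole model object, which is less portable. A sketch of the state_dict round trip, reusing the 'model_param.ckpt' filename from above:

    ```python
    import torch
    import torch.nn as nn

    model = nn.Linear(1, 1)
    torch.save(model.state_dict(), 'model_param.ckpt')

    # Restoring a state_dict: re-create the architecture, then load weights
    restored = nn.Linear(1, 1)
    restored.load_state_dict(torch.load('model_param.ckpt'))
    assert torch.equal(restored.weight, model.weight)
    assert torch.equal(restored.bias, model.bias)
    ```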
    
  • Original post: https://blog.csdn.net/AIHUBEI/article/details/134543175