• PyTorch autograd and a simple linear-function machine learning demo


    Autograd

    import torch

    # A random input and two learnable parameters that track gradients
    x = torch.rand(1)
    w = torch.rand(1, requires_grad=True)
    b = torch.rand(1, requires_grad=True)

    # Build the computation graph: z = w*x + b
    y = w * x
    z = y + b

    # Backpropagate; gradients accumulate across calls unless cleared
    z.backward(retain_graph=True)

    w.grad  # dz/dw = x
    b.grad  # dz/db = 1
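
    Because gradients accumulate, calling backward again without clearing them doubles the stored values. A minimal sketch (continuing from the variables above) that demonstrates this and how to reset the gradients:

    # A second backward pass adds to the existing .grad values
    z.backward(retain_graph=True)
    print(w.grad)  # now 2 * x

    # Zero the accumulated gradients before the next pass
    w.grad.zero_()
    b.grad.zero_()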
    

    Linear function demo

    Prepare the training data and reshape it into matrix form

    import numpy as np
    import torch
    import torch.nn as nn

    # Inputs 0..10 as a column vector of shape (11, 1)
    x_values = [i for i in range(11)]
    x_train = np.array(x_values, dtype=np.float32)
    x_train = x_train.reshape(-1, 1)
    x_train.shape

    # Targets follow y = 2x + 1, also shaped (11, 1)
    y_values = [2*i + 1 for i in x_values]
    y_train = np.array(y_values, dtype=np.float32)
    y_train = y_train.reshape(-1, 1)
    y_train.shape
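
    The reshape(-1, 1) call turns each 1-D array into an (11, 1) column vector, the (samples, features) layout that nn.Linear expects. A quick sanity check (hypothetical snippet, reusing the arrays above):

    print(x_train.shape, y_train.shape)   # (11, 1) (11, 1)
    print(x_train[:3].ravel())            # [0. 1. 2.]
    print(y_train[:3].ravel())            # [1. 3. 5.]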
    

    Build the training model

    The nn module provides the tools for building and training neural networks.

    class LinearRegressionModel(nn.Module):
        def __init__(self, input_dim, output_dim):
            super(LinearRegressionModel, self).__init__()
            # A single fully connected layer: y = w*x + b
            self.linear = nn.Linear(input_dim, output_dim)

        def forward(self, x):
            out = self.linear(x)
            return out

    input_dim = 1
    output_dim = 1
    model = LinearRegressionModel(input_dim, output_dim)

    epoches = 1000
    learning_rate = 0.01
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    criterion = nn.MSELoss()  # mean squared error loss

    for epoch in range(epoches):
        epoch += 1
        # Convert the numpy arrays to tensors
        inputs = torch.from_numpy(x_train)
        labels = torch.from_numpy(y_train)

        # Clear gradients left over from the previous iteration
        optimizer.zero_grad()

        # Forward pass
        outputs = model(inputs)

        # Compute the loss
        loss = criterion(outputs, labels)

        # Backward pass and parameter update
        loss.backward()
        optimizer.step()

        if epoch % 50 == 0:
            print('epoch {}, loss {}'.format(epoch, loss.item()))
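
    After training, the learned weight and bias should be close to 2 and 1. A minimal sketch (hypothetical snippet, reusing model and x_train from above) for inspecting the fit:

    # Predictions on the training inputs, detached from the graph
    predicted = model(torch.from_numpy(x_train)).detach().numpy()
    print(predicted[:3].ravel())   # roughly [1. 3. 5.]

    # The learned parameters
    for name, param in model.named_parameters():
        print(name, param.data)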
    

    Save and load the model

    # Save only the learned parameters (the state_dict), not the whole module
    torch.save(model.state_dict(), 'model.pkl')
    model.load_state_dict(torch.load('model.pkl'))
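
    load_state_dict only fills in parameter values, so a model with the same architecture has to be constructed first. A minimal sketch (assuming the LinearRegressionModel class and dimensions defined above; 'model.pkl' is just the filename used in this post) of restoring the weights into a fresh instance:

    # Rebuild the architecture, then load the saved parameters into it
    new_model = LinearRegressionModel(input_dim, output_dim)
    new_model.load_state_dict(torch.load('model.pkl'))
    new_model.eval()  # switch to inference mode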
    

    GPU training demo

    import torch
    import torch.nn as nn
    import numpy as np
    x_values = [i for i in range(11)]
    x_train = np.array(x_values, dtype=np.float32)
    x_train = x_train.reshape(-1,1)
    
    y_values = [2*i + 1 for i in x_values]
    y_train = np.array(y_values, dtype=np.float32)
    y_train = y_train.reshape(-1,1)
    
    class LinearRegressionModel(nn.Module):
        def __init__(self, input_dim, output_dim):
            super(LinearRegressionModel, self).__init__()
            self.linear = nn.Linear(input_dim, output_dim)
            
        def forward(self, x):
            out = self.linear(x)
            return out
    input_dim = 1
    output_dim = 1
    model = LinearRegressionModel(input_dim, output_dim)
    # Use the first GPU if available, otherwise fall back to the CPU
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model.to(device)
    epoches = 1000
    learning_rate = 0.01
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    criterion = nn.MSELoss()
    
    for epoch in range(epoches):
        epoch += 1
        # Move the input tensors onto the same device as the model
        inputs = torch.from_numpy(x_train).to(device)
        labels = torch.from_numpy(y_train).to(device)
        
        optimizer.zero_grad()
        
        outputs = model(inputs)
        
        loss = criterion(outputs, labels)
        
        loss.backward()
        
        optimizer.step()
        if epoch % 50 == 0:
            print('epoch {}, loss {}'.format(epoch, loss.item()))
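
    Tensors produced on the GPU have to be moved back to the CPU before converting them to numpy. A minimal sketch (hypothetical snippet, reusing model, device, and x_train from above):

    # Run inference on the device, then bring the result back to the CPU
    inputs = torch.from_numpy(x_train).to(device)
    predicted = model(inputs).detach().cpu().numpy()
    print(predicted[:3].ravel())   # roughly [1. 3. 5.]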
    
  • Original article: https://blog.csdn.net/qq_40851534/article/details/126166952