• MNIST handwritten-digit recognition with PyQt5 + PyTorch/TensorFlow


    GitHub - LINHYYY/Real-time-handwritten-digit-recognition: VGG16和PyQt5的实时手写数字识别/Real-time handwritten digit recognition for VGG16 and PyQt5
    The PyQt5 + PyTorch code is open source at the link above. Please respect the open-source license and help keep the open-source ecosystem healthy; if you find the content useful, please give the repository a star.

    Dataset

    We use the MNIST dataset, downloading data from

    https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz

    MNIST is a computer-vision dataset containing images of the handwritten digits 0, 1, 2, ..., 9.

    MNIST consists of a training set of 60,000 examples (mnist.train) and a test set of 10,000 examples (mnist.test). Both contain handwritten digit images together with their labels: the training images are mnist.train.images with labels mnist.train.labels, and the test images are mnist.test.images with labels mnist.test.labels. Each image is 28*28 pixels (784 pixels in total), so every image can be represented as a dot matrix of pixel values.
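    As a quick illustration of that dot-matrix view (a sketch of my own, assuming TensorFlow and matplotlib are installed), you can load the dataset and render one sample:

    import tensorflow as tf
    import matplotlib.pyplot as plt

    (X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
    print(X_train.shape, X_test.shape)   # (60000, 28, 28) (10000, 28, 28)
    plt.imshow(X_train[0], cmap='gray')  # one 28x28 digit as a pixel (dot) matrix
    plt.title(f"label: {y_train[0]}")
    plt.show()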

     (1) Building a handwritten-digit (0, 1, 2, ..., 9) recognizer with the TensorFlow machine-learning library

    1.1 Installing TensorFlow:

    Any one of the following commands works; pick the one that matches your environment (the .whl files are offline CPU builds for Python 3.6 / 3.8 on 64-bit Windows):

    pip install tensorflow
    pip install --user --upgrade tensorflow  # install in $HOME
    pip install tensorflow_cpu-2.6.0-cp36-cp36-win_amd64.whl
    pip install tensorflow==2.2.0
    pip install tensorflow_cpu-2.6.0-cp38-cp38-win_amd64.whl

    List the installed packages: pip list

    Verify the installation:

    import tensorflow as tf
    print(tf.reduce_sum(tf.random.normal([1000, 1000])))

    1.2 Installing PyTorch

    Install with pip or conda (CUDA 11.6 builds):

    pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
    conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

    Then verify in Python:

    import torch
    print(torch.__version__)          # check the torch version
    print(torch.cuda.device_count())  # number of GPUs
    print(torch.version.cuda)         # check the CUDA version
    print(torch.cuda.is_available())  # check whether CUDA is available
    if torch.cuda.is_available():
        device = torch.device("cuda:0")
    else:
        device = torch.device("cpu")
    print(device)

    2. Download the dataset and normalize it

    import tensorflow as tf
    tf.random.set_seed(100)  # fix the random seed so later code produces reproducible random numbers
    # Note: this setting does not affect the GPU, which has its own independent random-number generator.
    mnist = tf.keras.datasets.mnist  # download the dataset
    (X_train, y_train), (X_test, y_test) = mnist.load_data()  # split into training and test sets
    X_train, X_test = X_train/255.0, X_test/255.0  # normalize the image data to the range [0, 1]

    3. Build the model quickly with Sequential and let fit() handle training

    # Create the neural network
    model = tf.keras.models.Sequential([
        # Flatten to 1-D: (60000, 28, 28) -> (60000, 784)
        tf.keras.layers.Flatten(input_shape=(28, 28)),   # flatten each image into a 1-D vector
        tf.keras.layers.Dense(128, activation='relu'),   # hidden layer with 128 units
        tf.keras.layers.Dropout(0.2),                    # drop 20% of the units
        tf.keras.layers.Dense(10, activation='softmax')  # 10 output values
    ])
    model.summary()  # print the model structure and parameter counts
    # Compile the model with the relevant settings
    model.compile(optimizer='adam',  # optimizer (Adam)
                  loss='sparse_categorical_crossentropy',  # cross-entropy
                  # sparse_categorical_crossentropy is the softmax loss; the output has already
                  # been turned into probabilities by softmax (not logits), so from_logits need not be True
                  metrics=['accuracy'])  # evaluation metric
    print("Training...")
    model.fit(X_train, y_train, epochs=10, batch_size=64)  # batch_size defaults to 32
    print("Training finished")
    print("Evaluating...")
    result = model.evaluate(X_test, y_test)
    print("Evaluation finished, loss: {}, accuracy: {}".format(result[0], result[1]))

    Execution results:

    4. Check the shapes of X_train and X_test
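    The original post omits this snippet, but the check is just two print calls on the arrays loaded in step 2 (both are still rank-3 here, before the channel axis is added in part (2)):

    print(X_train.shape)  # (60000, 28, 28)
    print(X_test.shape)   # (10000, 28, 28)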

    (2) Composing a model with keras.layers and controlling the training loop manually

    • Prepare the data

    import tensorflow as tf
    from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout
    from tensorflow.keras import Model

    tf.random.set_seed(100)
    mnist = tf.keras.datasets.mnist
    (X_train, y_train), (X_test, y_test) = mnist.load_data()  # load the dataset
    X_train, X_test = X_train/255.0, X_test/255.0  # normalize
    # Reshape the features from (N, 28, 28) to (N, 28, 28, 1), because Conv2D expects a rank-4 (NHWC) tensor
    X_train = X_train[..., tf.newaxis]
    X_test = X_test[..., tf.newaxis]
    print(X_train.shape)
    batch_size = 64  # batch size for the training and test sets
    # Build the mini-batch datasets by hand:
    # shuffle() shuffles the data and batch() splits it into batches
    train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(10000).batch(batch_size)
    test_ds = tf.data.Dataset.from_tensor_slices((X_test, y_test)).batch(batch_size)

    • Build the model as a Python class and track train/test loss and accuracy

    # Define the model structure
    import tensorflow as tf
    from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout
    from tensorflow.keras import Model

    class Basic_CNN_Model(Model):
        def __init__(self):
            super(Basic_CNN_Model, self).__init__()
            # Convolutional layer
            self.conv1 = Conv2D(32, 3, activation='relu')  # 32 filters with 3x3 kernels (1x3x3)
            self.flatten = Flatten()
            self.d1 = Dense(128, activation='relu')  # hidden layer with 128 units
            self.d2 = Dense(10, activation='softmax')

        def call(self, X):
            X = self.conv1(X)
            X = self.flatten(X)
            X = self.d1(X)
            return self.d2(X)

    model = Basic_CNN_Model()
    loss_object = tf.keras.losses.SparseCategoricalCrossentropy()  # softmax output, so from_logits=True is not needed
    optimizer = tf.keras.optimizers.Adam()
    # A tf.keras.metrics.Mean() object keeps accumulating incoming values and updating its mean,
    # until reset_states() is called to clear the accumulated data
    train_loss = tf.keras.metrics.Mean(name='train_loss')  # running average loss
    train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')  # running average accuracy
    test_loss = tf.keras.metrics.Mean(name='test_loss')
    test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

    • Define the per-batch training and evaluation steps

    @tf.function  # @tf.function compiles the Python function into a TensorFlow graph
    def train_step(images, labels):  # run one training step on batch_size samples
        with tf.GradientTape() as tape:
            predictions = model(images)
            loss = loss_object(labels, predictions)  # compute the loss for this batch
        gradients = tape.gradient(loss, model.trainable_variables)  # backpropagate to get gradients of all weights
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))  # let the optimizer apply the gradients
        train_loss(loss)  # fold the new loss into the running average
        train_accuracy(labels, predictions)

    @tf.function  # decorator
    def test_step(images, labels):  # run one evaluation step on batch_size samples (no gradient update)
        predictions = model(images)
        t_loss = loss_object(labels, predictions)  # compute the loss
        test_loss(t_loss)
        test_accuracy(labels, predictions)
    • Run the full training loop

    EPOCHS = 10  # number of training epochs
    # Lists that record each epoch's metrics for the curves plotted below
    tr_loss_data, ts_loss_data, tr_acc_data, ts_acc_data = [], [], [], []
    for epoch in range(EPOCHS):
        for images, labels in train_ds:  # train
            train_step(images, labels)
        for images, labels in test_ds:  # evaluate
            test_step(images, labels)
        # Print this epoch's average loss and accuracy over all batches
        print("Epoch {:03d}, Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, float(train_loss.result()), float(train_accuracy.result())))
        print("Test  {:03d}, Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, float(test_loss.result()), float(test_accuracy.result())))
        tr_loss_data.append(float(train_loss.result())); tr_acc_data.append(float(train_accuracy.result()))
        ts_loss_data.append(float(test_loss.result())); ts_acc_data.append(float(test_accuracy.result()))
        # Clear the running averages so each epoch reports its own statistics
        for metric in (train_loss, train_accuracy, test_loss, test_accuracy):
            metric.reset_states()

    Execution results:

    # Plot the training/test curves
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    # loss curves
    plt.plot(tr_loss_data, range(0, EPOCHS), label='train_loss')
    plt.plot(ts_loss_data, range(0, EPOCHS), label='test_loss')
    plt.title('loss curve')
    plt.legend()              # show the labels above
    plt.xlabel('loss value')  # x label
    plt.ylabel('epoch')       # y label
    plt.show()
    # accuracy curves
    plt.plot(tr_acc_data, range(0, EPOCHS), label='train_accuracy')
    plt.plot(ts_acc_data, range(0, EPOCHS), label='test_accuracy')
    plt.title('accuracy curve')
    plt.legend()                  # show the labels above
    plt.xlabel('accuracy value')  # x label
    plt.ylabel('epoch')           # y label
    plt.show()

    (3) A custom convolutional neural network

    # Import the required libraries and functions
    import tensorflow as tf
    from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPool2D, Dropout
    from tensorflow.keras import Model

    tf.random.set_seed(100)  # fix the random seed
    mnist = tf.keras.datasets.mnist
    (X_train, y_train), (X_test, y_test) = mnist.load_data()  # split into training and test sets
    X_train, X_test = X_train/255.0, X_test/255.0  # normalize
    # Reshape the features from (N, 28, 28) to (N, 28, 28, 1), because Conv2D expects a rank-4 (NHWC) tensor
    X_train = X_train[..., tf.newaxis]
    X_test = X_test[..., tf.newaxis]
    batch_size = 64  # each step uses 64 samples
    # Build the mini-batch datasets by hand
    train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(10000).batch(batch_size)
    test_ds = tf.data.Dataset.from_tensor_slices((X_test, y_test)).batch(batch_size)

    class Deep_CNN_Model(Model):
        # Two convolutional layers, two pooling layers, one fully connected layer and a softmax layer
        def __init__(self):
            super(Deep_CNN_Model, self).__init__()
            self.conv1 = Conv2D(32, 5, activation='relu')
            self.pool1 = MaxPool2D()
            self.conv2 = Conv2D(64, 5, activation='relu')
            self.pool2 = MaxPool2D()
            self.flatten = Flatten()
            self.d1 = Dense(128, activation='relu')
            self.dropout = Dropout(0.2)
            self.d2 = Dense(10, activation='softmax')

        def call(self, X):
            X = self.conv1(X)
            X = self.pool1(X)
            X = self.conv2(X)
            X = self.pool2(X)
            X = self.flatten(X)
            X = self.d1(X)
            X = self.dropout(X)  # no need to set the training flag here; just pass training= when calling the model
            return self.d2(X)

    # Define the network, the loss function and the optimizer
    model = Deep_CNN_Model()
    loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
    optimizer = tf.keras.optimizers.Adam()
    train_loss = tf.keras.metrics.Mean(name='train_loss')
    train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
    test_loss = tf.keras.metrics.Mean(name='test_loss')
    test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

    # Define the per-batch training and evaluation steps
    @tf.function  # the decorator compiles the training/evaluation step into a TensorFlow graph
    def train_step(images, labels):
        with tf.GradientTape() as tape:  # record the forward pass in training mode
            predictions = model(images, training=True)
            loss = loss_object(labels, predictions)
        gradients = tape.gradient(loss, model.trainable_variables)  # gradients of the loss w.r.t. the model parameters
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
        train_loss(loss)
        train_accuracy(labels, predictions)

    @tf.function
    def test_step(images, labels):  # forward pass in inference mode; compute the test loss and accuracy
        predictions = model(images, training=False)
        loss = loss_object(labels, predictions)
        test_loss(loss)
        test_accuracy(labels, predictions)

    # Run the full training loop
    EPOCHS = 10  # number of training epochs
    for epoch in range(EPOCHS):
        # Train on all batches of this epoch
        for images, labels in train_ds:
            train_step(images, labels)
        # Evaluate the model on the test set
        for test_images, test_labels in test_ds:
            test_step(test_images, test_labels)
        # Print this epoch's average loss and accuracy over all batches
        train_loss_value, train_accuracy_value = train_loss.result(), train_accuracy.result()
        test_loss_value, test_accuracy_value = test_loss.result(), test_accuracy.result()
        print(f"Epoch {epoch+1}, Train Loss: {train_loss_value}, Train Accuracy: {train_accuracy_value}, Test Loss: {test_loss_value}, Test Accuracy: {test_accuracy_value}")
        # Reset the loss and accuracy metrics
        train_loss.reset_states()
        train_accuracy.reset_states()
        test_loss.reset_states()
        test_accuracy.reset_states()

    Training output:

    # Plot line charts showing the trends
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    # A_* / B_* are assumed to hold the per-epoch train/test metrics recorded
    # during the training loop above (cf. the recording lists used in part (2))
    # loss curves
    plt.plot(A_loss_data, range(0, EPOCHS), label='train_loss')
    plt.plot(B_loss_data, range(0, EPOCHS), label='test_loss')
    plt.title('loss curve')
    plt.legend()              # show the labels above
    plt.xlabel('loss value')  # x label
    plt.ylabel('epoch')       # y label
    plt.show()
    # accuracy curves
    plt.plot(A_acc_data, range(0, EPOCHS), label='train_accuracy')
    plt.plot(B_acc_data, range(0, EPOCHS), label='test_accuracy')
    plt.title('accuracy curve')
    plt.legend()                  # show the labels above
    plt.xlabel('accuracy value')  # x label
    plt.ylabel('epoch')           # y label
    plt.show()

    (4) Handwritten digit recognition with a custom VGG16 in PyTorch

    VGG16 is a widely used convolutional neural network that performs very well on the ImageNet image-classification task. It was proposed by Karen Simonyan and Andrew Zisserman of the Visual Geometry Group at the University of Oxford. VGG16 stacks a large number of 3x3 convolutions and max-pooling layers, which lets the model extract richer image features.

    VGG16 contains 13 convolutional layers and 3 fully connected layers: the convolutional layers extract image features, and the fully connected layers perform the classification.
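    Note that the custom network below is a scaled-down VGG-style model (four blocks of two 3x3 convolutions each, i.e. 8 convolutional layers) rather than the full 13-conv-layer VGG16. For comparison, the canonical architecture ships with torchvision and can be inspected directly:

    from torchvision import models
    vgg = models.vgg16()  # untrained canonical VGG16: 13 Conv2d layers + 3 Linear layers
    print(vgg)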

    # VGG.py
    import torch.nn as nn
    import torch.nn.functional as F

    class VGGBlock(nn.Module):
        def __init__(self, in_channels, out_channels, batch_norm=False):  # in/out channel counts; whether to use batch normalization
            super().__init__()
            conv2_params = {'kernel_size': (3, 3),
                            'stride': (1, 1),
                            'padding': 1}
            noop = lambda x: x
            self._batch_norm = batch_norm
            # Convolutional layers
            self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, **conv2_params)
            self.bn1 = nn.BatchNorm2d(out_channels) if batch_norm else noop
            self.conv2 = nn.Conv2d(in_channels=out_channels, out_channels=out_channels, **conv2_params)
            self.bn2 = nn.BatchNorm2d(out_channels) if batch_norm else noop
            # Max-pooling layer
            self.max_pooling = nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))

        @property
        def batch_norm(self):
            return self._batch_norm

        def forward(self, x):
            # Pass through conv1 and conv2 with ReLU activations, then shrink the feature map with max pooling
            x = self.conv1(x)
            x = self.bn1(x)
            x = F.relu(x)
            x = self.conv2(x)
            x = self.bn2(x)
            x = F.relu(x)
            x = self.max_pooling(x)
            return x

    # The VGG16 class defines a VGG-style network of four convolutional blocks followed by fully connected layers; it inherits from nn.Module.
    class VGG16(nn.Module):
        def __init__(self, input_size, num_classes=10, batch_norm=False):  # num_classes is the number of classes
            super(VGG16, self).__init__()
            self.in_channels, self.in_width, self.in_height = input_size
            # The four convolutional blocks of the network
            self.block_1 = VGGBlock(self.in_channels, 64, batch_norm=batch_norm)
            self.block_2 = VGGBlock(64, 128, batch_norm=batch_norm)
            self.block_3 = VGGBlock(128, 256, batch_norm=batch_norm)
            self.block_4 = VGGBlock(256, 512, batch_norm=batch_norm)
            # Fully connected layers (for a 32x32 input, four 2x halvings leave 2x2x512 = 2048 features)
            self.classifier = nn.Sequential(
                nn.Linear(2048, 4096),
                nn.ReLU(True),
                nn.Dropout(p=0.65),
                nn.Linear(4096, 4096),
                nn.ReLU(True),
                nn.Dropout(p=0.65),
                nn.Linear(4096, num_classes)
            )

        @property
        def input_size(self):
            return self.in_channels, self.in_width, self.in_height

        def forward(self, x):  # run x through the VGGBlocks, flatten the features, then classify with the fully connected layers
            x = self.block_1(x)
            x = self.block_2(x)
            x = self.block_3(x)
            x = self.block_4(x)
            x = x.view(x.size(0), -1)
            x = self.classifier(x)
            return x

    1. Import the required libraries

    import torch
    import torchvision
    import torchvision.transforms as transforms
    import torch.optim as optim
    import torch.nn.functional as F
    import torch.nn as nn
    from torchvision import models
    import matplotlib.pyplot as plt
    import numpy as np  # linear algebra
    import pandas as pd
    import time
    from VGG import VGG16, VGGBlock

    2. Train the model

    The whole training process breaks down into the following steps (a complete assembled train() is sketched after step 6):

    1. Define the training function; it receives the loaders, optimizer and loss criterion, while the model and device are module-level globals.

    def train(loaders, optimizer, criterion, epochs=10, save_param=True, dataset="mnist"):
        global device
        global model

    2. Move the model to the chosen device and set up bookkeeping (loss/accuracy history, best test accuracy, start time).

    model = model.to(device)
    history_loss = {"train": [], "test": []}
    history_accuracy = {"train": [], "test": []}
    best_test_accuracy = 0
    start_time = time.time()

    3. Use a try-except block to catch a possible keyboard interrupt.

    except KeyboardInterrupt:  # the user interrupted from the keyboard
        print("Interrupted")

    4. Loop over the epochs, training and evaluating in turn.

    for epoch in range(epochs):
        sum_loss = {"train": 0, "test": 0}
        sum_accuracy = {"train": 0, "test": 0}
        for split in ["train", "test"]:
            if split == "train":
                model.train()
            else:
                model.eval()

    5. Average the accumulated sums to get each epoch's loss and accuracy.

    # per-epoch average loss/accuracy
    epoch_loss = {split: sum_loss[split] / len(loaders[split]) for split in ["train", "test"]}
    epoch_accuracy = {split: sum_accuracy[split] / len(loaders[split]) for split in ["train", "test"]}

    6. Inside each split, iterate over the batches, accumulating loss and accuracy (this loop runs between steps 4 and 5).

    for (inputs, labels) in loaders[split]:
        inputs = inputs.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()
        prediction = model(inputs)
        labels = labels.long()
        loss = criterion(prediction, labels)
        sum_loss[split] += loss.item()  # accumulate the loss
        if split == "train":
            loss.backward()   # compute gradients
            optimizer.step()  # update the parameters

        _, pred_label = torch.max(prediction, dim=1)
        pred_labels = (pred_label == labels).float()
        batch_accuracy = pred_labels.sum().item() / inputs.size(0)
        sum_accuracy[split] += batch_accuracy  # accumulate the accuracy
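    The fragments above all come from a single function; the following is one way to assemble them into a complete, runnable train() (my reconstruction, since the full listing isn't shown; the checkpoint filename is a guess, and device comes from section 1.2):

    def train(loaders, optimizer, criterion, epochs=10, save_param=True, dataset="mnist"):
        global device
        global model
        model = model.to(device)
        history_loss = {"train": [], "test": []}
        history_accuracy = {"train": [], "test": []}
        best_test_accuracy = 0
        start_time = time.time()
        try:
            for epoch in range(epochs):
                sum_loss = {"train": 0, "test": 0}
                sum_accuracy = {"train": 0, "test": 0}
                for split in ["train", "test"]:
                    if split == "train":
                        model.train()
                    else:
                        model.eval()
                    for (inputs, labels) in loaders[split]:
                        inputs = inputs.to(device)
                        labels = labels.to(device)
                        optimizer.zero_grad()
                        prediction = model(inputs)
                        labels = labels.long()
                        loss = criterion(prediction, labels)
                        sum_loss[split] += loss.item()
                        if split == "train":
                            loss.backward()
                            optimizer.step()
                        _, pred_label = torch.max(prediction, dim=1)
                        sum_accuracy[split] += (pred_label == labels).float().sum().item() / inputs.size(0)
                epoch_loss = {split: sum_loss[split] / len(loaders[split]) for split in ["train", "test"]}
                epoch_accuracy = {split: sum_accuracy[split] / len(loaders[split]) for split in ["train", "test"]}
                for split in ["train", "test"]:
                    history_loss[split].append(epoch_loss[split])
                    history_accuracy[split].append(epoch_accuracy[split])
                if epoch_accuracy["test"] > best_test_accuracy:
                    best_test_accuracy = epoch_accuracy["test"]
                    if save_param:
                        torch.save(model.state_dict(), f"{dataset}_best.pth")  # hypothetical checkpoint filename
                print(f"Epoch {epoch+1}/{epochs}: "
                      f"train loss {epoch_loss['train']:.4f}, acc {epoch_accuracy['train']:.4f} | "
                      f"test loss {epoch_loss['test']:.4f}, acc {epoch_accuracy['test']:.4f} | "
                      f"{time.time() - start_time:.1f}s elapsed")
        except KeyboardInterrupt:  # the user interrupted from the keyboard
            print("Interrupted")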

    Training process screenshot:

    3. Main program

    # main
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # device selection (see 1.2); train() reads this global
    model = VGG16((1, 32, 32), batch_norm=True)
    # Stochastic gradient descent (SGD)
    optimizer = optim.SGD(model.parameters(), lr=0.001)
    criterion = nn.CrossEntropyLoss()  # cross-entropy loss function
    transform = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor(),
    ])
    # Load the dataset
    train_set = torchvision.datasets.MNIST(root='', train=True, download=True, transform=transform)
    test_set = torchvision.datasets.MNIST(root='', train=False, download=True, transform=transform)
    # Inspect the dataset
    print(f"Number of training samples: {len(train_set)}")
    print(f"Number of test samples: {len(test_set)}")
    # Extract the image data and labels
    x_train, y_train = train_set.data, train_set.targets
    print(x_train, y_train)
    # If the training images are rank-3, add a channel axis to get the B*C*H*W format
    if len(x_train.shape) == 3:
        x_train = x_train.unsqueeze(1)
    print(x_train.shape)
    # Build a grid of 40 images, 8 per row
    x_grid = torchvision.utils.make_grid(x_train[:40], nrow=8, padding=2)
    print(x_grid.shape)
    # Convert the tensor to a numpy array
    npimg = x_grid.numpy()
    # Transpose to the H*W*C shape
    npimg_tr = np.transpose(npimg, (1, 2, 0))
    plt.imshow(npimg_tr, interpolation='nearest')
    image, label = train_set[200]
    plt.imshow(image.squeeze(), cmap='gray')
    print('Label:', label)
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
    test_loader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=False)
    loaders = {"train": train_loader,
               "test": test_loader}
    train(loaders, optimizer, criterion, epochs=15)

    The code above defines the optimizer (SGD), uses the cross-entropy loss function, and sets up an image-processing pipeline transform that resizes each image to 32x32 and converts it to a tensor.

    It then loads the MNIST dataset, prints the dataset information, and extracts the image data and labels; if the training images are rank-3, a channel axis is added to give the B*C*H*W format. For visualization, the tensor is converted to a NumPy array and transposed into the H*W*C shape.

    4. Building a simple interactive UI with PyQt5

    PyQt5 is used to build a simple interactive interface that calls the image-prediction function, processing the user's drawing in real time and feeding the result back into the UI.

    A drawing-board class is declared that implements clearing the board, invoking the prediction function, and quitting.

    The user's handwritten digit is saved as an image and passed to the prediction function, which returns the most likely digit label. A minimal sketch of such a board follows.
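    The original widget code is in the GitHub repository linked above; the sketch below only illustrates the idea (class and method names are my own, and predict_digit is a hypothetical helper wrapping the trained VGG16 inference):

    import sys
    from PyQt5.QtWidgets import QApplication, QWidget, QPushButton, QLabel, QVBoxLayout, QHBoxLayout
    from PyQt5.QtGui import QPainter, QPen, QImage
    from PyQt5.QtCore import Qt, QPoint

    class Board(QWidget):
        """A simple drawing board: draw with the mouse, then Clear / Predict / Quit."""
        def __init__(self):
            super().__init__()
            self.canvas = QImage(280, 280, QImage.Format_RGB32)
            self.canvas.fill(Qt.black)  # white strokes on black, like MNIST
            self.last_point = QPoint()
            self.result_label = QLabel("Draw a digit, then click Predict")
            clear_btn, pred_btn, quit_btn = QPushButton("Clear"), QPushButton("Predict"), QPushButton("Quit")
            clear_btn.clicked.connect(self.clear_board)
            pred_btn.clicked.connect(self.run_predict)
            quit_btn.clicked.connect(self.close)
            buttons = QHBoxLayout()
            for b in (clear_btn, pred_btn, quit_btn):
                buttons.addWidget(b)
            layout = QVBoxLayout(self)
            layout.setContentsMargins(10, 300, 10, 10)  # leave room for the canvas painted at the top
            layout.addWidget(self.result_label)
            layout.addLayout(buttons)
            self.setFixedSize(300, 380)

        def paintEvent(self, event):
            QPainter(self).drawImage(10, 10, self.canvas)

        def mousePressEvent(self, event):
            self.last_point = event.pos() - QPoint(10, 10)  # map into canvas coordinates

        def mouseMoveEvent(self, event):
            painter = QPainter(self.canvas)
            painter.setPen(QPen(Qt.white, 18, Qt.SolidLine, Qt.RoundCap))
            point = event.pos() - QPoint(10, 10)
            painter.drawLine(self.last_point, point)
            self.last_point = point
            self.update()

        def clear_board(self):
            self.canvas.fill(Qt.black)
            self.update()

        def run_predict(self):
            self.canvas.save("digit.png")          # save the drawing, then hand it to the model
            label = predict_digit("digit.png")     # hypothetical inference helper for the trained VGG16
            self.result_label.setText(f"Prediction: {label}")

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        board = Board()
        board.show()
        sys.exit(app.exec_())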

    5. Running example

  • Original article: https://blog.csdn.net/linghyu/article/details/139834515