• Blind Recognition of Digital Modulation Signals Based on the MindSpore Framework


    Author: EK_Gao | Source: MindSpore Technical Forum

    Overview

    Modulation recognition is of significant value in both military and civilian applications: correctly identifying a signal's modulation scheme is the prerequisite for a communication receiver to demodulate and decode it correctly. Blind modulation recognition aims to receive signals whose modulation type is unknown and classify them automatically; a technique that can correctly identify the modulation format among large numbers of complex, diverse signals clearly benefits communication efficiency.

    Current deep-learning-based blind modulation recognition exploits deep learning's great strength in image recognition and classification, reaching unprecedented stability and accuracy.
    In this experiment, modulated signals of different SNRs and modulation types are matched against different networks. AlexNet and GoogLeNet, both strong classifiers, are selected as the networks under study; they are built with Huawei's MindSpore framework, which enables efficient computation, and their classification performance is evaluated.

    AlexNet

    In 2012, Hinton's team proposed AlexNet, which raised classification accuracy from the traditional ~70% to over 80%. AlexNet performs convolutions through 5 convolutional layers, flattens the result into 3 fully connected layers, and finally outputs the classification through a softmax layer.

    AlexNet adopts a parallel (dual-branch) layout, uses the ReLU activation function, and randomly drops 50% of the neurons in the first two fully connected layers. This avoids overfitting while improving the network's adaptability, reducing its complexity, and increasing computation speed and classification accuracy; these are AlexNet's distinctive advantages for image classification.

    To keep the data dimensions consistent between layers and the network running smoothly, the output size of the feature matrix after a convolution or pooling operation is computed as:

    N = (W - F + 2P) / S + 1

    where N is the side length (height and width) of the output feature matrix, W×W is the input size, F×F is the convolution or pooling kernel size, S is the stride, and P is the number of zero-padded rows or columns.
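The formula can be checked in plain Python (a minimal sketch; the function name `output_size` is ours, and the numbers below are the classic "valid"-style AlexNet sizes — the MindSpore code later in this article uses pad_mode="same", so its actual feature-map sizes differ):

```python
def output_size(W, F, S=1, P=0):
    """Side length of the feature map after an F x F convolution or pooling
    over a W x W input with stride S and P rows/columns of zero padding:
    N = (W - F + 2P) / S + 1 (floored)."""
    return (W - F + 2 * P) // S + 1

# AlexNet's first conv on a 224 x 224 input: 11 x 11 kernel, stride 4, padding 2
print(output_size(224, 11, S=4, P=2))  # -> 55
# followed by 3 x 3 max pooling with stride 2
print(output_size(55, 3, S=2))         # -> 27
```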

    GoogLeNet

    In 2014, GoogLeNet made its mark in the ILSVRC competition with 93.33% accuracy. Thanks to its novel structure, it grew deeper while remaining several times smaller than AlexNet, so under a fixed computation budget GoogLeNet shows a clear performance advantage.

    GoogLeNet introduces the Inception module, auxiliary classifiers, and other innovations.

    01 The Inception module in GoogLeNet

    The Inception module introduces a parallel structure: the feature matrix is fed into four branches in parallel, and the resulting feature matrices of different scales are concatenated along the depth axis.

    The four branches are a 1×1 convolution, a 3×3 convolution, a 5×5 convolution, and a 3×3 max-pooling downsampling branch. Three of the branches use 1×1 convolutions for dimensionality reduction, cutting the parameter count and computational complexity.
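Because the four branches are concatenated along the channel (depth) axis, the module's output depth is simply the sum of the branch output channels. A minimal sketch (the helper name is ours; the numbers are those of inception 3a in the code later in this article):

```python
def inception_out_channels(n1x1, n3x3, n5x5, pool_planes):
    """Depth after concatenating the four parallel Inception branches."""
    return n1x1 + n3x3 + n5x5 + pool_planes

# inception 3a: 64 (1x1) + 128 (3x3) + 32 (5x5) + 32 (pool projection)
print(inception_out_channels(64, 128, 32, 32))  # -> 256
```

This is why, in the GoogLeNet constructor later on, block3b takes 256 input channels.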

    02 GoogLeNet auxiliary classifiers

    The added auxiliary classifiers help avoid vanishing gradients: they provide feedback on intermediate-layer data and propagate gradients forward through the network. The two auxiliary classifiers share the same structure, each using average pooling followed by a 1×1 convolution and a ReLU activation to reduce the parameter count.

    AlexNet parameters

    Using the output-size formula and calculation method above, the feature matrix sizes are obtained and the AlexNet structure parameters are built as follows:

    01

    Network construction code

"""Alexnet."""
import numpy as np
import mindspore.nn as nn
from mindspore.ops import operations as P
from mindspore.ops import functional as F
from mindspore.common.tensor import Tensor
import mindspore.common.dtype as mstype


def conv(in_channels, out_channels, kernel_size, stride=1, padding=0, pad_mode="valid", has_bias=True):
    return nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding,
                     has_bias=has_bias, pad_mode=pad_mode)


def fc_with_initialize(input_channels, out_channels, has_bias=True):
    return nn.Dense(input_channels, out_channels, has_bias=has_bias)


class DataNormTranspose(nn.Cell):
    """Normalize a tensor image with mean and standard deviation.

    Given mean: (R, G, B) and std: (R, G, B), normalizes each channel:
    channel = (channel - mean) / std

    Args:
        mean (sequence): Sequence of means for R, G, B channels respectively.
        std (sequence): Sequence of standard deviations for R, G, B channels respectively.
    """

    def __init__(self):
        super(DataNormTranspose, self).__init__()
        self.mean = Tensor(np.array([0.485 * 255, 0.456 * 255, 0.406 * 255]).reshape((1, 1, 1, 3)), mstype.float32)
        self.std = Tensor(np.array([0.229 * 255, 0.224 * 255, 0.225 * 255]).reshape((1, 1, 1, 3)), mstype.float32)

    def construct(self, x):
        x = (x - self.mean) / self.std
        x = F.transpose(x, (0, 3, 1, 2))  # NHWC -> NCHW
        return x


class AlexNet(nn.Cell):
    """Alexnet"""

    def __init__(self, num_classes=4, channel=3, phase='train', include_top=True, off_load=False):
        super(AlexNet, self).__init__()
        self.off_load = off_load
        if self.off_load is True:
            self.data_trans = DataNormTranspose()
        self.conv1 = conv(channel, 64, 11, stride=4, pad_mode="same", has_bias=True)
        self.conv2 = conv(64, 128, 5, pad_mode="same", has_bias=True)
        self.conv3 = conv(128, 192, 3, pad_mode="same", has_bias=True)
        self.conv4 = conv(192, 256, 3, pad_mode="same", has_bias=True)
        self.conv5 = conv(256, 256, 3, pad_mode="same", has_bias=True)
        self.relu = P.ReLU()
        self.max_pool2d = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode='valid')
        self.include_top = include_top
        if self.include_top:
            dropout_ratio = 0.65  # keep probability; 1.0 at test time disables dropout
            if phase == 'test':
                dropout_ratio = 1.0
            self.flatten = nn.Flatten()
            self.fc1 = fc_with_initialize(6 * 6 * 256, 4096)
            self.fc2 = fc_with_initialize(4096, 4096)
            self.fc3 = fc_with_initialize(4096, num_classes)
            self.dropout = nn.Dropout(dropout_ratio)

    def construct(self, x):
        """define network"""
        if self.off_load is True:
            x = self.data_trans(x)
        x = self.conv1(x)
        x = self.relu(x)
        x = self.max_pool2d(x)
        x = self.conv2(x)
        x = self.relu(x)
        x = self.max_pool2d(x)
        x = self.conv3(x)
        x = self.relu(x)
        x = self.conv4(x)
        x = self.relu(x)
        x = self.conv5(x)
        x = self.relu(x)
        x = self.max_pool2d(x)
        if not self.include_top:
            return x
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.dropout(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.dropout(x)
        x = self.fc3(x)
        return x
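The `DataNormTranspose` cell above does two things: per-channel normalization with ImageNet statistics scaled to the 0–255 range, and an NHWC→NCHW transpose. The same arithmetic can be verified with NumPy alone (a sketch independent of MindSpore; the dummy input is made up):

```python
import numpy as np

mean = np.array([0.485 * 255, 0.456 * 255, 0.406 * 255]).reshape((1, 1, 1, 3))
std = np.array([0.229 * 255, 0.224 * 255, 0.225 * 255]).reshape((1, 1, 1, 3))

x = np.full((1, 224, 224, 3), 128.0)   # dummy NHWC batch, mid-gray pixels
y = (x - mean) / std                   # per-channel normalization
y = np.transpose(y, (0, 3, 1, 2))      # NHWC -> NCHW, as the network expects
print(y.shape)  # -> (1, 3, 224, 224)
```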

    02

    AlexNet configuration parameters (partial)

    GoogLeNet parameters

    In the GoogLeNet structure, the input image first passes through convolution and pooling layers, then through the inception 3, inception 4, and inception 5 stages; auxiliary classifiers are attached at inception 4a and inception 4d. Dropout, flattening, and softmax then produce the output probability distribution.

    01

    Network construction code

"""GoogleNet"""
import mindspore.nn as nn
from mindspore.common.initializer import TruncatedNormal
from mindspore.ops import operations as P


def weight_variable():
    """Weight variable."""
    return TruncatedNormal(0.02)


class Conv2dBlock(nn.Cell):
    """
    Basic convolutional block

    Args:
        in_channels (int): Input channel.
        out_channels (int): Output channel.
        kernel_size (int): Input kernel size. Default: 1.
        stride (int): Stride size for the first convolutional layer. Default: 1.
        padding (int): Implicit paddings on both sides of the input. Default: 0.
        pad_mode (str): Padding mode. Optional values are "same", "valid", "pad". Default: "same".

    Returns:
        Tensor, output tensor.
    """

    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, padding=0, pad_mode="same"):
        super(Conv2dBlock, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride,
                              padding=padding, pad_mode=pad_mode, weight_init=weight_variable())
        self.bn = nn.BatchNorm2d(out_channels, eps=0.001)
        self.relu = nn.ReLU()

    def construct(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x


class Inception(nn.Cell):
    """Inception Block"""

    def __init__(self, in_channels, n1x1, n3x3red, n3x3, n5x5red, n5x5, pool_planes):
        super(Inception, self).__init__()
        self.b1 = Conv2dBlock(in_channels, n1x1, kernel_size=1)
        self.b2 = nn.SequentialCell([Conv2dBlock(in_channels, n3x3red, kernel_size=1),
                                     Conv2dBlock(n3x3red, n3x3, kernel_size=3, padding=0)])
        self.b3 = nn.SequentialCell([Conv2dBlock(in_channels, n5x5red, kernel_size=1),
                                     Conv2dBlock(n5x5red, n5x5, kernel_size=3, padding=0)])
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=1, pad_mode="same")
        self.b4 = Conv2dBlock(in_channels, pool_planes, kernel_size=1)
        self.concat = P.Concat(axis=1)  # concatenate along the channel axis

    def construct(self, x):
        branch1 = self.b1(x)
        branch2 = self.b2(x)
        branch3 = self.b3(x)
        cell = self.maxpool(x)
        branch4 = self.b4(cell)
        return self.concat((branch1, branch2, branch3, branch4))


class GoogLeNet(nn.Cell):
    """Googlenet architecture"""

    def __init__(self, num_classes, include_top=True):
        super(GoogLeNet, self).__init__()
        self.conv1 = Conv2dBlock(3, 64, kernel_size=7, stride=2, padding=0)
        self.maxpool1 = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode="same")
        self.conv2 = Conv2dBlock(64, 64, kernel_size=1)
        self.conv3 = Conv2dBlock(64, 192, kernel_size=3, padding=0)
        self.maxpool2 = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode="same")
        self.block3a = Inception(192, 64, 96, 128, 16, 32, 32)
        self.block3b = Inception(256, 128, 128, 192, 32, 96, 64)
        self.maxpool3 = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode="same")
        self.block4a = Inception(480, 192, 96, 208, 16, 48, 64)
        self.block4b = Inception(512, 160, 112, 224, 24, 64, 64)
        self.block4c = Inception(512, 128, 128, 256, 24, 64, 64)
        self.block4d = Inception(512, 112, 144, 288, 32, 64, 64)
        self.block4e = Inception(528, 256, 160, 320, 32, 128, 128)
        self.maxpool4 = nn.MaxPool2d(kernel_size=2, stride=2, pad_mode="same")
        self.block5a = Inception(832, 256, 160, 320, 32, 128, 128)
        self.block5b = Inception(832, 384, 192, 384, 48, 128, 128)
        self.dropout = nn.Dropout(keep_prob=0.8)
        self.include_top = include_top
        if self.include_top:
            self.mean = P.ReduceMean(keep_dims=True)
            self.flatten = nn.Flatten()
            self.classifier = nn.Dense(1024, num_classes, weight_init=weight_variable(),
                                       bias_init=weight_variable())

    def construct(self, x):
        """construct"""
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.maxpool2(x)
        x = self.block3a(x)
        x = self.block3b(x)
        x = self.maxpool3(x)
        x = self.block4a(x)
        x = self.block4b(x)
        x = self.block4c(x)
        x = self.block4d(x)
        x = self.block4e(x)
        x = self.maxpool4(x)
        x = self.block5a(x)
        x = self.block5b(x)
        if not self.include_top:
            return x
        x = self.mean(x, (2, 3))  # global average pooling over H, W
        x = self.flatten(x)
        x = self.classifier(x)
        return x
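The classification head above replaces large fully connected layers with a global average over the spatial axes (`P.ReduceMean` over dims (2, 3)) followed by a single `nn.Dense`. The same reduction can be sketched in NumPy (the 7×7 final feature map size is an assumption for illustration):

```python
import numpy as np

features = np.random.rand(1, 1024, 7, 7)            # N, C, H, W after block5b
pooled = features.mean(axis=(2, 3), keepdims=True)  # global average pooling -> (1, 1024, 1, 1)
flat = pooled.reshape(pooled.shape[0], -1)          # flatten to (N, 1024) for the Dense layer
print(flat.shape)  # -> (1, 1024)
```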

    02

    GoogLeNet training parameters (partial)

    AlexNet model training

    Define the dataset function

"""Produce the dataset"""
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as CV


def create_dataset_imagenet(cfg, dataset_path, batch_size=32, repeat_num=1, training=True,
                            shuffle=True, sampler=None, class_indexing=None):
    """
    create a train or eval imagenet2012 dataset for resnet50

    Args:
        dataset_path(string): the path of dataset.
        do_train(bool): whether dataset is used for train or eval.
        repeat_num(int): the repeat times of dataset. Default: 1
        batch_size(int): the batch size of dataset. Default: 32
        target(str): the device target. Default: Ascend

    Returns:
        dataset
    """
    data_set = ds.ImageFolderDataset(dataset_path, shuffle=shuffle, sampler=sampler, class_indexing=class_indexing)
    image_size = 224
    # define map operations
    if training:
        transform_img = [
            CV.RandomCropDecodeResize(image_size, scale=(0.08, 1.0), ratio=(0.75, 1.333)),
            CV.RandomHorizontalFlip(prob=0.5)
        ]
    else:
        transform_img = [
            CV.Decode(),
            CV.Resize((256, 256)),
            CV.CenterCrop(image_size)
        ]
    data_set = data_set.map(input_columns="image", operations=transform_img)
    data_set = data_set.batch(batch_size, drop_remainder=True)
    # apply dataset repeat operation
    if repeat_num > 1:
        data_set = data_set.repeat(repeat_num)
    return data_set
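One detail worth noting: `batch(..., drop_remainder=True)` discards the final incomplete batch, so the number of steps per epoch is the floor of the sample count over the batch size. A quick check in plain Python (the sample count below is hypothetical):

```python
def steps_per_epoch(num_samples, batch_size, drop_remainder=True):
    """Number of batches the dataset yields per epoch."""
    if drop_remainder:
        return num_samples // batch_size
    return -(-num_samples // batch_size)  # ceiling division

print(steps_per_epoch(1730, 32))         # -> 54 (last 2 samples dropped)
print(steps_per_epoch(1730, 32, False))  # -> 55
```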

    Instantiate the model

import time
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.train import Model
from mindspore.nn.metrics import Accuracy
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor, TimeMonitor
from mindspore.train.loss_scale_manager import DynamicLossScaleManager, FixedLossScaleManager
from A_2AlexNet import AlexNet
from A_3CreatDs import create_dataset_imagenet
from A_4Generator_lr import get_lr
from A_5Get_param_groups import get_param_groups
from A_6Config import Config_Net as config

# ds & net
_off_load = True
train_ds_path = '../datasets/train_10dB'
ds_train = create_dataset_imagenet(config, train_ds_path, config.batch_size, training=True)
network = AlexNet(config.num_classes, phase='train', off_load=_off_load)
metrics = {"Accuracy": Accuracy()}
step_per_epoch = ds_train.get_dataset_size()
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
lr = get_lr(config)
opt = nn.Momentum(params=get_param_groups(network),
                  learning_rate=Tensor(lr),
                  momentum=config.momentum,
                  weight_decay=config.weight_decay,
                  loss_scale=config.loss_scale)
loss_scale_manager = FixedLossScaleManager(config.loss_scale, drop_overflow_update=False)
model = Model(network, loss_fn=loss, optimizer=opt, metrics=metrics, loss_scale_manager=loss_scale_manager)

# save ckpt
ckpt_save_dir = config.save_checkpoint_path
time_cb = TimeMonitor(data_size=step_per_epoch)
config_ck = CheckpointConfig(save_checkpoint_steps=config.save_checkpoint_epochs,
                             keep_checkpoint_max=config.keep_checkpoint_max)
ckpoint_cb = ModelCheckpoint(prefix="alexnet", directory=ckpt_save_dir, config=config_ck)

# train
model.train(config.epoch_size, ds_train, callbacks=[time_cb, ckpoint_cb, LossMonitor()])

    Validation

import mindspore.nn as nn
from mindspore import Model, load_checkpoint, load_param_into_net
from A_2AlexNet import AlexNet
from A_3CreatDs import create_dataset_imagenet
from A_6Config import Config_Net as config

# eval
eval_ds_path = '../datasets/5dB_eval'  # import eval_ds
eval_dataset = create_dataset_imagenet(cfg=config, dataset_path=eval_ds_path, batch_size=50, training=False)
net_eval = AlexNet(config.num_classes, phase='test', off_load=True)
ckpt_path = '../ckpt/A/5dB/alexnet-100_54_0.12973762.ckpt'  # import ckpt
eval_ds_dict = load_checkpoint(ckpt_path)
load_param_into_net(net_eval, eval_ds_dict)
net_eval.set_train(False)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
metrics = {'accuracy': nn.Accuracy(),
           'ConfusionMatrix': nn.ConfusionMatrix(config.num_classes)}  # alternative: nn.ConfusionMatrixMetric
model_eval = Model(network=net_eval, loss_fn=loss, metrics=metrics)
eval_result = model_eval.eval(eval_dataset)
print("accuracy: ", eval_result)
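The `nn.ConfusionMatrix` metric used above counts, for each (true class, predicted class) pair, how many samples fell into that cell; the diagonal holds the correct predictions. A minimal NumPy equivalent (a sketch; the labels and predictions below are made up):

```python
import numpy as np

def confusion_matrix(labels, preds, num_classes):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(labels, preds):
        cm[t, p] += 1
    return cm

labels = [0, 0, 1, 2, 3, 3]
preds = [0, 1, 1, 2, 3, 2]
cm = confusion_matrix(labels, preds, num_classes=4)
print(cm.trace() / cm.sum())  # overall accuracy: 4 correct of 6
```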

    GoogLeNet training

import time
from mindspore.nn.optim.momentum import Momentum
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor, TimeMonitor
from mindspore.train.model import Model
from mindspore.train.loss_scale_manager import DynamicLossScaleManager, FixedLossScaleManager
from mindspore import Tensor
from G_2GoogLeNet import GoogLeNet
from G_3Dateset import create_dataset_imagenet
from G_4Lossfun import CrossEntropySmooth
from G_5Config import Config_Net as cfg
from G_6Get_lr import get_lr


def get_param_groups(network):
    """get param groups"""
    decay_params = []
    no_decay_params = []
    for x in network.trainable_params():
        parameter_name = x.name
        if parameter_name.endswith('.bias'):
            # all bias not using weight decay
            no_decay_params.append(x)
        elif parameter_name.endswith('.gamma'):
            # bn gamma not using weight decay
            no_decay_params.append(x)
        elif parameter_name.endswith('.beta'):
            # bn beta not using weight decay
            no_decay_params.append(x)
        else:
            decay_params.append(x)
    return [{'params': no_decay_params, 'weight_decay': 0.0}, {'params': decay_params}]


# ds & net
train_ds_path = '../datasets/2dB_train'
train_ds = create_dataset_imagenet(train_ds_path, training=True, batch_size=cfg.batch_size)
batch_num = train_ds.get_dataset_size()
net_train = GoogLeNet(num_classes=cfg.num_classes)
lr = get_lr(cfg)
opt = Momentum(params=get_param_groups(net_train),
               learning_rate=Tensor(lr),  # cfg.lr_init
               momentum=cfg.momentum,
               weight_decay=cfg.weight_decay,
               loss_scale=cfg.loss_scale)
loss = CrossEntropySmooth(sparse=True, reduction="mean", smooth_factor=cfg.label_smooth_factor, num_classes=cfg.num_classes)
loss_scale_manager = FixedLossScaleManager(cfg.loss_scale, drop_overflow_update=False)
model = Model(net_train, loss_fn=loss, optimizer=opt, metrics={'acc'},
              keep_batchnorm_fp32=False, loss_scale_manager=loss_scale_manager)

# save ckpt
ckpt_save_dir = cfg.save_checkpoint_path
config_ck = CheckpointConfig(save_checkpoint_steps=cfg.save_checkpoint_epochs, keep_checkpoint_max=cfg.keep_checkpoint_max)
ckpoint_cb = ModelCheckpoint(prefix="googlenet", directory=ckpt_save_dir, config=config_ck)
loss_cb = LossMonitor()
time_cb = TimeMonitor(data_size=batch_num)
cbs = [time_cb, ckpoint_cb, loss_cb]

# train
model.train(cfg.epoch_size, train_ds, callbacks=cbs)
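`CrossEntropySmooth` (imported from G_4Lossfun but not shown here) applies label smoothing before the cross-entropy. In one common formulation, each one-hot target is softened to (1 − ε)·onehot + ε/num_classes; the exact on/off values used in G_4Lossfun may differ, and the helper name below is ours. A NumPy sketch of the target transform:

```python
import numpy as np

def smooth_labels(label, num_classes, smooth_factor=0.1):
    """Soften a hard class index into a smoothed probability vector."""
    onehot = np.eye(num_classes)[label]
    return onehot * (1.0 - smooth_factor) + smooth_factor / num_classes

print(smooth_labels(2, num_classes=4, smooth_factor=0.1))
# -> [0.025 0.025 0.925 0.025]
```

The smoothed target still sums to 1, but no longer pushes the network toward infinitely confident logits, which regularizes training.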

    Experimental results

    01

    Grayscale images

    Grayscale dataset classes

    Confusion matrices of the networks on the grayscale dataset

    On the grayscale dataset, because every sample has an SNR of 10 dB and the model parameters were fine-tuned experimentally, the initial overall accuracies reached 81.27%, 85.47% and 86.00%. Every network recognized QPSK well, while 16QAM, 64QAM, 8PSK and APSK were recognized poorly; this correlates with the number of sample points of each modulation type and the shape of the generated images.

    02

    Grayscale-enhanced images

    Next, AlexNet and GoogLeNet were trained and validated on grayscale-enhanced images, to observe how combining a distance-attenuation model improves the image features.
    Grayscale-enhanced dataset classes

    Confusion matrices of the networks on the grayscale-enhanced dataset

    The results show that with AlexNet the recognition accuracy for 8PSK, APSK and QPSK exceeds 90%, but some 16QAM samples are misclassified as 64QAM and APSK, and some 64QAM samples are misidentified as QPSK and 16QAM.

    With GoogLeNet, 8PSK, APSK and QPSK are recognized well, but 16QAM and 64QAM are partly misidentified because their images look similar. Overall, the grayscale-enhanced images outperform the plain grayscale ones.

    03

    Three-channel (RGB) image results

    Since RGB images better fit the data format the networks classify, RGB datasets at SNRs of 2 dB, 5 dB and 10 dB were used for training and validation.
    Three-channel dataset classes

    Confusion matrices of the networks on the 2 dB RGB dataset

    At the low SNR of 2 dB, the overall recognition rate is low: the image features of several modulation types are blurred and overlapping, which makes feature extraction difficult for the networks.

    This is especially evident for APSK in AlexNet, where most of the images are misidentified as 8PSK. GoogLeNet, with its different feature-extraction approach, does not show this behavior, though its overall recognition rate is also low.

    Confusion matrices of the networks on the 5 dB RGB dataset

    In the confusion matrices of AlexNet and GoogLeNet at an image SNR of 10 dB, both networks reach 100% recognition accuracy for every modulation type. This shows that at 10 dB the networks can extract the feature information of the different modulation images very accurately, and that blind modulation image recognition can effectively exploit a deep learning parameter model for precise classification.

    Time complexity analysis

    Time complexity evaluates both the networks and the capability of the experimental hardware. After several rounds of experiments on the sample data, the average training time of each network was recorded. The experiments ran on an Intel Core i7-8700K CPU @ 3.70 GHz under Windows 7. The average computation times are as follows:

    These experiments were run entirely on a CPU. Where higher speed and performance are required, training can also be offloaded to a GPU or an Ascend processor, which hold a much greater advantage for computing over massive data; the Ascend processor, beyond raw compute capability, also integrates seamlessly with the network framework.

    Summary

    Plotting the recognition accuracies obtained during the experiments for comparison:

    At high SNR, AlexNet's recognition accuracy on grayscale, grayscale-enhanced and three-channel images rises step by step from just over 80% to 100%, and GoogLeNet's rises from 85% to 100%, showing that the image-processing pipeline positively affects the effective image features.

    For RGB images, the two networks reach roughly 80% and 90% recognition at an SNR of 2 dB, and both reach 100% at 10 dB, a substantial improvement.
    The results show that three-channel images yield higher recognition rates than grayscale ones, and that higher-SNR modulated signals are easier to identify correctly. Between the networks, AlexNet computes faster while GoogLeNet is more accurate; on 10 dB three-channel images both networks reach 100% recognition.

  • Original article: https://blog.csdn.net/Kenji_Shinji/article/details/126180200