• Training the PASCAL VOC2012 dataset with YOLOv5s v6.1, plus the official YOLOv5 "Train Custom Data" tutorial


    0. Preface

    • A couple of days ago I trained the VOC2012 dataset with yolov3_spp, and I originally planned to reuse that data directly with yolov5, but mounting the drive on Colab is far too slow. Even after copying the data over (drag the folder in the Colab file browser rather than running cp inside this ipynb, since the copy itself takes ages), yolov5 still has to scan the dataset on startup, and scanning the train folder alone took an hour. (Earlier, converting VOC to the COCO-style layout with trans_voc2yolo.py for yolov3_spp had also taken two hours.)
    • I couldn't stand it any longer, and CSDN posts complain about the same thing. I considered tar-packing the data and copying it out of Drive, but packing is slow too, so I decided to just re-download the dataset and convert it again myself.

    1. Dataset Download and Preprocessing

    1.1 Install yolov5 and download the VOC2012 dataset

    
    #clone yolov5 and install its dependencies
    !git clone https://github.com/ultralytics/yolov5
    %cd yolov5
    !pip install -r requirements.txt
    
    #download and extract the PASCAL VOC2012 dataset into my_dataset
    !mkdir my_dataset
    %cd my_dataset
    !wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
    !tar -xvf VOCtrainval_11-May-2012.tar 
    #switch back to the yolov5 root directory
    %cd ..
    
    

    1.2 Convert VOC annotations to YOLO annotations

    • The yolov3_spp conversion scripts trans_voc2yolo.py, calculate_dataset.py, and pascal_voc_classes.json all go in the my_dataset folder. (The full code for these scripts is given at the end of this article.)
    • Switch to the yolov5 project root and run the trans_voc2yolo.py script. (Note: the root path in the original script needs a leading dot, i.e. './my_dataset/VOCdevkit'.)
    • This step generates a YOLO-format dataset (my_yolo_dataset) and the my_data_label.names label file under the my_dataset folder.
    • Note: this step requires pascal_voc_classes.json (the label-to-id json file) to be present in the my_dataset folder.
    ├── my_yolo_dataset  custom dataset root
    │         ├── train   training set
    │         │     ├── images  training images
    │         │     └── labels  training labels
    │         └── val    validation set
    │               ├── images  validation images
    │               └── labels  validation labels
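    The conversion that fills those labels folders can be sketched in a few lines. VOC annotations store absolute corner coordinates (xmin, ymin, xmax, ymax), while YOLO labels want a normalized center/size format; this mirrors the arithmetic inside trans_voc2yolo.py:

    ```python
    # VOC box (absolute corners) -> YOLO box (normalized center x/y, width, height)
    def voc_box_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
        xc = round((xmin + xmax) / 2 / img_w, 6)   # normalized center x
        yc = round((ymin + ymax) / 2 / img_h, 6)   # normalized center y
        w = round((xmax - xmin) / img_w, 6)        # normalized width
        h = round((ymax - ymin) / img_h, 6)        # normalized height
        return xc, yc, w, h

    # a 200x200 box centered at (200, 300) in a 500x500 image
    print(voc_box_to_yolo(100, 200, 300, 400, 500, 500))  # (0.4, 0.6, 0.4, 0.4)
    ```
    
    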
    
    • The generated my_data_label.names label file looks like the following. (If the file is missing, you can create a plain txt file by hand and rename it with the .names extension.)
    aeroplane
    bicycle
    bird
    boat
    bottle
    bus
    ...
    
    import os
    
    assert os.path.exists('my_dataset/VOCdevkit/VOC2012/JPEGImages')
    #several paths in the original script changed from data to my_dataset
    #output path changed: save_file_root = "./my_dataset/my_yolo_dataset"
    !python my_dataset/trans_voc2yolo.py
    
    translate train file...: 100% 5717/5717 [00:03<00:00, 1884.45it/s]
    translate val file...: 100% 5823/5823 [00:03<00:00, 1799.45it/s]
    

    1.3 Generate the auxiliary files from the arranged dataset

    • Use the calculate_dataset.py script to generate my_train_data.txt, my_val_data.txt, and my_data.data. There is no need to generate a new my_yolov3.cfg file here; just comment out the relevant code.
    • Before running the script, adjust the path parameters to match your own layout.
    • The generated files all end up under yolov5/my_dataset/my_yolo_dataset.
    train_annotation_dir = "./my_yolo_dataset/train/labels"
    val_annotation_dir = "./my_yolo_dataset/val/labels"
    classes_label = "./my_data_label.names"
    ...
    ...
    train_txt_path = "my_train_data.txt"
    val_txt_path = "my_val_data.txt"
    
    ...
    create_data_data("my_data.data", classes_label, train_txt_path, val_txt_path, classes_info)
    
    
    %cd my_dataset
    !python calculate_dataset.py
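    As a quick sanity check after running the script, you can verify that the image paths written into a generated list file actually exist on disk. This is a hedged sketch (check_list_file is a helper made up for illustration, not part of the scripts above):

    ```python
    import os

    def check_list_file(txt_path, limit=5):
        """Return the number of entries in a my_*_data.txt list file and
        which of the first `limit` image paths are missing on disk."""
        with open(txt_path) as f:
            paths = [line.strip() for line in f if line.strip()]
        missing = [p for p in paths[:limit] if not os.path.exists(p)]
        return len(paths), missing

    # e.g. check_list_file('my_train_data.txt') should report no missing paths
    ```
    
    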
    

    2. Modify the Configuration Files

    2.1 Modify the coco.yaml file

    Copy the coco.yaml file under data, rename it myvoc2coco.yaml, then open it and edit the paths and names.

    %cd ..
    %cp data/coco.yaml data/myvoc2coco.yaml
    
    /content/yolov5
    
    #Read my_data_label.names, build a list, and print it. This is the label_list;
    #replace the names line in myvoc2coco.yaml with this list.
    ls = []
    with open('my_dataset/my_data_label.names', 'r') as f:
      lines = f.readlines()
      for line in lines:
        line = line.strip("\n")  # strip the trailing newline
        ls.append(str(line))
    print(ls)
    
    
    ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
    

    data/myvoc2coco.yaml is modified as follows:

    path: my_dataset  # dataset root dir
    train: my_train_data.txt  # train images
    val: my_val_data.txt    # val images 
    test: test-dev2017.txt   # not used here, no need to change
    
    # Classes
    nc: 20  # number of classes
    names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 
        'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']  # class names
    

    2.2 Modify yolov5s.yaml

    Change the class count nc from 80 to 20.
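    If you prefer not to edit the file by hand, that single line can be patched programmatically. A minimal sketch, assuming the stock yolov5s.yaml where the class-count line starts with "nc:":

    ```python
    import re

    def patch_nc(yaml_text, nc):
        # replace the first "nc: <number>" found at the start of a line
        return re.sub(r'^nc:\s*\d+', 'nc: {}'.format(nc), yaml_text, count=1, flags=re.M)

    sample = "nc: 80  # number of classes\ndepth_multiple: 0.33\n"
    print(patch_nc(sample, 20))
    ```
    
    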

    2.3 Modify train.py

    Even after the two changes above, running still reports Overriding model.yaml nc=80 with nc=20, i.e. the class count baked into the model does not match the nc you configured in your .yaml file (even though the yaml has already been edited).

    Edit train.py and scroll down to the argument definitions:

    • weights: the initial model weights file, yolov5s.pt

    • cfg: the model config, empty by default; but its help string is help='model.yaml path', so it points to the model's .yaml file. Change it to 'models/yolov5s.yaml'.

    • data: dataset paths, class count, class names, and so on, e.g. coco128.yaml

    • hyp: hyperparameter settings; change them only if you know what you are doing.

    • epochs: number of training epochs, 300 by default.

    • batch-size: the number of samples per batch; set it smaller if your GPU memory is limited.
      So we modify the cfg parameter accordingly, taking yolov5s as the example.
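    The options above are ordinary argparse arguments, so instead of editing the defaults in train.py you can equally pass --cfg on the command line. A small argparse sketch mirroring those options (the defaults here are illustrative, not copied verbatim from train.py):

    ```python
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', default='yolov5s.pt', help='initial weights path')
    parser.add_argument('--cfg', default='', help='model.yaml path')
    parser.add_argument('--data', default='data/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=16)

    # equivalent to editing the defaults inside train.py:
    opt = parser.parse_args(['--cfg', 'models/yolov5s.yaml', '--data', 'myvoc2coco.yaml'])
    print(opt.cfg, opt.data)
    ```
    
    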

    2.4 PyYAML error

    You may hit the error requirements: PyYAML>=5.3.1 not found and is required by YOLOv5, attempting auto-update... as well as

    yaml.reader.ReaderError: unacceptable character #x1f680: special characters are not allowed
      in "data/hyps/hyp.scratch-low.yaml", position 9
    

    Running the following fixes it:

    !pip install --ignore-installed PyYAML 
    
    Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
    Collecting PyYAML
      Using cached PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)
    Installing collected packages: PyYAML
    Successfully installed PyYAML-6.0
    

    3. Start Training

    3.1 Start training

    Now training can begin. Previously, 3 epochs with yolov3_spp took about 30 min; this yolov5 run took 13 min, roughly twice as fast. (This may partly depend on Colab: officially, the GPU assigned can differ from session to session.)

    %cd ..
    !python train.py --img 640 --batch 16 --epochs 3 --data myvoc2coco.yaml --weights yolov5s.pt
    
    /content/yolov5
    train: weights=yolov5s.pt, cfg=models/yolov5s.yaml, data=myvoc2coco.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=3, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
    github: up to date with https://github.com/ultralytics/yolov5 ✅
    YOLOv5 🚀 v6.1-383-g3d47fc6 Python-3.7.13 torch-1.12.0+cu113 CUDA:0 (Tesla T4, 15110MiB)
    
    hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
    Weights & Biases: run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs in Weights & Biases
    ClearML: run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 runs in ClearML
    TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
    
                     from  n    params  module                                  arguments                     
      0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]              
      1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]                
      2                -1  1     18816  models.common.C3                        [64, 64, 1]                   
      3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]               
      4                -1  2    115712  models.common.C3                        [128, 128, 2]                 
      5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]              
      6                -1  3    625152  models.common.C3                        [256, 256, 3]                 
      7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]              
      8                -1  1   1182720  models.common.C3                        [512, 512, 1]                 
      9                -1  1    656896  models.common.SPPF                      [512, 512, 5]                 
     10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]              
     11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
     12           [-1, 6]  1         0  models.common.Concat                    [1]                           
     13                -1  1    361984  models.common.C3                        [512, 256, 1, False]          
     14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]              
     15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
     16           [-1, 4]  1         0  models.common.Concat                    [1]                           
     17                -1  1     90880  models.common.C3                        [256, 128, 1, False]          
     18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]              
     19          [-1, 14]  1         0  models.common.Concat                    [1]                           
     20                -1  1    296448  models.common.C3                        [256, 256, 1, False]          
     21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]              
     22          [-1, 10]  1         0  models.common.Concat                    [1]                           
     23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]          
     24      [17, 20, 23]  1     67425  models.yolo.Detect                      [20, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
    YOLOv5s summary: 270 layers, 7073569 parameters, 7073569 gradients, 16.1 GFLOPs
    
    Transferred 342/349 items from yolov5s.pt
    AMP: checks passed ✅
    optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0005), 60 bias
    albumentations: Blur(always_apply=False, p=0.01, blur_limit=(3, 7)), MedianBlur(always_apply=False, p=0.01, blur_limit=(3, 7)), ToGray(always_apply=False, p=0.01), CLAHE(always_apply=False, p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))
    train: Scanning '/content/yolov5/my_dataset/my_train_data' images and labels...5717 found, 0 missing, 0 empty, 0 corrupt: 100% 5717/5717 [00:06<00:00, 831.41it/s] 
    train: New cache created: /content/yolov5/my_dataset/my_train_data.cache
    val: Scanning '/content/yolov5/my_dataset/my_val_data' images and labels...5823 found, 0 missing, 0 empty, 0 corrupt: 100% 5823/5823 [00:04<00:00, 1236.37it/s]
    val: New cache created: /content/yolov5/my_dataset/my_val_data.cache
    Plotting labels to runs/train/exp2/labels.jpg... 
    
    AutoAnchor: 4.04 anchors/target, 1.000 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
    Image sizes 640 train, 640 val
    Using 2 dataloader workers
    Logging results to runs/train/exp2
    Starting training for 3 epochs...
    
         Epoch   gpu_mem       box       obj       cls    labels  img_size
           0/2     3.72G   0.07408   0.03976   0.05901        33       640: 100% 358/358 [03:02<00:00,  1.96it/s]
                   Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 182/182 [01:03<00:00,  2.86it/s]
                     all       5823      15787      0.495      0.461      0.431      0.207
    
         Epoch   gpu_mem       box       obj       cls    labels  img_size
           1/2     6.26G   0.05159   0.03403   0.03152        37       640: 100% 358/358 [02:50<00:00,  2.10it/s]
                   Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 182/182 [01:01<00:00,  2.98it/s]
                     all       5823      15787      0.637      0.597      0.625      0.336
    
         Epoch   gpu_mem       box       obj       cls    labels  img_size
           2/2     6.26G   0.04695   0.03366   0.02354        44       640: 100% 358/358 [02:50<00:00,  2.10it/s]
                   Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 182/182 [00:59<00:00,  3.07it/s]
                     all       5823      15787      0.716      0.633       0.69      0.409
    
    3 epochs completed in 0.198 hours.
    Optimizer stripped from runs/train/exp2/weights/last.pt, 14.5MB
    Optimizer stripped from runs/train/exp2/weights/best.pt, 14.5MB
    
    Validating runs/train/exp2/weights/best.pt...
    Fusing layers... 
    YOLOv5s summary: 213 layers, 7064065 parameters, 0 gradients, 15.9 GFLOPs
                   Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 182/182 [01:04<00:00,  2.81it/s]
                     all       5823      15787      0.715      0.634       0.69      0.409
               aeroplane       5823        484      0.769      0.652      0.727      0.364
                 bicycle       5823        380      0.714      0.708      0.732      0.442
                    bird       5823        629      0.728      0.596      0.662      0.366
                    boat       5823        491      0.493      0.483      0.451      0.211
                  bottle       5823        733      0.554      0.574      0.563      0.317
                     bus       5823        320      0.793      0.722      0.773      0.557
                     car       5823       1173      0.696       0.72      0.773      0.453
                     cat       5823        618      0.788      0.759      0.801      0.499
                   chair       5823       1449      0.666      0.549      0.618      0.373
                     cow       5823        347      0.696      0.585      0.672      0.408
             diningtable       5823        374      0.801      0.476      0.584      0.278
                     dog       5823        773      0.826      0.597      0.761      0.501
                   horse       5823        373      0.783      0.705      0.783      0.502
               motorbike       5823        376      0.704      0.737      0.759      0.443
                  person       5823       5110      0.761      0.803      0.837      0.519
             pottedplant       5823        542      0.459      0.506      0.467      0.231
                   sheep       5823        485       0.72      0.636      0.695      0.423
                    sofa       5823        387      0.721      0.475      0.608      0.352
                   train       5823        329      0.879      0.748      0.823      0.509
               tvmonitor       5823        414      0.742      0.652      0.712      0.431
    Results saved to runs/train/exp2
    

    3.2 The voc2coco conversion scripts trans_voc2yolo.py, calculate_dataset.py, and pascal_voc_classes.json

    The scripts below are the original yolov3_spp versions. To run them with yolov5 as I did above, just change the paths as described earlier.

    pascal_voc_classes.json

    {
        "aeroplane": 1,
        "bicycle": 2,
        "bird": 3,
        "boat": 4,
        "bottle": 5,
        "bus": 6,
        "car": 7,
        "cat": 8,
        "chair": 9,
        "cow": 10,
        "diningtable": 11,
        "dog": 12,
        "horse": 13,
        "motorbike": 14,
        "person": 15,
        "pottedplant": 16,
        "sheep": 17,
        "sofa": 18,
        "train": 19,
        "tvmonitor": 20
    }
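    Note that the ids in this json are 1-based, while YOLO class indices are 0-based; the conversion script subtracts 1 when writing labels. A quick illustration (with an abbreviated class dict):

    ```python
    import json

    # abbreviated version of pascal_voc_classes.json
    class_json = '{"aeroplane": 1, "bicycle": 2, "tvmonitor": 20}'
    class_dict = json.loads(class_json)

    # the script writes class_dict[name] - 1 into the YOLO txt labels
    yolo_ids = {name: idx - 1 for name, idx in class_dict.items()}
    print(yolo_ids)  # {'aeroplane': 0, 'bicycle': 1, 'tvmonitor': 19}
    ```
    
    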
    

    The original trans_voc2yolo.py script:

    """
    本脚本有两个功能:
    1.将voc数据集标注信息(.xml)转为yolo标注格式(.txt),并将图像文件复制到相应文件夹
    2.根据json标签文件,生成对应names标签(my_data_label.names)
    """
    import os
    from tqdm import tqdm
    from lxml import etree
    import json
    import shutil
    
    
    # VOC dataset root and version
    voc_root = "/data/VOCdevkit"
    voc_version = "VOC2012"
    
    # txt files listing the train and val splits to convert
    train_txt = "train.txt"
    val_txt = "val.txt"
    
    # output directory for the converted files
    save_file_root = "./my_yolo_dataset"
    
    # json file mapping label names to class ids
    label_json_path = './data/pascal_voc_classes.json'
    
    # build the VOC images, xml, and txt paths
    voc_images_path = os.path.join(voc_root, voc_version, "JPEGImages")
    voc_xml_path = os.path.join(voc_root, voc_version, "Annotations")
    train_txt_path = os.path.join(voc_root, voc_version, "ImageSets", "Main", train_txt)
    val_txt_path = os.path.join(voc_root, voc_version, "ImageSets", "Main", val_txt)
    
    # check that the files/folders exist
    assert os.path.exists(voc_images_path), "VOC images path not exist..."
    assert os.path.exists(voc_xml_path), "VOC xml path not exist..."
    assert os.path.exists(train_txt_path), "VOC train txt file not exist..."
    assert os.path.exists(val_txt_path), "VOC val txt file not exist..."
    assert os.path.exists(label_json_path), "label_json_path does not exist..."
    if os.path.exists(save_file_root) is False:
        os.makedirs(save_file_root)
    
    
    def parse_xml_to_dict(xml):
        """
        Parse an xml tree into a dict, following tensorflow's recursive_parse_xml_to_dict.
        Args:
            xml: xml tree obtained by parsing XML file contents using lxml.etree
        Returns:
            Python dictionary holding XML contents.
        """
    
        if len(xml) == 0:  # reached a leaf; return the tag's text directly
            return {xml.tag: xml.text}
    
        result = {}
        for child in xml:
            child_result = parse_xml_to_dict(child)  # recurse into child tags
            if child.tag != 'object':
                result[child.tag] = child_result[child.tag]
            else:
                if child.tag not in result:  # there may be multiple objects, so collect them in a list
                    result[child.tag] = []
                result[child.tag].append(child_result[child.tag])
        return {xml.tag: result}
    
    
    def translate_info(file_names: list, save_root: str, class_dict: dict, train_val='train'):
        """
        Convert the xml annotation info into the txt label files used by YOLO.
        :param file_names:
        :param save_root:
        :param class_dict:
        :param train_val:
        :return:
        """
        save_txt_path = os.path.join(save_root, train_val, "labels")
        if os.path.exists(save_txt_path) is False:
            os.makedirs(save_txt_path)
        save_images_path = os.path.join(save_root, train_val, "images")
        if os.path.exists(save_images_path) is False:
            os.makedirs(save_images_path)
    
        for file in tqdm(file_names, desc="translate {} file...".format(train_val)):
            # check that the image file exists
            img_path = os.path.join(voc_images_path, file + ".jpg")
            assert os.path.exists(img_path), "file:{} not exist...".format(img_path)
    
            # check that the xml file exists
            xml_path = os.path.join(voc_xml_path, file + ".xml")
            assert os.path.exists(xml_path), "file:{} not exist...".format(xml_path)
    
            # read xml
            with open(xml_path) as fid:
                xml_str = fid.read()
            xml = etree.fromstring(xml_str)
            data = parse_xml_to_dict(xml)["annotation"]
            img_height = int(data["size"]["height"])
            img_width = int(data["size"]["width"])
    
            # write object info into txt
            assert "object" in data.keys(), "file: '{}' lack of object key.".format(xml_path)
            if len(data["object"]) == 0:
                # skip this sample if the xml contains no objects
                print("Warning: in '{}' xml, there are no objects.".format(xml_path))
                continue
    
            with open(os.path.join(save_txt_path, file + ".txt"), "w") as f:
                for index, obj in enumerate(data["object"]):
                    # get each object's box info
                    xmin = float(obj["bndbox"]["xmin"])
                    xmax = float(obj["bndbox"]["xmax"])
                    ymin = float(obj["bndbox"]["ymin"])
                    ymax = float(obj["bndbox"]["ymax"])
                    class_name = obj["name"]
                    class_index = class_dict[class_name] - 1  # class ids start from 0
    
                    # extra check: some annotations have w or h equal to 0,
                    # which would make the regression loss NaN
                    if xmax <= xmin or ymax <= ymin:
                        print("Warning: in '{}' xml, there are some bbox w/h <=0".format(xml_path))
                        continue
    
                    # convert the box info to YOLO format
                    xcenter = xmin + (xmax - xmin) / 2
                    ycenter = ymin + (ymax - ymin) / 2
                    w = xmax - xmin
                    h = ymax - ymin
    
                    # absolute -> relative coordinates, rounded to 6 decimals
                    xcenter = round(xcenter / img_width, 6)
                    ycenter = round(ycenter / img_height, 6)
                    w = round(w / img_width, 6)
                    h = round(h / img_height, 6)
    
                    info = [str(i) for i in [class_index, xcenter, ycenter, w, h]]
    
                    if index == 0:
                        f.write(" ".join(info))
                    else:
                        f.write("\n" + " ".join(info))
    
            # copy image into save_images_path
            path_copy_to = os.path.join(save_images_path, img_path.split(os.sep)[-1])
            if os.path.exists(path_copy_to) is False:
                shutil.copyfile(img_path, path_copy_to)
    
    
    def create_class_names(class_dict: dict):
        keys = class_dict.keys()
        with open("./data/my_data_label.names", "w") as w:
            for index, k in enumerate(keys):
                if index + 1 == len(keys):
                    w.write(k)
                else:
                    w.write(k + "\n")
    
    
    def main():
        # read class_indict
        json_file = open(label_json_path, 'r')
        class_dict = json.load(json_file)
    
        # read all lines of train.txt, dropping empty lines
        with open(train_txt_path, "r") as r:
            train_file_names = [i for i in r.read().splitlines() if len(i.strip()) > 0]
        # convert VOC info to YOLO and copy images into the matching folders
        translate_info(train_file_names, save_file_root, class_dict, "train")
    
        # read all lines of val.txt, dropping empty lines
        with open(val_txt_path, "r") as r:
            val_file_names = [i for i in r.read().splitlines() if len(i.strip()) > 0]
        # convert VOC info to YOLO and copy images into the matching folders
        translate_info(val_file_names, save_file_root, class_dict, "val")
    
        # create the my_data_label.names file
        create_class_names(class_dict)
    
    
    if __name__ == "__main__":
        main()
    

    The original calculate_dataset.py script:

    """
    该脚本有3个功能:
    1.统计训练集和验证集的数据并生成相应.txt文件
    2.创建data.data文件,记录classes个数, train以及val数据集文件(.txt)路径和label.names文件路径
    3.根据yolov3-spp.cfg创建my_yolov3.cfg文件修改其中的predictor filters以及yolo classes参数(这两个参数是根据类别数改变的)
    """
    import os
    
    train_annotation_dir = "./my_yolo_dataset/train/labels"
    val_annotation_dir = "./my_yolo_dataset/val/labels"
    classes_label = "./data/my_data_label.names"
    cfg_path = "./cfg/yolov3-spp.cfg"
    
    assert os.path.exists(train_annotation_dir), "train_annotation_dir not exist!"
    assert os.path.exists(val_annotation_dir), "val_annotation_dir not exist!"
    assert os.path.exists(classes_label), "classes_label not exist!"
    assert os.path.exists(cfg_path), "cfg_path not exist!"
    
    
    def calculate_data_txt(txt_path, dataset_dir):
        # create my_data.txt file that record image list
        with open(txt_path, "w") as w:
            for file_name in os.listdir(dataset_dir):
                if file_name == "classes.txt":
                    continue
    
                img_path = os.path.join(dataset_dir.replace("labels", "images"),
                                        file_name.split(".")[0]) + ".jpg"
                line = img_path + "\n"
                assert os.path.exists(img_path), "file:{} not exist!".format(img_path)
                w.write(line)
    
    
    def create_data_data(create_data_path, label_path, train_path, val_path, classes_info):
        # create my_data.data file that record classes, train, valid and names info.
        # shutil.copyfile(label_path, "./data/my_data_label.names")
        with open(create_data_path, "w") as w:
            w.write("classes={}".format(len(classes_info)) + "\n")  # 记录类别个数
            w.write("train={}".format(train_path) + "\n")           # 记录训练集对应txt文件路径
            w.write("valid={}".format(val_path) + "\n")             # 记录验证集对应txt文件路径
            w.write("names=data/my_data_label.names" + "\n")        # 记录label.names文件路径
    
    
    def change_and_create_cfg_file(classes_info, save_cfg_path="./cfg/my_yolov3.cfg"):
        # create my_yolov3.cfg file changed predictor filters and yolo classes param.
        # this operation only deal with yolov3-spp.cfg
        filters_lines = [636, 722, 809]
        classes_lines = [643, 729, 816]
        cfg_lines = open(cfg_path, "r").readlines()
    
        for i in filters_lines:
            assert "filters" in cfg_lines[i-1], "filters param is not in line:{}".format(i-1)
            output_num = (5 + len(classes_info)) * 3
            cfg_lines[i-1] = "filters={}\n".format(output_num)
    
        for i in classes_lines:
            assert "classes" in cfg_lines[i-1], "classes param is not in line:{}".format(i-1)
            cfg_lines[i-1] = "classes={}\n".format(len(classes_info))
    
        with open(save_cfg_path, "w") as w:
            w.writelines(cfg_lines)
    
    
    def main():
        # count the train/val data and generate the corresponding txt files
        train_txt_path = "data/my_train_data.txt"
        val_txt_path = "data/my_val_data.txt"
        calculate_data_txt(train_txt_path, train_annotation_dir)
        calculate_data_txt(val_txt_path, val_annotation_dir)
    
        classes_info = [line.strip() for line in open(classes_label, "r").readlines() if len(line.strip()) > 0]
        # create data.data, recording the classes count and the paths of the train/val list files and label.names
        create_data_data("./data/my_data.data", classes_label, train_txt_path, val_txt_path, classes_info)
    
        # create my_yolov3.cfg from yolov3-spp.cfg, updating the predictor filters
        # and yolo classes params (both depend on the number of classes)
        change_and_create_cfg_file(classes_info)
    
    
    if __name__ == '__main__':
        main()
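    For reference, the filters value this script patches into the cfg is (5 + num_classes) * 3: each of the 3 anchors per scale predicts 4 box coordinates, 1 objectness score, and nc class scores. A quick check of that arithmetic:

    ```python
    def predictor_filters(nc, anchors_per_scale=3):
        # 4 box coords + 1 objectness + nc class scores, per anchor
        return (5 + nc) * anchors_per_scale

    print(predictor_filters(20))  # 75 for the 20 VOC classes
    print(predictor_filters(80))  # 255 for COCO
    ```
    
    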
    

    4. The YOLOv5 "Train Custom Data" Tutorial

    The official YOLOv5 "Train Custom Data" tutorial contains complete example code; here is a quick tour.

    4.1 Inference

    detect.py runs YOLOv5 inference on a variety of sources, automatically downloads the model from the latest YOLOv5 release, and saves results to runs/detect. For example:

    !python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images
    #display.Image(filename='runs/detect/exp/zidane.jpg', width=600)
    


    4.2 Validate accuracy

    Validate a model's accuracy on the COCO val or test-dev datasets. Models download automatically from the latest YOLOv5 release. To show results per class, use the --verbose flag.

    Download the COCO val 2017 dataset (1GB, 5000 images) and test model accuracy:

    # Download COCO val
    torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017val.zip', 'tmp.zip')
    !unzip -q tmp.zip -d ../datasets && rm tmp.zip
    
    # Run YOLOv5x on COCO val
    !python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half
    

    4.3 COCO test

    Download the COCO test2017 dataset (7GB, 40,000 images) and test model accuracy on test-dev (20,000 images, no labels). Results are saved to a *.json file, which is zipped and submitted to the evaluation server at https://competitions.codalab.org/competitions/20794.

    # Download COCO test-dev2017
    torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017labels.zip', 'tmp.zip')
    !unzip -q tmp.zip -d ../datasets && rm tmp.zip
    !f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f -d ../datasets/coco/images
    
    # Run YOLOv5x on COCO test
    !python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half --task test
    

    4.4 Training demo

    Train a YOLOv5 model on the COCO128 dataset with --data coco128.yaml, starting from pretrained weights with --weights yolov5s.pt, or from randomly initialized weights with --weights '' --cfg yolov5s.yaml (not recommended).

    • Pretrained models download automatically from the latest YOLOv5 release.
    • Auto-downloadable datasets include: COCO, COCO128, VOC, Argoverse, VisDrone, GlobalWheat, xView, Objects365, SKU-110K.
    • Training results are saved to runs/train/ with incrementing run directories, e.g. runs/train/exp2, runs/train/exp3, etc.
    • The cells below start TensorBoard and ClearML tracking. For ClearML, after installation run clearml-init; it connects to a ClearML server and pops up a prompt asking for user credentials. In your own server's settings page, press "Create new credentials", click "Copy to clipboard" in the popup, and paste the copied configuration into the prompt. Three more prompts follow; just press Enter to confirm each, and ClearML is set up.
    • If ClearML is not set up, training with --data coco128.yaml errors out; presumably this can be disabled in the config, but I have not looked into it.
# Launch TensorBoard
    %load_ext tensorboard
    %tensorboard --logdir runs/train
    
    # ClearML  (optional)
    %pip install -q clearml
    !clearml-init
    

Running it produces output like this:

    ClearML SDK setup process
    
    Please create new clearml credentials through the settings page in your `clearml-server` web app (e.g. http://localhost:8080//settings/workspace-configuration) 
    Or create a free account at https://app.clear.ml/settings/workspace-configuration
    
    In settings page, press "Create new credentials", then press "Copy to clipboard".
    
    Paste copied configuration here:
    api {
        # hongxu 张's workspace
        web_server: https://app.clear.ml
        api_server: https://api.clear.ml
        files_server: https://files.clear.ml
        credentials {
            "access_key" = "CGKJOS2I9UIBQEI8JI2L"
            "secret_key" = "1gq6WyjXaY0Pwg4vdOaqMIU3W4oOZ15Fqkxq39PoJmTv6gAUzd"
        }
    }
    Detected credentials key="XXXX" secret="1gq6***"
    WEB Host configured to: [https://app.clear.ml] # a prompt pops up here; just press Enter to confirm (same for the next two)
    API Host configured to: [https://api.clear.ml] 
    File Store Host configured to: [https://files.clear.ml] 
    
    ClearML Hosts configuration:
    Web App: https://app.clear.ml
    API: https://api.clear.ml
    File Store: https://files.clear.ml
    
    Verifying credentials ...
    Credentials verified!
    
    New configuration stored in /root/clearml.conf
    ClearML setup completed successfully.
    
    # Weights & Biases  (optional)
    """Running this cell immediately disconnected my Colab session, so I skipped it; training proceeded normally without it"""
    %pip install -q wandb
    import wandb
    wandb.login()
    
    # Train YOLOv5s on COCO128 for 3 epochs
    !python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
    

4.5 Visualization

4.5.1 ClearML Logging and Automation 🌟 NEW

ClearML is fully integrated into YOLOv5 to track your experiments, manage dataset versions, and even remotely execute training runs. To enable ClearML (using either your own open-source server or the free hosted one):

    pip install clearml
    clearml-init  # connect to a ClearML server
    

You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply by its unique ID. This helps you keep track of your data without any extra hassle. See the ClearML Tutorial for details.

4.5.2 Weights & Biases Logging

Weights & Biases (W&B) integrates with YOLOv5 for real-time visualization and cloud logging of training runs. This allows better run comparison and introspection, as well as improved visibility and collaboration among team members. Run pip install wandb to enable W&B, then train normally (you will be guided through setup on first use).

During training you can watch live updates at https://wandb.ai/home, and you can create and share detailed Result Reports. For more details, see the YOLOv5 Weights & Biases Tutorial.

4.5.3 Local Logging

Training results are automatically logged with TensorBoard and CSV loggers to runs/train, with a new incrementing directory created for each training run, e.g. runs/train/exp2, runs/train/exp3, and so on.

This directory contains train and val statistics, mosaics, labels, predictions, augmented mosaics, and metrics and charts including precision-recall (PR) curves and confusion matrices.
The results.csv file is updated after each epoch, and results.png is plotted from it once training completes. You can also plot any results.csv file manually:

    from utils.plots import plot_results
    plot_results('path/to/results.csv')  # plot 'results.csv' as 'results.png'
    

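Because results.csv is a plain CSV file, you can also inspect it programmatically, e.g. to find the epoch with the best mAP@0.5. Here is a small standard-library sketch; note that YOLOv5 pads its CSV column names with spaces, hence the strip() calls, and you should verify the metric column name against your own file:

```python
import csv

def best_map_epoch(csv_path, metric="metrics/mAP_0.5"):
    """Return (epoch, value) for the row with the highest given metric."""
    with open(csv_path, newline="") as f:
        # YOLOv5 pads both column names and values with spaces, so strip both.
        rows = [{k.strip(): v.strip() for k, v in row.items()}
                for row in csv.DictReader(f)]
    best = max(rows, key=lambda r: float(r[metric]))
    return int(best["epoch"]), float(best[metric])
```

For example, best_map_epoch('runs/train/exp/results.csv') would return the best epoch number and its mAP@0.5 value.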

4.6 Train Custom Data with Roboflow 🌟 NEW

Roboflow lets you easily organize, label, and preprocess a high-quality dataset of your own. Roboflow also makes it easy to set up an active learning pipeline, collaborate with your team on improving the dataset, and integrate directly into your model-building workflow with the roboflow pip package.

Custom training example: How to Train YOLOv5 On a Custom Dataset
Custom training notebook: Open In Colab

  • Original article: https://blog.csdn.net/qq_56591814/article/details/126277200