YOLOv5 Classification Models: Preprocessing Differences Between the OpenCV and PyTorch Implementations


    flyfish

    PyTorch (torchvision) wraps the PIL library for image loading.
    First, a quick comparison of how the two libraries read images:

    import cv2
    from PIL import Image
    import numpy as np

    full_path_file_name="/media/a//ILSVRC2012_val_00001244.JPEG"


    # OpenCV reads images in BGR channel order by default
    cv_image = cv2.imread(full_path_file_name)  # BGR
    print(cv_image.shape)  # (400, 500, 3) HWC
    cv_image = cv2.cvtColor(cv_image, cv2.COLOR_BGR2RGB)
    #print("cv_image:", cv_image)

    # PIL reads images in RGB channel order by default
    pil_image = Image.open(full_path_file_name)
    print("pil_image:", pil_image)
    numpy_image = np.array(pil_image)
    print(numpy_image.shape)  # (400, 500, 3) HWC RGB
    #print("numpy_image:", numpy_image)
    


    After converting BGR to RGB, OpenCV and PIL return identical pixel data.
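The BGR-to-RGB conversion is just a reversal of the last (channel) axis, so the equivalence can be checked with plain numpy. A minimal sketch using a small synthetic array in place of the JPEG above:

```python
import numpy as np

# Synthetic 2x2 "BGR" image standing in for cv2.imread output
bgr = np.array([[[10, 20, 30], [40, 50, 60]],
                [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)

# cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB) is equivalent to reversing the channel axis
rgb = bgr[..., ::-1]

print(rgb[0, 0])  # [30 20 10]: B and R swapped, G unchanged
```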

    torchvision's Resize with a single integer matches the shorter side of the image to that size.
    When height > width, the image is resized to (size × height / width, size):
    the width (the shorter side) becomes size and the height is scaled by the same ratio.
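The shorter-side rule can be sketched as a small helper (`shorter_side_dims` is a hypothetical name, not a torchvision function):

```python
def shorter_side_dims(img_w: int, img_h: int, size: int) -> tuple:
    """Return (width, height) with the shorter side matched to `size`,
    mirroring torchvision's Resize given a single int argument."""
    if img_h >= img_w:
        return size, int(size * img_h / img_w)
    return int(size * img_w / img_h), size

print(shorter_side_dims(500, 400, 224))  # (280, 224): height is shorter, becomes 224
print(shorter_side_dims(400, 500, 224))  # (224, 280): width is shorter, becomes 224
```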

    The relevant implementation lives in:

    https://github.com/pytorch/vision/
    vision/torchvision/transforms/functional.py

    The resulting problem
    PyTorch's transforms.Resize uses bilinear interpolation with antialiasing, which cv2.resize does not apply, so the two pipelines produce different pixels and thus different inference results.

    def resize(img: Tensor, size: List[int], interpolation: InterpolationMode = InterpolationMode.BILINEAR,
               max_size: Optional[int] = None) -> Tensor:

    The torchvision docstring warns:

    The output image might be different depending on its type: when downsampling, the interpolation of PIL images
    and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences
    in the performance of a network. Therefore, it is preferable to train and serve a model with the same input
    types.
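What antialiasing changes can be seen even in 1-D with plain numpy: decimating a high-frequency signal without a prefilter aliases it, while averaging neighbours first (a crude stand-in for PIL's antialiasing filter, not its actual algorithm) preserves the signal's mean level:

```python
import numpy as np

# 16 samples alternating between 0 and 255: the highest frequency representable
signal = np.array([0, 255] * 8, dtype=np.float64)

# Downsample by 2 with no prefilter: keeps only even-indexed samples (aliasing)
naive = signal[::2]

# Downsample by 2 with a box prefilter: average each pair before keeping one value
filtered = signal.reshape(-1, 2).mean(axis=1)

print(naive)     # all zeros: the oscillation aliased to a flat signal
print(filtered)  # all 127.5: the true average survives
```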

    Comparing the differences:

    from skimage.metrics import structural_similarity as ssim
    from skimage.metrics import peak_signal_noise_ratio as psnr
    from skimage.metrics import mean_squared_error as mse


    target_size = 224

    img_w = pil_image.width
    img_h = pil_image.height

    # Match the shorter side to target_size, scale the longer side proportionally
    if img_h >= img_w:
        image_width, image_height = target_size, int(target_size * img_h / img_w)
    else:
        image_width, image_height = int(target_size * img_w / img_h), target_size

    print(image_width, image_height)

    # PIL bilinear resize (what torchvision applies to PIL inputs)
    pil_resize_img = pil_image.resize((image_width, image_height), Image.BILINEAR)
    #print("pil_resize_img:", np.array(pil_resize_img))
    pil_resize_img = np.array(pil_resize_img)

    # OpenCV resize with every interpolation mode
    cv_resize_img0 = cv2.resize(cv_image, (image_width, image_height), interpolation=cv2.INTER_CUBIC)
    #print("cv_resize_img:", cv_resize_img0)
    cv_resize_img1 = cv2.resize(cv_image, (image_width, image_height), interpolation=cv2.INTER_NEAREST)
    cv_resize_img2 = cv2.resize(cv_image, (image_width, image_height), interpolation=cv2.INTER_LINEAR)
    cv_resize_img3 = cv2.resize(cv_image, (image_width, image_height), interpolation=cv2.INTER_AREA)
    cv_resize_img4 = cv2.resize(cv_image, (image_width, image_height), interpolation=cv2.INTER_LANCZOS4)
    cv_resize_img5 = cv2.resize(cv_image, (image_width, image_height), interpolation=cv2.INTER_LINEAR_EXACT)
    cv_resize_img6 = cv2.resize(cv_image, (image_width, image_height), interpolation=cv2.INTER_NEAREST_EXACT)


    print(mse(pil_resize_img, pil_resize_img))
    print(mse(pil_resize_img, cv_resize_img0))
    print(mse(pil_resize_img, cv_resize_img1))
    print(mse(pil_resize_img, cv_resize_img2))
    print(mse(pil_resize_img, cv_resize_img3))
    print(mse(pil_resize_img, cv_resize_img4))
    print(mse(pil_resize_img, cv_resize_img5))
    print(mse(pil_resize_img, cv_resize_img6))
    

    structural_similarity, peak_signal_noise_ratio, or mean_squared_error can all be used for the comparison;
    here mean_squared_error is used. Output:

    0.0
    30.721508290816328
    103.37267219387755
    13.030575042517007
    2.272438350340136
    36.33767538265306
    13.034412202380953
    51.2258237670068
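For reference, mean_squared_error is just the pixel-wise mean of squared differences. A minimal numpy equivalent (the cast to float matters, since uint8 subtraction would wrap around):

```python
import numpy as np

def mse_manual(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error over all pixels and channels,
    equivalent to skimage.metrics.mean_squared_error."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.mean((a - b) ** 2))

x = np.array([[0, 255]], dtype=np.uint8)
y = np.array([[2, 251]], dtype=np.uint8)
print(mse_manual(x, x))  # 0.0
print(mse_manual(x, y))  # (2^2 + 4^2) / 2 = 10.0
```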
    

    PyTorch's recommendation: "Therefore, it is preferable to train and serve a model with the same input types." In other words, use the same preprocessing pipeline for training and deployment.
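One way to follow that advice at serve time is to keep the PIL-based pipeline from training instead of switching to cv2. A sketch under that assumption (`preprocess` is a hypothetical helper, not a torchvision API):

```python
from PIL import Image
import numpy as np

def preprocess(img: Image.Image, size: int = 224) -> np.ndarray:
    """PIL-based serve-time preprocessing mirroring a torchvision-style pipeline:
    shorter-side resize with bilinear interpolation, then a CHW float array."""
    w, h = img.size
    if h >= w:
        new_w, new_h = size, int(size * h / w)
    else:
        new_w, new_h = int(size * w / h), size
    img = img.resize((new_w, new_h), Image.BILINEAR)  # PIL resize, as in training
    arr = np.asarray(img, dtype=np.float32) / 255.0   # HWC, scaled to [0, 1]
    return arr.transpose(2, 0, 1)                     # CHW, like ToTensor

# Demo with a solid-colour image standing in for a real input
demo = Image.new("RGB", (500, 400), color=(128, 64, 32))
out = preprocess(demo)
print(out.shape)  # (3, 224, 280): CHW after shorter-side resize
```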

  • Original article: https://blog.csdn.net/flyfish1986/article/details/134555807