• Resolving the errors when loading a YOLOv8 ONNX model with cv2.dnn.readNetFromONNX


    Loading /home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.onnx for ONNX OpenCV DNN inference...
    [ERROR:0@3.062] global onnx_importer.cpp:1051 handleNode DNN/ONNX: ERROR during processing node with 2 inputs and 2 outputs: [Split]:(onnx_node!/model.22/Split) from domain='ai.onnx'
    Traceback (most recent call last):
    File "/home/inference/Amplitudemode_AI/all_model_and_pred/AI_Ribfrac_ths/onnx_test_seg/infer-seg.py", line 167, in
    model = AutoBackend(weights="/home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.onnx", dnn=True)
    File "/home/inference/miniconda3/envs/yolov8/lib/python3.10/site-packages/ultralytics/nn/autobackend.py", line 124, in __init__
    net = cv2.dnn.readNetFromONNX(w)
    cv2.error: OpenCV(4.7.0) /io/opencv/modules/dnn/src/onnx/onnx_importer.cpp:1073: error: (-2:Unspecified error) in function 'handleNode'
    > Node [Split@ai.onnx]:(onnx_node!/model.22/Split) parse error: OpenCV(4.7.0) /io/opencv/modules/dnn/src/layers/slice_layer.cpp:274: error: (-215:Assertion failed) splits > 0 && inpShape[axis_rw] % splits == 0 in function 'getMemoryShapes'

    The above is the error output produced when trying to load the model with OpenCV.

    Next, I searched the issue tracker of the official YOLOv8 project on GitHub. After some trial and error, the search keywords that finally worked were:

    ONNX DNN  splits > 0 && inpShape[axis_rw] % splits == 0 in function 'getMemoryShapes 

    The matching issue: Exported ONNX cannot be opened in OpenCV · Issue #226 · ultralytics/ultralytics · GitHub (https://github.com/ultralytics/ultralytics/issues/226)

    The fix found there is to set the following when exporting (the key is adding opset=11):

    yolo mode=export model=runs/detect/train/weights/best.pt imgsz=[640,640] format=onnx opset=11

    The actual conversion code is as follows:

    from ultralytics import YOLO

    model = YOLO("/home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.pt")
    success = model.export(format="onnx", opset=11, simplify=True)  # export the model to ONNX format
    assert success
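
    As a quick sanity check (a minimal sketch, not from the original post; it simply re-reads the same last.onnx with the call that failed earlier), the re-exported model can be loaded with cv2.dnn directly:

    import cv2

    # The opset=11 export produces a Split node that OpenCV's ONNX importer can parse,
    # so this call should no longer fail with the getMemoryShapes assertion shown above.
    net = cv2.dnn.readNetFromONNX(
        "/home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.onnx")
    print("loaded:", not net.empty())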

    Inference with the converted ONNX through the official API is as follows:

    from ultralytics import YOLO

    model = YOLO("/home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.onnx")  # load the model
    results = model.predict(source='/home/inference/tt', imgsz=640, save=True, boxes=False)  # save the plotted images

    Inference runs successfully.
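
    For reference, the returned results can be inspected roughly like this; this is a sketch based on the standard ultralytics Results attributes (boxes, masks), which are not shown in the original post:

    # Each element of `results` is a Results object (one per input image).
    for r in results:
        print(r.boxes.xyxy)          # predicted boxes in xyxy format
        if r.masks is not None:      # segmentation masks, present for -seg models
            print(r.masks.data.shape)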

    P.S. 2024-02-22

    The inference above through the official API was missing the key dnn=True setting; when it is added, an error is raised.
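
    For clarity, the failing configuration via the Python API looks roughly like this (a sketch; it assumes dnn is accepted as a predict() keyword, mirroring the dnn flag used on the CLI below):

    from ultralytics import YOLO

    model = YOLO("/home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.onnx")
    # dnn=True routes ONNX inference through cv2.dnn instead of onnxruntime;
    # with the segmentation model this is where the error appears.
    results = model.predict(source='/home/inference/tt', imgsz=640, dnn=True)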

    So I tested with the official model instead, first exporting it to ONNX with the following code:

    # -*-coding:utf-8-*-
    from ultralytics import YOLO

    # Load a model
    model = YOLO('yolov8n-seg.pt')  # load an official model

    # Export the model
    model.export(format='onnx', opset=12)

    opset=12 is used here because the official example has been upgraded to 12: https://github.com/ultralytics/ultralytics/tree/main/examples/YOLOv8-OpenCV-ONNX-Python
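
    That example drives the exported model through cv2.dnn directly instead of through the ultralytics predictor. A simplified sketch of that pattern follows (the image path bus.jpg is just a stand-in, and the real example does more pre- and post-processing):

    import cv2

    net = cv2.dnn.readNetFromONNX("yolov8n-seg.onnx")

    img = cv2.imread("bus.jpg")  # any test image
    blob = cv2.dnn.blobFromImage(img, scalefactor=1 / 255.0, size=(640, 640), swapRB=True)
    net.setInput(blob)

    # A YOLOv8-seg ONNX model has two outputs: the detection head and the mask prototypes.
    out_names = net.getUnconnectedOutLayersNames()
    outputs = net.forward(out_names)
    for name, out in zip(out_names, outputs):
        print(name, out.shape)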

    Testing inference from the command line fails with the following error:

    yolo predict task=segment model=yolov8n-seg.onnx imgsz=640 dnn
    WARNING ⚠️ 'source' is missing. Using default 'source=/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/assets'.
    Ultralytics YOLOv8.1.17 🚀 Python-3.9.18 torch-1.12.1+cu102 CUDA:0 (Tesla T4, 14927MiB)
    Loading yolov8n-seg.onnx for ONNX OpenCV DNN inference...
    WARNING ⚠️ Metadata not found for 'model=yolov8n-seg.onnx'
    Traceback (most recent call last):
    File "/home/inference/miniconda3/envs/yolov8v2/bin/yolo", line 8, in
    sys.exit(entrypoint())
    File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/cfg/__init__.py", line 568, in entrypoint
    getattr(model, mode)(**overrides) # default args from model
    File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/model.py", line 429, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
    File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/predictor.py", line 213, in predict_cli
    for _ in gen: # noqa, running CLI inference without accumulating any outputs (do not modify)
    File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 43, in generator_context
    response = gen.send(None)
    File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/predictor.py", line 290, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
    File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/models/yolo/segment/predict.py", line 30, in postprocess
    p = ops.non_max_suppression(
    File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/utils/ops.py", line 230, in non_max_suppression
    output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs
    RuntimeError: Trying to create tensor with negative dimension -837: [0, -837]

    The matching issue for this error:

    Error while inferencing with DNN module using CLI and ONNX export · Issue #2178 · ultralytics/ultralytics · GitHub

    No one in that thread has solved it. As a side note, a detection model can be run through DNN inference without problems.

    Some references suggested this might be a torch version issue; I tried 2.0.2, 1.12.1 and 1.11.0 without success (along the way I also set do_constant_folding=False, following the comment in the code "WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False", which did not help either). It might also be an OpenCV version issue, but 4.9, 4.8 and 4.7 all failed as well. This feels like a big pit.
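
    One way to narrow this down further (a debugging sketch not tried above; it assumes onnxruntime is installed) is to feed the same dummy input through onnxruntime and through cv2.dnn and compare the raw output shapes; a mismatch would point at the OpenCV backend rather than at ultralytics' post-processing:

    import cv2
    import numpy as np
    import onnxruntime as ort

    blob = np.zeros((1, 3, 640, 640), dtype=np.float32)  # dummy NCHW input

    # Reference outputs from onnxruntime.
    sess = ort.InferenceSession("yolov8n-seg.onnx")
    ort_outs = sess.run(None, {sess.get_inputs()[0].name: blob})
    print("onnxruntime:", [o.shape for o in ort_outs])

    # Same input through OpenCV DNN.
    net = cv2.dnn.readNetFromONNX("yolov8n-seg.onnx")
    net.setInput(blob)
    cv_outs = net.forward(net.getUnconnectedOutLayersNames())
    print("cv2.dnn:", [o.shape for o in cv_outs])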

  • Original article: https://blog.csdn.net/qq_36401512/article/details/136189767