• DeepStream Python YOLOv5 usage notes


    deepstream_python_apps

    1. Download NVIDIA's official deepstream_python_apps into /opt/nvidia/deepstream/deepstream/sources. Check out the release matching your DeepStream version; I am on DeepStream 6.0, so I cloned the v1.1.0 tag.

    2. Install the dependencies following HOWTO.md, mainly the Gst Python bindings and the pyds module.

    3. Run one of the official sample apps to confirm the environment is set up correctly; a minimal import check is sketched below.
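
    As a quick sanity check (a minimal sketch of my own, not one of the shipped samples), the two bindings can be imported directly; if this script runs without errors, the environment is usable:

    #!/usr/bin/env python3
    # Environment smoke test: verifies Gst Python and pyds both load.
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    import pyds  # DeepStream Python bindings installed per HOWTO.md

    Gst.init(None)
    print("GStreamer:", Gst.version_string())
    print("pyds imported OK")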

    deepstream_python_yolov5

    1. Download the yolov5-deepstream-python code for YOLOv5.

    2. Build the custom plugin library, setting CUDA_VER to match your CUDA version:

    CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo/

    3. Convert the model into a TensorRT engine following tensorrtx's yolov5.

    First, adjust the parameters as needed, mainly the input width/height and the batch size:

    yololayer.h    

    static constexpr int CLASS_NUM = 10;
    static constexpr int INPUT_H = 1088;  // yolov5's input height and width must be divisible by 32.
    static constexpr int INPUT_W = 1088;

    yolov5.cpp

    #define USE_FP16  // set USE_INT8 or USE_FP16 or USE_FP32
    #define DEVICE 0  // GPU id
    #define NMS_THRESH 0.4
    #define CONF_THRESH 0.5
    #define BATCH_SIZE 20
    #define MAX_IMAGE_INPUT_SIZE_THRESH 3000 * 3000  // ensure it exceeds the maximum size of the input images!

    Next, generate the .wts weights file:

    // clone code according to above #Different versions of yolov5
    // download https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5s.pt
    cp {tensorrtx}/yolov5/gen_wts.py {ultralytics}/yolov5
    cd {ultralytics}/yolov5
    python gen_wts.py -w yolov5s.pt -o yolov5s.wts
    // a file 'yolov5s.wts' will be generated.
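
    For reference, the generated .wts file is a plain-text weight dump: the first line holds the tensor count, and each following line holds a tensor name, its element count, and the values as big-endian float32 hex words. Below is a minimal reader sketch (my own illustration of the format, not part of either repo):

    import struct

    def read_wts(path):
        """Parse a tensorrtx-style .wts file into {name: list of floats}."""
        weights = {}
        with open(path) as f:
            count = int(f.readline())              # number of weight tensors
            for _ in range(count):
                parts = f.readline().split()
                name, n = parts[0], int(parts[1])
                # every value is one float32 encoded as 8 hex characters
                weights[name] = [struct.unpack('>f', bytes.fromhex(h))[0]
                                 for h in parts[2:2 + n]]
        return weights

    w = read_wts('yolov5s.wts')
    print(len(w), 'tensors; first:', next(iter(w)))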

    Then build tensorrtx's yolov5 and serialize the engine:

    cd {tensorrtx}/yolov5/
    // update CLASS_NUM in yololayer.h if your model is trained on custom dataset
    mkdir build
    cd build
    cp {ultralytics}/yolov5/yolov5s.wts {tensorrtx}/yolov5/build
    cmake ..
    make
    sudo ./yolov5 -s [.wts] [.engine] [n/s/m/l/x/n6/s6/m6/l6/x6 or c/c6 gd gw]  // serialize model to plan file
    sudo ./yolov5 -d [.engine] [image folder]  // deserialize and run inference, the images in [image folder] will be processed.
    // For example yolov5s
    sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
    sudo ./yolov5 -d yolov5s.engine ../samples
    // For example Custom model with depth_multiple=0.17, width_multiple=0.25 in yolov5.yaml
    sudo ./yolov5 -s yolov5_custom.wts yolov5.engine c 0.17 0.25
    sudo ./yolov5 -d yolov5.engine ../samples
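
    Before wiring the engine into DeepStream, it can be worth deserializing it with the TensorRT Python API to confirm the input/output bindings. A sketch, assuming TensorRT 8.x (as shipped with DeepStream 6.0); the plugin path is a placeholder to adjust to your build:

    import ctypes
    import tensorrt as trt

    # The custom YoloLayer plugin must be registered before deserialization,
    # or the engine will fail to load. Adjust the path to your own build.
    ctypes.CDLL('/path/to/tensorrtx/yolov5/build/libmyplugins.so')

    logger = trt.Logger(trt.Logger.INFO)
    trt.init_libnvinfer_plugins(logger, '')

    with open('yolov5s.engine', 'rb') as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    # Expect one input binding (data) and one output binding (prob).
    for i in range(engine.num_bindings):
        kind = 'INPUT ' if engine.binding_is_input(i) else 'OUTPUT'
        print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))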

    4. Check the paths in deepstream_yolov5_config.txt and main.py. Step 3 also produces the libmyplugins.so plugin; since importing ctypes raised an error on my machine, I commented those lines out (the plugin is loaded with LD_PRELOAD at run time in step 6 instead):

    #import ctypes
    import pyds
    #ctypes.cdll.LoadLibrary('/home/nvidia/lefugang/tensorrtx/yolov5/build/libmyplugins.so')

    5. Modify the sink element in main.py so that results are printed to the terminal and no display is needed. The main change is removing the nvegltransform element, which is only used together with nveglglessink; a fakesink takes the sink's place:

    #!/usr/bin/env python3
    ################################################################################
    # SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
    # SPDX-License-Identifier: Apache-2.0
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    ################################################################################

    import sys
    # import keyboard
    sys.path.append('../')
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import GObject, Gst
    from common.is_aarch_64 import is_aarch64
    from common.bus_call import bus_call
    #import ctypes
    import pyds
    #ctypes.cdll.LoadLibrary('/home/nvidia/lefugang/tensorrtx/yolov5/build/libmyplugins.so')

    PGIE_CLASS_ID_VEHICLE = 0
    PGIE_CLASS_ID_BICYCLE = 1
    PGIE_CLASS_ID_PERSON = 2
    PGIE_CLASS_ID_ROADSIGN = 3


    def osd_sink_pad_buffer_probe(pad, info, u_data):
        frame_number = 0
        # Initializing object counter with 0.
        num_rects = 0
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            print("Unable to get GstBuffer ")
            return

        # Retrieve batch metadata from the gst_buffer
        # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
        # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            try:
                # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta.
                # The casting is done by pyds.NvDsFrameMeta.cast()
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone.
                #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
                frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            except StopIteration:
                break

            # Acquiring a display meta object. The memory ownership remains in
            # the C code so downstream plugins can still access it. Otherwise
            # the garbage collector will claim it when this probe function exits.
            display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
            frame_number = frame_meta.frame_num
            num_rects = frame_meta.num_obj_meta
            l_obj = frame_meta.obj_meta_list
            while l_obj is not None:
                try:
                    # Casting l_obj.data to pyds.NvDsObjectMeta
                    #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                    obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                except StopIteration:
                    break
                # Print each detection to the terminal
                print(obj_meta.class_id, obj_meta.obj_label, obj_meta.confidence)
                # Set the border width in pixels and the bbox background color in rgba
                obj_meta.rect_params.border_width = 0
                obj_meta.rect_params.has_bg_color = 1
                obj_meta.rect_params.bg_color.set(0.0, 0.5, 0.3, 0.4)
                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break

            display_meta.num_labels = 1
            py_nvosd_text_params = display_meta.text_params[0]
            # Setting display text to be shown on screen
            # Note that the pyds module allocates a buffer for the string, and the
            # memory will not be claimed by the garbage collector.
            # Reading the display_text field here will return the C address of the
            # allocated string. Use pyds.get_string() to get the string content.
            # py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

            # Now set the offsets where the string should appear
            py_nvosd_text_params.x_offset = 10
            py_nvosd_text_params.y_offset = 12
            # Font, font-color and font-size
            py_nvosd_text_params.font_params.font_name = "Serif"
            py_nvosd_text_params.font_params.font_size = 10
            # set(red, green, blue, alpha); set to White
            py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)
            # Text background color
            py_nvosd_text_params.set_bg_clr = 1
            # set(red, green, blue, alpha); set to Black
            py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
            # Using pyds.get_string() to get display_text as string
            # print(pyds.get_string(py_nvosd_text_params.display_text))
            pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK


    def main(args):
        # Check input arguments
        if len(args) != 2:
            sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
            sys.exit(1)

        # Standard GStreamer initialization
        GObject.threads_init()
        Gst.init(None)

        # Create gstreamer elements
        # Create Pipeline element that will form a connection of other elements
        print("Creating Pipeline \n ")
        pipeline = Gst.Pipeline()
        if not pipeline:
            sys.stderr.write(" Unable to create Pipeline \n")

        # Source element for reading from the file
        print("Creating Source \n ")
        source = Gst.ElementFactory.make("filesrc", "file-source")
        if not source:
            sys.stderr.write(" Unable to create Source \n")

        # Since the data format in the input file is elementary h264 stream,
        # we need a h264parser
        print("Creating H264Parser \n")
        h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
        if not h264parser:
            sys.stderr.write(" Unable to create h264 parser \n")

        # Use nvdec_h264 for hardware accelerated decode on GPU
        print("Creating Decoder \n")
        decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
        if not decoder:
            sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

        # Create nvstreammux instance to form batches from one or more sources.
        streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
        if not streammux:
            sys.stderr.write(" Unable to create NvStreamMux \n")

        # Use nvinfer to run inferencing on decoder's output,
        # behaviour of inferencing is set through config file
        pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
        if not pgie:
            sys.stderr.write(" Unable to create pgie \n")

        # Use convertor to convert from NV12 to RGBA as required by nvosd
        nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
        if not nvvidconv:
            sys.stderr.write(" Unable to create nvvidconv \n")

        # Create OSD to draw on the converted RGBA buffer
        nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
        if not nvosd:
            sys.stderr.write(" Unable to create nvosd \n")

        # Per step 5, nvegltransform is no longer added to the pipeline and a
        # fakesink replaces nveglglessink, so results go to the terminal only.
        if is_aarch64():
            transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")  # leftover; never added
        print("Creating FakeSink \n")
        sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
        if not sink:
            sys.stderr.write(" Unable to create sink \n")

        print("Playing file %s " % args[1])
        source.set_property('location', args[1])
        streammux.set_property('width', 1920)
        streammux.set_property('height', 1080)
        streammux.set_property('batch-size', 1)
        streammux.set_property('batched-push-timeout', 4000000)
        pgie.set_property('config-file-path', "config/deepstream_yolov5_config.txt")

        print("Adding elements to Pipeline \n")
        pipeline.add(source)
        pipeline.add(h264parser)
        pipeline.add(decoder)
        pipeline.add(streammux)
        pipeline.add(pgie)
        pipeline.add(nvvidconv)
        pipeline.add(nvosd)
        pipeline.add(sink)

        # We link the elements together:
        # file-source -> h264-parser -> nvh264-decoder ->
        # nvinfer -> nvvidconv -> nvosd -> video-renderer
        print("Linking elements in the Pipeline \n")
        source.link(h264parser)
        h264parser.link(decoder)
        sinkpad = streammux.get_request_pad("sink_0")
        if not sinkpad:
            sys.stderr.write(" Unable to get the sink pad of streammux \n")
        srcpad = decoder.get_static_pad("src")
        if not srcpad:
            sys.stderr.write(" Unable to get source pad of decoder \n")
        srcpad.link(sinkpad)
        streammux.link(pgie)
        pgie.link(nvvidconv)
        nvvidconv.link(nvosd)
        nvosd.link(sink)

        # Create an event loop and feed gstreamer bus messages to it
        #GObject.timeout_add_seconds(5, pipeline_pause(pipeline))
        loop = GObject.MainLoop()
        bus = pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message", bus_call, loop)

        # Add a probe to get informed of the generated metadata; we add the probe
        # to the sink pad of the osd element, since by that time, the buffer
        # would have got all the metadata.
        osdsinkpad = nvosd.get_static_pad("sink")
        if not osdsinkpad:
            sys.stderr.write(" Unable to get sink pad of nvosd \n")
        osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

        print("Starting pipeline \n")
        pipeline.set_state(Gst.State.PLAYING)
        try:
            loop.run()
        except:
            pass
        # cleanup
        pipeline.set_state(Gst.State.NULL)


    if __name__ == '__main__':
        sys.exit(main(sys.argv))
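
    For comparison, if on-screen rendering on Jetson is wanted after all, the original wiring keeps nvegltransform in front of nveglglessink. A sketch of the lines that would change (based on the stock deepstream-test1 app, not tested here):

    # On-screen display variant: nvegltransform must sit between nvosd and
    # nveglglessink on Jetson, which is why both were dropped together above.
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)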

    6. Run (note: the log below happens to come from an RTSP, multi-source variant of the app, hence the source_bin and tiler messages):

    LD_PRELOAD=/home/nvidia/lfg/tensorrtx/yolov5/build/libmyplugins.so python main.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
    Creating Pipeline
    Creating streamux
    Creating source_bin 0
    Creating source bin
    source-bin-00
    Creating Pgie
    Creating tiler
    Creating nvvidconv
    Creating nvosd
    Creating transform
    Creating EGLSink
    Atleast one of the sources is live
    Adding elements to Pipeline
    Linking elements in the Pipeline
    Now playing...
    1 : rtsp://admin:asdf1234@10.1.7.220:554
    Starting pipeline
    0:00:04.399294673 9779 0x5561021e30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/yolov5-deepstream-python/best.engine
    INFO: [Implicit Engine Info]: layers num: 2
    0 INPUT kFLOAT data 3x1088x1088
    1 OUTPUT kFLOAT prob 6001x1x1
    0:00:04.399614801 9779 0x5561021e30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/yolov5-deepstream-python/best.engine
    0:00:04.429867023 9779 0x5561021e30 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:config/deepstream_yolov5_config.txt sucessfully
    Decodebin child added: source
    Decodebin child added: decodebin0
    Decodebin child added: rtph264depay0
    Decodebin child added: h264parse0
    Decodebin child added: capsfilter0
    Decodebin child added: nvv4l2decoder0
    Opening in BLOCKING MODE
    NvMMLiteOpen : Block : BlockType = 261
    NVMEDIA: Reading vendor.tegra.display-size : status: 6
    NvMMLiteBlockCreate : Block : BlockType = 261
    In cb_newpad
    gstname= video/x-raw
    features= <Gst.CapsFeatures object at 0x7fa3fd3e88 (GstCapsFeatures at 0x7f3009cda0)>
    Frame Number= 0 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 1 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 2 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 3 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 4 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 5 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 6 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 7 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 8 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 9 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 10 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 11 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 12 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 13 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 14 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 15 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 16 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 17 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 18 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
    Frame Number= 19 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
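
    The "prob 6001x1x1" binding in this log matches tensorrtx's output layout: one float holding the detection count, followed by up to 1000 detections of 6 floats each (cx, cy, w, h, confidence, class id), so 1 + 1000 x 6 = 6001. A decoding sketch (my own reading of the tensorrtx layout; the buffer here is a placeholder):

    import numpy as np

    def decode_prob(prob, conf_thresh=0.5):
        """Decode one image's raw tensorrtx yolov5 output buffer."""
        num = int(prob[0])                        # detections written by the plugin
        dets = prob[1:1 + num * 6].reshape(num, 6)
        return dets[dets[:, 4] >= conf_thresh]    # column 4 is confidence

    prob = np.zeros(6001, dtype=np.float32)       # placeholder buffer
    for cx, cy, w, h, conf, cls in decode_prob(prob):
        print(int(cls), conf, (cx, cy, w, h))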

     7. Success.

  • Original article: https://blog.csdn.net/LFGxiaogang/article/details/126481441