• Raspberry Pi: Sending Images to a Server for Processing (Part 1)


    imagezmq

    We use imagezmq to implement the image transfer; the GitHub repository is here:
    https://github.com/jeffbass/imagezmq.git
    We use the Tsinghua mirror to speed up downloading the library:
    pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple imagezmq
    pip3 install pyzmq
    pip3 install imutils
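
    To quickly sanity-check that all three libraries installed, the following one-liner should print the ZeroMQ version:

    python3 -c "import zmq, imagezmq, imutils; print(zmq.zmq_version())"

    On the server (the receiving side), a minimal receive loop looks like this: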

    import cv2
    import imagezmq
    
    image_hub = imagezmq.ImageHub()
    while True:
        name, image = image_hub.recv_image()
        print(image.shape[:2])
        cv2.imshow(name, image)
        # process the image here
        # ...
        image_hub.send_reply(b'RET')  # reply first, so the sender is never left blocked
        key = cv2.waitKey(25)
        if key == 27:  # Esc quits
            break
    

    Now for the code on the Raspberry Pi. The Pi can read its own camera to capture frames and send them to the server. For now this setup is honestly not very practical, because a bare camera may well cost less than a whole Raspberry Pi, so dedicating a Pi to each camera is expensive; we will talk about improvements at the end.

    import sys
    
    import socket
    import time
    import cv2
    from imutils.video import VideoStream
    import imagezmq
    
    #use either of the formats below to specify the address of the display computer
    #sender = imagezmq.ImageSender(connect_to='tcp://jeff-macbook:5555')
    sender = imagezmq.ImageSender(connect_to='tcp://localhost:5555')
    
    rpi_name = socket.gethostname()  # send RPi hostname with each image
    picam = VideoStream(usePiCamera=True).start()
    time.sleep(2.0)  # allow camera sensor to warm up
    while True:  # send images as stream until Ctrl-C
        image = picam.read()
        sender.send_image(rpi_name, image)
    

    With that, the server can receive frames from the Raspberry Pi camera, but they are sent uncompressed. We can compress each frame before sending it, using cv2's imencode method:

    ret_code, jpg_buffer = cv2.imencode(
        ".jpg", image, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    
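    On the sending side, the encoded buffer then goes out with send_jpg instead of send_image; the complete program below does exactly this:

    reply_from_server = sender.send_jpg(rpi_name, jpg_buffer)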

    At the same time, we can drive the GPIO to switch LED lighting on and off.

    import sys
    
    import socket
    import time
    import traceback
    import cv2
    from imutils.video import VideoStream
    import imagezmq
    import RPi.GPIO as GPIO
    
    #use either of the formats below to specify the address of the display computer
    sender = imagezmq.ImageSender(connect_to='tcp://jeff-macbook:5555')
    #sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.190:5555')
    
    #optionally, turn on the LED area lighting
    use_led = True  # set to True or False as needed
    #optionally, flip the image vertically
    flip = True  # set to True or False as needed
    
    if use_led:
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(18, GPIO.OUT)
        GPIO.output(18, True)  # turn on LEDs
    
    rpi_name = socket.gethostname()  # send RPi hostname with each image
    picam = VideoStream(usePiCamera=True).start()
    time.sleep(2.0)  # allow camera sensor to warm up
    jpeg_quality = 95  # 0 to 100, higher is better quality, 95 is cv2 default
    try:
        while True:  # send images as stream until Ctrl-C
            image = picam.read()
            # processing of image before sending would go here.
            # for example, rotation, ROI selection, conversion to grayscale, etc.
            if flip:
                image = cv2.flip(image, -1)
            ret_code, jpg_buffer = cv2.imencode(
                ".jpg", image, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
            reply_from_mac = sender.send_jpg(rpi_name, jpg_buffer)
            # above line shows how to capture REP reply text from Mac
    except (KeyboardInterrupt, SystemExit):
        pass  # Ctrl-C was pressed to end program
    except Exception as ex:
        print('Python error with no Exception handler:')
        print('Traceback error:', ex)
        traceback.print_exc()
    finally:
        if use_led:
            GPIO.output(18, False)  # turn off LEDs
            GPIO.cleanup()  # close GPIO channel and release it
        picam.stop()  # stop the camera thread
        sender.close()  # close the ZMQ socket and context
        sys.exit()
    

    The receiver has to change accordingly: numpy's np.frombuffer wraps the received bytes so that cv2's imdecode can do the JPEG decoding. After that, the program has the BGR pixel data back and can move on to the next processing step.

    import sys
    
    import time
    import traceback
    import numpy as np
    import cv2
    from collections import defaultdict
    from imutils.video import FPS
    import imagezmq
    
    #instantiate image_hub
    image_hub = imagezmq.ImageHub()
    
    image_count = 0
    sender_image_counts = defaultdict(int)  # dict for counts by sender
    first_image = True
    
    try:
        while True:  # receive images until Ctrl-C is pressed
            sent_from, jpg_buffer = image_hub.recv_jpg()
            if first_image:
                fps = FPS().start()  # start FPS timer after first image is received
                first_image = False
            image = cv2.imdecode(np.frombuffer(jpg_buffer, dtype='uint8'), -1)
            # see opencv docs for info on -1 parameter
            fps.update()
            image_count += 1  # global count of all images received
            sender_image_counts[sent_from] += 1  # count images for each RPi name
            cv2.imshow(sent_from, image)  # display images 1 window per sent_from
            cv2.waitKey(1)
            # other image processing code, such as saving the image, would go here.
            # often the text in "sent_from" will have additional information about
            # the image that will be used in processing the image.
            image_hub.send_reply(b'OK')  # REP reply
    except (KeyboardInterrupt, SystemExit):
        pass  # Ctrl-C was pressed to end program; FPS stats computed below
    except Exception as ex:
        print('Python error with no Exception handler:')
        print('Traceback error:', ex)
        traceback.print_exc()
    finally:
        # stop the timer and display FPS information
        print()
        print('Test Program: ', __file__)
        print('Total Number of Images received: {:,g}'.format(image_count))
        if first_image:  # never got images from any RPi
            sys.exit()
        fps.stop()
        print('Number of Images received from each RPi:')
        for RPi in sender_image_counts:
            print('    ', RPi, ': {:,g}'.format(sender_image_counts[RPi]))
        compressed_size = len(jpg_buffer)
        print('Size of last jpg buffer received: {:,g} bytes'.format(compressed_size))
        image_size = image.shape
        print('Size of last image received: ', image_size)
        uncompressed_size = 1
        for dimension in image_size:
            uncompressed_size *= dimension
        print('    = {:,g} bytes'.format(uncompressed_size))
        print('Compression ratio: {:.2f}'.format(compressed_size / uncompressed_size))
        print('Elapsed time: {:,.2f} seconds'.format(fps.elapsed()))
        print('Approximate FPS: {:.2f}'.format(fps.fps()))
        cv2.destroyAllWindows()  # closes the windows opened by cv2.imshow()
        image_hub.close()  # closes ZMQ socket and context
        sys.exit()
    

    Back to the data-processing step mentioned earlier: for example, we can run image classification with a Caffe model, using cv2's dnn module for the inference:

    net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
    
    frame = imutils.resize(frame, width=400)
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
        0.007843, (300, 300), 127.5)
    
    # detect and predict
    net.setInput(blob)
    detections = net.forward()
    # other code ...
    
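    The snippet stops at net.forward(). As a minimal sketch of the post-processing, assuming a MobileNet-SSD style Caffe model (its output is a 1x1xNx7 blob whose rows are [_, class_id, confidence, x1, y1, x2, y2], with coordinates normalized to the frame), the detection loop could look like this; the 0.5 confidence threshold is an arbitrary choice:

    import numpy as np

    conf_threshold = 0.5  # hypothetical minimum confidence to keep a detection
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence < conf_threshold:
            continue
        class_id = int(detections[0, 0, i, 1])
        # coordinates are normalized; scale them back up to the frame size
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x1, y1, x2, y2) = box.astype("int")
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, "class {}: {:.2f}".format(class_id, confidence),
                    (x1, max(y1 - 10, 0)), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 255, 0), 2)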

    Of course, we can also process the received images with other models, such as YOLO.
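
    As a rough sketch of the YOLO route, OpenCV's dnn module can load Darknet models directly; the yolov3.cfg / yolov3.weights file names below are placeholder assumptions:

    # load a Darknet YOLO model (config / weights paths are hypothetical)
    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

    # YOLO expects RGB input scaled to [0, 1] at the network input size
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    # run a forward pass through every YOLO output layer
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    # each row of each output is [cx, cy, w, h, objectness, class scores ...]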

    Improvements

    With the Raspberry Pi we can pre-process the images: for example, run a YOLO or TensorFlow detector on the Pi itself and only send the frames we are actually interested in to the server for further processing. The Pi also has a GPU with hardware-encoding support, so we could send an H.264 or H.265 encoded stream over RTP instead of individual JPEGs. That improvement will be covered in Part 2.
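
    As a taste of the hardware encoder, here is a minimal sketch that records H.264 on the Pi with the picamera library; the resolution, bitrate, and file name are placeholder choices:

    import picamera

    # the Pi's GPU performs the H.264 encoding, so the CPU stays almost idle
    with picamera.PiCamera(resolution=(1280, 720), framerate=30) as camera:
        camera.start_recording('test.h264', format='h264', bitrate=2000000)
        camera.wait_recording(10)  # record for 10 seconds
        camera.stop_recording()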

  • Original article: https://blog.csdn.net/qianbo042311/article/details/126859536