With some time to kill, I tried a leaderboard run on the 极客 platform; here's a quick write-up.
After entering a track, beyond reading the task requirements first, it's worth getting a feel for the overall layout. The platform runs on Linux, so:

cd /home/data

enters the dataset directory, and

ls

lists the files under it.
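For example, to get a quick feel for the data (this assumes, as on my track, that each dataset is a numbered subfolder holding jpg images plus VOC-style xml annotations):

ls /home/data                  # each dataset shows up as a numbered subfolder
ls /home/data/*/ | head        # peek at the first few files
ls /home/data/*/*.jpg | wc -l  # count the images
ls /home/data/*/*.xml | wc -l  # count the annotation files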
This time I used yolov5. Selecting the already-extracted files for upload fails, so I uploaded the whole zip archive without extracting it.
The upload path is: 工作台 (Workbench) ---> 我的模型 (My Models) ---> 导入文件 (Import Files).
I went with JupyterLab, in the spirit of testing the waters with yolov5. First download yolov5 to your local machine, then upload the zip. Open a terminal and run

wget <path>

where the path can be copied straight from My Models, then

unzip yolov5-master.zip

to extract the archive.
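Concretely, it looks like this (the URL below is a placeholder; copy the real one from My Models):

wget https://example.com/files/yolov5-master.zip   # placeholder URL
unzip yolov5-master.zip
cd yolov5-master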
Next, install the dependencies:

pip install -r requirements.txt # install

If the download fails or throws warnings, switch to a mirror:

pip install -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com -r requirements.txt

which is just the previous command with

-i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com

added.
I hit this error:

ERROR: Cannot determine archive format of /tmp/pip-req-build-a76kcii4

Switching to the Tsinghua mirror

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn -r requirements.txt

gave the same result. (As far as I can tell, this error means pip downloaded something it could not recognize as a package archive for one of the requirement lines, so swapping mirrors alone doesn't necessarily fix it.)
Since I normally run yolo in PyCharm and had never used JupyterLab, my habit is to get the code running, see where it breaks, and fix it from there, so not having a Run button took some getting used to.
The takeaway boils down to one sentence:
use the command line.
The commands I use most include:
(a) Move files

mv <source path> <destination path>

In principle the paths can be absolute or relative, but on the principle that absolute paths are harder to get wrong, I usually use absolute paths.
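For instance, to move the uploaded zip into the source repo (both paths are hypothetical):

mv /project/train/yolov5-master.zip /project/train/src_repo/yolov5-master.zip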
(b) Run a Python file

python <script path> --<argument name> <value>
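For example, kicking off a YOLOv5 training run (the dataset yaml and output locations here are assumptions from my setup, not fixed by the platform):

python /project/train/src_repo/yolov5-master/train.py --img 320 --batch-size 16 --epochs 100 --data /project/train/src_repo/dataset/data.yaml --weights yolov5s.pt --project /project/train/models --name 1exp

With --project and --name set like this, the best weights land at /project/train/models/1exp/weights/best.pt, which is the path the inference script below expects.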
(c) Download & extract

Covered above:

wget <path>
unzip <archive>
(d) Copy files

For example, the xml annotations and images usually live under /home/data:

cp /home/data/831/*.xml /project/train/src_repo/dataset/Annotations
cp /home/data/831/*.jpg /project/train/src_repo/dataset/images

By the same token, if there are multiple folders:

cp /home/data/*/*.xml /project/train/src_repo/dataset/Annotations
cp /home/data/*/*.jpg /project/train/src_repo/dataset/images
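One thing worth noting: YOLOv5 trains from txt label files, not VOC xml, so the copied annotations still need converting. A minimal sketch, assuming Pascal VOC-style xml and the same class list as the inference script below (the labels output directory is my own choice; yolov5 looks for a labels/ folder next to images/):

import os
import glob
import xml.etree.ElementTree as ET

names = ['person', 'knife', 'hand', 'others']          # assumed to match the training classes
xml_dir = '/project/train/src_repo/dataset/Annotations'
label_dir = '/project/train/src_repo/dataset/labels'   # hypothetical output directory
os.makedirs(label_dir, exist_ok=True)

for xml_path in glob.glob(os.path.join(xml_dir, '*.xml')):
    root = ET.parse(xml_path).getroot()
    w = float(root.find('size/width').text)
    h = float(root.find('size/height').text)
    lines = []
    for obj in root.iter('object'):
        cls = obj.find('name').text
        if cls not in names:
            continue  # skip classes outside the list
        box = obj.find('bndbox')
        xmin, ymin, xmax, ymax = (float(box.find(k).text)
                                  for k in ('xmin', 'ymin', 'xmax', 'ymax'))
        # YOLO format: class_id x_center y_center width height, normalized to [0, 1]
        lines.append('%d %.6f %.6f %.6f %.6f' % (
            names.index(cls),
            (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h,
            (xmax - xmin) / w, (ymax - ymin) / h))
    txt_path = os.path.join(label_dir, os.path.basename(xml_path)[:-4] + '.txt')
    with open(txt_path, 'w') as f:
        f.write('\n'.join(lines))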
Below is the inference script my mentor gave me:
import sys
import time
import json
sys.path.insert(1, '/project/ev_sdk/src/yolo/')
import cv2
import torch
import numpy as np
from glob import glob
from utils.augmentations import letterbox
from models.experimental import attempt_load
from utils.general import check_img_size, non_max_suppression, scale_coords, set_logging
from utils.torch_utils import select_device

##for yolov5 V6.0##############################
# Inference settings
conf_thres = 0.3   # NMS confidence threshold
iou_thres = 0.05   # NMS IoU threshold
prob_thres = 0.3   # minimum confidence to report a detection
imgsz = 320
model_path = '/project/train/models/1exp/weights/best.pt'  # must match the model path chosen in the test stage!!!
device = '0'
stride = 32
names = ['person', 'knife', 'hand', 'others']

def init():
    # Initialize the device and load the model once at startup
    global imgsz, device, stride
    set_logging()
    device = select_device('0')
    model = attempt_load(model_path, map_location=device)  # load FP32 model (the original passed an undefined `weights`)
    stride = int(model.stride.max())  # model stride
    imgsz = check_img_size(imgsz, s=stride)  # check img_size is a multiple of the stride
    model.eval()
    model.half()  # to FP16; half precision is only supported on CUDA
    return model

@torch.no_grad()
def process_image(model, input_image=None, args=None, **kwargs):
    # Padded resize
    img0 = input_image
    img = letterbox(img0, new_shape=imgsz, stride=stride, auto=True)[0]
    # Convert
    img = img.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB
    img = np.ascontiguousarray(img)
    img = torch.from_numpy(img).to(device)
    img = img.half()
    img /= 255.0  # 0 - 255 to 0.0 - 1.0
    if len(img.shape) == 3:
        img = img[None]  # add a batch dimension
    pred = model(img, augment=False)[0]
    # Apply NMS
    pred = non_max_suppression(pred, conf_thres, iou_thres, agnostic=False)
    fake_result = {}
    fake_result["algorithm_data"] = {
        "is_alert": False,
        "target_count": 0,
        "target_info": []
    }
    fake_result["model_data"] = {"objects": []}
    # Process detections
    cnt = 0
    for i, det in enumerate(pred):  # detections per image
        if det is not None and len(det):
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], img0.shape).round()
            for *xyxy, conf, cls in det:
                if conf < prob_thres:
                    continue
                cnt += 1
                fake_result["model_data"]["objects"].append({
                    "xmin": int(xyxy[0]),
                    "ymin": int(xyxy[1]),
                    "xmax": int(xyxy[2]),
                    "ymax": int(xyxy[3]),
                    "confidence": float(conf),
                    "name": names[int(cls)]
                })
                fake_result["algorithm_data"]["target_info"].append({
                    "xmin": int(xyxy[0]),
                    "ymin": int(xyxy[1]),
                    "xmax": int(xyxy[2]),
                    "ymax": int(xyxy[3]),
                    "confidence": float(conf),
                    "name": names[int(cls)]
                })
    if cnt:
        fake_result["algorithm_data"]["is_alert"] = True
        fake_result["algorithm_data"]["target_count"] = cnt
    return json.dumps(fake_result, indent=4)

if __name__ == '__main__':
    # Test API: time inference over the first 10 images
    image_names = glob('/home/data/*/*.jpg')[:10]
    predictor = init()
    s = 0
    for image_name in image_names:
        img = cv2.imread(image_name)
        t1 = time.time()
        res = process_image(predictor, img)
        print(res)
        t2 = time.time()
        s += t2 - t1
    print(len(image_names) / s)  # average FPS (the original divided by a hard-coded 100 while timing only 10 images)
One small detail:
the coding environment and the training environment are not the same.
In other words, your training outputs may be stored in the training environment, so you won't be able to see them from the coding environment, but you can still run tests against them.