**Contents**

- 0. Background
- 1. Model conversion
  - 1.1 Base environment
  - 1.2 Creating the Python environment
  - 1.3 Converting yolov5s.pt to yolov5s.onnx
  - 1.4 Converting yolov5s.onnx to yolov5s.rknn
- 2. Deployment on the development board
  - 2.1 C version
  - 2.2 Python version (must be Python 3.9)
- 3. Performance test

## 0. Background

Full domestic hardware localization: replace the Jetson Nano development board with a Rockchip RK3588 board.

## 1. Model conversion

Model conversion is done inside an Ubuntu 20.04 desktop virtual machine on a laptop. It consists of two main steps: converting yolov5s.pt to yolov5s.onnx, then converting yolov5s.onnx to yolov5s.rknn.

Main reference: the blog post "yolov5篇—yolov5训练pt模型并转换为rknn模型部署在RK3588开发板上——从训练到部署全过程" (training a YOLOv5 .pt model, converting it to RKNN, and deploying it on an RK3588 board — the full pipeline from training to deployment).

### 1.1 Base environment

An Ubuntu 20.04 virtual machine on an x86 platform. Note: the host must be an x86 machine (an ordinary laptop is fine), and the VM must run the Ubuntu 20.04 desktop edition.

### 1.2 Creating the Python environment

Install Miniconda in the VM, activate the base environment, then create a Python 3.8 conda environment (note: the Python version must be 3.8). Reference commands:

```shell
conda create -n rk3588 python=3.8
conda activate rk3588
pip install numpy -i https://mirror.baidu.com/pypi/simple
cd ~/Desktop
git clone https://gitcode.net/mirrors/rockchip-linux/rknn-toolkit2.git
pip install -r rknn-toolkit2/doc/requirements_cp38-1.4.0.txt -i https://mirror.baidu.com/pypi/simple
pip install pandas==1.4.* pyyaml matplotlib==3.3.* seaborn -i https://mirror.baidu.com/pypi/simple
```

### 1.3 Converting yolov5s.pt to yolov5s.onnx

First, download the yolov5 project code to the desktop. Note: the yolov5 code used here is effectively the v5.0 release:

```shell
cd ~/Desktop
git clone https://gitcode.net/mirrors/ultralytics/yolov5.git
cd yolov5
git reset --hard c5360f6e7009eb4d05f14d1cc9dae0963e949213
```

Next, find the download link for yolov5s.pt on the yolov5 project page (a download manager such as Thunder works fine), then upload yolov5s.pt to ~/Desktop/yolov5/weights in the VM.

Then modify the Detect class in ~/Desktop/yolov5/models/yolo.py as shown in the figure below. Note: this change is only for conversion; it must not be applied during training.

Then modify the export_onnx() function in ~/Desktop/yolov5/export.py, as shown in the figure below.

Finally, run the following command; yolov5s.onnx will appear in the weights directory:

```shell
python export.py --weights weights/yolov5s.pt --img 640 --batch 1 --include onnx
```

### 1.4 Converting yolov5s.onnx to yolov5s.rknn

First, download the rknn-toolkit2 project (already done during environment preparation):

```shell
cd ~/Desktop
git clone https://gitcode.net/mirrors/rockchip-linux/rknn-toolkit2.git
```

Next, install the rknn-toolkit2 dependencies (also already done during environment preparation):

```shell
cd ~/Desktop/rknn-toolkit2/doc
pip install -r requirements_cp38-1.4.0.txt -i https://mirror.baidu.com/pypi/simple
```

Then install the rknn-toolkit2 wheel:

```shell
cd ~/Desktop/rknn-toolkit2/packages
pip install rknn_toolkit2-1.4.0_22dcfef4-cp38-cp38-linux_x86_64.whl -i https://mirror.baidu.com/pypi/simple
```

Verify the installation: start a Python interpreter and run

```python
from rknn.api import RKNN
```

Then copy yolov5s.onnx to ~/Desktop/rknn-toolkit2/examples/onnx/yolov5 and make some changes to test.py in that directory, as shown in the figure below.

Finally, run `python test.py`; yolov5s.rknn is produced in the same directory.

## 2. Deployment on the development board

With the converted model we can run the yolov5 demo in either a C version or a Python version. All of the following steps are performed on the board.

### 2.1 C version

Download the official demo on the RK3588 board:

```shell
cd ~/Desktop
git clone https://gitcode.net/mirrors/rockchip-linux/rknpu2.git
```

Modify the files. First enter rknpu2/examples/rknn_yolov5_demo, then edit the header file postprocess.h under include, as shown in the figure below.

Next, edit coco_80_labels_list.txt under the model directory, replace the labels with your own classes, and save, as shown in the figure below.

Finally, put the converted .rknn file under model/RK3588, then compile and run the build script; on success it generates an install directory:

```shell
bash ./build-linux_RK3588.sh
```

Run the demo: upload yolov5s.rknn to model/RK3588, put the image to be inferred under model, then run

```shell
cd install/rknn_yolov5_demo_linux
./rknn_yolov5_demo ./model/RK3588/yolov5s.rknn ./model/bus.jpg
```

### 2.2 Python version (must be Python 3.9)

The API used in this version mainly follows the "RKNN Toolkit Lite2 User Guide".

Update the apt sources:

```
# Source mirrors are commented out by default to speed up apt update; uncomment if needed
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ focal main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ focal-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy-updates main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ focal-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy-backports main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ focal-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy-security main restricted universe multiverse
```

Refresh the package index:

```shell
sudo apt-get update
```

Install Miniconda. Online install:

```shell
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
```

Offline install is recommended: go to the Miniconda website, pick the py3.8 aarch64 installer, and download it. Then upload Miniconda3-py38_23.1.0-1-Linux-aarch64.sh to ~/Downloads on the RK3588 board and run the installer:

```shell
bash ./Miniconda3-py38_23.1.0-1-Linux-aarch64.sh
```

Create the Python environment (mainly numpy, opencv, psutil, etc.):

```shell
conda create -n rk3588 python=3.9
conda activate rk3588
pip install numpy opencv-python -i https://mirror.baidu.com/pypi/simple
```

Download the RKNN Toolkit2 project to the desktop:

```shell
cd ~/Desktop
git clone https://gitcode.net/mirrors/rockchip-linux/rknn-toolkit2.git
```

Install the RKNN Toolkit Lite2 environment:

```shell
cd rknn-toolkit2/rknn_toolkit_lite2/packages
pip install rknn_toolkit_lite2-1.4.0-cp39-cp39-linux_aarch64.whl -i https://mirror.baidu.com/pypi/simple
```

Add the .so files, so that Python scripts can call the NPU C runtime:

```shell
cd ~/Downloads
git clone https://gitcode.net/mirrors/rockchip-linux/rknpu2.git
sudo cp rknpu2/runtime/RK3588/Linux/librknn_api/aarch64/librknn* /usr/lib
```

Test the environment. A test case lives in examples/inference_with_lite:

```shell
cd rknn-toolkit2/rknn_toolkit_lite2/examples/inference_with_lite
python test.py
```

The result is shown below.

Test the yolov5 Python script. Under inference_with_lite, create a data directory and put the test images in it; upload yolov5s.rknn to inference_with_lite; then create yolov5.py, which runs inference on a test image and saves the result as res.jpg in the same directory. Reference: https://github.com/ChuanSe/yolov5-PT-to-RKNN/blob/main/detect.py. The code is as follows:

```python
import os
import urllib
import traceback
import time
import sys
import numpy as np
import cv2
#from rknn.api import RKNN
import platform
from rknnlite.api import RKNNLite
import multiprocessing

ONNX_MODEL = 'yolov5s.onnx'
RKNN_MODEL = 'yolov5s.rknn'
IMG_PATH = './data/car.png'
DATASET = './dataset.txt'

QUANTIZE_ON = True

OBJ_THRESH = 0.25
NMS_THRESH = 0.45
IMG_SIZE = 640

CLASSES = ("person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat", "traffic light",
           "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
           "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
           "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
           "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
           "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "sofa",
           "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard",
           "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors",
           "teddy bear", "hair drier", "toothbrush")

# device tree for rk356x/rk3588
DEVICE_COMPATIBLE_NODE = '/proc/device-tree/compatible'


def get_host():
    # get platform and device type
    system = platform.system()
    machine = platform.machine()
    os_machine = system + '-' + machine
    if os_machine == 'Linux-aarch64':
        try:
            with open(DEVICE_COMPATIBLE_NODE) as f:
                device_compatible_str = f.read()
                if 'rk3588' in device_compatible_str:
                    host = 'RK3588'
                else:
                    host = 'RK356x'
        except IOError:
            print('Read device node {} failed.'.format(DEVICE_COMPATIBLE_NODE))
            exit(-1)
    else:
        host = os_machine
    return host


INPUT_SIZE = 224
RK3588_RKNN_MODEL = 'resnet18_for_rk3588.rknn'


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def xywh2xyxy(x):
    # Convert [x, y, w, h] to [x1, y1, x2, y2]
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y


def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    box_confidence = sigmoid(input[..., 4])
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    box_class_probs = sigmoid(input[..., 5:])

    box_xy = sigmoid(input[..., :2]) * 2 - 0.5

    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)
    box_xy += grid
    box_xy *= int(IMG_SIZE / grid_h)

    box_wh = pow(sigmoid(input[..., 2:4]) * 2, 2)
    box_wh = box_wh * anchors

    box = np.concatenate((box_xy, box_wh), axis=-1)

    return box, box_confidence, box_class_probs


def filter_boxes(boxes, box_confidences, box_class_probs):
    """Filter boxes with box threshold. It's a bit different from the original yolov5 post process!
    # Arguments
        boxes: ndarray, boxes of objects.
        box_confidences: ndarray, confidences of objects.
        box_class_probs: ndarray, class_probs of objects.
    # Returns
        boxes: ndarray, filtered boxes.
        classes: ndarray, classes for boxes.
        scores: ndarray, scores for boxes.
    """
    boxes = boxes.reshape(-1, 4)
    box_confidences = box_confidences.reshape(-1)
    box_class_probs = box_class_probs.reshape(-1, box_class_probs.shape[-1])

    _box_pos = np.where(box_confidences >= OBJ_THRESH)
    boxes = boxes[_box_pos]
    box_confidences = box_confidences[_box_pos]
    box_class_probs = box_class_probs[_box_pos]

    class_max_score = np.max(box_class_probs, axis=-1)
    classes = np.argmax(box_class_probs, axis=-1)
    _class_pos = np.where(class_max_score >= OBJ_THRESH)

    boxes = boxes[_class_pos]
    classes = classes[_class_pos]
    scores = (class_max_score * box_confidences)[_class_pos]

    return boxes, classes, scores


def nms_boxes(boxes, scores):
    """Suppress non-maximal boxes.
    # Arguments
        boxes: ndarray, boxes of objects.
        scores: ndarray, scores of objects.
    # Returns
        keep: ndarray, index of effective boxes.
    """
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep


def yolov5_post_process(input_data):
    masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
               [59, 119], [116, 90], [156, 198], [373, 326]]

    boxes, classes, scores = [], [], []
    for input, mask in zip(input_data, masks):
        b, c, s = process(input, mask, anchors)
        b, c, s = filter_boxes(b, c, s)
        boxes.append(b)
        classes.append(c)
        scores.append(s)

    boxes = np.concatenate(boxes)
    boxes = xywh2xyxy(boxes)
    classes = np.concatenate(classes)
    scores = np.concatenate(scores)

    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]

        keep = nms_boxes(b, s)

        nboxes.append(b[keep])
        nclasses.append(c[keep])
        nscores.append(s[keep])

    if not nclasses and not nscores:
        return None, None, None

    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)

    return boxes, classes, scores


def draw(image, boxes, scores, classes):
    """Draw the boxes on the image.
    # Argument:
        image: original image.
        boxes: ndarray, boxes of objects.
        classes: ndarray, classes of objects.
        scores: ndarray, scores of objects.
    """
    for box, score, cl in zip(boxes, scores, classes):
        top, left, right, bottom = box
        print('class: {}, score: {}'.format(CLASSES[cl], score))
        print('box coordinate left,top,right,down: [{}, {}, {}, {}]'.format(top, left, right, bottom))
        top = int(top)
        left = int(left)
        right = int(right)
        bottom = int(bottom)

        cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
        cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                    (top, left - 6),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)


def letterbox(im, new_shape=(640, 640), color=(0, 0, 0)):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (dw, dh)


def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
    # Rescale predicted coords (relative to img1_shape) back to the original image scale (img0_shape)
    # :param img1_shape: resized image shape [H, W], e.g. [384, 512]
    # :param coords: predicted box info [n, 4] as x1y1x2y2, relative to the resized image img1_shape
    # :param img0_shape: original image shape [H, W, C], e.g. [375, 500, 3]
    # :param ratio_pad: scale ratio and padding from the letterbox step; usually not passed in
    # :return: coords relative to the original image size img0_shape
    if ratio_pad is None:  # calculate from img0_shape
        # gain = old/new; max(img1_shape) is the longer side of img1, matching the letterbox step
        gain = max(img1_shape) / max(img0_shape)
        # wh padding; whether this matters depends entirely on how the letterboxing was done
        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2
    else:
        gain = ratio_pad[0][0]
        pad = ratio_pad[1]

    # map predictions relative to img1 back onto the original image img0
    coords[:, [0, 2]] -= pad[0]  # x padding
    coords[:, [1, 3]] -= pad[1]  # y padding
    coords[:, :4] /= gain  # rescale
    # clip the rescaled predictions so they cannot fall outside the image bounds
    clip_coords(coords, img0_shape)
    return coords


def clip_coords(boxes, img_shape):
    # Clip xyxy bounding boxes to image shape (height, width)
    # np.clip(c, a, b) constrains every element of c to [a, b]; predictions can exceed
    # the image bounds when an object sits at the edge of the picture
    # :param boxes: predictions rescaled to the original image, [n, 4]
    # :param img_shape: original image shape [H, W, C], e.g. [375, 500, 3]
    boxes[:, 0] = np.clip(boxes[:, 0], 0, img_shape[1])  # x1
    boxes[:, 1] = np.clip(boxes[:, 1], 0, img_shape[0])  # y1
    boxes[:, 2] = np.clip(boxes[:, 2], 0, img_shape[1])  # x2
    boxes[:, 3] = np.clip(boxes[:, 3], 0, img_shape[0])  # y2


def yolov5Detection(roundNum):
    print('Current process ID: {}'.format(os.getpid()))
    # host_name = get_host()
    rknn_model = 'yolov5s.rknn'

    # Create RKNN object
    # rknn = RKNN(verbose=True)
    # rknn_lite = RKNNLite(verbose=True)  # verbose logging in the terminal
    rknn_lite = RKNNLite()

    # Load RKNN model
    print('--> Load RKNN model')
    ret = rknn_lite.load_rknn(rknn_model)
    if ret != 0:
        print('Load RKNN model failed')
        exit(ret)
    print('done')

    # Init runtime environment
    print('--> Init runtime environment')
    # ret = rknn.init_runtime()
    ret = rknn_lite.init_runtime(core_mask=RKNNLite.NPU_CORE_AUTO)
    # ret = rknn.init_runtime('rk3566')
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    starttime = time.time()
    for ii in range(roundNum):
        print('Process {} running inference round {}'.format(os.getpid(), ii + 1))
        # Set inputs
        img0 = cv2.imread(IMG_PATH)
        img = img0.copy()
        img, ratio, (dw, dh) = letterbox(img, new_shape=(IMG_SIZE, IMG_SIZE))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))

        # Inference
        print('--> Running model')
        outputs = rknn_lite.inference(inputs=[img])
        # np.save('./onnx_yolov5_0.npy', outputs[0])
        # np.save('./onnx_yolov5_1.npy', outputs[1])
        # np.save('./onnx_yolov5_2.npy', outputs[2])
        print('done')

        # post process
        input0_data = outputs[0]
        input1_data = outputs[1]
        input2_data = outputs[2]

        input0_data = input0_data.reshape([3, -1] + list(input0_data.shape[-2:]))
        input1_data = input1_data.reshape([3, -1] + list(input1_data.shape[-2:]))
        input2_data = input2_data.reshape([3, -1] + list(input2_data.shape[-2:]))

        input_data = list()
        input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
        input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
        input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))

        boxes, classes, scores = yolov5_post_process(input_data)  # boxes are in the letterboxed scale here
        img1_shape = img.shape  # letterboxed image shape
        img0_shape = img0.shape  # original image shape

        # img_1 = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
        img_1 = img0.copy()
        if boxes is not None:
            boxes = scale_coords(img1_shape, boxes, img0_shape)  # map boxes back onto the original image
            draw(img_1, boxes, scores, classes)  # draw boxes on the original image
        # cv2.imwrite('res.jpg', img_1)

        # show output
        # cv2.imshow('post process result', img_1)
        # cv2.waitKey(0)
        # cv2.destroyAllWindows()
        # time.sleep(0.001)

    endtime = time.time()
    print('Process pid: {}, total time {} s, average per round {} s'.format(
        os.getpid(), endtime - starttime, (endtime - starttime) / float(roundNum)))
    rknn_lite.release()


if __name__ == '__main__':
    roundNum = 1000
    total = 9
    processes = []
    for i in range(total):
        myprocess = multiprocessing.Process(target=yolov5Detection, args=(roundNum,))
        processes.append(myprocess)
    for i in range(total):
        processes[i].daemon = True
        processes[i].start()
    for _ in range(roundNum):
        print('Main process pid: {}, currently {} child processes'.format(os.getpid(), total))
        time.sleep(1)
```

## 3. Performance test

The tests below loop 1000 times over the full yolov5 pipeline (image read, inference, and post-processing); the inference speed reported below is the total time of a single complete read + inference + post-process pass.
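To make the letterbox geometry used by the script concrete, here is a small standalone sketch (the helper `letterbox_geometry` is made up for illustration; the shapes are arbitrary examples). A 375x500 image scaled into 640x640 gets ratio 1.28, is resized to 640x480, and receives 80 px of padding on both top and bottom:

```python
def letterbox_geometry(shape, new_shape=(640, 640)):
    """Compute the scale ratio and per-side padding that the letterbox step applies.

    shape: original (height, width); new_shape: target (height, width).
    """
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    new_unpad = (int(round(shape[1] * r)), int(round(shape[0] * r)))  # resized (w, h), no padding yet
    dw = (new_shape[1] - new_unpad[0]) / 2  # padding per side, x
    dh = (new_shape[0] - new_unpad[1]) / 2  # padding per side, y
    return r, new_unpad, dw, dh

r, new_unpad, dw, dh = letterbox_geometry((375, 500))
print(r, new_unpad, dw, dh)  # 1.28 (640, 480) 0.0 80.0
```

These are exactly the `r`, `new_unpad`, `dw`, `dh` values that `letterbox` computes before calling `cv2.resize` and `cv2.copyMakeBorder`, and the quantities that `scale_coords` later inverts.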
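The averaging done at the end of `yolov5Detection` can be factored into a small reusable timing harness; the sketch below is a hypothetical, self-contained illustration (the `run_pipeline` callable stands in for the real read + inference + post-process pass, which needs the board and model):

```python
import time

def benchmark(run_pipeline, rounds=1000):
    """Time `rounds` calls of the pipeline; return total time, average latency, and FPS."""
    start = time.perf_counter()
    for _ in range(rounds):
        run_pipeline()
    total = time.perf_counter() - start
    avg = total / rounds
    return {"total_s": total, "avg_s": avg, "fps": 1.0 / avg if avg > 0 else float("inf")}

if __name__ == "__main__":
    # Dummy workload standing in for one full inference pass
    stats = benchmark(lambda: sum(i * i for i in range(1000)), rounds=100)
    print("avg {:.6f} s, {:.1f} FPS".format(stats["avg_s"], stats["fps"]))
```

Using `time.perf_counter` rather than `time.time` avoids clock adjustments skewing short measurements; on the board, the lambda would be replaced by one iteration of the loop body in `yolov5Detection`.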