
Atlas 800 Ascend Server (Model 3000) — NPU Inference for the Full YOLO Series [Detection] (Part 5)

The server configuration is as follows:

- CPU/NPU: Kunpeng CPU (ARM64) + A300I Pro inference card
- OS: Kylin V10 SP1 [download link] [installation guide]
- Driver and firmware versions:
  - Ascend-hdk-310p-npu-driver_23.0.1_linux-aarch64.run [download link]
  - Ascend-hdk-310p-npu-firmware_7.1.0.4.220.run [download link]
  - MCU version: Ascend-hdk-310p-mcu_23.2.3 [download link]
- CANN development kit version: 7.0.1 [Toolkit download link] [Kernels download link]

Environment for testing the om models:

- Python version: 3.8.11
- Inference tool: ais_bench
- Models tested: YOLO series v5/6/7/8/9/10/11

Other articles in this series:

- Atlas 800 Ascend server (model 3000) — driver and firmware installation (Part 1)
- Atlas 800 Ascend server (model 3000) — CANN installation (Part 2)
- Atlas 800 Ascend server (model 3000) — om model conversion and testing for the full YOLO series (Part 3)
- Atlas 800 Ascend server (model 3000) — AIPP-accelerated pre-processing (Part 4)
- Atlas 800 Ascend server (model 3000) — NPU inference for the full YOLO series [detection] (Part 5)
- Atlas 800 Ascend server (model 3000) — NPU inference for the full YOLO series [instance segmentation] (Part 6)
- Atlas 800 Ascend server (model 3000) — NPU inference for the full YOLO series [keypoints] (Part 7)
- Atlas 800 Ascend server (model 3000) — NPU inference for the full YOLO series [tracking] (Part 8)

Full code on GitHub: https://github.com/Bigtuo/NPU-ais_bench

1 Basic environment installation

See the environment installation in Part 3: https://blog.csdn.net/weixin_45679938/article/details/142966255

2 Building and installing ais_bench

Note: the ais_bench tool currently supports only single-input models with a dynamic AIPP configuration, and only the static-shape, dynamic-batch, and dynamic-height/width scenarios; the dynamic-shape scenario is not supported.

Reference: https://gitee.com/ascend/tools/tree/master/ais-bench_workload/tool/ais_bench

2.1 Installing the aclruntime package

Run the following command in the target environment to install the aclruntime package (to overwrite an existing installation, add the `--force-reinstall` flag):

```bash
pip3 install -v 'git+https://gitee.com/ascend/tools.git#egg=aclruntime&subdirectory=ais-bench_workload/tool/ais_bench/backend' -i https://pypi.tuna.tsinghua.edu.cn/simple
```

2.2 Installing the ais_bench inference package

Run the following command in the target environment to install the ais_bench inference package:

```bash
pip3 install -v 'git+https://gitee.com/ascend/tools.git#egg=ais_bench&subdirectory=ais-bench_workload/tool/ais_bench' -i https://pypi.tuna.tsinghua.edu.cn/simple
```

To uninstall or update (optional):

```bash
# Uninstall aclruntime
pip3 uninstall aclruntime
# Uninstall the ais_bench inference package
pip3 uninstall ais_bench
```

3 Standalone-code inference test

```bash
# 1. Enter the yolo runtime environment (as a regular user)
conda activate yolo
# 2. Activate atc (verify with `atc --help`)
source ~/.bashrc
```

Note: ais_bench is invoked in almost the same way as onnxruntime, so existing onnxruntime scripts are a good starting point for writing your own.

The code below runs the full pipeline: pre-processing -> inference -> post-processing -> drawing. Assuming the image is resized to 640×640:

- Pre-processing output shape: (1, 3, 640, 640).
- YOLOv5/6/7 inference output shape: (1, 8400×3, 85), where 85 = 4 box coordinates + 1 confidence score + 80 class probabilities, and 8400×3 = (80×80 + 40×40 + 20×20)×3. Unlike v8/v9, which take the largest class probability as the confidence score, these models output an explicit confidence score.
- YOLOv8/9/11 inference output shape: (1, 84, 8400), where 84 = 4 box coordinates + 80 class probabilities and 8400 = 80×80 + 40×40 + 20×20.
- YOLOv10 inference output shape: (1, 300, 6), where 300 is the default number of outputs (no NMS needed; threshold filtering suffices) and 6 = 4 box coordinates + confidence score + class.
- Post-processing output shape: (5, 6), where 5 is the number of targets detected in bus.jpg and 6 is (x1, y1, x2, y2, conf, cls).

The complete code follows. Create YOLO_ais_bench_det_aipp.py with the following contents:

```python
import argparse
import os
import time

import cv2
import numpy as np
from ais_bench.infer.interface import InferSession


class YOLO:
    """YOLO object detection model class for handling inference."""

    def __init__(self, om_model, imgsz=(640, 640), device_id=0, model_ndtype=np.single,
                 mode='static', postprocess_type='v8', aipp=False):
        """
        Initialization.

        Args:
            om_model (str): Path to the om model.
        """
        # Build the ais_bench inference session
        self.session = InferSession(device_id=device_id, model_path=om_model)

        # Numpy dtype: supports both FP32 (np.single) and FP16 (np.half) om models
        self.ndtype = model_ndtype
        self.mode = mode
        self.postprocess_type = postprocess_type
        self.aipp = aipp
        self.model_height, self.model_width = imgsz[0], imgsz[1]  # image resize size

    def __call__(self, im0, conf_threshold=0.4, iou_threshold=0.45):
        """
        The whole pipeline: pre-process -> inference -> post-process.

        Args:
            im0 (Numpy.ndarray): original input image.
            conf_threshold (float): confidence threshold for filtering predictions.
            iou_threshold (float): iou threshold for NMS.

        Returns:
            boxes (List): list of bounding boxes.
        """
        # Pre-process
        t1 = time.time()
        im, ratio, (pad_w, pad_h) = self.preprocess(im0)
        pre_time = round(time.time() - t1, 3)

        # Inference
        t2 = time.time()
        preds = self.session.infer([im], mode=self.mode)[0]  # mode: dymshape (dynamic) or static
        det_time = round(time.time() - t2, 3)

        # Post-process
        t3 = time.time()
        if self.postprocess_type == 'v5':
            boxes = self.postprocess_v5(preds,
                                        im0=im0,
                                        ratio=ratio,
                                        pad_w=pad_w,
                                        pad_h=pad_h,
                                        conf_threshold=conf_threshold,
                                        iou_threshold=iou_threshold,
                                        )
        elif self.postprocess_type == 'v8':
            boxes = self.postprocess_v8(preds,
                                        im0=im0,
                                        ratio=ratio,
                                        pad_w=pad_w,
                                        pad_h=pad_h,
                                        conf_threshold=conf_threshold,
                                        iou_threshold=iou_threshold,
                                        )
        elif self.postprocess_type == 'v10':
            boxes = self.postprocess_v10(preds,
                                         im0=im0,
                                         ratio=ratio,
                                         pad_w=pad_w,
                                         pad_h=pad_h,
                                         conf_threshold=conf_threshold
                                         )
        else:
            boxes = []
        post_time = round(time.time() - t3, 3)

        return boxes, (pre_time, det_time, post_time)

    # Pre-processing: resize and pad, then HWC -> CHW, BGR -> RGB, normalize, add a batch
    # dimension (CHW -> BCHW); AIPP-accelerated pre-processing can be enabled instead.
    def preprocess(self, img):
        """
        Pre-processes the input image.

        Args:
            img (Numpy.ndarray): image about to be processed.

        Returns:
            img_process (Numpy.ndarray): image preprocessed for inference.
            ratio (tuple): width, height ratios in letterbox.
            pad_w (float): width padding in letterbox.
            pad_h (float): height padding in letterbox.
        """
        # Resize and pad input image using letterbox() (borrowed from Ultralytics)
        shape = img.shape[:2]  # original image shape
        new_shape = (self.model_height, self.model_width)
        r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
        ratio = r, r
        new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
        pad_w, pad_h = (new_shape[1] - new_unpad[0]) / 2, (new_shape[0] - new_unpad[1]) / 2  # wh padding
        if shape[::-1] != new_unpad:  # resize
            img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
        top, bottom = int(round(pad_h - 0.1)), int(round(pad_h + 0.1))
        left, right = int(round(pad_w - 0.1)), int(round(pad_w + 0.1))
        img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=(114, 114, 114))  # pad

        # If AIPP pre-processing is enabled (configured during atc conversion), stop here
        if self.aipp:
            return img, ratio, (pad_w, pad_h)

        # Transforms: HWC to CHW -> BGR to RGB -> div(255) -> contiguous -> add axis (optional)
        img = np.ascontiguousarray(np.einsum('HWC->CHW', img)[::-1], dtype=self.ndtype) / 255.0
        img_process = img[None] if len(img.shape) == 3 else img
        return img_process, ratio, (pad_w, pad_h)

    # Shared post-processing for YOLOv5/6/7: threshold filtering and NMS
    def postprocess_v5(self, preds, im0, ratio, pad_w, pad_h, conf_threshold, iou_threshold):
        """
        Post-process the prediction.

        Args:
            preds (Numpy.ndarray): predictions from session.infer().
            im0 (Numpy.ndarray): [h, w, c] original input image.
            ratio (tuple): width, height ratios in letterbox.
            pad_w (float): width padding in letterbox.
            pad_h (float): height padding in letterbox.
            conf_threshold (float): conf threshold.
            iou_threshold (float): iou threshold.

        Returns:
            boxes (List): list of bounding boxes.
        """
        # (Batch_size, Num_anchors, xywh_score_conf_cls); in v5/v6, [..., 4] is the confidence
        # score, whereas v8/v9 take the largest class probability as the confidence score
        x = preds  # outputs: predictions (1, 8400*3, 85)

        # Predictions filtering by conf-threshold
        x = x[x[..., 4] > conf_threshold]

        # Create a new matrix which merges (box, score, cls) into one
        # For more details about numpy.c_(): https://numpy.org/doc/1.26/reference/generated/numpy.c_.html
        x = np.c_[x[..., :4], x[..., 4], np.argmax(x[..., 5:], axis=-1)]

        # NMS filtering
        # Values after NMS: np.array([[x, y, w, h, conf, cls], ...]), shape (-1, 4 + 1 + 1)
        x = x[cv2.dnn.NMSBoxes(x[:, :4], x[:, 4], conf_threshold, iou_threshold)]

        # Rescale the bounding boxes in preparation for drawing
        if len(x) > 0:
            # Bounding boxes format change: cxcywh -> xyxy
            x[..., [0, 1]] -= x[..., [2, 3]] / 2
            x[..., [2, 3]] += x[..., [0, 1]]

            # Rescale bounding boxes from model shape (model_height, model_width) to the shape of the original image
            x[..., :4] -= [pad_w, pad_h, pad_w, pad_h]
            x[..., :4] /= min(ratio)

            # Bounding boxes boundary clamp
            x[..., [0, 2]] = x[:, [0, 2]].clip(0, im0.shape[1])
            x[..., [1, 3]] = x[:, [1, 3]].clip(0, im0.shape[0])

            return x[..., :6]  # boxes
        else:
            return []

    # Shared post-processing for YOLOv8/9/11: threshold filtering and NMS
    def postprocess_v8(self, preds, im0, ratio, pad_w, pad_h, conf_threshold, iou_threshold):
        """
        Post-process the prediction.

        Args:
            preds (Numpy.ndarray): predictions from session.infer().
            im0 (Numpy.ndarray): [h, w, c] original input image.
            ratio (tuple): width, height ratios in letterbox.
            pad_w (float): width padding in letterbox.
            pad_h (float): height padding in letterbox.
            conf_threshold (float): conf threshold.
            iou_threshold (float): iou threshold.

        Returns:
            boxes (List): list of bounding boxes.
        """
        x = preds  # outputs: predictions (1, 84, 8400)

        # Transpose the first output: (Batch_size, xywh_conf_cls, Num_anchors) -> (Batch_size, Num_anchors, xywh_conf_cls)
        x = np.einsum('bcn->bnc', x)  # (1, 8400, 84)

        # Predictions filtering by conf-threshold
        x = x[np.amax(x[..., 4:], axis=-1) > conf_threshold]

        # Create a new matrix which merges (box, score, cls) into one
        # For more details about numpy.c_(): https://numpy.org/doc/1.26/reference/generated/numpy.c_.html
        x = np.c_[x[..., :4], np.amax(x[..., 4:], axis=-1), np.argmax(x[..., 4:], axis=-1)]

        # NMS filtering
        # Values after NMS: np.array([[x, y, w, h, conf, cls], ...]), shape (-1, 4 + 1 + 1)
        x = x[cv2.dnn.NMSBoxes(x[:, :4], x[:, 4], conf_threshold, iou_threshold)]

        # Rescale the bounding boxes in preparation for drawing
        if len(x) > 0:
            # Bounding boxes format change: cxcywh -> xyxy
            x[..., [0, 1]] -= x[..., [2, 3]] / 2
            x[..., [2, 3]] += x[..., [0, 1]]

            # Rescale bounding boxes from model shape (model_height, model_width) to the shape of the original image
            x[..., :4] -= [pad_w, pad_h, pad_w, pad_h]
            x[..., :4] /= min(ratio)

            # Bounding boxes boundary clamp
            x[..., [0, 2]] = x[:, [0, 2]].clip(0, im0.shape[1])
            x[..., [1, 3]] = x[:, [1, 3]].clip(0, im0.shape[0])

            return x[..., :6]  # boxes
        else:
            return []

    # YOLOv10 post-processing: threshold filtering only, no NMS
    def postprocess_v10(self, preds, im0, ratio, pad_w, pad_h, conf_threshold):
        x = preds  # outputs: predictions (1, 300, 6) -> (xyxy_conf_cls)

        # Predictions filtering by conf-threshold
        x = x[x[..., 4] > conf_threshold]

        # Rescale the bounding boxes in preparation for drawing
        if len(x) > 0:
            # Rescale bounding boxes from model shape (model_height, model_width) to the shape of the original image
            x[..., :4] -= [pad_w, pad_h, pad_w, pad_h]
            x[..., :4] /= min(ratio)

            # Bounding boxes boundary clamp
            x[..., [0, 2]] = x[:, [0, 2]].clip(0, im0.shape[1])
            x[..., [1, 3]] = x[:, [1, 3]].clip(0, im0.shape[0])

            return x  # boxes
        else:
            return []


if __name__ == '__main__':
    # Create an argument parser to handle command-line arguments
    parser = argparse.ArgumentParser()
    parser.add_argument('--det_model', type=str, default=r'yolov8s.om', help='Path to OM model')
    parser.add_argument('--source', type=str, default=r'images', help='Path to input images')
    parser.add_argument('--out_path', type=str, default=r'results', help='Folder for saving results')
    parser.add_argument('--imgsz_det', type=tuple, default=(640, 640), help='Image input size')
    parser.add_argument('--classes', type=list,
                        default=['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat',
                                 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat',
                                 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack',
                                 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard',
                                 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
                                 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
                                 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
                                 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet',
                                 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven',
                                 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
                                 'hair drier', 'toothbrush'],
                        help='Class names')
    parser.add_argument('--conf', type=float, default=0.25, help='Confidence threshold')
    parser.add_argument('--iou', type=float, default=0.6, help='NMS IoU threshold')
    parser.add_argument('--device_id', type=int, default=0, help='device id')
    parser.add_argument('--mode', default='static', help='om mode: dymshape (dynamic) or static')
    parser.add_argument('--model_ndtype', default=np.single, help='om dtype: fp32 or fp16')
    parser.add_argument('--postprocess_type', type=str, default='v8', help='post-processing style: v5/v8/v10')
    parser.add_argument('--aipp', default=False, action='store_true',
                        help='enable AIPP-accelerated YOLO pre-processing (must be built into the om during atc conversion)')
    args = parser.parse_args()

    # Create the output folder
    if not os.path.exists(args.out_path):
        os.mkdir(args.out_path)
    print('Start running')

    # Build model
    det_model = YOLO(args.det_model, args.imgsz_det, args.device_id, args.model_ndtype, args.mode,
                     args.postprocess_type, args.aipp)
    color_palette = np.random.uniform(0, 255, size=(len(args.classes), 3))  # one color per class

    for i, img_name in enumerate(os.listdir(args.source)):
        try:
            t1 = time.time()
            # Read image by OpenCV
            img = cv2.imread(os.path.join(args.source, img_name))

            # Inference
            boxes, (pre_time, det_time, post_time) = det_model(img, conf_threshold=args.conf, iou_threshold=args.iou)
            print('{}/{} total: {:.3f}s, pre-process: {:.3f}s, inference: {:.3f}s, post-process: {:.3f}s, '
                  '{} targets detected'.format(i + 1, len(os.listdir(args.source)), time.time() - t1,
                                               pre_time, det_time, post_time, len(boxes)))

            # Draw rectangles
            for (*box, conf, cls_) in boxes:
                cv2.rectangle(img, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])),
                              color_palette[int(cls_)], 2, cv2.LINE_AA)
                cv2.putText(img, f'{args.classes[int(cls_)]}: {conf:.3f}', (int(box[0]), int(box[1] - 9)),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv2.LINE_AA)
            cv2.imwrite(os.path.join(args.out_path, img_name), img)
        except Exception as e:
            print(e)
```

The visualized detection results are essentially identical to inference on a GPU.

4 Inference time

Inference times for each YOLO model (640×640):

- YOLOv5s: 8–9 ms
- YOLOv7-tiny: 7–8 ms
- YOLOv7: 14 ms
- YOLOv8s: 6 ms
- YOLOv9s: 12 ms
- YOLOv10s: 6 ms
- YOLOv11s: 8 ms

Pre-processing takes about 12 ms (bus.jpg); post-processing takes 1–2 ms for all models except YOLOv10, which needs essentially none.

Note: the timings above do not use AIPP to accelerate pre-processing. With AIPP enabled, pre-processing plus inference for YOLOv8s takes about 6–7 ms.
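To make the v8-style decode concrete, here is a minimal, self-contained NumPy sketch of the (1, 84, N) -> (M, 6) path on a synthetic head output. The greedy `nms_xywh` helper is a hypothetical stand-in for `cv2.dnn.NMSBoxes` so the sketch has no OpenCV dependency; the shapes and thresholds mirror the script above, but the tensor values are made up for illustration.

```python
import numpy as np

def nms_xywh(boxes, scores, iou_thres):
    """Greedy NMS on (cx, cy, w, h) boxes; returns the kept row indices."""
    # Convert to corner form for IoU computation
    x1 = boxes[:, 0] - boxes[:, 2] / 2
    y1 = boxes[:, 1] - boxes[:, 3] / 2
    x2 = x1 + boxes[:, 2]
    y2 = y1 + boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the best box against the rest
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thres]  # drop overlapping boxes
    return keep

def decode_v8(preds, conf_thres=0.25, iou_thres=0.6):
    """(1, 84, N) head output -> (M, 6) array of (cx, cy, w, h, conf, cls)."""
    x = np.einsum('bcn->bnc', preds)[0]          # (N, 84)
    conf = x[:, 4:].max(axis=-1)                 # best class probability = confidence
    x = np.c_[x[:, :4], conf, x[:, 4:].argmax(axis=-1)][conf > conf_thres]
    keep = nms_xywh(x[:, :4], x[:, 4], iou_thres)
    return x[keep]

# Synthetic head output: 3 anchors, 80 classes (4 + 80 = 84 channels)
preds = np.zeros((1, 84, 3), dtype=np.float32)
preds[0, :4, 0] = [100, 100, 50, 50]; preds[0, 4, 0] = 0.9   # class 0, strong
preds[0, :4, 1] = [104, 102, 50, 50]; preds[0, 4, 1] = 0.8   # near-duplicate of anchor 0
preds[0, :4, 2] = [300, 300, 40, 40]; preds[0, 5, 2] = 0.7   # class 1, well separated
out = decode_v8(preds)
print(out)  # two rows survive: the near-duplicate of the first box is suppressed by NMS
```

The same shape juggling (`einsum` transpose, `amax`/`argmax` over the class channels, `np.c_` merge) is exactly what `postprocess_v8` does before handing the boxes to NMS.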
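The final rescaling step shared by every post-process (subtract the letterbox padding, divide by the resize ratio, clip to the image bounds) can also be checked in isolation. A small sketch, assuming a hypothetical 1280×720 source image letterboxed to 640×640 (which gives r = 0.5, pad_w = 0, pad_h = 140):

```python
import numpy as np

def letterbox_params(src_h, src_w, dst_h=640, dst_w=640):
    """Resize ratio and padding produced by the letterbox pre-process."""
    r = min(dst_h / src_h, dst_w / src_w)
    new_w, new_h = round(src_w * r), round(src_h * r)
    pad_w, pad_h = (dst_w - new_w) / 2, (dst_h - new_h) / 2
    return r, pad_w, pad_h

def boxes_to_original(xyxy, r, pad_w, pad_h, src_h, src_w):
    """Map xyxy boxes from model (letterboxed) space back to the source image."""
    xyxy = (xyxy - [pad_w, pad_h, pad_w, pad_h]) / r
    xyxy[:, [0, 2]] = xyxy[:, [0, 2]].clip(0, src_w)   # clamp x to image width
    xyxy[:, [1, 3]] = xyxy[:, [1, 3]].clip(0, src_h)   # clamp y to image height
    return xyxy

src_h, src_w = 720, 1280
r, pad_w, pad_h = letterbox_params(src_h, src_w)
boxes = np.array([[0.0, 140.0, 640.0, 500.0]])  # box covering the whole padded content area
print(boxes_to_original(boxes, r, pad_w, pad_h, src_h, src_w))  # x1, y1, x2, y2 -> 0, 0, 1280, 720
```

A box that spans the entire unpadded region of the 640×640 model input maps back to the full 1280×720 frame, which is the invariant the `x[..., :4] -= [pad_w, pad_h, pad_w, pad_h]; x[..., :4] /= min(ratio)` lines in the script rely on.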