This article is a detailed hands-on tutorial on developing and building an object detection model based on YOLOv4. Related detailed hands-on tutorials have already been published as a series in earlier posts; if you are interested, feel free to read them:
"A detailed hands-on tutorial on developing an instance segmentation model based on yolov7"
"A detailed tutorial on building a YOLOv7 model from scratch on your own dataset, with complete training and inference"
"A detailed tutorial on developing an object detection model with DETR (DEtection TRansformer) on a custom dataset"
"A detailed hands-on tutorial on developing an instance segmentation model based on yolov5-v7.0"
"A detailed tutorial on building the lightweight YOLOv5-Lite model from scratch on your own dataset [weld quality inspection]"
"A detailed tutorial on building the lightweight NanoDet model from scratch on your own dataset [phone-call detection]"
"A detailed tutorial on building your own image recognition model with the new YOLOv5-v6.2 release"
"A detailed tutorial on building an object detection model with YOLOv5-v6.1/2 on a custom dataset [marine organism detection]"
"A detailed tutorial on the complete training and inference workflow of the ultra-lightweight detector Yolo-FastestV2 on a custom dataset [handwritten Chinese character detection]"
"A detailed tutorial on developing and building an object detection model with YOLOv8 [using weld seam quality inspection data as an example]"
When I first worked with v3 and v4, as I recall, model training was based on the Darknet framework and models were configured entirely through cfg files; it was only from v5 onward that the projects moved fully to PyTorch, which has continued to this day.
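To make the cfg format concrete, here is a minimal, illustrative Python sketch (my own helper for illustration, not the Darknet implementation itself) that parses such a file into a list of per-section option dictionaries:

def parse_darknet_cfg(path):
    # minimal sketch: read a Darknet-style cfg into [{"type": ..., "options": {...}}, ...]
    sections = []
    with open(path, "r", encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            if line.startswith("[") and line.endswith("]"):
                sections.append({"type": line[1:-1], "options": {}})
            elif "=" in line and sections:
                key, value = line.split("=", 1)
                sections[-1]["options"][key.strip()] = value.strip()
    return sections

# e.g. parse_darknet_cfg("cfg/yolov4-tiny.cfg")[0] gives the [net] section and its hyperparameters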
yolov4.cfg is as follows:
[net]
batch=64
subdivisions=8
# Training
#width=512
#height=512
width=608
height=608
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.0013
burn_in=1000
max_batches = 500500
policy=steps
steps=400000,450000
scales=.1,.1

#cutmix=1
mosaic=1

#:104x104 54:52x52 85:26x26 104:13x13 for 416

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-7

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-10

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=256
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-28

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=512
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-28

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-16

[convolutional]
batch_normalize=1
filters=1024
size=1
stride=1
pad=1
activation=mish

##########################

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

### SPP ###

[maxpool]
stride=1
size=5

[route]
layers=-2

[maxpool]
stride=1
size=9

[route]
layers=-4

[maxpool]
stride=1
size=13

[route]
layers=-1,-3,-5,-6

### End SPP ###

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = 85

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -1, -3

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = 54

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -1, -3

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

##########################

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 0,1,2
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.2
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

[route]
layers = -4

[convolutional]
batch_normalize=1
size=3
stride=2
pad=1
filters=256
activation=leaky

[route]
layers = -1, -16

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 3,4,5
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.1
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

[route]
layers = -4

[convolutional]
batch_normalize=1
size=3
stride=2
pad=1
filters=512
activation=leaky

[route]
layers = -1, -37

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 6,7,8
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5
yolov4-tiny.cfg is as follows:
[net]
# Testing
#batch=1
#subdivisions=1
# Training
batch=64
subdivisions=1
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.00261
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[route]
layers=-1
groups=2
group_id=1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[route]
layers = -1,-2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -6,-1

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[route]
layers=-1
groups=2
group_id=1

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[route]
layers = -1,-2

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -6,-1

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[route]
layers=-1
groups=2
group_id=1

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[route]
layers = -1,-2

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -6,-1

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

##################################

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=80
num=6
jitter=.3
scale_x_y = 1.05
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
ignore_thresh = .7
truth_thresh = 1
random=0
resize=1.5
nms_kind=greedynms
beta_nms=0.6

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 23

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 1,2,3
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=80
num=6
jitter=.3
scale_x_y = 1.05
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
ignore_thresh = .7
truth_thresh = 1
random=0
resize=1.5
nms_kind=greedynms
beta_nms=0.6
At first I quite liked this format: it is very concise, and training directly with the Darknet framework is also convenient. Later, as the models were improved and various components were swapped out, Darknet became less and less suitable. YOLOv4's positioning feels a bit awkward compared with v3 and v5. Searching GitHub for yolov4, the top-ranked project is pytorch-YOLOv4, but judging from its README it is only a minimal implementation. The official implementation, if you look at it carefully, actually provides both a YOLOv3-style project and a YOLOv5-style project. This article uses the YOLOv3-style YOLOv4 project as the baseline to walk through the full hands-on workflow.
First, download the required project and unzip it locally. Then download the two weights files (they are easy to find online) and place them in the weights directory. Next, copy a dataset from one of your previous yolov5 projects into the current project directory. I happened to have just finished a steel-defect detection project based on yolov5, so that dataset could be reused directly; if you do not have a ready-made dataset, you can follow the steps in my earlier detailed yolov5 tutorials to build one yourself.
Here I chose the yolov4-tiny model for development and training, mainly so that computation is faster.
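For reference, the data/self.yaml file that train.py points to below is a yolov5-style dataset description. The following is only an illustrative sketch; the paths, class count, and class names are placeholders to be replaced with your own dataset's values:

# data/self.yaml (illustrative placeholder values)
train: ./dataset/images/train   # directory (or list file) of training images
val: ./dataset/images/val       # directory (or list file) of validation images
nc: 6                           # number of classes in your dataset
names: ['defect_1', 'defect_2', 'defect_3', 'defect_4', 'defect_5', 'defect_6']  # class names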
Modify the contents of train.py as follows:
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default='weights/yolov4-tiny.weights', help='initial weights path')
parser.add_argument('--cfg', type=str, default='cfg/yolov4-tiny.cfg', help='model.yaml path')
parser.add_argument('--data', type=str, default='data/self.yaml', help='data.yaml path')
parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path')
parser.add_argument('--epochs', type=int, default=100)
parser.add_argument('--batch-size', type=int, default=8, help='total batch size for all GPUs')
parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
parser.add_argument('--rect', action='store_true', help='rectangular training')
parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
parser.add_argument('--notest', action='store_true', help='only test final epoch')
parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')
parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
parser.add_argument('--log-imgs', type=int, default=16, help='number of images for W&B logging, max 100')
parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
parser.add_argument('--project', default='runs/train', help='save to project/name')
parser.add_argument('--name', default='exp', help='save to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
opt = parser.parse_args()
Then simply run the following from the terminal:
python train.py
Alternatively, you can launch training by specifying the arguments explicitly on the command line:
python train.py --device 0 --batch-size 16 --img 640 640 --data self.yaml --cfg cfg/yolov4-tiny.cfg --weights weights/yolov4-tiny.weights --name yolov4-tiny
Choose whichever form you prefer.
Once training starts, the terminal prints the training log, and after it finishes we can look at the result files. At a glance, the result files differ quite a lot from a yolov5 project: the only evaluation plot produced is a PR curve, so if you are writing a paper it is probably better to use yolov5.
The outputs include the PR curve, the training visualizations, the label-distribution visualizations, and the weights directory. The weights directory also differs a lot from a yolov5 project: yolov5 keeps only two .pt files (the best and the latest), whereas this yolov4 project saved 19 checkpoint files, which is very thorough, somewhat like yolov7 but with even more variants than v7. If you are interested, you can follow the steps above to develop and build your own object detection model.
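After training, you will likely also want to run inference. This YOLOv3-style project ships test.py and detect.py scripts in the ultralytics layout, which can be pointed at the trained weights in much the same way as train.py. The following is only a hedged sketch: it assumes the run was saved under runs/train/yolov4-tiny and that a best.pt checkpoint exists there, and the exact flag names may differ slightly between branches of the project, so check detect.py's own argparse definitions before running it:

python detect.py --weights runs/train/yolov4-tiny/weights/best.pt --cfg cfg/yolov4-tiny.cfg --source ./test_images --img-size 640 --device 0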