

Building a SqueezeNet Network with Caffe

I previously built SqueezeNet in PyTorch, which I personally find the most convenient framework to work with, but some projects specifically require a Caffe model, so this post builds the same SqueezeNet network in Caffe.

Data preparation

First, the data has to be prepared. Unlike PyTorch, which only needs the dataset's root directory and reads the images from it directly, Caffe reads its training data from a txt file that lists each image's path together with its class label. Here is a script that generates this txt file:

import os
import random

folder = 'cotta'  # dataset directory (relative path)
names = os.listdir(folder)

f1 = open('/train_txt/train_cotta.txt', 'a')  # path of the generated txt
f2 = open('/train_txt/test_water_workcloth.txt', 'a')

for name in names:
    imgnames = os.listdir(os.path.join(folder, name))
    random.shuffle(imgnames)
    numimg = len(imgnames)
    for i in range(numimg):
        f1.write('%s %s\n' % (os.path.join(folder, name, imgnames[i]), name[0]))
        # To split off the last 10% as a test set instead:
        # if i < int(0.9 * numimg):
        #     f1.write('%s %s\n' % (os.path.join(folder, name, imgnames[i]), name[0]))
        # else:
        #     f2.write('%s %s\n' % (os.path.join(folder, name, imgnames[i]), name[0]))
# f2.close()
f1.close()

The dataset layout is the same as for PyTorch: each class's images sit in their own directory, and the directory name is the class name (here its first character is the class index). The script lives at the same level as the dataset directory. After running it, the generated txt looks like:

/cotta/0_other/0_1_391_572_68_68.jpg 0
/cotta/1_longSleeves/9605_1_5_565_357_82_70.jpg 1
/cotta/2_cotta/713_0.99796_1_316_162_96_87.jpg 2
......
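Before handing the generated txt to Caffe it is worth a quick sanity check: every line must be exactly `<path> <integer label>`, or the ImageData layer will fail at startup. A minimal sketch of such a check (the `check_list` helper and the inline sample lines are my own illustration, not part of the original script):

```python
from collections import Counter

def check_list(lines):
    """Validate '<path> <integer label>' lines and return per-class counts."""
    counts = Counter()
    for n, line in enumerate(lines, 1):
        parts = line.split()
        if len(parts) != 2 or not parts[1].isdigit():
            raise ValueError('malformed line %d: %r' % (n, line))
        counts[int(parts[1])] += 1
    return counts

# Sample lines in the same format the generator script produces.
sample = [
    '/cotta/0_other/0_1_391_572_68_68.jpg 0',
    '/cotta/1_longSleeves/9605_1_5_565_357_82_70.jpg 1',
    '/cotta/2_cotta/713_0.99796_1_316_162_96_87.jpg 2',
]
per_class = check_list(sample)  # one image per class in this sample
```

A heavily skewed `per_class` count here is also an early warning of class imbalance before any training time is spent.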
Each line is the image's relative path followed by its class label.

Network definition: trainval.prototxt

layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  transform_param {
    mirror: true
    crop_size: 96
  }
  image_data_param {
    source: "/train_txt/train_cotta.txt"  # relative path of the generated txt
    root_folder: "/data/"                 # path of the directory holding the dataset
    batch_size: 64
    shuffle: true
    new_height: 96
    new_width: 96
  }
}
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" convolution_param { num_output: 96 kernel_size: 3 stride: 1 pad: 1 weight_filler { type: "xavier" } } }
layer { name: "BatchNorm1" type: "BatchNorm" bottom: "conv1" top: "BatchNorm1" }
layer { name: "relu_conv1" type: "ReLU" bottom: "BatchNorm1" top: "BatchNorm1" }
layer { name: "pool1" type: "Pooling" bottom: "BatchNorm1" top: "pool1" pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layer { name: "fire2/squeeze1x1" type: "Convolution" bottom: "pool1" top: "fire2/squeeze1x1" convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire2/bn_squeeze1x1" type: "BatchNorm" bottom: "fire2/squeeze1x1" top: "fire2/bn_squeeze1x1" }
layer { name: "fire2/relu_squeeze1x1" type: "ReLU" bottom: "fire2/bn_squeeze1x1" top: "fire2/bn_squeeze1x1" }
layer { name: "fire2/expand1x1" type: "Convolution" bottom: "fire2/bn_squeeze1x1" top: "fire2/expand1x1" convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire2/bn_expand1x1" type: "BatchNorm" bottom: "fire2/expand1x1" top: "fire2/bn_expand1x1" }
layer { name: "fire2/relu_expand1x1" type: "ReLU" bottom: "fire2/bn_expand1x1" top: "fire2/bn_expand1x1" }
layer { name: "fire2/expand3x3" type: "Convolution" bottom: "fire2/bn_expand1x1" top: "fire2/expand3x3" convolution_param { num_output: 64 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire2/bn_expand3x3" type: "BatchNorm" bottom: "fire2/expand3x3" top: "fire2/bn_expand3x3" }
layer { name: "fire2/relu_expand3x3" type: "ReLU" bottom: "fire2/bn_expand3x3" top: "fire2/bn_expand3x3" }
layer { name: "fire2/concat" type: "Concat" bottom: "fire2/bn_expand1x1" bottom: "fire2/bn_expand3x3" top: "fire2/concat" }
# fire2 ends: 128 channels
layer { name: "fire3/squeeze1x1" type: "Convolution" bottom: "fire2/concat" top: "fire3/squeeze1x1" convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire3/bn_squeeze1x1" type: "BatchNorm" bottom: "fire3/squeeze1x1" top: "fire3/bn_squeeze1x1" }
layer { name: "fire3/relu_squeeze1x1" type: "ReLU" bottom: "fire3/bn_squeeze1x1" top: "fire3/bn_squeeze1x1" }
layer { name: "fire3/expand1x1" type: "Convolution" bottom: "fire3/bn_squeeze1x1" top: "fire3/expand1x1" convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire3/bn_expand1x1" type: "BatchNorm" bottom: "fire3/expand1x1" top: "fire3/bn_expand1x1" }
layer { name: "fire3/relu_expand1x1" type: "ReLU" bottom: "fire3/bn_expand1x1" top: "fire3/bn_expand1x1" }
layer { name: "fire3/expand3x3" type: "Convolution" bottom: "fire3/bn_expand1x1" top: "fire3/expand3x3" convolution_param { num_output: 64 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire3/bn_expand3x3" type: "BatchNorm" bottom: "fire3/expand3x3" top: "fire3/bn_expand3x3" }
layer { name: "fire3/relu_expand3x3" type: "ReLU" bottom: "fire3/bn_expand3x3" top: "fire3/bn_expand3x3" }
layer { name: "fire3/concat" type: "Concat" bottom: "fire3/bn_expand1x1" bottom: "fire3/bn_expand3x3" top: "fire3/concat" }
# fire3 ends: 128 channels
layer { name: "bypass_23" type: "Eltwise" bottom: "fire2/concat" bottom: "fire3/concat" top: "fire3_EltAdd" }
layer { name: "fire4/squeeze1x1" type: "Convolution" bottom: "fire3_EltAdd" top: "fire4/squeeze1x1" convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire4/bn_squeeze1x1" type: "BatchNorm" bottom: "fire4/squeeze1x1" top: "fire4/bn_squeeze1x1" }
layer { name: "fire4/relu_squeeze1x1" type: "ReLU" bottom: "fire4/bn_squeeze1x1" top: "fire4/bn_squeeze1x1" }
layer { name: "fire4/expand1x1" type: "Convolution" bottom: "fire4/bn_squeeze1x1" top: "fire4/expand1x1" convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire4/bn_expand1x1" type: "BatchNorm" bottom: "fire4/expand1x1" top: "fire4/bn_expand1x1" }
layer { name: "fire4/relu_expand1x1" type: "ReLU" bottom: "fire4/bn_expand1x1" top: "fire4/bn_expand1x1" }
layer { name: "fire4/expand3x3" type: "Convolution" bottom: "fire4/bn_expand1x1" top: "fire4/expand3x3" convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire4/bn_expand3x3" type: "BatchNorm" bottom: "fire4/expand3x3" top: "fire4/bn_expand3x3" }
layer { name: "fire4/relu_expand3x3" type: "ReLU" bottom: "fire4/bn_expand3x3" top: "fire4/bn_expand3x3" }
layer { name: "fire4/concat" type: "Concat" bottom: "fire4/bn_expand1x1" bottom: "fire4/bn_expand3x3" top: "fire4/concat" }
# fire4 ends: 256 channels
layer { name: "pool4" type: "Pooling" bottom: "fire4/concat" top: "pool4" pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
# fire4 ends: 256 channels / pooled
layer { name: "fire5/squeeze1x1" type: "Convolution" bottom: "pool4" top: "fire5/squeeze1x1" convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire5/bn_squeeze1x1" type: "BatchNorm" bottom: "fire5/squeeze1x1" top: "fire5/bn_squeeze1x1" }
layer { name: "fire5/relu_squeeze1x1" type: "ReLU" bottom: "fire5/bn_squeeze1x1" top: "fire5/bn_squeeze1x1" }
layer { name: "fire5/expand1x1" type: "Convolution" bottom: "fire5/bn_squeeze1x1" top: "fire5/expand1x1" convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire5/bn_expand1x1" type: "BatchNorm" bottom: "fire5/expand1x1" top: "fire5/bn_expand1x1" }
layer { name: "fire5/relu_expand1x1" type: "ReLU" bottom: "fire5/bn_expand1x1" top: "fire5/bn_expand1x1" }
layer { name: "fire5/expand3x3" type: "Convolution" bottom: "fire5/bn_expand1x1" top: "fire5/expand3x3" convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire5/bn_expand3x3" type: "BatchNorm" bottom: "fire5/expand3x3" top: "fire5/bn_expand3x3" }
layer { name: "fire5/relu_expand3x3" type: "ReLU" bottom: "fire5/bn_expand3x3" top: "fire5/bn_expand3x3" }
layer { name: "fire5/concat" type: "Concat" bottom: "fire5/bn_expand1x1" bottom: "fire5/bn_expand3x3" top: "fire5/concat" }
# fire5 ends: 256 channels
layer { name: "bypass_45" type: "Eltwise" bottom: "pool4" bottom: "fire5/concat" top: "fire5_EltAdd" }
layer { name: "fire6/squeeze1x1" type: "Convolution" bottom: "fire5_EltAdd" top: "fire6/squeeze1x1" convolution_param { num_output: 48 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire6/bn_squeeze1x1" type: "BatchNorm" bottom: "fire6/squeeze1x1" top: "fire6/bn_squeeze1x1" }
layer { name: "fire6/relu_squeeze1x1" type: "ReLU" bottom: "fire6/bn_squeeze1x1" top: "fire6/bn_squeeze1x1" }
layer { name: "fire6/expand1x1" type: "Convolution" bottom: "fire6/bn_squeeze1x1" top: "fire6/expand1x1" convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire6/bn_expand1x1" type: "BatchNorm" bottom: "fire6/expand1x1" top: "fire6/bn_expand1x1" }
layer { name: "fire6/relu_expand1x1" type: "ReLU" bottom: "fire6/bn_expand1x1" top: "fire6/bn_expand1x1" }
layer { name: "fire6/expand3x3" type: "Convolution" bottom: "fire6/bn_expand1x1" top: "fire6/expand3x3" convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire6/bn_expand3x3" type: "BatchNorm" bottom: "fire6/expand3x3" top: "fire6/bn_expand3x3" }
layer { name: "fire6/relu_expand3x3" type: "ReLU" bottom: "fire6/bn_expand3x3" top: "fire6/bn_expand3x3" }
layer { name: "fire6/concat" type: "Concat" bottom: "fire6/bn_expand1x1" bottom: "fire6/bn_expand3x3" top: "fire6/concat" }
# fire6 ends: 384 channels
layer { name: "fire7/squeeze1x1" type: "Convolution" bottom: "fire6/concat" top: "fire7/squeeze1x1" convolution_param { num_output: 48 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire7/bn_squeeze1x1" type: "BatchNorm" bottom: "fire7/squeeze1x1" top: "fire7/bn_squeeze1x1" }
layer { name: "fire7/relu_squeeze1x1" type: "ReLU" bottom: "fire7/bn_squeeze1x1" top: "fire7/bn_squeeze1x1" }
layer { name: "fire7/expand1x1" type: "Convolution" bottom: "fire7/bn_squeeze1x1" top: "fire7/expand1x1" convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire7/bn_expand1x1" type: "BatchNorm" bottom: "fire7/expand1x1" top: "fire7/bn_expand1x1" }
layer { name: "fire7/relu_expand1x1" type: "ReLU" bottom: "fire7/bn_expand1x1" top: "fire7/bn_expand1x1" }
layer { name: "fire7/expand3x3" type: "Convolution" bottom: "fire7/bn_expand1x1" top: "fire7/expand3x3" convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire7/bn_expand3x3" type: "BatchNorm" bottom: "fire7/expand3x3" top: "fire7/bn_expand3x3" }
layer { name: "fire7/relu_expand3x3" type: "ReLU" bottom: "fire7/bn_expand3x3" top: "fire7/bn_expand3x3" }
layer { name: "fire7/concat" type: "Concat" bottom: "fire7/bn_expand1x1" bottom: "fire7/bn_expand3x3" top: "fire7/concat" }
# fire7 ends: 384 channels
layer { name: "bypass_67" type: "Eltwise" bottom: "fire6/concat" bottom: "fire7/concat" top: "fire7_EltAdd" }
layer { name: "fire8/squeeze1x1" type: "Convolution" bottom: "fire7_EltAdd" top: "fire8/squeeze1x1" convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire8/bn_squeeze1x1" type: "BatchNorm" bottom: "fire8/squeeze1x1" top: "fire8/bn_squeeze1x1" }
layer { name: "fire8/relu_squeeze1x1" type: "ReLU" bottom: "fire8/bn_squeeze1x1" top: "fire8/bn_squeeze1x1" }
layer { name: "fire8/expand1x1" type: "Convolution" bottom: "fire8/bn_squeeze1x1" top: "fire8/expand1x1" convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire8/bn_expand1x1" type: "BatchNorm" bottom: "fire8/expand1x1" top: "fire8/bn_expand1x1" }
layer { name: "fire8/relu_expand1x1" type: "ReLU" bottom: "fire8/bn_expand1x1" top: "fire8/bn_expand1x1" }
layer { name: "fire8/expand3x3" type: "Convolution" bottom: "fire8/bn_expand1x1" top: "fire8/expand3x3" convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire8/bn_expand3x3" type: "BatchNorm" bottom: "fire8/expand3x3" top: "fire8/bn_expand3x3" }
layer { name: "fire8/relu_expand3x3" type: "ReLU" bottom: "fire8/bn_expand3x3" top: "fire8/bn_expand3x3" }
layer { name: "fire8/concat" type: "Concat" bottom: "fire8/bn_expand1x1" bottom: "fire8/bn_expand3x3" top: "fire8/concat" }
# fire8 ends: 512 channels
layer { name: "pool8" type: "Pooling" bottom: "fire8/concat" top: "pool8" pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
# fire8 ends: 512 channels / pooled
layer { name: "fire9/squeeze1x1" type: "Convolution" bottom: "pool8" top: "fire9/squeeze1x1" convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire9/bn_squeeze1x1" type: "BatchNorm" bottom: "fire9/squeeze1x1" top: "fire9/bn_squeeze1x1" }
layer { name: "fire9/relu_squeeze1x1" type: "ReLU" bottom: "fire9/bn_squeeze1x1" top: "fire9/bn_squeeze1x1" }
layer { name: "fire9/expand1x1" type: "Convolution" bottom: "fire9/bn_squeeze1x1" top: "fire9/expand1x1" convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire9/bn_expand1x1" type: "BatchNorm" bottom: "fire9/expand1x1" top: "fire9/bn_expand1x1" }
layer { name: "fire9/relu_expand1x1" type: "ReLU" bottom: "fire9/bn_expand1x1" top: "fire9/bn_expand1x1" }
layer { name: "fire9/expand3x3" type: "Convolution" bottom: "fire9/bn_expand1x1" top: "fire9/expand3x3" convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire9/bn_expand3x3" type: "BatchNorm" bottom: "fire9/expand3x3" top: "fire9/bn_expand3x3" }
layer { name: "fire9/relu_expand3x3" type: "ReLU" bottom: "fire9/bn_expand3x3" top: "fire9/bn_expand3x3" }
layer { name: "fire9/concat" type: "Concat" bottom: "fire9/bn_expand1x1" bottom: "fire9/bn_expand3x3" top: "fire9/concat" }
# fire9 ends: 512 channels
layer { name: "conv10_new" type: "Convolution" bottom: "fire9/concat" top: "conv10" convolution_param { num_output: 3 kernel_size: 1 weight_filler { type: "gaussian" mean: 0.0 std: 0.01 } } }
layer { name: "pool10" type: "Pooling" bottom: "conv10" top: "pool10" pooling_param { pool: AVE global_pooling: true } }
# loss and accuracy
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "pool10" bottom: "label" top: "loss" }  # add include { phase: TRAIN } to restrict this layer to training
layer { name: "accuracy" type: "Accuracy" bottom: "pool10" bottom: "label" top: "accuracy" }  # add include { phase: TEST } to restrict this layer to testing

Set num_output of the last convolution layer, conv10, to your number of classes.

Solver configuration: solver.prototxt

test_iter: 2000  # not subject to iter_size
test_interval: 1000000
# base_lr: 0.0001
base_lr: 0.005  # learning rate
display: 40
# max_iter: 600000
max_iter: 200000  # number of iterations
iter_size: 2  # global batch size = batch_size * iter_size
lr_policy: "poly"
power: 1.0  # linearly decrease LR
momentum: 0.9
weight_decay: 0.0002
snapshot: 10000  # save a snapshot every this many iterations
snapshot_prefix: "/data/zxc/classfication/model/model_cotta/cotta_"  # where models are saved
solver_mode: GPU
random_seed: 42
net: "./trainNets_drive/trainval.prototxt"  # path to the network definition file
test_initialization: false
average_loss: 40

max_iter: Caffe counts iterations, not PyTorch-style epochs. In PyTorch one epoch is a full pass over the training set, while in Caffe one iteration processes a single batch_size of data. One epoch therefore corresponds to len(train_data) / batch_size iterations; when that does not divide evenly, match whatever your PyTorch DataLoader does with the last incomplete batch: round down if it drops it, round up if it keeps it.

snapshot_prefix: the last path component is the filename prefix of every saved snapshot.

Run command

Write the command into a bash script, train.sh:

/home/seg/anaconda3/envs/zxc/bin/caffe train -gpu 1 -solver ./solvers/solver_3.prototxt -weights /data/classfication/model/model_cotta/cotta__iter_200000.caffemodel 2>&1 | tee log_3_4_class.txt

-gpu selects which GPU to use (0 if you only have one); -solver takes the path of the solver configuration file; -weights takes a pretrained model — you can use the official Caffe release of the SqueezeNet pretrained weights; here I resume from a snapshot because training was interrupted.

Once the script is written, run source activate <env-name> to enter the environment, then source train.sh to start training.
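The epoch-to-iteration conversion described above can be sketched as a small helper (`epochs_to_iters` is my own illustration, not part of the original post):

```python
import math

def epochs_to_iters(num_images, batch_size, epochs, drop_last=True):
    """Convert PyTorch-style epochs into Caffe-style iterations.

    One Caffe iteration consumes one batch; drop_last mirrors how a
    PyTorch DataLoader treats the final incomplete batch
    (dropped -> round down, kept -> round up).
    """
    if drop_last:
        iters_per_epoch = num_images // batch_size
    else:
        iters_per_epoch = math.ceil(num_images / batch_size)
    return iters_per_epoch * epochs

# 10000 training images at batch_size 64 give 156 iterations per epoch
# if the last partial batch is dropped, 157 if it is kept.
```

Note that with iter_size: 2 in the solver above, each solver iteration accumulates gradients over batch_size * iter_size images, so halve the iteration count if you want to keep the effective number of epochs the same.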