
Cloning Ada Wong's Voice from Resident Evil 6 with Bert-vits2-v2.3 and Jupyter Notebook

For deep-learning beginners, the Jupyter Notebook way of running scripts is clearly friendlier. Thanks to Python's cross-platform nature, a Jupyter Notebook can run in a local environment as well as on a remote server, and Google Colab, the leader among free GPU platforms, makes the notebook workflow even more powerful.

This time we use the final version of Bert-vits2, Bert-vits2-v2.3, together with Jupyter Notebook scripts to clone the voice of the popular Resident Evil 6 character Ada Wong.

## Debugging the Jupyter Notebook locally

As is well known, Google Colab provides free GPUs for model training and inference, but each notebook script can run for at most 12 hours before it is cut off. To avoid wasting precious GPU time, we can debug the notebook locally first and upload it to Colab once it works.

First, install Jupyter locally with pip:

```bash
python3 -m pip install jupyter
```

Then start it:

```bash
jupyter notebook
```

Open the local notebook URL and choose File -> New -> Notebook.

Enter the first cell (the `#@title` line is a Colab cell title):

```python
#@title Check the GPU
!nvidia-smi
```

Run the cell; it returns:

```
Wed Dec 27 12:36:10 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 546.17                 Driver Version: 546.17       CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                    TCC/WDDM   | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf         Pwr:Usage/Cap  |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4060 ...  WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   50C    P0              20W / 115W |      0MiB /  8188MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
```

With that, the notebook can be debugged locally.

## Installing ffmpeg

Because the transcription step depends on ffmpeg, we need to install a recent build. Add a new cell:

```python
#@title Install ffmpeg
import os, uuid, re, IPython
import ipywidgets as widgets
import time
from glob import glob
from google.colab import output, drive
from IPython.display import clear_output
import os, sys, urllib.request

HOME = os.path.expanduser("~")
pathDoneCMD = f"{HOME}/doneCMD.sh"
if not os.path.exists(f"{HOME}/.ipython/ttmg.py"):
    hCode = "https://raw.githubusercontent.com/yunooooo/gcct/master/res/ttmg.py"
    urllib.request.urlretrieve(hCode, f"{HOME}/.ipython/ttmg.py")

from ttmg import (
    loadingAn,
    textAn,
)

loadingAn(name="lds")
textAn("Cloning Repositories...", ty="twg")
!git clone https://github.com/XniceCraft/ffmpeg-colab.git
!chmod 755 ./ffmpeg-colab/install
textAn("Installing FFmpeg...", ty="twg")
!./ffmpeg-colab/install
clear_output()
print("Installation finished!")
!rm -fr /content/ffmpeg-colab
!ffmpeg -version
```

The cell prints:

```
Installation finished!
ffmpeg version 6.0 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
configuration: --prefix=/home/ffmpeg-builder/release --pkg-config-flags=--static ...
libavutil      58.  2.100 / 58.  2.100
libavcodec     60.  3.100 / 60.  3.100
libavformat    60.  3.100 / 60.  3.100
libavdevice    60.  1.100 / 60.  1.100
libavfilter     9.  3.100 /  9.  3.100
libswscale      7.  1.100 /  7.  1.100
libswresample   4. 10.100 /  4. 10.100
libpostproc    57.  1.100 / 57.  1.100
```

This installs the latest ffmpeg, version 6.0.

## Cloning the repository

Next, clone the code repository:

```python
#@title Clone the repository
!git clone https://github.com/v3ucn/Bert-vits2-V2.3.git
```

The cell returns:

```
Cloning into 'Bert-vits2-V2.3'...
remote: Enumerating objects: 234, done.
remote: Counting objects: 100% (234/234), done.
remote: Compressing objects: 100% (142/142), done.
remote: Total 234 (delta 80), reused 232 (delta 78), pack-reused 0
Receiving objects: 100% (234/234), 4.16 MiB | 14.14 MiB/s, done.
Resolving deltas: 100% (80/80), done.
```

## Installing project dependencies

Enter the project directory and install the dependencies:

```python
#@title Install the required dependencies
%cd /content/Bert-vits2-V2.3
!pip install -r requirements.txt
```

## Downloading the required models

Add a new cell to download the models:

```python
#@title Download the required models
!wget -P slm/wavlm-base-plus/ https://huggingface.co/microsoft/wavlm-base-plus/resolve/main/pytorch_model.bin
!wget -P emotional/clap-htsat-fused/ https://huggingface.co/laion/clap-htsat-fused/resolve/main/pytorch_model.bin
!wget -P emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/ https://huggingface.co/audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim/resolve/main/pytorch_model.bin
!wget -P bert/chinese-roberta-wwm-ext-large/ https://huggingface.co/hfl/chinese-roberta-wwm-ext-large/resolve/main/pytorch_model.bin
!wget -P bert/bert-base-japanese-v3/ https://huggingface.co/cl-tohoku/bert-base-japanese-v3/resolve/main/pytorch_model.bin
!wget -P bert/deberta-v3-large/ https://huggingface.co/microsoft/deberta-v3-large/resolve/main/pytorch_model.bin
!wget -P bert/deberta-v3-large/ https://huggingface.co/microsoft/deberta-v3-large/resolve/main/pytorch_model.generator.bin
!wget -P bert/deberta-v2-large-japanese/ https://huggingface.co/ku-nlp/deberta-v2-large-japanese/resolve/main/pytorch_model.bin
```

## Downloading the base models

Then download the pretrained base models:

```python
#@title Download the base models
!wget -P Data/ada/models/ https://huggingface.co/OedoSoldier/Bert-VITS2-2.3/resolve/main/DUR_0.pth
!wget -P Data/ada/models/ https://huggingface.co/OedoSoldier/Bert-VITS2-2.3/resolve/main/D_0.pth
!wget -P Data/ada/models/ https://huggingface.co/OedoSoldier/Bert-VITS2-2.3/resolve/main/G_0.pth
!wget -P Data/ada/models/ https://huggingface.co/OedoSoldier/Bert-VITS2-2.3/resolve/main/WD_0.pth
```

Note that version 2.3 uses four base-model files.

## Slicing the dataset

Upload the Ada Wong audio material as Data/ada/raw/ada.wav, then create a new cell:

```python
#@title Slice the dataset
!python3 audio_slicer.py
```

The material is sliced into clips.

## Transcription and annotation

Now the sliced clips need to be transcribed:

```python
#@title Transcribe and annotate
!pip install git+https://github.com/openai/whisper.git
!python3 short_audio_transcribe.py
```

Note how Whisper is installed here. Many people simply run `pip install whisper`, but that is not the correct package; you must install from the Git source with `pip install git+https://github.com/openai/whisper.git`, otherwise you will get errors.

When the cell finishes, a transcription file esd.list is generated in the character directory:

```
./Data\ada\wavs\ada_0.wav|ada|EN|I do. The kind you like.
./Data\ada\wavs\ada_1.wav|ada|EN|Now where's the amber?
./Data\ada\wavs\ada_10.wav|ada|EN|Leave the girl. She's lost no matter what.
./Data\ada\wavs\ada_11.wav|ada|EN|You walk away now, and who knows?
./Data\ada\wavs\ada_12.wav|ada|EN|Maybe you'll live to meet me again.
./Data\ada\wavs\ada_13.wav|ada|EN|And I might get you that greeting you were looking for.
./Data\ada\wavs\ada_14.wav|ada|EN|How about we continue this discussion another time?
./Data\ada\wavs\ada_15.wav|ada|EN|Sorry, nothing yet.
./Data\ada\wavs\ada_16.wav|ada|EN|But my little helper is creating
./Data\ada\wavs\ada_17.wav|ada|EN|Quite the commotion.
./Data\ada\wavs\ada_18.wav|ada|EN|Everything will work out just fine.
./Data\ada\wavs\ada_19.wav|ada|EN|He's a good boy. Predictable.
./Data\ada\wavs\ada_2.wav|ada|EN|The deal was, we get you out of here when you deliver the amber. No amber, no protection, Louise.
./Data\ada\wavs\ada_20.wav|ada|EN|Nothing personal, Leon.
./Data\ada\wavs\ada_21.wav|ada|EN|Louise and I had an arrangement.
./Data\ada\wavs\ada_22.wav|ada|EN|Don't worry, I'll take good care of it.
./Data\ada\wavs\ada_23.wav|ada|EN|Just one question.
./Data\ada\wavs\ada_24.wav|ada|EN|What are you planning to do with this?
./Data\ada\wavs\ada_25.wav|ada|EN|So, we're talking millions of casualties?
./Data\ada\wavs\ada_26.wav|ada|EN|We're changing course. Now.
./Data\ada\wavs\ada_3.wav|ada|EN|You can stop right there, Leon.
./Data\ada\wavs\ada_4.wav|ada|EN|wouldn't make me use this.
./Data\ada\wavs\ada_5.wav|ada|EN|Would you? You don't seem surprised.
./Data\ada\wavs\ada_6.wav|ada|EN|Interesting.
./Data\ada\wavs\ada_7.wav|ada|EN|Not a bad move
./Data\ada\wavs\ada_8.wav|ada|EN|Very smooth. Ah, Leon.
./Data\ada\wavs\ada_9.wav|ada|EN|You know I don't work and tell.
```

There are 27 sliced clips in total, with 27 matching transcriptions; note that the language is English.

## Resampling the audio

Resample the source audio:

```python
#@title Resample
!python3 resample.py --sr 44100 --in_dir ./Data/ada/raw/ --out_dir ./Data/ada/wavs/
```

## Preprocessing the label file

Next, process the transcription file to generate the training and validation sets:

```python
#@title Preprocess the label file
!python3 preprocess_text.py --transcription-path ./Data/ada/esd.list --t
```

The cell returns:

```
pytorch_model.bin: 100% 1.32G/1.32G [00:10<00:00, 122MB/s]
spm.model: 100% 2.46M/2.46M [00:00<00:00, 115MB/s]
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache().
0it [00:00, ?it/s]
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data]     /root/nltk_data...
[nltk_data]   Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data]   Unzipping corpora/cmudict.zip.
100% 27/27 [00:00<00:00, 4457.63it/s]
总重复音频数:0 总未找到的音频数:0
训练集和验证集生成完成
```

The final two lines report 0 duplicate clips and 0 missing clips, and that the training and validation sets were generated.

## Generating the BERT feature files

Finally, generate the BERT feature files:

```python
#@title Generate the BERT feature files
!python3 bert_gen.py --config-path ./Data/ada/configs/config.json
```

One feature file is produced per clip:

```
100% 27/27 [00:33<00:00,  1.25s/it]
bert生成完毕!, 共有27个bert.pt生成!
```

That is, 27 bert.pt files were generated, one for each of the 27 clips.

## Training the model

With everything in place, start training:

```python
#@title Start training
!python3 train_ms.py
```

Checkpoints are written to the models directory. By default the project saves every 50 steps; you can change this in the config.json configuration file to suit your needs.

## Inference

Usually, after 50 or 100 training steps you can run inference to check the results, then resume training:

```python
#@title Start inference
!python3 webui.py
```

It returns:

```
| numexpr.utils | INFO | NumExpr defaulting to 2 threads.
/usr/local/lib/python3.10/dist-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
| utils | INFO | Loaded checkpoint 'Data/ada/models/G_150.pth' (iteration 25)
推理页面已开启!
Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://814833a6f477ba151c.gradio.live
```

Open the second, public URL to run inference.

## Closing

With that, we have completed the whole Jupyter Notebook based workflow: dataset slicing, transcription, preprocessing, training, and inference. Finally, here is the online Google Colab notebook for everyone:

https://colab.research.google.com/drive/1-H1DGG5dTy8u_8vFbq1HACXPX9AAM76s?usp=sharing
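As a footnote to the dataset-slicing step: a slicer like audio_slicer.py cuts the raw recording wherever it finds a sufficiently long stretch of silence. The repository's implementation is more involved; the sketch below (the function name, thresholds, and toy waveform are all my own, purely for illustration) shows the core idea in plain Python:

```python
def slice_on_silence(samples, sr, threshold=0.01, min_silence_s=0.3):
    """Return (start, end) sample indices of the non-silent segments,
    splitting wherever the signal stays below `threshold` for at least
    `min_silence_s` seconds."""
    min_gap = int(min_silence_s * sr)
    segments, start, silent_run = [], None, 0
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if start is None:          # a new segment begins
                start = i
            silent_run = 0
        elif start is not None:
            silent_run += 1
            if silent_run >= min_gap:  # gap long enough: close the segment
                segments.append((start, i - silent_run + 1))
                start, silent_run = None, 0
    if start is not None:              # close a segment running to the end
        segments.append((start, len(samples) - silent_run))
    return segments

# Toy "waveform" at a toy sample rate of 10 Hz: two bursts separated by silence.
wave = [0.5] * 5 + [0.0] * 5 + [0.5] * 5 + [0.0] * 5
print(slice_on_silence(wave, sr=10))  # [(0, 5), (10, 15)]
```

A real slicer would additionally enforce minimum and maximum clip lengths and write each segment out as its own .wav file.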
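On the resampling step: resample.py rewrites every clip at 44.1 kHz, the rate the training pipeline expects. Conceptually, resampling re-evaluates the waveform at new time positions; the naive linear-interpolation sketch below (my own illustration, not the project's implementation, and not suitable for production use) makes the index arithmetic concrete:

```python
def resample_linear(samples, sr_in, sr_out):
    """Naive linear-interpolation resampler, for illustration only:
    real pipelines should use a proper polyphase or sinc resampler."""
    if sr_in == sr_out or not samples:
        return list(samples)
    n_out = int(len(samples) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        pos = i * sr_in / sr_out              # fractional position in the input
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

clip = [0.0, 1.0, 0.0, -1.0] * 100            # 400 samples, say at 22050 Hz
up = resample_linear(clip, 22050, 44100)
print(len(up))  # 800
```

Doubling the sample rate doubles the number of samples while keeping the clip's duration unchanged.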
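Also worth noting: the esd.list format shown above is simply `path|speaker|language|text`, one clip per line, and it is what preprocess_text.py consumes when it builds the training and validation sets. How such a file can be parsed and split is sketched below (`parse_esd_line`, `split_train_val`, and the `val_per_lang` parameter are my own names for illustration, not the project's API):

```python
import random

def parse_esd_line(line: str):
    """Parse one 'path|speaker|lang|text' line from an esd.list file."""
    path, speaker, lang, text = line.rstrip("\n").split("|", 3)
    return {"path": path, "speaker": speaker, "lang": lang, "text": text}

def split_train_val(lines, val_per_lang=4, seed=42):
    """Shuffle entries and hold out a few per language for validation."""
    entries = [parse_esd_line(l) for l in lines if l.strip()]
    rng = random.Random(seed)
    rng.shuffle(entries)
    by_lang = {}
    for e in entries:
        by_lang.setdefault(e["lang"], []).append(e)
    train, val = [], []
    for lang_entries in by_lang.values():
        val.extend(lang_entries[:val_per_lang])
        train.extend(lang_entries[val_per_lang:])
    return train, val

sample = [
    r"./Data\ada\wavs\ada_0.wav|ada|EN|I do. The kind you like.",
    r"./Data\ada\wavs\ada_1.wav|ada|EN|Now where's the amber?",
]
train, val = split_train_val(sample, val_per_lang=1)
print(len(train), len(val))  # 1 1
```

Splitting per language matters for multilingual training sets, so that every language is represented in the validation data.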