Table of Contents: Introduction · Docker Installation · Access URL · Official Documentation · Adding the Xinference Container in Dify · Built-in Large Language Models · Embedding Models · Image Models · Audio Models · Rerank Models · Video Models

Introduction
Xorbits Inference (Xinference) is an open-source platform that simplifies running and integrating a wide range of AI models. With Xinference, you can run inference with any open-source LLM, embedding model, or multimodal model, in the cloud or on-premises, and build powerful AI applications.
Docker Installation
Pull the Xinference image:

docker pull xprobe/xinference

Run the container (note: change the paths to your own):
docker run -d --name xinference --gpus all -v E:/docker/xinference/models:/root/models -v E:/docker/xinference/.xinference:/root/.xinference -v E:/docker/xinference/.cache/huggingface:/root/.cache/huggingface -e XINFERENCE_HOME=/root/models -p 9997:9997 xprobe/xinference:latest xinference-local -H 0.0.0.0

- -d: run the container in the background.
- --name xinference: give the container a name, here xinference.
- --gpus all: let the container access all GPUs on the host, which is useful for compute-heavy tasks such as machine-learning inference.
- -v E:/docker/xinference/models:/root/models, -v E:/docker/xinference/.xinference:/root/.xinference, -v E:/docker/xinference/.cache/huggingface:/root/.cache/huggingface: mount host directories into specific paths inside the container, for data persistence and sharing. For example, the first mount maps the host directory E:/docker/xinference/models to /root/models inside the container.
- -e XINFERENCE_HOME=/root/models: set the environment variable XINFERENCE_HOME to /root/models, which configures where Xinference keeps its data inside the container (note the = between the variable name and its value).
- -p 9997:9997: map host port 9997 to container port 9997, so the service inside the container can be reached from outside through the host port.
- xprobe/xinference:latest: the image and tag to use, here the latest version of the xprobe/xinference image.
- xinference-local -H 0.0.0.0: the command executed at container startup; it runs Xinference in local mode and listens on all network interfaces.
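To make each flag's role explicit, the argument list above can be assembled from structured values. A small Python sketch (illustrative only; the helper name is made up, and the paths are the example's Windows paths):

```python
# Illustrative helper: assemble the `docker run` argv described above
# from structured values, so each flag's role is visible.
def build_docker_run_args(name, image, port, volumes, env, command):
    """Return the argv list for the `docker run` invocation."""
    args = ["docker", "run", "-d", "--name", name, "--gpus", "all"]
    for host_path, container_path in volumes.items():
        args += ["-v", f"{host_path}:{container_path}"]  # persist data outside the container
    for key, value in env.items():
        args += ["-e", f"{key}={value}"]  # note the `=` between name and value
    args += ["-p", f"{port}:{port}", image] + command
    return args

args = build_docker_run_args(
    name="xinference",
    image="xprobe/xinference:latest",
    port=9997,
    volumes={
        "E:/docker/xinference/models": "/root/models",
        "E:/docker/xinference/.xinference": "/root/.xinference",
        "E:/docker/xinference/.cache/huggingface": "/root/.cache/huggingface",
    },
    env={"XINFERENCE_HOME": "/root/models"},
    command=["xinference-local", "-H", "0.0.0.0"],
)
print(" ".join(args))
```

Joining the list with spaces reproduces the one-line command shown above.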
Access URL

http://127.0.0.1:9997/
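Xinference serves an OpenAI-compatible HTTP API on this port, so a quick liveness check is to list the running models at /v1/models. A minimal Python sketch (the request only works once the container above is running):

```python
# Derive the /v1/models route from the base address; Xinference exposes
# an OpenAI-compatible API, so this route lists currently running models.
def models_endpoint(base_url: str) -> str:
    return base_url.rstrip("/") + "/v1/models"

if __name__ == "__main__":
    # Live check; requires the Xinference container to be up.
    import json, urllib.request
    with urllib.request.urlopen(models_endpoint("http://127.0.0.1:9997/")) as resp:
        print(json.dumps(json.load(resp), indent=2))
```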
Official Documentation
https://inference.readthedocs.io/zh-cn/latest/index.html
Adding the Xinference Container in Dify

When Dify itself runs in Docker, configure the Xinference endpoint with the container-to-container address:
http://host.docker.internal:9997

Built-in Large Language Models

| MODEL NAME | ABILITIES | CONTEXT_LENGTH | DESCRIPTION |
| --- | --- | --- | --- |
| aquila2 | generate | 2048 | Aquila2 series models are the base language models |
| aquila2-chat | chat | 2048 | Aquila2-chat series models are the chat models |
| aquila2-chat-16k | chat | 16384 | AquilaChat2-16k series models are the long-text chat models |
| baichuan-2 | generate | 4096 | Baichuan2 is an open-source Transformer based LLM that is trained on both Chinese and English data. |
| baichuan-2-chat | chat | 4096 | Baichuan2-chat is a fine-tuned version of the Baichuan LLM, specializing in chatting. |
| c4ai-command-r-v01 | chat | 131072 | C4AI Command-R is a research release of a 35 and 104 billion parameter highly performant generative model. |
| code-llama | generate | 100000 | Code-Llama is an open-source LLM trained by fine-tuning LLaMA2 for generating and discussing code. |
| code-llama-instruct | chat | 100000 | Code-Llama-Instruct is an instruct-tuned version of the Code-Llama LLM. |
| code-llama-python | generate | 100000 | Code-Llama-Python is a fine-tuned version of the Code-Llama LLM, specializing in Python. |
| codegeex4 | chat | 131072 | The open-source version of the latest CodeGeeX4 model series |
| codeqwen1.5 | generate | 65536 | CodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes. |
| codeqwen1.5-chat | chat | 65536 | CodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes. |
| codeshell | generate | 8194 | CodeShell is a multi-language code LLM developed by the Knowledge Computing Lab of Peking University. |
| codeshell-chat | chat | 8194 | CodeShell is a multi-language code LLM developed by the Knowledge Computing Lab of Peking University. |
| codestral-v0.1 | generate | 32768 | Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash |
| cogagent | chat, vision | 4096 | The CogAgent-9B-20241220 model is based on GLM-4V-9B, a bilingual open-source VLM base model. Through data collection and optimization, multi-stage training, and strategy improvements, CogAgent-9B-20241220 achieves significant advancements in GUI perception, inference prediction accuracy, action space completeness, and task generalizability. |
| cogvlm2 | chat, vision | 8192 | CogVLM2 has achieved good results in many lists compared to the previous generation of CogVLM open source models. Its excellent performance can compete with some non-open source models. |
| cogvlm2-video-llama3-chat | chat, vision | 8192 | CogVLM2-Video achieves state-of-the-art performance on multiple video question answering tasks. |
| csg-wukong-chat-v0.1 | chat | 32768 | csg-wukong-1B is a 1 billion-parameter small language model (SLM) pretrained on 1T tokens. |
| deepseek | generate | 4096 | DeepSeek LLM, trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. |
| deepseek-chat | chat | 4096 | DeepSeek LLM is an advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. |
| deepseek-coder | generate | 16384 | Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. |
| deepseek-coder-instruct | chat | 16384 | deepseek-coder-instruct is a model initialized from deepseek-coder-base and fine-tuned on 2B tokens of instruction data. |
| deepseek-r1 | chat | 163840 | DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. |
| deepseek-r1-distill-llama | chat | 131072 | deepseek-r1-distill-llama is distilled from DeepSeek-R1 based on Llama |
| deepseek-r1-distill-qwen | chat | 131072 | deepseek-r1-distill-qwen is distilled from DeepSeek-R1 based on Qwen |
| deepseek-v2 | generate | 128000 | DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. |
| deepseek-v2-chat | chat | 128000 | DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. |
| deepseek-v2-chat-0628 | chat | 128000 | DeepSeek-V2-Chat-0628 is an improved version of DeepSeek-V2-Chat. |
| deepseek-v2.5 | chat | 128000 | DeepSeek-V2.5 is an upgraded version that combines DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct. The new model integrates the general and coding abilities of the two previous versions. |
| deepseek-v3 | chat | 163840 | DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token. |
| deepseek-vl-chat | chat, vision | 4096 | DeepSeek-VL possesses general multimodal understanding capabilities, capable of processing logical diagrams, web pages, formula recognition, scientific literature, natural images, and embodied intelligence in complex scenarios. |
| gemma-2-it | chat | 8192 | Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. |
| gemma-it | chat | 8192 | Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. |
| glm-4v | chat, vision | 8192 | GLM4 is the open source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI. |
| glm-edge-chat | chat | 8192 | The GLM-Edge series is our attempt to face the end-side real-life scenarios, which consists of two sizes of large-language dialogue models and multimodal comprehension models (GLM-Edge-1.5B-Chat, GLM-Edge-4B-Chat, GLM-Edge-V-2B, GLM-Edge-V-5B). Among them, the 1.5B / 2B model is mainly for platforms such as mobile phones and cars, and the 4B / 5B model is mainly for platforms such as PCs. |
| glm-edge-v | chat, vision | 8192 | The GLM-Edge series is our attempt to face the end-side real-life scenarios, which consists of two sizes of large-language dialogue models and multimodal comprehension models (GLM-Edge-1.5B-Chat, GLM-Edge-4B-Chat, GLM-Edge-V-2B, GLM-Edge-V-5B). Among them, the 1.5B / 2B model is mainly for platforms such as mobile phones and cars, and the 4B / 5B model is mainly for platforms such as PCs. |
| glm4-chat | chat, tools | 131072 | GLM4 is the open source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI. |
| glm4-chat-1m | chat, tools | 1048576 | GLM4 is the open source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI. |
| gorilla-openfunctions-v2 | chat | 4096 | OpenFunctions is designed to extend the Large Language Model (LLM) Chat Completion feature to formulate executable API calls given natural language instructions and API context. |
| gpt-2 | generate | 1024 | GPT-2 is a Transformer-based LLM that is trained on WebText, a 40 GB dataset of Reddit posts with at least 3 upvotes. |
| internlm2-chat | chat | 32768 | The second generation of the InternLM model, InternLM2. |
| internlm2.5-chat | chat | 32768 | InternLM2.5 series of the InternLM model. |
| internlm2.5-chat-1m | chat | 262144 | InternLM2.5 series of the InternLM model supports 1M long-context |
| internlm3-instruct | chat, tools | 32768 | InternLM3 has open-sourced an 8-billion parameter instruction model, InternLM3-8B-Instruct, designed for general-purpose usage and advanced reasoning. |
| internvl-chat | chat, vision | 32768 | InternVL 1.5 is an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. |
| internvl2 | chat, vision | 32768 | InternVL 2 is an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. |
| llama-2 | generate | 4096 | Llama-2 is the second generation of Llama, open-source and trained on a larger amount of data. |
| llama-2-chat | chat | 4096 | Llama-2-Chat is a fine-tuned version of the Llama-2 LLM, specializing in chatting. |
| llama-3 | generate | 8192 | Llama 3 is an auto-regressive language model that uses an optimized transformer architecture |
| llama-3-instruct | chat | 8192 | The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. |
| llama-3.1 | generate | 131072 | Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture |
| llama-3.1-instruct | chat, tools | 131072 | The Llama 3.1 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. |
| llama-3.2-vision | generate, vision | 131072 | The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. |
| llama-3.2-vision-instruct | chat, vision | 131072 | Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. |
| llama-3.3-instruct | chat, tools | 131072 | The Llama 3.3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. |
| marco-o1 | chat, tools | 32768 | Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions |
| minicpm-2b-dpo-bf16 | chat | 4096 | MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-2b-dpo-fp16 | chat | 4096 | MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-2b-dpo-fp32 | chat | 4096 | MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-2b-sft-bf16 | chat | 4096 | MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-2b-sft-fp32 | chat | 4096 | MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-llama3-v-2_5 | chat, vision | 8192 | MiniCPM-Llama3-V 2.5 is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters. |
| minicpm-v-2.6 | chat, vision | 32768 | MiniCPM-V 2.6 is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Qwen2-7B with a total of 8B parameters. |
| minicpm3-4b | chat | 32768 | MiniCPM3-4B is the 3rd generation of the MiniCPM series. The overall performance of MiniCPM3-4B surpasses Phi-3.5-mini-Instruct and GPT-3.5-Turbo-0125, being comparable with many recent 7B~9B models. |
| mistral-instruct-v0.1 | chat | 8192 | Mistral-7B-Instruct is a fine-tuned version of the Mistral-7B LLM on public datasets, specializing in chatting. |
| mistral-instruct-v0.2 | chat | 8192 | The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1. |
| mistral-instruct-v0.3 | chat | 32768 | The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.2. |
| mistral-large-instruct | chat | 131072 | Mistral-Large-Instruct-2407 is an advanced dense Large Language Model (LLM) of 123B parameters with state-of-the-art reasoning, knowledge and coding capabilities. |
| mistral-nemo-instruct | chat | 1024000 | The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-Nemo-Base-2407 |
| mistral-v0.1 | generate | 8192 | Mistral-7B is an unmoderated Transformer based LLM claiming to outperform Llama2 on all benchmarks. |
| mixtral-8x22b-instruct-v0.1 | chat | 65536 | The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1, specializing in chatting. |
| mixtral-instruct-v0.1 | chat | 32768 | Mixtral-8x7B-Instruct is a fine-tuned version of the Mixtral-8x7B LLM, specializing in chatting. |
| mixtral-v0.1 | generate | 32768 | The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. |
| omnilmm | chat, vision | 2048 | OmniLMM is a family of open-source large multimodal models (LMMs) adept at vision language modeling. |
| openhermes-2.5 | chat | 8192 | OpenHermes 2.5 is a fine-tuned version of Mistral-7B-v0.1 on primarily GPT-4 generated data. |
| opt | generate | 2048 | OPT is an open-source, decoder-only, Transformer based LLM that was designed to replicate GPT-3. |
| orion-chat | chat | 4096 | Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. |
| orion-chat-rag | chat | 4096 | Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. |
| phi-2 | generate | 2048 | Phi-2 is a 2.7B Transformer based LLM used for research on model safety, trained with data similar to Phi-1.5 but augmented with synthetic texts and curated websites. |
| phi-3-mini-128k-instruct | chat | 128000 | The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. |
| phi-3-mini-4k-instruct | chat | 4096 | The Phi-3-Mini-4k-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. |
| platypus2-70b-instruct | generate | 4096 | Platypus-70B-instruct is a merge of garage-bAInd/Platypus2-70B and upstage/Llama-2-70b-instruct-v2. |
| qvq-72b-preview | chat, vision | 32768 | QVQ-72B-Preview is an experimental research model developed by the Qwen team, focusing on enhancing visual reasoning capabilities. |
| qwen-chat | chat | 32768 | Qwen-chat is a fine-tuned version of the Qwen LLM trained with alignment techniques, specializing in chatting. |
| qwen-vl-chat | chat, vision | 4096 | Qwen-VL-Chat supports more flexible interaction, such as multiple image inputs, multi-round question answering, and creative capabilities. |
| qwen1.5-chat | chat, tools | 32768 | Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. |
| qwen1.5-moe-chat | chat, tools | 32768 | Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data. |
| qwen2-audio | generate, audio | 32768 | Qwen2-Audio: A large-scale audio-language model which is capable of accepting various audio signal inputs and performing audio analysis or direct textual responses with regard to speech instructions. |
| qwen2-audio-instruct | chat, audio | 32768 | Qwen2-Audio: A large-scale audio-language model which is capable of accepting various audio signal inputs and performing audio analysis or direct textual responses with regard to speech instructions. |
| qwen2-instruct | chat, tools | 32768 | Qwen2 is the new series of Qwen large language models |
| qwen2-moe-instruct | chat, tools | 32768 | Qwen2 is the new series of Qwen large language models. |
| qwen2-vl-instruct | chat, vision | 32768 | Qwen2-VL: To See the World More Clearly. Qwen2-VL is the latest version of the vision language models in the Qwen model families. |
| qwen2.5 | generate | 32768 | Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. |
| qwen2.5-coder | generate | 32768 | Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). |
| qwen2.5-coder-instruct | chat, tools | 32768 | Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). |
| qwen2.5-instruct | chat, tools | 32768 | Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. |
| qwen2.5-vl-instruct | chat, vision | 128000 | Qwen2.5-VL: Qwen2.5-VL is the latest version of the vision language models in the Qwen model families. |
| qwq-32b-preview | chat | 32768 | QwQ-32B-Preview is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities. |
| seallm_v2 | generate | 8192 | We introduce SeaLLM-7B-v2, the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages |
| seallm_v2.5 | generate | 8192 | We introduce SeaLLM-7B-v2.5, the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages |
| skywork | generate | 4096 | Skywork is a series of large models developed by the Kunlun Group · Skywork team. |
| skywork-math | generate | 4096 | Skywork is a series of large models developed by the Kunlun Group · Skywork team. |
| starling-lm | chat | 4096 | We introduce Starling-7B, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). The model harnesses the power of our new GPT-4 labeled ranking dataset |
| telechat | chat | 8192 | TeleChat is a large language model developed and trained by China Telecom Artificial Intelligence Technology Co., Ltd. The 7B model base is trained with 1.5 trillion and 3 trillion tokens of high-quality Chinese corpus. |
| tiny-llama | generate | 2048 | The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. |
| wizardcoder-python-v1.0 | chat | 100000 |  |
| wizardmath-v1.0 | chat | 2048 | WizardMath is an open-source LLM trained by fine-tuning Llama2 with Evol-Instruct, specializing in math. |
| xverse | generate | 2048 | XVERSE is a multilingual large language model, independently developed by Shenzhen Yuanxiang Technology. |
| xverse-chat | chat | 2048 | XVERSE-Chat is the aligned version of model XVERSE. |
| yi | generate | 4096 | The Yi series models are large language models trained from scratch by developers at 01.AI. |
| yi-1.5 | generate | 4096 | Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. |
| yi-1.5-chat | chat | 4096 | Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. |
| yi-1.5-chat-16k | chat | 16384 | Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. |
| yi-200k | generate | 262144 | The Yi series models are large language models trained from scratch by developers at 01.AI. |
| yi-chat | chat | 4096 | The Yi series models are large language models trained from scratch by developers at 01.AI. |
| yi-coder | generate | 131072 | Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters, excelling in long-context understanding with a maximum context length of 128K tokens and supporting 52 major programming languages, including popular ones such as Java, Python, JavaScript, and C++. |
| yi-coder-chat | chat | 131072 | Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters, excelling in long-context understanding with a maximum context length of 128K tokens and supporting 52 major programming languages, including popular ones such as Java, Python, JavaScript, and C++. |
| yi-vl-chat | chat, vision | 4096 | Yi Vision Language (Yi-VL) model is the open-source, multimodal version of the Yi Large Language Model (LLM) series, enabling content comprehension, recognition, and multi-round conversations about images. |
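When choosing which of these models to launch, it can help to treat the table as data and filter by ability and context length. A small Python sketch (the catalog holds just a few rows transcribed from the table above; the helper name is ours):

```python
# A few rows transcribed from the model table, as data.
CATALOG = {
    "qwen2.5-instruct":  {"abilities": ["chat", "tools"],  "context": 32768},
    "deepseek-r1":       {"abilities": ["chat"],           "context": 163840},
    "llama-3.1":         {"abilities": ["generate"],       "context": 131072},
    "qwen2-vl-instruct": {"abilities": ["chat", "vision"], "context": 32768},
}

def find_models(catalog, ability, min_context=0):
    """Names of models offering `ability` with at least `min_context` tokens."""
    return sorted(
        name for name, spec in catalog.items()
        if ability in spec["abilities"] and spec["context"] >= min_context
    )

print(find_models(CATALOG, "chat", min_context=100000))  # -> ['deepseek-r1']
```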
Embedding Models

bce-embedding-base_v1, bge-base-en, bge-base-en-v1.5, bge-base-zh, bge-base-zh-v1.5, bge-large-en, bge-large-en-v1.5, bge-large-zh, bge-large-zh-noinstruct, bge-large-zh-v1.5, bge-m3, bge-small-en-v1.5, bge-small-zh, bge-small-zh-v1.5, e5-large-v2, gte-base, gte-large, gte-Qwen2, jina-clip-v2, jina-embeddings-v2-base-en, jina-embeddings-v2-base-zh, jina-embeddings-v2-small-en, jina-embeddings-v3, m3e-base, m3e-large, m3e-small, multilingual-e5-large, text2vec-base-chinese, text2vec-base-chinese-paraphrase, text2vec-base-chinese-sentence, text2vec-base-multilingual, text2vec-large-chinese

Image Models

FLUX.1-dev, FLUX.1-schnell, GOT-OCR2_0, HunyuanDiT-v1.2, HunyuanDiT-v1.2-Distilled, kolors, sd-turbo, sd3-medium, sd3.5-large, sd3.5-large-turbo, sd3.5-medium, sdxl-turbo, stable-diffusion-2-inpainting, stable-diffusion-inpainting, stable-diffusion-v1.5, stable-diffusion-xl-base-1.0, stable-diffusion-xl-inpainting

Audio Models
Below is the list of audio models built into Xinference: Belle-distilwhisper-large-v2-zh, Belle-whisper-large-v2-zh, Belle-whisper-large-v3-zh, ChatTTS, CosyVoice-300M, CosyVoice-300M-Instruct, CosyVoice-300M-SFT, CosyVoice2-0.5B, F5-TTS, F5-TTS-MLX, FishSpeech-1.5, Kokoro-82M, MeloTTS-Chinese, MeloTTS-English, MeloTTS-English-v2, MeloTTS-English-v3, MeloTTS-French, MeloTTS-Japanese, MeloTTS-Korean, MeloTTS-Spanish, SenseVoiceSmall, whisper-base, whisper-base-mlx, whisper-base.en, whisper-base.en-mlx, whisper-large-v3, whisper-large-v3-mlx, whisper-large-v3-turbo, whisper-large-v3-turbo-mlx, whisper-medium, whisper-medium-mlx, whisper-medium.en, whisper-medium.en-mlx, whisper-small, whisper-small-mlx, whisper-small.en, whisper-small.en-mlx, whisper-tiny, whisper-tiny-mlx, whisper-tiny.en, whisper-tiny.en-mlx

Rerank Models
Below is the list of rerank models built into Xinference: bce-reranker-base_v1, bge-reranker-base, bge-reranker-large, bge-reranker-v2-gemma, bge-reranker-v2-m3, bge-reranker-v2-minicpm-layerwise, jina-reranker-v2, minicpm-reranker

Video Models
Below is the list of video models built into Xinference: CogVideoX-2b, CogVideoX-5b, HunyuanVideo
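To close, here is how output from the embedding and rerank models listed above is typically consumed: embeddings are compared with cosine similarity, and rerankers return per-document relevance scores used to reorder candidates. A Python sketch (the vectors and scores are made-up stand-ins for what a model such as bge-m3 or bge-reranker-v2-m3 would return over the API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def rank_by_score(documents, scores):
    """Order documents by their rerank relevance score, best first."""
    return [doc for _, doc in sorted(zip(scores, documents), reverse=True)]

print(cosine([1.0, 0.0], [1.0, 0.0]))                   # identical directions -> 1.0
print(rank_by_score(["a", "b", "c"], [0.1, 0.9, 0.5]))  # -> ['b', 'c', 'a']
```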