It's been a while since I tried out a new technology, so today let's play with GraphRAG.
As the name suggests, it is a retrieval-augmented generation method that uses a knowledge graph to implement RAG.
1. Set up the environment
conda create -n GraphRAG python=3.11
conda activate GraphRAG
pip install graphrag
2. Build the GraphRAG index
mkdir -p ./ragtest/input
# This book explains in detail how to use prompt-engineering techniques to guide language models like ChatGPT to generate high-quality text.
curl https://raw.githubusercontent.com/win4r/mytest/main/book.txt -o ./ragtest/input/book.txt

# Initialize the workspace
python3 -m graphrag.index --init --root ./ragtest

Then fill in the .env file this creates. You can put an OpenAI key in directly:

GRAPHRAG_API_KEY=sk-ZZvxAMzrl.....................

or, if you are going through Ollama, a placeholder value works:

GRAPHRAG_API_KEY=ollama
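The value in .env is injected into settings.yaml wherever ${GRAPHRAG_API_KEY} appears. The substitution behaves like ordinary environment-variable expansion, which a short stdlib sketch can illustrate (illustrative only; graphrag performs this internally):

```python
import os

# Simulate what --init wrote to .env
os.environ["GRAPHRAG_API_KEY"] = "ollama"

# settings.yaml references the key as ${GRAPHRAG_API_KEY};
# os.path.expandvars performs the same kind of substitution
line = "api_key: ${GRAPHRAG_API_KEY}"
expanded = os.path.expandvars(line)
print(expanded)  # api_key: ollama
```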
(1) If you are using Ollama:
Open settings.yaml and find the line
# api_base: https://instance.openai.azure.com
Uncomment it and change it to api_base: http://127.0.0.1:11434/v1
Also change model to llama3 (or whichever model you have pulled in Ollama).
(2) If you are using an OpenAI key instead, set the model accordingly, e.g. model: gpt-3.5-turbo-1106
Around line 28 of settings.yaml there is also an embedding model; change it to match your setup.
Note, however, that the embeddings model can only be an OpenAI one.
So if you pointed the chat model at Ollama above, you must still set the embeddings api_base to api_base: https://api.openai.com/v1
Otherwise, when indexing reaches the embedding step, it will inherit the Ollama base_url configured above and fail with an error.
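A quick way to catch this misconfiguration before a long indexing run is to check that the two llm blocks point at different endpoints. A rough sketch (check_api_bases is my own helper, not part of graphrag; the sample mirrors the settings.yaml layout described above):

```python
import re

def check_api_bases(settings_text):
    """Return every api_base value found in a settings.yaml string (hypothetical helper)."""
    return re.findall(r"^\s*api_base:\s*(\S+)", settings_text, re.MULTILINE)

sample = """\
llm:
  model: llama3
  api_base: http://127.0.0.1:11434/v1
embeddings:
  llm:
    model: text-embedding-3-small
    api_base: https://api.openai.com/v1
"""

bases = check_api_bases(sample)
print(bases)
# The chat model may point at Ollama, but the embeddings base must be OpenAI:
assert bases[-1].startswith("https://api.openai.com")
```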
# Run the indexing job
python3 -m graphrag.index --root ./ragtest

Once this finishes, the index is built. For reference, my full settings.yaml:

encoding_model: cl100k_base
skip_workflows: []

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://192.168.1.138:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: text-embedding-3-small
    api_base: https://api.openai.com/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
  # target: required # or optional

chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: input
  file_encoding: utf-8
  file_pattern: .*\\.txt$

cache:
  type: file # or blob
  base_dir: cache
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: output/${timestamp}/artifacts
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: output/${timestamp}/reports
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: prompts/entity_extraction.txt
  entity_types: [organization, person, geo, event]
  max_gleanings: 0

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: prompts/summarize_descriptions.txt
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: prompts/claim_extraction.txt
  description: Any claims or facts that could be relevant to information discovery.
  max_gleanings: 0

community_report:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: prompts/community_report.txt
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # max_tokens: 12000

global_search:
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000

3. Global search and local search
python3 -m graphrag.query \
--root ./ragtest \
--method global \
"show me some Prompts about Interpretable Soft Prompts."

python3 -m graphrag.query \
--root ./ragtest \
--method local \
"show me some Prompts about Knowledge Generation."
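If you'd rather run these queries from Python than from the shell, the same CLI can be wrapped with subprocess. A minimal sketch (build_query_cmd and run_query are my own helpers; they assume the CLI invocation shown above, and run_query needs a built index under root):

```python
import subprocess

def build_query_cmd(root, method, query):
    """Build the graphrag.query command line; method is 'global' or 'local'."""
    assert method in ("global", "local")
    return ["python3", "-m", "graphrag.query",
            "--root", root, "--method", method, query]

def run_query(root, method, query):
    """Run the query and return its stdout (requires a built index under root)."""
    result = subprocess.run(build_query_cmd(root, method, query),
                            capture_output=True, text=True, check=True)
    return result.stdout

print(build_query_cmd("./ragtest", "global",
                      "show me some Prompts about Interpretable Soft Prompts."))
```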
4. Visualization
# pip3 install chainlit
import chainlit as cl
import subprocess


@cl.on_chat_start
def start():
    cl.user_session.set("history", [])


@cl.on_message
async def main(message: cl.Message):
    history = cl.user_session.get("history")
    # Extract the text content from the Message object
    query = message.content
    # Build the command; subprocess receives the arguments as a list,
    # so no extra shell quoting of the query is needed
    cmd = [
        "python3", "-m", "graphrag.query",
        "--root", "./ragtest",
        "--method", "local",
        query,
    ]
    # Run the command and capture its output
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        output = result.stdout
        # Keep only the text after "SUCCESS: Local Search Response:"
        response = output.split("SUCCESS: Local Search Response:", 1)[-1].strip()
        history.append((query, response))
        cl.user_session.set("history", history)
        await cl.Message(content=response).send()
    except subprocess.CalledProcessError as e:
        error_message = f"An error occurred: {e.stderr}"
        await cl.Message(content=error_message).send()

# Launch with: chainlit run app.py
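The brittle part of the script above is extracting the answer from the CLI's stdout. That step can be isolated and tested on its own (extract_response is a hypothetical helper mirroring the split in the script; the marker string matches graphrag's local-search output at the time of writing):

```python
def extract_response(output, marker="SUCCESS: Local Search Response:"):
    """Return everything after the marker, or the whole output if the marker is absent."""
    return output.split(marker, 1)[-1].strip()

# A fake stdout resembling what graphrag.query prints
fake_stdout = (
    "INFO: reading settings\n"
    "SUCCESS: Local Search Response: Here are some prompts...\n"
)
print(extract_response(fake_stdout))  # Here are some prompts...
```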