

It's been a while since I tried out a new technology, so today let's play with GraphRAG. As the name suggests, it is a retrieval-augmented method that implements RAG on top of a knowledge graph.

1. Set up the environment

conda create -n GraphRAG python=3.11
conda activate GraphRAG
pip install graphrag

2. Build the GraphRAG index

mkdir -p ./ragtest/input

# This book explains in detail how to use prompt-engineering techniques to guide language models such as ChatGPT to generate high-quality text.
curl https://raw.githubusercontent.com/win4r/mytest/main/book.txt -o ./ragtest/input/book.txt

# Initialize the workspace
python3 -m graphrag.index --init --root ./ragtest

Then fill in .env. You can use an OpenAI key directly:

GRAPHRAG_API_KEY=sk-ZZvxAMzrl.....................

or, for ollama:

GRAPHRAG_API_KEY=ollama

(1) If you use ollama, open settings.yaml, uncomment the line

# api_base: https://<instance>.openai.azure.com

and change it to

api_base: http://127.0.0.1:11434/v1

and also change model to llama3 (or whichever ollama model you run).

(2) If you use an OpenAI key instead, set the model to:

model: gpt-3.5-turbo-1106

Around line 28 of settings.yaml there is also an embedding model to configure; adjust it to match your choice. Note that the embeddings model can only be an OpenAI one. If you used an ollama model for the chat LLM above, change the embeddings api_base to

api_base: https://api.openai.com/v1

otherwise this step will inherit the ollama base_url configured above and fail with an error.

# Run the indexing pipeline
python3 -m graphrag.index --root ./ragtest

After the build completes, settings.yaml looks like this:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://192.168.1.138:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: text-embedding-3-small
    api_base: https://api.openai.com/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization, person, geo, event]
  max_gleanings: 0

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 0

community_report:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # max_tokens: 12000

global_search:
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000

3. Global and local search

python3 -m graphrag.query \
  --root ./ragtest \
  --method global \
  "show me some Prompts about Interpretable Soft Prompts."

python3 -m graphrag.query \
  --root ./ragtest \
  --method local \
  "show me some Prompts about Knowledge Generation."
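The chunks settings above cut each input document into 300-token windows that overlap by 100 tokens, so an entity that straddles a chunk boundary still appears whole in at least one window. A minimal sketch of that windowing scheme (this is an illustration using a plain list of tokens; graphrag itself counts cl100k_base tokens, not whitespace words):

```python
def chunk_tokens(tokens, size=300, overlap=100):
    """Yield fixed-size windows, each sharing `overlap` tokens with the previous one."""
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # the last window already reaches the end of the document
    return chunks

tokens = [f"t{i}" for i in range(700)]
chunks = chunk_tokens(tokens)
# 700 tokens -> 3 windows: [0:300], [200:500], [400:700];
# consecutive windows share their 100-token overlap.
```

Raising overlap improves entity recall at chunk boundaries at the cost of more LLM calls during extraction, which is why the default keeps it at a third of the chunk size.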
4. Visualization

# pip3 install chainlit
import chainlit as cl
import subprocess


@cl.on_chat_start
def start():
    cl.user_session.set("history", [])


@cl.on_message
async def main(message: cl.Message):
    history = cl.user_session.get("history")
    # Extract the text content from the Message object
    query = message.content
    # Build the command; no shell is involved, so the query needs no extra quoting
    cmd = [
        "python3", "-m", "graphrag.query",
        "--root", "./ragtest",
        "--method", "local",
        query,
    ]
    # Run the command and capture its output
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        output = result.stdout
        # Keep only the text after "SUCCESS: Local Search Response:"
        response = output.split("SUCCESS: Local Search Response:", 1)[-1].strip()
        history.append((query, response))
        cl.user_session.set("history", history)
        await cl.Message(content=response).send()
    except subprocess.CalledProcessError as e:
        error_message = f"An error occurred: {e.stderr}"
        await cl.Message(content=error_message).send()

# Save this as app.py and launch it with: chainlit run app.py
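The string split in the message handler above can be factored into a small helper that works for both search methods; a sketch, assuming the success marker printed by your graphrag version matches this wording (check your CLI output if the response comes back with log lines attached):

```python
def extract_response(cli_output: str, method: str = "local") -> str:
    """Return the text after graphrag's success marker, or the raw output if the marker is absent."""
    marker = f"SUCCESS: {method.capitalize()} Search Response:"
    return cli_output.split(marker, 1)[-1].strip()

out = "INFO: loading index...\nSUCCESS: Local Search Response: Here are some prompts."
print(extract_response(out, "local"))  # -> Here are some prompts.
```

Falling back to the raw output when the marker is missing keeps the chat app usable even if a graphrag update changes its log format.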
