I recently needed vicuna-7b-v0 for PandaGPT, so I went through the deployment again. I had deployed vicuna-7b-v1.3 a while back, and quite a few things have changed since then: version updates to transformers and fastchat broke compatibility in several places and produced a number of errors, so I am recording them here. For more details, see my earlier post "Fastchat实战部署vicuna-7b-v1.3（小羊驼）" (Spielberg_1's blog, CSDN).

1. Set up the environment

Create the environment (FastChat officially recommends Python >= 3.8):

conda create -n fastchat python=3.9

Switch to the fastchat environment:

conda activate fastchat

Install torch 1.13.1, torchvision 0.14.1, and torchaudio 0.13.1:

pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1

2. Install fastchat and transformers

Install fschat 0.1.10 (officially, vicuna-7b-delta-v0 requires a fastchat version no higher than 0.1.10):

pip install fschat==0.1.10

Install transformers:

pip install transformers

3. Merge the weights to produce the vicuna-7b-v0 model

python -m fastchat.model.apply_delta \
    --base /root/LLaMA-7B-hf/llama-7b-hf \
    --target /root/vicuna-7b-v0 \
    --delta /root/vicuna-7b-delta-v0

--base: path to the llama-7b model in Hugging Face format
--target: output path for the merged vicuna-7b-v0 model (needed later to launch FastChat)
--delta: path to vicuna-7b-delta-v0 downloaded from Hugging Face

Merging the vicuna-7b weights needs about 30 GB of RAM, so budget for that. The merged model is saved to /root/vicuna-7b-v0.

Console output:

(fastchat) root@dl-230904040428gxb-pod-jupyter-7599dcdb54-qjppf:~# python -m fastchat.model.apply_delta --base /root/LLaMA-7B-hf/llama-7b-hf --target /root/vicuna-7b-v0 --delta /root/vicuna-7b-delta-v0
Loading the base model from /root/LLaMA-7B-hf/llama-7b-hf
Loading checkpoint shards: 100%|████████████████████| 33/33 [02:12<00:00,  4.03s/it]
Loading the delta from /root/vicuna-7b-delta-v0
Loading checkpoint shards: 100%|████████████████████| 2/2 [02:56<00:00, 88.21s/it]
You are using the default legacy behaviour of the class transformers.models.llama.tokenization_llama.LlamaTokenizer. If you see this, DO NOT PANIC! This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=True. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
You are resizing the embedding layer without providing a pad_to_multiple_of parameter. This means that the new embedding dimension will be 32001. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Applying the delta
Applying delta: 100%|████████████████████| 291/291 [00:24<00:00, 11.96it/s]
Saving the target model to /root/vicuna-7b-v0

4. Inference

python3 -m fastchat.serve.cli --model-path /root/vicuna-7b-v0

--model-path: the --target path produced by the merge step.

Console output (Human is the user's input; Assistant is the model's reply):

(fastchat) root@dl-230904040428gxb-pod-jupyter-7599dcdb54-qjppf:~# python3 -m fastchat.serve.cli --model-path /root/vicuna-7b-v0
You are using the default legacy behaviour of the class transformers.models.llama.tokenization_llama.LlamaTokenizer. ...
Loading checkpoint shards: 100%|████████████████████| 2/2 [02:00<00:00, 60.38s/it]
Human: hello
Assistant: Hello! How can I help you today?
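Conceptually, the apply_delta step in section 3 just adds the delta weights to the base LLaMA weights, parameter by parameter; the "size of tensor a (32000) must match the size of tensor b (32001)" error discussed later is what happens when those shapes disagree. A minimal stdlib-only sketch of the idea (not FastChat's actual implementation, which operates on PyTorch state dicts):

```python
# Sketch of delta-weight merging: merged = base + delta, applied
# parameter by parameter. Plain lists of numbers stand in for tensors.

def apply_delta(base, delta):
    if base.keys() != delta.keys():
        raise ValueError("base and delta checkpoints must contain the same parameters")
    merged = {}
    for name in base:
        if len(base[name]) != len(delta[name]):
            # This mismatch is exactly what the
            # "size of tensor a (32000) must match ... (32001)" error reports.
            raise ValueError(f"shape mismatch for {name}")
        merged[name] = [b + d for b, d in zip(base[name], delta[name])]
    return merged

base = {"embed.weight": [1.0, 2.0], "lm_head.weight": [3.0, 4.0]}
delta = {"embed.weight": [0.5, -1.0], "lm_head.weight": [1.0, 1.0]}
print(apply_delta(base, delta))
# {'embed.weight': [1.5, 1.0], 'lm_head.weight': [4.0, 5.0]}
```

This is also why the delta weights alone are not a usable model: they only become vicuna-7b-v0 after being summed with a correctly converted llama-7b base.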
Human: can you tell me a jok
Assistant: Sure, here's a joke for you: Why was the math book sad? Because it had too many problems! I hope you found that joke amusing. Do you have any other questions or topics you'd like to discuss?
Human: a guide to touring in China
Assistant: China is a vast and diverse country with a rich history and culture, and there is no shortage of things to see and do. Here are some tips for touring in China:
1. Plan ahead: Research the destinations you want to visit, the best time to go, and the logistics of getting there. Consider factors such as cost, time, transportation, and accommodation.
2. Get a visa: Depending on your nationality, you may need to obtain a visa before you travel to China. Make sure you have enough time to apply for a visa and that you have all the necessary documents.
3. Pack appropriately: Pack clothes for the season, and bring a mix of casual and formal wear. Also, don't forget to pack any necessary medication and travel accessories like a first aid kit, phone charger, and a portable battery pack.
4. Learn some basic Mandarin: While many people in China speak English, having some basic Mandarin will help you communicate with locals and navigate the country more easily.
5. Be respectful: China is a highly cultured country, and it's important to be respectful of local customs and traditions. Dress modestly, avoid pointing with your chin, and avoid discussing sensitive political or religious topics.
6. Try the local food: Chinese cuisine is diverse and delicious, and there are many local specialties to try. Don't be afraid to try new things, but be aware of food safety concerns and only eat at reputable restaurants.
7. Stay safe: Be aware of your surroundings and take proper safety precautions, such as avoiding unfamiliar or poorly lit areas at night, and keeping valuable items safe.
8. Enjoy the experience: China is a beautiful and fascinating country, and there is no shortage of things to see and do. Enjoy the experience, take it all in, and have a great time!
Human: how to study vicuna
Assistant: Studying vicuñas can be a challenging and rewarding experience, but it requires careful planning and preparation. Here are some tips for studying vicuñas:
1. Research the species: Learn about the biology, behavior, and ecology of vicuñas, including their habitat, diet, and social structure.
2. Identify the best sites: Identify the best sites to study vicuñas, such as national parks, wildlife reserves, or other protected areas. Make sure you have permission to study the vicuñas and that you are familiar with the local regulations and laws.
3. Develop your skills: Develop the necessary skills to study vicuñas, such as knowledge of the local language, tracking and observation skills, and experience in capturing and handling animals.
4. Set up your equipment: Set up your equipment, such as cameras, GPS devices, and other necessary tools, to monitor and study the vicuñas.
5. Observe and collect data: Observe the vicuñas in their natural habitat and collect data on their behavior, such as their movement patterns, feeding habits, and social interactions.
6. Analyze your data: Analyze the data you have collected and draw conclusions about the behavior and ecology of the vicuñas.
7. Communicate your findings: Communicate your findings to other researchers and conservationists, and use your research to inform conservation efforts and protect the vicuñas.
8. Consider the ethics: Remember to consider the ethical implications of your study and to minimize any negative impacts on the vicuñas and their habitat.

Summary: vicuna-7b supports English, but its answering ability is limited.

Problems encountered

Issue: ImportError: cannot import name 'is_tokenizers_available' from 'transformers.utils'
Cause: transformers version mismatch.
Fix: install transformers (pip install transformers); the version installed here is 4.32.1.

Issue: ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.
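A common trigger for this error is a stale "tokenizer_class" entry left in the converted model's tokenizer_config.json. Editing the file by hand works; the rename can also be scripted with the standard library. A small sketch (the function name and the temporary directory are illustrative, not part of FastChat):

```python
import json
import tempfile
from pathlib import Path

def fix_tokenizer_class(model_dir):
    """Rename the legacy 'LLaMATokenizer' class to 'LlamaTokenizer'
    in model_dir/tokenizer_config.json, if present."""
    cfg_path = Path(model_dir) / "tokenizer_config.json"
    cfg = json.loads(cfg_path.read_text())
    if cfg.get("tokenizer_class") == "LLaMATokenizer":
        cfg["tokenizer_class"] = "LlamaTokenizer"
        cfg_path.write_text(json.dumps(cfg, indent=2))
    return cfg["tokenizer_class"]

# Demo against a throwaway directory standing in for the real model dir:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "tokenizer_config.json").write_text(
        json.dumps({"tokenizer_class": "LLaMATokenizer"}))
    print(fix_tokenizer_class(d))  # LlamaTokenizer
```

On a real checkout you would point it at the llama-7b directory passed as --base (e.g. /root/LLaMA-7B-hf/llama-7b-hf).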
Translation: the Tokenizer class LLaMATokenizer does not exist or is not currently imported.
Cause: a transformers update renamed the classes; AutoTokenizer was replaced by LlamaTokenizer, and AutoModelForCausalLM by LlamaForCausalLM.
Fix:
1. Open fastchat/model/apply_delta.py and replace every AutoTokenizer with LlamaTokenizer, and every AutoModelForCausalLM with LlamaForCausalLM.
2. In the llama-7b model directory, edit tokenizer_config.json and change "tokenizer_class": "LLaMATokenizer" to "tokenizer_class": "LlamaTokenizer".

Issue: ImportError: cannot import name 'LlamaTokenizerFast' from 'transformers'
Translation: LlamaTokenizerFast cannot be imported from transformers.
Fix:
Make sure the latest Transformers library is installed; you can update it with pip:
pip install --upgrade transformers
Then check that the LlamaTokenizerFast class exists in your Transformers installation, either by consulting the Transformers documentation or with:
python -c "from transformers import LlamaTokenizerFast"
If the command reports no error, the LlamaTokenizerFast class is available.

Issue: UnboundLocalError: local variable 'sentencepiece_model_pb2' referenced before assignment
Fix:
pip install protobuf
See: Error AutoTokenizer.from_pretrained UnboundLocalError: local variable sentencepiece_model_pb2 referenced before assignment · Issue #25848 · huggingface/transformers · GitHub

Issue: RuntimeError: The size of tensor a (32000) must match the size of tensor b (32001) at non-singleton dimension 0
Cause: fastchat version mismatch; downgrade to 0.1.10. See the FastChat weight-version compatibility doc: https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md
What worked:
pip install fschat==0.1.10
See also: RuntimeError: The size of tensor a (32000) must match the size of tensor b (32001) at non-singleton dimension 0 · Issue #132 · Vision-CAIR/MiniGPT-4 · GitHub

Ubuntu commands used

chmod +rwx file    add read, write, and execute permissions to file (r = read, w = write, x = execute)
chmod -rwx file    remove read, write, and execute permissions from file
nvidia-smi -l 5    refresh nvidia-smi every 5 seconds to watch GPU utilization and memory usage in real time

References

Fastchat实战部署vicuna-7b-v1.3（小羊驼）_Spielberg_1的博客-CSDN博客
nvidia-smi命令实时查看GPU使用、显存占用情况_我们是宇宙中最孤独的孩子的博客-CSDN博客
MiniGPT-4 本地部署 RTX 3090 - 知乎
解决ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported_wx6176918821622的技术博客_51CTO博客
Error AutoTokenizer.from_pretrained UnboundLocalError: local variable sentencepiece_model_pb2 referenced before assignment · Issue #25848 · huggingface/transformers · GitHub
ubuntu如何修改读写权限设置 - 小小蚂蚁
小羊驼模型(FastChat-vicuna)运行踩坑记录 - 知乎
win10,win11 下部署Vicuna-7B、Vicuna-13B模型，gpu cpu运行_babytiger的博客-CSDN博客