何九江 2024-01-11 07:53 Acceptance rate: 0%
Views: 290

m3e-large-api deployed with Docker fails to start

# Description of the problem
The m3e-large-api deployed via Docker's one-click setup fails to start. Hoping for an answer.
Command used: docker run -d -p 6100:6008 --gpus all --name=m3e-large-api stawky/m3e-large-api:latest

# Relevant code snippet, output, and error message
2024-01-11 15:09:29 No sentence-transformers model found with name ./moka-ai_m3e-large. Creating a new one with MEAN pooling.
2024-01-11 15:09:32 Traceback (most recent call last):
2024-01-11 15:09:32 File "/usr/local/bin/uvicorn", line 8, in
2024-01-11 15:09:32 sys.exit(main())
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1157, in call
2024-01-11 15:09:32 return self.main(*args, **kwargs)
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1078, in main
2024-01-11 15:09:32 rv = self.invoke(ctx)
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
2024-01-11 15:09:32 return ctx.invoke(self.callback, **ctx.params)
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/click/core.py", line 783, in invoke
2024-01-11 15:09:32 return __callback(*args, **kwargs)
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/uvicorn/main.py", line 416, in main
2024-01-11 15:09:32 run(
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/uvicorn/main.py", line 587, in run
2024-01-11 15:09:32 server.run()
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/uvicorn/server.py", line 61, in run
2024-01-11 15:09:32 return asyncio.run(self.serve(sockets=sockets))
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
2024-01-11 15:09:32 return loop.run_until_complete(main)
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
2024-01-11 15:09:32 return future.result()
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/uvicorn/server.py", line 68, in serve
2024-01-11 15:09:32 config.load()
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/uvicorn/config.py", line 467, in load
2024-01-11 15:09:32 self.loaded_app = import_from_string(self.app)
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/site-packages/uvicorn/importer.py", line 21, in import_from_string
2024-01-11 15:09:32 module = importlib.import_module(module_str)
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/importlib/init.py", line 127, in import_module
2024-01-11 15:09:32 return _bootstrap._gcd_import(name[level:], package, level)
2024-01-11 15:09:32 File "", line 1014, in _gcd_import
2024-01-11 15:09:32 File "", line 991, in _find_and_load
2024-01-11 15:09:32 File "", line 975, in _find_and_load_unlocked
2024-01-11 15:09:32 File "", line 671, in _load_unlocked
2024-01-11 15:09:32 File "", line 843, in exec_module
2024-01-11 15:09:32 File "", line 219, in _call_with_frames_removed
2024-01-11 15:09:32 File "/app/localembedding.py", line 42, in
2024-01-11 15:09:32 class ChatMessage(BaseModel):
2024-01-11 15:09:32 File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.new
2024-01-11 15:09:32 File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
2024-01-11 15:09:32 File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.init
2024-01-11 15:09:32 File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
2024-01-11 15:09:32 File "pydantic/fields.py", line 668, in pydantic.fields.ModelField._type_analysis
2024-01-11 15:09:32 File "/usr/local/lib/python3.8/typing.py", line 774, in subclasscheck
2024-01-11 15:09:32 return issubclass(cls, self.origin)
2024-01-11 15:09:32 TypeError: issubclass() arg 1 must be a class

# My initial idea for solving it

# Environment and software versions
Win10 + Docker 4.26.1 + Ubuntu (can't remember the exact version, probably 22)


2 answers

  • CSDN-Ada助手 (CSDN-AI official account) 2024-01-11 10:29

    [The following answer was generated by GPT]

    Since the error log says no sentence-transformers model could be found, make sure the model has been downloaded and is available before the container starts. A concrete solution:

    1. Make sure you have downloaded the correct sentence-transformers model and that it is located at ./moka-ai_m3e-large.

    2. Create a Dockerfile with the following content:

    ```Dockerfile
    FROM stawky/m3e-large-api:latest
    WORKDIR /app
    COPY ./moka-ai_m3e-large /app/moka-ai_m3e-large
    ```

    3. Build a custom image with:

    docker build -t my-m3e-image .

    4. Run a container from that custom image, keeping the same port mapping:

    docker run -d -p 6100:6008 --gpus all --name=m3e-large-api my-m3e-image

    With these steps the required model is included in the image and loaded correctly, which resolves the missing-model error at startup.
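    If the rebuilt container does start, a quick smoke test from the host confirms the service is reachable on the mapped port. This is only a sketch: the /v1/embeddings path, the model name "m3e", and the API key are assumptions about the image exposing an OpenAI-compatible embeddings interface, not something confirmed in this thread.

    ```bash
    # Smoke test against the host port mapped by "-p 6100:6008".
    # ASSUMPTIONS: the image serves an OpenAI-compatible /v1/embeddings
    # endpoint; "m3e" and YOUR_API_KEY are placeholders for whatever model
    # name and key the image is actually configured with.
    curl http://localhost:6100/v1/embeddings \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer YOUR_API_KEY" \
      -d '{"model": "m3e", "input": ["这是一条测试文本"]}'
    ```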


    If you have already solved the problem, it would be great if you could write up the solution as a blog post and share the link in the comments to help more people ^-^
  • m0_62493953 2024-06-21 06:19

    This problem seems to show up when deploying on Windows; I suggest downloading the model from ModelScope (魔搭) and deploying it yourself.
    The requirements.txt and localembedding.py files can be pulled out of the Docker container; just be quick about it while the container is restarting:
    docker exec -it <container name> /bin/bash to get a shell inside the container
    cat requirements.txt and copy the output by hand
    cat localembedding.py and copy the output by hand
    Create new local files and paste the contents in
    Then run:
    pip install -r requirements.txt
    and then:
    python localembedding.py
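    A less error-prone way to get those two files out of the container is docker cp, which also works while the container keeps crashing and restarting. A sketch, assuming the container name m3e-large-api from the run command above and that both files live under /app (the traceback shows /app/localembedding.py; requirements.txt sitting next to it is an assumption):

    ```bash
    # Copy the app files out of the container (works even when it is stopped).
    # "m3e-large-api" is the container name from the original docker run command;
    # the /app paths come from the traceback and the image's WORKDIR, and the
    # exact location of requirements.txt is an assumption.
    docker cp m3e-large-api:/app/requirements.txt ./requirements.txt
    docker cp m3e-large-api:/app/localembedding.py ./localembedding.py

    # Then install the dependencies and run the service directly on the host.
    pip install -r requirements.txt
    python localembedding.py
    ```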
