何九江 2024-01-11 15:53 · Acceptance rate: 0%
112 views

Docker deployment of m3e-large-api fails to start

# Problem description
The m3e-large-api deployed with the one-command Docker setup fails to start. Hoping for an answer.
Command used: `docker run -d -p 6100:6008 --gpus all --name=m3e-large-api stawky/m3e-large-api:latest`
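For reference, a container that exits right after `docker run -d` can be inspected with standard Docker commands; a minimal check, using the container name from the command above:

```bash
# Show the container even if it has already exited
docker ps -a --filter name=m3e-large-api

# Print the startup log / traceback
docker logs m3e-large-api
```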

# Relevant code, output, and error messages
```
2024-01-11 15:09:29 No sentence-transformers model found with name ./moka-ai_m3e-large. Creating a new one with MEAN pooling.
2024-01-11 15:09:32 Traceback (most recent call last):
2024-01-11 15:09:32   File "/usr/local/bin/uvicorn", line 8, in <module>
2024-01-11 15:09:32     sys.exit(main())
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1157, in __call__
2024-01-11 15:09:32     return self.main(*args, **kwargs)
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1078, in main
2024-01-11 15:09:32     rv = self.invoke(ctx)
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
2024-01-11 15:09:32     return ctx.invoke(self.callback, **ctx.params)
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 783, in invoke
2024-01-11 15:09:32     return __callback(*args, **kwargs)
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/uvicorn/main.py", line 416, in main
2024-01-11 15:09:32     run(
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/uvicorn/main.py", line 587, in run
2024-01-11 15:09:32     server.run()
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/uvicorn/server.py", line 61, in run
2024-01-11 15:09:32     return asyncio.run(self.serve(sockets=sockets))
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
2024-01-11 15:09:32     return loop.run_until_complete(main)
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
2024-01-11 15:09:32     return future.result()
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/uvicorn/server.py", line 68, in serve
2024-01-11 15:09:32     config.load()
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/uvicorn/config.py", line 467, in load
2024-01-11 15:09:32     self.loaded_app = import_from_string(self.app)
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/site-packages/uvicorn/importer.py", line 21, in import_from_string
2024-01-11 15:09:32     module = importlib.import_module(module_str)
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
2024-01-11 15:09:32     return _bootstrap._gcd_import(name[level:], package, level)
2024-01-11 15:09:32   File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2024-01-11 15:09:32   File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2024-01-11 15:09:32   File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
2024-01-11 15:09:32   File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
2024-01-11 15:09:32   File "<frozen importlib._bootstrap_external>", line 843, in exec_module
2024-01-11 15:09:32   File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2024-01-11 15:09:32   File "/app/localembedding.py", line 42, in <module>
2024-01-11 15:09:32     class ChatMessage(BaseModel):
2024-01-11 15:09:32   File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.__new__
2024-01-11 15:09:32   File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
2024-01-11 15:09:32   File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
2024-01-11 15:09:32   File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
2024-01-11 15:09:32   File "pydantic/fields.py", line 668, in pydantic.fields.ModelField._type_analysis
2024-01-11 15:09:32   File "/usr/local/lib/python3.8/typing.py", line 774, in __subclasscheck__
2024-01-11 15:09:32     return issubclass(cls, self.__origin__)
2024-01-11 15:09:32 TypeError: issubclass() arg 1 must be a class
```

# My preliminary approach

# Environment and software versions
Windows 10 + Docker Desktop 4.26.1 + Ubuntu (don't remember the exact version, probably 22)
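Since the traceback ends inside pydantic's field type analysis, one quick check is which Python and pydantic versions the image ships with, to see whether this is a dependency mismatch; a minimal sketch, using the image tag from the run command above:

```bash
# Override the entrypoint to print the bundled Python and pydantic versions
docker run --rm --entrypoint python stawky/m3e-large-api:latest \
  -c "import sys, pydantic; print(sys.version); print(pydantic.VERSION)"
```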


2 replies

  • CSDN-Ada助手 (CSDN-AI official account) 2024-01-11 18:29

    [The following answer was generated by GPT]

    Since the error message says that no sentence-transformers model was found, you need to make sure the corresponding model has been downloaded and is available before the container starts. A concrete solution:

    1. Make sure the correct sentence-transformers model has been downloaded and that its local path is ./moka-ai_m3e-large (see the download sketch after these steps).

    2. Create a Dockerfile with the following content:

    ```Dockerfile
    FROM stawky/m3e-large-api:latest
    WORKDIR /app
    COPY ./moka-ai_m3e-large /app/moka-ai_m3e-large
    ```

    3. Build a custom image with:

    ```bash
    docker build -t my-m3e-image .
    ```

    4. Run a container from the custom image, making sure the port mapping is correct:

    ```bash
    docker run -d -p 6100:6008 --gpus all --name=m3e-large-api my-m3e-image
    ```

    These steps ensure the required model is loaded correctly, which resolves the missing-model message at startup.
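    For step 1, one common way to get the moka-ai/m3e-large weights into ./moka-ai_m3e-large is to clone them from the Hugging Face Hub; a minimal sketch, assuming git and git-lfs are installed (the target directory matches the COPY path in the Dockerfile above):

    ```bash
    # Fetch the model weights so that ./moka-ai_m3e-large exists next to the Dockerfile
    git lfs install
    git clone https://huggingface.co/moka-ai/m3e-large ./moka-ai_m3e-large
    ```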


    If you have already solved this problem, we would really appreciate you sharing the solution, writing it up as a blog post, and posting the link in the comments to help more people ^-^
