十足冷静 · 2024-07-08 14:36 · acceptance rate: 0%
162 views

(Tags: bug | Keywords: server)

Bug in the test file shipped with the official ChatGLM3 repo
Running api_server works fine, but once openai_api_request is launched, the server side reports:
2024-07-08 14:21:39.196 | DEBUG    | __main__:create_chat_completion:242 - ==== request ====
{'messages': [ChatMessage(role='system', content="You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.", name=None, function_call=None), ChatMessage(role='user', content='你好… [remainder of the request dump is garbled by a console encoding issue, and the log is truncated before the start of the traceback]
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/root/chatglm3/ChatGLM3/openai_api_demo/api_server.py", line 298, in create_chat_completion
    response = generate_chatglm3(model, tokenizer, gen_params)
  File "/root/chatglm3/ChatGLM3/openai_api_demo/utils.py", line 165, in generate_chatglm3
    for response in generate_stream_chatglm3(model, tokenizer, params):
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/root/chatglm3/ChatGLM3/openai_api_demo/utils.py", line 81, in generate_stream_chatglm3
    for total_ids in model.stream_generate(**inputs, eos_token_id=eos_token_id, **gen_kwargs):
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/root/.cache/huggingface/modules/transformers_modules/model/modeling_chatglm.py", line 1156, in stream_generate
    logits_warper = self._get_logits_warper(generation_config)
TypeError: GenerationMixin._get_logits_warper() missing 1 required positional argument: 'device'

And the openai_api_request client side reports:

(chatglm3) root@a4ca65dd166b:~/chatglm3/ChatGLM3/openai_api_demo# python3 openai_api_request.py 
Traceback (most recent call last):
  File "/root/chatglm3/ChatGLM3/openai_api_demo/openai_api_request.py", line 96, in <module>
    simple_chat(use_stream=False)
  File "/root/chatglm3/ChatGLM3/openai_api_demo/openai_api_request.py", line 67, in simple_chat
    response = client.chat.completions.create(
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 643, in create
    return self._post(
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_base_client.py", line 1261, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_base_client.py", line 1026, in _request
    return self._retry_request(
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_base_client.py", line 1074, in _retry_request
    return self._request(
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_base_client.py", line 1026, in _request
    return self._retry_request(
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_base_client.py", line 1074, in _retry_request
    return self._request(
  File "/root/anaconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_base_client.py", line 1041, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Internal Server Error

3 answers

  • 专家-赤兔 [online] · top full-stack creator · 2024-07-08 14:36

    Quoted from the free WeChat mini program 皆我百晓生:

    From the error output you posted, there are two distinct symptoms: the client running openai_api_request ends with an internal server error (openai.InternalServerError), and the server's create_chat_completion handler dies with a TypeError. I'll analyze each in turn and suggest possible fixes.

    Problem 1: the openai.InternalServerError on the client

    Despite the exception class name, this error does not come from OpenAI's servers: openai_api_request points the openai client at your local api_server, so openai.InternalServerError is simply the HTTP 500 that your own server returned after its handler crashed. In other words, it is the client-side mirror of the TypeError in the server traceback, and there is no API key or OpenAI-side outage to investigate here. Once the server-side error below is fixed, this one should disappear as well.
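When debugging this kind of 500, it can help to bypass the openai client and see the raw response body the local server sends back. A minimal stdlib sketch, assuming the demo server listens on its default base URL `http://127.0.0.1:8000` (an assumption — adjust the URL and the model name to your setup):

```python
import json
import urllib.error
import urllib.request

def probe_chat_endpoint(base_url="http://127.0.0.1:8000", timeout=5):
    """POST a minimal chat request and return (status, body) so the
    server's raw error text is visible instead of only the client's
    bare openai.InternalServerError."""
    payload = json.dumps({
        "model": "chatglm3-6b",  # assumed model name; use whatever your server expects
        "messages": [{"role": "user", "content": "hello"}],
    }).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, resp.read().decode("utf-8", "replace")
    except urllib.error.HTTPError as e:
        # A 500 from the local server lands here; its body is often more
        # informative than the exception the openai client raises.
        return e.code, e.read().decode("utf-8", "replace")
    except urllib.error.URLError as e:
        return None, f"connection failed: {e.reason}"
```

If the returned status is 500, the body (together with the server console) should show the same TypeError discussed below.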

    Problem 2: TypeError: GenerationMixin._get_logits_warper() missing 1 required positional argument: 'device' on the server

    This TypeError is the real root cause. It is raised inside the model's bundled modeling_chatglm.py, whose stream_generate calls transformers' internal method GenerationMixin._get_logits_warper() with only generation_config. Newer transformers releases changed that internal method to also require a device argument, so the modeling code downloaded with the checkpoint and the transformers version installed in your environment no longer agree. This is not something you can fix by passing device through create_chat_completion; the practical options are to update the model's modeling files to a revision that matches your installed transformers, or to pin transformers to an older release whose _get_logits_warper still takes a single argument, for example:

    pip install "transformers==4.40.2"  # an older release; adjust the pin to whatever the ChatGLM3 repo's requirements specify


    If that does not resolve it, double-check that the rest of the environment matches what the ChatGLM3 repo's requirements expect (library versions, CUDA setup, and so on). Hope this helps; if the problem persists, please post more of your code and your exact library versions so the diagnosis can be narrowed down further.
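Since the mismatch is just a changed method signature, the idea can also be sketched as a small compatibility shim: inspect the callee's signature and pass device only when it is required. The two warper_* functions below are hypothetical stand-ins for the old and new shapes of _get_logits_warper, not the real transformers code:

```python
import inspect

def warper_old(generation_config):
    # pre-change shape: takes only the generation config
    return ("old", generation_config)

def warper_new(generation_config, device):
    # post-change shape: additionally requires the target device
    return ("new", generation_config, device)

def call_logits_warper(fn, generation_config, device="cpu"):
    """Call fn with or without `device`, depending on its signature."""
    if "device" in inspect.signature(fn).parameters:
        return fn(generation_config, device=device)
    return fn(generation_config)
```

One could apply the same signature check around the _get_logits_warper call in modeling_chatglm.py's stream_generate, though pinning transformers to a compatible version is the less invasive fix.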

