I'm using Spring AI to call vLLM and have the model invoke MCP tools, but I keep getting a 400 error. Here is my code.
I added the dependencies:
<!-- Note: the starter is pinned to 1.0.0-SNAPSHOT while the BOM and the MCP
     client are 1.0.0-M7; mixing milestone and snapshot versions is a common
     source of subtle breakage. (M7 also renamed the OpenAI starter to
     spring-ai-starter-model-openai.) -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-mcp-client-webflux</artifactId>
    <version>1.0.0-M7</version>
</dependency>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0-M7</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
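A quick way to check which Spring AI versions actually end up on the classpath (worth doing given the SNAPSHOT/M7 mix above) is Maven's dependency tree, filtered to the Spring AI group:

```shell
# Print the resolved dependency tree, keeping only org.springframework.ai
# artifacts, to verify the starter and the MCP client resolve to consistent versions.
mvn dependency:tree -Dincludes=org.springframework.ai
```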
Then I defined the chatClient:
@Bean
ChatClient chatClient(ChatModel chatModel, List<McpSyncClient> mcpClients) {
    // Expose every tool from the configured MCP servers as Spring AI tool callbacks
    var toolCallbackProvider = new SyncMcpToolCallbackProvider(mcpClients);
    OpenAiChatOptions options = OpenAiChatOptions.builder()
            // must match the model name the vLLM server was launched with
            .model("/home/ai/models/Qwen/Qwen2.5-VL-7B-Instruct-AWQ")
            .temperature(0.7)
            .maxTokens(500)
            .build();
    return ChatClient
            .builder(chatModel)
            // system prompt: "You are a professional finance professor"
            .defaultSystem("你是一个专业的金融领域教授")
            .defaultTools(toolCallbackProvider.getToolCallbacks())
            .defaultOptions(options)
            .build();
}
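For what it's worth, vLLM's OpenAI-compatible server rejects chat requests that carry a `tools` array (which Spring AI adds as soon as `defaultTools` is set) with a 400 unless tool parsing is enabled at launch. A launch sketch, assuming the same model path and port as above; `hermes` is the parser vLLM's docs suggest for Qwen2.5-family models:

```shell
# Launch sketch: tool calling must be enabled explicitly; otherwise
# chat/completions requests containing "tools" are rejected with 400.
vllm serve /home/ai/models/Qwen/Qwen2.5-VL-7B-Instruct-AWQ \
  --port 8002 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```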
Then I defined a controller to expose the endpoint:
@RestController
public class McpController {

    private final ChatClient chatClient;
    // (the chatMemory field and its injection were not shown in the post)

    public McpController(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    @RequestMapping(value = "/generate_stream", method = RequestMethod.GET)
    public Flux<ServerSentEvent<Object>> generateStream(HttpServletResponse response,
                                                        @RequestParam("id") String id,
                                                        @RequestParam("prompt") String prompt) {
        response.setCharacterEncoding("UTF-8");
        // keep the last 10 messages of conversation "id" in the context
        var messageChatMemoryAdvisor = new MessageChatMemoryAdvisor(chatMemory, id, 10);
        return this.chatClient
                .prompt(prompt)
                .advisors(messageChatMemoryAdvisor)
                .stream()
                .chatResponse()
                .map(data -> {
                    String text = data.getResult().getOutput().getText();
                    return ServerSentEvent.builder()
                            .data(text)
                            .build();
                })
                .doOnError(error -> System.out.println(error));
    }
}
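To rule Spring AI in or out, the 400 can be reproduced (or not) by posting a minimal `tools` payload directly to vLLM's OpenAI-compatible endpoint, using the base-url from the YAML below. The tool definition here is a made-up placeholder, purely for illustration:

```shell
# Minimal reproduction against the OpenAI-compatible endpoint.
# If this also returns 400, the problem is on the vLLM side, not in Spring AI;
# the response body usually names the missing server flag.
curl -s -w '\nHTTP %{http_code}\n' http://192.168.8.4:8002/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "/home/ai/models/Qwen/Qwen2.5-VL-7B-Instruct-AWQ",
    "messages": [{"role": "user", "content": "Plan a trip to Xi'\''an"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "dummy_tool",
        "description": "illustrative placeholder tool",
        "parameters": {"type": "object", "properties": {}}
      }
    }]
  }'
```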
Next is the application.yml, which configures the MCP tool calling:
spring:
  ai:
    openai:
      base-url: http://192.168.8.4:8002 # vLLM ip
      api-key: sk-no-key-required
    mcp:
      client:
        type: SYNC
        enabled: true
        name: call-mcp-server
        request-timeout: 30s
        stdio:
          servers-configuration: classpath:mcp-server.json
And here is the mcp-server.json file, which calls the Amap (Gaode Maps) MCP server:
{
  "mcpServers": {
    "amap-maps": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@amap/amap-maps-mcp-server"],
      "env": {
        "AMAP_MAPS_API_KEY": "e0a48e0ccdf0a45d3e4019809eb79633"
      }
    }
  }
}
With all of this in place, calling localhost:8080/generate_stream?id=2&prompt=@agen 规划去西安游玩路线 makes vLLM immediately return a 400 error. Without defaultTools, calling the model works fine; and the same MCP setup works when I use Tongyi Qianwen instead. It only fails with my self-hosted model. Has anyone run into a similar problem and want to compare notes?