2017-11-04 00:38 · 49 views


I have this query:

SELECT DISTINCT(po.mark) franquicia, COUNT(*) failures, GROUP_CONCAT(po.log_transaction SEPARATOR ';') logs, DATE(po.created) fecha
FROM pagos_onlines AS po
INNER JOIN usuarios_tokens AS ut ON po.usuario_token = ut.token
AND created > '2017-10-03 11:00:01'
GROUP BY fecha, franquicia;

The problem is that log_transaction is a BLOB column that stores JSON data, so the "logs" field should be one huge string of concatenated JSONs. When I run this query, only about one and a half JSONs appear in "logs", so my guess is that the memory for that field runs out of space.

Is there a way to allocate memory on the fly for a specific column when running a MySQL SELECT query?


2 answers

  • Accepted answer
    dongwen5019 2017-11-04 20:21

    This query almost certainly throws a warning about the data being cut by GROUP_CONCAT. If you are using a client or library that ignores warnings, run SHOW WARNINGS; immediately after your query to confirm that this warning is present.

    To resolve this, you need to increase the value of the system variable @@GROUP_CONCAT_MAX_LEN to get longer results without truncation.

    The default value is only 1024 (bytes, not characters, according to the documentation). You can change this in global config or change it in your session before running this query.
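    A minimal sketch of the session-level fix; the 1000000 here is an arbitrary example size, not a recommended value — pick something larger than your longest expected concatenated result:

    -- Raise the per-session limit on GROUP_CONCAT results (value in bytes).
    -- 1000000 is an arbitrary example; it only affects the current session.
    SET SESSION group_concat_max_len = 1000000;

    -- Re-run the original SELECT in this same session, then verify that
    -- no "Row NNN was cut by GROUP_CONCAT()" warning remains:
    SHOW WARNINGS;

    To make the change permanent, set group_concat_max_len in the server configuration file instead (or with SET GLOBAL, which affects new sessions only).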

  • douhe6181 2017-11-04 01:22

    The maximum size of a BLOB or TEXT object is determined by its type, but the largest value you actually can transmit between the client and server is determined by the amount of available memory and the size of the communications buffers. You can change the message buffer size by changing the value of the max_allowed_packet variable, but you must do so for both the server and your client program.

    So in your case, increase the value of the max_allowed_packet option on both the server and the client.
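    A sketch of checking and raising the server-side limit; the 64 MiB figure is an arbitrary example, and SET GLOBAL requires the appropriate admin privilege:

    -- Check the current server-side limit (value in bytes):
    SHOW VARIABLES LIKE 'max_allowed_packet';

    -- Raise it for new connections; 67108864 = 64 MiB, an arbitrary example:
    SET GLOBAL max_allowed_packet = 67108864;

    Remember that this variable exists on both sides of the connection: the client must also be started with a matching limit (for the mysql command-line client, the --max-allowed-packet option), or set it in its configuration file, for large values to actually be transmitted.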

    Official reference document:
