I have a very large file and I need to process each line (the lines are independent of each other). How can I use goroutines (or should I not use them?) to read the file in the fastest way?
- doudan4834 2012-10-16 12:46
As long as your hard disk is orders of magnitude slower than your CPU, which is still a very common situation, you cannot magically make reading a file from a single disk any faster by throwing more CPU cycles at it. (This assumes cold file caches and/or a file much bigger than all available file cache memory.)
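To make that concrete: the reading itself stays sequential, since a single reader is already as fast as the disk allows, but if the per-line work is CPU-heavy you can still fan it out to a pool of goroutines. Below is a minimal sketch, assuming a hypothetical input file `big.txt` and a placeholder `processLine` function standing in for your actual per-line work:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"runtime"
	"sync"
)

// processLine is a placeholder for whatever per-line work you need to do.
func processLine(line string) {
	_ = len(line) // pretend this is some CPU-bound work
}

func main() {
	f, err := os.Open("big.txt") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	lines := make(chan string, 1024) // buffered so the reader rarely blocks
	var wg sync.WaitGroup

	// One worker per CPU; more goroutines will not make the disk any faster.
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for line := range lines {
				processLine(line)
			}
		}()
	}

	// A single goroutine (here: main) does all the reading, sequentially.
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lines <- scanner.Text()
	}
	close(lines)

	wg.Wait()
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("done")
}
```

If the work per line is trivial, the channel overhead may dominate and a plain sequential loop can end up faster; benchmarking both variants on your actual data is the only reliable way to decide.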
(Accepted by the asker as the best answer.)