Switch-biu 2025-08-19 10:06

Python prediction/scheduling code throws an error

Running the model code produces the error below. I don't understand why it keeps complaining that a passed argument is negative.

D:\anaconda3\envs\pfl_quantum_env\python.exe C:\Users\lenovo\PycharmProjects\No_Q_charging\Q-PFL_TimeMixer_Framework\main.py 
2025-08-19 09:29:27,743 - INFO - --- Starting Experiment: Centralized ---
2025-08-19 09:29:28,043 - INFO - --- Starting Centralized Training ---
2025-08-19 09:30:34,491 - INFO - Centralized Epoch 1 - MAE: 0.4119, RMSE: 0.6080
2025-08-19 09:30:34,512 - INFO - New best centralized model saved with RMSE: 0.6080
2025-08-19 09:31:54,069 - INFO - Centralized Epoch 2 - MAE: 0.3432, RMSE: 0.5074
2025-08-19 09:31:54,080 - INFO - New best centralized model saved with RMSE: 0.5074
2025-08-19 09:33:10,120 - INFO - Centralized Epoch 3 - MAE: 0.3010, RMSE: 0.4529
2025-08-19 09:33:10,129 - INFO - New best centralized model saved with RMSE: 0.4529
2025-08-19 09:34:12,998 - INFO - Centralized Epoch 4 - MAE: 0.2969, RMSE: 0.4461
2025-08-19 09:34:13,006 - INFO - New best centralized model saved with RMSE: 0.4461
2025-08-19 09:35:14,439 - INFO - Centralized Epoch 5 - MAE: 0.2960, RMSE: 0.4476
2025-08-19 09:36:11,765 - INFO - Centralized Epoch 6 - MAE: 0.2907, RMSE: 0.4403
2025-08-19 09:36:11,780 - INFO - New best centralized model saved with RMSE: 0.4403
2025-08-19 09:37:18,212 - INFO - Centralized Epoch 7 - MAE: 0.2930, RMSE: 0.4452
2025-08-19 09:38:21,377 - INFO - Centralized Epoch 8 - MAE: 0.2735, RMSE: 0.4208
2025-08-19 09:38:21,385 - INFO - New best centralized model saved with RMSE: 0.4208
2025-08-19 09:39:30,061 - INFO - Centralized Epoch 9 - MAE: 0.2802, RMSE: 0.4350
2025-08-19 09:40:31,686 - INFO - Centralized Epoch 10 - MAE: 0.2676, RMSE: 0.4084
2025-08-19 09:40:31,695 - INFO - New best centralized model saved with RMSE: 0.4084
2025-08-19 09:41:43,254 - INFO - Centralized Epoch 11 - MAE: 0.2565, RMSE: 0.3966
2025-08-19 09:41:43,262 - INFO - New best centralized model saved with RMSE: 0.3966
2025-08-19 09:42:45,933 - INFO - Centralized Epoch 12 - MAE: 0.2635, RMSE: 0.4104
2025-08-19 09:43:50,361 - INFO - Centralized Epoch 13 - MAE: 0.2614, RMSE: 0.4056
2025-08-19 09:44:53,441 - INFO - Centralized Epoch 14 - MAE: 0.2591, RMSE: 0.4023
2025-08-19 09:45:54,244 - INFO - Centralized Epoch 15 - MAE: 0.2540, RMSE: 0.3946
2025-08-19 09:45:54,251 - INFO - New best centralized model saved with RMSE: 0.3946
2025-08-19 09:46:55,909 - INFO - Centralized Epoch 16 - MAE: 0.2545, RMSE: 0.3970
2025-08-19 09:48:00,614 - INFO - Centralized Epoch 17 - MAE: 0.2593, RMSE: 0.4018
2025-08-19 09:48:59,210 - INFO - Centralized Epoch 18 - MAE: 0.2607, RMSE: 0.4062
2025-08-19 09:49:57,062 - INFO - Centralized Epoch 19 - MAE: 0.2615, RMSE: 0.4055
2025-08-19 09:50:58,116 - INFO - Centralized Epoch 20 - MAE: 0.2644, RMSE: 0.4079
2025-08-19 09:51:57,344 - INFO - Centralized Epoch 21 - MAE: 0.2615, RMSE: 0.4019
2025-08-19 09:52:57,890 - INFO - Centralized Epoch 22 - MAE: 0.2529, RMSE: 0.3944
2025-08-19 09:52:57,898 - INFO - New best centralized model saved with RMSE: 0.3944
2025-08-19 09:53:53,033 - INFO - Centralized Epoch 23 - MAE: 0.2491, RMSE: 0.3839
2025-08-19 09:53:53,040 - INFO - New best centralized model saved with RMSE: 0.3839
2025-08-19 09:54:47,294 - INFO - Centralized Epoch 24 - MAE: 0.2452, RMSE: 0.3788
2025-08-19 09:54:47,301 - INFO - New best centralized model saved with RMSE: 0.3788
2025-08-19 09:55:43,625 - INFO - Centralized Epoch 25 - MAE: 0.2498, RMSE: 0.3895
2025-08-19 09:56:39,948 - INFO - Centralized Epoch 26 - MAE: 0.2515, RMSE: 0.3907
2025-08-19 09:57:35,183 - INFO - Centralized Epoch 27 - MAE: 0.2471, RMSE: 0.3828
2025-08-19 09:58:36,315 - INFO - Centralized Epoch 28 - MAE: 0.2542, RMSE: 0.3932
2025-08-19 09:59:34,963 - INFO - Centralized Epoch 29 - MAE: 0.2572, RMSE: 0.4014
2025-08-19 10:00:32,700 - INFO - Centralized Epoch 30 - MAE: 0.2456, RMSE: 0.3820
2025-08-19 10:00:32,770 - INFO - --- Joint Framework Round 1/5 ---
2025-08-19 10:00:32,770 - INFO - Generating future demand predictions...
2025-08-19 10:00:32,842 - INFO - Predicted demands for scheduling: 
[10.03  1.32 15.92  5.36 16.33 18.38  0.3   9.55  9.5  21.15  0.14 10.71
  7.79  1.37 10.34]
2025-08-19 10:00:32,842 - INFO - Running Multi-Objective Scheduler...
Traceback (most recent call last):
  File "C:\Users\lenovo\PycharmProjects\No_Q_charging\Q-PFL_TimeMixer_Framework\main.py", line 162, in <module>
    run_experiment()
  File "C:\Users\lenovo\PycharmProjects\No_Q_charging\Q-PFL_TimeMixer_Framework\main.py", line 70, in run_experiment
    _, pareto_front = solve_scheduling_problem(CFG, predicted_demands, grid_base_load, client_preferences)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lenovo\PycharmProjects\No_Q_charging\Q-PFL_TimeMixer_Framework\src\scheduler\solver.py", line 15, in solve_scheduling_problem
    res = minimize(problem,
          ^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\optimize.py", line 67, in minimize
    res = algorithm.run()
          ^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\core\algorithm.py", line 138, in run
    self.next()
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\core\algorithm.py", line 154, in next
    infills = self.infill()
              ^^^^^^^^^^^^^
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\core\algorithm.py", line 186, in infill
    infills = self._initialize_infill()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\algorithms\base\genetic.py", line 75, in _initialize_infill
    pop = self.initialization.do(self.problem, self.pop_size, algorithm=self)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\core\initialization.py", line 32, in do
    pop = self.sampling(problem, n_samples, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\core\operator.py", line 27, in __call__
    out = self.do(problem, elem, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\core\sampling.py", line 35, in do
    val = self._do(problem, n_samples, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\pfl_quantum_env\Lib\site-packages\pymoo\operators\sampling\rnd.py", line 20, in _do
    X = np.random.random((n_samples, problem.n_var))
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "numpy/random/mtrand.pyx", line 450, in numpy.random.mtrand.RandomState.random
  File "numpy/random/mtrand.pyx", line 441, in numpy.random.mtrand.RandomState.random_sample
  File "numpy/random/_common.pyx", line 310, in numpy.random._common.double_fill
ValueError: negative dimensions are not allowed

Process finished with exit code 1

6 answers

  • 檀越@新空间 2025-08-19 10:07

    Good morning ☀️☀️☀️
    This answer is based on Tongyi Qianwen (Qwen).

    Judging from the log you posted, the run does fail with an explicit error: the traceback ends in `ValueError: negative dimensions are not allowed`, raised when pymoo samples its initial population via `np.random.random((n_samples, problem.n_var))`. NumPy only raises this when a shape dimension is negative, so either `problem.n_var` or the number of samples (derived from `pop_size`) is negative — i.e. somewhere a parameter really was passed as a negative number.
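    The final frames of the traceback are pure NumPy, so the failure mode can be reproduced in isolation to confirm that a negative shape dimension is the trigger:

    ```python
    import numpy as np

    # np.random.random refuses any negative shape dimension -- this is the
    # exact call pymoo makes with (n_samples, problem.n_var).
    try:
        np.random.random((100, -3))
    except ValueError as e:
        print(e)  # "negative dimensions are not allowed"
    ```

    So the debugging target is not the training loop at all, but whatever computes `n_var` (or the population size) before `minimize` is called in `solver.py`.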

    Below are possible causes of a "negative argument" error and how to address each:


    1. Check that the model's input data is reasonable

    • Problem: if the input data contains negative values (time-series samples, feature values, etc.), downstream computations such as square roots or logarithms can fail.
    • Solution
      • Check the preprocessing step: make sure every input that must be non-negative actually is.
      • Example
        import torch

        # Suppose `data` is your input tensor
        if torch.any(data < 0):
            print("Warning: data contains negative values!")
            # Either clip to zero or renormalize
            data = torch.clamp(data, min=0)
        

    2. Check whether the loss function allows negative inputs

    • Problem: some loss functions constrain their inputs' range.
    • Solution
      • Confirm the loss function's input requirements: for example, CrossEntropyLoss requires non-negative integer class indices as targets.
      • Example
        import torch

        # Make sure predictions and targets are in the range the loss expects
        predictions = model(input_data)
        targets = torch.tensor([0.5, 1.2, -0.3])  # suppose the targets contain a negative value

        # Restrict the targets to [0, 1]
        targets = torch.clamp(targets, min=0, max=1)
        loss = criterion(predictions, targets)
        

    3. Check that the optimizer hyperparameters are reasonable

    • Problem: optimizers such as Adam reject negative hyperparameters (learning rate, weight decay).
    • Solution
      • Make sure all such hyperparameters are positive.
      • Example
        optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # learning rate must be positive
        

    4. Check whether the model develops extreme parameter values

    • Problem: some layers (e.g. BatchNorm) can develop negative parameters during training, and exploding gradients make this worse.
    • Solution
      • Use torch.nn.utils.clip_grad_norm_ to prevent gradient explosion.
      • Example
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        

    5. Check that the model's outputs are reasonable

    • Problem: the model may output negative values where downstream code (such as the scheduler) requires non-negative ones.
    • Solution
      • Apply an activation function such as ReLU or Sigmoid to force non-negative outputs.
      • Example
        output = model(input_data)
        output = torch.relu(output)  # guarantee non-negative outputs
        

    6. Check argument passing in custom functions

    • Problem: a custom function may expect a positive argument but be handed a negative one at some call site.
    • Solution
      • Audit the call sites and validate arguments at the function boundary.
      • Example
        def some_function(x):
            if x < 0:
                raise ValueError("argument must not be negative")
            return x * 2

        result = some_function(5)   # OK
        # result = some_function(-3)  # raises ValueError
        

    7. Check the scheduling algorithm's parameters

    • Problem: scheduling code (charging scheduling, task assignment, etc.) typically has strict requirements on its inputs. Note that the predicted demands in your log are all non-negative, so a dimension derived from them (variable count, horizon length) is the more likely culprit.
    • Solution
      • Validate every quantity the scheduler turns into a count or an array shape, not just the demand values.
      • Example
        def schedule_charging(demands):
            if any(d < 0 for d in demands):
                raise ValueError("demands must not be negative")
            # ... scheduling logic continues here
        

    8. Add debug logging

    • Problem: you cannot tell which module triggers the error.
    • Solution
      • Print the relevant values at the key points.
      • Example
        print(f"current input data: {input_data}")
        print(f"current model output: {output}")
        print(f"current loss: {loss.item()}")
        

    9. Use assertions to check parameters

    • Problem: you want to guarantee a parameter is always positive.
    • Solution
      • Add assertions at the critical points.
      • Example
        assert input_data.min() >= 0, "input data contains negative values"
        assert learning_rate > 0, "learning rate must be positive"
        

    10. Read the full traceback

    • Problem: the generic checks above need to be narrowed down to the actual culprit.
    • Solution
      • Your log already contains the decisive information: the Traceback shows the error is raised inside solve_scheduling_problem (src/scheduler/solver.py, line 15) when pymoo builds its initial population.
      • Inspect how n_var and the population size are computed for the problem object passed to minimize there — one of them is negative.
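    A small guard placed just before the call to minimize makes the negative quantity visible immediately. The helper below is only a sketch — the names (demands, n_slots, pop_size) and the way n_var is derived are assumptions, so adapt them to however solver.py actually builds the problem:

    ```python
    import numpy as np

    def check_scheduler_inputs(demands, n_slots, pop_size):
        """Validate the quantities pymoo will turn into array shapes.

        Hypothetical helper -- the names are assumptions, not the real
        solver.py API; adjust to match your code.
        """
        demands = np.asarray(demands, dtype=float)
        if (demands < 0).any():
            raise ValueError(f"negative demands: {demands[demands < 0]}")
        n_var = len(demands) * n_slots  # one decision variable per client/slot
        if n_var <= 0:
            raise ValueError(f"n_var must be positive, got {n_var}")
        if pop_size <= 0:
            raise ValueError(f"pop_size must be positive, got {pop_size}")
        return n_var

    # e.g. 15 predicted demands over 24 slots -> 360 decision variables
    ```

    If the guard fires on n_var, the bug is in whatever arithmetic produces the variable count — a subtraction of horizon or window lengths is a common source of accidental negatives.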

    ✅ Summary

    | Cause | Fix |
    |-------|-----|
    | Negative values in the data | Clean with torch.clamp / torch.relu |
    | Loss function rejects negative inputs | Ensure inputs meet the loss's requirements |
    | Bad optimizer hyperparameters | Keep learning rate, weight decay, etc. positive |
    | Bad arguments to a custom function | Add parameter validation |
    | Negative model outputs | Constrain outputs with an activation function |
    | Bad scheduler parameters | Check the scheduling logic; validate n_var and the demands |


    If you can post the concrete code (the model definition, the loss function, the scheduling logic in solver.py, etc.), I can help you narrow the problem down further.

