I get an error when running the following command:
python run_mindformer.py --config configs/blip2/run_blip2_stage1_vit_g_qformer_pretrain.yaml --run_mode train
The error output is shown below; the key message is: TypeError: must be real number, not NoneType
[ERROR] ANALYZER(2358570,ffffa4f0c010,python):2024-04-29-11:33:52.223.570 [mindspore/ccsrc/pipeline/jit/ps/static_analysis/async_eval_result.cc:70] HandleException] Exception happened, check the information as below.
TypeError: must be real number, not NoneType
At:
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/numpy/utils_const.py(517): _ceil
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/ops/primitive.py(822): __infer__
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/common/api.py(1547): compile
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py(997): compile
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py(1020): compile_and_run
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py(680): __call__
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py(919): _train_process
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py(617): _train
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py(114): wrapper
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py(1068): train
/home/ma-user/work/mindformers/mindformers/trainer/base_trainer.py(774): training_process
/home/ma-user/work/mindformers/mindformers/trainer/contrastive_language_image_pretrain/contrastive_language_image_pretrain.py(85): train
/home/ma-user/work/mindformers/mindformers/trainer/trainer.py(411): train
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/_checkparam.py(1313): wrapper
/home/ma-user/work/mindformers/run_mindformer.py(39): main
/home/ma-user/work/mindformers/mindformers/tools/cloud_adapter/cloud_monitor.py(34): wrapper
/home/ma-user/work/mindformers/run_mindformer.py(268): <module>
# 0 In file /home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:417
if not self.sense_flag:
# 1 In file /home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:424
if self.return_grad:
# 2 In file /home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:419
loss = self.network(*inputs)
^
# 3 In file /home/ma-user/work/mindformers/mindformers/models/blip2/blip2_qformer.py:226
for i in range(self.group_size):
# 4 In file /home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:419
loss = self.network(*inputs)
^
# 5 In file /home/ma-user/work/mindformers/mindformers/models/blip2/blip2_qformer.py:226
for i in range(self.group_size):
# 6 In file /home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:419
loss = self.network(*inputs)
^
# 7 In file /home/ma-user/work/mindformers/mindformers/models/blip2/blip2_qformer.py:241
for i in range(self.group_size):
# 8 In file /home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:419
loss = self.network(*inputs)
^
# 9 In file /home/ma-user/work/mindformers/mindformers/models/blip2/blip2_qformer.py:241
for i in range(self.group_size):
# 10 In file /home/ma-user/work/mindformers/mindformers/models/blip2/blip2_qformer.py:315
if return_tuple:
^
# 11 In file /home/ma-user/work/mindformers/mindformers/models/blip2/blip2_qformer.py:261
loss_itc = (self.itc_loss(sim_i2t, targets) +
^
# 12 In file /home/ma-user/work/mindformers/mindformers/models/blip2/qformer.py:135
return self.nll_loss(log_softmax_result,
# 13 In file /home/ma-user/work/mindformers/mindformers/models/blip2/qformer.py:135
return self.nll_loss(log_softmax_result,
^
# 14 In file /home/ma-user/work/mindformers/mindformers/models/blip2/qformer.py:109
if weight is not None:
# 15 In file /home/ma-user/work/mindformers/mindformers/models/blip2/qformer.py:106
loss = self.neg(self.gather_d(inputs, target_dim, target))
^
# 16 In file /home/ma-user/work/mindformers/mindformers/models/blip2/qformer.py:84
pred_x = np.arange(target.shape[0]) * inputs.shape[-1]
^
# 17 In file /home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/numpy/array_creations.py:626
if stop is None and step is None: # (start, stop, step) -> (0, start, 1)
^
# 18 In file /home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/numpy/array_creations.py:627
num = _ceil(start)
^
(See file '/home/ma-user/work/mindformers/rank_0/om/analyze_fail.ir' for more details. Get instructions about `analyze_fail.ir` at https://www.mindspore.cn/search?inputValue=analyze_fail.ir)
2024-04-29 11:33:56,107 - mindformers[mindformers/tools/cloud_adapter/cloud_monitor.py:43] - ERROR - Traceback (most recent call last):
File "/home/ma-user/work/mindformers/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
result = run_func(*args, **kwargs)
File "/home/ma-user/work/mindformers/run_mindformer.py", line 39, in main
trainer.train()
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1313, in wrapper
return func(*args, **kwargs)
File "/home/ma-user/work/mindformers/mindformers/trainer/trainer.py", line 411, in train
self.trainer.train(
File "/home/ma-user/work/mindformers/mindformers/trainer/contrastive_language_image_pretrain/contrastive_language_image_pretrain.py", line 85, in train
self.training_process(
File "/home/ma-user/work/mindformers/mindformers/trainer/base_trainer.py", line 774, in training_process
model.train(config.runner_config.epochs, dataset,
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 1068, in train
self._train(epoch,
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 114, in wrapper
func(self, *args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 617, in _train
self._train_process(epoch, train_dataset, list_callback, cb_params, initial_epoch, valid_infos)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 919, in _train_process
outputs = self._train_network(*next_element)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 680, in __call__
out = self.compile_and_run(*args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 1020, in compile_and_run
self.compile(*args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 997, in compile
_cell_graph_executor.compile(self, phase=self.phase,
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/common/api.py", line 1547, in compile
result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/ops/primitive.py", line 822, in __infer__
return {'dtype': None, 'shape': None, 'value': fn(*value_args)}
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/numpy/utils_const.py", line 517, in _ceil
return math.ceil(number)
TypeError: must be real number, not NoneType
Traceback (most recent call last):
File "/home/ma-user/work/mindformers/run_mindformer.py", line 268, in <module>
main(config_)
File "/home/ma-user/work/mindformers/mindformers/tools/cloud_adapter/cloud_monitor.py", line 44, in wrapper
raise exc
File "/home/ma-user/work/mindformers/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
result = run_func(*args, **kwargs)
File "/home/ma-user/work/mindformers/run_mindformer.py", line 39, in main
trainer.train()
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1313, in wrapper
return func(*args, **kwargs)
File "/home/ma-user/work/mindformers/mindformers/trainer/trainer.py", line 411, in train
self.trainer.train(
File "/home/ma-user/work/mindformers/mindformers/trainer/contrastive_language_image_pretrain/contrastive_language_image_pretrain.py", line 85, in train
self.training_process(
File "/home/ma-user/work/mindformers/mindformers/trainer/base_trainer.py", line 774, in training_process
model.train(config.runner_config.epochs, dataset,
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 1068, in train
self._train(epoch,
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 114, in wrapper
func(self, *args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 617, in _train
self._train_process(epoch, train_dataset, list_callback, cb_params, initial_epoch, valid_infos)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/train/model.py", line 919, in _train_process
outputs = self._train_network(*next_element)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 680, in __call__
out = self.compile_and_run(*args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 1020, in compile_and_run
self.compile(*args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 997, in compile
_cell_graph_executor.compile(self, phase=self.phase,
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/common/api.py", line 1547, in compile
result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/ops/primitive.py", line 822, in __infer__
return {'dtype': None, 'shape': None, 'value': fn(*value_args)}
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/numpy/utils_const.py", line 517, in _ceil
return math.ceil(number)
TypeError: must be real number, not NoneType
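Reading the two stacks above from the bottom up: the TypeError is raised by math.ceil(number) with number=None inside _ceil (mindspore/numpy/utils_const.py:517), which arange() reaches at array_creations.py:627 when called with a single argument. That argument comes from mindformers/models/blip2/qformer.py:84, pred_x = np.arange(target.shape[0]) * inputs.shape[-1], so as far as I can tell target.shape[0] is None (the batch dimension of target is not statically known) when the graph is compiled. A minimal sketch of the bottom frame in plain Python (no MindSpore needed) produces the identical message:

import math

# _ceil() in mindspore/numpy/utils_const.py simply forwards to math.ceil();
# with None it raises exactly the error shown in the log above.
math.ceil(None)  # TypeError: must be real number, not NoneType

So the question seems to be why target reaches the ITC loss with an unknown first dimension under this configuration.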
My configuration file is as follows:
seed: 42
run_mode: 'train'
output_dir: './output' # path to save checkpoint/strategy
load_checkpoint: ''
src_strategy_path_or_dir: ''
auto_trans_ckpt: False # If true, auto transform load_checkpoint to load in distributed model
only_save_strategy: False
resume_training: False
# context
context:
  mode: 0 #0--Graph Mode; 1--Pynative Mode
  device_target: "Ascend"
  enable_graph_kernel: False
  graph_kernel_flags: str = "--disable_expand_ops=Softmax,Dropout " \
                      "--enable_parallel_fusion=true --reduce_fuse_depth=8 --enable_auto_tensor_inplace=true"
  max_call_depth: 10000
  save_graphs: False
  save_graphs_path: "./graph"
  device_id: 0
# aicc
remote_save_url: "Please input obs url on AICC platform."
# runner
runner_config:
  epochs: 1
  batch_size: 80
  sink_size: 2
  image_size: 224
  sink_mode: False
  initial_epoch: 0
  has_trained_epoches: 0
  has_trained_steps: 0
runner_wrapper:
  type: TrainOneStepCell
  sens: 1024
# parallel
use_parallel: True
parallel:
  parallel_mode: 0 # 0-dataset, 1-semi, 2-auto, 3-hybrid
  gradients_mean: True
  search_mode: "sharding_propagation"
  enable_parallel_optimizer: False
  full_batch: False
parallel_config:
  data_parallel: 2
  model_parallel: 1
  pipeline_stage: 1
  micro_batch_num: 1
  vocab_emb_dp: True
  gradient_aggregation_group: 4
micro_batch_interleave_num: 1
# recompute
recompute_config:
  recompute: False
  parallel_optimizer_comm_recompute: False
  mp_comm_recompute: True
  recompute_slice_activation: False
# autotune
auto_tune: False
filepath_prefix: './autotune'
autotune_per_step: 10
# profile
profile: False
profile_start_step: 1
profile_stop_step: 10
init_start_profile: False
profile_communication: False
profile_memory: True
# Trainer
trainer:
  type: ContrastiveLanguageImagePretrainTrainer
  model_name: 'blip2_stage1_vit_g'
# train dataset
train_dataset: &train_dataset
  data_loader:
    type: MultiImgCapDataLoader
    dataset_dir: "./data"
    annotation_files: [
      "vg/annotations/vg_caption.json",
      "coco/annotations/coco_karpathy_train.json"
    ]
    image_dirs: [
      "vg/images",
      "coco/images"
    ]
    stage: "train"
    column_names: ["image", "text"]
  transforms:
    - type: RandomResizedCrop
      size: 224
      scale: [0.5, 1.0]
      interpolation: "bicubic"
    - type: RandomHorizontalFlip
    - type: ToTensor
    - type: Normalize
      mean: [0.48145466, 0.4578275, 0.40821073]
      std: [0.26862954, 0.26130258, 0.27577711]
      is_hwc: False
  text_transforms:
    type: CaptionTransform
    prompt: ""
    max_words: 50
    max_length: 32
    padding: 'max_length'
    random_seed: 2022
    truncation: True
  tokenizer:
    type: BertTokenizer
    pad_token: '[PAD]'
    bos_token: '[DEC]'
    add_special_tokens: True
    padding: 'max_length'
    truncation: True
    max_length: 32
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 30
  seed: 2022
  return_attention_mask: True
train_dataset_task:
  type: ContrastiveLanguageImagePretrainDataset
  dataset_config: *train_dataset
# model
model:
  model_config:
    type: Blip2Config
    freeze_vision: True
    max_txt_len: 32
    checkpoint_name_or_path: ""
    dtype: "float32"
    compute_dtype: "float16"
    layernorm_dtype: "float32"
    softmax_dtype: "float32"
    vision_config:
      type: ViTConfig
      image_size: 224
      patch_size: 14
      num_channels: 3
      initializer_range: 0.001
      hidden_size: 1408
      num_hidden_layers: 39
      num_attention_heads: 16
      intermediate_size: 6144
      qkv_bias: true
      hidden_act: gelu
      post_layernorm_residual: false
      layer_norm_eps: 1.0e-6
      attention_probs_dropout_prob: 0.0
      hidden_dropout_prob: 0.0
      drop_path_rate: 0.0
      use_mean_pooling: false
      encoder_stride: 16
      checkpoint_name_or_path: "vit_g_p16"
    qformer_config:
      num_hidden_layers: 12
      num_attention_heads: 12
      query_length: 32
      resize_token_embeddings: True # if run on Atlas 800T A2, turn it to False
      special_token_nums: 1
      vocab_size: 30522
      hidden_size: 768
      encoder_width: 1408
      bos_token_id: 30522
      sep_token_id: 102
      pad_token_id: 0
      max_position_embeddings: 512
      layer_norm_eps: 1.e-12
      hidden_dropout_prob: 0.1
      attention_probs_dropout_prob: 0.1
      chunk_size_feed_forward: 0
      cross_attention_freq: 2
      intermediate_size: 3072
      initializer_range: 0.02
      hidden_act: "gelu"
      dtype: "float32"
      layernorm_dtype: "float32"
      softmax_dtype: "float32"
      compute_dtype: "float16"
      add_cross_attention: True
      use_relative_positions: False
      tie_word_embeddings: True
      output_attentions: False
      output_hidden_states: False
      use_return_dict: False
      convert_param_from_bert: True
      checkpoint_name_or_path: "bert_base_uncased"
  arch:
    type: Blip2Qformer
# lr sechdule
lr_schedule:
  type: cosine
  learning_rate: 1.e-4
  lr_end: 1.e-5
  warmup_lr_init: 1.e-6
  warmup_steps: 5000
  total_steps: -1 # -1 means it will load the total steps of the dataset
layer_scale: False
lr_scale: False
# optimizer
optimizer:
  type: adamw
  beta1: 0.9
  beta2: 0.98
  eps: 1.e-8
  weight_decay: 0.05
# callbacks
callbacks:
  - type: MFLossMonitor
  - type: CheckpointMointor
    prefix: "blip2_qformer"
    save_checkpoint_steps: 7084
    integrated_save: True
    async_save: False
  - type: ObsMonitor
    step_upload_frequence: 1000
eval_callbacks:
  - type: ObsMonitor
# image processor, tokenizer for prediction
processor:
  type: Blip2Processor
  image_processor:
    type: Blip2ImageProcessor
    image_size: 224
    mean: [0.48145466, 0.4578275, 0.40821073]
    std: [0.26862954, 0.26130258, 0.27577711]
    is_hwc: False
  tokenizer:
    type: BertTokenizer
    pad_token: '[PAD]'
    bos_token: '[DEC]'
    add_special_tokens: True
    padding: 'max_length'
    truncation: True
    max_length: 32