Using the TensorFlow/Keras framework, I wrote a custom loss function and training fails with an error. How can I fix it?
The custom loss function is as follows:
```python
from keras import backend as K
import torch

def myloss(y_true, y_pred, k=5):
    los = torch.tensor(0)
    f = torch.tensor(5)
    for i in range(k):
        # fraction of samples whose true class falls within the top (i + 1) predictions
        e = K.mean(K.in_top_k(y_pred, K.argmax(y_true, axis=-1), i + 1), axis=-1)
        los = los + e
    return K.log(f / los)
```
The following error is raised during training:
```
Traceback (most recent call last):
  File "D:\ljk\Chinese-Bert-nlp\bert-textcnn-for-multi-label-text-classfication-master\model_train.py", line 73, in <module>
    history = model.fit(train_x, train_y, validation_data=(test_x, test_y), batch_size=config.batch_size, epochs=config.epochs)
  File "D:\Python392\lib\site-packages\keras\engine\training_v1.py", line 854, in fit
    return func.fit(
  File "D:\Python392\lib\site-packages\keras\engine\training_arrays_v1.py", line 734, in fit
    return fit_loop(
  File "D:\Python392\lib\site-packages\keras\engine\training_arrays_v1.py", line 192, in model_iteration
    f = _make_execution_function(model, mode)
  File "D:\Python392\lib\site-packages\keras\engine\training_arrays_v1.py", line 620, in _make_execution_function
    return model._make_execution_function(mode)
  File "D:\Python392\lib\site-packages\keras\engine\training_v1.py", line 2364, in _make_execution_function
    self._make_train_function()
  File "D:\Python392\lib\site-packages\keras\engine\training_v1.py", line 2282, in _make_train_function
    updates = self.optimizer.get_updates(
  File "D:\Python392\lib\site-packages\keras\optimizers\optimizer_v2\optimizer_v2.py", line 869, in get_updates
    grads = self.get_gradients(loss, params)
  File "D:\Python392\lib\site-packages\keras\optimizers\optimizer_v2\optimizer_v2.py", line 859, in get_gradients
    raise ValueError(
ValueError: Variable <tf.Variable 'conv1d/kernel:0' shape=(3, 768, 256) dtype=float32> has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
```
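From the message I understand that `K.argmax` and `K.in_top_k` have no gradient, so the optimizer cannot backpropagate through my loss. To show what does train for me, below is a minimal differentiable stand-in I would expect to work (assuming `y_true` is one-hot and `y_pred` holds softmax probabilities; `myloss_differentiable` and `top5_acc` are just illustrative names, not from my project), but it drops the top-k behaviour I actually want.

```python
import tensorflow as tf
from keras import backend as K

def myloss_differentiable(y_true, y_pred):
    """Differentiable stand-in: penalize the predicted probability of the
    true class (plain cross-entropy style) instead of the hard,
    non-differentiable top-k indicator."""
    p_true = K.sum(y_true * y_pred, axis=-1)          # probability assigned to the true class
    return -K.log(K.clip(p_true, K.epsilon(), 1.0))   # clip to avoid log(0)

# The top-k measure can still be tracked as a metric, since metrics
# are only evaluated and never differentiated:
def top5_acc(y_true, y_pred):
    return tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=5)

# model.compile(optimizer="adam", loss=myloss_differentiable, metrics=[top5_acc])
```

Is there a way to keep the top-k idea in the training loss itself while remaining differentiable?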