Creating custom metrics
As simple callables (stateless)
Much like loss functions, any callable with signature metric_fn(y_true, y_pred) that returns an array of losses (one per sample in the input batch) can be passed to compile() as a metric. Note that sample weighting is automatically supported for any such metric.
Here's a simple example:
import tensorflow as tf

def my_metric_fn(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)  # Note the axis=-1

model.compile(optimizer='adam', loss='mean_squared_error', metrics=[my_metric_fn])
In this case, the scalar metric value you are tracking during training and evaluation is the average of the per-batch metric values for all batches seen during a given epoch (or during a given call to model.evaluate()).
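The passage above says the callable is applied to one batch at a time and should return one value per sample. Here is a small NumPy sketch (plain NumPy rather than Keras, with a made-up toy batch of two samples) showing the shapes involved:

```python
import numpy as np

def my_metric_fn(y_true, y_pred):
    # Same computation as the Keras example: reduce only over the last axis,
    # so the batch axis survives and we get one metric value per sample.
    squared_difference = np.square(y_true - y_pred)
    return np.mean(squared_difference, axis=-1)

# Toy batch: 2 samples, 3 outputs each (values chosen only for illustration).
y_true = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
y_pred = np.array([[0.1, 0.9, 0.2], [0.8, 0.1, 0.1]])

per_sample = my_metric_fn(y_true, y_pred)
print(per_sample.shape)  # (2,): a vector with one value per sample, not a scalar
```

Because the function returns a per-sample vector, Keras can apply per-sample weights before averaging, which is why the docs say sample weighting is automatically supported.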
The text above is quoted verbatim from the official Keras documentation, and it has been driving me up the wall. Could someone more experienced give a simple translation? I also have two questions:
1. In a custom Keras metric function (y_true, y_pred), are y_true and y_pred the values for all samples fed through the model, or just for a single sample?
2. Should a custom Keras metric function return a scalar or a vector?
I've been working on a multi-label classification project these past few days, my boss is pressing hard, and my English isn't good enough to make sense of the official docs, so I've come here to ask for help.