For an experiment, I want to define a custom op in TensorFlow that is composed of several existing TF ops, and then have it converted to TensorFlow Lite as a single fused, optimized op. Following the official TF custom-op documentation, I wrote the following example:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# mysin is the intended custom op; on the Lite side it should be fused into a single custom op
@tf.function
def mysin(x, myopt1):
    x = tf.sin(x, name="mySin") + myopt1
    return x
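As a side check (my own reduced sketch, not part of the model code above), tracing mysin shows why the converter might treat it as ordinary ops: the "mySin" name only names the Sin node inside the traced graph, it does not create a new op type.

```python
import tensorflow as tf

@tf.function
def mysin(x, myopt1):
    x = tf.sin(x, name="mySin") + myopt1
    return x

# Trace the function and inspect the ops in the resulting graph.
cf = mysin.get_concrete_function(tf.TensorSpec([None], tf.float32), tf.constant(0.4))
print([(op.name, op.type) for op in cf.graph.get_operations()])
# The graph holds plain Sin/AddV2 nodes; "mySin" is just the name of
# the Sin node, so the converter sees nothing but builtin ops to lower.
```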
def CNNmodel(input_shape, filters=64, kernel=(3, 3), size=4, dropout=0.2):
    _inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(8, (3, 3), padding='same', use_bias=False, strides=(2, 2), name='conv_0')(_inputs)
    x = layers.BatchNormalization(axis=-1, name='conv_0_bn')(x)
    x = layers.ReLU(6., name='conv_0_relu')(x)
    x = layers.Conv2D(16, (3, 3), padding='same', use_bias=False, strides=(2, 2), name='conv_1')(x)
    x = layers.BatchNormalization(axis=-1, name='conv_1_bn')(x)
    x = layers.ReLU(6., name='conv_1_relu')(x)
    # call mysin here
    x = layers.Lambda(lambda x: mysin(x, 0.4))(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(dropout, name='dropout')(x)
    x = layers.Dense(10)(x)
    x = layers.Softmax()(x)
    return keras.Model(inputs=_inputs, outputs=x)
Then I convert it to TFLite:
model = CNNmodel((28, 28, 1))  # MNIST-shaped input
model.save("save_model/", save_format="tf")
converter = tf.lite.TFLiteConverter.from_saved_model("save_model/")
converter.allow_custom_ops = True
tflite_model = converter.convert()
with open("mnist.tflite", 'wb') as f:  # write the model to the tflite file in binary mode
    f.write(tflite_model)
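To confirm the expansion is really happening in the flatbuffer (and is not just a visualizer artifact), the converted model's operators can be listed. A minimal sketch reduced to the mysin function alone; note that `_get_ops_details` is a private Interpreter helper and may change between TF versions:

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def mysin(x):
    return tf.sin(x, name="mySin") + 0.4

# Convert the single concrete function the same way the full model is converted.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [mysin.get_concrete_function()], mysin)
converter.allow_custom_ops = True
tflite_model = converter.convert()

# List the operator types in the flatbuffer (private API, inspection only).
interp = tf.lite.Interpreter(model_content=tflite_model)
ops = {d["op_name"] for d in interp._get_ops_details()}
print(ops)  # builtin SIN/ADD ops rather than a single custom "mySin" op
```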
However, the converted tflite model still expands my op into TFLite builtin op nodes, as shown in the figure below.
How can I define a custom TensorFlow op and export it to TensorFlow Lite as a custom op?