
How to get the outputs of intermediate Keras layers in batches?

Updated: 2023-12-01 22:37:10

No, you don't need to compile again after training.

Based on your Sequential model:

Layer 0 :: model.add(ResNet50(include_top = False, pooling = RESNET50_POOLING_AVERAGE, weights = resnet_weights_path)) #None
Layer 1 :: model.add(Dense(784, activation = 'relu'))
Layer 2 :: model.add(Dense(NUM_CLASSES, activation = DENSE_LAYER_ACTIVATION))
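
For reference, below is a minimal sketch of how such a Sequential model could be assembled. The values of RESNET50_POOLING_AVERAGE, DENSE_LAYER_ACTIVATION, NUM_CLASSES and resnet_weights_path are assumptions standing in for whatever the original training script used.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.applications import ResNet50

RESNET50_POOLING_AVERAGE = 'avg'    # assumed value
DENSE_LAYER_ACTIVATION = 'softmax'  # assumed value
NUM_CLASSES = 2                     # assumed value
resnet_weights_path = 'imagenet'    # assumed: or a path to a local weights file

model = Sequential()
model.add(ResNet50(include_top=False, pooling=RESNET50_POOLING_AVERAGE, weights=resnet_weights_path))  # Layer 0
model.add(Dense(784, activation='relu'))                          # Layer 1
model.add(Dense(NUM_CLASSES, activation=DENSE_LAYER_ACTIVATION))  # Layer 2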

Accessing the layers may differ if the Functional API approach was used.
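
With the Functional API, layers are typically referenced by name via model.get_layer(). The toy network below (the layer names dense_784 and classifier, and the input size 2048) is an assumption for illustration only.

from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

inputs = Input(shape=(2048,))  # assumed feature size
x = Dense(784, activation='relu', name='dense_784')(inputs)
outputs = Dense(10, activation='softmax', name='classifier')(x)
func_model = Model(inputs=inputs, outputs=outputs)

# Sub-model that stops at the named intermediate layer
intermediate_model = Model(inputs=func_model.input, outputs=func_model.get_layer('dense_784').output)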

Using Tensorflow 2.1.0, you could try this approach when you want to access intermediate outputs.

from tensorflow.keras.models import Model

# Sub-model that outputs the activations of Layer 1 (Dense 784)
model_dense_784 = Model(inputs=model.input, outputs=model.layers[1].output)

# predict_generator is deprecated; model.predict accepts generators directly
pred_dense_784 = model_dense_784.predict(train_data_gen, steps=1)

print(pred_dense_784.shape)  # check the output shape

It is highly advisable to use the model.predict() method rather than model.predict_generator(), as the latter is deprecated.
You can also check the .shape attribute of the result to verify that the generated output matches the shape shown in model.summary().
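
The same idea scales to collecting several intermediate layers in one pass: list multiple outputs when building the sub-model. The sketch below assumes the Sequential model and train_data_gen from the answer above.

from tensorflow.keras.models import Model

# One sub-model whose outputs are the outputs of every layer in the original model
multi_output_model = Model(inputs=model.input, outputs=[layer.output for layer in model.layers])

all_outputs = multi_output_model.predict(train_data_gen, steps=1)
for layer, out in zip(model.layers, all_outputs):
    print(layer.name, out.shape)  # compare against model.summary()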