In deep learning, one often works through the high-level interface of a particular framework. There is no need to compute gradients manually; just stack layers in your favorite Keras.
The bad part: since it's a black box, black-magic effects happen. Today I faced one such weird effect while implementing a form of hard example mining.
Let’s have a look:
I see no reason to describe the whole code; just focus on the last two functions, sketched below.
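The original snippet isn't reproduced here, so here is a minimal sketch of the pattern instead; the names (`hard_sample_generator`, `train`, `model`, `X`, `y`, `BATCH_SIZE`) are hypothetical stand-ins, not my actual code:

```python
import numpy as np

BATCH_SIZE = 32

def hard_sample_generator(model, X, y):
    """Yield batches biased towards samples the model currently gets wrong."""
    while True:
        # Draw a pool of candidates larger than one batch.
        idx = np.random.choice(len(X), size=BATCH_SIZE * 4, replace=False)
        candidates_X, candidates_y = X[idx], y[idx]
        # Calling predict here, inside the generator, is what triggers
        # the ValueError once fit_generator runs this generator in a
        # background thread.
        preds = model.predict(candidates_X)
        # Some per-sample error as a proxy for loss.
        errors = np.abs(preds - candidates_y).sum(axis=-1)
        hardest = np.argsort(errors)[-BATCH_SIZE:]
        yield candidates_X[hardest], candidates_y[hardest]

def train(model, X, y):
    model.fit_generator(hard_sample_generator(model, X, y),
                        steps_per_epoch=len(X) // BATCH_SIZE,
                        epochs=10)
```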
For some mysterious reason, calling `predict` within the hard sample generator leads to a `ValueError`. At the same time, it's enough to call a single `predict` beforehand, and everything works.
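In code, the whole weird fix boils down to one extra line before `fit_generator` (same hypothetical names as in the sketch above):

```python
def train(model, X, y):
    # The weird fix: a single dummy predict on the main thread, before
    # fit_generator spins up the generator, and the ValueError goes away.
    model.predict(X[:1])
    model.fit_generator(hard_sample_generator(model, X, y),
                        steps_per_epoch=len(X) // BATCH_SIZE,
                        epochs=10)
```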
In an ideal world, I would trace this down into the deep core of Keras and TensorFlow. In the real world, I'm happy to see the solution work with this weird fix.
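For the curious: my best guess, not verified against the internals, is that Keras builds its predict function lazily on the first `predict` call, and the dummy call simply forces that to happen on the main thread before `fit_generator` starts pulling batches from the generator in a background thread. If memory serves, Keras 2.x has a private method that does the same thing explicitly:

```python
# Private Keras API (an assumption on my part that it applies here):
# builds the predict function up front, on the main thread.
model._make_predict_function()
```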