Callback: an easy way to stop training after convergence

Lida Ghahremanlou (Jones)
1 min readFeb 25, 2021

Today I learnt something really cool: how to stop training a neural network model once it reaches a certain accuracy, in other words, once it converges, without worrying about hard-coding the right number of epochs. The solution is simple: use a callback in the training loop!

Let’s write a callback in Python. First, it needs to be implemented as a separate class; it can live inline with the rest of your code and doesn’t need its own file. In that class, the on_epoch_end method is called by Keras whenever an epoch ends. The magic happens in the logs object passed to the callback, which contains lots of useful information about the current state of training. So you can query it for a certain accuracy (or loss) and cancel training if it meets your criteria. A complete runnable sketch follows the three steps below.

1. Define the class:

import tensorflow as tf

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # The metric key is 'acc' here; on newer versions of tf.keras it is
        # 'accuracy' when the model is compiled with metrics=['accuracy'].
        if logs.get('acc', 0) > 0.90:
            print("\nReached 90% accuracy - cancelling training")
            self.model.stop_training = True

2. Create an instance of the class:

callbacks = myCallback()

3. Pass the callback to your fit function:

model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
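
Putting it all together, here is a minimal end-to-end sketch. The dataset and model are my own assumptions (MNIST with a small dense classifier), not from the steps above; because the model is compiled with metrics=['accuracy'], the logs key queried in the callback is 'accuracy'.

import tensorflow as tf

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # Stop as soon as training accuracy passes 90%
        if logs.get('accuracy', 0) > 0.90:
            print("\nReached 90% accuracy - cancelling training")
            self.model.stop_training = True

# Load and normalise MNIST
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small dense classifier
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Training ends early once the callback's threshold is met
model.fit(x_train, y_train, epochs=10, callbacks=[myCallback()])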

As simple as that, you can stop training when the model reaches a certain accuracy. Pretty cool, isn’t it? ;)

Source: TensorFlow Callback
