Constant validation loss and increasing validation accuracy

by Mr.Sh4nnon   Last Updated May 15, 2019 16:19

I am training a fully convolutional network. The training loss is decreasing, while the validation loss stays roughly constant, with some variance from epoch to epoch.

I thought the model might be overfitting, but the validation accuracy increases with each epoch. Is this legit? How would something like this happen? Introducing L2 regularization helped at first: the validation loss settled at a lower level, but it still stays more or less constant. Large L2 values worsened the training loss, the validation loss, and the validation accuracy, so I kept it at around 1e-5.
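For reference, this is roughly how the L2 penalty is applied; a minimal sketch of one convolutional layer, where the filter count and kernel size are placeholders rather than my actual architecture:

```python
from tensorflow.keras import layers, regularizers

# Illustrative layer only: apply a small L2 penalty to the convolution weights.
conv = layers.Conv2D(
    filters=64,              # placeholder filter count
    kernel_size=3,
    padding="same",
    activation="relu",
    kernel_regularizer=regularizers.l2(1e-5),  # the value I settled on
)
```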

My loss function is categorical cross-entropy on one-hot encoded labels; the accuracy is just Keras' standard "accuracy" metric.
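As I understand it, accuracy only checks the argmax of the predicted distribution, while cross-entropy depends on the full probability assigned to the true class, so a prediction can be counted correct while still incurring a high loss. A toy sketch in plain NumPy (not my model's outputs) showing two batches with identical accuracy but very different loss:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # Mean over samples of -sum(y_true * log(y_pred)).
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

def accuracy(y_true, y_pred):
    # Fraction of samples whose argmax matches the one-hot label.
    return np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))

y_true = np.array([[1.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0]])

# Both batches pick the right class, but with very different confidence.
confident   = np.array([[0.90, 0.05, 0.05],
                        [0.90, 0.05, 0.05]])
unconfident = np.array([[0.40, 0.30, 0.30],
                        [0.40, 0.30, 0.30]])

print(accuracy(y_true, confident), categorical_crossentropy(y_true, confident))      # 1.0, ~0.105
print(accuracy(y_true, unconfident), categorical_crossentropy(y_true, unconfident))  # 1.0, ~0.916
```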


