by Mr.Sh4nnon
Last Updated May 15, 2019 16:19

I am training a fully convolutional network. The training loss is decreasing, while the validation loss stays mostly flat, with some variance.

I thought it might be overfitting, but the validation accuracy increases with each epoch. Is this legitimate? How could something like this happen? Introducing L2 regularization helped at first: the validation loss settled at a lower level, but it still stays more or less constant. Large L2 coefficients made the training loss, validation loss, and validation accuracy all worse, so I kept the coefficient at around 1e-5.
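
For concreteness, the scale of the L2 term can be sketched in plain Python (toy weights and loss values, not from the actual network): the penalty λ·Σw² is negligible at λ = 1e-5 but dominates the total loss for large λ, which matches the behaviour described above.

```python
# Toy illustration of the L2 penalty term -- the weights and data loss
# here are made up, not taken from the real model.
def l2_penalty(weights, lam):
    """L2 weight penalty added to the data loss: lam * sum(w^2)."""
    return lam * sum(w * w for w in weights)

weights = [0.5, -1.2, 0.8, 2.0]   # sum of squares = 6.33
data_loss = 0.9

total_small = data_loss + l2_penalty(weights, 1e-5)  # penalty ~6.3e-5, barely visible
total_big = data_loss + l2_penalty(weights, 10.0)    # penalty = 63.3, dominates the loss
```

At λ = 1e-5 the gradient signal still comes almost entirely from the data loss; with a large λ the optimizer mostly shrinks weights, which is consistent with every metric getting worse.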

My loss function is categorical cross-entropy on one-hot encoded labels. The accuracy is just Keras' standard "accuracy" metric.
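
The gap between these two metrics may be part of the answer: cross-entropy depends on the predicted probabilities, while the accuracy metric only checks the argmax. A small self-contained sketch (made-up probabilities, not the real model's outputs) shows how accuracy can improve even though the loss barely moves:

```python
# Toy demonstration that accuracy and categorical cross-entropy can
# move independently. All numbers are invented for illustration.
import math

def categorical_crossentropy(y_true, y_pred):
    """Mean cross-entropy over samples; y_true is one-hot, rows of y_pred sum to 1."""
    losses = [
        -sum(t * math.log(p) for t, p in zip(true_row, pred_row))
        for true_row, pred_row in zip(y_true, y_pred)
    ]
    return sum(losses) / len(losses)

def accuracy(y_true, y_pred):
    """Fraction of samples where the argmax of the prediction hits the one-hot label."""
    correct = sum(
        max(range(len(pred_row)), key=pred_row.__getitem__) == true_row.index(1)
        for true_row, pred_row in zip(y_true, y_pred)
    )
    return correct / len(y_true)

labels = [[1, 0, 0], [0, 1, 0]]

# "Epoch A": one confident hit, one confident miss -> accuracy 0.5, loss ~1.20
epoch_a = [[0.90, 0.05, 0.05], [0.80, 0.10, 0.10]]
# "Epoch B": both argmaxes correct but with low confidence -> accuracy 1.0, loss ~0.92
epoch_b = [[0.40, 0.35, 0.25], [0.30, 0.40, 0.30]]
```

Between these two "epochs" the accuracy doubles while the loss only drops from about 1.20 to about 0.92, so a rising validation accuracy alongside a flat validation loss is entirely possible: the network gets more samples right without becoming more confident about them.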
