Let’s say we fit a linear regression. What does the correlation between its training error and test error say about the model, its performance, or the data? What would a very low or a very high correlation imply?
When I fit the same regression with a ridge penalty, I found that the correlation between training and test error doesn’t change with the ridge penalty parameter lambda. Does that tell me anything about the data?
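For concreteness, here is a minimal sketch of how such a correlation could be computed. I'm assuming the correlation is taken across repeated random train/test splits (one training-error and one test-error value per split), using scikit-learn's `Ridge` on synthetic data; the data-generating process and split scheme here are illustrative assumptions, not part of the original setup.

```python
# Sketch: correlation between training and test error across repeated
# random train/test splits, for several ridge penalties (assumptions:
# synthetic Gaussian data, scikit-learn Ridge, 70/30 splits).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)

def error_correlation(lam, n_splits=100):
    """Pearson correlation of (train MSE, test MSE) over random splits."""
    train_err, test_err = [], []
    for seed in range(n_splits):
        Xtr, Xte, ytr, yte = train_test_split(
            X, y, test_size=0.3, random_state=seed
        )
        model = Ridge(alpha=lam).fit(Xtr, ytr)
        train_err.append(mean_squared_error(ytr, model.predict(Xtr)))
        test_err.append(mean_squared_error(yte, model.predict(Xte)))
    return np.corrcoef(train_err, test_err)[0, 1]

for lam in [0.0, 1.0, 10.0]:
    print(f"lambda={lam}: corr(train err, test err) = "
          f"{error_correlation(lam):.3f}")
```

With this setup you can check directly whether the train/test error correlation moves as lambda grows, which is the quantity the question is about.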