Let’s say we fit a linear regression. What does the correlation between its training error and test error say about the model, its performance, or the data? What does a very low or a very high correlation imply?

When I fit the same regression with a ridge penalty, I found that the correlation between training and test error (computed across repeated train/test splits) doesn’t change with the ridge penalty parameter lambda. Can I say something about the data?
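For concreteness, here is a minimal sketch of the kind of experiment I ran. It assumes the correlation is taken across repeated random train/test draws: for each lambda, fit closed-form ridge on a fresh training set, record train and test MSE, and correlate the two error sequences. The data-generating process (Gaussian design, fixed true beta, unit noise) is just an illustrative assumption, not my actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_splits = 60, 5, 200
beta = rng.normal(size=p)  # hypothetical fixed "true" coefficients

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X'X + lam*I)^{-1} X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in [0.0, 1.0, 10.0]:
    train_err, test_err = [], []
    for _ in range(n_splits):
        # Fresh training and test draws for each repetition
        Xtr = rng.normal(size=(n, p)); ytr = Xtr @ beta + rng.normal(size=n)
        Xte = rng.normal(size=(n, p)); yte = Xte @ beta + rng.normal(size=n)
        b = ridge_fit(Xtr, ytr, lam)
        train_err.append(np.mean((ytr - Xtr @ b) ** 2))
        test_err.append(np.mean((yte - Xte @ b) ** 2))
    # Correlation between the two error sequences across splits
    r = np.corrcoef(train_err, test_err)[0, 1]
    print(f"lambda={lam:5.1f}  corr(train, test) = {r:+.3f}")
```

In this setup the printed correlations barely move with lambda, which is what prompted the question.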
