I am doing some Bayesian analysis, exploring a posterior distribution with an MCMC method, and I would like some clarification on estimating the proposal covariance matrix. My model has 6 parameters. Initially I have no covariance matrix, so I guess the step sizes by hand, aiming for a ~30% acceptance rate (in practice I achieve less than that, ~12%). I then run a chain of 10^4 steps and estimate the covariance matrix from it. Finally, I run new chains using this covariance matrix, each a few tens of 10^4 steps long.
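For concreteness, this is roughly what I do to build the proposal covariance from the pilot chain (a minimal NumPy sketch, not my actual code; the 2.38²/d factor is the standard scaling suggested for Gaussian targets, which I treat as a starting point only):

```python
import numpy as np

def proposal_covariance(chain):
    """Build a proposal covariance from a pilot chain.

    chain : array of shape (n_steps, d), one row per MCMC sample.
    Returns a (d, d) covariance scaled by the classic 2.38**2 / d
    heuristic for random-walk Metropolis on a Gaussian target.
    """
    n_steps, d = chain.shape
    cov = np.cov(chain, rowvar=False)  # (d, d) sample covariance
    return (2.38 ** 2 / d) * cov

# toy usage with a fake 6-parameter pilot chain of 10^4 steps
rng = np.random.default_rng(0)
pilot = rng.normal(size=(10_000, 6))
prop_cov = proposal_covariance(pilot)
```

Multivariate normal proposals are then drawn with `prop_cov` as the covariance.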
First, I find that the acceptance rate with this covariance matrix is low, ~6%. To compensate for the low efficiency (and because I am validating my method on mock samples that I generated myself), I start the MCMC at the known true parameter values. I check convergence with the Gelman-Rubin test, computing var(mean)/mean(var) in orthonormalized parameters, and find that the chains have not converged. Inspecting the chains visually, I see that for one particular parameter the chains are indeed poorly mixed (which I suppose is the visual counterpart of the Gelman-Rubin criterion). I was wondering whether I should increase the initial step size for that particular parameter. This seems somewhat counterintuitive, though: if the steps were underestimated, I would expect an acceptance rate higher than 30%, not lower.
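For reference, my Gelman-Rubin check is essentially the following (a sketch of the standard between/within-chain variance ratio, computed per parameter; my actual implementation works in orthonormalized parameters, which this sketch omits):

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin R-hat per parameter.

    chains : array of shape (m, n, d) -- m chains, n steps, d parameters.
    Returns an array of d R-hat values; values near 1 indicate convergence.
    """
    m, n, d = chains.shape
    chain_means = chains.mean(axis=1)             # (m, d)
    B = n * chain_means.var(axis=0, ddof=1)       # between-chain variance
    W = chains.var(axis=1, ddof=1).mean(axis=0)   # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n             # pooled variance estimate
    return np.sqrt(var_hat / W)

# toy usage: 4 well-mixed chains should give R-hat close to 1
rng = np.random.default_rng(1)
chains = rng.normal(size=(4, 2000, 6))
rhat = gelman_rubin(chains)
```

A common rule of thumb is to require R-hat below roughly 1.01-1.1 for every parameter.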
I was advised that, given the dimensionality of my problem (6 parameters), my chains are short and I should increase their length to 10^6 steps.
I would like advice on the correct way to address this problem, and in particular the low acceptance rate, because I suspect it is caused by a poor estimate of the covariance matrix.