by user3639557
Last Updated September 14, 2018 15:19

Assume a discrete variable $z_j$ with unnormalized probability $\alpha_j$. One way to sample is to apply argmax(softmax($\alpha_j$)); another is the Gumbel trick, argmax($\log\alpha_j+g_j$), where $g_j$ is Gumbel-distributed noise. This second approach is useful if we want to do something like variational autoencoding. For example, if the goal is to have a full distribution over the possible outcomes of $z_j$, we can apply a softmax transformation on top of the Gumbel perturbation: $$\pi_j = \frac{e^{\log \alpha_j+g_j}}{\sum_{k=1}^{K}e^{\log \alpha_k+g_k}}\ \ \ \text{where}\ \ g_k=-\log(-\log \epsilon_k),\ \epsilon_k\sim U(0,1).$$

Why isn't this enough? Why do we need to include a temperature term $\tau$ and rewrite it as $$\pi_j = \frac{e^{(\log\alpha_j+g_j)/\tau}}{\sum_{k=1}^{K}e^{(\log \alpha_k+g_k)/\tau}}\ \ \ \text{where}\ \ g_k=-\log(-\log \epsilon_k),\ \epsilon_k\sim U(0,1)?$$

I understand that the temperature makes the vector $\pi=[\pi_1, \dots, \pi_K]$ smoother or sharper (a high temperature pushes all the $\pi_i$ toward the same value, producing a flatter distribution), but why do we need it in practice? All we want (e.g., in a VAE) is to decouple the stochastic aspect of the sampling (i.e., move the stochastic part to the input), which the Gumbel trick achieves, and then somehow replace the one-hot draw with a continuous vector, which we get by computing softmax($\log\alpha_j+g_j$), i.e., by using the first equation. I am sure I am missing something fundamental, but I can't see what it is...
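To make the question concrete, here is a minimal numpy sketch of the second equation (the `gumbel_softmax` helper, seed, and example weights are my own illustration, not from any particular library). It draws the same Gumbel noise at two temperatures so you can see what $\tau$ changes:

```python
import numpy as np

def gumbel_softmax(log_alpha, tau, rng):
    """One Gumbel-softmax sample pi from unnormalized log-probs log_alpha."""
    eps = rng.uniform(size=log_alpha.shape)   # epsilon_k ~ U(0, 1)
    g = -np.log(-np.log(eps))                 # g_k = -log(-log(eps_k)), Gumbel(0,1)
    y = (log_alpha + g) / tau                 # perturb, then scale by temperature
    y = y - y.max()                           # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()                        # softmax -> point on the simplex

log_alpha = np.log(np.array([0.1, 0.3, 0.6]))  # unnormalized class weights

# Same seed => same Gumbel noise, so only tau differs between the two draws.
pi_sharp = gumbel_softmax(log_alpha, tau=0.1, rng=np.random.default_rng(0))
pi_flat = gumbel_softmax(log_alpha, tau=10.0, rng=np.random.default_rng(0))

print(pi_sharp)  # low tau: mass concentrates, approaching a one-hot vector
print(pi_flat)   # high tau: mass spreads out toward the uniform distribution
```

With the noise held fixed, lowering $\tau$ sharpens $\pi$ toward the one-hot argmax($\log\alpha_j+g_j$), while raising it flattens $\pi$; the first equation is exactly the special case $\tau=1$.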
