Joint uniform convergence in distribution of random variables and constant

by Lundborg

Let $(X_{n, \theta})_{n \in \mathbb{N}, \theta \in \Theta}$ be a family of parameter-dependent real-valued random variables, where $\Theta$ is some parameter space.

Assume that $X_{n, \theta}$ converges uniformly in distribution to $X_\theta$, i.e. for any continuous and bounded $f: \mathbb{R} \to \mathbb{R}$, $$ \sup_{\theta} \left|E(f(X_{n, \theta})) - E(f(X_\theta)) \right| \to 0 $$ as $n \to \infty$.
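For concreteness, an example of the kind of family I have in mind (if I have not made a mistake): take $\Theta = \mathbb{R}$, $Z \sim N(0,1)$, $X_{n, \theta} = \theta + Z + 1/n$ and $X_\theta = \theta + Z$. Then for any bounded continuous $f$, $$ \left| E(f(X_{n, \theta})) - E(f(X_\theta)) \right| = \left| \int f(x) \big( \phi(x - \theta - 1/n) - \phi(x - \theta) \big) \, dx \right| \le \|f\|_\infty \int \left| \phi(u - 1/n) - \phi(u) \right| du, $$ where $\phi$ is the standard normal density; the last bound does not depend on $\theta$ and tends to $0$, so the convergence is indeed uniform in $\theta$.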

Let $(y_\theta)_{\theta \in \Theta}$ be some family of real numbers. Does $(X_{n, \theta}, y_\theta)$ then converge uniformly in distribution to $(X_\theta, y_\theta)$, i.e. do we have, for any continuous and bounded $f: \mathbb{R}^2 \to \mathbb{R}$, $$ \sup_{\theta} \left|E(f(X_{n, \theta}, y_\theta)) - E(f(X_\theta, y_\theta)) \right| \to 0 $$ as $n \to \infty$?

Intuitively I find it crazy that pairing with a constant that does nothing would change this convergence, but perhaps I need some assumption such as boundedness of $y_\theta$ (which would be fine for my purposes); even then I just can't figure out a way to show it.
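In fact, here is a toy calculation (unless I have slipped up) suggesting that some control on $y_\theta$ really is needed: take $\Theta = \mathbb{N}$ and let $X_{n, \theta} = 1/n$ and $X_\theta = 0$ be deterministic, so that $\sup_\theta |E(f(X_{n, \theta})) - E(f(X_\theta))| = |f(1/n) - f(0)| \to 0$ for every continuous $f$ and the hypothesis holds. Now take $y_\theta = \theta$ and $f(x, y) = \sin(xy)$, which is continuous and bounded on $\mathbb{R}^2$. Then $$ \sup_{\theta \in \mathbb{N}} \left| E(f(X_{n, \theta}, y_\theta)) - E(f(X_\theta, y_\theta)) \right| = \sup_{\theta \in \mathbb{N}} \left| \sin(\theta / n) \right|, $$ which stays bounded away from $0$ for every $n$ (choose $\theta$ close to $\pi n / 2$), so the joint uniform convergence fails when $y_\theta$ is unbounded.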

Usually arguments like this will be of the form: note that $g_\theta(x) = f(x, y_\theta)$ is continuous and bounded, and then we're done. But $g_\theta$ is now $\theta$-dependent, and therefore I don't think the argument works. Any ideas?
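To spell out where that argument seems to break down: applying the hypothesis to the single function $g_\theta$ gives $\sup_{\theta'} |E(g_\theta(X_{n, \theta'})) - E(g_\theta(X_{\theta'}))| \to 0$, and in particular $|E(g_\theta(X_{n, \theta})) - E(g_\theta(X_\theta))| \to 0$ for each fixed $\theta$. But the rate depends on which $g_\theta$ was plugged in, i.e. on $\theta$, so this only yields pointwise convergence in $\theta$; what appears to be needed is a version of the hypothesis that is uniform over the whole family $\{g_\theta\}_{\theta \in \Theta}$, which the assumption (one fixed $f$ at a time) does not obviously provide.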


