
Utilizing Network Structure to Bound the Convergence Rate in Markov Chain Monte Carlo Algorithms
We consider the problem of estimating the measure of subsets in very large networks. A primary tool for this purpose is the Markov chain Monte Carlo (MCMC) algorithm. While extremely useful in many settings, this algorithm often suffers from very slow convergence. We show that significantly better bounds on the convergence rate can be obtained in a special but important case: when the huge state space can be aggregated into a smaller number of clusters, within which the states behave approximately the same way (even if not identically). A Markov chain with this structure is called quasi-lumpable, and the property allows states (nodes) to be aggregated into clusters. Our main contribution is a rigorously proved bound on the rate at which the aggregated state distribution approaches its limit in quasi-lumpable Markov chains. We also demonstrate numerically that in certain cases this can indeed significantly accelerate the estimation of the measure of subsets. The result can be a useful tool in the analysis of complex networks whenever they admit a clustering that aggregates nodes with similar (but not necessarily identical) behavior.
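The aggregation idea can be illustrated with a minimal sketch. The transition matrix, cluster assignment, and uniform within-cluster weighting below are all illustrative assumptions, not the paper's construction: a small chain whose states fall into two clusters with approximately (but not exactly) equal cluster-to-cluster transition mass is lumped into a two-state chain, and the aggregated distributions of the two chains are compared after many steps.

```python
import numpy as np

# Hypothetical 4-state chain whose states pair into clusters {0, 1}
# and {2, 3}. Rows within a cluster carry *approximately* equal total
# probability into each cluster (quasi-lumpability), not exactly equal.
P = np.array([
    [0.60, 0.25, 0.10, 0.05],   # mass into cluster 0: 0.85
    [0.57, 0.30, 0.06, 0.07],   # mass into cluster 0: 0.87
    [0.05, 0.10, 0.50, 0.35],   # mass into cluster 0: 0.15
    [0.08, 0.05, 0.45, 0.42],   # mass into cluster 0: 0.13
])
clusters = [[0, 1], [2, 3]]

def aggregate(P, clusters):
    """Lump states: average each cluster's transition mass into each cluster.

    Uniform weighting within the source cluster is an arbitrary choice
    for this sketch; other weightings (e.g. stationary weights) are possible.
    """
    k = len(clusters)
    Q = np.zeros((k, k))
    for i, ci in enumerate(clusters):
        for j, cj in enumerate(clusters):
            Q[i, j] = P[np.ix_(ci, cj)].sum(axis=1).mean()
    return Q

Q = aggregate(P, clusters)

# Iterate both chains from a point mass and compare the aggregated
# distribution of the full chain with the small chain's distribution.
mu = np.array([1.0, 0.0, 0.0, 0.0])   # full chain, start in state 0
nu = np.array([1.0, 0.0])             # aggregated chain, start in cluster 0
for _ in range(50):
    mu = mu @ P
    nu = nu @ Q

agg_mu = np.array([mu[c].sum() for c in clusters])
print("aggregated full chain:", np.round(agg_mu, 4))
print("lumped chain:        ", np.round(nu, 4))
```

Because the chain is only quasi-lumpable, the two printed distributions agree approximately rather than exactly; the paper's contribution is a rigorous bound on how fast the aggregated distribution of such a chain approaches its limit.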