Bayesian formulation of Regularization by denoising and application to image restoration

Thursday, December 5, 2024, 10:15 to 11:15

Seminar room M.0.1

Diarra Fall

Institut Denis Poisson, Université d'Orléans

Inverse problems are ubiquitous in signal and image processing. Canonical examples include signal/image denoising (i.e., removing noise from a signal/image) and image reconstruction. Since inverse problems are known to be ill-posed, or at least ill-conditioned, they require regularization: additional constraints are introduced to compensate for the lack of information in the observations. A common difficulty is selecting an appropriate regularizer, which has a decisive influence on the quality of the reconstruction. Another challenge is the confidence we may place in the reconstructed signal/image; in other words, it is desirable for a method to quantify the uncertainty associated with the reconstructed image in order to support more principled decision-making. These two tasks (regularization and uncertainty quantification) can be addressed jointly within the Bayesian statistical framework, which allows additional information to be included by specifying a marginal distribution for the signal/image, known as the prior distribution. The traditional approach consists in defining the prior analytically, as a hand-crafted explicit function chosen to encourage specific desired properties of the recovered signal/image. Following the recent surge of deep learning, data-driven regularization using implicit priors specified by neural networks has become ubiquitous in parametric Bayesian inverse problems for signals and images. Popular approaches within this methodology are Plug-and-Play (PnP) [1-2] and regularization by denoising (RED) [3] methods.
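As a rough illustration of the setting described above, a linear inverse problem and its Bayesian treatment can be sketched as follows; the notation (A, y, x, sigma, the denoiser D, and the weight lambda) is generic and not necessarily that used in the talk:

\[
\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}, \qquad \mathbf{n} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}),
\]
\[
p(\mathbf{x} \mid \mathbf{y}) \;\propto\; p(\mathbf{y} \mid \mathbf{x})\, p(\mathbf{x}) \;\propto\; \exp\!\left(-\tfrac{1}{2\sigma^2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2^2 \;-\; \lambda\, \rho(\mathbf{x})\right),
\]
where, in the RED framework of [3], the regularization potential built from a denoiser \(D\) is
\[
\rho(\mathbf{x}) \;=\; \tfrac{1}{2}\, \mathbf{x}^\top \big(\mathbf{x} - D(\mathbf{x})\big), \qquad \nabla \rho(\mathbf{x}) = \mathbf{x} - D(\mathbf{x}),
\]
the gradient identity holding under the conditions on \(D\) stated in [3].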

We have proposed a probabilistic approach to the RED method, defining a new probability distribution based on a RED potential, which can be chosen as the prior distribution in a Bayesian inversion task. We have also proposed a dedicated Markov chain Monte Carlo sampling algorithm that is particularly well suited to high-dimensional sampling from the resulting posterior distribution. In addition, we provide a theoretical analysis that guarantees convergence to the target distribution and quantifies the speed of convergence. The power of the proposed approach is illustrated on various restoration tasks such as image deblurring, inpainting, and super-resolution.
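To make the sampling idea concrete, below is a minimal, generic unadjusted Langevin sketch targeting a posterior built from a RED-type potential, under the assumptions of the display above (Gaussian likelihood, gradient identity grad rho(x) = x - D(x)). It is only an illustration: it is not the Langevin-within-split Gibbs algorithm of [4], and the operator A, denoiser, step size, and initialization are placeholders.

```python
import numpy as np

def langevin_red_sampler(y, A, denoiser, sigma2, lam, step, n_iter, rng=None):
    """Unadjusted Langevin iterations targeting
    p(x | y) proportional to exp(-||y - A x||^2 / (2 sigma2) - lam * rho(x)),
    with grad rho(x) = x - denoiser(x) (RED gradient identity, assumed valid)."""
    rng = np.random.default_rng() if rng is None else rng
    x = A.T @ y  # crude initialization (placeholder)
    samples = []
    for _ in range(n_iter):
        grad_lik = A.T @ (A @ x - y) / sigma2           # gradient of the data-fit term
        grad_prior = lam * (x - denoiser(x))            # gradient of the RED potential
        noise = np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        x = x - step * (grad_lik + grad_prior) + noise  # Langevin update
        samples.append(x.copy())
    return samples

# Toy usage with an identity operator and a simple smoothing "denoiser" (placeholders):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = rng.standard_normal(64)
    A = np.eye(64)
    y = A @ x_true + 0.1 * rng.standard_normal(64)
    smooth = lambda x: np.convolve(x, np.ones(3) / 3, mode="same")
    chain = langevin_red_sampler(y, A, smooth, sigma2=0.01, lam=1.0,
                                 step=1e-3, n_iter=500, rng=rng)
    print("posterior-mean estimate error:",
          np.linalg.norm(np.mean(chain[200:], axis=0) - x_true))
```

The chain's samples can then be used both for point estimation (e.g., the posterior mean above) and for the uncertainty quantification mentioned in the abstract (e.g., pixelwise credible intervals).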

This presentation will be based on the following two papers:

[4] E. C. Faye, M. D. Fall, and N. Dobigeon, "Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling," arXiv:2402.12292, preprint, 2024.
[5] E. C. Faye, M. D. Fall, and N. Dobigeon, "Efficient posterior sampling for Bayesian imaging with explicit score function-based priors," preprint, 2024.

References
[1] S. V. Venkatakrishnan et al., "Plug-and-Play priors for model based reconstruction," in IEEE Global Conf. on Signal and Information Processing, pp. 945-948, 2013.
[2] R. Laumont et al., "Bayesian imaging using Plug & Play priors: When Langevin meets Tweedie," SIAM Journal on Imaging Sciences, 15(2):701-737, 2022.
[3] Y. Romano, M. Elad, and P. Milanfar, "The little engine that could: Regularization by denoising (RED)," SIAM Journal on Imaging Sciences, 10(4):1804-1844, 2017.