Abstract

In solving Bayesian inverse problems, it is often desirable to use a common density parameterization to characterize the prior and posterior. Typically, we seek an optimal approximation of the true posterior from the same distribution family as the prior. As one of the most important classes of distributions in statistics, the exponential family is adopted as the parameterization. The optimal parameter values for representing the approximated posterior are obtained by minimizing the deviation between the parameterized density and a homotopy that deforms the prior density into the posterior density. Instead of the usual moment parameters, we introduce a new parameter system in the process, by which we reduce the dimension of the parameters and therefore the computational cost. The parameters are governed by a system of explicit first-order differential equations. Solving this system over a finite “time” interval yields the desired optimal density parameters. The method is shown to be effective on several numerical examples.

1. Introduction

Inverse problems occur widely in mathematics and engineering. The related theories and algorithms have been developed by many authors [1]. To analyze the uncertainty of the unknowns, the Bayesian approach is widely applied to find solutions to inverse problems [2–4]. It provides a quantitative estimate of the probability distribution of the solution, which can be used to quantify confidence in the reconstruction. In this approach, the prior distribution is transformed into the posterior distribution of the unknowns. This transition is important and poses numerical challenges in finding the posterior distribution. Markov chain Monte Carlo (MCMC) methods, e.g., the Metropolis–Hastings (MH) sampling algorithm, are essential tools for analyzing it. These algorithms can evaluate arbitrary probability distributions; however, they are inherently sequential due to the Markov property, which causes autocorrelation of the samples. This autocorrelation inevitably increases the uncertainty in the estimates of posterior quantities of interest, such as the mean and the variance.

We seek an optimal approximation of the posterior distribution rather than drawing posterior samples by MCMC. We assume that the approximated posterior and the prior are of the same distribution type and share a common density parameterization. The transition from the prior to the approximated posterior then amounts to a change of the distribution parameters. For such problems, the homotopy method is a promising approach: it characterizes solution spaces by smoothly tracking solutions from one formulation (typically an “easy” problem) to another (typically a “hard” problem) [5]. Hanebeck et al. [6] introduced a general framework that performs the measurement update of filtering problems with a homotopy. Furthermore, Hagmar et al. [7] developed this framework for filtering problems and derived a so-called homotopy differential equation (HDE) that governs the progressive evolution of the distribution parameters. In [6, 7], Gaussian and Gaussian mixture densities are adopted in the approximation process. To the authors’ knowledge, there is no related work using the homotopy idea to solve Bayesian inverse problems. In this paper, we extend this homotopy idea and use it to treat Bayesian inverse problems. As an extension of [6, 7], the approximated posterior density family is chosen as the exponential and the mixed exponential family, and the usual moment parameters are replaced by natural parameters. For Gaussian densities in particular, the dimension of the parameters can be reduced. Compared with MCMC, the proposed method can avoid drawing large numbers of autocorrelated posterior samples.

We apply the proposed method to two inverse problems in partial differential equations: an inverse heat conduction problem and an inverse scattering problem. The finite element method (FEM) and the boundary integral equation method are used to solve the forward problems [8, 9]. For some other numerical techniques for differential equations, we refer to [10–18]. The Bayesian perspective on inverse problems has developed rapidly over the past two decades. There are two main classes of algorithms in Bayesian inversion: MCMC-based sampling methods [19–27] and variational methods [28–30]. For the MCMC-based sampling algorithms, fast numerical techniques have been studied by many authors. In [4, 33], the polynomial chaos expansion is used as a surrogate of the forward model or the likelihood function. The Gaussian process regression method has also been used to accelerate Bayesian inversion [34, 35]. Despite the large body of work on theories and applications of Bayesian approaches in inverse problems, the homotopy idea remains largely unexplored terrain. Our work develops it and is expected to extend to further fields of application.

The rest of this paper is organized as follows: in Section 2, we present the basic framework of Bayesian inversion using homotopy. In Section 3, we introduce the exponential and the mixed exponential families. In Section 4, an approximation of the HDE is derived. In Section 5, examples are given to verify the effectiveness of the proposed algorithm. Finally, we give the conclusions.

2. Bayesian Inversion Using Homotopy

Inverse problems are concerned with converting observational data into information about quantities that are not observed directly in a system. In mathematical terms, an inverse problem takes the abstract form

$$y = \mathcal{G}(u) + \eta, \tag{1}$$

in which $\eta$ is an additive noise and the unknown $u \in X$ is to be determined, given the data $y \in Y$, where $X$ and $Y$ are Banach spaces. We apply a probabilistic viewpoint, the Bayesian approach, to the solution of (1), in which all quantities, including the unknown $u$, the noise $\eta$, and the observation $y$, are regarded as random variables. In the Bayesian framework, the information about the unknown $u$ is updated by blending prior beliefs with observed data. Typically, the prior and posterior are encoded in the corresponding probability measures $\mu_0$ and $\mu^y$, which are linked by Bayes’ formula

$$\frac{d\mu^y}{d\mu_0}(u) \propto \exp\bigl(-\Phi(u; y)\bigr),$$

where $\exp(-\Phi(u; y))$ is the likelihood function and $\Phi(u; y)$ is the negative log likelihood.

For simplicity, we let $X$ and $Y$ be finite dimensional spaces. The posterior density corresponding to $\mu^y$ is then given as

$$\pi^y(u) = \frac{1}{Z}\,\exp\bigl(-\Phi(u; y)\bigr)\,\pi_0(u),$$

where $\pi_0$ represents the prior density and $Z = \int \exp(-\Phi(u; y))\,\pi_0(u)\,du$ is the normalization constant. The main target is the numerator, which is expressed in the following equation:

$$q(u) = \exp\bigl(-\Phi(u; y)\bigr)\,\pi_0(u).$$
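For concreteness, consider the common special case of additive Gaussian noise, $\eta \sim \mathcal{N}(0, \sigma^2 I)$ (an illustrative assumption; the framework above is general). The negative log likelihood is then the familiar least-squares misfit:

$$\Phi(u; y) = \frac{1}{2\sigma^2}\,\bigl\|y - \mathcal{G}(u)\bigr\|^2, \qquad q(u) = \pi_0(u)\,\exp\!\Bigl(-\frac{1}{2\sigma^2}\,\bigl\|y - \mathcal{G}(u)\bigr\|^2\Bigr).$$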

We try to find an optimal approximation of $\pi^y$ using homotopy.

The homotopy Bayesian approach performs progressive processing of the posterior distribution $\pi^y$. Instead of directly approximating the true density, it starts from a tractable density and continuously approaches the true density via intermediate densities. We choose the homotopy as

$$q(u, \gamma) = \pi_0(u)\,\exp\bigl(-\gamma\,\Phi(u; y)\bigr),$$

where $\gamma \in [0, 1]$ is the homotopy parameter. It is obvious that $q(u, 0) = \pi_0(u)$ and $q(u, 1) = q(u)$. For each $\gamma$, an optimal approximation of $q(u, \gamma)$ is sought from some distribution family with density $p(u; \eta)$ of parameters $\eta$. We minimize a deviation function between $q(\cdot, \gamma)$ and $p(\cdot; \eta)$ and obtain the optimal approximation of $q(u, \gamma)$. In particular, when $\gamma = 1$, we obtain an optimal approximation of the posterior $\pi^y$.

We usually use the Kullback–Leibler (KL) divergence and the squared Hellinger distance to measure the deviation between two probability distributions over the same variable $x$. In this paper, we assume that the random variables are continuous. For random variables $X_1$ and $X_2$, we denote their probability densities by $p_1$ and $p_2$, respectively. The KL divergence and squared Hellinger distance between $p_1$ and $p_2$ are given, respectively, by

$$D_{\mathrm{KL}}(p_1 \,\|\, p_2) = \int p_1(x)\,\log\frac{p_1(x)}{p_2(x)}\,dx, \qquad d_H^2(p_1, p_2) = \frac{1}{2}\int \Bigl(\sqrt{p_1(x)} - \sqrt{p_2(x)}\Bigr)^2\,dx.$$
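The following minimal sketch (not code from the paper; the two Gaussians are illustrative choices) estimates both deviations by Monte Carlo, mirroring the integral definitions above. Note that $d_H^2(p_1, p_2) = 1 - \int \sqrt{p_1 p_2}\,dx$ under the normalization with the factor $1/2$.

```python
# Monte Carlo estimates of KL divergence and squared Hellinger distance
# between two Gaussians, using samples drawn from p1. Closed forms exist
# for Gaussians; this sketch just illustrates the definitions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p1, p2 = norm(0.0, 1.0), norm(0.5, 1.5)

x = p1.rvs(size=100_000, random_state=rng)       # samples from p1
log_ratio = p1.logpdf(x) - p2.logpdf(x)

kl = log_ratio.mean()                            # E_{p1}[log(p1/p2)]
hellinger2 = 1.0 - np.exp(-0.5 * log_ratio).mean()  # 1 - E_{p1}[sqrt(p2/p1)]

print(f"KL(p1||p2) ~ {kl:.4f}, d_H^2(p1,p2) ~ {hellinger2:.4f}")
```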

Taking the deviation between $q(\cdot, \gamma)$ and $p(\cdot; \eta)$ in the sense of (5), i.e., $D = D_{\mathrm{KL}}$ or $D = d_H^2$, we seek the solution

$$\eta(\gamma) = \operatorname*{arg\,min}_{\eta}\, D\bigl(q(\cdot, \gamma),\, p(\cdot; \eta)\bigr).$$

The first-order necessary condition for this minimization gives

$$\nabla_\eta D\bigl(q(\cdot, \gamma),\, p(\cdot; \eta(\gamma))\bigr) = 0 \quad \text{for all } \gamma \in [0, 1].$$

For $\gamma = 1$, the density $p(\cdot; \eta(1))$ is an optimal approximation of the posterior $\pi^y$. If the approximated distributions are chosen from the same family as the prior distribution, the initial condition is set according to the prior $\pi_0$. Then, let $\pi_0$ be identified as $p(\cdot; \eta_0)$. Thereby, we set the initial condition $\eta(0) = \eta_0$. Solving for $\dot\eta$ in (8), we arrive at the initial value problem of the HDE, which is expressed as follows:

$$\dot\eta(\gamma) = -\bigl[\nabla_\eta^2 D\bigr]^{-1}\,\partial_\gamma \nabla_\eta D, \qquad \eta(0) = \eta_0.$$
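For readers tracing the steps: the equation solved for $\dot\eta$ comes from differentiating the necessary condition along the path, a standard implicit-function-theorem computation, written here under our assumption that $\nabla_\eta^2 D$ is invertible along the path:

$$0 = \frac{d}{d\gamma}\,\nabla_\eta D\bigl(q(\cdot, \gamma),\, p(\cdot; \eta(\gamma))\bigr) = \nabla_\eta^2 D\,\dot\eta(\gamma) + \partial_\gamma \nabla_\eta D,$$

which rearranges to the stated ODE.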

3. Exponential Family and Mixed Exponential Family

The exponential family (EF) and the mixed exponential family (MEF) serve as the approximations of the posterior distribution in this paper. For a numeric random variable $x$, the parametric EF probability density can be written as follows:

$$p(x; \eta) = h(x)\,\exp\bigl(\eta^T T(x) - A(\eta)\bigr),$$

where $\eta$ is called the natural (canonical) parameter, $T(x)$ is the sufficient statistic, and $A(\eta)$ is the normalizer given by

$$A(\eta) = \log \int h(x)\,\exp\bigl(\eta^T T(x)\bigr)\,dx.$$

It is easy to see that

$$\nabla_\eta \log p(x; \eta) = T(x) - \nabla_\eta A(\eta).$$

We can compute $\nabla_\eta A(\eta)$ by $\nabla_\eta A(\eta) = \mathbb{E}_{p(\cdot;\eta)}\bigl[T(x)\bigr]$.
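As a standard worked example (textbook material, not specific to this paper), the univariate Gaussian $\mathcal{N}(\mu, \sigma^2)$ fits this template with

$$T(x) = \begin{pmatrix} x \\ x^2 \end{pmatrix}, \quad \eta = \begin{pmatrix} \mu/\sigma^2 \\ -1/(2\sigma^2) \end{pmatrix}, \quad h(x) = \frac{1}{\sqrt{2\pi}}, \quad A(\eta) = -\frac{\eta_1^2}{4\eta_2} - \frac{1}{2}\log(-2\eta_2) = \frac{\mu^2}{2\sigma^2} + \log\sigma,$$

and one can check directly that $\nabla_\eta A(\eta) = \mathbb{E}[T(x)] = \bigl(\mu,\ \mu^2 + \sigma^2\bigr)^T$.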

For a $d$-dimensional Gaussian random variable $x \sim \mathcal{N}(\mu, \Sigma)$, $x \in \mathbb{R}^d$, the probability density is given as follows:

$$p(x) = (2\pi)^{-d/2}\,|\Sigma|^{-1/2}\,\exp\Bigl(-\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\Bigr).$$

In this case, we can reduce the dimension of the parameters. Denote the precision matrix by $P = \Sigma^{-1}$. By the symmetric positive definiteness of $P$, we have the Cholesky factorization $P = L L^T$ with $L$ being a lower triangular matrix. Then, by introducing the new parameter $\theta = (\mu, L)$, we have

$$p(x) = (2\pi)^{-d/2}\,\Bigl(\prod_{i=1}^{d} L_{ii}\Bigr)\,\exp\Bigl(-\tfrac{1}{2}\bigl\|L^T (x - \mu)\bigr\|^2\Bigr).$$

Remark 1. It is obvious that the dimension of the new parameter system decreases dramatically compared with the moment parameter or the natural parameter system. The parameter $\mu$ is the mean, and the triangular factor $L$ has a lower dimension than the full covariance matrix.
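A quick numerical check of this reparameterization (a minimal sketch, assuming the $(\mu, L)$ convention above): it verifies that the reparameterized log-density matches scipy's reference value.

```python
# Verify: log N(x; mu, Sigma) written via the precision factor L, where
# Sigma^{-1} = L L^T with L lower triangular (Cholesky of the precision).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
d = 3
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)               # random SPD covariance

L = np.linalg.cholesky(np.linalg.inv(Sigma))  # lower-triangular precision factor

def logpdf_cholesky(x, mu, L):
    """Gaussian log-density in the (mu, L) parameterization."""
    z = L.T @ (x - mu)                        # whitened residual
    # log |Sigma|^{-1/2} = sum_i log L_ii, since |Sigma^{-1}| = |L|^2
    return (-0.5 * len(mu) * np.log(2 * np.pi)
            + np.sum(np.log(np.diag(L)))
            - 0.5 * z @ z)

x = rng.normal(size=d)
ref = multivariate_normal(mean=mu, cov=Sigma).logpdf(x)
print(np.isclose(logpdf_cholesky(x, mu, L), ref))   # True
```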
Subsequently, for the MEF, we let the density be a convex combination of $K$ EF components, i.e.,

$$p(x; \theta) = \sum_{k=1}^{K} w_k\, p_k(x; \eta_k),$$

where $w_k > 0$ and $\sum_{k=1}^{K} w_k = 1$. To remove the weight constraint, we reparameterize the weights by unconstrained variables $s_k$, e.g., $w_k = \exp(s_k) / \sum_{j=1}^{K} \exp(s_j)$. Then, it follows by simple calculations that the derivatives of $\log p$ with respect to all parameters are available in closed form.

Remark 2. The parameter dimension of the mixed Gaussian exponential family is .

4. Approximation of the HDE

In the HDE (9), the homotopy density is involved in the integral terms, whose numerical evaluation is a challenge. Therefore, it is necessary to give an approximate version of (9) for the actual numerical simulation that avoids direct computations with the homotopy densities. Next, we derive an approximated HDE for inverse problems.

For a partition $0 = \gamma_0 < \gamma_1 < \cdots < \gamma_N = 1$, we have from the Taylor expansion of $\eta(\gamma)$

$$\eta(\gamma_{i+1}) = \eta(\gamma_i) + \Delta\gamma_i\,\dot\eta(\gamma_i) + O(\Delta\gamma_i^2),$$

where $\Delta\gamma_i = \gamma_{i+1} - \gamma_i$, and $\eta_i$ and $\eta_{i+1}$ are the approximations of $\eta(\gamma_i)$ and $\eta(\gamma_{i+1})$, respectively. We then can take

$$\eta_{i+1} = \eta_i - \Delta\gamma_i\,B(\eta_i)^{-1}\,b(\eta_i),$$

with $B(\eta_i)$ and $b(\eta_i)$ being some approximations of $\nabla_\eta^2 D$ and $\partial_\gamma \nabla_\eta D$ in (9), respectively. Next, let us give these approximations for the two deviation measures. For the KL divergence, with the normalized homotopy density $\tilde q(x, \gamma) = q(x, \gamma) / \int q(x, \gamma)\,dx$, we have

$$\nabla_\eta^2 D_{\mathrm{KL}} = -\int \tilde q(x, \gamma)\,\nabla_\eta^2 \log p(x; \eta)\,dx, \qquad \partial_\gamma \nabla_\eta D_{\mathrm{KL}} = -\int \partial_\gamma \tilde q(x, \gamma)\,\nabla_\eta \log p(x; \eta)\,dx.$$

When we implement the iteration (18) with (19), the density at $\gamma_i$ has already been approximated by $p(x; \eta_i)$. Therefore, we take the approximations of $\nabla_\eta^2 D$ and $\partial_\gamma \nabla_\eta D$ as follows:

$$B(\eta_i) = \int p(x; \eta_i)\,\nabla_\eta \log p(x; \eta_i)\,\nabla_\eta \log p(x; \eta_i)^T\,dx,$$
$$b(\eta_i) = \int p(x; \eta_i)\,\bigl(\Phi(x; y) - \bar\Phi_i\bigr)\,\nabla_\eta \log p(x; \eta_i)\,dx, \qquad \bar\Phi_i = \int p(x; \eta_i)\,\Phi(x; y)\,dx,$$

where the term $B(\eta_i)$ is the Fisher information matrix when $p(x; \eta)$ is a probability density. For the squared Hellinger metric, the corresponding expressions agree with these up to a common constant factor, which cancels in the product $B^{-1} b$.

For both cases, the approximation (19) can therefore be given by

$$\eta_{i+1} = \eta_i - \Delta\gamma_i\,B(\eta_i)^{-1}\,b(\eta_i).$$

Thereby, we have the approximated HDE

$$\dot\eta(\gamma) = -B(\eta)^{-1}\,b(\eta), \qquad \eta(0) = \eta_0.$$
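For the EF of Section 3, the identity $\nabla_\eta \log p(x; \eta) = T(x) - \nabla_\eta A(\eta)$ lets the coefficients above be read as covariances of the sufficient statistic (this compact form follows from our reconstruction of $B$ and $b$, stated here for orientation):

$$B(\eta) = \operatorname{Cov}_{p(\cdot;\eta)}\bigl[T(x)\bigr], \qquad b(\eta) = \operatorname{Cov}_{p(\cdot;\eta)}\bigl[T(x),\,\Phi(x; y)\bigr],$$

since the constant shift $\nabla_\eta A(\eta)$ drops out of both covariances.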

With these computations, we can implement the homotopy Bayesian approach for inverse problems.

5. Numerical Examples

We list the homotopy Bayesian process in Algorithm 1.

(1) Initial condition: take the initial parameter $\eta_0$ from the prior distribution and set $\gamma_0 = 0$, $i = 0$.
(2) While $\gamma_i < 1$: compute the integrals in (23) using the Monte Carlo algorithm and obtain $\eta_{i+1}$; then set $\gamma_{i+1} = \gamma_i + \Delta\gamma_i$ and $i \leftarrow i + 1$.
(3) Stop when $\gamma_i = 1$ and use the final $\eta_i$ as the approximated posterior parameter.
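To make Algorithm 1 concrete, here is a self-contained toy run on a 1-D conjugate problem: prior $\mathcal{N}(0, 1)$ and Gaussian likelihood with $y = 2$, $\sigma = 0.5$, so the exact posterior is $\mathcal{N}(1.6, 0.2)$. The covariance form of the update is our instantiation based on the reconstruction of (18) and (23) above; the step count and sample sizes are illustrative, not the paper's settings.

```python
# Toy homotopy Bayes run: Euler steps of the approximated HDE for a 1-D
# Gaussian EF with T(u) = (u, u^2), using Monte Carlo estimates of the
# Fisher matrix B = Cov[T] and the drift b = Cov[T, Phi].
import numpy as np

rng = np.random.default_rng(0)
y, sigma = 2.0, 0.5
Phi = lambda u: (y - u) ** 2 / (2 * sigma**2)     # negative log likelihood
T = lambda u: np.stack([u, u**2], axis=1)         # sufficient statistics

# Natural parameters of N(m, v) are (m/v, -1/(2v)); the prior N(0, 1).
eta = np.array([0.0, -0.5])

n_steps, n_mc = 200, 20_000
dgamma = 1.0 / n_steps
for _ in range(n_steps):
    v = -1.0 / (2.0 * eta[1])                     # current variance
    m = eta[0] * v                                # current mean
    u = rng.normal(m, np.sqrt(v), size=n_mc)      # samples from p(.; eta_i)
    Tu, phi = T(u), Phi(u)
    B = np.cov(Tu.T)                              # Cov[T] = Fisher information
    b = ((Tu - Tu.mean(0)) * (phi - phi.mean())[:, None]).mean(0)  # Cov[T, Phi]
    eta = eta - dgamma * np.linalg.solve(B, b)    # Euler step of the HDE

v = -1.0 / (2.0 * eta[1]); m = eta[0] * v
print(f"approx posterior: mean={m:.3f}, var={v:.3f}  (exact: 1.600, 0.200)")
```

Because the Gaussian family contains every intermediate density of this particular homotopy, the Euler path here is exact up to Monte Carlo and time-discretization error.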
5.1. Inverse Heat Conduction Problem

We consider the reconstruction of thermal conductivities from measurements of the solution governed by the following system:

This example was originally presented in [4]. We solve the problem with the homotopy Bayesian approach and investigate the performance of the algorithm. As displayed in Figure 1(a), the background thermal conductivity, denoted by $\kappa_0$, is known, while the conductivities of the material inclusions are denoted by $\kappa_1$ and $\kappa_2$, respectively.

We consider the inverse heat conduction problem (IHCP) posed when the thermal conductivities are unknown and are to be inferred.

We suppose that a number of measurements of the temperature field at given measurement locations are available. The forward model is established by a finite element discretization of (25) [9]. With the discretized model, the measured temperatures are generated by adding synthetic noise to the numerical solutions as follows:

$$d_j = u(x_j; \kappa) + \epsilon_j, \qquad \epsilon_j \sim \mathcal{N}(0, \sigma_n^2).$$

The prior is set to a multivariate lognormal distribution with independent marginals with prescribed mean $\mu_L$ and standard deviation $\sigma_L$. The parameters $\mu_L$ and $\sigma_L$ describe the mean and standard deviation of the lognormal prior. They are related to the parameters $(m, s)$ of the associated normal distribution via $s^2 = \log\bigl(1 + \sigma_L^2/\mu_L^2\bigr)$ and $m = \log \mu_L - s^2/2$. The unknown parameters are represented as $\kappa_i = \exp(m + s\, z_i)$ in terms of the standardized variables $z_i$.
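The lognormal-to-normal conversion uses standard identities; a small helper (illustrative values, not the paper's) makes it explicit:

```python
# Convert a lognormal mean/std (mu_L, sigma_L) into the parameters (m, s)
# of the underlying normal distribution, so exp(N(m, s^2)) has those moments.
import numpy as np

def lognormal_to_normal(mu_L, sigma_L):
    s2 = np.log(1.0 + (sigma_L / mu_L) ** 2)
    m = np.log(mu_L) - 0.5 * s2
    return m, np.sqrt(s2)

m, s = lognormal_to_normal(2.0, 0.5)   # made-up prior mean/std
print(m, s)                             # exp(m + s^2/2) recovers the mean 2.0
```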

In this computation process, we take a uniform partition of the homotopy parameter interval $[0, 1]$ and set the step size $\Delta\gamma$ in (18) accordingly. The integrals in (23) are computed by Monte Carlo numerical integration. We fix the true values of $\kappa_1$ and $\kappa_2$ for generating the data. For the chosen noise level, we display the numerical results in Figure 2. The estimated posterior density for $z$ is given in Figure 2(c). We transform it to the density of $(\kappa_1, \kappa_2)$ by the change of variables $\kappa_i = \exp(m + s\, z_i)$ and show the plot in Figure 2(a). Accordingly, we plot the corresponding posteriors obtained from the FEM evaluation of the forward problem in Figures 2(b) and 2(d), respectively.

We also consider the IHCP with six unknown conductivities. The setup is displayed in Figure 1(b).

The unknown conductivities are inferred from noisy measurements. Fixing the noise level and the true parameters, we display the reconstructed probability densities in Figure 3 using the proposed algorithm.

We see from these numerical results that the homotopy method achieves satisfactory reconstructions for the 2D and 6D IHCPs.

5.2. Inverse Acoustic Obstacle Scattering

We apply the proposed algorithm to an inverse acoustic scattering problem with a sound-soft obstacle. We consider scattering by long cylindrical obstacles with cross-sections that are star-like with respect to the origin. In mathematical terms, we assume that $D \subset \mathbb{R}^2$ is a bounded, simply connected domain with boundary $\partial D$. Then, $\partial D$ can be uniquely represented by a periodic function $r$:

$$\partial D = \bigl\{\, r(t)\,(\cos t, \sin t) : t \in [0, 2\pi) \,\bigr\},$$

where $r(t) > 0$ for $t \in [0, 2\pi)$.

For a given incident plane wave $u^i(x) = e^{\mathrm{i} k\, x \cdot d}$, where $k > 0$ is the wavenumber and $d$ is the direction of incidence, the scattering problem is to find the scattered field $u^s$, or the total field $u = u^i + u^s$, such that

$$\Delta u + k^2 u = 0 \quad \text{in } \mathbb{R}^2 \setminus \overline{D}, \qquad u = 0 \quad \text{on } \partial D, \tag{28a}$$
$$\lim_{|x| \to \infty} \sqrt{|x|}\,\Bigl(\frac{\partial u^s}{\partial |x|} - \mathrm{i} k\, u^s\Bigr) = 0. \tag{28b}$$

It is well known from the Sommerfeld radiation condition (28b) that the scattered field admits the following asymptotic expansion:

$$u^s(x) = \frac{e^{\mathrm{i} k |x|}}{\sqrt{|x|}}\,\Bigl( u_\infty(\hat{x}) + O\bigl(|x|^{-1}\bigr) \Bigr)$$

as $|x| \to \infty$, uniformly for all directions $\hat{x} = x/|x|$. The function $u_\infty$ defined on the unit circle is called the far field pattern of $u^s$. We can write the forward problem by [36]

$$u_\infty = \mathcal{F}(r),$$

where $\mathcal{F}$ is the shape-to-measurement operator. Using the parameterization (27) and taking the noise in the measurements into account, the inverse model is given by

$$y = \mathcal{F}(r) + \eta,$$

with the far field measured for observation directions $\hat{x}$ in the aperture of the observation and incident directions $d$ in the aperture of the incident wave.

The prior is taken as a truncated Fourier series [36]

$$r(t) = a_0 + \sum_{j=1}^{N} \bigl( a_j \cos jt + b_j \sin jt \bigr),$$

where the coefficients $a_j$ and $b_j$ are i.i.d. (independent and identically distributed) Gaussian random variables and the truncation level $N$ is a positive constant.
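A sketch of sampling star-like boundaries from such a prior follows; the truncation level, the decay of the coefficient variances, and the positivity safeguard are our illustrative assumptions, not the paper's exact choices.

```python
# Draw a random star-like boundary r(t)(cos t, sin t) from a
# truncated-Fourier prior with decaying coefficient variances.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0 * np.pi, 256)

def sample_radius(n_modes=8, decay=2.0):
    r = np.ones_like(t)                   # a_0 = 1 as the base radius
    for j in range(1, n_modes + 1):
        std = j ** (-decay)               # faster decay gives smoother shapes
        a, b = rng.normal(0, std), rng.normal(0, std)
        r += a * np.cos(j * t) + b * np.sin(j * t)
    return np.maximum(r, 0.1)             # crude guard to keep r(t) > 0

r = sample_radius()
x, y = r * np.cos(t), r * np.sin(t)       # boundary points via (27)
```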

Synthetic data are used for the numerical reconstruction: Gaussian noise is added to the exact forward model, i.e.,

$$y = \mathcal{F}(r_{\mathrm{true}}) + \eta, \qquad \eta \sim \mathcal{N}(0, \sigma_n^2 I).$$

In the numerical implementation, we fix the wavenumber $k$ and the truncation level $N$ in (32). For two incident waves, we collect the full-aperture data. Several shapes are chosen as test examples: (a) three-lobes; (b) pear; (c) acorn; (d) rounded triangle; (e) rounded rectangle; (f) kite. We draw 100 random samples from the approximated posterior distribution, which are displayed in Figure 4. It can be seen that the proposed algorithm provides satisfactory reconstructions for the obstacle scattering problem.

6. Conclusions

In this paper, we apply the homotopy method to Bayesian inverse problems. The method provides an optimal approximation of the posterior density. We use the EF and MEF to approximate the posterior in this process. For the EF and MEF, we obtain a unified derivative formula for the numerical implementation. For the Gaussian EF in particular, the dimension of the parameters is reduced. We take the KL divergence and the Hellinger distance to measure the deviation between distributions and derive the HDE for the distribution parameters, which yields the approximated posterior distribution. An approximation of the HDE is derived and used in the numerical implementation. Numerical results show that the proposed algorithm is effective. In future work, models with multiple solutions are to be explored. For the present method, high-dimensional problems remain a challenge.

Data Availability

No data were used to support the findings of this study.

Disclosure

The preprint version can be referred to https://arxiv.org/pdf/2203.14771.pdf [37].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Central Government Funds of Guiding Local Scientific and Technological Development for Sichuan Province (Nos. 2021ZYD0007 and 11601067).