Abstract
This paper considers some multiscale radial basis function collocation methods for solving the two-dimensional Monge–Ampère equation with Dirichlet boundary conditions. We discuss and compare the performance of three kinds of multiscale methods. The first method is the cascadic meshfree method, which was proposed by Liu and He (2013). The second method is the stationary multilevel method, which was proposed by Floater and Iske (1996), and is used in this paper for the first time to solve a fully nonlinear partial differential equation. The third is the hierarchical radial basis function method, which is constructed by employing successively refined scattered data sets and scaled compactly supported radial basis functions with varying support radii. Compared with the first two methods, the hierarchical radial basis function method can not only solve the present problem on a single level with higher accuracy and lower computational cost but also produces a highly sparse nonlinear discrete system. These observations are obtained by taking the direct approach of numerical experimentation.
1. Introduction
The Monge–Ampère equation with Dirichlet boundary conditions is a fully nonlinear partial differential equation, which is given by

det(D²u) = f  in Ω, (1)
u = g  on ∂Ω, (2)

where u is a function of n independent variables in a bounded domain Ω ⊂ Rⁿ and D²u is the Hessian of the function u. When restricting it to n = 2, we can rewrite the equation as

u_xx u_yy − u_xy² = f  in Ω ⊂ R², (3)
u = g  on ∂Ω, (4)

with the nonlinear map F(u) := u_xx u_yy − u_xy², so that (3) reads F(u) = f.
The Monge–Ampère equation originated in geometric surface theory and has been widely applied in dynamic meteorology, elasticity, geometric optics, image processing, conformal geometry, optimal transport, and other fields. The numerical solution of the Monge–Ampère equation has been a subject of increasing interest recently. See Brenner et al. [1] and Feng and Neilan [2] for finite element methods and Oberman [3] for finite difference methods. In order to design numerical methods that avoid tedious mesh generation and domain integration and that can deal with any irregular distribution of nodes, some meshfree collocation methods have been studied by Liu and He [4, 5], Böhmer and Schaback [6], and Rashidinia and collaborators [7–9]. However, with meshfree approximation, highly accurate solutions come at the price of severely ill-conditioned algebraic systems and high computational cost. Compactly supported radial basis functions can lead to a very well-conditioned sparse system (see [10–12]) but at the cost of poor approximation accuracy. This is a trade-off principle: small support leads to a well-conditioned system but poor accuracy, while large support yields excellent accuracy at the price of an ill-conditioned system. Consequently, to speed up the meshfree collocation for the Monge–Ampère equation, we need to design some efficient multiscale radial basis function collocation methods. This is the motivation of the paper.
The remainder of this paper is organized as follows. In the next section, we introduce several multiscale RBF collocation methods. Section 3 presents some numerical results, where we compare these multiscale algorithms through direct numerical observation. Section 4 closes the paper with a concise summary.
2. Multiscale RBF Collocation Methods
Let X = {x_1, …, x_N} be a finite point set in a bounded domain Ω ⊂ R². We define a commonly used trial discretization parameter

h_{X,Ω} = sup_{x∈Ω} min_{x_j∈X} ‖x − x_j‖₂, (5)

which can be regarded as the radius of the largest empty ball that can be placed among the data sites X. Generate nested point sets

X_1 ⊂ X_2 ⊂ ⋯ ⊂ X_K ⊂ Ω, (6)

which have trial discretization parameters h_1 > h_2 > ⋯ > h_K. Let Y_j be the newly added point set in X_j; then we have Y_1 = X_1 and Y_j = X_j ∖ X_{j−1} for j = 2, …, K.
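These quantities are easy to compute numerically. The following Python sketch (our own illustration; the paper's implementation is in Matlab, and all helper names here are ours) approximates the fill distance h_{X,Ω} on the unit square by sampling a dense evaluation grid and checks that each grid refinement shrinks it:

```python
import math

def fill_distance(points, domain_samples):
    """Approximate h_{X,Omega}: the radius of the largest ball in the
    domain containing no data site, estimated on a dense sample set."""
    return max(
        min(math.hypot(sx - px, sy - py) for (px, py) in points)
        for (sx, sy) in domain_samples
    )

def grid(n):
    """(n+1) x (n+1) equally spaced points on the unit square."""
    return [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]

# nested point sets X_1 subset X_2 subset X_3 with (2^j+1)^2 points each
levels = [grid(2 ** j) for j in range(1, 4)]
samples = grid(40)  # dense samples used to estimate the sup over Omega
h = [fill_distance(X, samples) for X in levels]
# each refinement halves the fill distance: h_1 > h_2 > h_3
assert h[0] > h[1] > h[2]
```

For the coarsest grid the maximum is attained at a cell center, so h_1 equals half the cell diagonal, 0.25·√2 ≈ 0.3536.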
Given a compactly supported radial function Φ, we can rescale it (in the translation-invariant kernel-based case) as

Φ_ε(x, y) = Φ(ε(x − y)), ε > 0. (7)

In order to make the support radii of the basis functions become smaller and smaller with the addition of new points, we will select

ε_j = 2^{j−1} ε_1, j = 1, …, K. (8)

Let W_j = span{Φ_{ε_j}(·, y) : y ∈ Y_j}, with Y_1 = X_1 and Y_j = X_j ∖ X_{j−1} for j = 2, …, K. Then, we can build two types of trial spaces with the form

V_K = W_1 + W_2 + ⋯ + W_K = span{Φ_{ε_j}(·, y) : y ∈ Y_j, 1 ≤ j ≤ K}, (9)
V_j = span{Φ_{ε_j}(·, x) : x ∈ X_j}, 1 ≤ j ≤ K. (10)

Here, we have written Φ_{ε_j}(·, y) for the function x ↦ Φ(ε_j(x − y)). V_K and V_j have multiscale expressions, which differ from the traditional radial basis trial space

U_K = span{Φ_ε(·, x) : x ∈ X_K} (11)

with a single fixed scale parameter ε.
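To make the scaling concrete, here is a small Python illustration (our own sketch; the paper's code is Matlab) of the rescaled Wendland function φ_{3,3} and the geometric shrinking of its support radius 1/ε_j:

```python
import math

def wendland33(r):
    """Wendland's C^6 function phi_{3,3}(r) = (1-r)_+^8 (32r^3+25r^2+8r+1)."""
    if r >= 1.0:
        return 0.0
    return (1.0 - r) ** 8 * (32 * r**3 + 25 * r**2 + 8 * r + 1)

def scaled_basis(eps, center):
    """Translation-invariant rescaled kernel Phi_eps(x, y) = phi(eps*|x - y|)."""
    cx, cy = center
    return lambda x, y: wendland33(eps * math.hypot(x - cx, y - cy))

# scale parameters eps_j = 2^(j-1) * eps_1 shrink the support radius 1/eps_j
eps1 = 0.5
eps = [eps1 * 2 ** j for j in range(3)]   # 0.5, 1.0, 2.0
radii = [1.0 / e for e in eps]            # 2.0, 1.0, 0.5

# a basis function on level 3 vanishes outside a ball of radius 0.5
phi3 = scaled_basis(eps[2], (0.5, 0.5))
assert phi3(0.5, 0.5) == 1.0   # phi(0) = 1 at the center
assert phi3(1.0, 0.5) == 0.0   # distance 0.5 -> eps*r = 1, outside the support
```

Basis functions on finer levels therefore act more and more locally, which is the source of the sparsity observed later.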
Let M be the total number of the centers and φ_1, …, φ_M be basis functions from one of the trial spaces above. Then, with a trial function u_h = Σ_{k=1}^M c_k φ_k, the nonsymmetric kernel-based collocation method for equations (3) and (4) generates the nonlinear system

F(u_h)(x_i) = f(x_i), i = 1, …, m_I,
u_h(b_i) = g(b_i), i = 1, …, m_B,

for some given interior collocation sites x_i, boundary collocation sites b_i, and unknown coefficients c_1, …, c_M. We need to compute the second-order derivatives of the basis functions because the nonlinear operator F now acts on them. The derivatives can be obtained by the chain rule from Table 1.
For example, Wendland's C⁶ function φ_{3,3}(r) = (1 − r)₊⁸(32r³ + 25r² + 8r + 1) with scaling parameter ε has second-order partial derivatives that can be obtained by the chain rule; the explicit expressions used in our implementation are given in the Appendix.
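The implementation in the Appendix stores compactly supported RBFs in Fasshauer's "shifted" form, where DistanceMatrixCSRBF returns s = 1 − εr instead of εr. The following Python check (our own, not from the paper) confirms that the polynomial s⁸(66 − 154s + 121s² − 32s³) used in the Appendix agrees with φ_{3,3}(r) = (1 − r)⁸(32r³ + 25r² + 8r + 1) under the substitution s = 1 − r:

```python
def phi33(r):
    """Wendland's phi_{3,3} on its support 0 <= r <= 1."""
    return (1 - r) ** 8 * (32 * r**3 + 25 * r**2 + 8 * r + 1)

def phi33_shifted(s):
    """Shifted form evaluated at s = 1 - r, as used in the Appendix code."""
    return s**8 * (66 - 154 * s + 121 * s**2 - 32 * s**3)

# the two forms agree on the whole support
for k in range(101):
    r = k / 100
    assert abs(phi33(r) - phi33_shifted(1 - r)) < 1e-12
```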
2.1. Cascadic Meshfree Method
Once a convex solution is sought, equations (3) and (4) can be rewritten in the equivalent form

Δu = √(u_xx² + u_yy² + 2u_xy² + 2f)  in Ω,
u = g  on ∂Ω.

This is still a fully nonlinear partial differential equation, although the nonlinear terms are moved to the right-hand side. Benamou et al. [13] have considered an iterative finite difference method for the above equation, but it becomes very difficult to construct the approximations of u_xx, u_yy, and u_xy on an unstructured mesh. An efficient iterative algorithm based on kernel-based meshfree discretization was proposed in [14]. The cascadic meshfree method is as follows.
By observing the above calculation process, it is easy to find that Algorithm 1 avoids tedious interpolation and makes the higher-order derivatives u_xx, u_yy, and u_xy easy to compute. Compared with the traditional cascadic multigrid method [15], the cascadic meshfree method requires neither domain nor boundary discretization; i.e., it is truly meshfree, and it can flexibly handle physical domains with curved boundaries.
Algorithm 1: The cascadic meshfree method.
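As a sanity check on the equivalent form Δu = √(u_xx² + u_yy² + 2u_xy² + 2f) that drives the cascadic iteration, the following Python snippet (our own) verifies the identity for the exact solution u(x, y) = exp((x² + y²)/2) used later in Section 3, for which f = det D²u = (1 + x² + y²)exp(x² + y²):

```python
import math

def hessian_exact(x, y):
    """Second derivatives of u(x, y) = exp((x^2 + y^2)/2)."""
    u = math.exp((x * x + y * y) / 2)
    return (1 + x * x) * u, (1 + y * y) * u, x * y * u  # u_xx, u_yy, u_xy

for (x, y) in [(0.1, 0.2), (0.5, 0.5), (0.9, 0.3)]:
    uxx, uyy, uxy = hessian_exact(x, y)
    f = uxx * uyy - uxy ** 2                 # Monge-Ampere right-hand side
    laplacian = uxx + uyy
    # equivalent form used by the cascadic iteration
    rhs = math.sqrt(uxx**2 + uyy**2 + 2 * uxy**2 + 2 * f)
    assert abs(laplacian - rhs) < 1e-10
```

The identity follows from (Δu)² = u_xx² + u_yy² + 2u_xx u_yy and 2u_xx u_yy = 2f + 2u_xy², valid for convex u with Δu > 0.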
2.2. Stationary Multilevel Method
With a stationary multiscale algorithm [16], the condition number of the discrete matrix can be kept relatively small, and the computation can be performed efficiently.
In this algorithm, the present problems (3) and (4) are solved first on the coarsest level by one of the compactly supported radial basis functions with a larger support (usually scaling the size of the support with the fill distance). Then, the residual is formed and computed on the next finer level by the same compactly supported radial basis function but with a smaller support. This process is repeated and stopped on the finest level, and the final numerical solution is the sum of the approximations on all levels. For interpolation problems, a linear convergence order has been proved in Sobolev spaces on the sphere in [17] and on bounded domains in [18]. Applications of this algorithm to solving linear partial differential equations were proposed on spheres in [19] and on bounded domains in [20–22]. A detailed discussion of the stationary multilevel algorithm is given in the recent papers [23, 24]. Algorithm 2 is the specific form for the nonlinear situation.
Algorithm 2: The stationary multilevel RBF collocation method.
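For the simpler setting of interpolation, the residual-correction structure of a stationary multilevel scheme can be sketched in a few lines. The following Python toy example (our own illustration, not the paper's Matlab code) runs a two-level scheme in one dimension with Wendland's C² function φ(r) = (1 − r)₊⁴(4r + 1), halving the support radius on the finer level, and checks that the finer-level correction reduces the maximum error:

```python
import math

def phi(r):
    """Wendland's C^2 function (1 - r)_+^4 (4r + 1)."""
    return (1 - r) ** 4 * (4 * r + 1) if r < 1 else 0.0

def solve(A, b):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[p] = M[p], M[col]
        for i in range(col + 1, n):
            m = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= m * M[col][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def interpolate(centers, eps, values):
    """Coefficients of s(x) = sum_k c_k phi(eps*|x - x_k|) matching values."""
    A = [[phi(eps * abs(x - xk)) for xk in centers] for x in centers]
    return solve(A, values)

f = lambda x: math.sin(math.pi * x)
levels = [[i / 4 for i in range(5)], [i / 8 for i in range(9)]]
eps_levels = [1.0, 2.0]          # the support radius 1/eps halves on level 2
terms = []                       # (centers, eps, coefficients) per level

def approx(x):
    """Sum of the level corrections computed so far."""
    return sum(ck * phi(eps * abs(x - xk))
               for (X, eps, c) in terms for ck, xk in zip(c, X))

max_err = []
for X, eps in zip(levels, eps_levels):
    resid = [f(x) - approx(x) for x in X]   # residual on the current level
    terms.append((X, eps, interpolate(X, eps, resid)))
    xs = [i / 100 for i in range(101)]
    max_err.append(max(abs(f(x) - approx(x)) for x in xs))
# the finer-level residual correction reduces the error
assert max_err[1] < max_err[0]
```

The nonlinear Algorithm 2 follows the same pattern, except that each level solves a collocation system for the Monge–Ampère residual rather than an interpolation system.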
2.3. Hierarchical Radial Basis Function Method
In this subsection, we consider a very simple algorithm for solving the Monge–Ampère equations (3) and (4). If we choose the multiscale trial space to construct the kernel-based collocation, then we obtain the single-level Algorithm 3.
Algorithm 3: The hierarchical radial basis function method.
Algorithm 3 is called the hierarchical radial basis function method because it is similar to the multilevel splitting of finite element spaces [25]. In fact, the multiscale trial spaces are obtained by splitting the reproducing kernel Hilbert space (or native space) of the corresponding strictly positive definite radial basis function. We now consider the Matlab implementation of Algorithm 3. Let A_j be the radial basis function matrix on level j, which has the entries A_j(i, k) = Φ_{ε_j}(x_i, y_k), where the x_i run over the collocation sites and the y_k over the centers on level j.
Here, the collocation sites consist of interior collocation data and boundary collocation data in the closure of Ω. Let A_j^{xx}, A_j^{yy}, and A_j^{xy} be the matrices formed by the corresponding second-order derivatives of the basis functions. Then, the approximate values of u, u_xx, u_yy, and u_xy on the data can be represented by the column vectors Ac, A^{xx}c, A^{yy}c, and A^{xy}c, respectively, with A = [A_1 ⋯ A_K] (and similarly for the derivative matrices) and coefficient vector c. And the nonlinear discrete system in Algorithm 3 can be written as

(A^{xx}c) ∘ (A^{yy}c) − (A^{xy}c) ∘ (A^{xy}c) = f,
Bc = g,

where ∘ denotes the componentwise product and B is the evaluation matrix of the basis functions at the boundary collocation data.
In the next numerical experiments, the linear discrete system will be solved by the least squares method, which is the backslash ("\") operation in Matlab. The Matlab function lsqnonlin will be used for solving the nonlinear discrete system. Computations were performed on a laptop with a 2.4 GHz Intel Core i7 processor, using Matlab under the Windows 7 operating system.
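The lsqnonlin routine minimizes the squared residual norm of the nonlinear discrete system. As a minimal illustration of the underlying idea (our own sketch, not the paper's code), the following Python function performs a Gauss–Newton iteration with a finite-difference Jacobian on a toy two-variable system:

```python
import math

def gauss_newton_2d(G, c, iters=20, h=1e-6):
    """Minimize |G(c)|^2 for c in R^2 by Gauss-Newton with a
    finite-difference Jacobian (a toy stand-in for lsqnonlin)."""
    for _ in range(iters):
        r = G(c)
        m = len(r)
        # finite-difference Jacobian, column by column
        J = [[(G([c[0] + h, c[1]])[i] - r[i]) / h,
              (G([c[0], c[1] + h])[i] - r[i]) / h] for i in range(m)]
        # normal equations (J^T J) d = -J^T r, solved by Cramer's rule
        a = sum(J[i][0] ** 2 for i in range(m))
        b = sum(J[i][0] * J[i][1] for i in range(m))
        d = sum(J[i][1] ** 2 for i in range(m))
        g1 = -sum(J[i][0] * r[i] for i in range(m))
        g2 = -sum(J[i][1] * r[i] for i in range(m))
        det = a * d - b * b
        c = [c[0] + (g1 * d - b * g2) / det, c[1] + (a * g2 - b * g1) / det]
    return c

# toy nonlinear system with root c = (sqrt(2), sqrt(2))
G = lambda c: [c[0] * c[1] - 2.0, c[0] - c[1]]
c = gauss_newton_2d(G, [1.0, 1.5])
assert abs(c[0] - math.sqrt(2)) < 1e-8 and abs(c[1] - math.sqrt(2)) < 1e-8
```

In the actual computations, G is the stacked Monge–Ampère and boundary residual of the trial function, and the coefficient vector c has one entry per center.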
3. Numerical Examples
In this section, we discuss the implementation of Algorithms 1–3. We consider the multiscale collocation methods for the Monge–Ampère equations (3) and (4) with the true solution

u(x, y) = exp((x² + y²)/2),

for which the right-hand side is f(x, y) = (1 + x² + y²) exp(x² + y²).
Before the nonlinear solver starts working in Algorithms 2 and 3, a reasonable choice for the initial data is obtained by solving the linear equation Δu = √(2f). This method of selecting initial values is natural and has been used in [13, 14].
3.1. Square Domain
First, we generate nested sets of Halton points (9, 25, 81, and 289 points on levels 1 to 4) in the interior of the regular domain Ω = [0, 1]² and create additional equally spaced collocation points for the boundary conditions on each level, so that the numbers of boundary data are successively 8, 16, 32, and 64. In our experiments, Wendland's function φ_{3,3} with the scale parameters ε_j is used to construct the trial spaces. 40 × 40 equally spaced evaluation points are used to compute the RMS errors.
Table 2 shows three kinds of strategies for choosing trial data and testing data. A detailed explanation is as follows:
TT1: trial data and testing data are equal.
TT2: trial data and testing data are equal only on the boundary, while additional collocation points are created in the interior of the domain.
TT3: trial data only include interior points, while collocation is implemented both in the interior of the domain and on the boundary.
Let ε_j be the scale parameter defined by (8) on level j. Then, for different initial values of ε, the numerical results (including RMS errors and convergence rates) are displayed in Tables 3 to 5. It should be noted that Algorithms 1 and 2 perform poorly and fail to converge if the centers are placed only in the interior of the domain. This is the reason for the lack of the corresponding TT3 experiments in Tables 3 and 4. But in Algorithm 3, we let the centers include only the Halton data, while the collocation sites contain the centers and additional data on the boundary. This is because the boundary data are not nested.
Using the TT1 strategy with a given initial value, the cascadic meshfree method converges, although the convergence seems to cease at a later stage. But when implementing the algorithm by the TT2 strategy with the same initial value, the cascadic meshfree method does not converge. This does not contradict the linear and nonlinear theory of Schaback [26] and Böhmer and Schaback [6, 27]. The theory holds that testing with many more degrees of freedom is necessary because the small trial space must be tested on a fine-grained space discretization. But the TT2 strategy does not produce a good approximation on the first level, so the resulting solution is no longer an ideal right-hand side on the next level.
The same set of experiments as for the cascadic meshfree method is displayed in Table 4 for the stationary multilevel RBF collocation method. Different from the numerical results of the stationary multilevel method for solving linear elliptic problems (see Table 41.4 in Fasshauer's book [28]), Algorithm 2 is nonconvergent with the same initial value. It seems that the TT2 strategy is more numerically stable than the TT1 strategy, but with limited accuracy. Perhaps the scale parameters ε_j should be chosen differently when using the stationary multilevel RBF collocation method to solve a fully nonlinear partial differential equation. In the absence of theoretical guidance, we have not obtained satisfactory numerical results.
In Table 5, we list the RMS errors and convergence rates for the hierarchical radial basis function collocation method. We observe that this method has an ideal convergence behavior: even for a relatively large initial parameter, the convergence rate of this method is close to 1.5, and with a suitable initial parameter, the RMS error on level 4 is remarkably small. Algorithm 3 is a single-level collocation method with a multiscale trial space, so there is no relationship between the numerical solutions on consecutive levels. We do not obtain an ideal numerical result on level 1, where only 9 centers are arranged in the interior of the domain; in addition, these 9 centers are far from the boundary. But this situation is significantly improved after the addition of the 16 new centers. See Figure 1 for the initial scattered centers and their refinement. Clearly, compared with the first two algorithms, Algorithm 3 can solve the model problem with higher accuracy and lower computational cost.

(Figure 1: (a) the initial scattered centers; (b) their refinement.)
In Figure 2, we show plots of the RMS errors of the above three algorithms. The plots show the exponential decay of the error of the hierarchical radial basis function collocation.

3.2. Sector Domain
In this subsection, we consider numerically solving problems (3) and (4) in a sector domain (see Figure 3), with the same true solution as in the previous example. We first generate a set of successively refined Halton points in the sector. We place some equidistant points on the straight and the curved parts of the boundary, respectively, with the total number of boundary points growing with the level. The interior points on each level are used to compute the RMS errors. As in the previous example, Wendland's function φ_{3,3} with the scale parameters ε_j is used to construct the trial spaces. The numerical results are displayed in Tables 6–8 and Figure 4.

(Figure 3: the sector domain and the scattered centers, panels (a) and (b).)
(Figure 4: numerical results of the three algorithms on the sector domain.)
From the numerical results, we observe that TT2 testing is not an ideal strategy for the cascadic meshfree method, and that, given a relatively small initial ε, the stationary multilevel method converges, although the convergence becomes very slow at a later stage. But the performance of the hierarchical radial basis function method differs from that of the first two algorithms: with a suitable initial parameter, the RMS error on level 4 is again sharply reduced. Figure 4 clearly shows the superiority of the hierarchical radial basis function method.
4. Conclusions
The paper considered three kinds of multiscale RBF collocation methods for solving the fully nonlinear Monge–Ampère equation. Through specific examples, we found that the hierarchical radial basis function collocation method has higher accuracy and lower computational complexity. However, a convergence proof for the hierarchical radial basis function collocation method is still missing. Such a proof will depend on the approximation properties of the multiscale trial spaces, a new inverse inequality, and a sampling theorem. Based on the convergence theory of Böhmer and Schaback [6, 27], we will consider the following theoretical aspects in future work [29]:
(1) Trial Approximation. We will consider the approximation power of the multiscale trial spaces under some Sobolev norms. That is to say, for suitable approximation maps Π_h, we will prove error bounds of the form ‖u − Π_h u‖ ≤ C h^α ‖u‖_U, where h is the trial discretization parameter, α is the order of approximation, and U usually is the reproducing kernel Hilbert space, which satisfies an additional regularity requirement.
(2) Stability of the Testing Strategy. We will prove the boundedness and coercivity of the nonlinear testing operator. This will depend on a Poincaré-type inequality and a Bernstein-type inequality. The detailed discussion will be given in [29].
Appendix
Following are the Matlab codes of the hierarchical radial basis function method for the Monge–Ampère equation. The DistanceMatrixCSRBF function and 2D Halton data can be cited from [28]:

% Main routine
rbf = @(e,r) r.^8.*(66*spones(r) - 154*r + 121*r.^2 - 32*r.^3);
Lrbf = @(e,r) 44*e^2*r.^6.*(84*spones(r) - 264*r + 267*r.^2 - 88*r.^3);
% exact solution and Monge-Ampere right-hand side
u = @(x,y) exp((x.^2 + y.^2)/2);
Fu = @(x,y) (1 + x.^2 + y.^2).*exp(x.^2 + y.^2);
K = 4; neval = 40; gridtype = 'h';
N = (2^K + 1)^2;
name = sprintf('Data2D_%d%s', N, gridtype);
load(name);
xdata = dsites(:,1); ydata = dsites(:,2);
ep = 0.5*2.^[0:K-1];
% create neval-by-neval equally spaced points in the unit square
grid = linspace(0,1,neval); [xe,ye] = meshgrid(grid);
epoints = [xe(:) ye(:)];
% compute exact solution
exact = u(epoints(:,1), epoints(:,2));
N1 = [0 9 25 81 289 1089 4225];
for k = 1:K
  N2 = (2^k + 1)^2;
  intdata = [xdata(1:N2) ydata(1:N2)];
  % add boundary points
  sn = sqrt(N2); bdylin = linspace(0,1,sn)';
  bdy0 = zeros(sn-1,1); bdy1 = ones(sn-1,1);
  bdydata = [bdylin(1:end-1) bdy0; ...
             bdy1 bdylin(1:end-1); ...
             flipud(bdylin(2:end)) bdy1; ...
             bdy0 flipud(bdylin(2:end))];
  rhs1 = Fu(intdata(:,1), intdata(:,2));
  rhs2 = u(bdydata(:,1), bdydata(:,2));
  % solve linear system on level k (initial guess for the nonlinear solver)
  c{k} = linfh(k,N1,ep,xdata,ydata,intdata,bdydata,rhs1,rhs2);
  % solve nonlinear system on level k
  coef{k} = lsqnonlin(@(c) nonfh(c,k,N1,ep,xdata,ydata, ...
            intdata,bdydata,rhs1,rhs2), c{k});
  % assemble evaluation matrix
  for j = 1:k
    xctrs{j} = [xdata(N1(j)+1:N1(j+1)) ydata(N1(j)+1:N1(j+1))];
    DM_eval = DistanceMatrixCSRBF(epoints, xctrs{j}, ep(j));
    EMK{j} = rbf(ep(j), DM_eval);
  end
  EM = EMK{1};
  if (k > 1)
    for j = 2:k
      EM = [EM EMK{j}];
    end
  end
  % evaluate RBF approximation
  Pf = EM*coef{k};
  % compute RMS error on the evaluation grid
  rms_err = norm(Pf - exact)/neval;
  if (k > 1)
    rms_rate = log(rms_err_old/rms_err)/log(2);
  end
  rms_err_old = rms_err;
end
%------------------------------------
% linear solution function
function c = linfh(k,N1,ep,xdata,ydata,idata,bdata,rhs1,rhs2)
  rbf = @(e,r) r.^8.*(66*spones(r) - 154*r + 121*r.^2 - 32*r.^3);
  dxxrbf = @(e,r,dx) 528*e^4*dx.^2.*r.^6.*(7*spones(r) - 6*r) ...
           - 22*e^2*r.^7.*(24*spones(r) - 39*r + 16*r.^2);
  dyyrbf = @(e,r,dy) 528*e^4*dy.^2.*r.^6.*(7*spones(r) - 6*r) ...
           - 22*e^2*r.^7.*(24*spones(r) - 39*r + 16*r.^2);
  dxyrbf = @(e,r,dx,dy) 528*e^4*dx.*dy.*r.^6.*(7*spones(r) - 6*r);
  Lrbf = @(e,r) 44*e^2*r.^6.*(84*spones(r) - 264*r + 267*r.^2 - 88*r.^3);
  % build trial data
  for j = 1:k
    xctrs{j} = [xdata(N1(j)+1:N1(j+1)) ydata(N1(j)+1:N1(j+1))];
    cent = xctrs{j};
    D1 = DistanceMatrixCSRBF(idata, cent, ep(j));
    D2 = DistanceMatrixCSRBF(bdata, cent, ep(j));
    CMK{j} = Lrbf(ep(j), D1);
    BMK{j} = rbf(ep(j), D2);
  end
  CM = CMK{1};
  BM = BMK{1};
  if (k > 1)
    for j = 2:k
      CM = [CM CMK{j}];
      BM = [BM BMK{j}];
    end
  end
  LCM = [CM; BM];
  Lrhs = [sqrt(2*rhs1); rhs2];
  c = LCM\Lrhs;
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This paper used some Halton datasets and adapted parts of the code from Fasshauer's book [28]; the authors are grateful for the freely available CD accompanying [28]. The research of the first author was partially supported by the Natural Science Foundation of China (no. 11501313), the Natural Science Foundation of Ningxia Province (no. 2019AAC02001), and the Third Batch of Ningxia Youth Talents Supporting Program (no. TJGC2018037). The research of the second author was partially supported by the Natural Science Foundation of Ningxia Province (no. NZ2018AAC03026) and the Fourth Batch of Ningxia Youth Talents Supporting Program (no. TJGC2019012).