Abstract

In this article, we consider the Nash equilibrium of a stochastic differential game in which the state process is governed by a controlled stochastic partial differential equation and the information available to the controllers is possibly less than the full information. All the system coefficients and the objective performance functionals are assumed to be random. We find an explicit strong solution of the linear stochastic partial differential equation, together with a generalized probabilistic representation for this solution, with the benefit of Kunita's stochastic flow theory. We use Malliavin calculus to derive a stochastic maximum principle for the optimal control and thereby obtain the Nash equilibrium of this type of stochastic differential game problem.

1. Introduction

Let be a measure space with finite measure, where is a bounded, open subset of with regular boundary and is the Lebesgue measure. Suppose the state process , , is a controlled stochastic process in of the form with boundary condition , where the coefficients are Borel measurable functions, is a closed convex set, is a partial differential operator of order , and is the gradient acting on the space variable . Here, is a one-dimensional Brownian motion on a given filtered probability space . The stochastic processes are two control processes taking values in a given closed convex set for all , for a given fixed . Also, are adapted to a given filtration , where , for every . Here, represents the information available to the controller at time t. For example, we could take meaning that the controller receives delayed information compared to . We refer to [1, 2] for more details about optimal control under partial information or partial observation.
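A standard choice for the delayed-information example mentioned above (stated here with hypothetical symbols, since the displayed formula did not survive extraction) is the delay filtration
\[
\mathcal{E}_t = \mathcal{F}_{(t-\delta)^+}, \qquad t \in [0,T],
\]
for a fixed delay $\delta > 0$, so that the controller at time $t$ only sees what happened up to time $t-\delta$.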

Let and be given measurable functions such that, for every , the functions and are bounded and continuously differentiable. Suppose we are given two performance functionals of the following form, for , where is a finite Lebesgue measure on the above given measurable space and denotes the expectation with respect to the probability measure . Let denote the given family of controls , contained in the set of -adapted controls, such that (1) has a unique strong solution up to time and, for all ,

The partial information nonzero-sum stochastic partial differential game problem under consideration is stated as follows:

Find and such that

Such a control is called a Nash equilibrium. The intuitive idea is that there are two players, Player I and Player II. Player I controls and Player II controls . Given that each player knows the equilibrium strategy chosen by the other player, neither player has anything to gain by changing only his or her own strategy (i.e., by changing unilaterally). Note that since we allow the coefficients to be stochastic processes, and also because our controls are required to be -adapted, this problem is not of Markovian type and hence cannot be solved by dynamic programming. In this paper, we use Malliavin calculus techniques, see [3, 4], to obtain a maximum principle for this general non-Markovian stochastic partial differential game with partial information. Our approach still works for any finite number of players instead of the two-player formulation.
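Schematically, writing $J_1$, $J_2$ for the two performance functionals and $\mathcal{A}_1$, $\mathcal{A}_2$ for the admissible control families (hypothetical notation, since the displayed conditions did not survive extraction), the Nash equilibrium requirement is:
\[
J_1(u_1^*, u_2^*) = \sup_{u_1 \in \mathcal{A}_1} J_1(u_1, u_2^*),
\qquad
J_2(u_1^*, u_2^*) = \sup_{u_2 \in \mathcal{A}_2} J_2(u_1^*, u_2).
\]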

The problem of finding sufficient conditions for optimality for a stochastic optimal control problem with an infinite dimensional state equation, mostly along the lines of the Pontryagin maximum principle, was already addressed in the early 1980s in the pioneering paper [1]. The Pontryagin maximum principle for dynamic systems modeled by stochastic partial differential equations (SPDEs) is a well-known result, and we refer to [1, 5–11] and the references therein for more details about the maximum principle for SPDEs. Despite the fact that the finite dimensional case has been completely solved by [12], the infinite dimensional case requires at least one of the following three assumptions, see [13, 14]:
(i) The control domain is convex;
(ii) The diffusion does not depend on the control;
(iii) The state equation and performance functional are both linear in the state variable.

So, the maximum principle for the infinite dimensional case still has important open issues, both on the side of the generality of the abstract model and on the side of its applicability to systems modeled by SPDEs. In this paper, the diffusion may depend on the control, and the state equation and performance functional may both be nonlinear in the state variable, but we assume that the control domain is convex. That is to say, we only assume that (i) holds; we do not need (ii) or (iii) to hold.

There are, however, few references on the maximum principle for stochastic differential games of systems described by stochastic partial differential equations. In the present paper, we use Malliavin calculus techniques to obtain a maximum principle for this general non-Markovian stochastic differential game with partial information for systems described by stochastic partial differential equations, without the use of backward stochastic differential equations. To use Malliavin calculus, a strong solution of the stochastic partial differential equation with a generalized probabilistic representation will be given with the benefit of Kunita's stochastic flow theory. This stochastic flow approach has been used to derive the optimal control of stochastic partial differential equations with jumps in [15], and the ideas of [15] have greatly inspired our work. Our paper is related to the recent paper [16], where a maximum principle for a stochastic control problem (not for a stochastic differential game problem) with partial information is dealt with. However, the approach in [16] needs the solution of the backward stochastic differential equation for the adjoint processes. This is often a difficult point, particularly in the partial information case.

We summarize the main contributions of this paper as follows: (i) we find a strong solution of a stochastic partial differential equation, which follows from the theory of stochastic flows for stochastic processes; (ii) all coefficients of the controlled stochastic partial differential equation studied in this paper are random, and the coefficients of the objective performance functionals are also random; (iii) with the help of Malliavin calculus for Brownian motion, we obtain the Nash equilibrium for our stochastic partial differential game with partial information by establishing the corresponding stochastic maximum principles for the stochastic optimal controls. It is worth noting that the diffusion term in the controlled stochastic partial differential equation can depend on the control variables of both players, and neither the controlled stochastic partial differential equation nor the objective performance functionals need be linear in the state variable.

The article is organised as follows: in Section 2, we present the explicit strong solution of a stochastic partial differential equation with the benefit of the stochastic flow theory for stochastic processes. In Section 3, we provide some properties of Malliavin calculus for Brownian motion, especially the chain rule and the duality formula for the Malliavin derivative. In Section 4, we derive the Nash equilibrium for our stochastic partial differential game with partial information with the help of the explicit strong solution and Malliavin calculus, via a stochastic maximum principle. In Section 5, examples are given to illustrate our main results, and the conclusion is given in the final section.

2. Strong Solution of Linear SPDE

In this section, we recall some definitions of stochastic flows and preliminary results; for more details about stochastic flows, see [17, 18]. Let . Denote by the space of all -times continuously differentiable functions such that where for all compact sets . For a multiindex of non-negative integers , the operator is defined as where . Further, for sets , introduce the norm where

We will simply write for .
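For orientation, the standard conventions behind these definitions (following Kunita [18]; the exact displayed formulas did not survive extraction) are
\[
D^{\alpha} = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}},
\qquad |\alpha| = \alpha_1 + \cdots + \alpha_d ,
\]
with seminorms on a compact set $K$ typically of the form
\[
\|f\|_{k;K} = \sum_{|\alpha| \le k} \sup_{x \in K} |D^{\alpha} f(x)| .
\]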

Define

Set

Define the symmetric matrix function as

We assume that, for some and ,

For all , the stochastic process exists and

Further, suppose that follows the SPDE with initial condition and boundary condition where , and .

In the following, we assume that the differential operator in the above SPDE (18) is of the form where where is a continuous function in , belongs to for some , and is bounded from above. Here,

Furthermore, we require the following conditions:
(L-i) is an elliptic differential operator (an illustrative form is displayed after this list).
(L-ii) There exists a non-negative symmetric continuous matrix function such that ; hence for all , for a constant and some .
(L-iii) The functions are continuous in and satisfy for a constant and some and .
(L-iv) The functions and are uniformly bounded.
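As an illustration of what conditions (L-i)–(L-ii) typically require (stated with hypothetical symbols $a_{ij}$ and $\lambda$, since the paper's displayed inequalities did not survive extraction), uniform ellipticity of a second-order operator reads
\[
\sum_{i,j=1}^{d} a_{ij}(x,t)\,\xi_i \xi_j \;\ge\; \lambda\, |\xi|^2
\qquad \text{for all } \xi \in \mathbb{R}^d ,
\]
for some constant $\lambda > 0$.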

Here, the operator does not depend on the controls or , that is, there are no controls in and . In this section, aided by stochastic flow theory, we will give a probabilistic representation of the explicit strong solution of the above linear SPDE (18).

Now, we derive the announced probabilistic representation of a solution of the linear SPDE (18). Let be a -valued Brownian motion, that is, a continuous process with independent increments, on another probability space . Assume that this stochastic process has local characteristics and , where the correction term is given by

For instance, has a decomposition where

Here, is a Brownian motion defined on an auxiliary probability space .

Then, let us consider the SPDE on the product space : where is the martingale part of and

Taking the expectation on both sides of (28) gives the following representation for the solution to the linear SPDE (18):

Theorem 1. Under the above specified conditions, the following probabilistic representation of the solution to linear SPDE (18) holds:

Proof. Taking the expectation on both sides of equation (28), we obtain Since is the martingale part of in the probability space , the second term on the right side of (31) equals zero; hence, by Fubini's theorem, we arrive at Hence, by using (49) and (60) in (32), we find here, , and Therefore, letting in (33), we see that solves the linear SPDE (18).

Remark 2.
(i) For the probabilistic representation of the solution to a linear SPDE, we also refer to Theorem 6.2.5 in [18]. Different from Theorem 6.2.5 in [18], the linear SPDE (18) contains the derivative of the control term.
(ii) Using the definition of and noting that (x,t) and are independent, the above linear SPDE (28) can be recast as a first-order SPDE in the sense of the Stratonovich integral using stochastic flow theory: The connection between the Itô and the Stratonovich integral of a semimartingale with respect to a semimartingale is given by where the notation is called the Itô circle and stands for integration in the sense of the Stratonovich integral. For more details about the Stratonovich integral, see [19].
In order to use this probabilistic representation (30) in the proof of our general stochastic maximum principle for stochastic partial differential games, we proceed to develop an expression for in Theorem 1. Let be the solution of the Stratonovich SDE where and stands for integration in the sense of the Stratonovich integral. Then, by formula (86) of Section 6.1 in [18] (where in (76) of Section 6.1 in [18]), we obtain the following representation of : where denotes backward integration and is the inverse flow of the stochastic flow .
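Schematically, in Kunita's framework [18] and with hypothetical symbols ($b$, $\sigma$ for the flow coefficients, $\hat B$ for the auxiliary Brownian motion), the construction looks as follows: one first solves a Stratonovich SDE for the flow,
\[
d\phi_t(x) = b\big(\phi_t(x), t\big)\,dt + \sigma\big(\phi_t(x), t\big)\circ d\hat B_t, \qquad \phi_0(x) = x ,
\]
and then expresses the solution by composing the data with the inverse flow $\psi_t = \phi_t^{-1}$, for instance as an expectation of the form $\tilde E\big[u_0(\psi_t(x))\,W_t(x)\big]$ for a suitable weight process $W_t(x)$. This is only a sketch of the shape of the representation, not the paper's exact formula.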
For the general case, we consider a general initial condition , that is, holds, where . Then, the probabilistic representation (30) is described by and, using the same reasoning as above, we obtain where is given by (39).

3. Malliavin Calculus for Brownian Motion

In this section, for the reader's convenience, we recall the basic definitions and properties of Malliavin calculus for Brownian motion used in this paper. A natural starting point is the Wiener–Itô chaos expansion theorem, which states that any can be written as for a unique sequence of symmetric deterministic functions , where is the Lebesgue measure on and (the -times iterated integral of with respect to ) for , while when is a constant. Here, we use as the measure on the time variable and as the measure on the spatial variable .

Moreover, we have the isometry
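In the classical time-only setting (a sketch for orientation; the paper's version carries the spatial measure as well), the expansion and the isometry read
\[
F = \sum_{n=0}^{\infty} I_n(f_n),
\qquad
E\big[F^2\big] = \sum_{n=0}^{\infty} n!\,\|f_n\|^2_{L^2([0,T]^n)} .
\]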

We first present the Malliavin derivative with respect to Brownian motion at of a given Malliavin differentiable random variable , and then we present some basic properties of the Malliavin derivative needed in this paper.

Let denote the set of all random variables which are Malliavin differentiable with respect to Brownian motion ; precisely, let be the space of all such that its chaos expansion satisfies

Definition 3. For any , define the Malliavin derivative of at with respect to Brownian motion as where the notation means that we apply the -times iterated integral to the first variables of and keep the last variable as a parameter.
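In the classical time-only setting, this definition takes the familiar form (a sketch for orientation):
\[
D_t F = \sum_{n=1}^{\infty} n\, I_{n-1}\big(f_n(\cdot, t)\big),
\qquad
F \in \mathbb{D}_{1,2} \iff \sum_{n=1}^{\infty} n\, n!\,\|f_n\|^2_{L^2([0,T]^n)} < \infty .
\]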
It is easy to check that so that belongs to .
Some basic properties of the Malliavin derivative are the following (a) chain rule and (b) duality formula, sketched in display form below:
(a) Suppose and that is with bounded partial derivatives. Then, and
(b) Suppose is -adapted with and let . Then,
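In the classical time-only setting, these two properties read as follows (a sketch for orientation, with the spatial parameter suppressed):
\[
\text{(a)}\quad D_t\,\varphi(F) = \sum_{i=1}^{m} \frac{\partial \varphi}{\partial x_i}(F)\, D_t F_i
\quad \text{for Malliavin differentiable } F = (F_1, \dots, F_m) \text{ and } \varphi \in C^1_b(\mathbb{R}^m);
\]
\[
\text{(b)}\quad E\Big[F \int_0^T u_t \, dB_t\Big] = E\Big[\int_0^T u_t\, D_t F\, dt\Big]
\quad \text{for adapted } u \text{ with } E\Big[\int_0^T u_t^2\,dt\Big] < \infty \text{ and } F \in \mathbb{D}_{1,2}.
\]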

4. Nash Equilibrium of Nonzero-Sum SPD Games

In this section, we use Malliavin calculus to derive Nash equilibrium of a nonzero-sum stochastic partial differential game by establishing a stochastic maximum principle. After some assumptions and notations, we introduce the stochastic Hamiltonian function and then the maximum principle for nonzero-sum stochastic partial differential games with partial information is stated and proved.

4.1. Assumptions and Stochastic Hamiltonian Function

We now return to the partial information nonzero-sum stochastic partial differential game problem given in the introduction. We make the following assumptions:
(A1) For all and all bounded -measurable random variables , the controls belong to and , respectively, where denotes the indicator function on .
(A2) For all with and bounded, there exists such that the controls and belong to and , respectively, for all , and such that the families are -uniformly integrable and the families are -uniformly integrable.
(A3) For all with and bounded, the processes exist. Further, follows the SPDE, for and for all , and for all , ; follows the SPDE, for and for all , and for all , .
(A4) For all , the following processes, : are well defined, and , where are defined as in the proof and the operator stands for the adjoint of .

We now define the Hamiltonians for this general stochastic partial differential game problem as follows:

Definition 4. The general stochastic Hamiltonians for the stochastic partial differential game are the functions defined by
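In maximum principles of this Malliavin-calculus type, the Hamiltonians typically have the following shape (a hedged sketch with hypothetical adjoint-type processes $p_i$, $q_i$; the paper's exact definition did not survive extraction):
\[
H_i(t, x, \gamma, u_1, u_2, \omega)
= f_i(t, x, \gamma, u_1, u_2, \omega)
+ p_i(t, x)\, b(t, x, \gamma, u_1, u_2, \omega)
+ q_i(t, x)\, \sigma(t, x, \gamma, u_1, u_2, \omega),
\qquad i = 1, 2 .
\]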

4.2. Stochastic Maximum Principle for Nonzero-Sum Games

Theorem 5.
(i) Let be a Nash equilibrium with the corresponding state process , that is, Assume that, for all random variables , the Malliavin derivative with respect to at exists. Then, for a.a. .
(ii) Conversely, suppose that there exists such that equations (62) and (63) hold. Then,

If and are concave with respect to and , respectively, then is a Nash equilibrium.
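In partial information maximum principles of this kind, the necessary conditions (62) and (63) typically take the form of conditional first-order conditions (a hedged sketch with hypothetical notation):
\[
E\Big[\frac{\partial H_1}{\partial u_1}\big(t, x, \gamma^*(t,x), u_1^*(t), u_2^*(t)\big) \,\Big|\, \mathcal{E}^{(1)}_t\Big] = 0,
\qquad
E\Big[\frac{\partial H_2}{\partial u_2}\big(t, x, \gamma^*(t,x), u_1^*(t), u_2^*(t)\big) \,\Big|\, \mathcal{E}^{(2)}_t\Big] = 0,
\]
for a.a. $(t, x)$.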

Proof. (i) Suppose is a Nash equilibrium. Since (a) and (b) hold for all and , is a directional critical point for , in the sense that for all bounded and , there exist such that , for all . For simplicity of notation, we write and in the following. For ease in writing, asterisks on optimal functions will sometimes be omitted where the meaning is clear from the context.
By the definition of , we have where with initial condition and boundary condition By the duality formula, we get Similarly, we get Here, in the last equality, we changed the notation to .
Now, we define Since we have, using (65), (69), and (70), Next, we apply the above to of the form for some , where is a bounded -measurable random variable. Then, we have and hence (73) becomes Note that, by (66), with and , the process follows the dynamics: By Theorem 1 and (42), we know that these dynamics have an explicit strong solution. The process is the inverse flow of the stochastic flow . Here, solves the following Stratonovich SDE: where is defined in Section 2. In fact, one can verify that solves the following Itô SDE, where is a Brownian motion defined on an auxiliary probability space .
We rewrite (78) as, for , here, We now deal with in (76). Differentiating with respect to at , we get For the first term in (83), , since , we have For , by (81) and (84), we have where the operator stands for the adjoint of , and we define By (66) and , we have After substituting (87) into (85), we get where and For the latter term in (88), i.e., (90), since for , we have, by applying the mean value theorem, For , by the duality formula, we have Combining (88)–(92), we obtain For in (76), we see directly that Therefore, differentiating (76) with respect to at , we obtain the following equation from (93) and (94): We denote then, the above equation (95) can be written as follows: Since is an arbitrary bounded -measurable random variable, we conclude that, for all , a.s., Similarly, we have where and and boundary condition .
Using similar arguments for , we get This completes the proof of assertion (i).
(ii) Conversely, suppose that there exists such that (62) and (63) hold. The proof of this direction is divided into two steps.
First, consider . If (62) holds, then we obtain that (75) holds for all , that is, for all with and some bounded -measurable random variable .
Similarly, for all , we have for all with and some bounded -measurable random variable .
Second, consider . The equalities (102) and (103) above hold for all linear combinations of such and . For any and , since all bounded and can be approximated pointwise boundedly in by such linear combinations, it follows that (102) and (103) hold for all bounded and , that is, for any , we can approximate by where is the coefficient, is a partition of the interval , and is a bounded random variable, and this approximation procedure is uniform for . Hence, (73) holds for any in the interval .
Taking , we conclude that (73) holds for all bounded , and this is equivalent to for all bounded . Similarly, we get that for all bounded .

5. Numerical Simulations for the Linear SPDE

A strong solution of the linear stochastic partial differential equation with a generalized probabilistic representation has been given with the benefit of Kunita's stochastic flow theory. This section is concerned with the numerical simulation of solutions to the linear stochastic partial differential equations. First, we consider the one space dimensional case of the following linear stochastic partial differential equation: where has the form

5.1. Example 1: Stochastic Equation with Volatility

In this example, we solve the linear stochastic partial differential equation (107) on the domain . The space and time steps are chosen as and , respectively. The initial value and boundary value , , and the functions , . The solutions of these linear stochastic partial differential equations are shown in Figure 1. In this case, the linear stochastic partial differential equation is
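Since the displayed equations did not survive extraction, the following minimal sketch assumes an illustrative model of the form $du = u_{xx}\,dt + \sigma u\,dB_t$ on $[0,1]$ with zero Dirichlet boundary values and a single one-dimensional driving Brownian motion, discretized by an explicit finite-difference Euler–Maruyama scheme; all names and parameter values are hypothetical, not the paper's.

```python
import numpy as np

# Minimal sketch: explicit finite-difference Euler-Maruyama scheme for a
# linear SPDE. Assumed model (illustrative only): du = u_xx dt + sigma*u dB_t
# on [0, 1] with zero Dirichlet boundary values, driven by one Brownian motion.

L_x, T = 1.0, 1.0            # space interval [0, L_x], time horizon T
nx, nt = 50, 10000           # grid sizes; stability needs dt <= dx**2 / 2
dx, dt = L_x / nx, T / nt
sigma = 0.5                  # illustrative volatility coefficient

x = np.linspace(0.0, L_x, nx + 1)
u = np.sin(np.pi * x)        # illustrative initial value u(0, x)

rng = np.random.default_rng(0)
for _ in range(nt):
    dB = rng.normal(0.0, np.sqrt(dt))           # Brownian increment
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += lap * dt + sigma * u[1:-1] * dB  # Euler-Maruyama step
    u[0] = u[-1] = 0.0                          # Dirichlet boundary values

print(u[nx // 2])            # value at the spatial midpoint at time T
```

Averaging such trajectories over many independent Brownian paths gives a Monte Carlo counterpart of the probabilistic representation in Theorem 1.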

5.2. Example 2: Stochastic Equation with Volatility

In this example, we solve the linear stochastic partial differential equation (107) on the domain . The space and time steps are chosen as and , respectively. The initial value and boundary value , , and the functions , . The solutions of these linear stochastic partial differential equations are shown in Figure 2. In this case, the linear stochastic partial differential equation is

6. Conclusion

In this paper, we consider a Nash equilibrium of a stochastic differential game where the state process is governed by a controlled stochastic partial differential equation. The problem of finding sufficient conditions for a Nash equilibrium of the stochastic differential game can be transformed into optimality conditions for a stochastic optimal control problem with an infinite dimensional state equation. Applying Kunita's stochastic flow theory, we find an explicit strong solution of the linear stochastic partial differential equation, and this solution has a probabilistic representation. The probabilistic representation of the solution and Malliavin calculus yield a stochastic maximum principle for the optimal control, from which we obtain the Nash equilibrium of this type of stochastic differential game problem. We would like to point out that it would be meaningful to consider a Nash equilibrium of a stochastic differential game in which the state process is governed by a controlled stochastic partial differential equation with jump-diffusion; this is a valuable direction for future research.

Data Availability

All data used to support the findings of this study are included within the article.

Conflicts of Interest

The author declares that there are no conflicts of interest.

Acknowledgments

This work was supported by the National Science Foundation of China (Grant No. 11501325).