Abstract

The identification of thermophysical properties of materials in dynamic experiments can be conveniently performed by the inverse solution of the associated heat conduction problem (IHCP). The inverse technique demands knowledge of the initial temperature distribution within the material. As only a limited number of temperature sensors (or no sensor at all) are arranged inside the test specimen, the knowledge of the initial temperature distribution is affected by some uncertainty. This uncertainty, together with other possible sources of bias in the experimental procedure, propagates through the estimation process, and the accuracy of the reconstructed thermophysical property values can deteriorate. In this work, the effect of errors in the initial temperature distribution on the estimated thermophysical properties is investigated, along with a practical method to quantify this effect. Furthermore, a technique for compensating this kind of bias is proposed. The method consists of including the initial temperature distribution among the unknown functions to be estimated. In this way, the effect of the initial bias is removed and the accuracy of the identified thermophysical property values is greatly improved.

1. Introduction

Established techniques based on parameter estimation theory provide an effective tool for the identification of thermophysical properties of materials and/or other unknown system and measurement parameters by means of transient experiments [1, 2].

As the estimation process is usually based on some inverse solution (analytical or numerical) of a physical model, the unavoidable presence of errors in the measured data may have a detrimental effect on the final estimates because of the ill-posed nature of inverse heat conduction problems [3]. For this reason, besides great precision in the measurement technique, the key to achieving a precise and reliable estimation of thermophysical properties from transient experiments is the adherence of the implemented physical and numerical models to the actual phenomenon under investigation. As a general rule, however, most measurement and process mismatches can be compensated, if detected, by including additional models in the inverse solution [4]. For example, the exact locations of the thermal sensors placed inside the specimen can be identified with great accuracy [4–7]. Errors (time lag) in temperature measurements by contact probes in a transient regime can be adequately compensated [8]. The uncertainty of sensor calibration can be included in the inverse conduction problem [9] to improve both the estimated thermophysical properties and the calibration curve. According to this general approach, the biases are compensated by identifying, in the same experiment, both the thermophysical properties and the unknown parameters (e.g., lags, positioning errors, and calibration coefficients) appearing in the additional models.

In this work, the focus is on errors in the initial temperature distribution [10–16]. It is often possible to arrange only a limited number of sensors in the interior of the specimen under test (or no sensors at all in particular experiments [17, 18]). It follows that the initial temperature distribution, needed not only at the measuring points but also at each node of the spatial discretization grid of the numerical model, can be affected by significant uncertainty. Despite the smoothing effect of thermal conduction, errors of this type will propagate through the reconstruction algorithm, thus affecting the estimated values of the thermophysical properties.

At first, this study develops a quantitative analysis to determine, under a given reference set of working and boundary conditions, the influence of a biased initial temperature distribution on the estimated values of the material thermophysical properties.

Subsequently, a method is proposed to reduce the errors induced by this kind of bias. The proposed method consists of including the initial temperature distribution (i.e., its values on the numerical space grid or a proper parametric function) among the other quantities to be estimated. In this way, the effect of the initial error is minimized and the accuracy of the reconstructed properties is greatly enhanced.

2. Case Study

A material sample with 1D planar geometry is subjected to a thermal transient in which known (measured) time dependent values of temperature or heat flux are imposed on the opposite faces. By measuring the thermal response at some locations inside the specimen (temperature, heat flux, or both), some thermophysical properties of the material (which ones exactly depends on the particular set-up) can be reconstructed by the solution of the corresponding inverse heat conduction problem (IHCP). The general case of simultaneous identification of the thermal conductivity and the specific heat has been studied in depth both experimentally and theoretically [19–22].

In what follows, for the sake of simplicity and to focus attention on the effect of errors in the initial temperature distribution, the problem assumes temperature independent thermophysical properties, and the identification process is devoted to only one constant thermophysical parameter, the thermal diffusivity. In this simple case, measuring the temperature-time response at the two sides of the specimen and at one (as in this study, according to Figure 1) or more locations in its interior will suffice.

The governing equation is

∂T/∂t = a ∂²T/∂x²,  0 < x < L,  t > 0, (1)

with its associated initial and boundary conditions

T(x, 0) = T₀(x);  T(0, t) = u(0, t),  T(L, t) = u(L, t). (2)

We assume that the inverse problem is solved under the hypothesis of a wrong uniform initial temperature distribution, say T₀* = const, while the actual distribution is T₀(x). As a consequence of this error, δT₀(x) = T₀(x) − T₀*, the identification process, based on the physical model described by (1) and (2), will produce errors in the estimated parameters whose magnitude depends on various factors: “shape” and amplitude of the initial error, type and time scale of the transient experiment, actual diffusivity value of the sample, number and location of the sensors, variance of the random noise affecting the measured data, and other process and measurement biases. Owing to the great number of variables influencing the phenomenon, and in order to better explain how the use of a wrong initial temperature distribution affects the estimation process, we limit ourselves to the study of a particular reference condition in which we assume perfect models and a single internal sensor. It is underlined that the algorithm applies to every type of initial temperature distribution, not only to a uniform one.

A preliminary step is usually performed by taking the initial measured temperatures and interpolating between the measurements, for instance, by a linear or quadratic method. In this way, use is made of the available data to construct a first-attempt initial temperature distribution. Nonetheless, due to the limited number of sensors, and also considering that their true positions are rarely known, the estimated temperature profile can be affected by errors of the order of tenths of a degree.
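As a concrete sketch of this preliminary step, the first-attempt initial distribution can be built from the sparse initial readings with standard interpolation tools. In the Python fragment below, sensor positions and readings are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical sensor layout: the two faces plus one interior probe.
# Positions (m) and initial readings (degC) are illustrative only.
x_sensors = np.array([0.0, 0.025, 0.05])
T_sensors = np.array([20.00, 20.31, 20.05])

# Spatial grid of the numerical model.
x_grid = np.linspace(0.0, 0.05, 51)

# First-attempt initial distribution: piecewise-linear interpolation...
T0_linear = np.interp(x_grid, x_sensors, T_sensors)

# ...or a quadratic polynomial fitted through the three readings.
coeffs = np.polyfit(x_sensors, T_sensors, deg=2)
T0_quad = np.polyval(coeffs, x_grid)
```

With three readings the quadratic fit passes exactly through the data; with more sensors it becomes a least-squares smoother.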

3. Effect of the Initial Bias on the Temperature-Time History

At first, the smoothing of an initial temperature error in the time dependent temperature distribution is considered. Owing to the model linearity, this effect can be calculated using the superposition principle, since the thermal response inside the specimen is made up of two parts, the first being the undisturbed response and the second being the response of the model to the error alone. So the determination of the RMS (root mean square) error induced on the residuals reduces to the calculation of a single response to the initial arbitrary space dependent error during its “relaxation.” The solution of the diffusion equation (1) along with the following initial condition and constant boundary conditions (3) will serve:

δT(x, 0) = δT₀(x);  δT(0, t) = δT(L, t) = 0, (3)

where δT(x, t) denotes the temperature error field. We select δT₀(x) to be a parabolic shaped error, as in Figure 2. Results obtained by using different profiles were very similar to those reported. The error vanishes on the boundaries, that is, δT₀(0) = δT₀(L) = 0, owing to the error-free sensor model. In general, the thermal diffusivity is a coordinate dependent function, but we limit ourselves to considering it uniform.
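To make the “relaxation” concrete, the sketch below marches problem (1), (3) with an explicit finite-difference scheme, starting from a parabolic error that vanishes on the faces. The diffusivity, thickness, and 0.5 K amplitude are illustrative assumptions:

```python
import numpy as np

# Relaxation of an initial-error field with zero temperature error on both
# faces (problem (1), (3)). All numerical values are illustrative.
a, L = 1e-6, 0.05                 # thermal diffusivity (m^2/s), thickness (m)
nx = 101
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / a              # explicit (FTCS) stability: a*dt/dx**2 <= 0.5

dT_max = 0.5                      # K, amplitude of the parabolic error
theta = 4.0 * dT_max * x * (L - x) / L**2   # vanishes on the boundaries

t, t_end = 0.0, 0.1 * L**2 / a    # march up to Fo = 0.1
while t < t_end:
    theta[1:-1] += a * dt / dx**2 * (theta[2:] - 2.0*theta[1:-1] + theta[:-2])
    theta[0] = theta[-1] = 0.0    # error-free boundary controls
    t += dt
```

By Fo ≈ 0.1 the short-wavelength content has essentially disappeared, and only the slowly decaying fundamental remains appreciable at the midpoint.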

The general solution of problem (1), (3) can be expressed as an infinite sum of functions, each resulting from the product of an exponential and a sine:

δT(x, t) = Σ_{n=1…∞} A_n sin(nπx/L) exp[−a(nπ/L)²t], (4)

with

A_n = (2/L) ∫₀^L δT₀(x) sin(nπx/L) dx. (5)

By inspection of (4), we note the presence of a space dependent eigenfunction multiplied by a time dependent coefficient representing a vanishing amplitude.

In other words, the space-time response of the system is made of an infinite sum of vanishing terms whose time constants are related to the wavelength of the particular component. The shorter the wavelength, the shorter the time constant, that is, the time spent by a harmonic to reduce its amplitude to a given fraction of the initial one.
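This wavelength-time-constant relation can be quantified directly: for the diffusion problem with zero boundary values, the n-th harmonic decays with time constant τ_n = L²/(a n²π²). A small sketch with illustrative values:

```python
import numpy as np

# Decay time constant of the n-th harmonic: tau_n = L**2 / (a * (n*pi)**2).
# The shorter the wavelength (larger n), the faster the component vanishes.
a, L = 1e-6, 0.05   # illustrative diffusivity (m^2/s) and thickness (m)

def tau(n):
    return L**2 / (a * (n * np.pi)**2)

# The fundamental decays n**2 times more slowly than the n-th harmonic:
ratios = [tau(1) / tau(n) for n in (1, 2, 3)]
```

The fundamental thus outlives the n-th harmonic by a factor n², which is why it dominates the residual history.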

The fundamental of δT₀, that is, its 1st harmonic, vanishes over longer times and represents the major contribution to the global temperature error, that is, to the difference between an undisturbed temperature field and a disturbed one. Since the thermal diffusivity reconstruction process gathers information during the whole evolution of the experiment, it appears reasonable that the distortions of the estimated value are in some way related to this slowly vanishing error. However, there is another factor that influences the rate of information that any inverse algorithm draws from an experiment during an identification process, namely, the sensitivity coefficients, which relate variations in the temperature distribution to variations of the sought parameters and which are usually functions of time, space, and the experiment. We address this aspect in a later section.

Figure 3 shows the nondimensional temperature error as a function of the Fourier number, resulting from the numerical simulation of the model. The nondimensional temperature is defined as

θ = δT/δT_max (6)

and the Fourier number is defined as

Fo = a t_exp/L², (7)

where t_exp is the duration of the experiment.

The other curves represent the RMS value of θ, that is, σ_θ, defined as

σ_θ = [(1/m) Σ_{i=1…m} θ_i²]^{1/2}. (8)

In this context, m is the total number of simulated measurements, given in general by the number of internal sensors multiplied by the number of acquisitions in time.

When the Fourier number is greater than about 0.3, the mean squared error tends to assume, on a bilogarithmic diagram, a slope equal to −0.5. In these conditions we have

σ_θ ∝ Fo^(−0.5). (9)

This agrees with the fact that, after some time, the effects of any initial temperature error are smoothed by the heat conduction phenomenon and reduce to unnoticeable values, but we have to underline that the identification process is based on information coming from the entire transient experiment. Indeed, for a broad class of tests, the Fourier number as defined in (7) is in the range from 0.1 to 1.0 and, as a consequence, the global standard deviation of the residuals at the end of the identification process is only marginally reduced, its value remaining comparable to the mean quadratic error of the initial temperature bias. Results very similar to those reported in Figure 3 are obtained with the initial distribution error shaped as in Figure 4.
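The −0.5 asymptotic slope can be reproduced in a few lines by keeping only the dominant first harmonic of the error at the midpoint (a sketch; the amplitude normalization is arbitrary):

```python
import numpy as np

# Time-RMS of the residual induced at the midpoint by the slowly vanishing
# first harmonic, as a function of Fo = a*t_exp/L**2. The midpoint error
# behaves as exp(-pi**2 * a * t / L**2); amplitude normalized to 1.
def rms_residual(Fo, n_samples=20000):
    s = np.linspace(0.0, 1.0, n_samples)      # s = t / t_exp
    theta = np.exp(-np.pi**2 * Fo * s)
    return np.sqrt(np.mean(theta**2))

Fo = np.array([0.5, 1.0, 2.0, 4.0])
sigma = np.array([rms_residual(f) for f in Fo])
slope = np.polyfit(np.log(Fo), np.log(sigma), 1)[0]   # ~ -0.5 for Fo > 0.3
```

Because the harmonic has essentially vanished well before t_exp, the time-RMS of its history scales as t_exp^(−1/2), that is, as Fo^(−0.5).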

4. Effect of the Initial Bias on the Reconstructed Thermal Diffusivity Value

To acquire information of general value about the influence of the initial temperature error on the identified results, a few hundred simulated transient experiments have been performed by varying the value of the initial error δT_max (Figure 2), the total experiment duration t_exp, the slope b of the temperature control at x = 0, the thickness L of the specimen, and the value of the thermal diffusivity a to be identified.

On the two opposite sides of the specimen, the following time dependent temperatures are imposed as boundary conditions:

u(0, t) = T₀ + b t;  u(L, t) = T₀. (10)

That is, a linear increase (constant variation rate b, K s−1) is assumed at x = 0, while at x = L the temperature is kept at a constant value equal to the initial one, T₀. This particular time dependence is adopted only for its simplicity.

The results show that the error in the identified thermal diffusivity value is directly proportional both to the amplitude δT_max and to the inverse of the product between b and t_exp. This behavior is summarized in Figure 5, where the nondimensional percentage error defined as

E⁺ = E_a% (b t_exp/δT_max) (11)

is reported as a function of the Fourier number. In (11), the percentage error, E_a%, is defined as

E_a% = 100 (â − a)/a, (12)

with â the identified diffusivity value. When the Fourier number is greater than, say, 2.0, the slope of the curve in the bilogarithmic diagram approaches the asymptotic value −1. This fact is related to the time behavior of the sensitivity coefficients of the thermal diffusivity at the three measurement points. In the initial part of the transient, these coefficients show an increasing trend, while, for Fo > 2.0, they tend to a nonzero constant asymptotic value due to the particular forcing temperatures employed in the simulations (linear increase). In other words, during the initial phase of the transient, the experiment provides a greater flux of information. Then, the temperature response inside the sample tends to assume a practically constant rate of increase, and the information coming from the thermal history of the specimen for Fo > 2.0 gives a constant contribution to the identification process. So it appears reasonable to find a normalized error trend that, other conditions being equal, is proportional to the inverse of the total time length of the transient test:

E⁺ ∝ Fo^(−1). (13)

To give an idea of the error arising in actual transient tests, Table 1 reports the results obtained for three different materials. The results are obtained by using Figure 5. In these explanatory cases, the error in the reconstructed thermal diffusivity is always smaller than 1.5%.
Worse results might be expected in the case of nonlinear system identification (temperature dependent properties) and simultaneous estimation of various functions (e.g., thermal conductivity and specific heat rather than thermal diffusivity) or, as happens in actual experiments, if the measurements are affected by random errors. So the curves reported in Figure 5 only represent a lower bound of the error in the estimated thermal diffusivity due to an initial temperature bias.

5. Compensation of the Initial Temperature Bias

As said, most model mismatches between actual experiments and their numerical implementation can be compensated by including in the inverse algorithm a proper parametric model. According to this general rule, we add the initial temperature distribution, suitably parameterized, among the other quantities to be estimated.

To solve the inverse problem and reconstruct the unknown parameters, various methods are at our disposal, ranging from classic gradient methods to artificial neural networks [23] and genetic algorithms [24].

In the present application, the well-known iterative Gauss method has been implemented for the minimization of the OLS (ordinary least squares) functional associated with the temperature residuals:

S(β) = Σ_j Σ_i [T_m(x_j, t_i) − T(x_j, t_i; β)]², (14)

where β is the unknown parameter vector, T_m refers to measured temperatures, T refers to temperatures calculated with the numerical model ((1) and (2)), the index j refers to the sensor number (only one sensor, located in the middle of the specimen, in the presented case), and i is a discretized time index. A detailed description of the application of this inverse technique to thermophysical property reconstruction is reported in various textbooks (e.g., in [1]) and is not repeated here.
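A minimal numerical sketch of this loop is given below (Python). The geometry, ramp rate, and single midpoint sensor are illustrative assumptions, the sensitivity is approximated here by finite differences rather than by a sensitivity equation, and the “measurements” are generated by the same forward solver (no noise):

```python
import numpy as np

# Gauss iterative OLS estimation of a single parameter (thermal diffusivity)
# from one midpoint sensor. All numerical values are illustrative.
L, nx = 0.05, 51
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.2                       # s; explicit scheme stable for a <= 2.4e-6 here
steps = 9000                   # 1800 s long test
T0, b = 20.0, 0.04             # initial temperature (degC), ramp rate (K/s)

def forward(a):
    """Explicit solver of the direct problem; 120-sample midpoint history."""
    T = np.full(nx, T0)
    out, every = [], steps // 120
    for k in range(steps):
        T[1:-1] += a * dt / dx**2 * (T[2:] - 2.0*T[1:-1] + T[:-2])
        T[0] = T0 + b * (k + 1) * dt      # linear-ramp control at x = 0
        T[-1] = T0                        # constant control at x = L
        if (k + 1) % every == 0:
            out.append(T[nx // 2])
    return np.array(out)

a_true = 1.0e-6
Y = forward(a_true)                       # simulated "measurements"

a_est = 1.5e-6                            # deliberately wrong first guess
for _ in range(10):                       # Gauss iterations
    Tm = forward(a_est)
    # sensitivity dT/da by forward finite difference
    J = (forward(1.001 * a_est) - Tm) / (0.001 * a_est)
    a_est += (J @ (Y - Tm)) / (J @ J)     # normal-equation (OLS) update
    a_est = float(np.clip(a_est, 1.0e-7, 2.3e-6))
```

Starting from a deliberately wrong guess, the normal-equation updates drive the estimate back to the value used to generate the data.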

The initial temperature distribution error is parameterized, for instance, by a polynomial representation (another suitable family of functions can be profitably used if preferred) as

ε(x) = Σ_{i=0…p} ω_i x^i, (15)

and the coefficients ω_i are added as further unknowns to the parameter vector to be identified, along with the associated sensitivity equations. So it will be

β = [a, ω₀, ω₁, …, ω_p]^T. (16)

Due to the simplicity of the considered case (one internal sensor), we adopt a parabolic temperature profile constrained to zero on the boundaries; that is, a single parameter was added.

The parabolic error assumption is made for simplicity. Better algorithms can be based on more parameters describing the initial distribution (also in the case of a single sensor), so as to better handle arbitrary temperature shapes. In this kind of dynamic experiment, there is little risk of overparameterization, since the temperature measurements typically number in the thousands.
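An end-to-end sketch of the compensation idea (Python, illustrative values): the simulated experiment starts from a parabolic 0.5 K initial error, while the inverse model parameterizes that error with a single amplitude w estimated together with the diffusivity. Scaling the diffusivity by 10⁻⁶ m² s⁻¹ is only a conditioning convenience.

```python
import numpy as np

# Joint Gauss estimation of the thermal diffusivity and of one parabolic
# initial-error amplitude w (the single extra parameter). Illustrative values.
L, nx = 0.05, 51
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt, steps = 0.2, 9000                     # explicit scheme, 1800 s test
T0u, b = 20.0, 0.04                       # nominal initial temp, ramp rate
shape = 4.0 * x * (L - x) / L**2          # parabola, zero on both faces

def forward(q):
    """q = (a in units of 1e-6 m^2/s, w in K); 120-sample midpoint history."""
    a, w = q[0] * 1.0e-6, q[1]
    T = T0u + w * shape                   # initial field with modeled error
    out, every = [], steps // 120
    for k in range(steps):
        T[1:-1] += a * dt / dx**2 * (T[2:] - 2.0*T[1:-1] + T[:-2])
        T[0] = T0u + b * (k + 1) * dt
        T[-1] = T0u
        if (k + 1) % every == 0:
            out.append(T[nx // 2])
    return np.array(out)

Y = forward(np.array([1.0, 0.5]))         # "truth": a = 1e-6, 0.5 K bias

q = np.array([1.5, 0.0])                  # wrong diffusivity, bias ignored
for _ in range(10):
    Tm = forward(q)
    J = np.empty((Tm.size, 2))
    for i in range(2):                    # finite-difference sensitivities
        dq = np.zeros(2)
        dq[i] = 1.0e-3
        J[:, i] = (forward(q + dq) - Tm) / dq[i]
    q = q + np.linalg.solve(J.T @ J, J.T @ (Y - Tm))
    q[0] = np.clip(q[0], 0.1, 2.3)        # keep the explicit scheme stable
```

Run as is, the Gauss iterations should recover both the diffusivity and the 0.5 K amplitude, even though the first guess ignores the initial error entirely.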

The method has been applied to the general case of nonlinear identification of both thermal conductivity and specific heat, considering their dependence on temperature.

In the sequel, for the sake of simplicity and in analogy with the exposition of the first part of the study, the sensitivity equation is developed only for the linear case, with the thermal diffusivity as the lone (temperature independent) parameter to be identified. The sensitivity equations are obtained by differentiation of the diffusion equation (1) with respect to the parameters β; that is,

∂/∂β (∂T/∂t) = ∂/∂β (a ∂²T/∂x²). (17)

By using the notation

J = ∂T/∂a, (18)

equation (17) is written as

∂J/∂t = a ∂²J/∂x² + ∂²T/∂x². (19)

Initial and boundary conditions are obtained in the same way and coupled to the above equation:

J(x, 0) = 0;  J(0, t) = J(L, t) = 0. (20)

By solving the boundary problem ((19) and (20)), we obtain the sensitivity needed by the inverse reconstruction problem using, as an example, the OLS method accurately described in [25].
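In discrete form, the sensitivity problem can be marched in step with the direct one; the sketch below (Python, illustrative values) does so with the same explicit scheme and cross-checks the result against a centered finite-difference estimate of the derivative of T with respect to a.

```python
import numpy as np

# March the sensitivity J = dT/da together with the direct problem and
# compare with a centered finite difference. Values are illustrative.
L, nx = 0.05, 51
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt, steps = 0.2, 9000
T0, b = 20.0, 0.04

def march(a, with_sens=False):
    T = np.full(nx, T0)
    J = np.zeros(nx)                      # J is zero initially...
    for k in range(steps):
        lapT = (T[2:] - 2.0*T[1:-1] + T[:-2]) / dx**2
        if with_sens:
            lapJ = (J[2:] - 2.0*J[1:-1] + J[:-2]) / dx**2
            J[1:-1] += dt * (a * lapJ + lapT)   # sensitivity equation
            # ...and stays zero on both (known-temperature) boundaries
        T[1:-1] += dt * a * lapT
        T[0] = T0 + b * (k + 1) * dt
        T[-1] = T0
    return T, J

a = 1.0e-6
_, J_sens = march(a, with_sens=True)
da = 1.0e-3 * a
J_fd = (march(a + da)[0] - march(a - da)[0]) / (2.0 * da)
mid = nx // 2
```

Because the sensitivity update is the exact derivative of the discrete temperature update, the two estimates agree to within the finite-difference step error.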

In the specific case, we made use of the Gauss minimization algorithm to minimize the functional (14) associated with the temperature residuals, but other common methods can be used, from gradient minimization algorithms to the more sophisticated Kalman filter, if one wants to correctly account for other quantities such as random errors on the control temperatures.

In simulated experiments, the algorithm shows the ability to converge quickly to the exact solution in both the linear and nonlinear cases. In the latter case, the simultaneous identification of different temperature dependent functions is required. In this second part of the study, random errors were always added to the temperature measurements. In what follows, the results from a typical application of thermophysical property reconstruction are reported. The considered transient experiment simulates a dynamic thermal conduction problem starting from a parabolic shaped initial temperature distribution, but it has to be underlined that the algorithm works with every kind of initial error. The reconstruction algorithm, on the other hand, assumes in this test a uniform, therefore incorrect, initial temperature field.

In the case of temperature measurements alone, the thermophysical properties that can be identified are, for example, [21]

k⁺(T) = k(T)/C_ref;  C⁺(T) = C(T)/C_ref, (21)

where C_ref is a reference value of the material volumetric heat capacity. The functions k⁺ and C⁺ (normalized thermal conductivity and heat capacity, resp.) are assumed to be linear functions of the temperature; that is,

k⁺(T) = β₁ + β₂ (T − T₀);  C⁺(T) = 1 + β₃ (T − T₀). (22)

A linear temperature increase is imposed as the control at x = 0. Starting from an initial value T₀ = 20°C, it achieves, within an hour, a temperature of about 160°C.

A zero mean Gaussian random noise (std. dev. of 0.015 K) has been superimposed on the simulated temperature measurements, and the typical simulated acquisition rate is around 1 Hz for one-hour long tests. We use this value since it characterizes the experimental set-up actually employed in our experimental work. It is, however, of little importance, and noise-free temperatures could also be used.

Table 2 shows the reconstructed values obtained with and without estimating the initial temperature field, that is, with and without the compensation of the initial temperature bias. To give an idea of the mean error affecting the identified temperature dependent properties, a percentage integral error has been introduced as follows (the example refers to the function k⁺):

Ē_k% = [100/(T_max − T₀)] ∫ from T₀ to T_max of |k⁺_est(T) − k⁺(T)|/k⁺(T) dT. (23)

Table 2 reports test results in the case of unknown, temperature dependent thermophysical properties. Despite the fact that the assumed maximum temperature bias is only δT_max = 0.5°C, the error introduced by the initial temperature bias turns out to be remarkable, in particular for some of the coefficients. On the other hand, if the initial temperature distribution is added, as an unknown, to the functions to be identified, accurate estimates result and the initial bias is completely eliminated right from the beginning of the simulated experiment, as Figures 6 and 7 clearly show. Figure 6 shows the temperature error at the sensor location due to the error in the initial temperature distribution. The effects on the residuals persist with relevant amplitude for a long time during the (simulated) experiment, lasting for 1000–1500 s. Even if the bias in the initial temperature distribution “relaxes” in time thanks to the thermal diffusion process, the standard deviation of the residuals, σ = 0.097°C, is one order of magnitude greater than that of the random error (0.015°C) superposed onto the measurements. Furthermore, all the estimated values of the thermophysical properties are affected by strong errors, ranging from 1.32% to a maximum of 13.6%, with average errors being around 1.5%. On the contrary, Figure 7 shows a practically unbiased residual history; that is to say, the initial temperature error has been identified and compensated and no longer has any effect (or at most a negligible one) on the parameter identification process.
Indeed, the temperature residuals show a standard deviation of 0.016°C, which is statistically equal to the one superimposed on the simulated measurements. This behavior is generally an indicator of the absence of bias. Accordingly, Table 2 shows that the values of the identified parameters in column #6 are essentially equal to the exact ones in column #2.
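This residual-based acceptance check is easy to automate: compare the post-fit residual standard deviation with the known noise level. The sketch below uses synthetic residuals; the 0.015 K noise level matches the simulations above, while the decaying-bias shape and its numbers are illustrative.

```python
import numpy as np

# Residual-based bias indicator: after convergence, the residual standard
# deviation should be statistically indistinguishable from the noise level.
rng = np.random.default_rng(0)
sigma_noise = 0.015                 # K, std. dev. of the superposed noise
m = 3600                            # one-hour test sampled at 1 Hz

res_unbiased = rng.normal(0.0, sigma_noise, m)
# residuals contaminated by a slowly relaxing initial-temperature bias
res_biased = res_unbiased + 0.09 * np.exp(-np.arange(m) / 400.0)

ratio_ok = res_unbiased.std() / sigma_noise    # ~1: no bias suspected
ratio_bad = res_biased.std() / sigma_noise     # clearly > 1: bias present
```

A ratio close to one supports accepting the estimates; a ratio well above one suggests a residual bias and calls for rejecting or revising the fit.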

6. Caution in the Estimation of “Bias” Parameters

If the model utilized to describe the actual experiment is exact (typically in simulated tests), no problem arises during the identification process and correct results can be found even in the presence of large errors in the initial temperature distribution. If, on the contrary, our description of the phenomenon contains some known or unknown approximations and inaccuracies (as always occurs in true experimental tests), the identification of suspected biases (in this case the error in the initial temperature distribution) must be carried out with caution. In fact, the inverse solution algorithm, whose primary target is to minimize the temperature residuals, might confuse bias in the temperature residuals due to incorrect modeling of the experiment with errors in the initial distribution. On the other hand, initial distribution errors cannot be ignored because of their possible detrimental effects on the evaluation of the thermophysical parameters. A reasonable compromise between these conflicting needs seems to be the following: the identification of the initial temperature distribution within the specimen should always be introduced in the general estimation algorithm, but the final results should be rejected if the identified errors exceed a reasonable initial tolerance (maximum a posteriori case). A correct application of the above procedure requires, however, some practice.

7. Conclusions

With reference to the inverse heat conduction problem (IHCP) applied to the reconstruction of thermophysical properties of materials through tests in a transient regime, the effect of an error in the initial temperature distribution on both the temperature residuals and the estimated values of the thermophysical properties has been analyzed. In order to quantify the magnitude of this effect in a simple way, two suitable correlations are suggested for the linear case with a single unknown parameter to be estimated, the thermal diffusivity. Then, a method is proposed to compensate for this type of error; it has proven very effective both in the simple linear case and in more complex nonlinear ones, with several unknown functions to be simultaneously identified and in a noisy environment.

The analysis has shown that errors in the initial temperature distribution give rise to biases in both the temperature-time residuals and the values of the estimated parameters. Despite the quite small magnitude of the initial bias considered here, the need for compensating such errors is evident. The method proposed to face this problem consists of adding the initial temperature distribution to the “properties” to be identified by the inverse procedure. The method, very efficient when the “experimental” measurements are simulated numerically by solving the corresponding direct problem, should be used with some caution in actual experiments. In that case, in fact, the algorithm could interpret the temperature residuals as coming from an initial temperature error even when they are actually due to an imperfect correspondence between the experimental set-up and the assumed physical model.

Nomenclature

Symbols
a:Thermal diffusivity, m2·s−1
b:Control variation rate, K s−1
c:Specific heat, J·kg−1 K−1
C:Volumetric heat capacity, J·m−3 K−1
C_ref:Reference value at T₀, J·m−3 K−1
C⁺:Normalized vol. heat capacity
E:Error (of some parameter)
Ē:Averaged error (of some function)
E⁺:Nondimensional error
Fo:Fourier number
k:Thermal conductivity, W m−1 K−1
k⁺:Normalized thermal conductivity, m2·s−1
L:Specimen length, m
m:Number of measurements
T:Temperature, K
t:Time, s
t_exp:Time duration of the experiment, s
u:Temperature control, K
x:Spatial coordinate, m.
Greek Symbols
β:Generic unknown parameter
δT:Temperature error (direct problem), K
ε:Temperature error (inverse problem), K
γ_i:Temp. error coeff. (direct problem), K m−i
θ:Nondimensional temperature
ρ:Density, kg m−3
σ:Std. dev. of temp. residuals, K
ω_i:Temp. error coeff. (inverse problem), K m−i.
Subscripts
0:Initial
m:Measured
max:Maximum
exp:Relative to the experiment.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.