Error of Model Parameters
An overdetermined linear equation system that has been solved by minimizing the L2 norm allows an analysis of the errors. We can study not only the deviations between model and data but also the errors of the estimated model parameter vector p_est. The mean deviation between the measured and predicted data points is directly related to the norm of the error vector. The variance is

σ² = 1/(Q − P) · ‖e‖₂² = 1/(Q − P) · ‖d − M p_est‖₂².  (17.62)
According to Eq. (17.61), the estimated parameter vector p_est is a linear combination of the data vector d. Therefore we can apply the error propagation law (Eq. (3.27)) derived in Section 3.3.3. The covariance matrix (for a definition see Eq. (3.19)) of the estimated parameter vector p_est, using (AB)ᵀ = BᵀAᵀ, is given by

cov(p_est) = (MᵀM)⁻¹ Mᵀ cov(d) M (MᵀM)⁻¹.  (17.63)

If the data points are uncorrelated and all have the same variance σ², that is, cov(d) = σ² I, this reduces to

cov(p_est) = (MᵀM)⁻¹ σ².  (17.64)

In this case, (MᵀM)⁻¹ is, except for the factor σ², directly the covariance matrix of the model parameters. This means that the diagonal elements contain the variances of the model parameters.
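The error analysis above can be sketched with NumPy for a simple straight-line fit (the linear regression of Section 17.6.1). All names and numbers here are illustrative, not from the text; the noise level and seed are arbitrary choices.

```python
import numpy as np

# Hypothetical example: fit a line d = p0 + p1*t to noisy data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)                 # Q = 20 data points
M = np.column_stack([np.ones_like(t), t])     # design matrix, P = 2 parameters
p_true = np.array([1.0, 2.0])
d = M @ p_true + 0.1 * rng.standard_normal(t.size)

# Least-squares estimate: p_est = (M^T M)^(-1) M^T d
MtM_inv = np.linalg.inv(M.T @ M)
p_est = MtM_inv @ M.T @ d

# Residual variance, Eq. (17.62): sigma^2 = ||d - M p_est||^2 / (Q - P)
Q, P = M.shape
sigma2 = np.sum((d - M @ p_est) ** 2) / (Q - P)

# Covariance of the parameters, Eq. (17.64); its diagonal holds the
# variances of the individual model parameters.
cov_p = sigma2 * MtM_inv
param_std = np.sqrt(np.diag(cov_p))
```

For a well-conditioned problem like this, `param_std` gives the standard errors of intercept and slope directly, as stated after Eq. (17.64).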
Regularization

So far, the error functional (Eq. (17.55)) only contains a similarity constraint but no regularization or smoothing constraint. For many discrete inverse problems, such as the linear regression discussed in Section 17.6.1, a regularization of the parameters makes no sense. If, however, the parameters to be estimated are the elements of a time series or the pixels of an image, a smoothness constraint makes sense. A suitable smoothness term is then the norm of the time series or image convolved with a derivative filter.
In the language of matrix algebra, the convolution of the parameter vector p with a filter h can be expressed by a vector-matrix multiplication: h ∗ p = H p.
Because of the convolution operation, the matrix H has a special form. Only the coefficients around the diagonal are nonzero, and all values in diagonal direction are the same. As an example, we discuss the same smoothness criterion that we also used in the variational approach (Section 17.3.4), the first derivative. It can be approximated, for instance, by convolution with a forward difference filter, which results in the matrix

        ⎡ −1  1  0  0  …  0 ⎤
    H = ⎢  0 −1  1  0  …  0 ⎥
        ⎣  ⋮        ⋱  ⋱  ⋮ ⎦
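The forward-difference matrix above can be built in a few lines; the signal values used here are illustrative. Applying H is then identical to discrete differentiation of the vector.

```python
import numpy as np

# Forward-difference matrix H for a signal of length N (here N = 5):
# -1 on the diagonal, 1 on the first superdiagonal, as shown above.
N = 5
H = np.zeros((N - 1, N))
for i in range(N - 1):
    H[i, i] = -1.0
    H[i, i + 1] = 1.0

p = np.array([1.0, 4.0, 9.0, 16.0, 25.0])
# H @ p is the convolution of p with the forward-difference filter [-1, 1],
# i.e., the discrete first derivative of the signal.
print(H @ p)   # same result as np.diff(p)
```

Note that H has N − 1 rows: each forward difference consumes one sample at the boundary.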
Minimizing the combined error functional using the L2 norm,

‖e‖₂² = ‖d − M p‖₂² + α² ‖H p‖₂²,  (17.68)

in which the first term is the similarity constraint and the second the smoothness constraint,
results in the following least-squares solution [124]:

p_est = (MᵀM + α² HᵀH)⁻¹ Mᵀ d.  (17.69)
The structure of the solution is similar to the least-squares solution in Eq. (17.56). The smoothness constraint just causes the additional term α² HᵀH. In the next section, we learn how to map an image onto a vector, so that we can apply discrete inverse problems also to images.
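A minimal sketch of the regularized solution Eq. (17.69), applied to denoising a 1-D signal: the design matrix M is the identity (each parameter is observed directly), H is the forward-difference matrix from above, and the value α = 3 is an illustrative choice, not one from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
x = np.linspace(0.0, 2.0 * np.pi, N)
d = np.sin(x) + 0.2 * rng.standard_normal(N)   # noisy measurements

M = np.eye(N)                    # trivial model: data = parameters
H = np.diff(np.eye(N), axis=0)   # forward-difference matrix, (N-1) x N

alpha = 3.0
# Regularized least-squares solution, Eq. (17.69):
# p_est = (M^T M + alpha^2 H^T H)^(-1) M^T d
p_est = np.linalg.solve(M.T @ M + alpha**2 * H.T @ H, M.T @ d)

# The smoothness penalty reduces the norm of the first derivative
print(np.sum(np.diff(p_est) ** 2) < np.sum(np.diff(d) ** 2))   # True
```

Increasing α trades fidelity to the data (the similarity term) for smoothness of the estimate, exactly as the two terms of Eq. (17.68) suggest.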
Figure 17.8: Illustration of algebraic reconstruction from projections: a projection beam d_k crosses the image matrix. All the pixels met by the beam contribute to the projection.
17.6.8 Algebraic Tomographic Reconstruction‡

In this section we discuss an example of a discrete inverse problem that includes image data: reconstruction from projections (Section 8.6). In order to apply the discrete inverse theory as discussed so far, the image data must be mapped onto a vector, the image vector. This mapping is easily performed by renumbering the pixels of the image matrix row by row (Fig. 17.8). In this way, an M × N image matrix is transformed into a column vector with the dimension P = M × N:
p = [p₁, p₂, … , p_P]ᵀ, with p_(m−1)N+n the pixel in row m and column n.  (17.70)
Now we take a single projection beam that crosses the image matrix (Fig. 17.8). Then we can attribute a weighting factor to each pixel of the image vector that represents the contribution of the pixel to the projection beam. We can combine these factors in a P-dimensional vector g_q:

g_q = [g_q,1, g_q,2, … , g_q,p, … , g_q,P]ᵀ.  (17.71)

The total emission or absorption along the qth projection beam, d_q, can then be expressed as the scalar product of the two vectors g_q and p:
d_q = Σ_(p=1)^(P) g_q,p p_p = g_qᵀ p.  (17.72)
If Q projection beams cross the image matrix, we obtain a linear equation system of Q equations and P unknowns:

d = M p,  (17.73)

where d is a Q-dimensional vector, M a Q × P matrix, and p a P-dimensional vector.
The data vector d contains the measured projections and the parameter vector p contains the pixel values of the image matrix that are to be reconstructed. The design matrix M gives the relationship between these two vectors by describing how, in a specific setup, the projection beams cross the image matrix. With appropriate weighting factors, we can directly take into account the limited detector resolution and the size of the radiation source. Algebraic tomographic reconstruction is a general and flexible method. In contrast to the filtered backprojection technique (Section 8.6.3), it is not limited to parallel projection. The beams can cross the image matrix in any manner and can even be curved. In addition, we obtain an estimate of the errors of the reconstruction.
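A toy algebraic reconstruction along these lines: a 2 × 2 "image" is flattened to a vector p (Eq. (17.70)), five projection beams (two rows, two columns, one diagonal) form the design matrix M, and the image is recovered by least squares. The beam geometry and pixel values are illustrative assumptions, not from the text.

```python
import numpy as np

p_true = np.array([1.0, 2.0, 3.0, 4.0])   # pixels p1..p4, row by row

# Each row of the design matrix M holds the weights g_q of one beam
M = np.array([
    [1, 1, 0, 0],   # beam along the first image row
    [0, 0, 1, 1],   # beam along the second image row
    [1, 0, 1, 0],   # beam along the first column
    [0, 1, 0, 1],   # beam along the second column
    [1, 0, 0, 1],   # beam along the main diagonal
], dtype=float)

d = M @ p_true     # measured projections, d = M p  (Eq. 17.73)

# Least-squares reconstruction of the image vector from the projections
p_est, *_ = np.linalg.lstsq(M, d, rcond=None)
print(p_est)       # recovers [1, 2, 3, 4] up to rounding
```

With these five beams the system has full column rank, so the reconstruction is unique; with fewer or less independent beams, `lstsq` would return the minimum-norm solution instead.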