

Error of Model Parameters



An overdetermined linear equation system that has been solved by minimizing the L2 norm allows an analysis of the errors. We can study not only the deviations between model and data but also the errors of the estimated model parameter vector p_est.

The mean deviation between the measured and predicted data points is directly related to the norm of the error vector. The variance is


$$\sigma^2 \;=\; \frac{1}{Q-P}\,\|e\|_2^2 \;=\; \frac{1}{Q-P}\,\|d - M\,p_\mathrm{est}\|_2^2 . \qquad (17.62)$$


In order not to introduce a bias into the estimate of the variance, we divide the norm by the degrees of freedom Q − P and not by Q.
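A minimal numerical sketch of the unbiased variance estimate of Eq. (17.62), using NumPy and synthetic straight-line data (the system size, model, and noise level are hypothetical choices for illustration):

```python
import numpy as np

# Hypothetical overdetermined system: Q = 50 data points, P = 2 parameters
# (straight-line model d = p0 + p1*t), illustrating Eq. (17.62).
rng = np.random.default_rng(0)
Q, P = 50, 2
t = np.linspace(0.0, 1.0, Q)
M = np.column_stack([np.ones(Q), t])          # design matrix, Q x P
d = 1.0 + 2.0 * t + rng.normal(0.0, 0.1, Q)   # noisy data, true sigma = 0.1

# Least-squares estimate p_est minimizing ||d - M p||_2
p_est, *_ = np.linalg.lstsq(M, d, rcond=None)

# Unbiased variance estimate: divide by the degrees of freedom Q - P, not Q
e = d - M @ p_est
sigma2 = (e @ e) / (Q - P)
```

With the degrees-of-freedom correction, sigma2 is an unbiased estimate of the true noise variance 0.01; dividing by Q instead would systematically underestimate it.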

According to Eq. (17.61), the estimated parameter vector p_est is a linear combination of the data vector d. Therefore we can apply the error propagation law (Eq. (3.27)) derived in Section 3.3.3. The covariance matrix (for a definition see Eq. (3.19)) of the estimated parameter vector p_est, using (AB)^T = B^T A^T, is given by


$$\mathrm{cov}(p_\mathrm{est}) \;=\; \left(M^T M\right)^{-1} M^T\,\mathrm{cov}(d)\;M \left(M^T M\right)^{-1} . \qquad (17.63)$$


If the individual elements in the data vector d are uncorrelated and have the same variance σ², i.e., cov(d) = σ²I, Eq. (17.63) reduces to

$$\mathrm{cov}(p_\mathrm{est}) \;=\; \left(M^T M\right)^{-1} \sigma^2 . \qquad (17.64)$$

In this case, (MᵀM)⁻¹ is, apart from the factor σ², directly the covariance matrix of the model parameters. This means that the diagonal elements contain the variances of the model parameters.
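Under the assumption cov(d) = σ²I, Eq. (17.64) can be sketched numerically with NumPy; the line-fit setup and all sizes below are hypothetical:

```python
import numpy as np

# Synthetic line fit: with cov(d) = sigma^2 I, Eq. (17.64) gives
# cov(p_est) = (M^T M)^(-1) sigma^2.
rng = np.random.default_rng(1)
Q, P, sigma = 100, 2, 0.1
t = np.linspace(0.0, 1.0, Q)
M = np.column_stack([np.ones(Q), t])
d = 1.0 + 2.0 * t + rng.normal(0.0, sigma, Q)

p_est, *_ = np.linalg.lstsq(M, d, rcond=None)
e = d - M @ p_est
sigma2 = (e @ e) / (Q - P)                   # unbiased estimate, Eq. (17.62)

cov_p = np.linalg.inv(M.T @ M) * sigma2      # Eq. (17.64)
param_std = np.sqrt(np.diag(cov_p))          # standard errors of the parameters
```

The diagonal of cov_p gives the parameter variances; the off-diagonal element shows that intercept and slope estimates of a line fit are correlated.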


466                                                                     17 Regularization and Modeling

 














Regularization

So far, the error functional (Eq. (17.55)) contains only a similarity constraint but no regularization or smoothing constraint. For many discrete inverse problems, such as the linear regression discussed in Section 17.6.1, a regularization of the parameters makes no sense. If the parameters to be estimated are, however, the elements of a time series or the pixels of an image, a smoothness constraint makes sense. A suitable smoothness parameter could then be the norm of the time series or image convolved with a derivative filter:

 

$$r^2 \;=\; \|h * p\|_2^2 . \qquad (17.65)$$

In the language of matrix algebra, convolution can be expressed as a matrix-vector multiplication:

 

$$r^2 \;=\; \|H\,p\|_2^2 . \qquad (17.66)$$

Because of the convolution operation, the matrix H has a special form. Only the coefficients around the diagonal are nonzero, and all values along each diagonal are the same.
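This banded, constant-diagonal structure can be made concrete by building such a convolution matrix explicitly. The sketch below (NumPy; the forward-difference filter h = [−1, 1] anticipates the example that follows) constructs H so that H @ p equals the convolution of p with h:

```python
import numpy as np

# Convolution matrix H for the forward-difference filter h = [-1, 1]:
# row i computes p[i+1] - p[i]. For a length-P parameter vector this
# gives a (P-1) x P banded matrix with constant diagonals.
def forward_difference_matrix(P):
    H = np.zeros((P - 1, P))
    idx = np.arange(P - 1)
    H[idx, idx] = -1.0       # main diagonal
    H[idx, idx + 1] = 1.0    # first superdiagonal
    return H

H = forward_difference_matrix(5)
p = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
r = H @ p                    # discrete first derivative: [1, 3, 5, 7]
```

Multiplying by H is equivalent to `np.diff(p)`, which makes the convolution-matrix structure easy to verify.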

As an example, we discuss the same smoothness criterion that we also used in the variational approach (Section 17.3.4), the first derivative. It can be approximated, for instance, by convolution with a forward difference filter, which results in the matrix

 − 1    1   0  0... 0 


H = 


0  − 1     1  0... 0

0   0  − 1    1... 0  .                      (17.67)

                                 


. .


...


...


... .


 

Minimizing the combined error functional using the L2 norm:


$$\|e\|_2^2 \;=\; \underbrace{\|d - M\,p\|_2^2}_{\text{similarity}} \;+\; \alpha^2 \underbrace{\|H\,p\|_2^2}_{\text{smoothness}} \qquad (17.68)$$

results in the following least-squares solution [124]:


$$p_\mathrm{est} \;=\; \left(M^T M + \alpha^2 H^T H\right)^{-1} M^T d . \qquad (17.69)$$


 

The structure of the solution is similar to the least-squares solution in Eq. (17.56). The smoothness term merely adds the term α²HᵀH.
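A minimal sketch of Eq. (17.69) with NumPy, smoothing a noisy 1-D signal; the choice M = identity (so the data are direct noisy observations of the parameters), the signal, and α = 3 are all hypothetical:

```python
import numpy as np

# Regularized least squares, Eq. (17.69):
# p_est = (M^T M + alpha^2 H^T H)^(-1) M^T d
rng = np.random.default_rng(2)
P = 200
x = np.linspace(0.0, 2.0 * np.pi, P)
d = np.sin(x) + rng.normal(0.0, 0.2, P)   # noisy observations

M = np.eye(P)                              # identity model matrix
H = np.diff(np.eye(P), axis=0)             # forward-difference matrix, Eq. (17.67)
alpha = 3.0                                # hypothetical smoothness weight

p_est = np.linalg.solve(M.T @ M + alpha**2 * H.T @ H, M.T @ d)
```

Larger α trades fidelity to the data for smoothness of the estimate; here the smoothed p_est lies closer to the underlying sine than the raw data do.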

In the next section, we learn how to map an image onto a vector, so that discrete inverse methods can also be applied to images.


17.6 Discrete Inverse Problems†                                                                      467

 

 


 

 

Figure 17.8: Illustration of algebraic reconstruction from projections: a projection beam dk crosses the image matrix. All the pixels met by the beam contribute to the projection.

 

17.6.8 Algebraic Tomographic Reconstruction‡

In this section we discuss an example of a discrete inverse problem that includes image data: reconstruction from projections (Section 8.6). In order to apply the discrete inverse theory as discussed so far, the image data must be mapped onto a vector, the image vector. This mapping is easily performed by renumbering the pixels of the image matrix row by row (Fig. 17.8). In this way, an M × N image matrix is transformed into a column vector with the dimension P = M × N:


$$p = \left[m_1, m_2, \ldots, m_p, \ldots, m_P\right]^T . \qquad (17.70)$$
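The row-by-row renumbering of Eq. (17.70) corresponds exactly to NumPy's default row-major (C-order) flattening; a small sketch with a hypothetical 3 × 4 image:

```python
import numpy as np

# Row-by-row mapping of an M x N image matrix to an image vector of
# length P = M * N, as in Eq. (17.70).
image = np.arange(12).reshape(3, 4)   # toy 3 x 4 "image"
p = image.reshape(-1)                 # image vector, P = 12, row-major order
restored = p.reshape(3, 4)            # the mapping is trivially inverted
```

Because the mapping is a pure renumbering, no information is lost and the image is recovered exactly by the inverse reshape.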


 

Now we take a single projection beam that crosses the image matrix (Fig. 17.8). Then we can attribute a weighting factor to each pixel of the image vector that represents the contribution of the pixel to the projection beam. We can combine these factors in a P-dimensional vector g_q:

$$g_q = \left[g_{q,1}, g_{q,2}, \ldots, g_{q,p}, \ldots, g_{q,P}\right]^T . \qquad (17.71)$$

The total emission or absorption along the qth projection beam, d_q, can then be expressed as the scalar product of the two vectors g_q and p:

$$d_q \;=\; \sum_{p=1}^{P} g_{q,p}\,m_p \;=\; g_q^T\,p . \qquad (17.72)$$



 

If Q projection beams cross the image matrix, we obtain a linear equation system of Q equations and P unknowns:

$$\underbrace{d}_{Q} \;=\; \underbrace{M}_{Q \times P}\;\underbrace{p}_{P} . \qquad (17.73)$$

 

The data vector d contains the measured projections and the parameter vector p contains the pixel values of the image matrix that are to be reconstructed. The design matrix M gives the relationship between these two vectors by describing how, in a specific setup, the projection beams cross the image matrix. With appropriate weighting factors, we can directly take into account the limited detector resolution and the size of the radiation source.

Algebraic tomographic reconstruction is a general and flexible method. In contrast to the filtered backprojection technique (Section 8.6.3), it is not limited to parallel projection. The beams can cross the image matrix in any manner and can even be curved. In addition, we obtain an estimate of the errors of the reconstruction.

However, algebraic reconstruction involves solving huge linear equation systems. At this point, it is helpful to illustrate the enormous size of these equation systems. In a typical problem, the model vector includes all pixels of an image. Even with moderate resolution, e.g., 256 × 256 pixels, the inverse of a 65536 × 65536 matrix would have to be computed. This matrix contains about 4 · 10⁹ points and does not fit into the memory of any but the most powerful computers. Thus alternative solution techniques are required.

 

