4.2.8 Error Propagation with Filtering‡
Filters are applied to measured data that contain noise. It is therefore important to know how the statistical properties of the filtered data can be inferred from those of the original data. In principle, we solved this question in Section 3.3.3. According to Eq. (3.27), the covariance matrix of the linear combination g' = M g of a random vector g is given as

$$\operatorname{cov}(\boldsymbol{g}') = \mathbf{M}\,\operatorname{cov}(\boldsymbol{g})\,\mathbf{M}^T. \qquad (4.33)$$

Now we need to apply this result to the special case of a convolution. First, we consider only 1-D signals. We assume that the covariance matrix of the signal is homogeneous, i.e., it depends only on the distance between points and not on the position itself. Then the variance σ² is equal for all elements. Furthermore, the values on the side diagonals are also equal and the covariance matrix takes the simple form
$$
\mathbf{C} = \begin{bmatrix}
C_0 & C_1 & C_2 & \cdots \\
C_{-1} & C_0 & C_1 & \cdots \\
C_{-2} & C_{-1} & C_0 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}. \qquad (4.34)
$$
Because the linear combinations described by M have the special form of a convolution, the matrix has the same form as the homogeneous covariance matrix. For a filter with three coefficients, M reduces to

$$
\mathbf{M} = \begin{bmatrix}
h_0 & h_{-1} & 0 & 0 & \cdots \\
h_1 & h_0 & h_{-1} & 0 & \cdots \\
0 & h_1 & h_0 & h_{-1} & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix}. \qquad (4.35)
$$
Apart from edge effects, the matrix multiplications in Eq. (4.33) reduce to convolution operations. We introduce the autocovariance vector c = [..., C₋₁, C₀, C₁, ...]ᵀ. Then we can write Eq. (4.33) as

$$\boldsymbol{c}' = \bar{h} * \boldsymbol{c} * h = \boldsymbol{c} * (\bar{h} * h) = \boldsymbol{c} \star (h \star h), \qquad (4.36)$$

where h̄ denotes the mirrored mask (h̄ₙ = h₋ₙ) and ⋆ the correlation operation.
In the case of uncorrelated data, the autocovariance vector is a delta function and the autocovariance vector of the noise of the filtered vector reduces to

$$\boldsymbol{c}' = \sigma^2 (h \star h). \qquad (4.37)$$
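As a quick numerical check of Eq. (4.37) (not part of the text; the mask, variance, and sample size are arbitrary illustrative choices), the sketch below filters white noise and compares the measured variance and lag-1 covariance of the output with the predicted elements of σ²(h ⋆ h):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0
g = rng.normal(0.0, sigma, size=1_000_000)   # uncorrelated noise, variance sigma^2
h = np.array([0.25, 0.5, 0.25])              # small smoothing mask

gf = np.convolve(g, h, mode="valid")         # filtered signal

# Eq. (4.37): autocovariance of the filtered noise is sigma^2 (h ⋆ h).
var_pred = sigma**2 * np.sum(h**2)           # zero-lag element
cov1_pred = sigma**2 * np.sum(h[:-1] * h[1:])  # lag-1 element

var_meas = gf.var()
cov1_meas = np.mean((gf[:-1] - gf.mean()) * (gf[1:] - gf.mean()))
print(var_pred, var_meas)    # predicted 1.5, measured close to it
print(cov1_pred, cov1_meas)  # predicted 1.0, measured close to it
```

Note that the filtered noise is no longer uncorrelated: neighboring output values share input samples, which is exactly what the nonzero side elements of h ⋆ h express.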
Because the covariance vector of a convolved signal can be described by a correlation, we can also compute the change in the noise spectrum, i.e., the power spectrum of the noise, caused by a convolution operation. We only need to Fourier transform Eq. (4.36) under consideration of the correlation theorem (R7). Then we get

$$\hat{c}'(k) = \hat{c}(k)\,\bigl|\hat{h}(k)\bigr|^2. \qquad (4.38)$$
This means that the noise spectrum of a convolved signal is given by the multiplication of the noise spectrum of the input data by the square of the transfer function of the filter. With Eqs. (4.36) and (4.38) we have everything at hand to compute the changes of the statistical parameters of a signal (variance, autocovariance vector, and noise spectrum) caused by a filter operation. Going back from Eq. (4.38), we can conclude that Eq. (4.36) is valid not only for 1-D signals but for signals of arbitrary dimension.

4.2.9 Convolution, Linearity, and Shift Invariance‡
Here we show that a linear shift-invariant operator H is necessarily a convolution operator. Let ᵐⁿP denote the image with a single unit impulse at the position (m, n), and ᵐⁿS the operator that shifts an image by (m, n). The point spread function H of the operator is its response to an impulse at the origin:

$$\mathbf{H} = \mathcal{H}\,{}^{00}\mathbf{P}. \qquad (4.39)$$

Then

$$
\begin{aligned}
(\mathcal{H}\mathbf{G})_{mn}
&= \Bigl(\mathcal{H}\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\,{}^{m'n'}\mathbf{P}\Bigr)_{mn} && \text{with Eq. (4.16)} \\
&= \Bigl(\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\,\mathcal{H}\,{}^{m'n'}\mathbf{P}\Bigr)_{mn} && \text{linearity} \\
&= \Bigl(\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\,\mathcal{H}\,{}^{m'n'}\mathbf{S}\,{}^{00}\mathbf{P}\Bigr)_{mn} && \text{with Eq. (4.17)} \\
&= \Bigl(\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\,{}^{m'n'}\mathbf{S}\,\mathcal{H}\,{}^{00}\mathbf{P}\Bigr)_{mn} && \text{shift invariance} \\
&= \Bigl(\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\,{}^{m'n'}\mathbf{S}\,\mathbf{H}\Bigr)_{mn} && \text{with Eq. (4.39)} \\
&= \sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\,h_{m-m',\,n-n'} && \text{with Eq. (4.17)} \\
&= \sum_{m''=0}^{M-1}\sum_{n''=0}^{N-1} g_{m-m'',\,n-n''}\,h_{m''n''} && m'' = m - m',\; n'' = n - n'.
\end{aligned}
$$

These calculations prove that a linear shift-invariant operator must necessarily be a convolution operation in the space domain. There is no other operator type that is both linear and shift invariant.
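The proof above can be illustrated numerically (this sketch is not from the text; the three-point smoothing rule inside the black box is an arbitrary stand-in for any LSI operator): probe an operator with an impulse, then check that convolving with the measured impulse response reproduces the operator exactly.

```python
import numpy as np

# A linear shift-invariant operator given only as a black box
# (here: a circular three-point smoothing rule, chosen arbitrarily):
def H(g):
    return 0.5 * g + 0.25 * np.roll(g, 1) + 0.25 * np.roll(g, -1)

N = 64
delta = np.zeros(N)
delta[0] = 1.0
h = H(delta)                      # measured impulse response (PSF)

rng = np.random.default_rng(0)
g = rng.normal(size=N)

# Circular convolution with the impulse response (matching np.roll's wrap-around):
conv = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(h)))
direct = H(g)
print(np.max(np.abs(conv - direct)))  # agreement down to rounding error
```

Because the operator is linear and shift invariant, the impulse response alone determines its action on every input.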
4.2.10 Inverse Operators‡

Can we invert a filter operation so that we can get back the original image from a filtered image? This question is significant because degradations such as image blurring by motion or by defocused optics can also be regarded as filter operations (Section 7.6.1). If an inverse operator exists and if we know the point spread function of the degradation, we can reconstruct the original, undisturbed image. The problem of inverting a filter operation is known as deconvolution or inverse filtering.

By considering the filter operation in the Fourier domain, we immediately recognize that we can only reconstruct those wave numbers for which the transfer function of the filter does not vanish. In practice, the conditions for the inversion of a filter operation are much more restricted because of the limited quality of the image signals. If a wave number is attenuated below a critical level, which depends on the noise and quantization (Section 9.4), it will not be recoverable. It is obvious that these conditions considerably limit the power of straightforward inverse filtering. The problem of inverse filtering is considered further in Section 17.8.
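A minimal sketch of straightforward inverse filtering in the Fourier domain (not from the text; the 8-point box blur, the signal, and the threshold 10⁻³ are arbitrary illustrative choices). Wave numbers where the transfer function falls below the threshold are simply skipped, so they remain lost, exactly as argued above:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
g = np.cumsum(rng.normal(size=N))       # some smooth-ish test signal
hpad = np.zeros(N)
hpad[:8] = 1.0 / 8.0                    # degradation PSF: 8-point box blur

G, Hf = np.fft.fft(g), np.fft.fft(hpad)
blurred = np.real(np.fft.ifft(G * Hf))  # circular blur

# Naive inverse filter, applied only where |H| is above a threshold;
# wave numbers where the transfer function (nearly) vanishes stay unrecovered.
eps = 1e-3
Hinv = np.where(np.abs(Hf) > eps, 1.0 / Hf, 0.0)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * Hinv))
print(np.max(np.abs(restored - g)))     # residual error sits at the lost wave numbers
```

The box filter has exact zeros in its transfer function, so the restored signal matches the original only in the Fourier components that survived the blur.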
4.2.11 Eigenfunctions‡

Next we are interested in the question whether special types of images E exist that are preserved by a linear shift-invariant operator, except for multiplication with a scalar. Intuitively, it is clear that such images have a special importance for LSI operators. Mathematically speaking, this means

$$\mathcal{H}\mathbf{E} = \lambda \mathbf{E}. \qquad (4.40)$$

A vector (image) that meets this condition is called an eigenvector (eigenimage) or characteristic vector of the operator, and the scaling factor λ an eigenvalue or characteristic value of the operator.
A simple set of candidate eigenimages is given by the basis images of the discrete Fourier transform, the complex exponentials ᵘᵛW with the elements

$$ {}^{uv}w_{mn} = \exp\Bigl(\frac{2\pi\mathrm{i}\,mu}{M}\Bigr)\exp\Bigl(\frac{2\pi\mathrm{i}\,nv}{N}\Bigr). \qquad (4.41)$$
Now we are curious to learn whether any linear shift-invariant operator has such a handy set of eigenimages. It turns out that all linear shift-invariant operators have the same set of eigenimages. We can prove this statement by referring to the convolution theorem (Section 2.3, Theorem 4, p. 52), which states that convolution is a point-wise multiplication in Fourier space. Thus each element ĝ_uv of the image representation in Fourier space is multiplied by the complex scalar ĥ_uv. Each point in Fourier space represents a base image, namely the complex exponential ᵘᵛW in Eq. (4.41) multiplied by the scalar ĝ_uv. Therefore, the complex exponentials are eigenfunctions of any convolution operator. The eigenvalues are the elements of the transfer function, ĥ_uv. In conclusion, we can write

$$\mathcal{H}\bigl(\hat{g}_{uv}\,{}^{uv}\mathbf{W}\bigr) = \hat{h}_{uv}\,\hat{g}_{uv}\,{}^{uv}\mathbf{W}. \qquad (4.43)$$

The fact that the eigenfunctions of LSI operators are the basis functions of the Fourier domain explains why convolution reduces to a multiplication in Fourier space and underlines the central importance of the Fourier transform for image processing.
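The eigenimage property can be verified directly (a numerical sketch, not from the text; the 8 × 8 image size, the wave numbers u, v, and the convolution mask are arbitrary choices): convolving a basis image ᵘᵛW with any mask only rescales it by the corresponding element of the transfer function.

```python
import numpy as np

M, N = 8, 8
u, v = 2, 3
m = np.arange(M)[:, None]
n = np.arange(N)[None, :]
W = np.exp(2j * np.pi * m * u / M) * np.exp(2j * np.pi * n * v / N)  # Eq. (4.41)

h = np.zeros((M, N))
h[0, 0], h[0, 1], h[1, 0] = 0.5, 0.3, 0.2   # arbitrary convolution mask

HW = np.fft.ifft2(np.fft.fft2(W) * np.fft.fft2(h))   # circular convolution
lam = np.fft.fft2(h)[u, v]                           # eigenvalue: transfer function element
print(np.max(np.abs(HW - lam * W)))                  # the basis image is only rescaled
```

Changing the mask h changes the eigenvalue λ but never the eigenimage, which is exactly the statement of Eq. (4.43).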
Figure 4.3: Point spread function of the recursive filter g'ₙ = α g'ₙ₋₁ + (1 − α)gₙ for a α = 1/2 and b α = 15/16.
4.3 Recursive Filters‡

4.3.1 Introduction‡

As convolution requires many operations, the question arises whether it is possible or even advantageous to include the already convolved neighboring gray values into the convolution at the next pixel. In this way, we might be able to perform a convolution with fewer operations. In effect, we are able to perform convolutions with much less computational effort and also more flexibility. However, these filters, which are called recursive filters, are much more difficult to understand and to handle, especially in the multidimensional case.

For a first impression, we consider a very simple example. The simplest 1-D recursive filter we can think of has the general form

$$g'_n = \alpha\, g'_{n-1} + (1-\alpha)\, g_n. \qquad (4.44)$$
With recursive filters, the point spread function is no longer identical to the filter mask, but must be computed. From Eq. (4.44), we can calculate the point spread function or impulse response of the filter as the response of the filter to the discrete delta function (Section 4.2.5)

$$\delta_n = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0. \end{cases} \qquad (4.45)$$

Recursively applying Eq. (4.44), we obtain

$$h_n = (1-\alpha)\,\alpha^{n}, \quad n \ge 0. \qquad (4.46)$$

This equation shows three typical general properties of recursive filters: first, although the filter has only a finite number of coefficients, its impulse response is infinite; second, the impulse response is asymmetric, extending only in the direction in which the filter runs; and third, the impulse response decays, and the filter is thus stable, only for |α| < 1.
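A short numerical check (not from the text; the value α = 1/2 and the length 20 are arbitrary choices): running the recursion on a discrete delta function reproduces the impulse response (1 − α)αⁿ for n ≥ 0, cf. Eq. (4.61).

```python
import numpy as np

alpha = 0.5
N = 20
delta = np.zeros(N)
delta[0] = 1.0

# Run the recursive filter g'_n = alpha*g'_{n-1} + (1-alpha)*g_n on the delta:
out = np.zeros(N)
prev = 0.0
for n in range(N):
    prev = alpha * prev + (1 - alpha) * delta[n]
    out[n] = prev

expected = (1 - alpha) * alpha ** np.arange(N)   # (1 - α) α^n for n ≥ 0
print(np.max(np.abs(out - expected)))            # agreement down to rounding error
```

Although the filter has only two coefficients, the response decays geometrically and never reaches zero exactly, illustrating the infinite impulse response.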
4.3.2 Transfer Function, z-Transform, and Stable Response‡

After this introductory example, we are ready for a more formal discussion of recursive filters. Recursive filters include results from previous convolutions at neighboring pixels into the convolution sum and thus become directional. We discuss here only 1-D recursive filters. The general equation for a filter running from left to right is
$$g'_n = -\sum_{n''=1}^{S} a_{n''}\, g'_{n-n''} + \sum_{n'=-R}^{R} h_{n'}\, g_{n-n'}. \qquad (4.47)$$

While the neighborhood of the nonrecursive part (coefficients h) is symmetric around the central point, the recursive part (coefficients a) uses only previously computed values. Such a recursive filter is called a causal filter. If we put the recursive part on the left-hand side of the equation, we observe that the recursive filter is equivalent to the following difference equation, also known as an ARMA(S, R) process (autoregressive-moving average process):
$$\sum_{n''=0}^{S} a_{n''}\, g'_{n-n''} = \sum_{n'=-R}^{R} h_{n'}\, g_{n-n'} \quad \text{with } a_0 = 1. \qquad (4.48)$$

The transfer function of such a filter with a recursive and a nonrecursive part can be computed by applying the discrete Fourier transform (Section 2.3.2) and making use of the shift theorem (Theorem 3, p. 52). Then
$$\hat{g}'(k)\sum_{n''=0}^{S} a_{n''}\exp(-2\pi\mathrm{i}\,n''k) = \hat{g}(k)\sum_{n'=-R}^{R} h_{n'}\exp(-2\pi\mathrm{i}\,n'k). \qquad (4.49)$$

Thus the transfer function is

$$\hat{h}(k) = \frac{\hat{g}'(k)}{\hat{g}(k)} = \frac{\displaystyle\sum_{n'=-R}^{R} h_{n'}\exp(-2\pi\mathrm{i}\,n'k)}{\displaystyle\sum_{n''=0}^{S} a_{n''}\exp(-2\pi\mathrm{i}\,n''k)}. \qquad (4.50)$$

The properties of the transfer function are governed by the zeros of the numerator and the denominator. A zero in the nonrecursive part causes a zero in the transfer function, i.e., vanishing of the corresponding wave number. A zero in the recursive part causes a pole in the transfer function, i.e., an infinite response.

A determination of the zeros, and thus a deeper analysis of the transfer function, is not possible from Eq. (4.50). It requires an extension similar to the extension from real numbers to complex numbers that was used to introduce the Fourier transform (Section 2.3.2). We observe that the expressions for both the numerator and the denominator are polynomials in the complex exponential exp(2πik) of the form
$$\sum_{n=0}^{N} a_n \bigl(\exp(-2\pi\mathrm{i}k)\bigr)^{n}. \qquad (4.51)$$

The complex exponential exp(2πik) has a magnitude of one; we extend it to a general complex number z = r exp(2πik) with arbitrary magnitude r. With this extension we obtain a polynomial in the complex number z. We can then apply the fundamental theorem of algebra, which states that any polynomial of degree N can be factorized into N factors containing the roots or zeros of the polynomial:

$$\sum_{n=0}^{N} a_n z^{-n} = a_0 \prod_{n=1}^{N}\bigl(1 - r_n z^{-1}\bigr). \qquad (4.52)$$

With Eq. (4.52) we can factorize the recursive and nonrecursive parts of the polynomials in the transfer function into the following products:
$$\sum_{n=0}^{S} a_n z^{-n} = \prod_{n=1}^{S}\bigl(1 - d_n z^{-1}\bigr), \qquad
\sum_{n=-R}^{R} h_n z^{-n} = h_{-R}\, z^{R} \prod_{n=1}^{2R}\bigl(1 - c_n z^{-1}\bigr). \qquad (4.53)$$

With z = exp(2πik), the transfer function can finally be written as

$$\hat{h}(z) = h_{-R}\, z^{R}\, \frac{\displaystyle\prod_{n'=1}^{2R}\bigl(1 - c_{n'} z^{-1}\bigr)}{\displaystyle\prod_{n''=1}^{S}\bigl(1 - d_{n''} z^{-1}\bigr)}. \qquad (4.54)$$
Each of the factors c_{n'} and d_{n''} is a zero of the corresponding polynomial (z = c_{n'} or z = d_{n''}). The inclusion of the factor r in the extended transfer function results in an extension of the Fourier transform, the z-transform, which is defined as

$$\hat{g}(z) = \sum_{n=-\infty}^{\infty} g_n z^{-n}. \qquad (4.55)$$

The z-transform of the series gₙ can be regarded as the Fourier transform of the series gₙr⁻ⁿ [112]. The z-transform is the key mathematical tool to understand 1-D recursive filters. It is the discrete analogue of the Laplace transform. Detailed accounts of the z-transform are given by Oppenheim and Schafer [133] and Poularikas [141]; the 2-D z-transform is discussed by Lim [112].

Now we analyze the transfer function in more detail. The factorization of the transfer function is a significant advantage because each factor can be regarded as an individual filter. Thus each recursive filter can be decomposed into a cascade of simple recursive filters. As the factors are all of the form

$$f_n(\tilde{k}) = 1 - c_n \exp(-2\pi\mathrm{i}\tilde{k}), \qquad (4.56)$$

it is sufficient to analyze these elementary filters: a factor with a real coefficient cₙ leads to the relaxation filter discussed in Section 4.3.5, and a pair of complex-conjugate coefficients leads to the resonance filter discussed in Section 4.3.6.
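The transfer function formula Eq. (4.50) can be checked numerically (a sketch, not from the text; the choice α = 1/2 and the signal length are arbitrary). For the simple recursive filter of Eq. (4.44), the coefficients in the notation of Eq. (4.48) are h₀ = 1 − α and a₁ = −α, so Eq. (4.50) predicts ĥ(k) = (1 − α)/(1 − α exp(−2πik)). This is compared with the DFT of the measured impulse response:

```python
import numpy as np

alpha = 0.5
N = 256
k = np.arange(N) / N     # wave number as a fraction of the sampling frequency

# Prediction from Eq. (4.50) with h_0 = 1 - alpha, a_0 = 1, a_1 = -alpha:
H_pred = (1 - alpha) / (1 - alpha * np.exp(-2j * np.pi * k))

# Measurement: run the recursion on a delta impulse, then take the DFT.
imp = np.zeros(N)
prev = 0.0
for i in range(N):
    prev = alpha * prev + (1 - alpha) * (1.0 if i == 0 else 0.0)
    imp[i] = prev
H_meas = np.fft.fft(imp)

print(np.max(np.abs(H_meas - H_pred)))   # only a tiny truncation error remains
```

The residual comes from truncating the infinite impulse response at N samples; for |α| < 1 it is negligible, another view of the stability condition.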
4.3.3 Higher-Dimensional Recursive Filters‡

Recursive filters can also be defined in higher dimensions with the same type of equation as in Eq. (4.47); the transfer function and z-transform of higher-dimensional recursive filters can also be written in the very same way as in Eq. (4.50). However, it is generally not possible to factorize the z-transform as in Eq. (4.54) [112]. From Eq. (4.54) we can immediately conclude that it is possible to factorize a separable recursive filter, because then the higher-dimensional polynomials can be factorized into 1-D polynomials. Given these inherent mathematical difficulties of higher-dimensional recursive filters, we will restrict the further discussion to 1-D recursive filters.
4.3.4 Symmetric Recursive Filtering‡

While a filter that uses only previous data is natural and useful for real-time processing of time series, it makes little sense for spatial data. There is no "before" and "after" in spatial data. Even worse is the signal-dependent spatial shift (delay) associated with recursive filters. With a single recursive filter it is impossible to construct a so-called zero-phase filter with an even transfer function. Thus it is necessary to combine multiple recursive filters. The combination should result either in a zero-phase filter suitable for smoothing operations or in a derivative filter that shifts the phase by 90°. Thus the transfer function should be either purely real or purely imaginary (Section 2.3.5).
We start with a 1-D causal recursive filter that has the transfer function

$${}^{+}\hat{h}(\tilde{k}) = a(\tilde{k}) + \mathrm{i}\,b(\tilde{k}). \qquad (4.57)$$

The index + denotes that the filter runs in the positive direction. The transfer function of the same filter running in the opposite direction is

$${}^{-}\hat{h}(\tilde{k}) = a(\tilde{k}) - \mathrm{i}\,b(\tilde{k}). \qquad (4.58)$$

Thus, only the sign of the imaginary part of the transfer function changes when the filter direction is reversed. We now have three possibilities to combine the two transfer functions (Eqs. (4.57) and (4.58)) into either a purely real or a purely imaginary transfer function:
$$
\begin{aligned}
\text{Addition} \qquad & {}^{e}\hat{h}(\tilde{k}) = \tfrac{1}{2}\bigl({}^{+}\hat{h}(\tilde{k}) + {}^{-}\hat{h}(\tilde{k})\bigr) = a(\tilde{k}), \\
\text{Subtraction} \qquad & {}^{o}\hat{h}(\tilde{k}) = \tfrac{1}{2}\bigl({}^{+}\hat{h}(\tilde{k}) - {}^{-}\hat{h}(\tilde{k})\bigr) = \mathrm{i}\,b(\tilde{k}), \\
\text{Multiplication} \qquad & \hat{h}(\tilde{k}) = {}^{+}\hat{h}(\tilde{k})\;{}^{-}\hat{h}(\tilde{k}) = a^2(\tilde{k}) + b^2(\tilde{k}).
\end{aligned} \qquad (4.59)
$$
Addition and multiplication (consecutive application) of the left- and right-running filters yield filters of even symmetry and a real transfer function, while subtraction results in a filter of odd symmetry and a purely imaginary transfer function.
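The multiplication case can be demonstrated numerically (a sketch, not from the text; the relaxation filter with α = 1/2 and the signal length are arbitrary choices): applying the causal filter forward and then backward yields a symmetric point spread function and hence a purely real, nonnegative transfer function.

```python
import numpy as np

alpha = 0.5

def forward(g):
    # causal relaxation filter g'_n = alpha*g'_{n-1} + (1-alpha)*g_n
    out = np.empty_like(g)
    prev = 0.0
    for i in range(len(g)):
        prev = alpha * prev + (1 - alpha) * g[i]
        out[i] = prev
    return out

def backward(g):
    return forward(g[::-1])[::-1]

N = 512
delta = np.zeros(N)
delta[N // 2] = 1.0
psf = backward(forward(delta))          # consecutive application (multiplication case)

H = np.fft.fft(np.fft.ifftshift(psf))   # center the symmetric PSF at index 0
print(np.max(np.abs(H.imag)))           # zero-phase: imaginary part vanishes
```

The transfer function equals |⁺ĥ|², so it is real and nonnegative: the combined filter introduces no phase shift, only attenuation.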
4.3.5 Relaxation Filters‡

The simple recursive filter discussed in Section 4.3.1,

$$g'_n = a_1\, g'_{n\mp 1} + h_0\, g_n \quad \text{with } a_1 = \alpha,\; h_0 = (1-\alpha), \qquad (4.60)$$

with the point spread function

$${}^{\pm}r_{\pm n} = \begin{cases} (1-\alpha)\,\alpha^{n} & n \ge 0 \\ 0 & \text{otherwise}, \end{cases} \qquad (4.61)$$

is a relaxation filter. The transfer function of the filter running in either the forward or the backward direction is, according to Eq. (4.50) with Eq. (4.60), given by

$${}^{\pm}\hat{r}(\tilde{k}) = \frac{1-\alpha}{1 - \alpha\exp(\mp\pi\mathrm{i}\tilde{k})} \quad \text{with } \alpha \in \mathbb{R}. \qquad (4.62)$$

The transfer function Eq. (4.62) is complex and can be divided into its real and imaginary parts as

$${}^{\pm}\hat{r}(\tilde{k}) = \frac{1-\alpha}{1 - 2\alpha\cos\pi\tilde{k} + \alpha^2}\,\bigl[(1 - \alpha\cos\pi\tilde{k}) \mp \mathrm{i}\,\alpha\sin\pi\tilde{k}\bigr]. \qquad (4.63)$$
Figure 4.4: Transfer function of the relaxation filter g'ₙ = α g'ₙ∓₁ + (1 − α)gₙ applied first in the forward and then in the backward direction, plotted over the logarithmic wave number k̃, for a positive (α = 1/2, 15/16, 31/32) and b negative (α = −1/16) values of α as indicated.
Figure 4.5: Analog filters for time series. a Black-box model: a signal Uᵢ is put into an unknown system and at the output we measure the signal Uₒ. b A resistor-capacitor circuit as a simple example of an analog lowpass filter. c Damped resonance filter consisting of an inductor L, a resistor R, and a capacitor C.
After Eq. (4.59), we can then compute the transfer function r̂ for the resulting symmetric filter if we apply the relaxation filters successively in the positive and negative directions:

$$\hat{r}(\tilde{k}) = \frac{(1-\alpha)^2}{1 - 2\alpha\cos\pi\tilde{k} + \alpha^2} = \frac{1}{1 + \beta - \beta\cos\pi\tilde{k}} \qquad (4.64)$$

with

$$\beta = \frac{2\alpha}{(1-\alpha)^2} \quad \text{and} \quad \alpha = \frac{1 + \beta - \sqrt{1 + 2\beta}}{\beta}.$$

A Taylor expansion for small wave numbers yields

$$\hat{r}(\tilde{k}) \approx 1 - \frac{\alpha}{(1-\alpha)^2}\,(\pi\tilde{k})^2 + \frac{\alpha(1 + 10\alpha + \alpha^2)}{12\,(1-\alpha)^4}\,(\pi\tilde{k})^4. \qquad (4.65)$$

If α is positive, the filter is a low-pass filter (Fig. 4.4a). It can be tuned by adjusting α. As α approaches 1, the averaging distance becomes infinite. For negative α, the filter enhances high wave numbers (Fig. 4.4b).

This filter is the discrete analog of the first-order differential equation τẏ + y = 0, describing a relaxation process with the relaxation time τ = −Δt / ln α.
Figure 4.6: a Magnitude and b phase shift of the transfer function of the resonance filter according to Eq. (4.67) for k̃₀ = 1/4 and values of r as indicated.
An example is the simple resistor-capacitor circuit shown in Fig. 4.5b. The differential equation for this filter can be derived from Kirchhoff's current-sum law. The current flowing through the resistor from Uᵢ to Uₒ must be equal to the current flowing into the capacitor. Since the current flowing into a capacitor is proportional to the temporal derivative of the potential Uₒ, we end up with the first-order differential equation

$$\frac{U_i - U_o}{R} = C\,\frac{\partial U_o}{\partial t}, \qquad (4.66)$$

and the time constant is given by τ = RC.

4.3.6 Resonance Filters‡

The second basic type of recursive filter that we found from the discussion of the transfer function in Section 4.3.2 has a pair of complex-conjugate zeros. Therefore, the transfer function of this filter running in the forward or backward direction is

$$
\begin{aligned}
{}^{\pm}\hat{s}(\tilde{k}) &= \frac{1}{\bigl(1 - r\exp(\mathrm{i}\pi\tilde{k}_0)\exp(\mp\mathrm{i}\pi\tilde{k})\bigr)\bigl(1 - r\exp(-\mathrm{i}\pi\tilde{k}_0)\exp(\mp\mathrm{i}\pi\tilde{k})\bigr)} \\[4pt]
&= \frac{1}{1 - 2r\cos(\pi\tilde{k}_0)\exp(\mp\mathrm{i}\pi\tilde{k}) + r^2\exp(\mp 2\mathrm{i}\pi\tilde{k})}.
\end{aligned} \qquad (4.67)
$$

The second row of the equation shows that this recursive filter has the coefficients h₀ = 1, a₁ = −2r cos(πk̃₀), and a₂ = r², so that

$$g'_n = g_n + 2r\cos(\pi\tilde{k}_0)\,g'_{n\mp 1} - r^2\, g'_{n\mp 2}. \qquad (4.68)$$

From the transfer function in Eq. (4.67) we conclude that this filter is a bandpass filter with a passband wave number of ±k̃₀ (Fig. 4.6). For r = 1, the transfer function has two poles at k̃ = ±k̃₀. The impulse response of this filter is, after [133],

$$h_{\pm n} = \begin{cases} \dfrac{r^{n}\sin[(n+1)\pi\tilde{k}_0]}{\sin(\pi\tilde{k}_0)} & n \ge 0 \\[4pt] 0 & n < 0. \end{cases} \qquad (4.69)$$
Figure 4.7: Point spread function of the recursive resonance filter according to Eq. (4.68) for a k̃₀ = 1/4, r = 3/4 and b k̃₀ = 1/4, r = 15/16.
This means that the filter acts as a damped oscillator. The parameter k̃₀ gives the wave number of the oscillation and the parameter r is the damping constant (Fig. 4.7). The filter is only stable if r ≤ 1. If we run the filter back and forth, the resulting filter has a real transfer function ŝ(k̃) = ⁺ŝ(k̃) ⁻ŝ(k̃) that is given by

$$\hat{s}(\tilde{k}) = \frac{1}{\bigl(1 - 2r\cos[\pi(\tilde{k}-\tilde{k}_0)] + r^2\bigr)\bigl(1 - 2r\cos[\pi(\tilde{k}+\tilde{k}_0)] + r^2\bigr)}. \qquad (4.70)$$

The transfer function of this filter can be normalized so that its maximal value becomes 1 in the passband by setting the nonrecursive filter coefficient h₀ to (1 − r²) sin(πk̃₀). Then we obtain the following modified recursion:

$$g'_n = (1 - r^2)\sin(\pi\tilde{k}_0)\,g_n + 2r\cos(\pi\tilde{k}_0)\,g'_{n\mp 1} - r^2\, g'_{n\mp 2}. \qquad (4.71)$$

For symmetry reasons, the factors become most simple for a resonance wave number of k̃₀ = 1/2. Then the recursive filter reduces to

$$g'_n = (1 - r^2)\,g_n - r^2\, g'_{n\mp 2} \qquad (4.72)$$

with the transfer function
$$\hat{s}(\tilde{k}) = \frac{(1 - r^2)^2}{1 + r^4 + 2r^2\cos(2\pi\tilde{k})}. \qquad (4.73)$$

The maximum response of this filter at k̃ = 1/2 is one and the minimum response at k̃ = 0 and k̃ = 1 is ((1 − r²)/(1 + r²))².
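The bandpass behavior and the normalization of Eq. (4.71) can be checked numerically (a sketch, not from the text; the values r = 3/4 and k̃₀ = 1/4 match Fig. 4.7a but the signal length is an arbitrary choice). Running the normalized recursion forward and backward gives a real transfer function with a peak response close to one near k̃₀:

```python
import numpy as np

r, k0 = 0.75, 0.25     # damping r and resonance wave number (Nyquist = 1)
N = 1024

def resonance(g):
    # normalized recursion of Eq. (4.71)
    out = np.zeros_like(g)
    for n in range(len(g)):
        out[n] = ((1 - r ** 2) * np.sin(np.pi * k0) * g[n]
                  + 2 * r * np.cos(np.pi * k0) * (out[n - 1] if n >= 1 else 0.0)
                  - r ** 2 * (out[n - 2] if n >= 2 else 0.0))
    return out

delta = np.zeros(N)
delta[N // 2] = 1.0
psf = resonance(resonance(delta)[::-1])[::-1]     # forward, then backward pass
H = np.fft.fft(np.fft.ifftshift(psf)).real
k = 2 * np.arange(N) / N                          # wave number scale, Nyquist = 1

peak = k[np.argmax(H[:N // 2])]
print(peak, H.max())    # passband near k0, peak response close to 1
```

For moderate damping the peak sits slightly below k̃₀ because the two cosine factors in Eq. (4.70) pull it toward lower wave numbers.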
The resonance filter is the discrete analog of the damped harmonic oscillator (Fig. 4.5c); its parameters are related to the time constant τ and the circular eigenfrequency ω₀ of the oscillator by

$$r = \exp(-\Delta t/\tau) \quad \text{and} \quad \tilde{k}_0 = \omega_0\,\Delta t/\pi. \qquad (4.74)$$
4.3.7 LSI Filters and System Theory‡

The last example of the damped oscillator illustrates that there is a close relationship between discrete filter operations and analog physical systems. Thus, digital filters model a real-world physical process: they describe how the corresponding system would respond to a given input signal g. We will actually make use of this equivalence in our discussion of image formation in Chapter 7. There we will find that imaging with a homogeneous optical system is completely described by its point spread function and that the image formation process can be described by convolution.

Optical imaging, together with physical systems such as electrical filters and oscillators of all kinds, can thus be regarded as representing an abstract type of process or system, called a linear shift-invariant system, or LSI for short. This generalization is very useful for image processing, as we can describe both image formation and image processing as convolution operations with the same formalism. Moreover, the images observed may originate from a physical process that can be modeled by a linear shift-invariant system. Then the method for finding out how the system works can be illustrated using the black-box model (Fig. 4.5a). The black box means that we do not know the composition of the system observed or, physically speaking, the laws that govern it. We can find them out by probing the system with certain signals (input signals) and watching the response by measuring some other signals (output signals). If it turns out that the system is linear, it will be described completely by the impulse response.

Many biological and medical experiments are performed in this way. Biological systems are typically so complex that researchers often stimulate them with signals and watch for responses in order to find out how they work and to construct a model.
From this model, more detailed research may start to investigate how the observed system functions might be realized. In this way many properties of biological visual systems have been discovered. But be careful: a model is not the reality! It pictures only the aspects that we probed with the applied signals.
4.4 Rank Value Filtering

The considerations on how to combine pixels have resulted in the powerful concept of linear shift-invariant systems. Thus we might be tempted to think that we have learnt all we need to know about this type of image processing operation. This is not the case. There is another class of operations that works on a quite different principle. We might characterize a convolution with a filter mask by weighting and summing up. The class of operations to combine neighboring pixels that we consider now is characterized by comparing and selecting. Such a filter is called a rank-value filter. For this we take all the gray values of the pixels that lie within the filter mask and sort them by ascending gray value. This sorting is common to all rank-value filters. They differ only by the position in the list from which the gray value
Figure 4.8: Illustration of the principle of rank-value filters: the gray values within the filter mask of the input image are sorted into a list, and one value of the sorted list is written back to the center pixel of the output image.
is picked out and written back to the center pixel. The filter operation that selects the middle value is called the median filter. Figure 4.8 illustrates how the median filter works. The filters choosing the minimum and maximum values are denoted as the minimum and maximum filter, respectively.

The median filter is a nonlinear operator. For the sake of simplicity, we consider a one-dimensional case with a 3-element median filter. It is easy to find two vectors for which the median filter is not linear:

$$
\mathcal{M}\bigl([\cdots 0\,1\,0\,0 \cdots] + [\cdots 0\,0\,1\,0 \cdots]\bigr) = [\cdots 0\,1\,1\,0 \cdots]
\neq \mathcal{M}[\cdots 0\,1\,0\,0 \cdots] + \mathcal{M}[\cdots 0\,0\,1\,0 \cdots] = [\cdots 0\,0\,0\,0 \cdots].
$$

There are a number of significant differences between convolution filters and rank-value filters. Most importantly, rank-value filters belong to the class of nonlinear filters. Consequently, it is much more difficult to understand their general properties. As rank-value filters do not perform arithmetic operations but select pixels, we will never run into rounding problems. These filters map a discrete set of gray values onto itself.
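The nonlinearity example above can be reproduced directly (a sketch, not from the text; the border handling, which leaves edge pixels unchanged, is an arbitrary choice):

```python
import numpy as np

def median3(g):
    # 3-element median filter; border pixels are left unchanged
    out = g.copy()
    for n in range(1, len(g) - 1):
        out[n] = np.sort(g[n - 1:n + 2])[1]
    return out

a = np.array([0, 1, 0, 0, 0])
b = np.array([0, 0, 1, 0, 0])

print(median3(a + b))            # [0 1 1 0 0]: the two-pixel plateau survives
print(median3(a) + median3(b))   # [0 0 0 0 0]: each isolated peak is removed
```

Filtering the sum is not the sum of the filtered signals: the median removes isolated single-pixel peaks but preserves a two-pixel plateau, which is exactly the behavior exploited for impulse-noise removal.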
4.5 Further Readings‡
The classical concepts of filtering of discrete time series, especially recursive filters and the z-transform, are discussed in Oppenheim and Schafer [133] and Proakis and Manolakis [144]; 2-D filtering is treated in Lim [112]. A detailed account of nonlinear filters, especially median filters, is given by Huang [75] and Pitas and Venetsanopoulos [140].