

Error Propagation with Filtering



Filters are applied to measured data that show noise. Therefore it is important to know how the statistical properties of the filtered data can be inferred from those of the original data. In principle, we solved this question in Section 3.3.3. The covariance matrix of the linear combination g' = Mg of a random vector g is, according to Eq. (3.27), given as

$$\operatorname{cov}(g') = M \operatorname{cov}(g)\, M^{T}. \tag{4.33}$$

Now we need to apply this result to the special case of a convolution. First, we consider only 1-D signals. We assume that the covariance matrix of the signal is homogeneous, i.e., depends only on the distance of the points and not on the position itself. Then the variance σ² is equal for all elements. Furthermore, the values on the side diagonals are also equal, and the covariance matrix takes the simple form

                               
$$\operatorname{cov}(g) =
\begin{bmatrix}
C_0 & C_1 & C_2 & \cdots \\
C_{-1} & C_0 & C_1 & \cdots \\
C_{-2} & C_{-1} & C_0 & \cdots \\
\vdots & \ddots & \ddots & \ddots
\end{bmatrix}, \tag{4.34}$$

where the index indicates the distance between the points and C_0 = σ². Generally, the covariances decrease with increasing pixel distance. Often, only a limited number of covariances C_n are unequal to zero. With statistically uncorrelated pixels, only C_0 = σ² is unequal to zero.

Because the linear combinations described by M have the special form of a convolution, the matrix has the same form as the homogeneous covariance matrix. For a filter with three coefficients, M reduces to


$$M =
\begin{bmatrix}
h_0 & h_{-1} & 0 & 0 & 0 & \cdots \\
h_1 & h_0 & h_{-1} & 0 & 0 & \cdots \\
0 & h_1 & h_0 & h_{-1} & 0 & \cdots \\
0 & 0 & h_1 & h_0 & h_{-1} & \cdots \\
0 & 0 & 0 & \ddots & \ddots &
\end{bmatrix}. \tag{4.35}$$

 


112                                                                              4 Neighborhood Operations

 

Apart from edge effects, the matrix multiplications in Eq. (4.33) reduce to convolution operations. We introduce the autocovariance vector c = [..., C_{-1}, C_0, C_1, ...]^T. Then we can write Eq. (4.33) as

$$c' = \bar h * c * h = c * (\bar h * h) = c \star (h \star h), \tag{4.36}$$

where \bar h is the mirrored convolution mask, \bar h_n = h_{-n}. In the last step, we replaced the convolution by a correlation (denoted by ⋆). The convolution of c with \bar h * h can be replaced by a correlation, because the autocorrelation function of a real-valued function is a function of even symmetry.

In the case of uncorrelated data, the autocovariance vector is a delta function, and the autocovariance vector of the noise of the filtered vector reduces to

$$c' = \sigma^2\, (h \star h). \tag{4.37}$$

For a filter with R coefficients, now 2R − 1 values of the autocovariance vector are unequal to zero. This means that in the filtered signal, pixels with a maximal distance of R − 1 are correlated with each other.

Because the covariance vector of a convolved signal can be described by a correlation, we can also compute the change in the noise spectrum, i.e., the power spectrum of the noise, caused by a convolution operation. It is only required to Fourier transform Eq. (4.36) under consideration of the correlation theorem (R7). Then we get

 

$$c' = c \star (h \star h) \quad\circ\!\!-\!\!\bullet\quad \hat c'(k) = \hat c(k)\,\bigl|\hat h(k)\bigr|^{2}. \tag{4.38}$$

This means that the noise spectrum of a convolved signal is given by the multiplication of the noise spectrum of the input data by the squared magnitude of the transfer function of the filter. With Eqs. (4.36) and (4.38) we have everything at hand to compute the changes of the statistical parameters of a signal (variance, autocovariance matrix, and noise spectrum) caused by a filter operation. Going back from Eq. (4.38), we can conclude that Eq. (4.36) is valid not only for 1-D signals but for signals with arbitrary dimensions.
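These relations are easy to check numerically. The following Python sketch (our own illustration using NumPy; the mask and the sample size are arbitrary choices) filters white noise with a small smoothing mask and compares the measured variance and neighbor covariance of the filtered signal with the prediction c' = σ²(h ⋆ h) of Eq. (4.37):

```python
import numpy as np

# Sketch: error propagation of white noise through a convolution filter.
rng = np.random.default_rng(0)
sigma = 2.0
g = rng.normal(0.0, sigma, size=1_000_000)   # uncorrelated noise, variance sigma^2
h = np.array([0.25, 0.5, 0.25])              # filter with R = 3 coefficients

gf = np.convolve(g, h, mode="same")          # filtered signal

# Predicted autocovariance of the filtered noise: sigma^2 times the
# autocorrelation of the mask, c'_n = sigma^2 * sum_k h_k h_{k+n}
pred_var = sigma**2 * np.sum(h * h)            # c'_0, here 4 * 0.375 = 1.5
pred_cov1 = sigma**2 * np.sum(h[:-1] * h[1:])  # c'_1 (neighboring pixels), 1.0

est_var = np.var(gf)
est_cov1 = np.mean((gf[:-1] - gf.mean()) * (gf[1:] - gf.mean()))

print(pred_var, est_var)
print(pred_cov1, est_cov1)
```

The estimated values agree with the prediction up to the statistical uncertainty of the finite sample; originally uncorrelated pixels become correlated over the width of the mask, as stated above.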

4.2.9 Convolution, Linearity, and Shift Invariance‡

In Section 4.2.4 we saw that a convolution operator is a linear shift-invariant operator. But is the reverse also true, i.e., is any linear shift-invariant operator also a convolution operator? In this section we prove this statement. From our considerations in Section 4.2.5, we are already familiar with the point spread function of continuous and discrete operators. Here we introduce the formal definition of the point spread function H for an operator on an M × N-dimensional vector space:

$$H = \mathcal{H}\; {}^{00}P. \tag{4.39}$$



Now we can use the linearity Eq. (4.16) and the shift invariance Eq. (4.18) of the operator and the definition of the impulse response Eq. (4.39) to calculate the result of applying the operator to an arbitrary image G in the space domain:


$$\begin{aligned}
(\mathcal{H} G)_{mn}
&= \Bigl(\mathcal{H} \sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\; {}^{m'n'}P \Bigr)_{mn} && \text{with Eq. (4.16)}\\
&= \Bigl(\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\; \mathcal{H}\, {}^{m'n'}P \Bigr)_{mn} && \text{linearity}\\
&= \Bigl(\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\; \mathcal{H}\, {}^{m'n'}S\, {}^{00}P \Bigr)_{mn} && \text{with Eq. (4.17)}\\
&= \Bigl(\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\; {}^{m'n'}S\, \mathcal{H}\, {}^{00}P \Bigr)_{mn} && \text{shift invariance, Eq. (4.18)}\\
&= \Bigl(\sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\; {}^{m'n'}S\, H \Bigr)_{mn} && \text{with Eq. (4.39)}\\
&= \sum_{m'=0}^{M-1}\sum_{n'=0}^{N-1} g_{m'n'}\, h_{m-m',\,n-n'} && \text{with Eq. (4.17)}\\
&= \sum_{m''=0}^{M-1}\sum_{n''=0}^{N-1} g_{m-m'',\,n-n''}\, h_{m''n''} && m'' = m-m',\; n'' = n-n'.
\end{aligned}$$


These calculations prove that a linear shift-invariant operator must necessarily be a convolution operation in the space domain. There is no other operator type which is both linear and shift invariant.
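The statement can also be verified numerically. In the sketch below (our own illustration; the particular averaging operator is an arbitrary choice), a linear shift-invariant operator on periodic 1-D signals is probed with a discrete impulse, and the cyclic convolution with the resulting impulse response reproduces the operator exactly:

```python
import numpy as np

# Sketch: an LSI operator equals convolution with its impulse response.
# We model the operator as "average each point with its two neighbors"
# on a periodic signal (np.roll makes it exactly shift invariant).
def lsi_op(g):
    return 0.5 * g + 0.25 * np.roll(g, 1) + 0.25 * np.roll(g, -1)

N = 64
delta = np.zeros(N)
delta[0] = 1.0
h = lsi_op(delta)                       # impulse response (point spread function)

rng = np.random.default_rng(1)
g = rng.normal(size=N)

# Cyclic convolution with h, computed via the Fourier domain
conv = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(h)))

print(np.max(np.abs(lsi_op(g) - conv)))   # ~0: operator == convolution
```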

 

4.2.10 Inverse Operators‡

Can we invert a filter operation so that we can get back the original image from a filtered image? This question is significant because degradations such as image blurring by motion or by defocused optics can also be regarded as filter operations (Section 7.6.1). If an inverse operator exists and if we know the point spread function of the degradation, we can reconstruct the original, undisturbed image. The problem of inverting a filter operation is known as deconvolution or inverse filtering.

By considering the filter operation in the Fourier domain, we immediately recognize that we can only reconstruct those wave numbers for which the transfer function of the filter does not vanish. In practice, the conditions for inversion of a filter operation are much more restrictive because of the limited quality of the image signals. If a wave number is attenuated below a critical level, which depends on the noise and quantization (Section 9.4), it will not be recoverable. It is obvious that these conditions considerably limit the power of straightforward inverse filtering. The problem of inverse filtering is considered further in Section 17.8.
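A minimal Fourier-domain deconvolution can be sketched as follows (our own illustration, not taken from the text; the signal, the box blur, and the threshold are arbitrary choices). Only wave numbers where the transfer function exceeds a threshold are restored, mirroring the limitation just discussed:

```python
import numpy as np

# Sketch: straightforward inverse filtering with a thresholded inverse
# transfer function (a crude guard against near-vanishing wave numbers).
N = 256
x = np.arange(N)
g = np.sin(2 * np.pi * 3 * x / N) + 0.5 * np.sin(2 * np.pi * 10 * x / N)

h = np.zeros(N)
h[:5] = 0.2                            # degradation: periodic 5-point box blur
H = np.fft.fft(h)                      # its transfer function

blurred = np.real(np.fft.ifft(np.fft.fft(g) * H))

Hinv = np.zeros(N, dtype=complex)      # regularized inverse transfer function
mask = np.abs(H) > 1e-3                # threshold against near-zero divisions
Hinv[mask] = 1.0 / H[mask]

restored = np.real(np.fft.ifft(np.fft.fft(blurred) * Hinv))
print(np.max(np.abs(restored - g)))    # ~0 here: no wave number was lost
```

In this noise-free example the blur attenuates no wave number below the threshold, so the reconstruction is exact; with noise or stronger attenuation, the masked wave numbers would be irrecoverably lost.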



 

4.2.11 Eigenfunctions‡

Next we are interested in the question whether special types of images E exist that are preserved by a linear shift-invariant operator, except for multiplication by a scalar. Intuitively, it is clear that these images have a special importance for LSI operators. Mathematically speaking, this means

H E = λ E.                                                     (4.40)

A vector (image) that meets this condition is called an eigenvector (eigenimage) or characteristic vector of the operator; the scaling factor λ is an eigenvalue or characteristic value of the operator.

In order to find the eigenimages of LSI operators, we first discuss the shift operator S. It is quite obvious that for real images only a trivial eigenimage exists, namely a constant image. For complex images, however, a whole set of eigenimages exists. We can find it when we consider the shift property of the complex exponential

$${}^{uv}W_{mn} = \exp\left(\frac{2\pi\mathrm{i}\, mu}{M}\right)\exp\left(\frac{2\pi\mathrm{i}\, nv}{N}\right), \tag{4.41}$$

which is given by

$${}^{kl}S\; {}^{uv}W = \exp\left(-\frac{2\pi\mathrm{i}\, ku}{M}\right)\exp\left(-\frac{2\pi\mathrm{i}\, lv}{N}\right)\; {}^{uv}W. \tag{4.42}$$

 

The latter equation directly states that the complex exponentials ᵘᵛW are eigenfunctions of the shift operator. The eigenvalues are complex phase factors that depend on the wave number indices (u, v) and the shift (k, l). When the shift is one wavelength, (k, l) = (M/u, N/v), the phase factor reduces to 1, as we would expect.

Now we are curious to learn whether any linear shift-invariant operator has such a handy set of eigenimages. It turns out that all linear shift-invariant operators have the same set of eigenimages. We can prove this statement by referring to the convolution theorem (Section 2.3, Theorem 4, p. 52), which states that convolution is a point-wise multiplication in the Fourier space. Thus each element of the image representation in the Fourier space, ĝ_uv, is multiplied by the complex scalar ĥ_uv. Each point in the Fourier space represents a base image, namely the complex exponential ᵘᵛW in Eq. (4.41) multiplied by the scalar ĝ_uv. Therefore, the complex exponentials are eigenfunctions of any convolution operator. The eigenvalues are then the elements of the transfer function, ĥ_uv. In conclusion, we can write

$$\mathcal{H}\,\bigl(\hat g_{uv}\; {}^{uv}W\bigr) = \hat h_{uv}\, \hat g_{uv}\; {}^{uv}W. \tag{4.43}$$

The fact that the eigenfunctions of LSI operators are the basis functions of the Fourier domain explains why convolution reduces to a multiplication in Fourier space and underlines the central importance of the Fourier transform for image processing.
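The eigenfunction property is directly observable in a computation. The sketch below (our own 1-D illustration; the mask and wave number are arbitrary choices) convolves a complex exponential with a periodic smoothing mask and shows that the result is the same exponential scaled by the corresponding element of the transfer function:

```python
import numpy as np

# Sketch: a complex exponential is an eigenfunction of cyclic convolution;
# the eigenvalue is the transfer-function element at its wave number.
N = 32
u = 5                                           # wave number index (arbitrary)
w = np.exp(2j * np.pi * u * np.arange(N) / N)   # eigenimage candidate

h = np.zeros(N)
h[[0, 1, N - 1]] = [0.5, 0.25, 0.25]            # periodic smoothing mask

Hw = np.fft.ifft(np.fft.fft(w) * np.fft.fft(h)) # convolution of w with h
eigval = np.fft.fft(h)[u]                       # transfer function element

print(np.max(np.abs(Hw - eigval * w)))          # ~0: H w = lambda w
```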


















































Figure 4.3: Point spread function of the recursive filter g'_n = α g'_{n−1} + (1 − α) g_n for a α = 1/2 and b α = 15/16.

 

4.3 Recursive Filters‡

4.3.1 Introduction‡

As convolution requires many operations, the question arises whether it is possible or even advantageous to include the already convolved neighboring gray values into the convolution at the next pixel. In this way, we might be able to perform a convolution with fewer operations. In effect, we are able to perform convolutions with much less computational effort and also more flexibility. However, these filters, which are called recursive filters, are much more difficult to understand and to handle, especially in the multidimensional case.

For a first impression, we consider a very simple example. The simplest 1-D recursive filter we can think of has the general form

$$g'_n = \alpha\, g'_{n-1} + (1-\alpha)\, g_n. \tag{4.44}$$

This filter takes the fraction α from the previously calculated value and the fraction 1 − α from the current pixel. Recursive filters, in contrast to nonrecursive filters, work in a certain direction, in our example from left to right. For time series, the preferred direction seems natural, as the current state of a signal depends only on previous values. Filters that depend only on the previous values of the signal are called causal filters. For spatial data, however, no preferred direction exists. Consequently, we have to search for ways to construct filters with even and odd symmetry, as required for image processing, from recursive filters.

With recursive filters, the point spread function is no longer identical to the filter mask, but must be computed. From Eq. (4.44), we can calculate the point spread function or impulse response of the filter as the response of the filter to the discrete delta function (Section 4.2.5)

$$\delta_n = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0. \end{cases} \tag{4.45}$$

Recursively applying Eq. (4.44), we obtain

$$g'_{-1} = 0, \quad g'_0 = 1-\alpha, \quad g'_1 = (1-\alpha)\alpha, \quad \ldots, \quad g'_n = (1-\alpha)\alpha^{n}. \tag{4.46}$$



 

This equation shows three typical general properties of recursive filters:

First, the impulse response is infinite (Fig. 4.3), despite the finite number of coefficients. For |α| < 1 it decreases exponentially but never becomes exactly zero. In contrast, the impulse response of nonrecursive convolution filters is always finite; it is equal to the size of the filter mask. Therefore the two types of filters are sometimes named finite impulse response filters (FIR filters) and infinite impulse response filters (IIR filters).

Second, FIR filters are always stable. This means that the impulse response is finite, and then the response of the filter to any finite signal is finite. This is not the case for IIR filters. The stability of recursive filters depends on the filter coefficients. The filter in Eq. (4.44) is unstable for |α| > 1, because then the impulse response diverges. In the simple case of Eq. (4.44) it is easy to recognize the instability of the filter. Generally, however, it is much more difficult to analyze the stability of a recursive filter, especially in two dimensions and higher.

Third, any recursive filter can be replaced by a nonrecursive filter, in general with an infinite-sized mask. Its mask is given by the point spread function of the recursive filter. The inverse conclusion does not hold, as can be seen from the very fact that a nonrecursive filter is always stable.
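The impulse response derived in Eq. (4.46) can be reproduced directly. The following sketch (our own illustration; filter length and α are arbitrary choices) implements the recursion of Eq. (4.44) and compares its response to a discrete impulse with the closed form (1 − α)αⁿ:

```python
import numpy as np

# Sketch: impulse response of the recursive filter
# g'_n = alpha * g'_{n-1} + (1 - alpha) * g_n, cf. Eqs. (4.44)-(4.46).
def recursive_filter(g, alpha):
    out = np.empty_like(g, dtype=float)
    prev = 0.0                          # initial condition g'_{-1} = 0
    for n, gn in enumerate(g):
        prev = alpha * prev + (1.0 - alpha) * gn
        out[n] = prev
    return out

alpha = 0.5
delta = np.zeros(20)
delta[0] = 1.0
psf = recursive_filter(delta, alpha)

n = np.arange(20)
expected = (1.0 - alpha) * alpha**n     # Eq. (4.46): decays but never vanishes
print(np.max(np.abs(psf - expected)))   # ~0
```

Running the same recursion with |α| > 1 makes `psf` grow without bound, which is the instability discussed above.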

 

4.3.2 Transfer Function, z-Transform, and Stable Response‡

After this introductory example, we are ready for a more formal discussion of recursive filters. Recursive filters include results from previous convolutions at neighboring pixels into the convolution sum and thus become directional. We discuss here only 1-D recursive filters. The general equation for a filter running from left to right is


 

$$g'_n = -\sum_{n''=1}^{S} a_{n''}\, g'_{n-n''} + \sum_{n'=-R}^{R} h_{n'}\, g_{n-n'}. \tag{4.47}$$


While the neighborhood of the nonrecursive part (coefficients h) is symmetric around the central point, the recursive part (coefficients a) uses only previously computed values. Such a recursive filter is called a causal filter.

If we put the recursive part on the left-hand side of the equation, we observe that the recursive filter is equivalent to the following difference equation, also known as an ARMA(S, R) process (autoregressive-moving average process):


 

$$\sum_{n''=0}^{S} a_{n''}\, g'_{n-n''} = \sum_{n'=-R}^{R} h_{n'}\, g_{n-n'} \quad\text{with}\quad a_0 = 1. \tag{4.48}$$


The transfer function of such a filter with a recursive and a nonrecursive part can be computed by applying the discrete Fourier transform (Section 2.3.2) and making use of the shift theorem (Theorem 3, p. 52). Then


 

$$\hat g'(k) \sum_{n''=0}^{S} a_{n''} \exp(-2\pi\mathrm{i}\, n'' k) = \hat g(k) \sum_{n'=-R}^{R} h_{n'} \exp(-2\pi\mathrm{i}\, n' k). \tag{4.49}$$




Thus the transfer function is

$$\hat h(k) = \frac{\hat g'(k)}{\hat g(k)} = \frac{\displaystyle\sum_{n'=-R}^{R} h_{n'} \exp(-2\pi\mathrm{i}\, n' k)}{\displaystyle\sum_{n''=0}^{S} a_{n''} \exp(-2\pi\mathrm{i}\, n'' k)}. \tag{4.50}$$


The properties of the transfer function are governed by the zeros of the numerator and the denominator. A zero of the nonrecursive part (the numerator) causes a zero in the transfer function, i.e., the corresponding wave number vanishes. A zero of the recursive part (the denominator) causes a pole in the transfer function, i.e., an infinite response.

A determination of the zeros, and thus a deeper analysis of the transfer function, is not possible from Eq. (4.50). It requires an extension similar to the extension from real numbers to complex numbers that was used to introduce the Fourier transform (Section 2.3.2). We observe that the expressions for both the numerator and the denominator are polynomials in the complex exponential exp(−2πik) of the form

$$\sum_{n=0}^{S} a_n \bigl(\exp(-2\pi\mathrm{i}\, k)\bigr)^{n}. \tag{4.51}$$

The complex exponential has a magnitude of one and thus covers the unit circle in the complex plane. The zeros of the polynomial, however, need not lie on the unit circle but can be arbitrary complex numbers. Therefore, it is useful to extend the polynomial so that it covers the whole complex plane. This is possible with the expression z = r exp(2πik), which describes a circle of radius r in the complex plane.

With this extension we obtain a polynomial of the complex number z. As such, we can apply the fundamental theorem of algebra, which states that any polynomial of degree N can be factorized into N factors containing the roots or zeros of the polynomial:

$$\sum_{n=0}^{N} a_n z^{n} = a_N z^{N} \prod_{n=1}^{N} \left(1 - r_n z^{-1}\right). \tag{4.52}$$

With Eq. (4.52) we can factorize the recursive and nonrecursive parts of the polynomials in the transfer function into the following products:

 

$$\begin{aligned}
\sum_{n=0}^{S} a_n z^{-n} &= z^{-S} \sum_{n=0}^{S} a_n z^{S-n} = \prod_{n=1}^{S} \left(1 - d_n z^{-1}\right),\\
\sum_{n=-R}^{R} h_n z^{-n} &= z^{-R} \sum_{n=0}^{2R} h_{n-R}\, z^{2R-n} = h_{-R}\, z^{R} \prod_{n=1}^{2R} \left(1 - c_n z^{-1}\right).
\end{aligned} \tag{4.53}$$

 

With z = exp(2πik) the transfer function can finally be written as

$$\hat h(z) = h_{-R}\, z^{R}\, \frac{\displaystyle\prod_{n'=1}^{2R} \left(1 - c_{n'} z^{-1}\right)}{\displaystyle\prod_{n''=1}^{S} \left(1 - d_{n''} z^{-1}\right)}. \tag{4.54}$$

Each of the factors c_{n'} and d_{n''} corresponds to a zero of the respective polynomial (z = c_{n'} or z = d_{n''}).

The inclusion of the factor r in the extended transfer function results in an extension of the Fourier transform, the z-transform, which is defined as

$$\hat g(z) = \sum_{n=-\infty}^{\infty} g_n z^{-n}. \tag{4.55}$$

The z-transform of the series g_n can be regarded as the Fourier transform of the series g_n r^{−n} [112]. The z-transform is the key mathematical tool to understand 1-D recursive filters. It is the discrete analog to the Laplace transform. Detailed accounts of the z-transform are given by Oppenheim and Schafer [133] and Poularikas [141]; the 2-D z-transform is discussed by Lim [112].
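The relation between the z-transform and the Fourier transform can be illustrated for a finite series (our own sketch; the series, radius, and sampling are arbitrary choices). Evaluating Eq. (4.55) on the circle z = r exp(2πik) gives the same values as the discrete Fourier transform of the damped series g_n r^{−n}:

```python
import numpy as np

# Sketch: for a finite series, the z-transform sampled on the circle
# z = r * exp(2*pi*i*j/N) equals the DFT of the damped series g_n * r^(-n).
rng = np.random.default_rng(2)
g = rng.normal(size=8)
N = len(g)
r = 1.3
n = np.arange(N)

z = r * np.exp(2j * np.pi * n / N)                   # sample points on circle
gz = np.array([np.sum(g * zi ** (-n)) for zi in z])  # direct sum, Eq. (4.55)

gz_dft = np.fft.fft(g * r ** (-n))                   # DFT of the damped series

print(np.max(np.abs(gz - gz_dft)))                   # ~0
```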

Now we analyze the transfer function in more detail. The factorization of the transfer function is a significant advantage because each factor can be regarded as an individual filter. Thus each recursive filter can be decomposed into a cascade of simple recursive filters. As the factors are all of the form

$$f_n(\tilde k) = 1 - c_n \exp(-2\pi\mathrm{i}\,\tilde k) \tag{4.56}$$

and the impulse response of the filter must be real, the transfer function must be Hermitian, that is, f(−k) = f*(k). This can only be the case when either the zero c_n is real or a pair of factors exists with complex-conjugate zeros. This condition gives rise to two basic types of recursive filters, the relaxation filter and the resonance filter, which are discussed in detail in Sections 4.3.5 and 4.3.6.

 

4.3.3 Higher-Dimensional Recursive Filters‡

Recursive filters can also be defined in higher dimensions with the same type of equation as in Eq. (4.47); also the transfer function and z-transform of higher-dimensional recursive filters can be written in the very same way as in Eq. (4.50). However, it is generally not possible to factorize the z-transform as in Eq. (4.54) [112]. From Eq. (4.54) we can immediately conclude that it is possible to factorize a separable recursive filter, because then the higher-dimensional polynomials can be factorized into 1-D polynomials. Given these inherent mathematical difficulties of higher-dimensional recursive filters, we restrict the further discussion to 1-D recursive filters.

 

4.3.4 Symmetric Recursive Filtering‡

While a filter that uses only previous data is natural and useful for real-time processing of time series, it makes little sense for spatial data. There is no "before" and "after" in spatial data. Even worse is the signal-dependent spatial shift (delay) associated with recursive filters.

With a single recursive filter it is impossible to construct a so-called zero-phase filter with an even transfer function. Thus it is necessary to combine multiple recursive filters. The combination should either result in a zero-phase filter suitable for smoothing operations or in a derivative filter that shifts the phase by 90°. Thus the transfer function should be either purely real or purely imaginary (Section 2.3.5).



 

We start with a 1-D causal recursive filter that has the transfer function

$${}^{+}\hat h(\tilde k) = a(\tilde k) + \mathrm{i}\, b(\tilde k). \tag{4.57}$$

The superscript "+" denotes that the filter runs in the positive coordinate direction. The transfer function of the same filter running in the opposite direction is obtained by replacing k̃ with −k̃. Noting that a(−k̃) = a(k̃) and b(−k̃) = −b(k̃), because the transfer function of a real PSF is Hermitian (Section 2.3.5), we obtain

$${}^{-}\hat h(\tilde k) = a(\tilde k) - \mathrm{i}\, b(\tilde k). \tag{4.58}$$

Thus, only the sign of the imaginary part of the transfer function changes when the fi lter direction is reversed.

We now have three possibilities to combine the two transfer functions (Eqs. (4.57) and (4.58)) either into a purely real or imaginary transfer function:

$$\begin{aligned}
\text{Addition} &\quad {}^{e}\hat h(\tilde k) = \tfrac{1}{2}\bigl({}^{+}\hat h(\tilde k) + {}^{-}\hat h(\tilde k)\bigr) = a(\tilde k),\\
\text{Subtraction} &\quad {}^{o}\hat h(\tilde k) = \tfrac{1}{2}\bigl({}^{+}\hat h(\tilde k) - {}^{-}\hat h(\tilde k)\bigr) = \mathrm{i}\, b(\tilde k),\\
\text{Multiplication} &\quad \hat h(\tilde k) = {}^{+}\hat h(\tilde k)\; {}^{-}\hat h(\tilde k) = a^{2}(\tilde k) + b^{2}(\tilde k).
\end{aligned} \tag{4.59}$$


 

Addition and multiplication (consecutive application) of the left- and right-running filters yield filters of even symmetry and a real transfer function, while subtraction results in a filter of odd symmetry and a purely imaginary transfer function.
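The multiplication variant can be checked numerically. The sketch below (our own illustration; filter, length, and α are arbitrary choices) applies the simple recursive filter of Eq. (4.44) forward and then backward to a centered impulse and verifies that the combined point spread function is even and its transfer function real, i.e., a zero-phase filter:

```python
import numpy as np

# Sketch: forward pass followed by a backward pass (realized by reversing
# the signal, filtering forward, and reversing again) gives an even PSF.
def forward(g, alpha):
    out = np.empty_like(g)
    prev = 0.0
    for i, gi in enumerate(g):
        prev = alpha * prev + (1.0 - alpha) * gi
        out[i] = prev
    return out

alpha = 0.5
N = 65
delta = np.zeros(N)
delta[N // 2] = 1.0                                  # impulse in the center

psf = forward(forward(delta, alpha)[::-1], alpha)[::-1]

sym_err = np.max(np.abs(psf - psf[::-1]))            # evenness of the PSF
H = np.fft.fft(np.fft.ifftshift(psf))                # center the PSF first
imag_err = np.max(np.abs(H.imag))                    # realness of the TF
print(sym_err, imag_err)
```

Both errors are at the level of the boundary truncation, confirming the zero-phase property.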

 

4.3.5 Relaxation Filters‡

The simple recursive filter discussed in Section 4.3.1,

$$g'_n = a_1\, g'_{n\mp 1} + h_0\, g_n \quad\text{with}\quad a_1 = \alpha,\; h_0 = 1 - \alpha, \tag{4.60}$$

with the point spread function

$${}^{\pm}r_{\pm n} = \begin{cases} (1-\alpha)\,\alpha^{n} & n \geq 0 \\ 0 & \text{else,} \end{cases} \tag{4.61}$$


 

is a relaxation filter. The transfer function of the filter running either in forward or in backward direction is, according to Eq. (4.50) with Eq. (4.60), given by


rˆ (k) =
±  ˜            1 − α         

1 − α exp(∓ π ik˜ )


with α ∈ R.                        (4.62)


The transfer function Eq. (4.62) is complex and can be divided into its real and imaginary parts as

$${}^{\pm}\hat r(\tilde k) = \frac{1-\alpha}{1 - 2\alpha\cos\pi\tilde k + \alpha^{2}}\,\bigl((1 - \alpha\cos\pi\tilde k) \mp \mathrm{i}\,\alpha\sin\pi\tilde k\bigr). \tag{4.63}$$



 

Figure 4.4: Transfer function of the relaxation filter g'_n = α g'_{n∓1} + (1 − α) g_n applied first in forward and then in backward direction, for a positive and b negative values of α as indicated.

 

Figure 4.5: Analog filter for time series. a Black-box model: a signal U_i is put into an unknown system and at the output we measure the signal U_o. b A resistor-capacitor circuit as a simple example of an analog lowpass filter. c Damped resonance filter consisting of an inductor L, a resistor R, and a capacitor C.

 

After Eq. (4.59), we can then compute the transfer function r̂ of the resulting symmetric filter if we apply the relaxation filter successively in positive and negative direction:


 

$$\hat r(\tilde k) = \frac{(1-\alpha)^{2}}{1 - 2\alpha\cos\pi\tilde k + \alpha^{2}} = \frac{1}{1 + \beta - \beta\cos\pi\tilde k} \tag{4.64}$$

with

$$\beta = \frac{2\alpha}{(1-\alpha)^{2}} \quad\text{and}\quad \alpha = \frac{1 + \beta - \sqrt{1 + 2\beta}}{\beta}.$$

From Eq. (4.61) we can conclude that the relaxation filter is stable if |α| < 1, which corresponds to β ∈ ]−1/2, ∞[. As already noted, the transfer function is one for small wave numbers. A Taylor expansion in k̃ results in


 

$$\hat r(\tilde k) \approx 1 - \frac{\alpha}{(1-\alpha)^{2}}\,(\pi\tilde k)^{2} + \frac{\alpha\,(1 + 10\alpha + \alpha^{2})}{12\,(1-\alpha)^{4}}\,(\pi\tilde k)^{4}. \tag{4.65}$$

If α is positive, the filter is a low-pass filter (Fig. 4.4a). It can be tuned by adjusting α. As α approaches 1, the averaging distance becomes infinite. For negative α, the filter enhances high wave numbers (Fig. 4.4b).

This filter is the discrete analog to the first-order differential equation τẏ + y = 0, describing a relaxation process with the relaxation time τ = −Δt / ln α.
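Equation (4.64) can be verified numerically (our own sketch; signal length and α are arbitrary choices). The transfer function is measured from the point spread function of a forward-backward pass and compared with the closed form, using the wave number k̃ normalized to the Nyquist limit:

```python
import numpy as np

# Sketch: measure the transfer function of the symmetric relaxation filter
# and compare with Eq. (4.64).
def relax(g, alpha):
    out = np.empty_like(g)
    prev = 0.0
    for i, gi in enumerate(g):
        prev = alpha * prev + (1.0 - alpha) * gi
        out[i] = prev
    return out

alpha = 0.5
N = 256
delta = np.zeros(N)
delta[N // 2] = 1.0
psf = relax(relax(delta, alpha)[::-1], alpha)[::-1]  # forward, then backward

H = np.real(np.fft.fft(np.fft.ifftshift(psf)))       # measured transfer function
ktilde = 2.0 * np.fft.fftfreq(N)                     # wave number scaled to -1..1
expected = (1 - alpha)**2 / (1 - 2 * alpha * np.cos(np.pi * ktilde) + alpha**2)
print(np.max(np.abs(H - expected)))                  # ~0
```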



 































































Figure 4.6: a Magnitude and b phase shift of the transfer function of the resonance filter according to Eq. (4.67) for k̃_0 = 1/4 and values of r as indicated.

 

An example is the simple resistor-capacitor circuit shown in Fig. 4.5b. The differential equation for this filter can be derived from Kirchhoff's current-sum law: the current flowing through the resistor from U_i to U_o must be equal to the current flowing into the capacitor. Since the current flowing into a capacitor is proportional to the temporal derivative of the potential U_o, we end up with the first-order differential equation

$$\frac{U_i - U_o}{R} = C\,\frac{\partial U_o}{\partial t}, \tag{4.66}$$

and the time constant is given by τ = RC.

4.3.6 Resonance Filters‡

The second basic type of recursive filter that we found from the discussion of the transfer function in Section 4.3.2 has a pair of complex-conjugate zeros. Therefore, the transfer function of this filter running in forward or backward direction is

$$\begin{aligned}
{}^{\pm}\hat s(\tilde k) &= \frac{1}{\bigl(1 - r\exp(\mathrm{i}\pi\tilde k_0)\exp(\mp\mathrm{i}\pi\tilde k)\bigr)\bigl(1 - r\exp(-\mathrm{i}\pi\tilde k_0)\exp(\mp\mathrm{i}\pi\tilde k)\bigr)}\\
&= \frac{1}{1 - 2r\cos(\pi\tilde k_0)\exp(\mp\mathrm{i}\pi\tilde k) + r^{2}\exp(\mp 2\mathrm{i}\pi\tilde k)}.
\end{aligned} \tag{4.67}$$


The second row of the equation shows that this recursive filter has the coefficients h_0 = 1, a_1 = −2r cos(πk̃_0), and a_2 = r², so that:

$$g'_n = g_n + 2r\cos(\pi\tilde k_0)\, g'_{n\mp 1} - r^{2}\, g'_{n\mp 2}. \tag{4.68}$$

From the transfer function in Eq. (4.67) we conclude that this filter is a bandpass filter with a passband wave number of ±k̃_0 (Fig. 4.6). For r = 1 the transfer function has two poles at k̃ = ±k̃_0.

The impulse response of this filter is, after [133],

$$h_{\pm n} = \begin{cases} \dfrac{r^{n}\,\sin[(n+1)\pi\tilde k_0]}{\sin\pi\tilde k_0} & n \geq 0 \\[2mm] 0 & n < 0. \end{cases} \tag{4.69}$$



 










Figure 4.7: Point spread function of the recursive resonance filter according to Eq. (4.68) for a k̃_0 = 1/4, r = 3/4 and b k̃_0 = 1/4, r = 15/16.

 

This means that the filter acts as a damped oscillator. The parameter k̃_0 gives the wave number of the oscillation and the parameter r is the damping constant (Fig. 4.7). The filter is only stable if r ≤ 1.
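The closed-form impulse response Eq. (4.69) can be checked against the recursion Eq. (4.68) directly (our own sketch; the parameter values match panel a of Fig. 4.7):

```python
import numpy as np

# Sketch: impulse response of the resonance filter, Eq. (4.68), compared
# with the closed form Eq. (4.69): r^n sin((n+1) pi k0) / sin(pi k0).
k0, r = 0.25, 0.75
N = 40
g = np.zeros(N)
g[0] = 1.0                             # discrete impulse
gp = np.zeros(N)
for n in range(N):
    g1 = gp[n - 1] if n >= 1 else 0.0
    g2 = gp[n - 2] if n >= 2 else 0.0
    gp[n] = g[n] + 2 * r * np.cos(np.pi * k0) * g1 - r**2 * g2

n = np.arange(N)
expected = r**n * np.sin((n + 1) * np.pi * k0) / np.sin(np.pi * k0)
print(np.max(np.abs(gp - expected)))   # ~0: a damped oscillation
```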

If we run the filter back and forth, the resulting filter has a real transfer function ŝ(k̃) = ⁺ŝ(k̃) ⁻ŝ(k̃), given by

$$\hat s(\tilde k) = \frac{1}{\bigl(1 - 2r\cos[\pi(\tilde k - \tilde k_0)] + r^{2}\bigr)\bigl(1 - 2r\cos[\pi(\tilde k + \tilde k_0)] + r^{2}\bigr)}. \tag{4.70}$$

The transfer function of this filter can be normalized so that its maximal value becomes 1 in the passband by setting the nonrecursive filter coefficient h_0 to (1 − r²) sin(πk̃_0). Then we obtain the modified recursion

$$g'_n = (1 - r^{2})\sin(\pi\tilde k_0)\, g_n + 2r\cos(\pi\tilde k_0)\, g'_{n\mp 1} - r^{2}\, g'_{n\mp 2}. \tag{4.71}$$

For symmetry reasons, the factors become simplest for a resonance wave number of k̃_0 = 1/2. Then the recursive filter is

$$g'_n = (1 - r^{2})\, g_n - r^{2}\, g'_{n\mp 2} = g_n - r^{2}\,\bigl(g_n + g'_{n\mp 2}\bigr) \tag{4.72}$$

with the transfer function


sˆ (k˜ )


(1 r 2)2

= 1 + r 4 + 2r 2 cos(2π k˜ ).                                            (4.73)


The maximum response of this filter at k̃ = 1/2 is one and the minimum response at k̃ = 0 and k̃ = 1 is ((1 − r²)/(1 + r²))².
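These statements can be verified by running the recursion of Eq. (4.72) forward and backward and comparing the measured transfer function with Eq. (4.73) (our own sketch; r and the signal length are arbitrary choices):

```python
import numpy as np

# Sketch: symmetric resonance filter at k0 = 1/2,
# Eq. (4.72): g'_n = (1 - r^2) g_n - r^2 g'_{n-2}, run forward and backward.
def resonance_half(g, r):
    out = np.zeros_like(g)
    for n in range(len(g)):
        g2 = out[n - 2] if n >= 2 else 0.0
        out[n] = (1 - r**2) * g[n] - r**2 * g2
    return out

r = 0.75
N = 256
delta = np.zeros(N)
delta[N // 2] = 1.0
psf = resonance_half(resonance_half(delta, r)[::-1], r)[::-1]

H = np.real(np.fft.fft(np.fft.ifftshift(psf)))   # measured transfer function
ktilde = 2.0 * np.fft.fftfreq(N)                 # wave number scaled to -1..1
expected = (1 - r**2)**2 / (1 + r**4 + 2 * r**2 * np.cos(2 * np.pi * ktilde))
print(np.max(np.abs(H - expected)), H.max())     # max response ~1 at ktilde = 1/2
```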

This resonance filter is the discrete analog to a linear system governed by the second-order differential equation ÿ + (2/τ)ẏ + ω0²y = 0, the damped harmonic oscillator such as the LRC circuit in Fig. 4.5c. The circular eigenfrequency ω0 and the time constant τ of a real-world oscillator are related to the parameters of the discrete oscillator, r and k̃0, by [81]

r = exp(−∆t/τ)  and  k̃0 = ω0∆t/π.    (4.74)
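A small helper (the function name is mine, for illustration only) makes the conversion of Eq. (4.74) concrete: given the physical parameters τ and ω0 and the sampling interval ∆t, it returns the discrete filter parameters.

```python
import math

def oscillator_to_filter(tau, omega0, dt):
    """Map the time constant tau and circular eigenfrequency omega0 of a
    continuous damped oscillator, sampled at interval dt, to the damping
    r and resonance wave number k0 of the discrete filter, Eq. (4.74)."""
    r = math.exp(-dt / tau)       # damping per sampling step
    k0 = omega0 * dt / math.pi    # wave number scaled to the Nyquist limit
    return r, k0
```

An oscillator with τ = 4∆t and ω0 = π/(4∆t), for instance, maps to r = e^(−1/4) ≈ 0.78 and k̃0 = 1/4.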



 

4.3.7 LSI Filters and System Theory‡

The last example of the damped oscillator illustrates that there is a close relationship between discrete filter operations and analog physical systems. Thus, digital filters model a real-world physical process. They pattern how the corresponding system would respond to a given input signal g. Actually, we will make use of this equivalence in our discussion of image formation in Chapter 7. There we will find that imaging with a homogeneous optical system is completely described by its point spread function and that the image formation process can be described by convolution. Optical imaging together with physical systems such as electrical filters and oscillators of all kinds can thus be regarded as representing an abstract type of process or system, called a linear shift-invariant system, or LSI for short.

This generalization is very useful for image processing, as we can describe both image formation and image processing as convolution operations with the same formalism. Moreover, the images observed may originate from a physical process that can be modeled by a linear shift-invariant system. Then the method for finding out how the system works can be illustrated using the black-box model (Fig. 4.5a). The black box means that we do not know the composition of the system observed or, physically speaking, the laws that govern it. We can find them out by probing the system with certain signals (input signals) and watching the response by measuring some other signals (output signals). If it turns out that the system is linear, it will be described completely by the impulse response.
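The black-box probing idea can be made concrete in a few lines; the `black_box` function below is a stand-in of my own (secretly a small moving average), not any system from the text:

```python
import numpy as np

# A stand-in "black box": an unknown LSI system that we may only probe
# with input signals (here secretly a causal moving average).
def black_box(x):
    return np.convolve(x, [0.25, 0.5, 0.25])[:len(x)]

# Probe the system with a unit impulse; for a linear shift-invariant
# system the measured impulse response describes it completely.
impulse = np.zeros(16)
impulse[0] = 1.0
h = black_box(impulse)

# The response to any other input is then predicted by convolution
# with the measured impulse response.
x = np.random.default_rng(0).normal(size=16)
predicted = np.convolve(x, h)[:len(x)]
```

The prediction agrees with the system's actual output for any input, which is exactly what "completely described by the impulse response" means.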

Many biological and medical experiments are performed in this way. Biological systems are typically so complex that researchers often stimulate them with signals and watch for responses in order to find out how they work and to construct a model. From this model more detailed research may start to investigate how the observed system functions might be realized. In this way many properties of biological visual systems have been discovered. But be careful — a model is not the reality! It pictures only the aspect that we probed with the applied signals.

 











4.4 Rank Value Filtering

The considerations on how to combine pixels have resulted in the powerful concept of linear shift-invariant systems. Thus we might be tempted to think that we have learnt all we need to know for this type of image processing operation. This is not the case. There is another class of operations which works on a quite different principle.

We might characterize a convolution with a filter mask by weighting and summing up. The class of operations to combine neighboring pixels that we consider now is characterized by comparing and selecting. Such a filter is called a rank-value filter. For this we take all the gray values of the pixels which lie within the filter mask and sort them by ascending gray value. This sorting is common to all rank-value filters. They differ only in the position in the list from which the gray value is picked out and written back to the center pixel. The filter operation that selects the middle value is called the median filter. Figure 4.8 illustrates how the median filter works. The filters choosing the minimum and maximum values are denoted as the minimum and maximum filter, respectively.

Figure 4.8: Illustration of the principle of rank value filters with a 3 × 3 median filter. The nine gray values under the mask around the outlier pixel 98 are sorted into the list 32 33 34 35 36 36 36 37 98, and the middle value 36 is written back to the center pixel of the output image.
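A direct, unoptimized sketch of the 3 × 3 median filter (border pixels are simply copied here; real implementations handle borders explicitly and use faster selection schemes):

```python
import numpy as np

def median_filter_3x3(img):
    """3 x 3 median filter: sort the nine gray values under the mask and
    write the middle one back to the center pixel (borders copied)."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            window = img[i - 1:i + 2, j - 1:j + 2].ravel()
            out[i, j] = np.sort(window)[4]   # 5th of the 9 sorted values
    return out
```

Applied to the input image of Fig. 4.8, the sorted neighborhood of the outlier is exactly the list 32 33 34 35 36 36 36 37 98 shown there, and its median 36 replaces the value 98.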

The median filter is a nonlinear operator. For the sake of simplicity, we consider a one-dimensional case with a 3-element median filter. It is easy to find two vectors for which the median filter is not linear:

 

M([··· 0 1 0 0 ···] + [··· 0 0 1 0 ···]) = [··· 0 1 1 0 ···] ≠

M[··· 0 1 0 0 ···] + M[··· 0 0 1 0 ···] = [··· 0 0 0 0 ···].
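The same counterexample can be replayed numerically with an illustrative 1-D median filter of length 3 (border samples kept unchanged):

```python
import numpy as np

def median3(x):
    """1-D median filter of length 3; border samples are copied."""
    y = x.copy()
    for n in range(1, len(x) - 1):
        y[n] = sorted(x[n - 1:n + 2])[1]
    return y

a = np.array([0, 0, 1, 0, 0, 0])
b = np.array([0, 0, 0, 1, 0, 0])

m_sum = median3(a + b)            # the two adjacent spikes survive
sum_m = median3(a) + median3(b)   # each isolated spike is removed
```

Filtering the sum preserves the double step, while the sum of the filtered signals is zero everywhere, so the two results differ.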

There are a number of significant differences between convolution filters and rank value filters. Most important, rank value filters belong to the class of nonlinear filters. Consequently, it is much more difficult to understand their general properties. As rank value filters do not perform arithmetic operations but select pixels, we will never run into rounding problems. These filters map a discrete set of gray values onto themselves.

 

4.5 Further Readings‡

 

The classical concepts of filtering of discrete time series, especially recursive filters and the z-transform, are discussed in Oppenheim and Schafer [133] and Proakis and Manolakis [144], 2-D filtering in Lim [112]. A detailed account of nonlinear filters, especially median filters, is given by Huang [75] and Pitas and Venetsanopoulos [140].


 

 

