

Monotonically Decreasing Transfer Function



Intuitively, we expect that any smoothing operator attenuates smaller scales more strongly than coarser scales. More specifically, a smoothing operator should not completely annul a certain scale while smaller scales still remain in the image. Mathematically speaking, this means that the transfer function decreases monotonically with the wave number:


$$\hat{h}(\tilde{k}_2) \le \hat{h}(\tilde{k}_1) \quad \text{if} \quad \tilde{k}_2 > \tilde{k}_1. \qquad (11.6)$$


We may impose the more stringent condition that for the highest wave numbers the transfer function is identical to zero:


$$\begin{array}{ll}
\text{1-D:} & \hat{h}(1) = 0 \\
\text{2-D:} & \hat{h}(\tilde{k}_1, 1) = 0, \quad \hat{h}(1, \tilde{k}_2) = 0 \\
\text{3-D:} & \hat{h}(\tilde{k}_1, \tilde{k}_2, 1) = 0, \quad \hat{h}(\tilde{k}_1, 1, \tilde{k}_3) = 0, \quad \hat{h}(1, \tilde{k}_2, \tilde{k}_3) = 0.
\end{array} \qquad (11.7)$$


Together with the monotonicity condition and the preservation of the mean value, this means that the transfer function decreases monotonically from one to zero for each averaging operator.

 












Isotropy

In most applications, the smoothing should be the same in all directions in order not to prefer any direction. Thus, both the filter mask and the transfer function should be isotropic. Consequently, the filter mask depends only on the magnitude of the distance from the center pixel and the transfer function on the magnitude of the wave number:

$$h(\boldsymbol{x}) = h(|\boldsymbol{x}|) \quad \text{and} \quad \hat{h}(\tilde{\boldsymbol{k}}) = \hat{h}(|\tilde{\boldsymbol{k}}|). \qquad (11.8)$$

In discrete space, of course, this condition can only be met approximately. Therefore, it is an important design goal to construct discrete masks with minimum deviation from isotropy.



 


Box Filter

Basics

It is obvious that smoothing filters will average pixels within a small neighborhood. The simplest method is to add all the pixels within the filter mask and to divide the sum by the number of pixels. Such a simple filter is called a box filter. Box filters are an illustrative example as to how to design a filter properly. As an introduction, we consider a 1 × 3 box filter

$${}^3\boldsymbol{R} = \frac{1}{3}\,[1\ \ 1\ \ 1]. \qquad (11.9)$$

The factor 1/3 scales the result of the convolution sum in order to preserve the mean value (Section 11.2.2). Otherwise the gray value in a region with constant gray values is not preserved. We apply this mask to a vertical edge


$$\begin{bmatrix}
\cdots & 0 & 0 & 1 & 1 & \cdots \\
\cdots & 0 & 0 & 1 & 1 & \cdots \\
\cdots & 0 & 0 & 1 & 1 & \cdots
\end{bmatrix}
\ast \frac{1}{3}\,[1\ 1\ 1] =
\begin{bmatrix}
\cdots & 0 & 1/3 & 2/3 & 1 & \cdots \\
\cdots & 0 & 1/3 & 2/3 & 1 & \cdots \\
\cdots & 0 & 1/3 & 2/3 & 1 & \cdots
\end{bmatrix}$$


 

As expected for a smoothing operation, the sharp edge is transformed into a smoother ramp with a gradual transition from 0 to 1. Smoothing filters attenuate structures with high wave numbers. Let us test this first with a vertical structure with a wavelength of 3 pixel distances:


$$\begin{bmatrix}
\cdots & 1 & -2 & 1 & 1 & -2 & 1 & \cdots \\
\cdots & 1 & -2 & 1 & 1 & -2 & 1 & \cdots \\
\cdots & 1 & -2 & 1 & 1 & -2 & 1 & \cdots
\end{bmatrix}
\ast \frac{1}{3}\,[1\ 1\ 1] =
\begin{bmatrix}
\cdots & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
\cdots & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
\cdots & 0 & 0 & 0 & 0 & 0 & 0 & \cdots
\end{bmatrix}$$


It turns out that the 1 × 3 box filter completely removes a structure with the wavelength 3. As already discussed in Section 11.2.3, we expect that all structures with a wave number above a certain threshold are removed by a good smoothing filter. This is not the case for the 1 × 3 box filter. A structure with the wavelength 2 is only attenuated by a factor of 3:

$$\begin{bmatrix}
\cdots & 1 & -1 & 1 & -1 & \cdots \\
\cdots & 1 & -1 & 1 & -1 & \cdots \\
\cdots & 1 & -1 & 1 & -1 & \cdots
\end{bmatrix}
\ast \frac{1}{3}\,[1\ 1\ 1] =
\begin{bmatrix}
\cdots & -1/3 & 1/3 & -1/3 & 1/3 & \cdots \\
\cdots & -1/3 & 1/3 & -1/3 & 1/3 & \cdots \\
\cdots & -1/3 & 1/3 & -1/3 & 1/3 & \cdots
\end{bmatrix}$$




 

Figure 11.1: Transfer functions of one-dimensional smoothing filters: a box filters with 3, 5, 7, and 9 coefficients; b binomial filters B^p with p = 2, 4, 8, 16, and 32.

 




























1-D Box Filter

After this qualitative introduction, we discuss box filters quantitatively by computing the transfer function. For the sake of simplicity, we start with 1-D filters. The mask of the box filter

$${}^3\boldsymbol{R}_x = [1/3\ \ 1/3\ \ 1/3] \qquad (11.10)$$

is of even symmetry. According to the considerations in Section 4.2.6, we can apply Eq. (4.25) to compute the transfer function of the one-dimensional 1 × 3 box filter. Only the coefficients h₀ = h₁ = 1/3 are unequal to zero and the transfer function reduces to

     
 

$${}^3\hat{r}_x = \frac{1}{3} + \frac{2}{3}\cos(\pi\tilde{k}_x). \qquad (11.11)$$

The transfer function is shown in Fig. 11.1a. Our quick computation at the beginning of this section is verified. The transfer function shows a zero at k̃ = 2/3. This corresponds to a wave number which is sampled 3 times per wavelength. The smallest possible wavelength (k̃ = 1), which is sampled twice per wavelength, is only damped by a factor of three. The transfer function is negative for k̃ > 2/3. A negative transfer function means an interchange of minima and maxima, equal to a phase shift of 180°. In conclusion, the 1 × 3 box filter is not a good lowpass filter. It is disturbing that the attenuation does not increase monotonically with the wave number but oscillates. Even worse, structures with the largest wave number are not attenuated strongly enough.

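As a quick numerical check of Eq. (11.11), the following minimal Python sketch evaluates the transfer function at a few wave numbers; it reproduces the zero at k̃ = 2/3 and the value −1/3 at k̃ = 1 discussed above.

```python
import numpy as np

# Transfer function of the 1 x 3 box filter, Eq. (11.11):
# r(k) = 1/3 + 2/3 * cos(pi * k), with k normalized to the Nyquist limit.
k = np.array([0.0, 2 / 3, 1.0])
r = 1 / 3 + 2 / 3 * np.cos(np.pi * k)
print(r)  # approx [1.0, 0.0 (zero crossing), -0.333 (sign reversal at k = 1)]
```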
Larger box filters do not show a significant improvement (Fig. 11.1a). On the contrary, the oscillatory behavior is more pronounced and the attenuation is only proportional to the wave number. For large filter masks, the discrete mask with R coefficients comes close to a continuous box function of width R. Therefore the transfer function approximates



 

Figure 11.2: Transfer functions of two-dimensional box filters shown in a pseudo 3-D plot: a 3 × 3 box filter; b 7 × 7 box filter.

 

the sinc function (R5):

     
 

$${}^R\hat{r}_x(\tilde{k}) = \frac{\sin(\pi R\tilde{k}/2)}{R\,\sin(\pi\tilde{k}/2)} \approx \frac{\sin(\pi R\tilde{k}/2)}{\pi R\tilde{k}/2} = \operatorname{sinc}(R\tilde{k}/2). \qquad (11.12)$$

 

 













2-D Box Filter

Now we turn to two-dimensional box filters. To simplify the arithmetic, we utilize the fact that the filter is separable and decompose it into vertical and horizontal 1-D components, respectively:


 

 

$${}^3\boldsymbol{R} = {}^3\boldsymbol{R}_x \ast {}^3\boldsymbol{R}_y
= \frac{1}{9}\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}
= \frac{1}{3}\,[1\ 1\ 1] \ast \frac{1}{3}\begin{bmatrix}1\\1\\1\end{bmatrix}.$$
The transfer function of the one-dimensional filters is given by Eq. (11.11) (replacing k̃ₓ by k̃ᵧ for the vertical filter). As convolution in the space domain corresponds to multiplication in the wave number domain, the transfer function of ³R is

$${}^3\hat{r} = {}^3\hat{r}_x\,{}^3\hat{r}_y
= \left(\frac{1}{3} + \frac{2}{3}\cos(\pi\tilde{k}_x)\right)\left(\frac{1}{3} + \frac{2}{3}\cos(\pi\tilde{k}_y)\right). \qquad (11.13)$$









Evaluation

From Eq. (11.13) and Fig. 11.2a, we can conclude that 2-D box filters are also poor lowpass filters. A larger box filter, for example one with a 7 × 7 mask (Fig. 11.2b), does not perform any better. Besides the disadvantages already discussed for the one-dimensional case, we are faced with the problem that the transfer function is not isotropic, i.e., it depends, for a given wave number, on the direction of the wave number.



 

Figure 11.3: Test of the smoothing with a 5 × 5 (upper right quadrant) and a 9 × 9 box filter (lower left quadrant) using a test image with concentric sinusoidal rings. The maximum wave number k̃ at the edge of the pattern is 0.6.

 

When we apply a box filter to an arbitrary image, all these disadvantages affect the image, but it is difficult to observe them quantitatively (Fig. 11.6). They are revealed immediately, however, if we use a carefully designed test image. This image contains concentric sinusoidal rings. The wavelength of the rings decreases with distance from the center. With this test image, we map the Fourier domain onto the space domain. Thus, we can directly see the transfer function, i.e., the change in the amplitude and the phase shift, when a filter is applied. When we convolve this image with a 5 × 5 or 9 × 9 box filter, the deviations from an isotropic transfer function become readily visible (Fig. 11.3). We can observe the wave numbers that vanish entirely and the change of gray value maxima into gray value minima and vice versa in some regions, indicating the 180° phase shift caused by negative values in the transfer function.

From this experience, we can learn an important lesson. We must not rate the properties of a filter operation from its effect on arbitrary images, even if we think that they seem to work correctly. Obviously, the eye perceives a rather qualitative impression, but for quantitative extraction of image features a quantitative analysis of the filter properties is required. This involves a careful analysis of the transfer function and the application of the filters to carefully designed test images.

Now we turn back to the question of what went wrong with the box filter. We might try to design a better smoothing filter directly in the wave number space.



 

An ideal smoothing filter would cut off all wave numbers above a certain threshold value. We could use this ideal transfer function and compute the filter mask by an inverse Fourier transform. However, we run into two problems which can be understood without explicit calculations. The inverse Fourier transform of a box function is a sinc function. This means that the coefficients decrease only in proportion to the distance from the center pixel. We would be forced to work with large filter masks. Furthermore, the filter has the disadvantage that it overshoots at the edges.

 

Fast Computation

Despite all the disadvantages of box filters, they show one significant advantage. According to the following equation, the convolution with a one-dimensional box filter can be computed independently of its size with only three operations as a recursive filter operation:


$$g'_m = g'_{m-1} + \frac{1}{2r+1}\left(g_{m+r} - g_{m-r-1}\right). \qquad (11.14)$$


This recursion can be understood by comparing the computations for the convolution at neighboring pixels. When the box mask is moved one position to the right, it contains the same weighting factor for all pixels except for the last and the first pixel. Thus, we can simply take the result of the previous convolution, g'ₘ₋₁, subtract the first pixel that just moved out of the mask, gₘ₋ᵣ₋₁, and add the gray value at the pixel that just came into the mask, gₘ₊ᵣ. In this way, the computation of a box filter does not depend on its size and the number of computations is O(r⁰). Only one addition, one subtraction, and one multiplication are required to compute the filter result.

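A minimal Python sketch of this running-mean recursion, Eq. (11.14), might look as follows; the boundary handling by clamping indices is an illustrative assumption, not something the text prescribes:

```python
import numpy as np

def box_filter_1d(g, r):
    """Running-mean box filter of size 2r+1 along a 1-D signal, Eq. (11.14).

    Each output value is obtained from the previous one with one addition,
    one subtraction, and one multiplication, independent of the mask size.
    Border pixels are handled by clamping indices (an illustrative choice).
    """
    g = np.asarray(g, dtype=float)
    n = len(g)
    out = np.empty(n)
    scale = 1.0 / (2 * r + 1)
    # initialize the first output value by direct summation
    acc = sum(g[min(max(i, 0), n - 1)] for i in range(-r, r + 1))
    out[0] = acc * scale
    for m in range(1, n):
        acc += g[min(m + r, n - 1)] - g[max(m - r - 1, 0)]
        out[m] = acc * scale
    return out
```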
 





Binomial Filter

Basics

From our experience with box filters, we conclude that the design of filters is a difficult optimization problem. If we choose a small rectangular filter mask, we get a poor transfer function. If we start with an ideal transfer function, we get large filter masks and overshooting filter responses. The reason for this behavior is the fundamental relation between smoothness and compactness of the Fourier transform pairs (Section 2.3.5). An edge constitutes a discontinuity. A discontinuity leads to an impulse in the first derivative. The Fourier transform of an



 

impulse is evenly spread over the whole Fourier domain. Using the integral property of the Fourier transform (Section 2.3), an integration of the derivative in the space domain means a division by k in the Fourier domain (R5). Then we know without any detailed calculation that in the one-dimensional case the envelope of the Fourier transform of a function which shows discontinuities in the space domain will decline with k⁻¹ in the wave number domain. This was exactly what we found for the box function. Its Fourier transform is the sinc function (R5).

Considering this basic fact, we can design better smoothing filters. One condition is that the filter masks should gradually approach zero.

 


1-D Binomial Filter

Here we will introduce a class of smoothing filters that meets this criterion and can be calculated very efficiently. Furthermore, these filters are an excellent example of how more complex filters can be built from simple components. The simplest and most elementary smoothing mask we can think of is

$$\boldsymbol{B} = \frac{1}{2}\,[1\ \ 1]. \qquad (11.15)$$

It averages the gray values of two neighboring pixels. We can use this mask p times in a row on the same image. This corresponds to the filter mask

$$\frac{1}{2^p}\,\underbrace{[1\ 1] \ast [1\ 1] \ast \ldots \ast [1\ 1]}_{p\ \text{times}}, \qquad (11.16)$$

or, written as an operator equation,

$$\mathcal{B}^p = \underbrace{\mathcal{B}\,\mathcal{B}\ldots\mathcal{B}}_{p\ \text{times}}. \qquad (11.17)$$

Some examples of the resulting filter masks are:

$$\begin{aligned}
\boldsymbol{B}^2 &= 1/4\,[1\ 2\ 1] \\
\boldsymbol{B}^3 &= 1/8\,[1\ 3\ 3\ 1] \\
\boldsymbol{B}^4 &= 1/16\,[1\ 4\ 6\ 4\ 1] \\
\boldsymbol{B}^8 &= 1/256\,[1\ 8\ 28\ 56\ 70\ 56\ 28\ 8\ 1].
\end{aligned} \qquad (11.18)$$


 

Because of symmetry, only the odd-sized filter masks are of interest. In order to perform a convolution with the asymmetric mask 1/2 [1 1] correctly, we store the result in the right and left pixel alternately.

The masks contain the values of the discrete binomial distribution. Actually, the iterative composition of the mask by consecutive convolution with the 1/2 [1 1] mask is equivalent to the computation scheme of



 

Figure 11.4: Test of the smoothing with a B⁴ and a B¹⁶ binomial filter using a test image with concentric sinusoidal rings.

 


Pascal’s triangle:

 

p     f        mask                           σ²
0     1        1                              0
1     1/2      1 1                            1/4
2     1/4      1 2 1                          1/2
3     1/8      1 3 3 1                        3/4
4     1/16     1 4 6 4 1                      1
5     1/32     1 5 10 10 5 1                  5/4
6     1/64     1 6 15 20 15 6 1               3/2
7     1/128    1 7 21 35 35 21 7 1            7/4
8     1/256    1 8 28 56 70 56 28 8 1         2
                                                            (11.19)


 

where p denotes the order of the binomial, f the scaling factor 2⁻ᵖ, and σ² the variance, i.e., effective width, of the mask.

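The iterative composition described above is easy to reproduce; the following Python sketch builds the binomial masks of Eq. (11.18) and the table in Eq. (11.19) by repeated convolution with the elementary mask 1/2 [1 1]:

```python
import numpy as np

def binomial_mask(p):
    """Binomial mask B^p, built by convolving [1, 1]/2 with itself p times."""
    mask = np.array([1.0])
    for _ in range(p):
        mask = np.convolve(mask, [0.5, 0.5])
    return mask

print(binomial_mask(2))  # [0.25 0.5 0.25]               = 1/4  [1 2 1]
print(binomial_mask(4))  # [0.0625 0.25 0.375 0.25 ...]  = 1/16 [1 4 6 4 1]
```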
The computation of the transfer function of a binomial mask is also very simple since we only need to know the transfer function of B. The transfer function of B^p is then given as the pth power:

$$\hat{b}^p = \cos^p(\pi\tilde{k}/2) = 1 - \frac{p}{8}(\pi\tilde{k})^2 + \frac{3p^2 - 2p}{384}(\pi\tilde{k})^4 + O(\tilde{k}^6). \qquad (11.20)$$


The graphical representation of the transfer function in Fig. 11.1b reveals that binomial filters are much better smoothing filters than box filters.



Figure 11.5: Transfer function of two-dimensional binomial filters: a B²; b anisotropy B̂²(k̃, θ) − B̂²(k̃, 0) in a (k̃, θ) diagram; c B⁴; d anisotropy for B⁴ as in b.

 

The transfer function decreases monotonically and approaches zero at the largest wave number. The smallest mask, B², has a half-width of k̃ = 1/2; this is a periodic structure which is sampled four times per wavelength. For larger masks, both the transfer function and the filter masks approach the Gaussian distribution with an equivalent variance. Larger masks result in smaller half-width wave numbers in agreement with the uncertainty relation.

 










































2-D Binomial Filter

Two-dimensional binomial filters can be composed from a horizontal and a vertical 1-D filter:

$$\mathcal{B}^p = \mathcal{B}_x^p\,\mathcal{B}_y^p. \qquad (11.21)$$

The smallest mask of this kind is a 3 × 3 binomial filter (p = 2):


$$\boldsymbol{B}^2 = \frac{1}{4}\,[1\ 2\ 1] \ast \frac{1}{4}\begin{bmatrix}1\\2\\1\end{bmatrix}
= \frac{1}{16}\begin{bmatrix}1&2&1\\2&4&2\\1&2&1\end{bmatrix}. \qquad (11.22)$$

 

The transfer function of the 2-D binomial filter B^p with (p + 1) × (p + 1) coefficients is easily derived from the transfer functions of the 1-D filters



 






Figure 11.6: Application of smoothing filters: a original image; b 5 × 5 box filter; c 9 × 9 box filter; d 17 × 17 binomial filter (B¹⁶); a set of recursive filters Eq. (11.36) running in horizontal and vertical directions: e p = 2; f p = 16.

 


Eq. (11.20) as

$$\hat{b}^p = \hat{b}_x^p\,\hat{b}_y^p = \cos^p(\pi\tilde{k}_x/2)\,\cos^p(\pi\tilde{k}_y/2), \qquad (11.23)$$

and correspondingly for a 3-D filter as

$$\hat{b}^p = \hat{b}_x^p\,\hat{b}_y^p\,\hat{b}_z^p = \cos^p(\pi\tilde{k}_x/2)\,\cos^p(\pi\tilde{k}_y/2)\,\cos^p(\pi\tilde{k}_z/2). \qquad (11.24)$$



 

Figure 11.7: Suppression of noise with smoothing filters: a image from Fig. 11.6a with Gaussian noise; b image with binary noise; c image a and d image b filtered with a 9 × 9 binomial filter (B⁸); e image a and f image b filtered with a 3 × 3 median filter (Section 11.7.1).

 

The transfer functions of B² and B⁴ are shown in Fig. 11.5. Already the small 3 × 3 filter is remarkably isotropic. Larger deviations from the circular contour lines can only be recognized for larger wave numbers, when the transfer function has dropped to 0.3 (Fig. 11.5a). This property can be shown by expanding Eq. (11.23) in a Taylor series using cylindrical



 

coordinates k̃ = [k̃, θ]ᵀ:


 

bˆ p


p  ˜ 2


2p2 − p     ˜ 4 p cos 4θ           ˜ 4

 


≈ 1 − 8 (π k) +


256 (π k) −


768 (π k).           (11.25)


Only the second-order term is isotropic. In contrast, the fourth-order term contains an anisotropic part which increases the transfer function in the direction of the diagonals (Fig. 11.5a). A larger filter (larger p) is less anisotropic as the isotropic term with k̃⁴ increases quadratically with p while the anisotropic term with k̃⁴ cos 4θ increases only linearly with p. Already the 5 × 5 filter (Fig. 11.5b) is remarkably isotropic. The insignificant anisotropy of the binomial filters also becomes apparent when applied to the test image in Fig. 11.4.

 









Evaluation

Figure 11.6b, c show smoothing with two different binomial filters. We observe that the edges get blurred. Fine structures as in the branches of the tree are lost. Smoothing suppresses noise. Binomial filters can reduce the noise level of zero-mean Gaussian noise (Section 3.4.2) considerably but only at the price of blurred details (Fig. 11.7a, c). Binary noise (also called impulse noise), which causes wrong gray values for a few randomly distributed pixels (Fig. 11.7b), for instance due to transmission errors, is suppressed only poorly by linear filters. The images are blurred, but the error caused by the binary noise is not eliminated but only distributed.

 

Fast Computation

We close our consideration of binomial filters with some remarks on fast algorithms. A direct computation of a (2p + 1) × (2p + 1) filter mask requires (2p + 1)² multiplications and (2p + 1)² − 1 additions. If we decompose the binomial mask into the elementary smoothing mask 1/2 [1 1] and apply this mask in horizontal and vertical directions 2p times each, we only need 4p additions. All multiplications can be handled much more efficiently as shift operations. For example, the computation of a 17 × 17 binomial filter requires only 32 additions and some shift operations compared to 289 multiplications and 288 additions needed for the direct approach.
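A minimal sketch of this cascaded scheme in Python, using repeated application of the elementary mask 1/2 [1 1] along each axis; the array-based implementation and the edge replication at the borders are illustrative choices made here, not part of the text:

```python
import numpy as np

def elementary_average(a, axis, forward):
    """Apply the elementary mask 1/2 [1 1] along `axis`.

    `forward` chooses which neighbor is averaged in; alternating the
    direction keeps the result centered after an even number of passes.
    Borders are handled by edge replication (an illustrative choice).
    """
    shifted = np.roll(a, -1 if forward else 1, axis=axis)
    edge = [slice(None)] * a.ndim
    edge[axis] = -1 if forward else 0
    shifted[tuple(edge)] = a[tuple(edge)]
    return 0.5 * (a + shifted)

def binomial_smooth(image, p):
    """Separable binomial smoothing B^p (p even): p elementary averagings
    along each axis; the scalings by 1/2 could be realized as shift
    operations on integer data."""
    out = np.asarray(image, dtype=float)
    for axis in range(out.ndim):
        for i in range(p):
            out = elementary_average(out, axis, forward=(i % 2 == 0))
    return out
```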

 

11.5 Filters as Networks‡

 

The binomial filters we have discussed in this section are built from the simplest elementary operations we can think of: scaling of pixels and the addition of neighboring pixels. For both of these operations we can construct a circuit element that performs the corresponding operation. Figure 11.8 shows a scaler,



Figure 11.8: Elementary circuits for performing discrete filter operations.

 

 

an adder, a subtractor, a multiplier, and a shift-register stage. The circuit elements can perform the operations either in analog or in digital form. With these circuit elements, we can build FIR filters as electronic networks in an instructive way. This also results in a new perspective on filter operations.

As a first example, we consider the 1-D binomial mask B⁴ = 1/16 [1 4 6 4 1]. Figure 11.9 shows different implementations for computing the filter output for one pixel. While direct implementations result in irregular-shaped circuit nets, the composition of the filter with the B = 1/2 [1 1] mask gives a regular mesh of operations. For the calculation of a single output pixel, we need 10 additions, more than for the direct implementation. To calculate the filter output of the next pixel, we only need four additions if we store the intermediate results on each level of the filter net from the computations of the previous pixel (Fig. 11.9d).

Actually, we could build a net of these circuits, spanning the whole vector, to compute the binomial smoothing in parallel (Fig. 11.10). Such a net has a number of interesting properties. Each level of the net corresponds to a filtering of the image with the elementary smoothing mask 1/2 [1 1]. Thus we obtain not only the final result but all intermediate smoothing results. The grid points of the individual layers change from regular grid points to intermediate grid points in a natural way.

The network model for filter operations is also suitable for illustrating the different approaches in handling the boundary problems of filtering. We could close the net into a ring. This corresponds to a cyclic convolution. Or we could extend the net beyond the edges of the vector, so that we get all the nodes needed to calculate the first and last point. Then we can either fill the grid points in the lowest levels lying outside the vector with zeros or we can extrapolate them in an appropriate manner from the points within the vector.

The extension of filter nets to two dimensions is straightforward for separable filters. The nets are then composed of nets alternately connecting the pixels in the horizontal or vertical direction. The filter nets are valuable tools for algorithm design. As we have seen, they are especially useful in making efficient use of intermediate results and in getting a clear idea of the boundary problems at the edges of images.



 








Figure 11.9: Different circuit nets for performing the binomial smoothing filter operation B⁴ = 1/16 [1 4 6 4 1]: a direct implementation; b saving multiplications; c composition with the elementary filter B = 1/2 [1 1]; d computation for the next pixel.

 

11.6 Efficient Large-Scale Averaging‡

Despite the efficient implementation of binomial smoothing filters B^p by cascaded convolution with B, the number of computations increases dramatically for smoothing masks with low cutoff wave numbers, because the standard deviation of the filters is proportional to the square root of p according to Eq. (3.42):

$$\sigma = \sqrt{p/4}. \qquad (11.26)$$

Let us consider a smoothing operation over a circle with a radius of about only 1.73 pixels, corresponding to a variance σ² = 3. According to Eq. (11.26) we need to apply B¹², which, even in an efficient separable implementation, requires 24 (36) additions and 2 (3) shift operations for each pixel in a 2-D (3-D) image. If we want to smooth over the double distance (σ² = 12, radius ≈ 3.5, B⁴⁸), the number of additions quadruples to 96 (144) per pixel in 2-D (3-D) space.



 

Figure 11.10: Circuit net for computing the 1-D binomial smoothing filter B⁴.

 

11.6.1 Multistep Averaging‡

The problem of slow large-scale averaging originates from the small distance between the pixels averaged in the elementary B = 1/2 [1 1] mask. In order to overcome this problem, we may use the same elementary averaging process but with more distant pixels and increase the standard deviation for smoothing correspondingly. In two dimensions, the following masks could be applied along the diagonals (σ · √2):

1  1 0  0                      1  0 0 1 

0 0 1
1 0 0
B x+y = 4    0  2  0    ,           B xy = 4    0  2  0    ,                         (11.27)

 

or, with double step width along the axes (σ · 2) and in three dimensions,

$$\mathcal{B}_{2x} = \frac{1}{4}\,[1\ 0\ 2\ 0\ 1], \qquad
\mathcal{B}_{2y} = \frac{1}{4}\begin{bmatrix}1\\0\\2\\0\\1\end{bmatrix}, \qquad
\mathcal{B}_{2z} = \frac{1}{4}\begin{bmatrix}1\\0\\2\\0\\1\end{bmatrix}_z. \qquad (11.28)$$


 

The subscripts in these masks denote the stepping width and coordinate direction. $\mathcal{B}_{x+y}$ averages the gray values at two neighboring pixels in the direction of the main diagonal. $\mathcal{B}_{2x}$ computes the mean of two pixels at a distance of 2 in the x direction. The standard deviation of these filters is proportional to the distance between the pixels. The most efficient implementations are multistep masks along the axes. They have the additional advantage that, because of separability, the algorithms can be applied to image data of arbitrary dimensions.

The problem with these filters is that they perform a subsampling. Consequently, they are no longer filters for larger wave numbers. If we take, for example, the symmetric 2-D filter $\mathcal{B}_{2x}^2\mathcal{B}_{2y}^2$, we effectively work on a grid with a doubled grid constant in the spatial domain. Hence, the reciprocal grid in the wave number space has half the grid width and the transfer function is periodically replicated once in both directions (Fig. 11.11). Generally, the zero lines



 

Figure 11.11: Transfer function of the binomial mask applied a in the diagonal direction ($\mathcal{B}_{x+y}^2\mathcal{B}_{x-y}^2$) and b with double step width in axis directions ($\mathcal{B}_{2x}^2\mathcal{B}_{2y}^2$).


 

of the transfer function of masks with larger step width reflect this reciprocal grid. For convolution with two neighboring pixels in the direction of the two diagonals, the reciprocal grid is turned by 45°. The grid constant of the reciprocal grid is a factor √2 smaller than that of the original grid.

Used individually, these filters are not of much help. But we can use them in cascade, starting with directly neighboring pixels. Then the zero lines of the transfer functions, which lie differently for each pixel distance, efficiently force the transfer function close to zero for large wave number ranges.

Cascaded multistep binomial filtering leads to a significant performance increase for large-scale smoothing. For normal separable binomial filtering, the number of computations is proportional to σ² (O(σ²)). For multistep binomial filtering it depends only logarithmically on σ (O(ld σ²)) if a cascade of filter operations with recursive step width doubling is performed:

$$\mathcal{B}^P = \underbrace{\mathcal{B}_{2^{s-1}x}^p \cdots \mathcal{B}_{8x}^p\,\mathcal{B}_{4x}^p\,\mathcal{B}_{2x}^p\,\mathcal{B}_{x}^p}_{s\ \text{times}}. \qquad (11.29)$$


Such a mask has the standard deviation

$$\sigma^2 = \underbrace{p/4 + p + 4p + \ldots + 4^{s-2}p}_{s\ \text{times}} = \frac{p}{12}\left(4^s - 1\right) \qquad (11.30)$$

and the transfer function

$$\prod_{s'=0}^{s-1} \cos^p\!\left(2^{s'-1}\pi\tilde{k}\right). \qquad (11.31)$$
Thus, for s steps only ps additions are required while the standard deviation grows exponentially as σ ≈ √(p/12) · 2ˢ.
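A minimal Python sketch of the cascade in Eq. (11.29) along one axis, using the B² = 1/4 [1 2 1] mask with step-width doubling; the reflective border handling is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import correlate1d

def multistep_smooth(image, s, axis=0):
    """Cascaded multistep averaging, Eq. (11.29), with p = 2:
    convolve with the mask 1/4 [1 2 1] stretched to step widths
    1, 2, 4, ..., 2**(s-1) along the given axis."""
    out = np.asarray(image, dtype=float)
    for step in (2 ** i for i in range(s)):
        # B^2 mask with holes: nonzero coefficients at distance `step`
        mask = np.zeros(2 * step + 1)
        mask[[0, step, 2 * step]] = [0.25, 0.5, 0.25]
        out = correlate1d(out, mask, axis=axis, mode="reflect")
    return out
```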

With the parameter p, we can adjust the degree of isotropy and the degree of residual inhomogeneities in the transfer function. A very efficient implementation is given by using p = 2 (B² = 1/4 [1 2 1] in each direction). However, the



Figure 11.12: Transfer function of cascaded multistep binomial filters and their anisotropy: a $\mathcal{B}_2^2\mathcal{B}_1^2$, b $\hat{B}_2^2\hat{B}_1^2(\tilde{k},\theta) - \hat{B}_2^2\hat{B}_1^2(\tilde{k},0)$, c $\mathcal{B}_2^4\mathcal{B}_1^4$, d $\hat{B}_2^4\hat{B}_1^4(\tilde{k},\theta) - \hat{B}_2^4\hat{B}_1^4(\tilde{k},0)$. The anisotropy is shown in polar coordinates (k̃, θ) as the deviation from the transfer function in the x direction.

 

residual side peaks at high wave numbers with maximal amplitudes up to 0.08 are still significant disturbances (Fig. 11.12a, b, Fig. 11.13a, b).

With the next larger odd-sized masks (p = 4, B⁴ = 1/16 [1 4 6 4 1] in each direction) these residual side peaks at high wave numbers are suppressed well below 0.005 (Fig. 11.12c, d, Fig. 11.13c, d). This is about the relative resolution of 8-bit images and should therefore be sufficient for most applications. With still larger masks, they could be suppressed even further. Figure 11.14 shows the first four steps of multistep averaging with the B⁴ mask, illustrating how quickly the smoothing reaches large scales.

 

11.6.2 Multigrid Averaging‡

Multistep cascaded averaging can be further enhanced by converting it into a multiresolution technique. The idea of multigrid smoothing is very simple. When a larger-step mask is involved, this operation can be applied on a correspondingly coarser grid. This means that the last operation before using the larger-step mask needs to compute the convolution only at the grid points used by the following coarser grid operator. This sampling procedure is denoted by a special syntax in the operator index: $\mathcal{B}_{x\downarrow 2}$ means: apply the operator in the x direction and advance the mask two pixels in the x direction. Thus, the output of the filter operator has only half as many pixels in the x direction as the input.



 























































Figure 11.13: Cascaded multistep averaging with step width doubling according to Eq. (11.29), applied to the ring test pattern: a $\mathcal{B}_2^2\mathcal{B}_1^2$, b $\mathcal{B}_4^2\mathcal{B}_2^2\mathcal{B}_1^2$, c $\mathcal{B}_2^4\mathcal{B}_1^4$, and d $\mathcal{B}_4^4\mathcal{B}_2^4\mathcal{B}_1^4$.


 

Multigrid smoothing makes the number of computations essentially independent of the standard deviation of the smoothing mask. We again consider a sequence of 1-D binomial filters:

$$\underbrace{\mathcal{B}_{x\downarrow 2}^p \cdots \mathcal{B}_{x\downarrow 2}^p\,\mathcal{B}_{x\downarrow 2}^p}_{s\ \text{times}}.$$

Since $\mathcal{B}_{x\downarrow 2}^p$ takes p operations, the operator sequence takes

$$p\sum_{s'=1}^{s}\frac{1}{2^{s'-1}} = 2p\left(1 - \frac{1}{2^{s}}\right) < 2p \ \text{operations.}$$



 











Figure 11.14: Cascaded multistep averaging with step width doubling according to Eq. (11.29), applied to image Fig. 11.6a with a one, b two, c three, and d four steps using the B⁴ filter.

 

As for the multistep approach, Eq. (11.30), the standard deviation of the operator sequence is

$$\sigma^2 = \frac{p}{12}\left(4^s - 1\right). \qquad (11.32)$$

Thus, smoothing to any degree takes not more than twice as many operations as smoothing at the first step! As for multistep binomial filters, the standard deviation grows by a factor of two. Also, as long as $\hat{B}^p(\tilde{k}) = 0\ \ \forall\ \tilde{k} \ge 1/2$, the transfer functions of the filters are the same as for the multistep filters.
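A minimal sketch of the multigrid idea in Python, assuming the B² mask and subsampling by a factor of two after each smoothing step (upsampling back to the original resolution, e.g. for display, is left out):

```python
import numpy as np
from scipy.ndimage import correlate1d

def multigrid_smooth_1d(signal, s):
    """Multigrid averaging: smooth with 1/4 [1 2 1], then keep only every
    second sample before the next (coarser) smoothing step. The amount of
    work per original pixel stays below twice that of the first step."""
    g = np.asarray(signal, dtype=float)
    levels = [g]
    for _ in range(s):
        g = correlate1d(g, [0.25, 0.5, 0.25], mode="reflect")
        g = g[::2]          # advance the mask two pixels: B_{x down 2}
        levels.append(g)
    return levels  # list of progressively smoothed, subsampled signals
```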

 

11.6.3 Recursive Averaging‡

A totally different approach to large-scale averaging is given by recursive filtering introduced in Section 4.3. The recursion essentially gives a convolution filter an infinite point spread function. The basic advantage of recursive filters is that they can easily be "tuned", as we have demonstrated with the simple lowpass filter in Section 4.3.5. In this section, the focus is on the design of averaging filters that meet the criteria we discussed earlier in Section 11.2, especially the zero-shift property (Section 11.2.1) that is not met by causal recursive filters.



 

Basically, recursive filters work the same as non-recursive filters. In principle, we can replace any recursive filter with a non-recursive filter whose filter mask is identical to the point spread function of the recursive filter. The real problem is the design of the recursive filter, i.e., the determination of the filter coefficients for a desired transfer function.

While the theory of one-dimensional recursive filters is standard knowledge in digital signal processing (see, for example, Oppenheim and Schafer [133]), the design of two-dimensional filters is still not adequately understood. The main reason is the fundamental difference between the mathematics of one- and higher-dimensional z-transforms and polynomials [112].

Despite these theoretical problems, recursive filters can be applied successfully in digital image processing. In order to avoid the filter design problems, we will use only very simple recursive filters which are easily understood and compose them to more complex filters, similar to the way we constructed the class of binomial filters from the elementary smoothing mask 1/2 [1 1]. In this way we will obtain a class of recursive filters that may not be optimal from the point of view of filter design but are useful in practical applications.

In the first composition step, we combine causal recursive filters to symmetric filters. We start with a general one-dimensional recursive filter with the transfer function

$${}^{+}\hat{A} = a(\tilde{k}) + \mathrm{i}\,b(\tilde{k}). \qquad (11.33)$$

The index ⁺ denotes that the filter runs in the positive coordinate direction. The transfer function of the same filter but running in the opposite direction is

$${}^{-}\hat{A} = a(\tilde{k}) - \mathrm{i}\,b(\tilde{k}). \qquad (11.34)$$

Only the sign of the imaginary part of the transfer function changes, as it corresponds to the odd part of the point spread function, while the real part corresponds to the even part.

We now have two possible ways to combine the forward and backward running filters into symmetric filters useful for averaging:

$$\begin{array}{ll}
\text{addition} & \hat{A} = \dfrac{1}{2}\left({}^{+}\hat{A} + {}^{-}\hat{A}\right) = a(\tilde{k}) \\[2ex]
\text{multiplication} & \hat{A} = {}^{+}\hat{A}\,{}^{-}\hat{A} = a^2(\tilde{k}) + b^2(\tilde{k}).
\end{array} \qquad (11.35)$$


 

Both techniques yield real transfer functions and thus even filters with zero shift that are suitable for averaging.

As the elementary recursive smoothing filter, we use the two-element lowpass filter we have already studied in Section 4.3.5:

$${}^{\pm}\mathcal{A}_x:\quad G'_{m,n} = G'_{m,n\mp 1} + \alpha\left(G_{m,n} - G'_{m,n\mp 1}\right) \quad \text{with} \quad 0 \le \alpha \le 1 \qquad (11.36)$$

with the impulse response

$$\left({}^{\pm}\boldsymbol{A}_x\right)_{m,n} = \begin{cases}\alpha(1-\alpha)^n & n \ge 0,\ m = 0 \\ 0 & \text{otherwise.}\end{cases} \qquad (11.37)$$



Figure 11.15: Transfer function of the recursive lowpass filter Eq. (11.39) for different values of α = 1/2, 1/4, 1/8, and 1/16.

The transfer function of this filter can easily be calculated by taking into account that the Fourier transform of Eq. (11.37) forms a geometric series:

$${}^{\pm}\hat{A}_x(\tilde{k}) = \frac{\alpha}{1 - (1-\alpha)\exp(\mp\mathrm{i}\pi\tilde{k})}. \qquad (11.38)$$

This relation is valid only approximately, since we broke off the infinite sum in Eq. (11.37) at n = N − 1 because of the limited size of the image.

Consecutive filtering with a left and a right running filter corresponds to a multiplication of the transfer functions


 

$$\hat{A}_x(\tilde{k}) = {}^{+}\hat{A}_x(\tilde{k})\,{}^{-}\hat{A}_x(\tilde{k}) \approx \frac{\alpha^2}{\alpha^2 + 2(1-\alpha)\left(1 - \cos(\pi\tilde{k})\right)}. \qquad (11.39)$$


The transfer function shows the characteristics expected for a lowpass filter (Fig. 11.15). At k̃ = 0, Âₓ(k̃) = 1; for small k̃, the transfer function falls off in proportion to k̃²,

$$\hat{A}_x \approx 1 - \frac{1-\alpha}{\alpha^2}(\pi\tilde{k})^2, \qquad \tilde{k} \ll 1, \qquad (11.40)$$

and has a half-value wave number k̃_c (Âₓ(k̃_c) = 1/2) of

         
   

$$\tilde{k}_c \approx \frac{1}{\pi}\arcsin\frac{\alpha}{\sqrt{2(1-\alpha)}} \approx \frac{\alpha}{\sqrt{2}\,\pi}, \qquad (11.41)$$

where the last approximation is only valid for α ≪ 1. At the highest wave number, k̃ = 1, the transfer function has dropped off to


 

$$\hat{A}_x(1) \approx \frac{\alpha^2}{4(1-\alpha) + \alpha^2}. \qquad (11.42)$$


In contrast to binomial filters, it is not exactly zero, but it is sufficiently small even for small values of α (Fig. 11.15).
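The following Python sketch implements the elementary recursive lowpass of Eq. (11.36) in both running directions and applies them consecutively, corresponding to the multiplication of transfer functions in Eq. (11.39); it is an illustrative scalar-loop version, not an optimized one:

```python
import numpy as np

def recursive_lowpass_1d(g, alpha):
    """Zero-shift recursive averaging along a 1-D signal, Eqs. (11.36)/(11.39):
    one causal pass (left to right) followed by one anti-causal pass
    (right to left)."""
    out = np.array(g, dtype=float)
    # forward (causal) pass: g'_n = g'_{n-1} + alpha * (g_n - g'_{n-1})
    for n in range(1, len(out)):
        out[n] = out[n - 1] + alpha * (out[n] - out[n - 1])
    # backward (anti-causal) pass
    for n in range(len(out) - 2, -1, -1):
        out[n] = out[n + 1] + alpha * (out[n] - out[n + 1])
    return out
```

Smaller values of α give stronger averaging (Eq. (11.41)) at unchanged computational cost.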

Two-dimensional filters can be composed from one-dimensional filters running in the horizontal and vertical directions:

$$\mathcal{A} = \mathcal{A}_x\mathcal{A}_y = {}^{+}\mathcal{A}_x\,{}^{-}\mathcal{A}_x\,{}^{+}\mathcal{A}_y\,{}^{-}\mathcal{A}_y. \qquad (11.43)$$



 

























Figure 11.16: Transfer functions of two-dimensional recursive lowpass filters: a $\mathcal{A}$ with α = 1/2, b anisotropy of a: Â(k, θ) − Â(k, π/4), c $\mathcal{A}'$ with α = 1/2, and d anisotropy of c: Â'(k, θ) − Â'(k, 0).

 

This filter (Fig. 11.16) is less isotropic than binomial filters (Fig. 11.5). Averaging in the coordinate directions is considerably less than in the other directions. However, recursive filters have the big advantage that the computational effort does not depend on the degree of averaging. With the simple first-order recursive filter, we can select the degree of averaging with an appropriate choice of the filter parameter α (Eq. (11.41)). The isotropy of recursive filters can be further improved by running additional filters along the diagonals:

$$\mathcal{A}' = \mathcal{A}_x\mathcal{A}_y\mathcal{A}_{x-y}\mathcal{A}_{x+y}. \qquad (11.44)$$

The subscripts x − y and x + y denote the main and the second diagonal, respectively. The transfer function of such a filter is shown in Fig. 11.16b.

In contrast to non-recursive filters, the computational effort does not depend on the cut-off wave number. If α = 2⁻ˡ in Eq. (11.36), the filter can be computed without any multiplication:

$$G'_{m,n} = \left(G'_{m,n\mp 1}\cdot 2^l - G'_{m,n\mp 1} + G_{m,n}\right)\cdot 2^{-l}, \qquad l > 1. \qquad (11.45)$$

The two-dimensional filter $\mathcal{A}$ needs only 8 additions and shift operations per pixel, while the $\mathcal{A}'$ filter, running in 4 directions, needs twice as many operations. This is not more efficient than the multigrid approach with binomial masks (Section 11.6.2), which makes a much better isotropic filter.



 




























Nonlinear Averaging

The linear averaging filters discussed so far blur edges. Even worse, if the mask of the smoothing operator crosses an object edge it contains pixels from both the object and the background, giving a meaningless result from the filter. The same is true if averages are performed when a certain number of pixels in an image show erroneous values, e.g., because of a transmission error. The question, therefore, is whether it is possible to perform an averaging that does not cross object boundaries or that ignores certain pixels. Such a procedure can only be applied, of course, if we have already detected the edges or any distorted pixel.

In this section, we discuss three types of nonlinear averaging filters: the classical median filter (Section 11.7.1); weighted averaging, also known as normalized convolution (Section 11.7.2); and steerable averaging (Section 11.7.3), where we control the direction and/or degree of averaging with the local content of the neighborhood.

 

Median Filter

Linear filters effectively suppress Gaussian noise but perform very poorly in the case of binary noise (Fig. 11.7). Using linear filters that weigh and sum up, we assume that each pixel carries some useful information. Pixels distorted by transmission errors have lost their original gray value. Linear smoothing does not eliminate this information but carries it on to neighboring pixels. Thus the appropriate operation to process such distortions is to detect these pixels and to eliminate them.

This is exactly what a rank-value filter does (Section 4.4). The pixels within the mask are sorted and one pixel is selected. In particular, the median filter selects the medium value. As binary noise completely changes the gray value, it is very unlikely that it will show the medium gray value in the neighborhood. In this way, the medium gray value of the neighborhood is used to restore the gray value of the distorted pixel. The following examples illustrate the effect of a 1 × 3 median filter $\mathcal{M}$:

$$\begin{aligned}
\mathcal{M}\,[\cdots\ 1\ 2\ 3\ 7\ 8\ 9\ \cdots] &= [\cdots\ 1\ 2\ 3\ 7\ 8\ 9\ \cdots] \\
\mathcal{M}\,[\cdots\ 1\ 2\ 102\ 4\ 5\ 6\ \cdots] &= [\cdots\ 1\ 2\ 4\ 5\ 5\ 6\ \cdots] \\
\mathcal{M}\,[\cdots\ 0\ 0\ 0\ 9\ 9\ 9\ \cdots] &= [\cdots\ 0\ 0\ 0\ 9\ 9\ 9\ \cdots]
\end{aligned}$$

As expected, the median filter eliminates runaways. The two other gray value structures, a monotonically increasing ramp and an edge between two plateaus of constant gray value, are preserved. In this way a median filter effectively eliminates binary noise without significantly blurring the image (Fig. 11.7e). Gaussian noise is less effectively eliminated (Fig. 11.7f).
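The listed 1 × 3 examples can be reproduced directly; a minimal sketch using scipy (the border values depend on the chosen boundary mode and are left out of the comparison):

```python
import numpy as np
from scipy.ndimage import median_filter

row = np.array([1, 2, 102, 4, 5, 6])
print(median_filter(row, size=3))  # interior values 2, 4, 5, 5: the
                                   # outlier 102 is replaced, cf. the text
```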



 

The most important deterministic properties of a one-dimensional 2N + 1 median filter can be formulated using the following definitions.

• A constant neighborhood is an area with N + 1 equal gray values.

• An edge is a monotonically increasing or decreasing area between two constant neighborhoods.

• An impulse is an area of at most N points surrounded by constant neighborhoods with the same gray value.

• A root or fix point is a signal that is preserved under the median filter operation.

With these definitions, the deterministic properties of a median filter can be described very compactly:

• Constant neighborhoods and edges are fix points.

• Impulses are eliminated.

Iterative filtering of an image with a median filter results in an image containing only constant neighborhoods and edges. If only single pixels are distorted, a 3 × 3 median filter is sufficient to eliminate them. If clusters of distorted pixels occur, larger median filters must be used.

The statistical properties of the median filter can be illustrated with an image containing only constant neighborhoods, edges, and impulses. The power spectrum of impulses is flat (white noise). As the median filter eliminates impulses, the power spectrum decreases homogeneously. The contribution of the edges to a certain wave number is not removed. This example also underlines the nonlinear nature of the median filter.

 


Weighted Averaging

In Section 3.1, we saw that gray values at pixels, just like any other experimental data, may be characterized by individual errors that have to be considered in any further processing. As an introduction, we first discuss the averaging of a set of N data gₙ with standard deviations σₙ. From elementary statistics, it is known that appropriate averaging requires the weighting of each data point gₙ with the inverse of the variance wₙ = 1/σₙ². Then, an estimate of the mean value is given by

$$g = \left.\sum_{n=1}^{N} g_n/\sigma_n^2 \right/ \sum_{n=1}^{N} 1/\sigma_n^2 \qquad (11.46)$$

 

while the standard deviation of the mean is

$$\sigma_g^2 = 1\Big/\sum_{n=1}^{N} 1/\sigma_n^2. \qquad (11.47)$$

In Eq. (11.46), the lower the statistical error of a data point, the higher is its weight in the computation of the mean.



 






Figure 11.17: Weighted averaging using the edge strength to hinder smoothing at edges: a image from Fig. 11.6a with added Gaussian noise; b weighting image after 5 convolutions; image after c two and d five normalized convolutions using a B² binomial smoothing mask (compare with Fig. 11.7).

 

The application of weighted averaging to image processing is known as normalized convolution [57]. The averaging is now extended to a local neighborhood. Each pixel enters the convolution sum with a weighting factor associated with it. Thus, normalized convolution requires two images. One is the image to be processed, the other an image with the weighting factors.

By analogy to Eqs. (11.46) and (11.47), normalized convolution is defined by

$$\boldsymbol{G}' = \frac{\boldsymbol{H} \ast (\boldsymbol{W}\cdot\boldsymbol{G})}{\boldsymbol{H} \ast \boldsymbol{W}}, \qquad (11.48)$$

where $\boldsymbol{H}$ is any convolution mask, $\boldsymbol{G}$ the image to be processed, and $\boldsymbol{W}$ the image with the weighting factors. A normalized convolution with the mask $\boldsymbol{H}$ essentially transforms the set of the image $\boldsymbol{G}$ and the weighting image $\boldsymbol{W}$ into a new image $\boldsymbol{G}'$ and a new weighting image $\boldsymbol{W}' = \boldsymbol{H} \ast \boldsymbol{W}$, which can undergo further processing.

In this sense, normalized convolution is nothing complicated or spe- cial. It is just adequate consideration of pixels with spatially variable



 

statistical errors. “Standard” convolution can be regarded as a special case of normalized convolution. Then all pixels are assigned the same weighting factor and it is not required to use a weighting image, since the factor remains a constant.

The flexibility of normalized convolution is given by the choice of the weighting image. The weighting image does not necessarily get associated with an error. It can be used to select and/or amplify pixels with certain features. In this way, normalized convolution becomes a versatile nonlinear operator.

As an example, Fig. 11.17 shows a noisy image that is filtered by normalized convolution using a weighting image that hinders smoothing at edges.
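A minimal Python sketch of normalized convolution, Eq. (11.48), with an arbitrary smoothing mask; the small epsilon guarding against division by zero where all weights vanish is an implementation detail added here:

```python
import numpy as np
from scipy.ndimage import convolve

def normalized_convolution(image, weights, mask, eps=1e-12):
    """Normalized convolution, Eq. (11.48): G' = H*(W.G) / (H*W).

    `weights` holds one weighting factor per pixel (e.g. inverse variances
    or an edge-dependent weight); `mask` is any smoothing mask H.
    Also returns the new weighting image W' = H*W.
    """
    numer = convolve(np.asarray(image, float) * weights, mask, mode="reflect")
    denom = convolve(np.asarray(weights, float), mask, mode="reflect")
    return numer / (denom + eps), denom
```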

 

Steerable Averaging

The idea of steerable filters is to make the convolution mask dependent on the local image structure. This is a general concept which is not restricted to averaging but can be applied to any type of convolution process. The basic idea of steerable filters is as follows. A steerable filter has some freely adjustable parameters that control the filtering. These could be various properties such as the degree of smoothing, the direction of smoothing, or both. It is easy to write down a filter mask with adjustable parameters. We have done this already for recursive filters in Eq. (11.36), where the parameter α determines the degree of smoothing. However, it is not computationally efficient to convolve an image with masks that are different at every pixel. Then, advantage can no longer be taken of the fact that masks are separable.

An alternative approach is to seek a base of a few filters, and to use these filters to compute a set of filtered images. Then, these images are interpolated using adjustable parameters. In operator notation this reads

$$\mathcal{H}(\alpha) = \sum_{p=1}^{P} f_p(\alpha)\,\mathcal{H}_p, \qquad (11.49)$$

where $\mathcal{H}_p$ is the pth filter and f_p(α) a scalar interpolation function of the steering parameter α. Two problems must be solved when using steerable filters. First, and most basically, it is not clear that such a filter base $\mathcal{H}_p$ exists at all. Second, the relation between the steering parameter(s) α and the interpolation coefficients f_p must be found. If the first problem is solved, we mostly get the solution to the second for free.

As an example, a directional smoothing filter is to be constructed with the following transfer function:

$$\hat{h}_{\theta_0}(k, \theta) = 1 - f(k)\cos^2(\theta - \theta_0). \qquad (11.50)$$



 

Figure 11.18: Transfer functions for the three base filters for directional smoothing according to Eq. (11.54).

 

In this equation, cylindrical coordinates (k, θ) are used in the Fourier domain. The filter in Eq. (11.50) is a polar separable filter with an arbitrary radial function f(k). This radial component provides an arbitrary isotropic smoothing.

The steerable angular term is given by cos²(θ − θ₀). Structures oriented in the direction θ₀ remain in the image, while those perpendicular to θ₀ are completely filtered out. The angular width of the directional smoothing filter is ±45°.

We separate the cosine function in Eq. (11.50) into trigonometric functions that depend only on either θ or the steering angle θ₀ and obtain

 

$$\hat{h}_{\theta_0}(k, \theta) = 1 - \tfrac{1}{2} f(k)\left[1 + \cos(2\theta_0)\cos(2\theta) + \sin(2\theta_0)\sin(2\theta)\right] \qquad (11.51)$$


with the base filters

$$\hat{h}_1 = 1 - \tfrac{1}{2} f(k), \qquad
\hat{h}_2 = -\tfrac{1}{2} f(k)\cos(2\theta), \qquad
\hat{h}_3 = -\tfrac{1}{2} f(k)\sin(2\theta) \qquad (11.52)$$

and the interpolation functions

$$f_1(\theta_0) = 1, \qquad f_2(\theta_0) = \cos(2\theta_0), \qquad f_3(\theta_0) = \sin(2\theta_0). \qquad (11.53)$$

Thus three base filters are required. The filter ĥ₁ is an isotropic smoothing filter, the other two are directional filters. Although the equations



 

 


Figure 11.19: Transfer functions for the steerable smoothing filter according to Eq. (11.51) using the base filters Eq. (11.54): smoothing in 0°, 22.5°, and 45° to the x axis.

 

for this family of steerable directional smoothing filters are simple, it is not easy to implement polar separable base filters because they are not Cartesian separable and thus require careful optimization.

Nevertheless, it is even possible to implement this steerable smoothing filter with 3 × 3 base filters (Fig. 11.18). Because of the symmetry properties of the transfer functions, we do not have much choice in choosing the filter coefficients and end up with the following three base filters:

$$\boldsymbol{H}_1 = \frac{1}{32}\begin{bmatrix}1&2&1\\2&20&2\\1&2&1\end{bmatrix}, \qquad
\boldsymbol{H}_2 = \frac{1}{32}\begin{bmatrix}0&-4&0\\4&0&4\\0&-4&0\end{bmatrix}, \qquad
\boldsymbol{H}_3 = \frac{1}{32}\begin{bmatrix}-2&0&2\\0&0&0\\2&0&-2\end{bmatrix}$$

$$\begin{aligned}
\hat{h}_1 &= \tfrac{1}{2} + \tfrac{1}{2}\cos^2(\pi\tilde{k}_1/2)\cos^2(\pi\tilde{k}_2/2) &&\approx 1 - \frac{\pi^2\tilde{k}^2}{8}, \\
\hat{h}_2 &= \tfrac{1}{4}\left[\cos(\pi\tilde{k}_1) - \cos(\pi\tilde{k}_2)\right] &&\approx -\frac{\pi^2\tilde{k}^2}{8}\cos(2\theta), \\
\hat{h}_3 &= \tfrac{1}{8}\left[\cos(\pi(\tilde{k}_1+\tilde{k}_2)) - \cos(\pi(\tilde{k}_1-\tilde{k}_2))\right] &&\approx -\frac{\pi^2\tilde{k}^2}{8}\sin(2\theta).
\end{aligned} \qquad (11.54)$$


From Fig. 11.19 it is obvious that this simple implementation works well up to moderate wave numbers. At high wave numbers (k̃ > 0.5), the directional filter no longer works very well.
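A minimal sketch of steering these three base filters in Python, following Eqs. (11.49), (11.53), and the masks of Eq. (11.54); the reflective border handling is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import convolve

# Base filter masks of Eq. (11.54)
H1 = np.array([[1, 2, 1], [2, 20, 2], [1, 2, 1]]) / 32.0
H2 = np.array([[0, -4, 0], [4, 0, 4], [0, -4, 0]]) / 32.0
H3 = np.array([[-2, 0, 2], [0, 0, 0], [2, 0, -2]]) / 32.0

def directional_smooth(image, theta0):
    """Steerable directional smoothing: interpolate the three base filter
    responses with f1 = 1, f2 = cos(2*theta0), f3 = sin(2*theta0),
    Eqs. (11.49) and (11.53)."""
    g = np.asarray(image, dtype=float)
    r1 = convolve(g, H1, mode="reflect")
    r2 = convolve(g, H2, mode="reflect")
    r3 = convolve(g, H3, mode="reflect")
    return r1 + np.cos(2 * theta0) * r2 + np.sin(2 * theta0) * r3
```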



11.8 Averaging in Multichannel Images‡

At first glance, it appears that there is not much special about averaging of multichannel images: just apply the smoothing mask to each of the P channels individually:


G '1

G '2

'


H G 1

H G 2


                        
= .  = H G =              .


.                (11.55)


   .   


.

H ∗   G p


+
This simple concept can also be extended to normalized convolution (Section 11.7.2). If the same smoothing kernel is applied to all components, it is suffi cient to use one common weighting image that can be appended as the (P 1)th component

of the multicomponent image.


\begin{bmatrix} \boldsymbol{G}'_1 \\ \boldsymbol{G}'_2 \\ \vdots \\ \boldsymbol{G}'_P \\ \boldsymbol{W}' \end{bmatrix}
= \begin{bmatrix}
(\mathcal{H} * (\boldsymbol{W} \cdot \boldsymbol{G}_1)) / (\mathcal{H} * \boldsymbol{W}) \\
(\mathcal{H} * (\boldsymbol{W} \cdot \boldsymbol{G}_2)) / (\mathcal{H} * \boldsymbol{W}) \\
\vdots \\
(\mathcal{H} * (\boldsymbol{W} \cdot \boldsymbol{G}_P)) / (\mathcal{H} * \boldsymbol{W}) \\
\mathcal{H} * \boldsymbol{W}
\end{bmatrix}                         (11.56)
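A corresponding sketch for Eq. (11.56), again with a Gaussian standing in for H and a hypothetical function name; a small epsilon guards against division by zero where the smoothed weight image vanishes:

import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_smooth_channels(channels, weights, sigma=2.0):
    """Normalized convolution with one common weight image W for all channels,
    Eq. (11.56). `channels` is a sequence of 2-D arrays, `weights` a 2-D array
    of certainties."""
    w = np.asarray(weights, dtype=float)
    hw = gaussian_filter(w, sigma)            # H * W, the new weight image W'
    eps = np.finfo(float).eps
    smoothed = [gaussian_filter(w * np.asarray(g, dtype=float), sigma) / (hw + eps)
                for g in channels]
    return smoothed, hw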


A special case of multicomponent images is given when they represent features that can be mapped to angular coordinates. Typically, such features include the direction of an edge or the phase of a periodic signal. Features of this kind are cyclic and cannot be represented well in Cartesian coordinates. Nor can they be averaged in this representation. Imagine an angle of +175° and one of −179°. The mean angle is 178°, since −179° = 360° − 179° = 181° is close to 175°, and not (175° − 179°)/2 = −2°.

Circular features such as angles are therefore better represented as unit vectors of the form \bar{\boldsymbol{n}}_\theta = [\cos\theta, \sin\theta]^T. In this representation they can be averaged correctly, as illustrated in Fig. 11.20. The average vector points in the correct direction, but its magnitude is generally smaller than 1:

(\bar{\boldsymbol{n}}_{\theta_1} + \bar{\boldsymbol{n}}_{\theta_2})/2 = \begin{bmatrix} \cos[(\theta_1 + \theta_2)/2] \\ \sin[(\theta_1 + \theta_2)/2] \end{bmatrix} \cos[(\theta_2 - \theta_1)/2].                         (11.57)

 

For an angle difference of 180°, the average vector has zero magnitude. The decrease of the magnitude of the average vector has an intuitive interpretation: the larger the scatter of the angles, the lower the certainty of the average value. Indeed, if all directions are equally probable, the sum vector vanishes, while it grows in length when the scatter is low.

These considerations can be extended to the averaging of circular features. To this end we set the magnitude of the vector equal to the certainty of the quantity that is represented by the angle of the vector. In this way, short vectors contribute little and long vectors contribute more to the averaging procedure.

This is a very attractive form of weighted averaging since, in contrast to normalized convolution (Section 11.7.2), it requires no time-consuming division. Of course, it works only with features that can be mapped adequately to an angle.
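The vector representation of cyclic features translates directly into a few lines of code. The sketch below smooths the components cos θ and sin θ, optionally weighted by a certainty image; the function name and the Gaussian stand-in for the smoothing operator are my own choices, not the book's:

import numpy as np
from scipy.ndimage import gaussian_filter

def average_angle(theta, certainty=None, sigma=2.0):
    """Average a cyclic quantity via its vector representation
    [cos(theta), sin(theta)], weighted by an optional certainty image."""
    theta = np.asarray(theta, dtype=float)
    c = np.ones_like(theta) if certainty is None else np.asarray(certainty, dtype=float)
    x = gaussian_filter(c * np.cos(theta), sigma)
    y = gaussian_filter(c * np.sin(theta), sigma)
    return np.arctan2(y, x), np.hypot(x, y)   # mean direction and vector magnitude

The returned magnitude already carries the certainty information discussed above: it stays large for well-aligned directions and drops towards zero for large scatter.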



 

Figure 11.20: Averaging of a cyclic quantity represented as a unit vector \bar{\boldsymbol{n}}_\theta = [\cos\theta, \sin\theta]^T on the unit circle. The average vector (\bar{\boldsymbol{n}}_{\theta_1} + \bar{\boldsymbol{n}}_{\theta_2})/2 points in the correct direction (\theta_1 + \theta_2)/2, but its magnitude decreases with the difference angle.

 

Finally, we consider a measure to characterize the scatter in the direction of vectors. Figure 11.20 illustrates that for low scatter the magnitude of the sum vector is only slightly lower than the sum of the magnitudes of the vectors. Thus we can define an angular coherence measure as

c = \frac{|\mathcal{H} * \boldsymbol{G}|}{\mathcal{H} * |\boldsymbol{G}|},                         (11.58)

where \mathcal{H} is an arbitrary smoothing convolution operator. This measure is one if all vectors in the neighborhood covered by the convolution operator point in the same direction and zero if they are equally distributed. This definition of a coherence measure works not only in two-dimensional but also in higher-dimensional vector spaces. In one-dimensional vector spaces (scalar images), the coherence measure is, of course, always one.
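Under the same assumptions as before (Gaussian stand-in for the smoothing operator H, hypothetical function name), the coherence measure of Eq. (11.58) for a 2-D vector field with components gx and gy can be sketched as:

import numpy as np
from scipy.ndimage import gaussian_filter

def angular_coherence(gx, gy, sigma=2.0):
    """|H * G| / (H * |G|): one for parallel vectors, zero for uniformly
    distributed directions."""
    gx = np.asarray(gx, dtype=float)
    gy = np.asarray(gy, dtype=float)
    num = np.hypot(gaussian_filter(gx, sigma), gaussian_filter(gy, sigma))
    den = gaussian_filter(np.hypot(gx, gy), sigma)
    return num / np.maximum(den, np.finfo(float).eps)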

 

11.9 Further Readings‡

The articles of Simonds [171] and Wells [197] discuss fast algorithms for large Gaussian kernels. Readers with an interest in the general principles of efficient algorithms are referred to the textbooks of Aho et al. [3] or Sedgewick [167]. Blahut [10] deals with fast algorithms for digital signal processing. Classical filter design techniques, especially for IIR filters, are discussed in the standard textbooks for signal processing, e.g., Proakis and Manolakis [144] or Oppenheim and Schafer [133]. Lim [112] specifically deals with the design of 2-D IIR filters. A detailed description of the deterministic and statistical properties of median filters can be found in Huang [75] or Arce et al. [5]. They are also discussed in detail in the monograph on nonlinear digital filters by Pitas and Venetsanopoulos [140]. The monograph of Granlund and Knutsson [57] on signal processing for computer vision also deals with weighted averaging (normalized convolution, Section 11.7.2). Steerable filters (Section 11.7.3) were introduced by the articles of Freeman and Adelson [48] and Simoncelli et al. [170].


Edges

Introduction

The task of edge detection requires neighborhood operators that are sensitive to changes and suppress areas of constant gray values. In this way, a feature image is formed in which those parts of the image appear bright where changes occur, while all other parts remain dark.

Mathematically speaking, an ideal edge is a discontinuity of the spatial gray value function g(x) of the image plane. It is obvious that this is only an abstraction, which often does not match reality. Thus, the first task of edge detection is to find out the properties of the edges contained in the image to be analyzed. Only if we can formulate a model of the edges can we determine how accurately and under what conditions it will be possible to detect an edge and to optimize edge detection.

Edge detection is always based on differentiation in one form or another. In discrete images, differentiation is replaced by discrete differences, which only approximate differentiation. The errors associated with these approximations require careful consideration. They cause effects that are not expected in the first place. The two most serious errors are: anisotropic edge detection, i.e., edges are not detected equally well in all directions, and erroneous estimation of the direction of the edges.

While the definition of edges is obvious in scalar images, different definitions are possible in multicomponent or vectorial images (Section 12.6). An edge might be a feature that shows up in only one component or in all. Edge detection also becomes more complex in higher-dimensional images. In three dimensions, for example, volumetric regions are separated by surfaces, and edges become discontinuities in the orientation of surfaces.

Another important question is the reliability of the edge estimates. We want not only to find an edge but also to know how significant it is. Thus, we need a measure for edge strength. Closely related to this issue is the question of optimum edge detection. Once edge detectors deliver not only edges but also an objective confidence measure, different edge detectors can be compared to each other and optimization of edge detection becomes possible.

 




 

Figure 12.1: Noisy 1-D edge and its first and second derivative.

 

