Monotonically Decreasing Transfer Function
Intuitively, we expect that any smoothing operator attenuates smaller scales more strongly than coarser scales. More specifically, a smoothing operator should not completely annul a certain scale while smaller scales still remain in the image. Mathematically speaking, this means that the transfer function decreases monotonically with the wave number:

ĥ(k̃₂) ≤ ĥ(k̃₁)  if  k̃₂ > k̃₁. (11.6)

We may impose the more stringent condition that the transfer function is identical to zero for the highest wave numbers:

1-D: ĥ(1) = 0
2-D: ĥ(k̃₁, 1) = 0,  ĥ(1, k̃₂) = 0 (11.7)
3-D: ĥ(k̃₁, k̃₂, 1) = 0,  ĥ(k̃₁, 1, k̃₃) = 0,  ĥ(1, k̃₂, k̃₃) = 0.

Together with the monotonicity condition and the preservation of the mean value, this means that the transfer function of any averaging operator decreases monotonically from one to zero.
Isotropy

In most applications, the smoothing should be the same in all directions in order not to prefer any direction. Thus, both the filter mask and the transfer function should be isotropic. Consequently, the filter mask depends only on the magnitude of the distance from the center pixel, and the transfer function only on the magnitude of the wave number:

h(x) = h(|x|)  and  ĥ(k̃) = ĥ(|k̃|). (11.8)

In discrete space, of course, this condition can only be met approximately. Therefore, it is an important design goal to construct discrete masks with minimum deviation from isotropy.
Box Filter Basics
The mask of the elementary box filter is

³R = 1/3 [1 1 1]. (11.9)

The factor 1/3 scales the result of the convolution sum in order to preserve the mean value (Section 11.2.2); otherwise the gray value in a region with constant gray values would not be preserved. We apply this mask to a vertical edge:
··· 0 0 1 1 ···                    ··· 0 1/3 2/3 1 ···
··· 0 0 1 1 ···  ∗  1/3 [1 1 1]  =  ··· 0 1/3 2/3 1 ···
··· 0 0 1 1 ···                    ··· 0 1/3 2/3 1 ···
As expected for a smoothing operation, the sharp edge is transformed into a smoother ramp with a gradual transition from 0 to 1. Smoothing filters attenuate structures with high wave numbers. Let us test this first with a vertical periodic structure with a wavelength of 3 pixels:
··· 1 −2 1 1 −2 1 ···                    ··· 0 0 0 0 0 0 ···
··· 1 −2 1 1 −2 1 ···  ∗  1/3 [1 1 1]  =  ··· 0 0 0 0 0 0 ···
··· 1 −2 1 1 −2 1 ···                    ··· 0 0 0 0 0 0 ···

This structure is completely removed. A structure with the even smaller wavelength of 2 pixels, however, is not removed but only attenuated and inverted in sign:

··· 1 −1 1 −1 ···  ∗  1/3 [1 1 1]  =  ··· −1/3 1/3 −1/3 1/3 ···
1-D Box Filter

After this qualitative introduction, we discuss box filters quantitatively by computing the transfer function. For the sake of simplicity, we start with 1-D filters. The mask of the 1-D box filter is

³R_x = [1/3 1/3 1/3]. (11.10)
Its transfer function is

³r̂_x(k̃) = 1/3 + 2/3 cos(π k̃_x). (11.11)
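As a quick numerical check (a sketch, not part of the text; the function name is illustrative), Eq. (11.11) explains both convolution examples above: it has a zero at the wave number of the wavelength-3 structure and the value −1/3 at the highest wave number.

```python
import numpy as np

# Transfer function of the 3-tap box filter, Eq. (11.11):
#   r_hat(k) = 1/3 + 2/3 * cos(pi * k)
# with k the wave number normalized to the Nyquist limit
# (k = 1 corresponds to the shortest wavelength of 2 pixels).
def box3_transfer(k):
    return 1.0 / 3.0 + 2.0 / 3.0 * np.cos(np.pi * k)

# Zero at k = 2/3: the wavelength-3 pattern vanishes completely.
print(box3_transfer(2.0 / 3.0))   # ~0.0
# Value -1/3 at k = 1: the wavelength-2 pattern is attenuated
# and inverted in sign, as in the example above.
print(box3_transfer(1.0))
# Mean value (k = 0) is preserved.
print(box3_transfer(0.0))
```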
Larger box filters do not show a significant improvement (Fig. 11.1a). On the contrary, the oscillatory behavior is more pronounced and the attenuation is only proportional to the wave number. For large filter masks, the discrete mask with R coefficients comes close to a continuous box function of width R. Therefore the transfer function approximates
Figure 11.2: Transfer functions of two-dimensional box filters shown in a pseudo 3-D plot: a 3 × 3 box filter; b 7 × 7 box filter.
the sinc function:

Rr̂_x(k̃) = sin(π R k̃/2) / (R sin(π k̃/2)) ≈ sin(π R k̃/2) / (π R k̃/2) = sinc(R k̃/2). (11.12)
2-D Box Filter
The 2-D box filter is separable into a horizontal and a vertical 1-D box filter:

³R = ³R_x ∗ ³R_y = 1/9 [1 1 1; 1 1 1; 1 1 1] = 1/3 [1 1 1] ∗ 1/3 [1 1 1]^T. (11.13)
Evaluation

From Eq. (11.13) and Fig. 11.2a, we can conclude that 2-D box filters are also poor lowpass filters. A larger box filter, for example one with a
Figure 11.3: Test of the smoothing with a 5 × 5 (upper right quadrant) and a
From this experience, we can learn an important lesson. We must not rate the properties of a filter operation from its effect on arbitrary images, even if they seem to work correctly. The eye perceives only a qualitative impression; for a quantitative extraction of image features, a quantitative analysis of the filter properties is required. This involves a careful analysis of the transfer function and the application of the filters to carefully designed test images. Now we turn back to the question of what went wrong with the box filter. We might try to design a better smoothing filter directly in the wave
number space. An ideal smoothing filter would cut off all wave numbers above a certain threshold value. We could use this ideal transfer function and compute the filter mask by an inverse Fourier transform. However, we run into two problems, which can be understood without explicit calculations. The inverse Fourier transform of a box function is a sinc function. This means that the coefficients decrease only in proportion to the distance from the center pixel, and we would be forced to work with large filter masks. Furthermore, the filter overshoots at edges.
Fast Computation

Despite all their disadvantages, box filters have one significant advantage: the convolution with a one-dimensional box filter can be computed independently of its size with only three operations, as a recursive filter operation:

g'_m = g'_{m−1} + (1/(2r+1)) (g_{m+r} − g_{m−r−1}). (11.14)

This recursion can be understood by comparing the computations for the convolution at neighboring pixels. When the box mask is moved one position to the right, it contains the same weighting factor for all pixels except for the first and the last pixel. Thus, we can simply take the result of the previous convolution, g'_{m−1}, subtract the first pixel that just moved out of the mask, g_{m−r−1}, and add the gray value of the pixel that just came into the mask, g_{m+r}. In this way, the computation of a box filter does not depend on its size; the number of computations is O(r⁰), with only one addition, one subtraction, and one multiplication per pixel.
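The recursion Eq. (11.14) can be sketched in a few lines of Python. This is an illustrative sketch, not from the text: border pixels are handled here by replicating the edge values, one of several possible choices that the text does not prescribe.

```python
import numpy as np

def box_filter_1d(g, r):
    """Running-mean box filter of size 2r+1, Eq. (11.14):
    g'[m] = g'[m-1] + (g[m+r] - g[m-r-1]) / (2r+1).
    Cost per pixel is independent of r (one addition, one
    subtraction, one multiplication). Borders are handled by
    replicating the edge values (an assumption of this sketch)."""
    g = np.asarray(g, dtype=float)
    padded = np.concatenate(([g[0]] * (r + 1), g, [g[-1]] * r))
    out = np.empty_like(g)
    # initialize the running sum for the first output pixel
    s = padded[1 : 2 * r + 2].sum()
    out[0] = s / (2 * r + 1)
    for m in range(1, len(g)):
        # Eq. (11.14): add the pixel entering the mask,
        # subtract the pixel leaving it
        s += padded[m + 2 * r + 1] - padded[m]
        out[m] = s / (2 * r + 1)
    return out

# Reproduces the edge example: [0 0 1 1] -> [0 1/3 2/3 1]
print(box_filter_1d([0, 0, 1, 1], 1))
```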
Binomial Filter Basics

From our experience with box filters, we conclude that the design of filters is a difficult optimization problem. If we choose a small rectangular filter mask, we get a poor transfer function. If we start with an ideal transfer function, we get large filter masks and overshooting filter responses. The reason for this behavior is the fundamental relation between the smoothness and compactness of Fourier transform pairs (Section 2.3.5). An edge constitutes a discontinuity, and a discontinuity leads to an impulse in the first derivative. The Fourier transform of an impulse extends evenly over the whole Fourier domain.
Considering this basic fact, we can design better smoothing filters. One condition is that the filter masks should gradually approach zero.
1-D Binomial Filter

Here we introduce a class of smoothing filters that meets this criterion and can be computed very efficiently. Furthermore, these filters are an excellent example of how more complex filters can be built from simple components. The simplest and most elementary smoothing mask we can think of is

B = 1/2 [1 1]. (11.15)

It averages the gray values of two neighboring pixels. We can apply this mask p times in a row to the same image. This corresponds to the filter mask

(1/2^p) [1 1] ∗ [1 1] ∗ ··· ∗ [1 1]  (p times), (11.16)

or, written as an operator equation,

B^p = B B ··· B  (p times). (11.17)

Some examples of the resulting filter masks are:

B² = 1/4 [1 2 1]
B³ = 1/8 [1 3 3 1]
B⁴ = 1/16 [1 4 6 4 1]
B⁸ = 1/256 [1 8 28 56 70 56 28 8 1]. (11.18)
Because of symmetry, only the odd-sized filter masks are of interest. In order to perform a convolution with the asymmetric mask 1/2 [1 1] correctly, we store the result alternately in the right and left pixel. The masks contain the values of the discrete binomial distribution. Actually, the iterative composition of the mask by consecutive convolution with the 1/2 [1 1] mask is equivalent to the computation scheme of
Figure 11.4: Test of the smoothing with a B⁴ and a B¹⁶ binomial filter using a test image with concentric sinusoidal rings.
Pascal’s triangle:
p    f      mask                       σ²
0    1      1                          0
1    1/2    1 1                        1/4
2    1/4    1 2 1                      1/2
3    1/8    1 3 3 1                    3/4
4    1/16   1 4 6 4 1                  1
5    1/32   1 5 10 10 5 1              5/4
6    1/64   1 6 15 20 15 6 1           3/2
7    1/128  1 7 21 35 35 21 7 1        7/4
8    1/256  1 8 28 56 70 56 28 8 1     2        (11.19)
where p denotes the order of the binomial, f the scaling factor 2^{−p}, and σ² the variance, a measure of the effective width of the mask.
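The table can be reproduced with a small sketch (illustrative, not from the text): repeated convolution with 1/2 [1 1] yields the scaled rows of Pascal's triangle, and the variance of the resulting mask agrees with σ² = p/4.

```python
import numpy as np

def binomial_mask(p):
    """Binomial mask B^p: p-fold convolution of [1/2, 1/2] with
    itself, Eq. (11.16). Yields the rows of Pascal's triangle
    scaled by 2^-p."""
    mask = np.array([1.0])
    for _ in range(p):
        mask = np.convolve(mask, [0.5, 0.5])
    return mask

def mask_variance(mask):
    """Variance (squared effective width) of a mask about its center."""
    n = np.arange(len(mask)) - (len(mask) - 1) / 2.0
    return float(np.sum(mask * n**2))

print(binomial_mask(4))                 # [1 4 6 4 1] / 16
print(mask_variance(binomial_mask(8)))  # 2.0, matching sigma^2 = p/4
```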
The transfer function of the binomial filter B^p is

b̂^p(k̃) = cos^p(π k̃/2) = 1 − (p/8)(π k̃)² + ((3p² − 2p)/384)(π k̃)⁴ + O(k̃⁶). (11.20)

The graphical representation of the transfer function in Fig. 11.1b reveals that binomial filters are much better smoothing filters than box filters.
2-D Binomial Filter

Two-dimensional binomial filters can be composed from a horizontal and a vertical 1-D filter:
B^p = B^p_x B^p_y. (11.21)

The smallest mask of this kind is the 3 × 3 binomial filter (p = 2):

B² = 1/4 [1 2 1] ∗ 1/4 [1 2 1]^T = 1/16 [1 2 1; 2 4 2; 1 2 1]. (11.22)
Figure 11.6: Application of smoothing filters: a original image; b 5 × 5 box filter; c 9 × 9 box filter; d 17 × 17 binomial filter (B¹⁶); e and f a set of recursive filters Eq. (11.36) running in horizontal and vertical directions with e p = 2 and f p = 16.
The transfer function of the 2-D binomial filter B^p can be written with Eq. (11.20) as

b̂^p(k̃) = cos^p(π k̃₁/2) cos^p(π k̃₂/2), (11.23)

and correspondingly for a 3-D filter as

b̂^p(k̃) = cos^p(π k̃₁/2) cos^p(π k̃₂/2) cos^p(π k̃₃/2). (11.24)
Figure 11.7: … median filter (Section 11.7.1).
For an analysis of the anisotropy, we expand the 2-D transfer function in cylindrical coordinates k̃ = [k̃, θ]^T:

b̂^p(k̃, θ) ≈ 1 − (p/8)(π k̃)² + ((2p² − p)/256)(π k̃)⁴ − (p cos 4θ/768)(π k̃)⁴. (11.25)
Evaluation

Figure 11.6b, c shows smoothing with two different binomial filters. We observe that the edges get blurred, and fine structures such as the branches of the tree are lost. Smoothing suppresses noise. Binomial filters can reduce the level of zero-mean Gaussian noise (Section 3.4.2) considerably, but only at the price of blurred details (Fig. 11.7a, c). Binary noise (also called impulse noise), which causes wrong gray values for a few randomly distributed pixels (Fig. 11.7b), for instance due to transmission errors, is suppressed only poorly by linear filters. The images are blurred, but the error caused by the binary noise is not eliminated, only distributed.
Fast Computation
11.5 Filters as Networks‡
The binomial filters we have discussed in this section are built from the simplest elementary operations we can think of: scaling of pixels and the addition of neighboring pixels. For both of these operations we can construct a circuit element that performs the corresponding operation. Figure 11.8 shows a scaler,

Figure 11.8: Elementary circuits for performing discrete filter operations.
an adder, a subtractor, a multiplier, and a shift-register stage. The circuit elements can perform the operations either in analog or in digital form. With these circuit elements, we can build FIR filters as electronic networks in an instructive way. This also opens a new perspective on filter operations.
Actually, we could build a net of these circuits, spanning the whole vector, to compute the binomial smoothing in parallel (Fig. 11.10). Such a net has a number of interesting properties. Each level of the net corresponds to a filtering of the image with the elementary smoothing mask 1/2 [1 1]. Thus we obtain not only the final result but all intermediate smoothing results. The grid points of the individual layers change from regular grid points to intermediate grid points in a natural way. The network model for filter operations is also suitable for illustrating the different approaches to handling the boundary problems of filtering. We could close the net into a ring; this corresponds to a cyclic convolution. Or we could extend the net beyond the edges of the vector, so that we get all the nodes needed to calculate the first and last point. Then we can either fill the grid points in the lowest levels lying outside the vector with zeros or extrapolate them in an appropriate manner from the points within the vector. The extension of filter nets to two dimensions is straightforward for separable filters. The nets are then composed of subnets alternately connecting the pixels in the horizontal and vertical directions. Filter nets are valuable tools for algorithm design. As we have seen, they are especially useful for making efficient use of intermediate results and for getting a clear idea of the boundary problems at the edges of images.
11.6 Efficient Large-Scale Averaging‡
According to Eq. (11.19), the standard deviation of the binomial filter B^p is

σ = √p / 2. (11.26)

Let us consider a smoothing operation over a circle with a radius of about only
Figure 11.10: Circuit net for computing the 1-D binomial smoothing filter B⁴.
11.6.1 Multistep Averaging‡
ing correspondingly. In two dimensions, the following masks could be applied along the diagonals (σ · √2):

B_{x+y} = 1/2 [1 0; 0 1],  B_{x−y} = 1/2 [0 1; 1 0], (11.27)

or, with double step width along the axes (σ · 2) and correspondingly in three dimensions:

B_{2x} = 1/2 [1 0 1],  B_{2z} = 1/2 [1 0 1]_z. (11.28)
The subscripts in these masks denote the step width and coordinate direction. B_{x+y} averages the gray values of two neighboring pixels in the direction of the main diagonal. B_{2x} computes the mean of two pixels at a distance of 2 in the x direction. The standard deviation of these filters is proportional to the distance between the pixels. The most efficient implementations are multistep masks along the axes. They have the additional advantage that, because of separability, the algorithms can be applied to image data of arbitrary dimension. The problem with these filters is that they perform a subsampling. Consequently, they are no longer filters for larger wave numbers. If we take, for
example, the mask B_{2x}, it is equivalent to a B_x mask on a grid with doubled grid constant in the spatial domain. Hence, the reciprocal grid in the wave number space has half the grid width, and the transfer function is periodically replicated once in both directions (Fig. 11.11). Generally, the zero lines
Figure 11.11: Transfer functions of binomial masks applied a along the diagonals and b with double step width in axis directions (B²_{2x}).
of the transfer function of masks with larger step width reflect this reciprocal grid. For convolution with two neighboring pixels in the direction of the two diagonals, the reciprocal grid is turned by 45°. The grid constant of the reciprocal grid is a factor √2 smaller than that of the original grid. Used individually, these filters are not of much help. But we can use them in cascade, starting with directly neighboring pixels. Then the zero lines of the transfer functions, which lie differently for each pixel distance, efficiently force the transfer function close to zero for large wave number ranges. Cascaded multistep binomial filtering leads to a significant performance increase for large-scale smoothing. For normal separable binomial filtering, the number of computations is proportional to σ², i.e., O(σ²). For multistep binomial filtering it depends only logarithmically on σ, i.e., O(ld σ), if a cascade of filter operations with recursive step width doubling is performed:
B^p_{2^{s−1}x} ··· B^p_{8x} B^p_{4x} B^p_{2x} B^p_x  (s times). (11.29)

Such a mask has the standard deviation

σ = √(p (4^s − 1) / 12) (11.30)

and the transfer function

B̂(k̃) = ∏_{s'=0}^{s−1} cos^p(2^{s'−1} π k̃). (11.31)
With the parameter p, we can adjust the degree of isotropy and the degree of residual inhomogeneities in the transfer function. A very efficient implementation is given by using p = 2 (B² = 1/4 [1 2 1] in each direction). However, the
Figure 11.12: Transfer functions of cascaded multistep averaging. The anisotropy is shown in polar coordinates (k̃, θ) as the deviation from the transfer function in the x direction.
residual side peaks at high wave numbers, with maximal amplitudes up to 0.08, are still significant disturbances (Fig. 11.12a, b; Fig. 11.13a, b). With the next larger odd-sized masks (p = 4, B⁴ = 1/16 [1 4 6 4 1] in each direction), these residual side peaks at high wave numbers are suppressed well below 0.005 (Fig. 11.12c, d; Fig. 11.13c, d). This is about the relative resolution of 8-bit images and should therefore be sufficient for most applications. With still larger masks, they could be suppressed even further. Figure 11.14 shows the first four steps of multistep averaging with the B⁴ mask, illustrating how quickly the smoothing reaches large scales.
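A minimal sketch of the cascade Eq. (11.29) with p = 2, assuming periodic boundary conditions for brevity (an assumption of this sketch, not prescribed by the text; function names are illustrative):

```python
import numpy as np

def b2_step(g, step):
    """Apply B^2 = 1/4 [1 2 1] with the given step width (zeros
    inserted between the coefficients), using periodic boundaries."""
    return 0.25 * (np.roll(g, -step) + 2.0 * g + np.roll(g, step))

def multistep_smooth(g, s):
    """Cascaded multistep averaging, Eq. (11.29): step widths
    1, 2, 4, ..., doubling s times. The cost grows linearly with s,
    i.e., only logarithmically with the achieved standard deviation."""
    g = np.asarray(g, dtype=float)
    for s_prime in range(s):
        g = b2_step(g, 2 ** s_prime)
    return g
```

With periodic boundaries, the mean gray value is preserved exactly, as required of an averaging operator.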
11.6.2 Multigrid Averaging‡
Figure 11.13: Cascaded multistep averaging with step width doubling according to Eq. (11.29), applied to the ring test pattern: a B2B2, b B2B2B2, c B4B4, and
Multigrid smoothing makes the number of computations essentially independent of the standard deviation of the smoothing mask. We again consider a sequence of 1-D binomial filters, where each smoothing step is followed by subsampling by a factor of two:

B^p_{x↓2} ··· B^p_{x↓2} B^p_{x↓2}  (s times).
If one smoothing step takes p operations per pixel, the operator sequence takes

p ∑_{s'=1}^{s} 1/2^{s'−1} = 2p (1 − 1/2^s) < 2p

operations, since the number of pixels is halved with each step.
Figure 11.14: Cascaded multistep averaging with step width doubling according to Eq. (11.29), applied to image Fig. 11.6a with a one, b two, c three, and d four steps using the B⁴ filter.
As for the multistep approach, Eq. (11.30), the standard deviation of the operator sequence is

σ = √(p (4^s − 1) / 12). (11.32)
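The multigrid scheme can be sketched as follows (illustrative, not from the text; periodic boundaries are an assumption made for brevity):

```python
import numpy as np

def smooth_downsample(g):
    """One multigrid step: B^2 = 1/4 [1 2 1] smoothing (periodic
    boundaries) followed by subsampling every second pixel."""
    smoothed = 0.25 * (np.roll(g, -1) + 2.0 * g + np.roll(g, 1))
    return smoothed[::2]

def multigrid_smooth(g, s):
    """s smooth-and-subsample steps. Because the signal shrinks by a
    factor of two at each step, the total cost is bounded by twice
    the cost of the first step, independent of the standard deviation
    reached on the original grid."""
    g = np.asarray(g, dtype=float)
    for _ in range(s):
        g = smooth_downsample(g)
    return g
```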
11.6.3 Recursive Averaging‡

A totally different approach to large-scale averaging is recursive filtering, introduced in Section 4.3. The recursion essentially gives a convolution filter an infinite point spread function. The basic advantage of recursive filters is that they can easily be "tuned", as we demonstrated with the simple lowpass filter in Section 4.3.5. In this section, the focus is on the design of averaging filters that meet the criteria discussed in Section 11.2, especially the zero-shift property (Section 11.2.1), which is not met by causal recursive filters.
Basically, recursive filters work like non-recursive filters. In principle, we can replace any recursive filter by a non-recursive filter whose mask is identical to the point spread function of the recursive filter. The real problem is the design of the recursive filter, i.e., the determination of the filter coefficients for a desired transfer function. While the theory of one-dimensional recursive filters is standard knowledge in digital signal processing (see, for example, Oppenheim and Schafer [133]), the design of two-dimensional filters is still not adequately understood. The main reason is the fundamental difference between the mathematics of one- and higher-dimensional z-transforms and polynomials [112]. Despite these theoretical problems, recursive filters can be applied successfully in digital image processing. In order to avoid the filter design problems, we will use only very simple recursive filters, which are easily understood, and compose them into more complex filters, similar to the way we constructed the class of binomial filters from the elementary smoothing mask 1/2 [1 1]. In this way we obtain a class of recursive filters that may not be optimal from the point of view of filter design but are useful in practical applications. In the first composition step, we combine causal recursive filters into symmetric filters. We start with a general one-dimensional recursive filter with the transfer function

⁺Â(k̃) = a(k̃) + i b(k̃). (11.33)
The transfer function of the same filter running in the backward direction is

⁻Â(k̃) = a(k̃) − i b(k̃). (11.34)

Only the sign of the imaginary part of the transfer function changes, as it corresponds to the odd part of the point spread function, while the real part corresponds to the even part. We now have two possible ways to combine the forward and backward running filters into symmetric filters useful for averaging:

addition:       Â = (⁺Â + ⁻Â)/2 = a(k̃),
multiplication: Â = ⁺Â ⁻Â = a²(k̃) + b²(k̃). (11.35)
Both techniques yield real transfer functions and thus even filters with zero shift, suitable for averaging. As the elementary recursive smoothing filter, we use the two-element lowpass filter we have already studied in Section 4.3.5:

±A_x:  g'_{m,n} = g'_{m,n∓1} + α (g_{m,n} − g'_{m,n∓1})  with 0 ≤ α ≤ 1 (11.36)

with the impulse response

(±A_x)_{m,n} = α (1 − α)^n  for n ≥ 0, m = 0, and zero otherwise. (11.37)

Figure 11.15: Transfer function of the recursive lowpass filter Eq. (11.39) for different values of α = 1/2, 1/4, 1/8, and 1/16.

The transfer function of this filter can easily be calculated by taking into account that the Fourier transform of Eq. (11.37) forms a geometric series:

±Â_x(k̃) = α / (1 − (1 − α) exp(∓iπ k̃)). (11.38)

This relation is valid only approximately, since we broke off the infinite sum in Eq. (11.37) at n = N − 1 because of the limited size of the image. Consecutive filtering with a left and a right running filter corresponds to a multiplication of the transfer functions:
Â_x(k̃) = ⁺Â_x(k̃) ⁻Â_x(k̃) ≈ α² / (α² + 2(1 − α)(1 − cos(π k̃))). (11.39)
The transfer function Eq. (11.39) decreases monotonically and has a half-value wave number k̃_c (Â_x(k̃_c) = 1/2) of

k̃_c = (2/π) arcsin(α / (2√(1 − α))) ≈ α/π, (11.41)

where the last approximation is valid only for α ≪ 1. At the highest wave number, k̃ = 1, the transfer function has dropped to

Â_x(1) = α² / (4(1 − α) + α²). (11.42)

In contrast to binomial filters, it is not exactly zero, but it is sufficiently small even for small values of α (Fig. 11.15). Two-dimensional filters can be composed from one-dimensional filters running in the horizontal and vertical directions:

A = A_x A_y = ⁺A_x ⁻A_x ⁺A_y ⁻A_y. (11.43)
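A forward-backward application of Eq. (11.36) can be sketched in a few lines (illustrative, not from the text); multiplying the causal and anticausal passes realizes the zero-shift filter of Eq. (11.39):

```python
import numpy as np

def recursive_lowpass(g, alpha):
    """Zero-shift recursive averaging: one causal and one anticausal
    pass of g'[n] = g'[n-1] + alpha * (g[n] - g'[n-1]), Eq. (11.36).
    The degree of smoothing is selected by alpha alone; the cost per
    pixel does not depend on it."""
    g = np.asarray(g, dtype=float)
    out = g.copy()
    # forward (causal) pass
    for n in range(1, len(out)):
        out[n] = out[n - 1] + alpha * (out[n] - out[n - 1])
    # backward (anticausal) pass
    for n in range(len(out) - 2, -1, -1):
        out[n] = out[n + 1] + alpha * (out[n] - out[n + 1])
    return out
```

Smaller values of α give stronger smoothing, in line with the half-value wave number Eq. (11.41).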
Figure 11.16: Transfer functions of two-dimensional recursive lowpass filters: a A with α = 1/2; b anisotropy of a: Â(k̃, θ) − Â(k̃, π/4); c A′ with α = 1/2; and d anisotropy of c: Â′(k̃, θ) − Â′(k̃, 0).
This filter (Fig. 11.16) is less isotropic than binomial filters (Fig. 11.5): averaging along the coordinate directions is considerably weaker than along the other directions. However, recursive filters have the big advantage that the computational effort does not depend on the degree of averaging. With the simple first-order recursive filter, we can select the degree of averaging with an appropriate choice of the filter parameter α (Eq. (11.41)). The isotropy of recursive filters can be further improved by running additional filters along the diagonals:

A′ = A_x A_y A_{x−y} A_{x+y}. (11.44)
For α = 2^{−l}, the recursion Eq. (11.36) can be computed with shift operations instead of a multiplication:

g'_{m,n} = (g'_{m,n∓1} · 2^l − g'_{m,n∓1} + g_{m,n}) · 2^{−l},  l > 1. (11.45)
Nonlinear Averaging

The linear averaging filters discussed so far blur edges. Even worse, if the mask of the smoothing operator crosses an object edge, it contains pixels from both the object and the background, giving a meaningless result. The same is true if averages are computed when a certain number of pixels in an image show erroneous values, e.g., because of a transmission error. The question, therefore, is whether it is possible to perform an averaging that does not cross object boundaries or that ignores certain pixels. Such a procedure can, of course, only be applied if we have already detected the edges or the distorted pixels. In this section, we discuss three types of nonlinear averaging filters: the classical median filter (Section 11.7.1); weighted averaging, also known as normalized convolution (Section 11.7.2); and steerable averaging (Section 11.7.3), where we control the direction and/or degree of averaging with the local content of the neighborhood.
Median Filter

Linear filters effectively suppress Gaussian noise but perform very poorly in case of binary noise (Fig. 11.7). Using linear filters that weight and sum up, we assume that each pixel carries some useful information. Pixels distorted by transmission errors, however, have lost their original gray value. Linear smoothing does not eliminate this false information but carries it on to neighboring pixels. Thus the appropriate operation to handle such distortions is to detect these pixels and eliminate them. This is exactly what a rank-value filter does (Section 4.4). The pixels within the mask are sorted and one pixel is selected. In particular, the median filter selects the medium value. As binary noise completely changes the gray value, it is very unlikely that the distorted value will be the medium gray value in its neighborhood. In this way, the medium gray value of the neighborhood is used to restore the gray value of the distorted pixel. The following examples illustrate the effect of a 1 × 3 median filter M:

M[··· 1 2 3 7 8 9 ···] = [··· 1 2 3 7 8 9 ···]
M[··· 1 2 102 4 5 6 ···] = [··· 1 2 4 5 5 6 ···]
M[··· 0 0 0 9 9 9 ···] = [··· 0 0 0 9 9 9 ···]

As expected, the median filter eliminates outliers. The two other gray value structures, a monotonically increasing ramp and an edge between two plateaus of constant gray value, are preserved. In this way a median filter effectively eliminates binary noise without significantly blurring the image (Fig. 11.7e). Gaussian noise is eliminated less effectively (Fig. 11.7f).
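The three examples above can be reproduced with a minimal sketch (illustrative, not from the text; in this sketch the border pixels are simply copied):

```python
import numpy as np

def median3(g):
    """1 x 3 median filter; border pixels are left unchanged
    (an assumption of this sketch)."""
    g = np.asarray(g)
    out = g.copy()
    for m in range(1, len(g) - 1):
        out[m] = np.median(g[m - 1 : m + 2])
    return out

print(median3([1, 2, 102, 4, 5, 6]))   # outlier 102 eliminated
print(median3([0, 0, 0, 9, 9, 9]))     # edge preserved
print(median3([1, 2, 3, 7, 8, 9]))     # ramp preserved
```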
The most important deterministic properties of a one-dimensional 2N + 1 median filter can be formulated using the following definitions.

• A constant neighborhood is an area with N + 1 equal gray values.

With these definitions, the deterministic properties of a median filter can be described very compactly:

• Constant neighborhoods and edges are fix points.
• Impulses are eliminated.
The statistical properties of the median filter can be illustrated with an image containing only constant neighborhoods, edges, and impulses. The power spectrum of impulses is flat (white noise). As the median filter eliminates impulses, the power spectrum decreases homogeneously, while the contribution of the edges to a given wave number is not removed. This example also underlines the nonlinear nature of the median filter.
Weighted Averaging
When N data points g_n with different statistical errors σ_n are averaged, each point is weighted with the reciprocal of its variance:

ḡ = ( ∑_{n=1}^{N} g_n/σ_n² ) / ( ∑_{n=1}^{N} 1/σ_n² ), (11.46)

while the standard deviation of the mean is

σ_ḡ = ( ∑_{n=1}^{N} 1/σ_n² )^{−1/2}. (11.47)
The lower the statistical error of an individual data point, the higher is its weight in the computation of the mean in Eq. (11.46).
Figure 11.17: Weighted averaging using the edge strength to hinder smoothing at edges: a image from Fig. 11.6a with added Gaussian noise; b weighting image after 5 convolutions; image after c two and d five normalized convolutions using a B² binomial smoothing mask (compare with Fig. 11.7).
The application of weighted averaging to image processing is known as normalized convolution [57]. The averaging is now extended to a local neighborhood. Each pixel enters the convolution sum with an individual weighting factor. Thus, normalized convolution requires two images: the image to be processed and an image with the weighting factors. By analogy to Eqs. (11.46) and (11.47), normalized convolution is defined by

G′ = ( H ∗ (W · G) ) / ( H ∗ W ), (11.48)

where G is the image to be processed, W the image with the weighting factors, and H a smoothing mask.
In this sense, normalized convolution is nothing complicated or special. It is just the adequate consideration of pixels with spatially variable statistical errors. "Standard" convolution can be regarded as a special case of normalized convolution in which all pixels are assigned the same weighting factor, so that no weighting image is required since the factor remains constant. The flexibility of normalized convolution lies in the choice of the weighting image. The weighting image need not be associated with an error; it can also be used to select and/or amplify pixels with certain features. In this way, normalized convolution becomes a versatile nonlinear operator. As an example, Fig. 11.17 shows a noisy image that is filtered by normalized convolution using a weighting image that hinders smoothing at edges.
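Eq. (11.48) can be sketched in one dimension (illustrative, not from the text; zero padding at the borders and the small guard against a vanishing total weight are assumptions of this sketch):

```python
import numpy as np

def normalized_convolution(g, w, h):
    """Normalized convolution, Eq. (11.48): G' = (H * (W . G)) / (H * W).
    g: signal, w: per-pixel weights (e.g., low at edges or at bad
    pixels), h: smoothing mask. 1-D version with zero-padded borders."""
    num = np.convolve(w * g, h, mode="same")
    den = np.convolve(w, h, mode="same")
    return num / np.maximum(den, 1e-12)   # guard against zero weight

# A pixel with weight 0 is ignored; its value is interpolated from
# its trusted neighbors instead of being smeared into them.
g = np.array([1.0, 1.0, 99.0, 1.0, 1.0])   # one distorted pixel
w = np.array([1.0, 1.0, 0.0, 1.0, 1.0])    # flagged as unreliable
h = np.array([1.0, 2.0, 1.0]) / 4.0        # B^2 smoothing mask
print(normalized_convolution(g, w, h))     # -> [1. 1. 1. 1. 1.]
```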
Steerable Averaging

The idea of steerable filters is to make the convolution mask dependent on the local image structure. This is a general concept not restricted to averaging; it can be applied to any type of convolution process. The basic idea of steerable filters is as follows. A steerable filter has some freely adjustable parameters that control the filtering. These could be various properties such as the degree of smoothing, the direction of smoothing, or both. It is easy to write down a filter mask with adjustable parameters. We have done this already for recursive filters in Eq. (11.36), where the parameter α determines the degree of smoothing. However, it is not computationally efficient to convolve an image with masks that differ at every pixel, because we can then no longer take advantage of the separability of the masks. An alternative approach is to seek a base of a few filters, use these filters to compute a set of filtered images, and then interpolate between these images with adjustable parameters. In operator notation this reads
H(α) = ∑_{p=1}^{P} f_p(α) H_p, (11.49)

where H_p are the P base filters and f_p(α) the interpolation functions.
As an example, a directional smoothing filter is to be constructed with the following transfer function:

ĥ_{θ₀}(k, θ) = 1 − f(k) cos²(θ − θ₀). (11.50)
Figure 11.18: Transfer functions of the three base filters for directional smoothing according to Eq. (11.54).
In this equation, cylindrical coordinates (k, θ) are used in the Fourier domain. The filter in Eq. (11.50) is a polar separable filter with an arbitrary radial function f(k). This radial component provides an arbitrary isotropic smoothing. We separate the cosine function in Eq. (11.50) into trigonometric functions that depend only on either θ or the steering angle θ₀ and obtain
ĥ_{θ₀}(k, θ) = 1 − (1/2) f(k) [1 + cos(2θ₀) cos(2θ) + sin(2θ₀) sin(2θ)] (11.51)

with the base filters

ĥ₁ = 1 − (1/2) f(k),
ĥ₂ = −(1/2) f(k) cos(2θ),
ĥ₃ = −(1/2) f(k) sin(2θ), (11.52)

and the interpolation functions

f₁(θ₀) = 1,  f₂(θ₀) = cos(2θ₀),  f₃(θ₀) = sin(2θ₀). (11.53)

Thus three base filters are required. The filter ĥ₁ is an isotropic smoothing filter; the other two are directional filters. Although the equations
Figure 11.19: Transfer functions for the steerable smoothing filter according to Eq. (11.51) using the base filters Eq. (11.54): smoothing in 0°, 22.5°, and 45° to the x axis.
for this family of steerable directional smoothing filters are simple, it is not easy to implement polar separable base filters, because they are not Cartesian separable and thus require careful optimization.
A simple discrete implementation approximates the three base filters with small masks whose transfer functions are

ĥ₁(k̃) = 1/2 + (1/2) cos²(π k̃₁/2) cos²(π k̃₂/2) ≈ 1 − (π² k̃²)/8,
ĥ₂(k̃) = (1/4) [cos(π k̃₁) − cos(π k̃₂)] ≈ −(π² k̃²/8) cos(2θ),
ĥ₃(k̃) ≈ −(π² k̃²/8) sin(2θ). (11.54)

From Fig. 11.19 it is obvious that this simple implementation works well up to moderate wave numbers. At high wave numbers (k̃ > 0.5), the directional filter no longer works very well.

11.8 Averaging in Multichannel Images‡
The simplest way to average a multichannel image is to apply the smoothing operator H separately to each of the P components of the multicomponent image:

G′ = [H ∗ G₁, H ∗ G₂, ..., H ∗ G_P]^T. (11.55)
With a common weighting image W, normalized convolution, Eq. (11.48), can likewise be applied to each component:

G′ = [ (H ∗ (W · G₁))/(H ∗ W), ..., (H ∗ (W · G_P))/(H ∗ W) ]^T,  W′ = H ∗ W. (11.56)

A special case of multicomponent images is given when they represent features that can be mapped to angular coordinates. Typically, such features include the direction of an edge or the phase of a periodic signal. Features of this kind are cyclic and cannot be represented well in Cartesian coordinates; in particular, they cannot be averaged in this representation. Imagine an angle of +175° and one of −179°. The mean angle is 178°, since −179° = 360° − 179° = 181° is close to 175°, and not (175° − 179°)/2 = −2°. Circular features such as angles are therefore better represented as unit vectors of the form n̄_θ = [cos θ, sin θ]^T. In this representation, they can be averaged correctly, as illustrated in Fig. 11.20. The average vector points in the correct direction, but its magnitude is generally smaller than 1:

(n̄_{θ₁} + n̄_{θ₂})/2 = [cos((θ₁ + θ₂)/2); sin((θ₁ + θ₂)/2)] cos((θ₂ − θ₁)/2). (11.57)
For an angle difference of 180°, the average vector has zero magnitude. The decrease of the magnitude of the average vector has an intuitive interpretation: the larger the scatter of the angles, the lower the certainty of the average value. Indeed, if all directions are equally probable, the sum vector vanishes, while it grows in length when the scatter is low.

These considerations can be extended to the averaging of circular features. To this end, we set the magnitude of the vector equal to the certainty of the quantity that is represented by the angle of the vector. In this way, short vectors contribute little and long vectors contribute much to the averaging procedure. This is a very attractive form of weighted convolution since, in contrast to normalized convolution (Section 11.7.2), it requires no time-consuming division. Of course, it works only with features that can be mapped adequately to an angle.
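The unit-vector averaging described above is easy to check numerically. A minimal sketch (plain NumPy, angles in degrees); the optional weights implement the certainty-weighted variant:

```python
import numpy as np

def mean_angle(angles_deg, weights=None):
    # Represent each angle as a unit vector [cos, sin] and average.
    # Optional weights realize the certainty-weighted averaging
    # (vector length = certainty) described in the text.
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    w = np.ones_like(a) if weights is None else np.asarray(weights, float)
    x = np.sum(w * np.cos(a)) / np.sum(w)
    y = np.sum(w * np.sin(a)) / np.sum(w)
    return np.degrees(np.arctan2(y, x)), np.hypot(x, y)

ang, mag = mean_angle([175.0, -179.0])
# ang is 178 degrees; mag equals cos(3 deg), slightly below one
```

This reproduces the example from the text: the mean of +175° and −179° is 178°, and the magnitude of the average vector is cos(3°) ≈ 0.9986 in agreement with Eq. (11.57).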
Figure 11.20: Averaging of a cyclic quantity represented as a normal vector.
Finally, we consider a measure to characterize the scatter in the direction of vectors. Figure 11.20 illustrates that for low scatter the magnitude of the sum vector is only slightly lower than the sum of the magnitudes of the vectors. Thus, we can define an angular coherence measure as

c = |H ∗ G| / (H ∗ |G|),

where H is an arbitrary smoothing convolution operator. This measure is one if all vectors in the neighborhood covered by the convolution operator point in the same direction, and zero if they are equally distributed. This definition of a coherence measure works not only in two-dimensional but also in higher-dimensional vector spaces. In one-dimensional vector spaces (scalar images), the coherence measure is, of course, always one.
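A minimal 1-D sketch of this coherence measure; the moving average used as the smoothing operator H and the example vector fields are illustrative assumptions:

```python
import numpy as np

def smooth(a, n=3):
    # moving average as a stand-in for the smoothing operator H
    return np.convolve(a, np.ones(n) / n, mode="same")

def coherence(vx, vy):
    # |H * G| / (H * |G|): one for parallel vectors, near zero for
    # vectors that are evenly spread over all directions
    num = np.hypot(smooth(vx), smooth(vy))
    den = smooth(np.hypot(vx, vy))
    return num / np.maximum(den, 1e-12)

# parallel unit vectors -> coherence one (away from the borders)
c_par = coherence(np.ones(7), np.zeros(7))
# three directions 120 degrees apart -> the vectors cancel
phase = 2 * np.pi * (np.arange(9) % 3) / 3
c_scat = coherence(np.cos(phase), np.sin(phase))
```

In the interior of the arrays, the parallel field gives a coherence of one, while each window over the three-phase field sums to the zero vector, so the coherence drops to zero.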
11.9 Further Readings‡

The articles of Simonds [171] and Wells [197] discuss fast algorithms for large Gaussian kernels. Readers with an interest in the general principles of efficient algorithms are referred to the textbooks of Aho et al. [3] or Sedgewick [167]. Blahut [10] deals with fast algorithms for digital signal processing. Classical filter design techniques, especially for IIR filters, are discussed in the standard textbooks on signal processing, e.g., Proakis and Manolakis [144] or Oppenheim and Schafer [133]. Lim [112] specifically deals with the design of 2-D IIR filters.

A detailed description of the deterministic and statistical properties of median filters can be found in Huang [75] or Arce et al. [5]. They are also discussed in detail in the monograph on nonlinear digital filters by Pitas and Venetsanopoulos [140]. The monograph of Granlund and Knutsson [57] on signal processing for computer vision also deals with weighted averaging (normalized convolution, Section 11.7.2). Steerable filters (Section 11.7.3) were introduced by the articles of Freeman and Adelson [48] and Simoncelli et al. [170].
12 Edges

12.1 Introduction

The task of edge detection requires neighborhood operators that are sensitive to changes and suppress areas of constant gray values. In this way, a feature image is formed in which those parts of the image appear bright where changes occur, while all other parts remain dark.

Mathematically speaking, an ideal edge is a discontinuity of the spatial gray value function g(x) of the image plane. It is obvious that this is only an abstraction, which often does not match reality. Thus, the first task of edge detection is to find out the properties of the edges contained in the image to be analyzed. Only if we can formulate a model of the edges can we determine how accurately and under what conditions it will be possible to detect an edge and to optimize edge detection.

Edge detection is always based on differentiation in one form or another. In discrete images, differentiation is replaced by discrete differences, which only approximate differentiation. The errors associated with these approximations require careful consideration. They cause effects that are not expected in the first place. The two most serious errors are: anisotropic edge detection, i.e., edges are not detected equally well in all directions, and erroneous estimation of the direction of the edges.

While the definition of edges is obvious in scalar images, different definitions are possible in multicomponent or vectorial images (Section 12.6). An edge might be a feature that shows up in only one component or in all of them. Edge detection also becomes more complex in higher-dimensional images. In three dimensions, for example, volumetric regions are separated by surfaces, and edges become discontinuities in the orientation of surfaces.

Another important question is the reliability of the edge estimates. We do not only want to find an edge but also to know how significant it is. Thus, we need a measure for edge strength.
Closely related to this issue is the question of optimum edge detection. Once edge detectors deliver not only edges but also an objective confidence measure, different edge detectors can be compared to each other, and optimization of edge detection becomes possible.
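The behavior sketched in Fig. 12.1 can be reproduced in a few lines: a noisy step edge, its discrete first difference (an extremum at the edge), and its second difference (a zero crossing at the edge). The noise level and signal length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
edge = np.where(np.arange(256) < 128, 0.0, 1.0)   # ideal step edge
noisy = edge + 0.02 * rng.standard_normal(256)    # add a little noise

d1 = np.diff(noisy)        # first difference: extremum at the edge
d2 = np.diff(noisy, n=2)   # second difference: zero crossing at the edge
```

With a step height of one and a small noise amplitude, the strongest first difference sits at the step itself; as the noise grows, spurious extrema elsewhere eventually mask it, which is exactly why careful consideration of noise is needed.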
B. Jähne, Digital Image Processing. Copyright © 2002 by Springer-Verlag. ISBN 3-540-67754-2. All rights of reproduction in any form reserved.
Figure 12.1: Noisy 1-D edge and its first and second derivative.