

Color Coding of the 2-D Structure Tensor



In Section 13.2.3 we discussed a color representation of the orientation vector. The question is whether it is also possible to represent the structure tensor adequately as a color image. A symmetric 2-D tensor has three independent pieces of information (Eq. (13.14)), which fit well to the three degrees of freedom available to represent color, for example luminance, hue, and saturation.

A color representation of the structure tensor requires only two slight modifications as compared to the color representation for the orientation vector. First, instead of the length of the orientation vector, the squared magnitude of the gradient is mapped onto the intensity. Second, the coherency measure Eq. (13.15) is used as the saturation. In the color representation for the orientation vector, the saturation is always one. The angle of the orientation vector is still represented as the hue.

In practice, a slight modification of this color representation is useful. The squared magnitude of the gradient shows variations too large to be displayed in the narrow dynamic range of a display screen with only 256


350                                                                                  13 Simple Neighborhoods

 

luminance levels. Therefore, a suitable normalization is required. The basic idea of this normalization is to compare the squared magnitude of the gradient with the noise level. Once the gradient is well above the noise level it is regarded as a significant piece of information. This train of thought suggests the following normalization for the intensity I:

I = (J11 + J22) / ((J11 + J22) + γ σn²),                                 (13.16)

where σn is an estimate of the standard deviation of the noise level. This normalization provides a rapid transition of the luminance from one, when the magnitude of the gradient is larger than σn, to zero when the gradient is smaller than σn. The factor γ is used to optimize the display.
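Equation (13.16) translates directly into code; a minimal NumPy sketch (the default value of γ below is illustrative only, not a recommendation from the text):

```python
import numpy as np

def tensor_intensity(j11, j22, sigma_n, gamma=2.0):
    """Normalized intensity according to Eq. (13.16).

    j11, j22 : diagonal components of the structure tensor
    sigma_n  : estimated standard deviation of the noise
    gamma    : display tuning factor (illustrative default)
    """
    trace = j11 + j22
    return trace / (trace + gamma * sigma_n**2)
```

Gradients well above the noise level map to intensities near one; gradients buried in the noise map to values near zero.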

 

Implementation

The structure tensor (Section 13.3.1) or the inertia tensor (Section 13.3.7) can be computed straightforwardly as a combination of linear convolution and nonlinear point operations. The partial derivatives in Eqs. (13.8) and (13.25) are approximated by discrete derivative operators. The integration weighted with the window function is replaced by a convolution with a smoothing filter which has the shape of the window function. If we denote the discrete partial derivative operator with respect to the coordinate p by Dp and the (isotropic) smoothing operator by B, the local structure of a gray value image can be computed with the structure tensor operator

Jpq = B(Dp · Dq).                                         (13.17)

The equation is written in an operator notation. Pixelwise multiplication is denoted by · to distinguish it from successive application of convolution operators. Equation (13.17) says, in words, that the pq component of the tensor is computed by convolving the image independently with Dp and Dq, multiplying the two images pixelwise, and smoothing the resulting image with B.

These operators are valid in images of any dimension W ≥ 2. In a W-dimensional image, the structure tensor has W(W + 1)/2 independent components, hence 3 in 2-D, 6 in 3-D, and 10 in 4-D images. These components are best stored in a multichannel image with W(W + 1)/2 components.

The smoothing operations consume the largest number of operations. Therefore, a fast implementation must, in the first place, apply a fast smoothing algorithm. A fast algorithm can be established based on the general observation that higher-order features always show a lower resolution than the features they are computed from. This means that


13.3 First-Order Tensor Representation†                                                         351

 

the structure tensor can be stored on a coarser grid and thus in a smaller image. A convenient and appropriate subsampling rate is to reduce the scale by a factor of two by storing only every second pixel in every second row.

These procedures lead us in a natural way to multigrid data structures which are discussed in detail in Chapter 5. Multistep averaging is discussed in detail in Section 11.6.1.

Storing higher-order features on coarser scales has another significant advantage. Any subsequent processing is sped up simply by the fact that many fewer pixels have to be processed. A linear scale reduction by a factor of two results in a reduction in the number of pixels and the number of computations by a factor of 4 in two and 8 in three dimensions.
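The subsampling scheme itself is a one-liner; a sketch:

```python
import numpy as np

component = np.random.rand(512, 512)   # one smoothed tensor component at full resolution
coarse = component[::2, ::2]           # keep only every second pixel in every second row
print(component.size // coarse.size)   # number of pixels is reduced by a factor of 4
```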

Figure 13.5 illustrates all steps to compute the structure tensor and derived quantities using the ring test pattern. This test pattern is particularly suitable for orientation analysis since it contains all kinds of orientations and wave numbers in one image.

The accuracy of the orientation angle strongly depends on the implementation of the derivative filters. The straightforward implementation of the algorithm using the standard derivative filter mask 1/2 [1 0 −1] (Section 12.3.3) or the Sobel operator (Section 12.5.3) results in surprisingly high errors (Fig. 13.6b), with a maximum error in the orientation angle of more than 7° at a wave number of k̃ = 0.7. The error depends

on both the wave number and the orientation of the local structure. For orientation angles in the direction of axes and diagonals, the error vanishes. The high error and the structure of the error map result from the transfer function of the derivative filter. The transfer function shows a significant deviation from the transfer function of an ideal derivative filter at high wave numbers (Section 12.2). According to Eq. (13.12), the orientation angle depends on the ratio of derivatives. Along the axes, one of the derivatives is zero and, thus, no error occurs. Along the diagonals, the derivatives in the x and y directions are the same. Consequently, the error in both cancels in the ratio of the derivatives as well.

The error in the orientation angle can be significantly suppressed if better derivative filters are used. Figure 13.6 shows the error in the orientation estimate for two examples: the optimized Sobel operator (Section 12.5.5) and the least-squares optimized operator (Section 12.3.5).

The little extra effort in optimizing the derivative filters thus pays off in an accurate orientation estimate. A residual angle error of less than 0.5° is sufficient for almost all applications. The various derivative filters discussed in Sections 12.3 and 12.5 give the freedom to balance computational effort with accuracy.
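For reference, the orientation angle follows from the tensor components via the doubled-angle formula θ = ½ arctan2(2 J12, J11 − J22). A quick numeric sketch with crude central differences (not the optimized filters discussed above) on a pattern oriented at 45°:

```python
import numpy as np

y, x = np.mgrid[0:128, 0:128]
g = np.cos(0.2 * (x + y))                 # structure oriented at 45 degrees

gy, gx = np.gradient(g)                   # crude central-difference derivatives
j11 = (gx * gx).mean()                    # global averaging plays the role of B here
j22 = (gy * gy).mean()
j12 = (gx * gy).mean()

theta = 0.5 * np.arctan2(2 * j12, j11 - j22)
print(np.degrees(theta))                  # close to 45
```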

Figure 13.5: Steps to compute the structure tensor: a original ring test pattern; b horizontal derivation Dx; c vertical derivation Dy; d–f averaged components of the structure tensor J11 = B(Dx · Dx), J22 = B(Dy · Dy), J12 = B(Dx · Dy); g squared magnitude of the gradient J11 + J22; h x component of the orientation vector J11 − J22; i y component of the orientation vector 2J12; j orientation angle from [−π/2, π/2] mapped to the gray scale interval [0, 255].

Figure 13.6: Systematic errors of the orientation angle estimate using different derivative operators: a original ring test pattern with a maximum normalized wave number k̃ = 0.7; error maps for b the Sobel operator (angle range ±7° in 16 discrete steps), c the optimized Sobel operator, and d the least-squares optimized operator (angle range ±0.7° in 16 discrete steps) with r = 3.

An important property of any image processing algorithm is its robustness. This term denotes the sensitivity of an algorithm to noise. Two questions are important. First, how large is the error of the estimated features in noisy images? To answer this question, the laws of statistics are used to study error propagation. In this context, noise makes the estimates only uncertain but not erroneous. The mean, if we make a sufficient number of estimates, is still correct. However, a second question arises. In noisy images an operator can also give results that are biased, i.e., the mean can show a significant deviation from the correct value. In the worst case, an algorithm can even become unstable and deliver meaningless results.



 


Figure 13.7: Orientation analysis with a noisy ring test pattern using the optimized Sobel operator: ring pattern with amplitude 50, standard deviation of the normally distributed noise a 15 and b 50; c and d radial cross sections of the coherency measure for standard deviations of the noise level of 1.5 and 5, and of 15 and 50, respectively; e and f histograms of the angle error for the same conditions.

 

Figure 13.7 demonstrates that the estimation of orientation is also remarkably robust. Even with a low signal-to-noise ratio, the orientation estimate is still correct if a suitable derivative operator is used. With increasing noise level, the coherency (Section 13.3.4) decreases and the statistical error of the orientation angle estimate increases (Fig. 13.7).



 

13.3.7 The Inertia Tensor‡

In this section, we discuss an alternative approach to describe the local structure in images. As a starting point, we consider what an ideally oriented gray value structure (Eq. (13.1)) looks like in the wave number domain. We can compute the Fourier transform of Eq. (13.1) more readily if we rotate the x1 axis in the direction of n̄. Then the gray value function is constant in the x2 direction. Consequently, the Fourier transform reduces to a δ line in the direction of n̄ (R5).

It seems promising to determine local orientation in the Fourier domain, as all we have to compute is the orientation of the line on which the spectral densities are non-zero. Bigün and Granlund [8] devised the following procedure:

• Use a window function to select a small local neighborhood from an image.

• Fourier transform the windowed image. The smaller the selected window, the more blurred the spectrum will be (uncertainty relation, Theorem 7, p. 55). This means that even with an ideal local orientation we will obtain a rather band-shaped distribution of the spectral energy.

• Determine local orientation by fitting a straight line to the spectral density distribution. This yields the angle of the local orientation from the slope of the line.

The critical step of this procedure is fitting a straight line to the spectral densities in the Fourier domain. We cannot solve this problem exactly, as it is generally overdetermined, but can only minimize a measure of error. A standard error measure is the square of the magnitude of the vector (L2 norm; see Eq. (2.74) in Section 2.4.1). When fitting a straight line, we minimize the sum of the squares of the distances of the data points to the line:

∫_{−∞}^{∞} d²(k, n̄) |ĝ(k)|² d^W k → minimum.                                 (13.18)

The distance function is abbreviated using d(k, n̄). The integral runs over the whole wave number space; the wave numbers are weighted with the spectral density |ĝ(k)|². Equation (13.18) is not restricted to two dimensions, but is generally valid for local orientation or linear symmetry in a W-dimensional space.

The distance vector d can be inferred from Fig. 13.8 to be

d = k − (k^T n̄) n̄.                                              (13.19)

The square of the distance is then given by

|d|² = |k − (k^T n̄) n̄|² = |k|² − (k^T n̄)².                                     (13.20)

In order to express the distance more clearly as a function of the vector n̄, we rewrite it in the following manner:

|d|² = n̄^T (I (k^T k) − (k k^T)) n̄,                                       (13.21)

where I is the unit diagonal matrix. Substituting this expression into Eq. (13.18) we obtain

n̄^T J′ n̄ → minimum,                                           (13.22)



 

 

Figure 13.8: Distance of a point in the wave number space from the line in the direction of the unit vector n̄.

 

where J′ is a symmetric tensor with the diagonal elements

J′_pp = ∫_{−∞}^{∞} Σ_{q≠p} k_q² |ĝ(k)|² d^W k                                    (13.23)

and the off-diagonal elements

J′_pq = − ∫_{−∞}^{∞} k_p k_q |ĝ(k)|² d^W k,             p ≠ q.                        (13.24)

The tensor J′ is analogous to a well-known physical quantity, the inertia tensor. If we replace the wave number coordinates by space coordinates and the spectral density |ĝ(k)|² by the specific density ρ, Eqs. (13.18) and (13.22) constitute the equation to compute the inertia of a rotary body rotating around the n̄ axis.
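The algebra of Eqs. (13.19)–(13.21) is easy to verify numerically; a minimal sketch with a random wave number vector and a random unit direction:

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=2)                    # a wave number vector
n = rng.normal(size=2)
n /= np.linalg.norm(n)                    # unit direction vector n

d = k - (k @ n) * n                       # distance vector, Eq. (13.19)
lhs = d @ d                               # squared distance |d|^2, Eq. (13.20)
rhs = n @ (np.eye(2) * (k @ k) - np.outer(k, k)) @ n   # quadratic form, Eq. (13.21)
print(np.isclose(lhs, rhs))               # True
```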

With this analogy, we can reformulate the problem of determining local orientation. We must find the axis about which the rotary body, formed from the spectral density in Fourier space, rotates with minimum inertia. This body might have different shapes. We can relate its shape to the different solutions we get for the eigenvalues of the inertia tensor and thus for the solution of the local orientation problem (Table 13.3).

We derived the inertia tensor approach in the Fourier domain. Now we will show how to compute the coefficients of the inertia tensor in the space domain.

The integrals in Eqs. (13.23) and (13.24) contain terms of the form

k_q² |ĝ(k)|² = |i k_q ĝ(k)|²

and

k_p k_q |ĝ(k)|² = i k_p ĝ(k) [i k_q ĝ(k)]*.

Integrals over these terms are inner or scalar products of the functions i k_p ĝ(k). Because the inner product is preserved under the Fourier transform (R4), we can compute the corresponding integrals in the spatial domain as well. Multiplication of ĝ(k) with i k_p in the wave number domain corresponds to performing



 

 

Table 13.3: Classification of the eigenvalues of the inertia tensor.

Condition                          Explanation

Ideal local orientation            The rotary body is a line. For a rotation around this line, the inertia vanishes. Consequently, the eigenvector to the eigenvalue zero coincides with the direction of the line. The other eigenvector is orthogonal to the line, and the corresponding eigenvalue is unequal to zero and gives the rotation axis for maximum inertia.

Isotropic gray value structure     In this case, the rotary body is a kind of flat isotropic disk. A preferred direction does not exist. Both eigenvalues are equal and the inertia is the same for rotations around all axes. We cannot find a minimum.

Constant gray values               The rotary body degenerates to a point at the origin of the wave number space. The inertia is zero for rotation around any axis. Therefore both eigenvalues vanish.

 

the first spatial derivative in the direction of x_p in the space domain:

 


J′_pp(x) = ∫_{−∞}^{∞} w(x − x′) Σ_{q≠p} (∂g/∂x_q)² d^W x′
                                                                        (13.25)
J′_pq(x) = − ∫_{−∞}^{∞} w(x − x′) (∂g/∂x_p)(∂g/∂x_q) d^W x′.

In Eq. (13.25), we already included the weighting with the window function w to select a local neighborhood.

The structure tensor discussed in Section 13.3.1, Eq. (13.8), and the inertia tensor are closely related:

J′ = trace(J) I − J.                                               (13.26)

From this relationship it is evident that both matrices have the same set of eigenvectors. The eigenvalues λ_p are related by

λ_p = Σ_{q=1}^{n} λ_q − λ′_p,         λ′_p = Σ_{q=1}^{n} λ_q − λ_p.                               (13.27)

 

Consequently, we can perform the eigenvalue analysis with either of the two matrices. For the inertia tensor, the direction of local orientation is given by the eigenvector to the minimum eigenvalue, but for the structure tensor it is given by the eigenvector to the maximum eigenvalue.
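Both Eq. (13.26) and the eigenvalue relation Eq. (13.27) can be checked numerically for a random symmetric 2-D tensor; a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=(2, 2))
J = a @ a.T                               # a symmetric, positive semidefinite tensor
Jp = np.trace(J) * np.eye(2) - J          # inertia tensor, Eq. (13.26)

lam, vec = np.linalg.eigh(J)              # ascending eigenvalues of J
lamp, vecp = np.linalg.eigh(Jp)           # ascending eigenvalues of J'

# Eq. (13.27): lam'_p = sum_q lam_q - lam_p, so the largest eigenvalue of J
# pairs with the smallest eigenvalue of J'.
print(np.allclose(lamp, lam.sum() - lam[::-1]))
# The eigenvector to the maximum eigenvalue of J equals (up to sign)
# the eigenvector to the minimum eigenvalue of J'.
print(np.isclose(abs(vec[:, 1] @ vecp[:, 0]), 1.0))
```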



 

13.3.8 Further Equivalent Approaches‡

In their paper on analyzing oriented patterns, Kass and Witkin [90] chose, at first glance, a completely different method. Yet it turns out to be equivalent to the tensor method, as will be shown in the following. They started with the idea of using directional derivative filters by differentiating a difference of Gaussian filter (DoG, Section 12.5.6) (written in operator notation):

R(Θ) = cos Θ · Dx(B1 − B2) + sin Θ · Dy(B1 − B2) = cos Θ · Rx + sin Θ · Ry,

 

where B1 and B2 denote two Gaussian smoothing masks with different variances. The direction in which this directional derivative is maximal in a mean square sense gives the orientation normal to lines of constant gray values. This approach results in the following expression for the variance of the directional derivative:

V (Θ ) = B(R(Θ ) · R(Θ )).                                        (13.28)

The directional derivative is squared and then smoothed by a binomial filter. This equation can also be interpreted as the inertia of an object as a function of the angle. The corresponding inertia tensor has the form

[  B(Ry · Ry)   −B(Rx · Ry) ]
[ −B(Rx · Ry)    B(Rx · Rx) ] .                                      (13.29)

Thus Kass and Witkin’s approach is identical to the general inertia tensor method discussed in Section 13.3.7. They just used a special type of derivative filter.

Without being aware of either the earlier work of Bigün and Granlund [8] or the contemporary work of Knutsson [99], Rao and Schunck [147] and Rao [146] proposed the same structure tensor (denoting it as the moment tensor) as the one we discussed in Section 13.3.1.

 

 

13.4 Local Wave Number and Phase‡

13.4.1 Phase‡

So far in this chapter we have discussed in detail the analysis of simple neighborhoods with respect to their orientation. In this section we proceed with another elementary property of simple neighborhoods. In Chapter 5 we stressed the importance of scale for image processing. Thus we must not only ask in which directions the gray values change. We must also ask how fast the gray values change. This question leads us to the concept of the local wave number.

The key to determining the local wave number is the phase of the signal. As an introduction we discuss a simple example and consider the one-dimensional periodic signal

g(x) = g0 cos(kx).                                             (13.30)

The argument of the cosine function is known as the phase of the periodic signal:

φ (x) = kx.                                                  (13.31)


13.4 Local Wave Number and Phase‡                                                            359

 

Figure 13.9: Application of the Hilbert filter to the ring test pattern: upper left quadrant: in the horizontal direction; lower right quadrant: in the vertical direction.

 

The equation shows that the phase is a linear function of the position and the wave number. Thus we obtain the wave number of the periodic signal by computing the first-order spatial derivative of the phase signal

∂φ(x)/∂x = k.                                              (13.32)

These simple considerations re-emphasize the significant role of the phase in image processing that we discussed already in Section 2.3.6. We will discuss two related approaches for determining the phase of a signal, the Hilbert transform (Section 13.4.2) and the quadrature filter (Section 13.4.5), before we introduce efficient techniques to compute the local wave number from phase gradients.

 

13.4.2 Hilbert Transform and Hilbert Filter‡

In order to explain the principle of computing the phase of a signal, we take again the example of the simple periodic signal from the previous section. We suppose that an operator is available to delay the signal by a phase of 90°. This operator would convert the signal g(x) = g0 cos(kx) into the signal g′(x) = g0 sin(kx), as illustrated in Fig. 13.9. Using both signals, the phase of g(x) can be computed by

φ(g(x)) = arctan(−g′(x)/g(x)).                                    (13.33)


360                                                                                  13 Simple Neighborhoods

 

As only the ratio of g'(x) and g(x) goes into Eq. (13.33), the phase is indeed independent of amplitude. If we take the signs of the two functions g'(x) and g(x) into account, the phase can be computed over the full range of 360°.

Thus all we need to determine the phase of a signal is a linear operator that shifts the phase of a signal by 90°. Such an operator is known as the Hilbert filter H or Hilbert operator H and has the transfer function


 

         ⎧  i    k > 0
ĥ(k) =   ⎨  0    k = 0                                   (13.34)
         ⎩ −i    k < 0.

The magnitude of the transfer function is one, as the amplitude remains unchanged. As the Hilbert filter has a purely imaginary transfer function, it must be of odd symmetry to generate a real-valued signal. Therefore positive wave numbers are shifted by 90° (π/2) and negative wave numbers by −90° (−π/2). A special situation is given for the wave number zero, where the transfer function is also zero. This exception can be illustrated as follows. A signal with wave number zero is a constant. It can be regarded as a cosine function with infinite wavelength sampled at phase zero. Consequently, the Hilbert filtered signal is the corresponding sine function at phase zero, that is, zero.

Because of the discontinuity of the transfer function of the Hilbert filter at the origin, its point spread function is of infinite extent:


h(x) = −1/(πx).                                           (13.35)


The convolution with Eq. (13.35) can be written as

gh(x) = (1/π) ∫_{−∞}^{∞} g(x′)/(x′ − x) dx′.                                          (13.36)

This integral transform is known as the Hilbert transform [116].

Because the convolution mask of the Hilbert filter is infinite, it is impossible to design an exact discrete Hilbert filter for arbitrary signals. This is only possible if we restrict the class of signals to which it is applied. Thus the following approach is taken to design an effective implementation of a Hilbert filter.

First, the filter should precisely shift the phase by π/2. This requirement comes from the fact that we cannot afford an error in the phase, because it includes the position information. A wave-number dependent phase shift would cause wave-number dependent errors. This requirement is met by any convolution kernel of odd symmetry.

Second, the requirement for a magnitude of one can be relaxed if the Hilbert filter is applied to a bandpassed signal, e.g., the Laplace pyramid. Then the Hilbert filter must only show a magnitude of one in the passband range of the bandpass filter used. This approach avoids the discontinuity in the transfer function at the wave number zero and thus results in finite-sized convolution kernels.

Optimized Hilbert filters are generated with the same least-squares techniques used above for interpolation filters (Section 10.6.6) and first-order derivative filters (Section 12.3.5).




 

Figure 13.10: a Transfer functions of a family of least-squares optimized Hilbert operators according to Eq. (13.37) for R = 2, 3, 4, 5 filter coefficients. b Sector of a to better show the deviations from an ideal Hilbert filter. As the filters are symmetric around k̃ = 0.5, only the wave number range from 0 to 0.5 is shown.

 

Because of the odd symmetry of the Hilbert filter, the following formulation is used:

ĥ(k̃) = 2i Σ_{v=1}^{R} h_v sin((2v − 1) π k̃).                                 (13.37)

Note that we have only used sine functions with odd wave numbers. This causes the transfer function to become symmetric around k̃ = 1/2 and leads to a filter mask with alternating zeros:

 

[hR, 0, ···, h2, 0, h1, 0, –h1, 0, –h2, ···, 0, –hR].                       (13.38)

The mask has 4R − 1 coefficients, 2R − 1 of which are zero. Figure 13.10 shows the transfer functions optimized with the least-squares technique for R = 2, 3, 4, 5. The filter with R = 4 (a mask with 15 coefficients)

h = {0.6208, 0.1683, 0.0630, 0.0191},                                   (13.39)

for instance, has an amplitude error of slightly more than 1.0 % in the wave number range [0.16, 0.84] and, by design, no phase error. The convolution with this mask requires 4 multiplications and 7 additions/subtractions.
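A sketch of this filter in use: the coefficients of Eq. (13.39) are arranged into the mask of Eq. (13.38) and convolved with a cosine in the passband; with the sign convention of Eq. (13.34), the output approximates minus a sine:

```python
import numpy as np

# R = 4 Hilbert filter: coefficients from Eq. (13.39), arranged per Eq. (13.38)
h = [0.6208, 0.1683, 0.0630, 0.0191]            # h1 ... h4
mask = np.zeros(15)
for v, hv in enumerate(h, start=1):
    mask[7 - (2 * v - 1)] = hv                  # offset -(2v-1): +hv
    mask[7 + (2 * v - 1)] = -hv                 # offset +(2v-1): -hv

n = np.arange(256)
kt = 0.5                                        # normalized wave number in the passband
g = np.cos(np.pi * kt * n)
gh = np.convolve(g, mask, mode='same')          # approximate Hilbert-filtered signal

# away from the borders, gh is close to -sin (amplitude error below 1 % at kt = 0.5)
err = np.max(np.abs(gh[8:-8] + np.sin(np.pi * kt * n[8:-8])))
print(err < 0.02)                               # True
```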

 

13.4.3 Analytic Signal‡

A real-valued signal and its Hilbert transform can be combined into a complex-valued signal by

ga = g − igh.                                                   (13.40)

This complex-valued signal is denoted as the analytic function or analytic signal. According to Eq. (13.40), the analytic filter has the point spread function


a(x) = δ(x) + i/(πx)                                       (13.41)



 


and the transfer function

         ⎧ 2   k > 0
â(k) =   ⎨ 1   k = 0                                    (13.42)
         ⎩ 0   k < 0.

Thus all negative wave numbers are suppressed. Although the transfer function of the analytic filter is real, it results in a complex signal because it is asymmetric. For a real signal no information is lost by suppressing the negative wave numbers. They can be reconstructed as the Fourier transform of a real signal is Hermitian (Section 2.3.5). The analytic signal can be regarded as just another representation of a real signal with two important properties. The magnitude of the analytic signal gives the local amplitude

|A|² = I · I + H · H,                                              (13.43)

and the argument the local phase

arg(A) = arctan(−H / I),                                           (13.44)

using A, H, and I for the analytic, Hilbert, and identity operators, respectively.

The original signal and its Hilbert transform can be obtained from the analytic signal using Eq. (13.40) by


g(x)  =  (ga(x) + ga∗(x))/2
                                                                   (13.45)
gh(x)  =  i(ga(x) − ga∗(x))/2.
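A DFT-based sketch of the analytic signal for a periodic 1-D signal, following Eq. (13.42); the local amplitude and phase then follow as magnitude and argument:

```python
import numpy as np

def analytic_signal(g):
    """Analytic signal via Eq. (13.42): keep k = 0, double positive,
    suppress negative wave numbers (DFT-based sketch)."""
    n = len(g)
    G = np.fft.fft(g)
    a = np.zeros(n)
    a[0] = 1.0                        # k = 0
    a[1:(n + 1) // 2] = 2.0           # positive wave numbers
    if n % 2 == 0:
        a[n // 2] = 1.0               # Nyquist bin for even-length signals
    return np.fft.ifft(a * G)

x = np.arange(256)
k = 2 * np.pi * 10 / 256              # ten periods fit the signal exactly
ga = analytic_signal(np.cos(k * x))
amplitude = np.abs(ga)                # local amplitude, Eq. (13.43)
phase = np.angle(ga)                  # local phase
# the phase gradient recovers the wave number (cf. Eq. (13.32))
print(np.allclose(np.diff(np.unwrap(phase)), k))   # True
```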


The concept of the analytic signal also makes it easy to extend the ideas of local phase into multiple dimensions. The transfer function of the analytic operator uses only the positive wave numbers, i.e., only half of the Fourier space. If we extend this partitioning to multiple dimensions, we have more than one choice to partition the Fourier space into two half spaces. Instead of the wave number, we can take the scalar product between the wave number vector k and any unit vector n̄ and suppress the half space for which the scalar product k·n̄ is negative:

 


         ⎧ 2   k·n̄ > 0
â(k) =   ⎨ 1   k·n̄ = 0                                  (13.46)
         ⎩ 0   k·n̄ < 0.


The unit vector n̄ gives the direction in which the Hilbert filter is to be applied. The definition Eq. (13.46) of the transfer function of the analytic signal implies that the Hilbert operator can only be applied to directionally filtered signals. This results from the following considerations. For one-dimensional signals we have seen that a discrete Hilbert filter does not work well for small wave numbers (Fig. 13.10). In multiple dimensions this means that a Hilbert filter does not work well if k̃·n̄ ≪ 1. Thus no wave numbers near an orthogonal to the direction of the Hilbert filter may exist, in order to avoid errors.

This fact makes the application of Hilbert filters, and thus the determination of the local phase in higher-dimensional signals, significantly more complex. It is not sufficient to use bandpass filtered images, e.g., a Laplace pyramid (Section 5.3.3). In addition, the bandpass filtered images must be further decomposed into directional components. At least as many directional components as the dimensionality of the space are required.



 

13.4.4 Monogenic Signal‡

The extension of the Hilbert transform from a 1-D signal to higher-dimensional signals is not satisfactory because it can only be applied to directionally filtered signals. For wave numbers close to the separation plane, the Hilbert transform does not work. What is really required is an isotropic extension of the Hilbert transform. It is obvious that no scalar-valued transform for a multidimensional signal can be both isotropic and of odd symmetry.

A vector-valued extension of the analytic signal meets both requirements. It is known as the monogenic signal and was introduced to image processing by Felsberg and Sommer [38]. The monogenic signal is constructed from the original signal and its Riesz transform. The transfer function of the Riesz transform is given by

ĥ(k) = i k/|k|.                                             (13.47)

The magnitude of the vector ĥ is one for all values of k. The Riesz transform is thus isotropic. It is also of odd symmetry because

ĥ(−k) = −ĥ(k).                                             (13.48)

The Riesz transform can be applied to a signal of any dimension. For a 1-D signal it reduces to the Hilbert transform.

For a 2-D signal the transfer function of the Riesz transform can be written using polar coordinates as

$$\hat{\boldsymbol{h}}(\boldsymbol{k}) = \mathrm{i}\left[\frac{k\cos\theta}{|\boldsymbol{k}|},\; \frac{k\sin\theta}{|\boldsymbol{k}|}\right]^T.$$    (13.49)

The convolution mask or PSF of the Riesz transform is given by

$$\boldsymbol{h}(\boldsymbol{x}) = -\frac{\boldsymbol{x}}{2\pi|\boldsymbol{x}|^3}.$$    (13.50)
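The defining properties Eqs. (13.47) and (13.48), unit magnitude (isotropy) and odd symmetry, can be checked numerically. The following sketch is not from the text; the function name and the FFT-grid discretization are my own. An odd grid size is chosen so that every −k lies on the grid (on an even grid the Nyquist frequency is its own negative, which would break the symmetry test).

```python
import numpy as np

def riesz_transfer_function(shape):
    """Transfer function h(k) = i k / |k| of the 2-D Riesz transform, Eq. (13.47)."""
    k1 = np.fft.fftfreq(shape[0])[:, None]   # wave number component along axis 0
    k2 = np.fft.fftfreq(shape[1])[None, :]   # wave number component along axis 1
    mag = np.hypot(k1, k2)
    mag[0, 0] = 1.0                          # avoid division by zero at k = 0
    h1 = 1j * k1 / mag
    h2 = 1j * k2 / mag
    return h1, h2

h1, h2 = riesz_transfer_function((65, 65))   # odd size: every -k is on the grid

# |h(k)| = 1 for all k != 0: the Riesz transform is isotropic
mag = np.sqrt(np.abs(h1)**2 + np.abs(h2)**2)
assert np.allclose(mag.ravel()[1:], 1.0)

# odd symmetry h(-k) = -h(k), Eq. (13.48); index (-m) mod N addresses -k
neg = lambda a: np.roll(np.flip(a, (0, 1)), 1, (0, 1))
assert np.allclose(neg(h1), -h1) and np.allclose(neg(h2), -h2)
```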


The original signal and the signal convolved by the Riesz transform can be combined for a 2-D signal into the 3-D monogenic signal as

$$\boldsymbol{g}_m(\boldsymbol{x}) = \left[g,\; h_1 * g,\; h_2 * g\right]^T.$$    (13.51)

The local amplitude of the monogenic signal is given as the norm of the vector of the monogenic signal as in the case of the analytic signal (Eq. (13.43)):

$$|\boldsymbol{g}_m|^2 = g^2 + (h_1 * g)^2 + (h_2 * g)^2.$$    (13.52)

For an intrinsically 1-D signal, it can be proven that the local phase φ and the local orientation θ are

$$\tan\varphi = \frac{\left[(h_1 * g)^2 + (h_2 * g)^2\right]^{1/2}}{g}$$    (13.53)

and

$$\tan\theta = \frac{h_2 * g}{h_1 * g}.$$    (13.54)

We can thus conclude that the monogenic signal combines an estimate of local orientation (Section 13.2) and local phase (Section 13.4.1).
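As a numerical illustration (my own sketch, not from the text), the Riesz-transform pair of an intrinsically 1-D cosine pattern can be computed by applying Eq. (13.47) in the Fourier domain. The local amplitude Eq. (13.52) then is constant, and the ratio in Eq. (13.54) recovers the orientation of the wave vector:

```python
import numpy as np

def monogenic(g):
    """Riesz-transform pair (h1*g, h2*g) computed via the FFT, Eq. (13.47)."""
    G = np.fft.fft2(g)
    k1 = np.fft.fftfreq(g.shape[0])[:, None]
    k2 = np.fft.fftfreq(g.shape[1])[None, :]
    mag = np.hypot(k1, k2)
    mag[0, 0] = 1.0                                   # leave the DC term untouched
    h1g = np.fft.ifft2(1j * k1 / mag * G).real
    h2g = np.fft.ifft2(1j * k2 / mag * G).real
    return h1g, h2g

# intrinsically 1-D pattern: cosine with integer wave vector (5, 3) on a 128x128 grid
N = 128
i, j = np.mgrid[0:N, 0:N]
g = np.cos(2 * np.pi * (5 * i + 3 * j) / N)

h1g, h2g = monogenic(g)

# local amplitude, Eq. (13.52), is 1 everywhere for a pure cosine
amplitude = np.sqrt(g**2 + h1g**2 + h2g**2)
assert np.allclose(amplitude, 1.0)

# local orientation, Eq. (13.54): tan(theta) = (h2*g)/(h1*g) = k2/k1 = 3/5
mask = np.abs(h1g) > 0.1
assert np.allclose(h2g[mask] / h1g[mask], 3 / 5)
```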



 

13.4.5 Quadrature Filters‡

Quadrature filters are an alternative approach to getting a pair of signals that differ only by a phase shift of 90° (π/2). It is easiest to introduce the complex form of the quadrature filters. Essentially, the transfer function of a quadrature filter is also zero for k·n̄ < 0, like the transfer function of the analytic filter. However, the magnitude of the transfer function is not one but can be any arbitrary real-valued function h(k):

 


$$\hat{q}(\boldsymbol{k}) = \begin{cases} 2h(\boldsymbol{k}) & \boldsymbol{k}\bar{\boldsymbol{n}} > 0 \\ 0 & \text{otherwise.} \end{cases}$$    (13.55)


The quadrature filter thus also transforms a real-valued signal into an analytic signal. In contrast to the analytic operator, a wave number weighting is applied. From the complex form of the quadrature filter, we can derive the real quadrature filter pair by observing that they are the parts of Eq. (13.55) with even and odd symmetry. Thus


$$\hat{q}_+(\boldsymbol{k}) = (\hat{q}(\boldsymbol{k}) + \hat{q}(-\boldsymbol{k}))/2,$$
$$\hat{q}_-(\boldsymbol{k}) = (\hat{q}(\boldsymbol{k}) - \hat{q}(-\boldsymbol{k}))/2.$$    (13.56)


 

The even and odd parts of the quadrature filter pair show a phase shift of 90° and can thus also be used to compute the local phase.

The best-known quadrature filter pair is the Gabor filter. A Gabor filter is a bandpass filter that selects a certain wavelength range around the center wavelength k₀ using the Gauss function. The complex transfer function of the Gabor filter is

$$\hat{g}(\boldsymbol{k}) = \begin{cases} \exp\left(-|\boldsymbol{k} - \boldsymbol{k}_0|^2\sigma_x^2/2\right) & \boldsymbol{k}\bar{\boldsymbol{k}}_0 > 0 \\ 0 & \text{otherwise.} \end{cases}$$    (13.57)

If $|\boldsymbol{k}_0|\sigma_x > 3$, Eq. (13.57) reduces to

$$\hat{g}(\boldsymbol{k}) = \exp\left(-|\boldsymbol{k} - \boldsymbol{k}_0|^2\sigma_x^2/2\right).$$    (13.58)

Using the relations in Eq. (13.56), the transfer functions for the even and odd components are given by

$$\hat{g}_+(\boldsymbol{k}) = \frac{1}{2}\left[\exp\left(-|\boldsymbol{k} - \boldsymbol{k}_0|^2\sigma_x^2/2\right) + \exp\left(-|\boldsymbol{k} + \boldsymbol{k}_0|^2\sigma_x^2/2\right)\right],$$

$$\hat{g}_-(\boldsymbol{k}) = \frac{1}{2}\left[\exp\left(-|\boldsymbol{k} - \boldsymbol{k}_0|^2\sigma_x^2/2\right) - \exp\left(-|\boldsymbol{k} + \boldsymbol{k}_0|^2\sigma_x^2/2\right)\right].$$    (13.59)


The point spread functions of these filters can be computed easily with the shift theorem (Theorem 3, p. 52, R4):

$$g_+(\boldsymbol{x}) = \cos(\boldsymbol{k}_0\boldsymbol{x})\exp\left(-\frac{|\boldsymbol{x}|^2}{2\sigma_x^2}\right),$$

$$g_-(\boldsymbol{x}) = \mathrm{i}\sin(\boldsymbol{k}_0\boldsymbol{x})\exp\left(-\frac{|\boldsymbol{x}|^2}{2\sigma_x^2}\right),$$    (13.60)


or combined into a complex filter mask:

$$g(\boldsymbol{x}) = \exp(\mathrm{i}\boldsymbol{k}_0\boldsymbol{x})\exp\left(-\frac{|\boldsymbol{x}|^2}{2\sigma_x^2}\right).$$    (13.61)


 

Gabor filters are useful for bandpass-filtering images and performing image analysis in the space/wave number domain. Figure 13.11 illustrates an application [Riemer, 1991; Riemer et al., 1991]. An image with short wind-generated water surface waves is decomposed by a set of Gabor filters. The center wave number k₀ was set in the x direction, parallel to the wind direction. The filters had the center wavelength in octave distances at 1.2, 2.4, and 4.8 cm wavelengths. The bandwidth was set proportional to the center wave number.

The left column of images in Fig. 13.11 shows the filtering with the even Gabor filter, the right column the local amplitude, which is directly related to the energy of the waves. The filtered images show that waves with different wavelengths are partly coupled. In areas where the larger waves have large amplitudes, the small-scale waves (capillary waves) also have large amplitudes. The energy of the waves is not equally distributed over the water surface.

An extension of this analysis to image sequences gives a direct insight into the nonlinear wave-wave interaction processes. Figure 13.12 shows the temporal evolution of one row of images from Fig. 13.11. As we will discuss in detail in Section 14.2.4, the slope of the structures in these space-time images towards the time axis is directly proportional to the speed of the moving objects.

It can be observed nicely that the small waves are modulated by the large waves and that the group velocity (speed of the wave energy) of the small waves is slower than the phase speed for the capillary waves.

 

13.4.6 Local Wave Number Determination‡

In order to determine the local wave number, we just need to compute the first spatial derivative of the phase signal (Section 13.4.1, Eq. (13.32)). This derivative has to be applied in the same direction as the Hilbert or quadrature filter has been applied. The phase is given by either

$$\varphi(\boldsymbol{x}) = \arctan\left(-\frac{g_h(\boldsymbol{x})}{g(\boldsymbol{x})}\right)$$    (13.62)

or

$$\varphi(\boldsymbol{x}) = \arctan\left(-\frac{g_+(\boldsymbol{x})}{g_-(\boldsymbol{x})}\right),$$    (13.63)

where g₊ and g₋ denote the signals filtered with the even and odd part of the quadrature filter.

Direct computation of the partial derivatives from Eqs. (13.62) and (13.63) is not advisable, however, because of the inherent discontinuities in the phase signal. A phase computed with the inverse tangent restricts the phase to the main interval [− π, π [ and thus leads inevitably to a wrapping of the phase signal from π to − π with the corresponding discontinuities.



 

Figure 13.11: Analysis of an image (a, 40 cm × 30 cm) from wind-generated water surface waves. The intensity is proportional to the along-wind component of the slope of the waves. The even part (b, d, f) and squared magnitude (energy; c, e, g) of the Gabor-filtered images with center wavelength at 48, 24, and 12 mm, respectively.



 


Figure 13.12: Analysis of a 5 s long space-time slice in wind direction of an image sequence from short wind-generated water surface waves. The time axis is vertically oriented. Even part (a–c) and squared magnitude (energy; d–f) of the Gabor-filtered images with center wavelength at 48, 24, and 12 mm, respectively.

 

 

As pointed out by Fleet [42], this problem can be avoided by computing the phase gradient directly from the gradients of q₊(x) and q₋(x). The result is

$$k_p = \frac{\partial \varphi(\boldsymbol{x})}{\partial x_p} = \frac{\partial}{\partial x_p}\arctan\left(-q_+(\boldsymbol{x})/q_-(\boldsymbol{x})\right)$$

$$= \frac{1}{q_+^2(\boldsymbol{x}) + q_-^2(\boldsymbol{x})}\left[\frac{\partial q_+(\boldsymbol{x})}{\partial x_p}\,q_-(\boldsymbol{x}) - \frac{\partial q_-(\boldsymbol{x})}{\partial x_p}\,q_+(\boldsymbol{x})\right].$$    (13.64)


This formulation of the phase gradient also eliminates the need for using trigonometric functions to compute the phase signal and is, therefore, significantly faster.
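The following sketch is my own (with a neutral sign convention for the phase: the gradient of arctan(q_odd/q_even) is computed without evaluating any arctan). It also shows the familiar bias of discrete derivatives: with central differences the estimate converges to sin(k) rather than k.

```python
import numpy as np

# synthetic quadrature pair for a 1-D wave of wave number k_true
k_true = 0.3
n = np.arange(512, dtype=float)
q_even = np.cos(k_true * n)          # even (0° phase) component
q_odd = np.sin(k_true * n)           # odd (90° phase) component

# phase gradient of arctan(q_odd/q_even) without any trigonometric function
dq_even = np.gradient(q_even)
dq_odd = np.gradient(q_odd)
k_est = (dq_odd * q_even - dq_even * q_odd) / (q_even**2 + q_odd**2)

# interior points use central differences: the estimate is exactly sin(k_true)
assert np.allclose(k_est[1:-1], np.sin(k_true))
# which is close to the true wave number (relative bias about k^2/6)
assert abs(k_est[10] - k_true) < 0.01
```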



 

13.5 Tensor Representation by Quadrature Filter Sets‡

13.5.1 Principle‡

Quadrature filters provide another way to analyze simple neighborhoods and to determine both the local orientation and the local wave number. Historically, this was the first technique for local structure analysis, pioneered by the work of Granlund [56]. The inertia and structure tensor techniques actually appeared later in the literature [8, 90, 146, 147].

The basic idea of the quadrature filter set technique is to extract structures in a certain wave number and direction range. In order to determine local orientation, we must apply a whole set of directional filters, with each filter being sensitive to structures of different orientation. We then compare the filter responses and obtain a maximum filter response from the directional filter whose direction coincides best with that of local orientation. Similarly, a quadrature filter set for different wave number ranges can be set up to determine the local wave number.

If we get a clear maximum in one of the filters but only little response in the others, the local neighborhood contains a locally oriented pattern. If the different filters give comparable responses, the neighborhood contains a distribution of oriented patterns.

So far, the concept seems to be straightforward, but a number of tricky problems need to be solved. Which properties must the directional filters meet in order to ensure an exact determination of local orientation, if this is possible at all? For computational efficiency, we need to use a minimal number of filters to interpolate the angle of the local orientation. What is this minimal number?

The concepts introduced in this section are based on the work of Granlund [56], Knutsson [98], and Knutsson et al. [100], later summarized in a monograph by Granlund and Knutsson [57]. While the quadrature filter set techniques have been formulated by these authors for multiple dimensions, we will discuss here only the two-dimensional case.

We first discuss the design of quadrature filters that are suitable for the detection of both local orientation and local wave number. This leads to polar separable quadrature filters (Section 13.5.2). In a second step, we show how the orientation vector defined in Section 13.3.3 can be constructed by simple vector addition of the quadrature filter responses (Section 13.5.3). Likewise, in Section 13.5.4 we study the computation of the local wave number. Finally, Section 13.5.5 closes the circle by showing that the structure tensor can also be computed by a set of quadrature filters. Thus the tensor methods discussed in the first part of this chapter (Section 13.3) and the quadrature filter set technique differ only in some subtle points but otherwise give identical results.

 

13.5.2 Polar Separable Quadrature Filters‡

For an appropriate set of directional filters, each filter should be a rotated copy of the others. This requirement implies that the transfer function of the filters can be separated into an angular part d̂(φ) and a wave number part r̂(k). Such a filter is called polar separable and may be conveniently expressed in polar




coordinates as

$$\hat{q}(k, \varphi) = \hat{r}(k)\,\hat{d}(\varphi),$$    (13.65)

where $k = (k_1^2 + k_2^2)^{1/2}$ and $\varphi = \arctan(k_2/k_1)$ are the magnitude and argument of the wave number, respectively. For a set of directional filters, only the angular part of the transfer function is of importance, while the radial part must be the same for each filter but can be of arbitrary shape. The converse is true for a filter set to determine the local wave number.

Knutsson [98] suggested the following base quadrature filter:

$$\hat{r}(k) = \exp\left[-\frac{(\ln k - \ln k_0)^2}{(B/2)^2 \ln 2}\right]$$

$$\hat{d}(\varphi) = \begin{cases} \cos^{2l}(\varphi - \varphi_k) & |\varphi - \varphi_k| < \pi/2 \\ 0 & \text{otherwise.} \end{cases}$$    (13.66)


In this equation, the complex notation for quadrature filters is used (Section 13.4.5). The filter is directed into the angle φ_k. The unit vector in this direction is d̄_k = [cos φ_k, sin φ_k].

The filter is continuous, since the cosine function is zero in the partition plane for the two half spaces (|φ − φ_k| = π/2 or d̄_k·k̄ = 0). Using the unit vector d̄_k in the direction of the filter, the angular part of the filter can also be written as:

$$\hat{d}(\boldsymbol{k}) = \begin{cases} (\bar{\boldsymbol{k}}\,\bar{\boldsymbol{d}}_k)^{2l} & \bar{\boldsymbol{k}}\,\bar{\boldsymbol{d}}_k > 0 \\ 0 & \text{otherwise.} \end{cases}$$    (13.67)


 

The constant k₀ in Eq. (13.66) denotes the peak wave number. The constant B determines the half-width of the wave number in number of octaves and l the angular resolution of the filter. In a logarithmic wave number scale, the filter has the shape of a Gaussian function. Therefore the radial part has a lognormal shape.

For the real even and the imaginary odd filter of the quadrature filter pair, the radial part is the same and only the angular part differs:

$$\hat{d}_+(\varphi) = \cos^{2l}(\varphi - \varphi_k)$$
$$\hat{d}_-(\varphi) = \mathrm{i}\cos^{2l}(\varphi - \varphi_k)\,\mathrm{sign}(\cos(\varphi - \varphi_k)).$$    (13.68)

 

Figure 13.13 shows the radial and angular part of the transfer function for different k₀ and φ_k. A set of directional filters is obtained by a suitable choice of different φ_k:

$$\varphi_k = \frac{\pi k}{K}, \qquad k = 0, 1, \ldots, K - 1.$$    (13.69)


Knutsson used four filters with 45° increments in the directions 22.5°, 67.5°, 112.5°, and 157.5°. These directions have the advantage that only one filter kernel has to be designed. The kernels for the filters in the other directions are obtained by mirroring the kernels at the axes and diagonals.

These filters were designed in the wave number space. The filter coefficients are obtained by inverse Fourier transformation. If we choose a reasonably small filter mask, we will cut off a number of non-zero filter coefficients. This causes deviations from the ideal transfer function.
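A sketch of this design step (mine; the function name and the FFT-grid discretization are not from the text): the transfer function Eqs. (13.65)–(13.67) is sampled on a discrete Fourier grid, from which the kernel would follow by an inverse Fourier transform.

```python
import numpy as np

def polar_separable_filter(shape, k0, B, l, phi_k):
    """Lognormal-radial, cos^2l-angular transfer function, Eqs. (13.65)-(13.67)."""
    k1 = np.fft.fftfreq(shape[0])[:, None]
    k2 = np.fft.fftfreq(shape[1])[None, :]
    k = np.hypot(k1, k2)
    k_safe = np.where(k > 0, k, 1.0)                     # keep log() finite at k = 0
    radial = np.exp(-(np.log(k_safe) - np.log(k0))**2 / ((B / 2)**2 * np.log(2)))
    radial[0, 0] = 0.0                                   # no DC response
    c = np.cos(np.arctan2(k2, k1) - phi_k)
    angular = np.where(c > 0, c**(2 * l), 0.0)           # one half space only
    return radial * angular

q = polar_separable_filter((64, 64), k0=0.1, B=2, l=1, phi_k=np.deg2rad(22.5))

# the response peaks near wave number k0 in the filter direction ...
i, j = np.unravel_index(np.argmax(q), q.shape)
kp = np.hypot(np.fft.fftfreq(64)[i], np.fft.fftfreq(64)[j])
assert abs(kp - 0.1) < 0.03
# ... and vanishes in the opposite half space (here: direction -90°)
assert q[0, 60] == 0.0
```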



 


Figure 13.13: a Radial and b angular part of the quadrature filter according to Eq. (13.66) with l = 1 and B = 2 in different directions and with different peak wave numbers.

Figure 13.14: Computation of local orientation by vector addition of the four filter responses. An example is shown where the neighborhood is isotropic concerning orientation: all four filter responses are equal. The angles of the vectors are equal to the filter directions in a and double the filter directions in b.

 

 

Therefore, Knutsson modified the filter kernel coefficients using an optimization procedure in such a way that they approach the ideal transfer function as closely as possible. It turned out that at least a 15 × 15 filter mask is necessary to get a good approximation of the anticipated transfer function.

 

13.5.3 Determination of the Orientation Vector‡

The local orientation can be computed from the responses of the four quadrature filters by vector addition. The idea of the approach is simple. We assign to the individual directional filters an orientation vector. The magnitude of the vector corresponds to the response of the quadrature filter. The direction of the vector is given by the double angle of the filter direction (Section 13.3.3). In this representation each filter response shows how well the orientation of the pattern is directed in the direction of the filter. An estimate of the orientation vector is then given as the vector sum of the filter responses.



 

Using a representation with complex numbers for the orientation vector, we can write the filter response for the filter in φ_k direction as

$$Q_{\varphi_k} = |Q| \exp(2\mathrm{i}\varphi_k).$$    (13.70)

Then the orientation vector as the vector sum of the filter responses can be written as

$$\boldsymbol{O} = \sum_{k=0}^{K-1} Q_{\varphi_k}.$$    (13.71)

Figure 13.14 illustrates why an angle doubling is necessary for the vector addition to obtain the orientation vector. An example is taken where the responses from all four filters are equal. In this case the neighborhood contains structures in all directions. Consequently, we observe no local orientation and the vector sum of all filter responses vanishes. This happens if we double the orientation angle (Fig. 13.14b), but not if we omit this step (Fig. 13.14a).
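Numerically (a sketch of mine, using the ideal angular responses cos^{2l}(φ₀ − φ_k) of Eq. (13.66)), the vector sum with doubled angles recovers φ₀ exactly for K = 4, l = 1, and vanishes for an isotropic neighborhood:

```python
import numpy as np

K, l = 4, 1
phi_k = np.pi * np.arange(K) / K                 # filter directions, Eq. (13.69)

# ideally oriented neighborhood at phi0: angular responses cos^2l(phi0 - phi_k)
phi0 = np.deg2rad(35.0)
responses = np.cos(phi0 - phi_k)**(2 * l)
O = np.sum(responses * np.exp(2j * phi_k))       # vector sum with doubled angles
assert np.isclose(np.angle(O) / 2, phi0)         # orientation recovered exactly

# isotropic neighborhood: all responses equal, the sum vanishes (Fig. 13.14b)
O_iso = np.sum(np.ones(K) * np.exp(2j * phi_k))
assert abs(O_iso) < 1e-12
```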

After these more qualitative considerations, we will prove that we can compute the local orientation exactly when the local neighborhood is ideally oriented in an arbitrary direction φ₀. As a result, we will also learn the least number of filters we need. We can simplify the computations by only considering the angular terms, as the filter responses show the same wave number dependence. The quick reader can skip this proof.

Using Eq. (13.70), Eq. (13.66), and Eq. (13.69) we can write the angular part of the filter response of the kth filter as

$$\hat{d}_k(\varphi_0) = \exp(2\pi\mathrm{i}k/K)\cos^{2l}(\varphi_0 - \pi k/K).$$

The cosine function is decomposed into the sum of two complex exponentials:

$$\hat{d}_k(\varphi_0) = \frac{1}{2^{2l}}\exp(2\pi\mathrm{i}k/K)\left[\exp(\mathrm{i}(\varphi_0 - \pi k/K)) + \exp(-\mathrm{i}(\varphi_0 - \pi k/K))\right]^{2l}$$

$$= \frac{1}{2^{2l}}\exp(2\pi\mathrm{i}k/K)\sum_{j=0}^{2l}\binom{2l}{j}\exp\left(\mathrm{i}j(\varphi_0 - \pi k/K)\right)\exp\left(-\mathrm{i}(2l - j)(\varphi_0 - \pi k/K)\right)$$

$$= \frac{1}{2^{2l}}\sum_{j=0}^{2l}\binom{2l}{j}\exp\left(\mathrm{i}(j - l)2\varphi_0\right)\exp\left(2\pi\mathrm{i}(1 + l - j)(k/K)\right).$$


Now we sum up the vectors of all the K directional filters:

$$\sum_{k=0}^{K-1}\hat{d}_k = \frac{1}{2^{2l}}\sum_{j=0}^{2l}\binom{2l}{j}\exp\left(\mathrm{i}(j - l)2\varphi_0\right)\sum_{k=0}^{K-1}\exp\left(2\pi\mathrm{i}(1 + l - j)(k/K)\right).$$

 

The complex double sum can be solved if we carefully analyze the inner sum over k. If j = l + 1 the exponent is zero. Consequently, the sum is K. Otherwise, the sum represents a geometric series with the factor exp(2πi(1 + l − j)(k/K)) and the sum

$$\sum_{k=0}^{K-1}\exp\left(2\pi\mathrm{i}(1 + l - j)(k/K)\right) = \frac{1 - \exp\left(2\pi\mathrm{i}(1 + l - j)\right)}{1 - \exp\left(2\pi\mathrm{i}(1 + l - j)/K\right)}.$$    (13.72)



 

Figure 13.15: Vector addition of the filter responses from K directional filters to determine local orientation; a K = 2; b K = 3; c K = 4; sum vector shown thicker.

 

We can use Eq. (13.72) only if the denominator does not vanish for any j = 0, 1, ..., 2l; consequently K > l + 1. With this condition the sum vanishes. This result has a simple geometric interpretation. The sum consists of vectors which are equally distributed on the unit circle. The angle between two consecutive vectors is 2πk/K.

In conclusion, the inner sum in Eq. (13.72) reduces to K for j = l + 1, otherwise it is zero. Therefore the sum over j contains only the term with j = l + 1. The final result

$$\sum_{k=0}^{K-1}\hat{d}_k = \frac{K}{2^{2l}}\binom{2l}{l + 1}\exp(\mathrm{i}2\varphi_0)$$    (13.73)

shows a vector with the angle of the local orientation doubled. This concludes the proof. ■

The proof of the exactness of the vector addition technique also gives the minimal number of directional filters required. From l > 0 and K > l + 1 we conclude that at least K = 3 directional filters are necessary. We can also illustrate this condition intuitively. If we have only two filters (K = 2), the vector responses of these two filters lie on a line (Fig. 13.15a). Thus orientation determination is not possible. Only with three or four filters can the sum vector point in all directions (Fig. 13.15b, c).

With a similar derivation, we can prove another important property of the directional quadrature filters. The sum over the transfer functions of the K filters results in an isotropic function for K > l:

$$\sum_{k=0}^{K-1}\cos^{2l}(\varphi - \pi k/K) = \frac{K}{2^{2l}}\binom{2l}{l} = \frac{K\,(2l)!}{2^{2l}(l!)^2}.$$    (13.74)

In other words, a preferred direction does not exist. The sum of all filter responses gives an orientation invariant response. This is also the deeper reason why we can determine local orientation exactly with a very limited number of filters and a simple linear procedure such as vector addition.
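Eq. (13.74) is easy to verify numerically (a sketch of mine): for K = 4 and l = 1 the sum of the four cos² lobes equals the constant K(2l)!/(2^{2l}(l!)²) = 2 at every angle φ.

```python
import numpy as np
from math import factorial

K, l = 4, 1
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
# left-hand side of Eq. (13.74): sum of the K angular transfer functions
total = sum(np.cos(phi - np.pi * k / K)**(2 * l) for k in range(K))
# right-hand side: K (2l)! / (2^(2l) (l!)^2)
expected = K * factorial(2 * l) / (2**(2 * l) * factorial(l)**2)
assert np.allclose(total, expected)   # the sum is isotropic: 2.0 for every phi
```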



13.5.4 Determination of the Local Wave Number‡

The lognormal form of the radial part of the quadrature filter sets is the key to a direct estimate of the local wave number of a narrowband signal. According to Eq. (13.66), we can write the radial part of the transfer function of the quadrature filter sets as

$$\hat{r}_l(k) = \exp\left[-\frac{(\ln k - \ln k_l)^2}{2\sigma^2 \ln 2}\right].$$    (13.75)


We examine the ratio of the output of two different radial center frequencies k₁ and k₂ and obtain:

$$\frac{\hat{r}_2}{\hat{r}_1} = \exp\left[-\frac{(\ln k - \ln k_2)^2 - (\ln k - \ln k_1)^2}{2\sigma^2 \ln 2}\right]$$

$$= \exp\left[\frac{2(\ln k_2 - \ln k_1)\ln k + \ln^2 k_1 - \ln^2 k_2}{2\sigma^2 \ln 2}\right]$$

$$= \exp\left[\frac{(\ln k_2 - \ln k_1)\left[\ln k - \tfrac{1}{2}(\ln k_2 + \ln k_1)\right]}{\sigma^2 \ln 2}\right]$$

$$= \exp\left[\frac{\ln\!\left(k/\sqrt{k_1 k_2}\right)\ln(k_2/k_1)}{\sigma^2 \ln 2}\right] = \left(\frac{k}{\sqrt{k_1 k_2}}\right)^{\ln(k_2/k_1)/(\sigma^2 \ln 2)}.$$

Generally, the ratio of two different radial filters is directly related to the local wave number. The relation becomes particularly simple if the exponent in the last expression is one. This is the case, for example, if the wave number ratio of the two filters is two (k₂/k₁ = 2 and σ = 1). Then

$$\frac{\hat{r}_2}{\hat{r}_1} = \frac{k}{\sqrt{k_1 k_2}}.$$    (13.76)
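A short numerical check of Eq. (13.76) (my own sketch): for two lognormal filters one octave apart with σ = 1, the response ratio multiplied by √(k₁k₂) returns the wave number exactly.

```python
import numpy as np

def lognormal(k, kc, sigma=1.0):
    """Radial lognormal transfer function, Eq. (13.75)."""
    return np.exp(-(np.log(k) - np.log(kc))**2 / (2 * sigma**2 * np.log(2)))

k = np.linspace(0.02, 0.5, 200)
k1, k2 = 0.1, 0.2                      # center wave numbers one octave apart
ratio = lognormal(k, k2) / lognormal(k, k1)
k_est = ratio * np.sqrt(k1 * k2)       # invert Eq. (13.76): ratio = k / sqrt(k1 k2)
assert np.allclose(k_est, k)           # exact for k2/k1 = 2 and sigma = 1
```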


 

13.5.5 Determination of the Structure Tensor‡

In this final section we relate the quadrature filter set technique as discussed in Section 13.5 to the tensor technique (Section 13.3). It is shown that the structure tensor can be computed from the responses of these filters. Granlund and Knutsson [57] present the general equation to compute the structure tensor from the quadrature filter responses:

$$\boldsymbol{J}(\boldsymbol{x}) = \sum_{k=0}^{K-1} Q_k g(\boldsymbol{x})\left(\alpha\,\bar{\boldsymbol{d}}_k \otimes \bar{\boldsymbol{d}}_k - \beta\boldsymbol{I}\right),$$    (13.77)

where Q_k g(x) is the (amplitude) output of the kth quadrature filter and I the identity matrix. In the two-dimensional case, α = 4/3 and β = 1/3.

We demonstrate this relationship with the quadrature filter set with (the minimum number of) three filters. The three filters point at 0°, 60°, and 120°. Thus



 


the unit direction vectors are:

$$\bar{\boldsymbol{d}}_0 = \left[1, 0\right]^T, \quad \bar{\boldsymbol{d}}_1 = \left[1/2, \sqrt{3}/2\right]^T, \quad \bar{\boldsymbol{d}}_2 = \left[-1/2, \sqrt{3}/2\right]^T.$$    (13.78)


With these values for d̄_k, Eq. (13.77) can be written as

$$\boldsymbol{J}(\boldsymbol{x}) = Q_0 g(\boldsymbol{x})\begin{bmatrix} 1 & 0 \\ 0 & -1/3 \end{bmatrix} + Q_1 g(\boldsymbol{x})\begin{bmatrix} 0 & 1/\sqrt{3} \\ 1/\sqrt{3} & 2/3 \end{bmatrix} + Q_2 g(\boldsymbol{x})\begin{bmatrix} 0 & -1/\sqrt{3} \\ -1/\sqrt{3} & 2/3 \end{bmatrix}.$$    (13.79)


 

The matrices give the contribution of the individual quadrature filters to the corresponding elements of the structure tensor. For an isotropically oriented pattern, the output from all quadrature filters is the same. If we set the output to q(x), Eq. (13.79) results in the correct structure tensor for an isotropically oriented pattern:


$$\boldsymbol{J}(\boldsymbol{x}) = \begin{bmatrix} q(\boldsymbol{x}) & 0 \\ 0 & q(\boldsymbol{x}) \end{bmatrix}.$$    (13.80)


Conversely, for an ideally oriented pattern in the direction φ₀, the response of the kth filter is q(x) cos²(φ₀ − φ_k) and we obtain

$$\boldsymbol{J}(\boldsymbol{x}) = q(\boldsymbol{x})\begin{bmatrix} \cos^2(\varphi_0) & \sin(2\varphi_0)/2 \\ \sin(2\varphi_0)/2 & \sin^2(\varphi_0) \end{bmatrix}.$$    (13.81)


This is the correct form of the structure tensor for an ideally oriented structure in the direction φ₀. (This can be shown, for instance, by checking that the determinant of the matrix is zero and by computing the orientation angle according to Eq. (13.12).)
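The check suggested in the text can be scripted (a sketch of mine): assembling J from Eq. (13.77) with the three filters reproduces Eq. (13.81) for an ideally oriented pattern, and a multiple of the identity, Eq. (13.80), for an isotropic one.

```python
import numpy as np

alpha, beta = 4.0 / 3.0, 1.0 / 3.0
phi_k = np.deg2rad([0.0, 60.0, 120.0])
d = np.stack([np.cos(phi_k), np.sin(phi_k)], axis=1)   # unit vectors, Eq. (13.78)

def structure_tensor(Q):
    """Structure tensor from the three quadrature filter outputs Q, Eq. (13.77)."""
    J = np.zeros((2, 2))
    for Qk, dk in zip(Q, d):
        J += Qk * (alpha * np.outer(dk, dk) - beta * np.eye(2))
    return J

# ideally oriented pattern at phi0: responses cos^2(phi0 - phi_k), q = 1
phi0 = np.deg2rad(25.0)
J = structure_tensor(np.cos(phi0 - phi_k)**2)
expected = np.array([[np.cos(phi0)**2,      np.sin(2 * phi0) / 2],
                     [np.sin(2 * phi0) / 2, np.sin(phi0)**2]])
assert np.allclose(J, expected)          # Eq. (13.81); note det(J) = 0

# isotropic pattern: equal responses q = 1 give J = I, Eq. (13.80)
assert np.allclose(structure_tensor([1.0, 1.0, 1.0]), np.eye(2))
```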

There is one subtle but important difference between the quadrature filter technique and the structure tensor technique. The quadrature filter technique does not require any averaging to compute the elements of the structure tensor. However, the averaging is an essential element of the direct method. Without averaging, the coherency measure (see Eq. (13.15) in Section 13.3.4) would always be one.

 

13.6 Further Readings‡

The quadrature filter approach (Section 13.5) is detailed in the monograph of Granlund and Knutsson [57], the inertia tensor method (Section 13.3.7) in a paper by Bigün and Granlund [8]. Poularikas [141] expounds the mathematics of the Hilbert transform. The extension of the analytic signal to higher-dimensional signals (Section 13.4.4) was published only recently by Felsberg and Sommer [38]. More mathematical background to the monogenic signal and geometric algebra for computer vision can be found in the monograph edited by Sommer [176].


14 Motion

14.1 Introduction

Motion analysis long used to be a specialized research area that did not have much to do with general image processing. This separation had two reasons. First, the techniques used to analyze motion in image sequences were quite different. Second, the large amount of storage space and computing power required to process image sequences made image sequence analysis available only to a few specialized institutions that could afford to buy the expensive specialized equipment. Both reasons are no longer true. Because of the general progress in image processing, the more advanced methods used in motion analysis no longer differ from those used for other image processing tasks. The rapid progress in computer hardware and algorithms makes the analysis of image sequences now feasible even on standard personal computers and workstations.

Therefore we treat motion in this chapter as just another feature that can be used to identify, characterize, and distinguish objects and to understand scenes. Motion is indeed a powerful feature. We may compare the integration of motion analysis into mainstream image processing with the transition from still photography to motion pictures. Only image sequence analysis allows us to recognize and analyze dynamic processes. Thus far-reaching capabilities become available for scientific and engineering applications including the study of flow; transport; biological growth processes from the molecular to the ecosystem level; diurnal, annual, and interannual variations; industrial processes; traffic; autonomous vehicles and robots, to name just a few application areas. In short, everything that causes temporal changes or makes them visible in our world is a potential subject for image sequence analysis.

The analysis of motion is still a challenging task and requires some special knowledge. Therefore we discuss the basic problems and principles of motion analysis in Section 14.2. Then we turn to the various techniques for motion determination. As in many other areas of image processing, the literature is swamped with a multitude of approaches. This book should not add to the confusion. We emphasize instead the basic principles and we try to present the various concepts in a unified way as filter operations on space-time images. In this way, the interrelations between the different concepts are made transparent.

 


B. Jähne, Digital Image Processing                                                                                                       Copyright © 2002 by Springer-Verlag

ISBN 3–540–67754–2                                                                                                    All rights of reproduction in any form reserved.



 


Figure 14.1: a–d Two pairs of images from the construction area for the new head clinic at Heidelberg University. What has changed from the left to the right images?

 

In this sense, we will discuss differential (Section 14.3), tensor (Section 14.4), correlation (Section 14.6), and phase (Section 14.7) techniques as elementary motion estimators.

 

14.2 Basics

