

14.6.3 Evaluation and Comparison‡



In contrast to the differential methods, which are based on the continuity of the optical flux, the correlation approach is insensitive to intensity changes between the two images. This makes correlation-based techniques very useful for stereo-image processing, where slight intensity variations always occur between the left and right image because of the two different cameras used. Actually, the fast maximum search described in Section 14.6.2 is the standard approach for determining the stereo disparity. Quam [145] used it with a coarse-to-fine control strategy, and Nishihara [132] used it in a modified version, taking the sign of the Laplacian of the Gaussian as a feature. He reports a resolution accuracy of about 0.1 pixel for small displacements. Gelles et al. [51] measured movements in cells with a precision of about 0.02 pixel using the correlation method. However, they used a more costly approach, computing the centroid of a clipped cross-correlation function. The model-adapted approach of Diehl and Burkhardt [31] can be understood as an extended correlation approach as it also allows for rotation and other forms of motion.

The correlation method deviates from all other methods discussed in this work in the respect that it is conceptually based on the comparison of only two images. Even if we extend the correlation technique by multiple correlations to more than two frames, it remains a discrete time-step approach. Thus it lacks the elegance of the other methods, which were formulated in continuous space before being implemented for discrete images. Furthermore, it is obvious that a multiframe extension will be computationally quite expensive.

 

14.7 Phase Method‡

14.7.1 Principle‡

Except for the costly correlation method, all other methods that compute the optical flow are more or less sensitive to temporal illumination changes. Thus we search for a rich feature which contains the essential information in the images with regard to motion analysis. Fleet and Jepson [45] and Fleet [42] proposed using the phase for the computation of optical flow. We have discussed the crucial role of the phase already in Sections 2.3.6 and 13.4.1. In Section 2.3.6 we demonstrated that the phase of the Fourier transform of a signal carries the essential information. An image can still be recognized when the amplitude information is lost, but not when the phase is lost [112]. Global illumination changes the amplitude of a signal but not its phase.

As an introduction to the phase method, we consider a planar 1-D wave with a wave number k and a circular frequency ω, traveling with a phase speed u = ω/k:

$$g(x, t) = g_0 \exp[-\mathrm{i}\,\phi(x, t)] = g_0 \exp[-\mathrm{i}(kx - \omega t)]. \qquad (14.78)$$



 

The position and thus also the displacement are given by the phase. The phase depends on both the spatial and temporal coordinates. For a planar wave, the phase varies linearly in time and space,

$$\phi(x, t) = kx - \omega t = kx - ukt, \qquad (14.79)$$

where k and ω are the wave number and the frequency of the pattern, respectively. Computing the temporal and spatial derivatives of the phase, i.e., the spatiotemporal gradient, yields both the wave number and the frequency of the moving periodic structure:

$$\nabla_{xt}\phi = \begin{bmatrix} \phi_x \\ \phi_t \end{bmatrix} = \begin{bmatrix} k \\ -\omega \end{bmatrix}. \qquad (14.80)$$

 

Then the velocity is given as the ratio of the frequency to the wave number:

$$u = \frac{\omega}{k} = -\frac{\partial_t \phi}{\partial_x \phi}. \qquad (14.81)$$

This formula is very similar to the estimate based on the optical flow (Eq. (14.11)). In both cases, the velocity is given as a ratio of temporal and spatial derivatives.
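As a quick worked example (the numbers are chosen for illustration and are not taken from the text): a sinusoidal pattern with a wavelength of 10 pixels has $k = 2\pi/10 \approx 0.63$ pixel⁻¹. If it shifts by one pixel per frame, the phase at a fixed position changes by $\partial_t\phi = -2\pi/10$ per frame, while $\partial_x\phi = k$, so $u = -\partial_t\phi/\partial_x\phi = 1$ pixel/frame, as expected.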

Direct computation of the partial derivatives from the phase signal is not advisable because of the inherent discontinuities in the phase signal (restriction to the main interval [−π, π[). As we discussed in Section 13.4.6, it is possible to compute the phase gradients directly from the output of a quadrature filter pair. If we denote the quadrature filter pair with q₊(x, t) and q₋(x, t), the spatiotemporal phase gradient is given by (compare Eq. (13.64)):

$$\nabla_{xt}\phi(x, t) = \frac{q_+(x, t)\,\nabla_{xt} q_-(x, t) - q_-(x, t)\,\nabla_{xt} q_+(x, t)}{q_+^2(x, t) + q_-^2(x, t)}. \qquad (14.82)$$

Using Eq. (14.81), the phase-derived optical flow f is

$$f = -\frac{q_+\,\partial_t q_- - q_-\,\partial_t q_+}{q_+\,\partial_x q_- - q_-\,\partial_x q_+}. \qquad (14.83)$$
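The following is a minimal sketch, not the book's implementation, of Eq. (14.83) for a pair of 1-D signals. The choice of bandpass (a difference of Gaussians) and of the Hilbert transform to obtain the quadrature component are illustrative assumptions; any quadrature filter pair from Section 13.4 could be substituted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import hilbert

def phase_flow_1d(frame0, frame1, sigma=4.0):
    """Phase-based displacement estimate (pixels/frame) for two 1-D frames, Eq. (14.83)."""
    def quadrature_pair(g):
        # Bandpass the signal (difference of Gaussians), then take its
        # Hilbert transform: q_plus is the even part, q_minus the odd part.
        band = gaussian_filter1d(g, sigma) - gaussian_filter1d(g, 2.0 * sigma)
        return band, np.imag(hilbert(band))

    qp0, qm0 = quadrature_pair(np.asarray(frame0, dtype=float))
    qp1, qm1 = quadrature_pair(np.asarray(frame1, dtype=float))

    # Temporal derivatives as two-point differences, spatial derivatives
    # as central differences on the temporal average.
    qp, qm = 0.5 * (qp0 + qp1), 0.5 * (qm0 + qm1)
    num = qp * (qm1 - qm0) - qm * (qp1 - qp0)           # q+ dt(q-) - q- dt(q+)
    den = qp * np.gradient(qm) - qm * np.gradient(qp)   # q+ dx(q-) - q- dx(q+)
    den = np.where(np.abs(den) < 1e-12, np.nan, den)    # mask unreliable points
    return -num / den
```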

14.7.2 Evaluation and Comparison‡

At first sight the phase method appears to offer nothing new. Replacing the gray value by the phase is a significant improvement, however, as the phase is much less dependent on the illumination than the gray value itself. Using only the phase signal, the amplitude of the gray value variations may change without affecting the velocity estimates at all.

So far, we have only considered an ideal periodic gray value structure. Generally, images are composed of gray value structures with different wave numbers. From such a structure we cannot obtain useful phase estimates. Consequently, we need to decompose the image into a set of wave number ranges.

This implies that the phase method is not appropriate to handle two-dimensional shifts. It is essentially a 1-D concept which measures the motion of a linearly oriented structure, e.g., a planar wave, in the direction of the gray value gradients. From this fact, Fleet and Jepson [44] derived a new paradigm for motion analysis. The image is decomposed with directional filters and in each of the components normal velocities are determined.

The 2-D motion field is then composed from these normal velocities. This approach has the advantage that the composition to a complete motion field is postponed to a second processing step which can be adapted to the kind of motion occurring in the images. Therefore this approach can also handle more complex cases such as motion superimposition of transparent objects.

Fleet and Jepson [44] use a set of Gabor filters (Section 13.4.5) with an angular resolution of 30° and a bandwidth of 0.8 octaves for the directional decomposition.
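A minimal sketch of such a directional decomposition is given below. It assumes a single bandpass frequency and uses scikit-image's Gabor kernels; the frequency value and the helper function are illustrative choices, not part of Fleet and Jepson's original filter design.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel

def directional_components(image, frequency=0.2, n_orientations=6):
    """Quadrature (even, odd) responses for six orientations (30 degree spacing)."""
    g = np.asarray(image, dtype=float)
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations            # 0, 30, 60, ... degrees
        k = gabor_kernel(frequency, theta=theta, bandwidth=0.8)
        even = convolve(g, np.real(k))                # q_plus component
        odd = convolve(g, np.imag(k))                 # q_minus component
        responses.append((even, odd))
    return responses
```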

Alternatively, a bandpass decomposition and a Hilbert filter (Section 13.4.2) can be used. The motivation for this idea is the fact that the decomposition with a set of Gabor filters, as proposed by Fleet and Jepson, does not allow easy reconstruction of the original image. The transfer functions of the Gabor filter series do not add up to a unit transfer function but show considerable ripples, as shown by Riemer [154].

A bandpass decomposition, for example using a Laplacian pyramid [15, 16], does not share this disadvantage (Section 5.3.3). In addition, it is computationally more efficient. However, we are faced with the problem that no directional decomposition is gained.

Jähne [78, 79] showed how the concept of the Laplacian pyramid can effectively be extended into a directio-pyramidal decomposition. Each level of the pyramid is further decomposed into two or four directional components which add up directly to the corresponding isotropically filtered pyramid level (see also Section 5.3.4).

 

14.7.3 From Normal Flow to 2-D Flow‡

As the phase method gives only the normal optical flow, a technique is required to determine the two-dimensional optical flow from the normal flow. The basic relation between the normal and 2-D flow is as follows. We assume that $f_{\perp}$ is a normal flow vector. It is a result of the projection of the 2-D flow vector $\boldsymbol{f}$ in the direction of the normal flow. Thus we can write:

$$f_{\perp} = \bar{\boldsymbol{f}}_{\perp}^{\,\mathrm{T}}\, \boldsymbol{f}, \qquad (14.84)$$
where $\bar{\boldsymbol{f}}_{\perp}$ is a unit vector in the direction of the normal flow. From Eq. (14.84), it is obvious that we can determine the unknown 2-D optical flow in a least squares approach if we have more than two estimates of the normal flow in different directions. In a similar way as in Section 14.3.2, this approach yields the linear equation system


$$\begin{bmatrix} \left\langle \bar{f}_{\perp x}\,\bar{f}_{\perp x} \right\rangle & \left\langle \bar{f}_{\perp x}\,\bar{f}_{\perp y} \right\rangle \\ \left\langle \bar{f}_{\perp x}\,\bar{f}_{\perp y} \right\rangle & \left\langle \bar{f}_{\perp y}\,\bar{f}_{\perp y} \right\rangle \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} = \begin{bmatrix} \left\langle \bar{f}_{\perp x}\, f_{\perp} \right\rangle \\ \left\langle \bar{f}_{\perp y}\, f_{\perp} \right\rangle \end{bmatrix} \qquad (14.85)$$

with

$$\left\langle \bar{f}_{\perp p}\,\bar{f}_{\perp q} \right\rangle = \int w(\boldsymbol{x} - \boldsymbol{x}', t - t')\, \bar{f}_{\perp p}\,\bar{f}_{\perp q}\; \mathrm{d}^2 x'\, \mathrm{d}t' \qquad (14.86)$$

and

$$\left\langle \bar{f}_{\perp p}\, f_{\perp} \right\rangle = \int w(\boldsymbol{x} - \boldsymbol{x}', t - t')\, \bar{f}_{\perp p}\, f_{\perp}\; \mathrm{d}^2 x'\, \mathrm{d}t'. \qquad (14.87)$$
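The following is a minimal sketch, with an assumed helper name, of how the weighted least-squares system Eqs. (14.85)–(14.87) can be solved at a single location once several normal-flow estimates with unit directions $\bar{\boldsymbol{f}}_{\perp}$ and magnitudes $f_{\perp}$ are available; the discrete weights stand in for the integration kernel w.

```python
import numpy as np

def flow_from_normal_flows(directions, magnitudes, weights=None):
    """Recover the 2-D flow from normal-flow measurements (Eqs. 14.85-14.87).

    directions: (N, 2) unit vectors, magnitudes: (N,) scalars, weights: (N,) or None.
    """
    d = np.asarray(directions, dtype=float)
    m = np.asarray(magnitudes, dtype=float)
    w = np.ones(len(m)) if weights is None else np.asarray(weights, dtype=float)

    # Accumulate the weighted 2x2 matrix and right-hand side of Eq. (14.85)
    A = np.einsum('n,ni,nj->ij', w, d, d)   # <f_perp_p f_perp_q>
    b = np.einsum('n,ni,n->i', w, d, m)     # <f_perp_p f_perp>

    # The system is singular if all normal directions are parallel
    # (aperture problem); lstsq then returns the minimum-norm solution.
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Usage: two orthogonal normal-flow measurements determine the flow uniquely.
f = flow_from_normal_flows([[1, 0], [0, 1]], [0.5, -0.2])   # -> [0.5, -0.2]
```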


 

 

14.8 Further Readings‡

 

The following monographs on motion analysis are available: Singh [172], Fleet [43], and Jähne [80]. A good survey of motion analysis can also be found in the review articles of Beauchemin and Barron [6] and Jähne and Haußecker [82, Chapter 10]. The latter article also includes the estimation of higher-order motion fields. Readers interested in visual detection of motion in biological systems are referred to the monograph edited by Smith and Snowden [173]. The extension of motion analysis to the estimation of parameters of dynamic processes and illumination variation is described in Haußecker and Fleet [65] and Haußecker [64].


 

 






















15 Texture

15.1 Introduction

In Chapters 11 and 12 we studied smoothing and edge detection and in Chapter 13 simple neighborhoods. In this chapter, we will take these important building blocks and extend them to analyze complex patterns, known as texture in image processing. Actually, textures demonstrate the difference between an artificial world of objects, whose surfaces are only characterized by their color and reflectivity properties, and that of real-world imagery.

Our visual system is capable of recognizing and distinguishing texture with ease, as can be seen from Fig. 15.1. It appears to be a much more difficult task to characterize and distinguish the rather “diffuse” properties of the texture with some precisely defined parameters that allow a computer to perform this task.

In this chapter we systematically investigate operators to analyze and differentiate between textures. These operators are able to describe even complex patterns with just a few characteristic figures. We thereby reduce the texture recognition problem to the simple task of distinguishing gray values.

How can we define a texture? An arbitrary pattern that extends over a large area in an image is certainly not recognized as a texture. Thus the basic property of a texture is a small elementary pattern which is repeated periodically or quasi-periodically in space, like a pattern on a wallpaper. Thus, it is sufficient to describe the small elementary pattern and the repetition rules. The latter give the characteristic scale of the texture.

Texture analysis can be compared to the analysis of the structure of solids, a research area studied in solid state physics, chemistry, and mineralogy. A solid state physicist must find out the repetition pattern and the distribution of atoms in the elementary cell. Texture analysis is complicated by the fact that both the patterns and periodic repetition may show significant random fluctuation, as shown in Fig. 15.1.

Textures may be organized in a hierarchical manner, i.e., they may look quite different at different scales. A good example is the curtain shown in Fig. 15.1a. On the finest scale our attention is focused on the individual threads (Fig. 15.2a). Then the characteristic scale is the thickness of the threads. They also have a predominant local orientation.


 


Figure 15.1: Examples of textures: a curtain; b wood; c dog fur; d woodchip paper; e, f clothes.

 

On the next coarser level, we will recognize the meshes of the net (Fig. 15.2b). The characteristic scale here shows the size of the meshes. At this level, the local orientation is well distributed. Finally, at an even coarser level, we no longer recognize the individual meshes, but observe the folds of the curtain (Fig. 15.2c). They are characterized by yet another characteristic scale, showing the period of the folds and their orientation. These considerations emphasize the importance of multiscale texture analysis.



 


Figure 15.2: Hierarchical organization of texture demonstrated by showing the image of the curtain in Fig. 15.1a at different resolutions.

 

Thus multiscale data structures as discussed in the first part of this book (Chapter 5) are essential for texture analysis.

Generally, two classes of texture parameters are of importance. Texture parameters may or may not be rotation and scale invariant. This classification is motivated by the task we have to perform.

Imagine a typical industrial or scientific application in which we want to recognize objects that are randomly oriented in the image. We are not interested in the orientation of the objects but in their distinction from each other. Therefore, texture parameters that depend on orientation are of no interest. We might still use them but only if the objects have a characteristic shape which then allows us to determine their orientation. We can use similar arguments for scale-invariant features. If the objects of interest are located at different distances from the camera, the texture parameter used to recognize them should also be scale invariant. Otherwise the recognition of the object will depend on distance. However, if the texture changes its characteristics with the scale, as in the example of the curtain in Fig. 15.1a, the scale-invariant texture features may not exist at all. Then the use of textures to characterize objects at different distances becomes a difficult task.

In the examples above, we were interested in the objects themselves but not in their orientation in space. The orientation of surfaces is a key feature for another image processing task, the reconstruction of a three-dimensional scene from a two-dimensional image. If we know that the surface of an object shows a uniform texture, we can analyze the orientation and scales of the texture to find the orientation of the surface in space. For this, the characteristic scales and orientations of the texture are needed.

Texture analysis is one of those areas in image processing which still lacks fundamental knowledge. Consequently, the literature contains many different empirical and semiempirical approaches to texture analysis. Here these approaches are not reiterated. In contrast, a rather simple approach to texture analysis is presented which builds complex texture operators from elementary operators.



 

For texture analysis only four fundamental texture operators are used:

• mean,

• variance,

• orientation,

• scale,

which are applied at different levels of the hierarchy of the image processing chain. Once we have, say, computed the local orientation and the local scale, the mean and variance operators can be applied again, now not to the gray values but to the local orientation and local scale.

These four basic texture operators can be grouped in two classes. The mean and variance are rotation and scale independent, while the orientation and scale operators just determine the orientation and scale, respectively. This important separation between parameters invariant and variant to scale and rotation significantly simplifies texture analysis. The power of this approach lies in the simplicity and orthogonality of the parameter set and the possibility of applying it hierarchically.

 


15.2 First-Order Statistics

15.2.1 Basics

All texture features based on first-order statistics of the gray value distributions are by definition invariant under any permutation of the pixels. Therefore they do not depend on the orientation of objects and, as long as fine-scale features do not disappear at coarse resolutions, on the scale of the object. Consequently, this class of texture parameter is rotation and scale invariant.

The invariance of first-order statistics to pixel permutations has, however, a significant drawback. Textures with different spatial arrangements but the same gray value distribution cannot be distinguished. Here is a simple example. A texture with equally wide black and white stripes and a texture with a black and white chess board have the same bimodal gray value distribution but a completely different spatial arrangement of the texture.
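A minimal numeric illustration of this example (not from the book): a stripe pattern and a checkerboard with the same period contain exactly the same number of black and white pixels, so their gray value histograms are identical and no first-order statistic can separate them.

```python
import numpy as np

n, period = 64, 8
y, x = np.mgrid[0:n, 0:n]
stripes = ((x // period) % 2).astype(np.uint8) * 255
checker = (((x // period) + (y // period)) % 2).astype(np.uint8) * 255

hist_s = np.bincount(stripes.ravel(), minlength=256)
hist_c = np.bincount(checker.ravel(), minlength=256)
print(np.array_equal(hist_s, hist_c))   # True: identical first-order statistics
```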

Thus many textures cannot be distinguished by parameters based on first-order statistics. Other classes of texture parameter are required in addition for a better distinction of different textures.

 

15.2.2 Local Variance

All parameters that are derived from the statistics of the gray values of individual pixels are basically independent of the orientation of the objects. In Section 3.2.2 we learnt to characterize the gray value distribution by the mean, variance, and higher moments. To be suitable for texture analysis, the estimates of these parameters need to be averaged over a local neighborhood. This leads to a new operator estimating the local variance.

In the simplest case, we can select a mask and compute the parameters only from the pixels contained in this window M. The variance operator, for example, is then given by

$$v_{mn} = \frac{1}{P - 1} \sum_{m', n' \in M} \left( g_{m-m',\, n-n'} - \bar{g}_{mn} \right)^2. \qquad (15.1)$$


The sum runs over the P image points of the window. The expression $\bar{g}_{mn}$ denotes the mean of the gray values at the point $[m, n]^{\mathrm{T}}$, computed over the same window M:

$$\bar{g}_{mn} = \frac{1}{P} \sum_{m', n' \in M} g_{m-m',\, n-n'}. \qquad (15.2)$$


 

It is important to note that the variance operator is nonlinear. However, it resembles the general form of a neighborhood operation, a convolution. Combining Eqs. (15.1) and (15.2), we can show that the variance operator is a combination of linear convolution and nonlinear point operations:


$$V_{mn} = \frac{1}{P - 1} \left[ \sum_{m', n' \in M} g^2_{m-m',\, n-n'} - \frac{1}{P} \left( \sum_{m', n' \in M} g_{m-m',\, n-n'} \right)^{2} \right], \qquad (15.3)$$

or, in operator notation,

 

V = R(I · I) − (R · R).                                       (15.4)

The operator R denotes a smoothing over all the image points with a box filter of the size of the window M. The operator I is the identity operator. Therefore the operator I · I performs a nonlinear point operation, namely the squaring of the gray values at each pixel. Finally, the variance operator subtracts the square of a smoothed gray value from the smoothed squared gray values. From the discussions on smoothing in Section 11.3 we know that a box filter is not an appropriate smoothing filter. Thus we obtain a better variance operator if we replace the box filter R with a binomial filter B:

V = B(I · I) − (B · B).                                        (15.5)
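A minimal sketch of Eq. (15.5) in code, using a Gaussian smoothing as a stand-in for the binomial mask B; the function name and the default width are illustrative choices, not taken from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_variance(image, sigma=2.0):
    """Local variance: smoothed squared gray values minus squared smoothed values."""
    g = np.asarray(image, dtype=float)
    mean = gaussian_filter(g, sigma)          # B applied to the image
    mean_sq = gaussian_filter(g * g, sigma)   # B applied to the squared image, B(I . I)
    return np.maximum(mean_sq - mean * mean, 0.0)  # clip tiny negatives from rounding
```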

We know the variance operator to be isotropic. It is also scale independent if the window is larger than the largest scales in the textures and if no fine scales of the texture disappear because the objects are located further away from the camera. This suggests that a scale-invariant texture operator only exists if the texture itself is scale invariant.



 













Figure 15.3: Variance operator applied to different images: a Fig. 11.6a; b Fig. 15.1e; c Fig. 15.1f; d Fig. 15.1d.

 

 


The application of the variance operator Eq. (15.5) with $B^{16}$ to several images is shown in Fig. 15.3. In Fig. 15.3a, the variance operator turns out to be an isotropic edge detector, because the original image contains areas with more or less uniform gray values.

The other three examples in Fig. 15.3 show variance images from textured surfaces. The variance operator can distinguish the areas with the fine horizontal stripes in Fig. 15.1e from the more uniform surfaces. They appear as uniform bright areas in the variance image (Fig. 15.3b). The variance operator cannot distinguish between the two textures in Fig. 15.3c. As the resolution is still finer than the characteristic repetition scale of the texture, the variance operator does not give a uniform estimate of the variance in the texture. The woodchip paper (Fig. 15.3d) also gives a non-uniform response to the variance operator because the pattern shows significant random fluctuations.



 


Figure 15.4: Coherence of local orientation of a piece of cloth with regions of horizontal stripes (Fig. 15.1e), b dog fur (Fig. 15.1c), c curtain (Fig. 15.1a), and d woodchip wallpaper.

 

15.2.3 Higher Moments

Besides the variance, we could also use the higher moments of the gray value distribution as defined in Section 3.2.2 for a more detailed description. The significance of this approach may be illustrated with examples of two quite different gray value distributions, a normal and a bimodal distribution:


$$p(g) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(g - \bar{g})^2}{2\sigma^2} \right), \qquad p'(g) = \frac{1}{2}\left[ \delta(g - \bar{g} + \sigma) + \delta(g - \bar{g} - \sigma) \right].$$
Both distributions show the same mean and variance. Because both distributions are of even symmetry, all odd central moments are zero. Thus the third moment (skewness) is also zero. However, the fourth and all higher-order even moments of the two distributions are different.
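A minimal numeric check of this statement (not from the book): sampling from the Gaussian and from the two-delta distribution with the same σ gives matching variances, while the fourth central moments differ (3σ⁴ for the Gaussian versus σ⁴ for the bimodal case).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
gauss = rng.normal(0.0, sigma, 1_000_000)          # samples from p(g) with g_bar = 0
bimodal = rng.choice([-sigma, sigma], 1_000_000)   # samples from p'(g) with g_bar = 0

for name, samples in [("gaussian", gauss), ("bimodal", bimodal)]:
    c = samples - samples.mean()
    print(name, round(c.var(), 3), round(np.mean(c**4), 3))
# variances agree (~4.0); fourth moments differ (~48 vs ~16)
```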



 

