

Pyramidal Texture Analysis



The Laplace pyramid is an alternative to the local wave number operator, because it results in a bandpass decomposition of the image. This decomposition does not compute a local wave number directly, but we can obtain a series of images which show the texture at different scales. The variance operator takes a very simple form with a Laplace pyramid, as the mean gray value, except for the coarsest level, is zero:

V = B(L^(p) · L^(p)).                                            (15.6)
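
As an illustration only (not taken from the book), the following Python sketch builds a small Laplace pyramid with a separable 3×3 binomial mask and evaluates Eq. (15.6) at each bandpass level. The use of NumPy/SciPy, the mask size, the number of levels, and the synthetic test image are assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import convolve

# 2-D binomial smoothing mask B (3x3); a larger mask could be used
# both for pyramid construction and for Eq. (15.6).
B = np.outer([1, 2, 1], [1, 2, 1]) / 16.0

def smooth(img):
    return convolve(img, B, mode="reflect")

def laplace_pyramid(img, levels=4):
    """Bandpass decomposition: each level is the difference between the
    image at one resolution and its smoothed, re-expanded coarser version."""
    pyramid = []
    g = img.astype(float)
    for _ in range(levels):
        coarse = smooth(g)[::2, ::2]          # smooth and downsample by 2
        expanded = np.zeros_like(g)
        expanded[::2, ::2] = coarse           # upsample by zero insertion ...
        expanded = 4.0 * smooth(expanded)     # ... and interpolate
        pyramid.append(g - expanded)          # bandpass (Laplace) level
        g = coarse
    pyramid.append(g)                         # coarsest (lowpass) level
    return pyramid

def variance_operator(level):
    """Eq. (15.6): V = B(L^(p) . L^(p)); since the mean of a Laplace level
    is zero, smoothing the squared level yields the local variance."""
    return smooth(level * level)

# usage: texture features at several scales (bandpass levels only)
img = np.random.default_rng(0).normal(size=(256, 256))  # stand-in texture image
features = [variance_operator(lp) for lp in laplace_pyramid(img)[:-1]]
```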

Figure 15.7 demonstrates how the different textures from Fig. 15.1f appear at different levels of the Laplacian pyramid.

Figure 15.7: Application of the variance operator to levels 0 to 3 of the Laplace pyramid of the image from Fig. 15.1f.

In the two finest scales at the zeroth and first levels of the pyramid (Fig. 15.7a, b), the variance is dominated by the texture itself. The most pronounced feature is the variance around the dot-shaped stitches in one of the two textures.

At the second level of the Laplacian pyramid (Fig. 15.7c), the dot-shaped stitches are smoothed away and the variance becomes small in this texture, while the variance is still significant in the regions with the larger vertically and diagonally oriented stitches. Finally, the third level (Fig. 15.7d) is too coarse for both textures and thus dominated by the edges between the two texture regions, because they have a different mean gray value.

The Laplace pyramid is a very well adapted data structure for the analysis of hierarchically organized textures which may show different characteristics at different scales, as in the example of the curtain discussed in Section 15.1. In this way we can apply operators such as local variance and local orientation at each level of the pyramid. The simultaneous application of the variance and local orientation operators at multiple scales gives a rich set of features, which allows even complex hierarchically organized textures to be distinguished. It is important to note that applying these operations on all levels of the pyramid increases the number of computations only by a factor of 4/3 for 2-D images: each coarser level contains only a quarter as many pixels as the one below it, so the total work is bounded by the geometric series 1 + 1/4 + 1/16 + … = 4/3.

 

15.4 Further Readings‡

 

The textbooks of Jain [86, Section 9.11] and Pratt [142, Chapter 17] also deal with texture analysis. Further references for texture analysis are the monograph of Rao [146], the handbook by Jähne et al. [83, Vol. 2, Chapter 12], and the proceedings of the workshop on texture analysis edited by Burkhardt [13].


 

 



Part IV

Image Analysis


 


 

 



16 Segmentation

16.1 Introduction

All image processing operations discussed in the preceding chapters aimed at a better recognition of objects of interest, i.e., at finding suitable local features that allow us to distinguish them from other objects and from the background. The next step is to check each individual pixel to see whether it belongs to an object of interest or not. This operation is called segmentation and produces a binary image. A pixel has the value one if it belongs to the object; otherwise it is zero. Segmentation is the operation at the threshold between low-level image processing and image analysis. After segmentation, we know which pixel belongs to which object. The image is partitioned into regions, and we know the discontinuities as the boundaries between the regions. After segmentation, we can also analyze the shape of objects with operations such as those discussed in Chapter 19.

In this chapter, we discuss several types of elementary segmentation methods. We can distinguish several basic concepts for segmentation. Pixel-based methods (Section 16.2) use only the gray values of the individual pixels. Region-based methods (Section 16.4) analyze the gray values in larger areas. Finally, edge-based methods (Section 16.3) detect edges and then try to follow them. The common limitation of all these approaches is that they are based only on local information. Even then they use this information only partly. Pixel-based techniques do not even consider the local neighborhood. Edge-based techniques look only for discontinuities, while region-based techniques analyze homogeneous regions. In situations where we know the geometric shape of an object, model-based segmentation can be applied (Section 16.5). We discuss an approach to the Hough transform that works directly on gray scale images (Section 16.5.3).

 

16.2 Pixel-Based Segmentation

Point-based or pixel-based segmentation is conceptually the simplest approach we can take for segmentation. We may argue that it is also the best approach. Why? The reason is that instead of trying a complex segmentation procedure, we should rather first use the whole palette of techniques we have discussed so far in this book to extract those features that characterize an object in a unique way before we apply a segmentation procedure. It is always better to solve the problem at its root. If an image is unevenly illuminated, for instance, the first thing to do is to optimize the illumination of the scene. If this is not possible, the next step would be to identify the unevenness of the illumination system and to use corresponding image processing techniques to correct it. One possible technique has been discussed in Section 10.3.2.

Figure 16.1: Segmentation with a global threshold: a original image; b histogram; c–e upper right sector of a segmented with global thresholds of 110, 147, and 185, respectively.

If we have found a good feature to separate the object from the background, the histogram of this feature will show a bimodal distribution with two distinct maxima as in Fig. 16.1b. We cannot expect that the probability for gray values between the two peaks will be zero. Even if there is a sharp transition of gray values at the edge of the objects, there will always be some intermediate values due to a nonzero point spread function of the optical system and sensor (Sections 7.6.1 and 9.2.1). The smaller the objects are, the more area in the image is occupied by intermediate values filling the histogram in between the values for object and background (Fig. 16.1b).

How can we find an optimum threshold in this situation? In the case shown in Fig. 16.1, it appears to be easy because both the background and the object show rather uniform gray values. Thus we obtain a good segmentation for a large range of thresholds, between a low threshold of 110, where the objects start to get holes (Fig. 16.1c), and a high threshold of 185, close to the value of the background, where some background pixels are detected as object pixels.

Figure 16.2: Segmentation of an image with a graded background: a original image; b profile of column 55 (as marked in a); c–e first 64 columns of a segmented with global thresholds of 90, 120, and 150, respectively.

However, a close examination of Fig. 16.1c–e reveals that the size of the segmented objects changes significantly with the level of the threshold. Thus it is critical for a bias-free determination of the geometrical features of an object to select the correct threshold. This cannot be performed without knowledge about the type of the edge between the object and the background. In the simple case of a symmetrical edge, the correct threshold is given by the mean gray value between the background and the object pixels.
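
As a concrete illustration (not from the book), the following Python sketch places a single global threshold halfway between an estimated background and object gray value, as suggested above for symmetrical edges. The image array, the two gray-value estimates, and the assumption that objects are brighter than the background are hypothetical choices for the sketch.

```python
import numpy as np

def global_threshold(img, background_value, object_value):
    """Segment with one global threshold placed halfway between the
    background and object gray values, which avoids a size bias for
    symmetrical edges; invert the comparison for dark objects."""
    threshold = 0.5 * (background_value + object_value)
    # binary image: 1 for object pixels, 0 for background pixels
    return (img > threshold).astype(np.uint8)

# usage with hypothetical gray values read off the histogram peaks
img = np.random.default_rng(1).integers(0, 256, size=(128, 128)).astype(float)
binary = global_threshold(img, background_value=40.0, object_value=200.0)
```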

This strategy fails as soon as the background is not uniform, or if objects with different gray values are contained in the image (Figs. 16.2 and 16.3). In Fig. 16.2b, the segmented letters are thinner in the upper, brighter part of the image. Such a bias is acceptable for some applications such as the recognition of typeset letters. However, it is a serious flaw for any gauging of object sizes and related parameters.


Figure 16.3: Segmentation of an image with an uneven illumination: a original image with inhomogeneous background illumination (for histogram, see Fig. 10.10b); b profile of row 186 (as marked in a); c and d segmentation results with an optimal global threshold of the image in a before and after it is corrected for the inhomogeneous background (Fig. 10.10c), respectively.

 

Figure 16.3a shows an image with two types of circles that differ only in their brightness. The radiance of the brighter circles comes close to that of the background. Indeed, the histogram (Fig. 10.10b) shows that the gray values of these brighter circles no longer form a distinct maximum but overlap with the wide distribution of the background.

Consequently, global thresholding fails (Fig. 16.3c). Even with an optimal threshold, some of the background in the upper and lower right corners is segmented as objects and the brighter circles are still segmented only partly. If we first correct for the inhomogeneous illumination as illustrated in Fig. 10.10, all objects are segmented perfectly (Fig. 16.3d). We still have the problem, however, that the areas of the dark circles are too large because the segmentation threshold is too close to the background intensity.
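
The correction step referenced above (Section 10.3.2, not reproduced in this excerpt) can be approximated in several ways; the sketch below uses one common stand-in, dividing by a heavily smoothed copy of the image as an estimate of the slowly varying illumination, before applying a single global threshold. The Gaussian width, the threshold, and the synthetic image are hypothetical parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_background(img, sigma=50.0):
    """Estimate the slowly varying illumination with a wide Gaussian
    lowpass and divide it out, so that the background becomes flat.
    This is a generic stand-in for the correction of Section 10.3.2."""
    illumination = gaussian_filter(img.astype(float), sigma)
    return img / np.maximum(illumination, 1e-6)

def segment_dark_objects(img, threshold):
    """Binary image: 1 for object pixels darker than the flat background."""
    return (img < threshold).astype(np.uint8)

# usage with hypothetical parameters: flatten the background, then threshold
img = np.random.default_rng(2).uniform(50, 250, size=(256, 256))  # stand-in image
flat = correct_background(img, sigma=50.0)
binary = segment_dark_objects(flat, threshold=0.8)  # background is ~1 after division
```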



 


16.3 Edge-Based Segmentation

Principle

We have seen in Section 16.2 that even with perfect illumination, pixel-based segmentation results in a bias of the size of segmented objects when the objects show variations in their gray values (Figs. 16.2 and 16.3). Darker objects will become too small, brighter objects too large. The size variations result from the fact that the gray values at the edge of an object change only gradually from the background to the object value. No bias in the size occurs if we take the mean of the object and the background gray values as the threshold. However, this approach is only possible if all objects show the same gray value or if we apply a different threshold to each object.

An edge-based segmentation approach can be used to avoid a bias in the size of the segmented object without using a complex thresholding scheme. Edge-based segmentation is based on the fact that the position of an edge is given by an extremum of the first-order derivative or a zero crossing in the second-order derivative (Fig. 12.1). Thus all we have to do is to search for local maxima in the edge strength and to trace the maximum along the edge of the object.
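
As a minimal illustration (my own sketch, not the book's algorithm), the following Python code computes the edge strength as the magnitude of Sobel first-order derivatives and marks pixels whose edge strength is a local maximum in a 3×3 neighborhood; these candidate pixels are the starting point for tracing an object contour. The noise threshold and the synthetic step-edge image are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel, maximum_filter

def edge_strength(img):
    """First-order derivatives (Sobel) and their magnitude."""
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy)

def local_edge_maxima(img, min_strength=10.0):
    """Candidate edge pixels: edge strength is a local maximum within a
    3x3 neighborhood and exceeds a small noise threshold; tracing these
    maxima along the object boundary yields the contour."""
    g = edge_strength(img)
    is_max = (g == maximum_filter(g, size=3))
    return is_max & (g > min_strength)

# usage on a synthetic step edge between background and object
img = np.zeros((64, 64))
img[:, 32:] = 100.0
edges = local_edge_maxima(img)
```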

 

