

Bias by Uneven Illumination



In this section, we study how a nonuniform background and a varying object brightness bias the various segmentation techniques. We assume that the object edge can be modeled adequately by a step edge that is blurred by a point spread function h(x) with even symmetry. For the sake of simplicity, we model the 1-D case. Then the brightness of the object in the image, with an edge at the origin, can be written as


$$g(x) = g_0 \int_{-\infty}^{x} h(x')\,\mathrm{d}x' \quad\text{with}\quad \int_{-\infty}^{\infty} h(x)\,\mathrm{d}x = 1. \qquad (16.1)$$


We further assume that the background intensity can be modeled by a parabolic variation of the form

$$b(x) = b_0 + b_1 x + b_2 x^2. \qquad (16.2)$$

Then the total intensity in the image is given by

$$g(x) = g_0 \int_{-\infty}^{x} h(x')\,\mathrm{d}x' + b_0 + b_1 x + b_2 x^2. \qquad (16.3)$$

The first and second derivatives are


$$g_x(x) = g_0 h(x) + b_1 + 2 b_2 x, \qquad g_{xx}(x) = g_0 h_x(x) + 2 b_2. \qquad (16.4)$$



Around its maximum, we can approximate the point spread function h(x) by a parabola, $h(x) \approx h_0 - h_2 x^2$. Then

$$g_x(x) \approx g_0 h_0 - g_0 h_2 x^2 + b_1 + 2 b_2 x, \qquad g_{xx}(x) \approx -2 g_0 h_2 x + 2 b_2. \qquad (16.5)$$


 

The position of the edge is given by the zero crossing of the second derivative. Setting $g_{xx}(x_b) = 0$ in Eq. (16.5), the bias $x_b$ in the estimated edge position is

$$x_b = \frac{b_2}{g_0 h_2}. \qquad (16.6)$$

From Eq. (16.6) we can conclude:

1. Edge-based segmentation shows no bias in the edge position even if the background intensity is sloped.

2. In contrast to intensity-based segmentation (Section 16.2), edge-based segmentation shows no bias caused by the intensity g0 of the edge itself.

3. Edge-based segmentation is biased only by a curvature in the background intensity. The bias is directly related to the ratio of the curvature of the background intensity to the maximum curvature of the point spread function. This means that the bias is higher for blurred edges. The bias is also inversely proportional to the intensity of the object and thus seriously affects only objects with weak contrast (see the numerical sketch below).
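The following short numerical sketch illustrates Eq. (16.6). It assumes a Gaussian point spread function and arbitrarily chosen values for g0, b1, and b2 (none of these numbers come from the text). It builds the intensity profile of Eq. (16.3), locates the zero crossing of the second derivative, and compares the measured shift of the edge position with the bias predicted by Eq. (16.6).

```python
import numpy as np

# Illustrative parameters (assumed, not from the text): Gaussian PSF of
# width sigma, edge amplitude g0, parabolic background b0 + b1*x + b2*x^2.
sigma, g0 = 2.0, 1.0
b0, b1, b2 = 10.0, 0.5, 0.005

x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]

h = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))  # PSF h(x)
edge = g0 * np.cumsum(h) * dx                 # blurred step edge, Eq. (16.1)
g = edge + b0 + b1 * x + b2 * x**2            # total intensity, Eq. (16.3)

# Edge detector: zero crossing of the second derivative, searched close to
# the true edge position x = 0.
gxx = np.gradient(np.gradient(g, dx), dx)
near = np.abs(x) < sigma
x_measured = x[near][np.argmin(np.abs(gxx[near]))]

# Predicted bias, Eq. (16.6), with h(x) ~ h0 - h2*x^2 around the maximum;
# for a Gaussian PSF, h2 = h(0) / (2*sigma^2).
h2 = h[x.size // 2] / (2 * sigma**2)
x_predicted = b2 / (g0 * h2)

print(f"zero crossing found at x = {x_measured:.3f}")
print(f"bias predicted by (16.6): {x_predicted:.3f}")
```

Changing b1 leaves the zero crossing unaffected (conclusion 1), while increasing sigma or decreasing g0 increases the measured shift, as conclusion 3 states.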

 










Edge Tracking

Edge-based segmentation is a sequential method. In contrast to pixel-based and most region-based segmentations, it cannot be performed in parallel on all pixels. Rather, the next step to be performed depends on the results of the previous steps. A typical approach works as follows. The image is scanned line by line for maxima in the magnitude of the gradient. When a maximum is encountered, a tracing algorithm tries to follow the maximum of the gradient around the object until it reaches the starting point again. Then the search for the next maximum in the gradient begins. Like region-based segmentation, edge-based segmentation takes into account that an object is characterized by adjacent pixels.
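A highly simplified sketch of such a tracker is given below. It assumes a precomputed gradient-magnitude image and a user-chosen threshold (both assumptions of mine, not prescribed by the text); it scans for a strong gradient pixel and then follows the ridge of the gradient magnitude through the 8-neighborhood. A practical implementation would add sub-pixel localization, direction constraints, hysteresis thresholds, and gap closing.

```python
import numpy as np

def track_edges(grad_mag, threshold):
    """Sketch of a sequential edge tracker: scan line by line for a strong
    gradient maximum, then follow the ridge of the gradient magnitude
    through the 8-neighborhood until the contour closes or no strong
    unvisited neighbor remains."""
    rows, cols = grad_mag.shape
    visited = np.zeros_like(grad_mag, dtype=bool)
    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    contours = []
    for r0 in range(rows):
        for c0 in range(cols):
            if visited[r0, c0] or grad_mag[r0, c0] < threshold:
                continue
            contour, (r, c) = [(r0, c0)], (r0, c0)
            visited[r0, c0] = True
            while True:
                # candidate steps: unvisited neighbors inside the image
                cands = [(grad_mag[r + dr, c + dc], r + dr, c + dc)
                         for dr, dc in steps
                         if 0 <= r + dr < rows and 0 <= c + dc < cols
                         and not visited[r + dr, c + dc]]
                if not cands:
                    break
                gmax, r, c = max(cands)            # strongest neighbor
                if gmax < threshold:
                    break                          # gradient ridge lost
                visited[r, c] = True
                contour.append((r, c))
                if len(contour) > 2 and abs(r - r0) <= 1 and abs(c - c0) <= 1:
                    break                          # back at the start: closed
            contours.append(contour)
    return contours
```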

 

Region-Based Segmentation

Principles

Region-based methods focus our attention on an important aspect of the segmentation process that we missed with the point-based techniques. There we classified a pixel as an object pixel solely on the basis of its gray value, independently of its context. This meant that isolated points or small areas could be classified as object pixels, disregarding the fact that an important characteristic of an object is its connectivity.

In this section we will not discuss such standard techniques as split-and-merge or region-growing. Interested readers are referred to Rosenfeld and Kak [157] or Jain [86]. Here we rather discuss a technique that aims to solve one of the central problems of the segmentation process.

If we use not the original image but a feature image for the segmentation process, the features represent not a single pixel but a small neighborhood, depending on the mask sizes of the operators used. At the edges of the objects, however, where the mask includes pixels from both the object and the background, no useful feature can be computed. The correct procedure would be to limit the mask size at the edge to points of either the object or the background. But how can this be achieved if we can only distinguish the object and the background after the feature has been computed?

Obviously, this problem cannot be solved in one step, but only iteratively, using a procedure in which feature computation and segmentation are performed alternately. In principle, we proceed as follows. In the first step, we compute the features disregarding any object boundaries. Then we perform a preliminary segmentation and compute the features again, now using the segmentation results to limit the masks of the neighborhood operations at the object edges to either the object or the background pixels, depending on the location of the center pixel. To improve the results, we can repeat feature computation and segmentation until the procedure converges to a stable result.
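A minimal sketch of this alternating scheme is given below. It assumes the feature is simply a local mean and the segmentation a fixed global threshold; both choices, and the function names, are mine for illustration, and any feature operator or segmentation rule could be substituted.

```python
import numpy as np

def masked_mean(image, labels, radius=2):
    """Local mean in a (2*radius+1)^2 window, where the mask at each pixel
    is restricted to pixels that currently carry the same label as the
    center pixel."""
    rows, cols = image.shape
    out = np.empty((rows, cols), dtype=float)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
            c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
            window = image[r0:r1, c0:c1]
            same = labels[r0:r1, c0:c1] == labels[r, c]
            out[r, c] = window[same].mean()
    return out

def iterative_segmentation(image, threshold, max_iter=10):
    """Alternate feature computation and segmentation as described above."""
    labels = np.zeros(image.shape, dtype=int)     # one region: unrestricted masks
    feature = masked_mean(image, labels)          # features ignoring boundaries
    labels = (feature > threshold).astype(int)    # preliminary segmentation
    for _ in range(max_iter):
        feature = masked_mean(image, labels)      # masks limited at object edges
        new_labels = (feature > threshold).astype(int)
        if np.array_equal(new_labels, labels):    # stable result reached
            return new_labels, feature
        labels = new_labels
    return labels, feature
```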

 


Pyramid Linking

Burt [15] suggested a pyramid-linking algorithm as an effective implementation of a combined segmentation and feature computation algorithm. We will demonstrate it using the illustrative example of a noisy step edge (Fig. 16.4). In this case, the computed feature is simply the mean gray value. The algorithm includes the following steps:

1. Computation of the Gaussian pyramid. As shown in Fig. 16.4a, the gray values of four neighboring pixels are averaged to form a pixel on the next higher level of the pyramid. This corresponds to a smoothing operation with a box filter.

2. Segmentation by pyramid-linking. As each pixel contributes to either of two pixels on the higher level, we can now decide to which of them it most likely belongs. The decision is simply made by comparing the gray values and choosing the father pixel with the closer value. The link is pictured in Fig. 16.4b by an edge connecting the two pixels. This procedure is repeated through all the levels of the pyramid. As a result, the links in the pyramid constitute a new data structure. Starting from the top of the pyramid, one pixel is connected with several pixels on the next lower level. Such a data structure is called a tree in computer science. The links are called edges; the data points are the gray values of the pixels and are denoted as nodes or vertices. The node at the highest level is called the root of the tree and the nodes with no further links are called the leaves of the tree. A node linked to a node at a lower level is denoted as the father node of this node. Correspondingly, each node linked to a node at a higher level is defined as the son node of this node.



 


Figure 16.4: Pyramid-linking segmentation procedure with a one-dimensional noisy edge: a computation of the Gaussian pyramid; b node linking; c recomputation of the mean gray values; d final result after several iterations of steps b and c.



 
























 


3. Averaging of linked pixels. Next, the resulting link structure is used to recompute the mean gray values, now using only the linked pixels (Fig. 16.4c), i.e., the new gray value of each father node is computed as the average gray value of all its son nodes. This procedure starts at the lowest level and is continued through all the levels of the pyramid.

The last two steps are repeated iteratively until we reach the stable result shown in Fig. 16.4d. An analysis of the link-tree shows the result of the segmentation procedure. In Fig. 16.4d we recognize two subtrees, which have their roots at the third level of the pyramid. At the next lower level, four subtrees originate, but the differences in the gray values at this level are significantly smaller. Thus we conclude that the gray value structure is obviously parted into two regions. We then obtain the final result of the segmentation procedure by transferring the gray values at the roots of the two subtrees to the linked nodes at the lowest level. These values are shown as braced numbers in Fig. 16.4d.
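The following sketch re-implements the three steps above for the one-dimensional case. It assumes pairwise averaging between levels and lets each node choose between its two nearest candidate fathers; it is a simplified reading of the procedure, not Burt's exact formulation, and the function and parameter names are mine.

```python
import numpy as np

def pyramid_link_1d(signal, n_regions=2, n_iter=10):
    """1-D pyramid-linking sketch following steps 1-3 above.
    n_regions selects the pyramid level whose nodes serve as subtree roots."""
    # Step 1: Gaussian pyramid -- average pairs of neighbors (box filter);
    # assumes the signal length is a power of two.
    pyramid = [np.asarray(signal, dtype=float)]
    while pyramid[-1].size > 1:
        lower = pyramid[-1]
        pyramid.append(0.5 * (lower[0::2] + lower[1::2]))

    links = [np.zeros(level.size, dtype=int) for level in pyramid[:-1]]
    for _ in range(n_iter):                        # iterate steps 2 and 3
        # Step 2: link each node to the candidate father with the closer value.
        for lv in range(len(pyramid) - 1):
            lower, upper = pyramid[lv], pyramid[lv + 1]
            for i, v in enumerate(lower):
                other = i // 2 + (1 if i % 2 else -1)
                cands = {i // 2, min(upper.size - 1, max(0, other))}
                links[lv][i] = min(cands, key=lambda j: abs(upper[j] - v))
        # Step 3: recompute each father as the mean of its linked sons.
        for lv in range(len(pyramid) - 1):
            lower, upper = pyramid[lv], pyramid[lv + 1]
            for j in range(upper.size):
                sons = lower[links[lv] == j]
                if sons.size:
                    upper[j] = sons.mean()

    # Transfer the gray values of the subtree roots down to the leaves.
    root_level = next(lv for lv, level in enumerate(pyramid)
                      if level.size <= n_regions)
    values = pyramid[root_level].copy()
    for lv in range(root_level - 1, -1, -1):
        values = values[links[lv]]                 # sons inherit the root value
    return values

# Applied to the noisy step edge of Fig. 16.4 (the exact numbers may differ
# slightly from the figure, since this is only a simplified variant):
edge = np.array([50, 46, 38, 34, 38, 54, 50, 58, 58, 50, 58, 66, 50, 58, 46, 54])
print(pyramid_link_1d(edge))
```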

The application of the pyramid-linking segmentation algorithm to two-dimensional images is shown in Fig. 16.5. Both examples illustrate that even very noisy images can be successfully segmented with this procedure. There is no restriction on the form of the segmented area.

Figure 16.5: Noisy a tank and c blood cell images segmented with the pyramid-linking algorithm into b two and d three regions, respectively; after Burt [15].

The pyramid-linking procedure merges the segmentation and the computation of mean features for the extracted objects in an efficient way by building a tree on a pyramid. It is also advantageous that we do not need to know the number of segmentation levels beforehand; they are contained in the structure of the tree. Further details of pyramid-linking segmentation are discussed in Burt et al. [17] and Pietikäinen and Rosenfeld [138].

 


Model-Based Segmentation

Introduction

All segmentation techniques discussed so far utilize only local information. In Section 1.6 (Fig. 1.16) we noted the remarkable ability of the human vision system to recognize objects even if they are not completely represented. It is obvious that the information that can be gathered from local neighborhood operators is not sufficient to perform this task. Instead, we require specific knowledge about the geometrical shape of the objects, which can then be compared with the local information.

This train of thought leads to model-based segmentation. It can be applied if we know the exact shape of the objects contained in the image. We consider here only the simplest case: straight lines.

 

