

Normal and Binomial Distributions



Many processes with continuous RVs can be adequately described by the normal or Gaussian probability density N(µ, σ) with the mean µ and the variance σ²:


$$N(\mu, \sigma)\colon \quad f(g) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(g-\mu)^2}{2\sigma^2}\right). \qquad (3.40)$$
From Eq. (3.40) we can see that the normal distribution is completely described by the mean and the variance.

The discrete analogue to the normal distribution is the binomial distribution B(Q, p):

$$B(Q, p)\colon \quad f_q = \frac{Q!}{q!\,(Q-q)!}\, p^q (1-p)^{Q-q}, \qquad 0 \le q \le Q. \qquad (3.41)$$
The natural number Q denotes the number of possible outcomes, and the parameter p ∈ ]0, 1[ determines, together with Q, the mean and the variance:

$$\mu = Qp \quad\text{and}\quad \sigma^2 = Qp(1-p). \qquad (3.42)$$

Even for moderate Q, the binomial distribution comes very close to the Gaussian distribution as illustrated in Fig. 3.3b.
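This convergence is easy to check numerically. The following sketch (the values Q = 50 and p = 0.5 are illustrative choices, not from the text) compares the binomial probabilities of Eq. (3.41) with the Gaussian density of Eq. (3.40), using the matching mean and variance from Eq. (3.42):

```python
import math

def binomial_pmf(q, Q, p):
    # Eq. (3.41): f_q = Q!/(q!(Q-q)!) p^q (1-p)^(Q-q)
    return math.comb(Q, q) * p**q * (1 - p)**(Q - q)

def normal_pdf(g, mu, sigma):
    # Eq. (3.40): Gaussian density with mean mu and standard deviation sigma
    return math.exp(-(g - mu)**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

Q, p = 50, 0.5
mu, var = Q * p, Q * p * (1 - p)   # Eq. (3.42): mu = 25, var = 12.5
for q in (20, 25, 30):
    print(q, binomial_pmf(q, Q, p), normal_pdf(q, mu, math.sqrt(var)))
```

Already at Q = 50 the two curves agree to about three decimal places near the mean.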

In extension to Eq. (3.40), the joint normal PDF N(µ, C) for multiple RVs, i.e., for the random vector g with the mean µ and the covariance matrix C, is given by


 

$$N(\boldsymbol{\mu}, \mathbf{C})\colon \quad f(\boldsymbol{g}) = \frac{1}{(2\pi)^{P/2}\sqrt{\det \mathbf{C}}} \exp\left(-\frac{(\boldsymbol{g}-\boldsymbol{\mu})^T \mathbf{C}^{-1} (\boldsymbol{g}-\boldsymbol{\mu})}{2}\right). \qquad (3.43)$$
At first glance this expression looks horribly complex. It is not. We must just consider that the symmetric covariance matrix becomes a diagonal matrix by rotation into its principal-axis system. Then the joint


90                                                                         3 Random Variables and Fields

 

normal density function becomes a separable function

$$f(\boldsymbol{g}') = \prod_{p'=1}^{P} \frac{1}{(2\pi \sigma_{p'}^2)^{1/2}} \exp\left(-\frac{(g'_{p'} - \mu'_{p'})^2}{2\sigma_{p'}^2}\right) \qquad (3.44)$$

with the variances σ²_{p'} along the principal axes (Fig. 3.4a); the components g'_{p'} are independent RVs.
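The diagonalization can be verified numerically. The sketch below is my own 2-D construction (the correlated pair, the helper `cov`, and all parameter values are illustrative): it generates correlated Gaussian pairs, rotates them into the principal-axis system of their sample covariance matrix, and confirms that the rotated components are uncorrelated.

```python
import math
import random

random.seed(4)
N = 20000

# Build a correlated 2-D Gaussian: g2 depends on g1, so C is not diagonal.
pairs = []
for _ in range(N):
    g1 = random.gauss(0, 1)
    g2 = 0.8 * g1 + random.gauss(0, 0.6)
    pairs.append((g1, g2))

def cov(xs, ys):
    # Sample covariance of two equally long sequences
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

x = [p[0] for p in pairs]
y = [p[1] for p in pairs]
c11, c22, c12 = cov(x, x), cov(y, y), cov(x, y)

# Rotation angle that diagonalizes the symmetric 2x2 covariance matrix
theta = 0.5 * math.atan2(2 * c12, c11 - c22)
ct, st = math.cos(theta), math.sin(theta)
xr = [ct * a + st * b for a, b in pairs]
yr = [-st * a + ct * b for a, b in pairs]

print(cov(xr, yr))  # ~0: the rotated components are uncorrelated
```

In the principal-axis system the cross-covariance vanishes, so the joint density factors into the product of Eq. (3.44).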

For uncorrelated RVs with equal variance σ 2, the N( µ, C ) distribution reduces to the isotropic normal PDF N( µ, σ ) (Fig. 3.4b):


$$N(\boldsymbol{\mu}, \sigma)\colon \quad f(\boldsymbol{g}) = \frac{1}{(2\pi\sigma^2)^{P/2}} \exp\left(-\frac{\left|\boldsymbol{g}-\boldsymbol{\mu}\right|^2}{2\sigma^2}\right). \qquad (3.45)$$


Central Limit Theorem

The central importance of the normal distribution stems from the central limit theorem (Theorem 6, p. 54), which we discussed with respect to cascaded convolution in Section 2.3.5. Here we emphasize its significance for RVs in image processing. The central limit theorem states that, under conditions that are almost always met in image processing applications, the PDF of a sum of RVs tends to a normal distribution. As we discussed in Section 3.3, in image processing weighted sums over many values are often computed. Consequently, these combined variables have a normal PDF.
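A minimal simulation illustrates the theorem (the choice of twelve uniform summands is my own; any sum of independent RVs with finite variance works): the sum of twelve U(0,1) RVs has mean 6 and variance 1, and its PDF is already close to N(6, 1).

```python
import random
import statistics

random.seed(0)
N = 20000

# Each sample is a sum of 12 independent U(0,1) RVs:
# mean = 12 * 0.5 = 6, variance = 12 * (1/12) = 1.
sums = [sum(random.random() for _ in range(12)) for _ in range(N)]

print(statistics.mean(sums))      # ≈ 6
print(statistics.variance(sums))  # ≈ 1

# For a normal PDF, about 68.3 % of the samples lie within one sigma of the mean.
frac = sum(abs(s - 6) < 1 for s in sums) / N
print(frac)                       # ≈ 0.683
```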

 




Other Distributions

Despite the significance of the normal distribution, other probability density functions also play a certain role in image processing. They occur when RVs are combined by nonlinear functions.

As a first example, we discuss the conversion from Cartesian to polar coordinates. We take the random vector g = [g₁, g₂]ᵀ with independent N(0, σ)-distributed components. Then it can be shown [134, Section 6.3] that the magnitude of this vector r = (g₁² + g₂²)^{1/2} and the polar angle φ = arctan(g₂/g₁) are independent random variables. The magnitude has a Rayleigh density


 

$$R(\sigma)\colon \quad f(r) = \frac{r}{\sigma^2} \exp\left(-\frac{r^2}{2\sigma^2}\right) \quad \text{for } r > 0 \qquad (3.46)$$

with the mean and variance

$$\mu_R = \sigma\sqrt{\pi/2} \quad\text{and}\quad \sigma_R^2 = \frac{4-\pi}{2}\,\sigma^2, \qquad (3.47)$$

and the angle φ has a uniform density

$$f(\varphi) = \frac{1}{2\pi}. \qquad (3.48)$$
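A short simulation can verify Eq. (3.47) (a sketch with illustrative parameters σ = 2 and N = 20000, not values from the text):

```python
import math
import random
import statistics

random.seed(1)
sigma, N = 2.0, 20000

# Magnitude of a 2-D vector with independent N(0, sigma) components
r = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma)) for _ in range(N)]

mu_R = sigma * math.sqrt(math.pi / 2)   # Eq. (3.47): ≈ 2.507
var_R = (4 - math.pi) / 2 * sigma**2    # Eq. (3.47): ≈ 1.717
print(statistics.mean(r), mu_R)
print(statistics.variance(r), var_R)
```

The sample mean and variance of the magnitudes agree with the Rayleigh values to within the statistical fluctuation of the simulation.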


3.4 Probability Density Functions                                                    91


 

Figure 3.5: a Chi density for 2 (Rayleigh density), 3 (Maxwell density), and higher degrees of freedom as indicated; b chi-square density in a normalized plot (mean at one) with degrees of freedom as indicated.

 

In generalization of the Rayleigh density, we consider the magnitude of a P-dimensional vector. It has a chi density with P degrees of freedom

 


 

$$\chi(P, \sigma)\colon \quad f(r) = \frac{2\,r^{P-1}}{2^{P/2}\,\Gamma(P/2)\,\sigma^P} \exp\left(-\frac{r^2}{2\sigma^2}\right) \quad \text{for } r > 0 \qquad (3.49)$$

with the mean

$$\mu_\chi = \sigma\sqrt{2}\,\frac{\Gamma(P/2 + 1/2)}{\Gamma(P/2)} \approx \sigma\sqrt{P - 1/2} \quad \text{for } P \gg 1 \qquad (3.50)$$

and variance

$$\sigma_\chi^2 = \sigma^2 P - \mu_\chi^2 \approx \sigma^2/2 \quad \text{for } P \gg 1. \qquad (3.51)$$

The mean of the chi density increases with the square root of P while the variance is almost constant. For large degrees of freedom, the density quickly approaches the normal density N(σ√(P − 1/2), σ/√2) (Fig. 3.5a).
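The approximations in Eqs. (3.50) and (3.51) can be checked numerically (a sketch; σ = 1, P = 16, and N = 20000 are illustrative values):

```python
import math
import random
import statistics

random.seed(2)
sigma, P, N = 1.0, 16, 20000

# Magnitude of a P-dimensional vector of independent N(0, sigma) components
r = [math.sqrt(sum(random.gauss(0, sigma)**2 for _ in range(P))) for _ in range(N)]

# Eq. (3.50): exact mean via gamma functions vs. the large-P approximation
mu_exact = sigma * math.sqrt(2) * math.gamma(P / 2 + 0.5) / math.gamma(P / 2)
print(statistics.mean(r), mu_exact, sigma * math.sqrt(P - 0.5))

# Eq. (3.51): the variance is already close to sigma^2/2
print(statistics.variance(r), sigma**2 / 2)
```

Already for P = 16 the exact mean and the approximation σ√(P − 1/2) agree to three decimal places.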

The square of the magnitude of the vector has a different PDF because squaring is a nonlinear function (Section 3.2.3). Using Theorem 9, this PDF, known as the chi-square density with P degrees of freedom, can be computed as

$$\chi^2(P, \sigma)\colon \quad f(r) = \frac{r^{P/2-1}}{2^{P/2}\,\Gamma(P/2)\,\sigma^P} \exp\left(-\frac{r}{2\sigma^2}\right) \quad \text{for } r > 0 \qquad (3.52)$$

with the mean and variance

$$\mu_{\chi^2} = \sigma^2 P \quad\text{and}\quad \sigma_{\chi^2}^2 = 2\sigma^4 P. \qquad (3.53)$$
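Eq. (3.53) is likewise easy to verify by simulation (a sketch; σ = 1.5 and P = 8 are illustrative values):

```python
import random
import statistics

random.seed(5)
sigma, P, N = 1.5, 8, 20000

# Squared magnitude of a P-dimensional vector of N(0, sigma) components
r2 = [sum(random.gauss(0, sigma)**2 for _ in range(P)) for _ in range(N)]

print(statistics.mean(r2))      # ≈ sigma^2 * P = 18       Eq. (3.53)
print(statistics.variance(r2))  # ≈ 2 * sigma^4 * P = 81
```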
The sum of squares of RVs is of special importance for estimating the error of the sample variance

$$s^2 = \frac{1}{P-1}\sum_{p=1}^{P}\left(g_p - \bar{g}\right)^2 \quad\text{with}\quad \bar{g} = \frac{1}{P}\sum_{p=1}^{P} g_p. \qquad (3.54)$$



 

Figure 3.6: Measured noise variance σ² as a function of the gray value g (image courtesy of H. Gröning).

 

Papoulis [134, Section 8.2] shows that the normalized sample variance

 

$$(P-1)\,\frac{s^2}{\sigma^2} = \frac{1}{\sigma^2}\sum_{p=1}^{P}\left(g_p - \bar{g}\right)^2 \qquad (3.55)$$
has a chi-square density with P − 1 degrees of freedom. Thus the mean of the sample variance is σ² (unbiased estimate) and its variance is 2σ⁴/(P − 1). For low degrees of freedom, the chi-square density shows significant deviations from the normal density (Fig. 3.5b). For more than 30 degrees of freedom the density is in good approximation normally distributed. A reliable estimate of the variance requires many measurements. For P = 100, the relative standard deviation of the variance is still about 20 % (for the standard deviation of the standard deviation it is half, 10 %).
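The mean and variance of the sample-variance estimate can be checked by simulation (a sketch; σ = 1, P = 100, and the number of trials are illustrative choices):

```python
import random
import statistics

random.seed(3)
sigma, P, trials = 1.0, 100, 5000

# Repeat the sample-variance estimate of Eq. (3.54) many times,
# each time from P measurements of an N(0, sigma) random variable.
estimates = [statistics.variance([random.gauss(0, sigma) for _ in range(P)])
             for _ in range(trials)]

print(statistics.mean(estimates))      # ≈ sigma^2 = 1 (unbiased)
print(statistics.variance(estimates))  # ≈ 2*sigma^4/(P-1) ≈ 0.0202
```

The spread of the estimates matches the chi-square prediction 2σ⁴/(P − 1), confirming that even 100 measurements leave a substantial uncertainty in the variance.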

 

