Normal and Binomial Distributions
Many processes with continuous RVs can adequately be described by the normal or Gaussian probability density $N(\mu, \sigma)$ with the mean $\mu$ and the variance $\sigma^2$:

$$N(\mu, \sigma):\quad f(g) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(g-\mu)^2}{2\sigma^2}\right). \tag{3.40}$$
From Eq. (3.40) we can see that the normal distribution is completely described by the mean and the variance. The discrete analogue to the normal distribution is the binomial distribution $B(Q, p)$:

$$B(Q, p):\quad f(q) = \frac{Q!}{q!\,(Q-q)!}\, p^q (1-p)^{Q-q}, \quad 0 \le q \le Q, \tag{3.41}$$

with the mean and variance

$$\mu = Qp \quad\text{and}\quad \sigma^2 = Qp(1-p). \tag{3.42}$$

Even for moderate $Q$, the binomial distribution comes very close to the Gaussian distribution, as illustrated in Fig. 3.3b.
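To see how close $B(Q, p)$ comes to the Gaussian, the densities of Eqs. (3.40) and (3.41) can be compared numerically. The following sketch is not from the text; the values $Q = 16$ and $p = 0.5$ are chosen arbitrarily, and the function names are illustrative:

```python
import math

def binomial_pmf(q, Q, p):
    """B(Q, p): probability of q successes in Q trials, Eq. (3.41)."""
    return math.comb(Q, q) * p**q * (1 - p)**(Q - q)

def normal_pdf(g, mu, sigma):
    """Normal density N(mu, sigma), Eq. (3.40)."""
    return math.exp(-(g - mu)**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

# Mean and variance of the binomial distribution from Eq. (3.42)
Q, p = 16, 0.5
mu, var = Q * p, Q * p * (1 - p)

# Largest pointwise deviation between the discrete PMF and the Gaussian density
max_err = max(abs(binomial_pmf(q, Q, p) - normal_pdf(q, mu, math.sqrt(var)))
              for q in range(Q + 1))
print(f"largest deviation for Q={Q}: {max_err:.4f}")
```

Even for this moderate $Q$ the deviation is already small, illustrating the convergence mentioned above.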
For $P$ jointly normal RVs, collected in the random vector $\mathbf{g}$ with the mean $\boldsymbol{\mu}$ and the covariance matrix $\mathbf{C}$, the joint PDF is

$$N(\boldsymbol{\mu}, \mathbf{C}):\quad f(\mathbf{g}) = \frac{1}{(2\pi)^{P/2}\sqrt{\det \mathbf{C}}} \exp\left(-\frac{1}{2}(\mathbf{g}-\boldsymbol{\mu})^T \mathbf{C}^{-1} (\mathbf{g}-\boldsymbol{\mu})\right). \tag{3.43}$$
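Although not discussed in the text, Eq. (3.43) suggests a standard way to generate $N(\boldsymbol{\mu}, \mathbf{C})$-distributed samples: transform white Gaussian noise with a Cholesky factor of $\mathbf{C}$. A minimal sketch, with $\boldsymbol{\mu}$ and $\mathbf{C}$ chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])          # covariance matrix (symmetric, positive definite)

L = np.linalg.cholesky(C)           # C = L L^T
z = rng.standard_normal((100_000, 2))   # white noise: N(0, I)
g = mu + z @ L.T                    # g is N(mu, C) distributed

print("sample mean:", g.mean(axis=0))
print("sample covariance:\n", np.cov(g.T))
```

The sample mean and covariance of `g` reproduce `mu` and `C` up to Monte Carlo error, since the linear transform maps the identity covariance of `z` to $\mathbf{L}\mathbf{L}^T = \mathbf{C}$.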
If the covariance matrix is diagonal, the normal density function becomes a separable function,

$$f(\mathbf{g}) = \prod_{p'=1}^{P} \frac{1}{(2\pi\sigma_{p'}^2)^{1/2}} \exp\left(-\frac{(g_{p'}-\mu_{p'})^2}{2\sigma_{p'}^2}\right), \tag{3.44}$$

with the variances $\sigma_{p'}^2$ along the principal axes (Fig. 3.4a), and the components $g_{p'}$ are independent RVs. For uncorrelated RVs with equal variance $\sigma^2$, the $N(\boldsymbol{\mu}, \mathbf{C})$ distribution reduces to the isotropic normal PDF $N(\boldsymbol{\mu}, \sigma)$ (Fig. 3.4b):

$$N(\boldsymbol{\mu}, \sigma):\quad f(\mathbf{g}) = \frac{1}{(2\pi\sigma^2)^{P/2}} \exp\left(-\frac{\|\mathbf{g}-\boldsymbol{\mu}\|^2}{2\sigma^2}\right). \tag{3.45}$$

Central Limit Theorem

The central importance of the normal distribution stems from the central limit theorem (Theorem 6, p. 54), which we discussed with respect to cascaded convolution in Section 2.3.5. Here we emphasize its significance for RVs in image processing. The central limit theorem states that, under conditions that are almost always met in image processing applications, the PDF of a sum of RVs tends to a normal distribution. As we discussed in Section 3.3, weighted sums of many values are often computed in image processing. Consequently, these combined variables have a normal PDF.
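The tendency toward a normal PDF can be illustrated by summing independent uniform RVs, mimicking the weighted sums computed by smoothing filters. A small sketch, not from the text, with sample counts chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sum of 16 independent uniform RVs on [0, 1): by the central limit
# theorem the sum tends to a normal distribution.
n, trials = 16, 200_000
s = rng.random((trials, n)).sum(axis=1)

# Theoretical mean and variance of the sum: n/2 and n/12
mu, var = n / 2, n / 12
print("sample mean/var:", s.mean(), s.var())

# Fraction of samples within one standard deviation of the mean;
# for a normal distribution this fraction is about 0.683
frac = np.mean(np.abs(s - mu) < np.sqrt(var))
print("within one sigma:", frac)
```

Already with 16 summands, the fraction within one standard deviation matches the Gaussian value closely, even though each summand is far from normally distributed.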
Other Distributions

Despite the significance of the normal distribution, other probability density functions also play a certain role in image processing. They occur when RVs are combined by nonlinear functions. As a first example, we discuss the conversion from Cartesian to polar coordinates. We take the random vector $\mathbf{g} = [g_1, g_2]^T$ with independent $N(0, \sigma)$-distributed components. Then it can be shown [134, Section 6.3] that the magnitude of this vector, $r = (g_1^2 + g_2^2)^{1/2}$, has a Rayleigh density
$$R(\sigma):\quad f(r) = \frac{r}{\sigma^2} \exp\left(-\frac{r^2}{2\sigma^2}\right) \quad\text{for } r > 0, \tag{3.46}$$

with the mean and variance

$$\mu_R = \sigma\sqrt{\pi/2} \quad\text{and}\quad \sigma_R^2 = \frac{4-\pi}{2}\,\sigma^2, \tag{3.47}$$

and the angle $\phi$ has a uniform density

$$f(\phi) = \frac{1}{2\pi}, \quad -\pi \le \phi < \pi. \tag{3.48}$$
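Eqs. (3.46)–(3.48) can be checked by simulating the Cartesian-to-polar conversion directly. A minimal sketch, with $\sigma$ and the sample count chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 1.5, 200_000

# Two independent N(0, sigma) components
g1 = rng.normal(0.0, sigma, n)
g2 = rng.normal(0.0, sigma, n)

r = np.hypot(g1, g2)        # magnitude: Rayleigh distributed, Eq. (3.46)
phi = np.arctan2(g2, g1)    # angle: uniform on [-pi, pi), Eq. (3.48)

# Mean and variance of the magnitude from Eq. (3.47)
print("mean r:", r.mean(), "expected:", sigma * np.sqrt(np.pi / 2))
print("var  r:", r.var(),  "expected:", (4 - np.pi) / 2 * sigma**2)

# Uniform angle: each quadrant should contain about a quarter of the samples
print("fraction in first quadrant:", np.mean((phi >= 0) & (phi < np.pi / 2)))
```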
Figure 3.5: a Chi density for 2 (Rayleigh density), 3 (Maxwell density), and higher degrees of freedom as indicated; b chi-square density in a normalized plot (mean at one) with degrees of freedom as indicated.
In generalization of the Rayleigh density, we consider the magnitude of a P-dimensional vector. It has a chi density with P degrees of freedom
$$\chi(P, \sigma):\quad f(r) = \frac{2\,r^{P-1}}{2^{P/2}\,\Gamma(P/2)\,\sigma^P} \exp\left(-\frac{r^2}{2\sigma^2}\right) \quad\text{for } r > 0, \tag{3.49}$$

with the mean

$$\mu_\chi = \sqrt{2}\,\sigma\,\frac{\Gamma((P+1)/2)}{\Gamma(P/2)} \approx \sigma\sqrt{P - 1/2} \quad\text{for } P \gg 1 \tag{3.50}$$

and the variance

$$\sigma_\chi^2 = P\sigma^2 - \mu_\chi^2. \tag{3.51}$$
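The approximation in Eq. (3.50) can be compared against the exact Gamma-function expression. A small sketch, not from the text; the helper name `chi_mean` is illustrative:

```python
import math

def chi_mean(P, sigma):
    """Exact mean of the chi density with P degrees of freedom, Eq. (3.50)."""
    return math.sqrt(2) * sigma * math.gamma((P + 1) / 2) / math.gamma(P / 2)

sigma = 1.0
for P in (2, 3, 8, 32, 128):
    exact = chi_mean(P, sigma)
    approx = sigma * math.sqrt(P - 0.5)   # large-P approximation
    print(f"P={P:4d}  exact={exact:.4f}  approx={approx:.4f}")
```

For $P = 2$ the exact mean reduces to the Rayleigh mean $\sigma\sqrt{\pi/2}$, and the approximation converges rapidly as $P$ grows.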
The square of the magnitude of the vector has a different PDF because squaring is a nonlinear function (Section 3.2.3). Using Theorem 9, the PDF, known as the chi-square density with P degrees of freedom, can be computed as
$$\chi^2(P, \sigma):\quad f(r) = \frac{r^{P/2-1}}{2^{P/2}\,\Gamma(P/2)\,\sigma^P} \exp\left(-\frac{r}{2\sigma^2}\right) \quad\text{for } r > 0, \tag{3.52}$$

with the mean and variance

$$\mu_{\chi^2} = P\sigma^2 \quad\text{and}\quad \sigma_{\chi^2}^2 = 2P\sigma^4. \tag{3.53}$$
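Eq. (3.53) can likewise be verified by simulation: form the squared magnitude of a P-dimensional vector of independent $N(0, \sigma)$ components. The values of $P$, $\sigma$, and the sample count below are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(3)
P, sigma, n = 5, 2.0, 400_000

# n vectors with P independent N(0, sigma) components each
g = rng.normal(0.0, sigma, (n, P))
r = (g**2).sum(axis=1)      # squared magnitude: chi-square with P degrees of freedom

print("mean:", r.mean(), "expected:", P * sigma**2)          # Eq. (3.53)
print("var :", r.var(),  "expected:", 2 * P * sigma**4)
```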
The chi-square density describes, for example, the distribution of the sample variance

$$s^2 = \frac{1}{P-1} \sum_{p=1}^{P} (g_p - \bar g)^2 \quad\text{with}\quad \bar g = \frac{1}{P} \sum_{p=1}^{P} g_p. \tag{3.54}$$
Figure 3.6: Measured noise variance σ² as a function of the gray value g (image courtesy of H. Gröning).
Papoulis [134, Section 8.2] shows that the normalized sample variance

$$\frac{(P-1)\,s^2}{\sigma^2} = \sum_{p=1}^{P} \left(\frac{g_p - \bar g}{\sigma}\right)^2 \tag{3.55}$$

has a chi-square density with P − 1 degrees of freedom.
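This result, namely that the normalized sample variance follows a chi-square density with P − 1 degrees of freedom, can be checked numerically against the chi-square moments of Eq. (3.53). The values of P, σ, and the trial count below are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(4)
P, sigma, trials = 8, 3.0, 200_000

# Many independent samples of P values each, all N(0, sigma)
g = rng.normal(0.0, sigma, (trials, P))
s2 = g.var(axis=1, ddof=1)          # sample variance, Eq. (3.54)
t = (P - 1) * s2 / sigma**2         # normalized sample variance, Eq. (3.55)

# Chi-square with P-1 degrees of freedom: mean P-1, variance 2(P-1)
print("mean:", t.mean(), "expected:", P - 1)
print("var :", t.var(),  "expected:", 2 * (P - 1))
```

The `ddof=1` argument makes `numpy.var` use the 1/(P−1) normalization of Eq. (3.54).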