

Noise Model for Image Sensors



After the detailed discussion on random variables, we can now conclude with a simple noise model for an image sensor. In Section 3.4.1 we saw that the photo signal for a single pixel is Poisson distributed. Except for very low-level imaging conditions, where only a few electrons are collected per sensor element, the Poisson distribution is well approximated by a normal distribution $N(Q_e, Q_e)$, where $Q_e$ is the number of electrons absorbed during an exposure. Not every incoming photon causes the excitation of an electron. The fraction of the photons irradiating onto the sensor element ($Q_p$) that excite electrons is known as the quantum efficiency $\eta$:

$$\eta = \frac{Q_e}{Q_p}. \qquad (3.56)$$
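For concreteness, a small worked example with assumed numbers: if $Q_p = 10\,000$ photons irradiate a sensor element during the exposure and $Q_e = 6500$ electrons are excited, the quantum efficiency is

$$\eta = \frac{Q_e}{Q_p} = \frac{6500}{10\,000} = 0.65.$$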



The electronic circuits add a number of other noise sources. For practical purposes it is, however, only important to know that these noise sources are normally distributed and independent of the photon noise. Therefore the received signal $g$ and its variance $\sigma_g^2$ can be described by only two terms as

$$g = g_0 + g_p = g_0 + \alpha Q_e \quad\text{with}\quad g_p = \alpha Q_e,$$

$$\sigma_g^2 = \sigma_0^2 + \sigma_p^2 = \sigma_0^2 + \alpha g_p \quad\text{with}\quad \sigma_p^2 = \alpha^2 Q_e. \qquad (3.57)$$

The constant $\alpha$ is an amplification factor of the sensor electronics (including digitization), measured in bits/electron. Equation (3.57) predicts a linear increase of the noise variance with the measured gray value. Measurements with several CCD sensors show good agreement with this model (Fig. 3.6). From the increase of the noise variance $\sigma_g^2$ with the signal $g$, the amplification factor $\alpha$ can be determined.
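As a practical illustration of Eq. (3.57), $\alpha$ can be estimated by fitting a line to the noise variance as a function of the mean gray value. The following is a minimal sketch with synthetic data, not the measurement procedure behind Fig. 3.6; the function name and the simulated sensor parameters are assumptions for illustration.

```python
import numpy as np

def estimate_gain(frame_stacks):
    """Estimate the amplification factor alpha from Eq. (3.57):
    sigma_g^2 = sigma_0^2 + alpha * g_p, i.e., the noise variance
    grows linearly with the mean gray value."""
    means, variances = [], []
    for stack in frame_stacks:                      # one stack per illumination level
        means.append(stack.mean())                  # mean gray value g
        variances.append(stack.var(axis=0).mean())  # temporal noise variance sigma^2
    # Linear fit: variance = alpha * mean + sigma_0^2
    alpha, sigma0_sq = np.polyfit(means, variances, 1)
    return alpha, sigma0_sq

# Synthetic check with an assumed gain alpha = 0.5 and read noise sigma_0 = 2
rng = np.random.default_rng(0)
stacks = []
for q_e in (200, 500, 1000, 2000, 5000):            # mean electrons per pixel
    electrons = rng.poisson(q_e, size=(50, 64, 64))
    stacks.append(0.5 * electrons + rng.normal(0.0, 2.0, size=(50, 64, 64)))
alpha, sigma0_sq = estimate_gain(stacks)
print(f"estimated alpha = {alpha:.3f}")             # close to 0.5
```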

 

3.5 Stochastic Processes and Random Fields‡

The statistics developed so far do not consider the spatial and temporal relations between the points of a multidimensional signal. If we want to analyze the content of images statistically, we must consider the whole image as a statistical quantity, known as a random field for spatial data and as a stochastic process for time series.

In the case of an $M \times N$ image, a random field consists of an $M \times N$ matrix whose elements are random variables. This means that the joint probability density function has $MN$ variables. The mean of a random field is then given as a sum over all possible states $q$:


 

$$\overline{G}_{m,n} = \sum_{q=1}^{Q^{MN}} f_q(\mathbf{G})\, \mathbf{G}_q. \qquad (3.58)$$


If we have $Q$ quantization levels, each pixel can take $Q$ different states. Combining all $M \times N$ pixels, we end up with $Q^{MN}$ states $\mathbf{G}_q$. This is a horrifying concept, rendering itself useless because of the combinatorial explosion of possible states. Thus we have to find simpler concepts to treat multidimensional signals as random fields. In this section, we will approach this problem in a practical way.

We start by estimating the mean and variance of a random field. We can do that in the same way as for a single value (Eq. (3.54)), by taking $P$ measurements $\mathbf{G}_p$ under the same conditions and computing the average as

$$\overline{\mathbf{G}} = \frac{1}{P} \sum_{p=1}^{P} \mathbf{G}_p. \qquad (3.59)$$

This type of averaging is known as an ensemble average. The estimate of the variance, the sample variance, is given by


$$\mathbf{S}_G^2 = \frac{1}{P-1} \sum_{p=1}^{P} \left( \mathbf{G}_p - \overline{\mathbf{G}} \right)^2. \qquad (3.60)$$



 

At this stage, we already know the mean and variance at each pixel in the image. From these values we can draw a number of interesting conclusions. We can study the uniformity of both quantities under given conditions such as a constant illumination level.
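In code, the ensemble mean of Eq. (3.59) and the sample variance of Eq. (3.60) are direct averages over a stack of repeated exposures. A minimal numpy sketch, assuming the $P$ images are available as one array (the flat-field data here are synthetic):

```python
import numpy as np

# Per-pixel ensemble mean (Eq. (3.59)) and sample variance (Eq. (3.60))
# from a stack of P repeated exposures under identical conditions.
rng = np.random.default_rng(1)
images = rng.normal(128.0, 3.0, size=(100, 32, 32))   # P = 100 exposures

mean_img = images.mean(axis=0)          # ensemble average, Eq. (3.59)
var_img = images.var(axis=0, ddof=1)    # sample variance with 1/(P-1), Eq. (3.60)

# Uniformity check under constant illumination:
print(mean_img.mean(), mean_img.std())  # approx. 128, small spatial spread
print(var_img.mean())                   # approx. 3^2 = 9
```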

 

3.5.1 Correlation and Covariance Functions‡

In a second step, we now relate the gray values at different positions in the images to each other. One measure for the correlation of the gray values is the mean of the product of the gray values at two positions, the autocorrelation function

$$R_{gg}(m, n; m', n') = \overline{G_{mn}\, G_{m'n'}}. \qquad (3.61)$$

As in Eqs. (3.59) and (3.60), an ensemble mean is taken.

The autocorrelation function is not of much use if an image contains a deterministic part with additive zero-mean noise

     
 

$$\mathbf{G}' = \mathbf{G} + \mathbf{N}, \quad\text{with}\quad \overline{\mathbf{G}'} = \mathbf{G} \;\;\text{and}\;\; \overline{\mathbf{N}} = 0. \qquad (3.62)$$

Then it is more useful to subtract the mean so that the properties of the random part in the signal are adequately characterized:

$$C_{gg}(m, n; m', n') = \overline{(G_{mn} - \overline{G}_{mn})(G_{m'n'} - \overline{G}_{m'n'})}. \qquad (3.63)$$

This function is called the autocovariance function. For zero shift ($m = m'$ and $n = n'$) it gives the variance at the pixel $[m, n]^T$; at all other shifts, the covariance, which was introduced in Section 3.3.2, Eq. (3.19). New here is that the autocovariance function includes the spatial relations between the different points in the image. If the autocovariance is zero, the random properties of the corresponding points are uncorrelated.

The autocovariance function as defined in Eq. (3.63) is still awkward because it is four-dimensional. Therefore even this statistic is only of use for a restricted number of shifts, e.g., short distances, because we suspect that the random properties of distant points are uncorrelated.

Things become easier if the statistics do not explicitly depend on the position of the points. This is called a homogeneous random field. Then the autocovariance function becomes shift invariant:

$$\begin{aligned}
C_{gg}(m+k, n+l; m'+k, n'+l) &= C_{gg}(m, n; m', n') \\
&= C_{gg}(m-m', n-n'; 0, 0) \\
&= C_{gg}(0, 0; m'-m, n'-n).
\end{aligned} \qquad (3.64)$$


The last two identities are obtained when we set $(k, l) = (-m', -n')$ and $(k, l) = (-m, -n)$. This also means that the variance of the noise, $C_{gg}(m, n; m, n)$, no longer depends on the position in the image but is equal at all points.

Because the autocorrelation function depends only on the distance between points, it reduces from a four- to a two-dimensional function. Fortunately, many stochastic processes are homogeneous. Because of the shift invariance, the autocovariance function for a homogeneous random field can be estimated by spatial averaging:


Cgg(m, n) = 1


M− 1

MN
.


N− 1                                                                                                  

. (Gm'n' − Gm'n')(Gm'+m, n'+n − Gm'+m, n'+n). (3.65)


m'=0 n'=0

Generally, it is not certain that spatial averaging leads to the same mean as the ensemble mean. A random field that meets this criterion is called ergodic.

Another difficulty concerns indexing. As soon as $(m, n) \neq (0, 0)$, the indices run over the range of the matrix. We then have to consider the periodic extension of the matrix, as discussed in Section 2.3.5. This is known as cyclic correlation.
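Putting Eq. (3.65) together with the cyclic extension, the autocovariance of a homogeneous field can be computed efficiently via the DFT (anticipating the correlation theorem used in Section 3.5.3). A minimal sketch, assuming an ergodic, homogeneous field and using synthetic uncorrelated noise:

```python
import numpy as np

def cyclic_autocovariance(g):
    """Autocovariance of a homogeneous random field, Eq. (3.65), with
    periodic extension (cyclic correlation), computed via the DFT."""
    g0 = g - g.mean()                       # subtract the mean
    ghat = np.fft.fft2(g0)
    # Correlation theorem: the cyclic autocorrelation is the inverse
    # DFT of the squared spectral magnitude.
    return np.fft.ifft2(np.conj(ghat) * ghat).real / g.size   # 1/(MN)

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 2.0, size=(64, 64))   # uncorrelated noise field
c = cyclic_autocovariance(noise)
print(c[0, 0])             # approx. the variance sigma^2 = 4, cf. Eq. (3.66)
print(c[0, 1], c[1, 0])    # approx. zero: neighboring points uncorrelated
```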

Now we illustrate the meaning of the autocovariance function. We consider an image that contains a deterministic part plus zero-mean homogeneous noise, see Eq. (3.62). Let us further assume that all points are statistically independent. Then the mean is the deterministic part and the autocovariance vanishes except for zero shift, i.e., for a zero pixel distance:

$$C_{gg}(m, n) = \sigma^2\, \delta_m\, \delta_n. \qquad (3.66)$$

For zero shift, the autocovariance is equal to the variance of the noise. In this way, we can examine whether the individual image points are statistically uncorrelated. This is of importance because the degree of correlation between the image points determines the statistical properties of image processing operations, as discussed in Section 3.3.3.

In a similar manner to correlating one image with itself, we can correlate two different images $\mathbf{G}$ and $\mathbf{H}$ with each other. These could be either images from different scenes or images of a dynamic scene taken at different times. By analogy to Eq. (3.65), the cross-correlation function and cross-covariance function are defined as


$$R_{gh}(m, n) = \frac{1}{MN} \sum_{m'=0}^{M-1} \sum_{n'=0}^{N-1} G_{m'n'}\, H_{m'+m,\,n'+n} \qquad (3.67)$$

$$C_{gh}(m, n) = \frac{1}{MN} \sum_{m'=0}^{M-1} \sum_{n'=0}^{N-1} (G_{m'n'} - \overline{G}_{m'n'})(H_{m+m',\,n+n'} - \overline{H}_{m+m',\,n+n'}). \qquad (3.68)$$
The cross-correlation operation is very similar to convolution (Section 2.3.5, R7). The only difference is the sign of the indices $(m', n')$ in the second term.
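A corresponding sketch for Eq. (3.67): with the periodic extension, the cyclic cross-correlation can again be computed via the FFT, and conjugating the first factor is exactly the sign difference from convolution mentioned above. The function name is an assumption for illustration.

```python
import numpy as np

def cyclic_cross_correlation(g, h):
    """Cross-correlation function of Eq. (3.67) with periodic extension,
    computed via the correlation theorem (cf. Eq. (3.73))."""
    ghat = np.fft.fft2(g)
    hhat = np.fft.fft2(h)
    # Conjugating ghat (not hhat) yields correlation instead of convolution.
    return np.fft.ifft2(np.conj(ghat) * hhat).real / g.size   # 1/(MN)
```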

 

3.5.2 Random Fields in Fourier Space‡

In the previous sections we studied random fields in the spatial domain. Given the significance of the Fourier transform for image processing (Section 2.3), we now turn to random fields in the Fourier domain. For the sake of simplicity, we restrict the discussion here to the 1-D case. All arguments put forward in this section can, however, be applied analogously in any dimension.

The Fourier transform requires complex numbers. This constitutes no additional complication, because the random properties of the real and imaginary parts can be treated separately. The definition of the mean remains the same; the definition of the covariance, however, requires a slight change as compared to Eq. (3.19):

$$C_{pq} = \mathrm{E}\left[ (g_p - \mu_p)^\ast\, (g_q - \mu_q) \right], \qquad (3.69)$$

where $^\ast$ denotes the complex conjugate. This definition ensures that the variance

$$\sigma_p^2 = \mathrm{E}\left[ (g_p - \mu_p)^\ast\, (g_p - \mu_p) \right] \qquad (3.70)$$

remains a real number.


The 1-D DFT maps a vector $\mathbf{g} \in \mathbb{C}^N$ onto a vector $\hat{\mathbf{g}} \in \mathbb{C}^N$. The components of $\hat{\mathbf{g}}$ are given as scalar products with orthonormal base vectors for the vector space $\mathbb{C}^N$ (compare Eqs. (2.29) and (2.30)):

$$\hat{g}_v = \mathbf{b}_v^T\, \mathbf{g} \quad\text{with}\quad \mathbf{b}_v^T\, \mathbf{b}_{v'} = \delta_{vv'}. \qquad (3.71)$$

 

Thus the complex RVs in Fourier space are nothing else but linear combinations of the RVs in the spatial domain. If we assume that the RVs in the spatial domain are uncorrelated with equal variance (homogeneous random field), we arrive at a far-reaching conclusion. According to Eq. (3.71), the coefficient vectors $\mathbf{b}_v$ are orthogonal to each other with a unit square magnitude. Therefore we can conclude from the discussion about functions of multiple RVs in Section 3.3.3, especially Eq. (3.32), that the RVs in the Fourier domain remain uncorrelated and have the same variance as in the spatial domain.
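This conclusion is easy to verify numerically: white noise transformed with a unitary DFT keeps its variance, and different coefficients stay uncorrelated. A small sketch with synthetic data (the sample sizes are arbitrary choices):

```python
import numpy as np

# Numerical check: white noise stays white under the unitary DFT.
rng = np.random.default_rng(3)
sigma = 1.5
x = rng.normal(0.0, sigma, size=(20000, 64))     # 20000 realizations, N = 64
xhat = np.fft.fft(x, axis=1, norm="ortho")       # unitary 1-D DFT

var_spatial = x.var(axis=0).mean()
var_fourier = (np.abs(xhat - xhat.mean(axis=0)) ** 2).mean()
print(var_spatial, var_fourier)                  # both approx. sigma^2 = 2.25

# Covariance of two different coefficients in the sense of Eq. (3.69):
c01 = (np.conj(xhat[:, 1]) * xhat[:, 2]).mean()
print(abs(c01))                                  # approx. zero: uncorrelated
```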

 

3.5.3 Power Spectrum, Cross-correlation Spectrum, and Coherence‡

In Section 3.5.1 we learnt that random fields in the space domain are characterized by the auto- and the cross-correlation functions. Now we consider random fields in the Fourier space.

Correlation in the space domain corresponds to multiplication in the Fourier space with the complex conjugate functions (R4):

$$\mathbf{G} \star \mathbf{G} \;\circ\!-\!\bullet\; P_{gg}(\mathbf{k}) = \hat{g}^\ast(\mathbf{k})\, \hat{g}(\mathbf{k}) \qquad (3.72)$$

and

$$\mathbf{G} \star \mathbf{H} \;\circ\!-\!\bullet\; P_{gh}(\mathbf{k}) = \hat{g}^\ast(\mathbf{k})\, \hat{h}(\mathbf{k}). \qquad (3.73)$$


In these equations, correlation is abbreviated with the $\star$ symbol, similar to convolution, for which we use the $\ast$ symbol. For a simpler notation, the spectra are written as continuous functions. This corresponds to the transition to an infinitely extended random field (Section 2.3.2, Table 2.1).

The Fourier transform of the autocorrelation function is the power spectrum $P_{gg}$. The power spectrum is a real-valued quantity. Its name is related to the fact that it represents the distribution of power of a physical signal in the Fourier domain, i.e., over frequencies and wave numbers, if the signal amplitude squared is related to the energy of a signal. If the power spectrum is averaged over several images, it constitutes a sum of squares of independent random variables. If the RVs have a normal density, the power spectrum has, according to the discussion in Section 3.4.4, a chi-square density.



 

The autocorrelation function of a field of uncorrelated RVs is zero except at the origin, i.e., a $\delta$-function (Eq. (3.66)). Therefore, its power spectrum is a constant (R7). This type of noise is called white noise.
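A numerical sketch of both statements: averaging the squared Fourier magnitudes of an ensemble of uncorrelated noise images yields an approximately flat power spectrum at the level of the noise variance. The function name and sample sizes are assumptions for illustration.

```python
import numpy as np

def mean_power_spectrum(images):
    """Power spectrum estimate P_gg, averaged over an ensemble of images
    (assumed shape (P, M, N)); the unitary DFT keeps the variance scale."""
    ghat = np.fft.fft2(images, axes=(1, 2), norm="ortho")
    return (np.abs(ghat) ** 2).mean(axis=0)   # average of squared magnitudes

rng = np.random.default_rng(4)
white = rng.normal(0.0, 2.0, size=(200, 64, 64))  # uncorrelated noise images
P_gg = mean_power_spectrum(white)
print(P_gg.mean(), P_gg.std())   # approx. flat at sigma^2 = 4: white noise
```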

The Fourier transform of the cross-correlation function is called the cross-correlation spectrum $P_{gh}$. In contrast to the power spectrum, it is a complex quantity. The real and imaginary parts are termed the co- and quad-spectrum, respectively.

To understand the meaning of the cross-correlation spectrum, it is useful to define another quantity, the coherence function $\Phi$:


 

$$\Phi^2(\mathbf{k}) = \frac{|P_{gh}(\mathbf{k})|^2}{P_{gg}(\mathbf{k})\, P_{hh}(\mathbf{k})}. \qquad (3.74)$$


Basically, the coherence function contains information on the similarity of two images. We illustrate this by assuming that the image $\mathbf{H}$ is a shifted copy of the image $\mathbf{G}$: $\hat{h}(\mathbf{k}) = \hat{g}(\mathbf{k}) \exp(-\mathrm{i}\,\mathbf{k}\mathbf{x}_s)$. In this case, the coherence function is one and the cross-correlation spectrum $P_{gh}$ reduces to

$$P_{gh}(\mathbf{k}) = P_{gg}(\mathbf{k}) \exp(-\mathrm{i}\,\mathbf{k}\mathbf{x}_s). \qquad (3.75)$$

Because $P_{gg}$ is a real quantity, we can compute the shift $\mathbf{x}_s$ between the two images from the phase factor $\exp(-\mathrm{i}\,\mathbf{k}\mathbf{x}_s)$.

If there is no fixed phase relationship of a periodic component between the two images, the coherency decreases. If the phase shift is randomly distributed from image to image in a sequence, the cross-correlation vectors in the complex plane point in random directions and add up to zero. According to Eq. (3.74), the coherency is then also zero.
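The shift recovery suggested by Eq. (3.75) can be sketched directly: normalizing the cross-correlation spectrum to unit magnitude isolates the phase factor, and its inverse transform peaks at the displacement. This normalization is commonly known as phase correlation; the synthetic data and the cyclic shift are assumptions for illustration.

```python
import numpy as np

# Recover the shift x_s from the phase of the cross-correlation spectrum,
# Eq. (3.75), for a synthetic image and a known cyclic shift.
rng = np.random.default_rng(5)
g = rng.normal(size=(128, 128))
h = np.roll(g, shift=(7, -3), axis=(0, 1))   # H is a shifted copy of G

ghat = np.fft.fft2(g)
hhat = np.fft.fft2(h)
P_gh = np.conj(ghat) * hhat                  # cross-correlation spectrum
# Dividing by the magnitude keeps only the phase factor exp(-i k x_s);
# its inverse transform peaks at the displacement.
corr = np.fft.ifft2(P_gh / np.abs(P_gh)).real
peak = np.unravel_index(corr.argmax(), corr.shape)
print(peak)   # (7, 125), i.e., the cyclic shift (7, -3)
```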

 

3.6 Further Readings‡

 

An introduction to random signals is given by Rice [149]. A detailed account of the theory of probability and random variables can be found in Papoulis [134]. The textbook of Rosenfeld and Kak [157] gives a good introduction to stochastic processes with respect to image processing. Spectral analysis is discussed in Marple Jr. [119].

