10.3.3 Two-Point Radiometric Calibration
The simple ratio imaging described above is not applicable if a nonzero inhomogeneous background also has to be corrected for, as caused, for instance, by the fixed pattern noise of a CCD sensor. In this case, two reference images are required. This technique is also applied for a simple two-point radiometric calibration of an imaging sensor with a linear response. Some image measuring tasks require an absolute or relative radiometric calibration. Once such a calibration is obtained, we can infer the radiance of the objects from the measured gray values. First, we take a dark image B without any illumination. Second, we take a reference image R with an object of constant radiance, e.g., by looking into an integrating sphere. Then, a normalized image corrected for both the fixed pattern noise and the inhomogeneous sensitivity is given by
$$G' = \frac{G - B}{R - B}.$$

Fig. 10.11 shows a contrast-enhanced dark image and reference image of a CCD camera with analog output. Typical signal distortions can be observed. The signal oscillation at the left edge of the dark image results from an electronic interference, while the dark blobs in the reference image are caused by dust on the glass window in front of the sensor. The improvement due to the radiometric calibration can clearly be seen in Fig. 10.12.
Figure 10.12: Two-point radiometric calibration with the dark and reference image from Fig. 10.11: a original image and b calibrated image; in the calibrated image the dark spots caused by dust are no longer visible.
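As a worked illustration, here is a minimal Python/NumPy sketch of this two-point calibration, G' = (G − B)/(R − B); the function name, the output scaling, and the zero-division guard are additions of this sketch, not part of the text.

```python
import numpy as np

def two_point_calibration(image, dark, reference, scale=255.0):
    """Correct fixed pattern noise and inhomogeneous sensitivity:
    G' = scale * (G - B) / (R - B)."""
    num = image.astype(np.float64) - dark
    den = reference.astype(np.float64) - dark
    # Guard against division by zero in dead or unilluminated pixels.
    den = np.where(np.abs(den) < 1e-9, np.nan, den)
    return scale * num / den
```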
10.3.4 Nonlinear Radiometric Calibration‡

Sometimes, the quantity to be measured by an imaging sensor is related in a nonlinear way to the measured gray value. An obvious example is thermography. In such cases a nonlinear radiometric calibration is required. Here, the temperature of the emitting object is determined from its radiance using Planck's equations (Section 6.4.1). We will give a practical calibration procedure for ambient temperatures. Because of the nonlinear relation between radiance and temperature, a simple two-point calibration with linear interpolation is not sufficient. Haußecker [63] showed that a quadratic relation is accurate enough for a small temperature range, say from 0 to 40° centigrade. Therefore, three calibration temperatures are required, which are provided by a temperature-regulated blackbody calibration unit. The calibration delivers three calibration images $G_1$, $G_2$, and $G_3$ with known temperatures $T_1$, $T_2$, and $T_3$. The temperature image $T$ of an arbitrary image $G$ can be computed by quadratic interpolation as
$$T = \frac{\Delta G_2 \cdot \Delta G_3}{\Delta G_{12}\,\Delta G_{13}}\,T_1 + \frac{\Delta G_1 \cdot \Delta G_3}{\Delta G_{21}\,\Delta G_{23}}\,T_2 + \frac{\Delta G_1 \cdot \Delta G_2}{\Delta G_{31}\,\Delta G_{32}}\,T_3, \qquad (10.20)$$

with

$$\Delta G_k = G - G_k \quad \text{and} \quad \Delta G_{kl} = G_k - G_l. \qquad (10.21)$$
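A minimal NumPy sketch of Eqs. (10.20) and (10.21), evaluating this Lagrange-type quadratic interpolation pixelwise; the function and variable names are mine, with G1–G3 the calibration images and T1–T3 the known temperatures:

```python
import numpy as np

def three_point_calibration(G, G1, G2, G3, T1, T2, T3):
    """Pixelwise quadratic interpolation of temperature, Eq. (10.20)."""
    G = G.astype(np.float64)
    d1, d2, d3 = G - G1, G - G2, G - G3          # Delta G_k  = G   - G_k
    d12, d13 = G1 - G2, G1 - G3                  # Delta G_kl = G_k - G_l
    d21, d23 = G2 - G1, G2 - G3
    d31, d32 = G3 - G1, G3 - G2
    return (d2 * d3 / (d12 * d13) * T1
            + d1 * d3 / (d21 * d23) * T2
            + d1 * d2 / (d31 * d32) * T3)
```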
Figure 10.13: Three-point calibration of infrared temperature images: a–c show images of calibration targets made out of aluminum blocks at temperatures of 13.06, 17.62, and 22.28° centigrade. The images are contrast-stretched to a narrow range of the 12-bit digital output range of the infrared camera: a: 1715–1740, b: 1925–1950, c: 2200–2230, and show some residual inhomogeneities, especially vertical stripes. d Calibrated image using the three images a–c with quadratic interpolation. e Original and f calibrated image of the temperature microscale fluctuations at the ocean surface (area about 0.8 × 1.0 m²).
10.3.5 Windowing
Figure 10.14: Effect of windowing on the discrete Fourier transform: a original image; b DFT of a without using a window function; c image multiplied with a cosine window; d DFT of c using a cosine window.
The discrete Fourier transform regards an image as one period of a periodically repeated image, as discussed in connection with the sampling theorem in Section 9.2.3. The periodic repetition leads to discontinuities at the horizontal and vertical edges of the image, which cause correspondingly high spectral densities along the x and y axes in the Fourier domain. In order to avoid these distortions, we must multiply the image with a window function that gradually approaches zero towards the edges of the image. An optimum window function should preserve a high spectral resolution and show minimum distortions in the spectrum, that is, its DFT should fall off as fast as possible. These are contradictory requirements. A good spectral resolution requires a broad window function. Such a window, however, falls off steeply at the edges, causing a slow fall-off of the sidelobes of its spectrum. A carefully chosen window is crucial for a spectral analysis of time series [119, 133]. However, in digital image processing it is less critical because of the much lower dynamic range of the gray values. A simple cosine window performs this task well (Fig. 10.14c, d).
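For illustration, here is a separable sine-shaped window that falls to zero at the image edges, one common realization of such a cosine window; the exact functional form used in the text is not reproduced here, so treat this form as an assumption of the sketch.

```python
import numpy as np

def cosine_window(M, N):
    """Separable window that tapers gradually to zero at the edges."""
    wy = np.sin(np.pi * (np.arange(M) + 0.5) / M)
    wx = np.sin(np.pi * (np.arange(N) + 0.5) / N)
    return np.outer(wy, wx)

# Multiply before the DFT to suppress the edge discontinuities:
# spectrum = np.fft.fft2(image * cosine_window(*image.shape))
```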
10.4 Multichannel Point Operations‡

10.4.1 Definitions‡

Point operations can be generalized to multichannel point operations in a straightforward way. The operation still depends only on the values of a single pixel. The only difference is that it depends on a vectorial input instead of a scalar input. Likewise, the output image can be a multichannel image. For homogeneous point operations that do not depend on the position of the pixel in the image, we can write
$$\mathbf{G}' = \mathbf{P}(\mathbf{G}) \qquad (10.23)$$

with

$$\mathbf{G}' = [G'_0, G'_1, \ldots, G'_l, \ldots, G'_{L-1}], \quad \mathbf{G} = [G_0, G_1, \ldots, G_k, \ldots, G_{K-1}], \qquad (10.24)$$

where $G'_l$ and $G_k$ are the components l and k of the multichannel images $\mathbf{G}'$ and $\mathbf{G}$ with L and K channels, respectively. An important subclass of multicomponent point operators are linear operations. This means that each component of the multichannel image $\mathbf{G}'$ is a linear combination of the components of the multichannel image $\mathbf{G}$:
$$G'_l = \sum_{k=0}^{K-1} P_{lk}\, G_k, \qquad (10.25)$$

where $P_{lk}$ are constant coefficients. Therefore, a general linear multicomponent point operation is given by a matrix (or tensor) of coefficients $P_{lk}$. Then, we can write Eq. (10.25) in matrix notation as

$$\mathbf{G}' = \mathbf{P}\,\mathbf{G}, \qquad (10.26)$$

where $\mathbf{P}$ is the matrix of coefficients.
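A minimal sketch of such a linear multichannel point operation in NumPy; the matrix values are purely illustrative (loosely resembling an RGB-to-luminance/chrominance transform) and are not taken from the text:

```python
import numpy as np

# Illustrative coefficient matrix P (L = K = 3 channels).
P = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])

def linear_point_operation(image, P):
    """Apply G'_l = sum_k P_lk G_k to every pixel of an (M, N, K) image."""
    return np.einsum('lk,mnk->mnl', P, image.astype(np.float64))
```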
If the components of the multichannel images in a point operation are not interrelated with each other, all coefficients in $\mathbf{P}$ except those on the diagonal become zero. For K-channel input and output images, just K different point operations remain, one for each channel. The matrix of point operations finally reduces to a standard scalar point operation when the same point operation is applied to each channel of a multi-component image. For an equal number of output and input images, linear point operations can be interpreted as a coordinate transformation. If the matrix of the coefficients in Eq. (10.26) has a rank R < K, the multichannel point operation projects the K-dimensional space to an R-dimensional subspace. Generally, linear multichannel point operations are quite easy to handle as they can be described in a straightforward way with the concepts of linear algebra. For square matrices, for instance, we can easily state the condition under which an inverse to a multichannel operation exists and compute it. For nonlinear multicomponent point operations, the linear coefficients in Eqs. (10.25) and (10.26) have to be replaced by nonlinear functions:

$$G'_l = P_l(G_0, G_1, \ldots, G_{K-1}). \qquad (10.27)$$

Nonlinear multicomponent point operations cannot be handled in a general way, unlike linear operations. Thus, they must be considered individually. The complexity can be reduced significantly if it is possible to separate a given multichannel point operation into its linear and nonlinear parts.
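As one concrete instance of a nonlinear multicomponent point operation in the sense of Eq. (10.27), the following sketch normalizes each channel by the channel sum (a chromaticity-like transform); the example and its names are mine, not from the text:

```python
import numpy as np

def to_chromaticity(image):
    """Nonlinear P_l(G_0, ..., G_{K-1}): divide each channel by the
    channel sum, pixel by pixel."""
    s = image.sum(axis=-1, keepdims=True).astype(np.float64)
    s[s == 0] = 1.0  # avoid division by zero for all-zero pixels
    return image / s
```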
10.4.2 Dyadic Point Operations‡

Operations in which only two images are involved are termed dyadic point operations. Dyadic homogeneous point operations can be implemented as LUT operations. Generally, any dyadic image operation can be expressed as

$$G'_{mn} = P(G_{mn}, H_{mn}). \qquad (10.28)$$
With Q quantization levels per input image, the operation can be precomputed in a lookup table with Q² entries:

$$L(2^8 p + q) = P(p, q), \quad 0 \le p, q < Q. \qquad (10.29)$$

The high and low bytes of the LUT address are given by the gray values in the images G and H, respectively. Some image processing systems contain a 16-bit LUT as a modular processing element. Computation of a dyadic point operation either with a hardware or a software LUT is often significantly faster than a direct implementation, especially if the operation is complex. In addition, it is easier to control exceptions such as division by zero or underflow and overflow. A dyadic point operation can be used to perform two point operations simultaneously. The magnitude and phase of a complex-valued image with real and imaginary parts (r, i), for example, can be computed simultaneously with one dyadic LUT operation if we restrict the output to 8 bits as well.
The magnitude is returned in the high byte and the phase, scaled to the interval [−128, 127], in the low byte.
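A sketch of such a 16-bit dyadic LUT in NumPy, packing the magnitude into the high byte and the phase, scaled to [−128, 127], into the low byte; the signed interpretation of the 8-bit inputs and the exact scalings are assumptions of this sketch, not taken from the text:

```python
import numpy as np

# Build the full table L(2^8 p + q) = P(p, q) for all 8-bit pairs (p, q).
p = np.arange(256).repeat(256).astype(np.float64)    # high byte (real part)
q = np.tile(np.arange(256), 256).astype(np.float64)  # low byte (imag part)
r, i = p - 128.0, q - 128.0                          # interpret as signed
mag = np.clip(np.hypot(r, i), 0, 255).astype(np.uint16)
phase = np.clip(np.round(np.arctan2(i, r) * 128 / np.pi), -128, 127)
lut = (mag << 8) | (phase.astype(np.int16).astype(np.uint16) & 0xFF)

def dyadic_lut(G, H):
    """One table lookup per pixel replaces the full computation."""
    return lut[(G.astype(np.uint16) << 8) | H.astype(np.uint16)]
```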