

Spectral Sampling Methods



Spectroscopic imaging is in principle a very powerful tool for identifying objects and their properties because almost all optical material constants depend on the wavelength of the radiation. The trouble with spectroscopic imaging is that it adds another coordinate to imaging and the required amount of data is multiplied correspondingly. Therefore, it is important to sample the spectrum with the minimum number of samples sufficient to perform the required task. Here, we introduce several general spectral sampling strategies. In the next section, we also discuss human color vision from this point of view as one realization of spectral sampling.

Line sampling is a technique where each channel picks only a narrow spectral range (Fig. 6.4a). This technique is useful if processes are to be imaged that are related to emission or absorption at specific spectral lines. The technique is very selective. One channel “sees” only a specific wavelength and is insensitive — at least to the degree that such a narrow bandpass filtering can be realized technically — to all other wavelengths. Thus, this technique is suitable for imaging very specific effects or specific chemical species. It cannot be used to make an estimate of the total radiance from objects since it misses most wavelengths.

Band sampling is the appropriate technique if the total radiance in a certain wavelength range has to be imaged and still some wavelength resolution is required (Fig. 6.4b). Ideally, the individual bands have uniform responsivity and are adjacent to each other. Thus, band sampling gives the optimum resolution with a few channels but does not allow any distinction of the wavelengths within one band. The spectral resolution achievable with this sampling method is limited to the width of the spectral bands of the sensors.

In many cases, it is possible to make a model of the spectral radiance of a certain object. Then, a much better spectral sampling technique can be chosen that essentially samples not certain wavelengths but rather the parameters of the model. This technique is known as model-based spectral sampling.



     
 


 

Figure 6.4: Examples of spectral sampling: a line sampling, b band sampling, c sampling adapted to a certain model of the spectral range, in this example for a single spectral line of unknown wavelength.

 

We will illustrate this general approach with a simple example: a method for measuring the mean wavelength of an arbitrary spectral distribution φ(λ) and the total radiative flux in a certain wavelength range. These quantities are defined as

$$\bar{\phi} = \frac{1}{\lambda_2 - \lambda_1}\int_{\lambda_1}^{\lambda_2} \phi(\lambda)\,\mathrm{d}\lambda \quad\text{and}\quad \bar{\lambda} = \left.\int_{\lambda_1}^{\lambda_2} \lambda\,\phi(\lambda)\,\mathrm{d}\lambda \right/ \int_{\lambda_1}^{\lambda_2} \phi(\lambda)\,\mathrm{d}\lambda. \tag{6.13}$$


In the second equation, the spectral distribution is multiplied by the wavelength λ. Therefore, we need a sensor whose sensitivity varies linearly with the wavelength. We try two sensor channels with the following linear spectral responsivities, as shown in Fig. 6.4c:

$$R_1(\lambda) = \frac{\lambda - \lambda_1}{\lambda_2 - \lambda_1}\,R_0 = \left(\frac{1}{2} + \tilde{\lambda}\right) R_0, \qquad R_2(\lambda) = R_0 - R_1(\lambda) = \left(\frac{1}{2} - \tilde{\lambda}\right) R_0, \tag{6.14}$$

where R0 is the responsivity of the sensor and λ̃ the normalized wavelength

$$\tilde{\lambda} = \left(\lambda - \frac{\lambda_1 + \lambda_2}{2}\right) \Big/ \left(\lambda_2 - \lambda_1\right). \tag{6.15}$$

λ̃ is zero in the middle and ±1/2 at the edges of the interval.

The sum of the responsivities of the two channels is independent of the wavelength, while the difference is directly proportional to the wavelength and varies from −R0 for λ = λ1 to R0 for λ = λ2:


$$R'_1(\tilde{\lambda}) = R_1(\lambda) + R_2(\lambda) = R_0, \qquad R'_2(\tilde{\lambda}) = R_1(\lambda) - R_2(\lambda) = 2\tilde{\lambda}\,R_0. \tag{6.16}$$


Thus the sum of the signals from the two sensors, R1 and R2, gives the total radiative flux, while the mean wavelength is given by 2λ̃ = (R1 − R2)/(R1 + R2). Except for these two quantities, the sensors cannot reveal any further details about the spectral distribution.
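The following minimal sketch illustrates this two-channel scheme numerically. The Gaussian test spectrum, the interval limits, and R0 are arbitrary assumptions chosen only to exercise Eqs. (6.13)–(6.16).

```python
import numpy as np

# Two-channel model-based sampling, Eqs. (6.13)-(6.16).
# The Gaussian spectrum and all numeric values are hypothetical test inputs.
lam1, lam2, R0 = 400.0, 700.0, 1.0                   # interval [nm], responsivity
lam = np.linspace(lam1, lam2, 1001)                  # wavelength grid
dlam = lam[1] - lam[0]
phi = np.exp(-0.5 * ((lam - 580.0) / 30.0) ** 2)     # test spectrum, peak at 580 nm

lam_t = (lam - 0.5 * (lam1 + lam2)) / (lam2 - lam1)  # normalized wavelength (6.15)
R1 = (0.5 + lam_t) * R0                              # rising channel (6.14)
R2 = (0.5 - lam_t) * R0                              # falling channel (6.14)

s1 = np.sum(R1 * phi) * dlam                         # channel signals
s2 = np.sum(R2 * phi) * dlam

flux = (s1 + s2) / R0                                # total radiative flux
lam_t_mean = 0.5 * (s1 - s2) / (s1 + s2)             # since 2*mean = (s1-s2)/(s1+s2)
lam_mean = lam_t_mean * (lam2 - lam1) + 0.5 * (lam1 + lam2)
print(f"mean wavelength: {lam_mean:.1f} nm")         # close to 580 nm
```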



 















 

Figure 6.5: a Relative spectral response of the “standard” human eye as set by the CIE in 1980 under medium to high irradiance levels (photopic vision, V(λ), solid line), and low radiance levels (scotopic vision, V′(λ), dashed line); data from [105]. b Relative cone sensitivities of the human eye after DeMarco et al. [28].

 






Human Color Vision

The human visual system responds only to electromagnetic radiation having wavelengths between about 360 and 800 nm. It is very insensitive at wavelengths between 360 and about 410 nm and between 720 and 830 nm. Even for individuals without vision defects, there is some variation in the spectral response. Thus, the visible range in the electromagnetic spectrum (light, Fig. 6.2) is somewhat uncertain.

The retina of the eye onto which the image is projected contains two general classes of receptors, rods and cones. Photopigments in the outer segments of the receptors absorb radiation. The absorbed energy is then converted into neural electrochemical signals which are transmitted via subsequent neurons and the optic nerve to the brain. Three different types of photopigments in the cones make them sensitive to different spectral ranges and, thus, enable color vision (Fig. 6.5b). Vision with cones is only active at high and medium illumination levels and is also called photopic vision. At low illumination levels, vision is taken over by the rods. This type of vision is called scotopic vision.

At first glance it might seem impossible to measure the spectral response of the eye quantitatively, since we can only rely on the subjective impression of how the human eye senses “radiance”. However, the spectral response of the human eye can be measured by making use of the fact that it can sense brightness differences very sensitively. Based on extensive studies with many individuals, in 1924 the International Lighting Commission (CIE) set a standard for the spectral response of the human observer under photopic conditions that was slightly revised several times later on. Figure 6.5 shows the 1980 values. The relative spectral response curve for scotopic vision, V′(λ), is similar in shape, but the peak is shifted from about 555 nm to 510 nm (Fig. 6.5a).



Physiological measurements can only give a relative spectral luminous efficiency function. Therefore, it is required to set a new unit for luminous quantities. This new unit is the candela; it is one of the seven fundamental units of the metric system (Système Internationale, or SI). The candela is defined to be the luminous intensity of a monochromatic source with a frequency of 5.4 × 10¹⁴ Hz and a radiant intensity of 1/683 W/sr. The odd factor 1/683 has historical reasons because the candela was previously defined independently of radiant quantities.

With this definition of the luminous intensity and the capability of the eye to detect small changes in brightness, the luminous intensity of any light source can be measured by comparing it to a standard light source. This approach, however, would refer the luminous quantities to an individual observer. Therefore, it is much better to use the standard spectral luminous efficacy function. Then, any luminous quantity can be computed from its corresponding radiometric quantity by:

$$\begin{aligned}
Q_v &= 683\,\frac{\mathrm{lm}}{\mathrm{W}} \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} Q(\lambda)\,V(\lambda)\,\mathrm{d}\lambda &&\text{photopic},\\
Q'_v &= 1754\,\frac{\mathrm{lm}}{\mathrm{W}} \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} Q(\lambda)\,V'(\lambda)\,\mathrm{d}\lambda &&\text{scotopic},
\end{aligned} \tag{6.17}$$


where V(λ) is the spectral luminous efficacy for day vision (photopic). A list of all photometric quantities and their radiant equivalents can be found in Appendix A (R15). The unit of luminous flux, the photometric quantity equivalent to radiant flux (unit W), is lumen (lm).
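As a rough numerical sketch of Eq. (6.17), the snippet below integrates a spectral radiant flux against a luminous efficiency curve. The Gaussian stand-in for V(λ) and the flat test spectrum are assumptions; tabulated CIE data should replace them in any real computation.

```python
import numpy as np

# Luminous flux from spectral radiant flux, Eq. (6.17), photopic case.
# The Gaussian is only a crude stand-in for the tabulated CIE V(lambda).
lam = np.arange(380.0, 781.0, 1.0)              # wavelength grid [nm]
V = np.exp(-0.5 * ((lam - 555.0) / 45.0) ** 2)  # assumed approximation of V(lambda)
Q = np.full_like(lam, 1e-3)                     # hypothetical flat spectrum [W/nm]

Qv = 683.0 * np.sum(Q * V) * (lam[1] - lam[0])  # luminous flux [lm]
print(f"luminous flux: {Qv:.0f} lm")
```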

In terms of the spectral sampling techniques summarized above, human color vision can be regarded as a blend of band sampling and model-based sampling. The sensitivities cover different bands with maximal sensitivities at 445 nm, 535 nm, and 575 nm, respectively, but these bands overlap each other significantly (Fig. 6.5b). In contrast to our model examples, the three sensor channels are unequally spaced and cannot simply be linearly related. Indeed, the color sensitivity of the human eye is uneven, and all the nonlinearities involved make the science of color vision rather difficult. Here, we give only some basic facts insofar as they are useful for handling color images.

With three color sensors, it is obvious that color signals cover a 3-D space. Each point in this space represents one color. It is clear that many spectral distributions, known as metameric color stimuli or just metameres, map onto one point in the color space. Generally, we can write the signal si received by a sensor with a spectral responsivity Ri(λ) as

$$s_i = \int R_i(\lambda)\,\phi(\lambda)\,\mathrm{d}\lambda. \tag{6.18}$$



 

With three primary color sensors, a triple of values is received, often called a tristimulus.

One of the most important questions in colorimetry is how to set up a system representing colors as linear combinations of some basic or primary colors. A set of three spectral distributions φj(λ) represents a set of primary colors and results in an array of responses that can be described by the matrix P with

$$p_{ij} = \int R_i(\lambda)\,\phi_j(\lambda)\,\mathrm{d}\lambda. \tag{6.19}$$

Each vector pj = (p1j, p2j, p3j) represents a tristimulus of the primary colors in the 3-D color space. It is obvious that only those colors can be represented that are a linear combination of the base vectors pj:

$$\boldsymbol{s} = R\,\boldsymbol{p}_1 + G\,\boldsymbol{p}_2 + B\,\boldsymbol{p}_3 \quad\text{with}\quad 0 \le R, G, B \le 1, \tag{6.20}$$

where the coefficients are denoted by R, G, and B, indicating the three primary colors red, green, and blue. Only if the three base vectors pj form an orthogonal basis can all colors be represented as a linear combination of them. One possible and easily realizable primary color system is formed by the monochromatic colors red, green, and blue with wavelengths 700 nm, 546.1 nm, and 435.8 nm, as adopted by the CIE in 1931. In the following, we use the primary color system according to the European EBU norm with red, green, and blue phosphor, as this is the standard way color images are displayed.

Given the significant overlap in the spectral responses of the three types of cones (Fig. 6.5b), especially in the green range, it is obvious that no primary colors exist that can span the whole color space. The colors that can be represented lie within the parallelepiped formed by the three base vectors of the primary colors. The more the primary colors are correlated with each other, i.e., the smaller the angle between two of them, the smaller is the color space that can be represented by them. Mathematically, colors that cannot be represented by a set of primary colors have at least one negative coefficient in Eq. (6.20).

One component in the 3-D color space is intensity. If a color vector is multiplied by a scalar, only its intensity is changed but not the color. Thus, all colors can be normalized by the intensity. This operation reduces the 3-D color space to a 2-D color plane or chromaticity diagram:

$$r = \frac{R}{R+G+B}, \qquad g = \frac{G}{R+G+B}, \qquad b = \frac{B}{R+G+B}, \tag{6.21}$$

with

$$r + g + b = 1. \tag{6.22}$$

It is sufficient to use only the two components r and g. The third component is then given by b = 1 − r − g, according to Eq. (6.22). Thus,



all colors that can be represented by the three primary colors R, G, and B are confined within a triangle in the rg space as shown in Fig. 6.6a. As already mentioned, some colors cannot be represented by the primary colors. The boundary of all possible colors is given by the visible monochromatic colors from deep red to blue. The line of monochromatic colors forms a U-shaped curve in the rg-space. Since all colors that lie on a straight line between two colors can be generated as an additive mixture of these colors, the space of all possible colors covers the area filled by the U-shaped spectral curve and the straight mixing line between its two end points for blue and red color (purple line).

In order to avoid negative color coordinate values, often a new coordinate system is chosen with virtual primary colors, i.e., primary colors that cannot be realized by any physical colors. This color system is known as the XYZ color system and is constructed in such a way that it just includes the curve of monochromatic colors with only positive coefficients (Fig. 6.6c); it is given by the following linear coordinate transform:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.490 & 0.310 & 0.200 \\ 0.177 & 0.812 & 0.011 \\ 0.000 & 0.010 & 0.990 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}. \tag{6.23}$$

The back-transform from the XYZ color system to the RGB color system is given by the inverse of the matrix in Eq. (6.23).
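A small sketch of Eqs. (6.21)–(6.23) may be useful here. The matrix values are those of Eq. (6.23); the input tristimulus is an arbitrary example.

```python
import numpy as np

# Chromaticity coordinates and the RGB-to-XYZ transform, Eqs. (6.21)-(6.23).
# The matrix is copied from Eq. (6.23); the input color is arbitrary.
M = np.array([[0.490, 0.310, 0.200],
              [0.177, 0.812, 0.011],
              [0.000, 0.010, 0.990]])

rgb = np.array([0.8, 0.5, 0.2])        # some RGB tristimulus, range [0, 1]

xyz = M @ rgb                          # Eq. (6.23)
rgb_back = np.linalg.solve(M, xyz)     # back-transform via the inverse matrix

r, g, b = rgb / rgb.sum()              # rg chromaticity, Eq. (6.21); b = 1 - r - g
x, y = xyz[:2] / xyz.sum()             # xy chromaticity, same normalization
print(r, g, x, y, rgb_back)
```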

The color systems discussed so far do not directly relate to the human sense of color. From the rg or xy values, we cannot directly infer colors such as green or blue. A more natural description of color includes, besides the luminance (intensity), the type of color, such as green or blue (hue), and the purity of the color (saturation). From a pure color, we can obtain any degree of saturation by mixing it with white.

Hue and saturation can be extracted from chromaticity diagrams by simple coordinate transformations. The point of reference is the white point in the middle of the chromaticity diagram (Fig. 6.6b). If we draw a line from this point to a pure (monochromatic) color, it constitutes a mixing line for a pure color with white and is thus a line of constant hue. From the white point to the pure color, the saturation increases linearly. The white point is given in the rg chromaticity diagram by w = [1/3, 1/3]ᵀ.

A color system that has its center at the white point is called a color diff erence system. From a color diff erence system, we can infer a hue- saturation color system (hue, saturation, and density; HIS) by simply using polar coordinate systems. Then, the saturation is proportional to the radius and the hue to the angle (Fig. 6.6b).

So far, color science is easy. All the real difficulties arise from the need to adapt the color system in an optimum way to display and print devices and for transmission by television signals, or to correct for the


 

             
             

Figure 6.6: Chromaticity diagram shown in the a rg-color space, b uv-color space, c xy-color space; the shaded triangles indicate the colors that can be generated by additive color mixing using the primary colors R, G, and B.

 

uneven color resolution of the human visual system that is apparent in the chromaticity diagrams of simple color spaces (Fig. 6.6). These needs have led to a confusing variety of different color systems (R16).

 

6.4 Interactions of Radiation with Matter‡

 

The interaction of radiation with matter is the basis for any imaging technique. Basically, two classes of interactions of radiation with matter can be distinguished. The first class is related to the discontinuities of optical properties at the interface between two different materials (Fig. 6.7a). The second class is volume-related and depends on the optical properties of the material (Fig. 6.7b). In this section, we give a brief summary of the most important phenomena. The idea is to give the reader an overview of the many possible ways to measure material properties with imaging techniques.




[Figure 6.7 panel labels: a surface emission, stimulated emission, reflection, refraction; b volumetric emission, stimulated emission, absorption, refraction (gradient of index of refraction), scattering, rotation of polarization plane (optical activity), frequency doubling and tripling, nonlinear effects and two-photon processes.]

Figure 6.7: Principal possibilities for the interaction of radiation and matter: a at the surface of an object, i.e., at the discontinuity of optical properties; b volume-related.

 

6.4.1 Thermal Emission

Emission of electromagnetic radiation occurs at any temperature and is thus a ubiquitous form of interaction between matter and electromagnetic radiation. The cause of the spontaneous emission of electromagnetic radiation is thermal molecular motion, which increases with temperature. During emission of radiation, thermal energy is converted to electromagnetic radiation and, according to the universal law of energy conservation, the matter cools down.

An upper limit for thermal emission exists. According to the laws of thermodynamics, the fraction of radiation at a certain wavelength that is absorbed must also be re-emitted: thus, there is an upper limit for the emission, reached when the absorptivity is one. A perfect absorber — and thus a maximal emitter — is called a blackbody.

The correct theoretical description of the radiation of a blackbody by Planck in 1900 required the assumption of emission and absorption of radiation in discrete energy quanta E = hν. The spectral radiance of a blackbody with the



 

 

Figure 6.8: Spectral radiance Le of a blackbody at different absolute temperatures T in K as indicated. The thin line crosses the emission curves at the wavelength of maximum emission.

 

 

absolute temperature T is (Fig. 6.8):


$$L_e(\nu, T) = \frac{2h\nu^3}{c^2}\,\frac{1}{\exp\!\left(\dfrac{h\nu}{k_B T}\right) - 1}, \qquad L_e(\lambda, T) = \frac{2hc^2}{\lambda^5}\,\frac{1}{\exp\!\left(\dfrac{hc}{k_B T \lambda}\right) - 1}, \tag{6.24}$$

with

$$\begin{aligned}
h &= 6.6262 \times 10^{-34}\,\mathrm{J\,s} &&\text{Planck constant},\\
k_B &= 1.3806 \times 10^{-23}\,\mathrm{J\,K^{-1}} &&\text{Boltzmann constant, and}\\
c &= 2.9979 \times 10^8\,\mathrm{m\,s^{-1}} &&\text{speed of light in vacuum}.
\end{aligned} \tag{6.25}$$


Blackbody radiation has the important feature that the emitted radiation does not depend on the viewing angle. Such a radiator is called a Lambertian radiator. Therefore the spectral emittance (the constant radiance integrated over a hemisphere) is π times higher than the radiance:


$$M_e(\lambda, T) = \frac{2\pi hc^2}{\lambda^5}\,\frac{1}{\exp\!\left(\dfrac{hc}{k_B T \lambda}\right) - 1}. \tag{6.26}$$


 

The total emittance of a blackbody integrated over all wavelengths is proportional to T⁴ according to the law of Stefan and Boltzmann:

$$M_e = \int_0^\infty M_e(\lambda)\,\mathrm{d}\lambda = \frac{2 k_B^4 \pi^5}{15 c^2 h^3}\,T^4 = \sigma T^4, \tag{6.27}$$




 



Figure 6.9: Radiance of a blackbody at environmental temperatures as indicated in the wavelength ranges of a 0–20 µm and b 3–5 µm.


Figure 6.10: Relative photon-based radiance in the temperature interval 0–40 °C and at wavelengths in µm as indicated: a related to the radiance at 40 °C; b relative change in percent per degree.

 

where σ ≈ 5.67 × 10⁻⁸ W m⁻² K⁻⁴ is the Stefan–Boltzmann constant. The wavelength of maximum emittance of a blackbody is given by Wien's law:

$$\lambda_m = \frac{2.898 \times 10^{-3}\,\mathrm{K\,m}}{T}. \tag{6.28}$$

The maximum excitance at room temperature (300 K) is in the infrared at about 10 µm and at 3000 K (incandescent lamp) in the near infrared at 1 µm.
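For readers who want to reproduce such numbers, the following sketch evaluates Eq. (6.26) and checks Eqs. (6.27) and (6.28) numerically; the wavelength grid and temperature are arbitrary choices.

```python
import numpy as np

# Blackbody excitance, Eqs. (6.26)-(6.28). Constants from Eq. (6.25).
h, kB, c = 6.6262e-34, 1.3806e-23, 2.9979e8

def M_e(lam, T):
    """Spectral excitance [W m^-3] of a blackbody, Eq. (6.26)."""
    return 2 * np.pi * h * c**2 / lam**5 / (np.exp(h * c / (kB * T * lam)) - 1)

T = 300.0                                   # room temperature [K]
lam = np.linspace(1e-6, 1e-3, 200_000)      # 1 um ... 1 mm [m]
Me_total = np.sum(M_e(lam, T)) * (lam[1] - lam[0])

sigma = 2 * kB**4 * np.pi**5 / (15 * c**2 * h**3)   # Eq. (6.27)
lam_max = 2.898e-3 / T                              # Wien's law, Eq. (6.28)
print(Me_total, sigma * T**4, lam_max)      # ~459 W/m^2 twice; ~9.7 um
```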

Real objects emit less radiation than a blackbody. The ratio of the emission of a real body to the emission of the blackbody is called the (specific) emissivity ε and depends on the wavelength.

Radiation in the infrared and microwave range can be used to image the temperature distribution of objects. This application of imaging is known as thermography. Thermal imaging is complicated by the fact that real objects are not perfect blackbodies. Thus they partly reflect radiation from the surroundings.



 

If an object has emissivity ε, a fraction 1 − ε of the received radiation originates from the environment, biasing the temperature measurement. Under the simplifying assumption that the environment has a constant temperature Te, we can estimate the influence of the reflected radiation on the temperature measurement. The total radiance emitted by the object, E, is

$$E = \varepsilon \sigma T^4 + (1 - \varepsilon)\,\sigma T_e^4. \tag{6.29}$$

This radiance is interpreted to originate from a blackbody with the apparent temperature T′:

$$\sigma T'^4 = \varepsilon \sigma T^4 + (1 - \varepsilon)\,\sigma T_e^4. \tag{6.30}$$

Rearranging for T′ yields

$$T' = T \left(1 + \frac{(1 - \varepsilon)\left(T_e^4 - T^4\right)}{T^4}\right)^{1/4}. \tag{6.31}$$


In the limit of small temperature differences (Te − T ≪ T), Eq. (6.31) reduces to

$$T' \approx \varepsilon T + (1 - \varepsilon)\,T_e \quad\text{or}\quad T' - T = (1 - \varepsilon)(T_e - T). \tag{6.32}$$

From this simplified equation, we infer that a 1 % deviation of ε from unity results in a 0.01 K temperature error per 1 K difference between the object temperature and the environmental temperature. Even for an almost perfect blackbody such as a water surface with a mean emissivity of about 0.97, this leads to considerable errors in absolute temperature measurements. The apparent temperature of a bright sky can easily be 80 K colder than the temperature of a water surface at 300 K, leading to a 0.03 · 80 K = 2.4 K bias in the measurement of the absolute temperature of the water surface.

This bias can, according to Eqs. (6.31) and (6.32), be corrected if the mean temperature of the environment is known. Relative temperature measurements are also biased, although to a less significant degree. Assuming a constant environmental temperature in the limit (Te − T) ≪ T, we can infer from Eq. (6.32) that

$$\partial T' \approx \varepsilon\,\partial T \quad\text{for}\quad (T_e - T) \ll T, \tag{6.33}$$

which means that the measured temperature differences are smaller by the factor ε than in reality.
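A minimal sketch of this correction, assuming the emissivity and the environmental (sky) temperature are known; the numbers follow the water-surface example above.

```python
# Correcting a radiometric temperature for emissivity, Eqs. (6.29)-(6.31).
# eps and T_env are assumed known; values follow the water/sky example.
def true_temperature(T_apparent, eps, T_env):
    """Invert Eq. (6.30): sigma*T'^4 = eps*sigma*T^4 + (1-eps)*sigma*Te^4."""
    return ((T_apparent**4 - (1.0 - eps) * T_env**4) / eps) ** 0.25

eps, T_env = 0.97, 220.0      # water emissivity; apparent sky temperature [K]
T_app = 298.4                 # apparent water temperature seen by the camera [K]
print(true_temperature(T_app, eps, T_env))   # ~300 K: the ~2 K bias is removed
```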

Other corrections must be applied if radiation is significantly absorbed on the way from the object to the receiver. If the distance between the object and the camera is large, as for space-based or aerial infrared imaging of the Earth's surface, it is important to select a wavelength range with a minimum absorption. The two most important atmospheric windows are at 3–5 µm (with a sharp absorption peak around 4.15 µm due to CO₂) and at 8–12 µm.

Figure 6.9 shows the radiance of a blackbody at environmental temperatures between 0 and 40 °C in the 0–20 µm and 3–5 µm wavelength ranges. Although the radiance has its maximum around 10 µm and is about 20 times higher than at 4 µm, the relative change of the radiance with temperature is much larger at 4 µm than at 10 µm.

This effect can be seen in more detail by examining the radiance relative to the radiance at a fixed temperature (Fig. 6.10a) and the relative radiance change (∂L/∂T)/L in percent (Fig. 6.10b). While the radiance at 20 °C changes only about



 





















Figure 6.11: Some examples of thermography: a Heidelberg University building taken on a cold winter day, b street scene, c look inside a PC, and d person with lighter.

 

 

1.7 %/K at 10 µm wavelength, it changes about 4 %/K at 4 µm wavelength. This higher relative sensitivity makes it advantageous to use the 3–5 µm wavelength range for measurements of small temperature differences although the absolute radiance is much lower.

Some images illustrating the application of thermography are shown in Fig. 6.11.

 

6.4.2 Refraction, Reflection, and Transmission‡

At the interface between two optical media, according to Snell's law, the transmitted ray is refracted, i.e., it changes direction (Fig. 6.12):


$$\frac{\sin\theta_1}{\sin\theta_2} = \frac{n_2}{n_1}, \tag{6.34}$$



 


Figure 6.12: a A ray changes direction at the interface between two optical media with a different index of refraction. b Parallel polarized light is entirely transmitted and not reflected when the angle between the reflected and transmitted beam would be 90°. This condition occurs at transitions both from the optically thinner medium and from the thicker one.

 

where θ1 and θ2 are the angles of incidence and refraction, respectively. Refraction is the basis for transparent optical elements (lenses) that can form an image of an object. This means that all rays emitted from a point of the object and passing through the optical element converge at another point in the image plane.

A specular surface behaves like a mirror. Light irradiated in the direction (θi, φi) is reflected back in the direction (θi, φi + π). This means that the angle of reflectance is equal to the angle of incidence and that the incident and reflected rays and the normal of the surface lie in one plane. The ratio of the reflected radiant flux to the incident flux at the surface is called the reflectivity ρ.

Specular reflection occurs only when all parallel incident rays are reflected as parallel rays. A surface need not be perfectly smooth for specular reflectance because of the wave-like nature of electromagnetic radiation. It is sufficient that the residual surface irregularities are significantly smaller than the wavelength. The reflectivity ρ depends on the angle of incidence, the refractive indices, n1 and n2, of the two media meeting at the interface, and the polarization state of the radiation. Light is called parallel or perpendicular polarized if the electric field vector is parallel or perpendicular to the plane of incidence, i.e., the plane containing the directions of incidence, reflection, and the surface normal.

Fresnel's equations give the reflectivity for parallel polarized light:

$$\rho_\parallel = \frac{\tan^2(\theta_1 - \theta_2)}{\tan^2(\theta_1 + \theta_2)}, \tag{6.35}$$

for perpendicular polarized light:

$$\rho_\perp = \frac{\sin^2(\theta_1 - \theta_2)}{\sin^2(\theta_1 + \theta_2)}, \tag{6.36}$$

and for unpolarized light (see Fig. 6.13):

$$\rho = \frac{\rho_\parallel + \rho_\perp}{2}, \tag{6.37}$$

















 

Figure 6.13: Interface reflectivities for parallel (∥) and perpendicular (⊥) polarized light and unpolarized light incident from a air (n1 = 1.00) to BK7 glass (n2 = 1.517), b BK7 glass to air.

 

respectively, where θ1 and θ2 are the angles of the incident and refracted rays related by Snell's law.

At normal incidence (θ1 = 0), the reflectivity does not depend on the polarization state:

$$\rho = \frac{(n_1 - n_2)^2}{(n_1 + n_2)^2} = \frac{(n - 1)^2}{(n + 1)^2} \quad\text{with}\quad n = n_1/n_2. \tag{6.38}$$


As illustrated in Fig. 6.13, parallel polarized light is not reflected at all at a certain angle, the polarizing or Brewster angle θb. This condition occurs when the refracted and reflected rays would be perpendicular to each other (Fig. 6.12b):

$$\theta_b = \arcsin\frac{1}{\sqrt{1 + n_1^2/n_2^2}}. \tag{6.39}$$


When a ray enters a medium with a lower refractive index, there is a critical angle θc,

$$\theta_c = \arcsin\frac{n_1}{n_2} \quad\text{with}\quad n_1 < n_2, \tag{6.40}$$

beyond which all light is reflected and none enters the optically thinner medium. This phenomenon is called total reflection.
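The following sketch collects Eqs. (6.34)–(6.40) in one place; the indices follow the air/BK7 example of Fig. 6.13, and the 45° test angle is an arbitrary choice.

```python
import numpy as np

# Snell's law, Fresnel reflectivities, Brewster and critical angles,
# Eqs. (6.34)-(6.40). Indices follow the air/BK7 example of Fig. 6.13.
n1, n2 = 1.000, 1.517            # air -> BK7 glass

def reflectivities(theta1_deg):
    t1 = np.radians(theta1_deg)
    t2 = np.arcsin(np.sin(t1) * n1 / n2)                 # Snell's law (6.34)
    rho_par = np.tan(t1 - t2)**2 / np.tan(t1 + t2)**2    # parallel (6.35)
    rho_perp = np.sin(t1 - t2)**2 / np.sin(t1 + t2)**2   # perpendicular (6.36)
    return rho_par, rho_perp, 0.5 * (rho_par + rho_perp) # unpolarized (6.37)

theta_b = np.degrees(np.arctan(n2 / n1))   # Brewster angle, ~56.6 deg
theta_c = np.degrees(np.arcsin(n1 / n2))   # critical angle glass->air, ~41.2 deg
print(reflectivities(45.0), theta_b, theta_c)
```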

 

6.4.3 Rough Surfaces‡

Most natural and also artificial objects do not reflect light directly but show a diffuse reflectance, as microscopic surface roughness causes reflection in various directions depending on the slope distribution of the reflecting facets. There is a great variety in how these rays are distributed over the emerging solid angle. Some materials produce strong forward scattering effects while others scatter almost equally in all directions. Other materials show a kind of mixed reflectivity which is partly specular due to reflection at the smooth surface and partly diffuse caused by body reflection. In this case, light penetrates partly into the object where it is scattered at optical inhomogeneities. Part of this scattered light leaves the object again, causing a diffuse reflection. To image objects that



 

do not emit radiation by themselves but passively reflect incident light, it is essential to know how the light is reflected.

Generally, the relation between the incident and emitted radiance can be expressed as the ratio of the radiance emitted at the polar angle θe and the azimuth angle φe to the irradiance received at the incidence angles (θi, φi). This ratio is called the bidirectional reflectance distribution function (BRDF) or reflectivity distribution, since it generally depends on the angles of both the incident and excitant radiance:

$$f(\theta_i, \phi_i, \theta_e, \phi_e) = \frac{L_e(\theta_e, \phi_e)}{E_i(\theta_i, \phi_i)}. \tag{6.41}$$

For a perfect mirror (specular reflection), f is zero everywhere, except for θe = θi and φe = π + φi, hence

$$f(\theta_i, \phi_i, \theta_e, \phi_e) = \delta(\theta_i - \theta_e) \cdot \delta(\phi_e - \pi - \phi_i). \tag{6.42}$$

The other extreme is a perfect diffuser, reflecting incident radiation equally into all directions independently of the angle of incidence. Such a surface is known as a Lambertian radiator or Lambertian reflector. The radiance of such a surface is independent of the viewing direction:

$$L_e = \frac{1}{\pi}\,E_i \quad\text{or}\quad f(\theta_i, \phi_i, \theta_e, \phi_e) = \frac{1}{\pi}. \tag{6.43}$$
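As a trivial numeric illustration of Eq. (6.43), assuming an arbitrary irradiance value:

```python
import numpy as np

# Radiance of a perfect diffuser, Eq. (6.43). E_i is an arbitrary example.
E_i = 500.0                 # received irradiance [W/m^2]
f = 1.0 / np.pi             # constant BRDF of a Lambertian reflector [1/sr]
L_e = f * E_i               # emitted radiance [W/(m^2 sr)]
print(L_e)                  # ~159.2, the same for every viewing direction
```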

 

6.4.4 Absorptance and Transmittance‡

Radiation traveling in matter is more or less absorbed and converted into different energy forms, especially heat. The radiation absorbed in a thin layer dx is proportional to the radiant intensity I. Therefore

$$\frac{\mathrm{d}I(\lambda)}{\mathrm{d}x} = -\alpha(\lambda, x)\,I. \tag{6.44}$$

The absorption coefficient α is a property of the medium and depends on the wavelength of the radiation. It is a reciprocal length with the unit m⁻¹. By integration of Eq. (6.44), we can compute the attenuation of radiation over the distance from 0 to x:

$$I(x) = I(0) \cdot \exp\left(-\int_0^x \alpha(\lambda, x')\,\mathrm{d}x'\right), \tag{6.45}$$

or, if the medium is homogeneous (i.e., α does not depend on the position x′),

$$I(x) = I(0)\,\exp\left(-\alpha(\lambda)\,x\right). \tag{6.46}$$

The exponential attenuation of radiation in a homogeneous medium, as expressed by Eq. (6.46), is often referred to as Lambert–Beer's or Bouguer's law. After a distance of 1/α, the radiation is attenuated to 1/e of its initial value.

The path integral over the absorption coefficient,

$$\tau(x_1, x_2) = \int_{x_1}^{x_2} \alpha(x')\,\mathrm{d}x', \tag{6.47}$$


6.4 Interactions of Radiation with Matter‡                                                       171

 

results in a dimensionless quantity that is known as the optical thickness or optical depth. The optical depth is a logarithmic expression of radiation attenuation and means that along the path from the point x1 to the point x2 the radiation has been attenuated to e⁻ᵗ (with t = τ) of its initial value.
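A tiny numeric sketch of Eqs. (6.46) and (6.47); the absorption coefficient and path lengths are arbitrary example values.

```python
import numpy as np

# Exponential attenuation and optical depth, Eqs. (6.46)-(6.47),
# for a homogeneous medium. alpha and x are arbitrary examples.
alpha = 2.0                      # absorption coefficient [1/m]
x = np.linspace(0.0, 2.0, 5)     # path lengths [m]

I_rel = np.exp(-alpha * x)       # I(x)/I(0), Lambert-Beer law
tau = alpha * x                  # optical depth of the homogeneous path
print(I_rel, tau)                # I/I0 = 1/e after x = 1/alpha = 0.5 m
```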

If radiation travels in a composite medium, often only one chemical species — at least at certain wavelengths — is responsible for the attenuation of the radiation. Therefore, it makes sense to relate the absorption coefficient to the concentration of that species:

$$\alpha = \varepsilon \cdot c, \qquad [\varepsilon] = \mathrm{l\,mol^{-1}\,m^{-1}}, \tag{6.48}$$

where c is the concentration in mol/l. Then, ε is known as the molar absorption coefficient. The simple linear relation Eq. (6.48) holds for a very wide range of radiant intensities but breaks down at very high intensities, e.g., the absorption of highly intense laser beams. At that point, the domain of nonlinear optical phenomena is entered.

As the absorption coefficient is a distinct optical feature of chemical species, it can be used in imaging applications to identify chemical species and to measure their concentrations.

Finally, the term transmittance means the fraction of radiation that remains after the radiation has traveled a certain path in the medium. Often, transmittance and transmissivity are confused. In contrast to transmittance, the term transmissivity is related to a single surface: it means the fraction of radiation that is not reflected but enters the medium.

 

6.4.5 Scattering‡

The attenuation of radiation by scattering can be described with the same concepts as for loss of radiation by absorption. The scattering coefficient is defined by

$$\beta(\lambda) = -\frac{1}{I}\,\frac{\mathrm{d}I(\lambda)}{\mathrm{d}x}. \tag{6.49}$$

It is a reciprocal length with the unit m⁻¹. If in a medium the radiation is attenuated both by absorption and by scattering, the two effects can be combined in the extinction coefficient κ(λ):

$$\kappa(\lambda) = \alpha(\lambda) + \beta(\lambda). \tag{6.50}$$

Unfortunately, there is no unified terminology and symbolism for these various coefficients. The different communities use different symbols and slightly different definitions.

Although scattering appears to be similar to absorption, it is a much more difficult phenomenon. The above formula can only be used if the radiation from the individual scattering events adds up incoherently at some point far from the particles. The complexity of scattering is related to the fact that the scattered radiation (without additional absorption) is never lost. Scattered light can be scattered more than once. Therefore, a fraction of it can reenter the original beam. The probability that radiance will be scattered in a certain path length more than once is directly related to the total attenuation by scattering along



 

the path of the beam, i.e., the optical depth τ. If τ is smaller than 0.1, less than 10 % of the radiance is scattered.

The total amount of scattered light and its angular distribution are related to the optical properties of the scattering medium: scattering is caused by the optical inhomogeneity of the medium. In the further discussion we assume that small spherical particles with radius r and index of refraction n are embedded in a homogeneous optical medium.

Scattering by a particle is described by the cross-section. It is defined in terms of the ratio of the flux removed by the particle to the flux incident on the particle:

$$\sigma_s = \frac{\phi_s}{\phi}\,\pi r^2. \tag{6.51}$$

The cross-section has the units of area. It can be regarded as the effective area of the particle for scattering that completely scatters the incident radiative flux. Therefore, the efficiency factor for scattering Qs is defined as the cross-section related to the geometric cross-section of the scattering particle:

$$Q_s = \sigma_s/(\pi r^2). \tag{6.52}$$

The angular distribution of the scattered radiation is given by the differential cross-section dσs/dΩ, i.e., the flux density scattered per unit solid angle. The total cross-section is given as the integral of the differential cross-section over the sphere:

$$\sigma_s = \int \frac{\mathrm{d}\sigma_s}{\mathrm{d}\Omega}\,\mathrm{d}\Omega. \tag{6.53}$$

The relation between the scattering coefficient β, Eq. (6.49), and the scattering cross-section can be derived as follows. Let N be the number of particles per unit volume. Within a layer of unit thickness, the total effective scattering cross-section then covers the area Nσs. This area compared to the unit area gives the fraction of the incident flux that is removed and is thus equal to the scattering coefficient β:

$$\beta = N\sigma_s. \tag{6.54}$$

The scattering by small particles is most significantly influenced by the ratio of the particle size to the wavelength of the radiation, expressed in the dimensionless particle size q = 2πr/λ = rk. If q ≪ 1 (Rayleigh scattering), the scattering is very weak and proportional to λ⁻⁴:

$$\sigma_s/\pi r^2 = \frac{8}{3}\,q^4 \left|\frac{n^2 - 1}{n^2 + 2}\right|^2. \tag{6.55}$$


 

For q ≫ 1, the scattering can be described by geometrical optics. If the particle completely reflects the incident radiation, the scattering cross-section is equal to the geometric cross-section (σs/πr² = 1) and the differential cross-section is constant (isotropic scattering, dσ/dΩ = r²/4).
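A short sketch of Eqs. (6.52), (6.54), and (6.55); the particle radius, refractive index, and number density are arbitrary example values.

```python
import numpy as np

# Rayleigh scattering efficiency and scattering coefficient,
# Eqs. (6.52), (6.54), (6.55). All numeric values are arbitrary examples.
r, n, lam = 20e-9, 1.33, 550e-9   # particle radius [m], index, wavelength [m]
N = 1e15                          # particle number density [1/m^3]

q = 2 * np.pi * r / lam           # dimensionless particle size; q << 1 here
Qs = (8.0 / 3.0) * q**4 * abs((n**2 - 1) / (n**2 + 2))**2   # Eq. (6.55)
sigma_s = Qs * np.pi * r**2       # scattering cross-section, from Eq. (6.52)
beta = N * sigma_s                # scattering coefficient [1/m], Eq. (6.54)
print(q, Qs, beta)
```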

Scattering for particles with sizes of about the wavelength of the radiation (Mie scattering) is very complex due to diffraction and interference effects of the light scattered from different portions of the surface of the particle. The differential cross-section shows strong variations with the scattering angle and is directed mostly in the forward direction, while Rayleigh scattering is fairly isotropic.



 

6.4.6 Optical Activity‡

An optically active material rotates the plane of polarization of electromagnetic radiation. The rotation is proportional to the concentration of the optically active material, c, and the path length d:

$$\varphi = \gamma(\lambda)\,c\,d. \tag{6.56}$$

The constant γ is known as the specific rotation and has the units [m² mol⁻¹] or [cm² g⁻¹]; it depends strongly on the wavelength of the radiation. Generally, the specific rotation is significantly larger at shorter wavelengths.

Two well-known optically active materials are quartz crystals and sugar solution. Optical activity — including the measurement of the wavelength dependency — can be used to identify chemical species and to measure their concentration. With respect to visualization, optical activity is significant since it can be induced by various external forces, among others electrical fields (Kerr effect) and magnetic fields (Faraday effect).

 

6.4.7 Luminescence‡

Luminescence is the emission of radiation from materials that arises from a radiative transition from an excited state to a lower state. Fluorescence is luminescence characterized by short lifetimes of the excited state (on the order of nanoseconds), while the term phosphorescence is used for longer lifetimes (milliseconds to minutes).

Luminescence is an enormously versatile process since it can be triggered by various processes. In chemiluminescence, the energy required to generate the excited state is derived from the energy released by a chemical reaction. Chemiluminescence normally has only low efficiencies (i.e., number of photons emitted per reacting molecule), on the order of 1 % or less. Flames are the classic example of a low-efficiency chemiluminescent process. Bioluminescence is chemiluminescence in living organisms. Fireflies and the glow of marine microorganisms are well-known examples. The firefly reaction involves the enzymatic oxidation of luciferin. In contrast to most chemiluminescent processes, this reaction converts almost 100 % of the chemical energy into radiant energy. Low-level bioluminescent processes are common to many essential biological processes. Imaging of these processes is becoming an increasingly important tool to study biochemical processes.

Marking biomolecules with fluorescent dyes is becoming another increasingly sophisticated tool in biochemistry. It has even become possible to mark individual chromosomes or gene sequences in chromosomes with fluorescent dyes.

Luminescence always has to compete with other processes that deactivate the excited state without radiation emission. A prominent radiationless deactivation process is the energy transfer during the collision of molecules.

Some types of molecules, especially electronegative molecules such as oxygen, are very efficient in deactivating excited states during collisions. This process is referred to by the term quenching. The presence of a quenching molecule causes the fluorescence to decrease. Therefore, the measurement of the fluorescent irradiance can be used to measure the concentration of the quenching



 




 

Figure 6.14: Quenching of the fluorescence of pyrene butyric acid by dissolved oxygen: measurements and fit with the Stern–Vollmer equation (dashed line).

 

 

molecule. The dependence of the fluorescent intensity on the concentration of the quencher is given by the Stern–Vollmer equation:


$$\frac{L}{L_0} = \frac{1}{1 + k c_q}, \tag{6.57}$$


where L is the fluorescent radiance, L0 the fluorescent radiance when no quencher is present, cq the quencher concentration, and k the quenching constant, which depends on the lifetime of the fluorescent state. Efficient quenching requires that the excited state have a sufficiently long lifetime.

A fluorescent dye suited for quenching by dissolved oxygen is pyrene butyric acid (PBA) [189]. The relative fluorescent radiance of PBA as a function of dissolved oxygen is shown in Fig. 6.14 [127]. Fluorescence is stimulated by a pulsed nitrogen laser at 337 nm. The change in fluorescence is rather weak but sufficiently large to enable reliable measurements of the concentration of dissolved oxygen.
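As a sketch of how Eq. (6.57) is used in practice, the snippet below inverts it to recover the quencher concentration from a measured fluorescence ratio; the quenching constant is an assumed example value, not the calibrated constant for PBA.

```python
# Inverting the Stern-Vollmer equation (6.57) to estimate the quencher
# concentration. k is an assumed example value, not a PBA calibration.
k = 0.05                  # quenching constant [l/mg], hypothetical
L_over_L0 = 0.75          # measured relative fluorescent radiance

c_q = (1.0 / L_over_L0 - 1.0) / k     # from L/L0 = 1/(1 + k*c_q)
print(f"quencher concentration: {c_q:.1f} mg/l")   # ~6.7 mg/l
```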

 

6.4.8 Doppler Effect‡

A velocity difference between a radiating source and a receiver causes the receiver to measure a frequency different from that emitted by the source. This phenomenon is known as the Doppler effect. The frequency shift is directly proportional to the velocity difference according to


ν  = c − u k ¯   ν

 


or ∆ ν = ν


− ν = ( u s − u r )T k ,                     (6.58)

 


r

s
r     c − u k ¯   s


r       s        1 − u k ¯ /c


s
=  | |
where k ¯ k /   k  , ν s is the frequency of the source, ν r the frequency measured at the receiver, k the wave number of the radiation, c the propagation speed of the radiation, and u s and u r the velocities of the source and receiver relative to the medium in which the wave is propagating. Only the velocity component in the direction to the receiver causes a frequency shift.

If the source is moving towards the receiver (u_s·k̄ > 0), the frequency increases as the wave fronts follow each other faster. A critical limit is reached when the



source moves with the propagation speed of the radiation. Then, the radiation is left behind the source. For small velocities relative to the wave propagation speed, the frequency shift is directly proportional to the relative velocity between source and receiver:

$$\Delta\nu = (\boldsymbol{u}_s - \boldsymbol{u}_r)^T \boldsymbol{k}. \tag{6.59}$$

The relative frequency shift Δν/ν is given directly by the ratio of the velocity difference in the direction of the receiver to the wave propagation speed:

$$\frac{\Delta\nu}{\nu} = \frac{(\boldsymbol{u}_s - \boldsymbol{u}_r)^T \bar{\boldsymbol{k}}}{c}. \tag{6.60}$$

For electromagnetic waves, the velocity relative to a “medium” is not relevant. The theory of relativity gives the frequency

$$\nu_r = \frac{\nu_s}{\gamma\left(1 - \bar{\boldsymbol{k}}^T \boldsymbol{u}/c\right)} \quad\text{with}\quad \gamma = \frac{1}{\sqrt{1 - \left(|\boldsymbol{u}|/c\right)^2}}. \tag{6.61}$$

For small velocities (|u| ≪ c), this equation also reduces to Eq. (6.59) with u = u_s − u_r. In this case, acoustic and electromagnetic waves can be treated equally with respect to the frequency shift due to a relative velocity between the source and receiver.
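A minimal numeric sketch of Eq. (6.60) for an acoustic example; all values are arbitrary, and the first-order form is valid because |u| is much smaller than c.

```python
import numpy as np

# First-order Doppler shift, Eq. (6.60). Arbitrary acoustic example values.
c = 343.0                            # propagation speed (sound in air) [m/s]
nu_s = 1000.0                        # source frequency [Hz]
u_s = np.array([10.0, 0.0, 0.0])     # source velocity [m/s]
u_r = np.array([0.0, 0.0, 0.0])      # receiver at rest
k_bar = np.array([1.0, 0.0, 0.0])    # unit vector from source towards receiver

dnu = nu_s * (u_s - u_r) @ k_bar / c # relative shift times source frequency
print(nu_s + dnu)                    # ~1029 Hz: approaching source sounds higher
```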

 

6.5 Further Readings‡

 

This chapter covered a variety of topics that are not central to image processing yet are important to know for a correct image acquisition. You can refresh or extend your knowledge about electromagnetic waves with one of the classical textbooks on the subject, e.g., F. S. Crawford [36], Hecht [66], or Towne [184]. Stewart [178] and Drury [33] address the interaction of radiation with matter in the field of remote sensing. Richards [150] gives a survey of imaging techniques across the electromagnetic spectrum. The topic of infrared imaging has become a subject of its own and is treated in detail by Gaussorgues [49] and Holst [71]. Pratt [142] gives a good description of color vision with respect to image processing. The practical aspects of photometry and radiometry are covered by the “Handbook of Applied Photometry” by DeCusaris [26]. The oldest application area of quantitative visualization is hydrodynamics. A fascinating insight into flow visualization with many images is given by the “Atlas of Visualization” edited by Nakayama and Tanida [129].





Image Formation

Introduction

Image formation includes three major aspects. One is geometric in nature. The question is where we find an object in the image. Essentially, all imaging techniques project a three-dimensional space in one way or the other onto a two-dimensional image plane. Thus, basically, imaging can be regarded as a projection from 3-D into 2-D space. The loss of one coordinate constitutes a severe loss of information about the geometry of the observed scene. However, we unconsciously and constantly experience that our visual system perceives a three-dimensional impression sufficiently well that we can grasp the three-dimensional world around us and interact with it. The ease with which this reconstruction task is performed by biological visual systems might tempt us to think that this is a simple task. But — as we will see in Chapters 8 and 17 — it is not that simple.

The second aspect is radiometric in nature. How “bright” is an imaged object, and how does the brightness in the image depend on the optical properties of the object and the image formation system? The radiometry of an imaging system is discussed in Section 7.5. For the basics of radiometry see Section 6.3.

The third question is, finally, what happens to an image when we represent it with an array of digital numbers to process it with a digital computer? How do the processes that transform a continuous image into such an array — known as digitization and quantization — limit the resolution in the image or introduce artifacts? These questions are addressed in Chapter 9.

 

