

Depth of Focus and Depth of Field



The image equations Eqs. (7.19) and (7.20) determine the relation between object and image distances. If the image plane is slightly shifted or the object is closer to the lens system, the image is not rendered useless. Rather, it becomes blurred. The degree of blurring depends on the deviation from the distances given by the image equation.

The concepts of depth of focus and depth of field are based on the fact that a certain degree of blurring does not affect the image quality. For digital images the tolerable blur is naturally given by the size of the sensor elements: it makes no sense to resolve smaller structures. We compute the blurring in the framework of geometrical optics using the image of a point object as illustrated in Fig. 7.8a. At the image plane, the point object is imaged to a point. With increasing distance from the image plane, it smears to a disk with radius ε. Introducing the f-number nf of an optical system as the ratio of the focal length f to the diameter of the lens aperture 2r,


nf = f/(2r),                                                    (7.24)

we can express the radius of the blur disk as


ε = (1/(2nf)) (f/(f + d')) ∆x3,                                 (7.25)

where ∆x3 is the distance from the (focused) image plane. The range of positions of the image plane, [d' − ∆x3, d' + ∆x3], for which the radius of the blur disk is lower than ε, is known as the depth of focus.


188                                                                                               7 Image Formation

 

 


Figure 7.8: Illustration of a the depth of focus and b the depth of field with an on-axis point object.

 

Equation (7.25) can be solved for ∆ x3 and yields

∆x3 = 2nf (1 + d'/f) ε = 2nf (1 + ml) ε,                        (7.26)

where ml is the lateral magnification as defined by Eq. (7.21). Equation (7.26) illustrates the critical role of the f-number nf and the magnification for the depth of focus. For a given ε, only these two parameters determine the depth of focus and the depth of field.

Of even more importance for practical usage than the depth of focus is the depth of field. The depth of field is the range of object positions for which the radius of the blur disk remains below a threshold ε at a fixed image plane (Fig. 7.8b). With Eqs. (7.19) and (7.26) we obtain

     
 

d ± ∆X3 = f²/(d' ∓ ∆x3) = f²/(d' ∓ 2nf (1 + ml) ε).             (7.27)

In the limit of ∆X3 ≪ d, Eq. (7.27) reduces to

∆X3 ≈ 2nf ((1 + ml)/ml²) ε.                                     (7.28)



If the depth of field includes the infinite distance, the minimum distance for a sharp image is

     
 

dmin = f²/(4nf (1 + ml) ε) ≈ f²/(4nf ε).                        (7.29)

A typical high-resolution CCD camera has sensor elements that are about 10 × 10 µm in size. Thus we can allow for a radius of the unsharpness disk of ε = 5 µm. Assuming a lens with an f-number of 2 and a focal length of 15 mm, according to Eq. (7.28) we have a depth of field of 0.2 m at an object distance of 1.5 m, and according to Eq. (7.29) the depth of field reaches from about 5 m to infinity. This example illustrates that even with this small f-number and the relatively short distance, we may obtain a large depth of field.
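The numbers in this example can be checked with a short script. This is a sketch under the assumption that d and d' in Eqs. (7.28) and (7.29) are measured from the focal points, so that the Newtonian form d·d' = f² of the imaging equation applies; the function names are ours:

```python
import math

def delta_X3(f, n_f, eps, d):
    """Depth of field by Eq. (7.28): Delta_X3 = 2 n_f (1 + m_l) / m_l**2 * eps.
    d is the object distance from the front focal point (meters)."""
    d_prime = f**2 / d          # Newtonian imaging equation: d * d' = f^2
    m_l = d_prime / f           # lateral magnification, Eq. (7.21)
    return 2 * n_f * (1 + m_l) / m_l**2 * eps

def d_min(f, n_f, eps):
    """Closest sharp distance when the depth of field reaches infinity, Eq. (7.29)."""
    return f**2 / (4 * n_f * eps)

f, n_f, eps = 15e-3, 2.0, 5e-6   # focal length, f-number, blur radius
print(round(delta_X3(f, n_f, eps, 1.5), 3))  # ≈ 0.202 m at 1.5 m distance
print(round(d_min(f, n_f, eps), 3))          # ≈ 5.625 m
```

This reproduces the depth of field of about 0.2 m and the minimum sharp distance of about 5 m quoted in the text.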

For high magnifications as in microscopy, the depth of field is very small. With ml ≫ 1, Eq. (7.28) reduces to


 

∆X3 ≈ 2nf ε/ml.                                                 (7.30)


With a 50-fold enlargement (ml = 50) and nf = 1, we obtain the extremely low depth of field of only 0.2 µm.
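Eq. (7.30) is easy to evaluate directly; a minimal sketch reproducing the microscopy example (the function name is ours):

```python
def dof_microscopy(n_f, eps, m_l):
    """High-magnification limit of the depth of field, Eq. (7.30)."""
    return 2 * n_f * eps / m_l

# 50-fold enlargement at n_f = 1 and a 5 µm blur radius
print(dof_microscopy(n_f=1, eps=5e-6, m_l=50))  # 2e-07 m, i.e. 0.2 µm
```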

Generally, the whole concept of depth of field and depth of focus as discussed here is only valid within the limit of geometrical optics. It can only be used for blurring that is significantly larger than that caused by the aberrations or diffraction of the optical system.

 













Telecentric Imaging

In a standard optical system, a converging beam of light enters the system. This setup has a significant disadvantage for optical gauging (Fig. 7.9a): the object appears larger if it is closer to the lens and smaller if it is farther away. As the depth of the object cannot be inferred from its image, either the object must be at a precisely known depth or measurement errors are unavoidable.

A simple change in the position of the aperture stop from the principal point to the first focal point solves the problem and changes the imaging system into a telecentric lens (Fig. 7.9b). With the stop at this point, the principal rays (the rays passing through the center of the aperture) are parallel to the optical axis in object space. Therefore, slight changes in the position of the object do not change the size of its image. The farther the object is from the focused position, the more it is blurred, of course, but the center of the blur disk does not change its position.

Telecentric imaging has become an important principle in machine vision. Its disadvantage is, of course, that the diameter of a telecentric



 



 

Figure 7.9: a Standard diverging imaging with the stop at the principal point; b telecentric imaging with the stop at the second focal point. On the right side it is illustrated how a short cylindrical tube, whose axis is aligned with the optical axis, is imaged with the corresponding setup.

 

lens must be at least the size of the object to be imaged. This makes telecentric imaging very expensive for large objects.

Figure 7.9 also illustrates how a thin-walled cylinder aligned with the optical axis is seen with a standard lens and with a telecentric lens. The standard lens sees the cross-section and the inner wall; the telecentric lens sees the cross-section only.

The discussion of telecentric imaging emphasizes the importance of stops in the construction of optical systems, a fact that is often not adequately considered.

 









Geometric Distortion

A real optical system causes deviations from a perfect perspective projection. The most obvious geometric distortions can be observed with simple spherical lenses as barrel- or cushion-shaped images of squares. Even with a corrected lens system these effects are not completely suppressed.

This type of distortion can easily be understood by symmetry considerations. As lens systems show cylindrical symmetry, concentric circles only suffer a distortion in the radius. This distortion can be approximated by

x' = x / (1 + k3 |x|²).                                         (7.31)



 

Depending on whether k3 is positive or negative, barrel- or cushion-shaped distortions in the images of squares will be observed. Commercial TV lenses show a radial deviation of several image points (pixels) at the edge of the sensor. If the distortion is corrected with Eq. (7.31), the residual error is less than 0.06 image points [107].
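Eq. (7.31) is straightforward to apply; inverting it for distortion correction has no closed form, but a few fixed-point iterations suffice for small k3. A sketch (the function names and the sample value of k3 are our own):

```python
import numpy as np

def distort(x, k3):
    """Radial distortion model of Eq. (7.31): x' = x / (1 + k3 |x|^2).
    x holds 2-D image coordinates with the origin on the optical axis."""
    r2 = np.sum(x**2, axis=-1, keepdims=True)
    return x / (1 + k3 * r2)

def undistort(x_prime, k3, iterations=10):
    """Invert Eq. (7.31) by fixed-point iteration of x = x' (1 + k3 |x|^2)."""
    x = x_prime.copy()
    for _ in range(iterations):
        r2 = np.sum(x**2, axis=-1, keepdims=True)
        x = x_prime * (1 + k3 * r2)
    return x

pts = np.array([[0.3, 0.4], [-0.2, 0.1]])        # normalized image coordinates
restored = undistort(distort(pts, k3=0.1), k3=0.1)
print(float(np.max(np.abs(restored - pts))))      # residual error close to 0
```

The iteration converges quickly because the correction factor 1 + k3|x|² is close to one near the image center.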

This high degree of correction, together with the geometric stability of modern CCD sensors, accounts for subpixel accuracy in distance and area measurements without using expensive special lenses. Lenz [108] discusses further details that influence the geometrical accuracy of CCD sensors.

Distortions also occur if non-planar surfaces are projected onto the image plane. These distortions prevail in satellite and aerial imagery. Thus correction of geometric distortion in images is a basic topic in remote sensing and photogrammetry [151].

Accurate correction of the geometrical distortions requires shifting of image points by fractions of the distance between two image points. We will deal with this problem later in Section 10.6 after we have worked out the knowledge necessary to handle it properly.

 

Radiometry of Imaging

It is not sufficient to know only the geometry of imaging. Equally important is to consider how the irradiance at the image plane is related to the radiance of the imaged objects and which parameters of an optical system influence this relationship. For a discussion of the fundamentals of radiometry, especially all the terms describing the properties of radiation, we refer to Section 6.3.

The path of radiation from a light source to the image plane involves a chain of processes (see Fig. 6.1). In this section, we concentrate on the observation path (compare Fig. 6.1), i. e., how the radiation emitted from the object to be imaged is collected by the imaging system.

 

Radiance Invariance

An optical system collects part of the radiation emitted by an object (Fig. 7.10). We assume that the object is a Lambertian radiator with radiance L. As seen from the object, the aperture of the optical system subtends a certain solid angle Ω: the projected circular aperture area is πr² cos θ at a distance (d + f)/cos θ. Therefore, a flux


Φ = L A π r² cos³ θ / (d + f)²                                  (7.32)


enters the optical system. The radiation emitted from the projected area A is imaged onto the area A'. Therefore, the flux Φ must be divided by



 


 

 


Figure 7.10: An optical system receives a flux density that corresponds to the product of the radiance of the object and the solid angle subtended by the projected aperture as seen from the object. The flux emitted from the object area A is imaged onto the image area A'.

 

the area A' in order to compute the image irradiance E'. The area ratio can be expressed as


A/A' = cos θ (f + d)²/(f + d')².                                (7.33)


We further assume that the optical system has a transmittance t. This leads finally to the following relation between object radiance and image irradiance:


E' = t π (r/(f + d'))² cos⁴ θ · L.                              (7.34)

This fundamental relationship states that the image irradiance is proportional to the object radiance. This is the basis for the linearity of optical imaging. The optical system is described by two simple terms: its (total) transmittance t and the ratio of the aperture radius to the distance of the image from the first principal point. For distant objects, d ≫ f and d' ≪ f, Eq. (7.34) reduces to


E' = t π cos⁴ θ / (4 nf²) · L,    d ≫ f,                        (7.35)

using the f-number nf. For real optical systems, Eqs. (7.34) and (7.35) are only an approximation. If part of the incident beam is cut off by additional apertures or limited lens diameters (vignetting), the fall-off is even steeper at high angles θ. On the other hand, a careful design of the position of the aperture can make the fall-off less steep than cos⁴ θ. As the residual reflectivity of the lens surfaces also depends on the angle of incidence, the true fall-off depends strongly on the design of



 


 

optical system

 

Figure 7.11: Illustration of radiance invariance: a The product AΩ is the same in object and image space. b Change of the solid angle when a beam enters an optically denser medium.

 

the optical system and is best determined experimentally by a suitable calibration setup.
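For distant objects, the angular dependence in Eq. (7.35) is entirely in the cos⁴ θ term. A minimal sketch of this fall-off (the numerical values are illustrative, not from the text):

```python
import math

def image_irradiance(L, t, n_f, theta):
    """Image irradiance of a distant object, Eq. (7.35):
    E' = t * pi * cos(theta)**4 / (4 * n_f**2) * L."""
    return t * math.pi * math.cos(theta)**4 / (4 * n_f**2) * L

E_axis = image_irradiance(L=100.0, t=0.9, n_f=2.0, theta=0.0)
E_20deg = image_irradiance(L=100.0, t=0.9, n_f=2.0, theta=math.radians(20.0))
print(round(E_20deg / E_axis, 3))   # cos^4 fall-off: ≈ 0.78 at 20° off axis
```

A real lens deviates from this idealized fall-off for the reasons given above (vignetting, angle-dependent surface reflectivity), which is why a calibration measurement is preferable.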

The astonishing fact that the image irradiance is so simply related to the object radiance has its cause in a fundamental invariance. An image has a radiance just like a real object and can be taken as a source of radiation by further optical elements. A fundamental theorem of radiometry states that the radiance of an image is equal to the radiance of the object times the transmittance of the optical system.

The theorem can be proved using the assumption that the radiative flux Φ through an optical system is preserved except for absorption in the system, leading to a transmittance less than one. The solid angles which the object and image subtend in the optical system are

Ω = A0/(d + f)²   and   Ω' = A0/(d' + f)²,                      (7.36)

where A0 is the effective area of the aperture.

The flux emitted from an area A of the object is received by the area A' = A(d' + f)²/(d + f)² in the image plane (Fig. 7.11a). Therefore, the radiances are

L  = Φ/(Ω A)   = Φ (d + f)²/(A0 A),
L' = tΦ/(Ω' A') = tΦ (d + f)²/(A0 A),                           (7.37)

and the following invariance holds:

L' = tL for n' = n.                                           (7.38)

The radiance invariance of this form is only valid if the object and image are in media with the same refractive index (n' = n). If a beam with



 

radiance L enters a medium with a higher refractive index, the radiance increases as the rays are bent towards the optical axis (Fig. 7.11b). Thus, more generally, the ratio of the radiance to the square of the refractive index remains invariant:

L'/n'² = tL/n².                                                 (7.39)

From the radiance invariance, we can immediately infer the irradiance on the image plane to be

E' = L' π sin² α' = tL π sin² α'.                               (7.40)

This equation does not contain the cos⁴ θ fall-off of Eq. (7.34) because we did not consider oblique principal rays. The term sin² α' corresponds to r²/(f + d')² in Eq. (7.34). Radiance invariance considerably simplifies the computation of image irradiance and the propagation of radiation through complex optical systems. Its fundamental importance can be compared to the principle in geometric optics that radiation propagates in such a way that the optical path nd (real path times the index of refraction) takes an extreme value.
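The correspondence between sin² α' in Eq. (7.40) and r²/(f + d')² in Eq. (7.34) holds for small apertures, since then sin α' ≈ tan α' = r/(f + d'). A quick numerical check (a sketch; the values are illustrative):

```python
import math

def irradiance_radiance_invariance(L, t, alpha_img):
    """On-axis image irradiance from radiance invariance, Eq. (7.40)."""
    return t * L * math.pi * math.sin(alpha_img)**2

def irradiance_geometric(L, t, r, f, d_img):
    """On-axis (theta = 0) image irradiance from Eq. (7.34)."""
    return t * math.pi * (r / (f + d_img))**2 * L

r, f, d_img = 1e-3, 15e-3, 0.3e-3            # small aperture: r << f + d'
alpha = math.atan(r / (f + d_img))           # half-angle of the image-side cone
E_a = irradiance_radiance_invariance(1.0, 0.95, alpha)
E_b = irradiance_geometric(1.0, 0.95, r, f, d_img)
print(abs(E_a - E_b) / E_b)                  # relative difference well below 1%
```

For larger apertures the two expressions diverge by a factor of roughly 1 + tan² α', i.e., the small-angle approximation breaks down.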

 

7.6 Linear System Theory of Imaging†

In Section 4.2 we discussed linear shift-invariant filters (convolution operators) as one application of linear system theory. Imaging is another example that can be described with this powerful concept. Here we will discuss optical imaging in terms of the 2-D and 3-D point spread function (Section 7.6.1) and the optical transfer function (Section 7.6.2).

 




























Point Spread Function

Previously it was seen that a point in the 3-D object space is not imaged onto a point in the image space but onto a more or less extended area with varying intensities. Obviously, the function that describes the imaging of a point is an essential feature of the imaging system; it is called the point spread function, abbreviated PSF. We assume that the PSF does not depend on position. Then optical imaging can be treated as a linear shift-invariant system (LSI) (Section 4.2).

If we know the PSF, we can calculate how any arbitrary 3-D object will be imaged. To perform this operation, we think of the object as decomposed into single points. Figure 7.12 illustrates this process. A point X' at the object plane is projected onto the image plane with an intensity distribution corresponding to the point spread function h. With gi'(x') we denote the intensity values of the object plane go'(X') projected onto the image plane but without any defects through the imaging. Then the



 

 


Figure 7.12: Image formation by convolution with the point spread function h(x). A point at X' in the object plane results in an intensity distribution with a maximum at the corresponding point x' on the image plane. At a point x on the image plane, the contributions from all points x', i. e., gi'(x')h(x − x'), must be integrated.

 

intensity of a point x at the image plane is computed by integrating the contributions from the point spread functions that have their maxima at x' (Fig. 7.12):


gi(x) = ∫ gi'(x') h(x − x') d²x' = (gi' ∗ h)(x),                (7.41)

where the integral extends over the whole image plane (from −∞ to ∞ in both coordinates).


The operation in Eq. (7.41) is known as a convolution. Convolutions play an essential role in image processing; they are involved not only in image formation but also in many image processing operations. In the case of image formation, a convolution obviously "smears" an image and reduces the resolution.

This effect of convolutions can be most easily demonstrated with image structures that show periodic gray value variations. As long as the repetition length (the wavelength) of this structure is larger than the width of the PSF, it will suffer no significant changes. As the wavelength decreases, however, the amplitude of the gray value variations will start to decrease. Fine structures will finally be smeared out to such an extent that they are no longer visible. These considerations emphasize the important role of periodic structures and lead naturally to the introduction of the Fourier transform, which decomposes an image into the periodic gray value variations it contains (Section 2.3).
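Eq. (7.41) can be demonstrated with a direct (unoptimized) discrete convolution: a point object is smeared into the shape of the PSF. A sketch, where the 3×3 uniform kernel is a crude stand-in for a blur disk:

```python
import numpy as np

def convolve2d(image, psf):
    """Direct discrete version of Eq. (7.41), 'same' output size, zero padding."""
    H, W = image.shape
    h, w = psf.shape
    padded = np.zeros((H + h - 1, W + w - 1))
    padded[h // 2:h // 2 + H, w // 2:w // 2 + W] = image
    flipped = psf[::-1, ::-1]                  # convolution flips the kernel
    out = np.zeros_like(image, dtype=float)
    for y in range(H):
        for x in range(W):
            out[y, x] = np.sum(padded[y:y + h, x:x + w] * flipped)
    return out

point = np.zeros((7, 7))
point[3, 3] = 1.0                              # an ideal point object
psf = np.ones((3, 3)) / 9.0                    # normalized to unit total energy
img = convolve2d(point, psf)
print(img[3, 3], img.sum())                    # point smeared, energy preserved
```

The point is spread over a 3×3 neighborhood while the total energy stays at one, exactly the behavior the text describes.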

Previous considerations showed that formation of a two-dimensional image on the image plane is described entirely by its PSF. In the following we will extend this concept to three dimensions and explicitly calculate the point spread function within the limit of geometric optics, i. e., with a perfect lens system and no diffraction. This approach is motivated



 

by the need to understand three-dimensional imaging, especially in microscopy, i. e., how a point in the 3-D object space is imaged not only onto a 2-D image plane but into a 3-D image space.

First, we consider how a fixed point in the object space is projected into the image space. From Fig. 7.8 we infer that the radius of the unsharpness disk is given by

εi = (r/di) x3.                                                 (7.42)

The index i of ε indicates the image space. Then we replace the radius of the aperture r by the maximum angle under which the lens collects light from the point considered, tan α = r/do, and obtain

εi = (do/di) x3 tan α.                                          (7.43)

This equation gives us the edge of the PSF in the image space: a double cone with the x3 axis at its center. The tips of both cones meet at the origin. Outside the two cones, the PSF is zero. Inside the cone, we can infer the intensity from the conservation of radiation energy: since the radius of the cone increases linearly with the distance to the plane of focus, the intensity within the cone decreases quadratically. Thus the PSF hi(x) in the image space is given by


 

hi(x) = I0/(π ((do/di) x3 tan α)²) Π( √(x1² + x2²) / (2 (do/di) x3 tan α) )
      = I0/(π ((do/di) z tan α)²) Π( r / (2 (do/di) z tan α) ),        (7.44)

where I0 is the light intensity collected by the lens from the point, and Π is the box function, which is defined as


Π(x) = 1 for |x| ≤ 1/2,  0 otherwise.                           (7.45)


 

The last expression in Eq. (7.44) is written in cylindrical coordinates (r, φ, z) to take into account the circular symmetry of the PSF with respect to the x3 axis.

In a second step, we discuss what the PSF in the image space refers to in the object space, since we are interested in how the effects of the imaging are projected back into the object space. We have to consider both the lateral and the axial magnification. First, the image, and thus also ε, is larger than the object by the factor di/do. Second, we must find the planes in object and image space corresponding to each other. This problem has already been solved in Section 7.4.2. Equation (7.23)



 



Figure 7.13: a 3-D PSF and b 3-D OTF of optical imaging with a lens, back-projected into the object space. Lens aberrations and diffraction effects are neglected.

 

relates the image to the camera coordinates. In effect, the back-projected radius of the unsharpness disk, εo, is given by

εo = X3 tan α,                                                  (7.46)

and the PSF, back-projected into the object space, by


 

ho(X) = I0/(π (X3 tan α)²) Π( √(X1² + X2²) / (2 X3 tan α) )
      = I0/(π (Z tan α)²) Π( R / (2 Z tan α) ).                 (7.47)

The double cone of the PSF, back-projected into the object space, shows the same opening angle as the lens (Fig. 7.13). In essence, ho(X) in Eq. (7.47) gives the effect of optical imaging disregarding geometric scaling.
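The back-projected double-cone PSF of Eq. (7.47) can be written down directly. A sketch in cylindrical coordinates (R, Z), treating the in-focus plane Z = 0 as an ideal point (the function names are ours):

```python
import math

def box(x):
    """Box function of Eq. (7.45)."""
    return 1.0 if abs(x) <= 0.5 else 0.0

def psf_object_space(R, Z, alpha, I0=1.0):
    """Back-projected 3-D PSF of Eq. (7.47): a double cone of half-angle alpha.
    The intensity falls off quadratically with the defocus |Z|."""
    radius = abs(Z) * math.tan(alpha)          # blur radius, Eq. (7.46)
    if radius == 0.0:
        return math.inf if R == 0.0 else 0.0   # ideal point in the focal plane
    return I0 / (math.pi * radius**2) * box(R / (2 * radius))

alpha = math.radians(30.0)                     # opening half-angle of the lens
print(psf_object_space(R=0.1, Z=1.0, alpha=alpha) > 0.0)   # inside the cone
print(psf_object_space(R=1.0, Z=1.0, alpha=alpha) == 0.0)  # outside the cone
```

At each defocus Z the intensity integrated over the blur disk equals I0, which is the conservation of radiation energy used to derive Eq. (7.44).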

 

