

Optical Transfer Function



Convolution with the PSF in the space domain is a quite complex operation. In Fourier space, however, it is performed as a multiplication of complex numbers. In particular, convolution of the 3-D object g'_o(X) with the PSF h_o(X) corresponds in Fourier space to a multiplication of the Fourier transformed object ĝ'_o(k) with the Fourier transformed PSF, the optical transfer function or OTF ĥ_o(k).



 

In this section, we consider the optical transfer function in the object space, i. e., we project the imaged object back into the object space. Then the image formation can be described by:

 


$$
\begin{array}{lccccc}
 & \text{imaged object} & & \text{imaging} & & \text{object}\\
\text{space domain:} & g_o(X) & = & h_o(X) & \ast & g_o'(X)\\
\text{Fourier domain:} & \hat g_o(k) & = & \hat h_o(k) & \cdot & \hat g_o'(k).
\end{array}
\qquad (7.48)
$$


This correspondence means that we can describe optical imaging with either the point spread function or the optical transfer function. Both descriptions are complete. As with the PSF, the OTF has an illustrative meaning. As the Fourier transform decomposes an object into periodic structures, the OTF tells us how these periodic structures are changed by the optical imaging process. An OTF of 1 for a particular wavelength means that this periodic structure is not affected at all. If the OTF is 0, it disappears completely. For values between 0 and 1 it is attenuated correspondingly. Since the OTF is generally a complex number, not only the amplitude of a periodic structure can be changed but also its phase.
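The correspondence in Eq. (7.48) is easy to verify numerically. The following minimal 1-D sketch (the bar-shaped object and Gaussian PSF are arbitrary stand-ins, not taken from the text) convolves the object directly in the space domain and, alternatively, multiplies its spectrum with the OTF obtained as the Fourier transform of the PSF; both routes give the same image.

```python
import numpy as np
from scipy.signal import convolve

# Illustrative stand-ins for g_o'(X) and h_o(X); all values are arbitrary.
obj = np.zeros(256)
obj[100:140] = 1.0                                  # bar-shaped test object
x = np.arange(-15, 16)
psf = np.exp(-0.5 * (x / 3.0) ** 2)                 # Gaussian blur kernel
psf /= psf.sum()                                    # unit area, so OTF(0) = 1

# Space domain, Eq. (7.48) top row: g_o = h_o * g_o'
img_space = convolve(obj, psf, mode='same', method='direct')

# Fourier domain, Eq. (7.48) bottom row: multiply object spectrum by the OTF
n_full = obj.size + psf.size - 1
spectrum = np.fft.fft(obj, n_full) * np.fft.fft(psf, n_full)
img_fourier = np.real(np.fft.ifft(spectrum))[(psf.size - 1) // 2:][:obj.size]

assert np.allclose(img_space, img_fourier)          # both routes agree
```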

Direct calculation of the OTF is awkward.

Here several features of the Fourier transform are used, especially its linearity and separability, to decompose the PSF into suitable functions which can be transformed more easily. Two possibilities are demonstrated. They are also more generally instructive, since they illustrate some important features of the Fourier transform.

The first method for calculating the OTF decomposes the PSF into a bundle of δ lines intersecting at the origin of the coordinate system. They are equally distributed in the cross-section of the double cone. We can think of each δ line as being one light ray. Without further calculations, we know that this decomposition gives the correct quadratic decrease in the PSF, because the same number of δ lines intersect a quadratically increasing area. The Fourier transform of a δ line is a δ plane which is perpendicular to the line (R5). Thus the OTF is composed of a bundle of δ planes. They intersect the k_1 k_2 plane at a line through the origin of the k space under an angle of at most α. As the Fourier transform preserves rotational symmetry, the OTF is also circularly symmetric with respect to the k_3 axis. The OTF fills the whole Fourier space except for a double cone with an angle of π/2 − α. In this sector the OTF is zero. The exact values of the OTF in the non-zero part are difficult to obtain with this decomposition method.

We will infer it with another approach, based on the separability of the Fourier transform. We think of the double cone as layers of disks with varying radii which increase with |x_3|. In the first step, we perform the Fourier transform only in the x_1 x_2 plane. This transformation yields a function with two coordinates in the k space and one in the x space, (k_1, k_2, x_3), or (q, ϕ, z) in cylindrical coordinates, respectively.



Since the PSF Eq. (7.47) depends only on r (rotational symmetry around the z axis), the two-dimensional Fourier transform corresponds to a one-dimensional Hankel transform of zero order [11]:

$$
\begin{aligned}
h(r, z) &= \frac{I_0}{\pi (z\tan\alpha)^2}\,\Pi\!\left(\frac{r}{2z\tan\alpha}\right)\\[4pt]
\check h(q, z) &= I_0\,\frac{J_1(2\pi z q\tan\alpha)}{\pi z q\tan\alpha}.
\end{aligned}
\qquad (7.49)
$$

The Fourier transform of the disk thus results in a function that contains the Bessel function J_1 (R5).
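This step can be cross-checked numerically. The sketch below (grid size, disk radius, and intensity are arbitrary demonstration values, not taken from the text) samples one disk-shaped slice of the PSF, takes its 2-D FFT, and compares the radial profile with the J_1 expression of Eq. (7.49).

```python
import numpy as np
from scipy.special import j1

# One slice of the double-cone PSF: a uniform disk of radius R = z*tan(alpha)
N, dx = 512, 0.05          # number of samples and sample spacing (arbitrary)
I0, R = 1.0, 2.0           # total intensity and disk radius (arbitrary)

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
disk = np.where(np.hypot(X, Y) <= R, I0 / (np.pi * R**2), 0.0)

# Continuous 2-D Fourier transform approximated by a scaled, centered FFT
F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(disk))) * dx**2
q = np.fft.fftshift(np.fft.fftfreq(N, dx))           # radial wave number axis

# Analytic result from Eq. (7.49): I0 * J1(2*pi*q*R) / (pi*q*R)
qq = q[N // 2 + 1:]                                   # positive frequencies only
analytic = I0 * j1(2 * np.pi * qq * R) / (np.pi * qq * R)
numeric = np.real(F[N // 2, N // 2 + 1:])

# Deviation stays small; the residual error comes from the pixelated disk edge
print(np.max(np.abs(numeric - analytic)))
```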

As a second step, we perform the missing one-dimensional Fourier transform in the z direction. Eq. (7.49) shows that ȟ(q, z) is also a Bessel function in z. This time, however, the Fourier transform is one-dimensional. Thus we obtain not a disk function but a circle function (R5):

$$
\frac{J_1(2\pi x)}{x} \;\circ\!\!-\!\!\bullet\; 2\left(1 - k^2\right)^{1/2}\,\Pi\!\left(\frac{k}{2}\right).
\qquad (7.50)
$$

If we finally apply the Fourier transform scaling theorem (R4),

$$
\text{if}\quad f(x) \;\circ\!\!-\!\!\bullet\; \hat f(k), \qquad\text{then}\quad f(ax) \;\circ\!\!-\!\!\bullet\; \frac{1}{|a|}\,\hat f\!\left(\frac{k}{a}\right),
\qquad (7.51)
$$

we obtain

$$
\hat h(q, k_3) \;=\; \frac{2 I_0}{\pi\,|q\tan\alpha|}\left(1 - \frac{k_3^2}{q^2\tan^2\alpha}\right)^{1/2}\Pi\!\left(\frac{k_3}{2q\tan\alpha}\right).
\qquad (7.52)
$$


A large part of the OTF is zero. This means that spatial structures with the corresponding directions and wavelengths completely disappear. In particular, this is the case for all structures in the z direction,

i. e., perpendicular to the image plane. Such structures get completely lost and cannot be reconstructed without additional knowledge.
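As a rough numerical illustration (a sketch; the opening angle, intensity, and grid ranges are arbitrary values, not taken from the text), Eq. (7.52) can be evaluated on a (q, k_3) grid; the box function then carves out exactly the missing double cone around the k_3 axis where the OTF vanishes.

```python
import numpy as np

# Evaluate the 3-D OTF of Eq. (7.52) on a (q, k3) grid; alpha, I0, and the
# grid ranges are arbitrary illustration values.
I0, alpha = 1.0, np.radians(30)
q = np.linspace(1e-6, 2.0, 400)                      # radial wave number (avoid q = 0)
k3 = np.linspace(-2.0, 2.0, 401)
Q, K3 = np.meshgrid(q, k3)

ratio = K3 / (Q * np.tan(alpha))
inside = np.abs(ratio) <= 1.0                        # support of Pi(k3 / (2 q tan(alpha)))
otf = np.zeros_like(Q)
otf[inside] = (2 * I0 / (np.pi * np.abs(Q[inside] * np.tan(alpha)))
               * np.sqrt(1.0 - ratio[inside] ** 2))

# Fraction of the sampled (q, k3) plane where the OTF vanishes: the missing
# double cone around the k3 axis with half-angle pi/2 - alpha.
print(1.0 - inside.mean())
```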

We can only see 3-D structures if they also contain structures parallel to the image plane. For example, it is possible to resolve points or lines that lie above each other. We can explain this in the x space as well as in the k space. The PSF blurs the points and lines, but they can still be distinguished if they are not too close to each other.

Points or lines are extended objects in Fourier space, i. e., constants or planes. Such extended objects partly coincide with the non-zero parts of the OTF and thus will not vanish entirely. Periodic structures up to an angle of α to the k_1 k_2 plane, which just corresponds to the opening angle of the lens, are not eliminated by the OTF.



 

Figure 7.14: Diffraction of a planar wave front at the aperture stop of an optical system; the sketch shows the planar wave front, the optical system, and the image plane. At the aperture stop we think of the planar wave as being decomposed into spherical wave fronts that show a difference of δ = x′ sin θ depending on their direction θ and position x′.

 

Intuitively, we can say that we are able to recognize all 3-D structures that we can actually look into. All we need is at least one ray that is perpendicular to the wave number of the structure and thus runs in the direction of constant gray values.

 

7.6.3 Diffraction-Limited Optical Systems‡

Light is electromagnetic radiation and as such subject to wave-related phenomena. When a parallel bundle of light enters an optical system, it cannot be focused to a point even if all aberrations have been eliminated. Diffraction at the aperture of the optical system blurs the spot at the focus to a size of at least the order of the wavelength of the light. An optical system in which the aberrations have been suppressed to such an extent that they are significantly smaller than the effects of diffraction is called diffraction-limited.

A rigorous treatment of diffraction according to Maxwell’s equations is mathematically quite involved ([34, Chapters 9 and 10] and [77, Chapter 3]). The diffraction of a planar wave at the aperture of lenses can be treated in a simple approximation known as Fraunhofer diffraction. It leads to a fundamental relation.

We assume that the aperture of the optical system is pierced by a planar wave front (Fig. 7.14). At the aperture plane, we apply Huygens’ principle, which states that each point of the wave front can be taken as the origin of a new in-phase spherical wave. In particular, we can add up these waves to new planar wave fronts leaving the aperture plane at an angle of θ. The inclination causes a path difference which results in a position-dependent phase shift. The path difference in the aperture plane is δ = x′ sin θ, where x′ is the position in the aperture plane. Thus, the phase difference is

$$
\Delta\varphi \;=\; \frac{2\pi\delta}{\lambda} \;=\; \frac{2\pi x'\sin\theta}{\lambda},
\qquad (7.53)
$$

where λ is the wavelength of the wave front. We further assume that ψ′(x′) is the amplitude distribution of the wave front at the aperture plane. In case of a simple aperture stop, ψ′(x′) is a simple box function, but we want to treat the more general case of a varying amplitude of the wave front or any type of aperture function.



 

Then, the planar wave front leaving the aperture under an angle θ is given by the integral

$$
\psi(\theta) \;=\; \int\limits_{-\infty}^{\infty} \psi'(x')\,\exp\!\left(\frac{2\pi\mathrm{i}\,x'\sin\theta}{\lambda}\right)\mathrm{d}x'.
\qquad (7.54)
$$

This equation describes the diffraction pattern at infinity as a function of the angle θ. We have not yet included the effect of the optical system. All that an ideal optical system does is to bend the planar wave front at the aperture into a spherical wave that converges at the image plane into a point. This point is given by the relation x = f tan θ ≈ f sin θ. With this relation, Eq. (7.54) becomes

 


 

$$
\psi(x) \;=\; \int\limits_{-\infty}^{\infty} \psi'(x')\,\exp\!\left(\frac{2\pi\mathrm{i}\,x' x}{f\lambda}\right)\mathrm{d}x'.
\qquad (7.55)
$$


This integral can easily be extended to two dimensions by replacing x′x with the scalar product of the 2-D vectors x′ and x:


 

$$
\psi(x) \;=\; \int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty} \psi'(x')\,\exp\!\left(\frac{2\pi\mathrm{i}\,x'^{\mathsf T} x}{f\lambda}\right)\mathrm{d}^2 x'.
\qquad (7.56)
$$


This equation means that the amplitude distribution ψ(x) at the focal plane is simply the 2-D Fourier transform of the amplitude function ψ′(x′) at the aperture plane.
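The following sketch illustrates this relation for the circular aperture treated next (grid size, aperture radius, wavelength, and focal length are arbitrary demonstration values, not taken from the text): the 2-D FFT of a uniformly illuminated circular aperture yields the focal-plane amplitude, whose squared magnitude is the Airy pattern of Eq. (7.59).

```python
import numpy as np

# Eq. (7.56) as a discrete computation: focal-plane amplitude = 2-D FT of the
# aperture amplitude. All numerical values are arbitrary demonstration values.
N, dx = 1024, 10e-6          # samples and sample spacing in the aperture plane [m]
wavelength, f = 550e-9, 50e-3
r_aperture = 2e-3            # aperture radius [m]

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
psi_ap = (np.hypot(X, Y) <= r_aperture).astype(float)   # psi'(x'): box aperture

# 2-D FT of the aperture; a spatial frequency u in the aperture plane maps to
# the focal-plane position x = f * lambda * u, see Eq. (7.55)
psi_focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psi_ap))) * dx**2
x_focal = np.fft.fftshift(np.fft.fftfreq(N, dx)) * f * wavelength

irradiance = np.abs(psi_focal) ** 2                     # Eq. (7.59): Airy pattern

# Focal-plane sample spacing (useful when plotting irradiance against x_focal)
print(x_focal[1] - x_focal[0])                          # ~2.7 micrometers
# Radius of the first dark ring, Eq. (7.60): 0.61 * f * lambda / r
print(0.61 * f * wavelength / r_aperture)               # ~8.4 micrometers
```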

For a circular aperture, the amplitude distribution is given by

$$
\psi'(x') \;=\; \Pi\!\left(\frac{|x'|}{2r}\right),
\qquad (7.57)
$$

where r is the radius of the aperture. The Fourier transform of Eq. (7.57) is given by the Bessel function of first order (R4):

$$
\psi(x) \;=\; \psi_0\,\frac{J_1\!\left(2\pi x r/(f\lambda)\right)}{\pi x r/(f\lambda)}.
\qquad (7.58)
$$

The irradiance E on the image plane is given by the square of the amplitude:


$$
E(x) \;=\; |\psi(x)|^2 \;=\; \psi_0^2\left(\frac{J_1\!\left(2\pi x r/(f\lambda)\right)}{\pi x r/(f\lambda)}\right)^{2}.
\qquad (7.59)
$$


 

The diffraction pattern has a central spot that contains 83.9 % of the energy and encircling rings with decreasing intensity (Fig. 7.15a). The distance from the center of the disk to the first dark ring is

$$
\Delta x \;=\; 0.61\,\frac{f\lambda}{r} \;=\; 1.22\,\lambda\, n_f.
\qquad (7.60)
$$

At this distance, two points can clearly be separated (Fig. 7.15b). This is the Rayleigh criterion for resolution of an optical system.



 

Figure 7.15: a Irradiance E(x) of the diffraction pattern (“Airy disk”) at the focal plane of an optical system with a uniformly illuminated circular aperture according to Eq. (7.59). b Illustration of the resolution of the image of two points at a distance x/(n_f λ) = 1.22.

 

The resolution of an optical system can be interpreted in terms of the angular resolution of the incoming planar wave and the spatial resolution at the image plane. Taking the Rayleigh criterion Eq. (7.60), the angular resolution Δθ_0 = Δx/f is given as

$$
\Delta\theta_0 \;=\; 0.61\,\frac{\lambda}{r}.
\qquad (7.61)
$$

Thus, the angular resolution does not depend at all on the focal length but only on the aperture of the optical system in relation to the wavelength of the electromagnetic radiation.

In contrast to the angular resolution, the spatial resolution Δx at the image plane depends according to Eq. (7.60) only on the ratio of the radius of the lens aperture to the distance f of the image of the object from the principal point. Instead of the f-number we can use in Eq. (7.60) the numerical aperture, which is defined as

$$
n_a \;=\; n\sin\theta_0 \;=\; \frac{n}{2 n_f}.
\qquad (7.62)
$$

We assume now that the image-side index of refraction n may be different from 1. Here θ_0 is the opening angle of the light cone passing from the center of the image plane through the lens aperture. Then

$$
\Delta x \;=\; 0.61\,\frac{\lambda}{n_a'}.
\qquad (7.63)
$$

Therefore, the absolute resolution at the image plane again does not depend on the focal length of the system but only on the numerical aperture of the image cone.

As the light path can be reversed, the same arguments apply for the object plane. The spatial resolution at the object plane depends only on the numerical aperture of the object cone, i. e., the opening angle of the cone entering the lens aperture:

$$
\Delta X \;=\; 0.61\,\frac{\lambda}{n_a}.
\qquad (7.64)
$$



 

These simple relations are helpful to evaluate the performance of optical systems. Since the maximum numerical aperture of optical systems is about one, no structures smaller than about half the wavelength can be resolved.
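A small helper (a sketch; the example wavelength and apertures are arbitrary values, not from the text) wraps Eqs. (7.60)–(7.64) for such performance estimates:

```python
# Diffraction-limited resolution from the numerical aperture or the f-number,
# following Eqs. (7.60)-(7.64). Example values below are arbitrary.
def rayleigh_resolution(wavelength, numerical_aperture):
    """Smallest resolvable distance, Delta_x = 0.61 * lambda / NA, Eq. (7.64)."""
    return 0.61 * wavelength / numerical_aperture

def na_from_f_number(n_f, n=1.0):
    """Numerical aperture of an f/n_f lens, n_a = n / (2 n_f), Eq. (7.62)."""
    return n / (2.0 * n_f)

# Green light (500 nm), f/2 lens in air: NA = 0.25, resolution ~1.2 micrometers
print(rayleigh_resolution(500e-9, na_from_f_number(2.0)))
# Maximum practical NA of about 1: resolution slightly above half the wavelength
print(rayleigh_resolution(500e-9, 1.0))                  # ~0.31 micrometers
```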

 

7.7 Further Readings‡

 

In this chapter, only the basic principles of imaging techniques are discussed. A more detailed discussion can be found in Jähne [81] or Richards [150]. The geometrical aspects of imaging are also of importance for computer graphics and are therefore treated in detail in standard textbooks on computer graphics, e. g., Watt [194] or Foley et al. [46]. More details about optical engineering can be found in the following textbooks: Iizuka [77] (especially about Fourier optics) and Smith [174]. Riedl [153] focuses on the design of infrared optics. In this chapter, the importance of linear system theory has been stressed for the description of an optical system. Linear system theory has widespread applications throughout science and engineering, see, e. g., Close and Frederick [21] or Dorf and Bishop [32].


8 3-D Imaging

8.1 Basics

In this chapter we discuss various imaging techniques that can retrieve the depth coordinate which is lost by the projection of the object onto an image plane. These techniques fall into two categories. They can either retrieve only the depth of a surface in 3-D space or allow for a full reconstruction of volumetric objects. Often depth imaging and volumetric imaging are both called 3-D imaging. This causes a lot of confusion.

Even more confusing is the wide variety of both depth and volumetric imaging techniques. Therefore this chapter will not detail all available techniques. It rather focuses on the basic principles. Surprisingly or not, there are only a few principles on which the wide variety of 3-D imaging techniques is based. If you know them, it is easy to understand how they work and what accuracy you can expect.

We start with the discussion of the basic limitation of projective imaging for 3-D vision in Section 8.1.1 and then give a brief summary of the basic principles of depth imaging (Section 8.1.2) and volumetric imaging (Section 8.1.3). Then one section is devoted to each of the basic principles of 3-D imaging: depth from triangulation (Section 8.2), depth from time-of-flight (Section 8.3), depth from phase (interferometry) (Section 8.4), shape from shading and photogrammetric stereo (Section 8.5), and tomography (Section 8.6).

 

