

Depth from Active Triangulation



Instead of a stereo camera setup, one camera can be replaced by a light source. For depth recovery it is then necessary to identify at each pixel from which direction the illumination is coming. This knowledge is equivalent to knowledge of the disparity. Thus an active triangulation technique shares all basic features with the stereo system that we discussed in the previous section.

Sophisticated techniques have been developed in recent years to code the light rays in a unique way. Most commonly, light projectors are used that project fringe patterns with stripes perpendicular to the triangulation base line onto the scene. A single pattern is not sufficient to identify the position of the pattern on the image plane in a unique way, but with a sequence of fringe patterns with different wavelengths, each horizontal position at the image plane of the light projector can be identified by a unique sequence of dark and bright stripes. A partial series of six such patterns is shown in Fig. 8.2.
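As a sketch of this coding idea, the following snippet decodes a sequence of thresholded binary fringe images into a stripe index per pixel. It assumes a plain binary code with the coarsest stripe first; in practice a Gray code is often preferred, since neighboring stripes then differ in only one bit. The function name and the simple global threshold are illustrative assumptions, not taken from the text.

```python
import numpy as np

def decode_binary_fringes(patterns, threshold=0.5):
    """Decode a sequence of binary fringe images into a stripe index per pixel.

    patterns: list of 2-D arrays (coarsest stripe first), normalized to [0, 1].
    Each image contributes one bit; n images distinguish 2**n stripe positions.
    """
    index = np.zeros(patterns[0].shape, dtype=np.int64)
    for img in patterns:
        bit = (img > threshold).astype(np.int64)
        index = (index << 1) | bit  # append the next, finer bit
    return index

# Tiny 1-D illustration: 3 patterns encode 8 stripe positions 0..7
x = np.arange(8)
pats = [((x >> (2 - k)) & 1).astype(float).reshape(1, 8) for k in range(3)]
print(decode_binary_fringes(pats))  # [[0 1 2 3 4 5 6 7]]
```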


212                                                                                                         8 3-D Imaging


Figure 8.3: Active triangulation by phase-shifted fringe patterns with the same wavelength. Three of four patterns are shown with phase shifts of 0, 90, and 180 degrees; from Wiora [199].

 

Such a sequence of fringe patterns also has the advantage that, within the limits of the dynamic range of the camera, the detection of the fringe patterns becomes independent of the reflection coefficient of the object and the distance-dependent irradiance of the light projector. The occlusion problem that is evident from the shadow behind the espresso machine in Fig. 8.2 remains.

The binary coding by a sequence of fringe patterns no longer works for fine fringe patterns. For high-resolution position determination, as shown in Fig. 8.3, phase-shifted patterns of the same wavelength work much better and result in a subpixel-accurate position at the image plane of the light projector. Because the phase shift is only unique within a wavelength of the fringe pattern, in practice a hybrid code is often used that determines the coarse position by binary coding and the fine position by phase shifting.
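The phase-shift evaluation can be sketched for the common four-step variant with shifts of 0, 90, 180, and 270 degrees. Both the bias and the modulation cancel in the arctangent, which is why the recovered phase is independent of the reflection coefficient, as noted above. The four-step scheme and the function name are illustrative choices, not necessarily the variant used by Wiora [199].

```python
import numpy as np

def four_step_phase(i0, i90, i180, i270):
    """Recover the fringe phase from four patterns shifted by 0, 90, 180, 270 deg.

    For I_n = A + B*cos(phi + n*pi/2), the bias A and modulation B cancel,
    so the result does not depend on reflectance or illumination strength.
    """
    return np.arctan2(i270 - i90, i0 - i180)  # phase in (-pi, pi]

# Synthetic check: arbitrary bias/modulation per pixel, known phase
rng = np.random.default_rng(0)
phi = rng.uniform(-3, 3, size=100)
a, b = rng.uniform(1, 2, 100), rng.uniform(0.2, 1, 100)
imgs = [a + b * np.cos(phi + n * np.pi / 2) for n in range(4)]
phi_hat = four_step_phase(*imgs)
print(np.allclose(phi_hat, phi, atol=1e-8))  # True
```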

 



Depth from Focus

The limited depth of field of a real optical system (Section 7.4.3) provides another means of depth estimation. An object is only imaged without blurring if it is within the depth of field. At first glance, this does not look like a depth from triangulation technique. However, it has exactly the same geometry as the triangulation technique. The only difference is that instead of two, multiple rays are involved and the radius of the blurred disk replaces the disparity. The triangulation base corresponds to the diameter of the optics. Thus depth from focus techniques share all the basic properties of a triangulation technique. For given optics, the resolution decreases with the square of the distance (compare Eq. (8.4) with Eq. (7.28)).


8.2 Depth from Triangulation                                                         213


Figure 8.4: Superposition of the point spread function of two neighboring points on a surface.

 

The discussion on the limitations of projective imaging in Section 8.1.1 showed that the depth from focus technique does not work for volumetric imaging, because most structures, especially those in the direction of the optical axis, vanish. Depth from focus is, however, a very useful and simple technique for depth determination for opaque surfaces.

Steurer et al. [177] developed a simple method to reconstruct a depth map from a light microscopic focus series. A depth map is a two-dimensional function that gives the depth of an object point d, relative to a reference plane, as a function of the image coordinates [x, y]^T. With the given restrictions, only one depth value for each image point needs to be found. We can make use of the fact that the 3-D point spread function of optical imaging discussed in detail in Section 7.6.1 has a distinct maximum in the focal plane because the intensity falls off with the square of the distance from the focal plane. This means that at all points where we get distinct image points such as edges, lines, or local extremes, we will also obtain an extreme in the gray value on the focal plane. Figure 8.4 illustrates that the point spread functions of neighboring image points only marginally influence each other close to the focal plane.

Steurer’s method makes use of the fact that a distinct maximum of the point spread function exists in the focal plane. His algorithm includes the following four steps:

1. Take a focus series with constant depth steps.

2. Apply a suitable filter such as the variance operator (Section 15.2.2) to emphasize small structures. The highpass-filtered images are segmented to obtain a mask for the regions with significant gray value changes.

3. In the masked regions, search for the maximum magnitude of the difference in all the images of the focus series. The image in which the maximum occurs gives a depth value for the depth map. By interpolation of the values the depth position of the maximum can be determined more exactly than with the depth resolution of the image series [163].

Figure 8.5: a Focus series with 16 images of a metallic surface taken with depth distances of 2 µm; the focal plane becomes deeper from left to right and from top to bottom. b Depth map computed from the focus series. Depth is coded by intensity. Objects closer to the observer are shown brighter. From Steurer et al. [177].

4. As the depth map will not be dense, interpolation is required. Steurer used a region-growing method followed by an adaptive lowpass filtering which is applied only to the interpolated regions in order not to corrupt the directly computed depth values. However, other valid techniques, such as normalized convolution (Section 11.7.2) or any of the techniques described in Section 17.3, are acceptable.
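The core of the algorithm, steps 2 and 3, can be sketched as follows. This is a minimal illustration, not Steurer's implementation: the segmentation mask of step 2 and the final interpolation of step 4 are omitted, and the 3x3 variance window is an arbitrary choice.

```python
import numpy as np

def box3(a):
    """3x3 box average of a 2-D image with edge padding."""
    p = np.pad(a, 1, mode='edge')
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def depth_from_focus(stack):
    """Per-pixel depth index from a focus series (n_depths, h, w).

    The local variance serves as the highpass sharpness measure (step 2);
    the argmax along the stack gives the depth value (step 3), refined to
    sub-step accuracy by parabolic interpolation of the focus measure.
    """
    # Step 2: local variance as a simple sharpness measure
    measure = np.stack([box3(img**2) - box3(img)**2 for img in stack])

    # Step 3: depth index = argmax of the focus measure along the stack
    k = np.clip(np.argmax(measure, axis=0), 1, len(stack) - 2)
    ii, jj = np.indices(k.shape)
    f0, f1, f2 = measure[k - 1, ii, jj], measure[k, ii, jj], measure[k + 1, ii, jj]
    # Vertex of the parabola through the three samples around the maximum
    denom = f0 - 2 * f1 + f2
    delta = np.where(np.abs(denom) > 1e-12, 0.5 * (f0 - f2) / denom, 0.0)
    return k + delta  # fractional depth index per pixel
```

Multiplying the result by the constant depth step of the series converts the index into a metric depth map.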

This method was successfully used to determine the surface structure of worked metal pieces. Figure 8.5 shows that good results were achieved. A filing can be seen that projects from the surface. Moreover, the surface shows clear traces of the grinding process.


Figure 8.6: Principle of confocal laser scanning microscopy. [Figure labels: aperture, dichroic beam splitter, microscope objective, specimen.]

 

 

This technique works only if the surface shows fine details. If this is not the case, the confocal illumination technique of Scheuermann et al. [163] can be applied that projects statistical patterns into the focal plane (compare Section 1.2.2 and Fig. 1.3).

 








Confocal Microscopy

Volumetric microscopic imaging is of utmost importance for material and life sciences. Therefore the question arises whether it is possible to change the image formation process, and thus the point spread function, so that the optical transfer function no longer vanishes, especially in the z direction.

The answer to this question is confocal laser scanning microscopy. Its basic principle is to illuminate only the points in the focal plane. This is achieved by scanning a laser beam over the image plane that is focused by the optics of the microscope onto the focal plane (Fig. 8.6). As the same optics are used for imaging and illumination, the intensity distribution in the object space is given approximately by the point spread function of the microscope. (Slight differences occur as the laser light is coherent.) Only a thin slice close to the focal plane receives a strong illumination. Outside this slice, the illumination falls off with the distance squared from the focal plane. In this way contributions from defocused objects outside the focal plane are strongly suppressed and the distortions decrease. However, can we achieve a completely distortion-free reconstruction? We will use two independent trains of thought to answer this question.

Let us first imagine a periodic structure in the z direction. In conventional microscopy, this structure is lost because all depths are illuminated with equal radiance. In confocal microscopy, however, we can still observe a periodic variation in the z direction because of the strong decrease of the illumination intensity, provided that the wavelength in the z direction is not too small.



 

Figure 8.7: Demonstration of confocal laser scanning microscopy (CLSM). a A square pyramid-shaped crystal imaged with standard microscopy focused on the base of the pyramid. b Similar object imaged with CLSM: only a narrow height contour range, 2.5 µm above the base of the square pyramid, is visible. c Image composed of a 6.2 µm depth range scan of CLSM images. Images courtesy of Carl Zeiss Jena GmbH, Germany.

 

The same fact can be illustrated using the PSF. The PSF of confocal microscopy is given as the product of the spatial intensity distribution of the illumination and the PSF of the optical imaging. As both functions fall off with z⁻², the PSF of the confocal microscope falls off with z⁻⁴. This much sharper localization of the PSF in the z direction results in a nonzero OTF in the z direction up to the z resolution limit.
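The z⁻⁴ falloff follows directly from multiplying the two z⁻² envelopes. A short numeric sketch, in arbitrary units and ignoring the detailed shape of the PSF:

```python
import numpy as np

# Axial falloff: both the illumination intensity and the detection PSF of
# the microscope decay as z**-2 away from the focal plane, so the
# effective confocal PSF (their product) decays as z**-4.
z = np.array([1.0, 2.0, 4.0, 8.0])   # distance from focal plane (arbitrary units)
detection = z**-2.0                   # conventional imaging envelope
confocal = (z**-2.0) * (z**-2.0)      # illumination x detection

print(detection[1] / detection[0])    # 0.25   -> one quarter per doubling of z
print(confocal[1] / confocal[0])      # 0.0625 -> one sixteenth per doubling of z
```

Per doubling of the defocus distance, the conventional envelope drops by a factor of 4, the confocal one by a factor of 16, which is why out-of-focus contributions are suppressed so much more strongly.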

The superior 3-D imaging of confocal laser scanning microscopy is demonstrated in Fig. 8.7. An image taken with standard microscopy shows a crystal in the shape of a square pyramid which is sharp only at the base of the pyramid (Fig. 8.7a). Towards the top of the pyramid, the edges become more blurred. In contrast, a single image taken with a confocal laser scanning microscope images only a narrow height range (Fig. 8.7b). An image composed of a 6.2 µm depth scan by adding up all images shows a sharp image for the whole depth range (Fig. 8.7c). Many fine details can be observed that are not visible in the image taken with the conventional microscope. The laser scanning microscope has found widespread application in medical and biological sciences and materials research.


8.3 Depth from Time-of-Flight                                                        217

 

