

Characterization of 3-D Imaging Techniques



Depth imaging is characterized by two basic quantities, the depth resolution σ_z and the depth range ∆z. The depth resolution denotes the statistical error of the depth measurement and thus the minimal resolvable depth difference. Note that the systematic error of the depth measurement can be much larger (see discussion in Section 3.1). How the resolution depends on the distance z is an important characteristic of a depth imaging technique. It makes a big difference, for example, whether the resolution is uniform, i.e., independent of the depth, or decreasing with the distance z.

The depth range ∆z is the difference between the minimum and maximum depth that can be measured by a depth imaging technique. Consequently, the ratio of the depth range and depth resolution, ∆z/σ_z, denotes the dynamic range of depth imaging.

 

Depth from Triangulation

Looking at the same object from different points of view separated by a base vector b results in different viewing angles. In one way or the other, this difference in viewing angle results in a shift on the image plane, known as disparity, from which the depth of the object can be inferred.



 

 

Figure 8.1: A stereo camera setup.

 

 


Triangulation-based depth measurements include a wide variety of different techniques that, at first glance, do not have much in common, but are still based on the same principle. In this section we will discuss stereoscopy (Section 8.2.1), active triangulation, where one of the two cameras is replaced by a light source (Section 8.2.2), depth from focus (Section 8.2.3), and confocal microscopy (Section 8.2.4). In the section about stereoscopy, we also discuss the basic geometry of triangulation.

 



Stereoscopy

Observation of a scene from two different points of view allows the distance of objects to be determined. A setup with two imaging sensors is called a stereo system. Many biological visual systems perform depth perception in this way. Figure 8.1 illustrates how depth can be determined from a stereo camera setup. Two cameras are placed close to each other with parallel optical axes. The distance vector b between the two optical axes is called the stereoscopic basis.

An object will be projected onto different positions of the image plane because it is viewed from slightly different angles. The difference in the position is denoted as the disparity or parallax, p. It is easily calculated from Fig. 8.1:

p = {}^{r}x_1 - {}^{l}x_1 = d'\,\frac{X_1 + b/2}{X_3} - d'\,\frac{X_1 - b/2}{X_3} = \frac{b\,d'}{X_3}.                           (8.3)
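Equation (8.3) can be checked numerically with the parallel-axis pinhole geometry of Fig. 8.1. The short Python sketch below (variable names and the specific numbers are illustrative, not taken from the text) projects a world point onto both image planes and compares the resulting parallax with b d'/X_3:

```python
# Minimal sketch: check the disparity formula of Eq. (8.3) for the
# parallel-axis stereo geometry of Fig. 8.1. All names and numbers are
# illustrative and not taken from the text.

def project(X1, X3, d_prime, camera_offset):
    """Pinhole projection of the world point (X1, X3) onto the image plane
    of a camera whose optical axis is shifted by camera_offset along X1."""
    return d_prime * (X1 - camera_offset) / X3

b = 0.2        # stereoscopic basis [m]
d_prime = 0.1  # image distance d' (approximately f for distant objects) [m]
X1, X3 = 1.5, 10.0   # world point [m]

x_right = project(X1, X3, d_prime, -b / 2)  # right camera at -b/2
x_left = project(X1, X3, d_prime, +b / 2)   # left camera at +b/2

p = x_right - x_left
print(p, b * d_prime / X3)   # both give 0.002 m = 2 mm of parallax
```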

The parallax is inversely proportional to the distance X_3 of the object (zero for an object at infinity) and is directly proportional to the stereoscopic basis and the focal length of the cameras (d' ≈ f for distant objects). Thus the distance estimate becomes more difficult with increasing distance. This can be seen more clearly by using the law of error propagation (Section 3.3.3) to compute the error of X_3 from:


 

X_3 = \frac{b\,d'}{p} \quad\Rightarrow\quad \sigma_{X_3} = \frac{b\,d'}{p^2}\,\sigma_p = \frac{X_3^2}{b\,d'}\,\sigma_p.                     (8.4)


Therefore, the absolute sensitivity for a depth estimate decreases with the distance squared. As an example, we take a stereo system with a stereoscopic basis of 200 mm and lenses with a focal length of 100 mm. Then, at a distance of 10 m the change in parallax is about 200 µm/m (about 20 pixel/m), while it is only 2 µm/m (0.2 pixel/m) at a distance of 100 m.
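The numbers in this example follow directly from Eq. (8.4). A small sketch that reproduces them is given below; the pixel size of 10 µm is an assumption introduced only to match the quoted pixel figures:

```python
# Sketch of the worked example: parallax sensitivity b*d'/X3**2 from
# Eq. (8.4) for b = 200 mm and d' = 100 mm. The pixel size of 10 um is an
# assumption introduced only to reproduce the "pixel/m" figures.

b, d_prime = 0.2, 0.1   # [m]
pixel_size = 10e-6      # [m], assumed

for X3 in (10.0, 100.0):
    dp_dX3 = b * d_prime / X3**2                   # change of parallax per metre of depth
    sigma_X3 = X3**2 / (b * d_prime) * pixel_size  # depth error for a 1-pixel parallax error
    print(f"X3 = {X3:5.1f} m: {dp_dX3 * 1e6:6.1f} um/m "
          f"({dp_dX3 / pixel_size:4.1f} pixel/m), "
          f"sigma_X3 = {sigma_X3:.3f} m per pixel of parallax error")
# -> 200 um/m (20 pixel/m) at 10 m, 2 um/m (0.2 pixel/m) at 100 m
```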

Parallax is a vector quantity and parallel to the stereoscopic basis b. This has the advantage that, if the two cameras are exactly oriented, we know the direction of the parallax beforehand. On the other hand, we cannot calculate the parallax in all cases. If an image sector does not show gray value changes in the direction of the stereo basis, then we cannot determine the parallax. This problem is a special case of the so-called aperture problem, which also occurs in motion determination and will be discussed in detail in Section 14.2.2.

The depth information contained in stereo images can be perceived directly with a number of different methods. First, the left and right stereo image can be represented in one image if one is shown in red and the other in green. The viewer uses spectacles with a red filter for the right and a green filter for the left eye. In this way, the right eye observes only the green and the left eye only the red image. This method, called the anaglyph method, has the disadvantage that no color images can be used. However, this method needs no special hardware and can be projected, shown on any RGB monitor, or printed out with standard printers.
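Constructing such an anaglyph amounts to writing the two gray-value half-images into the red and green channels of a single RGB frame. The sketch below is a minimal illustration with NumPy; the helper name and the particular channel assignment are illustrative, since which half-image ends up in which channel depends on the spectacles and on whether the result is viewed on a monitor or on a print:

```python
# Minimal sketch of the anaglyph method: two gray-value half-images are
# written into the red and green channels of one RGB frame. The helper name
# and the channel assignment are illustrative; which half-image belongs in
# which channel depends on the spectacles and on whether the anaglyph is
# viewed additively on a monitor or subtractively as a print.
import numpy as np

def make_anaglyph(red_source, green_source):
    """Combine two gray-value images (2-D uint8 arrays of equal shape)
    into a red/green anaglyph image; the blue channel stays zero."""
    anaglyph = np.zeros(red_source.shape + (3,), dtype=np.uint8)
    anaglyph[..., 0] = red_source
    anaglyph[..., 1] = green_source
    return anaglyph

# Synthetic stand-in data; real stereo pairs would be loaded from files.
left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
right = np.roll(left, 5, axis=1)        # crude horizontal shift as fake parallax
rg_image = make_anaglyph(left, right)
```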

Vertical stereoscopy also allows for the viewing of color stereo images [102]. The two component images are arranged one over the other. When viewed with prism spectacles that refract the upper image to the right eye and the lower image to the left eye, both images fuse into a 3-D image.

Other stereoscopic imagers use dedicated hardware. A common principle is to show the left and right stereo image in fast alternation on a monitor and to switch the polarization direction of the screen synchronously. The viewer wears polarizing spectacles that filter out the correct images for the left and right eye.

However, the anaglyph method has the largest potential for most applications, as it can be used with almost any image processing workstation, the only additional piece of hardware needed being red/green spectacles. A stimulating overview of scientific and technical applications of stereo images is given by Lorenz [115].



 

         
   

Figure 8.2: Active triangulation by projection of a series of fringe patterns with different wavelengths for binary coding of the horizontal position; from Wiora [199].

 

