
Panoramic Rendering for Concave Surfaces of Revolution

07/09/2003

Various methods exist for rendering panoramic images with computers. As part of my Honours year of the Bachelor of Digital Systems degree, and under the supervision of the CEMA laboratory at Monash University (2001), I investigated another method for rendering panoramas by exploiting the symmetrical properties of concave surfaces of revolution. Only a brief overview of the project is presented here. Those who wish to delve into the technical details may download the thesis directly from the Monash web site: "Panoramic Rendering for Concave Surfaces of Revolution" (PDF, 3.2MB).

1 Introduction

Panoramas, or pictures with a wide field of view, are not an entirely new concept; such visualisations have existed since the last decades of the 18th century. There are many methods for creating panoramas. For instance, early artists hand-painted panoramic pictures in meticulous detail. Later, the creation of panoramas became more straightforward with the advent of optics and photography. These days, we can easily experiment with panoramic imagery with the help of computers. Panoramas can be rendered cheaply in different variations, forms and shapes.

Most computers use a traditional perspective-view rendering system built around a 2D display. Objects are displayed by projecting 3D geometric shapes onto a 2D view plane, which corresponds to the area of the display screen.

There are other methods for rendering objects. Ray-tracers fire incident ray vectors through the view plane and perform ray-object intersection testing in the virtual scene. The colour at the object's intersection point is stored at the corresponding screen location where the ray crossed the view plane. Similar techniques can be employed for panoramic rendering; however, in this case the viewing surface is no longer a plane, but a curved surface that can provide a greater field of view.
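As a rough illustration of the conventional approach (not code taken from this project), the sketch below maps a pixel to a point on a planar view plane and builds the ray direction from the eye through that point; the struct and function names are invented for the example.

// A minimal sketch of firing a ray through a planar view plane (illustrative
// only; the names and conventions here are not taken from the project's code).
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 Normalise(Vec3 v)
{
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return Vec3{ v.x/len, v.y/len, v.z/len };
}

// The eye sits at the origin and the view plane at z = planeDist. Pixel
// (px, py) is mapped to a point on the plane, and the ray through that point
// is then intersection-tested against the objects in the scene.
Vec3 PerspectiveRayDir(int px, int py, int width, int height, double planeDist)
{
    double u = (px + 0.5) / width  * 2.0 - 1.0;   // plane X in [-1, 1]
    double v = (py + 0.5) / height * 2.0 - 1.0;   // plane Y in [-1, 1]
    return Normalise(Vec3{ u, v, planeDist });
}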

2 Project Details

This project explored techniques for generating and displaying panoramic images on concave surfaces of revolution. The implementation included a ray-tracer and a real-time rendering system.

A surface of revolution can be constructed by revolving a 2D curve around a line, the principal axis. The geometric shape of the symmetrical surface is governed by this 2D function, the profile curve. Since most curved displays can be considered symmetrical about their principal axis, 2D profile curves provide a convenient way of modelling a display's shape.
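A minimal sketch of this construction follows: a profile curve z = ƒ(r) is revolved around the Z axis, so a radius r and an angle theta pick out a 3D point on the surface. The particular profile function below is a simple paraboloid chosen only for illustration.

// Sketch: a surface of revolution built from a 2D profile curve z = f(r).
#include <cmath>

struct Vec3 { double x, y, z; };

double Profile(double r)                          // example profile curve f(r)
{
    return -r*r + 1.0;
}

// Revolving the profile around the Z axis: radius r and angle theta give a
// point on the symmetrical 3D surface.
Vec3 SurfacePoint(double r, double theta)
{
    return Vec3{ r*std::cos(theta), r*std::sin(theta), Profile(r) };
}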

Figure 1: Profile Curve. Figure 2: Virtual Camera.

The symmetrical property of the surface allows 3D points to be transformed into the 2D profile curve space. To illustrate this idea, the projection surface (Figure 1) can be represented by an infinite number of 2D profile curve "slices" revolving around the Z axis. This means that any point in 3D space will lie in the plane of some slice, the profile curve space.
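A sketch of that reduction, with invented names: the slice containing a point is identified by its angle around the Z axis, and within that slice the point is fully described by its radial distance and height.

// Sketch: collapsing a 3D point into 2D profile curve space. The angle theta
// identifies which "slice" the point lies in; within that slice the point is
// fully described by the pair (r, z).
#include <cmath>

struct Vec3 { double x, y, z; };
struct ProfilePoint { double r, z, theta; };

ProfilePoint ToProfileSpace(const Vec3 &p)
{
    ProfilePoint q;
    q.r     = std::sqrt(p.x*p.x + p.y*p.y);   // distance from the principal (Z) axis
    q.theta = std::atan2(p.y, p.x);           // angle of the slice containing p
    q.z     = p.z;                            // height along the principal axis
    return q;
}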

Figure 3: Relationship between the 2D image plane and the panorama.

The view origin for the virtual camera and the user is illustrated in Figure 2. The rendered panorama is displayed onto the surface by a projector.

When rendering a panoramic image, the resulting picture is eventually stored in a 2D image space (or screen space). More precisely, the panoramic transformation of a 3D object is considered a two-step process: the object is first projected onto a surface of revolution, then re-projected onto a 2D image plane.

The final projection is represented in 2D image space because most display technologies, such as overhead projectors, are inherently based on two-dimensional raster image planes.

After the two-step transformation process, the resulting 2D image plane is assumed to be coincident with the XY plane of the viewing coordinate system. The view origin and the 3D surface are centred on the image plane, with the principal axis running parallel to the plane's normal. This arrangement is intended to create an orthographic projection of the surface onto the image plane.

2D image planes have a finite resolution when rasterised; therefore, in most cases they will clip the 3D surface. Figure 3 illustrates the 2D image plane and the visible portion of the surface.
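Read in reverse for a ray-tracer, this two-step mapping suggests the sketch below: a pixel at (x, y) on the image plane sits orthographically beneath the surface point (x, y, ƒ(r)), and a view ray is fired from the view origin through that surface point. The eye placement, the profile function and the exact ray convention are assumptions made for the illustration, not the thesis's precise formulation.

// Sketch: generating a panoramic view ray for an image plane pixel. The image
// plane coincides with the XY plane, the principal axis is the Z axis, and the
// surface point above pixel (x, y) is found orthographically as (x, y, f(r)).
// The eye position and ray convention are assumptions for this illustration.
#include <cmath>

struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin, dir; };

static Vec3 Normalise(Vec3 v)
{
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return Vec3{ v.x/len, v.y/len, v.z/len };
}

double Profile(double r)                          // illustrative profile curve
{
    return -r*r + 1.0;
}

// (x, y) is the pixel's position on the image plane, already scaled into the
// same units as the profile curve; eye is the view origin on the principal axis.
Ray PanoramicRay(double x, double y, const Vec3 &eye)
{
    double r = std::sqrt(x*x + y*y);              // radial distance from the Z axis
    Vec3 surf{ x, y, Profile(r) };                // point on the surface of revolution
    Vec3 dir{ surf.x - eye.x, surf.y - eye.y, surf.z - eye.z };
    return Ray{ eye, Normalise(dir) };
}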

3 Results

The following images (Figures 4 to 7) demonstrate how the shape of the profile curve affects the final panoramic image. Each pair of figures illustrates the actual cross-section of the surface (red highlights the visible regions of the surface), accompanied by the respective ray-traced panorama on the right.

Figures 5a and 5b show an example where the view vectors intersect the surface more than once, giving a "warped" appearance to the ray-traced image. In real-world situations it would be impossible to project this panorama onto a physical surface, because the surface would cast shadows on itself.

Figure 4: ƒ(r) = -r³ + r² + r + 1. Figure 5: ƒ(r) = -0.1r⁴ + 2.25r² + 0.2cos(8r) + 1.
Figure 6: ƒ(r) = sqrt(1 - r²). Figure 7: ƒ(r) = -2.5r².

The following set of images, Figures 8 to 11, demonstrates the real-time rendering system in wireframe and in Gouraud-shaded mode. The projection surface is no longer planar, so rendering flat polygons on the curved surface would look incorrect. It was therefore necessary to subdivide polygons to approximate the curvature of the surface. The wireframe views in Figures 8 and 9 illustrate how the polygons were subdivided. The sub-images in the top-right corner show the perspective view of the scene with no polygonal subdivision.
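The sketch below shows one common way to perform such subdivision (recursive midpoint splitting of triangles); the actual system's subdivision criterion is not reproduced here, so a fixed recursion depth stands in for it.

// Sketch: recursive midpoint subdivision of a triangle into four sub-triangles
// per level, so flat geometry better approximates the curved projection.
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 Midpoint(const Vec3 &a, const Vec3 &b)
{
    return Vec3{ (a.x + b.x)*0.5, (a.y + b.y)*0.5, (a.z + b.z)*0.5 };
}

void Subdivide(const Vec3 &a, const Vec3 &b, const Vec3 &c,
               int depth, std::vector<Vec3> &out)
{
    if (depth <= 0)                               // emit the triangle as-is
    {
        out.push_back(a); out.push_back(b); out.push_back(c);
        return;
    }
    Vec3 ab = Midpoint(a, b), bc = Midpoint(b, c), ca = Midpoint(c, a);
    Subdivide(a,  ab, ca, depth - 1, out);        // corner triangles
    Subdivide(ab, b,  bc, depth - 1, out);
    Subdivide(ca, bc, c,  depth - 1, out);
    Subdivide(ab, bc, ca, depth - 1, out);        // centre triangle
}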

Using polygon subdivision to approximate the curvature of the surface introduced a discontinuity problem, which manifested itself as cracks within the geometric shape of the models (see Figures 10 and 11). A crack would develop when the midpoint divisions along the shared edge of two neighbouring polygons were not coincident. One possible solution is to represent geometric objects in a winged-edge data structure, which would simplify the tracking of neighbouring polygons affected by a midpoint division. Unfortunately, there was not enough time to address this issue.
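For illustration only (this was not implemented in the project), the sketch below shows a simpler alternative to the winged-edge idea: caching each edge's midpoint under an order-independent key, so both polygons sharing the edge receive exactly the same vertex and no crack can open.

// Sketch: avoiding subdivision cracks with a shared edge-midpoint cache, a
// simpler alternative to the winged-edge structure mentioned above (and not
// something implemented in this project).
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };

typedef std::pair<int, int> EdgeKey;              // indices of the edge's endpoints

static EdgeKey MakeEdgeKey(int i, int j)
{
    return EdgeKey(std::min(i, j), std::max(i, j));
}

// Returns the index of the midpoint vertex for edge (i, j), creating it only
// once; neighbouring polygons that share the edge therefore share the vertex.
int SharedMidpoint(int i, int j, std::vector<Vec3> &verts,
                   std::map<EdgeKey, int> &cache)
{
    EdgeKey key = MakeEdgeKey(i, j);
    std::map<EdgeKey, int>::iterator it = cache.find(key);
    if (it != cache.end())
        return it->second;

    const Vec3 &a = verts[i], &b = verts[j];
    verts.push_back(Vec3{ (a.x + b.x)*0.5, (a.y + b.y)*0.5, (a.z + b.z)*0.5 });
    int index = (int)verts.size() - 1;
    cache[key] = index;
    return index;
}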

Figure 8: ƒ(r) = -r² + 1. Figure 9: ƒ(r) = -logₑ(r + 0.1). Figure 10: ƒ(r) = -r² + 1. Figure 11: ƒ(r) = -2r + 1.

Figures 12 to 15 illustrate how the panoramas may be used in practical situations. Each figure shows the profile curve (top right) that was used to render the panorama (left). The same profile curve was also used to model a 3D panoramic display (bottom right), onto which the panorama was projected. The user's viewpoint inside the panoramic display was simulated with a perspective camera; its view (centre right) shows a perspective-correct image of the scene.

Figure 12: Spherical panorama. Figure 13: Parabolic panorama. Figure 14: Hyperbolic panorama, ray-traced. Figure 15: Hyperbolic panorama, rendered in real-time.

Appendix

Documentation

Video Clips

Source Code

The source code and the sample program used in this project are available for download under the GNU General Public License. Pre-compiled binaries and the necessary 3D models are also included. For details about operating the program, refer to readme.txt in the zip file. The code is not exactly a shining example of good C++ programming practices, but it did the job.

Copyright 2003, Dominik Deák

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License (gpl-2.0.txt) for more details.