Theses and Dissertations

Issuing Body

Mississippi State University

Advisor

Moorhead II, Robert J.

Committee Member

Machiraju, Raghu

Committee Member

Burg, Clarence O.

Committee Member

Evans, David L.

Committee Member

Thompson, David S.

Date of Degree

5-10-2003

Document Type

Dissertation - Open Access

Major

Computational Engineering (Program)

Degree Name

Doctor of Philosophy

College

James Worth Bagley College of Engineering

Department

Computational Engineering Program

Abstract

The motivation for this work is to study methods of estimating appropriate level-of-detail (LoD) object models by quantifying appearance errors prior to image synthesis. Visualization systems have been developed that employ LoD objects; however, the selection criteria are often based on heuristics that restrict the form of the object model and rendering method, and object illumination is not considered in the LoD selection. This dissertation proposes an image-based scene-learning pre-process to determine an appropriate LoD for each object in a scene. Scene learning employs sample images of an object, from many views and with a range of geometric representations, to produce a profile of the LoD image error as a function of viewing distance. Signal-processing techniques are employed to quantify how images change with respect to object model resolution, viewing distance, and lighting direction. A frequency-space analysis is presented that uses the human vision system's contrast sensitivity to evaluate perceptible image differences with error metrics. The initial development of scene learning is directed to sampling the object's appearance as a function of viewing distance and object geometry in scene space. A second phase allows local lighting to be incorporated into the scene-learning pre-process. Two methods for re-lighting are presented that differ in accuracy and overhead; both allow properties of an object's image to be computed without rendering. In summary, full-resolution objects produce the best image, since the 3D scene is as real as possible. A less realistic 3D scene with simpler objects produces a different appearance in an image, but by what amount? My thesis is that such a measure can be had: namely, that object fidelity in the 3D scene can be loosened further than has previously been shown without introducing significant appearance change in an object, and that the relationship between 3D object realism and appearance can be expressed quantitatively.
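
The frequency-space comparison described in the abstract can be sketched in a few lines: take the Fourier transform of the difference between a full-resolution rendering and a reduced-LoD rendering, weight each spatial frequency by a contrast sensitivity function (CSF), and reduce the result to a single error number. This is only a minimal illustration of the general idea, not the dissertation's actual implementation; the choice of the Mannos-Sakrison CSF model and the pixel-frequency-to-cycles-per-degree mapping are assumptions.

```python
import numpy as np

def csf_mannos_sakrison(f):
    # Mannos-Sakrison contrast sensitivity model; f in cycles/degree.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def perceptual_image_error(img_a, img_b, max_cpd=30.0):
    """CSF-weighted RMS error between two grayscale float images in frequency space.

    max_cpd maps the Nyquist frequency of the image to a viewing condition
    (assumed here; it depends on display resolution and viewing distance).
    """
    # Spectrum of the pixel-wise difference image.
    diff = np.fft.fftshift(np.fft.fft2(img_a - img_b))
    h, w = img_a.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))  # cycles/sample, [-0.5, 0.5)
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    # Radial spatial frequency, rescaled to cycles/degree.
    r = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2) * 2.0 * max_cpd
    # Down-weight frequencies the visual system is less sensitive to.
    weighted = np.abs(diff) * csf_mannos_sakrison(r)
    return float(np.sqrt(np.mean(weighted ** 2)))
```

In a scene-learning pre-process of the kind the abstract describes, such an error would be evaluated across many views and LoD models to build the profile of image error versus viewing distance.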

URI

https://hdl.handle.net/11668/18384

Comments

Fourier transform, computer graphics, visual perception
