Importance-Driven Volume Rendering


Bibliographic Details
Published in: 2004 IEEE Visualization Conference, pp. 139-146
Main Authors: Viola, Ivan; Kanitsar, Armin; Gröller, Meister Eduard
Format: Conference Proceeding
Language: English
Published: Washington, DC, USA: IEEE Computer Society, 10.10.2004
Series: ACM Conferences
ISBN: 0780387880, 9780780387881
DOI: 10.1109/VISUAL.2004.48

More Information
Summary: This paper introduces importance-driven volume rendering as a novel technique for automatic focus and context display of volumetric data. Our technique is a generalization of cut-away views, which - depending on the viewpoint - remove or suppress less important parts of a scene to reveal more important underlying information. We automate and apply this idea to volumetric data. Each part of the volumetric data is assigned an object importance, which encodes visibility priority. This property determines which structures should be readily discernible and which structures are less important. In image regions where an object occludes more important structures, it is displayed more sparsely than in areas where no occlusion occurs. Thus the objects of interest are clearly visible. For each object several representations, i.e., levels of sparseness, are specified. The display of an individual object may incorporate different levels of sparseness. The goal is to emphasize important structures and to maximize the information content in the final image. This paper also discusses several possible schemes for specifying levels of sparseness and different ways in which object importance can be composited to determine the final appearance of a particular object.
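The importance compositing described in the summary can be illustrated with a minimal sketch: along each viewing ray, the most important object encountered dictates how sparsely less important occluders in front of it are drawn. This is only a simplified illustration under assumed inputs; the function name `composite_ray`, the single `sparse_opacity` factor standing in for a full level-of-sparseness hierarchy, and the per-sample `(color, opacity, importance)` tuples are all hypothetical, not taken from the paper, which discusses several more elaborate compositing schemes.

```python
def composite_ray(samples, sparse_opacity=0.1):
    """Front-to-back compositing of one ray with a crude importance rule.

    samples: list of (color, opacity, importance) tuples, ordered
    front to back. Samples less important than the maximum importance
    seen along the ray get their opacity suppressed (a "sparser"
    representation), so the most important structure shows through.
    """
    # One possible importance-compositing rule: the maximum importance
    # along the ray wins (a sketch of maximum-importance-style compositing).
    max_importance = max(imp for _, _, imp in samples)

    out_color, out_alpha = 0.0, 0.0
    for color, opacity, importance in samples:
        if importance < max_importance:
            # Occluder of a more important structure: render sparsely.
            opacity *= sparse_opacity

        # Standard front-to-back alpha compositing.
        out_color += (1.0 - out_alpha) * opacity * color
        out_alpha += (1.0 - out_alpha) * opacity
        if out_alpha >= 0.99:  # early ray termination
            break
    return out_color, out_alpha
```

With a low-importance sample in front of a high-importance one, e.g. `composite_ray([(1.0, 0.5, 0), (0.5, 0.5, 1)])`, the front sample's opacity is cut to 0.05, so the more important structure behind it dominates the result.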