[4] Bentley, J.L. Multidimensional Binary Search Trees Used for Associative Searching. Communications of the ACM, 18 (1975), 509-517.
[5] Brooks, Jr., Frederick P. Walkthrough - A Dynamic Graphics System for Simulating Virtual Buildings. Proceedings of the 1986 Workshop on Interactive 3D Graphics.
[6] Clark, James H. Hierarchical Geometric Models for Visible Surface Algorithms. Communications of the ACM, 19, 10 (October 1976), 547-554.
[7] Hohmeyer, Michael E., and Teller, Seth J. Stabbing Isothetic Rectangles and Boxes in O(n lg n) Time. Technical Report UCB/CSD 91/634, Computer Science Department, U.C. Berkeley, 1991. Also to appear in Computational Geometry: Theory and Applications, 1992.
[8] Jones, C.B. A New Approach to the 'Hidden Line' Problem. The Computer Journal, 14, 3 (August 1971), 232-237.
[9] Khorramabadi, Delnaz. A Walk through the Planned CS Building. Master's Thesis UCB/CSD 91/652, Computer Science Department, U.C. Berkeley, 1991.
[10] Séquin, Carlo H. Introduction to the Berkeley UNIGRAFIX Tools (Version 3.0). Technical Report UCB/CSD 91/606, Computer Science Department, U.C. Berkeley, 1991.
[11] Teller, Seth J., and Séquin, Carlo H. Visibility Preprocessing for Interactive Walkthroughs. Computer Graphics (Proc. SIGGRAPH '91), 25, 4 (August 1991), 61-69.
[12] Zyda, Michael J. Course Notes, Book Number 10, Graphics Video Laboratory, Department of Computer Science, Naval Postgraduate School, Monterey, California, November 1991.
most likely to be visible to the observer in upcoming frames in order
[Figure 14 plot: Megabytes (vertical axis) vs. Frame Number (horizontal axis)]
Figure 14: Comparison of the amounts of data fetched from disk (top curve) and actually needed for rendering (bottom curve) while following the walkthrough test path; marked spots correspond to the labels shown in Figure 13.
In general, we are able to pre-fetch objects before they are rendered, and so the observer can move smoothly through the model. However, there are a few cases in which the memory manager is not able to predict which objects are going to become visible to the observer far enough in advance to pre-fetch them, and so the user may have to wait while they are read into memory. As the observer turns a corner in a corridor, the visible set of objects can change dramatically. This prompts a request for a large amount of new data to be loaded into memory. For the worst-case corners (labels 'B' and 'C'), the coprocessor is busy for about 8 seconds to prefetch on the order of 2 MB of data that might be used in the near future. However, the amount of data needed immediately for the rendering of the next frame is much smaller; because of parallel processing, the resulting observable delays are on the order of a couple of seconds for a worst-case situation in our model. We are developing more sophisticated pre-fetching techniques that use a better prediction of the observer's motion.
7 Conclusion
Our paper describes a system for interactive walkthroughs of very large architectural models. It builds a hierarchical display database containing objects represented at multiple levels of detail during the modeling phase, performs a spatial subdivision and visibility analysis during a precomputation phase, and uses real-time display and memory management algorithms during a walkthrough phase to judiciously select a relevant subset of data for rendering. We have implemented a first version of this system, and tested it in real walkthroughs of a completely furnished model of the sixth floor of the planned Computer Science building at UC Berkeley. Our initial results show that the display and memory management techniques are effective at culling away substantial portions of the model, and make interactive frame rates possible even for very large models.
8 Acknowledgements
We are grateful to Delnaz Khorramabadi for her efforts constructing the building model, and to Paul Haeberli for his help in producing the color plates. Silicon Graphics, Inc., donated a 320 VGX workstation to this project as part of a grant from the Microelectronics Innovation and Computer Research Opportunities (MICRO) program of the State of California.
References
[1] Airey, John M. Increasing Update Rates in the Building Walkthrough System with Automatic Model-Space Subdivision and Potentially Visible Set Calculations. Ph.D. thesis, UNC Chapel Hill, 1990.
[2] Airey, John M., Rohlf, John H., and Brooks, Jr., Frederick P. Towards image realism with interactive update rates in complex virtual building environments. ACM SIGGRAPH Special Issue on 1990 Symposium on Interactive 3D Graphics, 24, 2 (1990), 41-50.
[3] Autocad Reference Manual, Release 10, Autodesk Inc., 1990.
Figure 13: Test path through the building model.
Display Management
As discussed in Section 5.1, we compute the set of potentially visible objects by generating successively smaller supersets, culling away objects invisible to the observer. The sizes of the sets, and the times (in seconds) required to render them, are shown for viewpoint 'A' in Table 1 and averaged over the test walkthrough path in Table 2. On average, we are able to cull away 94% of the model and reduce rendering time by a factor of 17 by rendering only objects in the eye-to-object visibility set rather than the entire building model.
Culling Method    # Objs.   # Faces   Draw Time (s)   % of Model
Entire model        2,320   242,668       3.77           100%
Cell-to-cell        1,065   109,227       1.77            45%
Cell-to-object        558    40,475       0.65            17%
Eye-to-cell           241    30,265       0.52            12%
Eye-to-object         165    18,927       0.33            7.8%

Table 1: Visibility cull results for viewpoint 'A'.
Culling Method    # Objs.   # Faces   Draw Time (s)   % of Model
Entire model        2,320   242,668       3.66           100%
Cell-to-cell          778    78,475       1.22            32%
Cell-to-object        440    36,921       0.59            15%
Eye-to-cell           207    20,657       0.34            8.5%
Eye-to-object         141    13,701       0.23            5.6%

Table 2: Average visibility cull results for test walkthrough.

We further reduce the number of faces rendered at each frame by choosing an appropriate level of detail at which to render each potentially visible object based on its apparent size and speed to the observer. Statistics regarding the number of faces and the time required to render each frame using different pixels-per-face thresholds for viewpoint 'A' and averaged over the test path are shown in Tables 3 and 4, respectively. Usable rendering modes, for which little or no degradation in image quality is perceptible (256 pixels per face), are shown in bold typeface.
Color Plates IV, V and VI show the difference between a static image produced using the highest level of detail for all objects (Plate IV) and one generated with reduced levels of detail for objects with fewer than 256 pixels per face (Plate V). Plate IV has 23,468 faces and took 0.34 seconds to render, whereas Plate V has 7,555 faces and took 0.17 seconds. The images were rendered without interpolated shading or antialiasing in order to accentuate differences - notice the reduced tessellation of the chairs further from the observer. Plate VI shows which level of detail was used for each object in Plate V (a darker shade represents a higher level of detail). Overall, after computing the set of potentially visible objects and choosing an appropriate level of detail for each object, we are able to cull away an average of 97% of the building model and reduce rendering time by an average factor of 39 in each frame.
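The average 97% cull figure quoted above can be checked directly against the face counts in Tables 2 and 4 (entire model vs. the average eye-to-object set rendered at the 256 pixels-per-face threshold):

```python
total_faces = 242_668     # entire model (Table 2)
rendered_faces = 6_176    # avg. eye-to-object set at 256 pixels/face (Table 4)

culled_fraction = 1 - rendered_faces / total_faces
print(f"culled: {culled_fraction:.0%}")  # -> culled: 97%
```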
Min. Pixels/Face   # Objs.   # Faces   Draw Time (s)   % of Model
    0                165      18,927       0.33           7.8%
   64                165      11,763       0.26           4.8%
  128                165       8,861       0.22           3.6%
  256                165       6,204       0.17           2.6%
  512                165       3,889       0.13           1.6%
 1024                165       2,871       0.12           1.2%

Table 3: Average detail cull results for viewpoint 'A'.
Min. Pixels/Face   # Objs.   # Faces   Draw Time (s)   % of Model
    0                141      13,701       0.23           5.6%
   64                141       9,700       0.18           4.0%
  128                141       7,979       0.16           3.3%
  256                141       6,176       0.14           2.5%
  512                141       4,745       0.12           2.0%
 1024                141       3,427       0.10           1.4%

Table 4: Average detail cull results for test walkthrough.
Memory Management
As described in Section 5.2, the memory manager tries to store in memory the objects incident upon the cells that are
at the lowest level of detail for which the average size of a face is greater than some threshold, and the size of an average face divided by its speed is greater than another threshold. If either of these values is less than the corresponding threshold for all available levels of detail of an object, we render the object at its lowest level of detail.
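One plausible reading of this selection rule can be sketched as follows; the coarse-to-fine list ordering, the tuple layout, and the parameter names are illustrative assumptions, not the system's actual data structures. Since coarser levels have fewer, larger faces, the sketch scans from the finest level down and keeps the most detailed level that still meets both thresholds, falling back to the coarsest level when none qualifies:

```python
def choose_lod(levels, min_pixels, min_size_per_speed):
    """levels: hypothetical list ordered coarse-to-fine; each entry is a
    (avg_face_pixels, avg_face_speed_px_per_frame) tuple.
    Returns the index of the finest level whose average face is larger
    than min_pixels and whose size/speed ratio exceeds min_size_per_speed;
    falls back to the coarsest level otherwise."""
    for i in range(len(levels) - 1, -1, -1):  # try the finest level first
        size, speed = levels[i]
        if size > min_pixels and size / speed > min_size_per_speed:
            return i
    return 0  # no level qualifies: render at the lowest level of detail
```

For example, with levels [(400, 10), (120, 10), (30, 10)] and thresholds of 64 pixels and 4, the middle level (index 1) is chosen: the finest level's faces are too small, while the middle level passes both tests.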
As the observer moves through the model, an object may be rendered at different levels of detail in successive frames. Rather than abruptly snapping from one level of detail to the next, we blend successive levels of detail using partial transparency. Since the complexity of any level is typically small compared to that of the next higher level (by more than a factor of two), the extra time spent blending the two levels during transition does not constitute an undue overhead, considering the small fraction of objects making a transition at the same time.
5.2 Memory Management
Since the entire model cannot be stored in memory at once, we must choose a subset of objects to store in memory for each frame, and swap objects in and out of memory in real time as the observer moves through the model. As a minimum, we must store in memory all objects to be rendered in the next frame. However, since it takes a relatively large amount of time to swap data from disk into memory, we must also predict which objects might be rendered in future frames and begin swapping them into memory in advance. Otherwise, frame updates might be delayed, waiting for objects to be read from disk before they can be rendered.
As described in Section 4.2, we group each level of detail for all objects incident upon the same cell contiguously in the display database. To take advantage of the relative efficiency of large IO operations, we always load all objects incident upon the same cell into memory together at the same level of detail. Thus, our memory management algorithm must compute for each frame which cell contents to store in memory at which levels of detail.
In general, we store in memory the contents of the cells containing the objects most likely to be rendered in upcoming frames. Specifically, we determine which cells are most likely to contain the observer in upcoming frames, and store in memory all objects incident upon cells visible from any of these cells. Each time the observer steps across a cell boundary, we traverse the cell adjacency graph, considering cells in order of the minimum amount of time before the cell can possibly contain the observer, using a shortest path algorithm.
The user interface also enforces some limits on the size of a step or turn that the observer may take in a single frame. For each cell visited in the search, we mark and claim memory for the contents of all cells visible from that cell in the direction of the observer's frustum, up to the precomputed maximum level of detail at which any object incident upon the cell might be rendered for an observer in that cell. Our search terminates when all available memory has been claimed or when we have considered all possible observer viewpoints more than some maximum amount of time in the future. We then read the contents of all newly marked cells into memory, possibly replacing the contents of unmarked cells.
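The cell-marking search just described can be sketched as a Dijkstra-style traversal over the cell adjacency graph. The dictionary-based graph layout, the function and parameter names, and the single scalar memory budget below are simplifying assumptions rather than the system's actual interface:

```python
import heapq

def mark_cells(source, adjacency, travel_time, visible_from, cell_bytes,
               memory_budget, time_horizon):
    """Visit cells in order of the minimum time before they could contain
    the observer; claim memory for the contents of all cells visible from
    each visited cell.  Stops when all memory is claimed or the look-ahead
    time horizon is exceeded.  Hypothetical layout: adjacency[c] lists
    neighbouring cells, travel_time[(a, b)] gives seconds to cross the
    portal from a to b, visible_from[c] lists cells visible from c,
    cell_bytes[c] is the size of c's contents."""
    marked, claimed = set(), 0
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, cell = heapq.heappop(heap)
        if t > best[cell] or t > time_horizon:
            continue  # stale heap entry, or beyond the look-ahead horizon
        for v in visible_from[cell]:
            if v not in marked:
                if claimed + cell_bytes[v] > memory_budget:
                    return marked  # all available memory has been claimed
                marked.add(v)
                claimed += cell_bytes[v]
        for nbr in adjacency[cell]:
            nt = t + travel_time[(cell, nbr)]
            if nt < best.get(nbr, float("inf")):
                best[nbr] = nt
                heapq.heappush(heap, (nt, nbr))
    return marked
```

On a toy three-cell corridor A-B-C with one-second portal crossings, a generous budget marks all three cells, while a budget of two cell-contents stops the search after A and B.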
For instance, consider the observer viewpoint shown in Figure 12. Cells are labeled by the minimum amount of time (in seconds) before they can possibly become visible to the observer, and shaded by the level at which their contents are stored in memory - darker shades represent higher levels. The cells surrounded by the thick dashed line represent the cells visited during the search, i.e., the range of observer positions for which we store visible objects in memory.
Figure 12: Cells labeled by the number of seconds before they can possibly become visible to the observer, and shaded by level of detail stored in memory (a darker shade represents a higher level of detail). White cells are not loaded into memory.
6 Results and Discussion
In this section we present and analyze test results collected during real interactive walkthroughs performed with our system. During the tests, we logged statistics regarding the performance of our display and memory management algorithms in real time as a user walked through the building model.
We present results for one observer viewpoint used as an example in the previous discussion (marked by an 'A' in Figure 13), as well as for a full sequence of observer viewpoints generated during an actual walkthrough along the path shown in Figure 13. The path is about 300 feet long, and a realistic physical walk along it should take approximately one minute. All tests were performed on a 320 VGX Silicon Graphics workstation with two 33 MHz processors and 64 MB of memory.
Since the observer is at a known point and has vision limited to a view cone emanating from this point, we can cull the set of visible objects even further. We define the eye-to-cell visibility as the set of all objects incident upon any cell partially or completely visible to the observer (the light stippled regions in Figure 9). Clearly, the eye-to-cell visibility is also a superset of the objects actually visible to the observer. The visible area in any cell is always the intersection of that (convex) cell with one or more (convex) wedges emanating through portals from the eyepoint. To compute the eye-to-cell visibility, we initialize the visible area wedge to the interior of the view cone, and the eye-to-cell visibility to the source cell. Next, we perform a constrained depth-first search (DFS) of the stab tree, starting at the source cell, and propagating outward. Upon encountering a portal, the wedge is suitably narrowed, and the newly reached cell is added to the eye-to-cell visibility set. If the wedge is disjoint from the portal, the active branch of the DFS is terminated.

Finally, we estimate the eye-to-object visibility, a narrower superset of the objects actually visible to the observer, by generating the intersection of the cell-to-object and eye-to-cell sets. For example, consider the observer viewpoint shown in Figure 9. The eye-to-object visibility set (filled squares) contains all objects in the intersection between the cell-to-object (all squares) and eye-to-cell (gray regions) sets. It is a small subset of all objects in the model, but still an over-estimate of the actual visibility of the observer. In Figure 9, only one square lies in a cell visible to the observer and can be seen from some point inside the cell containing the observer, but is not visible from the observer's current viewpoint. Color Plate III depicts the eye-to-object visibility set for this observer viewpoint in three dimensions.
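A minimal sketch of this constrained DFS, collapsing the 3D wedge to a 1D angular interval about the eyepoint; the (lo, hi) interval model, the portals mapping, and the skipping of already-reached cells are simplifying assumptions (the actual system intersects convex wedges with portal polygons in 3D and may reach a cell through several portals):

```python
def eye_to_cell(cell, wedge, portals, reached=None):
    """Constrained depth-first search outward from the source cell.
    wedge is a (lo, hi) angular interval; portals[c] yields
    (neighbour_cell, (lo, hi)) pairs for each portal out of c.
    The wedge is narrowed at every portal, and a branch terminates
    when the wedge and portal become disjoint."""
    if reached is None:
        reached = set()
    reached.add(cell)
    lo, hi = wedge
    for nbr, (plo, phi) in portals[cell]:
        nlo, nhi = max(lo, plo), min(hi, phi)  # intersect wedge with portal
        if nlo < nhi and nbr not in reached:   # empty intersection -> stop
            eye_to_cell(nbr, (nlo, nhi), portals, reached)
    return reached
```

With a view cone of (0.0, 0.7) and a chain of cells whose portals progressively narrow the wedge, the search reaches every cell whose portal still overlaps the wedge and prunes the branch whose portal lies outside the cone.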
Figure 9: Eye-to-object visibility. Shown are only the potentially visible objects, i.e., the black objects from Figure 5.

Object Hierarchy
After we have culled away portions of the model that are invisible from the observer's viewpoint, we can further reduce the number of faces rendered in each frame by choosing an appropriate level of detail at which to render each visible object. Since the image must ultimately be displayed in pixels, it is useless to render very detailed descriptions of objects that are very small or far away from the observer and which map to just a few pixels on the display (Figure 10). Likewise, it is wasteful to render details in objects that are moving quickly across the screen and which appear blurred or can be seen for only a short amount of time (Figure 11). Instead, we can achieve the same visual effect by rendering simpler representations of these objects, consisting of just a few faces with appropriate colors. This is a technique used by commercial flight simulators; however, little has been published on these systems [12].
Figure 10: Perceptible detail is related to apparent size.
Figure 11: Perceptible detail is related to apparent speed.
Rather than rendering all objects at the highest level of detail in every frame, we choose a level of detail at which to render each object based on its apparent size and speed from the point of view of the observer. For each level of detail, we estimate the size of an average face in pixels, and the speed of an average face in pixels per frame. We render an object
is stored with each segment so that segments can be read and released
Figure 7: The implementation of display database segments.
4.2 Layout
Since the latency overhead of each read operation is relatively large, we group the segments for all objects incident upon the same cell contiguously in the display database file. This layout allows us to utilize the cell-to-cell visibility information from the precomputation phase to load groups of objects (those likely to become visible at the same time) into memory in a single IO operation. If an object is incident upon more than one cell (i.e., it straddles a cell boundary), then we store it redundantly, once for each cell.
Furthermore, we store descriptions of all objects incident upon the same cell at the same level of detail contiguously in the display database, as shown in Figure 8. Within a single cell, the object headers appear first, followed by descriptions of the objects at increasing levels of detail. As a result, all objects incident upon a cell at or up to any level of detail may be read at once in a single read operation during the interactive walkthrough phase.
5 The Walkthrough Phase
During the walkthrough phase, we simulate an observer moving through the architectural model under user control. The goal is to render the model as seen from the observer's

[Figure 8 labels: Headers, at Level #1, at Level #2]