1
Out-of-Core Compression for Gigantic Polygon
Meshes
  • Martin Isenburg, Stefan Gumhold
  • Accepted to SIGGRAPH 2003

Presented by Pin Ren, Apr 18, 2003
2
The Problem I
  • Huge size of 3D mesh data from:
  • 3D scanning technology, e.g. the Digital
    Michelangelo Project at Stanford
  • Large-scale CAD applications, e.g. the Power
    Plant model at UNC
  • Compression / decompression requirements:
  • Without loss of fidelity
  • Efficiency

3
The Problem II
  • Limited memory on a common PC
  • Typically 128 MB--1 GB
  • St. Matthew statue: >6 GB raw (186M vertices)
  • Out-of-core vs. in-core
  • Things to consider:
  • Small memory footprint (cannot grow with the
    size of the mesh!)
  • Accuracy (connectivity and geometry)
  • How to avoid the side effects of the techniques
    introduced (e.g. partitioning introduces
    discontinuities)

4
Previous Work
  • Traditional representation of meshes
  • Array of vertex positions
  • Array of indices into the vertex array to
    specify polygons
  • Mesh compression
  • Deering 95, Taubin and Rossignac 98, Touma and
    Gotsman 98, Brodsky and Watson 00, Lee et
    al. 02: many nice works, but they still cannot
    close the gap between model size and the limited
    memory of a common PC.
  • Ho et al. 02 address the problem by partitioning
    the mesh into manageable pieces, then applying
    existing approaches. Artificial discontinuities!

5
Previous Work cont.
  • Out-of-core algorithms
  • Pointer de-referencing: convert the indexed mesh
    into a "vertex soup" (each polygon stores its
    vertex positions directly)
  • I/O-efficient external algorithms, e.g. external
    merge sort
  • Two main paradigms
  • Batched: data is streamed in one or more passes;
    only a small amount of data is in memory at any
    one time
  • On-line: data is processed through a series of
    queries; common data structures are B-trees

6
Where to go?
  • Limited memory vs. huge model size
  • Eliminate popular schemes:
  • Multi-pass approaches that have to store the
    entire mesh in between passes
  • Two-pass schemes that decompress connectivity
    and geometry separately
  • The (only) promising way:
  • One-pass: a single, memory-limited pass

7
Where to go? Cont.
  • Common problem: lower-quality results caused by
    special handling schemes such as partitioning or
    vertex clustering.
  • We don't want to sacrifice accuracy when using
    out-of-core algorithms.
  • How to achieve this?
  • The out-of-core mesh should give the compressor
    transparent access to the information
    (connectivity and geometry)
  • Octree-based external memory? Cignoni et al. 03
    (lacks explicit connectivity info)

8
Where to go? Cont.
  • Partition into pieces vs. treat as a whole
  • Partitioning introduces discontinuities
  • Compression rate
  • Decompression speed
  • Compared with Ho et al. 01:
  • 25% better compression rate
  • 100 times faster decompression

9
What are their contributions? I
  • An external memory data structure that provides
    transparent access to connectivity and geometry
    of gigantic meshes
  • A method to construct this out-of-core mesh from
    an indexed mesh representation

10
What are their contributions? II
  • A compression scheme that uses the out-of-core
    mesh to compress gigantic models in one piece
  • A streamable, highly compressed mesh format that
    allows decompression at 2 million triangles per
    second
  • The concept of a processing sequence for
    high-speed, limited-memory-footprint mesh
    computations with access to connectivity

11
Out-of-Core Mesh--Data Structure 1
  • Major structure: half-edge
  • Information to retain:
  • Ability to enumerate all half-edges and mark
    them as visited
  • Access to the next and inverse half-edge and to
    the origin vertex
  • Access to the position of a vertex and whether
    it is non-manifold
  • Knowledge of border edges

12
Out-of-Core Mesh--Data Structure 2
13
Out-of-Core Mesh--Data Structure 3
  • Two modes:
  • Implicit for pure triangular meshes
  • Explicit for polygonal meshes (stores "next")
  • Capacity analysis:
  • 12 bytes for each vertex or triangle,
    8 (implicit) / 12 (explicit) bytes for each
    half-edge
  • St. Matthew: 186M vertices / 372M triangles
  • At least 2.4 GB + 4.8 GB + 9.6 GB (implicit) /
    14.4 GB (explicit)
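The two storage modes can be sketched roughly as follows. This is a minimal Python illustration, not the paper's actual record layout, and the field names are assumptions: in explicit mode each half-edge record stores a "next" reference, while in implicit (pure-triangle) mode the half-edges of triangle t occupy slots 3t, 3t+1, 3t+2, so "next" can be computed instead of stored.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HalfEdge:
    # Explicit mode: origin vertex, inverse half-edge, and next half-edge
    # are all stored (roughly the per-half-edge record of the slide).
    origin: int                  # index of the origin vertex
    inv: Optional[int] = None    # matched opposite half-edge (None = border)
    next: Optional[int] = None   # next half-edge around the face

def implicit_next(h: int) -> int:
    # Implicit mode: half-edges of triangle t are stored consecutively
    # as 3t, 3t+1, 3t+2, so 'next' cycles within the triple.
    return h - 2 if h % 3 == 2 else h + 1
```

Dropping the stored "next" pointer is what accounts for the smaller per-half-edge cost of implicit mode in the capacity analysis above.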

14
Out-of-Core Mesh--Clustering
  • Partition the mesh into a set of clusters
  • Cache over clusters

15
Out-of-Core Mesh--Caching
  • LRU replacement strategy
  • Vertex data, half-edge data, and the binary
    flags of a cluster are kept in separate files
  • Read/write patterns:
  • Vertex data: only read
  • Half-edge data: read/written while the mesh is
    built, only read by the compressor
  • Binary flags of a cluster: read/written by the
    compressor
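An LRU cache over clusters can be sketched as below. This is an illustrative Python simplification, not the paper's implementation; the `load_cluster` callback (an assumption) stands in for reading one cluster's data from its file.

```python
from collections import OrderedDict

class ClusterCache:
    """Illustrative LRU cache over mesh clusters."""
    def __init__(self, capacity, load_cluster):
        self.capacity = capacity
        self.load = load_cluster      # callback: cluster id -> cluster data
        self.cache = OrderedDict()    # insertion order tracks recency

    def get(self, cid):
        if cid in self.cache:
            self.cache.move_to_end(cid)         # mark as most recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[cid] = self.load(cid)    # fault the cluster in
        return self.cache[cid]
```

Because the compressor's access pattern is spatially coherent, a small number of resident clusters suffices, which is what keeps the memory footprint independent of the total mesh size.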

16
Building the Out-of-Core Mesh--Overview
  • Six stages:
  • 3 vertex passes
  • 1 face pass
  • 1 stage matching incident half-edges
  • 1 stage linking and shortening borders and
    searching for non-manifold vertices
  • All restricted to the in-core memory limit

17
Building the Out-of-Core Mesh--Vertex Passes
  • First pass:
  • Determine the number of vertices
  • Find the bounding box
  • Second pass:
  • Compute a balanced, spatial clustering of the
    vertices
  • k-nearest neighbors, k = 6
  • Similar to Ho et al. 01
  • Third pass:
  • Sort the vertices into clusters
  • Determine their index-pairs
  • Store a mapping file (maps vertex indices to
    index-pairs)
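The index-pair assignment of the third pass can be sketched like this. It is a hypothetical in-memory simplification in Python (the real pass streams through external files): each vertex gets a pair (cluster id, slot within that cluster), and the list of pairs is the content of the mapping file.

```python
def build_index_pairs(vertex_cluster):
    """vertex_cluster[i] = cluster id of vertex i (from the second pass).
    Returns, for each original vertex index, its index-pair
    (cluster id, slot within that cluster)."""
    next_slot = {}    # next free slot per cluster
    mapping = []
    for cid in vertex_cluster:
        slot = next_slot.get(cid, 0)
        mapping.append((cid, slot))
        next_slot[cid] = slot + 1
    return mapping
```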

18
Building the Out-of-Core Mesh--Face Pass
  • Create half-edges:
  • One half-edge for each of a face's edges
  • Store it in the cluster it belongs to
  • Primary file: stores the half-edges of a cluster
  • Secondary file: stores copies of half-edges
    crossing in from other clusters, a temporary
    file for the later matching stage
  • Decide the order of half-edges in each cluster:
  • Explicit mode: sort by origin vertex
  • Implicit mode: the half-edges of one triangle
    must be stored consecutively

19
Building the Out-of-Core Mesh--Matching of
Incident Half-Edges
  • Get info from the previously stored files
  • Mixed sorting strategy:
  • A single bucket sort over all edges
  • Quicksort over the edges of each bucket
  • O(n log D_max)
  • After sorting, what info do we get?
  • Looking at their number and orientation, we can
    distinguish 4 different types of edges
  • Pair the half-edges so as to guarantee manifold
    connectivity
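The mixed bucket-sort/quicksort matching can be sketched as follows. This is an illustrative Python version under simplifying assumptions: half-edges are keyed on their undirected endpoint pair, buckets are keyed on the smaller endpoint, and the orientation checks and non-manifold repair that distinguish the 4 edge types are omitted.

```python
from collections import defaultdict

def match_half_edges(half_edges):
    """half_edges[i] = (origin, target) vertex indices of half-edge i.
    Returns inv, where inv[i] is the matched opposite half-edge of i,
    or None for an unmatched (border) half-edge."""
    buckets = defaultdict(list)            # bucket sort on the smaller endpoint
    for i, (a, b) in enumerate(half_edges):
        buckets[min(a, b)].append(i)
    inv = [None] * len(half_edges)
    for ids in buckets.values():
        # quicksort within each bucket, keyed on the undirected edge
        ids.sort(key=lambda i: tuple(sorted(half_edges[i])))
        k = 0
        while k + 1 < len(ids):
            i, j = ids[k], ids[k + 1]
            if sorted(half_edges[i]) == sorted(half_edges[j]):
                inv[i], inv[j] = j, i      # incident pair found
                k += 2
            else:
                k += 1                     # unmatched: border half-edge
    return inv
```

Bucketing first keeps each sort small, which is where the O(n log D_max) bound (D_max = largest bucket) comes from.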

20
Building the Out-of-Core Mesh--Border Loops and
Non-Manifold Vertices
  • Link all border loops
  • Simply cycle through each border half-edge
  • Shorten border loops
  • Detect and mark non-manifold vertices using two
    binary flags:
  • The first specifies whether the vertex was
    visited before
  • The second specifies whether it is non-manifold
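The two-flag detection can be sketched as below. This is a simplified Python illustration under the assumption that a vertex encountered again after its first visit during the border traversal is non-manifold; in the actual system the flags live in the per-cluster flag files.

```python
def find_non_manifold(border_loops):
    """border_loops: list of vertex-index lists, one per linked border loop.
    Returns the set of vertices seen more than once across the traversal."""
    visited, non_manifold = set(), set()   # the two binary flags
    for loop in border_loops:
        for v in loop:
            if v in visited:
                non_manifold.add(v)        # second flag: non-manifold
            else:
                visited.add(v)             # first flag: visited before
    return non_manifold
```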

21
Compression--Overview
  • A single pass handles both connectivity and
    geometry info
  • Do the decompressor a favor:
  • At any time, it only needs access to the
    boundaries of the decoded region
  • Maintain some extra information at compression
    time

22
Decompression--Process
23
Connectivity Coding--Degree Coder
  • Active boundary:
  • Maintained as a doubly-linked loop of boundary
    edges
  • The coder grows a region by always including the
    face adjacent to the gate of the active boundary
  • Holes:
  • Need special handling, otherwise they lead to
    bad access patterns; store extra info
  • Non-manifold vertices:
  • Whenever a vertex is processed, store whether it
    is manifold
  • Only the first occurrence of a non-manifold
    vertex is compressed

24
Geometry Coding--Predictive Coder
  • Quantization precision:
  • For scanned data, keep the quantization error
    just below the scanning error
  • Quantized vertices are compressed with the
    parallelogram rule (Touma and Gotsman 98)
  • The first three vertices of each mesh component
    cannot be predicted this way
  • Their prediction value is simpler
  • Other properties are compressed similarly to the
    vertex positions
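The parallelogram rule itself is simple to state (sketch in Python): given an already-decoded triangle (a, b, c), the vertex across the shared edge (b, c) is predicted at b + c - a, and only the small quantized residual needs to be entropy-coded. The `residual` helper is an assumption for illustration.

```python
def parallelogram_predict(a, b, c):
    """Predict the fourth vertex d completing the parallelogram a, b, d, c:
    d = b + c - a, i.e. across the edge (b, c) from vertex a."""
    return tuple(bi + ci - ai for ai, bi, ci in zip(a, b, c))

def residual(actual, predicted):
    """What gets entropy-coded: the per-coordinate difference."""
    return tuple(x - p for x, p in zip(actual, predicted))
```

On smooth scanned surfaces the prediction is close, so the residuals cluster near zero and compress well.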

25
Conclusions
  • First technique able to compress St. Matthew in
    one piece on a desktop PC
  • New out-of-core mesh data structure
  • Efficient building process
  • Compression scheme
  • The concept of a processing sequence

26
  • Thank you very much.
  • Any Questions?