Grid Enabled Image Guided Neurosurgery Using High Performance Computing
1
Grid Enabled Image Guided Neurosurgery Using
High Performance Computing
  • A. Majumdar (1), A. Birnbaum (1), D. Choi (1), T. Devadithya (2),
    A. Trivedi (3), S. K. Warfield (4), N. Archip (4), K. Baldridge (1,5),
    Petr Krysl (3), June Andrews (6)
  • 1 San Diego Supercomputer Center; 3 Structural Engineering Dept,
    University of California San Diego
  • 2 Computer Science Dept, Indiana University
  • 4 Computational Radiology Lab, Brigham and Women's Hospital /
    Harvard Medical School
  • 5 Universität Zürich
  • 6 Electrical Engineering, UC Berkeley
  • Grants: NSF ITR 0427183, 0426558; REU; NIH P41 RR13218,
    P01 CA67165, LM0078651; I3 grant (IBM)

2
Neurosurgery Challenge
  • Remove as much tumor tissue as possible
  • Minimize the removal of healthy tissue
  • Avoid the disruption of critical anatomical
    structures
  • Know when to stop the resection process
  • Compounded by intra-operative brain deformation
    resulting from the surgical process
  • Important to quantify and correct for these
    deformations while surgery is in progress
  • Real-time constraint: images are provided about
    once per hour and results must be returned within a
    few minutes, during surgery lasting 6 to 8 hours

3
Intraoperative MRI Scanner at BWH (0.5 T)
4
Brain Deformation

Before surgery
After surgery
5
Overall Process
  • Before image guided neurosurgery
  • During image guided neurosurgery

6
Timing During Surgery

(Timeline figure, 0–40 min: preop segmentation before surgery; then,
during surgery: intraop MRI → segmentation → registration → surface
displacement → biomechanical simulation → visualization → surgical
progress)
7
Current Prototype DDDAS Inside Hospital
8
Two Research Aspects
  • Grid architecture: grid scheduling, on-demand
    remote access to multi-teraflop machines, data
    transfer/sharing
  • Development of a detailed, advanced, non-linear,
    scalable hyperelastic biomechanical model

9
Intra-op MRI with pre-op fMRI
10
Scheduling Experiment 1 on 2 TeraGrid Clusters
  • TeraGrid is an NSF-funded grid infrastructure
    across multiple research and academic sites
  • Queue delays at SDSC and NCSA TG were measured
    over 3 days for 5-min wall clock jobs on 2 to 64
    CPUs
  • A single job was submitted at a time
  • If a job didn't start within 10 mins, it was
    terminated and the next one processed
  • Question: what is the likelihood of a job running?
  • 313 jobs to the NCSA TG cluster and 332 to the SDSC
    TG cluster; 50 to 56 jobs of each size on each
    cluster
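The single-job probing loop described above can be sketched as follows; `probe_cluster` and the synthetic exponential delays are hypothetical stand-ins for real batch-queue submissions, not the actual measurement code:

```python
import random

PROBE_TIMEOUT = 10 * 60  # seconds: cancel a job that has not started in 10 min

def probe_cluster(queue_delay_s):
    """One probe job: it 'runs' only if the queue delay is under the
    10-minute cutoff; otherwise it is terminated."""
    if queue_delay_s <= PROBE_TIMEOUT:
        return True, queue_delay_s
    return False, PROBE_TIMEOUT

def run_experiment(delays):
    """Replay a list of observed queue delays and compute the fraction
    of jobs that start within the cutoff, as in the 3-day measurement."""
    started = [probe_cluster(d)[0] for d in delays]
    return sum(started) / len(started)

# Synthetic delays standing in for measured SDSC/NCSA queue waits.
random.seed(0)
delays = [random.expovariate(1 / 300) for _ in range(300)]  # mean 5 min
print(f"fraction started within 10 min: {run_experiment(delays):.2f}")
```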

11
TeraGrid Experiment Results

(Figures: % of submitted tasks that ran, as a function of CPUs
requested; average queue delay for tasks that began running
within 10 mins)
12
Scheduling Experiment 2 on 5 TeraGrid Clusters
  • The real-time constraint of this application
    requires that data transfer and simulation
    together take about 10 mins; otherwise the
    results are of no use to the surgeons
  • Assume simulation and data transfer (both ways)
    together take 10 mins and data transfer takes 4
    mins
  • This leaves 6 mins for the biomechanical simulation
    on remote HPC machines
  • Assume the biomechanical model is scalable, i.e.
    better results are achieved on a higher number of
    processors
  • Objective
  • Get the simulation done in 6 mins
  • Get the maximum number of processors available within
    6 mins
  • Allow 4 mins of waiting in the queue; this leaves 2
    mins for the actual simulation
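The budget arithmetic above can be written out explicitly; the function below is only an illustration of how the 10-minute window decomposes:

```python
def simulation_budget(total_s=600, transfer_s=240, queue_wait_s=240):
    """Split the 10-minute end-to-end budget: transfer (both ways)
    leaves the simulation window, and waiting in the queue further
    shrinks the compute time actually available."""
    sim_window = total_s - transfer_s    # 6 min for the remote simulation
    compute = sim_window - queue_wait_s  # 2 min of actual solve time
    return sim_window, compute

print(simulation_budget())  # → (360, 120)
```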

13
Experiment Characteristics
  • Flooding scheduler approach (experiment 1)
  • Simultaneously submit 8-, 16-, 32-, 64-, and 128-proc
    jobs to multiple clusters: SDSC DataStar, SDSC
    TG, NCSA TG, ANL TG, PSC TG
  • When a job of a given CPU count starts (at any
    center), kill all jobs of that or lower CPU count
    at all the other centers
  • Results: out of 1464 job submissions over 7
    days, only 6 failed, giving a success rate of 99.59%;
    128-CPU jobs ran more than 50% of the time, and at
    least 64-CPU jobs ran more than 80% of the time
  • The next slide gives time-varying behavior in 6-hour
    intervals for this experiment
  • 4 other experiments were performed by taking out
    some of the successful clusters, as well as taking
    scheduler cycle time into account on DataStar
  • As the number of clusters was reduced, the success
    rate went down
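The selection rule of the flooding scheduler can be sketched as picking, among the copies that start within the wait budget, the one with the highest CPU count; `offers`, the cluster names, and the delays below are hypothetical examples, not measured data:

```python
def best_allocation(offers, wait_limit=240):
    """Flooding-scheduler sketch: `offers` maps (cluster, cpus) to the
    queue delay in seconds before the job started (None = never started).
    Keep the highest-CPU-count job that starts within the wait limit
    (ties broken by earliest start); all other copies would be killed."""
    started = [(cpus, -delay, cluster)
               for (cluster, cpus), delay in offers.items()
               if delay is not None and delay <= wait_limit]
    if not started:
        return None
    cpus, neg_delay, cluster = max(started)
    return cluster, cpus, -neg_delay

offers = {
    ("SDSC-DataStar", 32): 90,
    ("NCSA-TG", 64): 200,
    ("PSC-TG", 128): 300,   # starts too late for the 4-min wait budget
    ("ANL-TG", 128): None,  # never starts
}
print(best_allocation(offers))  # → ('NCSA-TG', 64, 200)
```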

14
(Figure: time-varying success rate over 6-hour intervals)
15
Data Transfer
  • We are investigating grid-based data transfer
    mechanisms such as globus-url-copy and SRB
  • All hospitals have firewalls for security and
    patient data privacy; there is a single port of
    entry to internal machines

(Table: transfer time in seconds for a 20 MB file)
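Evaluating a transfer mechanism reduces to timing a run and computing the implied throughput; the wrapper below is a generic sketch (the 10-second figure is a hypothetical example, not a measured value):

```python
import time

def timed_transfer(transfer_fn, *args):
    """Wall-clock one transfer attempt; `transfer_fn` is whatever
    mechanism is under test (e.g. a wrapper around globus-url-copy
    or an SRB client call)."""
    t0 = time.perf_counter()
    transfer_fn(*args)
    return time.perf_counter() - t0

def effective_bandwidth_mbit_s(file_mb, seconds):
    """Throughput implied by transferring `file_mb` megabytes."""
    return file_mb * 8 / seconds

# Hypothetical example: the 20 MB file of the table moved in 10 s
print(effective_bandwidth_mbit_s(20, 10))  # → 16.0
```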
16
Mesh Model with Brain Segmentation
17
Current and New Biomechanical Models
  • Current: linear elastic material model (RTBM)
  • Advanced: biomechanical model FAMULS (AMR)
  • The advanced model is based on a conforming adaptive
    refinement method
  • Inspired by the theory of wavelets, this
    refinement produces globally compatible meshes by
    construction
  • Goal: replicate the linear elastic result produced by
    RTBM using FAMULS

18
FEM Mesh: FAMULS vs RTBM

(Figures: RTBM uniform mesh; FAMULS adaptively refined (AMR) mesh)
19
Deformation Simulation After Cut

(Figures: FAMULS without AMR; RTBM; FAMULS with 3-level AMR)
20
Advanced Biomechanical Model
  • The current solver is based on small-strain
    isotropic elasticity
  • New biomechanical model:
  • Inhomogeneous, scalable, non-linear kinematics with
    a hyperelastic model and AMR
  • Increase resolution close to the level of MRI
    voxels, i.e. millions of finite elements
  • The new high-resolution, complex model still has to
    meet the real-time constraint of neurosurgery
  • Requires fast access to remote multi-teraflop systems
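The small-strain isotropic elasticity that the current solver is based on reduces to Hooke's law for each element; a minimal sketch follows (the material constants are illustrative placeholders, not the values used in RTBM):

```python
def lame_parameters(E, nu):
    """Lamé constants from Young's modulus E and Poisson's ratio nu."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def isotropic_stress(strain, E, nu):
    """Small-strain isotropic Hooke's law:
    sigma = lam * tr(eps) * I + 2 * mu * eps,
    with `strain` a 3x3 symmetric tensor given as nested lists."""
    lam, mu = lame_parameters(E, nu)
    tr = sum(strain[i][i] for i in range(3))
    return [[lam * tr * (i == j) + 2.0 * mu * strain[i][j]
             for j in range(3)]
            for i in range(3)]

# Illustrative soft-tissue-like constants (hypothetical values)
E, nu = 3000.0, 0.45                         # Pa, dimensionless
eps = [[0.01, 0, 0], [0, 0, 0], [0, 0, 0]]   # 1% uniaxial stretch
sigma = isotropic_stress(eps, E, nu)
```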

21
Parallel Registration Performance
22
Parallel Rendering Performance
23
Parallel RTBM Performance
(43,584 mesh nodes, 214,035 tetrahedral elements)

(Chart: elapsed time in seconds vs. number of CPUs, 1–32, on
IBM Power3, IA-64 TeraGrid, and IBM Power4)
24
End-to-End (BWH → SDSC → BWH) Timing
  • RTBM: not during surgery
  • Rendering: during surgery

25

End-to-end Timing of RTBM
  • Timing of transferring a 20 MB file from BWH to
    SDSC, running the simulation on 16 nodes (32 procs),
    and transferring files back to BWH: 9 + (60 + 7) +
    50 ≈ 124 sec
  • Capable of providing biomechanical brain
    deformation simulation results (using the linear
    elastic model) to the surgery room at BWH within
    2 mins using TG machines at SDSC

26
End-to-end Timing of Rendering
DURING SURGERY
  • Intra-op MRI data sent from BWH to SDSC during a
    surgery; parallel rendering performed at SDSC;
    rendered visualization sent back to BWH (but not
    shown to surgeons)
  • Total time (for two sets of data): 148.4 sec
    (recorded step timings: 253, 2, 7.4, 0.2, 13.7)

27
Current and Future DDDAS Research
  • Continuing research and development in grid
    architecture, on-demand computing, and data transfer
  • Continuing development of the advanced biomechanical
    model and parallel algorithm
  • Future DDDAS: near-continuous 3-D MRI based, instead
    of once an hour
  • The scanner at BWH can provide one 2-D slice every 3
    sec, or three orthogonal 2-D slices every 6 sec
  • Near-continuous DDDAS architecture:
  • Requires major research, development, and
    implementation work in the biomechanical
    application domain
  • Requires research in the closed-loop system of
    dynamic image-driven continuous biomechanical
    simulation and 3-D volumetric FEM-results-based
    surgical navigation and steering
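For scale, the near-continuous acquisition rates quoted above work out as follows (a simple arithmetic sketch):

```python
def slices_per_hour(period_s, slices_per_acq=1):
    """Acquisition rate implied by the BWH scanner's
    near-continuous 2-D modes."""
    return 3600 // period_s * slices_per_acq

print(slices_per_hour(3))     # one 2-D slice every 3 s  → 1200 per hour
print(slices_per_hour(6, 3))  # three orthogonal slices every 6 s → 1800 per hour
```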