DETER Testbed Status - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
DETER Testbed Status
  • Kevin Lahey (ISI)
  • Anthony D. Joseph (UCB)
  • January 31, 2006

2
Current PC Hardware
  • ISI
  • 64 pc3000 (Dell 1850)
  • 11 pc2800 (Sun V65x)
  • 64 pc733 (IBM Netfinity 4500R)
  • UCB
  • 32 bpc3000 (Dell 1850)
  • 32 bpc3060 (Dell 1850)
  • 32 bpc2800 (Sun V60x)

Approx 1/3 of nodes are currently down for repair
or reserved for testing
3
Special Hardware
  • ISI
  • 4 Juniper M7i routers
  • 2 Juniper IDP-200 IDS
  • 1 Cloud Shield 2200
  • 2 McAfee IntruShield 2600
  • UCB Minibed
  • 8-32 HP DL360G2 Dual 1.4GHz/512KB PIII

4
Current Switches
  • ISI
  • 1 Cisco 6509 (336 GE ports)
  • 7 Nortel 5510-48T (48 GE ports each)
  • Gigabit Switch Interconnects
  • UCB
  • 1 Foundry FastIron 1500 (224 GE ports)
  • 10 Nortel 5510-48T (48 GE ports each)
  • Gigabit Switch Interconnects
  • UCB Minibed
  • 6 Nortel 5510-48T (48 GE ports each)

5
Current Configuration
[Diagram: current configuration. A 1Gb VPN joins the ISI and UCB clusters; the drawing shows the Cisco 6509 and Nortel 5510 switches with 1Gb links to the pc733s, pc2800s, and pc3000s, plus the Junipers.]
6
New Hardware for 2006
  • ISI
  • 64 Dell 1850, nearly identical to the previous pc3000s
  • Dual 3GHz Xeons with 2GB RAM, but with 2MB cache
    instead of 1MB, and 6 interfaces instead of 5
  • 32 IBM x330 (dual 1GHz Pentium IIIs with 1GB RAM)
  • UCB
  • 96 TBD nodes, depending on overhead recovery
  • Full Boss and Users nodes
  • 2 HP DL360 Dual 3.4GHz/2MB Cache Xeon, 800MHz
    FSB, 2GB RAM
  • HP Modular Smart Array 20s: 12 x 500GB SATA
    drives (6TB)
  • Combined
  • Nortel 5510-48T and 10Gb-capable Nortel 5530-24T
    switches

7
New ISI Configuration
[Diagram: new ISI configuration. The Cisco 6509 and Nortel 5510 switches connect the pc733s, pc1000s, pc2800s, and two banks of pc3000s over 1Gb links (10Gb later), with a 2 x 10Gb interconnect; the Junipers attach as well.]
8
DETER Clusters
  • UCB
  • ISI

9
Progress (1)
  • People
  • New ops guy (Kevin Lahey at ISI) getting up to
    speed
  • Reliability
  • Daily backups for users and boss, 1-time tarballs
    for all other nodes (a sketch follows this list)
  • More robust Nortel switch configuration
  • ISI or UCB users/boss machines can run either or
    both clusters
  • Security: panic switch to disconnect from the
    Internet
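
A minimal sketch of the per-node tarball backup mentioned above. The paths and the /backups destination are assumptions for illustration; the actual users/boss nightly backups are driven separately and are not shown here.

    # Illustrative only: paths and the destination directory are assumptions,
    # not the actual DETER backup configuration.
    import datetime
    import socket
    import tarfile

    BACKUP_PATHS = ["/etc", "/usr/local/etc", "/var/db"]   # assumed node state worth keeping
    DEST_TEMPLATE = "/backups/{host}-{date}.tar.gz"        # assumed backup area

    def snapshot(hostname):
        """Write a compressed tarball of BACKUP_PATHS and return its path."""
        dest = DEST_TEMPLATE.format(host=hostname,
                                    date=datetime.date.today().isoformat())
        with tarfile.open(dest, "w:gz") as tar:
            for path in BACKUP_PATHS:
                tar.add(path)
        return dest

    if __name__ == "__main__":
        print("wrote", snapshot(socket.gethostname()))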

10
Progress (2)
  • Emulab Software
  • Unified boot image for com1 and com2 machines
  • DNS servers, IP addresses in Database
  • Click image with polling
  • Incorporated state of Emulab as of about 9/30/05
  • Debugged at UCB, then installed at ISI
  • Firewall and experimental nodes must be resident
    on the same switch
  • Release/update procedure is still problematic;
    for discussion in the testbed breakout

11
In-Progress (1)
  • Reliability
  • Automating fail-over between clusters (DB
    mirroring / reconciliation scripts)
  • Security
  • Automatic disk wiping on a project/experiment
    basis
  • Automating leak testing for control/experiment
    networks (see the sketch after this list)
  • Performance
  • Redoing the way emulab-in-emulab handles the
    control net (saves 1 experimental node interface)
  • Improving the performance of the VPN/IPsec links
  • Supporting a local tftp/frisbee server at UCB
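
A minimal sketch of what an automated leak test could look like: a probe listener runs on a host that sits only on the control net, and an experiment node sends a probe toward it from its experiment-net address. Addresses, the port number, and the payload are hypothetical; this is not the actual DETER tooling.

    # Illustrative sketch: run "listen" on a control-net-only host, and
    # "probe <listener-control-ip> <experiment-src-ip>" on an experiment node.
    import socket
    import sys

    PROBE_PORT = 47000                  # arbitrary, assumed-unused UDP port
    PROBE_PAYLOAD = b"deter-leak-probe"

    def listen(timeout=10.0):
        """Wait on the control net; any probe that arrives is a leak."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PROBE_PORT))
        sock.settimeout(timeout)
        try:
            data, peer = sock.recvfrom(1024)
            if data == PROBE_PAYLOAD:
                print("LEAK: experiment-net probe arrived from", peer)
                return 1
        except socket.timeout:
            print("OK: no experiment-net traffic reached the control net")
        return 0

    def probe(listener_control_ip, experiment_src_ip):
        """Send the probe from the experiment-net interface toward the
        listener's control-net address."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((experiment_src_ip, 0))   # force the experiment-net source
        sock.sendto(PROBE_PAYLOAD, (listener_control_ip, PROBE_PORT))

    if __name__ == "__main__":
        if sys.argv[1] == "listen":
            sys.exit(listen())
        else:
            probe(sys.argv[2], sys.argv[3])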

12
In-Progress (2)
  • Federation
  • Supporting running federated experiments between
    separately administered Emulabs using
    emulab-in-emulab
  • Netgraph module to rewrite 802.1q tags as they
    pass through a VPN tunnel (similar to the
    Berkeley-ISI link); see the sketch after this list
  • Configuration
  • Incorporating EMIST setup/visualization tools
    into Dashboard
  • New Emulab Hardware Types
  • Supporting IBM BladeCenters (currently testing
    with 12x2 BC)
  • Routers as first-class objects
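
A user-space sketch of the tag-rewriting idea, assuming a made-up local-to-remote VLAN ID mapping; the real work would live in a FreeBSD netgraph kernel node, which is not shown here.

    # Illustrative only: rewrites the VLAN ID in an 802.1Q-tagged Ethernet
    # frame before it would enter the VPN tunnel.  VLAN_MAP is hypothetical.
    import struct

    TPID_8021Q = 0x8100
    VLAN_MAP = {110: 210, 111: 211}     # hypothetical VLAN ID translation

    def rewrite_vlan(frame):
        """Return the frame with its 802.1Q VLAN ID translated via VLAN_MAP."""
        if len(frame) < 18:
            return frame
        tpid = struct.unpack_from("!H", frame, 12)[0]   # EtherType/TPID field
        if tpid != TPID_8021Q:
            return frame                                # untagged: leave as-is
        tci = struct.unpack_from("!H", frame, 14)[0]
        pcp_dei = tci & 0xF000                          # keep priority/DEI bits
        vid = tci & 0x0FFF
        new_tci = pcp_dei | VLAN_MAP.get(vid, vid)
        return frame[:14] + struct.pack("!H", new_tci) + frame[16:]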

13
Network Topology
  • Open hypothesis: inter-switch links may be a
    bottleneck (see the estimate after this list)
  • Foundry/Cisco-Nortel and Nortel-Nortel
  • Adding multiple 10GE interconnects
  • Exploring alternate node interconnection
    topologies
  • Example: connecting each node to multiple
    switches
  • Potential issue: assign is a very complex program
  • There may be all sorts of gotchas lurking out
    there
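
A back-of-the-envelope sketch of the bottleneck estimate: compare the worst-case experiment bandwidth the nodes behind an edge switch can offer against the trunk capacity toward the core. The node counts, NIC counts, and uplink speeds below are illustrative assumptions, not measured DETER numbers.

    # Illustrative arithmetic only.
    def oversubscription(nodes, nics_per_node, nic_gbps, uplinks, uplink_gbps):
        """Ratio of worst-case offered load to inter-switch trunk capacity."""
        offered = nodes * nics_per_node * nic_gbps    # Gb/s the nodes can source
        capacity = uplinks * uplink_gbps              # Gb/s across the trunk
        return offered / float(capacity)

    # e.g. 48 nodes behind an edge switch, 4 experiment NICs each at 1 Gb/s,
    # trunked to the core over 4 x 1 GE:
    print(oversubscription(48, 4, 1, 4, 1))      # 48.0 -> heavily oversubscribed
    # the same edge switch with a single 10 GE uplink (e.g. a Nortel 5530):
    print(oversubscription(48, 4, 1, 1, 10))     # 19.2 -> still oversubscribed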

14
Other New Nodes on the Horizon
  • Secure64
  • NetFPGA2
  • pc3000s with 10 interfaces
  • Research Accelerator for MultiProcessing (RAMP)
  • 1,000 200-300 MHz FPGA-based CPUs
  • Some number of elements devoted to FSM traffic
    generators
  • Many 10GE I/O ports
  • 100K for an 8U box at 1.5 kW