1
Bricks & Blades: Considerations and Costs of Emerging Server Architectures
  • Rich Evans
  • Vice President, Technology Research Services
  • META Group

2
Business and Technology Scenario
  • Economic pressures force TCO-driven project choices rather than the value of repeatable services
  • ITOs' minimizing of end-to-end projects increases portfolio diversity and overall cost of service
  • Infrastructure consolidation while balancing rationalization and commoditization
  • Wintel/Lintel becoming more than "good enough" at all server tiers

Highlighting Project Risk
By 2005, infrastructure substitutability will
replace consolidation projects
3
Operational Scenario
Projected Data Center Budget Growth 2002-12
Data Center Challenges
  • Growth of data center budget is high
  • Growth for 2003: 7%-9%
  • Data center: 50%-75% of IT budget
  • Utilization of Unix/Windows servers is low
  • Complexity of managing increasing numbers of infrastructure components is high

[Chart: projected data center budget growth 2002-12, broken out by network, people, storage, software, and servers]
To prevent the data center from consuming the
entire IT budget, increased manageability and
utilization through standardization and
automation are essential
4
Server Technology Standardization: Intel Displaces RISC and MIPS
Data Center Capacity Growth 2002-12
10-Year Growth (% of capacity)
  • Data center net annual capacity grows by 40%/year through 2012 (down from 45%)
  • Windows surpasses proprietary Unix in 2007
  • Linux surpasses proprietary Unix by 2011
  • Intel dominates the data center by 2007

[Chart: 10-year data center capacity growth by platform (z/OS, Unix, Windows, Linux); growth multiples of 2.6x (3% of capacity), 13x (20%), 74x (51%), and 251x (26%)]
Driven by Intel economics, Windows and Linux
dominate the data center by 2007
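As a sanity check on these growth figures: 40%/year compounded for 10 years is roughly a 29x multiple, matching the chart's ~30x ceiling. A minimal sketch of the arithmetic (the rates are the slide's; the code itself is illustrative):

```python
# Compound annual capacity growth over a 10-year horizon.
def compound_growth(annual_rate: float, years: int) -> float:
    """Total growth multiple after compounding annual_rate for years."""
    return (1.0 + annual_rate) ** years

print(compound_growth(0.40, 10))  # ~28.9x, the slide's 40%/year trajectory
print(compound_growth(0.45, 10))  # ~41.1x, the prior 45%/year trajectory
```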
5
Storage Technology Standardization: Networked Storage Services
Leveraging the Storage Investment
  • Networked storage utilization improvements
  • Initially 25%
  • Additive 25% with robust management (2006)
  • Managed storage virtualization yields a 10x productivity gain (2006)
  • Standards-based heterogeneous virtualization lags point products through 2005

[Chart: storage mix 2002-12, share of capacity shifting from DAS to NAS and SAN]
Virtual storage-area network (VSAN)
infrastructure will equal virtual local-area
network (VLAN) by 2006
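To make the utilization claim concrete: at higher utilization, the same served demand needs less raw capacity. A minimal sketch, assuming the slide's improvements are percentage-point gains over a baseline (the baseline and demand figures are illustrative, not from the deck):

```python
# Raw storage needed to serve a fixed demand at a given utilization level.
def raw_capacity_needed(demand_tb: float, utilization: float) -> float:
    return demand_tb / utilization

demand = 100.0                   # TB actually used (illustrative)
for util in (0.30, 0.55, 0.80):  # baseline, +25 points, +25 more with management
    print(f"{util:.0%} utilization -> {raw_capacity_needed(demand, util):.0f} TB raw")
```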
6
The Data Center Journey
[Diagram: the journey over time from an unshared, application-centric, fixed-cost model, through intra-domain and then inter-domain infrastructure optimization, to a shared, service-centric, variable-cost, customer-flexible model]
  • Service delivery via platform silos (Win, Unix,
    MF)
  • Infrastructure deployment by platform
  • Optimization organized by application and silo
  • Skills by silo
  • Domain management, virtualization, and
    optimization by silo
  • Improved infrastructure efficiency and personnel
    productivity
  • Intra-domain policy management (e.g., network)
  • Improved end-to-end performance, availability,
    and time-to-market
  • Integrated services, simplified products
  • Inter-domain dynamic policy management
  • Utility-like, variable costs tied to QoS and
    utilization
  • Optimized for cost-effective, end-to-end services

From large in size and difficult to redistribute, repurpose, and optimize, to smaller and easier to optimize, reconfigure, and repurpose
Industry standards for resource provisioning,
usage, and management are essential to success
7
Critical Issues
  • Reducing costs with Wintel and Lintel
    commodity platforms
  • Extending scalability and availability utilizing
    blades in the data center
  • Measuring clustering scalability ROI

8
Reducing Costs With Wintel and Lintel
Commodity Platforms
Lay Down Your Chips
  • Intel: scaling server infrastructure tiers (2002-06)
  • The OS vs. risk tradeoff for 3-year server infrastructure project costs
  • The Linux effect: the change of OS cost?

Through 2003, processor performance is similar; server scaling with balanced I/O and memory becomes key
9
Key Server Platforms for Infrastructure Tiers
(2002-06)
Through 2006, Intel platforms will be capable of addressing 95% of all workloads
10
The OS vs. Risk Tradeoff For 3-Year Server
Infrastructure Project Costs
  • Ongoing costs in a project should be the main focus of ITOs
  • Focusing only on minimizing upfront costs often leads to increased ongoing costs and risk
  • Choosing Linux on upfront costs alone (saving the cost of the OS) is offset by increased support and integration costs and by project isolation or failure

The Costs of a Successful Project
[Chart: 3-year costs for Project A (Lintel), Project B (Wintel), and Project C (Enterprise Unix/RISC), split into upfront costs (OS, HW, SW) and recurring costs (support, integration, and risk)]
ITOs should be cautious of "free" components: costs are linked to software, integration, maintenance, and skills
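The point, that upfront savings can be swamped by recurring costs, is simple arithmetic. A minimal sketch with hypothetical figures (none of these dollar amounts come from the deck):

```python
# 3-year project cost: upfront (OS, HW, SW) plus recurring
# (support, integration, risk). All figures below are hypothetical.
def three_year_cost(upfront: float, annual_recurring: float, years: int = 3) -> float:
    return upfront + annual_recurring * years

projects = {
    "A: Lintel":          three_year_cost(upfront=40_000, annual_recurring=35_000),
    "B: Wintel":          three_year_cost(upfront=55_000, annual_recurring=25_000),
    "C: Enterprise Unix": three_year_cost(upfront=120_000, annual_recurring=20_000),
}
for name, cost in projects.items():
    print(f"{name}: ${cost:,.0f} over 3 years")
# With these numbers the "free OS" project A ends up costing more
# than B once support and integration are counted.
```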
11
The Linux Effect: The Change of OS Cost?
  • Selecting new OS platforms can save up to 50% versus the competition! However:
  • Reducing risk by increasing robustness is reflected in price increases
  • Repackaging legacy platforms helps them compete on two fronts
  • ITOs should look at overall cost, not just the OS or platform
  • Increasing Linux project costs will help mature support and integration costs; you pay for what you get

[Chart: price ($5K-$250K) vs. OS maturity for mainframe, Unix, Windows (DCS, AS), and Linux]
Through 2005, Lintel projects will increase in cost (adding value) to within 20% of Wintel platforms
12
Reduce Costs With Wintel and Lintel Commodity Platforms
  • Bottom Line
  • Through 2005, Intel platforms will be capable of addressing 95% of all workloads
  • Standardizing on server platforms reduces support and integration costs
  • ITOs should be cautious of "free" components: costs are linked to software, integration, maintenance, and skills
  • Focus on the costs of successful projects, minimizing risk
  • Through 2005, Lintel projects will increase in cost (adding value) to within 20% of Wintel platforms
  • Any OS saving will be largely eradicated by increases in ongoing costs

Business Impact: Overconsolidating platforms will negatively affect the ROI
13
Extending Scalability and Availability Utilizing
Blades in the Data Center
A True Reflection of Consolidation Costs
  • Understanding server bricks and blades
  • Defining the new form factors
  • Leveraging blade evolution through 2005: modular computing
  • Implications of commodity building blocks for high-end architectures
  • Connecting the modules, from buses to fabrics (2003-05): interconnects

[Diagram: n-tier architecture: browsers, Web server farm, firewall, app server farm, DBMS]
14
Understanding Server Bricks and Blades
  • Defining the new form factors
  • Blade: contains processor, memory, and I/O
  • Brick: contains 4 processors, memory, and I/O
  • Rack: contains blades, bricks, switches, and power and cooling (environmentals)
  • OS scaling is contained within a blade or a 4-processor brick (similar to an SHV building block)
  • Standardizing on form factors will reduce today's proprietary blade footprints, led by Intel and white-box suppliers

[Diagram: blade, brick, and rack form factors, with OS scaling contained within the blade or brick]
Through 2005, form factors will standardize between OEMs (led by Intel); reuse of existing blades will be limited
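The definitions above form a simple containment hierarchy; a minimal sketch modeling it (the field choices and figures are mine, not the deck's):

```python
# Form-factor hierarchy from the slide: a rack holds blades, bricks,
# and switches plus power and cooling; OS scaling stays inside a
# blade or a 4-processor brick.
from dataclasses import dataclass, field

@dataclass
class Blade:
    processors: int   # blade: processor(s), memory, and I/O
    memory_gb: int
    io_ports: int

@dataclass
class Brick(Blade):
    pass              # brick: the 4-processor (SHV-like) building block

@dataclass
class Rack:
    modules: list = field(default_factory=list)  # blades and bricks
    switches: int = 0
    # power and cooling ("environmentals") are rack-level concerns

rack = Rack(modules=[Blade(1, 4, 2), Brick(4, 16, 4)], switches=2)
print(len(rack.modules), "modules in the rack")
```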
15
Leveraging Blade Evolution Through 2005: Modular Computing
  • Moving to modular computing: IBM's x440 leads the way (see the diagram below)
  • Server modules are polarizing to 2 or 4 processors; the speed of the electron limits SMP bus length
  • Scaling modules through either proprietary switches/interconnects or InfiniBand server-to-server fabrics

[Diagram: IBM x440 example: 8- and 16-processor layouts built from 4-processor quads (SMP bus, cache, memory, I/O) joined by interconnects]
High-end server designs benefit from brick and blade volumes; leveraging modularity improves service and operations
16
Connecting the Modules: From Buses to Fabrics (2003-05)
[Diagram: two server modules, each with CPUs, memory, and chipset, bridged over PCI Express to GbE, InfiniBand (IBA), PCI-X, and PCI]
InfiniBand switched fabric (clusters & memory)
FC switched fabric (storage & SAN)
IP switched fabric (network: LAN, MAN, WAN)
By 2005, networks touch all components. Sun nearly had it right: the computer is a network
17
Still Emerging Initiatives for Grid Services
  • 2001 revolutionary initiatives:
  • Globus: Grid Computing
  • IBM: Project Eliza
  • Sun: Net Effect
  • HP: Always On Infrastructure
  • Intel: Macroprocessing
  • MIT: Project Oxygen
  • DoD: Revolution in Military Affairs (RMA)
  • 2002 update:
  • Globus: Open Grid Services Arch.
  • IBM: e-Business On Demand
  • Sun: N1
  • HP: Adaptive Infrastructure
  • Intel
  • Veritas: Global Operations Mgmt.

Grid Resource Sharing
Grid for COTS business applications is still a
vision thing
18
Extend Scalability and Availability Utilizing
Blades in the Data Center
  • Bottom Line
  • Through 2005, form factors will standardize between OEMs; reuse of existing blades will be limited
  • Initial blade deployments will be tactical, to overcome density issues
  • High-end server designs benefit from brick and blade volumes; leveraging modularity improves service and operations
  • Repeatable services will fix costs between projects
  • By 2005, networks touch all components. Sun nearly had it right: the computer is a network
  • The network moves next to the processor

Business Impact: The move from buses to networks is unavoidable; be cautious of vendor lock-in with new technologies and form factors
19
Measuring Clustering Scalability ROI
Clustering Infrastructure Services
  • Investing in scale-out infrastructure
  • Building dynamic scale-out clusters: the scaling-on-demand game plan
  • Measuring scalability of DBMS scale-out clusters

[Diagram: individual components and platforms composing, via patterns, into SAN, DBMS, and scale-out services]
As infrastructure components build into services,
rationalization will become the main IT driver
(not consolidation) through 2006
20
Investing in Scale-Out Infrastructure
Investing in Infrastructure
  • Scale-out DBMS is not a quick fix
  • Clustering needs people (skills), process, and technology
  • Hybrid scaling (4x4 or 4x8) is the key infrastructure enabler of scale-out success
  • Modular server bricks are emerging HW platforms for scale-out DBMS clusters

[Diagram: people, process, and technology elements interlocking to support clustering]
Guaranteed performance of hybrid clusters will
mature through 2003
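Reading "4x4 or 4x8" as 4-processor nodes clustered four or eight wide (my reading of the deck's shorthand), hybrid scaling multiplies SMP scaling inside a node by cluster scaling across nodes. A minimal sketch; the efficiency figures are illustrative:

```python
# Hybrid scaling: scale up within an SMP node, then scale out across nodes.
def smp_capacity(cpus: int, per_cpu_eff: float = 0.90) -> float:
    """One node: first CPU at full throughput, each extra CPU adds a fraction."""
    return 1.0 + (cpus - 1) * per_cpu_eff

def hybrid_capacity(nodes: int, cpus_per_node: int,
                    per_node_eff: float = 0.70) -> float:
    """First node at full node throughput, each extra node adds a fraction."""
    return smp_capacity(cpus_per_node) * (1.0 + (nodes - 1) * per_node_eff)

print(hybrid_capacity(4, 4))  # a "4x4" hybrid: ~11.5 single-CPU equivalents
print(hybrid_capacity(8, 4))  # a "4x8" hybrid: ~21.8 single-CPU equivalents
```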
21
Building Dynamic Scale-Out Clusters: The Scaling-on-Demand Game Plan
Hardware & Database Scale-Out
  • Clustering maturity follows consolidation practices
  • Has to be on a SAN
  • Scale-out is doable from 2003-05, but:
  • DSS scale-out: Oracle, IBM, Microsoft, and Teradata
  • OLTP scale-out: Oracle RAC
  • True load balancing/self-healing is not mature until 2006/07
  • Software licensing models need to address flexibility

[Chart: complexity vs. efficiency, progressing from high-density racks (introduced 2002) and basic cluster scaling (2002/03), through standard 19-inch racks and scale-out (2004/05), to scaling on demand (2006/07)]
Through 2005, optimized off-the-shelf components will offer good-enough scale-out over proprietary clusters in 90% of DBMS installations
22
Measuring Scalability of DBMS Scale-Out Clusters
Database Clusters: Scalability per Node
  • Basic clusters (e.g., Win2000 clusters): best used for HA; not recommended for scale-out per node
  • Bundled clusters (e.g., Unix cluster over IP): again, best used for HA and N+1 failover; 55%-65% per node
  • Optimized clusters (e.g., Veritas with Oracle): acceptable and improving performance for HA and scale-out; 65%-75% per node
  • Proprietary clusters (e.g., HP TruCluster with Oracle RAC): for optimum HA and scale-out; 75%-85% per node
  • SMP scaling: for pure scalability; 85%-95% per processor

[Chart: scalability per node by cluster type: basic, bundled (55%-65%), optimized (65%-75%), proprietary (75%-85%), and SMP single image (85%-99%); max nodes from 2 to 8, extended by virtualization]
Investing in RDBMS, operations, storage, and
interconnects will ease scale-out DBMS clusters
through 2003-05
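The per-node figures above can be checked empirically: measure cluster throughput before and after adding a node and express the increment as a fraction of a standalone node's throughput. A minimal sketch of that measurement; the throughput numbers are illustrative:

```python
# Per-node scalability: throughput gained by each added node, as a
# fraction of one node's standalone throughput.
def per_node_scalability(throughputs: list[float], standalone: float) -> list[float]:
    """throughputs[i] is the measured cluster throughput with i+1 nodes."""
    return [(b - a) / standalone for a, b in zip(throughputs, throughputs[1:])]

# Illustrative measurements (e.g., transactions/min) for 1..4 nodes.
measured = [1000.0, 1720.0, 2400.0, 3050.0]
print(per_node_scalability(measured, standalone=1000.0))
# -> [0.72, 0.68, 0.65]: 65%-72% per node, i.e., the "optimized" band
```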
23
Measure Clustering Scalability ROI
  • Bottom Line
  • Guaranteed performance of hybrid clusters will mature through 2003
  • Look initially at 4x4 and 4x8 deployments
  • Through 2005, optimized off-the-shelf components will offer good-enough scale-out over proprietary clusters in 90% of DBMS installations
  • Measure the return on scaling investment of off-the-shelf and proprietary solutions
  • Investing in RDBMS, operations, storage, and interconnects will ease scale-out DBMS clusters through 2003-05
  • Don't pioneer the space; follow up production references

24
N-Tier Server Infrastructure Transitioning and
Futures
  • Transformation Steps
  • Reducing costs with Wintel and Lintel
    commodity platforms
  • Intel exists in all data centers, and management
    is key
  • Extending scalability and availability utilizing
    blades in the data center
  • New server fabrics will enable vendors to deliver on scale-out DBMS promises
  • Measuring clustering scalability ROI
  • Look for the incremental increase in performance
    when you add a node

25
Bricks & Blades: Considerations and Costs of Emerging Server Architectures
  • Rich Evans
  • Vice President, Technology Research Services
  • META Group