Title: VMware 101
1. VMware 101
- Ken Stewart
- Director of Technical Services
- InterTech
2. Disclaimer
This session may contain product features that are currently under development. This session/overview of the new technology represents no commitment from VMware to deliver these features in any generally available product. Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. Technical feasibility and market demand will affect final delivery. Pricing and packaging for any new technologies or features discussed or presented have not been determined. These features are representative of feature areas under development.
3. Who am I?
- Ken Stewart, Director of Technical Services, InterTech
- VMware Certified Professional: Capacity Planner, Site Recovery Manager, Virtual Infrastructure
- HP Master Accredited Systems Engineer: SAN Architect, BladeSystems, ProLiant Servers, ProCurve Networking
- IBM Certified Specialist: BladeCenter, System x High Performance Servers, Storage Systems
- Cisco Certified Voice Professional
4. InterTech Services
- VMware Enterprise VIP Partner
- Cisco Premier Partner: Voice and Wireless Specialized
- Dell Premier Partner: PowerEdge Servers, EqualLogic Storage
- HP Elite Partner: ProLiant Servers, BladeSystems, ProCurve, MSA/EVA SANs
- IBM Premier Partner: xSeries Servers, BladeCenters, Storage
- Microsoft Gold Partner
- Network Services: Infrastructure, Security, VoIP, Wireless
- Managed Service Provider: 7x24 NOC, Help Desk, Hosting
- Authorized Warranty Service Provider: Dell, HP, IBM, Lenovo, Xerox
- Licensed Low Voltage Contractor
5. Session Agenda
- Selecting a Virtualization Platform
- VMware ESXi Overview
- Virtual Infrastructure Design Considerations
- ESX Server hosts
- Networking
- Storage
- Software Configuration
- VirtualCenter and Collaborative Services
- Virtual Machine Protection
6. Selecting a Virtualization Platform
7. Framework for virtualization evaluation
Five Core, Required Elements
Reliable, Comprehensive, Cost-effective Solutions
8. Customers Count on VMware ESX Reliability
Most Reliable Foundation
VMware ESX: #1 in Reliability
Large pharmaceutical customer: over four years of VMware ESX uptime!
Companies Trust Their Production Servers to Run on VMware
9. Architectures Matter
Most Reliable Foundation
VMware Architecture:
- True thin hypervisor (32MB)
- No general-purpose OS
- Direct driver model: I/O scaling
- Drivers optimized for VMs, with special treatment
Microsoft Hyper-V / Xen Architecture:
- 2-10GB footprint
- General-purpose management OS
- Indirect driver model
- Generic drivers in mgmt partition
- I/O bottlenecks
10. Size Does Matter
Most Reliable Foundation
VMware ESXi: 32 MB
11. Risk from Generic Windows Drivers
Most Reliable Foundation
In a nutshell, one of Hyper-V's advertised
strengths -- the host partition's ability to work
with generic Windows device drivers -- is also
its greatest weakness. That's because the quality
level of Windows device drivers, especially those
from third-party developers, is notoriously
inconsistent.
12. Generic Windows Drivers: Root Cause of 70% of Crashes
Most Reliable Foundation
- Slide from TechEd 2006, Mark Russinovich
13. Virtualization is More than Just Running VMs
True Dynamic IT Services
Hypervisor
14. Live Migration is a Critical Service for Dynamic IT
True Dynamic IT Services
Suspend/Resume Migration Doesn't Cut It for Dynamic IT: Network connections break! Users are affected!
15. File Systems Matter
True Dynamic IT Services
- Built-In VMFS Cluster File System
- Simplifies VM provisioning
- Enables independent VMotion and HA restart of VMs in a common LUN
- File-level locking protects virtual disks
- Separates VM and storage administration
- Use RDMs for access to SAN features
VMFS Datastore
16. File Systems Matter
True Dynamic IT Services
- Cluster Services an inadequate substitute
- All VMs on a LUN migrate/failover together
- Must provision one VM per LUN for VM independence
- Storage administration burden as VM count grows
Many VMs = storage management nightmare
All VMs on a LUN must move together
Requires one LUN/VM for independent mobility
17. MSFT Claim: Better Management / Reality: Incomplete Solution
Complete Virtualization Management

Basic offering (covered by both VMware vCenter and competitors):
- Basic VM management
- Basic patch management
- Performance monitoring
- Backup
- Manage physical servers (vCenter integrates w/ IBM, HP, CA, BMC)

Included in vCenter, but additional required components for competitors:
- Zero-app-downtime maintenance
- Dynamic load balancing
- Zero-app-downtime offline VM patching
- Self-service provisioning, image library mgmt of multi-tier environments
- VM lifecycle mgmt with track-and-control
- Staging of multi-tier environments for production deployment
- BC/DR workflow automation
18. Management Ecosystem for VMware Infrastructure
Complete Virtualization Management
Dozens of Management Partners, including:
- Open VMware interfaces and developer resources support deep management tool integrations
- VI SDK API, VI Toolkits, Remote CLI, SNMP, CIM APIs, OVF, VMI, VMDK, VDDK, Community Source Program, Guest SDK, VMCI SDK
- Use the best-of-breed management tools of your choice
19. Single Platform to Support the Entire IT Infrastructure
Complete IT Infrastructure Support
VMware Infrastructure: one platform for Windows VMs, Linux VMs, Oracle Apps/DB, and Citrix Presentation Server
Multiple silos: MSFT Hyper-V (Windows VMs), Oracle VM (Oracle Apps/DB), Citrix XenServer (Presentation Server), Xen (Linux VMs)
Clear example of Windows bias: 4-way vSMP only for Win2008 guests
Do you want one solution for the entire infrastructure? Or four?
20. Most Comprehensive OS Support
Complete IT Infrastructure Support
VMware Runs the Widest Selection of Operating Systems You Depend On
Source: Virtualization Licensing and Support Lethargy: Curing the Disease That Stalls Virtualization Adoption, Burton Group, Jan 2008
21. Most Comprehensive Application Support
Complete IT Infrastructure Support
VMware Runs the Widest Selection of Apps You Depend On
Source: Virtualization Licensing and Support Lethargy: Curing the Disease That Stalls Virtualization Adoption, Burton Group, Jan 2008
22. VMware ESX 3.5 Guest OS Support
Complete IT Infrastructure Support
RHEL5, RHEL4, RHEL3, RHEL2.1, SLES10, SLES9, SLES8, Ubuntu 7.04, Windows NT 4.0, Windows 2000, Windows Server 2003, Windows Server 2008, Windows Vista, Windows XP, Solaris 10 for x86, NetWare 6.5, NetWare 6.0, NetWare 5.1
23. MS Hyper-V Guest OS Support
Complete IT Infrastructure Support
Win Server 2008 (up to 4P vSMP), Win Server 2003 SP2 (up to 2P vSMP), Win Server 2000 SP4 (1P only), SLES10 (1P only), Windows Vista SP1, Windows XP Pro SP2/SP3
http://www.microsoft.com/windowsserver2008/en/us/hyperv-supported-guest-os.aspx
24. Proven Solution, Unrivaled Customer Success
Most Proven, Trusted Platform
- 120,000 VMware customers
- 100% of the Fortune 100
- 92% of the Fortune 1000
- 85% use VMware in production
- 54% use VMware as the default application platform
- 59% use live migration in production
The World's Most Successful Companies Run VMware (hundreds of customer stories on www.vmware.com)
25. Lowest Cost per VM

                                  VI3 Foundation | VI3 Enterprise | MS Hyper-V | Citrix XenServer Ent. | Other free Xen-based
Hardware (2P server, 16GB RAM)            $7,000 |         $7,000 |     $7,000 |                $7,000 |              $7,000
Guest OS (2P Win Server 2008
  Datacenter Ed., without Hyper-V)        $5,942 |         $5,942 |     $5,942 |                $5,942 |              $5,942
2P virtualization license                   $995 |         $5,750 |         $0 |                $2,600 |                  $0
Subtotal                                 $13,937 |        $18,692 |    $12,942 |               $15,542 |             $12,942
Total VMs (2GB each) [1]                      16 |             16 |          8 |                     8 |                   8
Price per VM                                $871 |         $1,168 |     $1,618 |                $1,943 |              $1,618

[1] Assumes a 2:1 memory overcommit ratio
26. VI3 Enterprise: Deploying Dynamic, On-demand Datacenters
Cost to deploy 1,000 VMs: necessary add-ons make others more expensive, yet they still don't match VI3 functionality.

VMware VI3 Enterprise: $1,677,092 total (with 2 yrs support)
- $502,500: 67 servers (16GB RAM each)
- $602,799: 67 Windows Server Datacenter Ed. licenses
- $564,475: 67 VI3 Enterprise licenses
- $7,318: vCenter

Other hypervisors: $1,722,748 total (with 2 yrs support)
- $757,100: 67 servers (32GB RAM each; memory overcommit not available, more RAM required)
- $602,799: 67 Windows Server Datacenter Ed. licenses
- $174,200: hypervisor licenses
- $88,149: management server/agent licenses
- $100,500: cluster FS/HA add-ons

Feature comparison:
- Basic single-server partitioning: both (others are 1st-generation hypervisors)
- High availability (failover of individual VMs): built into VI3; others require a cluster upgrade/3rd-party add-on
- Memory overcommit (higher VM density per host): VI3 only; not available elsewhere
- Ultra-thin virtualization footprint (better reliability, security): VI3; others run a full OS in the mgmt partition (2GB)
- Patching of offline VMs: VI3 only
- Clustered FS (enables VM mobility independent of LUN mapping): built into VI3; others require a 3rd-party add-on
- Live VM migration: VI3; some others suspend VMs when moved
- Live VM storage migration: VI3 only
- Zero-VM-downtime host patching: VI3 only
- Dynamic load balancing: VI3 only
- Complete virtual infrastructure management: VI3 Enterprise + VCMS; others require extra-cost mgmt agents and servers

Host: 2P quad-core, 16 or 32 GB physical RAM (1 GB physical RAM allocated for virtualization software per host). Each VM provisioned with 2.0 GB RAM. VMware solution uses memory overcommit technology at a 2:1 ratio.
27. VMware: The Best Platform for Your Applications
The VMware Advantage
Reliable, Comprehensive, Cost-effective Solutions
28. VMware ESXi Overview
29. VMware Technology Overview
New Model: Virtualization Technology
- Separation of OS and hardware
- Encapsulation of OS and application into VMs
- Isolation
- Hardware independence
- Flexibility
VMware ESXi
30. VMware ESXi Overview
Next generation of VMware's market-leading ESX hypervisor
- Partitions a server into virtual machines
- Reduces hardware, power, and cooling with the performance and features of ESX
- Plug-and-Play
- Minimal configuration; run VMs in minutes
- OS-independent, thin architecture
- Unparalleled security and reliability
- Full-featured
- Superior consolidation and scalability
- Easy to manage with remote tools
- Simple license upgrade to VI3 Enterprise
Virtual Machines
VMware ESXi
31. Installing VMware ESXi
- VMware ESXi Embedded
- Installed via SD flash or USB key internal to the server
- Distributed with a new server
- No installation -- just turn it on!
- VMware ESXi Installable
- Load installer via CD or ISO image
- Simple 2-step procedure:
- Accept EULA
- Select local drive for installation
32. VMware ESXi vs. VMware Infrastructure
- VMware Infrastructure
- Pools of computing resources
- Built-in automation, availability, and manageability
- Three bundles, all inclusive of ESXi, starting at $995
- VMware ESXi
- Single server partitioning
- Production-class hypervisor
- Advanced server resource management
- FREE
VMware Infrastructure:
Centralized Management
Dynamic Resource Scheduling and Power Mgmt
High Availability and Consolidated Backup
VMotion and Storage VMotion
VMware ESXi
The hypervisor is to Virtual Infrastructure what the engine is to a car, or the BIOS to a PC: an enabling component but not the whole solution.
33. Virtual Infrastructure Design Considerations
34. Typical VMware Infrastructure Deployments
- VI Enterprise
- VMotion, Storage VMotion
- Resource pooling
- High availability
- VI Foundation
- Central management
- Patch management
- Consolidated Backup
VirtualCenter Server
35. VI3 Foundation: Additional Features
- Additional Management Features
- Virtual Machine Templates
- Create golden image for rapid, standardized deployment
- Virtual Machine Cloning
- Create exact copy of a virtual machine for testing, debugging, etc.
- Alarms and Alerts
- Get notified of resource shortages and other issues
- Cold Migration of virtual machines between ESX hosts
- Enables flexibility for hardware maintenance, etc.
- Fine-grained roles and permissions
- Allows for delegated administration
- Active Directory based authentication
- Unified with existing user directory
36. Virtual Infrastructure: VMware Product Portfolio

Desktop: Apps & Infrastructure Mgmt
- ACE Management Server
- Application Performance Mgmt
- Consolidated Backup
- Converter
- Site Recovery Manager for Virtual Desktop Infrastructure
- ThinApp
- Update Manager
- Virtual Desktop Manager
- VirtualCenter
- Workstation

Datacenter: Apps & Infrastructure Mgmt
- Application Performance Mgmt
- Capacity Planner
- Consolidated Backup
- Converter
- Lab Manager
- Lifecycle Manager
- Site Recovery Manager
- Stage Manager
- Update Manager
- VI Toolkit
- VirtualCenter

Desktop Infrastructure:
- Virtual Desktop Infrastructure
- Workstation

Datacenter Infrastructure:
- Distributed Power Management
- Distributed Resource Scheduler
- High Availability
- Storage VMotion
- Virtual Machine File System
- VMotion

ESX/ESXi Hypervisor
Physical Infrastructure
37. Hardware Needed
- Server
- CPU
- Minimum: single socket, dual core
- Ideal: dual socket, 4 cores per CPU
- Memory
- Minimum: 1GB
- Ideal: 8GB
- Network
- Minimum: one NIC, plus one for the management interface
- Ideal: one for the management interface plus multiple NICs for VMs
- Storage
- Local Storage (SATA/SAS)
- Minimum: one 80GB drive
- Ideal: 2 mirrored drives (only for ESXi Installable) plus 4 RAID5 drives for VMs
- Shared Storage
- NFS, iSCSI, Fibre Channel for VM storage
- ESXi Installable requires local disk for the hypervisor
38. ESX Server Hardware Compatibility
- VMware Certified Compatibility Guides (VCCGs): http://www.vmware.com/resources/techresources/cat/119
- Guides for systems (servers), storage/SANs, I/O devices (HBAs, SCSI adapters), and backup software
- Ensure all hardware for production environments is listed in the VCCGs!
- Test/development environments often built with white box systems and components
- Community supported list (not officially supported by VMware): http://www.vmware.com/resources/communitysupport/
39. ESX Server Hardware Configuration - CPUs
- ESX schedules CPU cycles for processing requests from virtual machines and the Service Console
- The greater the number of available CPU targets, the better ESX manages the scheduling (8 cores optimal)
- Hyperthreading does not give the benefit of multi-core processors; recommend disabling hyperthreading
- Intel VT and AMD-V with EM64T-capable processors allow for running 32-bit and 64-bit VMs (see the check below)
- Keep the same vendor, family, and generation of processors throughout the environment to ensure VMotion compatibility
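Where hardware support is in question, the relevant CPU flags can be inspected directly from the Service Console (a standard Linux shell). A minimal sketch; output formatting varies slightly by processor and ESX release, and a flag shown here can still be disabled in the BIOS:

    grep vendor_id /proc/cpuinfo | sort -u     # GenuineIntel or AuthenticAMD
    grep "model name" /proc/cpuinfo | sort -u  # exact processor model, for VMotion compatibility checks
    egrep -c "vmx|svm" /proc/cpuinfo           # non-zero suggests Intel VT (vmx) or AMD-V (svm) is exposed
    grep -cw lm /proc/cpuinfo                  # the "lm" (long mode) flag indicates EM64T/64-bit capability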
40. ESX Server Hardware Configuration - RAM
- RAM is most often maxed out before CPU resources
- Potential to overcommit host RAM due to:
- Host swap file (avoid using for best performance)
- Transparent Page Sharing
- Beware of server-specific memory configuration requirements
- DIMM sizes, bank pairing, parity, upgrade considerations (mix and match or forklift replacement)
- Purchase the largest amount possible, and the largest DIMM size possible (especially if not filling all banks)
41. Networking
- Basic Virtual Infrastructure network component
connectivity
Port Group
(Management virtual machine)
Port Group
(VMotion, iSCSI, NFS)
Port Group
(VM connectivity)
42. Networking: Virtual Switches and Port Groups
- Minimum of 1 vSwitch required; minimum of 3 recommended
- vSwitches can host all three types of port groups (Service Console, VMkernel, VM)
- Recommended to place Service Console, VMkernel, and VM port groups on their own vSwitches (see the sketch below)
- VLANs require separate port groups per VLAN
- Networking configuration must match between VMware ESX hosts for VMotion and DRS to function (including Network Label names!)
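The layout above can also be scripted from the Service Console with the esxcfg utilities. A minimal sketch; the vSwitch name, port group name, NIC names (vmnic1/vmnic2), VLAN ID, and VMkernel IP are example values, not prescriptions, and enabling VMotion on the VMkernel interface is still done in the VI Client:

    esxcfg-vswitch -a vSwitch1                            # create a vSwitch
    esxcfg-vswitch -L vmnic1 vSwitch1                     # link a first pNIC
    esxcfg-vswitch -L vmnic2 vSwitch1                     # second pNIC for redundancy
    esxcfg-vswitch -A "VMkernel-VMotion" vSwitch1         # add a port group
    esxcfg-vswitch -v 105 -p "VMkernel-VMotion" vSwitch1  # optional: tag the port group with VLAN 105
    esxcfg-vmknic -a -i 10.0.105.11 -n 255.255.255.0 "VMkernel-VMotion"  # VMkernel NIC for VMotion traffic
    esxcfg-vswitch -l                                     # verify the vSwitch/port group layout

Remember that the port group names (Network Labels) must match exactly on every host in the cluster.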
43. Networking: Essential Components
- ESX Servers, vSwitches, physical NICs (pNICs)
- Each vSwitch should have at least 2 pNICs assigned to it for fault tolerance
- Number of pNICs per VMware ESX host depends on the number of vSwitches
- If the 3 port group types (SC, VMkernel, VM) are on different vSwitches, at least 6 pNICs are recommended
- vSwitches with VM port groups gain load distribution benefits when assigned multiple pNICs
44. Networking: Physical Infrastructure Design
- pNICs and pSwitches
- pNICs in the same vSwitch should be connected to different pSwitches
- Connect the pNICs for all VMotion-enabled VMkernel port groups on all VMware ESX hosts in a cluster to the same set of pSwitches (while still keeping the above rule)
45. Storage
- Local Storage vs. Shared Storage
- Fibre Channel (FC)
- iSCSI
- NAS/NFS
46. Storage
- Shared storage between VMware ESX hosts is required for collaborative features (VMotion, DRS, HA)
- Fibre Channel (FC)
- Block-level storage
- 1/2/4/8Gb throughput speeds (8Gb with ESX 3.5 Update 2)
- iSCSI
- Block-level storage
- 1/10Gb throughput speeds (10Gb with ESX 3.5 Update 2)
- NAS/NFS
- File-level storage
- 1/10Gb throughput speeds (10Gb with ESX 3.5 Update 2)
47. Storage: Platform Considerations
- Which type of storage to use?
- Fibre Channel
- Pros: fast, enterprise-proven
- Cons: expensive, requires separate infrastructure
- iSCSI
- Pros: inexpensive, leverages existing infrastructure, fast
- Cons: sometimes slower than FC (depending on infrastructure)
- NAS/NFS
- Pros: inexpensive, leverages existing infrastructure
- Cons: slower than FC and iSCSI, no RDMs
48. Storage: Platform Considerations
- Why choose only one?
- Tiered storage: placing VMs on different storage based on defined characteristics (workload, criticality, etc.)
- SANs (FC and iSCSI): more expensive, higher performing, more reliable
- High I/O: database, email, application server VMs
- Critical: directory services, content management/repository VMs
- NAS/NFS: less expensive, lower performing, less reliable
- Low I/O: static web server, licensing server, virtual desktop VMs (depending on workload)
- Non-critical: development, test, sandbox VMs
49. Storage: Platform Considerations
- Fibre Channel (FC)
- ESX Server HBAs, fibre switch ports, and SAN controller ports should all be at the same and highest speed possible (4/8Gb)
- Ensure zoning on fibre switches includes all VMware ESX hosts to be included for VMotion, DRS, and HA
- Avoid daisy chaining of fibre switches or other single points of failure in the fabric design
- When installing a new VMware ESX host with fibre HBAs, disconnect them from the fabric until after the install is complete
50. Storage: Platform Considerations
- iSCSI and NAS/NFS
- Separate, dedicated Ethernet switches recommended; may also use a dedicated VLAN (not the native VLAN!)
- Configure multiple network connections on the SAN/NAS to prevent network oversubscription (VLANs may still oversubscribe)
- Configure jumbo frame and flow control support
- If using the software iSCSI initiator included with VMware ESX, 1Gb pNICs are required (set to full duplex or auto-negotiate); see the sketch below
- Hardware initiators (iSCSI HBAs) generally outperform software initiators (greater host CPU utilization with the software initiator)
- When installing a new VMware ESX host using iSCSI HBAs, disconnect them from the network until after the install is complete
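Enabling the software iSCSI initiator from the Service Console might look like the following minimal sketch; the firewall service name swISCSIClient and the vmhba32 adapter number are assumptions that vary by release, so confirm them on your host:

    esxcfg-firewall -e swISCSIClient   # open the outgoing iSCSI port (TCP 3260)
    esxcfg-swiscsi -e                  # enable the software iSCSI initiator
    esxcfg-swiscsi -q                  # confirm it is enabled
    # Target discovery and CHAP settings are then configured in the VI Client
    # (Storage Adapters > iSCSI Software Adapter > Properties); afterwards:
    esxcfg-rescan vmhba32              # rescan the sw iSCSI adapter to pick up LUNs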
51. Storage: General Considerations
- Storage performance needs good throughput and I/O
- Disk types
- SCSI/SAS 10K or 15K RPM vs. SATA 7200 RPM
- RAID levels
- Most common: RAID-10, RAID-50, RAID-5
- Trade-off: performance vs. usable space
- Combinations of disk type and RAID level matter
- SATA disks in RAID-10 often outperform SAS disks in RAID-5
- Array-specific (check with the vendor)
- Read/write caching on controllers/processors
52. Storage: General Considerations
- Redundant connections to SAN/NAS are critical
- Fibre Channel and hardware iSCSI (HBAs)
- Configure multipathing via multiple HBAs connected to multiple switches accessing the same LUNs
- Follow SAN vendor and VMware specifications for multipathing policy to prevent path thrashing
- Fixed or Most Recently Used (MRU); see the sketch below
- Verify all paths to LUNs are visible from within the Virtual Infrastructure Client
- Software iSCSI and NAS/NFS
- Assign multiple pNICs to the vSwitch hosting iSCSI/NFS traffic
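Path state and policy can also be inspected from the Service Console. A minimal sketch; the LUN name vmhba1:0:12 is an example, and the exact esxcfg-mpath option spelling differs slightly between ESX 3.x releases, so treat the policy-setting lines as an assumption and check the man page:

    esxcfg-mpath -l                                  # list all paths with their state and policy
    esxcfg-mpath --lun=vmhba1:0:12 --policy=mru      # MRU policy (typical for active/passive arrays)
    esxcfg-mpath --lun=vmhba1:0:12 --policy=fixed    # Fixed policy (typical for active/active arrays)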
53. Storage: LUNs/Volumes
- Spread LUNs/volumes across as many disks as possible
- More spindles = better I/O
- Sizing considerations and general rules of thumb:
- 20-30 server VMs per LUN to avoid SCSI reservation issues
- 30-40 desktop VMs per LUN
- Maintain free space for snapshots and VM swap files (~20%)
- Maximum number of LUNs per VMware ESX host: 256
- 400-600GB LUNs recommended as standard; adjust on an as-needed basis
54. Storage: LUN/Volume Formatting
- LUNs used in 2 different ways
- Raw Device Mapping (RDM)
- LUN is presented raw to the VM; the VM writes to it directly
- Formatted with VMFS (clustered file system)
- VMs exist as a series of files on the VMFS file system
- VMFS block size determines how space is used and the largest file size
- 1MB block: 256GB max file size / 2MB block: 512GB max file size
- 4MB block: 1TB max file size / 8MB block: 2TB max file size
- Format with a block size which gives a max file size larger than the LUN
- For example, format a 400GB LUN with a 2MB block size (see the sketch below)
- A small amount of space may be wasted, but this provides for larger-than-expected VMs without having to clear off and reformat the LUN
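Creating a VMFS3 datastore with an explicit block size can be done with vmkfstools from the Service Console (most admins use the VI Client instead). A minimal sketch; the device path and the label storage1 are example values:

    # Format the LUN's first partition as VMFS3 with a 2MB block size
    # (512GB max file size, comfortably above a 400GB LUN).
    vmkfstools -C vmfs3 -b 2m -S storage1 /vmfs/devices/disks/vmhba1:0:12:1
    vmkfstools -P /vmfs/volumes/storage1   # query the new volume's attributes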
55. ESX Server Software Configuration - NTP
- NTP (Network Time Protocol)
- Ensure time is consistent between ESX hosts, VirtualCenter, and directory services (AD, eDirectory, etc.)
- Virtual machine time issues
- Time in the VM OS may be skewed due to ESX CPU time slicing
- VMs may be configured to sync time with ESX hosts (see the sketch below)
- If syncing Windows VMs in an Active Directory environment to ESX hosts, point the ESX hosts and the PDC emulator to the same external time server
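Configuring the NTP client on an ESX 3.x host from the Service Console might look like the following minimal sketch; the server name ntp.example.com and the ntpClient firewall service name are assumptions, so substitute the source your PDC emulator also uses:

    echo "server ntp.example.com" >> /etc/ntp.conf    # same source as the PDC emulator
    echo "ntp.example.com" >> /etc/ntp/step-tickers   # step the clock when ntpd starts
    esxcfg-firewall -e ntpClient                      # open outbound UDP 123
    service ntpd restart
    chkconfig ntpd on                                 # start ntpd at boot
    # Per-VM guest time sync is a VMware Tools option; in the .vmx it appears as:
    #   tools.syncTime = "TRUE"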
56. ESX Server Software Configuration - Firewall
- Security Profile (firewall)
- Out of the box, limited to essential ports
- SSH access via root disabled by default
- Comment out the PermitRootLogin no line in the /etc/ssh/sshd_config file, then issue the service sshd restart command to enable it
- Commonly opened additional ports
- Outgoing: SSH client, SNMP, software iSCSI client, NFS client, Update Manager
- Incoming: SNMP, FTP (better to use SFTP or SCP)
- Keep the attack surface minimized by limiting the open ports and separating the Service Console on a management VLAN/subnet (see the sketch below)
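The Service Console firewall is driven by esxcfg-firewall. A minimal sketch; the service names used here (sshClient, snmpd, nfsClient) are assumptions that can be confirmed with the -s listing on your host:

    esxcfg-firewall -q              # show current firewall status and open ports
    esxcfg-firewall -s              # list the known service names
    esxcfg-firewall -e sshClient    # allow the outgoing SSH client
    esxcfg-firewall -e snmpd        # allow SNMP
    esxcfg-firewall -e nfsClient    # allow the NFS client
    esxcfg-firewall -d nfsClient    # -d closes a service's ports again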
57. Creating Virtual Machines
58. VirtualCenter and Collaborative Services
59. Virtual Infrastructure - VirtualCenter
60. Virtual Infrastructure - VirtualCenter
- Should the VirtualCenter server be physical or virtual?
- Virtual
- Pros: no additional hardware or infrastructure needed
- Cons: performance issues in large environments, reduces opportunities to co-host additional services, observer effect
- Physical
- Pros: better performance and scalability; able to co-host additional services (VCB, 3rd-party monitoring and reporting tools, etc.)
- Cons: additional hardware and infrastructure needed
- Recommended to run VirtualCenter on a physical server
61. Virtual Infrastructure - Sizing
- How many VMware ESX hosts are needed?
- Influencing variables
- Configuration of hosts (number of CPU cores and amount of RAM)
- 8 cores, 32GB RAM: 20-25 servers, 35-45 desktops
- Utilization profile of VMs (servers vs. desktops, types of servers)
- Growth projections and budgetary processes
- General guidelines
- Plan for VM sprawl
- Maintain an N+1 environment to allow for maintenance and failures
- Recommended to start with 3 hosts and grow accordingly
62. Virtual Infrastructure: Resource Pools and Clusters
- Resource pools are logical divisions of CPU and RAM resource allocations
- VMs assigned to a pool cannot utilize more resources than are allocated to the pool; provides resource throttling
- Useful for protecting production VMs from dev/test VMs
- Pool resource configuration, combined with VM reservation settings, may prevent VMs from powering on
- Clusters are logical collections of VMware ESX hosts
- Used to enable collaborative features (DRS, HA)
- Clusters contain a default resource pool; sub-pools can be created
- Recommended to maintain a single default pool where possible
63. Virtual Infrastructure: VMotion
- Eliminates planned downtime
- Enables dynamic load balancing
- Reduces power consumption
- Essential to managing the virtualized datacenter
Proven: VMotion has been available since 2003 and is now trusted by 62% of VMware customers in production
64. Virtual Infrastructure: VMotion and DRS
- Ensure CPU compatibility (same family and supported instruction sets) between hosts and within the DRS cluster
- Most vendors publish compatibility matrices
- VMware compatibility check utility: http://www.vmware.com/download/shared_utilities.html
- Per-VM CPU masks may be used between incompatible hosts, but are not supported in production
- A connected floppy or CD drive on a VM will cause VMotion to fail
- Use vmcdconnected (http://www.ntpro.nl/blog/archives/172-Software.html) or a similar tool to find and disconnect devices
65. Live Migration Extended to Storage
- Live migration of VMs across storage disks with no downtime
- Minimizes planned downtime
66. Virtual Infrastructure - HA
- Before configuring HA, ensure that DNS is correct and functional (see the sketch below)
- Verify full and short name lookups for all ESX hosts in the cluster
- After HA is configured, verify all hosts are listed in /etc/hosts
- Avoiding false isolation events
- Configure Service Console network redundancy
- Adjust the failure detection time from 15 seconds to a higher value
- Cluster > VMware HA > Advanced Options: das.failuredetectiontime (value in milliseconds)
- Set the Isolation Response default value to Leave Powered On
- Cluster > VMware HA > Isolation Response
67. Virtual Machine Protection
68. Virtual Machine Protection: Backup and Recovery
- Essential nature of virtual machines: a collection of files
- Configuration file, disk file(s), NVRAM file, logs
- Backup and recovery of VMs based on different perspectives
- Data and configurations contained within the VM (Guest OS view)
- VM files stored on host/SAN datastores (ESX Server view)
- Recommended to use a combination approach which provides for both application/data and complete system protection
69. Virtual Machine Protection: Backup and Recovery
- Protection from the Guest OS perspective
- Use the same backup/recovery programs as physical servers (e.g. NTBackup, Tivoli, Backup Exec, scripts)
- Typically licensed per node
- Focused on protecting data, not entire systems (bare-metal recovery)
- Protection from the VMware ESX perspective
- Applications tailored for virtual environments (e.g. VCB, vRanger Pro)
- Typically licensed per socket of the VMware ESX hosts
- Focused on protecting entire VMs (bare-virtual-metal recovery), although many provide OS-level file recovery as well
70. Virtual Machine Protection: Backup and Recovery
- Methodology recommendations
- Implement hybrid solutions (Guest OS and VMware ESX views)
- Node-based file/data backups plus VI-specific full snapshot backups; may consist of multiple backup applications or a single application
- VMware Consolidated Backup (VCB) provides for backups across the storage infrastructure, increasing backup speeds
- Use in conjunction with 3rd-party applications to gain additional functionality and performance benefits (e.g. compression, file redirection, incremental/differential backups)
- May be possible to replace node-based solutions depending on 3rd-party application feature sets; may result in significant cost savings vs. per-node licensing for traditional backup software
- Test, test, test: regularly restore individual files and entire VMs!
71. VI3 Foundation: Consolidated Backup
- VMware Consolidated Backup (VCB) interface
- Move backup out of the virtual machine
- Eliminate backup traffic on the local area network
- Integrated with major 3rd-party backup products
72. Virtualization Services from InterTech
- VMware Capacity Planner
- Capacity planning tool that collects comprehensive resource utilization data in heterogeneous IT environments and compares it to industry-standard reference data to provide analysis and decision-support modeling
- VMware Implementation
- VMware Infrastructure
- VMware Virtual Desktop Infrastructure
- Site Recovery Manager
- Business Continuity and Disaster Recovery
- Business Continuity and Disaster Recovery Planning
- Site Recovery Manager Hosting
73. Recent Microsoft-VMware Support News
- ESX 3.5 Update 2 is the first hypervisor certified under the Microsoft Server Virtualization Validation Program
- Microsoft supports all users running Windows applications
- For other ESX versions, users with Microsoft Premier support get commercially reasonable efforts
- 90-day license reassignment restrictions lifted for 41 Microsoft server applications
- No more VMotion Tax!
74. Q&A