Title: Virtualization Tech
1 Virtualization Tech
2 Outline
- What is Virtualization
- Why need virtualization
- Virtualization today
- Xen Overview
- Xen Network IO
- State of the art
- Research Roadmap?
3 Virtualization
- Virtualization is a framework or methodology for
dividing the resources of a computer into
multiple execution environments.
- Done by applying one or more concepts or
technologies, such as
- hardware and software partitioning,
- time-sharing,
- partial or complete machine simulation,
- emulation,
- quality of service,
- and many others.
- Virtualization is an abstraction layer that
decouples the physical hardware from the
operating system to deliver greater IT resource
utilization and flexibility.
www.vmware.com
4 Virtualization
5 Why Virtualization in the Enterprise
- Consolidate under-utilized servers to reduce
CapEx and OpEx
- Avoid downtime with VM Relocation
- Dynamically re-balance workload to guarantee
application SLAs
6 Why Virtualization for You
- Test patches or modifications to the OS or server
configurations
- Run untrusted applications in isolation
- Run Windows, Linux, and Mac OS on one machine
- Accelerated application deployment
- Using pre-configured virtual machines
7 Virtualization Today
- Single OS image: Virtuozzo, VServers, Zones
- Groups user processes into resource containers
- Hard to get strong isolation
- Full virtualization: VMware, Virtual PC, QEMU
- Runs multiple unmodified guest OSes
- Hard to efficiently virtualize x86
- Para-virtualization: UML, Xen
- Runs multiple guest OSes ported to a special arch
- The Xen/x86 arch is very close to normal x86
8 Xen Overview
- Xen is a virtual machine monitor for x86 that
supports execution of multiple guest operating
systems with unprecedented levels of performance
and resource isolation. It is a
para-virtualization technology.
- Only the guest kernel needs to be ported
- All user-level apps and libraries run unmodified
- Linux 2.4/2.6, NetBSD, FreeBSD, Plan 9
- Execution performance is close to native
- With Intel VT, Xen also supports full
virtualization.
9 Xen 2.0 Architecture
10 Xen 3.0 Architecture
[Figure: Xen 3.0 architecture. Domain 0 (device manager and control
s/w) runs native device drivers and back-end drivers; paravirtualized
guests (XenLinux) run unmodified user software over front-end device
drivers; an unmodified guest OS (WinXP) runs via VT-x. The Xen VMM
provides event channels, virtual CPU, virtual MMU, a control interface,
and a safe hardware interface, on x86_32/x86_64/IA64 hardware (SMP,
MMU, physical memory, Ethernet, SCSI/IDE, AGP/ACPI/PCI).]
11 Xen Network IO
- Xen Network IO architecture
12 Network IO
- How is a packet sent?
- Network protocol stack in the guest OS
- Virtual network interface (front-end driver)
- IO channel to the back-end driver
- Page flipping avoids any data copy
- Packet delivered to the bridge and real NIC driver
- DMA initialized
- DMA data transfer
- Completion notified through an interrupt
13 Network IO
- How is a packet received?
- NIC interrupts the VMM
- DMA data transfer
- Driver and bridge manage the packet information
- VMM signals the target guest OS
- Back-end to front-end through the IO channel
- Packet goes up the guest OS network stack
14 Networking Micro-benchmark
- One streaming TCP connection per NIC (up to 4)
- Driver receive throughput is 75% of native Linux
throughput
- Guest throughput is 1/3rd to 1/5th of Linux
throughput
15 Receive: Xen Driver overhead
- Profiling shows slower instruction execution with
the Xen driver than with Linux (both use 100% CPU)
- Data TLB miss count is 13 times higher
- Instruction TLB miss count is 17 times higher
- Xen executes 11% more instructions per byte
transferred (Xen virtual interrupts, driver
hypercalls)
16 Receive: Xen Guest overhead
- The Xen Guest configuration executes twice as
many instructions as the Xen Driver configuration
- Driver domain (38%): overhead of bridging
- Xen (27%): overhead of page remapping
17 Transmit: Xen Guest overhead
- Xen Guest executes 6 times as many instructions
as the Xen Driver configuration
- Factor of 2, as in the receive case
- Guest instructions increase 2.7 times
- The virtual NIC (vif2) in the guest does not
support the TCP offload capabilities of the NIC
18 Why the slowdown?
- Lower bandwidth, higher latency
- VMM involvement brings more system overhead
(domain switches, inter-domain communication cost,
system management, etc.)
- The virtual NIC does not support some offload
capabilities of the real NIC
19 Virtualization IO: State of the Art
- Architecture side
- Concurrent direct access to the NIC from guest
OSes (Rice University)
- Needs NIC support for multiple contexts
- Still needs the involvement of the VMM
- Bypass the VMM, just like OS-bypass techniques.
How?
- IOMMU support from hardware, which offers
translation and protection functions in hardware;
both AMD and Intel will support it soon.
- Does it support multi-domain direct access to IO
devices?
20 Virtualization IO: State of the Art
- System side
- Optimize the IO channel
- Optimize the receive path
- Optimize the transmit path
- Virtual memory optimization
21 Are they enough?
- What can we do?
- Performance evaluation of Xen, Xen with Intel VT,
and the coming Intel IOMMU technology
- Domain 0 and VMM on-loading on multi-core?
- Optimize the cache
- Optimize the instruction set
- Integrate the NIC into the CPU?
- Floorplan: how to place a NIC on a multi-core
die? Performance and power concerns
- Like TOE, bypass not only the VMM but also the
OS?
- NIC support for concurrent direct access from
guest OSes?
- Use polling instead of interrupts?
22 Q&A
23 References
- Optimizing Network Virtualization in Xen
- Virtualization and Virtual Machines, Tom Gianos
- Xen and the Art of Virtualization, Ian Pratt
- Diagnosing Performance Overheads in the Xen
Virtual Machine Environment, Aravind Menon
- Integrated Network Interfaces for High-Bandwidth
TCP/IP, Nathan L. Binkert
- Design and Evaluation of Network Interfaces for
SAN