Transcript and Presenter's Notes

Title: Virtual Cluster Development Environment


1
Virtual Cluster Development Environment
  • Presented by
  • S.THAMARAI SELVI
  • PROFESSOR
  • DEPT. OF INFORMATION TECHNOLOGY
  • MADRAS INSTITUTE OF TECHNOLOGY
  • CHROMEPET, CHENNAI
  • Open Source Grid and Cluster Conference 2008
  • Oakland, 15 May 2008

2
Agenda
  • Virtualization
  • Xen Machines
  • VCDE overview
  • VCDE Architecture
  • VCDE Component details
  • Conclusion

3
Virtualization
  • Virtualization is a framework or methodology of
    dividing the resources of a computer into
    multiple execution environments, by applying one
    or more concepts or technologies such as hardware
    and software partitioning, time-sharing, partial
    or complete machine simulation, emulation,
    quality of service, and many others.
  • Source: http://www.kernelthread.com
  • It allows you to run multiple operating systems
    simultaneously on a single machine

4
Need for Virtualization
  • Integrates fragmented resources
  • Isolation across VMs - Security
  • Resource Provisioning
  • Dynamic Configuration
  • Efficient Resource Utilization

5
  • Hypervisor - The hypervisor is the most basic
    virtualization component. It's the software that
    decouples the operating system and applications
    from their physical resources. A hypervisor has
    its own kernel and it's installed directly on the
    hardware, or "bare metal." It is, almost
    literally, inserted between the hardware and the
    Guest OS.
  • Virtual Machine - A virtual machine (VM) is a
    self-contained operating environment: software
    that works with, but is independent of, a host
    operating system. In other words, it's a
    platform-independent software implementation of a
    CPU that runs compiled code.
  • The VMs must be written specifically for the
    OSes on which they run. Virtualization
    technologies are sometimes called dynamic virtual
    machine software.

6
Virtual Machines
  • A system VM provides a complete, persistent
    system environment that supports an operating
    system along with its many user processes. It
    provides the guest operating system with access
    to virtual hardware resources, including
    networking, I/O, and perhaps a graphical user
    interface along with a processor and memory.

Source: J. E. Smith and R. Nair, "The Architecture of
Virtual Machines," Computer, May 2005, pp. 32-38
7
Paravirtualization
  • It is a type of virtualization in which the
    entire OS runs on top of the hypervisor and
    communicates with it directly, typically
    resulting in better performance. The kernels of
    both the OS and the hypervisor must be modified,
    however, to accommodate this close interaction.
  • Example: Xen

8
Xen
  • Xen is an open-source Virtual Machine Monitor or
    Hypervisor for both 32- and 64-bit processor
    architectures. It runs as software directly on
    top of the bare-metal, physical hardware and
    enables you to run several virtual guest
    operating systems on the same host computer at
    the same time. The virtual machines are executed
    securely and efficiently with near-native
    performance.
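A minimal sketch of a Xen 3.x paravirtualized guest
configuration of the kind such a setup would use (xm config
files use Python syntax); every path and value below is an
illustrative assumption, not taken from the presentation.

  # /etc/xen/vcde-node1 -- illustrative Xen 3.x domU configuration (assumed values)
  kernel  = "/boot/vmlinuz-2.6-xenU"               # paravirtualized guest kernel
  ramdisk = "/boot/initrd-2.6-xenU.img"
  memory  = 512                                    # MB of RAM for the guest
  name    = "vcde-node1"
  vif     = ['bridge=xenbr0']                      # one virtual NIC on the default bridge
  disk    = ['phy:VolGroup00/vcde_node1,xvda1,w']  # LVM-backed root disk
  root    = "/dev/xvda1 ro"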

9
Xen
  • Hypervisor (VMM) sits on top of H/W
  • Ported to Linux/FreeBSD/NetBSD
  • Hosted OS kernel modification required
  • Near-native performance
  • Highly scalable

10
Xen
Source: http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf, p. 5
11
Grid Context
[Diagram: users submit jobs through a portal or CLI to a
resource broker, which maps them onto grid-enabled physical
resources (cluster C1, ...).]
12
In our context
  • Cluster Head Node


13
VCDE - Objectives
  • Design and Development of Virtual Cluster
    Development Environment for Grids using Xen
    machines
  • VCDE automates the remote deployment of a Grid
    environment to execute applications written as
    parallel or sequential programs.

14
VCDE Architecture
[Architecture diagram: a Job Submission Portal feeds the
Globus container, which hosts the Virtual Cluster Service
and the Virtual Information Service. These contact the
Virtual Cluster Server on the cluster head node, which
contains the Network Manager, IP Pool, User Pool, Security
Server, Job Status Service, Resource Aggregator, Host Pool,
Scheduler, Dispatcher, Match Maker, Job Pool, Transfer
Module, Virtual Cluster Manager, and Executor Module. A
Virtual Machine Creator runs on each of compute nodes 1..n.]
15
The VCDE Components
  • Virtual cluster service and Virtual information
    service
  • Virtual cluster server
  • User pool
  • Job status service
  • Job pool
  • Network service
  • Resource Aggregator
  • Dispatcher
  • Match maker
  • Host pool
  • Virtual Cluster Manager
  • Executor

16
Globus Toolkit Services
  • Two custom services have been developed and
    deployed in the Globus Toolkit and run as a
    virtual workspace; the underlying virtual machine
    is based on the Xen VMM.
  • Virtual cluster service which is used to create
    Virtual Clusters
  • Virtual information service which is used to know
    the status of virtual resources.

17
Job Submission client
  • This component is responsible for getting the
    user requirements for the creation of a virtual
    cluster.
  • When the user accesses the Virtual Cluster
    Service, the user's identity is verified using the
    grid-map file. The Virtual Cluster Service
    contacts the Virtual Cluster Development
    Environment (VCDE) to create and configure the
    virtual cluster.
  • The inputs are the type of OS, disk size, host
    name, etc. (a sketch of such a request follows).
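A minimal sketch of how such a user request might be
represented inside VCDE; the VCDERequest class and its field
names are illustrative assumptions, not the presentation's
actual code.

  from dataclasses import dataclass

  @dataclass
  class VCDERequest:
      """Illustrative user request passed from the job submission client to the VCS."""
      os_type: str         # e.g. "Fedora 4"
      disk_gb: int         # requested logical volume size per node
      ram_mb: int          # requested RAM per node
      node_count: int      # number of virtual nodes (1 head + compute nodes)
      hostname_prefix: str

  req = VCDERequest(os_type="Fedora 4", disk_gb=10, ram_mb=512,
                    node_count=4, hostname_prefix="vcluster")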

18
Virtual Cluster Service (VCS)
  • It is the core of the Virtual Cluster Development
    Environment. The Virtual Cluster Service contacts
    the VCDE for virtual machine creation. The Virtual
    Cluster Server maintains the Dispatcher, Network
    Manager, Resource Aggregator, User Manager, and
    Job Queue.

19
Resource Aggregator
  • This module fetches all resource information from
    the physical cluster and periodically updates it
    in the Host Pool.
  • For the head and compute nodes, the Host Pool
    maintains the logical volume partition, total and
    free logical volume disk space, total and free RAM
    size, kernel type, gateway, broadcast address,
    network address, netmask, etc. (a sketch of such a
    record follows).
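A minimal sketch of a host pool record built from the fields
listed above; the HostRecord class and field names are
illustrative assumptions.

  from dataclasses import dataclass

  @dataclass
  class HostRecord:
      """Illustrative per-node record kept in the host pool by the Resource Aggregator."""
      hostname: str
      lv_partition: str     # logical volume partition, e.g. "VolGroup00/vcde"
      lv_total_gb: float
      lv_free_gb: float
      ram_total_mb: int
      ram_free_mb: int
      kernel: str           # e.g. "2.6.16-xen"
      os: str
      gateway: str
      broadcast: str
      network: str
      netmask: str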

20
Match Maker
  • The match making process compares the user's
    requirements with the physical resource
    availability.
  • The physical resource information, such as disk
    space, free RAM size, kernel version, and
    operating system, is gathered from the resource
    aggregator via the virtual cluster server module.
  • In this module the rank of each matched host is
    calculated using the free RAM size and disk space.
  • The details are returned as a hashtable of
    hostname and rank, which is sent to the
    UserServiceThread (a minimal ranking sketch
    follows).
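A minimal sketch of the matching and ranking step described
above, reusing the illustrative VCDERequest and HostRecord
types from the earlier sketches; the additive scoring formula
is an assumption, since the slides only state that free RAM
size and disk space determine the rank.

  def match_hosts(req, hosts):
      """Return {hostname: rank} for hosts that can satisfy the request.
      Hosts with more free RAM (and disk) receive a higher rank."""
      ranked = {}
      for h in hosts:
          if (h.os == req.os_type and
                  h.ram_free_mb >= req.ram_mb and
                  h.lv_free_gb >= req.disk_gb):
              # Simple additive score; the real weighting is not given in the slides.
              ranked[h.hostname] = h.ram_free_mb + h.lv_free_gb
      return ranked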

21
Host, User and Job pools
  • The host pool gets the list of hosts from the
    resource aggregator and identifies the free nodes
    on which virtual machines can be created.
  • The user pool is responsible for maintaining the
    list of authorized users. It also controls which
    users are allowed to create the virtual execution
    environment, and the number of jobs per user can
    be limited.
  • The job pool maintains user requests as jobs in a
    queue, received from the user manager module. The
    requests are processed one by one by the
    dispatcher module and passed as input to the match
    maker module (a sketch of the pools follows).
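A minimal sketch of the user pool and job pool behaviour
described above; the class names, method names, and the
default per-user job limit are illustrative assumptions.

  from collections import deque

  class UserPool:
      """Tracks authorized users and limits how many jobs each may have queued."""
      def __init__(self, allowed_users, max_jobs_per_user=2):  # limit value is assumed
          self.allowed = set(allowed_users)
          self.job_count = {}
          self.max_jobs = max_jobs_per_user

      def may_submit(self, user):
          return user in self.allowed and self.job_count.get(user, 0) < self.max_jobs

  class JobPool:
      """FIFO queue of user requests, drained one by one by the dispatcher."""
      def __init__(self):
          self.queue = deque()

      def enqueue(self, job):
          self.queue.append(job)

      def next_job(self):
          return self.queue.popleft() if self.queue else None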

22
Job Status
  • Job Status service accesses the Job Pool through
    VCDE Server and displays the virtual cluster
    status and job status dynamically.

23
Dispatcher
  • The dispatcher is invoked when a job is submitted
    to the Virtual Cluster Server. The dispatcher
    module gets the job requirements and records them
    in the job pool under a job id. It then sends the
    job, with the user's requirements, to the match
    making module, which checks them against the
    resources available in the host pool.
  • The matched hosts are identified and the ranks
    for the matched resources are computed.
  • The rank is calculated from the free RAM size:
    the resource with the most free RAM gets the
    highest rank.

24
Scheduler
  • The scheduler module is invoked after the list of
    matching hosts has been generated by the match
    making module.
  • The resources are ordered by rank, and the node
    with the highest rank is chosen as the head node
    of the virtual cluster.
  • Virtual machines are created as compute nodes
    from the matched host list, and the list of these
    resources is sent to the dispatcher module (see
    the sketch below).
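A minimal sketch of the rank-based ordering described above,
consuming the {hostname: rank} table produced by the match
maker sketch; the function name is an illustrative
assumption.

  def schedule(ranked):
      """Order matched hosts by rank; the top-ranked host becomes the virtual head node."""
      ordered = sorted(ranked, key=ranked.get, reverse=True)
      head_node, compute_nodes = ordered[0], ordered[1:]
      return head_node, compute_nodes

  # Example: {'c3': 1600, 'c1': 900, 'c2': 1200} -> head 'c3', compute ['c2', 'c1']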

25
Virtual Cluster Manager
  • The Virtual Cluster Manager (VCM) module is
    implemented using a round-robin algorithm. Based
    on the user's node count, the VCM creates the
    first node as the head node and the others as
    compute nodes (a placement sketch follows).
  • The VCM waits until it receives the message on
    successful creation of the virtual cluster and the
    completion of software installation.
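A minimal sketch of the round-robin placement implied above:
virtual nodes are assigned to physical hosts in rotation,
with the first node taking the head role. Function and
variable names are illustrative assumptions.

  from itertools import cycle

  def place_virtual_nodes(node_count, physical_hosts):
      """Assign each virtual node a role and a physical host in round-robin order."""
      placement = []
      host_cycle = cycle(physical_hosts)
      for i in range(node_count):
          role = "HEAD" if i == 0 else "COMPUTE"
          placement.append((f"vnode{i}", role, next(host_cycle)))
      return placement

  # place_virtual_nodes(4, ["c1", "c2", "c3"]) ->
  # [('vnode0', 'HEAD', 'c1'), ('vnode1', 'COMPUTE', 'c2'),
  #  ('vnode2', 'COMPUTE', 'c3'), ('vnode3', 'COMPUTE', 'c1')]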

26
Virtual Machine Creator
  • The two main functions of the virtual machine
    creator are
  • Updating Resource Information and
  • Creation of Virtual Machines
  • The resource information, viz. hostname, OS,
    architecture, kernel version, ramdisk, logical
    volume device, RAM size, broadcast address,
    netmask, network address, and gateway address, is
    updated in the host pool through the VCS.
  • Based on the message received from the Virtual
    Cluster Manager, it starts creating the virtual
    machines.
  • If the message received from the VCM is Head
    Node, it creates the virtual cluster head node
    with the required software;
  • else, if the message received from the Virtual
    Cluster Manager is Client Node, it creates a
    compute node with minimal software.

27
Automation of GT
  • Installation of the prerequisite software for
    Globus has been automated.
  • The required software packages are
  • JDK
  • Ant
  • Tomcat web server
  • Junit
  • Torque

28
Automation of GT
  • All the steps required for the Globus
    installation have also been automated (a rough
    sketch follows):
  • Globus package installation
  • Configuration, such as SimpleCA, RFT, and other
    services.
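A rough sketch of what such installation automation could
look like; the package names, tarball name, and commands are
placeholders and assumptions, not the actual VCDE scripts.

  import subprocess

  # Placeholder package list for the prerequisites named earlier (JDK, Ant,
  # Tomcat, JUnit, Torque); real package names depend on the distribution.
  PREREQS = ["jdk", "ant", "tomcat", "junit", "torque"]

  def run(cmd):
      print("+", " ".join(cmd))
      subprocess.check_call(cmd)

  def install_prerequisites():
      for pkg in PREREQS:
          run(["yum", "-y", "install", pkg])   # assumes an RPM-based node

  def install_globus(tarball="gt4.0.5-installer.tar.gz"):  # tarball name is an assumption
      run(["tar", "xzf", tarball])
      # The configure/make/make install steps and the SimpleCA and RFT
      # configuration would follow here; they are omitted in this sketch.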

29
Security Server
  • The Security Server performs mutual
    authentication dynamically.
  • When the Virtual Cluster installation and
    configuration is completed, the Security client
    running in the virtual cluster head node sends
    the certificate file, signing policy file and the
    user's identity to the Security server running in
    VCS.

30
Executor Module
  • After the formation of virtual clusters, the
    executor module is invoked.
  • This module fetches the job information from the
    job pool, creates an RSL file, contacts the
    virtual cluster head node's Managed Job Factory
    Service, and submits this RSL job description. It
    gets the job status and updates it in the job
    pool (a sketch of building the RSL follows).
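A minimal sketch of building a GT4-style RSL job description
as an XML string before submission; the element set shown is
trimmed and illustrative, and the helper function is an
assumption.

  def build_rsl(executable, arguments, stdout="job.out", stderr="job.err"):
      """Build a minimal GT4 WS-GRAM job description (RSL) as an XML string."""
      args_xml = "".join(f"<argument>{a}</argument>" for a in arguments)
      return ("<job>"
              f"<executable>{executable}</executable>"
              f"{args_xml}"
              f"<stdout>{stdout}</stdout>"
              f"<stderr>{stderr}</stderr>"
              "</job>")

  # Example: build_rsl("/bin/hostname", []) produces a job description that the
  # executor could submit to the head node's job factory service.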

31
Transfer Module
  • The job executable, input files and RSL file are
    transferred using the transfer manager to the
    Virtual Cluster Head Node.
  • After the execution of the job, the output file
    is transferred to the head node of the physical
    cluster.

32
Virtual Information Service
  • The resource information server fetches the Xen
    Hypervisor status, hostname, operating system,
    privileged domain id and name, Kernel Version,
    Ramdisk, Logical Volume Space, Total and Free
    Memory, Ram Size details, Network related
    information and the details of the created
    virtual cluster.
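A minimal sketch of how such hypervisor status could be
gathered on a Xen 3.x node by parsing the output of the xm
management tool; the column layout and function name are
assumptions based on typical xm output, not the
presentation's implementation.

  import subprocess

  def list_xen_domains():
      """Query the Xen hypervisor via 'xm list' and return basic per-domain details."""
      lines = subprocess.check_output(["xm", "list"], text=True).splitlines()[1:]
      domains = []
      for line in lines:
          # Typical columns: Name  ID  Mem(MiB)  VCPUs  State  Time(s)
          name, dom_id, mem, vcpus, state, _elapsed = line.split(None, 5)
          domains.append({"name": name, "id": int(dom_id),
                          "mem_mib": int(mem), "state": state})
      return domains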

33
VCDE Architecture
[Architecture diagram: the same VCDE architecture as slide 14
(Job Submission Portal, Globus container with Virtual Cluster
Service and Virtual Information Service, and the Virtual
Cluster Server with its pools and modules on the cluster head
node), now also showing the formed virtual cluster: a virtual
head node and virtual compute nodes 1..n created by the
Virtual Machine Creators running on physical compute nodes
1..n.]
34
VIRTUAL CLUSTER FORMATION
[Diagram: the VCDE server drives VM Creators on the physical
nodes over Ethernet to form a virtual cluster of one head
node and three slave nodes (Fedora 4, 512 MB RAM, 10 GB disk
each).]
35
Image Constituents
36
Experimental Setup
  • In our testbed, we have created a physical
    cluster with four nodes: one head node and three
    compute nodes.
  • The operating system on the head node is
    Scientific Linux 4.0 with
  • 2.6 kernel
  • Xen 3.0.2
  • GT 4.0.5
  • VCDE Server and VCDE Scheduler
  • On the compute nodes, the VM Creator is the only
    module running.

37
Conclusion
  • The VCDE (Virtual Cluster Development
    Environment) has been designed and developed for
    creating virtual clusters automatically to
    satisfy the requirements of the users.
  • There is no human intervention in the process of
    creating the virtual execution environment. The
    complete automation currently takes considerable
    time, so in the near future the performance of
    VCDE will be improved.
  • VCDE has been implemented for a single cluster.
  • It has to be extended to multiple clusters by
    incorporating a meta-scheduler.

38
References
  • 1. Foster, I., C. Kesselman, J. Nick, and S.
    Tuecke, "The Physiology of the Grid: An Open Grid
    Services Architecture for Distributed Systems
    Integration," Open Grid Service Infrastructure WG,
    Global Grid Forum, 2002.
  • 2. Foster, I., C. Kesselman, and S. Tuecke, "The
    Anatomy of the Grid: Enabling Scalable Virtual
    Organizations," International Journal of
    Supercomputer Applications, 2001, 15(3), pp.
    200-222.
  • 3. Goldberg, R., "Survey of Virtual Machine
    Research," IEEE Computer, 1974, 7(6), pp. 34-45.
  • 4. Keahey, K., I. Foster, T. Freeman, X. Zhang,
    and D. Galron, "Virtual Workspaces in the Grid,"
    ANL/MCS-P1231-0205, 2005.
  • 5. Figueiredo, R., P. Dinda, and J. Fortes, "A
    Case for Grid Computing on Virtual Machines," 23rd
    International Conference on Distributed Computing
    Systems, 2003.
  • 6. Reed, D., I. Pratt, P. Menage, S. Early, and
    N. Stratford, "Xenoservers: Accountable Execution
    of Untrusted Programs," 7th Workshop on Hot Topics
    in Operating Systems, Rio Rico, AZ, IEEE Computer
    Society Press, 1999.
  • 7. Barham, P., B. Dragovic, K. Fraser, S. Hand,
    T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A.
    Warfield, "Xen and the Art of Virtualization," ACM
    Symposium on Operating Systems Principles (SOSP),
    2003.
  • 8. Sugerman, J., G. Venkitachalam, and B.H. Lim,
    "Virtualizing I/O Devices on VMware Workstation's
    Hosted Virtual Machine Monitor," USENIX Annual
    Technical Conference, 2001.

39
References continued
  • 9. Adabala, S., V. Chadha, P. Chawla, R.
    Figueiredo, J. Fortes, I. Krsul, A. Matsunaga, M.
    Tsugawa, J. Zhang, M. Zhao, L. Zhu, and X. Zhu,
    "From Virtualized Resources to Virtual Computing
    Grids: The In-VIGO System," Future Generation
    Computer Systems, 2004.
  • 10. Sundararaj, A. and P. Dinda, "Towards Virtual
    Networks for Virtual Machine Grid Computing," 3rd
    USENIX Conference on Virtual Machine Technology,
    2004.
  • 11. Jiang, X. and D. Xu, "VIOLIN: Virtual
    Internetworking on OverLay Infrastructure,"
    Department of Computer Sciences Technical Report
    CSD TR 03-027, Purdue University, 2003.
  • 12. Keahey, K., I. Foster, T. Freeman, X. Zhang,
    and D. Galron, "Virtual Workspaces in the Grid,"
    Euro-Par 2005, Lisbon, Portugal.
  • 13. Keahey, K., I. Foster, T. Freeman, and X.
    Zhang, "Virtual Workspaces: Achieving Quality of
    Service and Quality of Life in the Grid,"
    Scientific Programming Journal, 2005.
  • 14. Foster, I., T. Freeman, K. Keahey, D.
    Scheftner, B. Sotomayor, and X. Zhang, "Virtual
    Clusters for Grid Communities," CCGrid 2006,
    Singapore, 2006.
  • 15. Freeman, T. and K. Keahey, "Flying Low:
    Simple Leases with Workspace Pilot," Euro-Par
    2008.
  • 16. Keahey, K., T. Freeman, J. Lauret, and D.
    Olson, "Virtual Workspaces for Scientific
    Applications," SciDAC 2007 Conference, Boston, MA,
    June 2007.
  • 17. Sotomayor, B., "A Resource Management Model
    for VM-Based Virtual Workspaces," Master's paper,
    University of Chicago, February 2007.
  • 18. Bradshaw, R., N. Desai, T. Freeman, and K.
    Keahey, "A Scalable Approach to Deploying and
    Managing Appliances," TeraGrid 2007, Madison, WI,
    June 2007.

40
41
Work Hard

Thank you all
Think High