Virtual Cloud
R98922135 陳昌毅, R98944033 顏昭恩, R98922150 黃伯淳
2010/06/03
Outline
• Goal
• Introduction
  – Full virtualization
  – Para-virtualization
• VMware
• XEN
• System Diagram
• Current Progress
• Future Work
• Q&A
Goal
• Construct a cloud platform that provides VMs as a service.
• Motivation
  – A private cloud platform.
  – Common cloud platforms (Eucalyptus, OpenNebula) do not provide migration.
• Challenge
  – A convenient environment for migrating VMs.
  – Accessing images through NFS is extremely slow.
Related Work
• XEN
  – A hypervisor.
  – Provides virtualization.
• Eucalyptus
  – A controller system.
  – No migration mechanism.
• OpenNebula

Innovations
• We use a distributed shared file system to host VM images.
  – VMs can migrate without a central storage server.
• We construct VMs as multiple prototypes according to their functionalities.
Why Virtualization
• To increase the utilization of costly hardware resources.
• Ease of management.
• Portability.
XEN
• Para-virtualization
• Full virtualization (hardware-assisted)
• Xend (domain-0)
System Diagram
Shared File System

          Pros                                  Cons
NFS       Easy to manage                        I/O bottleneck
Lustre    Distributed file system               Requires kernel modification
Gluster   User-space distributed file system
P2P

Note: the modified Lustre kernel is unstable when running with XEN.
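Because Gluster runs in user space, a host can verify that its VM image directory is really backed by a Gluster (FUSE) mount before starting or migrating a VM. The sketch below parses the contents of /proc/mounts with a longest-prefix match; the image path and mount entries shown in the comments are hypothetical examples, not values from our setup.

```python
def fs_type_of(path, mounts_text):
    """Return the filesystem type backing `path`, given the text of
    /proc/mounts.  Uses longest-prefix matching on mount points, so
    e.g. /var/lib/xen/images wins over / for files under it."""
    best_mount, best_type = "", "unknown"
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        mount_point, fstype = fields[1], fields[2]
        inside = (path == mount_point or
                  path.startswith(mount_point.rstrip("/") + "/"))
        if inside and len(mount_point) > len(best_mount):
            best_mount, best_type = mount_point, fstype
    return best_type
```

On a real host one would pass `open("/proc/mounts").read()`; a Gluster volume mounted for VM images typically shows up with type `fuse.glusterfs`.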
Migration
• Why do we need live migration? We want to move a VM without interrupting its service.
• Products:
  – Xen: live migration
  – VMware: VMotion
• Two important considerations:
  1. Downtime
  2. Total migration time
Pre-copy Migration
Managed Migration (Xen)
1. 1st round:
   – Copy all memory pages to the destination machine.
   – Replace the shadow page table with the original one, and mark all pages read-only.
   – Create a dirty bitmap for the VM.
2. 2nd to (n-1)th rounds:
   – While pages are being transferred, any attempt by the VM to modify a page invokes Xen, which sets the appropriate bit in the dirty bitmap.
   – Dirty pages are re-sent in the next round.
   – The dirty bitmap is reset for the next round.
3. nth round:
   – When the dirty rate exceeds the upper bound, begin stop-and-copy.
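The rounds above can be sketched as a small deterministic model. This is an illustration of the pre-copy idea, not Xen's actual implementation: it assumes a constant page-dirty rate and a fixed transfer bandwidth, and all parameter values are hypothetical.

```python
def precopy(num_pages, dirty_rate, bandwidth, stop_threshold, max_rounds=30):
    """Model of pre-copy migration.

    num_pages      total memory pages of the VM
    dirty_rate     pages the running VM dirties per second (assumed constant)
    bandwidth      pages transferred per second
    stop_threshold dirty-set size below which we pause the VM and
                   perform the final stop-and-copy round
    Returns (pre-copy rounds, total pages sent, downtime, total time).
    """
    to_send = num_pages            # round 1 copies all memory pages
    total_sent = 0
    total_time = 0.0
    rounds = 0
    while rounds < max_rounds and to_send > stop_threshold:
        t = to_send / bandwidth    # duration of this round
        total_sent += to_send
        total_time += t
        # pages dirtied while this round was in flight must be re-sent
        to_send = min(num_pages, int(dirty_rate * t))
        rounds += 1
    downtime = to_send / bandwidth  # VM is paused for stop-and-copy
    total_sent += to_send
    total_time += downtime
    return rounds, total_sent, downtime, total_time
```

The model makes the trade-off between the two metrics visible: more pre-copy rounds shrink downtime (only the last dirty set is sent while the VM is paused) at the cost of a longer total migration time and more pages sent overall.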
User Interface
Demo: Migration
• Source
Demo: Migration
• Destination
Hadoop Benchmark
• 13 nodes plus a master sort 8.6 GB of data.
• Local
  – All images are on local disks.
• SFS_local
  – All images are on local disks and are accessed via the Gluster file system.
• SFS_remote
  – All images are on remote disks and are accessed via the Gluster file system.
[Chart: sort time in seconds over 10 iterations for local, SFS_local, and SFS_remote]
Performance Comparison

          local    SFS_local    SFS_remote
Seconds   87.50    103.90       160.70
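A quick sanity check on the table above: expressing each setup's sort time relative to the local-disk baseline shows the overhead of hosting images on Gluster.

```python
# Sort times (seconds) from the benchmark table above.
times = {"local": 87.50, "SFS_local": 103.90, "SFS_remote": 160.70}

def slowdown(times, baseline="local"):
    """Ratio of each setup's time to the baseline, rounded to 2 places."""
    base = times[baseline]
    return {name: round(t / base, 2) for name, t in times.items()}

# SFS_local is about 1.19x the baseline, SFS_remote about 1.84x.
print(slowdown(times))
```

In other words, accessing local images through Gluster costs roughly 19% extra, while pulling images from remote disks nearly doubles the sort time.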
Q&A