
Performance Study

Performance Characteristics of VMFS and RDM
VMware ESX Server 3.0.1

Copyright 2007 VMware, Inc. All rights reserved.

VMware ESX Server offers three choices for managing disk access in a virtual machine: VMware Virtual Machine File System (VMFS), virtual raw device mapping (RDM), and physical raw device mapping. It is very important to understand the I/O characteristics of these disk access management systems in order to choose the right access type for a particular application. Choosing the right disk access management method can be a key factor in achieving high system performance for enterprise class applications.

This study provides performance characterization of the various disk access management methods supported by VMware ESX Server. The goal is to provide data on performance and system resource utilization at various load levels for different types of workloads. This information offers you an idea of relative throughput, I/O rate, and CPU efficiency for each of the options so you can select the appropriate disk access method for your application.

This study covers the following topics:

Technology Overview

Executive Summary

Test Environment

Performance Results

Conclusion

Configuration

Resources


    Technology Overview

VMFS is a special high performance file system offered by VMware to store ESX Server virtual machines. Available as part of ESX Server, it is a distributed file system that allows concurrent access by multiple hosts to files on a shared VMFS volume. VMFS offers high I/O capabilities for virtual machines. It is optimized for storing and accessing large files such as virtual disks and the memory images of suspended virtual machines.

RDM is a mapping file in a VMFS volume that acts as a proxy for a raw physical device. The RDM file contains metadata used to manage and redirect disk accesses to the physical device. This technique provides the advantages of direct access to a physical device in addition to some of the advantages of a virtual disk on VMFS storage. In brief, it offers VMFS manageability with the raw device access required by certain applications.

You can configure RDM in two ways:

Virtual compatibility mode: This mode fully virtualizes the mapped device, which appears to the guest operating system as a virtual disk file on a VMFS volume. Virtual mode provides such benefits of VMFS as advanced file locking for data protection and use of snapshots.

Physical compatibility mode: This mode provides minimal SCSI virtualization of the mapped device. VMkernel passes all SCSI commands to the device, with one exception, thereby exposing all the physical characteristics of the underlying hardware.

Both VMFS and RDM provide such distributed file system features as file locking, permissions, persistent naming, and VMotion capabilities. VMFS is the preferred option for most enterprise applications, including databases, ERP, CRM, VMware Consolidated Backup, Web servers, and file servers. Although VMFS is recommended for most virtual disk storage, raw disk access is needed in a few cases. RDM is recommended for those cases. Some of the common uses of RDM are in cluster data and quorum disks for configurations using clustering between virtual machines or between physical and virtual machines, or for running SAN snapshot or other layered applications in a virtual machine.

For more information on VMFS and RDM, refer to the Server Configuration Guide.

Executive Summary

The main conclusions that can be drawn from the tests described in this study are:

For random reads and writes, VMFS and RDM yield similar I/O performance.

For sequential reads and writes, RDM provides slightly better I/O performance at smaller I/O block sizes, but VMFS and RDM provide similar I/O performance for larger I/O block sizes (greater than 32KB).

CPU cycles required per I/O operation for both VMFS and RDM are similar for random reads and writes at smaller I/O block sizes. As the I/O block size increases, CPU cycles per I/O operation increase for VMFS compared to RDM.

For sequential reads and writes, RDM requires relatively fewer CPU cycles per I/O operation.

    Test Environment

The tests described in this study characterize the performance of the various options for disk access management VMware offers with ESX Server. We ran the tests with a uniprocessor virtual machine using Windows Server 2003 Enterprise Edition with SP2 as the guest operating system. The virtual machine ran on an ESX Server system installed on a local SCSI disk. We attached two disks to the virtual machine: one virtual disk for the operating system and a separate test disk, which was the target for the I/O operations. We used a logical volume created on five local SCSI disks configured as RAID 0 as the test disk. We generated I/O load using Iometer, a very popular tool for evaluating I/O performance (see Resources for a link to more information). See Configuration for a detailed list of the hardware and software configuration we used for the tests.


    Disk Layout

In this study, disk layout refers to the configuration, location, and type of disk used for tests. We used a single server with six physical SAS disks and a RAID controller for the tests.

We created a logical drive on a single physical disk. We installed ESX Server on this drive. On the same drive, we also created a VMFS partition and used it to store the virtual disk on which we installed the Windows Server 2003 guest operating system.

We configured the remaining five physical disks as RAID 0 and created a 10GB logical volume, referred to here as the test disk. We used this test disk only for the I/O stress test. We created a virtual disk on the test disk and attached that virtual disk to the Windows virtual machine. To the guest operating system, the virtual disk appears as a physical drive.

In the VMFS tests, we implemented the virtual disk (seen as a physical disk by the guest operating system) as a .vmdk file stored on a VMFS partition created on the test disk.

In the RDM tests, we created an RDM file on the VMFS volume (the volume that held the virtual machine configuration files and the virtual disk where we installed the guest operating system) and mapped the RDM file to the test disk. We configured the test virtual disk so it was connected to an LSI SCSI host bus adapter.

    Figure 1. Disk layout for VMFS and RDM tests

[The figure shows two panels. VMFS test: the virtual machine's operating system disk and the test disk are both .vmdk files stored on VMFS volumes on the ESX Server host. RDM test: the operating system disk is a .vmdk file on a VMFS volume, and an RDM mapping file on that volume points to the raw device used as the test disk.]

From the perspective of the guest operating system, the test disks were raw disks with no partition or file system (such as NTFS) created on them. Iometer can read and write raw, unformatted disks directly. We used this capability so we could compare the performance of the underlying storage implementation without involving any operating system file system.


    Software Configuration

We configured the guest operating system to use the LSI Logic SCSI driver. On VMFS volumes, we created virtual disks with the thick option. This option provides the best performing disk allocation scheme for a virtual disk. All the space allocated during disk creation is available for the guest operating system immediately after the creation. Any old data that might be present on the allocated space is not zeroed out during virtual machine write operations.

Unless stated otherwise, we left all ESX Server and guest operating system parameters at their default settings.

    I/O Workload Characteristics

Enterprise applications typically generate I/O with mixed access patterns. The size of data blocks transferred between the server hosting the application and the storage also changes. Designing an appropriate disk and file system layout is very important to achieve optimum performance for a given workload.

A few applications have a single access pattern. One example is backup and its pattern of sequential reads. Online transaction processing (OLTP) database access, on the other hand, is highly random.

The nature of the application also affects the size of data blocks transferred. Often, the data block size is not a single value but a range. For Microsoft Exchange, the I/O operations are generally small, from 4KB to 16KB. OLTP database accesses using Microsoft SQL Server and Oracle databases use a mix of random 8KB read and write operations.

The I/O characteristics of a workload can be defined in terms of the ratio of read to write operations, the ratio of sequential to random I/O access, and the data transfer size. A range of data transfer sizes can also be specified instead of a single value.
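To make this three-attribute characterization concrete, here is a small illustrative sketch (ours, not part of the original study; the class and field names are assumptions, and any value not stated in the text above is a placeholder):

from dataclasses import dataclass
from typing import Tuple

@dataclass
class IOWorkload:
    """A workload's I/O characteristics, per the three attributes described above."""
    read_pct: int                       # ratio of reads to writes, expressed as percent reads
    random_pct: int                     # ratio of random to sequential access, as percent random
    transfer_sizes_kb: Tuple[int, ...]  # data transfer size: a single value or a range

# Backup: a single access pattern of sequential reads (the block size here is illustrative).
backup = IOWorkload(read_pct=100, random_pct=0, transfer_sizes_kb=(64,))

# OLTP on SQL Server or Oracle: random 8KB reads and writes (the exact read/write
# split is not given in the text; 50/50 is an assumption).
oltp = IOWorkload(read_pct=50, random_pct=100, transfer_sizes_kb=(8,))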

    Test Cases

In this study, we characterize the performance of VMFS and RDM for a range of data transfer sizes across various access patterns. The data transfer sizes we selected were 4KB, 8KB, 16KB, 32KB, and 64KB. The access patterns we chose were random reads, random writes, sequential reads, sequential writes, or a mix of random reads and writes. The test cases are summarized in Table 1.

NOTE: The thick option is not the default when you create a virtual disk using the VI Client. To create virtual disks with the thick option, we used a command line program (vmkfstools). For details on using vmkfstools, see the VMware Infrastructure 3 Server Configuration Guide.

Table 1. Test cases

                        100% Sequential                 100% Random
100% Read               4KB, 8KB, 16KB, 32KB, 64KB      4KB, 8KB, 16KB, 32KB, 64KB
100% Write              4KB, 8KB, 16KB, 32KB, 64KB      4KB, 8KB, 16KB, 32KB, 64KB
50% Read + 50% Write    -                               4KB, 8KB, 16KB, 32KB, 64KB


    Load Generation

We used the Iometer benchmarking tool, originally developed at Intel and widely used in I/O subsystem performance testing, to generate I/O load and measure the I/O performance. For a link to more information, see Resources. Iometer provides options to create and execute a well designed set of I/O workloads. Because we designed our tests to characterize the relative performance of virtual disks on raw devices and VMFS, we used only basic load emulation features in the tests.

Iometer configuration options used as variables in the tests:

Transfer request sizes: 4KB, 8KB, 16KB, 32KB, and 64KB.

Percent random or sequential distribution: for each transfer request size, we selected 0 percent random access (equivalent to 100 percent sequential access) and 100 percent random access.

Percent read or write distribution: for each transfer request size, we selected 0 percent read access (equivalent to 100 percent write access), 50 percent read access (only for random access), and 100 percent read access.

Iometer parameters constant for all test cases:

Number of outstanding I/O operations: 8

Run time: 5 minutes

Ramp up time: 60 seconds

Number of workers to spawn automatically: 1
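Putting the variable and constant parameters together, the following sketch (our own illustration in Python; Iometer itself is configured through its own interface, not scripted this way) enumerates the 25 test cases that result:

from itertools import product

BLOCK_SIZES_KB = [4, 8, 16, 32, 64]

# (name, percent read, percent random) for each access pattern tested
ACCESS_PATTERNS = [
    ("sequential read", 100, 0),
    ("sequential write", 0, 0),
    ("random read", 100, 100),
    ("random write", 0, 100),
    ("random 50/50 mix", 50, 100),  # the 50/50 mix was run only for random access
]

# Constant Iometer parameters for every case (from the list above)
OUTSTANDING_IOS = 8
RUN_TIME_MIN = 5
RAMP_UP_SEC = 60
WORKERS = 1

for (name, pct_read, pct_random), size_kb in product(ACCESS_PATTERNS, BLOCK_SIZES_KB):
    print(f"{name:18s} {size_kb:2d}KB  read={pct_read}%  random={pct_random}%  "
          f"outstanding I/Os={OUTSTANDING_IOS}")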

    Performance Results

This section presents data and analysis of storage subsystem performance in a uniprocessor virtual machine.

    Metrics

The metrics we used to compare the performance of VMFS and RDM are I/O rate (measured as number of I/O operations per second), throughput rate (measured as MB per second), and CPU efficiency.

In this study, we report the I/O rate and throughput rate as measured by Iometer. We use an efficiency metric called MHz per I/Ops to compare the efficiencies of VMFS and RDM. This metric is defined as the CPU cost (in processor cycles) per unit I/O and is calculated as follows:

MHz per I/Ops = (average CPU utilization × CPU rating in MHz × number of cores) / (number of I/O operations per second)

We collected I/O and CPU utilization statistics from Iometer and esxtop as follows:

Iometer collected I/O operations per second and throughput in MBps.

esxtop collected the average CPU utilization of physical CPUs.

For links to additional information on how to collect I/O statistics using Iometer and how to collect CPU statistics using esxtop, see Resources.

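As a quick illustration of the MHz per I/Ops metric (a sketch of ours with made-up numbers; the study's actual inputs came from esxtop and Iometer):

def mhz_per_iops(avg_cpu_utilization: float, cpu_mhz: float, num_cores: int,
                 io_ops_per_sec: float) -> float:
    """CPU cost in MHz per I/O operation per second, as defined above."""
    return (avg_cpu_utilization * cpu_mhz * num_cores) / io_ops_per_sec

# Hypothetical example for a system like the test server (4 cores at 3.0 GHz = 3000 MHz):
# 10% average CPU utilization while sustaining 2,000 I/O operations per second.
print(mhz_per_iops(avg_cpu_utilization=0.10, cpu_mhz=3000, num_cores=4,
                   io_ops_per_sec=2000))  # -> 0.6 MHz per I/Ops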


    Performance

This section compares the performance characteristics of each type of disk access management. The metrics used are I/O rate, throughput, and CPU efficiency.

    Random Workload

Most of the enterprise applications that generate random workloads typically transfer small amounts of data in each I/O operation. For a small I/O data transfer, the time it takes for the disk head to physically locate the data on the disk and move towards it is much greater than the time to actually read or write the data. Any delay caused by the overhead in the application and file system layer becomes negligible compared to the seek delay. In our tests for random workloads, VMFS and RDM produced similar I/O performance, as is evident from Figure 2, Figure 3, and Figure 4.

Figure 2. Random mixed I/O operations per second (higher is better)

    Figure 3. Random read I/O operations per second (higher is better)

[Both charts plot I/O operations per second against data size (4KB to 64KB) for VMFS, RDM (Virtual), and RDM (Physical).]


    Figure 4. Random write I/O operations per second (higher is better)

    Sequential Workload

For sequential workloads, applications typically access consecutive data blocks. The disk seek time is reduced because the disk head has to move very little to access the next block. Any I/O queuing delay that occurs in the application and file system layer can affect the overall throughput. Accessing file metadata and data structures to resolve file names to physical data blocks on disks adds a slight delay to the overall time required to complete the I/O operation. In order to saturate the I/O bandwidth, which typically is a few hundred megabytes per second, applications have to issue more I/O requests if the I/O block size is small. As the number of I/O requests increases, the delay caused by file system overhead increases. Thus raw disks yield slightly higher throughput compared to file systems at smaller data transfer sizes. In the sequential workload tests with a small I/O block size, RDM provided 3 to 10 percent higher I/O throughput, as shown in Figure 5 and Figure 6.

However, as the I/O block size increases, the number of I/O requests decreases because the I/O bandwidth is saturated, and the delay caused by file system overhead becomes less significant. In this case, both RDM and VMFS yield similar performance.

    Figure 5. Sequential read I/O operations per second (higher is better)



    Figure 6. Sequential write I/O operations per second (higher is better)

Table 2 and Table 3 show the throughput rates in megabytes per second corresponding to the above I/O operations per second for VMFS and RDM. The throughput rates (I/O operations per second * data size) are consistent with the I/O rates shown above and display behavior similar to that explained for the I/O rates.

Table 2. Throughput rates for random workloads in megabytes per second

                 Random Mix                   Random Read                  Random Write
Data Size (KB)   VMFS    RDM (V)  RDM (P)     VMFS    RDM (V)  RDM (P)     VMFS    RDM (V)  RDM (P)
4                5.54    5.46     5.45        4.2     4.14     4.15        9.78    9.7      9.64
8                10.39   10.28    10.27       8.03    7.9      7.9         17.95   17.95    17.92
16               18.63   18.43    18.43       14.49   14.3     14.34       31.58   31.2     31.06
32               31.04   30.65    30.65       24.7    24.39    24.38       50      49.24    49.45
64               46.6    45.8     45.78       38.11   37.69    37.66       71      70.15    70.01

Table 3. Throughput rates for sequential workloads in megabytes per second

                 Sequential Read              Sequential Write
Data Size (KB)   VMFS     RDM (V)   RDM (P)   VMFS     RDM (V)   RDM (P)
4                87.53    90.66     95.17     106.14   107.47    109.56
8                161.04   168.47    177.44    172.38   173.81    174.86
16               300.44   311.53    316.92    221.8    226.21    229.05
32               435.92   437.94    437.47    252.74   265.89    271.2
64               436.88   438.04    438.06    308.3    315.63    317
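For reference, the relationship between the I/O rates in the figures and the throughput rates in these tables is just multiplication by the data size. A small sketch of ours (the I/O rate below is made up, and it assumes 1MB = 1024KB, which the study does not state):

def throughput_mb_per_sec(io_ops_per_sec: float, data_size_kb: float) -> float:
    """Throughput = I/O operations per second * data size (KB converted to MB)."""
    return io_ops_per_sec * data_size_kb / 1024

# Hypothetical example: about 1,100 random 16KB operations per second
print(throughput_mb_per_sec(1100, 16))  # roughly 17.2 MB/s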

    CPU Efficiency

CPU efficiency can be computed in terms of CPU cycles required per unit of I/O or unit of throughput (byte). We obtained a figure for CPU cycles used by the virtual machine for managing the workload, including the virtualization overhead, by multiplying the average CPU utilization of all the processors seen by ESX Server, the CPU rating in MHz, and the total number of cores in the system (four in this case). In this study, we measured CPU efficiency as CPU cycles per unit of I/O operations per second.

CPU efficiency for various workloads is shown in Figures 7 through 11. For random workloads with smaller I/O block sizes (4KB and 8KB), CPU efficiency observed with both VMFS and RDM is very similar. For I/O block sizes greater than 8KB, RDM offers some improvement (7 to 12 percent). For sequential workloads, RDM offers an 8 to 12 percent improvement.

As with any file system, VMFS maintains data structures that map file names to physical blocks on the disk. Each file I/O requires accessing the metadata to resolve file names to actual data blocks before reading data from or writing data to a file. The name resolution requires a few extra CPU cycles every time there is an I/O access. In addition, maintaining the metadata also requires additional CPU cycles. RDM does not require any underlying file system to manage its data. Data is accessed directly from the disk, without any file system overhead, resulting in lower CPU cycle consumption and higher CPU efficiency.

    Figure 7. CPU efficiency for random mix (lower is better)

    Figure 8. CPU efficiency for random read (lower is better)

    Figure 9. CPU efficiency for random write (lower is better)

    Figure 10. CPU efficiency for sequential read (lower is better)

[Each of Figures 7 through 10 plots efficiency in MHz per I/Ops against data size (4KB to 64KB) for VMFS, RDM (Virtual), and RDM (Physical).]


    Figure 11. CPU efficiency for sequential write (lower is better)

    Conclusion

VMware ESX Server offers three options for disk access management: VMFS, RDM (Virtual), and RDM (Physical). All the options provide distributed file system features like user friendly persistent names, distributed file locking, and file permissions. Both VMFS and RDM allow you to migrate a virtual machine using VMotion. This study compares the three options and finds similar performance for all three.

For random workloads, VMFS and RDM produce similar I/O throughput. For sequential workloads with small I/O block sizes, RDM provides a small increase in throughput compared to VMFS. However, the performance gap decreases as the I/O block size increases. VMFS and RDM operate at similar CPU efficiency levels for random workloads at smaller I/O block sizes. For sequential workloads, RDM shows improved CPU efficiency.

The performance tests described in this study show that VMFS and RDM provide similar I/O throughput for most of the workloads we tested. Most enterprise applications, such as Web servers, file servers, databases, ERP, and CRM, exhibit I/O characteristics similar to the workloads we used in our tests and hence can use either VMFS or RDM for configuring virtual disks when run in a virtual machine. However, there are a few cases that require the use of raw disks. Backup applications that use such inherent SAN features as snapshots, or clustering applications (for both data and quorum disks), require raw disks. RDM is recommended for these cases. We recommend use of RDM for these cases not for performance reasons but because these applications require lower level disk control.

    Configuration

This section describes the hardware and software configurations we used in the tests described in this study.

    Server Hardware

Processors: 2 dual-core Intel Xeon processor 5160, 3.00GHz, 4MB L2 cache (4 cores total)

Memory: 8GB

Local disks:

1 Seagate 146GB 10K RPM SCSI (for ESX Server and the guest operating system)

5 Seagate 146GB 10K RPM SCSI in RAID 0 configuration (for the test disk)

    Software

Virtualization software: ESX Server 3.0.1 (build 32039)

    Guest Operating System Configuration

Operating system: Windows Server 2003 R2 Enterprise Edition 32 bit, Service Pack 2, 512MB of RAM, 1 CPU

Test disk: 10GB unformatted disk




    Iometer Configuration

Number of outstanding I/Os: 8

Ramp up time: 60 seconds

Run time: 5 minutes

Number of workers (threads): 1

Access patterns: random/mix, random/read, random/write, sequential/read, sequential/write

Transfer request sizes: 4KB, 8KB, 16KB, 32KB, 64KB

    Resources

To obtain Iometer, go to http://www.iometer.org/

For more information on how to gather I/O statistics using Iometer, see the Iometer users guide at http://www.iometer.org/doc/documents.html

To learn more about how to collect CPU statistics using esxtop, see the chapter "Using the esxtop Utility" in the VMware Infrastructure 3 Resource Management Guide at http://www.vmware.com/pdf/vi3_301_201_resource_mgmt.pdf

For a detailed description of VMFS and RDM and how to configure them, see chapters 5 and 8 of the VMware Infrastructure 3 Server Configuration Guide at http://www.vmware.com/pdf/vi3_301_201_server_config.pdf