
Virtualized SAP Performance with VMware vSphere 4

WHITE PAPER


Table of Contents

Introduction
Experiment Configuration and Methodology
    Workload Description
    Experimental Setup
        SAP Server
        Load Generator
        Storage
Results
    Hardware Nested Paging
    Transaction Processing (Windows)
    Memory Throughput Load (Linux)
    Scale-Up
    Overcommit Fairness
    Large Pages
Recommendations
Conclusion
Acknowledgments
About the Author
References


Introduction

SAP provides a range of enterprise software applications and business solutions to manage and run the business processes of an entire company. The mission-critical role of this software requires reliability, manageability, high performance, and scalability.

VMware vSphere 4 can help customers manage an SAP application landscape more intelligently via template-based virtual machine cloning, distributed resource scheduling and power management (VMware DRS and DPM), and improved uptime through VMotion, VMware HA, and Fault Tolerance. However, these management and reliability features would be less appealing if performance were to suffer. This paper demonstrates that vSphere supports virtualized SAP Enterprise Resource Planning software with excellent performance and scalability. By supporting physical CPU features such as large pages, AMD Rapid Virtualization Indexing, and Intel Extended Page Tables, vSphere exhibits low overhead when running SAP with even its most MMU-intensive memory models. VMware Virtual SMP support allows SAP software to scale up (utilize more CPUs) with up to 95 percent of physical machine performance.

This paper includes results of several experiments using vSphere and SAP software with both the Microsoft Windows Server 2008 and SUSE Linux 10.2 operating systems. It presents performance gains due to vSphere's support for hardware nested page tables, performance data from scale-up scenarios, and the efficiency of vSphere's resource management. You will also find best-practice recommendations based on these experiments.

Experiment Configuration and Methodology

Many performance and sizing studies were conducted in VMware's labs to measure, analyze, and understand the performance of SAP in virtual environments. The following sections describe the workloads used for the majority of the tests and present the hardware and software setup.

Workload Description

The performance of SAP on vSphere was studied with a popular online transaction processing workload used to size SAP deployments. The workload simulates sales and distribution scenarios with concurrent users creating orders, entering delivery information, posting a goods issue, and performing other typical sales and distribution tasks. There are 10 seconds of think time between transactions.

SAP deployments may be configured in a two-tier or three-tier configuration. All experiments in this paper used a two-tier configuration in which the database and application server share the same physical host or virtual machine. An automated load generator served as the presentation tier. The load generator slowly ramps up the number of concurrent users until a preset maximum is reached. After ramp-up is complete, response times are measured for 10-15 minutes, and the number of concurrent users that can be served while maintaining an average response time below two seconds is recorded. The experiments were conducted in good faith and run to completion without errors or warnings, but the results in this paper have not been certified by SAP.

Experimental Setup

The two-tier setup consisted of 1) a database and application server (SAP Server) and 2) a presentation server (Load Generator). The following sections describe the hardware and software stacks for each server, including the storage area network (SAN) configuration.

SAP Server

In addition to tests with virtual machines, tests were run in a physical environment to provide context for the virtual results. For an apples-to-apples comparison, identical hardware was used for both physical and virtual tests. The server was rebooted to enter either the Windows environment for physical tests or the vSphere environment for virtual tests.

Server: Dell PowerEdge 2970
Processors: 2 AMD Opteron 2356 processors at 2.3GHz (8 cores total)
    Third-generation AMD Opteron processors implement two technologies designed to assist virtualization. Instruction set extensions simplify execution virtualization, and Rapid Virtualization Indexing (RVI) provides capabilities for nested memory translation [1].
Memory: 32GB
Virtualization Software: VMware vSphere 4 (Release Candidate)
Operating System: Windows Server 2008 Enterprise x64 Edition (Build 6001 SP1)


Database: Microsoft SQL Server 2005 x64 SP2
SAP: ECC 6.0 / NetWeaver 7.0 SR2 with 64-bit Unicode kernel (7.00 PL146)

The server's local storage had two partitions:
NTFS: Physical system disk (Windows, VMware Tools, SAP, and SQL Server)
vmfs3: Virtual machine system disk (Windows, SAP, and SQL Server)

Load Generator

A physical machine was used as the load generator. It emulated multiple users interacting with the SAP Server.

Server: Dell PowerEdge 1950
Processors: 2 Intel Xeon 5160 processors at 3GHz (four cores total)
Memory: 8GB RAM
Operating System: 64-bit Windows Server 2008 Datacenter Edition SP1
Disk: 1 x 10K RPM 146GB disk drive

Storage

The data and log files of the database were deployed on an EMC CX3-40 SAN (firmware version R4F0) with the configuration details shown in Table 1. The database was stored on a logical unit formatted with NTFS and exposed to the virtual machine via raw device mappings so that the same disks and files could be used in both the physical and virtual environments (an illustrative command follows Table 1).

Table 1: Storage array configuration

RAID Group | RAID Level | Disks (15K RPM, 146GB, 4Gbps Fibre Channel) | LUNs
0          | 0          | 15                                          | Database data files
1          | 0          | 15                                          | Database log files, Windows swap file
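For reference, a raw device mapping of this kind is typically created with the vmkfstools utility on the ESX host; the device identifier and datastore path below are placeholders rather than the values used in these tests:

    vmkfstools -r /vmfs/devices/disks/<device-id> /vmfs/volumes/<datastore>/SAPDB/sapdb-rdm.vmdk

The -r option creates a virtual compatibility mode mapping; consult the vSphere storage documentation for physical compatibility mode and other options.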


Figure 1: Two-tier SAP application load test architecture

The hardware and software were connected as shown in Figure 1. Network and storage links are labeled with the observed traffic. The link capacity greatly exceeds the bandwidth generated by the workload.

Results

Using the testbed described above, the performance benefit of nested paging is highlighted first. The next result focuses on the scale-up behavior of SAP in physical and virtual environments. Next, scheduler fairness is demonstrated when the host server is overcommitted. Finally, it is shown how the use of large pages can improve SAP performance.

Hardware Nested Paging

AMD Rapid Virtualization Indexing (RVI) and Intel Extended Page Tables (EPT) are hardware technologies that support the multiple layers of memory address space translation needed to run virtual machines. Prior to these technologies, vSphere translated memory addresses in software with a technique called shadow page tables. Shadow page tables may incur time and space overhead because vSphere must monitor the page tables of a guest operating system to ensure coherence with the underlying mapping to the physical host's memory.

Because RVI and EPT move virtual machine page table management from software to hardware, VMware often refers to the support for this technology as Hardware MMU support. A Hardware MMU typically offers higher performance for workloads that heavily manipulate the guest operating system's page tables. If these workloads also exhibit significant translation lookaside buffer (TLB) misses, however, performance may suffer. This can be mitigated by the use of large pages, as discussed in Large Pages. A more complete discussion of the trade-offs between a Hardware MMU and traditional Software MMUs on various workloads may be found in VMware white papers [2, 3].

This section focuses on the effect of a Hardware MMU on SAP performance in a virtual machine.

[Figure 1 depicts the two-tier architecture: the SAP application server and MS SQL Server run on Windows Server in a VMware vSphere 4 virtual machine on an AMD Opteron 2356 host with RVI; the load generator connects over a 100 MB/sec Ethernet link (observed traffic: 2.4 MB/sec transmit, 0.3 MB/sec receive), and the SAN is attached via Fibre Channel (observed load: 100 IOPS).]


Transaction Processing (Windows)

SAP can be configured to use two memory models in Windows: View and Flat. A memory protection (mprotect) option for the Flat mode results in three effective modes, but SAP recommends either the View or the default Flat model (mprotect enabled) for production use [4, 5]. To show the impact of hardware support for MMU virtualization, the MMU virtualization technique was manually toggled using the Virtual Infrastructure Client GUI (see note 1). Figure 2 shows the number of users achieved with each memory model in a 2-CPU experiment. The results are normalized to the highest user count achieved in a physical (native) environment.

Figure 2: vSphere's Hardware MMU support minimizes overhead for SAP's production memory models on Windows (View and Flat with mprotect).

vSphere's ability to support hardware nested page tables (RVI in this case) markedly improved performance in the two production memory models. A two-way virtual machine achieved 89 percent and 90 percent of native performance for the View and Flat (mprotect=true) modes, respectively. This represents a 15 percent and 20 percent improvement over the Software MMU performance.

For completeness, the figure includes data showing performance using the unsupported Flat memory model with mprotect disabled. This mode is often used during benchmarks by server vendors to assess peak performance. There are fewer page table manipulations in this mode, so the Hardware MMU provides little benefit. In fact, the higher cost of a TLB miss with the Hardware MMU is enough to slightly reduce performance compared to running with the traditional Software MMU. TLB miss costs may change in future hardware generations, altering the hardware-software gap.

Another important take-away from Figure 2 is that the difference between the maximum and minimum user counts is less than 6 percent in native and in optimally configured virtual environments. This is a much better result than tests done on older hardware with Windows 2003. Those tests showed higher virtualization overhead in the View mode. They also saw a dramatic decrease in absolute performance on both native and virtual machines when mprotect was set to true, though the relative performance was similar to current results. The Hardware MMU has addressed the higher-than-desired View virtualization overhead, and changes in Windows have greatly improved the absolute performance with mprotect enabled.

[Figure 2 plots the number of users (percent of maximum) for the Flat (mprotect=false), View, and Flat (mprotect=true) memory models under Native, vSphere (Hardware MMU), and vSphere (Software MMU) configurations.]

Note 1: Edit Settings... > Options tab > Advanced: CPU/MMU Virtualization
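For scripted or bulk configuration, the same choice can also be expressed as an advanced virtual machine option. The parameter below is an assumption based on common vSphere 4 tuning guidance, so verify the exact name and accepted values against the vSphere documentation before relying on it:

    monitor.virtual_mmu = "software"     (force the Software MMU)
    monitor.virtual_mmu = "hardware"     (force the Hardware MMU)
    monitor.virtual_mmu = "automatic"    (let vSphere choose; the default)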


Memory Throughput Load (Linux)

The impact of vSphere's Hardware MMU support was also tested using the EPT implementation from Intel. The hardware and software used for these tests differ from those described above.

Server: SuperMicro X8DTN
Processors: 2 Intel Xeon X5570 processors at 2.93GHz (8 cores total)
    This processor implements Intel VT-x to assist virtualization of the CPU and Extended Page Tables (EPT) to provide capabilities for nested memory translation.
Memory: 72GB
Virtualization Software: VMware vSphere 4 (Release Candidate)
Operating System: x64 SUSE Linux 10.2
Database: MaxDB 7.7.04
SAP: ECC 6.0 / NetWeaver 7.0 SR2 with 64-bit Unicode kernel
Storage: iSCSI filer with 24 1TB disks in a RAID 5 array; ten 300GB LUNs

Instead of the transaction processing workload, the memory_load script was run from the SAP Linux Certification Suite (SLCS), a stress test that heavily loads an SAP system. According to documentation supplied with the SLCS, memory_load generates a 300MB internal ABAP table and writes random data into this table. It then loops over this table for 900 seconds, reading one data set at random during each loop. The overall number of loops per second is reported. For every available CPU in the system, the script configures three dialog work processes, and each process runs one report. For example, when the server has two CPUs, the script configures a total of six work processes, allocating approximately 1800MB [6]. The results have a 3% margin of error.

SAP on Linux may be run in two memory modes: Standard (Std) and Map, with Std being the default for 64-bit Linux installations. The difference between the first and third bars in Figure 3 shows the impact of the Hardware MMU (EPT) on this memory-intensive workload. Specifically, running a virtual machine with a Hardware MMU increased the loops-per-process in Std mode by 82 percent of the amount achieved with the Std mode using a Software MMU. Note that VMware's support for CPUs with a Hardware MMU means that the Map mode is not recommended when a Hardware MMU is available.

Figure 3: EPT performance on Linux

[Figure 3 plots loops per process for three configurations: Std + Hardware MMU, Map + Hardware MMU, and Std + Software MMU.]


Scale-Up

To measure scale-up performance, the sales and distribution workload described earlier was run with an increasing number of CPUs in both a native and a virtual environment. The baseline consists of 1-, 2-, 4-, and 8-way experiments conducted in the native environment. The bcdedit tool was used to force Windows to boot into the configurations shown in Table 2 (an illustrative set of commands follows the table). In the virtual environment, a virtual machine was configured to use corresponding processor counts and memory.

Table 2: Physical machine boot configurations

Number of CPUs | Memory (MB)
1              | 5,500
2              | 9,500
4              | 15,000
8              | 28,000
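As an illustration, the 2-CPU row of Table 2 might be produced with commands along these lines from an elevated command prompt; numproc and truncatememory are standard bcdedit options, but the values shown here are examples only and should be verified against the Windows Server 2008 documentation (truncatememory takes a byte address, here 9,500 MB):

    bcdedit /set numproc 2
    bcdedit /set truncatememory 9961472000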

To allow the SAP software to take advantage of multi-core CPUs, it can be configured with multiple instances. One instance was used for each of the n cores in the configuration: a central instance and n-1 dialog instances. The central instance comprised two dialog processes, one update process, and one enqueue process (along with a batch and a spool process, which were mostly idle). Each dialog instance had three dialog processes and one update process. Affinity was set in the SAP instance profiles so that each instance ran on its own processor. The database ran on the same CPU as the central instance. This configuration is similar to that used by certified SAP two-tier benchmarks on similar hardware [7]. SAP's Flat memory model with mprotect disabled was used for peak performance. The VMs were configured to use the Software MMU, as it provided the best performance for this memory model.
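As a sketch only, a dialog instance along the lines described above might carry work process settings such as the following in its instance profile; these are standard SAP profile parameters, but the values are illustrative and the affinity settings used in the tests are not reproduced here:

    rdisp/wp_no_dia = 3     (three dialog work processes)
    rdisp/wp_no_vb  = 1     (one update work process)
    rdisp/wp_no_btc = 0
    rdisp/wp_no_spo = 0
    rdisp/wp_no_enq = 0     (the enqueue work process runs only in the central instance)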

Figure 4 shows the performance of each configuration normalized to the number of users supported in a native uniprocessor system. It shows that the native SAP system scaled almost linearly from 1 to 4 instances, and the virtual machine performed almost as well. Native and virtual performance did not scale as well to eight CPUs, but still exhibited sizable performance gains.

Figure 4: SAP scale-up in physical and virtual environments

[Figure 4 plots the number of users (percent of the 1-PCPU result) against the number of physical (Native) or virtual (vSphere) CPUs: 1, 2, 4, and 8.]


Table 3: Ratio of supported users (Virtual / Native)

(V)CPUs | Ratio to Native
1       | 94%
2       | 94%
4       | 95%
8       | 85%

Table 3 indicates the near-native performance of SAP in a virtual machine. Note that this table is slightly different from Figure 4. The table compares the number of users supported on configurations of similar size, while the figure normalizes each result by the number of users in a native uniprocessor configuration. The high ratio of virtual to physical performance, especially in 1-4 VCPU virtual machines, presents the administrator with many configuration possibilities. In future work, VMware plans to examine a scale-out scenario in which several of the smaller, most efficient virtual machines work together on this sales and distribution workload.

Overcommit Fairness

When multiple identical virtual machines are running, one expects identical performance from each. To verify this property, multiple 4-way, 16GB virtual machines were instantiated on an 8-way server to create CPU overcommit scenarios of 100 percent, 150 percent, and 200 percent; memory was not overcommitted. Then the memory_load test was run in each virtual machine and the throughput, measured in loops-per-process, was observed. Figure 5 shows the effect of overcommitting the CPU resources of the machine using the hardware and software from Memory Throughput Load (Linux). The total height of each bar represents aggregate throughput, while the components of the bar show the contribution from each virtual machine. The fairness properties of the vSphere scheduler are clearly seen: when equal shares are specified, no virtual machine received substantially more CPU time than its peers.

Figure 5: CPU overcommitment performance on Linux

[Figure 5 plots aggregate loops per process at 100, 150, and 200 percent overcommitment (VCPU/PCPU * 100), with the contribution of each of VM 1 through VM 4 stacked.]


Before interpreting the aggregate throughput result, it is important to note that the hardware platform has a non-uniform memory access (NUMA) topology. Memory access time is reduced when a processor is on the same NUMA node as the memory chip it is accessing. The server used for these tests has two NUMA nodes, each with four cores and 36GB of memory. A virtual machine with 4 VCPUs and 16GB of memory fits on a single NUMA node with memory to spare.

If one examines the aggregate performance shown in the figure, an interesting behavior is seen. Whether run on two virtual machines (80% host CPU utilization) or three (100% host CPU utilization), performance was roughly the same. Surprisingly, adding a fourth virtual machine increased aggregate performance. While one might expect the addition of the third virtual machine to have increased performance, the additional host CPU capacity was likely cancelled out by the costs of migration and remote memory access time. To detect such a problem, one can refer to the NUMA statistics on esxtop's memory screen and the CPU event counts (migration rate) on esxtop's CPU screen.
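As a rough sketch of that check from the ESX console (interactive keys may vary between releases, so confirm with the esxtop documentation):

    esxtop      (start the interactive monitor)
    m           (switch to the memory screen; press 'f' to enable the NUMA statistics fields)
    c           (switch to the CPU screen to watch the migration-related event counts)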

With three 4-way virtual machines and two physical 4-way nodes, vSphere periodically migrated the virtual machines to ensure fairness. This introduced a memory access time penalty: local memory became remote when a virtual machine migrated to the other node. When a fourth virtual machine was added, no internode migration was needed to ensure fairness, and VMware ESX allocated memory on the same node that runs each virtual machine. It is believed that the benefit of local memory access and the lack of migrations in the four-virtual-machine case improved the aggregate performance compared to the three-virtual-machine case, even in the presence of overcommitted resources.

Local memory is preferred even in undercommitted situations due to its lower access time. Thus, when running on a NUMA system, one should strive to specify a virtual machine memory size that is less than the amount of memory on a NUMA node. Furthermore, choose a VCPU count that is less than or equal to the number of processors per node. This configuration allows vSphere 4 to employ NUMA optimizations for memory and CPU scheduling and ensures that all memory accesses will be satisfied by the memory closest to the processor.

Large Pages

Computers manage memory in groups of bytes called pages. The x86 platform supports several page sizes, the most common being 4KB (small) pages and 2MB (large) pages. When an application accesses a large memory region frequently or accesses a significant amount of memory within the large region, it can be advantageous to work with large pages. This allows circuits in the CPU to access the memory faster. In a virtual environment, hypervisor support is needed to fully realize this performance boost [8].

By default, ESX will use large pages for all memory that it manages when running on a platform with hardware nested page tables. This mitigates the extended TLB miss penalty that is possible on such hardware. Even when the hypervisor has backed a guest physical page with a large host machine page, configuring the guest to use large pages can offer further benefits and is always advised. Properly configuring the guest is essential for large page performance with hardware that does not implement RVI or EPT. For such servers, vSphere defers to the guest operating system for hints as to which pages should be backed large.

To see the value of vSphere's large page support, the virtual machine's operating system and applications were configured as described below prior to running the sales and distribution application load test (see note 1). The number of supported users was compared against an execution using the same virtual machine configuration but with large pages disabled using vSphere advanced configuration settings (see note 2).

Windows: Use the Group Policy Editor to give the SAP process owner rights to "Lock pages in memory"
MS SQL Server: Automatic if the virtual machine's memory size is at least 8GB
SAP: Profile setting em/largepages = yes (see note 3)

Note 1: Linux may also be configured for large pages. Contact your OS vendor for details.
Note 2: esxcfg-advcfg -s /LPage/LPageAlwaysTryForNPT 0
        esxcfg-advcfg -s /Mem/AllocGuestLargePage 0
Note 3: This setting is not supported in production deployments (applicable in benchmarking configurations only). It is used here to provide an upper bound on the benefit of large page support.
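For completeness, a sketch of how the host settings from note 2 might be checked and restored after such an experiment, assuming the default value for both options is 1 (as implied by the "Enabled (default)" label in Figure 6):

    esxcfg-advcfg -g /LPage/LPageAlwaysTryForNPT      (query the current value)
    esxcfg-advcfg -s /LPage/LPageAlwaysTryForNPT 1    (restore the default)
    esxcfg-advcfg -s /Mem/AllocGuestLargePage 1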


As shown in Figure 6, the SAP workload exhibits a marked performance benefit, supporting 12 percent more users, when using vSphere's large page support.

Figure 6: vSphere's support for large pages improves SAP performance by 12%

Recommendations

The results in this paper suggest that to run SAP in a virtual machine most efficiently, one should adopt the following best practices:

- Run with no more VCPUs than necessary.
- Use the newest hardware (e.g., AMD Opteron 2300/8300 Series or Intel Xeon 5500 Series) to exploit vSphere's support of hardware nested page tables.
- Limit virtual machine size to fit within a NUMA node.
- Configure the guest operating system and applications for large pages.
- If using a processor with hardware nested page tables (RVI or EPT) and Linux, choose the Std memory model.
- If using a processor with hardware nested page tables (RVI or EPT) and Windows 2008, convenience should dictate the choice of memory model, as it has only a minor effect on performance.

Conclusion

The mission-critical nature of SAP solution deployments requires a well-performing and scalable compute infrastructure to satisfy the service level agreement requirements of business users. The results in this document show how vSphere can help to meet the performance and scalability requirements of these SAP implementations.

This paper has demonstrated how CPUs that support nested paging in hardware can boost the performance of an SAP workload in a vSphere virtual machine. Experiments that compare native and virtual performance have shown the excellent scale-up properties of SAP software on vSphere. The fairness of vSphere's scheduler during a CPU overcommitment scenario has been confirmed, and vSphere's support for large pages has been highlighted. By following the recommendations above, one can expect virtualized SAP environments to exhibit nearly the same performance as environments running on physical systems.

[Figure 6 plots the number of supported users (percent) with vSphere large page support enabled (the default) versus disabled.]


Acknowledgments

Tuyet Pham performed the experiments used to characterize performance on the Windows/RVI platform, and Michael Hesse measured the performance of SAP on the Linux/EPT platform. Their significant contribution is greatly appreciated.

VMware thanks the SAP Benchmark team for its quick response whenever full disclosure information was requested. VMware thanks Intel for lending the hardware used for the EPT and CPU overcommitment experiments.

Thanks to Scott Drummonds, Aravind Pavuluri, Vas Mitra, Rajit Kambo, Priti Mishra, and Michael Hesse for their reviews of early drafts of this paper.

About the Author

Kenneth Barr is a Staff Engineer in VMware's Core Performance group, where he has studied virtualized SAP workloads, characterized Virtual Desktop Infrastructure (VDI) performance, and helped tune memory resource management features. At VMworld 2008, he spoke about his work at sessions detailing SAP and VDI performance results and best practices. He also helped present a general overview of ESX Performance Best Practices.

Ken received Master's and Ph.D. degrees from MIT CSAIL's Computer Architecture Group. At MIT, Ken performed research on energy-aware lossless data compression and on techniques to accelerate simulation-based microprocessor design. Prior to his work at VMware, Ken's research appeared at various academic conferences, including several papers at the International Symposium on Performance Analysis of Systems and Software. Ken received his BSE in Computer Engineering from the University of Michigan.

References

[1] AMD Virtualization (AMD-V) Technology. http://www.amd.com/us-en/0,,3715_15781_15785,00.html. Retrieved April 14, 2009.
[2] Performance Evaluation of AMD RVI Hardware Assist. Technical report, VMware, 2009. http://www.vmware.com/resources/techresources/1079
[3] Performance Evaluation of Intel EPT Hardware Assist. Technical report, VMware, 2009. http://www.vmware.com/resources/techresources/10006
[4] Flat Memory Model on Windows. SAP Note Number 1002587.
[5] Memory protection and performance. SAP Note Number 807929.
[6] H. Kuhnemund. Documentation for SLCS v2.3.1. LinuxLab, SAP AG, Walldorf, May 2008. Release 0.7.
[7] SAP SD Standard Application Benchmark Results, Two-Tier Internet Configuration. http://www.sap.com/solutions/benchmark/sd2tier.epx. Retrieved May 15, 2009.
[8] Large Page Performance. Technical report, VMware, 2008. http://www.vmware.com/resources/techresources/1039.


VMware, Inc. 3401 Hillview Ave, Palo Alto, CA 94304 USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com

Copyright 2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMW_09Q2_WP_VSPHERE_SAP_P13_R1