
Storage
Furthermore: Geo Redundancy – Ceph – Rook – Project Delivery

01—2019


COMPETENCE FOR YOUR SECURITY – SINCE 1966

BERLIN CHEMNITZ HEILBRONN KARLSRUHE MANNHEIM MÜNCHEN NÜRNBERG PHILADELPHIA (USA) STUTTGART

The Dr. Hörtkorn group of companies is one of the largest owner-managed insurance brokers in Germany. With around 200 employees, we are your competent partner for corporate insurance and pension management.

More information at www.dr-hoertkorn.de, or arrange an appointment directly on 07131/949-0.

DR. FRIEDRICH E. HÖRTKORN GMBH | Oststraße 38 - 44 | 74072 Heilbronn | Phone 07131/949-0 | [email protected]


EDITORIAL

Storing Data

The cloud promises entry into the digital, virtual world. The old server racks are dismantled, the data is transferred, and users work on their laptops over the internet, wherever they like. Cloud providers promise that data and services are available around the clock, and they also guarantee that the data will never be lost.

In the cloud, of course, the stored data does not hang in mid-air; it is still kept on hard disks in server racks in data centres, which simply no longer have to be located near the people using the data. These data centres often hold the data of several departments, even of several companies. In public clouds, the data of countless private and commercial customers is stored side by side. And yet data security is guaranteed and enforced, so that only authorized users actually have access.

This requires special infrastructure in these data centres, but also special technology and software. This issue focuses on exactly that technology and software. We present various storage solutions (Ceph, Rook) as well as storage vendors (NetApp) that try to solve the challenges outlined above.

The goal of these solutions and offerings is to make optimal use of existing storage capacity. For companies aiming at a private cloud solution, the right tools make it possible to calculate an initial investment tailored to actual demand. After all, whoever can make optimal use of existing storage capacity does not have to invest in additional capacity.

Going beyond storage, we are starting the Project Delivery series. In it we cover topics that go hand in hand with cloudification: collaboration models, agile working, adapted processes, project delivery in the customer's interest without losing sight of the DevOps engineers involved, and more. Anyone who commits to the cloud enters more than just a new technical world. We want to help you take these steps not blindly, but consciously, with a plan, and with a long-term digital perspective.

Of course, we have again tested and evaluated further cloud offerings in detail. As always, you will find the test results at the end of the issue.

On behalf of Cloudibility as well, I wish you a successful, cloudy, and nevertheless sunny 2019! Enjoy reading!

Yours,
Friederike Zelke
Editor in Chief

IMPRINT

Publisher: Cloudibility UG, Kurfürstendamm 21, 10719 Berlin

Managing Directors: Michael Dombek and Karsten Samaschke

Head of Publication / Editor in Chief: Friederike Zelke

Editorial Team: Julia Hahn, Emelie Gustafsson

Online Editorial: Stefan Klose

Distribution: Linda Kräter

Advertising: Julia Hahn

Art Direction and Design: Anna Bakalovic

Production: Regina Metz, Andreas Merkert

Editorial contact: [email protected]

Advertising and distribution contact: [email protected] / [email protected] / [email protected]

Copyright © Cloudibility UG

the cloud report is published by Cloudibility UG, Kurfürstendamm 21, 10719 Berlin
Managing Directors: Michael Dombek and Karsten Samaschke
Phone: +49 30 88 70 61 718, e-mail: [email protected]
the-report.cloud
ISSN 2626-1200

The Cloud Report is published quarterly at the beginning of January, April, July, and October. It is available in two versions: an online edition, accessible via the homepage or as a download, and a printed edition that can be subscribed to for 20 euros per year via the personalized customer portal, where a personal account is set up. Please indicate during registration in which form you would like to receive the report. The subscription can be cancelled at any time via the subscriber's personal account, or by e-mail to [email protected] no later than two weeks before the next issue is published. We collect relevant personal customer data for the subscription. Individual issues can also be purchased without a subscription at a price of 5 euros; in this case, too, relevant personal data is collected to fulfil the purchase contract. Further information can be found at: http://the-report.cloud/privacy-policy


CONTENTS

[Figure: Time, Scope, Cost – Traditional Project Leadership vs. Digital Project Leadership]

EDITORIAL
Storing Data

COMMENTARY
Around the globe and into trouble

FOCUS
Stay ahead of the pack
Interview with Kim-Norman Sahm about Ceph
Rook more than Ceph
Ceph Day Berlin 2018

PROJECT DELIVERY
Mind-Set and Skill-Set for Digital Leadership

CONFERENCE REPORTS
My first OpenStack Summit
Cloud world in Mannheim

TESTS
We test clouds
And the Winner is …
Evaluation of the technical tests


Commentary

Around the globe and into trouble

As simple as it is to produce, consume, manipulate, and store data and applications in cloud environments, you need to ensure their availability, backup, and recovery. Surely, this can be handled by built-in mechanisms of the respective environments such as multi-location storage, backup, and restore – but what about data safety, data integrity, and privacy?

In modern cloud environments, data can easily be stored and backed up in multiple locations around the globe. This approach provides many advantages: protection from local disasters such as earthquakes or fires, faster availability of data to clients and customers in different regions of the world, decentralized and parallel processing of data, better utilization of resources, and so on. And it can be initiated easily, without having to program or to learn much about cloud technologies.

Brave new world, problems solved, cloud technology for the win!

But, unfortunately, things are not that simple and easy, at least from a non-technical point of view, as cloud vendors have to comply with the legal regulations at the location of the data centers. Which basically means: legal regulations, such as data privacy laws and the rights of local authorities, apply at each location.

And that makes things complicated, as most organizations and individuals are not aware of the consequences and implications of utilizing geo-redundancy. Such consequences could be: authorities of other countries may gain access to confidential or user-specific data; privacy regulations would no longer be fulfilled; data security might be hampered; business models and customers' trust would vanish if those consequences were not taken into account, mitigated, legally checked, and communicated properly.

Is geo-redundancy a no-go then?

Of course not, but a proper planning process needs to be established and executed, involving legal and data compliance teams and clarifying these aspects with the same priority as solving technical issues. Actually, a process like this needs to be executed before, while, and after solving technical issues, executing 3-2-1 backup strategies (3 copies of the data, on 2 storage media, at least 1 of them offsite), or processing any data in a cloud environment. This ongoing and permanent process is even more necessary when trying to utilize multi-cloud environments for better data isolation or to harness the advantages of specific cloud environments. And it needs to remain in place, considering the ever-changing nature of laws and regulations.


So, when trying to secure data by storing and processing it in cloud environments, it is not enough to just press a button or execute a command. It is not enough just to think of a backup strategy. It is not enough to solve a technical issue or to provide faster transport of data to users. It may even be dangerous, if not critical to a business, to simply store data in cloud environments or to utilize awesome technical capabilities such as Amazon S3, Azure Storage, blob and object stores, etc. It is not enough to hope to comply with GDPR or other privacy regulations – you actually need to comply with them, everywhere and anytime.

When doing business in cloud environments, governance and legal need to be involved. They need to be part of a – THE – process. Which again should make you think about how to set up and execute processes and ensure sustainability in your cloud strategy. DevOps and other modern collaboration approaches are required to create awareness of the problems and consequences of just "lifting and shifting" into cloud environments (which is far too often performed by setting up VMs, firewalls, and infrastructure, bringing in applications, and not realizing the implications of moving from self-owned and self-operated data centers into vendor-owned and vendor-operated cloud environments), or of simply implementing and setting up technical approaches.

Cloud is complex. It often encapsulates technical complexity, making approaches such as geo-redundancy and data replication as easy as clicking a button. But it cannot and will not abstract away legal problems and data-security aspects. When moving to and operating in cloud environments, your responsibility does not decrease; it actually increases, and therefore needs to be understood, accepted, and managed continuously and as part of a process.

As with all cloud solutions and approaches, you remain in command and you remain responsible for everything your organization creates, operates and stores.

Literally everywhere on the planet.

Karsten Samaschke
Co-Founder and CEO of Cloudibility
Kurfürstendamm 21, 10719 Berlin
[email protected]


FOCUS

Stay ahead of the pack and capture the full potential of your cloud business

In today's IT ecosystem, the cloud has become synonymous with flexibility and efficiency. Though, all that glitters is not gold, since applications with fixed usage patterns often continue to be deployed on-premises. This leads to hybrid cloud environments, creating various data management challenges. This article describes available solutions to tackle risks such as scattered data silos, vendor lock-ins, and lack-of-control scenarios.

The famous quote by Henry Ford, "If you always do what you've always done, you'll always get what you've always got", describes pretty accurately what happens once you stop challenging current situations to improve them for the future: you become rigid and restricted in your thinking, with the result of being unable to adapt to new situations. In today's business environment, data is considered the base that helps organizations succeed in their digital transformation by deriving valuable information that eventually leads to a competitive advantage. The value of data was also recognised by the Economist in May 2017, which stated that data has replaced oil as the world's most valuable resource. Why is that? The use of smartphones and the internet has made data abundant, ubiquitous, and far more valuable, since nowadays almost any activity creates a digital trace, no matter whether you are just taking a picture, making a phone call, or browsing the internet. Also, with the development of new devices, sensors, and emerging technologies, there is no doubt that the amount of data will keep growing. According to the IBM Marketing Cloud report "10 Key Marketing Trends For 2017", 90 % of the data in the world today has been created in the last two years, and the majority of it is unstructured. To get an understanding of this, figure 1 illustrates the number of transactions executed every 60 seconds for a variety of data-related products within the ecosystem of the internet.

Figure 1: 60 seconds on the internet (source: https://www.beingguru.com/2018/09/what-happens-on-internet-in-60-seconds-in-2018/)

Estimates suggest that by 2020 about 1.7 MB of new data will be created every second for every human on the planet, leading to 44 zettabytes of data (or 44 trillion gigabytes). The exploding volume of data changes the nature of competition in the corporate world. If an organization is able to collect and process data properly, the product scope can be improved based on specific customer needs, which attracts more customers, generating even more data, and so on. The value of data can also be illustrated with the Data-Information-Knowledge-Wisdom (DIKW) pyramid, referring back to the initial quote (figure 2). Typically, information is defined in terms of data, knowledge in terms of information, and wisdom in terms of knowledge; hence, data is considered the initial base for gaining wisdom. As a result, the key to success in the digital era is to maximize the value of data. That might mean improving the customer experience, making information more accessible to stakeholders, or identifying opportunities that lead to new markets and new customers.

All that glitters is not gold

Figure 2: DIKW pyramid (by Longlivetheux)

In addition to the observation that the quantity of data is growing exponentially, further challenges can be derived in the following three categories:

- Distributed: Data is no longer located at a single location such as your local data centre. Data relevant to enterprises is distributed across multiple locations.
- Diverse: Data is no longer available only in a structured format. As mentioned before, most of the data being created is unstructured data such as images, audio and video files, emails, web pages, social media messages, etc.
- Dynamic: Given the described increase in quantity, data sets grow quickly and can change over time. Hence, it is difficult to keep track of where the data is located and where it came from.

According to the IDC study "Become a Data Thriver: Realize Data-Driven Digital Transformation (2007)", leading digital organizations have discovered that the cloud, with its power to deliver agility and flexibility, has the ability to tackle the described challenges and is indispensable for achieving their digital transformation. Cloud computing therefore helps the business stay flexible and efficient in an ever-changing environment. It enables customers to deploy services or run applications with varying usage needs, allowing you to pay for what you need, when you need it. This realization leads most organizations to hybrid IT environments, in which data is generated and stored across a combination of on-premises, private cloud, and public cloud resources. The existence of a hybrid IT environment is probably the result of organic growth and might be more tactical than strategic. Different lines of business in the organisation are likely using whatever tools they need to get their jobs done, without involving the IT department. This approach creates numerous challenges for IT teams, such as knowing what data is where, protecting and integrating data, securing data and ensuring compliance, figuring out how to optimize data placement, and seamlessly moving data into and out of the cloud as needed. The question of data movement becomes even more crucial with regard to potential vendor lock-ins. To address those challenges, organizations must invest in cloud services while developing new data services that are tailored to a hybrid cloud environment. Deploying data services across a hybrid cloud can help organizations respond faster and stay ahead of the competition. However, all the data in the world won't do your organization any good if the people who need it can't access it. Employees at every level, not just executive teams, must be able to make data-driven decisions. Yet while trying to support their digital transformation by creating new and innovative business opportunities fuelled by distributed, diverse, and dynamic data sets, organizations often find their most valuable data trapped in silos, hampered by complexity, and too costly to harness (figure 3). To underline this statement, industry research from RightScale found that organizations worldwide are wasting, on average, a staggering 35 % of their cloud investment. Or, to put it in monetary terms, globally over $10 billion is misspent on the provisioning of cloud resources each year.

Every cloud has a silver lining

The ultimate goal in addressing the described problem should be that business data can be shared, protected, and integrated at the corporate level, regardless of where the data is located. Although organizations can outsource infrastructure and applications to the cloud, they can never outsource the responsibility they have for their business data. Organizations have spent years controlling and aligning the appropriate levels of data performance, protection, and security in their data centre to support applications. Now, as they seek to pull in a mix of public cloud resources for infrastructure and apps, they need to maintain control of their data in this new hybrid cloud. They need a single, cohesive data environment – a vendor-agnostic platform for on-premises and hybrid clouds – to give them control over their data. A cloud strategy is only as good as the data management strategy that underpins it, and if you can't measure it, you can't manage it. The starting point for establishing an appropriate cloud strategy is to gain insight into the available data in order to be able to control it. This implies that the data locations need to be identified, along with additional attributes concerning the performance, capacity, and availability the data requires and what the storage costs are. Afterwards, the data can be integrated with cloud data services, extending capabilities in the areas of backup and disaster recovery management, DevOps, production workloads, cloud-based analytics, etc.

Figure 3: isolated resources / data silos

Figure 4: NetApp Data Fabric

The following section describes a data management solution using NetApp's Data Fabric as an example, though there are a variety of vendors offering similar solutions. (Editor's note)

NetApp's Data Fabric (figure 4) empowers organizations to use data to make intelligent decisions about how to optimize their business and get the most out of their IT infrastructure. It provides essential data visibility and insight, data access and control, and data protection and security. With it, you can simplify the deployment of data services across cloud and on-premises environments to accelerate digital transformation and gain the desired competitive advantage.1 A short description of use cases in the areas of storage, analytics, and data provisioning within a hybrid cloud environment is given below, tackling the main issues described earlier.

1 The products related to NetApp’s Data Fabric can be found at cloud.netapp.com.

Cloud Storage

NetApp offers several services and solutions to address data protection and security needs, including:
- Backup and restore services for SaaS offerings such as Office 365 and Salesforce
- Cloud-integrated backup for on-premises data
- End-to-end protection services for hybrid clouds

NetApp Cloud Volumes Service offers consistent, reliable storage and data management with multi-protocol support for MS Azure, AWS, and Google Cloud Platform, enabling existing file-based applications to be migrated at scale and new applications to consume data and extract value quickly (figure 5).

Furthermore, it enables you to scale development activities in AWS and Google Cloud Platform, including building out developer workspaces in seconds rather than hours, and feeding pipelines to build jobs in a fraction of the time. Container-based workloads and microservices can also achieve better resiliency with persistent storage provided by Cloud Volumes Service.

Azure NetApp Files similarly enables you to scale development and DevOps activities in Microsoft Azure, all in a fully managed native Azure service. NetApp Cloud Volumes ONTAP® services enable developers and IT operators to use the same capabilities in the cloud as on-premises, allowing DevOps to easily span multiple environments.

Cloud Analytics

IT infrastructures are growing more complex, and administrators are asked to do more with fewer resources. While businesses depend on infrastructures that span on-premises and cloud, the administrators responsible for these infrastructures are left with a growing number of inadequate tools, which leads to poor customer satisfaction, out-of-control costs, and an inability to keep pace with innovation.

NetApp Cloud Insights is a simple-to-use SaaS-based monitoring and optimization tool designed specifically for cloud infrastructure and deployment technologies. It provides users with real-time data visualization of the topology, availability, performance, and utilization of their cloud and on-premises resources (figure 6).

Figure 5: multi-cloud use-case scenarios


Cloud Data Services – Cloud Sync

Transferring data between disparate platforms and maintaining synchronisation can be challenging for IT. Moving from legacy systems to new technology, server consolidation, and cloud migration all require large amounts of data to be moved between different domains, technologies, and data formats. Existing methods, such as relying on simplistic copy tools or homegrown scripts that must be created, managed, and maintained, can be unreliable or not robust enough and fail to address challenges such as:
- Effectively and securely getting a dataset to the new target
- Transforming data to the new format and structure
- The timeframe, and keeping the data up to date
- The cost of the process
- Validating that migrated data is consistent and complete

One of the biggest difficulties in moving data is the slow speed of data transfers. Data movers must move data between on-premises data centres, production cloud environments, and cloud storage as efficiently as possible.

NetApp Cloud Sync is designed specifically to address those issues, making use of parallel algorithms to deliver speed, efficiency, and data integrity. The objective is to provide an easy-to-use cloud replication and synchronisation service for transferring files between on-premises NFS or CIFS file shares, Amazon S3 object format, Azure Blob, IBM Cloud Object Storage, or NetApp StorageGRID (figure 7).

Figure 6: Cloud Insight performance dashboard

Figure 7: Integrate the cloud with your existing infrastructure

Erik Lau, Solutions Engineer

Erik has a love for technology and the ability to tie technical concepts back to underlying business needs. He works at NetApp as a Solutions Engineer, helping customers discover technical solutions that tackle demanding business challenges in the fields of cloud computing and data science.

NetApp Deutschland GmbH, Harburger Schloßstrasse 26, 21079 Hamburg


INTERVIEW

Interview with Kim-Norman Sahm about Ceph

Kim-Norman Sahm is Head of Cloud Technology at Cloudibility and works as an expert in OpenStack, Ceph, and Kubernetes. As a typical ops person, he is at home in the storage topic and has already implemented a number of Ceph projects. Storage options and capacities have always played a major role in IT, but with the move to the cloud these options are changing considerably. How they change, and how Ceph can be part of that, is what we got to the bottom of in this interview.

Why is storage an important topic? What is special about storage in the cloud? What has changed?
Storage has always been a topic. In the legacy world, the requirement was to store all information that accrued. All apps depended on having persistent storage available. In general, storage and compute resources were handled very generously: even for the smallest applications, oversized servers were often purchased, which typically remained 90 % unused. At the same time, systems were inflexible and divided into monolithic storage blocks. There were only a few storage vendors, and their offerings were often very expensive.

Over time, the trend shifted towards using resources more efficiently, both compute and storage. Virtualization was seen as the solution for using compute resources more efficiently. The storage side moved towards SDS (Software Defined Storage) solutions, which bring a whole range of advantages: cost efficiency, flexibility, elasticity, and more. Software solutions break up the hard boundaries of classic storage solutions and enable distributed systems and geo-redundancy, and, for example with Ceph, they avoid vendor lock-in, meaning the storage system no longer depends on a single manufacturer. In the cloud, storage has meanwhile become a service: the customer only wants to pay for what is actually used. This also brings advantages for the provider, who can use the available space flexibly and therefore efficiently.

Kim-Norman Sahm


With cloud-native applications, the mentality has also changed: only what has to be stored is stored, no longer everything. Most microservices, for example, are stateless; they no longer store any data themselves. So in this respect, too, storage is used efficiently.

How does Ceph come into play here?
Ceph is a Software Defined Storage solution. Originating from Sage Weil's doctoral thesis, Ceph managed to establish itself successfully in what was then a still sparsely populated Software Defined Storage market. The open source solution offers a highly available storage backend that runs on any x86 server hardware. Put simply, Ceph aggregates all physical disks in the cluster and provides them as a logical, highly available storage pool whose total capacity is the sum of all disks. This can then be divided into several logical pools, which are made available to the applications.

A big advantage of Ceph is that it can provide block, object, and file storage from a single backend. You are not in the situation of having to purchase a separate storage solution for each storage type (figure 1).

How does Ceph work?
When the Ceph project was started, the offering was limited to block and object storage. In contrast to other storage solutions, which connect the client to the storage system via gateway or proxy nodes, Ceph introduced a "no single point of failure" design from the very beginning. The Ceph architecture initially consisted of the Ceph Monitor (Mon) and the Ceph OSD (Object Storage Daemon).

Mons provide the cluster logic; there must be at least 3 and at most 11 monitors in a cluster, and their number must always be odd because of the quorum. The task of the Mons is to monitor the cluster state and to guarantee the highly available distribution of objects. For this purpose the Mons hold the CRUSH map, a kind of site plan of the objects. The actual payload data is stored on the OSD nodes; one OSD always represents exactly one physical disk. When a client wants to access block storage data, it first contacts one of the monitor nodes and requests the CRUSH map. Using this map and the CRUSH calculation algorithm, the client can work out on its own which OSDs hold the data it needs, and then contacts the corresponding OSD nodes directly (figure 2).

Figure 1: Ceph overview

"A big advantage of Ceph is that it can provide block, object, and file storage from a single backend."

When data is written, the system behaves analogously. To guarantee high availability, objects are replicated three times by the Ceph cluster (the Ceph default value): an object is written once, but it then exists three times in the cluster. The replication level is adjustable, but you are always balancing high availability against cost efficiency. The special aspect here is that the write is only acknowledged to the client once all replicas have been written. This, however, makes building geo-clusters difficult, because packet round-trip times can lead to problems; that is why there are no Ceph geo-clusters. Among other things, the Ceph project is currently working on asynchronous writes in order to make geo-clusters possible.

The Ceph project took a big step forward when the OpenStack community became aware of Ceph, which lends itself excellently as a backend for OpenStack Cinder (block storage) and as a replacement for OpenStack Swift (object storage). This development helped Ceph gain market share, and Ceph is still considered the standard storage backend for OpenStack today. To complete the storage trinity, Ceph introduced CephFS, a network-based filesystem whose client module has been part of the Linux kernel since version 2.6.

Why is Ceph used? What is it important for?
Thanks to its versatility, Ceph is suitable for many companies. With its great scalability, Ceph makes it possible to start with a small setup and let it grow with increasing demand and usage. Whether as a pure object store for backups and other applications, as a backend for private cloud solutions based on OpenStack or KVM, or as an NFS replacement for Linux clients, Ceph can be used flexibly. Thanks to its good integration with Kubernetes, using Ceph in the container world also becomes feasible.

In most management meetings, the main argument for introducing Ceph is the price advantage over commercial closed source enterprise storage solutions. Inexpensive server hardware and community software make it possible to start with low capex. Anyone who gets sleepless nights from running open source software with community support has the option of purchasing commercial support from the Linux distributors.

The subscription models are quite differentiated and should be examined thoroughly in advance. In general, as versatile as Ceph is, the challenge in day-to-day operations is correspondingly large. The ops team has to be up to the job.

Figure 2: Ceph flow

"With its great scalability, Ceph makes it possible to start with a small setup."

The interview was conducted by Friederike Zelke.

Source of figures 1 and 2: http://docs.ceph.com/docs/master/architecture/


FOCUS

Rook more than Ceph

Rook allows you to run Ceph and other storage backends in Kubernetes with ease. Storage, especially block and filesystem storage, can be consumed in Kubernetes-native ways. This allows users of a Kubernetes cluster to consume storage as easily as in "any" other standard Kubernetes cluster out there, allowing them to "switch" between Kubernetes offerings to run their containerized applications. Looking at storage backends such as Minio and CockroachDB, this can also potentially reduce costs for you, if you use Rook to simply run CockroachDB yourself instead of consuming it through your cloud provider.

Data and Persistence

Don't we all love the comfort of the cloud? Simple backup, and also sharing of pictures, for example. Let's ignore for now the privacy concerns of using a company for that instead of, e.g., self-hosting, which would be a whole other topic. I love being able to take pictures of my cats, the landscape, and my food, and to share those pictures. Sharing a picture with the world or just with your family is only a few clicks away. The best part: even my mother can do it.

Imagine the following situation: your phone has been stolen, and all your pictures in the cloud have been deleted due to a software bug. I, personally, would probably get a heart attack just thinking about it, as I am a person who likes to look at old pictures from time to time to remember events and friends from back then.

You may ask yourself what this has to do with "Data and Persistence". There is a simple answer: pictures are data, and the persistence is, well, in this case gone, because your data has been deleted.

Persistence of data has a different importance to each of us. A student in America may hope that the persistence of his student debt gets lost, while someone else may run a job agency that basically relies on keeping the data of its clients not only available and intact but also secure.

Storage: What is the right one?

Block storage
Block storage gives you block devices which you can format as you need, just like a "normal" disk attached to your system. It is used by applications such as MySQL, PostgreSQL, and more, which need the "raw" performance of block devices and the caching that comes with it.

Filesystem storage
Filesystem storage is basically a "normal" filesystem which can be consumed directly. It is a good way to share data between multiple applications that read and write a lot. It is commonly used to share AI models or scientific data between multiple running jobs or applications.

Technical note: if you have very old legacy applications which are not really 64-bit compatible, you might run into problems (with the stat syscall) when the filesystem uses 64-bit inodes.


Object storage
Object storage is a very cloud-native approach to storing data. You don't store data on a block device and/or filesystem; you use an HTTP API. The most commonly known offering in the object storage field is Amazon Web Services S3. There are also open source projects implementing (parts of) the S3 API to act as a drop-in replacement for AWS S3. Next to S3, there are other object store APIs/protocols, such as OpenStack Swift, Ceph RADOS, and more.

In the end it boils down to the needs of your applications, but I would definitely keep in mind what the different storage types can offer. Once you have narrowed down which storage type can be used, look into the storage software market to see which "additional" possibilities each product can give you for your storage needs.

Storage in a Cloud-Native world

In a cloud-native world, where everything is dynamic, distributed, and must be resilient, it is more important than ever to maintain the feature set of the storage that holds your customer data. It must be highly available at all times, resilient to the failure of a server and/or application, and able to scale to the needs of your application(s).

This might seem like an easy task if you are in the cloud, but even clouds have limits at a certain point. If you have special needs for anything in the cloud you are using, it will definitely help to talk with your cloud provider to resolve problems. Talking to your cloud provider(s) is important both before and while you are using their offering. As an example, if you experience problems with the platform itself, or scaling issues with, let's say, block storage, you can give them direct feedback about it and possibly work together with them to work out a fix for the issue, or they can offer another product which will be able to scale to your current and future needs.

Storage is especially problematic when it comes to scaling, depending on the solution you are running or using. Suppose your application in itself can scale without issues, but the storage runs into performance problems: in most cases you can't just add ten more storage servers and make the problem go away. "Zooming out" from storage as a topic to persistence in general, one must accept that there are always certain limits to persisting data. Be it the amount, speed, or consistency of data, there will always be a limit, or at least a trade-off.

Figure 1: Rook Architecture (source: https://rook.io/docs/rook/v0.9/ceph-storage.html)


A good example of such scaling limits is Facebook. To keep it short, Facebook at one point simply "admitted" that there will always be a delay during replication of data. They accept that when a user from Germany updates his profile, it can and will take up to 3-5 minutes before users from, e.g., Seattle, USA, are able to see those changes.

To summarize this section: your storage should be as cloud-native as your application. Talk with your cloud provider during testing and usage, and keep them in the loop when you run into issues. Also, don't try to push limits which can't be pushed at the current state of technology.

What can Rook offer for your Kubernetes cluster?

Rook can turn all or selected nodes into "Ceph storage servers". This allows you to use "wasted" space on the nodes your Kubernetes cluster runs on. Besides "just utilizing 'wasted' storage", you don't need to buy extra storage servers; you just keep that in mind when planning the hardware for the Kubernetes cluster (figure 1).

By running storage on the same nodes your applications can run on, the hyperconverged aspect is also kind of covered. You might not get more performance just because your application runs on the same node as your storage with Ceph, but the Ceph and Rook projects are aware of this and will possibly look into ways of improving it. Please note that Ceph's priority will always be consistency, even if speed needs to be sacrificed for that.

Ceph is not the only storage backend which can be run using Rook, but more on that later.

Rook Kubernetes integration

In Kubernetes you can consume storage for your applications through these Kubernetes objects: PersistentVolumeClaim, PersistentVolume, and StorageClass. Each of these objects has its own role. PersistentVolumeClaims are what users create to claim/request storage for their applications. A PersistentVolumeClaim is basically the user-facing side of storage in Kubernetes, as it stands for a PersistentVolume behind it. To enable users to consume storage easily through PersistentVolumeClaims, a Kubernetes administrator should create StorageClasses. An administrator can create multiple StorageClasses and also define one as a default. A StorageClass holds parameters which can be "used" during the provisioning process by the specific storage provider/driver.
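As a minimal sketch of how these objects fit together (the claim name and StorageClass name here are purely illustrative, not taken from the article), a PersistentVolumeClaim requesting storage from a named StorageClass could look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data                    # hypothetical claim name
spec:
  storageClassName: rook-ceph-block    # hypothetical StorageClass created by an administrator
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The claim stays pending until the provisioner behind the StorageClass creates a matching PersistentVolume and binds it, at which point the application can mount the claim like any other volume.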

You see, Rook enables you to consume storage the Kubernetes-native way. The way most operators implement their native Kubernetes integration is to "simply" watch for events happening to a certain selection of objects. "Events" are, e.g., that an object has been created, deleted, or updated. This allows the operator to react to certain "situations" and act accordingly: e.g., when a watched object is deleted, the operator could run its own cleanup routines, or, with Rook as an example, when the user creates a Ceph Cluster object, the operator begins to create all the components for the Ceph cluster in Kubernetes.

To be able to have custom objects in Kubernetes, Rook uses CustomResourceDefinitions. CustomResourceDefinitions are a Kubernetes feature which allows users to specify their own object types in their Kubernetes clusters. These custom objects allow the user to abstract certain applications/tasks; e.g., with Rook the user can create one Ceph Cluster object and have the Rook Ceph operator create all the other objects (ConfigMaps, Secrets, Deployments, and so on) in Kubernetes.
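For illustration only, a CustomResourceDefinition that registers such a Cluster kind could be sketched roughly like this (the definition actually shipped with Rook contains more detail; group, version, and kind follow the v1beta1 API discussed below):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # the name must be <plural>.<group>
  name: clusters.ceph.rook.io
spec:
  group: ceph.rook.io
  version: v1beta1
  scope: Namespaced
  names:
    plural: clusters
    singular: cluster
    kind: Cluster

Once such a definition is applied, the API server accepts objects of kind Cluster in the ceph.rook.io/v1beta1 API, and the operator can watch them like any built-in resource.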

On the topic of how Kubernetes mounts the storage to be consumed by your applications: if you have already heard a bit about storage for containers, you may have come across CSI (Container Storage Interface). CSI is a standardized API to request storage. Instead of having to maintain drivers per storage backend in the Kubernetes project, driver maintenance is moved to each storage backend itself, which allows faster fixes of driver issues. The normal process when there is an issue in an in-tree Kubernetes volume plugin is to go through the whole Kubernetes release process to get the fix out. The storage backend projects create a driver which implements the CSI driver interface/specification, through which Kubernetes and other platforms can then request storage.

ROOK IS A FRAMEWORK TO MAKE IT EASY TO BRING STORAGE BACKENDS TO RUN INSIDE OF KUBERNETES

For mounting Ceph volumes in Kubernetes, Rook currently uses the flexvolume driver, which may require a small configuration change in existing Kubernetes clusters. Using CSI with Rook Ceph clusters will hopefully be possible soon, once CSI support has been implemented in the Rook 0.9 release. Depending on how you see it, flexvolume is just the mount (and unmount) part of what CSI is.

Running Ceph with Rook in Kubernetes

Objects in Kubernetes describe a state; e.g., a Pod object contains the state (info) on how a Pod must be created (container image, command to be run, ports to be opened, and so on). The same applies to a Rook Ceph cluster object: it describes the user's desired state of a Ceph cluster in their Kubernetes cluster. Below is an example of a basic Rook Ceph Cluster object:
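(The manifest below is a minimal sketch; the metadata names are illustrative, while the API version, kind, and fields shown are the ones discussed in the following paragraphs.)

apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph          # illustrative name
  namespace: rook-ceph     # namespace watched by the Rook Ceph operator
spec:
  # directory on each node where Rook keeps configs and some state data
  dataDirHostPath: /var/lib/rook
  storage:
    # use every applicable node and every empty device found on it
    useAllNodes: true
    useAllDevices: true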

Without going into too much detail about the example Rook Ceph Cluster object here: it will instruct the Rook Ceph operator to use all nodes in your cluster as long as they are applicable (they don't have taints and/or other "restrictions" on them). For each applicable node it will try to use all empty devices, and it stores configs and some state data under the dataDirHostPath: /var/lib/rook.

If you searched through the Kubernetes API reference, you would find neither this API (ceph.rook.io/v1beta1) nor the object kind Cluster. As written in the previous section, user-defined APIs and objects (kinds) are introduced to the Kubernetes API by a CustomResourceDefinition. All CustomResourceDefinitions of Rook are created during the installation of Rook in your Kubernetes cluster.

Creating the above object in your Kubernetes cluster, with the Rook Ceph operator running, would cause the Rook Ceph operator to react to the event that an object of type/kind Cluster in the API ceph.rook.io/v1beta1 has been created.

Before going briefly into what the Rook Ceph operator does next, let me give a quick overview of what a "standard" Ceph cluster looks like. A Ceph cluster always has one or more Ceph Monitors, which are the brain of the cluster, and a Ceph Manager, which takes care of gathering metrics and doing other maintenance tasks. There are more components in a Ceph cluster, but let us focus on the third one, which, next to the Monitors and the Manager, is the most important: the component that will store your data. The Ceph Object Storage Daemon (OSD) is the component which "talks" to a disk or directory to store and serve your data.

The Rook Ceph operator will start and manage the Ceph Monitors, the Ceph Manager, and the Ceph OSDs for you. To store data in so-called pools in your Ceph cluster, the user can simply create a Pool object. Again, the Rook Ceph operator will take care of it and create a Ceph pool. The pool can then be consumed directly using a StorageClass and PersistentVolumeClaims to dynamically get PersistentVolumes provisioned for your applications.
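A rough sketch of that pattern, assuming the flexvolume-based block provisioner described above (the pool name, replica size, and StorageClass parameters are illustrative assumptions, not values prescribed by the article):

apiVersion: ceph.rook.io/v1beta1
kind: Pool
metadata:
  name: replicapool        # illustrative pool name
  namespace: rook-ceph
spec:
  replicated:
    size: 3                # keep three copies of each object
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block    # Rook's block storage provisioner
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
  fstype: ext4

A PersistentVolumeClaim that names this StorageClass, like the one sketched earlier, would then be provisioned out of the Ceph pool.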

This is how simple it is to run a Ceph cluster inside Kubernetes and consume the storage of the Ceph cluster.

Rook is more than just Ceph

Rook is a framework that makes it easy to bring storage backends to run inside of Kubernetes. The focus for Rook is not only on bringing in Ceph, which covers block, filesystem, and object storage, but also on persistence at a more application-specific level by running CockroachDB and Minio through a Rook operator. Thanks to the abstraction of complex tasks/applications through CustomResourceDefinitions in Kubernetes, it is as simple as deploying a Ceph cluster, as shown with the above code snippet.

To give a quick overview of the currently implemented storage backends besides Ceph, here is a list:
- Minio – Minio is an open source object store which implements the S3 API.
- CockroachDB – CockroachDB provides ultra-resilient SQL for global business. Rook allows you to run it through one object to ease the deployment of CockroachDB.
- NFS – NFS exports are provided through the NFS Ganesha server on top of arbitrary PersistentVolumeClaims.

For more information on the state and availability of each storage backend, please look at the "Project Status" section in the README file of the Rook GitHub project. Please note that not all the storage backends listed here are available in Rook version 0.8, which is the latest version at the time of writing; some are currently only in the latest development version, but a 0.9 release is targeted to happen soon.

Rook project roadmap
To give you an outlook on what may be coming up, here is a summary of the current Rook project roadmap:


- Further stabilization of the CustomResourceDefinition specifications and the managing/orchestration logic for:
  - Ceph
  - CockroachDB
  - Minio
  - NFS
- Dynamic provisioning of filesystem storage for Ceph
- Decoupling the Ceph version from Rook, to allow users to run "any" Ceph version
- Simpler and better disk management, to allow adding, removing, and replacing disks in a Rook Ceph cluster
- Adding Cassandra as a new storage provider
- An object storage user CustomResourceDefinition, to allow managing users by creating, deleting, and modifying objects in Kubernetes

There is more to come; for a more detailed roadmap, please look at the roadmap file in the Rook GitHub project.

How to get involved?
If you are interested in Rook, don't hesitate to connect with the Rook community and project in the ways listed below.

- Twitter – @rook_io
- Slack – https://rook-io.slack.com/
- For conferences and meetups: check out the #conferences channel
- Contribute to Rook:
  - https://github.com/rook/rook
  - https://rook.io/
- Forums – https://groups.google.com/forum/#!forum/rook-dev
- Community Meetings

For questions just hop on the Rook.io Slack and ask in the #general channel.


Alexander Trost
Rook Maintainer and DevOps Engineer
[email protected]


Ceph Day Berlin 2018

Anna Filipiak

Berlin. In the run-up to the OpenStack Summit, one day at the CityCube Berlin, 12 November 2018, was dedicated solely to the topic of Ceph. Cloudibility was there as well with three people, if only as visitors. Ceph Day Berlin was a full-day event dedicated to sharing the transformative power of Ceph and to fostering the vibrant Ceph community, and it was hosted by that community and its friends. Ceph is a scalable, open source, software-defined storage system that can fundamentally improve the economics and management of data storage for companies.

An estimated 350 participants from a wide range of language backgrounds attended on this beautiful autumn day, devoting themselves almost exclusively to the talks and, during the breaks and at the end of the event, to conversations with one another.

Sage Weil, Red Hat, founder and chief architect of Ceph, opened the event with a talk on the State of Ceph and took the opportunity to announce the founding of the Ceph Foundation, which is organized as a directed fund under the Linux Foundation. Its task is the financial support of the Ceph project community, and it serves as a forum for coordination activities and investments, providing the technical teams with guidance on the roadmap and on the further development of project governance.

In further talks, Ceph users such as the MeerKAT radio telescope / SKA Africa (Bennett, SARAO), CERN (Dan van der Ster), the Human Brain Project / StackHPC (Stig Telfer), and SWITCHengines (Simon Leinen) presented what they have learned from their implementations and how they work with them.

Partners and customers of Ceph also had their say, among them:
- Phil Straw, SoftIron
- Robert Sander, Heinlein Support
- Jeremy Wei, Prophetstor
- Sebastian Wagner and Lenz Grimmer, SUSE
- Martin Verges, croit
- Aaron Joue, Ambedded Technology
- Tom Barron, Red Hat

It was a successful event. The Ceph Days are organized by the Ceph community (and friends) in selected cities around the world and serve to promote this lively community.

In addition to Ceph experts, community members, and vendors, you also hear from production users of Ceph, who share what they have learned from their implementations.


PROJECT DELIVERY

Mind-Set and Skill-Set for Digital Leadership

The process of digital transformation is currently occupying Germany as an industrial location; numerous programme and project leaders1 are working intensively on establishing individual products or applications in new, digital contexts. The omnipresence of the term feels tiresome to many observers. Ultimately, however, it is evidence of the multi-layered nature of the technical, economic, and organizational change processes that project leaders and employees experience as digital transformation.

To shape this complexity effectively, project leaders need an adequate skill-set that enables them not only to organize the transformation process technically, but also to win the employees over to it. Using cloud computing as an example, this article outlines both the potential for change inherent in the technology and the resulting areas of competence required of project leaders.

1 In this article we use only one grammatical form, feminine or masculine, at a time, but of course we always mean everyone: female, male, diverse.

Enabling Technology

Cutting-edge technologies such as cloud computing open up new ways of organizing work, driving innovation, manufacturing products, and serving customers. The seemingly unlimited elasticity and flexibility of the technical infrastructures shake the traditional boundaries of what is entrepreneurially possible. Cloud environments create the preconditions and the design space for highly innovative processes and product solutions that were previously not economically feasible.

To turn this space of possibilities into a competitive advantage, all departments must be equally involved in shaping the cloud environment. This gives rise to new collaboration models and approaches, which are anchored in the company in parallel with the technical migration into cloud environments. Change in the context of an enabling technology is not only technical change, but also, and above all, cultural change within the company.

Digital Leadership

Traditionelle Führungskonzepte fo-kussierten die technische Kom-petenz der Projektleiterinnen. Der Projekterfolg lag vor allem in der substanziellen Steuerungsfähigkeit begründet. Die wirkungsvolle Imple-mentierung von Enabling Techno-logies wie Cloud erfordert jedoch ein beweglicheres Mind- und Skill-Set. Nicht die Planungs- und Steuerungs-arbeit steht im Mittelpunkt, son-dern die kommunikative Einbindung aller beteiligten Kollegen. Allein mit der Einhaltung von Terminplänen kann der Projekterfolg nicht sicher-gestellt werden. Ein wirtschaftlicher Mehrwert für das Unternehmen wird nicht durch das Abhaken von An-forderungsdokumenten oder Beauf-tragungen erreicht. Stattdessen müs-sen digital versierte Projektleiter alle Beteiligten dazu befähigen, die ver-änderte Technologie auch in neuen Kontexten zielführend und wirksam zu nutzen.

DIGITAL PROJECT LEADERS TURN LIMITATION INTO A POSITIVE IMPULSE



For enabling technologies to unfold their full effect, project leaders must empower the employees involved to become independent shapers and confident users of the technology. Personal strengths in the following four areas of competence help them do so:

Inspire to Grow
Transformation processes in cloud environments are complex tasks in a dynamic working and knowledge environment. The colleagues involved, not only in the IT departments, have to learn to deal with fundamentally new concepts and applications. Project leaders inspire the team to learn. They support the team on this path of personal growth and act as coaches, mentors and teachers.

Trustworthy Determination
The migration into cloud environments and the development of new collaboration models are profound changes in an organization.

A certain skepticism towards the demanding technology is understandable. Some colleagues may even fundamentally doubt that increasing flexibility of business processes is desirable or achievable. Digital project leaders reflect on this tension and include the skeptical employees in the communication as well. At the same time, they push the change process forward with determination. Their energy carries the team and the organization along. They tirelessly explain the meaning and significance of the project. Their openness and untiring communication, especially with the skeptics, creates a resilient basis of trust.

Focused Vision
Complex transformation processes cannot be planned in every detail. Many imponderables and dependencies influence the course of the project. Digital project leaders accept this uncertainty. They have a clear idea of the result. They set a clear direction and act flexibly and adaptively along the way.

Energetic Curiosity
In an environment of rapidly changing, specialized knowledge, even experienced leaders have a limited knowledge base. Digital project leaders turn this limitation into a positive impulse of sharing knowledge and responsibility. They value critical questions more highly than uncritical answers; they are skeptical of immovable truths. They create an environment in which everyone involved in the project can develop new answers and formulate innovative solutions.

Adaptability also applies to leadership qualities

Digital transformation has become a widely used umbrella term that subsumes very different change processes. Although the introduction of cloud environments involves a specific technology, the individual projects take place within a broad field of economic and organizational conditions. The areas of competence of digital project leaders must be designed with corresponding variability. For companies to realize the opportunities of enabling technologies individually, there is no static one-size-fits-all leadership model. Whoever accepts the ambiguity of the competence model outlined above and asks instead, "Which concrete skills are most effective in this concrete project in a concrete project phase?", is on the right path to digital leadership.

Felix EvertHead of [email protected]

[Figure: Time, Scope, Cost triangles: Traditional Project Leadership vs. Digital Project Leadership]


CONFERENCE REPORT

My first OpenStack Summit

After a couple of months as a Cloudibility employee, I felt brave enough to visit the OpenStack Summit. Mid-November, rain in the air, and a lemming migration towards the CityCube Berlin. What could possibly go wrong?

What?
OpenStack is a free and open-source software platform for cloud computing and the community around it. This community comprises companies, organisations and individuals that develop software for, or otherwise support, OpenStack.

OpenStack started in 2010 as a joint project of Rackspace Hosting and NASA, but as of 2016 it is managed by the OpenStack Foundation. Since then, more than 500 companies have joined the project.

How come?
Servers and storage are a must-have. But sooner or later the limits of the physical servers start to show. Dealing with this requires a lot of time and money, something that very few companies appreciate: adding RAM, bigger disks, and CPUs. The virtualization that followed amounted to adding a hypervisor, a program that allows multiple operating systems and applications to share a single hardware processor. What a treat!

But later on this solution, too, started to chafe, as the system administrator had to add several types of hypervisors and virtual servers from different vendors. The more were added, the trickier it got to keep up and maintain an overview.

Being able to handle the physical servers virtually through a dedicated program was a positive change, but this did not eliminate the frustration, as developers and users still could not provision what they needed and receive results within a reasonable timeframe.

And so!
OpenStack comes in as the next layer on top of the already virtualized machines and the hypervisors. Through OpenStack, all the different parts of the infrastructure become accessible to the user. It is now possible to handle the IT environment without having to order the infrastructure through an IT architect.

No worries, the physical servers are still there, but access to them is simplified.
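To make this self-service idea tangible, here is a hedged sketch that boots a small instance through the OpenStack API using the Python openstacksdk client, without any ticket to an IT architect. The cloud name, image, flavor and network names are placeholders and will differ in your environment.

```python
# Hedged sketch: boot a VM through the OpenStack API with the openstacksdk
# client. "mycloud", the image, flavor and network names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # reads clouds.yaml / environment

image = conn.compute.find_image("Ubuntu 16.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

Run against a real cloud, this prints something like "demo-vm ACTIVE" once the instance is up; the same call works identically regardless of which hypervisor sits underneath.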

For whom?
But who wants to work in additional layers? VMware, KVM, Xen, Hyper-V, etc., and THEN OpenStack on top? Complex to implement, a steep learning curve, and, as a constantly evolving solution, there is always the risk of users coming across inaccurate and/or outdated documentation within the community. What a headache! Wouldn't it be better to just place that order with the IT architect and then get yourself a coffee?

This we need to sort out!
Do you want to virtualize and scale your IT infrastructure? If you are still considering whether to virtualize at all, maybe OpenStack isn't what you need for the moment.

Are you looking to automate repetitive chores while updating your

infrastructure? Now OpenStack could be interesting!

Do you have several teams involved in your processes? Every addition is a new opportunity for miscommunication and complication. Let's minimize this.

Would you like to have more control with a unitary dashboard? Yes. The answer to this is yes. A unitary dashboard will help you manage your IT infrastructure.

Do you like open source? I think you will find that you do! You can rely on and contribute to the community. You can do it yourself, you can adjust and integrate with open APIs. Be independent and rely on the strength of numbers at the same time!

The Summit
Every year the OpenStack Foundation arranges several events, among them the OpenStack Summit. This November it was time for the Summit to take place in Berlin. As a first-time visitor, I was excited to see what this Summit had in store for us. It turned out I was not alone: 2,700 happy campers from 63 different countries came to participate in the three-day summit. Apart from the five headline sponsors, the lineup was full of OpenStack users, new as well as growing. With a program of 200 sessions and workshops plus an exhibition area, I have to admit it was kind of tricky for a newbie to navigate.

Start from the beginning.
When in doubt, starting from the beginning is always helpful.




The keynote sessions gave a good introduction. The speakers let us in on user and success stories as well as updates and pilot projects. The keynotes also serve as a guide when you are having difficulties making up your mind about which further sessions to attend.

The exhibition area.
Here you can easily work the room. A friendly and chatty atmosphere characterized the exhibition: the perfect opportunity to make contacts, ask questions and see what the vendors have to offer. (More than the goodies, of course, even though I took the time to collect those as well.)

Bring a friend.
Keynotes and exhibitions in all their glory, but the best hack for a first-time visitor is to bring a friend. A friend who knows the environment, what to see and where to be. A friend who also happens to be the Head of Cloud Technology at Cloudibility. Please read the insights from our very own Kim-Norman:

The OpenStack Summit is transforming more and more into an Open Infrastructure Summit. OpenStack is just one part of it. Every second talk is about container infrastructure topics like Docker or Kubernetes. Software-defined networking tools combine and connect the whole stack: bare metal, virtual machines and containers.

OpenStack ❤ Kubernetes
The combination of OpenStack and Kubernetes is more than a passing fancy. Both ecosystems work very well together. Run Kubernetes on top of OpenStack to use dynamic instance provisioning and easy autoscaling for your Kubernetes clusters, or use Kubernetes to deploy OpenStack easily. OpenStack Kolla-Kubernetes is obsolete, but there is a very nice new project for this: OpenStack-Helm. Deploy and upgrade containerized OpenStack environments using standardized Helm charts. In combination with Rook you are able to run OpenStack and Ceph on Kubernetes with all the benefits of microservices.
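As a rough illustration of that workflow, the following sketch drives plain Helm commands from Python to roll out a few OpenStack-Helm charts. The chart paths, the service list and their order are illustrative assumptions, not a verified installation procedure; consult the openstack-helm documentation for the real sequence.

```python
# Hedged sketch: install or upgrade a handful of OpenStack-Helm charts via the
# Helm CLI. Chart paths assume a local checkout of the openstack-helm
# repository; the service list and its order are illustrative only.
import subprocess

def helm_release(release: str, chart_path: str, namespace: str = "openstack") -> None:
    """Install the chart if absent, otherwise upgrade it in place."""
    subprocess.run(
        ["helm", "upgrade", "--install", release, chart_path,
         "--namespace", namespace],
        check=True,
    )

for service in ["mariadb", "rabbitmq", "memcached", "keystone", "glance"]:
    helm_release(service, f"./openstack-helm/{service}")
```

The appeal of this approach is that an OpenStack control plane is then just another set of Helm releases, so upgrades and rollbacks follow the same mechanics as any other Kubernetes workload.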

Vanilla vs. Distribution
Each enterprise OpenStack vendor uses its own deployment strategy: Ansible (SUSE), TripleO (RedHat), Juju (Canonical) or a bunch of tools like Jenkins and Salt (Mirantis). If you ask the vendors about their upgrade strategy, you always get the same answer: upgrading is no problem. In real life the situation looks a little different. The OpenStack upgrade is still one of the trickiest parts of OpenStack operation. In my experience there is no difference between vanilla and distribution for updates: both work well if you know what you are doing; if not, you will have a big problem. The advantage of vanilla is that you have written the automation yourself; you know what your code is doing, which makes troubleshooting easier. The complexity gets higher if you include third-party solutions like storage or software-defined networks. The subscription models of the OpenStack distributions are very different, so you need to inform yourself about scaling costs.

Emelie Gustafsson and Kim-Norman Sahm


CONFERENCE REPORT

Cloud World in Mannheim
Continuous Lifecycle and Container Conf 2018

Two big topics in one conference, Continuous Lifecycle and Container Conf, drew more than 700 participants to Mannheim, who for four days immersed themselves in the world of CD, deployment, clusters, migration, Docker, Istio, Kubernetes, cloud-native development, DevOps ... learning and discussing. Questions were raised and answered: How do you bring Dev and Ops together in an agile process? What can development for and in the cloud look like? Which tools are there, and how can they best be used? Helm, Draft, Skaffold? How can the CI/CD pipeline be optimized?

What can sensibly be automated? What about infrastructure, testing and monitoring? What does it take to build containers "cleanly", so that the apps inside run well, can be updated, can be tested and are secure? What should a container be allowed to do, and what should it not be allowed to do? Where can the leading edge still take us?

There were 16 sessions, and in 14 of them at least four talks ran in parallel. The program was varied and exciting, and the talks were very well attended. Unfortunately, I could only sit in one talk at a time,

and the decision which one to attend was made really difficult for me. All the talks I attended were of a high standard, technically exciting and innovative, and above all focused on the technology rather than promotional. Nowhere did I have the feeling that someone was really just trying to sell a product.

Following my own interests, I spent this double conference mostly among the developers, who presented the first results regarding development in the cloud. The cloud world is taking its first successful steps towards coding in the browser, in browser-based editors that can even be used with Chromebooks ... Even Kubernetes now provides a development environment in which various tools such as Helm or Draft support setting up the development environment. The unanimous opinion of several speakers: it is already pretty cool and gets the developers out of their dark chambers, but overall the tools are still immature.

The developers' dark chambers came up again and again in different contexts. The DevOps concept was discussed. For years, many developers have organized their own work, kept their tools on their own machines, written their code,




and when they were done, they passed everything on. Cloud-based development not only makes it easier to collaborate with other developers. You no longer have to wait until a developer "hands over" their code; with the right access it is already there ... It can be tested, the environment can already be set up on top of it ... There are beautiful new possibilities. But how can the developer be brought along? How can they keep their "chamber feeling" despite the new possibilities for collaboration? People are creatures of habit, after all. There are tools that allow you to shape your environment so that it feels like it always has, and even some desktop programs can be integrated. Still, something changes. And the developer has to want that, too.

Another overarching topic was automation in the context of continuous delivery. How can delivery be designed so that it is actually continuous? Automated tests during development were one suggestion: automated tests planned in from the start, and sophisticated monitoring of the whole system, which is becoming ever more complex and dynamic and thus poses new challenges for monitoring. Another suggestion,

also in the interest of overall quality, was to deliver development results regularly, including partial results; these can then be tested and checked to see whether things are going in the right direction and whether customer requirements are being met, and the feedback received can be integrated into development promptly. You do not code for half a year and then receive a huge bug list, which is bound to frustrate. Of course negative feedback is never pleasant, but the earlier it comes, the more successful and efficient development becomes. Ops and the customer are taken along right from the start, and the latter in particular gets the feeling that things are running in their interest and moving forward. Early feedback that can be accepted and acted upon makes collaboration easier for all sides. Owning up to mistakes and wanting to correct them rarely succeeds without friction; it has to be wanted and encouraged by everyone involved, and it often needs a push from outside.

An outside view is, in many cases, also the most sensible approach to migrating into the cloud. Before anything can be migrated at all, the existing system should be analyzed thoroughly. What is there? What can be kept and is

already cloud-ready, and what has to be rebuilt? What should under no circumstances move into the cloud? Who built it, are they still around, and can they be involved in the rebuilding process? There are various approaches to answering these questions; one of them is the 12-factor app principles (cf. https://12factor.net/de/).
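To give one concrete example of what such a check looks for: factor III of the 12-factor principles demands that configuration live in the environment, not in the code. A minimal Python sketch of that single principle follows; the variable names are purely illustrative.

```python
# Factor III ("Config"): deploy-specific settings come from the environment,
# not from constants in the code. DATABASE_URL and CACHE_TTL are example names.
import os

DATABASE_URL = os.environ["DATABASE_URL"]            # fail fast if missing
CACHE_TTL = int(os.environ.get("CACHE_TTL", "60"))   # optional, with default

def connect() -> None:
    """Placeholder: a real app would hand DATABASE_URL to its database driver."""
    print(f"connecting to {DATABASE_URL} (cache ttl {CACHE_TTL}s)")

if __name__ == "__main__":
    connect()
```

An application written this way can be moved between a laptop, a test cluster and a cloud environment simply by changing environment variables, which is exactly the kind of property a migration analysis is looking for.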

This is only a small glimpse of the results of the conference, which definitely left you wanting more. Overall everything was very well organized, the location great, the catering excellent, the talks very well chosen, the sponsors' exhibition invited you to wander around, and the lively discussions during and between the talks showed how engaged the audience was. The organizers therefore deserve great praise! Already at the station you were "picked up" by posters and guided to the Congress Center, and there, too, no questions remained open: you knew where everything took place and whom to turn to, the breaks were exactly the right length, and the level of the talks rounded everything off. I am already looking forward to next year in Mannheim!

Friederike Zelke


TESTS

We Test Clouds

The offerings around cloud computing are changing rapidly. Even the offerings of individual providers are regularly being developed further, so it is almost impossible to keep an overview. We at Cloudibility want to remedy this and, step by step, examine the offerings and evaluate them according to objective criteria. To this end we have developed a questionnaire that providers who share our desire for transparency fill out for us, giving our readers insight into their offerings. These data are complemented by test results collected by our engineers.

The questions to the providers and the tests cover a broad range of cloud computing: we ask about and test general information on onboarding, availability, SLAs, data centers, compute, storage, network, limitations, scaling and technologies, but also more internal information such as backup, security, image services, patch management, monitoring, CI/CD, as-a-service offerings and, of course, the cost factor.

This produces rankings and tables that help customers

inform themselves independently and find the right provider for their needs. But it is not only readers of the report who receive such comprehensive, independent data; the providers, too, can learn about their market, see where they stand and where their comparative strengths lie. They can also identify possible weaknesses and potential, see where they may need to catch up, or discover starting points for further specialization and improvement. And of course they present themselves to an interested readership and potential customers.

In addition to the providers already tested, AWS, Microsoft Azure, Google Cloud Platform and the SysEleven Stack, we have tested three new providers: the Open Telekom Cloud, Noris Cloud and IBM Cloud.

On the following pages you will find the evaluations sorted by individual topics. The complete evaluations are included here. We will gradually add further clouds, so that in the next issues only exemplary test evaluations will be reproduced; the detailed tables can be found online at the-report.cloud.

If you have suggestions for additional questions, please write to us at: [email protected].

Note: Three virtual machines of different sizes are used in the evaluations:

Small means:
- OS Ubuntu 16.04
- 2 vCPUs
- 8 GB RAM
- min. 50 GB HDD
- Location: Germany, if not available Western Europe, if not available Europe

Medium means:
- OS Ubuntu 16.04
- 4 vCPUs
- 16 GB RAM
- min. 50 GB HDD
- Location: Germany, if not available Western Europe, if not available Europe

Large means:
- OS Ubuntu 16.04
- 8 vCPUs
- 32 GB RAM
- min. 50 GB HDD
- Location: Germany, if not available Western Europe, if not available Europe




As with every edition, our team looked into many different aspects of several cloud providers. We analyzed their pros and cons, discussed our experiences, and decided on the winners in several categories.

Interestingly, we found that none of the vendors we tested turned out to be unrecommendable: each one has specific strengths and (of course) potential for improvement. We were especially impressed by the performance

of smaller and/or not-so-well-known cloud vendors such as SysEleven or Noris Cloud: they often offer comparable performance, and sometimes even more options, than their bigger competitors, combined with more personal support and very reasonable pricing.

That being said, let's look into the winners. And don't forget to check out our detailed comparison tables on the following pages for more details!

And the winners are ...

Category | Winner(s) | Reason

Storage | SysEleven Stack | Highest average IOPS

Software-as-a-Service | Microsoft Azure, Google Cloud Platform | Most available services across all areas; best and most seamless integration of services

Security | Microsoft Azure | Checking all the marks and executing regular penetration tests against their own environment

Network | SysEleven Stack | Highest measured bandwidth

Image Services | Noris Cloud | Most comprehensive image format support

Database-as-a-Service | Noris Cloud | Fastest MySQL and PostgreSQL performance

Container-as-a-Service | Google Cloud Platform | Best integration and most convenient usage

Compute | Amazon AWS, Google Cloud Platform, Microsoft Azure, Noris Cloud, IBM Cloud | Highest CPU scores (AWS); best price-performance ratio (Google Cloud Platform); best RAM throughput (Azure); ease of creation (Noris Cloud); most supported hypervisor types (IBM Cloud)

Backup & Recovery | OTC | Most available options at a moderate price

As you can see, each one of our vendors has its strengths. From our perspective there is no right or wrong when considering cloud vendors nowadays; it is only a matter of needs and how well a vendor fulfills them. The best thing is: nowadays,

multi-cloud approaches can be implemented more easily than ever before, allowing for a the-right-tool-for-the-job approach and preventing vendor lock-in.

And that is good news for 2019!



Storage

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Which kinds of storage are available?- Object / Blob Storage- File Storage- Block Storage

yes (S3 / Glacier), yes (EFS), yes (EBS)

yes (Azure Blob Storage), yes (Azure Disk Storage), yes (Azure Files)

yes (Google Cloud Storage), yes (Google Drive / Persistent Disk), yes (Google Persistent Disk)

yes (S3), yes (Manila), yes (Cinder/Nova)

yes, yes, yes

yes (IBM Cloud Object Storage), yes (IBM Cloud file storage), yes (IBM Cloud block storage)

Block - Different tier-classes? SATA, SSD, SAS

yes yes yes No (local SSD is planned) yes yes yes

Object - S3 and/or Swift? S3 Azure Blob Storage Buckets (as S3) Object S3 OpenStack Swift, S3

S3 S3, Swift

File - Accessing file storage via (cluster) file system.

EFS GlusterFS; BeeGFS; Lustre Google Cloud Storage FUSE; Beta: Google Cloud Filestore

not provided as a service NFS CephFS NFS

Storage capacity limits Overall size: Unlimited5 TB per S3 object

Overall size: 500 TB per Storage Account200 Storage Accounts per Subscriptions

Overall size: Unlimited5 TB per individual object

Overall size: 15TB+ 50 TB of Object storage32 TB of Block Storage10 PB of File Storage

1 TB Block Storage Unlimited for Object Storage for Standard Plan12 TB of Block Storage12 TB of File Storage

Duration of provisioning? 1.18 min 1.32 min 20 sec 10 sec 2.5 min

Throughput IOPS (only Block- and File-Storage)

– Random read test: bw=12311KB/s, iops=3077

– Random write test: bw=158779KB/s, iops=39694

– Random Read and write test: – read: bw=9205.6KB/s, iops=2301; – write: bw=3069.9KB/s, iops=767; – sequential read test: bw=3537.2MB/s, iops=452753

– sequential write test: bw=62963KB/s, iops=1967

– Random read test: bw=19565KB/s, iops=2445

– Random write test: bw=5685.7KB/s, iops=710

– Random Read and write test: – read: bw=13835KB/s, iops=1729; – write: bw=1576.3KB/s, iops=197 ; – sequential read test: bw=19551KB/s, iops=2443

– sequential write test: bw=11801KB/s, iops=1475

– Random Read: bw=2881.2KB/s, iops=360

– Random Write: bw=11490KB/s, iops=1436

– Random Read and write: – Read: bw=5086.4KB/s, iops=635; – Write: bw=598117B/s, iops=73 – Sequential Read: bw=30820KB/s, iops=3852

– Sequential Write: bw=30820KB/s, iops=3852

– Random Read: bw=111019KB/s, iops=13877

– Random Write: bw=30406KB/s, iops=3800

– Random Read & Write: – Read: bw=74293KB/s, iops=9286; – write: bw=8464.4KB/s, iops=1058 – Sequential Read: bw=96140KB/s, iops=12017

– Sequential Write: bw=32778KB/s, iops=4097

– Random Read: bw=8041.8 KB/s, iops=1005

– Random Write: bw=64736 KB/s, iops=1011

– Random Read and write: – Read:bw=14454 KB/s, iops=903 – Write:bw=1646.9 KB/s, iops=102 – Sequential Read: bw=11034 KB/s, iops=1379

– Sequential Write: bw=64540 KB/s, iops=2016

– Random Read: bw=80730 KB/s, iops=10091

– Random Write: bw=134696 KB/s, iops=2104

– Random Read and write: – Read: bw=55438 KB/s, iops=3464 – Write: bw=6316.2KB/s, iops=394 – Sequential Read: bw=61150KB/s, iops=7643

– Sequential Write: bw=141092KB/s, iops=4409

– Random Read: bw=92883 KB/s, iops=11610,

– Random Write: bw=243007KB/s, iops=3796,

– Random Read and write: – Read: bw=78238 KB/s, iops=4889, – Write: bw=8913.9KB/s, iops=557, – Sequential Read: bw=79440 KB/s, iops=9929

– Sequential Write: bw=85321 KB/s, iops=2666

Limitations - IOPS, Limitations because of storage technology

If Volume Size (SSD)is:1) 1 GiB-16 TiB,Then Maximum IOPS**/Volume: 10,000.2) 4 GiB - 16 Tib, Then Maximum IOPS**/Volume: 32,000***.

If Volume Size (HDD)is:1) 500 GiB - 16 TiB,Then Maximum IOPS**/Volume:5002) 500 Gib - 16 Tib,Then Maximum IOPS**/Volume:250

– A standard disk is expected to handle 500 IOPS or 60MB/s

– A P30 Premium disk is expected to handle 5000 IOPS or 200MB/s

Results: the-report.cloud. A Common I/O (SATA) is expected to handle 1000 IOPS or 40 MB/s. A High I/O (SAS) is expected to handle 3000 IOPS or 120 MB/s. A High I/O (SAS) is expected to handle 3000 IOPS or 550 MB/s. An Ultra-High I/O (SSD) is expected to handle 20,000 IOPS or 320 MB/s. An Ultra-High I/O (SSD) is expected to handle 30,000 IOPS or 1 GB/s.

A BSS Mass Storage is expected to handle 200 IOPS or 100 MB/s. A BSS Performance Storage is expected to handle 1000 IOPS or 160 MB/s. A BSS Ultra-SSD Storage is expected to handle 10,000 IOPS or 300 MB/s.

A Block Storage from 25 GB to 12,000 GB capacity is expected to handle 48,000 IOPS.

Costs - total price for the Disk which is mounted to the VM 50GB

Storage price with additional 50 gb = € 71.3

Storage price with additional 50 gb = € 72.03

Storage price with additional 50 gb = € 85.10

Storage price with additional50 gb = € 2.85

Storage price with additional50 gb = € 6.90

Storage price with additional50 gb = € 6.00

Storage price with additional50 gb = € 8.23
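A note on methodology: the bandwidth and IOPS values above follow the output format of the fio benchmark. The report does not state the exact tool or job parameters, so the following Python wrapper is only a hedged sketch of how a comparable random-read figure could be produced on one of the test VMs; block size, file size and runtime are assumptions.

```python
# Hedged sketch: run a 4k random-read fio job and pull bandwidth and IOPS
# out of its JSON output. File size, runtime and block size are assumptions.
import json
import subprocess

def random_read(testfile: str = "fio-testfile"):
    out = subprocess.run(
        ["fio", "--name=randread", "--rw=randread", "--bs=4k",
         "--size=1G", "--runtime=60", "--time_based",
         "--ioengine=libaio", "--direct=1",
         f"--filename={testfile}", "--output-format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    read = json.loads(out)["jobs"][0]["read"]
    return read["bw"], read["iops"]          # bw is reported in KiB/s

if __name__ == "__main__":
    bw, iops = random_read()
    print(f"bw={bw}KB/s, iops={iops}")
```

Swapping --rw for randwrite, read, write or randrw yields the other rows of the table; identical jobs on identical VM sizes are what make the per-provider numbers comparable.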


Compute

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Small VM: OS Ubuntu 16.04; 2vCPUs; 8GB RAM; min. 50GB HDD; Location: Germany, if unavailable: Western Europe, if unavailable: Europe

yes yes yes yes yes yes yes

Medium VM: OS Ubuntu 16.04; 4vCPUs; 16GB RAM; min. 50GB HDD; Location: Germany, if unavailable: Western Europe, if unavailable: Europe

yes yes yes yes yes yes yes

Large VM: OS Ubuntu 16.04; 8vCPUs; 32GB RAM; min. 50GB HDD; Location: Germany, if unavailable: Western Europe, if unavailable: Europe

yes yes yes yes yes yes yes

GPU support for the VM? yes yes yes no yes no yes

AutoScaling for VM? yes yes yes yes yes yes yes

Availability Zones (i.e. Availability set) possible yes yes yes yes yes yes yes

Startup-time (till time of availability) - Small- Medium- Large

28 sec / 30 sec / 33 sec

151 sec / 192 sec / 203 sec

31 sec / 44 sec / 46 sec

26 sec / 28 sec / 31 sec

80 sec / 83 sec / 100 sec

70 sec / 80 sec / 100 sec

120 sec / 156 sec / 322 sec

Count of steps until VM is created 7 steps 4 Steps 5 Steps 2 Steps 3 Steps 2 Steps 4 Steps

RAM throughput (sysbench, Block size 1k)- Read- Write

792.57 MB/sec / 759.62 MB/sec

4224.71 MB/sec / 2801.53 MB/sec

3199.60 MB/sec / 2283.16 MB/sec

3500.09 MB/sec / 2539.60 MB/sec

2616.52 MB/sec / 1936.31 MB/sec

2591.06 MB/sec / 2256.63 MB/sec

590.88 MB/sec / 557.13 MB/sec

CPU speed (geekbench)- Small Single Core- Small Multi Core- Medium Single Core- Medium Multi Core- Large Single Core- Large Multi Core

3340 / 6343 / 3310 / 11546 / 3363 / 21687

3268 / 3027 / 2975 / 9669 / 3315 / 19689

2909 / 3818 / 3022 / 7227 / 3065 / 13705

3303 / 5927 / 3141 / 9875 / 3302 / 16897

2765 / 5397 / 2804 / 9913 / 2799 / 18470

2822 / 5154 / 2825 / 9781 / 2802 / 15780

2374 / 4552 / 2647 / 9006 / 2663 / 16253

VM accessible via Console no yes yes yes yes yes yes

Total cost of VM per month (732hrs, * = converted from USD)- Small- Medium- Large

€ 65.67* / € 129.44* / € 256.97*

€ 60.98 / € 121.97 / € 243.94

€ 46.05 / € 92.10 / € 184.21

n/a / n/a / n/a

€ 74.57 / € 150.28 / € 292.42

n/a / n/a / n/a

€ 78.77 / € 141.19 / € 301.21

Supported disk formats / images – OVA – VMDK – RAW – VHD/VHDX

– VHD – VMDK – VDH – RAW

– ISO – QCOW2 – RAW – AKI – AMI – KRI

– VMDK – QCOW2 – RAW – VHD – VHDX

– ISO – PLOOP – QCOW2 – RAW – VDI – VHD – VMDK – AKI – ARI – AMI – OVA – DOCKER

– VMDK – AKI – ARI – AMI – QCOW2 – RAW

Can bare-metal servers be deployed via the cloud? yes no no no yes no yes

Which hypervisor is used? - KVM - Hyper-V - VMware ESXi- KVM- Xen

- KVM - KVM- Xen

- KVM - PowerVM- VMware ESX Server- Xen- KVM- z/VM
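The RAM-throughput row above cites sysbench with a 1k block size. The following Python wrapper is a hedged reproduction sketch using sysbench 1.0 option names; the total transfer size is an assumption, as the report does not state it.

```python
# Hedged sketch: sysbench memory read test with 1K blocks, matching the
# "Block size 1k" note in the table. --memory-total-size is an assumption.
import subprocess

def ram_read_throughput() -> str:
    result = subprocess.run(
        ["sysbench", "memory",
         "--memory-block-size=1K",
         "--memory-total-size=10G",
         "--memory-oper=read",
         "run"],
        check=True, capture_output=True, text=True,
    )
    # sysbench prints a summary line such as
    # "10240.00 MiB transferred (3199.60 MiB/sec)"
    for line in result.stdout.splitlines():
        if "transferred" in line:
            return line.strip()
    return "no result line found"

if __name__ == "__main__":
    print(ram_read_throughput())
```

Running the same job with --memory-oper=write produces the second value of each pair; the CPU scores in the table come from Geekbench, a separate, closed benchmark that is simply executed on each VM size.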


Backup & Recovery

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Who is responsible for the cloud resources backups (Cloud Provider or Owner of the Resources)

Owner of the Resources Owner of the Resources

Owner of the Resources Owner of the resources Owner of the Resources Owner of the Resources Owner of the Resources

Which types of backups are supported for VMs? - Snapshots- Incremental Backups

- Full Backups- Differential Backups- Incremental Backups

- Snapshots- Incremental Backups

- Instance Snapshots- Volume Snapshots

- Snapshot- Full Backups- Incremental Backups

- Snapshot- Full Backups- Incremental Backups

- Snapshot- Full Backups- Incremental Backups

Where will the backup be stored? Amazon S3 Recovery Services Vault

Google Cloud Storage Storage Volume backup serviceDifferent datacenters

Ceph Object Storage SystemDifferent Datacenters

EVault, R1Soft CDP

Can backups be scheduled? yes yes yes no yes yes yes

Usage costs- 500 GB Backup Storage- HDD- 20% change per month- Frankfurt / Western Europe

€ 40.96 € 21.57 € 13.25 n/a € 5.00 n/a € 260.79

Is it possible to restore data from a previous date? yes yes yes no yes no yes

Container as a Service

Questions AWS Azure Google Cloud PlatformSysEleven Stack OTC

Noris Cloud IBM Cloud

Which technologies are being provided/supported? KubernetesDocker

Kubernetes, Docker, Mesosphere

Kubernetes, Mesosphere

Kubernetes KubernetesDockerCloud Container Engine

Kubernetes

Is a managed container service available? yes (EKS) yes (AKS) yes yes yes yes

Can worker nodes be accessed directly by customers? yes yes yes yes yes yes

Can master nodes be accessed directly by customers? no yes yes no yes yes

Which version of the technologies/Kubernetes is being offered?

1.10.3 1.10.6 (West Europe)1.11.1 (Canada / USA)

1.10.6 (West Europe), 1.11.1 (Canada / USA)

n/a 1.9.2-r2 1.11.3, 1.10.8, 1.9.8

How much time does it take to provide the container service?- Cluster- Worker

11 min8 min

<2 minn/a

<3 minn/a

n/an/a

8 min8 min

<2 min13 min

Costs(Managed service, 732hrs per month, overall min. 8+ GB RAM, default HDD, 1-2 vCPUs, hosted in Frankfurt or Western Europe, Storage / IPs not included)* = Prices in USD have been converted to EUR

€ 194.34 per month

Machines: 4x t2.small (2 GB RAM, 1 vCPU)Note: EKS-cluster is € 0.17 * per hour

€ 101.24 per month

Machines: 4x A1 v2 (2 GB RAM, 1 vCPU)Note: AKS is free

€ 80.23 per month

Machines: 3x n1-standard-1 (3.5 GB RAM, 1 vCPU). Note: This example has less computing power but more RAM than AWS's and Azure's examples

n/a € 247.68 per month

Machines: u2c.2X4 (4 GB RAM, 2 vCPU)Note: worker nodes is € 0.34 per hour

Shared or dedicated Container Engine Cluster? Dedicated Shared Shared Shared/Dedicated

Shared dedicated

Limitations - How many master and worker nodes can be deployed?

Max cluster: 3 Max nodes/cluster: 100Max pods/nodes: 110Max Cluster/subscription: 100

Max nodes/cluster: 100Max pods/nodes: 100

Max nodes/cluster: 1000 n/a

Do you have full access to all K8s resources (no RBAC restriction)?

no no no n/a yes no
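One practical way to probe the last row, whether a managed cluster really grants unrestricted access to all Kubernetes resources, is `kubectl auth can-i`. The following small Python sketch assumes kubectl is already configured for the cluster under test; the list of checks is illustrative.

```python
# Hedged sketch: ask the cluster which actions the current user may perform.
# `kubectl auth can-i` prints "yes" or "no"; the checks below are illustrative.
import subprocess

CHECKS = [
    ("create", "pods", None),
    ("list", "nodes", None),
    ("create", "clusterrolebindings", None),
    ("get", "secrets", "kube-system"),
]

for verb, resource, namespace in CHECKS:
    cmd = ["kubectl", "auth", "can-i", verb, resource]
    if namespace:
        cmd += ["--namespace", namespace]
    answer = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
    scope = f" in {namespace}" if namespace else ""
    print(f"{verb} {resource}{scope}: {answer}")
```

A cluster that answers "no" to cluster-scoped checks such as creating clusterrolebindings is RBAC-restricted in the sense of the table above.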


IaaS: Patch Management

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Does the cloud provide a managed patch service?

yes (Amazon Systems Management Service)

yes (Azure Automation) no no no no yes (IBM BigFix Patch Management, bookable)

Which operating systems are supported?

Linux: – Red Hat Enterprise Linux (RHEL) 6 (x86 / x64), 7 (x64)

– SUSE Linux Enterprise Server (SLES) 12 (x64)

– Amazon Linux 2 2-2.0 (x86 / x64) – Amazon Linux 2012.03 - 2017.03 (x86 / x64)

– Amazon Linux 2015.03 - 2018.03 (x64)

– Ubuntu Server 14.04 LTS,16.04 LTS and 18.04 LTS (x86 / x64)

– CentOS 6 (x86 / x64), 7(x64) – Raspbian Jessie (x86) – Raspbian Stretch (x86)

Windows:-Windows Server 2008 through Windows Server 2016, including R2 versions

Linux: – CentOS 6 (x86/x64), 7 (x64) – Red Hat Enterprise 6 (x86 / x64), 7 (x64)

– SUSE Linux Enterprise Server 11 (x86 / x64), 12 (x64)

– Ubuntu 14.04 LTS and 16.04 LTS (x86 / x64)

Windows: – Windows Server 2008 R2 SP1 and later

– Windows Server 2008, Windows Server 2008 R2 RTM

Linux: – CentOS – Container-Optimized OS from Google

– CoreOS – Debian – Red Hat Enterprise Linux (RHEL) – RHEL for SAP – SUSE Enterprise Linux Server (SLES) – SLES for SAP – Ubuntu

Windows: – Windows Server

Linux: – openSUSE 42.x – CentOS 6.x , 7.x – Debian 8.x 9.x – Fedora 24 , 25 , 26 , 27 – EulerOS 2.x – Ubuntu 14.04.x – Ubuntu 16.04.x – SUSE Enterprise Linux 11 , 12 – Oracle Linux 6.8 , 7.2 – Red Enterprise Linux 6.8 , 7.3

Windows: – Windows 2008 – Windows 2012 – Window Server 2016

Linux: – openSUSE 42.x – CentOS 6.x , 7.x – Debian 8.x 9.x – Fedora 25 , 26 , 27 – EulerOS 2.x – Ubuntu 14.04.x ,16.04.x.18.04.x – SUSE Enterprise Linux 11 , 12 – Oracle Linux 6.8 , 7.2 – Red Enterprise Linux 6.8 , 7.3

Windows: – Window Server 2012 R2

Linux. – CentOS-Minimal 7.X – CentOS-LAMP 7.X – CentOS-Minimal 6.X – CentOS-LAMP 6.X – Debian Minimal Stable 9.X – Debian Minimal Stable 8.X – Debian LAMP Stable 8.X – Red Hat Minimal 7.x – Red Hat LAMP 7.x – Red Hat Minimal 6.x – Red Hat LAMP 6.x – Ubuntu Minimal 18.04-LTS – Ubuntu LAMP 18.04-LTS – Ubuntu Minimal 16.04-LTS – Ubuntu LAMP 16.04-LTS – Ubuntu Minimal 14.04-LTS – Ubuntu LAMP 14.04-LTS

Windows. – Standard 2016 – Standard 2012 – R2 Standard 2012

Is the operating system from the deployed VM at a current patch level?

yes yes yes yes yes yes yes

What is the current available patch level in our sample VM?

– Ubuntu 16.04 LTS with latest patches applied

0.04 0.04 0.04 0.04 0.04 0.04 0.04

Can the cloud show/provide the patch level of existing machines?

yes yes yes yes yes yes yes

Is an overview of the patchlevel of all provided images available? yes yes yes yes yes yes

Is a centralized Update- / Repo-server available? no no no no yes no no

Page 37: 01—2019 Storage - The Cloud Reportthe-report.cloud/wp-content/uploads/2019/01/CloudReport... · 2019-01-22 · the cloud report 01—2019 1 EDITORIAL Storing Data Cloud verspricht

35 the cloud report 01—2019

IaaS: Patch Management

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Does the cloud provide a man-aged patch service?

yes (Amazon Systems Management Service)

yes (Azure Automation) no no no no yes (IBM BigFix Patch Manage-ment(bookable))

Which operating systems are supported?

Linux: – Red Hat Enterprise Linux (RHEL) 6 (x86 / x64), 7 (x64)

– SUSE Linux Enterprise Server (SLES) 12 (x64)

– Amazon Linux 2 2-2.0 (x86 / x64) – Amazon Linux 2012.03 - 2017.03 (x86 / x64)

– Amazon Linux 2015.03 - 2018.03 (x64)

– Ubuntu Server 14.04 LTS,16.04 LTS and 18.04 LTS (x86 / x64)

– CentOS 6 (x86 / x64), 7(x64) – Raspbian Jessie (x86) – Raspbian Stretch (x86)

Windows:-Windows Server 2008 through Windows Server 2016, including R2 versions

Linux: – CentOS 6 (x86/x64), 7 (x64) – Red Hat Enterprise 6 (x86 / x64), 7 (x64)

– SUSE Linux Enterprise Server 11 (x86 / x64), 12 (x64)

– Ubuntu 14.04 LTS and 16.04 LTS (x86 / x64)

Windows: – Windows Server 2008 R2 SP1 and later

– Windows Server 2008, Windows Server 2008 R2 RTM

Linux: – CentOS – Container-Optimized OS from Google

– CoreOS – Debian – Red Hat Enterprise Linux (RHEL) – RHEL for SAP – SUSE Enterprise Linux Server (SLES) – SLES for SAP – Ubuntu

Windows: – Windows Server

Linux: – openSUSE 42.x – CentOS 6.x , 7.x – Debian 8.x 9.x – Fedora 24 , 25 , 26 , 27 – EulerOS 2.x – Ubuntu 14.04.x – Ubuntu 16.04.x – SUSE Enterprise Linux 11 , 12 – Oracle Linux 6.8 , 7.2 – Red Enterprise Linux 6.8 , 7.3

Windows: – Windows 2008 – Windows 2012 – Window Server 2016

Linux: – openSUSE 42.x – CentOS 6.x , 7.x – Debian 8.x 9.x – Fedora 25 , 26 , 27 – EulerOS 2.x – Ubuntu 14.04.x ,16.04.x.18.04.x – SUSE Enterprise Linux 11 , 12 – Oracle Linux 6.8 , 7.2 – Red Enterprise Linux 6.8 , 7.3

Windows: – Window Server 2012 R2

Linux. – CentOS-Minimal 7.X – CentOS-LAMP 7.X – CentOS-Minimal 6.X – CentOS-LAMP 6.X – Debian Minimal Stable 9.X – Debian Minimal Stable 8.X – Debian LAMP Stable 8.X – Red Hat Minimal 7.x – Red Hat LAMP 7.x – Red Hat Minimal 6.x – Red Hat LAMP 6.x – Ubuntu Minimal 18.04-LTS – Ubuntu LAMP 18.04-LTS – Ubuntu Minimal 16.04-LTS – Ubuntu LAMP 16.04-LTS – Ubuntu Minimal 14.04-LTS – Ubuntu LAMP 14.04-LTS

Windows. – Standard 2016 – Standard 2012 – R2 Standard 2012

Is the operating system from the deployed VM at a current patch level?

yes yes yes yes yes yes yes

What is the current available patch level in our sample VM?

– Ubuntu 16.04 LTS with latest patches applied

0.04 0.04 0.04 0.04 0.04 0.04 0.04

Can the cloud show/provide the patch level of existing machines?

yes yes yes yes yes yes yes

Is an overview of the patchlevel of all provided images available? yes yes yes yes yes yes

Is a centralized Update- / Repo-server available? no no no no yes no no

Page 38: 01—2019 Storage - The Cloud Reportthe-report.cloud/wp-content/uploads/2019/01/CloudReport... · 2019-01-22 · the cloud report 01—2019 1 EDITORIAL Storing Data Cloud verspricht

tests36

Databases (DB-as-a-Service)

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Which DB engines are offered?

Relational DB – MySQL – PostgreSQL – MariaDB – Oracle – Microsoft SQL Server – Amazon Aurora

Non-Relational DB – Amazon DynamoDB – Amazon ElastiCache – Amazon Neptune – Redis – MemCached

Data Warehouse / Big Data – Amazon Redshift – Amazon Athena – Amazon EMR (Hadoop, Spark, HBase, Presto, etc.)

– Amazon Kinesis – Amazon Elasticsearch Service – Amazon Quicksight

Relational DB – Azure SQL Database – Azure Database for MySQL – Azure Database for PostgreSQL – Azure Database for Maria DB – Microsoft SQL Server

Non-Relational DB – Azure Cosmos DB – Azure Table Storage – Redis

Data Warehouse / Big Data – SQL Data Warehouse – HDInsight (Hadoop, Spark, Hive, LLAP, Kafka, Storm, R.)

– Azure Databricks (Spark) – Azure Data Factory – Azure Stream Analytics

Relational DB – PostgreSQL – MySQL – Google Cloud Spanner

Non-Relational DB – Google Cloud Datastore – Google Cloud BigTable

Data Warehouse / Big Data – Google Cloud BigQuery – Google Cloud Dataflow – Google Cloud Dataproc (Hadoop / Spark)

– Google Cloud Datalab – Google Cloud Dataprep

Relational DB – PostgreSQL – MySQL – Microsoft SQL Server

Non-Relational DB – MongoDB – Redis

Relational DB – Db2 on Cloud – PostgreSQL – MySQL

Non-Relational DB – Cloudant – MongoDB – ScyllaDB – Redis – JanusGraph – etcd – Elasticsearch

Data Warehouse / Big Data – Db2 Warehouse on Cloud

Performance of MySQL (MySQL sysbench, table size (row data): 1,000,000, threads: 16) - Read - Write - Read / Write

Transactions: 59354 (988.96/sec); 42052 (699.07/sec); 28325 (471.91/sec)

Transactions: 52354 (988.96/sec); 41002 (683.25/sec); 30412 (506.86/sec)

Transactions: 49084 (817.85/sec); n/a; 29329 (488.66/sec)

Transactions: 52545 (875.53/sec); 75435 (1256.91/sec); 28676 (477.75/sec)

Transactions: 353 (5.65/sec); 873 (14.29/sec); 273 (4.31/sec)

Supported DB Versions

– MySQL 5.7, 5.6, 5.5 – MariaDB 10.2,10.1,10.0 – Microsoft SQL Server 2017 RTM, 2016 SP1, 2014 SP2, 2012 SP4, 2008 R2 SP3

– Oracle 12c (12.1.0.2, 12.1.0.1), Oracle 11g (11.2.0.4, 11.2.0.3, 11.2.0.2)

– PostgreSQL 11 Beta 1, 10.4, 10.3, 10.1, 9.6.x, 9.5.x, 9.4.x, 9.3.x

– MySQL 5.7, 5.6 – MariaDB currently on wait list as beta – Azure SQL Database: Microsoft SQL Server 2017

– Microsoft SQL Server 2017, 2016 SP1, 2014 SP2, 2012 SP4, 2008 R2 SP3

– PostgreSQL 10.3, 9.6.x, 9.5.x

– MySQL 5.7, 5.6 – PostgreSQL 9.6.x

– PostgreSQL 9.6.5, 9.6.3, 9.5.5 – MySQL 5.7.20, 5.7.17, – 5.6.35, 5.6.34, 5.6.33, 5.6.30 – Microsoft SQL Server 2014 SP2 SE – MongoDB – Redis 3.0.7

– Db2-ge – PostgreSQL 9.6.10,9.6.9,9.5.14,9.5.13,9.4.19,9.4.18

– MySQL 5.7.22 – Cloudant-h7 – MongoDB 3.4.10,3.2.18,3.2.11,3.2.10 – ScyllaDB 2.0.3 – Redis 4.0.10,3.2.12 – JanusGraph 0.1.1 beta – etcd 3.3.3,3.2.18 – Elasticsearch 6.2.2 , 5.6.9 – Db2 Warehouse-ef

Additional Services and Support

– Roll Back (Amazon Redshift) – Support

– Troubleshooting as Service – Rollback – Support

– Rollback – Support

– Rollback – Support

– Rollback – Support

Total price for the database

– MySQL – 2 vCores – 100 GB Storage – Frankfurt / Western Europe

– 100% active per month

– No dedicated backup

€ 62.45 for db.t2.medium € 59.04 for Gen 4 2 vCores, Basic Tier € 118.64 for db-n1-standard-2 ( Generation 2)

For MySQL: Total Price = € 298.40 for 100 GB Storage; for PostgreSQL: Total Price = € 312.80 for 100 GB Storage

For MySQL: Total Price = € 812 for 100 GB Storage; for PostgreSQL: Total Price = € 542 for 100 GB Storage

How does backup/restore work?

Amazon RDS creates a storage volume snapshot of DB instance for backup and restore.

Point in Time Restore Geo-Restore

Automatic BackupsOn-demand

MySQL daily backups MySQL daily backups
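The MySQL and PostgreSQL figures above reference sysbench with a table size of 1,000,000 rows and 16 threads. Here is a hedged sketch of a matching sysbench 1.0 invocation against a DBaaS endpoint; the host, credentials and the choice of the oltp_read_only script are assumptions, since the report does not name the exact OLTP workload.

```python
# Hedged sketch: sysbench 1.0 OLTP read-only run with the parameters quoted
# above (1,000,000 rows, 16 threads). Host, user, password and the choice of
# the oltp_read_only script are placeholders/assumptions.
import subprocess

COMMON = [
    "--mysql-host=dbaas.example.com",
    "--mysql-user=bench",
    "--mysql-password=secret",
    "--mysql-db=sbtest",
    "--table-size=1000000",
    "--threads=16",
    "--time=60",
]

subprocess.run(["sysbench", "oltp_read_only", *COMMON, "prepare"], check=True)
result = subprocess.run(
    ["sysbench", "oltp_read_only", *COMMON, "run"],
    check=True, capture_output=True, text=True,
)
for line in result.stdout.splitlines():
    if "transactions:" in line:   # e.g. "transactions: 59354 (988.96 per sec.)"
        print(line.strip())
subprocess.run(["sysbench", "oltp_read_only", *COMMON, "cleanup"], check=True)
```

Switching the script to oltp_write_only or oltp_read_write yields the write and read/write rows of the comparison.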


Logging as a Service

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Does the cloud platform provide a Logging as a Service functionality?

yes yes yes yes no no yes

Is the data stored in encrypted form?

yes yes yes yes no no yes

Which logging technology is used?

– AWS CloudWatch – AWS CloudTrail – AWS VPC flow logs – Amazon CloudFront access logs – Amazon S3 access logs

– Activity logs – Activity diagnostic logs – Azure AD reporting – Virtual machines and cloud services – Azure Storage Analytics – Network Security Group (NSG) flow logs – Application Insights

Stackdriver Logging, Cloud Trace; no; no; Bluemix UI, Cloud Foundry Command Line Interface (CLI), external logging

On what basis is logging billed? Based on: – Resources – Region – API – Event type – Alarm/resources

Data ingestion: 5 GB/month¹ at € 2.38/GB; data retention: 31 days² at € 0.10/GB/month

Logging = € 0.43/GB; n/a; n/a; n/a; Lite: free

Log Collection: – 0.38 € / GB ingested – 0.08 € / GB stored per month

Log Collection with 2 GB/Day Search:

– 0.38 € / GB ingested – 0.08 € / GB stored per month – 3.01 € / day searchable

Log Collection with 5 GB/Day Search – 0.38 € / GB ingested – 0.08 € / GB stored per month – 7.52 € / day searchable

Log Collection with 10 GB/Day Search

– 0.38 € / GB ingested – 0.08 € / GB stored per month – 15.04 € / day searchable
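Taking the log-collection rates listed above (€ 0.38 per GB ingested, € 0.08 per GB stored per month, plus a flat daily fee for the searchable tier), a monthly bill can be estimated as in the sketch below. The 60 GB volume is an example input, not a figure from the tests.

# Estimate a monthly Logging-as-a-Service bill from the per-GB rates listed above.
# The ingested volume and retained volume are example inputs, not report data.
INGEST_EUR_PER_GB = 0.38
STORAGE_EUR_PER_GB_MONTH = 0.08
SEARCH_EUR_PER_DAY = 3.01          # "2 GB/day search" tier from the table

def monthly_logging_cost(gb_ingested, gb_retained, days=30):
    return (gb_ingested * INGEST_EUR_PER_GB
            + gb_retained * STORAGE_EUR_PER_GB_MONTH
            + days * SEARCH_EUR_PER_DAY)

print(f"{monthly_logging_cost(60, 60):.2f} EUR")   # ~60 GB ingested and kept for one month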

Network

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Is network monitoring available? yes yes yes yes yes yes yes

Is a Content Delivery Network (CDN) available? yes yes yes no yes yes yes

Sample measurements (iperf): TCP bandwidth: sender 959 Mbits/sec, receiver 958 Mbits/sec; UDP bandwidth sent: 990 Mbits/sec

Iperf result: TCP bandwidth: sender 906 Mbits/sec, receiver 904 Mbits/sec; UDP bandwidth sent: 923 Mbits/sec

Iperf result: TCP bandwidth: sender 3.85 Gbits/sec, receiver 3.85 Gbits/sec; UDP bandwidth sent: 3.85 Gbits/sec

Iperf result: TCP bandwidth: sender 9.97 Gbits/sec, receiver 9.97 Gbits/sec; UDP bandwidth sent: 7.26 Gbits/sec

Iperf result: TCP bandwidth: sender 101 Mbits/sec, receiver 99.6 Mbits/sec; UDP bandwidth sent: 99.0 Mbits/sec

Iperf result: TCP bandwidth: sender 3.74 Gbits/sec, receiver 3.74 Gbits/sec; UDP bandwidth sent: 2.16 Gbits/sec

Iperf result: TCP bandwidth: sender 101 Mbits/sec, receiver 99.8 Mbits/sec; UDP bandwidth sent: 99.0 Mbits/sec
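Measurements of this kind can be reproduced with iperf3 between two test VMs in the respective cloud. The sketch below assumes an "iperf3 -s" server is already running on the target; the hostname is a placeholder.

# Sketch: measure TCP bandwidth between two VMs with iperf3 (JSON output).
# Requires "iperf3 -s" on the target machine; the hostname below is hypothetical.
import json
import subprocess

def tcp_bandwidth_mbits(server, seconds=10):
    """Return (sender, receiver) throughput in Mbits/sec for one TCP test."""
    out = subprocess.run(["iperf3", "-c", server, "-t", str(seconds), "-J"],
                         capture_output=True, text=True, check=True).stdout
    result = json.loads(out)
    sender = result["end"]["sum_sent"]["bits_per_second"] / 1e6
    receiver = result["end"]["sum_received"]["bits_per_second"] / 1e6
    return sender, receiver

print(tcp_bandwidth_mbits("vm-under-test.example.internal"))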

Public IPs: – Public IPs for VMs? – Available kinds of public IPs for VMs – Public IPs for load balancers? – Available kinds of public IPs for load balancers

yes / floating, static / yes / static

yes / floating, static / yes / static

yes / floating, static / yes / static

yes / floating / yes / static

yes / static / yes / static

yes / floating / yes / floating

yes / floating, static / yes / static
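Static public IPs are reserved resources that survive instance restarts. As one concrete illustration, on AWS they are called Elastic IPs and can be allocated and attached with two boto3 calls; the instance ID below is a placeholder, and OpenStack-based platforms expose the same idea as floating IPs.

# Sketch: allocate a static public IP (Elastic IP) and attach it to a VM on AWS.
# The instance ID is hypothetical; OpenStack-based clouds use floating IPs instead.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

allocation = ec2.allocate_address(Domain="vpc")        # reserve a static public IPv4
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",                  # hypothetical VM
    AllocationId=allocation["AllocationId"],
)
print("public IP:", allocation["PublicIp"])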

Is a dedicated network connection from data-center to public cloud possible?

yes (AWS Direct Connect) yes (Azure Express Route) yes (Google Cloud Interconnect) no yes yes yes

Network Security features (Network Traffic analysis, Network Security Groups)

– AWS Web Application Firewall – Network security groups – Network traffic analysis

– Network access controls – User-defined routes – Network security appliance – Application Gateway – Azure Web Application Firewall – Network availability control

Network security group (NSG); network security group flow logs; Log Analytics; Log Analytics workspace; Network Watcher

– Firewall – Network security groups – Network traffic analysis

Security Groups; Network Security Groups, Firewalls

Network Security Groups, Firewalls

Network Security Groups; Firewalls (multi-VLAN, single-VLAN and web app); DDoS mitigation
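All of the platforms above model network security groups as sets of allow rules attached to instances or subnets. As one concrete sketch, opening inbound HTTPS on an existing AWS security group with boto3 looks like this; the group ID is a placeholder, and the other clouds offer equivalent NSG or firewall APIs.

# Sketch: add an inbound HTTPS rule to an existing security group on AWS.
# The security-group ID is hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "allow HTTPS from anywhere"}],
    }],
)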

Traffic costs per GB: up to 10 TB per month: € 0.15; next 40 TB per month: € 0.095; next 100 TB per month: € 0.078; over 150 TB per month: € 0.069

Up to 5 GB per month: free; next 5 GB per month: € 0.075; next 100 TB per month: € 0.06; next 350 TB per month: € 0.043

Up to 10 TB per month: € 0.073; next 140 TB per month: € 0.063; next 350 TB per month: € 0.039

Up to 50 TB per month: € 0.06; next 150 TB per month: € 0.04

€ 0.05; € 0.078
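Because the egress prices above are tiered, the effective cost per GB falls with volume. The small calculator below applies the AWS tiers listed above; the 60 TB monthly volume is an arbitrary example, not part of the test setup.

# Tiered egress cost for the AWS prices listed above (EUR per GB, tier size in TB).
# The 60 TB monthly volume is an arbitrary example.
AWS_TIERS = [(10, 0.15), (40, 0.095), (100, 0.078), (float("inf"), 0.069)]

def egress_cost_eur(tb_per_month, tiers=AWS_TIERS):
    cost, remaining = 0.0, tb_per_month
    for tier_tb, price_per_gb in tiers:
        used = min(remaining, tier_tb)
        cost += used * 1024 * price_per_gb    # 1 TB = 1024 GB
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(f"60 TB/month on AWS: {egress_cost_eur(60):.2f} EUR")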

Security

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

– Integration with a SIEM (Security Information and Event Management) possible? – Security groups – Disk encryption – Network traffic analysis

yes / yes / yes / yes

yes / yes / yes / yes

yes / yes / yes / yes

no / yes / no / no

yes / yes / yes / yes

no / yes / yes / no

yes / yes / yes / yes

Protection against Denial of Service Attacks yes yes yes yes yes yes yes

Firewall: Does the cloud provider offer additional integrated security features, e.g. a next-generation firewall?

yes yes no no yes yes yes

Does the cloud provider keep an eye on current threats and take action? yes yes yes yes yes yes yes

Does the cloud provider support additional integrated security features for cloud resources using 3rd party tools:

– IDS (Intrusion Detection System) – IPS (Intrusion Prevention System) – ATP (Advanced Threat Protection)

yes / yes / yes

yes / yes / yes

yes / yes / yes

yes / yes / yes

yes / yes / yes

no / no / no

yes / yes / yes

Does the provider carry out regular penetration tests against the platform? no yes no no no no no

Image Service

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Which operating systems are offered by the provider?

Windows: – Windows Server 2016 – Windows Server 2012 – Windows Server 2012 R2 – Windows Server 2008 – Windows Server 2008 R2 – Windows Server 2003 R2

Linux: – CentOS – Amazon Linux – Gentoo – Mint – Debian – SUSE – FreeBSD – RHEL – SUSE Linux Enterprise Server – Ubuntu 18.04, 16.04, 14.04

Windows: – Windows Server, version 1709 – Windows Server 2016 – Windows Server 2012 R2 – Windows Server 2012 – Windows Server 2008 R2 SP1 – Windows Server 2008 SP2 – Windows 10

Linux: – CentOS-based 6.9 – CentOS-based 7.4 – ClearLinux – Container Linux – Debian 8 “Jessie” – Debian 9 “Stretch” – Red Hat Enterprise Linux 7.x – SLES 11SP4 – SLES 12SP3 – Ubuntu 14.04-LTS – Ubuntu 16.04-LTS – Ubuntu 18.04-LTS

Windows:Windows Server

– windows-1709-core – windows-1709-core-for-containers – windows-1803-core – windows-1803-core-for-containers – windows-2016 – windows-2016-core – windows-2012-r2 – windows-2012-r2-core – windows-2008-r2

Linux: – centos-6,7 – Container-Optimized OS from Google (cos-stable,beta,dev)

– coreos-stable,beta,alpha – debian-9 – rhel-6,7 – rhel-7-sap-apps – rhel-7-sap-hana – sles-11,12 – sles-12-sp3-sap – sles-12-sp2-sap – ubuntu-1804-lts,1710,1604-lts,1404-lts – ubuntu-minimal-1804-lts,1604-lts

Linux: – Ubuntu 18.04 LTS – Ubuntu 18.04 LTS sys11 optimized – Ubuntu 16.04 LTS – Ubuntu 16.04 LTS sys11 optimized – Ubuntu 14.04 LTS – Ubuntu 14.04 LTS sys11 optimized – Rescue Ubuntu 16.04 sys11 – Rescue Ubuntu 18.04 sys11

Windows: – Windows 2008 – Windows 2012 – Windows Server 2016

Linux: – openSUSE 42.x – CentOS 6.x, 7.x – Debian 8.x, 9.x – Fedora 24, 25, 26, 27 – EulerOS 2.x – Ubuntu 14.04.x – Ubuntu 16.04.x – SUSE Enterprise Linux 11, 12 – Oracle Linux 6.8, 7.2 – Red Hat Enterprise Linux 6.8, 7.3

Windows: – Windows Server 2012 R2

Linux: – openSUSE 42.x – CentOS 6.x, 7.x – Debian 8.x, 9.x – Fedora 25, 26, 27 – EulerOS 2.x – Ubuntu 14.04.x, 16.04.x, 18.04.x – SUSE Enterprise Linux 11, 12 – Oracle Linux 6.8, 7.2 – Red Hat Enterprise Linux 6.8, 7.3

Windows: – Standard 2016 – Standard 2012 R2 – Standard 2012

Linux: – CentOS-Minimal 7.X – CentOS-LAMP 7.X – CentOS-Minimal 6.X – CentOS-LAMP 6.X – Debian Minimal Stable 9.X – Debian Minimal Stable 8.X – Debian LAMP Stable 8.X – Red Hat Minimal 7.x – Red Hat LAMP 7.x – Red Hat Minimal 6.x – Red Hat LAMP 6.x – Ubuntu Minimal 18.04-LTS – Ubuntu LAMP 18.04-LTS – Ubuntu Minimal 16.04-LTS – Ubuntu LAMP 16.04-LTS – Ubuntu Minimal 14.04-LTS

Can users upload their own images?

yes

Supported formats: OVA file, VMDK, VHD, RAW

yes

Supported formats: VHD / VHDX

yes

Supported formats: VMDK, VHDX, VPC, VDI, QCOW2

yes

Supported formats: ISO, QCOW2, RAW, AKI, AMI, ARI

yes

Supported formats: VHD, VMDK, VHDX, QCOW2, RAW

yes yes

Supported formats: VHD, VMDK, QCOW2, AKI, ARI, AMI
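Uploading an own image is typically a two-step process: stage the disk file in the provider's object storage, then register it as an image. As one concrete sketch, importing a VMDK that has already been uploaded to S3 into AWS with boto3 looks as follows; the bucket and key are placeholders, and AWS additionally expects the vmimport service role to be configured.

# Sketch: register a VMDK previously uploaded to S3 as an AWS machine image.
# Bucket and key are hypothetical; requires the vmimport service role in the account.
# Other clouds accept QCOW2/VHD uploads through their respective image APIs.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

task = ec2.import_image(
    Description="golden image built on-premises",
    DiskContainers=[{
        "Format": "VMDK",
        "UserBucket": {"S3Bucket": "my-image-staging", "S3Key": "golden/appliance.vmdk"},
    }],
)
print("import task:", task["ImportTaskId"])   # track progress via describe_import_image_tasks()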

Can existing licenses be used to minimize costs?

yes yes yes yes yes n/a yes

Is there an image build service?

yes

Supported formats: OVA file, VMDK, VHD

yes

Supported formats: VHD / VHDX

yes

Supported formats: VMDK, VHDX, VPC, VDI, QCOW2

yes

Supported formats: ISO, QCOW2, RAW, AKI, AMI, ARI

yes

Supported formats: VHD, VMDK, VHDX, QCOW2, RAW

yes

Supported formats: ISO, PLOOP, QCOW2, RAW, VDI, VHD, VMDK, AKI, ARI, AMI, OVA, DOCKER

yes

Supported formats: VHD, VMDK, QCOW2, AKI, ARI, AMI

Can images be created from existing cloud instances? yes yes yes yes yes yes yes

Are different patch levels of images available? yes yes yes yes yes yes yes

Software as a Service / Applications

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Is an office suite offered? Is it deeply integrated with other services?

no / n/a

yes / no

yes / yes

yes / no

yes / yes

Managed App Services – AWS Step Functions – Amazon API Gateway – Amazon Elastic Transcoder – Amazon SWF

– Azure Stack – Security and Compliance – Backups and Archives – Disaster Recovery – Cosmos DB – Networks – Active Directory Services – Development and Testing Services – Mobile Services

– Google App Engine – GSuite

– Distributed Message Service – Simple Message Notification – Workspace

– Mobile Foundation – AppId – Mobile Analytics – Push Notifications

Mobile App Services – Push Notifications – User Management – NoSQL Database – File Storage – Messaging – Social Networks

AWS Mobile: yes / yes / yes / yes / yes / no

Azure Mobile App Service: yes / yes / yes / yes / yes / yes

Google Firebase / App Engine: yes / yes / yes / yes / yes / yes

IBM Mobile Foundation: yes / yes / yes / yes / yes / yes

Application Environments – Websites – Microservices – Messaging – Serverless

yes (AWS Lightsail) / yes (AWS Elastic Beanstalk) / yes (AWS SQS) / yes (AWS Lambda)

yes (Azure Web Sites) / yes (Azure Service Fabric) / yes (Azure Service Bus) / yes (Azure Functions)

no / yes (App Engine) / yes (Cloud Pub/Sub) / yes (Cloud Functions)

no / yes / yes (IBM Message Hub) / yes (Cloud Functions)
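The serverless environments listed above (AWS Lambda, Azure Functions, Cloud Functions) all reduce an application to an event handler that the platform invokes on demand. A minimal AWS Lambda-style sketch in Python:

# Minimal AWS Lambda-style handler: the platform calls this function per event,
# so there is no server, process manager or web framework to operate yourself.
import json

def handler(event, context):
    """Echo the incoming event; 'event' and 'context' are supplied by the platform."""
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }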

Rollback to a previous application version?

yes yes yes yes

Monitoring

Questions AWS Azure Google Cloud Platform SysEleven Stack OTC Noris Cloud IBM Cloud

Dashboard yes yes yes yes yes yes yes

Which cloud resources will be monitored?

– VMs – Apps – Network – LoadBalancer – Storage

yes / yes / yes / yes / yes

yes / yes / yes / yes / yes

yes / yes / yes / yes / yes

yes / no / yes / yes / no

yes / yes / yes / yes / yes

yes / yes / yes / yes / yes

yes / yes / yes / yes / yes
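As an example of how the monitored resources above can also be read out programmatically, the sketch below fetches a VM's average CPU utilisation for the last hour from AWS CloudWatch with boto3; the instance ID is a placeholder, and the other platforms expose comparable metric APIs alongside their dashboards.

# Sketch: read a VM's average CPU utilisation for the last hour from AWS CloudWatch.
# The instance ID is hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                 # 5-minute data points
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f} %')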

Connection/Usage of external monitoring solutions

yes yes yes no no no yes

Save your Early Bird Ticket: www.containerdays.io