Top Ten things to consider when moving Business Critical Applications (BCA) to the Cloud (Part 2 of 3)

 Allgemein, Cloud, Knowledge Base, Updates, VMware, VMware Partner, VMware Virtual Infrastructure, vSphere
Apr 02, 2016
 

In the first part we looked at public, private and hybrid clouds and their characteristics. In this part we will look at the common characteristics of business critical applications, and at how some of these characteristics relate to the different types of cloud infrastructure.

Common Characteristics of Business Critical Applications (BCA):

Business critical applications typically have very stringent SLAs and a direct impact on the business. They are the crown jewels of the business and need to be managed with utmost care to avoid loss of productivity, data and potential revenue. The major factors that directly affect these applications include the following:

Figure 6: BCA Cloud Considerations

Performance:

A majority of BCA require high performance, which is typically guaranteed to the business in the form of an SLA. Performance has a direct impact on productivity and revenue. Due to their inherently shared nature, public clouds cannot provide performance guarantees for BCA.

High Availability:

High availability has always been a critical requirement for BCA. Enterprises typically require four to five nines of availability for these applications. Public clouds today do not provide HA for customers: customers must re-architect their applications for resiliency and spend extra effort to achieve high availability in public cloud environments. Private and hybrid clouds have built-in HA capabilities that BCA can leverage.

Scalability:

BCA need an infrastructure that can scale up or down as needed. Public clouds are highly scalable and provide elastic scaling capabilities. Virtualized private clouds leveraging technologies like Hot Add can also dynamically scale the resources needed for BCA up or down. Short-term bursting and scalability are best handled by public clouds.

Security & Compliance:

Business critical data often has very stringent security requirements, and it is common for this data to be subject to industry and government regulations for audit and compliance. The sensitivity of the data, along with export regulations, can also determine whether it must be stored in house or can be held by an external provider. Public cloud providers offer robust tools to manage the environment securely and meet common business requirements, but the enterprise's information security team has to adapt its people and processes to the public cloud provider's tools. Private clouds can leverage existing toolsets, meet regulatory requirements with fine-grained control, and store the data where appropriate. Compliance processes involve considerable effort and complexity, so the solution should provide extensive features and flexibility; public cloud based solutions may undergo additional scrutiny, with extra steps required for compliance.

Tools:

Businesses leverage tools to manage and maintain their BCA. These tools are quite often evaluated, optimized and deployed over an extended period of time, with internal teams trained to use them effectively. Public clouds provide tools as part of the service, and customers must adapt and retool their processes around them. Private clouds can continue to leverage existing tools and processes for managing BCA.

Cost:

Due to the critical nature of BCA, cost takes a back seat to factors like performance, availability and other SLA-related considerations: the potential loss of revenue and productivity outweighs the need for reduced cost. From a cost perspective, private and hybrid clouds are best suited for BCA. Public clouds can be cost prohibitive because BCA must operate 24x7x365 under pay-as-you-go metering.
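The 24x7 cost point can be illustrated with a back-of-the-envelope break-even calculation. All rates below are hypothetical, chosen only to show the shape of the trade-off, not actual vendor pricing:

```shell
#!/bin/sh
# Hypothetical rates, in cents, purely for illustration:
PAYG_RATE_CENTS=50            # pay-as-you-go cost per instance-hour
PRIVATE_MONTHLY_CENTS=20000   # amortized private-cloud cost per instance-month
HOURS_FULL_MONTH=720          # 24x7 for a 30-day month

# Running one instance around the clock on pay-as-you-go:
PAYG_MONTHLY_CENTS=$((PAYG_RATE_CENTS * HOURS_FULL_MONTH))
echo "Pay-as-you-go, 24x7: \$$((PAYG_MONTHLY_CENTS / 100)) per month"

# Break-even point: below this many hours per month, pay-as-you-go wins;
# above it, the fixed-cost private cloud is cheaper.
BREAK_EVEN_HOURS=$((PRIVATE_MONTHLY_CENTS / PAYG_RATE_CENTS))
echo "Break-even at ${BREAK_EVEN_HOURS} hours per month"
```

At these illustrative rates, any workload running more than 400 hours a month is cheaper on the fixed-cost side, which is why always-on BCA rarely benefit from pure pay-as-you-go metering while short-lived workloads do.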

Operations:

Robust operations are very important for managing BCA: the infrastructure must be managed reliably to meet the SLAs required by the business. Operations are a combination of people, process and technology. Public clouds are operated by the cloud provider's operations team, while operational aspects on the customer side still have to be handled by internal teams. This split in operational tasks helps reduce costs, but there is less coordination and control over the environment. Private clouds are managed by the internal team, with full accountability for all operational aspects of the environment.

Prototyping:

New BCA under consideration are typically prototyped with a proof of concept as part of a feasibility study and the decision-making process. Prototyping needs infrastructure only for a short duration, with a quick turnaround for infrastructure readiness. Public cloud providers are very adept at providing the needed infrastructure quickly and for short-term needs. Private clouds do not necessarily have spare infrastructure available, and the turnaround time for prototyping needs is likely to be much longer.

Disaster Recovery:

Disaster recovery is a broad area that includes backup of critical data and the ability to recover from minor and major disasters. BCA data must be protected from disasters to comply with business and regulatory requirements. Legacy disaster recovery for BCA involved dedicated, identical infrastructure in a secondary site used only during testing and actual disasters. Virtualization and cloud computing have greatly simplified DR for BCA through automation tools and the ability to transfer entire virtual machines to the DR site, where they can be recovered on completely different hardware. Disaster recovery infrastructure is only needed for short periods, and the pay-as-you-go capabilities of the public cloud fit these requirements well.

Vendor Lock In:

This is a very important criterion for BCA. If a cloud vendor is highly proprietary and makes it hard for customers to move out, the customer loses control over the infrastructure and its costs. Public cloud vendors have traditionally locked in their customers: they provide robust tools to move in, but scant tools to move out. Lock-in can be avoided by using public cloud providers selectively, for the requirements that are least affected by it.

BCA Cloud Criteria Analysis:

Based on our discussion of the requirements for business critical applications, we can now compare how the different cloud options stack up against each other. For the sake of this analysis we will classify SaaS as a public cloud technology, to keep the comparison simple. Neither the public nor the private cloud on its own meets all the requirements for BCA. One can combine the two methodologies and leverage a hybrid cloud to get the best of both worlds while efficiently running BCA in the cloud.

Figure 7: BCA Criteria match by cloud type

Performance: Private Clouds can guarantee the performance for BCA, while Public Clouds cannot due to their shared nature.

High Availability: HA solutions are more prevalent in private clouds, while they have to be architected into public cloud solutions.

Scalability: Public cloud solutions have effectively unbounded resources and are more scalable than resource-constrained private clouds. One can mix and match the two and burst to public clouds for high-demand, short-term application needs.

Security: Even though public clouds provide robust security tools, the external location of the data and its management makes them incompatible with some security requirements. Private clouds can be customized to meet all security requirements.

Tools: Robust tools are available in both the private and public clouds for BCA management.

Cost: For short-term needs the pay-as-you-go public cloud model can be cost effective. For round-the-clock applications, the private cloud is cost effective. A hybrid cloud mixing these two cost models provides the most effective solution.

Operations: The public cloud includes infrastructure operations as part of the service, but customer-specific processes still have to be managed by the customer's own operations team. In a private cloud a single team manages both components. Private cloud management requires more personnel and can be more expensive. A hybrid cloud can leverage the best of both worlds.

Prototyping: Prototyping and proofs of concept are best addressed in public clouds, with fast infrastructure turnaround and a pay-as-you-go model. Private clouds are less suitable for these requirements.

Disaster Recovery: Public clouds are great for providing infrastructure on an as-needed basis: rather than maintaining dedicated disaster recovery infrastructure, it can be leased for the short duration of tests and actual disasters. Another consideration is what kind of conversion, if any, is required to run the BCA workload with the cloud provider. The recovery time objectives (RTO) for BCA are very stringent, and any conversion across platforms will significantly increase the time to recover. Ideally, if the BCA is recovered on a platform identical to the source and automated tools are leveraged, an excellent RTO can be achieved. The time required for testing also depends on compatibility between the source and destination platforms.

 

The post Top Ten things to consider when moving Business Critical Applications (BCA) to the Cloud (Part 2 of 3) appeared first on Virtualize Business Critical Applications.

Useful for Devs: Three new Log Insight Content Packs (.NET, Microsoft SQL Server, and Oracle JRE)

 Allgemein, Knowledge Base, Updates, VMware, VMware Partner, VMware Virtual Infrastructure, vSphere
Nov 03, 2014
 
VMware has recently released three mainstream content packs to the Log Insight Solution Exchange download site, for application developers and system administrators. These releases cover .NET, Microsoft SQL Server, and Oracle JRE (Java Runtime Environment). All of these content packs include customized widgets for graphically viewing significant log events, including fatal crash events, within […]

Effect of VAAI on cloning with All Flash Arrays:

 Allgemein, Knowledge Base, Updates, VMware, VMware Partner, VMware Virtual Infrastructure, vSphere
Apr 29, 2014
 
Cloning virtual machines is an area where VAAI can provide many advantages. Flash storage arrays provide excellent IO performance, and we wanted to see what difference VAAI makes in virtual machine cloning operations on all-flash arrays.

The following components were used for testing VAAI performance on an all Flash storage array:

  1. Dell R910 server with 40 cores and 256 GB RAM
  2. Pure Storage FA-400 flash array with two shelves, comprising 44 x 238 GB flash drives and 8.2 TB of usable capacity
  3. CentOS Linux virtual machine with 4 vCPUs, 8 GB RAM, a 16 GB OS/boot disk and a 500 GB data disk, all on the Pure Storage array
  4. Software iSCSI on dedicated 10 Gbps ports

Test Virtual Machine:

The virtual machine used for testing was a generic CentOS Linux system with a second virtual data disk of 500 GB capacity. To truly exercise the cloning process, we wanted this data disk to be filled with random data. Random data ensures that the data being copied is not repetitive in any way and cannot easily be compressed or deduplicated.

Preparing the Data Disk:

The following "dd" command was used on Linux to create a large 460 GB file of random data:

dd if=/dev/urandom of=/thinprov/500gb_file bs=1M count=460000
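As a quick sanity check (using a 1 MiB sample here rather than the full 460 GB), compressing a slice of /dev/urandom output confirms that such data gains nothing from compression or deduplication:

```shell
#!/bin/sh
# Write a 1 MiB sample of random data, then compare raw vs. gzipped size.
dd if=/dev/urandom of=/tmp/rand_sample bs=1M count=1 2>/dev/null
ORIG=$(wc -c < /tmp/rand_sample)
COMP=$(gzip -c /tmp/rand_sample | wc -c)
# Random data is incompressible, so the gzipped size is not smaller;
# gzip framing typically makes it slightly larger than the input.
echo "original: ${ORIG} bytes, gzipped: ${COMP} bytes"
```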

The disk space used on the data disk is shown below; it contains only the random data file generated with the dd command.

[root@linux01 thinprov]# df

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/mapper/VolGroup00-LogVol00 10220744 2710700 6982480 28% /

/dev/sda1 101086 20195 75672 22% /boot

tmpfs 4087224 0 4087224 0% /dev/shm

/dev/sdb1 516054864 469853428 19987376 96% /thinprov

Tuning for VAAI and best performance:

VAAI can be enabled or disabled using the following settings (1 enables, 0 disables):

esxcli system settings advanced set --int-value 1 -o /DataMover/HardwareAcceleratedMove

esxcli system settings advanced set --int-value 1 -o /DataMover/HardwareAcceleratedInit

esxcli system settings advanced set --int-value 1 -o /VMFS3/HardwareAcceleratedLocking

esxcli system settings advanced set --int-value 1 -o /VMFS3/EnableBlockDelete
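Before and after changing them, the current values can be verified on the host with `esxcli system settings advanced list` (a sketch; `esxcli` is only available in an ESXi shell, so this cannot run on a general-purpose system):

```shell
# Print the current value of each VAAI-related advanced setting.
for opt in /DataMover/HardwareAcceleratedMove \
           /DataMover/HardwareAcceleratedInit \
           /VMFS3/HardwareAcceleratedLocking \
           /VMFS3/EnableBlockDelete; do
  esxcli system settings advanced list -o "$opt"
done
```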

Adjust the maximum hardware transfer size for better copy performance:

esxcli system settings advanced set --int-value 16384 -o /DataMover/MaxHWTransferSize

For larger I/O sizes, experiments found that setting IOPS to 1 has a positive effect on latency:

esxcli storage nmp psp roundrobin deviceconfig set -d <device> -I 1 -t iops

On ESXi 5.5, DSNRO (the number of outstanding disk scheduler requests) can be set on a per-LUN basis:

esxcli storage core device set -d <device> -O 256

Set Disk SchedQuantum to its maximum (64):

esxcli system settings advanced set --int-value 64 -o /Disk/SchedQuantum

Phase 1: Cloning with VAAI disabled:

For the first phase of the study VAAI was turned off and the settings validated. The cloning process was initiated for the Linux virtual machine and some of the key metrics were observed and captured at the storage array and in vCenter performance charts.

The cloning process was carefully monitored and the time for the cloning operation was observed to be 63 minutes.

 

The period in the chart between 2:06 and 3:09 PM represents the cloning operation, shown as the blue area. During this operation there is a spike in latency (>2 ms), IOPS (5,000) and bandwidth utilization (around 420 MBps).

Phase 2: Cloning with VAAI Enabled:

For the second phase of the study VAAI was turned on and the settings validated. The cloning process was initiated for the Linux virtual machine and some of the key metrics were observed and captured at the storage array and in vCenter performance charts.

The cloning process was carefully monitored and the time for the cloning operation was observed to be 19 minutes.

 

The period in the chart between 3:54 and 4:13 PM represents the cloning operation, shown as the blue area. During this operation there is only a minimal spike in latency (0.5 ms), IOPS (3,000) and bandwidth utilization (around 10 MBps).

The network usage chart is consistent with the low 10 MBps average utilization during the cloning operation: unlike the non-VAAI run, network utilization at the vSphere host level shows no increase at all. This clearly shows that with VAAI, all the data movement occurs within the storage array, with no impact on the vSphere host.

Effect of VAAI on the cloning operation:

The observations highlight the huge impact VAAI has on a large copy operation such as a VM clone. Cloning a VM with 500 GB of random data benefits significantly from the use of VAAI-compliant storage, as shown in the following table.

 

VAAI-capable arrays, such as the Pure Storage array used in this study, dramatically improve write-intensive operations like cloning by reducing the duration of impact, latency, IOPS and bandwidth consumed. This study shows that even all-flash arrays with fast disks and huge IOPS can benefit significantly from VAAI for cloning.
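From the measured times (63 minutes without VAAI, 19 minutes with it, for roughly 460 GB of data), the improvement works out as follows. The effective rates are a back-of-the-envelope derivation from those figures, not separately measured values:

```shell
#!/bin/sh
# Derive speedup and effective copy rates from the observed clone times.
DATA_GB=460; NO_VAAI_MIN=63; VAAI_MIN=19

# Speedup factor (awk is used for floating-point arithmetic in sh):
awk -v a="$NO_VAAI_MIN" -v b="$VAAI_MIN" \
    'BEGIN { printf "speedup: %.1fx\n", a / b }'

# Effective copy rate in MB/s for each case:
awk -v gb="$DATA_GB" -v m="$NO_VAAI_MIN" \
    'BEGIN { printf "without VAAI: %.0f MB/s effective\n", gb * 1024 / (m * 60) }'
awk -v gb="$DATA_GB" -v m="$VAAI_MIN" \
    'BEGIN { printf "with VAAI:    %.0f MB/s effective\n", gb * 1024 / (m * 60) }'
```

This works out to roughly a 3.3x speedup, achieved while simultaneously offloading the data path from the host.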

 

 

Apr 23, 2011
 

1. Open the registry with regedit.exe

2. Navigate to the following key: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\vctomcat\Parameters\Java

3. Change these entries to the following values:

JvmMs = 0x0

JvmMx = 0x0

JvmSs = 0x0

4. Restart the service "VMware VirtualCenter Management Webservices"

5. Enjoy and relax 🙂
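The steps above can also be scripted from an elevated Windows command prompt (a sketch; the service short name `vctomcat` is inferred from the registry path above and may differ between vCenter versions):

```shell
:: Zero out the Java memory parameters so the service computes its defaults.
set KEY=HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\vctomcat\Parameters\Java
reg add "%KEY%" /v JvmMs /t REG_DWORD /d 0 /f
reg add "%KEY%" /v JvmMx /t REG_DWORD /d 0 /f
reg add "%KEY%" /v JvmSs /t REG_DWORD /d 0 /f
:: Restart the service so the new values take effect.
net stop vctomcat && net start vctomcat
```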


VMware and Salesforce.com plan a Java cloud

 Allgemein, Cloud, Updates
Apr 28, 2010
 

VMForce is coming….

Source: http://www.tecchannel.de/

VMforce

VMware and Salesforce.com plan a Java cloud

With the joint development "VMforce", VMware and Salesforce.com want to pave Enterprise Java's way into the cloud.
VMforce combines Java with the Spring Java framework and Salesforce's Force.com cloud platform. The "tc Server Runtime", an enterprise edition of Apache Tomcat, serves as the server for VMforce applications. Force.com provides the relational database, various services (including search, identity and security, workflow, reporting and analytics, an API for integrating web services, and mobile use) as well as the "Chatter Services" (profiles, status updates, groups, feeds, document sharing, Chatter API) for "Web 2.0" applications. The infrastructure, virtualized with "vSphere", is managed using "vCloud".
Figure: VMforce is intended to give Java developers the same architecture as an on-premise "private cloud".
"Now enterprise Java developers can create powerful and innovative 'Cloud 2' apps," promises Salesforce.com CEO Marc Benioff. His VMware counterpart Paul Maritz adds: "Customers can thus combine their existing internal investments with the flexibility and resources of the cloud." Among other things, users are expected to benefit from VMforce's automatic scaling and no longer have to worry about adapting the scaling of app servers, databases or infrastructure to changing performance requirements.
Further information on VMforce and "Open PaaS" can be found in two blog posts at VMware and at Salesforce.
VMforce is being announced today and is expected to be available as a developer preview "later in 2010", presumably toward the end of the year. Pricing will be announced then. (Computerwoche/cvi)
