By Girish Manmadkar, an SAP Virtualization Architect at VMware
Earlier this month, my colleague David Gallant wrote about architecting a software-defined data center for SAP and other business-critical applications. I’d like to further explore how SAP fits into the software-defined data center (SDDC) and, specifically, how to optimize it for CapEx and OpEx savings.
A key point to remember is that the SDDC is not a single technology that you purchase and install—it is a use case, a strategy, a mind shift. In that way, it is also a journey that unfolds in stages and should be planned accordingly. I’ve outlined the three foundational steps below.
Most of the customers that I work with are well along in this stage, moving their current non-x86 SAP workloads toward a VMware-based x86 environment.
During this process, numerous milestones can be delivered to the business, in particular an immediate reduction in CapEx. This benefit is achieved by moving non-x86 or current physical x86 workloads to a virtualized x86 platform. Understandably, customers tend to approach this transition with caution, so we often start with the low-hanging fruit: non-production and/or development SAP systems.
The next step you can take is to introduce automation. Automation applies at two layers: the infrastructure layer, which is achieved using VMware vCloud Automation Center and Orchestration; and the application layer, delivered using SAP’s Landscape Virtualization Manager.
During this phase it is best to implement vSphere features, including Auto Deploy, host profiles, and OS templates, in order to automate vSphere and virtual machine provisioning in the environment.
Often it is a good idea at this time to start a parallel project around storage. You can work with your storage and backup teams to enhance current architectures by enabling storage technologies such as deduplication, vSphere Storage I/O Control, and other storage array plugins.
We also recommend minimizing agents in the guest operating system, such as agents used for backup and/or anti-virus. The team should start putting together a new architecture that moves such agents from the guest OS to the vSphere hosts to reduce complexity and improve performance. The storage and network teams should likewise look to implement an architecture that will support a virtualized disaster recovery solution. By planning ahead now, teams can avoid rework later.
During this phase, the team not only migrates SAP application servers to the vSphere platform but also shows business value with CapEx reductions and value-added flexibility to scale out SAP application server capacity on demand.
Once this first stage goes into the operations cycle, it lays the groundwork for various aspects of the SDDC’s second stage. The next shift is toward a converged datacenter or common virtualization framework to deploy a software-defined lifecycle for SAP. This allows better monitoring, migration to the cloud, chargeback, and security.
This is also the phase where you want to virtualize your SAP central instances, or ASCS instances, and database servers. The value here is the removal of a reliance on complex, physical clustered environments by transitioning instead to VMware’s high-availability features. These include fault tolerance (FT), applied where the SAP sizing exercise for the ASCS warrants it and focused on meeting the business’s SLAs.
Once this second stage of the SDDC is in production, it is a good time to start defining other aspects of the SDDC, such as Infrastructure-as-a-Service, Platform-as-a-Service, Storage-as-a-Service, and Disaster-Recovery-as-a-Service.
Keep an eye out for our follow-up post fleshing out the processes and benefits of these later stages.
Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture designs, and implementation, including disaster recovery.
AUTHOR: Arron Lock
I recently presented onstage at the Enterprise Mobility: BYOD event here in London with a couple of respected peers in the industry from AirWatch and Swivel, as well as a VMware colleague. After taking questions from the audience around security for BYOD, my main takeaway from the event, and this session in particular, is that there is still a lot of confusion around mobility in general. In fact, mobility is becoming a catch-all phrase for end-user computing (EUC) transformation.
It’s amazing how quickly the line between BYOD and enterprise mobility (EM) became blurred. A number of people in the audience had deployed some form of BYOD (most for smartphones) to enable employees to get access to email and calendar. But others, typically with a higher level of risk associated with externalising email, were struggling with the business case.
But BYOD is only one aspect of mobility. The main benefit from enabling users to become more mobile is realised when you mobilise the business workflow that they are part of. This started off as email for executives—with their exec toys such as tablets or the latest smartphone—an obvious use case. But when a business is looking holistically at mobility, there are many other opportunities such as enabling field engineers or the sales force to be more productive. So, the important message that I stressed to my audience wasn’t about technology or security, but rather to ensure they establish the business justification for doing this work before jumping in with both feet.
It’s paramount to gather the business requirements first, by engaging the business stakeholders to understand their needs. I recently helped an IT director of a large multinational company interpret the corporate strategy to collaborate with industry partners. I took the IT team and line-of-business stakeholders through a defined process of the steps required to build out a highly agile externalisation platform. Now they have users accessing the platform from any device and any location—24/7—and they manage everything from the centre since the devices are unknown to them. It’s like BYOD to an extent, but tied to a major business initiative and nothing to do with smartphones.
As with this example, the most successful IT projects are those linked to business outcomes. The EUC space has long been at the unfortunate end of the IT spectrum in that it provides the general tools and services that employees expect to use on a day-to-day basis, yet the solutions do not appear to deliver direct value to the business.
So where did I begin with the aforementioned IT organisation? First, I got my client to stop starting with technology, which in my experience is often the root cause of an IT project’s failure—deploying technology without considering the “why.” Here’s how I approached this project and others like it with my clients:
- Set a clear and simple mission statement with your business stakeholders.
- Kick off with a discovery workshop with the key stakeholders from IT and the business.
- Capture the business requirements in a clear and concise format, under the following headings:
- User Experience
- Application Delivery
- Performance, Availability and Scalability
- Service Wrapper
- Use the MoSCoW model or similar to help set priorities.
- Review your findings regularly with key stakeholders.
Once this series of steps is complete, I find that my clients are in a good place to communicate internally to key stakeholders as well as externally to potential vendors of technology solutions—communication that paves the way to move forward to build out the business case and functional and technical designs for the solution.
Arron Lock is an EUC business solutions architect with Accelerate Advisory Services and is based in the UK. Follow him on Twitter @arron_lock
RELATED: To learn more about the trends in mobile adoption and how IT is adapting, read the Mobile Rebels research report. This VMware-commissioned study provides insight to the pressures European businesses are facing and reveals just how dependent employees have become on their mobile devices.
We’ve heard over and over again how people are helped by the stories we share on our Certification Pros page. This is a place for VCPs, VCAPs, and VCDXs to share their experiences with, and advice on preparing for, VMware certifications. It’s one of the many ways the VMware certification community comes together to share the wealth of its knowledge.
With the launch of the VMware Certified Associate certification this summer, we are eager to gather stories about VCAs to add to the Certification Pros page and to feature on this blog. Have you passed a VCA exam or know someone who has? Please contact us at email@example.com. We look forward to hearing from you!
A common question that I have seen recently around Virtual SAN (VSAN) is how limits on number of components, disks, etc., translate into capacity and policy limits. I will attempt to cover some of the basics in this post. However for a deeper explanation on the various sizing and design considerations, please check out the updated Design & Sizing Guide which you can find in the VSAN beta community documents section. If you are not yet signed up for the VSAN beta, why not? Click here to register. The beta community has a wealth of information, including documentation, hardware guidance and some great discussions with our R&D engineers. A great place to start if you just want to read up on Virtual SAN, or indeed, kick its proverbial tires.
Let’s begin with components. I have put a deeper description of objects and components in an earlier post here. In a nutshell, a virtual machine can have a policy that defines stripe width and/or availability through mirroring. These stripes and replicas are made up of components, and there is a maximum of 3000 components per host. This is an important consideration if you wish to use policies that have a high stripe width or a high failures-to-tolerate setting, since each of these contributes to component consumption, and each virtual machine deployed on VSAN with such a policy will consume components accordingly. However, you would need a lot of VMs, with a large stripe width and a large failures-to-tolerate setting, before getting close to this limit.
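To make the arithmetic concrete, here is a minimal sketch of how stripe width and failures to tolerate multiply into components. The helper name is hypothetical, and the estimate deliberately ignores witness components (which add a few more per object) as well as swap and snapshot objects, so treat the numbers as a lower bound:

```python
# Rough estimate of VSAN data-component consumption per VMDK object.
# Hypothetical helper; ignores witness, swap, and snapshot components.
def estimate_components(ftt, stripe_width):
    replicas = ftt + 1              # one extra copy per failure to tolerate
    return replicas * stripe_width  # each copy is striped across this many components

# Example: FTT=1 with a stripe width of 2 yields at least 4 data components
per_vmdk = estimate_components(ftt=1, stripe_width=2)
print(per_vmdk)  # 4

# Against the 3000-components-per-host limit, that is an upper bound of:
print(3000 // per_vmdk)  # 750 VMDKs' worth of data components per host
```

Even this crude estimate shows why only very large deployments with aggressive policies approach the per-host limit.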
How many disks?
The next discussion concerns the number of disks that can be deployed in VSAN. Again, there are some limits here that need to be considered by anyone designing a VSAN implementation. VSAN uses the concept of disk groups as containers for HDDs and SSDs. A disk group can contain only one SSD, but since the November 2013 VSAN beta refresh, a disk group can now contain up to 7 HDDs (magnetic disks). A host can have a maximum of 5 disk groups. Some simple math tells us that a single host in a VSAN cluster can have 5 SSDs and 35 HDDs. However, you need to ensure that your storage controller can manage that many disk drives; that is a conversation to have with your hardware vendor. You should also check the VSAN HCL for a list of supported storage controllers (this is still a work in progress, and new controllers are being validated and tested all the time). Also remember that VSAN supports scale-out, so you can start small and build out larger environments over time, including hot-adding disks to servers and hot-adding hosts to the VSAN cluster. Currently we support a maximum of 8 hosts in a cluster; some more simple math gives you a maximum of 40 SSDs and 280 HDDs in a fully configured VSAN cluster.
How much HDD capacity do I actually need?
Now that we know the disk limits, how much capacity do I actually need?
The “FailuresToTolerate” policy setting plays an important role in this consideration. There is a direct relationship between the number of failures to tolerate and the number of replicas of a virtual machine’s storage. For example, if the number of failures to tolerate is set to 1 in the VM storage policy, then a single mirror of the VMDK is created on local disks on another host. If “FailuresToTolerate” (FTT) is set to 2, then there are two replicas of the VMDK across the cluster. The following formula can assist in calculating how much HDD one needs:
How much HDD do I need = VMDK Size * (FTT + 1)
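The formula above can be sketched as a small helper, with a couple of worked examples. The function name and GB units are my own choice for illustration:

```python
def hdd_needed_gb(vmdk_size_gb, ftt):
    """Raw HDD capacity consumed by one VMDK: the original copy
    plus one replica per failure to tolerate."""
    return vmdk_size_gb * (ftt + 1)

# A 100 GB VMDK with the default FTT=1 consumes 200 GB of raw HDD
print(hdd_needed_gb(100, ftt=1))  # 200

# The same VMDK with FTT=2 consumes 300 GB
print(hdd_needed_gb(100, ftt=2))  # 300
```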
How much SSD capacity do I need?
There are some additional considerations when trying to figure out how much SSD you need for Virtual SAN. Internally at VMware we use a rule of thumb that SSD capacity in the VSAN cluster should be approximately 10% of HDD capacity. This rule of thumb is used to reflect the average working data set of an application running in a VM. Whilst not perfect, VMware feels this is adequate for ball-park sizing of SSD capacity.
So, as per the previous example, with a default policy value of “FailuresToTolerate” (FTT) set to 1, write cache will be mirrored since writes go to the SSD on both hosts before being de-staged to magnetic disks on those hosts. This means you need to consider increasing the amount of SSD allocated per virtual machine as the “FailuresToTolerate” policy setting increases.
The amount of SSD can be calculated using the following formula:
How much SSD do I need = (VMDK Size * 10%) * (FTT + 1)
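Combining the 10% rule of thumb with the FTT multiplier gives a similar helper. Again, the function name and the default ratio parameter are illustrative; adjust the ratio if your working set differs from the rule of thumb:

```python
def ssd_needed_gb(vmdk_size_gb, ftt, flash_ratio=0.10):
    """Ball-park SSD capacity for one VMDK: 10% of its size
    (the assumed working set), mirrored once per failure to tolerate."""
    return (vmdk_size_gb * flash_ratio) * (ftt + 1)

# A 100 GB VMDK with FTT=1 calls for roughly 20 GB of SSD
print(ssd_needed_gb(100, ftt=1))  # 20.0
```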
However, additional considerations come into play when you wish to ensure that there is spare capacity to handle failures in the cluster and still have optimally configured virtual machines. VSAN does of course offer high availability for virtual machines through the use of policies and replicas. But what if you want hot-spares? This has come up a lot in conversations, and the answer is that the whole cluster can act as the hot-spare—but you must provision for it. This means that should a failure occur, your virtual machines will still be available, and you can then have VSAN rebuild the components that were on the failed host or disk, so your virtual machines can tolerate another future failure in the cluster.
To reiterate, this is optional, since a failure will not impact your virtual machines if “FailuresToTolerate” is set to at least 1. The largest failure in the cluster could be a complete host failure. Therefore you will need to ensure that there is enough free SSD and HDD capacity available in the cluster to tolerate at least one host failure if you want your virtual machine storage objects to be rebuilt and come back into compliance after a failure occurs. If you have not followed recommendations and are using a non-uniform host configuration with different disk capacities, you will need to ensure that there is enough capacity in the cluster to tolerate a failure of the largest host.
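As a rough sanity check for the uniform-host case, the surviving hosts' free capacity must be able to absorb everything the failed host held. This is a simplified sketch of my own, assuming uniform hosts and evenly spread data, not an official sizing tool:

```python
def enough_spare_for_host_failure(per_host_capacity_gb, used_per_host_gb, hosts):
    """After losing one host, the remaining hosts must absorb the data it
    held using only their free space. Assumes uniform hosts and evenly
    distributed data (a simplification)."""
    surviving_free = (hosts - 1) * (per_host_capacity_gb - used_per_host_gb)
    return surviving_free >= used_per_host_gb

# 8 hosts, 1000 GB each, 700 GB used per host: plenty of headroom to rebuild
print(enough_spare_for_host_failure(1000, 700, 8))  # True

# At 900 GB used per host, the cluster cannot rebuild a full host's data
print(enough_spare_for_host_failure(1000, 900, 8))  # False
```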
So as you can see, there are a lot of things to consider when trying to get the design and sizing of a Virtual SAN just right. For a definitive guide, head over to the VSAN beta community and get the latest Design & Sizing Guide for more information.
Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @VMwareStorage