Visit the Pivotal Partner Academy for the latest Cloud Application Platform training. VMware Application Management and Data Management competencies will no longer be offered; however, those of you who already have these competencies will maintain your current access and partner level.
According to the Disaster Recovery Preparedness Council's 2014 Annual Report, organizations worldwide could – and should – be doing a lot more to prepare for a disaster. The benchmark survey results demonstrate severe shortcomings in disaster recovery preparedness for companies worldwide, with 3 in 4 companies at risk due to nonexistent or inadequate disaster recovery plans.
Below are three common challenges organizations face when it comes to implementing a comprehensive DR strategy and how they can overcome these challenges.
The Disaster Recovery Preparedness Council's 2014 results reveal that more than 60% of organizations lack a fully documented DR plan, with another 40% admitting that the DR plan they have isn't actually useful when faced with their worst-case scenario.
However, there are ways to improve preparedness at a fraction of the cost of duplicating infrastructure or maintaining additional data centers. In fact, IDG's recent survey states that 43% of respondents are getting started with hybrid cloud to improve their disaster recovery capabilities.
Take a look at cloud-based disaster recovery options like VMware vCloud® Air™ Disaster Recovery. vCloud Air Disaster Recovery allows users to protect data and applications by helping businesses prepare a robust plan for what needs to be recovered in the event of a disaster. It also allows users to define critical recovery time objectives (RTO) and recovery point objectives (RPO) to help determine how quickly you need to recover and how much downtime is tolerable.
Even when an organization has a DR plan in place, it doesn't mean that the plan has been tested or validated against a worst-case scenario outage or event. According to the Disaster Recovery Preparedness Council's 2014 report, a third of all organizations report that they only test their DR plans once or twice a year, and more alarmingly, one in four never test their DR plans.
In addition, more than one in three organizations have reported losing one or more critical applications, VMs, or critical data files for hours at a time over the past year, while nearly one in five companies have reported losing one or more critical applications over a period of days.
To prevent this, a cloud-based approach to disaster recovery makes it easy to test critical applications to validate that they will recover within their RTOs and RPOs without overburdening critical resources. vCloud Air Disaster Recovery allows customers to test various failover scenarios as often as necessary during the service period by offering individual failover tests.
Most traditional DR solutions require organizations to have not only the budget, but also the staff with the necessary skills to manage a comprehensive DR plan. The Disaster Recovery Preparedness Council's 2014 survey found that nearly two-thirds of organizations believe that their DR plan is underfunded, while almost half of organizations are unsure about what is spent on DR or whether they have budget for it at all.
Still, the financial risk to organizations that don't have a DR plan in place far outweighs the initial upfront investment. Losses reported from outages can range from a few thousand dollars to millions of dollars, with nearly 20% of survey respondents indicating losses of more than $50,000, up to over $5 million.
When it comes to supplementing continuity plans while addressing budget, time and resource constraints, getting up and running quickly is critical. With vCloud Air Disaster Recovery, you get a single interface and common management tools that enable access from other VMware applications. This provides you with a familiar and comfortable way to handle workflow execution and task management, from both the VMware vSphere® Web Client and the vCloud Air console, thus ensuring disaster recovery environment access at all times.
For more information about how to close the gaps in your disaster recovery strategy, read the below infographic.
To learn more about vCloud Air Disaster Recovery, visit us at vCloud.VMware.com.
Be sure to subscribe to the vCloud blog, follow @vCloud on Twitter, or 'like' us on Facebook for future updates.
Name: Theresa Kushner
Role: Vice President, Enterprise Information Management
Office Location: Palo Alto, California
Years at VMware: 2
Twitter handle: @tkushner
The 2014 theme for the Grace Hopper Celebration of Women in Computing is Be Inspired. What inspired you to pursue a career in technology?
Sometimes the best inspiration is fear. I came to a career in technology reluctantly. Although my father owned a computer software company when I was in high school, I did everything I could to avoid computers and technology. After I graduated from college, I realized I couldn't pay my credit card bill because of the low pay my chosen field provided. I discovered that computer firms in the technology industry paid better than any of the other industries and my skills were transferrable. My first technology job was at Texas Instruments in product marketing for consumer products. Since then, I've spent my entire career in the world of technology with IBM, Cisco Systems and now, VMware. The fear of not being able to pay off my credit card forced me to open a new door.
What exciting things are taking shape for VMware people because of the VMwomen initiative?
Launched as an enterprise-wide initiative in January 2014, the VMwomen initiative at VMware is taking actionable and measurable steps at all levels to increase the representation of women so that VMware leads our industry globally. We're delivering programs to help propel these goals, including:
- TALK, a monthly speakers series with internal and external guest speakers
- DIALOGUE, a pilot peer mentoring program designed to help women accelerate their professional development by working with senior advisers
- Unconscious Bias training, which provides a common framework and language to uncover unconscious biases that unfold in the workplace
These programs, supported by action plans up and down the organization, are helping to broker conversations and behavior change while providing an environment that openly supports diversity.
You are sitting on the Accountability and Metrics for Gender Diversity panel at the Grace Hopper Conference (#GHC14). Can you provide insight into the panel topic?
If we truly want to improve the representation of women in computing at all levels, then we need to focus on metrics first. Having metrics tells us where we are starting from and helps us decide where we want to go. It's just good business. Having metrics also makes it possible to hold people accountable. At VMware, our CEO Pat Gelsinger has insisted on metrics to set the stage and to guide us. He uses these metrics to help recognize where we need to concentrate our efforts and to understand which of our actions are helping. Pat holds his executive team accountable for progress – measurable progress. You can learn more about what we're bringing to life here at VMware and within the technology community on this topic at the Accountability and Metrics for Gender Diversity panel at #GHC14 on Thursday, October 9 from 10:15 – 11:15 am.
What are you most looking forward to about #GHC14?
I get excited by engaging with the next generation of women technologists. They bring such different perspectives and I am constantly amazed at their intelligence and creativity. More than anything, I'm looking forward to these interactions.
Fill in the blank: Be inspired to... be data driven!
Search all our open positions worldwide
Connect with us at VMware Careers
Learn more about the workplace culture at VMware, see pics of our offices, talk to recruiters, and get real time job openings by following us on our social pages:
LinkedIn Group 'VMware Careers'
You are probably familiar with the Virtual SAN networking requirement of Layer 2 multicast. Today we would like to discuss why Virtual SAN leverages multicast forwarding for a portion of its network traffic, and provide troubleshooting steps for when multicast traffic does not appear to be reaching the Virtual SAN VMkernel ports. The goal of this article is to educate the networking novice while also providing clarification for networking experts, so we will take a thorough, ground-up approach.
If you need to jump directly to the testing examples, click the link: Testing Multicast functionality. You will also want to make sure that you are following the guidelines below.
Virtual SAN Multicast Guidelines
- Layer 2 multicast v2 enabled for the VSAN VMkernel network
- Layer 3 multicast is not required
- VSAN VMkernel multicast traffic should be isolated to a layer 2 non-routable VLAN
- We do not recommend implementing multicast flooding across all ports as a best practice.
- IGMP snooping and an IGMP querier can be used to limit multicast traffic to specific switch ports. This is beneficial if other, non-Virtual SAN network devices exist on the same layer 2 network segment (VLAN). If only Virtual SAN VMkernel ports exist on a particular VLAN, IGMP snooping will not offer any benefit. Note: An IGMP querier is necessary to maintain the membership tables that IGMP snooping needs in order to function.
- Two Virtual SAN clusters can exist peacefully on the same layer 2 network segment; however, both clusters will receive all of the multicast traffic from the other cluster. If a networking issue arises in this scenario (e.g. vCenter displaying “Network Status: Misconfiguration detected”), the Ruby vSphere Console command “vsan.reapply_vsan_vmknic_config” may resolve the issue (click here for the RVC blog series for instructions on accessing RVC).
As a suggestion for performance optimization, if two Virtual SAN clusters do exist on the same layer 2 network segment, modifying the multicast addresses for one of the clusters will reduce the amount of multicast traffic received for each Virtual SAN cluster and possibly resolve the “Network Status: Misconfiguration detected” message as well. For instructions on modifying the multicast addresses of the cluster, please see VMware KB 2075451: Changing the multicast address used for a VMware Virtual SAN Cluster.
- Use tcpdump-uw and nc to validate that each Virtual SAN node can send and receive multicast traffic.
For a complete list of Virtual SAN networking requirements please check out the Virtual SAN Networking Requirements and Best Practices from the VMware vSphere 5.5 Documentation Center.
Network Datagram Forwarding Schemes
Forwarding schemes for network datagrams differ in their delivery methodologies. Below you will find a list of the most common network forwarding schemes:
- Unicast delivers a message to a single specific network host
- Broadcast delivers a message to all hosts in a network
- Multicast delivers a message to a group of hosts that have expressed interest in receiving the message
- Anycast delivers a message to anyone out of a group of hosts, typically the one nearest to the source
- Geocast delivers a message to a geographic area
Unicast Network Transmission
Unicast forwarding is the predominant delivery mechanism used for the sending of network datagrams (network traffic) over IP based networks. A datagram is simply a basic transfer unit of network traffic associated with packet-switched networks. In unicast, network datagrams are intended for delivery to a single network destination that is identified by a unique network address. Though network traffic that is sent via unicast may be forwarded across multiple devices to get to the intended receiver, the intention of the forwarding devices is to simply read the header of the packet, not the data payload, in order to identify the destination address for proper forwarding.
Think of unicast as someone having a conversation with a friend. You may converse face-to-face or through another medium such as telephone, email, or smoke signals; however, the intended destination is a single, specific recipient.
Each network datagram forwarding scheme excels in its own unique area. Unicast forwarding is meant to be a more secure and resource friendly solution for delivering network traffic. This is possible because the data is sent directly to a single host rather than to all hosts everywhere.
Unicast does have its challenges though. Since unicast creates a one-to-one connection with the intended recipient, network traffic intended for mass-distribution can be very costly in terms of computing resources and bandwidth consumption. Each recipient of the data will require a separate network connection that consumes computing resources on the sending host and requires its own separate network bandwidth for the duplicate transmission. Streaming media to multiple recipients presents a very challenging use case for unicast as large quantities of duplicate data are being sent to multiple recipients.
Think of this as having to have the same conversation with a group of friends. Wouldn't it be easier if you could just let everyone know at the same time?
Multicast Network Transmission
Where unicast has challenges, multicast excels. Multicast forwarding is a one-to-many or many-to-many distribution of network traffic (as opposed to unicast's one-to-one forwarding scheme). Rather than using the network address of the intended recipient for its destination address, multicast uses a special destination address to logically identify a group of receivers.
Since multicasting allows the data to be sent only once, it frees up computing resources on the host that would otherwise be required to send individual streams of duplicate data. Leveraging switching technology to repeat the data message to the members of the multicast group is far more efficient for the host than sending each copy individually.
For a complete list of general network IPv4 multicast addresses see the IPv4 Multicast Address Space Registry from the Internet Assigned Numbers Authority (IANA).
Multicast with IGMP Snooping and an IGMP Querier
Layer 2 multicast forwarding, without IGMP snooping and an IGMP Querier enabled, is essentially a layer 2 network broadcast. Each network device attached to an active network port will receive the multicast network traffic.
IGMP Snooping and an IGMP Querier can be leveraged to constrain the IPv4 multicast traffic to only those switch ports that have devices attached that request it. This will avoid causing unnecessary load on other network devices in the layer 2 segment by requiring them to process packets that they have not solicited (similar to a denial-of-service attack).
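To make the membership mechanics concrete, here is a minimal, hypothetical Python sketch (not VMware tooling) of a receiver joining a multicast group: the `IP_ADD_MEMBERSHIP` join is what generates the IGMP membership report that a snooping switch learns from. The group address and port mirror the Virtual SAN defaults; the loopback interface is used purely so the sketch is self-contained.

```python
import socket
import struct

GROUP, PORT = "224.1.2.3", 12345  # Virtual SAN default master group/port

# Receiver: bind to the group port, then join the multicast group on the
# loopback interface. This join is what triggers an IGMP membership report.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(5)

# Sender: direct the multicast datagram out of the loopback interface so the
# sketch needs no real network. Multicast loopback is on by default.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))
tx.sendto(b"cmmds-heartbeat", (GROUP, PORT))

data, addr = rx.recvfrom(1024)
print(data.decode())
```

Any host that has not issued such a join sits outside the group, which is exactly the traffic a snooping switch prunes away.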
How does Multicast benefit Virtual SAN network traffic?
Virtual SAN uses its Cluster Monitoring, Membership, and Directory Service (CMMDS) to make particular metadata available to each host in the cluster. The CMMDS is designed to be a highly available, performant and network-efficient service that shares information regarding hosts, networks, disks, objects, components, etc. among all of the hosts within the Virtual SAN cluster.
Distributing this data amongst all of the hosts and keeping each host synchronized could potentially consume a considerable amount of compute resources and network bandwidth. Each host is intended to contain an identical copy of this metadata which means, if we were using general unicast forwarding for this traffic, there would be constant duplicate traffic being sent to all of the hosts in the cluster.
Virtual SAN leverages layer 2 multicast forwarding for the discovery of hosts and to optimize network bandwidth consumption for the metadata updates from the CMMDS service (storage traffic is always unicast). This eliminates the computing resource and network bandwidth penalties that unicast imposes in order to send identical data to multiple recipients.
The bandwidth required for these updates depends upon the actual deployment (quantity of hosts, VMs, objects, etc.); however, here is a speculative example using nice round numbers for easier calculations.
Consider a 32-node cluster with a considerably dense population of virtual machines. CMMDS updates could potentially consume up to 100Mb during spikes in network traffic, with possibly an average of 10Mb sustained throughput. Using unicast forwarding, this would require 3.2Gb (100Mbit * 32 = 3200Mbit = 3.2Gbit) just for the transfer of metadata. The bandwidth required for storage traffic would be added on top of this number.
By leveraging multicast forwarding for metadata updates, Virtual SAN is able to decrease the 3.2Gb this scenario requires, down to 100Mb.
(Again this scenario is speculative in order to illustrate the concept, actual numbers vary depending upon deployment).
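The back-of-the-envelope arithmetic above can be sketched as follows (using the article's speculative numbers only):

```python
# Unicast vs. multicast fan-out cost for CMMDS metadata updates,
# using the article's speculative numbers: 32 hosts, 100 Mbit/s peak.
hosts = 32
peak_mbit = 100  # per-recipient CMMDS update spike

unicast_mbit = peak_mbit * hosts  # one duplicate stream per recipient
multicast_mbit = peak_mbit        # data is sent once; switches replicate it

print(f"unicast:   {unicast_mbit / 1000} Gbit/s")  # 3.2 Gbit/s
print(f"multicast: {multicast_mbit} Mbit/s")       # 100 Mbit/s
```

The saving scales linearly with cluster size: every host added to the cluster adds another full duplicate stream under unicast, but costs multicast nothing.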
Testing Multicast functionality
When enabling Virtual SAN, vCenter may display that there is a “Misconfiguration detected” in the cluster’s network status. If you receive this error, you will want to validate that each host can successfully receive multicast network traffic from each other host in the cluster using the default Virtual SAN multicast group addresses.
1. Identify Virtual SAN VMkernel Port
First we will want to identify which VMkernel is used by Virtual SAN:
~ # esxcfg-vmknic -l
Interface  Port Group/DVPort   IP Family  IP Address                 Netmask        Broadcast       MAC Address        MTU   TSO MSS  Enabled  Type
vmk0       Management Network  IPv4       10.144.97.177              255.255.255.0  10.144.97.255   a0:d3:c1:03:9b:a8  1500  65535    true     STATIC
vmk0       Management Network  IPv6       fe80::a2d3:c1ff:fe03:9ba8  64                             a0:d3:c1:03:9b:a8  1500  65535    true     STATIC, PREFERRED
vmk1       VSAN                IPv4       10.144.102.177             255.255.255.0  10.144.102.255  00:50:56:66:4c:4a  1500  65535    true     STATIC
vmk1       VSAN                IPv6       fe80::250:56ff:fe66:4c4a   64                             00:50:56:66:4c:4a  1500  65535    true     STATIC, PREFERRED
2. Monitor VSAN VMKernel Port network traffic
Next we simply monitor the Virtual SAN VMkernel for any multicast network traffic. If each host in the cluster sees multicast traffic across their respective VMkernel interfaces with the default Virtual SAN multicast group addresses, then multicast traffic is successfully traversing the network segment as it should.
The default multicast group addresses for Virtual SAN are:
224.1.2.3 Port: 12345
224.2.3.4 Port: 23451
Here are two ways to monitor for multicast network traffic:
A) Using “esxcli network ip connection list” to identify active connections with the default multicast group addresses.
~ # esxcli network ip connection list | egrep 224
udp  0  0  224.1.2.3:12345  0.0.0.0:0  34062  hostd-worker
udp  0  0  224.2.3.4:23451  0.0.0.0:0  34062  hostd-worker
B) Using tcpdump-uw to collect packet traces to troubleshoot network issues. The relevant flags are:
-i = interface
-n = no IP or port name resolution
-s0 = collect the entire packet
-t = no timestamp
-c = number of frames to capture
For example, to capture 20 frames of Virtual SAN multicast traffic on vmk1:
~ # tcpdump-uw -i vmk1 -n -s0 -t -c 20 udp port 23451
3. Generate Multicast traffic (*Optional)
You can use the nc command (netcat) to generate traffic and troubleshoot UDP port connectivity.
The syntax of the nc command is:
# nc -uz <destination-ip> <destination-port>
Here is what it will look like when run.
~ # nc -uz 224.1.2.3 12345
Connection to 224.1.2.3 12345 port [udp/*] succeeded!
Note: Netcat includes an option to test UDP connectivity with the -uz flag, but because UDP is a connectionless protocol, it will always report as ‘succeeded’ even when ports are closed or blocked. Use in combination with tcpdump-uw on the remote host to validate that the network traffic generated with nc (netcat) was successfully received.
That concludes our guided tour through Virtual SAN’s usage of multicast, its benefits, as well as troubleshooting steps in the event something goes awry. In the next Virtual SAN Troubleshooting blog we will show how to create a script to automate multicast group address changes in the event you have need. Happy troubleshooting!
Virtual SAN 5.5 Validation Guide
Capturing virtual switch traffic with tcpdump and other utilities (1000880)
Troubleshooting network and TCP/UDP port connectivity issues on ESX/ESXi (2020669)
Changing the multicast address used for a VMware Virtual SAN Cluster (2075451)
IETF RFC: 1112 Host Extensions for IP Multicasting
IETF RFC: 2236 Internet Group Management Protocol, Version 2
IETF RFC: 768 User Datagram Protocol
Support for Oracle Linux 5.11 has been introduced for these products:
- ESXi 5.0 Update 3, ESXi 5.1 Update 1, ESXi 5.1 Update 2, ESXi 5.5, ESXi 5.5 Update 1 and ESXi 5.5 Update 2
For more information about software and hardware support, please check the VMware Compatibility Guide
In this article, we introduce another VMworld highlight beyond the breakout sessions: the VMworld Party.
■ VMworld Party
On Wednesday night, the VMworld Party was held from 7:00 PM at Yerba Buena Gardens, a park right next to the Moscone Center and about the same size as Moscone West. Inside the venue there were strongman-style entertainment acts, photo spots where you could pose for pictures in wigs, and large screens playing all kinds of videos, along with beer, wine and other drinks, and a variety of light snacks.
Our CEO Pat Gelsinger also took the Ice Bucket Challenge after being nominated by EMC CEO Joe Tucci. As many of you probably know, this is a campaign to raise donations for the ALS (amyotrophic lateral sclerosis) Association, in which participants have a bucket of ice water poured over their heads. Unfortunately, we couldn't catch who our CEO nominated next...
In the VMworld 2014 breaking-news blog series, we are reporting live from VMworld 2014 in the US. This information reflects plans as of the time of announcement; product specifications and roadmaps described in this blog are subject to change without notice.
To meet the demand for hosted applications, VMware Horizon 6 supports an app-remoting option based on Microsoft RDS. The Application-Delivery Options in VMware Horizon 6.0 white paper describes this new option, as well as additional application-delivery options available in Horizon 6. You can publish and manage RDS-hosted applications through Horizon with View in the Horizon Advanced Edition and Horizon Enterprise Edition. That includes setting policies and entitlement. You can also integrate VMware Workspace with View, which enables you to present your hosted applications in Workspace, where they are displayed alongside applications from ThinApp repositories, Citrix XenApp farms, and SaaS and Web application providers.
RDS is the Microsoft architecture that supports the use of remote machines and applications through a network connection. The application-hosting option in Horizon 6 provides the essentials for publishing applications based on RDS. You can install one instance of an application on an RDS host instead of on multiple individual desktops, and make that application available to many end users.
The Application-Delivery Options in VMware Horizon 6.0 white paper summarizes the straightforward process of hosting an application via RDS. In just a few steps, you can publish an application on an RDS host and entitle users to access it in View, and then sync to display it in Workspace. The following outline will give you an idea of how easy RDS hosting is in Horizon 6.
Publish an Application by Hosting It on an RDS Server
First, you select an RDS host. Using the Inventory navigation pane in View, you click the RDS Hosts tab, and select an RDS farm. In the following screenshot, an RDS host was previously set up and registered with View. The host is part of an RDS farm, a collection of RDS hosts based on the set of applications served. In this screenshot, the host is OFF2K13_1.
Next, you locate the application pool where applications are managed, using the Inventory navigation pane on the left. In the following screenshot, the application pool already has some Office 2013 applications in it. You now click Add to publish a new application.
In the Add Application Pools window, you select the Select installed applications radio button, and select your application from those listed. We used Calculator for this example, but the process works the same for any application that you want to publish. You then click Next.
View automatically provides an ID and a friendly display name that you can modify. You can select the check box to entitle users after this wizard finishes, or wait until later. After verifying that everything is accurate, you click Finish.
That is all there is to it! The application is now published on the RDS host and ready for entitlement in View.
If you selected the check box to entitle users after the wizard finishes, then View automatically opens the entitlement window so you can immediately entitle the appropriate users. If you did not select this check box, you can simply click the Entitlements tab in Application Pools to bring up the Add Entitlements window. You click Add to select the users or groups that you want to entitle to access this newly published application.
In the Add Application Entitlement window, you provide the required domain and user or group name, and click Next.
The newly hosted application now appears in the Application Pool with entitlements complete.
Voilà! The application is now installed on the RDS host and end users are entitled to access it. The next time these end users launch their Horizon Clients, the newly published RDS-hosted application will be available.
In order for your users to access the application from Workspace as well as from View, you now need to sync. In Workspace, you log in as an administrator, navigate to View Pools, and click Sync Now.
After the sync has completed, you can verify and save the changes. Workspace now displays the icon of the application that you just published. The end user sees the new icon directly in the Workspace user interface, which is pre-populated with the fully qualified domain name of the RDS host. For your end user, the icon is available in Workspace without the appearance of going through View or through a View desktop. And regardless of the device, your end user always finds this application in the same place, providing a consistent user experience:
When you see how effective VMware Horizon 6 is for making applications of all types available to your end users, you will want to know more. See the Application-Delivery Options in VMware Horizon 6.0 white paper for more information about the combination of application-delivery options, as well as use cases where each option is most beneficial.
You can also find out more about VMware Horizon 6 by
- Watching a live demo
- Downloading a free trial
- Reading more
To comment on this paper, contact the VMware End-User Computing Solutions Management and Technical Marketing team at twitter.com/vmwarehorizon.
Name: Duncan Epping
Role: Chief Technologist
Office Location: Utrecht, The Netherlands
Years at VMware: 6
What does paying it forward mean to you?
To me it means giving back in any shape or form, whether that is through a donation or volunteering. Personally, I feel that everyone should experience the act of volunteering at some point in their lives, even though I realize that it is not for everyone. Then again, I always felt that it wasn't really something for me, but I did really enjoy it when I had the opportunity to join the Good Gigs Trek to Vietnam with the VMware Foundation. In Vietnam I directly experienced what a huge impact you can have by volunteering, and how much can be done with relatively small donations. For those who did not follow my articles on this subject, we went to Vietnam with Team4Tech to support Orphan Impact, which provides computer classes in orphanages. Not only does this help the kids increase their chances of finding a job at an older age, but it also allows them to connect with the outside world via things like Facebook or Google. On top of that, there is of course the whole personal touch… the attention they get from the teachers, etc. It definitely opened my eyes!
What does the quote 'The brain is a muscle that can move the world.' from the VMware Foundation's Service Learning campaign mean to you?
To me it means that if you put your mind to it, there is nothing that you cannot achieve, whether that is something in sports (like the poster shows for my daughter), at work, or within the community. A lot of people tend to underestimate the difference that they can make and the impact they can have on other people's lives. You just need to be open to it.
How do you instill this value in your children?
We do a couple of things. First and foremost we try to make them aware of what is going on in the rest of the world. We want to make it obvious that not everyone is as fortunate as they are in life. Secondly, we explain to them why we give back, how we have contributed, and how it has impacted the people we are serving. Thirdly, we try to have them help as well whenever they can. That can be something small from donating a couple of dollars from their savings, or asking them to donate some of their toys to less fortunate kids. Those little things actually go a long way!
Please reflect on having your child featured in the campaign. What does it mean to you and your family?
The whole family was very proud to see the picture used, especially my daughter. We were actually all together in the VMware office recently and saw the campaign poster featuring my daughter hanging in the pantry. It was definitely special and something that my daughter will never forget.
Storage Policy Based Management (SPBM) is the foundation of the SDS control plane and enables vSphere administrators to overcome upfront storage provisioning challenges, such as capacity planning, differentiated service levels and managing capacity headroom. By defining standard storage profiles, SPBM optimizes the virtual machine provisioning process, provisioning datastores at scale and eliminating the need to provision virtual machines on a case-by-case basis. PowerCLI, VMware vCloud Automation Center, the vSphere API, OpenStack and other applications can leverage the vSphere Storage Policy Based Management API to automate storage management operations for the Software-Defined Storage infrastructure.
For more information on the Software-Defined Data Center and its related components, please visit the VMware SDDC Product pages.
Traditional storage systems have been manually managed and overengineered to better accommodate the static provisioning process inherent in their design, along with the constantly evolving storage requirements of the applications consuming storage.
Matching Storage Consumers and Storage Providers
The major challenge with traditional storage architectures lies in aligning storage consumer needs with storage provider capabilities. Misalignment results in the over-provisioning of storage resources and the waste that IT admins have suffered through the years. Proper alignment of application needs and storage resources can eliminate the storage loss brought on by over-provisioning.
Storage Management Operations can be divided into two major categories:
• Storage Consumers = Applications requiring storage capabilities
• Storage Providers = Storage arrays offering various storage capabilities
The vSphere environment sits between the storage consumers (applications) and the storage providers (storage arrays). This enables vSphere to act as an arbiter between the application's needs and the array's capabilities.
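As a conceptual illustration only (this is not the SPBM API; all names are hypothetical), the arbiter role amounts to matching each consumer's required capabilities against each provider's advertised capabilities:

```python
# Conceptual sketch: match storage consumers (applications with required
# capabilities) to storage providers (arrays advertising capabilities).
consumers = {
    "tier1-db":    {"replication", "high-iops"},
    "file-server": {"thin-provisioning"},
}
providers = {
    "array-highend": {"replication", "high-iops", "thin-provisioning"},
    "array-midtier": {"thin-provisioning"},
}

def compliant(required, advertised):
    """A provider is compliant when it advertises every required capability."""
    return required <= advertised  # subset test

# For each application, list every provider that satisfies its requirements.
placement = {
    app: [name for name, caps in providers.items() if compliant(req, caps)]
    for app, req in consumers.items()
}
print(placement)
```

In this toy model the Tier 1 database can only land on the high-end array, while the file server is compliant with either; a policy engine would pick among the compliant candidates, which is the essence of aligning consumers with providers rather than statically binding each application to one array.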
Traditional Storage Model
In the traditional storage-provisioning model, storage tiers are created through the acquisition and deployment of arrays from different classes (high-end, mid-tier, archive, etc.). In this model, each class of array becomes a separate storage tier. Applications are then statically bound to a specific array that best matches their class of requirements.
Example: Tier 1 application VMs to the high-end array, file and printer server VMs go to the mid-tier array, and archive data goes to the least expensive array.
Challenges of the Traditional Storage Model
The traditional storage model confines storage arrays to delivering only one level of service. Yet arrays today generally have multiple capabilities, allowing for wider flexibility in delivering multiple levels of service (e.g. performance, data protection, tiering, replication, etc.).
The traditional storage model also requires a heavy emphasis on architecture sizing (how many storage consumers, of what type, and so on). Storage arrays are often the largest cost in the datacenter, and they take considerable effort to deploy and to migrate data to and from. Considering the cost, time, and effort involved, storage administrators must plan storage needs carefully to avoid misaligning applications and storage resources.
Storage Pooling Model
The storage pooling provisioning model takes a "bottom-up" approach to storage provisioning. As storage arrays gained the ability to deliver multiple levels of service, storage administrators began creating different classes of service within a single storage array. This allows for more flexibility in provisioning, since additional resource pools can be configured for whichever storage tier requires additional resources.
Existing and new workloads would be assigned to a static level of service established in advance by the storage administrator. Changing service levels required moving the application's datastore to the appropriate pool within a storage array.
Challenges of the Storage Pooling Model
As with the traditional storage provisioning model, storage pooling does not consider individual application requirements. Applications are still forced into predefined buckets of storage. Storage administrators still fall back on the typical "have-a-hunch, provision-a-bunch" style when deploying a new array, because re-carving the storage array's resources into a different set of service pools is a daunting task.
Misalignment of Application Needs and Storage Resources
These storage models inevitably resulted in considerable misalignment between the capabilities an array provided and what the applications actually required. The result was a lot of higher-end arrays supporting less-than-critical applications, which is neither efficient nor optimized.
In these models, overprovisioning is often necessary to guarantee that the allocated storage resources will meet application requirements. The lack of granularity in the provisioning process wastes storage resources.
Example: Say Tier 2 and Tier 3 storage are out of resources, and Tier 1 storage is the only available location to store archive data. In this scenario, Tier 1 storage capabilities will not be fully leveraged by a Tier 3 application's requirement for low-cost, long-term storage. This is one example of the misalignment of application needs and storage capabilities.
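A quick back-of-the-envelope calculation makes the waste concrete. The per-GB costs below are invented purely for illustration:

```python
# Hypothetical per-GB costs for each storage tier (illustrative only).
COST_PER_GB = {"tier1": 5.00, "tier2": 2.00, "tier3": 0.50}

archive_gb = 10_000  # archive data that belongs on Tier 3

# Tier 2 and Tier 3 are full, so the archive data lands on Tier 1.
actual_cost = archive_gb * COST_PER_GB["tier1"]
ideal_cost = archive_gb * COST_PER_GB["tier3"]
overspend = actual_cost - ideal_cost

print(f"Overspend from misplacement: ${overspend:,.2f}")  # -> $45,000.00
```

Ten terabytes of misplaced archive data consumes high-end capacity worth ten times its actual requirement.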
Software-Defined Storage Model
In the Software-Defined Storage (SDS) architectural model, VMware's goal is to leverage the hypervisor to bring about revolutionary storage efficiencies. Software-Defined Storage is the vision that storage services should be dynamically created and delivered on a per-VM basis, with control of those services occurring through policies rather than the time-consuming process of managing each storage system independently.
This approach aligns storage services with application requirements by shifting the traditional storage operational model from a bottom-up, array-centric approach to a top-down, VM-centric model. After storage policies are configured, the storage consumer chooses the desired application or virtual machine; the policy engine reads the associated storage policy and orchestrates the precise provisioning of storage resources that match the application's requirements.
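Conceptually, the policy engine's placement step boils down to matching a VM's policy against each datastore's advertised capabilities. A simplified sketch follows; the datastore names and capability strings are hypothetical and this is not the actual SPBM API:

```python
# Each datastore advertises a set of capabilities; each VM policy lists the
# capabilities it requires. (Hypothetical data, not the real SPBM API.)
datastores = {
    "ds-gold": {"ssd", "replication", "snapshots"},
    "ds-silver": {"snapshots"},
    "ds-bronze": set(),
}

def compatible_datastores(policy: set) -> list:
    """Return datastores whose capabilities satisfy every policy requirement."""
    return [name for name, caps in datastores.items() if policy <= caps]

# A VM whose policy requires SSD and replication is steered to the gold tier.
print(compatible_datastores({"ssd", "replication"}))  # -> ['ds-gold']
```

The point of the top-down model is that placement falls out of the policy automatically, instead of being decided by hand per VM.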
Through Software-Defined Storage, VMware allows storage resources to be provisioned precisely to application requirements, yielding a tangible reduction in storage overprovisioning, IT management cycles, and cost.
This is great news for storage admins. As a former storage admin, I would gladly welcome any help offloading routine, mundane tasks. Imagine having more cycles to put out the fires you are already fighting. The more we virtualize and automate the control of storage resources, the more cycles are freed up for everything else demanding our attention.
For a more extended look into Software-Defined Storage, here is an excellent white paper by Chuck Hollis, Chief Strategist for VMware's Storage and Application Services: The VMware Perspective on Software-Defined Storage
Policy-Driven Control Plane
The Policy-Driven Control Plane is VMware's new management layer for Software-Defined Storage. This layer provides common orchestration and automates storage consumption with a consistent approach across all storage tiers.
Storage Policy-Based Management (SPBM) is VMwarersquo;s implementation of the policy-driven control plane. It is integrated with vCloud Automation Center, vSphere APIs, PowerShell, and OpenStack.
The Storage Policy-Based Management (SPBM) engine interprets the storage requirements of individual applications, specified in policies associated with individual VMs, and dynamically composes the storage service: placing the VM on the right storage tier, allocating capacity, and instantiating the necessary data services (snapshots, replication, etc.).
This is in contrast to a typical storage environment, where each type of storage array has its own management tools, largely disassociated from specific application requirements. The policy-driven control plane is programmable via public APIs that can be used to consume and control policies via scripting and cloud automation tools for self-service consumption of storage.
All data plane devices must be able to express their capabilities to the Storage Policy-Based Management engine, as well as respond to dynamic requests for storage service composition aligned to specific application requirements.
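The advertisement side can be pictured the same way: each data plane device registers the capabilities it offers, and the engine composes a service from whatever can satisfy the request. The sketch below uses invented names and a plain dictionary as the registry; real devices surface their capabilities to SPBM through provider interfaces, not this code:

```python
# Registry of data plane devices and the capabilities they have expressed.
# Names and structure are invented for illustration only.
registry = {}

def register_device(name: str, capabilities: set) -> None:
    """A device advertises its capabilities to the control plane."""
    registry[name] = capabilities

def compose_service(required: set) -> str:
    """Return the first registered device that can satisfy the request."""
    for name, caps in registry.items():
        if required <= caps:
            return name
    raise LookupError("no device can satisfy the requested service")

register_device("array-a", {"snapshots", "replication"})
register_device("array-b", {"snapshots"})
print(compose_service({"replication"}))  # -> array-a
```

Because devices describe themselves, adding a new array with new capabilities extends what the engine can compose without any change to the consuming policies.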
Here is a great overview of the Software-Defined Storage model and its components: Understanding the DNA of Software Defined Storage
Stay tuned for part 2 of this article, where we will dive deeper into the makeup of the vSphere storage policy components. Here is a sneak preview of the pieces of Storage Policy Based Management put together…
The VMware Perspective on Software-Defined Storage
Understanding the DNA of Software Defined Storage
Software-Defined Data Center and how VMware's technology enables IT transformation
Video Training – VMware vSphere: Storage Profiles
Storage Policy Based Management (SPBM) Configuration Maximums (2062751)
For further information and updates on this vulnerability, refer to KB article:
VMware assessment of Bash Code Injection Vulnerability via Specially Crafted Environment Variables (CVE-2014-6271, CVE-2014-7169, aka "Shellshock") (2090740).
Note: For information regarding VMware customer portals and web sites, see Impact of bash code injection vulnerability on VMware Customer Portals and web sites (CVE-2014-6271 and CVE-2014-7169, aka “shellshock”) (2090817).
- Troubleshooting SSL certificate issues in VMware Horizon View 5.1 and later (2082408)
- Generating and importing a signed SSL certificate into VMware Horizon View 5.1/5.2/5.3 using Microsoft Certreq (2032400)
- Connections to the Horizon View Connection Server or Security Server fail with SSL errors (2072459)
In View 5.1 and later, you configure certificates for View by importing them into the Windows local computer certificate store on the View server host. By default, clients are presented with this certificate when they visit a secure page such as View Administrator. You can use the default certificate for lab environments, and one could even argue it is acceptable for firewalled environments, but otherwise you should replace it with your own certificate from a trusted CA (VeriSign, GoDaddy, others) as soon as possible. My engineers also told me you should use an SSL certificate from a trusted CA when setting up a Security Server that will be used from outside your firewall (the Internet) to access View desktops inside your firewall.
My engineers stressed the importance of following each step in these KBs one at a time when filling out the forms on those sites to obtain your certificate. It is easy to make a mistake, and you might not receive a certificate that works for you.
Note: The default certificate is not signed by a commercial Certificate Authority (CA). Using certificates that are not CA-signed can allow untrusted parties to intercept traffic by masquerading as your server.
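For reference, a `certreq` request file along the lines of the one described in KB 2032400 looks roughly like the fragment below. The subject and organization values are placeholders, and you should follow the KB step by step for the exact fields your environment requires:

```ini
; Hypothetical certreq INF sketch - replace every value with your own.
[NewRequest]
Subject = "CN=view.example.com, OU=IT, O=Example Corp, L=City, S=State, C=US"
KeySpec = 1
KeyLength = 2048
Exportable = TRUE          ; View requires an exportable private key
MachineKeySet = TRUE
FriendlyName = "vdm"       ; View looks for the friendly name "vdm"
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
RequestType = PKCS10

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.1    ; Server Authentication
```

Again, treat this as a sketch: a single wrong value here is exactly the kind of mistake the engineers warned about, so verify each line against the KB before submitting the request.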
VMworld US has been a resounding success this year, and we are carrying the energy and momentum to VMworld Europe, set to kick off on October 13th in Barcelona. While there will be multiple product sessions about the latest advances in our technology, we also recognize that our customers want to see how others in their industry have deployed our products and the benefits they achieved. To showcase our customers and to share industry best practices, we have a series of industry-oriented panel discussions at VMworld covering Healthcare, Education, Financial Services, Government, and Manufacturing/3D.
In these sessions, customers will come in and share their decision-making process, business drivers, deployment details, and the benefits they achieved through deployment of VMware's end-user computing solutions, products, and technologies. This will be your chance to get firsthand information from peers in your industry:
- EUC1620 – Solving today's Education Industry Computing Challenges with VMware – Panel
- EUC1905 – End User Computing Journey in Financial Services – Panel
- EUC1836 – Providing Point of Care Solutions that Physicians will ask to use
- EUC2416 – Securing the Government Infrastructure – Panel
- EUC3102 – Tips and advice from Customers who implemented High Performance 3D Graphics Use Cases
- EUC1997 – Horizon 6: Customer Reference Implementation – Healthcare
- EUC2040 – Horizon: Customer Reference Implementation – Car Manufacturing
In addition to sharing best practices, this will be a wonderful opportunity to network with your peers in the industry! We look forward to seeing you at the event.
You can follow us live throughout the show and drop us a comment on Twitter or Facebook using the #vmworld hashtag!