Introducing DRS Dump Insight

Aug 23, 2017

In an effort to provide a more insightful user experience and to help users understand how vSphere DRS works, we recently released a fling: DRS Dump Insight.

DRS Dump Insight is a service portal where users can upload drmdump files. It provides a summary of the DRS run, with a breakdown of all the possible moves along with the changes in ESXi host resource consumption before and after the DRS run.

Users can get answers to questions like:

  • Why did DRS make a certain recommendation?
  • Why is DRS not making any recommendations to balance my cluster?
  • What recommendations did DRS drop due to cost/benefit analysis?
  • Can I get all the recommendations made by DRS?

Once the drmdump file is uploaded, the portal provides a summary of all the possible vMotion choices DRS went through before coming up with the final recommendations.

The portal also enables users to do What-If analysis on their DRS clusters with options like:

  • Changing DRS Migration Threshold
  • Dropping affinity/anti-affinity rules in the cluster
  • Changing DRS advanced options

 

The post Introducing DRS Dump Insight appeared first on VMware VROOM! Blog.

DRS Lens – A new UI dashboard for DRS

Jul 14, 2017

DRS Lens provides an alternative UI for a DRS-enabled cluster. It gives a simple yet powerful interface to monitor the cluster in real time and provides useful analyses to users. The UI comprises different dashboards in the form of tabs for each cluster being monitored.

Cluster Balance

This dashboard shows the variations in the cluster balance metric plotted over time with DRS runs. It shows how DRS reacts to, and tries to clear, cluster imbalance every time it runs.

VM Happiness

This dashboard shows VM happiness for the first time in a UI. The chart summarizes which VMs in the cluster are happy and which are unhappy, based on user-defined thresholds. Users can then select individual VMs to view performance metrics related to their happiness, such as CPU ready time and memory swap-in rate.

vMotions

This dashboard provides a summary of vMotions that happened in the cluster over time. For each DRS run period, there will be a breakdown of vMotions as DRS-initiated and user-initiated. This helps users see how actively DRS has been working to resolve cluster imbalance. It also helps to see if there are vMotions outside of DRS control, which may be affecting cluster balance.

Operations

This dashboard tracks different operations (tasks, in vCenter Server) that happened in the cluster, over time. Users can correlate information about tasks from this dashboard against DRS load balancing and its effects from the other dashboards.

 

Users can download DRS Lens from the VMware Flings website.

The post DRS Lens – A new UI dashboard for DRS appeared first on VMware VROOM! Blog.

Spotlight on the New DRS Groups and VM-Host Rule Cmdlets!

Jun 14, 2017

We're kicking off the first in a series of blog posts taking an in-depth look at some of the new cmdlets that were made available with the PowerCLI 6.5.1 release. This first post covers the cmdlets targeted at managing DRS groups and their associated rules.

These new cmdlets are as follows:

  • Get-DrsClusterGroup
  • New-DrsClusterGroup
  • Set-DrsClusterGroup
  • Remove-DrsClusterGroup
  • Get-DrsVMHostRule
  • New-DrsVMHostRule
  • Set-DrsVMHostRule
  • Remove-DrsVMHostRule

If you've never used DRS groups and DRS affinity rules, or don't know what they are, they are a way to control which VMs are able to exist on which hosts. This control is applied through either affinity or anti-affinity rules configured at the cluster level, between groupings of VMs and groupings of hosts. Each rule also has a type, which describes how enforcement should work. The types are: Must Run On, Should Run On, Must Not Run On, and Should Not Run On.

Please see the documentation for more information about: DRS Affinity Rules

Taking A Closer Look – A Use Case Demonstration

We have been given a lab environment, and our goal is to have the even-numbered App VMs run on the even-numbered hosts whenever possible, and likewise the odd-numbered VMs on the odd-numbered hosts.

First, a look at the lab environment:

  • 1 Cluster
  • 4 Hosts
  • 50 VMs

We'll start by taking a look at the DRS cluster group cmdlets. These are used to create, manage, and remove VM-based and host-based DRS groups. These cluster groups are then referenced by the DRS VM-Host affinity rules, which we'll discuss in a bit.

Let's create the first host DRS group, which will be for the odd-numbered hosts. This can be done with the 'New-DrsClusterGroup' cmdlet while specifying a name, the cluster, and the desired hosts. A command for our sample environment looks like this:

New-DrsClusterGroup -Name HostsOdd -Cluster Demo -VMHost esx01.corp.local,esx03.corp.local

We'll repeat a similar process for the even hosts, only this time we'll store the cluster and desired hosts in their own variables:
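The original post showed this step as a screenshot. A sketch of the equivalent commands, assuming the even-numbered hosts are named esx02.corp.local and esx04.corp.local (names not confirmed by the source):

```powershell
# Store the cluster and the even-numbered hosts in variables for reuse
$cluster = Get-Cluster -Name Demo
$hostsEven = Get-VMHost -Name esx02.corp.local,esx04.corp.local

# Create the host DRS group for the even-numbered hosts
New-DrsClusterGroup -Name HostsEven -Cluster $cluster -VMHost $hostsEven
```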

We now have the required host DRS groups, so we can move forward and create the VM DRS groups. These are created with the same 'New-DrsClusterGroup' cmdlet, except we'll now use the VM parameter and specify the VMs for each group.

Starting again with our odd-numbered VMs, we'll use the following command:

New-DrsClusterGroup -Name VMsOdd -Cluster $cluster -VM app01,app03,app05

As you'll notice, that's nowhere close to all of the necessary odd-numbered VMs. We'll now make use of the 'Set-DrsClusterGroup' cmdlet to add the remaining VMs (which I've already stored in a variable). This cmdlet also requires either the 'Add' or the 'Remove' parameter to specify what kind of modification is being requested.

The command to add the remaining odd-numbered systems should be similar to the following:

Set-DrsClusterGroup -DrsClusterGroup VMsOdd -VM $VMsOdd -Add

We'll repeat a similar process for the even VMs' DRS group:
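Again, the original post showed this as a screenshot. A sketch, assuming the even-numbered App VMs follow an appNN naming pattern (the Where-Object filter is an illustrative assumption, not from the source):

```powershell
# Gather the even-numbered App VMs (hypothetical naming pattern appNN)
$VMsEven = Get-VM -Name app* | Where-Object { [int]($_.Name -replace '\D') % 2 -eq 0 }

# Create the VM DRS group for the even-numbered VMs
New-DrsClusterGroup -Name VMsEven -Cluster $cluster -VM $VMsEven
```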

Before moving on to creating the VM-Host affinity rules, let's review the DRS groups we've created to this point with the 'Get-DrsClusterGroup' cmdlet.

This cmdlet also has a couple of parameters that help surface additional information. The 'Type' parameter can be used to specify whether to return VM groups or host groups. The 'VM' and 'VMHost' parameters can be used to return only the DRS groups a given VM or host belongs to.

Some examples of these parameters in use:
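A sketch of these parameters in use (the Type values VMGroup and VMHostGroup, and the example VM/host names, should be verified against your PowerCLI version and environment):

```powershell
# Return only the VM-based DRS groups in the cluster
Get-DrsClusterGroup -Cluster $cluster -Type VMGroup

# Return only the host-based DRS groups
Get-DrsClusterGroup -Cluster $cluster -Type VMHostGroup

# Return the DRS groups a specific VM belongs to
Get-DrsClusterGroup -VM app02

# Return the DRS groups containing a specific host
Get-DrsClusterGroup -VMHost esx02.corp.local
```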

Moving on to the creation of the rules, we'll be using the 'New-DrsVMHostRule' cmdlet along with several parameters: 'Name', 'Cluster', 'VMGroup', 'VMHostGroup', and 'Type'. Most of those should be self-explanatory, but 'Type' may not be. Thanks to tab completion, we'll see that the type options are 'MustRunOn', 'ShouldRunOn', 'MustNotRunOn', and 'ShouldNotRunOn'; they determine how the rule is enforced against the cluster.

Remembering that our goal is to have the even VMs run on the even hosts whenever possible, we'll issue the following command:

New-DrsVMHostRule -Name 'EvenVMsToEvenHosts' -Cluster $cluster -VMGroup VMsEven -VMHostGroup HostsEven -Type ShouldRunOn

We'll do a similar command for the odd rule:
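The corresponding command (a screenshot in the original post) would look like the following; the rule name matches the one removed during cleanup at the end of the post:

```powershell
New-DrsVMHostRule -Name 'OddVMsToOddHosts' -Cluster $cluster -VMGroup VMsOdd -VMHostGroup HostsOdd -Type ShouldRunOn
```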

Our objective should now be complete, and we can verify that by using the 'Get-DrsVMHostRule' cmdlet. The output should be similar to the following:
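The output itself was a screenshot in the original post and is not reproduced here; the verification command is simply:

```powershell
# Both rules (EvenVMsToEvenHosts and OddVMsToOddHosts) should be listed as enabled
Get-DrsVMHostRule -Cluster $cluster
```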

These VM-Host rules can also be modified once they've been created, with the 'Set-DrsVMHostRule' cmdlet. This cmdlet can rename a rule, enable or disable it, and modify the VMGroup and/or the VMHostGroup.

The rules can easily be disabled using the following command:

Get-DrsVMHostRule | Set-DrsVMHostRule -Enabled $false

This environment happens to be a lab, so before wrapping up this post we should probably clean it up. We can do this with the 'Remove-DrsClusterGroup' and 'Remove-DrsVMHostRule' cmdlets. The commands could look like the following:

Remove-DrsVMHostRule -Rule EvenVMsToEvenHosts,OddVMsToOddHosts
Get-DrsClusterGroup | Remove-DrsClusterGroup

Summary

These eight new cmdlets are a terrific addition to the PowerCLI 6.5.1 release. They are also a great complement to the already existing DRS cmdlets! Start using them today and let us know your feedback!

The post Spotlight on the New DRS Groups and VM-Host Rule Cmdlets! appeared first on VMware PowerCLI Blog.

vSphere 6.5 DRS Performance – A new white-paper

Nov 16, 2016

VMware recently announced the general availability of vSphere 6.5. Among the many new features in this release are some DRS-specific ones, like Predictive DRS and network-aware DRS. In vSphere 6.5, DRS also comes with a host of performance improvements, like the all-new VM initial placement and a faster, more effective maintenance mode operation.

If you want to learn more, we published a new white paper on the new features and performance improvements of DRS in vSphere 6.5.

The post vSphere 6.5 DRS Performance – A new white-paper appeared first on VMware VROOM! Blog.

Latency Sensitive VMs and vSphere DRS

Nov 3, 2016

Some applications are inherently highly latency sensitive and cannot afford long vMotion times. VMs running such applications are termed 'latency sensitive'. These VMs consume resources very actively, so vMotion of such VMs is often a slow process, and they require special care during cluster load balancing.

You can tag a VM as latency sensitive by setting the VM option through the vSphere Web Client as shown below (VM → Edit Settings → VM Options → Advanced).


By default, the latency sensitivity value of a VM is set to 'normal'. Changing it to 'high' makes the VM latency sensitive. There are other levels, 'medium' and 'low', which are experimental right now. Once the value is set to high, 100% of the VM's configured memory should be reserved. It is also recommended to reserve 100% of its CPU. This white paper talks more about the VM latency sensitivity feature in vSphere.
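The same tagging can be scripted with PowerCLI. A sketch, assuming a hypothetical VM named App-DB-01; the reservation cmdlets are standard PowerCLI, while the latency-sensitivity change goes through the vSphere API object, so verify both against your PowerCLI and vSphere versions:

```powershell
$vm = Get-VM -Name 'App-DB-01'   # hypothetical VM name

# Reserve 100% of configured memory, and CPU at the host's per-core frequency
$cpuMhzPerCore = [int]($vm.VMHost.CpuTotalMhz / $vm.VMHost.NumCpu)
$vm | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -MemReservationGB $vm.MemoryGB -CpuReservationMhz ($vm.NumCpu * $cpuMhzPerCore)

# Set latency sensitivity to 'high' via the vSphere API
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.LatencySensitivity = New-Object VMware.Vim.LatencySensitivity
$spec.LatencySensitivity.Level = 'high'
$vm.ExtensionData.ReconfigVM($spec)
```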

DRS support

VMware vSphere DRS provides support for handling such special VMs. If a VM that is part of a DRS cluster is tagged as latency sensitive, DRS creates a VM-Host soft affinity rule. This ensures that DRS will not move the VM unless it is absolutely necessary; for example, in scenarios where the cluster is over-utilized, all the soft rules are dropped and the VMs can be moved.

To showcase how this option works, we ran a simple experiment with a four-host DRS cluster running a latency sensitive VM (VMZero-Latency-Sensitive-1) on one of its hosts (10.156.231.165).

As we can see from the screenshot, CPU usage of host 10.156.231.165 is higher compared to the other hosts, and the cluster load is not balanced. So DRS migrates VMs from the highly utilized host (10.156.231.165) to distribute the load.

Since the latency sensitive VM is a heavy consumer of resources, it is the best possible candidate to migrate, as moving it will distribute the load in one shot. So DRS migrated the latency sensitive VM to a different host in order to distribute the load.

Then we put the cluster back in its original state, set the VM latency sensitivity value to 'high' using VM options (as mentioned earlier), and set 100% memory and CPU reservations. This time, due to the associated soft affinity rule, DRS completely avoided the latency sensitive VM and instead migrated other VMs from the same host to distribute the load.

Things to note:

  • 100% memory reservation for the latency sensitive VM is a must. Without the memory reservation, vMotion will fail; if the VM is powered off, it cannot be powered on until the reservation is set.
  • Since DRS uses a soft affinity rule, sometimes the cluster might become imbalanced due to these VMs.
  • If multiple VMs are latency sensitive, spread them across hosts first and then tag them as latency sensitive. This avoids over-utilization of hosts and results in better resource distribution.

The post Latency Sensitive VMs and vSphere DRS appeared first on VMware VROOM! Blog.

DRS Doctor is here to diagnose your DRS clusters

Jun 30, 2016

Mystery revealed: DRS for VMware vSphere is no longer a black box! DRS Doctor will tell you all you need to know about your DRS clusters.

Our latest fling, DRS Doctor, will monitor your DRS clusters for virtual machine and host resource usage data, DRS-recommended migrations, and the reason behind each migration. It also monitors all the cluster-related events, tasks, and cluster balance, and logs all this information into a plain text log file that anyone can read.

Read this blog for more information on how DRS Doctor can monitor and diagnose your clusters.

Download DRS Doctor from our flings site.

The post DRS Doctor is here to diagnose your DRS clusters appeared first on VMware VROOM! Blog.

Getting More Out of VMware: Distributed Resource Management

Jun 1, 2016

On June 8th, we will be hosting a Getting More Out of VMware webinar about distributed resource management using Distributed Resource Scheduler (DRS) and vRealize Operations (vR Ops). In this session, we will examine how to combine DRS and vRealize Operations Manager capabilities to deliver the most efficient compute platform for your virtual machines. We […]

The post Getting More Out of VMware: Distributed Resource Management appeared first on VMware Cloud Management.

Load Balancing vSphere Clusters with DRS

May 2, 2016

Recently, a customer reported that DRS was not working to load balance the cluster. Under normal circumstances, a minor imbalance is nothing to be concerned about. The main objective of DRS is not to balance the load perfectly across every host; rather, DRS monitors resource demand and works to ensure that every VM is getting the resources it is entitled to. When DRS determines that a better host exists for a VM, it makes a recommendation to move that VM.

However, some customers still prefer an even distribution of utilization across all hosts within a cluster. This article provides recommendations to accomplish this goal, bearing in mind that in most cases this will result in additional vMotion activity.

Migration Threshold

This threshold is a measure of how much cluster imbalance is acceptable based on CPU and memory loads. The slider is used to select one of five settings that range from the most conservative (1) to the most aggressive (5). The further the slider moves to the right, the more aggressively DRS will work to balance the cluster.

A priority level for each migration recommendation is computed using the load imbalance metric of the cluster. This metric is displayed as Current Host Load Standard Deviation in the cluster’s Summary tab in the vSphere Web Client.

A cluster with a higher cluster imbalance will lead to higher priority migration recommendations.

For more information about this metric and how a recommendation priority level is calculated, see the VMware Knowledge Base article “Calculating the priority level of a VMware DRS migration recommendation.”

When the migration threshold is set to a more aggressive setting, the target host load standard deviation value is lowered. The more aggressive the migration threshold, the smaller the target range. As long as the current host load standard deviation is less than or equal to the target value, DRS considers the cluster balanced.

If you are viewing the CPU and memory utilization from the vSphere Web Client and conclude that your cluster is not as balanced as you'd like, check the migration threshold setting in your DRS-enabled cluster to ensure it's not set to a value that is too conservative. Simply moving this slider to a more aggressive threshold will lower the target standard deviation value and cause DRS to execute more migrations to achieve a better load balance.

Setting this value to priority 4 (the second-furthest-right setting) offers a good balance for those who wish to have an even cluster balance without executing too many migrations. Setting it to priority 5 offers the most even load balance, but will result in frequent vMotions that may not provide any performance benefit to the VMs.

Active vs. Consumed Memory

Have you ever looked at the host utilization for memory in the vSphere Web Client and wondered why the hosts appear to be imbalanced while DRS says the cluster is balanced? There is a reason for that. The vSphere Web Client shows consumed memory, while DRS uses active memory when calculating the current host load. This disparity is especially common in environments where VM memory is mostly idle but the VM once consumed most of its allocation. The consumed memory metric reflects the maximum amount of memory used by the VM at any point in its lifetime, even if the VM is not actually using most of this memory now. By contrast, active memory is an estimate of the amount of memory currently being actively used by the VM; it is derived from a memory sampling algorithm that runs every 5 minutes.

The consumed memory metric is not the most accurate way to measure the effectiveness of DRS. To get a better indication of the current host load, look at the active memory counter for each host to decide whether the host loads are balanced with one another. Specifically, DRS uses (active memory) + (25% of idle memory) to determine the current host load.
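As a worked example of that formula (the memory figures below are hypothetical, not from the source):

```powershell
# DRS memory demand = active memory + 25% of idle memory
$activeMB = 2048                          # hypothetical active memory
$idleMB   = 8192                          # hypothetical idle memory (consumed minus active)
$demandMB = $activeMB + 0.25 * $idleMB    # 2048 + 2048 = 4096 MB counted toward host load
```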

Starting with vSphere 5.5, an advanced setting called PercentIdleMBInMemDemand was introduced that can be used to change this behavior. Increasing this value causes DRS to weigh the consumed memory value more heavily when making its calculations. This often results in higher priority recommendations being generated; the higher the priority, the more likely DRS will move the VM to another host in order to achieve a better cluster balance.

A valid value for PercentIdleMBInMemDemand ranges from 0 to 100: 0 causes DRS to use only active memory in its calculations, and 100 causes it to use only consumed memory. In recent lab tests, setting PercentIdleMBInMemDemand=50 showed very positive results. Changing this value increased the number of higher priority recommendations on which the default migration threshold (3) would take action to move VMs. It also resulted in a more even balance of cluster load as represented in the vSphere Web Client, since DRS is using a value closer to the consumed memory metric.

PercentIdleMBInMemDemand can also be set to 100, which causes DRS to use consumed memory, the same value displayed by the vSphere Web Client. However, this should only be done in environments where memory is not over-committed; otherwise unexpected results may occur.

Bursty CPU Workloads

AggressiveCPUActive is another setting first introduced in vSphere 5.5. It is intended to improve CPU load balance in environments where CPU usage is very active but spiky in nature. DRS normally uses the average over the past 5-minute period for CPU demand calculations. When the AggressiveCPUActive advanced setting is enabled, DRS switches from the average to the 80th percentile (i.e., the 2nd-highest value in the interval). This helps in situations where DRS is not generating recommendations to move a VM based on CPU demand because the average demand over the past 5 minutes is much lower than needed to generate the recommendation.

Here is an example of the benefit of this advanced setting. Over a 5-minute period, a VM's samples are 50%, 70%, 10%, 5%, and 5%. This VM is spiky in nature: it demands a lot of CPU for a very short period of time and then returns to a near-idle state. The 5-minute average is 28%, which may not be high enough to warrant a recommendation to move the VM to another host. With AggressiveCPUActive enabled, however, DRS uses 50%, the 80th percentile (the 2nd-highest of the five samples).

Recommendations

To summarize, achieving a perfectly even load balance of hosts is not the primary intention of DRS. However, it can be configured to achieve this result at the expense of additional vMotion migrations.

  • Migration Threshold=4 | This is the first step to gain better load balance. It causes priority 4 recommendations to be applied, migrating VMs to other hosts in an effort to achieve a higher level of load balance.
  • PercentIdleMBInMemDemand=50 | This instructs DRS to factor in twice as much idle memory as the default when calculating host load. It uses values more closely resembling those shown in the vSphere Web Client and results in higher priority recommendations being generated. Combining this with the migration threshold setting above will cause even more recommendations to be generated, resulting in a very even balance across hosts.
  • AggressiveCPUActive=1 | Use this in environments where CPU demand is spiky and could otherwise be missed. If you are seeing increased ready time in your VMs and DRS is not generating recommendations, this setting could improve DRS effectiveness for these workloads.
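If you prefer to apply these advanced options from PowerCLI rather than the Web Client, a sketch follows. New-AdvancedSetting is a standard PowerCLI cmdlet, but the -Type ClusterDRS value and the cluster name Demo are assumptions to verify in your environment:

```powershell
$cluster = Get-Cluster -Name Demo   # hypothetical cluster name

# Weight idle memory at 50% when computing DRS memory demand
New-AdvancedSetting -Entity $cluster -Type ClusterDRS -Name 'PercentIdleMBInMemDemand' -Value 50 -Confirm:$false

# Use the 80th-percentile CPU sample instead of the 5-minute average
New-AdvancedSetting -Entity $cluster -Type ClusterDRS -Name 'AggressiveCPUActive' -Value 1 -Confirm:$false
```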

If you choose not to change any default DRS settings, you can rest assured that DRS is still working hard behind the scenes to ensure that all your VMs get the resources to which they are entitled.

The post Load Balancing vSphere Clusters with DRS appeared first on VMware vSphere Blog.

vRealize Operations 6.2: Intelligent Workload Placement with DRS

Feb 9, 2016

Today I want to discuss VMware vRealize Operations Manager v6.2 and specifically its Intelligent Workload Placement feature. This feature works in conjunction with, and complements, DRS to help VMs get the resources they need, ensuring better performance of the environment and applications. Distributed Resource Scheduler, also known as DRS, is a well-known and proven […]

The post vRealize Operations 6.2: Intelligent Workload Placement with DRS appeared first on VMware Cloud Management.

vRealize Operations 6.2: What’s New

Feb 8, 2016

VMware vRealize Operations (a.k.a. vROps) 6.2 is now GA and available on vmware.com. This long-awaited update covers all major areas of the product, including installation, configuration, licensing, alerting, dashboards, reports, and policies. Here is what you need to know: enhanced Distributed Resource Scheduler (DRS) integration. vRealize Operations now offers enhanced […]

The post vRealize Operations 6.2: What’s New appeared first on VMware Cloud Management.