Sunday, 27 August 2017

Top VDI End-User Experience Metrics and How to Measure Them

When an organization makes the transition from physical to virtual desktops, the IT department must do everything it can to make sure that the virtual desktops perform as well as, if not better than, the physical desktops they are replacing. To do so, administrators have long relied on performance metrics. Today, however, a fundamental shift is occurring in the way virtual desktops are monitored. To put it simply, the industry is finally beginning to accept the idea that the end-user experience is more important than raw performance metrics.



Performance metrics will always have their place, but raw metrics are open to interpretation. After all, there are sometimes situations in which users complain of slow performance even though the performance metrics fall within a healthy range. Conversely, user sessions might sometimes perform really well even though the performance metrics seem to indicate that they shouldn’t. When either of these situations occurs, it is a clear indication that you may not be monitoring the correct metrics.

If you really want to get the most out of your VDI monitoring software, then you must figure out how to home in on the end-user experience rather than merely examining a series of raw performance metrics. This requires doing three things:
  • You must understand the types of end-user activities that are most likely to be reported as being slow.

  • You need to know which storage, CPU, memory, and network resources are being used when the user performs those activities. In other words, what is the OS doing behind the scenes, and what hardware is it using? (A minimal measurement sketch follows this list.)

  • Once you have determined which resources each activity uses, you must correlate the user experience metrics with those resources so that you can pinpoint the OS, network, storage, or server configuration issue behind the problem the user is reporting.
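As an example of the second step, a lightweight way to build that activity-to-resource mapping is to snapshot the desktop's resource counters immediately before and after a given user activity. The sketch below assumes the psutil package is installed and is run inside the virtual desktop; it simply diffs disk and network counters and samples CPU and memory while the activity runs, so treat it as a starting point rather than a full profiling tool.

```python
# Minimal sketch (assumes the psutil package is installed): snapshot disk and
# network counters around a user activity and sample CPU/memory while it runs,
# to see which resources the activity actually leans on.
import time

import psutil


def snapshot():
    """Raw cumulative counters to diff across the activity."""
    return {
        "disk": psutil.disk_io_counters(),
        "net": psutil.net_io_counters(),
    }


before = snapshot()
cpu_samples = []

# Stand-in for the user activity being profiled (e.g. an app launch):
# here we simply sample CPU once per second for ten seconds while it runs.
for _ in range(10):
    cpu_samples.append(psutil.cpu_percent(interval=1))

after = snapshot()

print(f"Avg CPU during activity: {sum(cpu_samples) / len(cpu_samples):.1f}%")
print(f"Memory in use now:       {psutil.virtual_memory().percent:.1f}%")
print(f"Disk reads / writes:     {after['disk'].read_count - before['disk'].read_count}"
      f" / {after['disk'].write_count - before['disk'].write_count}")
print(f"Net bytes recv / sent:   {after['net'].bytes_recv - before['net'].bytes_recv}"
      f" / {after['net'].bytes_sent - before['net'].bytes_sent}")
```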

VDI End-User Experience Metrics


The first step in the process is to figure out what types of activities the end users are most likely to use to gauge system performance. Generally speaking, users tend to equate overall performance with system responsiveness. In other words, the user’s perception of system performance is based largely on what the user is seeing or feeling. The main VDI UX metrics include:

Logon Duration – The amount of time that it takes the user to log in. When a user enters their password, they should be taken to the desktop almost immediately.

App load time – When a user launches an application, there should be an almost instant indication that the application is loading, and the loading process should complete within a few seconds. For instance, if a user launches Microsoft Excel 2013, they should see the Excel splash screen almost immediately and be taken into Excel a few seconds later. (A simple way to measure this is sketched after this list.)

App response time – Application response time can be a bit more difficult to quantify than some of the other end-user metrics. The basic idea, however, is that when a user is working within an application, they should not typically see the “hourglass” or have to stop and wait for the application to catch up.

Session Response Time – Session response time refers to how well the user’s operating system responds to user input. There should not be, for example, a noticeable lag when a user drags a window across the screen, opens the Start menu, or performs a desktop search.

Graphics Quality and Responsiveness – There is admittedly a bit of overlap between graphics quality and responsiveness and session response time. From an end-user perspective, graphics quality and responsiveness simply means that the end user has the same graphical experience in a virtual desktop environment that they would have on a physical desktop. If the user plays a video, for example, the video should play smoothly.
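To illustrate how app load time might be measured on a Windows endpoint, the sketch below launches an application and times how long it takes for the process to present its first visible window. It assumes the pywin32 package is installed, and notepad.exe stands in for whatever application you actually care about; commercial tools measure this far more precisely, so treat it as a rough proxy.

```python
# Minimal sketch: time how long a launched app takes to show its first window.
# Assumes a Windows endpoint with the pywin32 package installed; notepad.exe
# is just an illustrative placeholder for the application being measured.
import subprocess
import time

import win32gui
import win32process


def has_visible_window(pid):
    """Return True once the process owns at least one visible top-level window."""
    found = []

    def callback(hwnd, _):
        _, window_pid = win32process.GetWindowThreadProcessId(hwnd)
        if window_pid == pid and win32gui.IsWindowVisible(hwnd):
            found.append(hwnd)
        return True  # keep enumerating

    win32gui.EnumWindows(callback, None)
    return bool(found)


start = time.perf_counter()
proc = subprocess.Popen(["notepad.exe"])

# Poll until the app presents a window (or give up after 30 seconds).
while not has_visible_window(proc.pid):
    if time.perf_counter() - start > 30:
        raise TimeoutError("App did not show a window within 30 seconds")
    time.sleep(0.05)

print(f"App load time: {time.perf_counter() - start:.2f} s")
```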

What Impacts the End-User Experience Metrics?


When a shortcoming with the end-user experience is identified, the next step in the process is to identify the conditions contributing to the poor performance that the user is experiencing. Ultimately, user reports of slow virtual desktops can often be traced either to excessive resource consumption or to insufficient resource allocation. In either case, the key to resolving the issue is to identify the resource that is in short supply and then make any necessary adjustments. It is important to keep in mind, however, that performance issues are not always hardware related. An improper OS configuration can just as easily cause performance problems. For example, if a configuration error causes a user to be authenticated by a remote domain controller, the user will experience a slower than normal logon even if the proper hardware has been allocated to the virtual desktop.
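As a quick illustration of the domain controller example above, the following sketch reads the LOGONSERVER environment variable inside a Windows session and compares it against a list of expected local domain controllers. The DC names shown are assumptions and would need to be replaced with the names used in your own environment.

```python
# Minimal sketch: confirm which domain controller authenticated the session.
# On Windows, the LOGONSERVER environment variable names the DC that handled
# the logon; comparing it against a list of expected "local" DCs (assumed
# names below) flags sessions that authenticated against a remote site.
import os

EXPECTED_LOCAL_DCS = {"DC-SITE1-01", "DC-SITE1-02"}  # hypothetical DC names

logon_server = os.environ.get("LOGONSERVER", "").lstrip("\\")
if not logon_server:
    print("LOGONSERVER is not set (not a domain logon?)")
elif logon_server.upper() in EXPECTED_LOCAL_DCS:
    print(f"Authenticated by local DC: {logon_server}")
else:
    print(f"Authenticated by remote DC: {logon_server} - expect slower logons")
```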

Some of the more commonly reported end-user experience problems and their most common causes include:

Slow logon – Although the Windows logon process is complex and includes many phases, the most common causes of slow logons relate to misconfigured user profiles, Group Policy objects, or insufficient storage IOPS. Microsoft has an excellent article on root causes for slow logons.

App Load Time – The root cause of slow application load times varies with the application type and its dependencies, but generally speaking, insufficient IOPS and slow storage devices have a negative effect on app launch time. Tools like Process Monitor can help you figure out what an app is doing during startup and pinpoint specific issues.

App Response Time – Slow app response time can be caused by insufficient resources on the client (VDI endpoint) device, the application server, or the network. APM software can be used to troubleshoot complex app response time issues.

Session Response Time – Slow session response time is primarily caused by remote protocol latency (e.g. HDX, PCoIP, and RDP), high CPU usage, storage latency, or non-optimized virtual desktop settings (for example, full graphics being used over a low-bandwidth connection). A simple latency check is sketched after this list.

Graphics Performance – Poor graphics performance may point to a missing or improperly configured GPU. It can also be caused by the same factors as poor session response time.
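As a rough illustration of checking remote protocol latency, the sketch below times a few TCP connection handshakes to the VDI gateway and reports the median. The host name and port are assumptions; substitute the gateway address and the display-protocol or gateway port actually used in your environment (for example, 443 for a gateway or 3389 for plain RDP).

```python
# Minimal sketch: estimate network round-trip latency toward the VDI gateway
# by timing a handful of TCP connection handshakes. The host name and port
# below are assumptions and must be replaced with your own values.
import socket
import statistics
import time

GATEWAY = "vdi-gateway.example.com"  # hypothetical gateway host name
PORT = 443                           # hypothetical gateway port

samples = []
for _ in range(5):
    start = time.perf_counter()
    with socket.create_connection((GATEWAY, PORT), timeout=5):
        pass  # connection established; we only care about the handshake time
    samples.append((time.perf_counter() - start) * 1000)
    time.sleep(1)

print(f"Median connect latency: {statistics.median(samples):.1f} ms")
```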

Common Approaches for Measuring End-User Experience Metrics


There are two main approaches used to measure end user experience metrics in VDI environments:

Synthetic Transactions
Real User Monitoring (RUM)

Each vendor has its own approach to synthetic transactions, but generally speaking, the technology works by using simulated users to launch real sessions and then measuring key performance metrics such as logon duration and app load times. A minimal sketch of the probe-and-log pattern appears after the vendor list below. Here are some vendors that use the Synthetic Transactions approach for VDI User-Experience monitoring:

eG Innovations
Login PI
Goliath Technologies (Logon Simulator)
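To make the synthetic transactions pattern concrete, here is a minimal sketch of a probe that runs a scripted transaction on a schedule and appends the timing to a CSV file for trending. The transaction body shown is only a stand-in (it times a trivial Windows command); a real probe would script a full logon or app launch, such as the measurements sketched earlier, and the file name and interval are arbitrary choices.

```python
# Minimal sketch of the synthetic-transaction pattern: a scheduled probe runs a
# scripted action and appends its duration to a CSV for trending and alerting.
# The transaction below is only a stand-in; a real probe would script a full
# logon or app launch instead.
import csv
import subprocess
import time
from datetime import datetime, timezone

RESULTS_FILE = "synthetic_probe.csv"   # arbitrary output location
PROBE_INTERVAL_SECONDS = 300           # run the transaction every five minutes


def run_transaction():
    """Placeholder transaction: return how long the scripted action took, in ms."""
    start = time.perf_counter()
    subprocess.run(["ipconfig"], capture_output=True, check=True)  # Windows-only stand-in
    return (time.perf_counter() - start) * 1000


while True:
    elapsed_ms = run_transaction()
    with open(RESULTS_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), f"{elapsed_ms:.1f}"]
        )
    time.sleep(PROBE_INTERVAL_SECONDS)
```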

The other approach that is commonly used is Real User Monitoring, or RUM for short. As the name implies, Real User Monitoring works by observing real user sessions and keeping track of key performance metrics. Again, each vendor has its own way of doing things; a minimal sketch of the in-session sampling pattern appears after the vendor list below. Here are some vendors that use the RUM approach for VDI User-Experience monitoring:

Aternity
ControlUp
ExtraHop
Goliath Technologies (Performance Monitor)
Lakeside Software
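And to make the RUM pattern concrete, here is a minimal sketch of an agent that runs inside the real user's session and samples experience-related counters on an interval, writing them to a local CSV. It assumes the psutil package is installed and the file name and interval are arbitrary; a real product would also capture protocol latency, logon phases, and per-application timings, and would ship the data to a central collector.

```python
# Minimal sketch of the RUM pattern: sample experience-related counters from
# inside the real user's session on an interval and record them locally.
# Assumes the psutil package is installed.
import csv
import time
from datetime import datetime, timezone

import psutil

LOG_FILE = "session_metrics.csv"  # arbitrary local output file
INTERVAL_SECONDS = 30             # sampling interval

with open(LOG_FILE, "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        cpu = psutil.cpu_percent(interval=1)   # CPU saturation in the session host
        mem = psutil.virtual_memory().percent  # memory pressure
        disk = psutil.disk_io_counters()       # cumulative storage I/O counters
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            cpu,
            mem,
            disk.read_count,
            disk.write_count,
        ])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```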

Conclusion


Although it is easy to become preoccupied with virtual desktop performance metrics, it is ultimately the end-user experience that really matters. The key to providing your users with a good experience is to understand the relationship between hardware resources, user activities, and performance metrics, and then use that information to allocate the necessary hardware resources to the individual virtual desktops. VDI admins should look for a VDI monitoring tool that tracks the metrics most likely to impact the end-user experience, such as logon duration and application load time.


Source: http://docphy.com/technology/computers/software/top-vdi-end-user-experience-metrics-measure.html
