Dashboard & Continuous Testing Charts

Once you have at least one launcher running successful sessions, you can see the average logon time over a historical period, along with how long individual pieces of the logon took to process (profile load, GPO processing, and so on). This data can be examined for each individual environment, and thresholds can be set so that if the average logon time exceeds a certain value, the system generates an alert and notifies the responsible parties.
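Conceptually, the threshold alerting described above boils down to averaging recent session timings and flagging a breach. The sketch below is a minimal, hypothetical illustration of that logic (the sample data, field names, and threshold value are all invented; this is not the product's API):

```python
# Conceptual sketch of threshold-based alerting on average logon time.
# All data and names here are hypothetical, for illustration only.
from statistics import mean

# Hypothetical per-session timings in seconds, broken into logon phases.
sessions = [
    {"total": 28.4, "profile_load": 6.1, "gpo_processing": 4.2},
    {"total": 31.0, "profile_load": 7.0, "gpo_processing": 5.1},
    {"total": 35.9, "profile_load": 8.3, "gpo_processing": 6.0},
]

THRESHOLD_SECONDS = 30  # example alert threshold for total logon time

avg_total = mean(s["total"] for s in sessions)
if avg_total > THRESHOLD_SECONDS:
    # In the real system this would notify the responsible parties.
    print(f"ALERT: average logon time {avg_total:.1f}s exceeds {THRESHOLD_SECONDS}s")
```

The same idea extends to each phase: average `profile_load` or `gpo_processing` separately and compare against a per-phase threshold.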

The Dashboard

The main dashboard shows logon failures and logon performance.

mceclip0.png

Inside an environment, we can further break down the logon performance data versus logon failures.

mceclip1.png

With the Logon Performance radio button selected, we see a breakdown of the various components of the logon process, along with our threshold marker measured in seconds. We can also break down the view by time increment by selecting Timespan.

mceclip2.png

Note: Logon failure data appears only if the system has encountered any logon failures.

Under the Reporting tab, we can see additional historical logon data. We can look back over longer periods of time, as well as run the prediction model, which gives a realistic representation of what future performance will look like.
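The product's prediction model is internal, but the general idea of projecting future logon performance from historical averages can be sketched with a simple linear trend fit. Everything below (the sample data, the 7-day horizon, the linear model itself) is an assumption made for illustration:

```python
# Conceptual sketch: fit a linear trend to historical average logon
# times and project forward. Not the product's actual prediction model.
from statistics import mean

# Hypothetical daily average logon times in seconds, evenly spaced.
history = [24.0, 24.8, 25.5, 26.1, 27.0]

n = len(history)
xs = list(range(n))
x_bar, y_bar = mean(xs), mean(history)

# Ordinary least-squares slope and intercept for y = intercept + slope * x.
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history)) \
    / sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

# Project the average logon time 7 days past the last sample.
projected = intercept + slope * (n - 1 + 7)
print(f"trend: +{slope:.2f}s/day, projected in 7 days: {projected:.1f}s")
```

A steadily rising projection like this is exactly the kind of signal the reporting view surfaces, so capacity problems can be addressed before thresholds are breached.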


mceclip3.png

Be sure to select the appropriate information by clicking the arrow in the upper-left corner of the screen.

mceclip4.png

We can also export data as needed, or create SLA reports that notify individuals or groups when thresholds have been exceeded.

Suppose we also want to examine application performance, such as application start time and the functions the synthetic user performs within the workload. We can easily do this by diving into the Reporting tab and selecting the application data we want to examine.

mceclip0.png

As you can see in this example, we can select each individual function of the workload and gather historical performance metrics, including how long each function took to complete.

Suppose we want a high-level view of which applications are exceeding our defined performance thresholds within the environment. We can see this data on the Dashboard page by examining the applications.

mceclip1.png

If we click into an application that has data, we can see further performance metrics at a glance, including how long the various functions took to complete inside the workload.

mceclip2.png