Setting Up Login Enterprise for Endpoints

Overview

Deploying new security measures, updates, or applications is essential for maintaining your organization's security posture and operational efficiency. However, tools like Endpoint Detection and Response (EDR) and Data Loss Prevention (DLP) software, system updates, or application changes can sometimes introduce performance overheads that negatively impact user experience and productivity. This article explains how to use Login Enterprise to automate the testing of endpoint configurations on Windows-based physical devices, ensuring that your endpoints and critical applications maintain optimal performance without compromising security or functionality.

Balancing robust security, seamless updates, and optimal performance is a common challenge for IT organizations. Users expect smooth, uninterrupted experiences, but additional security measures or system changes can strain system resources, leading to frustration and decreased productivity. Login Enterprise provides a solution by simulating real-world user activities and collecting detailed performance metrics. This enables you to assess the impact of various endpoint configurations—whether it's security agents, system updates, or application changes—before deploying them across your organization.

Imagine quantifying the effects of new or updated security measures, system updates, or application changes on factors like application launch times, EUX scores, or CPU usage. With this data, you can make informed decisions, adjust configurations, and ensure that these changes don’t hinder daily operations.

When setting up Login Enterprise for Endpoints, you’ll follow an iterative testing process, no matter which use case you’re focusing on. Start by establishing baseline results, then implement changes specific to the use case you’re testing. After testing, review the results and compare them to your baseline. Based on this comparison, decide whether the change is ready for deployment or whether further adjustments are necessary.

For details on testing with different use cases, see Use Cases.

Use Cases

The following use cases describe various testing scenarios that leverage Login Enterprise for Endpoints to assess and ensure optimal performance, functionality, and reliability of physical Windows endpoints. These scenarios focus on how specific changes, such as security agent updates, Windows updates, or application modifications, impact system performance and end-user experience. By capturing key performance metrics and verifying application functionality, these Tests help ensure that system changes do not negatively affect productivity or system stability.

Security Agent Change Testing

This use case evaluates the impact of modifying or installing a security agent on a physical Windows endpoint.

Establish a baseline by running Tests on a standard Windows image while capturing key performance metrics (such as logon times, application responsiveness, and EUX score) using Login Enterprise for Endpoints.

Once the baseline is recorded, install or update the security agent, and rerun the Tests.

This helps measure any performance changes and verify that applications continue to operate correctly. The process provides a comprehensive view of both performance and functionality, ensuring that changes to the security agent do not unintentionally hinder end-user productivity or system stability.

Windows Update Change Testing

This use case assesses the effects of Windows updates on endpoint performance and application functionality.

The process begins by capturing baseline performance data on a physical Windows endpoint using Login Enterprise for Endpoints.

After applying the Windows update, rerun the Tests to compare measurements such as system responsiveness and resource utilization.

This method provides clear insights into the impact of the update on overall system performance while verifying that essential applications continue to run smoothly. By monitoring any deviations, you can ensure that the update maintains both system reliability and user productivity.

Application Change Testing

This use case analyzes the effects of changes made directly to key applications installed on physical endpoints.

The testing process starts with establishing a baseline by running Tests to capture the normal performance characteristics of critical applications, including their launch times, response rates, and stability under typical usage conditions.

After an application update or configuration change is applied, execute the same set of Tests using Login Enterprise for Endpoints.

This systematic approach ensures that modifications do not lead to functionality issues or performance degradation, supporting a smooth transition when updating or altering essential applications.

Setting Up Login Enterprise for Endpoints

1. Define Your Test Objectives

Start by clearly outlining your goals for testing:

  • Identify Critical Applications and Functionality: List the applications that are essential for user productivity (e.g., Microsoft Office or custom line-of-business applications).
  • Set Performance Metrics and Thresholds: Establish acceptable performance levels for metrics such as logon times, EUX scores, application response times, and CPU and memory utilization.
  • Collaborate with the Security Team: Work with the relevant teams (e.g., Security, IT, Operations) to understand the specifics of planned endpoint changes (e.g., security agent deployments, Windows updates, or application updates) that may affect performance.

Example and Walkthrough

The objective thresholds provided in this article are for demonstration purposes only. These examples should be adjusted according to your specific testing requirements.

  • The EUX score should not be more than 1 point lower than the baseline.
  • Logon time should not exceed 4 additional seconds.
  • Microsoft Edge application performance should not show a delta greater than 33% from the baseline.

If any of these thresholds are exceeded during Security Agent Change Testing, Windows Update Change Testing, or Application Change Testing, the changes will not be approved. If all results remain within the thresholds, the changes can be deployed.
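
To keep these objectives unambiguous, it can help to write them down as data that the team reviews alongside the Test plan. The following C# sketch simply encodes the three example thresholds above; the type and member names are hypothetical and are not part of Login Enterprise.

    // Hypothetical helper type: encodes the example objectives above as reviewable data.
    public sealed record EndpointChangeThresholds(
        double MaxEuxScoreDrop,          // maximum allowed EUX score drop below the baseline (points)
        double MaxExtraLogonSeconds,     // maximum allowed increase in logon time (seconds)
        double MaxEdgeResponseDeltaPct); // maximum allowed Microsoft Edge performance delta (%)

    public static class ExampleObjectives
    {
        // The demonstration thresholds used throughout this article.
        public static readonly EndpointChangeThresholds Default = new(
            MaxEuxScoreDrop: 1.0,
            MaxExtraLogonSeconds: 4.0,
            MaxEdgeResponseDeltaPct: 33.0);
    }

Whether you keep this in code, a spreadsheet, or the Test Run comments, the point is that pass/fail criteria are agreed on before the first Test is run.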

2. Set Up Test Configuration

Correctly configuring the test environment is critical, as it directly impacts the accuracy of the pass/fail decisions during the analysis of endpoint changes. Proper configuration ensures that the test results are reliable and meaningful, whether you are testing Security Agent Changes, Windows Updates, or Application Changes.

Example and Walkthrough

Configuring the testing scenario:

1. Download the Login Enterprise Launcher. To learn how to do this, see Downloading and installing the Login Enterprise Launcher.

2. Configure the Launcher (if not already set up).

Important Steps to Remember:

  • Keep the Launcher application running throughout the entire testing process.
  • Prevent the Launcher screen from locking by disabling the relevant policy, if necessary.
  • Before starting any Login Enterprise Tests, manually verify that a connection from the Launcher Host to the device under test succeeds (a simple reachability check is sketched below).

To learn about Launcher best practices, see Launcher Overview and Best Practices.
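
If you want a scriptable pre-check in addition to the manual connection test, a simple port probe from the Launcher host can catch obvious connectivity problems early. The C# sketch below only verifies that the default RDP port (3389) is reachable on a hypothetical endpoint name; it does not prove that an RDP logon will succeed, so the manual test above remains the definitive check.

    using System;
    using System.Net.Sockets;

    class RdpReachabilityCheck
    {
        static void Main()
        {
            var endpointHost = "ENDPOINT-01"; // hypothetical hostname of the device under test
            const int rdpPort = 3389;         // default RDP port; adjust if your environment differs

            try
            {
                using var client = new TcpClient();
                // Fail fast if the endpoint is unreachable or the port is blocked.
                if (!client.ConnectAsync(endpointHost, rdpPort).Wait(TimeSpan.FromSeconds(5)))
                    throw new TimeoutException("Connection attempt timed out.");

                Console.WriteLine($"{endpointHost}:{rdpPort} is reachable from this Launcher host.");
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Cannot reach {endpointHost}:{rdpPort} - {ex.Message}");
            }
        }
    }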

3. Add the Launcher to a Launcher Group. To learn how to do this, see Launcher Groups and Locations.

4. Set Up the Physical Endpoint for Baseline Testing. Ensure the physical endpoint matches the baseline setup for testing. Depending on the use case, this may mean no endpoint changes installed, the current (baseline) configuration of the security agent or Windows updates, or the initial “A” configuration of any applications. Also ensure that everything else the baseline depends on, such as required applications, updates, and configurations, is already applied to the endpoint.

5. Add the Login Enterprise Logon Script to the Physical Endpoint. For more information on the Logon component setup and configuration process, see Logon Components.

6. Add the Windows Test User that will be logged into the endpoint under test. For more information, see Creating Virtual Users and Groups.

7. Add the Test User to a Group. For more information, see Creating Account Groups.

8. Create a New Load Test Scenario and Connection. For more information, see Creating a Load Test.

The following parameters are essential while creating the Test:

  • Name: Provide a descriptive name for the test scenario.
  • Connector: Select Microsoft RDS.
  • RDS/RDP Broker/Host: Enter the hostname of the endpoint under test.
  • Accounts: Select the newly created account group that contains the Test User for the endpoint.
  • Launcher: Choose the Launcher Group that can successfully RDP into the endpoint.

9. Click Save to apply the changes.

10. In Load Tests, click on the newly created Test and configure the following Test settings:

  • Login: Set to 1 user for the individual endpoint.
  • Test Duration: Set the duration to a realistic timeframe. Longer Tests (e.g., 1 hour or more) are more representative of an employee’s workday and provide more accurate results. Shorter Tests yield quicker results but may miss issues only exposed in longer Tests. A 1-hour Test duration is recommended, but you can set it to the maximum workday duration (typically 8 hours) if needed.
  • EUX score: Enable this option to gather EUX metrics. This helps compare Test results and make informed decisions. For more information on the EUX score, see EUX Score and VSImax.
  • Session Metrics: Enable CPU and Memory utilization metrics. Custom metrics can be added if needed, but the default metrics group provides a quick start for testing. For more information on Session Metrics, see Session Metrics.

11. Click Save to apply the Test settings.

3. Configure the Applications (Test Scripts)

While Login Enterprise gathers login, EUX, and session metrics, it's also important to include Application testing (workloads) in your Test. These workloads should focus on the key applications essential for employee success—those that are crucial to business operations, high-risk, or frequently used in employee roles. Application testing ensures that these critical applications perform optimally, regardless of the underlying changes being tested (such as security agent updates, system updates, or application changes).

Example and Walkthrough

In the Load Test configuration, the Actions section defines the Application Workloads to be run. For this example, we’ll set up a simple workload that involves looping Microsoft Edge: opening the browser, browsing the web, and closing it, with intermittent EUX measurements taken.

In this example, Microsoft Edge is added to the Actions section. For more information, see Adding Applications to a Load Test.

You can configure different Test scenarios here to run against the applications critical to your organization’s success.

  • Creating Application Workloads. Workloads can be easily created using the no-code Login Enterprise Script Recorder (for more information, see Script Recorder), which allows you to record user interactions as scripts.
    • These scripts can then be uploaded to Login Enterprise and used in the testing scenario to simulate real-world Application usage. For more information, see Importing Application scripts.
  • Tailor Workload Sequences. Adjust the sequences to fit different user roles and performance requirements. For example, simulate user behavior by including realistic actions like:
    • Opening Applications
    • Typing and navigating menus
    • Using appropriate wait times to simulate real-world use

Use timers to measure specific actions, such as how long it takes to open a critical page in an app or to complete an important task. For more information, see StartTimer. (A hand-written workload sketch that uses timers appears at the end of this section.)

  • Setting Up Out-of-the-Box Workload Groups. Set up the Knowledge Worker or Task Worker workload group (for more information, see Knowledge Worker and Task Worker), which simulates typical office tasks. Note that Microsoft Office must be installed and licensed on the physical endpoint for this to work. To configure the workload:
    • Navigate to the Actions section of the Test configuration.
    • Click Add action(s) (the + icon) and select Application Group(s).

  • Select the preferred Application Group.

  • Ensure the Application Group has been added to the Actions.

  • Select Microsoft Edge, as it is typically pre-installed on Windows devices.

While a Login Enterprise Test scenario doesn't require Application Workloads, it is highly recommended to include them for more comprehensive testing.
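
For reference, a hand-written workload for the simple Edge scenario described above might look something like the sketch below. StartTimer and StopTimer are the timer functions referenced earlier; the base class, the remaining calls, and their exact signatures are assumptions based on typical Login Enterprise workload scripts, so verify them against a Script Recorder export or the scripting documentation before use.

    using LoginPI.Engine.ScriptBase;

    public class EdgeBrowseWorkload : ScriptBase
    {
        // Assumed structure: Login Enterprise workloads are C# scripts that derive from
        // ScriptBase and implement Execute(); verify against a Script Recorder export.
        private void Execute()
        {
            // Time how long it takes for Edge to start and present a window.
            StartTimer("Edge_Launch");
            ShellExecute("msedge.exe https://www.example.com"); // assumed process-launch helper
            var edge = FindWindow(processName: "msedge");       // assumed window lookup; parameter names may differ
            StopTimer("Edge_Launch");

            // Simulate a short browsing session; realistic wait times keep results representative.
            Wait(seconds: 30);

            // Close the browser so the next loop iteration starts from a clean state.
            edge.Close();
        }
    }

During a Load Test, the workload is repeated for the duration of the session, and timers you define show up with the application measurements when you analyze the results.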

4. Establish Baseline Performance

Before introducing any changes (such as security agent updates, system updates, or application changes), run Baseline Tests to capture performance metrics without the added changes.

1. Execute Baseline Tests: Run the configured Test on the target physical endpoint without any security agents or changes applied.

2. Observe Virtual Appliance Results: Review the performance metrics, including Application performance, session metrics, login times, and EUX scores.

3. Validate Results: Ensure there are no existing performance issues before introducing any changes.

Example and Walkthrough

1. Ensure the Launcher Is Running. Confirm that the Launcher application is running and the user has logged out of the endpoint under test.

2. Start the Load Test. For more information, see Starting a Load Test.

3. Confirm Test Start. In the confirmation dialog, click Confirm to initiate the Load Test. This will start the testing process.

4. Monitor Test Invocation. The Launcher will invoke the Test session on the endpoint under test. The logon script should automatically trigger the defined testing process on the endpoint.

5. Navigate to Dashboard. After starting the Test, navigate to Dashboard in the Login Enterprise sidebar menu to monitor the Test countdown, which will indicate when the Test is complete. For more information, see Dashboard.

6. Review Preliminary Results. Once the Test is complete, review the preliminary results to ensure there were no issues during the Test Run. For more information, see the Results page. Check Events for any errors or warnings.

7. Add Comments to Test Results. Add a comment to the Test results to indicate the mode of testing.

5. Implement Endpoint Changes

Introduce the endpoint changes to the Test endpoint. This could involve installing or updating any software, applying system configurations, or modifying settings to assess their impact on performance.

Steps (general approach):

  • Install/Update Endpoint Software: Follow the vendor's instructions for deploying or updating the software (e.g., security agents, applications, or updates) on the test endpoint.
  • Apply Configurations: Implement necessary configurations, such as enabling or disabling features, adjusting policies, or applying updates.
  • Document Changes: Keep a detailed record of all installations, updates, and configuration changes for future reference.
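
A lightweight way to keep that record is one structured entry per change, stored wherever your team tracks test evidence. The sketch below is purely illustrative; the field names are hypothetical and nothing here is required by Login Enterprise.

    using System;

    // Hypothetical change-log entry kept alongside Test evidence for traceability.
    public sealed record EndpointChangeRecord(
        DateTime AppliedOn,        // when the change was applied to the test endpoint
        string Component,          // e.g., security agent, Windows update, application name
        string VersionOrBuild,     // version, build, or patch level after the change
        string ConfigurationNotes, // settings toggled, policies applied, exclusions added
        string RelatedTestRun);    // the Test Run or comment that measured this change

Recording the same identifiers in the Test Run comments makes it easy to match results to changes later.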

Example and Walkthrough

This could involve introducing a new application, updating a security agent, or applying a Windows update. In this example, Bitdefender was installed with real-time scanning temporarily disabled.

6. Execute Tests with Endpoint Changes

After applying the endpoint changes, execute the tests again to measure the impact of the changes on performance and system behavior.

Steps to follow (general approach):

  • Use the Same Test Configuration: Ensure the configuration remains the same as in the Baseline Tests, with the only difference being the changes made to the endpoint.
  • Verify Test Results: After running the Tests, review the results to ensure that the Test was completed successfully and no issues were encountered.
  • Add a Comment to the Test Run: Provide a comment to indicate that the test includes endpoint changes (whether it’s a software update, security agent change, or application modification).

7. Iterate Testing

Test any additional scenarios that need to be compared to previous results.

For example, in this Test, Bitdefender's real-time scanning was enabled to measure the impact compared to the previous Test.

8. Analyze Test Results

After running the Tests, compare the baseline data with the post-change results. For details, see Comparing Multiple Load Tests.

  • Utilize Virtual Appliance Reporting: Access detailed graphs and reports in the Virtual Appliance to review the Test outcomes.
  • Focus on Key Metrics: Identify any deviations in critical metrics such as:
    • CPU usage
    • Memory consumption
    • Application launch and performance times
    • Login times
    • End User Experience scores
  • Assess Against Thresholds: Compare the results with the predefined thresholds to determine if any metrics exceed acceptable levels.

Example and Walkthrough

1. In this example, we compare 3 sets of Test results:

Load Testing is preferred over Application Testing in this scenario, as Load Testing provides valuable metrics such as End User Experience and login times, which Application Testing does not.

2. Open the Overview page, which is the default comparison view in Login Enterprise. This page allows you to compare results from 2 to 5 Load Tests. It also lets you set a Baseline Test and view key metrics like EUX, login times, and Application response times in a structured format. The Overview page dynamically updates when the Baseline Test is changed, highlighting differences while maintaining consistent sorting for easy comparison. Pay attention to the highlighted items (from left to right) to identify significant deviations.

a. Set as Baseline: You can select the Test to use as the baseline. By default, the oldest Test will be selected as the baseline.

b. Cross-Reference Test Results: Under the Name column, you can cross-reference the Test number (shown under the Test name) with the Test Comment column. This helps identify which results correspond to which Test. For example, in this case, Test 0003 is the Baseline Test and corresponds to the "No 3rd party agent" Test (as indicated in the comment).

c. Review EUX Scores: The high-level EUX scores are prominently displayed in the Overview. Higher scores indicate better user experience and system performance.

d. VSImax Value: In this testing, the "VSImax" column will show red X icons, as the Tests were conducted with only one user per endpoint. This is expected, as "VSImax" represents the maximum number of concurrent users that can use the system while maintaining an acceptable EUX score. Since the Tests are single-user, this metric isn't applicable and will show as red X by default.

e. Compare Test Iterations: In the Overview tables (e.g., EUX score / VSImax), the green column headers represent the Baseline Test. The columns to the right display the percentage deviation from the baseline (the calculation is sketched after this list). For example, a +3% deviation means the current Test performed 3% better than the baseline.

  • Positive (Green): Indicates better performance than the baseline.
  • Negative (Red): Indicates worse performance than the baseline.
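
For reference, the percentage deviation is a relative change against the baseline value. The snippet below illustrates the basic calculation with made-up numbers; any additional rounding or sign handling that Login Enterprise applies is not covered here.

    using System;

    class DeviationExample
    {
        // Relative deviation from the baseline, as shown in the comparison columns.
        static double DeviationPercent(double baselineValue, double currentValue) =>
            (currentValue - baselineValue) / baselineValue * 100.0;

        static void Main()
        {
            // Illustration with made-up scores: a baseline EUX score of 7.0 against a
            // current score of 7.2 gives roughly +2.9%, i.e., slightly better than baseline.
            Console.WriteLine($"{DeviationPercent(7.0, 7.2):+0.0;-0.0}%");
        }
    }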

3. Open the Charts page. The Charts page displays metrics for EUX, Logins, Applications, and Session Metrics. For details, see the Overview page and Charts and Graphs.

Example: If a login action takes 300% longer than the baseline, it would indicate a failed result based on the Test objectives defined earlier. This would signal that further configuration or changes to the security agent are needed, prompting further testing.

4. Generate a PDF Report. Optionally, you can generate a PDF report for easy sharing with stakeholders. For details, see Generating a Load Test PDF Report and Configuring a Load Test PDF Report. Save and share the generated PDF report as needed.

Conclusion

This conclusion is based on the results from the example testing provided earlier in this article (see the Define Your Test Objectives section), compared against the defined Test objective thresholds. This helps assess the impact of the changes made to the endpoint, whether they involve security agents, application updates, or other configuration adjustments.

1. End User Experience Score

  • Threshold: The End User Experience score can’t be more than 1 point lower than the baseline.
  • Result: The baseline score (with no security agent) was compared to the score with the security agent and real-time scanning enabled. The difference was 1.6 points, a 21% decrease.
  • Outcome: FAIL—since 1.6 is greater than the allowable 1-point difference.

2. Logon Time

  • Threshold: Logon time can’t be more than 4 seconds longer than the baseline.
  • Result: The total logon time difference was 3.3 seconds (8.4 seconds minus 5.1 seconds).
  • Outcome: PASS—since 3.3 seconds is less than the allowable 4-second difference.

3. Microsoft Edge Apps Performance

  • Threshold: Microsoft Edge application performance can’t deviate from the baseline by more than 33%.
  • Result: The total performance deviation was -36%.
  • Outcome: FAIL—since a 36% deviation exceeds the allowable 33%.

Since the defined performance thresholds were not met after the changes introduced by the security agent, further discussions with the security team are necessary. Objective results are now available to present and to help address the performance issues identified during testing.
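
If you want the go/no-go call itself to be repeatable, the threshold comparison is easy to script. The sketch below simply replays the example figures from this conclusion against the example thresholds from Define Your Test Objectives; it is illustrative only and is not a Login Enterprise feature.

    using System;

    class ExampleGoNoGo
    {
        static void Main()
        {
            // Measured deltas from this walkthrough (baseline vs. agent with real-time scanning).
            double euxScoreDrop   = 1.6;       // points below the baseline EUX score
            double extraLogonSecs = 8.4 - 5.1; // = 3.3 seconds of additional logon time
            double edgeDeltaPct   = 36.0;      // % deviation in Microsoft Edge performance

            // Example thresholds from Define Your Test Objectives.
            bool euxOk   = euxScoreDrop   <= 1.0;
            bool logonOk = extraLogonSecs <= 4.0;
            bool edgeOk  = edgeDeltaPct   <= 33.0;

            Console.WriteLine($"EUX score:  {(euxOk ? "PASS" : "FAIL")}");
            Console.WriteLine($"Logon time: {(logonOk ? "PASS" : "FAIL")}");
            Console.WriteLine($"Edge apps:  {(edgeOk ? "PASS" : "FAIL")}");
            Console.WriteLine(euxOk && logonOk && edgeOk
                ? "All thresholds met: the change can be deployed."
                : "One or more thresholds exceeded: the change is not approved.");
        }
    }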

Best Practices

1. Test Across Different Devices. Different hardware may respond differently:

  • Include Diverse Endpoints: Conduct Tests on diverse endpoints, including different device types and ages, based on what’s in production or to determine the best value endpoint hardware to purchase. Conduct a bake-off if needed.
  • Adjust Thresholds for Older Hardware: Set realistic expectations for older hardware to avoid unrealistic comparisons.
  • Plan for Upgrades: Identify older endpoints that may need replacement or upgrades.

2. Test Before Each Endpoint Change. Organizations that don’t test changes (such as security updates, application modifications, or endpoint configuration changes) before deployment risk breaking applications, preventing them from starting, or degrading endpoint performance, all of which can lead to lost productivity.

3. Iterate Results. If issues are found during testing, take corrective actions:

  • Adjust Security Configurations: Modify settings that are causing performance issues, such as adjusting scan frequencies or creating exclusions for certain processes.
  • Consult Vendors: Reach out to the security software provider for optimization tips and best practices.
  • Rerun Tests: Rerun Tests to evaluate whether the changes improved the situation and met the defined performance thresholds.

Investing in thorough endpoint testing and optimization helps improve operational efficiency and fortifies your organization's overall security posture. In a world where both security threats and performance expectations are high, taking these proactive steps is crucial.