Business Intelligence Software Company

Performance analysis of customer platform & optimisation recommendations

Project Brief:

The customer had a pressing requirement to ensure their existing e-learning system was robust enough to handle an influx of 50,000 prospective new users.

The customer was interested in understanding how their system performed under heavy load, as well as identifying which optimisations could easily be implemented to achieve their goal.

In addition, the customer wanted to take away a set of automated user journey tests & an accompanying framework to enable them to continue analysing system performance in future.



Blue Frontier recommended the following services:

  1. Analysis & review of the e-learning system's hosting & infrastructure
  2. Code structure analysis & review
  3. Automated test creation of eight common user journeys
  4. Performance test plan creation including:
    1. Overall & user journey specific load configurations
    2. Multiple user IP regions
  5. Performance test execution
  6. Performance test report
  7. Code optimisation recommendations
  8. Infrastructure optimisation recommendations
  9. Handover of test code & accompanying framework systems

While the infrastructure & code were being interrogated, automated test creation began. The eight user journeys covered a wide range of actions, from accessing specific pages & enrolling on courses to completing questionnaires & tutorial sessions. Four of the eight user journeys included further permutations, with varying percentages of users splitting between a selection of courses, walkthroughs, assessments & multiple account option actions.

JMeter was a natural choice for coding & structuring the user journeys, allowing us to easily debug any issues during small-load dummy runs. This process involved defining the verification checks to ensure that crucial data would be captured during execution for reporting purposes. We needed to confirm each user was actually completing all the tasks expected of them &, if they weren't, identify the issues.
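The kind of verification check described above can be sketched as follows. This is a hypothetical Python analogue of what a JMeter response assertion does, not our actual test code; the function name, markers & page content are all illustrative:

```python
# Sketch of a verification check: confirm the page a virtual user
# lands on contains the markers proving the step actually succeeded.
def verify_step(response_body: str, expected_markers: list[str]) -> list[str]:
    """Return the markers missing from the response body.

    An empty list means the step passed; a non-empty list tells us
    which expected content failed to appear, i.e. what the issue was.
    """
    return [m for m in expected_markers if m not in response_body]

# Example: check a (hypothetical) course-enrolment confirmation page.
issues = verify_step(
    "<h1>Enrolment complete</h1><p>Course: Intro 101</p>",
    ["Enrolment complete", "Course:"],
)
assert issues == []  # all markers found: the user completed the task
```

In the real test plans, JMeter's built-in assertions played this role, flagging during execution any journey step where the expected content never appeared.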

Once perfected, we moved on to configuring the framework in which to execute the tests. Our tried & tested choice was RedLine13, a cloud-based load testing platform that provides an inexpensive yet intelligent UI & integrates with various systems to ensure that performance test executions & the related user IP server spin-ups incur minimal uptime costs.

To enable the user journeys to run continuously for hours at a time, they required tens of thousands of unique user accounts starting & completing actions both in parallel & in sequence, so that the tests lasted the full duration of the executions & produced enough reporting data for a good-quality analysis. To achieve this, user credentials were split between several CSV files & fed into the individual user journeys as necessary. RedLine13 provides an easy way to do this.
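The credential-splitting step can be sketched like this, as a minimal Python illustration; the round-robin scheme, file contents & account format are assumptions for the sake of the example, not a copy of our production setup:

```python
import csv
import io

def split_credentials(accounts, num_files):
    """Round-robin an account list into num_files CSV chunks,
    one chunk per user journey / load generator."""
    chunks = [[] for _ in range(num_files)]
    for i, account in enumerate(accounts):
        chunks[i % num_files].append(account)
    # Render each chunk as CSV text. In practice these would be written
    # to files and fed to JMeter via its CSV Data Set Config element.
    files = []
    for chunk in chunks:
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["username", "password"])
        writer.writerows(chunk)
        files.append(buf.getvalue())
    return files

# Example with a tiny, made-up data set:
accounts = [(f"user{i}", f"pw{i}") for i in range(10)]
files = split_credentials(accounts, 3)
```

Each journey then reads its own file, so no two parallel virtual users share a session.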

The next step was to configure the user IP regions. To imitate the system's real-world use, we needed to distribute the user actions across three different US locations. RedLine13's core functionality provides a highly intuitive UI in which to do this via an integration with Amazon Web Services cloud servers.

Having set our various load configurations, our test executions were ready to commence. RedLine13 provides a wealth of reporting functionality, enabling us to monitor the user servers, API requests, average thread durations & much more. Once complete, it was easy to import metrics & graphs from RedLine13 into our final report for the customer.


The implementation of the source code's API communication presented an interesting challenge & led us to trace requests back to the source code. By doing this we could ensure user requests were accurately timestamped & using the correct unique session keys while active in the system. As some of the user journeys required users to enrol before performing further tasks, the quickest way to revert the data set between test executions was to reset the system's database.

Each of the eight user journeys & their permutations needed to run simultaneously, as opposed to in a linear fashion, & we used JMeter to structure & define the parallelisation. Prior to the first test execution, we reviewed the server settings to safely reduce any artificial constraints, electing to vary some LiteSpeed parameters & increase the connection limits to 30K.
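The simultaneous-rather-than-sequential execution can be sketched with a thread pool, purely as an illustration; the journey names are placeholders, & in the real tests JMeter thread groups provided this parallelisation:

```python
from concurrent.futures import ThreadPoolExecutor

def run_journey(name):
    """Stand-in for one automated user journey; each real journey was
    a JMeter thread group executing its samplers alongside the others."""
    return f"{name}: completed"

journeys = [f"journey_{i}" for i in range(1, 9)]  # the eight user journeys

# Execute all journeys concurrently instead of one after another.
with ThreadPoolExecutor(max_workers=len(journeys)) as pool:
    results = list(pool.map(run_journey, journeys))
```

Running the journeys side by side is what makes the load realistic: real users do not queue up to use the system one journey at a time.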


Having interrogated the code & infrastructure & analysed the performance test executions (using RedLine13 for the user experience data & New Relic for server monitoring), we were able to compile detailed analytical reports, resulting in informed optimisation recommendations for the customer to assess & implement where appropriate in future.

Further to this, we held a brief training session in which we handed over our automated tests, load configurations & RedLine13 licence to the customer for further performance testing.


This was a challenging project, from the initial investigation into the pre-existing API communication through to the design of the complex, multi-permutation user journeys. However, the hard work paid off & the customer was happy with the results.

Hannah – Test Manager

  • Apache JMeter
  • RedLine13
  • Amazon Web Services
  • New Relic
