
  • In general, the Nolana release shows only insignificant performance regressions compared to Morning Glory. Check-in and check-out times regressed by about 10%, but are still roughly 20% better than in the Lotus release. Response times with 25 users are very similar to those with 5 users, which indicates the system is stable from 1 to 25 users. The 20-user test has the best response times and the smallest difference from the Morning Glory release.
  • Services' memory utilization increases slightly during the test runs. This is probably a result of using a new cluster for the tests; memory utilization will likely grow over time until it reaches a steady state. A table comparing Nolana to Morning Glory memory utilization has been added. A longevity test has to be performed to confirm whether a memory leak is present.

  • The relevant services generally occupy CPU resources at nominal levels. Only mod-authtoken showed spikes, but the processes did not crash. CPU usage of all modules did not exceed 31%.
  • RDS CPU utilization did not exceed 20%.
  • Longevity test shows response times worsen over time.
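The relationship between the three releases claimed above can be checked with a quick back-of-the-envelope calculation. The sketch below uses a hypothetical Lotus check-out time of 1.00 s (not a measured value) purely to illustrate how a 10% regression against Morning Glory is still consistent with being ~20% better than Lotus:

```python
# Illustrative arithmetic only; the 1.00 s Lotus baseline is hypothetical.
lotus = 1.00                    # hypothetical Lotus check-out time, seconds
nolana = lotus * (1 - 0.20)     # ~20% better than Lotus -> 0.80 s
morning_glory = nolana / 1.10   # Nolana is a ~10% regression vs Morning Glory

print(f"Lotus {lotus:.2f}s  Morning Glory {morning_glory:.3f}s  Nolana {nolana:.2f}s")
```

With these numbers, Morning Glory comes out around 0.73 s, i.e. both recent releases remain well ahead of Lotus despite the Nolana regression.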

Test Runs


Virtual Users | Duration | Load generator size (recommended) | Load generator memory (GiB) (recommended)
5 users       | 30 mins  | t3.medium                         | 3
8 users       | 30 mins  | t3.medium                         | 3
20 users      | 30 mins  | t3.medium                         | 4
25 users      | 30 mins  | t3.medium                         | 4


Users Tested | Morning Glory | Nolana

Longevity Test

The longevity test shows that Check Out response time increased as time went on. 


Check Out | Morning Glory | Nolana
1st Hour  | 0.442s        | 0.850s
12th Hour | 0.484s        | 1.086s
24th Hour | 0.568s        | 1.485s

In the response time graph below, the Checkout Controller time (which aggregates all check-out API response times) increased over the 24-hour window, from 0.850s to 1.485s.
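The slowdown can also be expressed as a relative increase. This is a quick sketch using the timings from the table above; the helper function is ours, not part of the test tooling:

```python
def pct_increase(start: float, end: float) -> float:
    """Percentage increase from start to end."""
    return (end - start) / start * 100

# Check Out response times from the longevity table (seconds)
nolana = pct_increase(0.850, 1.485)  # Nolana column, 1st hour -> 24th hour
other = pct_increase(0.442, 0.568)   # comparison column, same window

print(f"Nolana: +{nolana:.1f}%  comparison: +{other:.1f}%")
```

So Check Out degraded by roughly 75% over the 24-hour run, a much steeper drift than the ~29% seen in the comparison column.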

[Graph: Checkout Controller response time over the 24-hour test]

The DB CPU utilization increased over time by 7 percentage points and was about 21% by the end of the test. There are large spikes every 30 minutes, caused by background tasks that run periodically and whose cost grew as the number of loans increased.

[Graph: DB CPU utilization]

The number of DB connections also rose over time, from 360 to 370. It's unclear what caused the DB to use more CPU resources as the test progressed.

[Graph: DB connections]

The database's memory dipped slightly, but there were no symptoms of a memory leak: the memory level bounced back up right after the test finished.

[Graph: database memory]

Modules CPU Utilization During Longevity Test

Here is a view of the CPU utilization. A couple of observations:

  • mod-authtoken takes up CPU resources in a cyclical pattern, between 9% and 12%.
  • Okapi uses only about 20% CPU on average, compared to about 450-470% on average in Lotus.
  • mod-users CPU utilization grew from 46% to 70%, dropped rapidly to 38%, then grew again, eventually reaching 80%.
  • mod-inventory-storage CPU utilization grew from 10% to 36%, dropped rapidly to 15%, then grew again and reached 25%.
  • mod-configuration CPU utilization grew from 24% to 38%, dropped rapidly to 23%, and stayed at that level until the end of the test.
  • mod-feesfine CPU utilization grew from 15% to 35%, spiking up to 50% every 30 minutes.
  • Other modules used less than 20% CPU on average.

[Graph: modules CPU utilization during the longevity test]

Here is the Service CPU Utilization graph showing the main modules involved.

[Graph: Service CPU Utilization, main modules]

There do not appear to be any memory leak issues in Nolana. There were no spikes and the processes did not crash. 

[Graphs: service memory utilization]