- Overall, the Nolana release shows only minor performance regressions compared to Morning Glory. Check-in and check-out times regressed by about 10%, but are still roughly 20% better than in the Lotus release. Response times with 25 users are very close to those with 5 users, which indicates the system is stable from 1 to 25 users. The 20-user test has the best response times and the smallest difference from the Morning Glory release.
- Services' memory utilization increases slightly during the test runs. This is probably a result of running the tests on a new cluster; memory utilization is expected to grow over time until it reaches a steady state. A table comparing Nolana to Morning Glory memory utilization has been added. To confirm whether a memory leak is present, a longevity test has to be performed.
- The relevant services overall occupy CPU resources nominally. Only mod-authtoken shows spikes, but its processes did not crash. CPU usage of all modules did not exceed 31%.
- RDS CPU utilization did not exceed 20%.
- Longevity test shows response times worsen over time.
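The release-over-release percentages in the summary can be sanity-checked with a few lines of arithmetic. The absolute response times below are hypothetical, chosen only to match the ratios quoted above (a ~10% regression vs. Morning Glory that still leaves Nolana ~20% ahead of Lotus):

```python
# Illustrative check of the release-over-release comparison in the summary.
# Only the ratios come from the report; the absolute times are made up.

def pct_change(new: float, old: float) -> float:
    """Relative change of `new` vs `old`, in percent (positive = slower)."""
    return (new - old) / old * 100.0

# Hypothetical check-in response times (seconds) chosen to match the ratios.
lotus = 1.25
morning_glory = 0.91
nolana = morning_glory * 1.10  # ~10% regression vs Morning Glory

vs_mg = pct_change(nolana, morning_glory)  # positive: a regression
vs_lotus = pct_change(nolana, lotus)       # negative: still faster than Lotus
print(f"vs Morning Glory: {vs_mg:+.1f}%  vs Lotus: {vs_lotus:+.1f}%")
```

The point of the two comparisons is that a regression against the immediately preceding release can coexist with a net improvement against an older one.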
| # | Users | Duration | Load generator size (recommended) | Load generator Memory (GiB) (recommended) |
| --- | --- | --- | --- | --- |
| 1. | 5 users | 30 mins | t3.medium | 3 |
| 2. | 8 users | 30 mins | t3.medium | 3 |
| 3. | 20 users | 30 mins | t3.medium | 4 |
| 4. | 25 users | 30 mins | t3.medium | 4 |
| Users Tested | Morning Glory | Nolana |
| --- | --- | --- |
The longevity test shows that Check Out response time increased as time went on.
In the response time graph below, the Checkout Controller time (which aggregates all check-out API response times) increased over the 24-hour window, from 0.850 s to 1.485 s.
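Assuming the drift is roughly linear over the 24-hour window, the quoted endpoints translate into an overall increase and an hourly drift rate:

```python
# Quick arithmetic on the Checkout Controller trend from the longevity run:
# 0.850 s at the start of the 24-hour window, 1.485 s at the end.
start_s, end_s = 0.850, 1.485
window_h = 24

total_increase_pct = (end_s - start_s) / start_s * 100   # overall regression
drift_per_hour_ms = (end_s - start_s) / window_h * 1000  # assumes linear drift
print(f"total increase: {total_increase_pct:.1f}%, "
      f"drift: {drift_per_hour_ms:.1f} ms/hour")
```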
The DB CPU utilization increased by 7% over the course of the test and was about 21% by the end. There are large spikes every 30 minutes, caused by background tasks that run periodically and whose workload grew as the number of loans increased.
The number of DB connections also rose over time, from 360 to 370. It is unclear what caused the DB to use more CPU resources as the test progressed.
The database's memory dipped a bit, but there were no symptoms of a memory leak: the memory level bounced back up right after the test finished.
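One way to distinguish a transient dip like this from a genuine leak is to fit a trend line to periodic memory samples and only flag growth when the slope is meaningfully positive. The sketch below uses hypothetical sample data; a real check would read the service's memory metric from monitoring:

```python
# Minimal leak screen: fit a least-squares line to periodic memory samples
# and flag a leak only if the slope is meaningfully positive.
# The sample values below are hypothetical.

def slope_per_hour(samples_mb: list, interval_h: float) -> float:
    """Least-squares slope of evenly spaced memory samples, in MB per hour."""
    n = len(samples_mb)
    xs = [i * interval_h for i in range(n)]
    mean_x = sum(xs) / n
    mean_y = sum(samples_mb) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# A dip followed by recovery (like the DB memory here) gives a near-flat slope.
dip_and_recover = [4096, 4090, 4060, 4055, 4070, 4092]
steady_growth = [4096, 4110, 4125, 4141, 4156, 4170]

print(slope_per_hour(dip_and_recover, 4.0))  # near zero: no leak signal
print(slope_per_hour(steady_growth, 4.0))    # clearly positive: investigate
```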
Modules CPU Utilization During Longevity Test
Here is a view of the CPU utilization. A couple of observations:
- mod-authtoken takes up CPU resources cyclically, ranging from 9% to 12%.
- Okapi uses only about 20% CPU on average, compared to about 450-470% on average in Lotus.
- mod-users CPU utilization grew from 46% to 70%, dropped rapidly to 38%, then grew again, peaking at 80% before decreasing.
- mod-inventory-storage CPU utilization grew from 10% to 36%, dropped rapidly to 15%, then grew again and reached 25%.
- mod-configuration CPU utilization grew from 24% to 38%, dropped rapidly to 23%, and stayed at that level until the end of the test.
- mod-feesfine CPU utilization grew from 15% to 35%, with periodic spikes up to 50% every 30 minutes.
- Other modules used less than 20% CPU on average.
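The peak figures quoted above can be condensed into a small screening helper. The 85% alert line and the helper itself are assumptions for illustration, not part of the test setup; the peak values are taken from the bullets:

```python
# Peak CPU figures quoted in the observations above, plus a helper that
# flags modules whose peak exceeds an alerting threshold.
# The threshold values are assumptions, not part of the test setup.

peak_cpu_pct = {
    "okapi": 20,
    "mod-authtoken": 12,
    "mod-users": 80,
    "mod-inventory-storage": 36,
    "mod-configuration": 38,
    "mod-feesfine": 50,
}

def over_threshold(peaks: dict, limit: int) -> list:
    """Return module names whose peak CPU exceeds `limit` percent."""
    return sorted(name for name, peak in peaks.items() if peak > limit)

print(over_threshold(peak_cpu_pct, 85))  # none crossed a hypothetical 85% line
print(over_threshold(peak_cpu_pct, 45))  # the two busiest modules
```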
Here is the Service CPU Utilization graph showing the main modules involved.
There do not appear to be any memory leak issues in Nolana. There were no spikes and the processes did not crash.
- Grafana baseline test data: http://carrier-io.int.folio.ebsco.com/grafana/d/elIt9zCnz/jmeter-performance-test-copy?orgId=1&from=1668782380120&to=1668794190269&var-percentile=95&var-test_type=baseline&var-test=circulation_checkInCheckOut_nolana&var-env=int&var-grouping=1s&var-low_limit=250&var-high_limit=750&var-db_name=jmeter&var-sampler_type=All
- Grafana longevity test data: http://carrier-io.int.folio.ebsco.com/grafana/d/elIt9zCnz/jmeter-performance-test-copy?orgId=1&from=1669621859633&to=1669624226753&var-percentile=95&var-test_type=longevity&var-test=circulation_checkInCheckOut_nolana&var-env=int&var-grouping=1s&var-low_limit=250&var-high_limit=750&var-db_name=jmeter&var-sampler_type=All