Performance is critical to the success of a web application
If we develop a web application without paying attention to performance from the beginning, it will cost us a lot later to detect and fix the related issues. A well-performing website can also attract more users and more revenue in a market full of competitors. This post summarizes the tools needed to detect performance issues in the early phases (development and testing) of Java backend web apps, in order to verify that the design is implemented correctly and efficiently. The tools cover both monolithic and microservice-based Java backends. Please note that we are not affiliated with any of the vendors of the tools mentioned in this post.
When we talk about performance in the backend, we have to use metrics and indicators so that everybody on the project, including the customer, shares the same standard for "how fast is acceptable". There are many indicators to focus on, and which ones matter may vary with your role. This post takes the developer's point of view and focuses on the following indicators.
Availability - the proportion of time a system is in a functioning condition.
The focus here is much simpler than the definition itself: have your application and its modules stayed available since deployment during the testing phase? You can't bring a web application to production if it stops frequently and unexpectedly. If you are running a monolithic web app, availability problems are easy to observe during testing. However, if you are running a microservice web app, they may not be visible to the testers, because mechanisms such as load balancing or the circuit breaker pattern mask dead modules, or because the failing part is a background task whose availability nobody watches. In one project that used OpenShift to deploy the modules on pods, one pod was restarted regularly, but the testers never noticed because the restarted workload was a scheduled task.
Response time - the time it takes for the system to answer a request. It's not simply the execution time of a method, nor just a gate condition on how long an execution is allowed to run.
From a developer's point of view, life is easier if we receive a detailed report about slow-running methods, plus some indicators to analyze where the slowness comes from, rather than just a message like "It took too long to see the result on screen X when I clicked button Y". Most projects pay no attention to this at the beginning. Setting up a general measuring mechanism early is much easier than retrofitting it when the whole project is already running, because by then you have to deal with many exceptional cases and may miss some. Otherwise, you end up measuring each screen manually to detect the bottleneck. What happens when you have to fix the issue for 10 screens whose business function you aren't clear about?
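Such a general mechanism doesn't have to be heavyweight to start with. As a minimal, framework-free sketch (the class, threshold, and method names are illustrative, not from the Pet Clinic code), a wrapper that reports any call exceeding a threshold:

```java
import java.util.function.Supplier;

public class SlowCallDetector {
    // Threshold above which a call is reported as slow (illustrative value).
    private static final long THRESHOLD_MS = 200;

    // Wraps any call, measures its duration, and reports it when it is too slow.
    public static <T> T timed(String name, Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs > THRESHOLD_MS) {
                System.out.println("[SLOW] " + name + " took " + elapsedMs + " ms");
            }
        }
    }

    public static void main(String[] args) {
        // A deliberately slow call to trigger the report.
        String result = timed("findOwners", () -> {
            try {
                Thread.sleep(300);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "owners";
        });
        System.out.println(result);
    }
}
```

In a real Spring project you would typically implement the same idea once, centrally, as an AOP aspect or a servlet filter rather than wrapping calls by hand.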
Utilization - Resource (CPU, Memory, Disk, Network) utilization.
Common questions for this indicator are:
- How much CPU is being consumed by the application/each module? → Over-allocating CPU wastes computing power: if you allocate one CPU core to your app and see that it never consumes more than 10%, consider allocating just 0.5 core instead. However, if you keep the CPU 100% busy and leave no headroom for tasks such as garbage collection, the whole app slows down.
- How much memory is used, and how much is left? → Track this day by day to see whether any task "eats" a lot of memory. Those tasks are the hot spots to analyze. Note that "a lot" varies between projects. Some projects I have observed are very strict, for example: a module must not use more than 50MB. That is typically the case for microservices.
- How long is the I/O activity?
- How many network round trips do we need to complete a business task?
These three indicators (availability, response time and utilization) are not exhaustive, but they are the ones that can easily be detected during the development and testing phases by using monitoring tools.
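For a quick look at the utilization numbers from inside the JVM itself, the standard management beans already expose most of them; a minimal, framework-free sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;

public class UtilizationSnapshot {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();

        // System load average over the last minute; -1 if not available (e.g. on Windows).
        System.out.println("Load average : " + os.getSystemLoadAverage());
        System.out.println("CPU cores    : " + os.getAvailableProcessors());
        System.out.println("Heap used    : " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("Heap max     : " + heap.getMax() / (1024 * 1024) + " MB");
    }
}
```

The monitoring tools discussed below essentially sample these same beans (plus OS-level counters) on a schedule and chart the results for you.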
How to monitor the 3 indicators above on Pet Clinic?
Things would be too theoretical without a context. In this post, I use the popular Spring Boot Pet Clinic web application from the Spring community to demonstrate how the indicators above can be used to monitor the performance of a web app. The deployment model of this application is microservices, but the techniques demonstrated here apply to both microservice-based and monolithic web apps. I customized the application a bit to:
- Add a dummy RabbitMQ call from visits-service to customers-service.
- Add JavaMelody to the customers-service module.
- Change the code to use a MySQL DB instead of the H2 DB when running in Docker.
Availability and Utilization for a quick view of application status
As mentioned above, if your web app is monolithic you can detect this easily. With microservices there are several options.
Spring Boot Admin
There are many container orchestration tools now, and they all provide a utility to check the availability of a module in a microservice application. Even so, thanks to its simplicity, this tool still has its place. I'm working on a microservice project whose team decided to use Spring Boot Admin because of its "plug and play" fit with the Spring ecosystem. If your project can only spend a small effort on integrating monitoring tools, this is a suitable solution. Each Spring Boot version comes with a corresponding, compatible Spring Boot Admin version, which can be integrated as easily as any other Spring Boot module. The provided view gives you just enough information about a module, and if your microservice application is simple (1 or 2 instances of each microservice), it is perfectly adequate.
With this tool you can also view CPU and memory usage, export heap dumps and thread dumps, and change the log level at runtime to debug things more easily.
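Registering a module with a Spring Boot Admin server is mostly configuration; a sketch of the client side, assuming the spring-boot-admin-starter-client dependency is on the classpath and the Admin server runs at localhost:9090 (URL and port are placeholders — adjust the property names to your Admin version):

```yaml
# application.yml of a monitored module
spring:
  boot:
    admin:
      client:
        url: http://localhost:9090   # where the Admin server runs (placeholder)
management:
  endpoints:
    web:
      exposure:
        include: "*"                 # let Admin read health, metrics, loggers, ...
  endpoint:
    health:
      show-details: always
```

Exposing all Actuator endpoints like this is convenient for development and testing; lock it down before production.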
How to use Grafana instead
You can follow the instructions on the Grafana page and integrate it. With a few lines of configuration, you can get nice results like this (the picture below is taken from the Grafana page).
With Grafana and Prometheus, you can even export additional information to monitor, as in the custom-metrics example later in this post.
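For the Grafana/Prometheus route, a Spring Boot app typically just needs the micrometer-registry-prometheus dependency plus the Actuator scrape endpoint exposed; a minimal configuration sketch (assuming Spring Boot 2.x Actuator property names):

```yaml
# application.yml — expose the Prometheus scrape endpoint via Actuator
management:
  endpoints:
    web:
      exposure:
        include: health, prometheus   # Prometheus scrapes /actuator/prometheus
```

Prometheus then polls each module's /actuator/prometheus endpoint, and Grafana charts whatever Prometheus has stored.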
How to monitor my monolithic app
First, you can reuse Grafana or Spring Boot Admin; both are very easy to set up. Another choice is JavaMelody, which provides much more detailed information in an all-in-one GUI. The picture below shows the JavaMelody result for the customers-service module. I modified the code in the GitHub link above to integrate it.
Picture taken from customers-service module
Of course, this tool lacks some features that are available in Spring Boot Admin (such as changing the log level at runtime) or Grafana (such as adding custom metrics). However, with the detail it provides, it gives you a good quick start. One notable point is that JavaMelody requires you to expose a monitoring URL, which may be considered risky in some projects. A possible solution is to define a build profile and Spring conditional beans so that JavaMelody is only enabled in a dedicated build mode and Spring profile.
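The profile-based switch can be pure configuration; a sketch assuming the javamelody-spring-boot-starter (which honors a javamelody.enabled property) and Spring Boot 2.4+ profile activation syntax:

```yaml
# application.yml — JavaMelody off by default
javamelody:
  enabled: false
---
# ...and on only when the app runs with the "monitoring" profile
spring:
  config:
    activate:
      on-profile: monitoring
javamelody:
  enabled: true
```

Combined with a build profile that only packages the starter in internal builds, the monitoring URL never exists in production artifacts.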
Use Response time to detect bottleneck issues
As mentioned above, from the developer's point of view, the more detailed the information provided, the faster and easier it is to solve performance issues. With JavaMelody, you get a very detailed result for a single application or a single microservice module (customers-service in this scenario).
Picture taken from customers-service module
JavaMelody is a good tool for monolithic apps. However, if you want to measure the performance of a microservices app, you need another approach. I chose Zipkin in this case. Below is the Zipkin result for cross-service calls from visits-service to customers-service with RabbitMQ awareness (I had to customize the code from GitHub a bit to support this feature).
Picture taken from tracing-server module
As you can see, Zipkin gives you a very clear view of how (and for how long) a request travels between the microservice nodes in your application. Thanks to that, you can see the links between the microservices and get an idea of how to solve performance issues. You can also customize your tracer to cover the Spring bean calls in detail instead of just the outbound calls. Such a flow also hints at how to allocate resources: nodes on a critical path should be given more resources as a priority. However, if you want a view of a specific query/request like JavaMelody provides, this tool doesn't support it. What it returns is a little limited, as shown below:
Picture taken from tracing-server module
And since its mechanism is log-based, it can't trace as deeply into the slow function as JavaMelody can.
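For completeness, wiring a Spring Boot module into Zipkin is typically done through Spring Cloud Sleuth; a configuration sketch (Sleuth 2.x property names — the URL and sampling rate are placeholders):

```yaml
# application.yml — report traces to a Zipkin server
spring:
  zipkin:
    base-url: http://localhost:9411   # the tracing-server address (placeholder)
  sleuth:
    sampler:
      probability: 1.0                # trace every request; lower this in production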
Pay attention to the pollution of the report when you don't categorize your requests
When investigating performance issues via response time, you need to categorize your requests to avoid noise in the result. For example, if you mix file upload/download requests with simple REST API calls, your report will be difficult or even impossible to analyze. Similarly, calculation/report requests are bound to be much slower than the others. Zipkin supports tags, so you can categorize your requests and make the filters more meaningful. My personal advice is to have a clear plan before analyzing the performance report, so the feedback is as accurate as possible. For formal performance test rounds, for example, you should have a test plan with clear scenarios; then you can categorize your requests and get a clear view of the bottleneck. If you only want daily monitoring, keep some request groups in mind, tag them, and check the report separately per group.
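The effect of grouping is easy to see even without a tracing tool; a toy sketch that averages response times per request group (all group names and numbers are invented for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupedLatency {
    private final Map<String, List<Long>> samplesByGroup = new LinkedHashMap<>();

    // Record one response time under a group tag (e.g. "rest", "report", "upload").
    public void record(String group, long millis) {
        samplesByGroup.computeIfAbsent(group, g -> new ArrayList<>()).add(millis);
    }

    // Average response time for one group; 0 if the group has no samples.
    public double average(String group) {
        return samplesByGroup.getOrDefault(group, List.of()).stream()
                .mapToLong(Long::longValue).average().orElse(0);
    }

    public static void main(String[] args) {
        GroupedLatency stats = new GroupedLatency();
        // Mixed together, these averages would hide the fast REST calls
        // behind the slow report requests.
        stats.record("rest", 40);
        stats.record("rest", 60);
        stats.record("report", 2000);
        stats.record("report", 3000);
        System.out.println("rest avg   : " + stats.average("rest") + " ms");   // 50.0 ms
        System.out.println("report avg : " + stats.average("report") + " ms"); // 2500.0 ms
    }
}
```

A combined average of these four samples would be about 1275 ms, which describes neither workload; per-group averages do.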
Add your customized metrics to monitor business values
Apart from the indicators mentioned above, there are many custom metrics. These are specific to each project, and you have to implement them yourself. Below is an example from a tool used for this purpose:
Picture taken from grafana module
With Grafana, you can define your own dashboard to focus on the things you care about. If your customer defines clear non-functional requirements, such a board brings a lot of added value to the project. Of course, you have to add some code to make it work; the Pet Clinic app is a good sample for this.
On top of the tools above, there are still many alternatives.
I suggest you consider how much information you need and which utilities are necessary, and then choose the tool you want. The final purpose of these tools is a real-time visualization of the indicators mentioned above (availability, resource utilization, response time and throughput). Also, when you improve the performance of the app, you need snapshots of the app from before and after the change, both to prove that you did the job and to show metrics on how much faster the app has become. That is the key input for deciding whether it's enough or whether to continue with the task. Without these tools you can still gather this information, but it's inconvenient and may be too technical for the customer. Choose the right tool for your purpose, and your life will be easier.