Introduction
Jakarta Tomcat is an open-source servlet container produced by the Apache Software Foundation. Tomcat is the reference implementation for the Java Servlet and JavaServer Pages technologies. For more information, see http://jakarta.apache.org/tomcat.
My company was using the Tomcat servlet engine for a pilot project and experiencing great success. The pilot project involved a web application with 5-10 concurrent users, but we were uncertain whether Tomcat could scale to a larger number of concurrent users.
Many of our other web applications are implemented on one of the most popular commercial application servers on the market (referred to as "Commercial J2EE App Server" for the remainder of this document). I was very interested to see how Tomcat would compare in a production environment. I searched the Internet for Tomcat performance and scalability benchmarks, but found very little, so I decided to produce this benchmark.
The purpose of this benchmark was to determine the viability of using Tomcat as a servlet engine in a production environment. The benchmark results are compared with those of "Commercial J2EE App Server". The goal was to measure the relative response times of the two application servers rather than to obtain the best absolute response times. Only the servlet engine components were compared (i.e., no HTTP server plug-ins, etc.).
To determine production viability, both response time and scalability must be investigated. The response time is what a single user would observe, and it is expected to be sub-second. Scalability is determined by how consistent the response times remain as concurrent users are added: if response times or error counts rise as the number of concurrent users increases, this indicates poor scalability.
Response time can be defined as the length of time a user must wait from the instant they submit a request to the instant they view the response to that request. Response time can be divided further into processing time, transmission time, and rendering time. We were interested in the full response time for this benchmark, so the individual components were not measured separately.
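As a rough illustration of the kind of end-to-end measurement a load-testing client reports, the sketch below (hypothetical code, not part of the actual benchmark harness) times a single HTTP request from submission until the last byte of the response has been read. It captures processing and transmission time, but not browser rendering time. The URL and class name are assumptions for illustration only.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ResponseTimer {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL; substitute the servlet under test.
            URL url = new URL("http://localhost:8080/search?q=tomcat");

            long start = System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            InputStream in = conn.getInputStream();

            // Read the entire response so that transmission time is included.
            byte[] buffer = new byte[4096];
            while (in.read(buffer) != -1) {
                // discard the body; only the timing matters here
            }
            in.close();

            long elapsed = System.currentTimeMillis() - start;
            System.out.println("Full response time: " + elapsed + " ms");
        }
    }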
Configurations
Test Client
Windows 2000
Pentium 3 800 MHz
512 MB RAM
The Grinder v2.8.3 (Sun JRE 1.3.1)
The test client was the machine that generated the requests to the servlet engine. The test client used The Grinder load-testing framework to create and execute test scripts, parameterize values, and simulate a variable number of concurrent users and test cycles. For more information about The Grinder, see http://grinder.sourceforge.net.
The "grinder.properties" file that was used for this benchmark is included in Figure 1.
Figure 1 - grinder.properties file.
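The actual properties file used in the benchmark is the one shown in Figure 1. For readers unfamiliar with The Grinder, the fragment below is only a rough sketch of what a Grinder 2.x properties file typically looks like; the values, test numbers, and target URL are assumptions for illustration and are not the settings used in this benchmark.

    # Number of worker processes, threads per process, and cycles per thread
    # (illustrative values only, not the benchmark settings).
    grinder.processes=1
    grinder.threads=40
    grinder.cycles=100

    # The Grinder 2.x drives HTTP tests through its HTTP plugin.
    grinder.plugin=net.grinder.plugin.http.HttpPlugin

    # Hypothetical test definition; the real benchmark targeted the search servlet.
    grinder.test0.description=Search request
    grinder.test0.parameter.url=http://server:8080/search?q=tomcat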
The application that was used for testing in this benchmark was a search engine based on the Jakarta Lucene search API, composed of a servlet, JSPs, and XML configuration files.
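The application code is not reproduced here; the following is only a minimal sketch of how a search servlet built on the Lucene 1.x API might look. The index path, request parameter, and field name are assumptions for illustration, not details of the actual test application.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;

    public class SearchServlet extends HttpServlet {
        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String queryString = request.getParameter("q");
            PrintWriter out = response.getWriter();
            try {
                // Hypothetical index location and field name.
                IndexSearcher searcher = new IndexSearcher("/data/index");
                Query query = QueryParser.parse(queryString, "contents",
                        new StandardAnalyzer());
                Hits hits = searcher.search(query);

                out.println(hits.length() + " result(s) found");
                searcher.close();
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }
    }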
Server Configuration 1
Windows 2000
Pentium 3 1000 MHz
512 MB RAM
Commercial J2EE App Server (IBM JRE 1.3.0)
Server Configuration 2
Windows 2000
Pentium 3 1000 MHz
512 MB RAM
Jakarta Tomcat v4.1.18 (Sun JRE 1.3.1)
Results
Response Time
The response time was sub-second and fairly consistent for both "Commercial J2EE App Server" and Tomcat until a certain threshold of concurrent users was reached. The response time began to degrade at approximately 40 concurrent users. This number should not be considered a limitation of either "Commercial J2EE App Server" or Tomcat, except on this particular hardware and software configuration. The important facts to note are that both servlet engines provided similarly consistent sub-second response times until that threshold was reached (although "Commercial J2EE App Server" was slightly faster), and that both servlet engines began suffering from overload at approximately the same number of concurrent users.
Figure 3 appears to show that performance remained consistent from 100 to 200 threads, but in fact the response times were similar only because a large number of errors occurred; the total number of requests actually serviced with 200 threads was lower than with 100 threads.
Figure 2 - Response time table.
Figure 3 - Response time chart.
Errors
All of the errors reported were "connection refused" errors, which indicates that the servers could not accept incoming connections quickly enough. Both servers could only handle about 1300 requests per minute (roughly 22 requests per second), but "Commercial J2EE App Server" started to fail with a larger number of errors than Tomcat did.
Figure 4 - Errors table.
Figure 5 - Errors chart.
Conclusion
The response times for "Commercial J2EE App Server" were slightly faster, but both servlet engines performed in a consistent and similar fashion. The scalability of the two servlet engines was also similar, with Tomcat being slightly more scalable, since it could handle more concurrent users without generating as many errors as "Commercial J2EE App Server".
Overall, I did not see a significant difference between these two servlet engines, and I conclude that Tomcat is fast enough and scalable enough for production use.