At work, one of the technologies I work with is web proxies; at my previous job, it was web servers and web applications. In both cases, before going into production (or before rolling out an update) it’s wise to do some stress testing, trying to assess the limits of the technology or the need for more servers. There are commercial solutions available, e.g. Spirent‘s Avalanche, but with some work you can roll out your own stress-testing infrastructure using free software.
First of all, before beginning with the tests, you need a measuring tool. Just throwing some load at Apache and then looking through the logs, free, ps, and server-status by hand is plain crazy: too much raw information to deal with. One tool I’ve found extremely useful, both for analyzing stress-test results and for everyday operations monitoring, is Cacti.
Cacti is a LAMP-based tool which runs a series of (programmable) probes on your systems and makes graphs out of them, over different time periods: the last hours, the last days, months, etc., so that you have a historical view of the evolution of the service. Having this historical information is critical: when doing this kind of test you not only want to know how well the latest release works but how it compares to the previous one. Or, in operations, you want to know the difference between the web traffic spike of a Monday morning and the calm of a Sunday afternoon. Just running a test and taking a measurement at that moment is not enough; you need to put it in perspective with your previous tests and with what you have in your production environment. And you can develop your own probes in whatever programming language you want, so there’s nothing measurable (CPU or memory usage, latency, I/O load, concurrent connections, etc.) that you can’t get onto the graphs. You’ll add more and more probes and graphs as you keep testing and discovering new variables that can affect system performance. Detecting regressions, memory leaks, etc. becomes very easy (well… visual) with Cacti.
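To give an idea of how simple those probes can be, here’s a hypothetical data-input script: it prints name:value pairs, which is the format Cacti’s data input methods parse into data sources. The port number 3128 is just an assumption (Squid’s default); change it to whatever your proxy listens on.

```shell
#!/bin/sh
# Hypothetical Cacti data-input probe: counts TCP connections on the
# proxy's listening port and prints them as name:value pairs that a
# Cacti "Data Input Method" can turn into graphable data sources.
PORT=${1:-3128}   # 3128 is an assumption (Squid's default port)

ESTABLISHED=$(ss -ant 2>/dev/null | awk -v p=":$PORT$" \
    '$1 == "ESTAB" && $4 ~ p { n++ } END { print n + 0 }')
TOTAL=$(ss -ant 2>/dev/null | awk -v p=":$PORT$" \
    '$4 ~ p { n++ } END { print n + 0 }')

echo "established:$ESTABLISHED total:$TOTAL"
```

Hook a script like this in as a data input method, attach it to a graph template, and Cacti will sample and plot both fields over time for you.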
Once some measuring tool is in place, we can start stress testing the systems. If you’re dealing with a web server, you only need a client-simulation program; if with a proxy, both that and a server. The server part is easy: just use Apache or (even better) lighttpd or nginx. You want to stress test your proxy application, not the web server behind it, so you want the web server to be as fast as possible. Even think of putting your cloud (the set of web pages you’re going to test with) on a ramdisk and disabling the server’s access.log. Make sure the web server doesn’t become a bottleneck or you won’t really be stressing the proxy!
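For the nginx case, a minimal backend along those lines could look like the sketch below. The port, paths, and the tmpfs mount are assumptions, not anything your setup requires:

```nginx
# Hypothetical minimal nginx vhost for the origin behind the proxy:
# serve the test page set from a ramdisk and skip access logging,
# so the origin never becomes the bottleneck of the test.
server {
    listen 8080;
    access_log off;           # no per-request disk I/O
    root /mnt/ramdisk/site;   # e.g. mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
    sendfile on;
}
```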
As for the client-simulation software, you have several options: develop your own from the ground up, write a series of scripts around wget or curl, or use one of the existing solutions out there, like curl-loader, Apache’s JMeter, or many others. Just Google them.
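As a taste of the “scripts around curl” option, here’s a minimal sketch: a few concurrent workers, each firing a batch of requests and recording the status codes. The worker and request counts are made-up parameters, and the throwaway python3 http.server is only there so the sketch runs on its own; in a real test you would point URL at your proxy instead.

```shell
#!/bin/sh
# Minimal client-simulation sketch: WORKERS concurrent loops, each
# sending REQUESTS sequential requests with curl and logging the
# HTTP status code of every response.
URL="http://127.0.0.1:8099/"   # stand-in target; use your proxy here
WORKERS=4
REQUESTS=25

# Throwaway local server so the sketch is self-contained.
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
SERVER=$!
sleep 1   # give it a moment to start listening

i=1
PIDS=""
while [ "$i" -le "$WORKERS" ]; do
    ( j=1
      while [ "$j" -le "$REQUESTS" ]; do
          curl -s -o /dev/null -w '%{http_code}\n' "$URL"
          j=$((j + 1))
      done ) > "codes.$i" &
    PIDS="$PIDS $!"
    i=$((i + 1))
done

wait $PIDS               # wait for the workers only, not the server
kill "$SERVER" 2>/dev/null
cat codes.* | sort | uniq -c    # summary: requests per status code
rm -f codes.*
```

It’s a starting point, not a benchmark tool: real solutions add ramp-up, think times, keep-alive control, and per-request latency measurement on top of this same loop.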
No matter which software you use for the test, one consideration: these programs usually generate a very high load, but under ideal conditions. I mean: you’re running them on a local network, likely at Gbps speeds, without packet loss or reordering… That’s not what you’ll find “out there”. Depending on what you want to test (the theoretical top performance, or how a new development would cope with real traffic), you need to adapt your test traffic accordingly.
Things you’ll find useful for tweaking your traffic:
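One example, assuming a Linux load generator: the tc command with the netem queueing discipline can degrade the test network on purpose. The interface name and the delay/loss figures below are illustrative only, and it needs root:

```shell
# Hypothetical netem setup: add latency (80ms +/- 20ms jitter), 1% packet
# loss and some reordering on the interface facing the system under test.
tc qdisc add dev eth0 root netem delay 80ms 20ms loss 1% reorder 5%

# ... run the stress test under these degraded conditions ...

tc qdisc del dev eth0 root   # restore normal behaviour afterwards
```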
With all this in place, you should have a pretty decent testing framework going.