Of course it's just my opinion. I'm here to make it yours.
The world is sort of abuzz with talk of Apache 2.4, the first major release of the venerable open source HTTP server in years. Okay, not really so much buzz. I really thought there would be more excitement. But still, the Apache Software Foundation has been issuing press releases everywhere, and some of its claims stretch my credulity just a bit.
First and foremost, they are making plenty of noise about how 2.4 puts Apache on par with Nginx in terms of speed. Now, I’ve never thought Apache was particularly slow. What I have thought about Apache is that it consumes a metric ton of RAM as it spawns threads like spiders on LSD, and running out of RAM makes Apache slow. Now, this isn’t really Apache’s fault: you should have bought more RAM. Anyway, it’s always possible that 2.4 is teh awesome, so I went in search of the benchmarks the Apache Foundation and its members would undoubtedly be flooding the internet with (they did do benchmarks during testing, right?). I could only find one, and it’s not terribly impressive. In fact, it rates amongst the poorest examples of benchmarking HTTP servers that I can imagine. Here are the issues I see right off the bat:
1) No significant “tuning” efforts (out-of-the-box configs)
While this sounds reasonable (hey, who configures server software?), with Nginx the default configuration ships with “worker_processes 1;”. What this means is that no matter how many CPU cores your system has, Nginx will use exactly one of them. Now, I’ll grant you, changing this number to something like “2” sounds an awful lot like tuning, but for Nginx this is part of the standard configuration, not quasi-black-art tuning options like Apache’s MaxClients, MinSpareThreads, MaxSpareThreads, ThreadsPerChild and friends.
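To make the contrast concrete, here is roughly what each side’s knobs look like. The Apache values below are the stock worker-MPM defaults as I recall them, and the Nginx value is illustrative; none of these numbers come from the benchmark slides:

```
# nginx.conf -- the whole "tuning" story on the Nginx side.
# The shipped default is 1, which pins Nginx to a single core;
# bumping it to the core count is a one-line change.
worker_processes  4;

# httpd.conf (worker MPM) -- the Apache side of the same story.
# Getting these numbers right for a given box is the quasi-black-art part.
<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxClients          150
</IfModule>
```

One line versus five interacting directives: that is the difference between “standard configuration” and “tuning.”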
Anyway, the slides don’t specify how many cores were available, or whether this value was changed for Nginx, but assuming a modern CPU and taking the slides at face value, I’d have to conclude that Apache was using at least four cores while Nginx used only one. Not off to a good start, since both servers were essentially performing on par with each other: it took Apache four cores to match what Nginx did with one.
2) Where are the utilization graphs?
The entire benchmark consists of how fast requests are served. A decent job is done here: we are shown various metrics on how Apache and Nginx perform under various loads. The problem is that we aren’t shown how much CPU, memory, etc., each server used. This matters because it would reveal whether either server was pushing the system to its limits. Given past experience and other people’s testing, I’d venture that Apache was using most of the 1GB of RAM and 90% of the four cores while Nginx used only 20MB of RAM and 20% of one core. Of course, that’s only speculation, but hey, benchmarks are supposed to clear up speculation, not create more.
3) Caveat emptor
In the section “Main Caveats”, it is noted without any sense of irony that “Apache is never resource starved”. This is the bit that irks me the most. Like most people, I didn’t start using Nginx because of some inability to discover that 75% of the web runs on Apache. I started using Nginx because Apache was difficult to tune in such a way that it wouldn’t fall over when resources became thin. So pushing these servers to the brink of destruction is the main thing I would have liked to see tested. Really. I don’t give a damn how Apache or Nginx perform when they have unlimited resources and traffic is exactly the amount I predicted when I installed the DIMMs; I care how they perform when those resources get scarce.
When memory and CPU become scarce, any server will slow down. Nginx will get slower, Apache will get slower. No mysteries here. The problem with Apache is that it consumes a large amount of RAM to handle each request. If memory is tight, it will eventually push the VM into swap, which slows the system further. If that weren’t bad enough, a slower system means each request “lives” longer, since it takes longer to process, which means using more RAM for longer. Unless traffic suddenly lets up, this is usually the beginning of the end for an Apache server.
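The arithmetic behind that death spiral is simple enough to sketch. Every number below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope capacity math for a process/thread-per-request server.
# All figures are assumptions for illustration, not benchmark data.
TOTAL_RAM_MB = 1024    # the 1GB box from the benchmark slides
OS_OVERHEAD_MB = 200   # assumed: kernel, daemons, a floor of page cache
PER_WORKER_MB = 10     # assumed: RSS per Apache worker with a few modules loaded

def max_workers(total_mb=TOTAL_RAM_MB, os_mb=OS_OVERHEAD_MB,
                per_worker_mb=PER_WORKER_MB):
    """How many workers fit in RAM before the box starts swapping."""
    return (total_mb - os_mb) // per_worker_mb

print(max_workers())  # 82 concurrent requests on these assumptions
```

Past that number, every additional in-flight request comes out of swap, each request then lives longer and holds its RAM longer, and the spiral begins. That cliff is exactly what MaxClients exists to guard, and exactly what a benchmark should go looking for.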
Now, 2.4 might address this problem, but the apparent lack of willingness on the part of the Apache folks to even discuss it (except in vague, hand-wavy marketing talk) leaves me doubtful.
Anyway, I’m hoping to create a test setup around here real soon now to do some real benchmarks. Stay tuned.