It would be nice to add Beego [1], as I'm currently learning Go from an ebook written by the author of the framework [2][3], and it would be nice to see how it performs. Thanks for your hard work!
I had not heard of Beego, but we'd love to accept a pull request with a Beego implementation of the tests. Perhaps you could put one together as an exercise while learning the language and framework? :) Sorry, I can't resist playing the "pull request" card.
INB4 questions about the Go results: nope, the issues from Round 4 have not been addressed yet.
As in Round 4 (the related code hasn't changed), the many concurrently spawned goroutines probably get in each other's way, causing high latency and low throughput.
There was a revision of the test without goroutines, which I think performed better. But I was told the goroutines version is more realistic (an opinion I don't share).
To be fair, that version also had a manual connection pool to address a previous bug.
Also, Go's database connectivity is not very mature yet. There is still a lot of work to do; I'm pretty sure it can and will be done.
Thanks for the note. I'd like to get to the bottom of this and make the Go test representative of best practices. A previous decision may have been made to favor an implementation that was measured to be faster at the expense of best practices [1], but that is not irreversible.
I am not a Go expert and I believe you are, and certainly @bradfitz is as well. If you two tell us definitively to change the Go implementation to better comply with best practices, I'll see that it's done for Round 6. I apologize if you feel we stepped on your opinions in any fashion. I really value your input in the project to date and hope it will continue.
Of course I will try to help solve this issue, but it is hard to contribute without testing, since I have no idle server or suitable spare computer around where I could run these tests properly. But my PC seems to have a configuration somewhat similar to your "dedicated i7 hardware." I hope to manage to set up the test environment in a dual-boot soon.
Please let us know if there is anything we can do to help ease the process. It is not strictly necessary to set up the whole benchmark platform. You can create the very simple database with the scripts. Then just run Go alongside a load generator such as Wrk in order to experiment with various approaches and do spot checks along the way.
Thank you so much for finally including the ASP.NET/IIS/Windows stack. I understand that it was a user contribution. This finally gives me something to compare the other stacks against. I realize that comparing Windows/IIS with Linux/Apache or nginx is apples and oranges, but it's still nice!
There are some strange things I noticed about this benchmark's organization:
- JRuby is a Ruby implementation. Why is it listed under Platform? It should be listed under Language.
- Why are Unicorn and Gunicorn listed under front-end web servers? Unicorn and Gunicorn are explicitly not front-end web servers, but are meant to be put behind a reverse proxy, by design. The Unicorn author tells users very clearly not to put it directly on the Internet because bad things will happen: http://unicorn.bogomips.org/PHILOSOPHY.html section "Application Concurrency != Network Concurrency". It would be more suitable to put both of them in the Platform category.
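For reference, the deployment shape the Unicorn documentation describes corresponds roughly to an nginx front end proxying to Unicorn over a Unix socket. A sketch (socket path and upstream name are illustrative):

```nginx
# nginx terminates slow clients and proxies to Unicorn over a Unix socket
upstream unicorn_app {
    server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://unicorn_app;
    }
}
```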
We have received some similar feedback previously and have discussed some possible changes to the meta-data structure [1] [2]. As you can imagine, it is actually a complex problem assigning consistent terminology to all of the various parts that can compose a web application's deployment.
Consider Go (language, platform, framework, and server all in one, at least from our perspective) versus Rails (framework only). Some frameworks embed a web server, others don't, and so on. We have had to make several judgment calls in classifying this very broad spectrum of frameworks, and freely admit that there is room for improvement in that classification.
Incidentally, the Ruby deployment is Unicorn behind nginx. We opted to identify the Ruby deployment as "Unicorn" because it is the more significant of the two, and to clearly indicate the divergence from a previous round in which we were using Passenger, much to the dismay of the community.
Yes, the score turned red according to hnslapdown [1]. The same happened with previous rounds, and we suspect those who can downvote stories don't feel these updates have merit. So be it. We do enjoy the feedback from readers during the brief time the posts appear on the home page, so as long as they permit us to share the updates here, we will continue doing so.
If you want to participate in a longer-form discussion about the project, we invite you to join the Google Group [2].
I need to be clear that the ASP.NET implementation you see in Round 5 was contributed by user @pdonald on GitHub.
That said, we'd be happy to receive more pull requests with ASP.NET changes and improvements. We suspect there is a lot of room for improvement in the Windows numbers.
Please add Phusion Passenger (https://www.phusionpassenger.com/) to the benchmark for Ruby apps. Right now Unicorn is the only server in that benchmark for Ruby but it's far from the only available server.
Round 1 used Passenger, but the feedback we got was that Unicorn performed better so we switched to that. Currently, we're aiming to show each framework in the best possible production configuration. In the future we plan to show multiple web/app servers per framework so that you could compare Ruby on Passenger vs. Ruby on Unicorn.
There is a simple explanation for that. Phusion Passenger always proxies data from the web server to another process, for stability and security reasons. If you benchmark Unicorn directly, without putting it behind a reverse proxy, Unicorn will look faster simply because you're avoiding another kernel socket operation.
However as I explained in https://news.ycombinator.com/item?id=5727232, Unicorn is always supposed to be put behind a reverse proxy. If you do that you should find different results.
Also, there's a lot of tuning in Phusion Passenger that can help performance. The defaults are optimized for usability and stability. For example, if you don't prespawn processes and instead let Phusion Passenger spawn them on the first request, you'll be adding tens of seconds to the benchmark time, which would greatly and unfairly disadvantage Phusion Passenger. At a minimum, you should configure process prespawning.
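A sketch of what that prespawning configuration might look like with Passenger's Apache integration (the instance count and URL are illustrative values, not recommendations from the benchmark):

```apache
# Keep application processes alive and warm them at startup,
# so the first benchmarked request doesn't pay the spawn cost.
PassengerMinInstances 8
PassengerPreStart http://localhost/
```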
Passenger has some config options to spawn more processes and handle the load better. But the results will not be much different from Unicorn/Thin/whatever; maybe a few percent.
I benchmarked the pure-Lua PostgreSQL driver to be ~3 times as fast as the nginx PostgreSQL C driver. When you use the nginx drivers from a Lua context, you have to issue an internal nginx request to that location, so there's some overhead.
If you want to improve even further on the lua drivers, LuaJIT FFI is probably the right answer.
>~3 times as fast as the nginx-postgresql-c driver
That's interesting.
>LuaJIT FFI is probably the right answer
Do you mean calling nginx internal functions (the DB driver or a location capture, as the ngx_eval module does in that case) via FFI (I doubt that is safe in any way), or just using libpq from LuaJIT directly?
I haven't benchmarked the mysql drivers, so results might be different there.
There's some work being done by the OpenResty author on FFI for OpenResty itself; it might yield interesting results. And yes, I think both of the options you listed are viable. But the Lua drivers already perform very well.
Why is the overhead of all the PHP frameworks so high? Is it because they have to evaluate all the framework code per request? The difference between raw PHP and Symfony2 is massive!
I think it's because raw PHP doesn't do anything. You're literally benchmarking how fast you can do nothing. As soon as you add any kind of logic the number goes way down. It's like saying the performance of adding 4 numbers is a massive difference from the performance of adding 2 numbers.
Yes, but to set expectations before we even implement the server statistics capture we have planned: we want the CPU to be fully utilized for most of these tests. If you barely see the server showing up in top, something is probably going wrong (or you've run into a different limit such as disk or network).
We're testing the maximum number of requests a server can handle per second, so the optimum is for the CPU to be fully utilized. A different test would measure how much CPU is used if I want to serve X requests per second.
Apparently I am a newbie at setting up Apache's HTTP response headers. We intended for the response headers to ask your browser to cache the HTML for two hours and not more.
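For what it's worth, that intent maps to a mod_expires stanza along these lines (a sketch, assuming mod_expires is the mechanism in play):

```apache
# Ask browsers to cache HTML for two hours, and no longer
ExpiresActive On
ExpiresByType text/html "access plus 2 hours"
```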
If you don't see Round 5 at first, just hit reload. It should appear.
Assuming you are talking about web applications from banks and non-tech Fortune 500 in general, it's because many of them are:
* incredibly bloated (and still lack most of the features that an actual human user would want)
* poorly coded by armies of outsourced programmers
* using over-engineered code built on top of obsolete frameworks
* running on a "homologated" stack, which is often 3 to 7 years out-of-date
(I know because I was partially responsible for some of them, in my dark past.)
Fast, smartly written Java code is really fast. There are lots of real-world Java apps out there that you might not realize are Java; I've read somewhere that Google AdWords is in Java, for instance.
The problem is, there are lots and lots (and lots) of bad Java developers out there. It's the "safest" language there is (to learn if you want to make a buck, or to hire devs for if you want to play it safe as a manager at BigCorp), so you often see badly done outsourced work written in Java. Also, lots of Java code out there lives in non-Agile environments, so it runs on very outdated stacks.
That being said, if you use a purely modern Java stack (probably some Guice, Wicket, Resin, JDK 7, etc.) with super-smart people, it will be highly performant.
(Note: I haven't done pure real Java dev since the Struts days and Java 1.5, so the tech stack above is just me guessing at what the latest/greatest is.)
While it will not affect the way I work, it is interesting to see the differences between languages and platforms in the context of a web application. Clearly Go and the JVM are doing very well in terms of performance. In some cases that can affect hosting costs, especially when deploying on pay-as-you-go services.
Big frameworks and ORMs, especially in dynamic languages, should really get serious about optimization; there is no excuse for some frameworks to be so slow on trivial things like DB requests. There will always be a last one on the list, but that last one doesn't have to deliver only 1% of the performance of the first. It's pretty shameful.
[1] https://github.com/astaxie/beego
[2] https://github.com/Unknwon/build-web-application-with-golang... English translation
[3] https://github.com/astaxie/build-web-application-with-golang Original version in Chinese