Web Framework Benchmarks Round 5 (techempower.com)
89 points by pfalls on May 17, 2013 | hide | past | favorite | 57 comments



It would be nice to add Beego [1], as I'm currently learning Go from an ebook written by the author of the framework [2][3], and it would be nice to see how it performs. Thanks for your hard work!

[1] https://github.com/astaxie/beego

[2] https://github.com/Unknwon/build-web-application-with-golang... English translation

[3] https://github.com/astaxie/build-web-application-with-golang Original version in Chinese


I had not heard of Beego, but we'd love to accept a pull request with a Beego implementation of the tests. Perhaps you could put one together as an exercise while learning the language and framework? :) Sorry, I can't resist playing the "pull request" card.


I might give it a Go :)


You have a pull request :D


INB4 questions about the Go results: nope, the issues from Round 4 haven't been addressed yet.

Like in Round 4 (since the related code hasn't changed), the many concurrently spawned goroutines probably get in each other's way, causing high latency and low throughput.

There was a revision of the test without goroutines, which I think performed better. But I was told the goroutines version is more realistic... (I don't share this opinion.)

To be fair, this version also had a manual connection pool to address a previous bug.

Also, Go's database connectivity is not very mature yet. There is still a lot of work to do. I'm pretty sure it can and will be done.
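For illustration, here is a minimal sketch of the bounded-concurrency alternative under discussion: a counting semaphore caps the number of in-flight goroutines instead of letting every query spawn freely. `worldQuery` is a hypothetical stand-in for the real MySQL driver call; names and the limit value are illustrative, not the benchmark's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// worldQuery simulates a database lookup; the real test would
// call the MySQL driver here.
func worldQuery(id int) int { return id * 2 }

// fetchBounded runs n queries concurrently but caps the number of
// in-flight goroutines at limit, avoiding the contention that an
// unbounded spawn-per-query approach can cause.
func fetchBounded(n, limit int) []int {
	sem := make(chan struct{}, limit) // counting semaphore
	results := make([]int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			results[i] = worldQuery(i)
		}(i)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(fetchBounded(5, 2)) // [0 2 4 6 8]
}
```

Whether this outperforms the unbounded version depends on the driver and the hardware, which is exactly what the Round 6 investigation would need to measure.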


Hi Julien,

Thanks for the note. I'd like to get to the bottom of this and make the Go test representative of best practices. A previous decision may have been made to favor an implementation that was measured to be faster at the expense of best practices [1], but that is not irreversible.

I am not a Go expert and I believe you are, and certainly @bradfitz is as well. If you two tell us definitively to change the Go implementation to better comply with best practices, I'll see that it's done for Round 6. I apologize if you feel we stepped on your opinions in any fashion. I really value your input in the project to date and hope it will continue.

[1] https://github.com/TechEmpower/FrameworkBenchmarks/pull/209


  I apologize if you feel we stepped on your opinions in any fashion.
I don't ;)

I'm just not so happy about making the queries this way: http://i.imgur.com/u9Nx5.png

Of course I will try to help solve this issue, but it is hard to contribute without testing, since I have no idle server or suitable spare computer around where I could run these tests properly. But my PC seems to have a somewhat similar configuration to your "dedicated i7 hardware". I hope I manage to set up the test environment in a dual-boot soon.


Thanks for clarifying! It's reassuring.

Please let us know if there is anything we can do to help ease the process. It is not strictly necessary to set up the whole benchmark platform. You can create the very simple database with the scripts. Then just run Go alongside a load generator such as Wrk in order to experiment with various approaches and do spot checks along the way.


Thank you so much for finally including the ASP.NET/IIS/Windows stack. I understand that it was a user contribution. This finally gives me something to compare the other stacks against. I realize that Windows/IIS and Linux/Apache|nginx is apples and oranges but it's still nice!


Note that I submitted a pull request to have JSON.net and ServiceStack.Text as serializers as well.

http://www.servicestack.net/benchmarks/


I still miss async Sinatra there. I've done some work on that but don't have enough time to finish it. - https://github.com/mikz/FrameworkBenchmarks/commit/2140775e1...

The single-thread issue is the main problem with all the Ruby benchmarks there.


We'd like to include Async Sinatra, but I don't want to rush you. We look forward to the pull request when you find the time to wrap it up.


There are some strange things I noticed about this benchmark's organization:

- JRuby is a Ruby implementation. Why is it listed under Platform? It should be listed under Language.

- Why are Unicorn and Gunicorn listed under front-end web servers? Unicorn and Gunicorn are explicitly not front-end web servers, but are meant to be put behind a reverse proxy, by design. The Unicorn author tells users very clearly not to put it directly on the Internet because bad things will happen: http://unicorn.bogomips.org/PHILOSOPHY.html section "Application Concurrency != Network Concurrency". It would be more suitable to put both of them in the Platform category.


Thanks for the feedback, FooBarWidget.

We have received some similar feedback previously and have discussed some possible changes to the meta-data structure [1] [2]. As you can imagine, it is actually a complex problem assigning consistent terminology to all of the various parts that can compose a web application's deployment.

Consider Go (language, platform, framework, and server all in one, at least from our perspective) versus Rails (framework only). Some frameworks embed a web server, others don't, and so on. We have had to make several judgment calls in classifying this very broad spectrum of frameworks, and freely admit that there is room for improvement in that classification.

Incidentally, the Ruby deployment is Unicorn behind nginx. We opted to identify the Ruby deployment as "Unicorn" because it is the more significant of the two, and to clearly indicate the divergence from a previous round in which we were using Passenger, much to the dismay of the community.

[1] https://github.com/TechEmpower/FrameworkBenchmarks/issues/26...

[2] https://github.com/TechEmpower/FrameworkBenchmarks/issues/26...


No mention of Yii? Too bad, it's the best PHP framework I've ever used. Fast but complete at the same time, and fully OO.


Just a heads up - I think this post has been flagged off the front page? Strange.


Yes, the score became red according to hnslapdown [1]. It has happened to the previous rounds, and we suspect those who can downvote stories don't feel these updates have merit. So be it. We do enjoy the feedback from readers that we receive from the brief time they appear on the home page, so as long as they will permit us to share the updates here, we will continue doing so.

If you want to participate in a longer-form discussion about the project, we invite you to join the Google Group [2].

[1] http://thomaspark.me/2012/10/the-hacker-news-slap/

[2] https://groups.google.com/forum/?fromgroups=#!forum/framewor...


You should really do a test on ASP.NET with JSON.net and ServiceStack Text.

The default JSON serializer is hopelessly slow.


Thanks for the tip!

I need to be clear that the ASP.NET implementation that you see in Round 5 was contributed by user @pdonald on GitHub.

That said, we'd be happy to receive more pull requests with ASP.NET changes and improvements. We suspect there is a lot of room for improvement in the Windows numbers.


I've just submitted a pull request.


You may be the fastest pull-requester I've seen in the five rounds of this project. Good show.


Hehe :)

It actually took more time to clone and open the code in VS than to make the changes.


Please add Phusion Passenger (https://www.phusionpassenger.com/) to the benchmark for Ruby apps. Right now Unicorn is the only server in that benchmark for Ruby but it's far from the only available server.


Round 1 used Passenger, but the feedback we got was that Unicorn performed better so we switched to that. Currently, we're aiming to show each framework in the best possible production configuration. In the future we plan to show multiple web/app servers per framework so that you could compare Ruby on Passenger vs. Ruby on Unicorn.


There is a simple explanation for that. Phusion Passenger always proxies data from the web server to another process, for stability and security reasons. If you benchmark Unicorn directly, without putting it behind a reverse proxy, Unicorn will look faster simply because you're avoiding another kernel socket operation.

However as I explained in https://news.ycombinator.com/item?id=5727232, Unicorn is always supposed to be put behind a reverse proxy. If you do that you should find different results.

Also, there's a lot of tuning in Phusion Passenger that can help performance. The default is optimized for usability and stability. For example if you don't prespawn processes, and let Phusion Passenger spawn them on the first request, you'll be adding tens of seconds to the benchmark time, which would greatly disadvantage Phusion Passenger in an unfair manner. You should set at least:

passenger_min_instances

passenger_max_pool_size

passenger_pre_start
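A sketch of how those three directives might sit in an nginx config. All values and the server name are illustrative, not tuned recommendations:

```nginx
http {
    # Upper bound on the total number of application processes.
    passenger_max_pool_size 8;

    server {
        listen 80;
        server_name benchmark.local;   # illustrative
        root /srv/app/public;
        passenger_enabled on;
        # Keep this many processes warm instead of spawning lazily.
        passenger_min_instances 8;
    }

    # Spawn the app at nginx startup rather than on the first request.
    passenger_pre_start http://benchmark.local/;
}
```

Without `passenger_pre_start`, the first requests of a benchmark run pay the spawn cost, which is exactly the unfair penalty described above.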


Passenger has some config options to spawn more processes and help with the load. But the results will not be much different from unicorn/thin/whatever. Maybe by a few percent.


OpenResty is twice as fast as raw Go on the multiple-queries DB test (dedicated hardware), huh.


A pure Go MySQL driver is being used for the Go test. OpenResty probably uses a thin wrapper around the C library.


The OpenResty MySQL driver is pure Lua. It has a very efficient socket/pooling mechanism, using sockets from nginx.


It would be even more (much more?) effective to use a preconfigured https://github.com/chaoslawful/drizzle-nginx-module location and 'ngx.location.capture', I suppose.

And http://leafo.net/lapis/ is missing again.


I benchmarked the pure-Lua PostgreSQL driver to be ~3 times as fast as the nginx-postgresql-c driver. When you use the nginx drivers from a Lua context you have to make an internal nginx request to that location, so there's some overhead.

If you want to improve even further on the lua drivers, LuaJIT FFI is probably the right answer.


>~3 times as fast as the nginx-postgresql-c driver

That's interesting..

>LuaJIT FFI is probably the right answer

Do you mean calling nginx internal functions (DB driver or location capture, like ngx_eval module does in that case) via FFI (I doubt if that is safe in any way) or just use libpq from LuaJIT directly?


I haven't benchmarked the mysql drivers, so results might be different there.

There's some work being done by the OpenResty author w.r.t. FFI for OpenResty itself; it might yield interesting results. And yes, I think both of the options you listed are viable. But the Lua drivers already perform very well.


I asked leafo about adding Lapis and he sounded interested. Since then I haven't harassed him. I figure he will get to it when he can.


Why is the overhead of all the PHP frameworks so high? Is it because they have to evaluate all the framework code per request? The difference between raw PHP and symfony2 is massive!


I think it's because raw PHP doesn't do anything. You're literally benchmarking how fast you can do nothing. As soon as you add any kind of logic the number goes way down. It's like saying the performance of adding 4 numbers is a massive difference from the performance of adding 2 numbers.


Yeah, pretty much.


If I am not mistaken, the Java servlet version uses prepared statements:

https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

which are kinda faster than a full ORM and make for smaller network round trips.


Yes, it does. Additionally, several of the ORMs leverage prepared statements.


Why is PHP one of the fastest on the database query test but one of the slowest on all the other tests?


Because you did not pay attention to the results, at all...


Ah, so you know!

Please enlighten all of us shortsighted ones, then.


I normally use Scalatra for Scala stuff, but interesting to see Spray's inclusion and results.


What I miss from that benchmark:

- CPU/memory consumption

- higher concurrency level (4096 for i7 at least)

- dependency graphs - latency/concurrency etc.


Agreed, this would be interesting. It's under discussion: https://github.com/TechEmpower/FrameworkBenchmarks/issues/10...


Nice! I hope we will see it in next rounds.

There is a big difference between "it serves 1200 rps" and "it serves 1200 rps and is barely visible in top", actually.


Yes, but to set expectations before we even implement the server statistics capture we have planned: we want the CPU to be fully utilized for most of these tests. If you barely see the server showing up in top, something is probably going wrong (or you've run into a different limit such as disk or network).

We're testing the maximum number of requests a server can handle per second, so the optimum is for the CPU to be fully utilized. A different test would measure how much CPU is used if I want to serve X requests per second.


The blog post links to Round 4 results. I can't find Round 5 results anywhere.



Apparently I am a newbie at setting up Apache's HTTP response headers. We intended for the response headers to ask your browser to cache the HTML for two hours and not more.

If you don't see Round 5 at first, just hit reload. It should appear.


So Java is fastest... then why does it seem that, in the real world, Java web apps are slowest?


Assuming you are talking about web applications from banks and non-tech Fortune 500 in general, it's because many of them are:

  * incredibly bloated (and still lack most of the features
    that an actual human user would want)
  * poorly coded by armies of outsourced programmers
  * using over-engineered code built on top of obsolete frameworks
  * running on a "homologated" stack, which is often 3 to 7 years out-of-date
(I know because I was partially responsible for some of them, in my dark past.)


Fast Java code that is smartly written is really fast. There are lots of real world Java apps out there that you might not realize are Java --- I've read somewhere that Google Adwords is in Java, for instance.

The problem is, there are lots and lots (and lots) of bad Java developers out there --- it's the "safest language" there is (to learn if you want to make a buck or to hire devs if you want to play it safe as a manager at BigCorp), so you often see badly done outsourced work written in Java. Also, lots of Java code out there lives in non-Agile environments (so it runs on very outdated stacks.)

That being said, if you used a purely modern Java stack (probably some Guice, Wicket, Resin, JDK 7, etc) with super smart people you will be highly performant.

(Note: I haven't done pure real Java dev since the Struts days and Java 1.5, so the tech stack above is just me guessing at what the latest/greatest is.)


Can you name some examples?


Thanks for the benchmarks.

While it will not affect the way I work, it is interesting to see the differences between languages and platforms in the context of a web application. Clearly Go and the JVM are doing very well in terms of performance. In some cases this can affect hosting costs, especially when deploying on pay-as-you-go services.

Big frameworks and ORMs, especially in dynamic languages, should really get serious about optimization. There is no excuse for some frameworks to be so slow on trivial things like DB requests, etc. There will always be a last one in the list, but that last one doesn't have to manage only 1% of the performance of the first one. It's pretty shameful.


Please add Revel (the Go framework) to the dedicated hardware tests.


Hi stefantalpalaru. Good eye! We will get that added and patch it into Round 5 soon.



