[Catalyst] Catalyst speed test results

Szabo Peter szabo.peter at szszi.hu
Mon May 22 19:32:30 CEST 2006

Dear Catalyst Developers and Users,

We have recently put together a speed test comparing a Catalyst web application
and the same functionality written in pure Perl. Our results are quite
surprising: the Catalyst implementation is about 50 times slower than the pure
Perl one (both using FastCGI). We expected a much smaller slowdown factor. Are we
doing something wrong? What are the proper ways to speed up Catalyst applications?

Please read on for more details...

The intended audience of this test is people developing Catalyst and
software developers with a lot of Catalyst experience.

The purpose of this speed test is to initiate a discussion about Catalyst web
application performance. The test measures how fast a Catalyst web
application is compared to a pure Perl implementation of the same task.
Our test shows that Catalyst is about 50 times slower than pure Perl (both
using FastCGI, Apache 1.3 and PostgreSQL 8.1). We should start discussing
how to reduce this factor of 50 by using Catalyst more smartly (or even by
speed-optimizing parts of Catalyst), and we should also discuss how Catalyst
web applications scale when new server hosts are added.

The web page being tested just dumps a PostgreSQL table as an HTML table, and
adds some headers to make it pretty. The table consists of 128 rows of
real-world business contact data (name, city, address, e-mail address etc.)
with a few accented letters, and the resulting HTML file in UTF-8 encoding is
131034 bytes long.  

The test invokes this dump by downloading the dynamic web page with wget(1) a
few thousand times, one download at a time. The test result is the average
real time spent on each HTTP request.

We have created two implementations of the dump: one using Catalyst (as we
understand it from the tutorials, documentation and videos found on the
web), and a pure Perl reference implementation, which minimizes overhead
by using as few Perl modules as possible (IO::Socket, DBD::Pg and FCGI),
is hand-optimized, and inlines as much code as possible.


-- Is our test scenario typical? We tried to be as relevant as possible:
   we are using a database query, templates, and data access objects in
   Catalyst, which is typical. The test data comes from a real-world
   business contact database.

-- Does our test code use Catalyst and the other Perl modules the right way?
   Should we replace certain modules with better, faster alternatives?
   Should we use the modules a different way to make it faster?

-- Are we using the Catalyst architecture the intended way? We developed the
   test application following the spirit of the Catalyst tutorials,
   documentation and videos we've found on the net.

-- Is it OK that Catalyst is 50 times slower than pure Perl?

-- What should be the maximum acceptable slowdown factor?
   Somewhere between 5 and 10?

-- If it is still not fast enough, how do we add additional hardware so our
   website can serve more than 100 requests per second? How to scale up
   Catalyst? Should we add more application servers and load-balance them
   with mod_fastcgi or lighttpd?

-- What is the slowdown factor of other web application frameworks
   (such as Rails, EJB and ASP.NET)?
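On the scaling question above, one option we are considering is lighttpd's
mod_fastcgi, which distributes requests among all backends listed for a URL
prefix. A hypothetical configuration fragment (the hostnames, port and
prefix are made up for illustration):

```
fastcgi.server = ( "/app" =>
  ( ( "host" => "10.0.0.11", "port" => 8000 ),
    ( "host" => "10.0.0.12", "port" => 8000 ) )
)
```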


Get more information about the test suite from:


Get the test suite itself from:


Best regards,

Péter Szabó
free software consultant
Free Software Institute, Hungary
