[Catalyst-commits] r8673 - trunk/examples/CatalystAdvent/root/2008/pen

jshirley at dev.catalyst.perl.org
Mon Dec 1 21:12:09 GMT 2008


Author: jshirley
Date: 2008-12-01 21:12:09 +0000 (Mon, 01 Dec 2008)
New Revision: 8673

Added:
   trunk/examples/CatalystAdvent/root/2008/pen/2.pod
Log:
Day 2 draft, comments welcome before it goes live.

Added: trunk/examples/CatalystAdvent/root/2008/pen/2.pod
===================================================================
--- trunk/examples/CatalystAdvent/root/2008/pen/2.pod	                        (rev 0)
+++ trunk/examples/CatalystAdvent/root/2008/pen/2.pod	2008-12-01 21:12:09 UTC (rev 8673)
@@ -0,0 +1,177 @@
+=head1 Catalyst and nginx
+
+In the spirit of Perl, where there is always more than one way to do it,
+there are also many ways to deploy your Catalyst application.
+
+First, I will summarize the available options and then go into the details of
+my own choice for application deployment.
+
+=head1 Available Deployment Options
+
+=over
+
+=item The Built-in Server
+
+=item Apache and mod_perl
+
+=item FastCGI
+
+=over
+
+=item External FastCGI
+
+=item Apache mod_fastcgi 
+
+=back
+
+=back
+
+=head2 The Built-in Server
+
+The first method is the Perl-based standalone server.  This is actually
+several methods in one, as it has several engines.  It is the server that is
+started when the development server is launched via:
+
+ script/myapp_server.pl
+
+The default development server (as it is commonly referred to) is a simple
+single-threaded server.  A second pure-Perl engine that is more robust for
+production use is Catalyst::Engine::HTTP::Prefork.  This engine is similar to
+what mongrel is for Ruby (minus mongrel's C-based HTTP parsing), and uses a
+prefork model to handle multiple connections.  At the end of this article,
+I'll show how to proxy to the Prefork server.
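+
+If Catalyst::Engine::HTTP::Prefork is installed, selecting it is (as far as I
+know) just a matter of setting the CATALYST_ENGINE environment variable when
+starting the same script, for example:
+
+    # serve on port 3000, matching the proxy_pass example later in this article
+    CATALYST_ENGINE='HTTP::Prefork' script/myapp_server.pl -p 3000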
+
+=head2 Apache and mod_perl
+
+Apache is the stalwart of web servers, and mod_perl is firmly attached to
+Apache.  It has a tremendous number of merits, but it is quite complex to get
+an application going properly.  My general recommendation is not to use this
+deployment method unless you have distinct reasons to.  There are numerous
+valid reasons, but it tends to be a heavyweight deployment mechanism whose
+tradeoffs aren't worth it for simple web applications.  If you know enough to
+use mod_perl, you won't be looking here!
+
+=head2 FastCGI
+
+Finally, we have FastCGI, a simple protocol that acts as a gateway between an
+application and a web server.  FastCGI requests are handled via socket
+communication (either UNIX domain sockets or TCP sockets) between the
+webserver and the application.  Most deployments make use of external
+scripts, with an external process manager that spawns the application's
+FastCGI processes.
+
+This may sound daunting, so let's break the FastCGI setup down into its
+primitive pieces.  But first, a simple definition: FastCGI is just a common
+protocol that applications use to talk to a webserver.
+
+The first piece is the webserver.  This is really an interchangeable part of
+your deployment; for external FastCGI scripts, you can safely ignore it.
+
+The second piece is the FastCGI process manager.  In the case of mod_fcgid and
+mod_fastcgi (both Apache modules), the module itself provides a simple FastCGI
+process manager.  These modules handle spawning, reaping and maintaining the
+child processes.  In an external setup, some third party does this job; by
+default, Catalyst currently uses FCGI::ProcManager, which handles spawning
+the individual children.
+
+The third and final piece is the individual application process.  It connects
+to the FastCGI socket, handles incoming requests, and writes its responses
+back to the socket, from which the webserver relays them to the originating
+client.
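+
+To make this concrete, here is roughly what starting the external pieces
+looks like with the stock FastCGI script that Catalyst generates (the socket
+path, process count and pid file are only examples):
+
+    # Listen on a unix socket, have FCGI::ProcManager spawn 5 children,
+    # write a pid file and detach from the terminal
+    script/myapp_fastcgi.pl -l /tmp/myapp.socket -n 5 -p /tmp/myapp.pid -d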
+
+The merits of using FastCGI are numerous, the most distinct being the ability
+to restart your application without downtime and without touching the
+webserver.  The user running your application doesn't even need to be the
+same user the webserver runs as.
+
+The zero downtime is achieved because multiple FastCGI processes can listen
+on the same socket.  This means that you can start your next version on the
+socket, then shut down the old version.  The upstream webserver doesn't even
+notice, nor does it need to be notified.
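+
+As a rough sketch of that idea (the paths and pid files here are purely
+illustrative):
+
+    # Bring up the new release on the same socket as the old one
+    cd /var/www/MyApp-new && script/myapp_fastcgi.pl \
+        -l /tmp/myapp.socket -n 5 -p /tmp/myapp-new.pid -d
+    # Once the new children are answering requests, retire the old release
+    kill `cat /tmp/myapp-old.pid`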
+
+I hope this article has been convincing enough to get you using FastCGI,
+because switching webservers is a harder sell.  I don't have any good reason
+as to why I originally started playing with nginx (pronounced "engine-x").
+It was simply idle curiosity that grew into appreciation for a piece of
+software with a sane and, comparatively speaking, beautiful configuration
+syntax, one that has also been rock solid in my tests and subsequent
+deployments.
+
+Apache has never let me down, but it has frustrated me.  Nginx won me over
+because it has a very unobtrusive and intuitive configuration format and
+enough features that it is hard to miss Apache for most modern projects.  It
+certainly doesn't have every feature Apache has (most notably, no traditional
+CGI support), but it works very well, and it is very, very fast.
+
+While the front-end FastCGI proxy won't really affect an application's
+processing speed, it does affect how quickly static files and related
+resources are served, as well as the communication with the client itself,
+reducing the time it takes for the client to receive the first byte and
+improving other related metrics.  While these efficiencies seem trivial, they
+matter and add up, especially for high-performance applications that require
+analysis of load order and blocking elements in the generated markup.
+Oftentimes the response from users is simply that things "feel" faster.
+
+=head1 Nginx Configuration
+
+Configuring a location in nginx to be handled by FastCGI is trivial.  It's a
+single line, nestled into a location block, that points nginx at the socket:
+
+    fastcgi_pass  unix:/var/www/MyApp/fastcgi.socket;
+
+You can use either UNIX domain sockets or TCP sockets:
+
+    fastcgi_pass  127.0.0.1:10003;
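+
+Whichever form you choose, it has to match what the application side is
+listening on; with the stock FastCGI script, the TCP form above would be
+started along these lines:
+
+    script/myapp_fastcgi.pl -l 127.0.0.1:10003 -n 5 -d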
+
+So, to put this together into a virtual host setting, your configuration will
+look something like this:
+
+    server {
+        listen       8080;
+        server_name  www.myapp.com;
+        location / {
+            include fastcgi_params; # ships with nginx; sets the standard FastCGI variables
+            fastcgi_pass  unix:/tmp/myapp.socket;
+        }
+    }
+
+This configuration block will send everything to your Catalyst application,
+which you don't want.  You always want static files served directly by your
+webserver.  To accomplish this, assuming you keep your static files in a
+static directory, create another location block:
+
+    location /static {
+        root  /var/www/MyApp/root;
+    }
+
+Now, every request for a file under /static will be served directly by nginx.
+Speedy!
+
+=head1 Using Catalyst::Engine::HTTP::Prefork
+
+This is one of the main reasons why I love nginx over Apache.  To switch from
+FastCGI to the prefork engine, you only have to change one line: instead of
+fastcgi_pass, you simply use proxy_pass in its place.
+
+    proxy_pass  http://localhost:3000/;
+
+Now, all connections for that location go to your application, which is
+running on the built-in server (hopefully with
+Catalyst::Engine::HTTP::Prefork).  If you use this method, you will have to
+enable the "using_frontend_proxy" option in your Catalyst application.
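+
+In lib/MyApp.pm (or your application's config file) that could look something
+like this:
+
+    # Trust the X-Forwarded-For / X-Forwarded-Host headers set by the proxy
+    __PACKAGE__->config(
+        name                 => 'MyApp',
+        using_frontend_proxy => 1,
+    );
+
+You will likely also want nginx to pass the original host and client address
+along in that location block, for example with "proxy_set_header Host $host;"
+and "proxy_set_header X-Forwarded-For $remote_addr;".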
+
+=head1 One more thing...
+
+Deployment is remarkably painless with nginx, but there is a huge caveat when
+dealing with FastCGI: your application must exist at "/" as far as nginx is
+concerned.  If you have a front-end proxy ahead of your application (such as
+the proxy_pass configuration above), you can mitigate the issue there.  The
+core issue is the lack of standardization in how FastCGI applications parse
+and set up their environment.  This limitation will be fixed in Catalyst 5.8
+via an nginx-specific patch.
+
+=head1 AUTHOR
+
+Jay Shirley C<< <jshirley at coldhardcode.com> >>
+
