[Catalyst] Running Catalyst apps with start_server

Tomas Doran bobtfish at bobtfish.net
Tue Jan 24 12:24:27 GMT 2012


On 23 Jan 2012, at 21:34, Octavian Rasnita wrote:
>
> So something's obviously wrong if so much memory is occupied even  
> after 1 hour of inactivity.

To start with, you're testing entirely wrong.

Measuring the free RAM on the machine is bullshit - the kernel caches
data for you, so the 'free' RAM figure on its own means nothing.
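
If you want to convince yourself of that, compare the raw 'free'
figure with the one that excludes buffers/cache - a sketch only, and
the exact output layout varies between distros and procps versions:

   $ free -m
   # the "-/+ buffers/cache" line is the one that ignores the kernel's
   # page cache; the top-line "free" column on its own tells you nothing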
>
> And before closing Starman I have also tried to use kill -HUP `cat  
> myapp.pid` to reload the workers but the memory didn't decrease.

Why were you thinking it would?

The only figure really of note is the VSZ of each process. (And even
that doesn't account for memory sharing between the master and the
workers.)
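
If you want actual per-process numbers, something along these lines
will do (Linux assumed; depending on your setup the workers may show
up as 'starman' or as plain 'perl', so adjust to taste):

   $ ps -o pid,vsz,rss,args -C starman
   # or, if they're listed under perl:
   $ ps aux | grep '[s]tarman'
   # VSZ/RSS are in kB, and RSS double-counts pages that are shared
   # between the master and its workers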


> I have also tried to run starman with 5 workers, and also tried  
> without putting it to run in the background, but it still leaks.

You haven't given any information from which we could conclude it leaks.

>
> Does this happen only to me, or can others run starman without  
> leaks?

It isn't leaking.

Your model of how memory works is broken.

What will (appear to) happen is that starman pre-loads all your bits  
(let's say that's 20Mb for the sake of argument). It then forks, giving  
you 5 workers... So you now have 6 x 20Mb (VSZ) - there is memory  
sharing going on here, so you're not actually using that much memory,  
but let's ignore that...
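
For reference, that pre-load-then-fork setup is roughly what you get
from an invocation like the one below - worker count, port and app
path are just example values:

   $ starman --preload-app --workers 5 --port 5000 myapp.psgi

--preload-app loads the application in the master before forking,
which is what makes the copy-on-write sharing possible in the first
place.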

Then you do a load of (the same) requests, each of which generates a  
1Mb output document, but generating that document involves the use of  
10Mb of RAM.

After 5 requests (one to each worker), you will now be (appearing to  
be) using 20 + 5 * (20 + 10) = 170Mb of RAM (combined VSZ).

Now, if you continue making the same request, memory usage should not  
go up significantly (although as your workers process more requests,  
their pages are more likely to become un-shared, so 'real' memory use  
in the background goes up... but again, let's ignore this).
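
If you actually care how much of a worker has become un-shared,
/proc/<pid>/smaps on Linux breaks it down per mapping; a rough sketch,
with the pid left as a placeholder:

   $ awk '/^Private_Dirty:/ { sum += $2 } END { print sum " kB private" }' \
         /proc/<pid>/smaps

Private_Dirty is a reasonable approximation of the pages a worker no
longer shares with the master.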

You stop making requests... Nothing changes. Perl _never_ gives RAM  
back to the system until it restarts. If you come back and do another  
web request, the memory perl has free internally will be re-used, but  
it won't be released back to the operating system.
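
You can watch this happen with a plain perl one-liner that has nothing
to do with Starman (Linux only, since it reads /proc, and the exact
figures depend on your perl build):

   $ perl -e '
       sub rss { (split " ", `grep VmRSS /proc/$$/status`)[1] }
       print "before: ", rss(), " kB\n";
       { my @big = (1) x 5_000_000; }   # build a big array, then drop it
       print "after:  ", rss(), " kB\n";'
   # the "after" figure stays high - perl keeps the freed memory around
   # for re-use rather than handing it back to the OS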

If you now kill Starman, the operating system _may_, at _its  
discretion_, free up all the pages in which perl code was cached, and  
it may not. Measuring the OS's free memory is just wrong...

>
> 144800k
>
> After the first request:
> 145296k
>
> After the second request:
> 145296k
>
> After 1000 more requests (with ab -n 1000 -c 1):
> 146412k
>
> After ~ 5 minutes of inactivity:
> 146412k
>
> After another 1000 requests (with ab -n 1000 -c 20):
> 146784k
>
> After ~ 5 minutes of inactivity:
> 146412k
>
> After 10000 requests (with ab -n 100000 -c 50):
> 156188k
>
> After 90000 requests:
> 221836k
>
> After 100000 requests:
> 228572k
>
> After ~ 15 minutes of inactivity:
> 227696k
>
> After 15 more minutes of inactivity:
> 227696k
>
> After one more hour of inactivity:
> 227944k
>
> So it seems that there is a memory leak if the memory is not freed  
> even after 1 hour.

No, this (the 'after 1 hour' thing) is not a leak - this is perl not  
giving memory back to the OS, by design. (And yes - you may have a tiny  
leak in there somewhere, given the small continuing RAM increase per  
request - although I'd be more inclined to blame your app than Starman  
for this.)

This is why you generally arrange for workers to restart after N  
requests: if one of them ever serves a _massive_ page, it will never  
give that memory back otherwise...

So just set children to die after a few thousand requests and stop  
worrying?
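
With Starman that's the --max-requests option (pick whatever number
suits you - something like this):

   $ starman --preload-app --workers 5 --max-requests 1000 myapp.psgi

Each worker exits and is re-forked from the master after serving that
many requests, so whatever bloat it has accumulated goes away with it.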

Cheers
t0m



