orasnita at gmail
Jan 25, 2012, 5:41 AM
Post #5 of 7
Re: Running Catalyst apps with start_server
[In reply to]
From: "Tomas Doran" <bobtfish [at] bobtfish>
> On 23 Jan 2012, at 21:34, Octavian Rasnita wrote:
>> So something's obviously wrong if so much memory is occupied even
>> after 1 hour of inactivity.
> To start with, you're testing entirely wrong.
Well, this is good news. :-)
> Testing the free RAM on the machine is bullshit - as the kernel is
> going to cache data for you, so the 'free' RAM figure means nothing.
I know that, but the problem I had was that when the "used" memory reported by top grew as large as the total memory, the system started to swap. That is why I was watching it.
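On Linux, a quick way to see how much of that "used" figure is really just reclaimable page cache is /proc/meminfo (the MemAvailable line needs kernel 3.14 or later; on older kernels it is simply absent):

```shell
# MemAvailable estimates how much memory can be handed out without
# swapping, counting reclaimable cache; MemFree alone understates it,
# and top's "used" overstates real pressure for the same reason.
grep -E '^(MemTotal|MemFree|MemAvailable|Cached):' /proc/meminfo
```

If MemAvailable stays healthy while "used" climbs, the box is not actually heading toward swap.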
> The only figures really of note is the VSZ of each process. (And this
> doesn't account for memory sharing).
OK, I will make some tests to see if the size of that memory increases.
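A minimal way to watch those per-process sizes, assuming a procps-style ps (the grep pattern is only an example; match it to your actual process name):

```shell
# List PID, virtual size (VSZ) and resident size (RSS), both in KB,
# for anything whose command line mentions "starman".
# The [s] trick keeps the grep process itself out of the results.
ps axo pid,vsz,rss,args | grep '[s]tarman'

# The same columns for a single known PID (here, the current shell),
# handy for spot-checking one worker repeatedly over time.
ps -o pid,vsz,rss -p $$
```

Sampling these columns before and after a burst of requests shows whether the workers themselves are actually growing, independent of whatever the kernel is caching.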
> What will (appear) to happen is that starman pre-loads all your bits
> (let's say that's 20Mb for the sake of argument). It then forks, giving
> you 5 workers... So you now have 6 x 20Mb (VSZ) - there is memory
> sharing going on here, so you're not actually using that memory, but
> let's ignore that...
> Then you do a load of (the same) requests, which generate a 1Mb output
> document, but generating that document involves the use of 10Mb of RAM.
> After 5 requests (one to each worker), you will now be (appearing to
> be) using 20 + 5 * (20+10) Mb of RAM (combined VSZ).
> Now, if you continue making the same request, memory usage should not
> go up significantly (although as your workers process more requests,
> they're more likely to become un-shared, so 'real' memory use in the
> background goes up.. but again, let's ignore this).
> You stop making requests... Nothing changes.. Perl _never_ gives RAM
> back to the system, until it restarts. If you come back and do another
> web request, the memory perl has internally free will be re-used, but
> it won't be released back to the operating system.
> If you now kill Starman, then the operating system _may_, at _its
> discretion_, free up all the pages in which perl code was cached,
> and it may not. Measuring the OS free memory is just wrong...
Thanks for your explanations. They are helpful.
I was also expecting Perl to reuse the memory it has free internally, but I saw that the "used" memory reported by top kept increasing, so the freed memory looked like it was not being reused. Now I understand that this figure is misleading.
I also ran heavier tests that should have used all the memory and forced the system to swap, but the "used" memory reported by top only increased up to a certain size and never reached the total memory, so that increase does indeed look misleading.
> No, this (the 'after 1 hour' thing) is not a leak - this is perl not
> giving the OS memory back, by design. (And yes - you may have a tiny
> leak in there somewhere due to the small continuing RAM increase per
> request - although I'd be more likely to blame your app than Starman
> for this)
For those tests I used a bare Catalyst app generated with catalyst.pl MyApp and nothing more.
I also tested it with Catalyst's test server, and it doesn't seem to be leaking.
> This is why you generally arrange for workers to restart after N
> requests, as if they serve a _massive_ page, then they won't give that
> memory back ever...
> So just set children to die after a few thousand requests and stop
Yep, I am doing that.
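For reference, Starman exposes this directly: its --max-requests option (1000 by default) makes each worker exit and be respawned after serving that many requests, so a worker that once built a huge response doesn't hold that memory forever. A sketch of such an invocation (the app path and worker count are placeholders):

```shell
# Preload the app in the master so forked workers share pages via
# copy-on-write, and recycle each worker after 1000 requests to cap
# per-process growth.
starman --workers 5 --max-requests 1000 --preload-app app.psgi
```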
The test app wasn't leaking, but my real app might have some leaks.
I have used CatalystX::LeakChecker and didn't find any, but if the app does have leaks that I can't find, what do you suggest as a solution for not consuming the entire memory?
Is stopping and starting Starman now and then the only solution?
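Since the thread is about start_server, one possible arrangement (a sketch; the port, pid-file path, and app path are placeholders): run Starman under Server::Starter's start_server, which holds the listening socket open across restarts, and send the supervisor a HUP periodically, e.g. from a nightly cron job, to replace the whole Starman process tree gracefully and hand all of perl's memory back to the OS:

```shell
# start_server keeps the listening socket; on HUP it starts a fresh
# starman generation before shutting the old one down.
start_server --port=5000 --pid-file=/tmp/starman.pid \
    -- starman --preload-app app.psgi

# Later (e.g. from cron): graceful full restart, no dropped connections.
kill -HUP "$(cat /tmp/starman.pid)"
```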
List: Catalyst [at] lists
Searchable archive: http://www.mail-archive.com/catalyst [at] lists/
Dev site: http://dev.catalyst.perl.org/