[Catalyst] Have exceeded the maximum number of attempts (1000) to open temp file/dir

Bill Moseley moseley at hank.org
Fri Oct 25 13:51:51 GMT 2013

I have an API where requests can include JSON.  HTTP::Body saves those off
to temp files.

Yesterday we got a very large number of errors:

[ERROR] "Caught exception in engine "Error in tempfile() using
/tmp/XXXXXXXXXX: Have exceeded the maximum number of attempts (1000) to
open temp file/dir

The File::Temp docs say:

> If you are forking many processes in parallel that are all creating
> temporary files, you may need to reset the random number seed using
> srand(EXPR) in each child else all the children will attempt to walk
> through the same set of random file names and may well cause
> themselves to give up if they exceed the number of retry attempts.

We are running under mod_perl.  Could it be as simple as all the procs
being in sync?  I'm just surprised this hasn't happened before.  Is there
another explanation?

Where would you suggest to call srand()?
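One place that should work under mod_perl is a child-init handler, so each
freshly forked child reseeds its PRNG before serving any requests.  A
minimal sketch (the MyApp::Reseed package name is made up for
illustration):

```perl
# Hypothetical sketch: reseed the random number generator once per
# forked child, so parallel children don't all walk the same sequence
# of File::Temp candidate names.
#
# In httpd.conf:
#   PerlChildInitHandler MyApp::Reseed
package MyApp::Reseed;
use strict;
use warnings;

sub handler {
    # srand() with no argument chooses a fresh seed for this process
    # (modern perls derive it from a good entropy source).
    srand();
    return 0;    # Apache2::Const::OK
}

1;
```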

Another problem, and one I've run into before, is that HTTP::Body doesn't
use File::Temp's unlink feature and depends on Catalyst cleaning up.  This
results in orphaned files left on the temp disk.
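For contrast, here is a sketch of File::Temp's own cleanup via its OO
interface: with UNLINK => 1 the file is removed when the object is
destroyed, rather than relying on the framework to sweep it up later
(the template and contents below are just for illustration):

```perl
# Sketch: let File::Temp itself remove the file on object destruction,
# instead of depending on an external cleanup pass.
use strict;
use warnings;
use File::Temp ();

my $name;
{
    my $tmp = File::Temp->new(
        TEMPLATE => 'body_XXXXXXXXXX',
        TMPDIR   => 1,        # use the system temp directory
        UNLINK   => 1,        # delete when $tmp goes out of scope
    );
    $name = $tmp->filename;
    print {$tmp} '{"example":true}';
    close $tmp;               # closed, but not yet unlinked
    die "expected file to exist" unless -e $name;
}
# $tmp was destroyed at the end of the block; UNLINK removed the file
die "expected file to be gone" if -e $name;
print "cleanup ok\n";
```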

--

Bill Moseley
moseley at hank.org