[Catalyst] Have exceeded the maximum number of attempts (1000)
to open temp file/dir
jjn1056 at yahoo.com
Thu Oct 31 21:44:41 GMT 2013
I see over here (latest release) that we are calling ->cleanup(1) when we create the HTTP::Body. Is that not enough to clean up tmp files?
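For reference, the pattern being described looks roughly like this (a sketch, not Catalyst's actual engine code; read_next_chunk() is a made-up stand-in for however the engine reads the request):

```perl
use strict;
use warnings;
use HTTP::Body;

my $content_length = 1024;   # would come from the Content-Length header

# cleanup(1) asks HTTP::Body to delete its temp files when the
# object is destroyed, instead of leaving them on disk.
my $body = HTTP::Body->new( 'application/json', $content_length );
$body->cleanup(1);

while ( defined( my $chunk = read_next_chunk() ) ) {   # hypothetical reader
    $body->add($chunk);   # large bodies get spooled to temp files here
}
```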
Regarding the tmp file thing, wow, I have no idea, but I hope you find out and report it to us!
On Friday, October 25, 2013 8:53 AM, Bill Moseley <moseley at hank.org> wrote:
I have an API where requests can include JSON. HTTP::Body saves those off to temp files.
Yesterday we got a very large number of errors:
[ERROR] "Caught exception in engine "Error in tempfile() using /tmp/XXXXXXX=
XXX: Have exceeded the maximum number of attempts (1000) to open temp file/=
The File::Temp docs say:
> If you are forking many processes in parallel that are all creating
> temporary files, you may need to reset the random number seed using
> srand(EXPR) in each child else all the children will attempt to walk
> through the same set of random file names and may well cause
> themselves to give up if they exceed the number of retry attempts.
We are running under mod_perl. Could it be as simple as the procs all being in sync? I'm just surprised this has not happened before. Is there another explanation?
Where would you suggest to call srand()?
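Under mod_perl, one spot that runs exactly once per freshly forked Apache child is a child-init handler, which is the kind of place the File::Temp advice points at. A sketch (the package name is made up):

```perl
# In httpd.conf:
#   PerlChildInitHandler My::App::ChildInit

package My::App::ChildInit;

use strict;
use warnings;
use Apache2::Const -compile => 'OK';

sub handler {
    # Reseed the RNG in each new child so File::Temp in every process
    # walks a different sequence of candidate temp file names.
    srand();
    return Apache2::Const::OK;
}

1;
```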
Another problem, and one I've commented on before, is that HTTP::Body doesn't use File::Temp's unlink feature and depends on Catalyst cleaning up. This results in orphaned files left on temp disk.
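For comparison, File::Temp's own cleanup looks like this (a minimal sketch; the template name is made up, and File::Temp->new actually defaults UNLINK to on):

```perl
use strict;
use warnings;
use File::Spec;
use File::Temp ();

my $payload = '{"hello":"world"}';   # stand-in for a spooled request body

my $tmp = File::Temp->new(
    TEMPLATE => 'httpbody_XXXXXXXXXX',   # hypothetical name pattern
    DIR      => File::Spec->tmpdir,
    UNLINK   => 1,                       # unlink when the object is destroyed
);
print {$tmp} $payload;
$tmp->flush;
# When $tmp goes out of scope the file is removed, even if the
# application never runs its own cleanup phase.
```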
moseley at hank.org
List: Catalyst at lists.scsys.co.uk
Searchable archive: http://email@example.com/
Dev site: http://dev.catalyst.perl.org/
More information about the Catalyst mailing list