
python-checkins at python
Aug 11, 2013, 1:08 PM
peps: PEP 446: closing all file descriptors between fork() and exec() is not reliable
http://hg.python.org/peps/rev/36961c29aa1b
changeset:   5052:36961c29aa1b
user:        Victor Stinner <victor.stinner [at] gmail>
date:        Sun Aug 11 22:08:38 2013 +0200
summary:     PEP 446: closing all file descriptors between fork() and exec() is not reliable in a multithreaded application

files:
  pep-0446.txt |  16 +++++++++++++---
  1 files changed, 13 insertions(+), 3 deletions(-)


diff --git a/pep-0446.txt b/pep-0446.txt
--- a/pep-0446.txt
+++ b/pep-0446.txt
@@ -357,14 +357,20 @@
 so this case is not concerned by this PEP.
 
 
-Performances of Closing All File Descriptors
---------------------------------------------
+Closing All Open File Descriptors
+---------------------------------
 
 On UNIX, the ``subprocess`` module closes almost all file descriptors
 in the child process. This operation require MAXFD system calls, where
 MAXFD is the maximum number of file descriptors, even if there are
 only few open file descriptors. This maximum can be read using:
-``sysconf("SC_OPEN_MAX")``.
+``os.sysconf("SC_OPEN_MAX")``.
+
+There is no portable nor reliable function to close all open file
+descriptors between ``fork()`` and ``execv()``. Another thread may
+create an inheritable file descriptors while we are closing existing
+file descriptors. Holding the CPython GIL reduces the risk of the race
+condition.
 
 The operation can be slow if MAXFD is large. For example, on a FreeBSD
 buildbot with ``MAXFD=655,000``, the operation took 300 ms: see
@@ -375,6 +381,10 @@
 ``/proc/<PID>/fd/``, and so performances depends on the number of open
 file descriptors, not on MAXFD.
 
+FreeBSD, OpenBSD and Solaris provide a ``closefrom()`` function. It
+cannot be used by the ``subprocess`` module when the *pass_fds*
+parameter is a non-empty list of file descriptors.
+
 See also:
 
 * `Python issue #1663329 <http://bugs.python.org/issue1663329>`_:

-- 
Repository URL: http://hg.python.org/peps
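The changeset explains why closing "almost all" descriptors costs MAXFD system calls: the child-setup loop attempts a close() on every possible descriptor number up to ``os.sysconf("SC_OPEN_MAX")``, whether or not it is open. A minimal sketch of that loop (an illustration only, not the actual ``subprocess`` implementation; the name ``close_all_fds`` and its ``keep`` parameter are hypothetical):

```python
import os

# MAXFD: upper bound on descriptor numbers, read the way the
# changeset describes (sysconf "SC_OPEN_MAX").
MAXFD = os.sysconf("SC_OPEN_MAX")

def close_all_fds(keep=(0, 1, 2)):
    """Close every descriptor below MAXFD except those in *keep*.

    Sketch of the loop discussed in the diff: one close() attempt per
    possible descriptor, even if only a few are actually open --
    hence the cost when MAXFD is large (e.g. 655,000 on the FreeBSD
    buildbot mentioned above).
    """
    for fd in range(MAXFD):
        if fd in keep:
            continue
        try:
            os.close(fd)
        except OSError:
            pass  # fd was not open
```

Running such a loop between ``fork()`` and ``execv()`` is exactly where the race described in the added paragraph arises: another thread can create a new inheritable descriptor while the loop is still iterating, so the loop is neither portable nor reliable in a multithreaded process.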