17.2. multiprocessing — Process-based parallelism
Source code: Lib/multiprocessing/
17.2.1. Introduction
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
The multiprocessing module also introduces APIs which do not have analogs in the threading module. A prime example of this is the Pool object which offers a convenient means of parallelizing the execution of a function across multiple input values, distributing the input data across processes (data parallelism). The following example demonstrates the common practice of defining such functions in a module so that child processes can successfully import that module. This basic example of data parallelism using Pool,
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
will print to standard output
[1, 4, 9]
17.2.1.1. The Process class
In multiprocessing, processes are spawned by creating a Process object and then calling its start() method. Process follows the API of threading.Thread.
A trivial example of a
multiprocess program is
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
To show the individual process IDs involved, here is an expanded example:
from multiprocessing import Process
import os

def info(title):
    print(title)
    print('module name:', __name__)
    print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    print('hello', name)

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
For an explanation of why the if __name__ == '__main__' part is necessary, see Programming guidelines.
17.2.1.2. Contexts and start methods
Depending on the platform, multiprocessing supports three ways to start a process. These start methods are
spawn
The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process object's run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver.
Available on Unix and Windows. The default on Windows.
fork
The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic.
Available on Unix only. The default on Unix.
forkserver
When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.
Available on Unix platforms which support passing file descriptors over Unix pipes.
Changed in version 3.4: spawn added on all unix platforms, and forkserver added for some unix platforms. Child processes no longer inherit all of the parent's inheritable handles on Windows.
On Unix using the spawn or forkserver start methods will also
start a semaphore tracker process which tracks the unlinked named
semaphores created by processes of the program.
When all processes
have exited the semaphore tracker unlinks any remaining semaphores.
Usually there should be none, but if a process was killed by a signal
there may be some “leaked” semaphores.
(Unlinking the named semaphores
is a serious matter since the system allows only a limited number, and
they will not be automatically unlinked until the next reboot.)
To select a start method you use the set_start_method() in the if __name__ == '__main__' clause of the main module. For example:
import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.Queue()
    p = mp.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()
set_start_method() should not be used more than once in the program.
Alternatively, you can use get_context() to obtain a context object. Context objects have the same API as the multiprocessing module, and allow one to use multiple start methods in the same program. For example:
import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()
Note that objects related to one context may not be compatible with
processes for a different context.
In particular, locks created using
the fork context cannot be passed to processes started using the
spawn or forkserver start methods.
A library which wants to use a particular start method should probably use get_context() to avoid interfering with the choice of the library user.
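As a brief sketch of that advice (not from the original docs; the helper names run_in_subprocess and _worker are invented for illustration), a library can pin its own start method through a private context while leaving the global default untouched:

import multiprocessing as mp

def _worker(q):
    q.put('done')

def run_in_subprocess():
    # Use a private context instead of mp.set_start_method(), so the
    # application keeps control of the global start method.
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process(target=_worker, args=(q,))
    p.start()
    result = q.get()
    p.join()
    return result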
17.2.1.3. Exchanging objects between processes
multiprocessing supports two types of communication channel between processes:
Queues
The Queue class is a near clone of queue.Queue. For example:
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())    # prints "[42, None, 'hello']"
    p.join()
Queues are thread and process safe.
Pipes
The Pipe() function returns a pair of connection objects connected by a pipe which by default is duplex (two-way). For example:
from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())   # prints "[42, None, 'hello']"
    p.join()
The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes (or threads) try to read from or write to the same end of the pipe at the same time. Of course there is no risk of corruption from processes using different ends of the pipe at the same time.
17.2.1.4. Synchronization between processes
multiprocessing contains equivalents of all the synchronization primitives from threading. For instance one can use a lock to ensure that only one process prints to standard output at a time:
from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    try:
        print('hello world', i)
    finally:
        l.release()

if __name__ == '__main__':
    lock = Lock()

    for num in range(10):
        Process(target=f, args=(lock, num)).start()
Without using the lock output from the different processes is liable to get all mixed up.
17.2.1.5. Sharing state between processes
As mentioned above, when doing concurrent programming it is usually best to
avoid using shared state as far as possible.
This is particularly true when
using multiple processes.
However, if you really do need to use some shared data then multiprocessing provides a couple of ways of doing so.
Shared memory
Data can be stored in a shared memory map using Value or Array. For example, the following code
from multiprocessing import Process, Value, Array

def f(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

if __name__ == '__main__':
    num = Value('d', 0.0)
    arr = Array('i', range(10))

    p = Process(target=f, args=(num, arr))
    p.start()
    p.join()

    print(num.value)
    print(arr[:])
will print
3.1415927
[0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
The 'd' and 'i' arguments used when creating num and arr are typecodes of the kind used by the array module: 'd' indicates a double precision float and 'i' indicates a signed integer.
These shared
objects will be process and thread-safe.
For more flexibility in using shared memory one can use the multiprocessing.sharedctypes module which supports the creation of arbitrary ctypes objects allocated from shared memory.
Server process
A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies.
A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Barrier, Queue, Value and Array. For example,
from multiprocessing import Process, Manager

def f(d, l):
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()

if __name__ == '__main__':
    with Manager() as manager:
        d = manager.dict()
        l = manager.list(range(10))

        p = Process(target=f, args=(d, l))
        p.start()
        p.join()

        print(d)
        print(l)
will print
{0.25: None, 1: '1', '2': 2}
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
Server process managers are more flexible than using shared memory objects
because they can be made to support arbitrary object types.
Also, a single
manager can be shared by processes on different computers over a network.
They are, however, slower than using shared memory.
17.2.1.6. Using a pool of workers
The Pool class represents a pool of worker processes. It has methods which allow tasks to be offloaded to the worker processes in a few different ways.
For example:
from multiprocessing import Pool, TimeoutError
import time
import os

def f(x):
    return x*x

if __name__ == '__main__':
    # start 4 worker processes
    with Pool(processes=4) as pool:

        # print "[0, 1, 4,..., 81]"
        print(pool.map(f, range(10)))

        # print same numbers in arbitrary order
        for i in pool.imap_unordered(f, range(10)):
            print(i)

        # evaluate "f(20)" asynchronously
        res = pool.apply_async(f, (20,))      # runs in *only* one process
        print(res.get(timeout=1))             # prints "400"

        # evaluate "os.getpid()" asynchronously
        res = pool.apply_async(os.getpid, ()) # runs in *only* one process
        print(res.get(timeout=1))             # prints the PID of that process

        # launching multiple evaluations asynchronously *may* use more processes
        multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
        print([res.get(timeout=1) for res in multiple_results])

        # make a single worker sleep for 10 secs
        res = pool.apply_async(time.sleep, (10,))
        try:
            print(res.get(timeout=1))
        except TimeoutError:
            print("We lacked patience and got a multiprocessing.TimeoutError")

        print("For the moment, the pool remains available for more work")

    # exiting the 'with'-block has stopped the pool
    print("Now the pool is closed and no longer available")
Note that the methods of a pool should only ever be used by the
process which created it.
Functionality within this package requires that the __main__ module be importable by the children. This is covered in Programming guidelines however it is worth pointing out here. This means that some examples, such as the multiprocessing.pool.Pool examples, will not work in the interactive interpreter. For example:
>>> from multiprocessing import Pool
>>> p = Pool(5)
>>> def f(x):
...     return x*x
...
>>> p.map(f, [1,2,3])
Process PoolWorker-1:
Process PoolWorker-2:
Process PoolWorker-3:
Traceback (most recent call last):
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
(If you try this it will actually output three full tracebacks
interleaved in a semi-random fashion, and then you may have to
stop the master process somehow.)
17.2.2. Reference
The multiprocessing package mostly replicates the API of the threading module.
17.2.2.1. Process and exceptions
class multiprocessing.Process(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)
Process objects represent activity that is run in a separate process. The Process class has equivalents of all the methods of threading.Thread.
The constructor should always be called with keyword arguments. group should always be None; it exists solely for compatibility with threading.Thread. target is the callable object to be invoked by the run() method. It defaults to None, meaning nothing is called. name is the process name (see name for more details). args is the argument tuple for the target invocation. kwargs is a dictionary of keyword arguments for the target invocation. If provided, the keyword-only daemon argument sets the process daemon flag to True or False. If None (the default), this flag will be inherited from the creating process.
By default, no arguments are passed to target.
If a subclass overrides the constructor, it must make sure it invokes the base class constructor (Process.__init__()) before doing anything else to the process.
Changed in version 3.3: Added the daemon argument.
run()
Method representing the process's activity.
You may override this method in a subclass. The standard run() method invokes the callable object passed to the object's constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
start()
Start the process's activity.
This must be called at most once per process object. It arranges for the object's run() method to be invoked in a separate process.
join([timeout])
If the optional argument timeout is None (the default), the method blocks until the process whose join() method is called terminates. If timeout is a positive number, it blocks at most timeout seconds. Note that the method returns None if its process terminates or if the method times out. Check the process's exitcode to determine if it terminated.
A process can be joined many times.
A process cannot join itself because this would cause a deadlock. It is an error to attempt to join a process before it has been started.
name
The process's name. The name is a string used for identification purposes only. It has no semantics. Multiple processes may be given the same name.
The initial name is set by the constructor. If no explicit name is provided to the constructor, a name of the form 'Process-N1:N2:…:Nk' is constructed, where each Nk is the N-th child of its parent.
is_alive()
Return whether the process is alive.
Roughly, a process object is alive from the moment the start() method returns until the child process terminates.
daemon
The process's daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child
processes.
Note that a daemonic process is not allowed to create child processes.
Otherwise a daemonic process would leave its children orphaned if it gets
terminated when its parent process exits. Additionally, these are not
Unix daemons or services, they are normal processes that will be
terminated (and not joined) if non-daemonic processes have exited.
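A minimal sketch of the daemon flag's effect (not part of the original docs; it assumes a long-running child that never exits on its own):

from multiprocessing import Process
import time

def background():
    while True:          # would run forever if left alone
        time.sleep(1)

if __name__ == '__main__':
    p = Process(target=background, daemon=True)  # must be set before start()
    p.start()
    time.sleep(0.5)
    # On exit, the main process terminates (rather than joins) the
    # daemonic child, so this program does not hang.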
In addition to the threading.Thread API, Process objects also support the following attributes and methods:
pid
Return the process ID. Before the process is spawned, this will be None.
exitcode
The child's exit code. This will be None if the process has not yet terminated.
A negative value -N indicates that the child was terminated
by signal N.
authkey
The process's authentication key (a byte string).
When multiprocessing is initialized the main process is assigned a random string using os.urandom().
When a Process object is created, it will inherit the authentication key of its parent process, although this may be changed by setting authkey to another byte string.
sentinel
A numeric handle of a system object which will become "ready" when the process ends.
You can use this value if you want to wait on several events at once using multiprocessing.connection.wait(). Otherwise calling join() is simpler.
On Windows, this is an OS handle usable with the WaitForSingleObject and WaitForMultipleObjects family of API calls. On Unix, this is a file descriptor usable with primitives from the select module.
New in version 3.3.
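As a hedged illustration of waiting on several sentinels at once (the work() helper is invented for this sketch):

from multiprocessing import Process
from multiprocessing.connection import wait
import time

def work(n):
    time.sleep(n)

if __name__ == '__main__':
    procs = [Process(target=work, args=(n,)) for n in (1, 2)]
    for p in procs:
        p.start()
    sentinels = [p.sentinel for p in procs]
    while sentinels:
        # wait() blocks until at least one sentinel becomes ready,
        # i.e. until at least one process has ended.
        for s in wait(sentinels):
            sentinels.remove(s)
    for p in procs:
        p.join()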
terminate()
Terminate the process. On Unix this is done using the SIGTERM signal; on Windows TerminateProcess() is used. Note that exit handlers and finally clauses, etc., will not be executed.
Note that descendant processes of the process will not be terminated – they will simply become orphaned.
If this method is used when the associated process is using a pipe or queue then the pipe or queue is liable to become corrupted and may become unusable by other processes. Similarly, if the process has acquired a lock or semaphore etc. then terminating it is liable to cause other processes to deadlock.
kill()
Same as terminate() but using the SIGKILL signal on Unix.
New in version 3.7.
close()
Close the Process object, releasing all resources associated with it. ValueError is raised if the underlying process is still running. Once close() returns successfully, most other methods and attributes of the Process object will raise ValueError.
New in version 3.7.
Note that the start(), join(), is_alive(), terminate() and exitcode methods should only be called by the process that created the process object.
Example usage of some of the methods of Process:
>>> import multiprocessing, time, signal
>>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
>>> print(p, p.is_alive())
<Process(Process-1, initial)> False
>>> p.start()
>>> print(p, p.is_alive())
<Process(Process-1, started)> True
>>> p.terminate()
>>> time.sleep(0.1)
>>> print(p, p.is_alive())
<Process(Process-1, stopped[SIGTERM])> False
>>> p.exitcode == -signal.SIGTERM
True
exception multiprocessing.ProcessError
The base class of all multiprocessing exceptions.
exception multiprocessing.BufferTooShort
Exception raised by Connection.recv_bytes_into() when the supplied
buffer object is too small for the message read.
If e is an instance of BufferTooShort then e.args[0] will give the message as a byte string.
exception multiprocessing.AuthenticationError
Raised when there is an authentication error.
exception multiprocessing.TimeoutError
Raised by methods with a timeout when the timeout expires.
17.2.2.2. Pipes and Queues
When using multiple processes, one generally uses message passing for
communication between processes and avoids having to use any synchronization
primitives like locks.
For passing messages one can use Pipe() (for a connection between two processes) or a queue (which allows multiple producers and consumers).
The Queue, SimpleQueue and JoinableQueue types are multi-producer, multi-consumer FIFO queues modelled on the queue.Queue class in the standard library. They differ in that Queue lacks the task_done() and join() methods introduced into Python 2.5's queue.Queue class.
If you use JoinableQueue then you must call JoinableQueue.task_done() for each task removed from the queue or else the semaphore used to count the number of unfinished tasks may eventually overflow, raising an exception.
Note that one can also create a shared queue by using a manager object – see Managers.
multiprocessing uses the usual queue.Empty and queue.Full exceptions to signal a timeout. They are not available in the multiprocessing namespace so you need to import them from queue.
When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences which are a little surprising, but should not cause any practical difficulties – if they really bother you then you can instead use a queue created with a manager.
After putting an object on an empty queue there may be an infinitesimal delay before the queue's empty() method returns False and get_nowait() can return without raising queue.Empty.
If multiple processes are enqueuing objects, it is possible for
the objects to be received at the other end out-of-order.
However, objects enqueued by the same process will always be in
the expected order with respect to each other.
If a process is killed using Process.terminate() or os.kill() while it is trying to use a Queue, then the data in the queue is likely to become corrupted. This may cause any other process to get an exception when it tries to use the queue later on.
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread()), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue.
For an example of the usage of queues for interprocess communication see Examples.
multiprocessing.Pipe([duplex])
Returns a pair (conn1, conn2) of Connection objects representing the ends of a pipe.
If duplex is True (the default) then the pipe is bidirectional. If duplex is False then the pipe is unidirectional: conn1 can only be used for receiving messages and conn2 can only be used for sending messages.
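A small sketch of the unidirectional case (not from the original docs): with duplex=False the first connection is read-only and the second write-only:

from multiprocessing import Pipe

recv_conn, send_conn = Pipe(duplex=False)
send_conn.send('one-way message')
print(recv_conn.recv())   # prints 'one-way message'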
class multiprocessing.Queue([maxsize])
Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe.
The usual queue.Empty and queue.Full exceptions from the standard library's queue module are raised to signal timeouts.
Queue implements all the methods of queue.Queue except for task_done() and join().
qsize()
Return the approximate size of the queue. Because of multithreading/multiprocessing semantics, this number is not reliable.
Note that this may raise NotImplementedError on Unix platforms like Mac OS X where sem_getvalue() is not implemented.
empty()
Return True if the queue is empty, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
full()
Return True if the queue is full, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
put(obj[, block[, timeout]])
Put obj into the queue. If the optional argument block is True (the default) and timeout is None (the default), block if necessary until a free slot is available. If timeout is a positive number, it blocks at most timeout seconds and raises the queue.Full exception if no free slot was available within that time. Otherwise (block is False), put an item on the queue if a free slot is immediately available, else raise the queue.Full exception (timeout is ignored in that case).
put_nowait(obj)
Equivalent to put(obj, False).
get([block[, timeout]])
Remove and return an item from the queue. If optional args block is True (the default) and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the queue.Empty exception if no item was available within that time. Otherwise (block is False), return an item if one is immediately available, else raise the queue.Empty exception (timeout is ignored in that case).
get_nowait()
Equivalent to get(False).
Queue has a few additional methods not found in queue.Queue. These methods are usually unnecessary for most code:
close()
Indicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected.
join_thread()
Join the background thread. This can only be used after close() has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe.
By default if a process is not the creator of the queue then on exit it will attempt to join the queue's background thread. The process can call cancel_join_thread() to make join_thread() do nothing.
cancel_join_thread()
Prevent join_thread() from blocking. In particular, this prevents the background thread from being joined automatically when the process exits – see join_thread().
A better name for this method might be allow_exit_without_flush(). It is likely to cause enqueued data to be lost, and you almost certainly will not need to use it. It is really only there if you need the current process to exit immediately without waiting to flush enqueued data to the underlying pipe, and you don't care about lost data.
This class's functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the functionality in this class will be disabled, and attempts to instantiate a Queue will result in an ImportError. See bpo-3770 for additional information. The same holds true for any of the specialized queue types listed below.
class multiprocessing.SimpleQueue
It is a simplified Queue type, very close to a locked Pipe.
empty()
Return True if the queue is empty, False otherwise.
get()
Remove and return an item from the queue.
put(item)
Put item into the queue.
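A minimal usage sketch (not part of the original docs):

from multiprocessing import Process, SimpleQueue

def f(q):
    q.put('via SimpleQueue')

if __name__ == '__main__':
    q = SimpleQueue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())   # blocks until the item arrives
    p.join()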
class multiprocessing.JoinableQueue([maxsize])
JoinableQueue, a Queue subclass, is a queue which additionally has task_done() and join() methods.
task_done()
Indicate that a formerly enqueued task is complete. Used by queue consumers. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete.
If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).
Raises a ValueError if called more times than there were items placed in the queue.
join()
Block until all items in the queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.
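A short producer/consumer sketch of task_done() and join() (not from the original docs; the consumer loop is invented for illustration):

from multiprocessing import Process, JoinableQueue

def consumer(q):
    while True:
        item = q.get()
        print('processed', item)
        q.task_done()          # one call per item fetched with get()

if __name__ == '__main__':
    q = JoinableQueue()
    p = Process(target=consumer, args=(q,), daemon=True)
    p.start()
    for item in range(3):
        q.put(item)
    q.join()   # unblocks once task_done() was called for every item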
17.2.2.3. Miscellaneous
multiprocessing.active_children()
Return list of all live children of the current process.
Calling this has the side effect of “joining” any processes which have
already finished.
multiprocessing.cpu_count()
Return the number of CPUs in the system.
This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with len(os.sched_getaffinity(0)).
May raise NotImplementedError.
multiprocessing.current_process()
Return the Process object corresponding to the current process.
An analogue of threading.current_thread().
multiprocessing.freeze_support()
Add support for when a program which uses multiprocessing has been frozen to produce a Windows executable. (Has been tested with py2exe, PyInstaller and cx_Freeze.)
One needs to call this function straight after the if __name__ == '__main__' line of the main module. For example:
from multiprocessing import Process, freeze_support

def f():
    print('hello world!')

if __name__ == '__main__':
    freeze_support()
    Process(target=f).start()
If the freeze_support() line is omitted then trying to run the frozen executable will raise RuntimeError.
Calling freeze_support() has no effect when invoked on any operating
system other than Windows.
In addition, if the module is being run
normally by the Python interpreter on Windows (the program has not been
frozen), then freeze_support() has no effect.
multiprocessing.get_all_start_methods()
Returns a list of the supported start methods, the first of which
is the default.
The possible start methods are 'fork',
'spawn' and 'forkserver'.
On Windows only 'spawn' is
available.
On Unix 'fork' and 'spawn' are always
supported, with 'fork' being the default.
New in version 3.4.
multiprocessing.get_context(method=None)
Return a context object which has the same attributes as the multiprocessing module.
If method is None then the default context is returned. Otherwise method should be 'fork', 'spawn' or 'forkserver'. ValueError is raised if the specified start method is not available.
New in version 3.4.
multiprocessing.get_start_method(allow_none=False)
Return the name of start method used for starting processes.
If the start method has not been fixed and allow_none is false, then the start method is fixed to the default and the name is returned. If the start method has not been fixed and allow_none is true then None is returned.
The return value can be 'fork', 'spawn', 'forkserver' or None. 'fork' is the default on Unix, while 'spawn' is the default on Windows.
New in version 3.4.
multiprocessing.set_executable()
Sets the path of the Python interpreter to use when starting a child process. (By default sys.executable is used). Embedders will probably need to do something like
set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
before they can create child processes.
Changed in version 3.4: Now supported on Unix when the 'spawn' start method is used.
multiprocessing.set_start_method(method)
Set the method which should be used to start child processes.
method can be 'fork', 'spawn' or 'forkserver'.
Note that this should be called at most once, and it should be
protected inside the if __name__ == '__main__' clause of the
main module.
New in version 3.4.
multiprocessing contains no analogues of threading.active_count(), threading.enumerate(), threading.settrace(), threading.setprofile(), threading.Timer, or threading.local.
17.2.2.4. Connection Objects
Connection objects allow the sending and receiving of picklable objects or strings. They can be thought of as message oriented connected sockets.
Connection objects are usually created using Pipe() – see also Listeners and Clients.
class multiprocessing.connection.Connection
send(obj)
Send an object to the other end of the connection which should be read using recv().
The object must be picklable. Very large pickles (approximately 32 MiB+, though it depends on the OS) may raise a ValueError exception.
recv()
Return an object sent from the other end of the connection using send(). Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.
fileno()
Return the file descriptor or handle used by the connection.
close()
Close the connection.
This is called automatically when the connection is garbage collected.
poll([timeout])
Return whether there is any data available to be read.
If timeout is not specified then it will return immediately. If timeout is a number then this specifies the maximum time in seconds to block. If timeout is None then an infinite timeout is used.
Note that multiple connection objects may be polled at once by using multiprocessing.connection.wait().
send_bytes(buffer[, offset[, size]])
Send byte data from a bytes-like object as a complete message.
If offset is given then data is read from that position in buffer. If size is given then that many bytes will be read from buffer. Very large buffers (approximately 32 MiB+, though it depends on the OS) may raise a ValueError exception.
recv_bytes([maxlength])
Return a complete message of byte data sent from the other end of the connection as a string. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end has closed.
If maxlength is specified and the message is longer than maxlength then OSError is raised and the connection will no longer be readable.
Changed in version 3.3: This function used to raise IOError, which is now an alias of OSError.
recv_bytes_into(buffer[, offset])
Read into buffer a complete message of byte data sent from the other end of the connection and return the number of bytes in the message. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.
buffer must be a writable bytes-like object. If offset is given then the message will be written into the buffer from that position. Offset must be a non-negative integer less than the length of buffer (in bytes).
If the buffer is too short then a BufferTooShort exception is raised and the complete message is available as e.args[0] where e is the exception instance.
Changed in version 3.3: Connection objects themselves can now be transferred between processes using Connection.send() and Connection.recv().
New in version 3.3: Connection objects now support the context management protocol – see Context Manager Types. __enter__() returns the connection object, and __exit__() calls close().
For example:
>>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
>>> b.recv()
[1, 'hello', None]
>>> b.send_bytes(b'thank you')
>>> a.recv_bytes()
b'thank you'
>>> import array
>>> arr1 = array.array('i', range(5))
>>> arr2 = array.array('i', [0] * 10)
>>> a.send_bytes(arr1)
>>> count = b.recv_bytes_into(arr2)
>>> assert count == len(arr1) * arr1.itemsize
>>> arr2
array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
The Connection.recv() method automatically unpickles the data it receives, which can be a security risk unless you can trust the process which sent the message.
Therefore, unless the connection object was produced using Pipe() you should only use the recv() and send() methods after performing some sort of authentication.
If a process is killed while it is trying to read or write to a pipe then
the data in the pipe is likely to become corrupted, because it may become
impossible to be sure where the message boundaries lie.
17.2.2.5. Synchronization primitives
Generally synchronization primitives are not as necessary in a multiprocess program as they are in a multithreaded program. See the documentation for the threading module.
Note that one can also create synchronization primitives by using a manager object – see Managers.
class multiprocessing.Barrier(parties[, action[, timeout]])
A barrier object: a clone of threading.Barrier.
New in version 3.3.
class multiprocessing.BoundedSemaphore([value])
A bounded semaphore object: a close analog of threading.BoundedSemaphore.
A solitary difference from its close analog exists: its acquire method's first argument is named block, as is consistent with Lock.acquire().
On Mac OS X, this is indistinguishable from Semaphore because sem_getvalue() is not implemented on that platform.
class multiprocessing.Condition([lock])
A condition variable: an alias for threading.Condition.
If lock is specified then it should be a Lock or RLock object from multiprocessing.
Changed in version 3.3: The wait_for() method was added.
class multiprocessing.Event
A clone of threading.Event.
class multiprocessing.Lock
A non-recursive lock object: a close analog of threading.Lock. Once a process or thread has acquired a lock, subsequent attempts to acquire it from any process or thread will block until it is released; any process or thread may release it. The concepts and behaviors of threading.Lock as it applies to threads are replicated here in multiprocessing.Lock as it applies to either processes or threads, except as noted.
Note that Lock is actually a factory function which returns an instance of multiprocessing.synchronize.Lock initialized with a default context.
Lock supports the context manager protocol and thus may be used in with statements.
acquire(block=True, timeout=None)
Acquire a lock, blocking or non-blocking.
With the block argument set to True (the default), the method call will block until the lock is in an unlocked state, then set it to locked and return True. Note that the name of this first argument differs from that in threading.Lock.acquire().
With the block argument set to False, the method call does not block. If the lock is currently in a locked state, return False; otherwise set the lock to a locked state and return True.
When invoked with a positive, floating-point value for timeout, block for at most the number of seconds specified by timeout as long as the lock can not be acquired. Invocations with a negative value for timeout are equivalent to a timeout of zero. Invocations with a timeout value of None (the default) set the timeout period to infinite. Note that the treatment of negative or None values for timeout differs from the implemented behavior in threading.Lock.acquire(). The timeout argument has no practical implications if the block argument is set to False and is thus ignored. Returns True if the lock has been acquired or False if the timeout period has elapsed.
release()
Release a lock. This can be called from any process or thread, not only the process or thread which originally acquired the lock.
Behavior is the same as in threading.Lock.release() except that when invoked on an unlocked lock, a ValueError is raised.
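A brief sketch of the timeout behavior described above (not from the original docs): a second acquire on an already-held non-recursive lock times out and returns False:

from multiprocessing import Lock

lock = Lock()
lock.acquire()                              # lock is now held
ok = lock.acquire(block=True, timeout=0.1)  # cannot be acquired again
print(ok)                                   # prints False after ~0.1s
lock.release()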
class multiprocessing.RLock
A recursive lock object: a close analog of threading.RLock. A recursive lock must be released by the process or thread that acquired it. Once a process or thread has acquired a recursive lock, the same process or thread may acquire it again without blocking; that process or thread must release it once for each time it has been acquired.
Note that RLock is actually a factory function which returns an instance of multiprocessing.synchronize.RLock initialized with a default context.
RLock supports the context manager protocol and thus may be used in with statements.
acquire(block=True, timeout=None)
Acquire a lock, blocking or non-blocking.
When invoked with the block argument set to True, block until the lock is in an unlocked state (not owned by any process or thread) unless the lock is already owned by the current process or thread. The current process or thread then takes ownership of the lock (if it does not already have ownership) and the recursion level inside the lock increments by one, resulting in a return value of True. Note that there are several differences in this first argument's behavior compared to the implementation of threading.RLock.acquire(), starting with the name of the argument itself.
When invoked with the block argument set to False, do not block. If the lock has already been acquired (and thus is owned) by another process or thread, the current process or thread does not take ownership and the recursion level within the lock is not changed, resulting in a return value of False. If the lock is in an unlocked state, the current process or thread takes ownership and the recursion level is incremented, resulting in a return value of True.
Use and behaviors of the timeout argument are the same as in Lock.acquire(). Note that some of these behaviors of timeout differ from the implemented behaviors in threading.RLock.acquire().
release()
Release a lock, decrementing the recursion level. If after the decrement the recursion level is zero, reset the lock to unlocked (not owned by any process or thread) and if any other processes or threads are blocked waiting for the lock to become unlocked, allow exactly one of them to proceed. If after the decrement the recursion level is still nonzero, the lock remains locked and owned by the calling process or thread.
Only call this method when the calling process or thread owns the lock. An AssertionError is raised if this method is called by a process or thread other than the owner or if the lock is in an unlocked (unowned) state. Note that the type of exception raised in this situation differs from the implemented behavior in threading.RLock.release().
class multiprocessing.Semaphore([value])
A semaphore object: a close analog of threading.Semaphore.
A solitary difference from its close analog exists: its acquire method's first argument is named block, as is consistent with Lock.acquire().
On Mac OS X, sem_timedwait is unsupported, so calling acquire() with a timeout will emulate that function's behavior using a sleeping loop.
If the SIGINT signal generated by Ctrl-C arrives while the main thread is blocked by a call to BoundedSemaphore.acquire(), Lock.acquire(), RLock.acquire(), Semaphore.acquire(), Condition.acquire() or Condition.wait() then the call will be immediately interrupted and KeyboardInterrupt will be raised.
This differs from the behaviour of threading where SIGINT will be ignored while the equivalent blocking calls are in progress.
Some of this package's functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the multiprocessing.synchronize module will be disabled, and attempts to import it will result in an ImportError. See bpo-3770 for additional information.
17.2.2.6. Shared ctypes Objects
It is possible to create shared objects using shared memory which can be
inherited by child processes.
multiprocessing.Value(typecode_or_type, *args, lock=True)
Return a ctypes object allocated from shared memory. By default the return value is actually a synchronized wrapper for the object. The object itself can be accessed via the value attribute of a Value.
typecode_or_type determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the array module. *args is passed on to the constructor for the type.
If lock is True (the default) then a new recursive lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
Operations like += which involve a read and write are not atomic. So if, for instance, you want to atomically increment a shared value it is insufficient to just do
counter.value += 1
Assuming the associated lock is recursive (which it is by default) you can instead do
with counter.get_lock():
    counter.value += 1
Note that lock is a keyword-only argument.
multiprocessing.Array(typecode_or_type, size_or_initializer, *, lock=True)
Return a ctypes array allocated from shared memory. By default the return value is actually a synchronized wrapper for the array.
typecode_or_type determines the type of the elements of the returned array: it is either a ctypes type or a one character typecode of the kind used by the array module. If size_or_initializer is an integer, then it determines the length of the array, and the array will be initially zeroed. Otherwise, size_or_initializer is a sequence which is used to initialize the array and whose length determines the length of the array.
If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
Note that lock is a keyword only argument.
Note that an array of ctypes.c_char has value and raw attributes which allow one to use it to store and retrieve strings.
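A small sketch of the value and raw attributes (not from the original docs; the exact contents of raw after a shorter assignment depend on what was previously stored in the buffer):

from multiprocessing import Array

s = Array('c', b'hello world')   # an array of ctypes.c_char
print(s.value)                   # prints b'hello world'
s.value = b'bye'                 # store a shorter, null-terminated string
print(s.raw)                     # raw exposes the whole underlying buffer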
17.2.2.6.1. The multiprocessing.sharedctypes module
The multiprocessing.sharedctypes module provides functions for allocating ctypes objects from shared memory which can be inherited by child processes.
Although it is possible to store a pointer in shared memory remember that this will refer to a location in the address space of a specific process. However, the pointer is quite likely to be invalid in the context of a second process and trying to dereference the pointer from the second process may cause a crash.
multiprocessing.sharedctypes.RawArray(typecode_or_type, size_or_initializer)
Return a ctypes array allocated from shared memory.
typecode_or_type determines the type of the elements of the returned array: it is either a ctypes type or a one character typecode of the kind used by the array module. If size_or_initializer is an integer then it determines the length of the array, and the array will be initially zeroed. Otherwise size_or_initializer is a sequence which is used to initialize the array and whose length determines the length of the array.
Note that setting and getting an element is potentially non-atomic – use Array() instead to make sure that access is automatically synchronized using a lock.
multiprocessing.sharedctypes.RawValue(typecode_or_type, *args)
Return a ctypes object allocated from shared memory.
typecode_or_type determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the array module. *args is passed on to the constructor for the type.
Note that setting and getting the value is potentially non-atomic – use Value() instead to make sure that access is automatically synchronized using a lock.
Note that an array of ctypes.c_char has value and raw attributes which allow one to use it to store and retrieve strings – see documentation for ctypes.
multiprocessing.sharedctypes.Array(typecode_or_type, size_or_initializer, *, lock=True)
The same as RawArray() except that depending on the value of lock a process-safe synchronization wrapper may be returned instead of a raw ctypes array.
If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
Note that lock is a keyword-only argument.
multiprocessing.sharedctypes.Value(typecode_or_type, *args, lock=True)
The same as RawValue() except that depending on the value of lock a process-safe synchronization wrapper may be returned instead of a raw ctypes object.
If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
Note that lock is a keyword-only argument.
multiprocessing.sharedctypes.copy(obj)
Return a ctypes object allocated from shared memory which is a copy of the
ctypes object obj.
multiprocessing.sharedctypes.synchronized(obj[, lock])
Return a process-safe wrapper object for a ctypes object which uses lock to synchronize access. If lock is None (the default) then a multiprocessing.RLock object is created automatically.
A synchronized wrapper will have two methods in addition to those of the object it wraps: get_obj() returns the wrapped object and get_lock() returns the lock object used for synchronization.
Note that accessing the ctypes object through the wrapper can be a lot slower than accessing the raw ctypes object.
Changed in version 3.5: Synchronized objects support the context manager protocol.
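A minimal sketch of wrapping a raw object (not from the original docs):

from multiprocessing.sharedctypes import RawValue, synchronized
from ctypes import c_int

raw = RawValue(c_int, 0)
sync = synchronized(raw)        # wraps raw with a multiprocessing.RLock

with sync.get_lock():           # explicit lock around a read-modify-write
    sync.get_obj().value += 1
print(sync.get_obj().value)     # prints 1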
The table below compares the syntax for creating shared ctypes objects from shared memory with the normal ctypes syntax. (In the table MyStruct is some subclass of ctypes.Structure.)
ctypes                 sharedctypes using type       sharedctypes using typecode
c_double(2.4)          RawValue(c_double, 2.4)       RawValue('d', 2.4)
MyStruct(4, 6)         RawValue(MyStruct, 4, 6)
(c_short * 7)()        RawArray(c_short, 7)          RawArray('h', 7)
(c_int * 3)(9, 2, 8)   RawArray(c_int, (9, 2, 8))    RawArray('i', (9, 2, 8))
Below is an example where a number of ctypes objects are modified by a child process:

from multiprocessing import Process, Lock
from multiprocessing.sharedctypes import Value, Array
from ctypes import Structure, c_double

class Point(Structure):
    _fields_ = [('x', c_double), ('y', c_double)]

def modify(n, x, s, A):
    n.value **= 2
    x.value **= 2
    s.value = s.value.upper()
    for a in A:
        a.x **= 2
        a.y **= 2

if __name__ == '__main__':
    lock = Lock()

    n = Value('i', 7)
    x = Value(c_double, 1.0/3.0, lock=False)
    s = Array('c', b'hello world', lock=lock)
    A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)

    p = Process(target=modify, args=(n, x, s, A))
    p.start()
    p.join()

    print(n.value)
    print(x.value)
    print(s.value)
    print([(a.x, a.y) for a in A])

The results printed are
49
0.1111111111111111
HELLO WORLD
[(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
17.2.2.7. Managers
Managers provide a way to create data which can be shared between different
processes, including sharing over a network between processes running on
different machines. A manager object controls a server process which manages
shared objects.
Other processes can access the shared objects by using proxies.
multiprocessing.Manager()
Returns a started SyncManager object which can be used for sharing objects between processes. The returned manager object corresponds to a spawned child process and has methods which will create shared objects and return corresponding proxies.
Manager processes will be shut down as soon as they are garbage collected or their parent process exits.
The manager classes are defined in the multiprocessing.managers module:
class multiprocessing.managers.BaseManager([address[, authkey]])
Create a BaseManager object.
Once created one should call start() or get_server().serve_forever() to ensure that the manager object refers to a started manager process.
address is the address on which the manager process listens for new connections. If address is None then an arbitrary one is chosen.
authkey is the authentication key which will be used to check the validity of incoming connections to the server process. If authkey is None then current_process().authkey is used. Otherwise authkey is used and it must be a byte string.
start([initializer[, initargs]])
Start a subprocess to start the manager.
If initializer is not None
then the subprocess will call initializer(*initargs) when it starts.
get_server()
Returns a Server object which represents the actual server under
the control of the Manager. The Server object supports the
serve_forever() method:
>>> from multiprocessing.managers import BaseManager
>>> manager = BaseManager(address=('', 50000), authkey=b'abc')
>>> server = manager.get_server()
>>> server.serve_forever()
Server additionally has an address attribute.
connect()
Connect a local manager object to a remote manager process:
>>> from multiprocessing.managers import BaseManager
>>> m = BaseManager(address=('127.0.0.1', 5000), authkey=b'abc')
>>> m.connect()
shutdown()
Stop the process used by the manager. This is only available if start() has been used to start the server process.
This can be called multiple times.
register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
A classmethod which can be used for registering a type or callable with
the manager class.
typeid is a “type identifier” which is used to identify a particular
type of shared object.
This must be a string.
callable is a callable used for creating objects for this type
identifier.
If a manager instance will be connected to the server using the connect() method, or if the create_method argument is False then this can be left as None.
proxytype is a subclass of BaseProxy which is used to create proxies for shared objects with this typeid. If None then a proxy class is created automatically.
exposed is used to specify a sequence of method names which proxies for this typeid should be allowed to access using BaseProxy._callmethod(). (If exposed is None then proxytype._exposed_ is used instead if it exists.) In the case where no exposed list is specified, all "public methods" of the shared object will be accessible. (Here a "public method" means any attribute which has a __call__() method and whose name does not begin with '_'.)
method_to_typeid is a mapping used to specify the return type of those exposed methods which should return a proxy. It maps method names to typeid strings. (If method_to_typeid is None then proxytype._method_to_typeid_ is used instead if it exists.) If a method's name is not a key of this mapping or if the mapping is None then the object returned by the method will be copied by value.
create_method determines whether a method should be created with name
typeid which can be used to tell the server process to create a new
shared object and return a proxy for it.
By default it is True.
BaseManager instances also have one read-only property:
address
The address used by the manager.
Changed in version 3.3: Manager objects support the context management protocol – see Context Manager Types. __enter__() starts the server process (if it has not already started) and then returns the manager object. __exit__() calls shutdown().
In previous versions __enter__() did not start the manager's server process if it was not already started.
class multiprocessing.managers.SyncManager
A subclass of BaseManager which can be used for the synchronization of processes. Objects of this type are returned by multiprocessing.Manager().
Its methods create and return Proxy Objects for a number of commonly used data types to be synchronized across processes. This notably includes shared lists and dictionaries.
Barrier(parties[, action[, timeout]])
Create a shared threading.Barrier object and return a proxy for it.
New in version 3.3.
BoundedSemaphore([value])
Create a shared threading.BoundedSemaphore object and return a proxy for it.
Condition([lock])
Create a shared threading.Condition object and return a proxy for it.
If lock is supplied then it should be a proxy for a threading.Lock or threading.RLock object.
Changed in version 3.3: The wait_for() method was added.
Event()
Create a shared threading.Event object and return a proxy for it.
Lock()
Create a shared threading.Lock object and return a proxy for it.
Namespace()
Create a shared Namespace object and return a proxy for it.
Queue([maxsize])
Create a shared queue.Queue object and return a proxy for it.
RLock()
Create a shared threading.RLock object and return a proxy for it.
Semaphore([value])
Create a shared threading.Semaphore object and return a proxy for it.
Array(typecode, sequence)
Create an array and return a proxy for it.
Value(typecode, value)
Create an object with a writable value attribute and return a proxy for it.
dict()
dict(mapping)
dict(sequence)
Create a shared dict object and return a proxy for it.
list()
list(sequence)
Create a shared list object and return a proxy for it.
Changed in version 3.6: Shared objects are capable of being nested. For example, a shared container object such as a shared list can contain other shared objects which will all be managed and synchronized by the SyncManager.
class multiprocessing.managers.Namespace
A type that can register with SyncManager.
A namespace object has no public methods, but does have writable attributes. Its representation shows the values of its attributes.
However, when using a proxy for a namespace object, an attribute beginning with '_' will be an attribute of the proxy and not an attribute of the referent:
>>> manager = multiprocessing.Manager()
>>> Global = manager.Namespace()
>>> Global.x = 10
>>> Global.y = 'hello'
>>> Global._z = 12.3    # this is an attribute of the proxy
>>> print(Global)
Namespace(x=10, y='hello')
17.2.2.7.1. Customized managers
To create one's own manager, one creates a subclass of BaseManager and uses the register() classmethod to register new types or callables with the manager class. For example:
from multiprocessing.managers import BaseManager

class MathsClass:
    def add(self, x, y):
        return x + y
    def mul(self, x, y):
        return x * y

class MyManager(BaseManager):
    pass

MyManager.register('Maths', MathsClass)

if __name__ == '__main__':
    with MyManager() as manager:
        maths = manager.Maths()
        print(maths.add(4, 3))         # prints 7
        print(maths.mul(7, 8))         # prints 56
17.2.2.7.2. Using a remote manager
It is possible to run a manager server on one machine and have clients use it
from other machines (assuming that the firewalls involved allow it).
Running the following commands creates a server for a single shared queue which
remote clients can access:
>>> from multiprocessing.managers import BaseManager
>>> from queue import Queue
>>> queue = Queue()
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue', callable=lambda:queue)
>>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
>>> s = m.get_server()
>>> s.serve_forever()
One client can access the server as follows:
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
>>> m.connect()
>>> queue = m.get_queue()
>>> queue.put('hello')
Another client can also use it:
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
>>> m.connect()
>>> queue = m.get_queue()
>>> queue.get()
'hello'
Local processes can also access that queue, using the code from above on the
client to access it remotely:
>>> from multiprocessing import Process, Queue
>>> from multiprocessing.managers import BaseManager
>>> class Worker(Process):
...     def __init__(self, q):
...         self.q = q
...         super(Worker, self).__init__()
...     def run(self):
...         self.q.put('local hello')
...
>>> queue = Queue()
>>> w = Worker(queue)
>>> w.start()
>>> class QueueManager(BaseManager): pass
...
>>> QueueManager.register('get_queue', callable=lambda: queue)
>>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
>>> s = m.get_server()
>>> s.serve_forever()
17.2.2.8. Proxy Objects
A proxy is an object which refers to a shared object which lives (presumably) in a different process. The shared object is said to be the referent of the proxy. Multiple proxy objects may have the same referent.
A proxy object has methods which invoke corresponding methods of its referent (although not every method of the referent will necessarily be available through the proxy). In this way, a proxy can be used just like its referent can:
>>> from multiprocessing import Manager
>>> manager = Manager()
>>> l = manager.list([i*i for i in range(10)])
>>> print(l)
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> print(repr(l))
<ListProxy object, typeid 'list' at 0x...>
>>> l[2:5]
[4, 9, 16]
Notice that applying str() to a proxy will return the representation of the referent, whereas applying repr() will return the representation of the proxy.
An important feature of proxy objects is that they are picklable so they can be passed between processes. As such, a referent can contain Proxy Objects. This permits nesting of these managed lists, dicts, and other Proxy Objects:
>>> a = manager.list()
>>> b = manager.list()
>>> a.append(b)         # referent of a now contains referent of b
>>> print(a, b)
[<ListProxy object, typeid 'list' at ...>] []
>>> b.append('hello')
>>> print(a[0], b)
['hello'] ['hello']
Similarly, dict and list proxies may be nested inside one another:
>>> l_outer = manager.list([ manager.dict() for i in range(2) ])
>>> d_first_inner = l_outer[0]
>>> d_first_inner['a'] = 1
>>> d_first_inner['b'] = 2
>>> l_outer[1]['c'] = 3
>>> l_outer[1]['z'] = 26
>>> print(l_outer[0])
{'a': 1, 'b': 2}
>>> print(l_outer[1])
{'c': 3, 'z': 26}
If standard (non-proxy) list or dict objects are contained in a referent, modifications to those mutable values will not be propagated through the manager because the proxy has no way of knowing when the values contained within are modified.
However, storing a value in a container proxy
(which triggers a __setitem__ on the proxy object) does propagate through
the manager and so to effectively modify such an item, one could re-assign the
modified value to the container proxy:
# create a list proxy and append a mutable object (a dictionary)
lproxy = manager.list()
lproxy.append({})
# now mutate the dictionary
d = lproxy[0]
d['a'] = 1
d['b'] = 2
# at this point, the changes to d are not yet synced, but by
# updating the dictionary, the proxy is notified of the change
lproxy[0] = d
This approach is perhaps less convenient than employing nested Proxy Objects for most use cases but also demonstrates a level of control over the synchronization.
The proxy types in multiprocessing do nothing to support comparisons by value. So, for instance, we have:
>>> manager.list([1,2,3]) == [1,2,3]
False
One should just use a copy of the referent instead when making comparisons.
class multiprocessing.managers.BaseProxy
Proxy objects are instances of subclasses of BaseProxy.
_callmethod(methodname[, args[, kwds]])
Call and return the result of a method of the proxy’s referent.
If proxy is a proxy whose referent is obj then the expression
proxy._callmethod(methodname, args, kwds)
will evaluate the expression
getattr(obj, methodname)(*args, **kwds)
in the manager’s process.
The returned value will be a copy of the result of the call or a proxy to
a new shared object – see documentation for the method_to_typeid argument of register().
If an exception is raised by the call, then it is re-raised by _callmethod(). If some other exception is raised in the manager's process then this is converted into a RemoteError exception and is raised by _callmethod().
Note in particular that an exception will be raised if methodname has not been exposed.
An example of the usage of _callmethod():
>>> l = manager.list(range(10))
>>> l._callmethod('__len__')
10
>>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
[2, 3, 4, 5, 6]
>>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
Traceback (most recent call last):
...
IndexError: list index out of range
_getvalue()
Return a copy of the referent.
If the referent is unpicklable then this will raise an exception.
__repr__()
Return a representation of the proxy object.
__str__()
Return the representation of the referent.
17.2.2.8.1. Cleanup
A proxy object uses a weakref callback so that when it gets garbage collected it
deregisters itself from the manager which owns its referent.
A shared object gets deleted from the manager process when there are no longer
any proxies referring to it.
17.2.2.9. Process Pools
One can create a pool of processes which will carry out tasks submitted to it with the Pool class.
A process pool object which controls a pool of worker processes to which jobs
can be submitted.
It supports asynchronous results with timeouts and
callbacks and has a parallel map implementation.
processes is the number of worker processes to use.
If processes is
None then the number returned by os.cpu_count() is used.
If initializer is not None then each worker process will call
initializer(*initargs) when it starts.
maxtasksperchild is the number of tasks a worker process can complete
before it will exit and be replaced with a fresh worker process, to enable
unused resources to be freed. The default maxtasksperchild is None, which
means worker processes will live as long as the pool.
context can be used to specify the context used for starting
the worker processes.
Usually a pool is created using the
function multiprocessing.Pool() or the Pool() method
of a context object.
In both cases context is set
appropriately.
Note that the methods of the pool object should only be called by
the process which created the pool.
New in version 3.2: maxtasksperchild
New in version 3.4: context
Note that worker processes within a Pool typically live for the complete
duration of the Pool’s work queue. A frequent pattern found in other
systems (such as Apache, mod_wsgi, etc) to free resources held by
workers is to allow a worker within a pool to complete only a set
amount of work before exiting, being cleaned up and a new
process being spawned to replace the old one. The maxtasksperchild
argument to the Pool exposes this ability to the end user.
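The following is an illustrative sketch, not taken from the reference: it combines maxtasksperchild with an explicit start-method context; the worker function, pool size and task count are arbitrary assumptions.

import multiprocessing
import os

def square(x):
    # printing the worker pid makes the periodic replacement visible
    print(os.getpid(), 'computing', x)
    return x * x

if __name__ == '__main__':
    ctx = multiprocessing.get_context('spawn')   # explicit start method
    # each worker exits and is replaced after completing 2 tasks
    with ctx.Pool(processes=2, maxtasksperchild=2) as pool:
        print(pool.map(square, range(8)))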
apply(func[, args[, kwds]])
Call func with arguments args and keyword arguments kwds.
It blocks until the result is ready. Given this blocks, apply_async() is
better suited for performing work in parallel. Additionally, func
is only executed in one of the workers of the pool.
apply_async(func[, args[, kwds[, callback[, error_callback]]]])
A variant of the apply() method which returns a result object.
If callback is specified then it should be a callable which accepts a
single argument.
When the result becomes ready callback is applied to
it, that is unless the call failed, in which case the error_callback
is applied instead.
If error_callback is specified then it should be a callable which
accepts a single argument.
If the target function fails, then
the error_callback is called with the exception instance.
Callbacks should complete immediately since otherwise the thread which
handles the results will get blocked.
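As an illustrative sketch (the worker f and the callback names are assumptions, not part of the reference), callbacks might be wired up as follows; note that both callbacks run in a result-handling thread of the parent process:

from multiprocessing import Pool

def f(x):
    return x * x

def on_success(result):
    # runs in a result-handling thread of the parent; keep it short
    print('result:', result)

def on_error(exc):
    # called with the exception instance if f raised
    print('failed:', exc)

if __name__ == '__main__':
    with Pool(2) as pool:
        r = pool.apply_async(f, (10,), callback=on_success,
                             error_callback=on_error)
        r.wait()   # block until the result has been handled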
map(func, iterable[, chunksize])
A parallel equivalent of the map() built-in function (it supports only
one iterable argument though).
It blocks until the result is ready.
This method chops the iterable into a number of chunks which it submits to
the process pool as separate tasks.
The (approximate) size of these
chunks can be specified by setting chunksize to a positive integer.
map_async(func, iterable[, chunksize[, callback[, error_callback]]])
A variant of the map() method which returns a result object.
If callback is specified then it should be a callable which accepts a
single argument.
When the result becomes ready callback is applied to
it, that is unless the call failed, in which case the error_callback
is applied instead.
If error_callback is specified then it should be a callable which
accepts a single argument.
If the target function fails, then
the error_callback is called with the exception instance.
Callbacks should complete immediately since otherwise the thread which
handles the results will get blocked.
imap(func, iterable[, chunksize])
A lazier version of map().
The chunksize argument is the same as the one used by the map()
method. For very long iterables using a large value for chunksize can
make the job complete much faster than using the default value of 1.
Also if chunksize is 1 then the next() method of the iterator
returned by the imap() method has an optional timeout parameter:
next(timeout) will raise multiprocessing.TimeoutError if the
result cannot be returned within timeout seconds.
imap_unordered(func, iterable[, chunksize])
The same as imap()
except that the ordering of the results from the
returned iterator should be considered arbitrary.
(Only when there is
only one worker process is the order guaranteed to be “correct”.)
starmap(func, iterable[, chunksize])
Like map() except that the elements of the iterable are expected
to be iterables that are unpacked as arguments.
Hence an iterable of [(1,2), (3, 4)] results in [func(1,2),
func(3,4)].
New in version 3.3.
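For instance, assuming an existing pool (the use of the built-in pow here is purely illustrative):
>>> pool.starmap(pow, [(2, 5), (3, 2)])
[32, 9]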
starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
A combination of starmap() and map_async()
that iterates over
iterable of iterables and calls func with the iterables unpacked.
Returns a result object.
New in version 3.3.
close()
Prevents any more tasks from being submitted to the pool.
Once all the
tasks have been completed the worker processes will exit.
terminate()
Stops the worker processes immediately without completing outstanding
work. When the pool object is garbage collected terminate() will be
called immediately.
join()
Wait for the worker processes to exit.
One must call close() or terminate()
before using join().
New in version 3.3: Pool objects now support the context management protocol – see
Context Manager Types. __enter__() returns the
pool object, and __exit__() calls terminate().
class multiprocessing.pool.AsyncResult
The class of the result returned by Pool.apply_async() and Pool.map_async().
get([timeout])
Return the result when it arrives.
If timeout is not None and the
result does not arrive within timeout seconds then
multiprocessing.TimeoutError is raised.
If the remote call raised
an exception then that exception will be reraised by get().
wait([timeout])
Wait until the result is available or until timeout seconds pass.
ready()
Return whether the call has completed.
successful()
Return whether the call completed without raising an exception.
Will raise ValueError if the result is not ready.
The following example demonstrates the use of a pool:
from multiprocessing import Pool
import time

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(processes=4) as pool:         # start 4 worker processes
        result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
        print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow

        print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"

        it = pool.imap(f, range(10))
        print(next(it))                     # prints "0"
        print(next(it))                     # prints "1"
        print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow

        result = pool.apply_async(time.sleep, (10,))
        print(result.get(timeout=1))        # raises multiprocessing.TimeoutError
17.2.2.10. Listeners and Clients
Usually message passing between processes is done using queues or by using
Connection objects returned by Pipe().
However, the
multiprocessing.connection module allows some extra
flexibility.
It basically gives a high level message oriented API for dealing
with sockets or Windows named pipes.
It also has support for digest
authentication using the hmac
module, and for polling
multiple connections at the same time.
multiprocessing.connection.deliver_challenge(connection, authkey)
Send a randomly generated message to the other end of the connection and wait
for a reply.
If the reply matches the digest of the message using authkey as the key
then a welcome message is sent to the other end of the connection.
Otherwise AuthenticationError is raised.
multiprocessing.connection.answer_challenge(connection, authkey)
Receive a message, calculate the digest of the message using authkey as the
key, and then send the digest back.
If a welcome message is not received, then
AuthenticationError is raised.
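As an illustrative sketch (the shared key and the process layout are assumptions, not part of the reference), the handshake can be driven manually over a Pipe:

from multiprocessing import Pipe, Process
from multiprocessing.connection import deliver_challenge, answer_challenge

KEY = b'secret key'   # hypothetical shared secret

def child(conn):
    # the answering end digests the random challenge with the shared
    # key and sends the digest back
    answer_challenge(conn, KEY)
    print('child: authenticated')

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    # the challenging end sends a random message and verifies the reply
    deliver_challenge(parent_conn, KEY)
    print('parent: authenticated')
    p.join()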
multiprocessing.connection.Client(address[, family[, authkey]])
Attempt to set up a connection to the listener which is using address
address, returning a Connection.
The type of the connection is determined by family argument, but this can
generally be omitted since it can usually be inferred from the format of
address. (See Address Formats.)
If authkey is given and not None, it should be a byte string and will be
used as the secret key for an HMAC-based authentication challenge. No
authentication is done if authkey is None.
AuthenticationError is raised if authentication fails.
class multiprocessing.connection.Listener([address[, family[, backlog[, authkey]]]])
A wrapper for a bound socket or Windows named pipe which is ‘listening’ for
connections.
address is the address to be used by the bound socket or named pipe of the
listener object.
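The following is an end-to-end sketch (the address, key and payload are illustrative assumptions, not from the reference); because the listener is bound before the client process starts, the connection attempt cannot race the bind:

from multiprocessing import Process
from multiprocessing.connection import Client, Listener

ADDRESS = ('localhost', 6000)    # family 'AF_INET' is inferred
AUTHKEY = b'secret password'     # hypothetical shared key

def client():
    with Client(ADDRESS, authkey=AUTHKEY) as conn:
        print('client received:', conn.recv())

if __name__ == '__main__':
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        # the socket is already bound and listening at this point
        p = Process(target=client)
        p.start()
        with listener.accept() as conn:   # blocks until the client connects
            conn.send('hello from the listener')
        p.join()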
