Question or problem about Python programming:
I am using Python multiprocessing, more precisely
from multiprocessing import Pool

p = Pool(15)

args = [(df, config1), (df, config2), ...]  # list of args - df is the same object in each tuple

res = p.map_async(func, args)  # func is some arbitrary function
p.close()
p.join()
This approach has huge memory consumption, eating up pretty much all my RAM (at which point it gets extremely slow, which makes the multiprocessing pretty useless). I assume the problem is that df is a huge object (a large pandas dataframe) and it gets copied for each process. I have tried using multiprocessing.Value to share the dataframe without copying:
shared_df = multiprocessing.Value(pandas.DataFrame, df)
args = [(shared_df, config1), (shared_df, config2), ...]
(as suggested in Python multiprocessing shared memory), but that gives me TypeError: this type has no size (the same error as in Sharing a complex object between Python processes?, whose answer I unfortunately don't understand).
I am using multiprocessing for the first time and maybe my understanding is not (yet) good enough. Is multiprocessing.Value actually even the right thing to use in this case? I have seen other suggestions (e.g. queue) but am by now a bit confused. What options are there to share memory, and which one would be best in this case?
How to solve the problem:
Solution 1:
The first argument to Value is typecode_or_type. That is defined as:

typecode_or_type determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the array module. *args is passed on to the constructor for the type.

Emphasis mine. So you simply cannot put a pandas dataframe in a Value; it has to be a ctypes type.
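For contrast, here is a minimal sketch (not from the original answer) of the kind of small ctypes-backed objects Value is actually meant for:

import multiprocessing
import ctypes

# Value wraps a single ctypes object in shared memory
counter = multiprocessing.Value('i', 0)              # 'i' is the array-module typecode for a C int
ratio = multiprocessing.Value(ctypes.c_double, 0.0)  # an explicit ctypes type works too

with counter.get_lock():  # the wrapper carries a lock for safe concurrent updates
    counter.value += 1

print(counter.value, ratio.value)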
You could instead use a multiprocessing.Manager to serve your singleton dataframe instance to all of your processes. There are a few different ways to end up in the same place – probably the easiest is to just plop your dataframe into the manager's Namespace.
from multiprocessing import Manager

mgr = Manager()
ns = mgr.Namespace()
ns.df = my_dataframe

# now just give your processes access to ns, i.e. most simply
# p = Process(target=worker, args=(ns, work_unit))
Now your dataframe instance is accessible to any process that gets passed a reference to the Manager. Or just pass a reference to the Namespace; it's cleaner.
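Putting those pieces together, a minimal end-to-end sketch might look like this (the toy dataframe and the worker body are illustrative, not part of the original answer):

import pandas as pd
from multiprocessing import Manager, Process

def worker(ns, work_unit):
    # reading ns.df fetches the dataframe from the manager process
    df = ns.df
    print(work_unit, df['A'].sum())

if __name__ == '__main__':
    my_dataframe = pd.DataFrame({'A': range(5), 'B': range(5)})

    mgr = Manager()
    ns = mgr.Namespace()
    ns.df = my_dataframe

    procs = [Process(target=worker, args=(ns, i)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()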
One thing I didn't/won't cover is events and signaling – if your processes need to wait for others to finish executing, you'll need to add that in. Here is a page with some Event examples which also covers, in a bit more detail, how to use the manager's Namespace.
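As a rough idea of what that signaling can look like, here is a small assumed sketch using multiprocessing.Event (not taken from the linked page):

from multiprocessing import Manager, Process, Event

def producer(ns, ready):
    ns.value = 42          # e.g. publish data on the manager namespace
    ready.set()            # signal that the data is available

def consumer(ns, ready):
    ready.wait()           # block until the producer has signalled
    print(ns.value)

if __name__ == '__main__':
    mgr = Manager()
    ns = mgr.Namespace()
    ready = Event()

    procs = [Process(target=producer, args=(ns, ready)),
             Process(target=consumer, args=(ns, ready))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()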
(Note that none of this addresses whether multiprocessing is going to result in tangible performance benefits; this is just giving you the tools to explore that question.)
Solution 2:
You can share a pandas dataframe between processes without any memory overhead by creating a data_handler child process. This process receives calls from the other children with specific data requests (i.e. a row, a specific cell, a slice, etc.) against your very large dataframe object. Only the data_handler process keeps your dataframe in memory, unlike a Manager-backed Namespace, which causes the dataframe to be copied to every child process. See below for a working example. This can also be converted to a Pool; a sketch of that follows the worked example.
Need a progress bar for this? See my answer here: https://stackoverflow.com/a/55305714/11186769
import queue
import numpy as np
import pandas as pd
import multiprocessing
from random import randint

#==========================================================
# DATA HANDLER
#==========================================================

def data_handler(queue_c, queue_r, queue_d, n_processes):

    # Create a big dataframe
    big_df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)),
                          columns=list('ABCD'))

    # Handle data requests
    finished = 0
    while finished < n_processes:

        try:
            # Get the index we sent in
            idx = queue_c.get(False)
        except queue.Empty:
            continue
        else:
            if idx == 'finished':
                finished += 1
            else:
                try:
                    # Use the big_df here!
                    B_data = big_df.loc[idx, 'B']

                    # Send back some data
                    queue_r.put(B_data)
                except:
                    pass

    # big_df may need to be deleted at the end.
    # import gc; del big_df; gc.collect()

#==========================================================
# PROCESS DATA
#==========================================================

def process_data(queue_c, queue_r, queue_d):

    data = []

    # Save computer memory with a generator
    generator = (randint(0, x) for x in range(100))

    for g in generator:

        # Let's make a request by sending in the index of the data we want.
        # Keep in mind you may receive another child process's return call,
        # which is fine if order isn't important.

        # print(g)

        # Send an index value
        queue_c.put(g)

        # Handle the return call
        while True:
            try:
                return_call = queue_r.get(False)
            except queue.Empty:
                continue
            else:
                data.append(return_call)
                break

    queue_c.put('finished')
    queue_d.put(data)

#==========================================================
# START MULTIPROCESSING
#==========================================================

def multiprocess(n_processes):

    combined = []
    processes = []

    # Create queues
    queue_data = multiprocessing.Queue()
    queue_call = multiprocessing.Queue()
    queue_receive = multiprocessing.Queue()

    for process in range(n_processes):

        if process == 0:
            # Load your data_handler once here
            p = multiprocessing.Process(target=data_handler,
                                        args=(queue_call, queue_receive,
                                              queue_data, n_processes))
            processes.append(p)
            p.start()

        p = multiprocessing.Process(target=process_data,
                                    args=(queue_call, queue_receive, queue_data))
        processes.append(p)
        p.start()

    for i in range(n_processes):
        data_list = queue_data.get()
        combined += data_list

    for p in processes:
        p.join()

    # Your B values
    print(combined)


if __name__ == "__main__":
    multiprocess(n_processes=4)
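As mentioned above, the same request/reply idea can be adapted to a Pool. The sketch below is an assumed variant, not part of the original answer: it keeps data_handler from the example above (defined in the same script, with this block replacing the original __main__ block) and passes Manager queues to the pool workers, since manager queue proxies, unlike raw multiprocessing.Queue objects, can be shipped to workers as ordinary arguments.

def fetch_b(args):
    queue_c, queue_r, idx = args
    queue_c.put(idx)       # ask the data_handler for row idx of column 'B'
    return queue_r.get()   # may be another worker's reply; fine if order does not matter

if __name__ == "__main__":
    mgr = multiprocessing.Manager()
    queue_c, queue_r, queue_d = mgr.Queue(), mgr.Queue(), mgr.Queue()

    # one handler process that now expects a single 'finished' message
    handler = multiprocessing.Process(
        target=data_handler, args=(queue_c, queue_r, queue_d, 1))
    handler.start()

    with multiprocessing.Pool(4) as pool:
        results = pool.map(fetch_b, [(queue_c, queue_r, i) for i in range(50)])

    queue_c.put('finished')  # tell the handler to shut down
    handler.join()
    print(results)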
Solution 3:
You can use Array instead of Value for storing your dataframe.

The solution below converts a pandas dataframe to an object that stores its data in shared memory:
import numpy as np
import pandas as pd
import multiprocessing as mp
import ctypes

# the original dataframe is df, store the columns/dtypes pairs
df_dtypes_dict = dict(list(zip(df.columns, df.dtypes)))

# declare a shared Array with data from df
mparr = mp.Array(ctypes.c_double, df.values.reshape(-1))

# create a new df based on the shared array
df_shared = pd.DataFrame(np.frombuffer(mparr.get_obj()).reshape(df.shape),
                         columns=df.columns).astype(df_dtypes_dict)
If you now share df_shared across processes, no additional copies will be made. For your case:
def fun(config):
    # df_shared is global to the script
    df_shared.apply(config)  # whatever compute you do with df/config

config_list = [config1, config2]

pool = mp.Pool(15)
res = pool.map_async(fun, config_list)
pool.close()
pool.join()
This is also particularly useful if you use pandarallel, for example:
# this will not explode in memory
from pandarallel import pandarallel

pandarallel.initialize()

df_shared.parallel_apply(your_fun, axis=1)
Note: with this solution you end up with two dataframes (df and df_shared), which consume twice the memory and take a long time to initialise. It might be possible to read the data directly into shared memory.
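One hedged way to avoid that double allocation, assuming Python 3.8+ and a single float dtype, is to build the numpy buffer directly inside multiprocessing.shared_memory and fill it in place. The shape, the fill step, and the column names below are placeholders:

import numpy as np
import pandas as pd
from multiprocessing import shared_memory

shape, dtype = (1_000_000, 4), np.float64   # placeholder dimensions

shm = shared_memory.SharedMemory(
    create=True, size=int(np.prod(shape)) * np.dtype(dtype).itemsize)
arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
arr[:] = np.random.random(shape)            # fill in place, e.g. chunk by chunk from disk

# wrap the shared buffer without copying (best effort; pandas may still copy in some cases)
df_shared = pd.DataFrame(arr, columns=list('ABCD'), copy=False)

# other processes can attach with shared_memory.SharedMemory(name=shm.name)
# when everyone is done: shm.close(); shm.unlink()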