Python Parallel Processing and Multiprocessing: A Review
Parallel and multiprocessing algorithms break down significant numerical problems into smaller subtasks, reducing the total computing time on multiprocessor and multicore computers. Parallel programming is well supported in proven programming languages such as C, which are well suited to "heavy-duty" computational tasks. Historically, Python has been regarded as a poor fit for parallel programming due to the global interpreter lock (GIL). However, times have changed: parallel programming in Python is now supported by a diverse set of libraries and packages. This review focuses on Python libraries that support parallel processing and multiprocessing, with the aim of accelerating computation in various fields, including multimedia, attack detection, supercomputing, and genetic algorithms. Furthermore, we discuss several Python libraries that can be used for this purpose.
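As a minimal sketch of the kind of multiprocessing support the review surveys, the standard-library `multiprocessing.Pool` distributes CPU-bound subtasks across worker processes, sidestepping the GIL on multicore machines. The `square` function and worker count below are illustrative placeholders, not drawn from any specific library discussed in the review.

```python
from multiprocessing import Pool

def square(n):
    # CPU-bound work: each call runs in a separate worker process,
    # so the GIL of any single interpreter is not a bottleneck.
    return n * n

def parallel_squares(numbers, workers=4):
    # Pool splits the inputs across worker processes and
    # returns the results in input order.
    with Pool(processes=workers) as pool:
        return pool.map(square, numbers)

if __name__ == "__main__":
    print(parallel_squares(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because each worker is a separate process with its own interpreter, this pattern scales with core count for CPU-bound tasks, whereas threads in CPython would serialize on the GIL.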
Copyright (c) 2021 Zina A. Aziz, Diler Naseradeen Abdulqader, Amira Bibo Sallow, Herman Khalid Omer
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.