Effects of Multicore Distributed Memory Systems on Parallel Processing Applications

Authors

  • Mohammed J. Mohammed Department of Computer and Communication Engineering, College of Engineering, Nawroz University, Duhok, Kurdistan Region - Iraq

DOI:

https://doi.org/10.25007/ajnu.v6n3a71

Keywords:

Central Processing Unit, Multicore Distributed Memory, Parallel Processing, Client-server Principles, Hardware and Software Parts

Abstract

Complex problems take a long time to solve and suffer from low efficiency and performance. To overcome these drawbacks, a common approach is to break the problem into independent parts so that each processing element can execute its part simultaneously with the others. Systems that combine many computing elements for this purpose are usually classified into three types of parallel processing (PP): shared-, distributed-, and hybrid-memory systems. The aim of this research is to show the effects of multicore distributed memory systems on PP applications, which can reduce the total execution time of programs. In this work, distributed- and shared-memory systems are addressed based on client/server principles; to obtain a precise evaluation, only one client and one server are used. The algorithm employed here calculates the start, consumed, and termination values for the CPU and total execution times, the CPU usage of the servers, and the CPU and total execution times of the client. The results are compared with previous work based on distributed memory systems in order to overcome its drawbacks, taking into consideration the effects of multicore processors. All of these algorithms are implemented in the Java language.
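To illustrate the kind of measurement the abstract describes, the following is a minimal Java sketch of timing a unit of work: it records start ("started") and end ("terminated") timestamps, the CPU time consumed, the total (elapsed) execution time, and the resulting CPU usage. This is not the paper's actual algorithm; the class name TimingDemo, the placeholder workload in doWork, and the use of the standard ThreadMXBean interface are assumptions made for this example.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class TimingDemo {

    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();

        // "Started" times: wall clock and CPU, sampled before the work begins.
        // Note: getCurrentThreadCpuTime() returns -1 if the platform does not
        // support per-thread CPU timing.
        long wallStart = System.nanoTime();
        long cpuStart  = bean.getCurrentThreadCpuTime();

        doWork(); // stands in for a server's share of the decomposed problem

        // "Terminated" times, sampled after the work completes.
        long wallEnd = System.nanoTime();
        long cpuEnd  = bean.getCurrentThreadCpuTime();

        long totalNs = wallEnd - wallStart; // total (elapsed) execution time
        long cpuNs   = cpuEnd - cpuStart;   // CPU time actually consumed

        // CPU usage: fraction of the elapsed time this thread kept a core busy.
        double usagePercent = 100.0 * cpuNs / totalNs;

        System.out.printf("Total execution time: %.3f ms%n", totalNs / 1e6);
        System.out.printf("CPU time consumed:    %.3f ms%n", cpuNs / 1e6);
        System.out.printf("CPU usage:            %.1f%%%n", usagePercent);
    }

    // Hypothetical placeholder workload; in the paper's setting each server
    // would execute its independent part of the problem here.
    private static void doWork() {
        double acc = 0;
        for (int i = 0; i < 50_000_000; i++) {
            acc += Math.sqrt(i);
        }
        if (acc < 0) System.out.println(acc); // prevents dead-code elimination
    }
}
```

In the paper's one-client/one-server arrangement, the same measurements would presumably be taken on each side of the connection, so that the client's and the servers' CPU and total execution times can be reported separately.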



Published

2017-07-18

How to Cite

Mohammed, M. J. (2017). Effects of Multicore Distributed Memory Systems on Parallel Processing Applications. Academic Journal of Nawroz University, 6(3), 11–13. https://doi.org/10.25007/ajnu.v6n3a71

Issue

Vol. 6 No. 3 (2017)

Section

Articles