Experimenting with parallel programming techniques using Multi-Pascal

The relatively new field of computer science is in the midst of yet another technological revolution. As industry strives to produce ever faster machines and systems, many crossroads are encountered and paths are chosen. In one of the more recent battles, the object-oriented paradigm has apparently been declared the winner over the conventional approach to programming, as evidenced by the explosion of popularity now enjoyed by C++, Smalltalk, and other object-oriented languages. Of the many battles currently being waged in the industry, one in particular may have a profound influence on the development of future machines and systems: the decision to continue developing conventional sequential computers or to begin developing parallel computing systems.

In this dissertation I offer arguments in support of choosing the parallel processing paradigm. I am convinced that with current technology, parallel processing offers the best hope of reaching the next level of computational performance. There is no question that many problems can be solved much more quickly when more processors are applied to them; the key is to improve the programmer's ability to discover and exploit the parallelism within programs, so that the processors can be kept busy enough to justify their use.

In Part 1, many topics related to parallel computing are discussed, including architecture, languages, interconnection topologies, pipelining, and more. In Parts 2 and 3, six projects are described using the language 'Multi-Pascal'. The projects were selected from the book 'The Art of Parallel Programming' by Bruce Lester. Included with the book is a disk that allows programmers to write their own Multi-Pascal programs and simulate their performance on a wide variety of parallel architectures. The environment, which runs on a DOS system and includes a compiler, can represent up to 256 processors.
When a program is compiled and executed in this environment, the system keeps track of the number of sequential time units required to run the program and also how many parallel time units were required. The environment then computes the speedup by dividing the sequential time by the parallel time. The environment also provides many debugging and testing tools to help the programmer learn the concepts of parallel programming.

The first three projects were all done on a simulated multiprocessor system: polynomial multiplication, bitonic merge sort, and Gaussian elimination. The final three projects were designed to run on multicomputer systems: numerical integration on a two-dimensional mesh, and image processing and the traveling salesman problem, both on Hypercube topologies.
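The speedup calculation described above can be sketched in a few lines. This is an illustrative example only: the function name and the sample timings are assumptions for demonstration, not values taken from Lester's Multi-Pascal environment.

```python
def speedup(sequential_time_units, parallel_time_units):
    """Speedup = sequential execution time / parallel execution time.

    Both arguments are simulated time units, as reported by an
    environment like the Multi-Pascal simulator described above.
    """
    return sequential_time_units / parallel_time_units

# Hypothetical example: a program that needs 12000 sequential time units
# but completes in 1500 parallel time units achieves a speedup of 8,
# i.e. ideal (linear) speedup on 8 processors.
print(speedup(12000, 1500))  # 8.0
```

A speedup close to the processor count indicates the processors are being kept busy; a much lower value suggests the program's parallelism is not being fully exploited.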
