Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/83367
Title: An environment for high-level, graph oriented parallel programming
Authors: Chan, Fan
Degree: M.Phil.
Issue Date: 2004
Abstract: Parallel computing has been used as an important technique to speed up computations in many application areas. It also brings benefits in other aspects of performance, such as scalability and fault tolerance. However, writing parallel programs is much harder than writing sequential ones. A parallel program consists of multiple processes that cooperate to execute the program, and many issues need to be addressed, e.g. communication, synchronization, and load balancing. Also, large-scale parallel applications are not easy to maintain. The ability to develop parallel programs quickly and easily is becoming increasingly important to many scientists and engineers. There is therefore a need for high-level programming models and tools to support the building of parallel applications. This thesis describes a project that aims to support the design and programming of parallel applications in multiprocessor and cluster environments. The project investigates the Graph-Oriented Programming (GOP) model, which provides high-level abstractions for configuring and programming cooperative parallel processes. Based on GOP, a software environment with various tools has been developed. Many parallel programs can be modelled as a group of tasks performing local operations and coordinating with one another over a logical graph, which depicts the architectural configuration and inter-task communication pattern of the application. Most of these graphs are regular structures such as trees and meshes. With a message-passing library such as PVM or MPI, the programmer must manually translate the design-level graph model into an implementation using low-level primitives. With the GOP model, the graph metaphor is made explicit at the programming level because GOP directly supports the graph construct.
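The graph metaphor described above can be sketched in a few lines: tasks sit on the nodes of a user-defined logical graph and communicate only along its declared edges. This is a minimal illustrative sketch, assuming a hypothetical `GopGraph` class and `send_to_node`/`send_to_neighbours` primitives; it is not the thesis's actual GOP library API.

```python
# Hypothetical sketch of graph-oriented communication: the logical graph
# is declared once, and message passing is expressed in terms of its edges.
from collections import defaultdict, deque

class GopGraph:
    def __init__(self):
        self.edges = defaultdict(set)        # node -> set of neighbour nodes
        self.mailboxes = defaultdict(deque)  # node -> queue of pending messages

    def add_edge(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def send_to_node(self, src, dst, msg):
        # Sending is only permitted along a declared edge of the logical graph,
        # so the design-level structure is enforced at the programming level.
        if dst not in self.edges[src]:
            raise ValueError(f"no edge {src} -> {dst} in the logical graph")
        self.mailboxes[dst].append((src, msg))

    def send_to_neighbours(self, src, msg):
        # A graph-oriented collective: deliver to every neighbour of src.
        for dst in self.edges[src]:
            self.mailboxes[dst].append((src, msg))

    def recv(self, node):
        # Take the oldest pending message addressed to this node.
        return self.mailboxes[node].popleft()

# A regular structure such as a small tree is declared once...
g = GopGraph()
for parent, child in [("root", "left"), ("root", "right")]:
    g.add_edge(parent, child)

# ...and tasks then communicate in terms of that structure.
g.send_to_neighbours("root", "work-unit")
src, msg = g.recv("left")
```

In a real GOP runtime the mailboxes would be backed by MPI point-to-point operations rather than in-process queues; the point here is only that the programmer names graph nodes and edges, not ranks and tags.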
The programmer can configure the structure of a parallel/distributed program as a user-defined logical graph and write the code for communication and synchronization using primitives defined in terms of that graph. The GOP runtime has been implemented on top of MPI, with enhanced communication support and high-level programming under the Multiple Program Multiple Data (MPMD) model. It provides a high-level programming abstraction (the GOP library) for building parallel applications. Graph-oriented primitives for communication, synchronization, and configuration are exposed at the programming level, and their implementation shields the programmer from the low-level activities involved in accessing services through MPI. The programmer can thus concentrate on the logical design of an application, ignoring unnecessary low-level details. We have also built a visual programming interface, called VisualGOP, for the design, coding, and running of GOP programs. VisualGOP applies visual techniques to provide the programmer with automated and intelligent assistance throughout the program design and construction process. It provides a graphical interface with support for interactive graph drawing and editing, visual programming functions, and automation facilities for program mapping and execution. VisualGOP is a generic programming environment, independent of programming languages and platforms. It also addresses graph scalability and interoperability by using XML representations of GOP entities and primitives, so that programs can be deployed and executed on different platforms. Example applications have been developed with the support of our GOP environment. We have observed that the environment eases the expression of parallelism, configuration, communication, and coordination in building parallel applications.
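An XML representation of a GOP graph, as mentioned above, might look like the following. This is purely an illustration; the element and attribute names (`gop-graph`, `node`, `edge`, `program`) are assumptions, not the thesis's actual schema.

```xml
<!-- Hypothetical serialization of a two-node logical graph: each node is
     mapped to a program, and edges define the permitted communication. -->
<gop-graph name="pipeline">
  <node id="0" program="producer"/>
  <node id="1" program="consumer"/>
  <edge from="0" to="1"/>
</gop-graph>
```

Serializing the graph in a platform-neutral format like this is what lets a tool such as VisualGOP deploy the same logical structure onto different platforms.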
Sequential programming constructs blend smoothly and easily with parallel programming constructs in GOP. Using these examples, we evaluated how GOP performs compared with the traditional MPI parallel programming model. The results show that GOP programs are as efficient as MPI programs.
Subjects: Hong Kong Polytechnic University -- Dissertations
Parallel programming (Computer science)
Pages: xiv, 133 leaves : ill. ; 30 cm
Appears in Collections: Thesis
