Adaptive version of Chris Anderson's algorithm
Principal Contact Person and Organization (including e-mail address):
Contact Person : Dr. Lennart Johnsson
Organization : University of Houston, Houston, TX.
Email Address : johnsson@cs.uh.edu
Contact Person : Dr. Charlie Hu
Organization : Rice University
Email Address : ychu@cs.rice.edu
Brief Description of Application:
We have implemented an algorithm in HPF for the rapid evaluation of the potential and force
fields in systems involving large numbers of particles whose interactions are Coulombic or
gravitational in nature. The code is based on Dr. Chris Anderson's hierarchical O(N) N-body
algorithm, which is derived from computational elements based on Poisson's formula. In a system
of N particles, work of order O(N^2) has traditionally been required to evaluate all pair-wise
interactions unless some approximation or truncation method is used. Dr. Chris Anderson's
algorithm requires an amount of work proportional to N to evaluate all interactions, making it
considerably more practical for large-scale problems encountered in plasma physics, fluid
dynamics, molecular dynamics, and celestial mechanics. Our code has been integrated into a
molecular dynamics simulation package.
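For reference, the direct pairwise summation that the hierarchical algorithm replaces can be
written as the following minimal Fortran sketch of the O(N^2) baseline (illustrative only; the
routine and array names are not taken from the application code):

    ! Direct O(N**2) evaluation of a Coulombic/gravitational potential.
    ! x(1:3,1:n) holds particle coordinates, q(1:n) the charges or masses,
    ! and phi(1:n) receives the potential at each particle.
    subroutine direct_potential(n, x, q, phi)
      implicit none
      integer, intent(in)  :: n
      real(8), intent(in)  :: x(3,n), q(n)
      real(8), intent(out) :: phi(n)
      integer :: i, j
      real(8) :: r
      phi = 0.0d0
      do i = 1, n
         do j = 1, n
            if (i /= j) then
               r = sqrt(sum((x(:,i) - x(:,j))**2))
               phi(i) = phi(i) + q(j) / r
            end if
         end do
      end do
    end subroutine direct_potential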
Number of Lines of Code: 8552
Target Platforms and HPF Compilers Used:
IBM SP2 and SGI Origin 2000, using The Portland Group's 'pghpf' compiler.
Coding Styles (data decompositions, computational methods):
The code is the first ever to implement adaptive hierarchical methods purely in HPF. The key
innovation of our approach that enables such an implementation is a flattening technique that
embeds the highly irregular data structures of the method into regular array data structures.
To achieve load balance, the irregular nature of the application demands uneven array
distribution, a feature not yet supported by any HPF compiler. As a solution, we emulate
uneven array distribution by padding the arrays with dummy elements, as sketched below.
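A minimal HPF sketch of the padding idea (the names and sizes are illustrative and not taken
from the actual code): each processor's segment of the flattened array is padded to the length
of the largest segment, so a standard BLOCK distribution places exactly one logical segment on
each processor even though the segments differ in length.

    ! Emulating an uneven distribution with padding (illustrative only).
    ! seg_len(p) is the number of real elements owned by processor p;
    ! maxlen is the largest segment, so block p holds maxlen slots and
    ! its last (maxlen - seg_len(p)) entries are dummy elements.
    integer, parameter :: nprocs = 16, maxlen = 4096
    real(8)            :: work(maxlen * nprocs)
    integer            :: seg_len(nprocs)
    !HPF$ PROCESSORS procs(nprocs)
    !HPF$ DISTRIBUTE work(BLOCK) ONTO procs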
Extrinsic Interfaces Used (and reasons):
HPF_LOCAL: to facilitate calling our own gather/scatter routines. F77_LOCAL: to facilitate
calling the ESSL matrix-vector multiplication routine.
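An HPF_LOCAL routine is made visible to the HPF caller through an explicit interface so that
each processor operates on its local portion of the distributed arrays. A minimal sketch of such
a declaration (the routine name and arguments are illustrative, not the application's actual
gather/scatter interface):

    ! Illustrative declaration of a local gather routine called from HPF.
    interface
       extrinsic(hpf_local) subroutine local_gather(src, idx, dst)
         real(8), dimension(:), intent(in)  :: src
         integer, dimension(:), intent(in)  :: idx
         real(8), dimension(:), intent(out) :: dst
    !HPF$    DISTRIBUTE (BLOCK) :: src, idx, dst
       end subroutine local_gather
    end interface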
Performance Information, if Available (including any possible comparisons to MPI and/or OpenMP):
On 16 wide nodes of a 67 MHz IBM SP2, the evaluation of 1 million particles with a
Plummer distribution takes around 400 seconds.
Please comment on any aspects of the application that might be interesting, including any problems using HPF effectively:
1. The HPF code pioneers data-parallel formulations of highly irregular applications
purely in HPF constructs.
2. Uneven array distribution is emulated to support load balancing.