PSPI3D (Phase Shift Plus Interpolation)
Principal Contact Person and Organization (including e-mail address):
Ernesto Bonomi, Head of Geophysics Area, CRS4, Sardinia, Italy
ernesto@crs4.it
Brief Description of Application:
A post-stack 3D seismic depth migration code for imaging an inhomogeneous subsurface with a spectral approach. The depth extrapolation of the seismic stack is
achieved by a phase shift for a set of constant reference velocities, chosen by an information-theoretic criterion, followed by an interpolation. The
availability of efficient, concurrent FFT library routines is critical.
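As a rough, hypothetical sketch of one depth step (the routine names, interfaces, bracketing logic and evanescent-wave treatment are simplified assumptions, not the production code), the two kernels can be written in Fortran as follows; the caller applies an inverse 2D FFT from (kx, ky) to (x, y) to each reference field between the two calls:

! (a) phase shift of one frequency slice for each reference velocity
subroutine phase_shift(p, pref, kx, ky, omega, vref, dz, nx, ny, nref)
  implicit none
  integer, intent(in)  :: nx, ny, nref
  complex, intent(in)  :: p(nx, ny)            ! wavefield slice, (kx, ky) domain
  complex, intent(out) :: pref(nx, ny, nref)   ! one shifted copy per reference velocity
  real,    intent(in)  :: kx(nx), ky(ny), omega, vref(nref), dz
  real    :: kz2
  integer :: i, j, m
  do m = 1, nref
     do j = 1, ny
        do i = 1, nx
           kz2 = (omega / vref(m))**2 - kx(i)**2 - ky(j)**2
           if (kz2 > 0.0) then
              pref(i, j, m) = p(i, j) * exp(cmplx(0.0, sqrt(kz2) * dz))
           else
              pref(i, j, m) = (0.0, 0.0)       ! evanescent energy suppressed
           end if
        end do
     end do
  end do
end subroutine phase_shift

! (b) pointwise interpolation of the space-domain reference fields,
!     driven by the local velocity v(x, y) at the current depth
subroutine interpolate(p, pref, vref, vloc, nx, ny, nref)
  implicit none
  integer, intent(in)  :: nx, ny, nref         ! nref >= 2 assumed
  complex, intent(out) :: p(nx, ny)            ! extrapolated slice, (x, y) domain
  complex, intent(in)  :: pref(nx, ny, nref)   ! reference fields after inverse FFT
  real,    intent(in)  :: vref(nref)           ! sorted in increasing order
  real,    intent(in)  :: vloc(nx, ny)
  real    :: w
  integer :: i, j, m
  do j = 1, ny
     do i = 1, nx
        m = 1                                  ! find the bracketing pair vref(m), vref(m+1)
        do while (m < nref - 1 .and. vref(m + 1) < vloc(i, j))
           m = m + 1
        end do
        w = (vloc(i, j) - vref(m)) / (vref(m + 1) - vref(m))
        w = max(0.0, min(1.0, w))              ! clamp outside the reference range
        p(i, j) = (1.0 - w) * pref(i, j, m) + w * pref(i, j, m + 1)
     end do
  end do
end subroutine interpolate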
Number of Lines of Code: 3000
Target Platforms and HPF Compilers Used:
IBM SP2, SGI PowerChallenge and SGI/Cray Origin 2000; homogeneous cluster of IBM RS/6000 or SGI workstations; pghpf (The Portland Group, Inc.) compiler
Coding Styles (data decompositions, computational methods):
The computational domain is a regular, uniform, rectangular 3D mesh, both for the seismic stack in the wavenumber-frequency domain and for the velocity model (x, y, z). The CYCLIC distribution is adopted for the frequency axis only (preferred over BLOCK for better load balancing). The only phase relevant for communication (apart from I/O) is the so-called imaging, i.e. a sum over the frequencies. The FFTs are 2D along the collapsed axes (xy or wavenumber space) and concurrent across the frequencies. The selection of the reference velocities, performed once before the migration loop, involves reduction operations related to the calculation of a frequency histogram.
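As a hypothetical fragment (array names, extents and the histogram binning are placeholders, not taken from the code), the layout and the two collective operations described above might look as follows in HPF:

program layout_sketch
  implicit none
  integer, parameter :: nx = 256, ny = 256, nw = 512, nbin = 64
  complex :: stack(nx, ny, nw)                 ! seismic stack, (kx, ky, omega)
  real    :: image(nx, ny)                     ! depth slice being imaged
  real    :: vel(nx, ny)                       ! velocity model at the current depth
  integer :: hist(nbin), ib
!HPF$ DISTRIBUTE stack(*, *, CYCLIC)           ! only the frequency axis is distributed
!HPF$ ALIGN image(i, j) WITH stack(i, j, *)    ! image replicated along the frequencies

  stack = (0.0, 0.0)
  vel   = 2000.0

  ! Reference-velocity selection (once, before the migration loop):
  ! a histogram of the velocity values, built with reduction operations.
  do ib = 1, nbin
     hist(ib) = count(int(vel / 100.0) + 1 == ib)
  end do

  ! Imaging condition at a given depth: the sum over the distributed
  ! frequency axis, the one phase (besides I/O) that communicates.
  image = real(sum(stack, dim=3))

  print *, image(1, 1), hist(21)
end program layout_sketch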
Extrinsic Interfaces Used (and reasons):
F77_LOCAL has been adopted to call the sequential FFT library routines acting on local data.
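A minimal sketch of the general F77_LOCAL pattern (the routine name, argument list and the way the local extent is passed are assumptions for illustration, not the actual library interface):

! EXTRINSIC(F77_LOCAL) tells pghpf that the routine is sequential Fortran 77
! and that each processor passes it only the locally owned piece of the
! distributed array, so the 2D FFTs run concurrently across the frequencies.
interface
   extrinsic (f77_local) subroutine local_fft2d(a, nx, ny, nw_local)
      integer :: nx, ny, nw_local              ! nw_local: frequencies owned locally
      complex :: a(nx, ny, nw_local)           ! local slices of the stack
   end subroutine local_fft2d
end interface

! ... inside the migration loop, on the CYCLIC-distributed stack:
! call local_fft2d(stack, nx, ny, nw_local)    ! sequential FFTs on local data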
Performance Information, if Available (including any possible comparisons to MPI and/or OpenMP):
- The memory requirement of the executable is much larger than
simple estimates based on the large multidimensional arrays declared in
the HPF source (estimates which, in turn, are close to the memory required by
the corresponding executables compiled from Fortran 90 + MPI calls);
- the single-processor performance of most routines or code segments
involving purely concurrent operations is disappointing.
URL