Adam M. Dziewonski: Research Statement

I am a geophysicist who has spent most of his professional career mapping the deep interior of the Earth. Nearly twenty years ago I developed a method that allows us to see, in three dimensions, features as deep as 1000 miles beneath the Earth's surface. This approach is now called `seismic tomography', because of the analogy with medical tomography. Mapping regions that are anomalously hot or cold is likely to reveal the origin of the driving forces of plate tectonics. The field of seismic tomography has expanded rapidly; some of the developments with which I have been involved are outlined below.

All my formal education was completed in Poland. My choice of geophysics as a field of study combined my interests in earth sciences and quantitative analysis. An interesting experience, while I was still a student at the University of Warsaw, was participation (nine months in 1958--9) in the Polish Expedition to (North) Vietnam during the International Geophysical Year; I was in charge of a geomagnetic station in Sha-Pa, in the mountains near the Chinese border. I arrived in the United States in late 1965 as a post-doctoral fellow at the Southwest Center for Advanced Studies in Dallas (to become the University of Texas at Dallas in 1969) and began work in seismology, measuring surface wave dispersion to study the structure of the upper mantle. In the mid-1960's there were two new elements in seismology: a global seismographic network of some 125 standard instruments, with copies of the records readily available from a data center, and the growing use of computers in analysis. Both were to become the principal tools used throughout my career.

My interest in global seismology began sometime in 1967, when I heard Frank Press talk about using the Monte Carlo method to select, from some 3 million randomly generated models of the Earth, only 5 that satisfied the observations. Even those 5 models were quite different, and it seemed to me that we ought to do better. The key was better data. The Alaskan earthquake of 1964 was recorded by the new global network --- unfortunately only on photographic paper, which led to a two-year-long effort to convert these extremely convoluted records into strings of numbers that could be analyzed in a computer. It was a gamble, but it paid off.

To interpret these records I began a co-operation with Freeman Gilbert of the University of California, San Diego, and this started a very fruitful alliance. Using the data set of nearly 100 digitized seismograms of the Alaskan earthquake, we were able to detect and measure the frequencies of many previously unidentified modes of free vibration of the Earth, correct some of the earlier mis-identifications and, generally, significantly improve the accuracy of the entire data set.

An immediate reward was the ability to present a satisfactory proof that the inner core of the Earth is solid (Dziewonski and Gilbert, 1971). A more subtle development was the tightening of the bounds on allowable Earth models and, in particular, the demonstration that the change in density with radius in the lower mantle and the core could not be far from adiabatic (Dziewonski, 1971; Dziewonski and Gilbert, 1972). This implies that convection in both regions is possible, and indeed highly probable.
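
The adiabaticity criterion at issue is, in essence, the classical Adams--Williamson condition (stated here as my gloss on the standard argument, not as the specific test of the papers cited above): in a chemically homogeneous, adiabatically compressed layer the density gradient satisfies

\[
  \frac{d\rho}{dr} \;=\; -\,\frac{\rho(r)\,g(r)}{\Phi(r)},
  \qquad
  \Phi(r) \;=\; V_P^2(r) - \tfrac{4}{3}\,V_S^2(r),
\]

where the seismic parameter Φ is available directly from the observed wave speeds. A density profile close to this relation is consistent with a well-mixed, convecting layer.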

At about the same time as this work was being completed, Harvard offered me a faculty position and I moved there in 1972. As a part of start-up resources, I was provided with state-of-the-art computerized digitizing equipment, which made the conversion of analog records much easier and more accurate. Even though the study of the records of the Alaskan earthquake was a success, it was clear that much more could be learned about the average structure of the Earth if we could measure higher frequency overtones of the free oscillations of the earth. A giant deep earthquake under Colombia in 1970 presented an unusual opportunity.

The analysis of the few available digital records of this earthquake -- two or three, as digital seismology was still in the early stages of development -- revealed that it was very rich in overtone energy. Even though the digital recordings were clearly superior, the much greater number of available analog records made them, at that time, the more important source of data. A year-long effort yielded 165 digitized recordings (each of some 16--20 hours' duration, on average) of the Colombian earthquake. These were subjected to novel procedures of phase equalization, which allowed us to measure the frequencies of overtones whose amplitudes in individual recordings were far below the noise level (Gilbert and Dziewonski, 1975), and expanded the list of modes with measured frequencies by a factor of three or four, to over 1,000.
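
The gain from this kind of processing can be illustrated by a toy calculation -- a sketch of the general principle of coherent, phase-aligned stacking, not of the actual procedure of Gilbert and Dziewonski (1975); the mode frequency and amplitudes below are invented for the demonstration. If the phase predicted for a mode at each station is removed before the spectra are summed, the mode adds coherently (growing as the number of records, N) while the noise adds incoherently (growing only as sqrt(N)):

    # Toy phase-equalized stack: a mode invisible in any single noisy
    # record emerges in the stack once each record's predicted phase is
    # removed, because the signal sums coherently and the noise does not.
    import numpy as np

    rng = np.random.default_rng(2)
    dt = 10.0                              # 10-s sampling
    t = np.arange(0.0, 60000.0, dt)        # ~16.7-hour records
    f_mode = 1.5e-3                        # hypothetical overtone frequency, Hz
    n_records = 165

    stack = np.zeros(t.size // 2 + 1, dtype=complex)
    for _ in range(n_records):
        phi = rng.uniform(0.0, 2.0 * np.pi)       # predicted phase, assumed known
        record = 0.02 * np.cos(2.0 * np.pi * f_mode * t + phi) \
                 + rng.normal(0.0, 1.0, t.size)   # mode 50x weaker than noise
        stack += np.exp(-1j * phi) * np.fft.rfft(record)  # equalize, then stack

    freqs = np.fft.rfftfreq(t.size, d=dt)
    print("recovered peak at %.3f mHz" % (1e3 * freqs[np.argmax(np.abs(stack))]))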

Yet the most important result was not originally anticipated. In the course of this work, which required knowledge of the equivalent forces acting at the focus, it became clear that a much more general approach to retrieving information about the earthquake mechanism and its time history was possible. Rather than assuming, as had been the usual practice, that an earthquake is a failure of the material on one of two perpendicular planes (the double-couple mechanism), it is possible to retrieve the source parameters in the form of a moment tensor, which allows for forms of stress release other than pure shear. This seemed particularly important for deep earthquakes, whose origin was, and still is, poorly understood.

Inversion of the Colombian data for the moment rate tensor yielded a surprise: a slow compressional event precursory to the main shear failure (Dziewonski and Gilbert, 1974). This result remains controversial to this day, mostly because it is difficult to estimate the effect of lateral heterogeneity and mode coupling, but the retrieval of source mechanisms in general form as a linear inverse problem became universally accepted. Inversion for Earth structure and the seismic source can thus be combined, and this -- in addition to the specific modeling results -- was the conceptually important result (Gilbert and Dziewonski, 1975).
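
The linearity can be made explicit with a minimal sketch (synthetic throughout: the random matrix G below merely stands in for real excitation kernels, or Green's functions, which would come from a theoretical Earth model). Each seismogram is a linear combination of the responses to the six independent elements of the symmetric moment tensor, so the tensor follows from ordinary least squares:

    # Minimal sketch of moment-tensor retrieval as a linear inverse problem.
    # G[r, :, i] holds the response of record r to a unit i-th tensor element.
    import numpy as np

    rng = np.random.default_rng(1)
    n_records, n_samples = 12, 2000

    m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.2, -0.1])  # six independent elements

    G = rng.normal(size=(n_records, n_samples, 6))        # synthetic kernels
    data = G @ m_true + rng.normal(0.0, 0.5, (n_records, n_samples))

    # Stack all records into one overdetermined system and solve.
    A = G.reshape(-1, 6)
    m_est, *_ = np.linalg.lstsq(A, data.reshape(-1), rcond=None)

    print("true:", m_true)
    print("est :", np.round(m_est, 2))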

Attempts to reproduce the observed seismograms using the average Earth structure and the seismic source led me to the view that in modeling the Earth we must progress from one to three dimensions. This need had been appreciated for some time for shallow structure (first the crust, then the top several hundred kilometers), but there were only vague indications that the same could apply to the lower mantle (700--2900 km depth).

An immense collection of data that could be used for this purpose is contained in the Bulletins of the International Seismological Centre, which -- in the early days -- included reports of phase arrival times from some 1,000 globally distributed stations for roughly 10,000 earthquakes a year. Of course, most of these were very small events with data from only a few stations. The idea was that the travel time anomalies observed for many ray paths, criss-crossing the Earth between various points near the Earth's surface and reaching different depths in its interior, could be resolved formally into a three-dimensional (3-D) model. This is now called `seismic tomography', as it conceptually resembles the medical CAT-scan. The early results were reported orally in 1974 and 1975, and a full report was published in January 1977 (Dziewonski et al., 1977).
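
In its simplest form the formal problem is linear. Here is a minimal sketch, under a crude block parameterization, with a random ray geometry standing in for real ray tracing: each observed anomaly is the sum, over the blocks a ray crosses, of path length times the local slowness perturbation, and the block values are recovered by damped least squares.

    # Minimal travel-time tomography sketch: d = G m, where G[i, j] is the
    # length of ray i inside block j and m holds slowness perturbations.
    import numpy as np

    rng = np.random.default_rng(0)
    n_blocks, n_rays = 50, 400

    # Random sparse geometry: each ray samples ~20% of the blocks.
    G = rng.random((n_rays, n_blocks)) * (rng.random((n_rays, n_blocks)) < 0.2)

    m_true = rng.normal(0.0, 0.01, n_blocks)         # ~1% slowness anomalies
    d = G @ m_true + rng.normal(0.0, 0.005, n_rays)  # residuals + reading errors

    # Damped least squares: minimize ||G m - d||^2 + eps^2 ||m||^2.
    eps = 1.0
    m_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_blocks), G.T @ d)

    print("correlation with true model: %.2f" % np.corrcoef(m_true, m_est)[0, 1])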

The motivation for studying the 3-D structure of the Earth's interior is that it may offer the best information on the dynamic processes in the deep interior of the Earth. As seismic wave speeds change with temperature, it seemed plausible that we could obtain 3-D snapshots of the convection pattern in the Earth. Of course, compositional variations, which would lead to different velocity--density relationships, could not be excluded a priori. The initial paper addressed this issue by trying to relate the origin of very long wavelength gravity anomalies to the velocity anomalies in the lower mantle, and we obtained a statistically significant correlation.

A lasting result of the late 1970's was a reference earth model constructed in cooperation with Don Anderson of Caltech. Even though 3-D inversions were becoming possible, the lateral variations are measured as relatively small (~1%) deviations from the spherically symmetric average, so a good reference model is essential. Model PREM (Dziewonski and Anderson, 1981) remains a standard to this day, even though the need for an update is clear.

Progress in digital seismology and, in particular, the global deployment of highly sensitive seismometers made it possible to apply the inversion of seismic waveform data for the moment tensor not only to great earthquakes, but also to the much more numerous smaller events. The centroid--moment tensor (CMT) technique (Dziewonski et al., 1981) turned out to be so robust that data from very few stations suffice to obtain a reliable estimate of the mechanism and location of a seismic source. With some improvements, the same method is still used to investigate global seismicity in a routine fashion (various publications). Over 13,000 earthquakes, covering the years 1976 through 1996, have been analyzed using the CMT method, and this database is frequently used by the seismological community.

The initial CMT paper was my first cooperative effort with John Woodhouse, now at Oxford. Most of the work done in seismology at Harvard during the 1980's was affected by his presence. Examination of tens of thousands of records in the process of obtaining the CMT solutions focused our attention on the opportunity to use the waveform data to improve our knowledge of the 3-D structure of the mantle. The theory and its application to the retrieval of the upper mantle structure are described in Woodhouse and Dziewonski (1984); the validity of the model presented there has been confirmed by numerous later studies. This upper mantle model was accompanied by a 3-D model of the lower mantle, obtained by a new analysis of the ISC data (Dziewonski, 1984); that model, too, has withstood the test of time. Thus, in a single issue of the Journal of Geophysical Research, a baseline for global seismic tomography was established.

To complete this first sweep through the Earth's interior, we investigated the topography of the core--mantle boundary (Morelli and Dziewonski, 1987) and the properties of the inner core (Morelli et al., 1986). Our finding that the CMB topography is characterized by undulations with an amplitude of several kilometers remains controversial: it is consistent with geodynamic calculations, but other seismologists have had difficulty fully reproducing our result. The discovery that the inner core is anisotropic, with waves propagating faster in the direction parallel to the rotation axis than in the equatorial plane, has been confirmed by others, but the details remain sketchy because of inadequate ray-path coverage. It has been suggested that the observed type of anisotropy, consistent with that of iron in the phase appropriate to inner-core pressures, could be related either to the way in which the crystals are formed --- as the inner core grows by freezing --- or to very large scale convection in the inner core.

Current tomographic studies at Harvard are directed towards improving the resolution of tomographic models. An exciting new result is that the spectrum of lateral heterogeneity is indeed dominated by long-wavelength features. This had been assumed in earlier studies, and had drawn some criticism and skepticism. Investigation of the spectra of several observables, sensitive to the properties of the Earth at different depths, leads to the conclusion (Su and Dziewonski, 1991) that lateral heterogeneity is indeed dominated by features with wavelengths greater than about 6,000 km; the middle mantle -- where the amplitudes are small -- may be an exception. This means that there is a valid reason to limit the expansion to relatively low orders. The improvement in resolution is achieved by combining waveform and travel time data (Morelli and Dziewonski, 1991; Su et al., 1992), as well as by incorporating waveform data from new seismographic networks.
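
To connect the 6,000-km figure to the order of the expansion (assuming the standard spherical-harmonic parameterization of global models), the horizontal wavelength at the Earth's surface (radius a ≈ 6371 km) associated with harmonic degree ℓ is approximately

\[
  \lambda_\ell \;\approx\; \frac{2\pi a}{\sqrt{\ell(\ell+1)}}
  \;\approx\; \frac{40{,}000\ \mathrm{km}}{\ell + \tfrac12},
\]

so wavelengths greater than about 6,000 km correspond to degrees ℓ of about 6 and below, and a truncation at low degree captures the dominant part of the heterogeneity.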

The other actively pursued direction is the application of tomographic results to other fields of the earth sciences. In particular, efforts are being made to use tomographic results to explain geodynamic observables such as the geoid, plate motions, CMB topography and surface topography (Dziewonski and Woodward, 1992; Dziewonski et al., 1993; Su et al., 1992; Peltier et al., 1992). An outstanding and urgent problem is the determination of a 3-D model of compressional velocities with the same resolution with which the shear velocity distribution is known. We know that the largest scale features are similar, but the P- and S-velocity models differ in some details. Resolving this problem is critical to distinguishing between a thermal and a compositional origin of the velocity anomalies.
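
A common diagnostic here -- my gloss, with numbers that are frequently quoted estimates rather than results from the papers cited above -- is the ratio of the relative velocity anomalies,

\[
  R \;=\; \frac{\delta \ln V_S}{\delta \ln V_P},
\]

since mineral-physics estimates for a purely thermal origin give R of roughly 1.7--2 in the shallow mantle, increasing somewhat with depth; values well in excess of these are difficult to explain without compositional variations.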

Department of Earth and Planetary Sciences / Harvard University / 20 Oxford Street / Cambridge / MA 02138 / U.S.A. / Telephone: +1 617 495 2350 / Fax: +1 617 496 1907 / Email: reilly@eps.harvard.edu