Analytical methodology in the Applied Geochemistry Research Group (1950–1988) at the Imperial College of Science and Technology, London

ABSTRACT This paper reviews the development of analytical methodology in the Applied Geochemistry Research Group (AGRG), during its lifetime from 1950 to 1988 and by scientists remaining at Imperial College after the Group's disbanding. This period encompasses a time in which extraordinary advances in electronics and computing technology revolutionized analytical science and, correspondingly, the scope of applied geochemistry. The analytical requirements of applied geochemistry were such that new ideas were eagerly sought and exploited and, as a consequence, the AGRG was often in the forefront of these developments. After the AGRG was disbanded and the analytical staff dispersed, the impetus of these changes was carried into other fields. As an outcome, the philosophy that evolved in the early days of AGRG is now, in a developed form, influencing analytical chemists and their customers over a variety of application sectors much broader than geochemistry. Supplementary material: Results of chemical analysis on geochemical field courses are available at http://www.geolsoc.org.uk/SUP18427.


FOREWORD
In the spring of 1960, I was following an undergraduate module in instrumental analysis in the Chemistry Department of Imperial College. By chance I was present during a telephone conversation between Dr John Herringshaw (lecturer in instrumental analysis) and Professor John Stuart Webb, founder and driving genius of the Geochemical Prospecting Research Centre (GPRC) (renamed the Applied Geochemistry Research Group (AGRG) in 1965). (In this paper I will use AGRG to include the GPRC.) John Webb wanted to know whether mass spectrometry was a reasonable prospect for developing into a quantitative multi-element method for rapid and inexpensive analysis. Herringshaw's answer was an unequivocal 'no'.
At the time it was quite impossible to foresee the amazing advances in electronics and computing that were about to occur, and the effect that this would have on analytical technology. So John Herringshaw was correct. Indeed, rapid and accurate multi-element analysis by mass spectrometry would not become a practical prospect until the mid-1980s.
Even then, the instruments remained prone to frequent malfunction and lengthy downtime and this state of affairs changed only quite recently. It is therefore a pleasant reflection that the technique, conceived of by John Webb well before its time, is now probably the most accurate method available for elemental analysis, with rapid analysis time and extraordinarily low detection limits. In my view, it will soon displace nearly all competing methods from the analyst's repertoire.

ANALYTICAL METHODOLOGY, DATA QUALITY AND GEOCHEMICAL INFORMATION
Reliable chemical analysis on a huge scale is the sine qua non of applied geochemistry. John Webb and the other founding pioneers of the AGRG discovered at an early stage that the traditional rallying cry 'Accuracy and Precision' (the motto on the coat of arms of the Society for Analytical Chemistry (UK) until its amalgamation with the Royal Society of Chemistry), of analytical chemists and pure geologists alike, had to be thrown out in favour of what we now call 'fitness for purpose' (Thompson & Ramsey 1995) if the whole enterprise was going to be economically feasible. This insight followed from the realization that more information (in the technical sense) could often be obtained for the same amount of money by taking more samples and spending less on analytical accuracy. This is because uncertainty derived from sampling is notoriously high in geochemistry. From the principles of error propagation, the uncertainty in the analytical result has a negligible effect on the combined uncertainty of the final result unless the analytical uncertainty is greater than about one third of the sampling uncertainty. In short, there is no point in paying a premium for high accuracy analysis when low accuracy will achieve the same ends (Tooms 1959; Webb 1970; Webb & Thompson 1977). It is difficult now to appreciate what a revolutionary precept this was at the time, and how bitterly it was derided and shunned by purists. Lower analytical accuracy and the concomitant cost-saving could be obtained by simplifying methods to their bare essentials, reducing to a minimum the conventional chemical manipulations. This could be done, for instance, by carrying out the whole procedure in one vessel, usually a test-tube, and by avoiding filtration. Volumes of reagents could be measured out by rapid dispensers rather than the traditional pipettes and volumetric flasks. The sensitivity of methods could be maximized by restricting the dilution of the prepared test solution.
Test-tubes occupied far less space than beakers, and could be processed simultaneously in large numbers in heating blocks of various kinds. Measurement could be conducted by visual comparison with standards rather than by the more precise spectrophotometry.
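The error-propagation argument can be made concrete with a short calculation. The sketch below (in Python, with purely illustrative numbers, none of which come from AGRG work) combines independent sampling and analytical uncertainties in quadrature: when the analytical standard deviation is one third of the sampling standard deviation, the combined uncertainty is inflated by only about 5%.

```python
import math

def combined_uncertainty(s_sampling: float, s_analysis: float) -> float:
    """Combine independent sampling and analytical uncertainties in quadrature."""
    return math.hypot(s_sampling, s_analysis)

s_samp = 30.0                      # sampling standard deviation, ppm (illustrative)
for s_anal in (10.0, 15.0, 30.0):  # analytical standard deviations to compare
    s_tot = combined_uncertainty(s_samp, s_anal)
    inflation = 100.0 * (s_tot / s_samp - 1.0)
    print(f"s_anal = {s_anal:4.1f}  s_total = {s_tot:5.2f}  inflation = {inflation:4.1f}%")
```

At s_anal = 10 (one third of s_samp) the combined uncertainty rises by only c. 5%, so money spent on higher analytical accuracy buys almost nothing; even matching the sampling uncertainty inflates the total by only c. 41%.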
Using such an approach, analysts could obtain a measurement result with a quite acceptable repeatability relative standard deviation (RSD) of 10%. An RSD even as high as 20% would often be serviceable. In these ways, and with feedback from field geologists, the results from geochemical analysis evolved towards the most appropriate level of uncertainty. This 'fitness for purpose' was at first a qualitative notion that could not be specified exactly, as it lacked a clear conceptual framework. Sampling and analytical strategies that were fit for purpose emerged by a kind of evolutionary process, based on the professional experience of both exploration geologists and analytical chemists.
But there was a problem with such rapid methods. An RSD of 20% on the result for the concentration of an analyte may well be fit for purpose in mineral exploration based on closely spaced samples, but it is perilously close to the detection limit, a concentration below which the uncertainty in the result and the result itself are of comparable magnitude. Unless carefully controlled, these short-cut analytical methods could degenerate into random number generators. This was especially the case when the analysis was carried out by trained but chemically inexperienced geologists, in the field or in temporary field laboratories where conditions were far from ideal. Methods had to be devised for ensuring that the procedure had been adequately carried out in every run of analysis.
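A detection limit of this kind is commonly estimated from replicate blank determinations. The sketch below uses the conventional three-sigma criterion (an assumption here, since the text defines the limit only qualitatively), with invented blank values:

```python
import statistics

# Replicate blank determinations, ppm (invented values for illustration).
blanks = [0.8, 1.2, 0.9, 1.1, 1.0, 1.3, 0.7]
s_blank = statistics.stdev(blanks)

# Conventional criterion: the detection limit is three blank standard deviations,
# below which a result cannot be reliably distinguished from zero.
detection_limit = 3.0 * s_blank
print(f"detection limit ~ {detection_limit:.2f} ppm")
```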
Such quality-related practices in analytical chemistry were poorly developed in the early days of the AGRG, as were the underlying notions about uncertainty of measurement. Statistics was not taught to undergraduate students of geology or chemistry. We have to remember also that the first reference material was not available until 1951 (Fairbairn 1951). In 1973 it was possible to open a paper on analytical quality control with the words 'It is rare to encounter analytical laboratories in which precision is regularly measured and controlled' (Thompson & Howarth 1973). The rare exceptions were, of course, those laboratories involved in applied geochemistry. The methods employed and devised in the AGRG became progressively more informative, and on a firmer conceptual foundation, as time passed (Ramsey et al. 1992; Thompson 1992). Analysis became more rapid and instrumentally-based, so that collecting and collating quality control data became easier as the computing power available increased by leaps and bounds. These quality control methods developed in the AGRG eventually made an important contribution to data quality in virtually all applications of chemical analysis.
A consistent trend in applied geochemical analysis has been the increasing use of multi-element methods. These methods go far beyond simply saving labour by measuring concentrations of numerous elements simultaneously. By providing results for the major constituents as well as a wide range of trace elements, the methods enabled geochemists to interpret the variations of the analytes of primary interest in a complete geochemical context. In combination with computing power, multi-element methods opened for the geochemist the potential for a whole new world of interpretation in terms of underlying geochemical processes.

ANALYTICAL METHODS IN APPLIED GEOCHEMISTRY
A full account of analytical methods used in applied geochemistry to 1989 has been published (Van Loon & Barefoot 1989) and details of instruments and procedures will not be revisited here. Broadly speaking, the tools available in the pioneering days of applied geochemistry were colorimetric methods (often disseminated via the many 'Technical Communications' of the GPRC and AGRG, issued between 1956 and 1965) and photographic spectrography. X-ray fluorescence became available in the mid-1950s, but never became a main part of the trace analytical repertoire in applied geochemistry because of its poor detection limits. Polarography could supply the detection limits, but not the speed required or freedom from interference in typically complex geochemical matrices. In the mid-1960s the first commercial flame atomic absorption spectrometers (FAAS) became available and rapidly displaced most of the colorimetric methods. FAAS was quicker than colorimetric methods, much more precise, covered a wider range of elements, was less prone to interference and did not involve the use of unstable reagents, toxic buffers and organic solvents. At about the same time, the use of direct reading spectrographs began to look feasible, simply by transferring methods developed for photographic spectrography to the new type of instrument. Inductively coupled plasma (ICP) atomic emission spectrometry (ICP-AES), a simultaneous multi-element method, was first described by Greenfield et al. (1964), but the first complete commercial instrument was not available until the mid-1970s. ICP-AES could provide detection limits at least comparable with those from FAAS for most metallic elements, and considerably better for metals that formed refractory oxides and for many non-metallic elements (Thompson & Walsh 1983).
The use of the inductively coupled plasma as an ion source for mass spectrometry was first demonstrated in the early 1980s and commercial instruments with quadrupole mass analysers became available shortly afterwards. The detection limits were very low, below 1 µg l⁻¹ for nearly all elements, but early instruments were temperamental, and interference from molecular ions was sometimes problematic. These instruments have gradually improved in reliability as design of the interface evolved, and in detecting power and mass resolution as other types of mass analysers were used. Unfortunately, inductively coupled plasma mass spectrometry (ICP-MS) was not practicable as a routine tool during the lifetime of the AGRG.

ANALYTICAL METHODS IN THE APPLIED GEOCHEMISTRY RESEARCH GROUP
Until about 1966, geochemists in the AGRG (as elsewhere) relied on the two established types of analytical method, colorimetry and spectrography. The colorimetric methods were ingeniously adapted so that they could be executed quickly, without mains services (i.e. gas, electricity, water), and used in the field. Field geologists enjoyed the independence from base camps and the on-the-spot information that was provided. AGRG provided a string of newly adapted and improved colorimetric methods for a variety of elements, notably through the efforts of Stanton & MacDonald (e.g. 1961a (Sn); 1962 (Sb); 1964 (Au); 1965 (Se); 1966 (B)). By the mid-1960s these methods had reached their fully-developed form (Stanton 1966), although some of them were actually rather difficult to carry out successfully. Some attempt was made to automate these methods, but never with complete success (Stanton & MacDonald 1961b, 1963). However, the upcoming instrumental methods of analysis needed the expert attention of an analytical chemist and the reliable mains services and protection from the weather that could only be provided in a laboratory. Do-it-yourself analysis in the field had had its day, at least for the time being. (There have been 'portable' X-ray fluorescence instruments available since about 1970. However, the early models were large, prone to interferences and could only determine single elements. They relied on radioactive sources to excite the emission of X-rays. Recently, compact hand-held instruments, using excitation by miniature X-ray tubes, have become available, so we may see a partial return to analysis in the field, often indeed in situ.) By 1965, photographic spectrography had also reached its final form (Nichol & Henderson-Hamilton 1965) and reached a peak in productivity of about 80 000 determinations per year.
However, AGRG remained at the forefront of analytical developments with the successive introduction of: (a) FAAS; (b) direct reading spectrography; (c) ICP-AES (with conventional nebulization); (d) laser ablation for mobilizing test material into the plasma; and (e) continuous hydride generation for the determination of elements that formed gaseous hydrides, namely Ge, Sn, As, Sb, Bi, Se, and Te. These developments are discussed below. The flow of analytical publications stemming from AGRG is shown in Figure 1.

ANALYSIS FOR THE WOLFSON GEOCHEMICAL ATLAS PROJECT
A defining event for the whole of the AGRG was the Wolfson Atlas project, the creation of a geochemical atlas of the whole of England and Wales by analysing c. 50 000 stream sediments taken at an average density of one per square mile (Webb et al. 1978). The outcome of this project (funded by the Wolfson Foundation) was to be the first detailed geochemical atlas on a national scale in the world (Garrett et al. 2008), and no analytical task of comparable magnitude had ever been undertaken before. The project required an analytical effort of about 24 person-years. The principal analytical tool was an adapted Quantometer, a direct reading spectrometer designed originally for steel analysis by Applied Research Laboratories (ARL) of Luton, England and Glendale, CA, USA. The basic instrument was an arc/spark spectrograph with 35 channels, each channel comprising a slit to isolate a selected spectral line at the prime focus of the grating, a photomultiplier, a signal integrator, and a separate analogue computer to store the curved calibration functions as a set of straight-line segments. The output of each analogue computer was multiplexed into a digital voltmeter to present the concentrations directly. For the Atlas project, however, an output onto punched cards was required, so that the data could be input readily into a mainframe computer and thence to magnetic tape for subsequent map-making. (That was the only practicable means of downloading data at the time.) This unique output requirement demanded a custom-built interface connecting the digital voltmeter in parallel to a long-carriage automatic typewriter and an IBM card punch. Unfortunately, such a complex system proved to be very prone to breakdown. Moreover, it was difficult to get repaired because of demarcation disputes between the four different firms involved. This unreliability added substantially to the time required for analysis in the Wolfson project.
In the event, the spectrographic analysis required two years for completion.
The Quantometer system, serviced by five technicians who prepared and ran the samples, was capable of analysing 200 samples per working day. The analytical procedure was a 90-second burn of a mixture of the sample (ashed, but otherwise in the original minus 80-mesh powder) with sodium fluoride and carbon black. The mixture was packed into a cavity in the cathode of 6-mm Ringsdorf carbon electrodes. These conditions were devised by the AGRG's analytical chemists, Robert Foster and Ernest Newman, to maximize the number of elements of interest that could be determined under a single set of spectrochemical conditions. Unfortunately (as with all compromise systems), some elements of potential interest could not be determined, notably metals with refractory oxides such as Ti and Zr. In addition, certain key elements (namely As, Mo, Cd and Zn) gave unacceptably erratic results in the trial run. Additional methods had to be devised to provide results for these key elements in the Wolfson Atlas.
This analytical set-up had been used previously in a trial run to produce the 'Geochemical Atlas of Northern Ireland' (Webb et al. 1973) based on c. 4800 stream sediment samples. This drew attention to the problem of interference effects in spectrographic data, that is, the analytical results for a particular element were being affected by the concentration of major constituents, such as Ca, that were present in the particular sample. As the compositions of the samples varied widely, the problem could not be remedied by matrix-matching the test materials and the calibrators. The nature of these effects and their correction were not understood or documented at the time, but they were in some instances of a magnitude that would be unacceptable in a national geochemical atlas. Accordingly a post-analysis software system was used to modify the raw results (c_i) to corrected results (c′_i), according to the formula c′_i = (c_i − Σ_j T_ij c_j)/(1 − Σ_j R_ij c_j), where T_ij is the coefficient for the translational effect of the jth interferent on the ith analyte, and R_ij is the corresponding coefficient for rotational effects. (A translational effect is caused by a loss of analyte, or change in analytical signal, that is independent of the concentration of the analyte in a series of test materials with identical matrices subjected to identical treatments. A rotational effect is caused by the loss of analyte, or change in analytical signal, that is proportional to the concentration of the analyte in a series of test materials with identical matrices subjected to identical treatments.) The concentration c_j of the interferent was determined in the same spectrographic burn. Estimation of the T and R coefficients delayed the start of the Atlas project by two months.
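The correction can be sketched as a small routine. The coefficient values and the Ca-rich example below are hypothetical (the actual Atlas software ran on a mainframe), but the arithmetic follows the formula in the text:

```python
def correct_result(c_raw, interferents, T, R):
    """Correct one raw spectrographic result for interference effects.

    c_raw        raw concentration for the analyte i
    interferents {element: concentration} measured in the same burn
    T            {element: T_ij} translational (additive) coefficients
    R            {element: R_ij} rotational (proportional) coefficients
    """
    translational = sum(T.get(j, 0.0) * c_j for j, c_j in interferents.items())
    rotational = sum(R.get(j, 0.0) * c_j for j, c_j in interferents.items())
    return (c_raw - translational) / (1.0 - rotational)

# Hypothetical example: a Ca-rich sample biasing a trace-element channel.
corrected = correct_result(60.0, {'Ca': 5000.0}, {'Ca': 0.002}, {'Ca': 0.00002})
print(f"corrected result: {corrected:.1f} ppm")
```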
In addition to this complication, it was found that the daily recalibration of each channel of the analogue computer was so cumbersome and error-prone that it was decided to dispense completely with real-time output of concentrations and to collect only raw uncalibrated analytical signals. Conversion to concentration units was accomplished off-line on a mainframe computer, while the data were being transferred to magnetic tape.

Control of quality in the Atlas data
The Northern Ireland Atlas also demonstrated that a quality control system of considerable power was needed to ensure that consistency of analytical performance was maintained throughout the whole project. The existing 'statistical series' method (Craven 1954; James 1970), then widely used in the AGRG, was considered unsuitable. No guidance on how an appropriate control could be accomplished was available at the time, and the simple expedient was adopted of running eight different reference materials at intervals in every run of analysis. The materials were selected to span as completely as possible the range of matrix types and analyte concentrations likely to be encountered. The materials were also sent for analysis by several other geochemical laboratories to ensure the absence of bias at any consequential level. During the Atlas analysis, the results obtained on these materials were inspected daily to detect any sporadic instrumental malfunction, and at the end of the project to provide an indication of the uncertainty associated with the analytical results. These results also allowed the analysts to check that sample misidentifications had not occurred (brought about, for instance, by analysing a rack of prepared electrodes in reverse order). It is a tribute to the analysts involved, handling large numbers of samples, at different stages of preparation, over an extended period, that no incident of that nature was ever found. A statistical justification for the AGRG approach to regional geochemical atlas production was provided by Howarth & Lowenstein (1971).
Effective multivariate quality control highlighted a previously unconsidered problem. Obviously, if many elements were found to be out of control, the whole dataset would be rejected and re-analysis undertaken. However, finding a single channel to be out of control gave rise to a difficult problem, because it would not have been economically viable to reject a whole day's work on account of a single defective analytical channel for a single, non-critical analyte. (The 'Bonferroni problem' (one of many variables by chance appearing to be out of control at the 95% confidence level) had already been taken into account.) Such instances, although not common, were not particularly rare either. The expedient was adopted of adjusting the defective results if that seemed possible on the basis of the results of the eight reference materials. The effect of this manipulation on maps was also carefully considered before any such adjustments were made.
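The daily inspection amounted to a simple control check per analyte on each reference material. The sketch below illustrates the decision logic; the reference values, tolerances and results are invented, not AGRG data:

```python
# Accepted values and control tolerances for one hypothetical reference material (ppm).
expected  = {'Cu': 40.0, 'Pb': 25.0, 'Zn': 110.0, 'Mo': 2.0}
tolerance = {'Cu': 6.0,  'Pb': 5.0,  'Zn': 15.0,  'Mo': 0.5}

def out_of_control(results, expected, tolerance):
    """Return the analytes whose reference-material results exceed control limits."""
    return [a for a, c in results.items() if abs(c - expected[a]) > tolerance[a]]

# A day's results on the reference material: only the Zn channel is out of control
# (|131 - 110| = 21 > 15), so only the Zn results need adjustment or re-analysis.
todays_run = {'Cu': 43.0, 'Pb': 24.0, 'Zn': 131.0, 'Mo': 2.2}
print(out_of_control(todays_run, expected, tolerance))
```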

Additional elements in the Atlas project
Of the four essential elements not adequately determined by spectrography, Zn and Cd were determined by FAAS in the single solution made by treating the ignited sample with nitric acid at 100 °C for one hour. Once this method had 'bedded down', analysis proceeded without trouble at the rate of 300 samples per working day with two analysts. Cadmium, however, presented the previously undocumented problem of background interference from Ca. As an outcome, samples high in chalk could give false Cd readings as high as 12 ppm, while the true background concentration was less than 0.5 ppm. Nowadays, background interference is well-understood, and corrected instrumentally by one of a number of methods, but at the time no such methods were available and, indeed, the problem itself was unrecognized (and even denied by some experts). A practical solution was, however, quickly forthcoming. Calcareous samples could be easily recognized (by gas evolution when the acid was added) and the Ca concentration determined separately (also by FAAS) in the resulting test solution. The interference could then be corrected by a term that was proportional to the Ca concentration. Molybdenum and As were determined in a solution produced after a fusion with potassium hydrogen sulphate. Molybdenum was determined spectrophotometrically as the complex with dithiol after extraction into toluene, while As was determined by the Gutzeit method.
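The Ca correction amounted to subtracting a term proportional to the measured Ca concentration. A minimal sketch, in which the coefficient k and the example figures are hypothetical (in practice such a coefficient would be fitted empirically from calcareous blanks):

```python
def corrected_cd(cd_apparent_ppm, ca_ppm, k=4.0e-5):
    """Subtract the Ca-proportional background contribution from an apparent Cd result.

    k is a hypothetical empirically-fitted coefficient (ppm Cd per ppm Ca).
    """
    return cd_apparent_ppm - k * ca_ppm

# A calcareous sample reading 12 ppm apparent Cd at c. 29% Ca comes back
# to a corrected value below the true background level of 0.5 ppm.
print(f"{corrected_cd(12.0, 290000.0):.2f} ppm")
```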
In the event, the costs of these additional determinations were fully justified. Molybdenum was found to be elevated above levels safe for cattle and sheep in about a million acres of farmland in England (Thornton et al. 1969; Thomson et al. 1972). Arsenic was found to be polluting a large tract of land around the tin mining areas in Cornwall and Devon (Colbourn et al. 1975). The astonishing levels of Cd found in Shipham in Somerset (Marples & Thornton 1980; Matthews & Thornton 1980) made that village notorious, literally overnight. All of these findings led to major follow-up studies.

Aftermath
Immediately after the conclusion of the Atlas project, the accuracy of the output from the Quantometer was improved to a marked degree for the analysis of sediments and soils. Uncertainty in results was reduced to less than half of that previously prevailing, but not by changes in instrumentation. All that was required was to exploit an increased knowledge of the spectrochemical process, unavailable to practitioners of the photographic technique, combined with a concurrent increase in computing power. At the same time, a very efficient spectrographic method for examining plant and other biological material was introduced. The material was ashed with aluminium nitrate, giving rise to an ash with the trace elements in an alumina matrix. Aluminium provided a simple background spectrum with few lines, so that most elements of interest (apart from volatile elements, of course) could be determined without interference. But the days of the spectrograph were running out, as were the projects with very large requirements for analysis. Geochemists in the AGRG needed fewer results, but with more accuracy than spectrography could provide, and initially this need was largely fulfilled by flame atomic absorption.
Apart from a few well-known instances, atomic absorption was almost free from interference effects of consequential magnitude (bias in the analytical signal caused by elements other than the analyte), although it was not completely immune (Thompson et al. 1979). The measurement could be made directly on the solution obtained by treating the test material with mineral acid mixtures, without any chemical separation. At its peak, the method was providing in excess of 200 000 results per year. An account of the fully-developed general methods used in AGRG can be found in Thompson & Wood (1982). However, specialized methods were used in the analysis of waters, where concentrations of the analytes tended to be too low for measurement by direct nebulization and, at least in saline waters, the matrix was unsuitable. Both ion exchange (Elderfield et al. 1971) and solvent extraction with dithiocarbamate systems (Watling 1974) were extensively used. Mercury was determined by a version of the sensitive Hatch & Ott (1968) procedure (Thompson & Wood 1982), which could be executed after a simple temporary modification of standard atomic absorption spectrometers, and which replaced the pioneering methods for mercury developed in the AGRG (James & Webb 1964).

THE ERA OF THE INDUCTIVELY-COUPLED PLASMA
The potential of the inductively-coupled plasma as a source for atomic emission spectrometry was becoming more apparent by the mid-1970s, and in 1977 the AGRG Quantometer was transformed into a retrofit ICP-AES instrument by replacing the arc/spark stand with an industrial radiofrequency generator to power the ICP. The hybrid instrument was also fitted with a monochromator (from a cannibalized FAAS) to allow access to spectral lines not accommodated in the polychromator. (This transformation was completed with the kind assistance of John Goulter, then a research student working on ICP-AES in the Department of Chemistry. He subsequently became an analytical technician in AGRG before moving to ARL.) This instrument, among the first few in the world, rapidly demonstrated the remarkable capabilities of ICP-AES, with its fast throughput, high-intensity spectral lines over a low background, virtual freedom from chemical and ionization interferences, almost straight calibration lines and, by virtue of the high temperature of the plasma (c. 10 000 K), the capacity for the determination of refractory oxide elements (lanthanides, Si, Al, Zr and many others) and non-metals (e.g. B, S, P), both groups being difficult or impossible to manage by FAAS (Thompson & Walsh 1983).
As the ICP-AES relied on the nebulization of solutions, new rapid methods of sample decomposition were needed to solubilize rocks, soils and sediments. A method was devised involving the treatment of samples in polytetrafluoroethylene test-tubes in a temperature-programmed aluminium block. The treatment involved the use of a minimal-volume mixture of nitric, perchloric and hydrofluoric acids under a carefully optimized and programmed temperature regime. Nearly all minerals were completely solubilized. Three hundred samples could be processed simultaneously under fully-automatic control in a 24-hour cycle (Thompson & Walsh 1983).
The retrofit instrument was a great success, but was far from optimal in optical design, and clumsy in its data handling. It still relied on punched cards to convey the uncalibrated raw results to a mainframe computer. (It also had an irritating 'howl' at c. 300 Hz, and operators needed ear protection, until attention to the acoustics brought this nuisance under control.) It was replaced in January 1980 by a commercial instrument (an ARL 34000C, with a 1-m vacuum spectrometer) with a built-in PDP 11/04 minicomputer, and the ICP age had fully arrived (Thompson 1985). This instrument was purchased, through the good offices of John Goulter, by then working for ARL, with funds found by the Head of the Department of Geology, Professor Rex Davis. The machine was in almost daily use until its last run on 17th January 1994. It was subsequently acquired (with a complete set of documentation) by the Science Museum. (The replacement instrument was an ARL 3580B, still extant.)
About 150 solutions could be analysed in a working day on the new instrument, with immediate output of concentrations to printer and floppy disc. Development of analytical methods was limited to marginal improvements in throughput (for instance by optimizing the nebulizer construction) and correcting the rotational interferences, which were small but not negligible. Several quite new and successful approaches to this latter problem were devised (Ramsey & Thompson 1984, 1985, 1986, 1987; Thompson & Ramsey 1985).

'Unconventional' ICP-AES
There were still certain tasks, however, that ICP-AES could not address adequately in its conventional mode of sample input, namely, nebulizing an aqueous solution. In this process only c. 1% of the solution taken up by the nebulizer was injected into the plasma. The elements Ge, Sn, As, Sb, Bi, Se, and Te have relatively poor sensitivity by ICP-AES, and low abundances in typical geochemical samples, and therefore could not be determined directly on the solutions produced by conventional decompositions. This deficiency was overcome by the invention of a continuous-flow hydride generator in which the elements, in acidic aqueous solution, could be reduced by sodium tetrahydroborate, NaBH4, to the respective gaseous hydrides, separated as such from the spent solutions and finally injected into the ICP in the gas phase (Thompson et al. 1978a, b; Thompson & Pahlavanpour 1979). Because the process was nearly 100% efficient, the elements could be introduced into the plasma at a much faster rate, in a matrix-separated form, with a comparable improvement in detection limits. The process was rapid: c. 100 samples could be analysed in a working day. This capability opened a door to novel approaches to geochemical interpretation with a whole suite of 'pathfinder' elements (e.g. Hale 1981) and to fast and reliable environmental studies, especially relating to As pollution and to Se deficiency (Pahlavanpour et al. 1979, 1980a, b, 1981a). The device was later developed into commercial models by several firms. Laser ablation was another technique for introducing test material into the ICP that was pioneered in the AGRG. Laser ablation had been previously described as an adjunct to spark spectrography (Moenke & Moenke-Blankenburg 1973), and had also been used in parallel with ICP-AES in highly specialized equipment for analysing atmospheric dust for geochemical exploration (Abercrombie et al. 1978).
In the AGRG, it was shown that a commercially-available laser microscope could be easily coupled with ICP-AES simply by conducting the ablation in a small (c. 25-ml) chamber with an optical glass window. The ablated material was transferred as an aerosol to the ICP in a stream of argon with virtually no loss in the connecting pipework (Thompson et al. 1981). The laser microscope was focused on the desired part of the sample material, usually a spot less than 50 µm in diameter. The laser, when fired through the microscope optics, could dump up to one joule into the area in c. 1 µs, thus mobilizing a few micrograms of the test material into the plasma. Laser ablation has a number of unique and valuable properties: (a) no sample preparation or reagents are required; (b) the method is virtually non-destructive; (c) the test-piece is not contaminated; and (d) small-scale variations in concentration can be studied. No material can resist the laser pulse, as the ablation event reaches a temperature of 5000–25 000 K, depending on the laser conditions.
The laser ablation technique was used in the AGRG in a variety of applications (Hale & Thompson 1983), including the 'instant' analysis of single mineral grains in pan concentrates (Thompson & Hale 1984), and the analysis of manganese/iron oxide coatings on stream pebbles to delineate areas of interest for prospecting (Hale et al. 1984; Thompson et al. 1992). The technique itself was also investigated in detail to elucidate the mechanism of ablation and how it could be improved (Thompson et al. 1989, 1990; Chenery et al. 1992). This research led directly to the coupling of lasers with ICP-MS, which immediately provided detection limits 2-3 orders of magnitude lower than those available with ICP-AES (Chenery 1991). This was an important step, because of the limited mass of test material mobilized by the laser. Improvements in laser technology eventually enabled analysts to analyse accurately the trace constituents of single fluid inclusions (Shepherd & Chenery 1995; Moisette et al. 1996). Laser ablation is now a standard attachment to ICP-MS instruments, and is used in a wide variety of non-geochemical applications. In recent geochemical proficiency tests the method was found to be as accurate as the analysis of bulk materials by conventional nebulization (Potts et al. 2002).
A novel ICP-AES-based method of investigating the composition of fluid inclusions was also developed. On heating, mineral grains containing fluid inclusions often burst (decrepitate) with the forceful ejection of their contents as a fine aerosol. It was realized that these aerosols could be swept into the ICP by a stream of argon for atomization and analysis. This provided information about the composition of geochemical fluids trapped in the inclusions that could, under some circumstances, be related to the emplacement of mineralization (Thompson et al. 1980; Lindblom et al. 1989). In one study, clean quartz grains weathered from the granite were collected from many small stream beds on Dartmoor. The samples were subjected to decrepitation and the resulting aerosols analysed. In this rapid method, results for many analytes showed a strong correlation with known occurrences of mineralization (Alderton et al. 1992). This idea was never followed up.

WHAT HAPPENED NEXT
After the AGRG was disbanded, the analysts went their various ways, but the lessons learned were not forgotten. On the contrary, they were developed into powerful general ideas that are now permeating the whole of the analytical community and affecting their customers. Laser ablation ICP-MS has become a standard tool and many different instruments are now available. Ex-members of the AGRG have made notable use of it, especially Simon Chenery at the British Geological Survey, in a wide range of applications, and John Watling for fingerprinting diamonds and precious metals (Watling et al. 1995). Fitness for purpose was transformed from a vague idea into a quantitative theory that is beginning to be applied in all sectors (Thompson & Fearn 1996; Fearn et al. 2002). Methods of analytical quality control invented in the AGRG have been formalized in internationally recognized protocols (Thompson & Wood 1995; Thompson et al. 2002, 2006) that are binding on the international trade in food. Uncertainty from sampling, springing from the original ideas of the American geochemist Alfred Thomas Miesch (Miesch 1967; Miesch et al. 1976) and ex-GPRC doctoral student Robert Geoffrey Garrett (Garrett 1969; Hornbrook & Garrett 1976; Garrett & Goss 1979), has been studied in depth and thrust into the limelight (Thompson & Ramsey 1995; Thompson 1999; Ramsey & Thompson 2007; Ramsey & Ellison 2007) since it was realized that it often makes the major contribution to the uncertainty of measurement in important regulatory decisions, such as those relating to the safety of food (Lyn et al. 2007) or the re-use of brownfield sites for housing (Ramsey & Argyraki 1997).

POSTSCRIPT
One of the drawbacks of engaging in a long project involving chemical analysis is that it is not possible to take advantage of improvements in methods or equipment until the task is completed. Any change in the analytical method would mean that the data produced would not have the same quality throughout. In a rapidly evolving area, long-term projects are thus locked into obsolescent methods. This was certainly true of the England and Wales Atlas project, initiated at a time when the revolution in computing and electronics was just beginning to make itself felt in analytical technology. Subsequent advances, such as ICP-AES and ICP-MS, would have totally transformed this project had they been available. A much wider range of elements could have been determined, with far higher accuracy and with far less effort and expenditure of time. Such is the fate of pioneering efforts, of course, and it is gratifying that many other countries were able to make use of these advances in producing their own national geochemical atlases. It would have been a welcome and very rewarding task to re-analyse the Wolfson Atlas samples with up-to-date methods but, alas, that option was never going to be available. The AGRG was forced to surrender its storage space, and the precious samples were thrown away.
This account was compiled with input from many sources and the generous assistance of Anne Barrett and Catherine Harpham, the archivists at Imperial College.

APPENDIX A
A list follows of those employed in the GPRC and AGRG as full-time analysts. Apologies are offered to those inadvertently omitted or misclassified, or for whom the information recorded is missing or incorrect.