Scientific evidence of global climate change: A brief history

This page takes us back to the beginnings of climate science, to look at the early evidence for Global Climate Change and how that evidence has developed into our current state of knowledge. In the tradition of this fact-based website, there are links to the original research.

This is not a comprehensive examination; other valuable histories of climate science are available (here, here, and here, for example). My website being a fact-rich zone, the “twist” I have chosen is to focus this history on the practical measurements. Beginning in 1824, and prior to the history reported here, Joseph Fourier (French mathematician), Claude Pouillet (French physicist), John Tyndall (Irish physicist), Svante Arrhenius (Swedish physical chemist), and Thomas Chamberlin (American geologist) had proposed and developed a hypothesis that increases in atmospheric carbon dioxide, caused by the burning of fossil fuels, could cause the surface temperature of the Earth to rise due to what we now know as the Greenhouse Effect. You can read about that elsewhere. Their assertions were disputed by other scientists at the time on a number of grounds: that water vapor was a much stronger absorber of infrared radiation than carbon dioxide; that any excess carbon dioxide from fossil fuels would be rapidly absorbed by the vast oceans; and so on. The speculation was all well-informed (based on the data available at the time), but it had reached an impasse.

And here we begin.

Quick links to page contents
Episode 1. 1900, Royal Botanical Gardens
Episode 2. First measurement of anthropogenic global warming
Episode 3. Our “large scale geophysical experiment” (1940-1960)
Episode 4. Dave Keeling persists in a great idea
Episode 5. Icy time capsules
Episode 6. The “geologic eons of time”
Episode 7. Our global thermometer since 1850
Episode 8. Of islanders, aliens, and frogs. A cosmic test for humanity. Part 1.
Episode 9. Of islanders, aliens, and frogs. A cosmic test for humanity. Part 2.


Episode 1. 1900, Royal Botanical Gardens. Two British scientists’ adventures with leaves and CO2 measurements.

In the years between 1898 and 1901, Dr. Horace Brown, a British chemist, and Mr. Fergusson Escombe, a British botanist, were at the Royal Botanical Gardens in Kew, England studying the influence of light and carbon dioxide levels on the rate of the photosynthesis reaction in leaves (Brown & Escombe, 1905). They constructed a rather ingenious apparatus:

Figure 1 of H. T. Brown & F. Escombe, On the physiological processes of green leaves; Proceedings of the Royal Society B 76 (1905), 29-111.

A leaf was housed inside a sealed box with a window. Air with a known carbon dioxide concentration could be pulled through the box while light of a measured intensity was made to shine on the window. The air from the leaf box was then pulled into the chemical apparatus on the right, within which the amount of carbon dioxide remaining in the air was measured by its reaction with sodium hydroxide to form sodium carbonate. This was, at the time, a new method of measuring the concentration of carbon dioxide in air, and Brown & Escombe were pleased to find it was accurate enough to discern the small quantities of carbon dioxide consumed by a single leaf as it carried out photosynthesis. Naturally, Brown & Escombe had occasion in the course of this work to make a multitude of measurements of the carbon dioxide concentration in the ambient air at the Royal Botanical Gardens. It averaged about 290 ppm (parts per million).

So it was around 1900, and the atmospheric carbon dioxide concentration at the Royal Botanical Gardens was 290 ppm.

Back to page contents


Episode 2. First measurement of anthropogenic global warming

Guy Callendar, a British steam engineer and inventor, referenced Brown & Escombe’s atmospheric carbon dioxide measurements four decades later in his paper (Callendar, 1938), famous in climate science, which opened with the following sensational claim:

“Few of those familiar with the natural heat exchanges of the atmosphere, which go into the making of our climates and weather, would be prepared to admit that the activities of man could have any influence upon phenomena of so vast a scale. In the following paper I hope to show that such influence is not only possible, but is actually occurring at the present time.”

(A note of foreshadowing: As we continue in our pursuit of knowledge about climate science, it may be astounding to realize that the quote above, from the year 1938, quite resembles the state of the very recent “debate” that occurred on the floor of the U.S. Senate in 2015. Article about Senate absurdity. Video of Senate absurdity.)

In his peer-reviewed 1938 paper, Callendar made use of a number of other scientific studies that had taken place since around the turn of the 20th century, which he believed for the first time enabled a reasonable calculation of the effect on Earth’s temperature of CO2 increases from the burning of fossil fuels:

  • More accurate measurements of infrared absorption by CO2 (Rubens & Aschkinass, 1898);
  • The temperature-pressure-alkalinity-CO2 relation for seawater (C. J. Fox, 1909);
  • Measurements of atmospheric radiation of heat (A. Angstrom, 1918; W. H. Dines, 1927; G. C. Simpson, 1928; D. Brunt, 1932);
  • Infrared absorption measurements of water vapor (F. E. Fowle, 1918).

Callendar had the benefit of more atmospheric CO2 measurements that had been taken in the eastern U.S. between 1930 and 1936. These averaged 310 ppm, about 6% higher than the earlier measurements at the Royal Botanical Gardens around 1900. Taking into consideration better estimates of the expected absorption of CO2 by the oceans, Callendar calculated that a 6% increase was about consistent with the estimated addition of CO2 to the atmosphere by the combustion of fossil fuels (about 4,500 million tons per year at the time). Most of the added CO2 seemed to be staying airborne.

Taking account of infrared absorption by both CO2 and water vapor, downward radiation of absorbed heat from the sky, and the effect of this on surface temperature, Callendar calculated that Earth’s temperature at the surface should be increasing at the rate of about 0.003 degrees Celsius per year.

Callendar then undertook a staggering project of collecting, sorting, analyzing, and averaging measured temperatures from hundreds of global weather stations that had been collected since about 1880 (earlier standardized records did not exist). It’s frankly hard for me to imagine doing this overwhelming project, as he did, without even a calculator. He summarized his findings in this graph:

Figure 4 from G. S. Callendar, The artificial production of carbon dioxide and its influence on temperature; Quarterly Journal of the Royal Meteorological Society 64 (1938), 223-240.

In all 3 major climate zones of the Earth in which temperature records existed, Callendar found the temperature variation, with respect to the 1901-1930 mean temperature, to be remarkably consistent. Everywhere on the Earth, the temperature had increased, over approximately the previous half-century, at an average rate of 0.005 degrees Celsius per year, a somewhat greater increase than he had calculated based on the CO2 increases. But he admitted the temperature record was rather short in duration, and further observation was warranted.

Interestingly, Callendar remarked at the end of his paper that he thought global warming resulting from the combustion of fossil fuels would be beneficial by preventing “the return of the deadly glaciers” (referring, it would seem, to the ice ages). Writing as he was in 1938, and only having observed the first glimmer of Global Climate Change, he can be forgiven for underestimating the future enthusiasm with which we would burn fossil fuels. By the end of it, we may find ourselves nostalgic for the glaciers we have now.

Back to page contents


Episode 3. Our “large scale geophysical experiment” (1940-1960)

Climate enthusiast Guy Callendar continued to find time, around his day job as a steam engineer, to conduct and publish multiple research studies between 1940 and 1955, proposing increasing evidence of a linkage between fossil fuel use, rising atmospheric CO2 concentration, and warming global surface temperature (G. Callendar, 1940, 1941, 1942, 1944, 1948, 1949, 1952, 1955). In these, Callendar continued to refine estimates of infrared absorption by CO2, catalog CO2 and temperature measurements in various regions during the period since 1850, and refine and update his calculations of the total amount of CO2 that had been produced globally by fossil fuel use. His analyses continued to suggest that most of the CO2 produced by fossil fuel combustion had directly increased the CO2 concentration of the atmosphere.

During this period, Callendar’s influential 1938 paper also served to renew the interest of other scientists in the possibility of anthropogenic global warming. Roger Revelle and Hans Suess, at the Scripps Institution of Oceanography (UC San Diego), summed up the growing interest in the subject particularly well (Revelle & Suess, 1957):

“. . . human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future. Within a few centuries we are returning to the atmosphere and oceans the concentrated organic carbon stored in sedimentary rocks over hundreds of millions of years. This experiment, if adequately documented, may yield a far-reaching insight into the processes determining weather and climate.”

Gilbert Plass, a Canadian-born physicist working in the U.S., published a series of papers in 1956 (G. N. Plass, 1956a, 1956b, 1956c, 1956d) in which he brought increased rigor to the calculation of infrared absorption by carbon dioxide in the atmosphere, aided by the new availability of high speed computers to perform complex calculations. These calculations proved wrong a widely held belief at the time, that water vapor absorbed infrared radiation from the Earth’s surface more strongly than carbon dioxide and thus controlled the “greenhouse effect.” With improved calculations, Plass showed that water vapor and carbon dioxide absorbed radiation mainly in different parts of the infrared spectrum. Also, water vapor was present primarily in the region of the atmosphere right next to the Earth’s surface, whereas carbon dioxide was present uniformly at all heights. The new calculations added physical rigor to the theory that the atmospheric carbon dioxide level strongly influences the Earth’s surface temperature. Plass calculated that a doubling of the atmospheric carbon dioxide level would lead to a temperature increase of 3.6 degrees Celsius, and that continued use of fossil fuels would cause about a 1 degree Celsius temperature increase by the year 2000, at which time we would experience easily-observed effects of climate change. As we will see, these 1956 predictions have proven remarkably accurate.

But it was not all agreement during this period. In the tradition of the scientific method, other scientists were questioning the above conclusions. Giles Slocum, a scientist at the U.S. Weather Bureau, pointed out that Callendar’s claim of increasing atmospheric CO2 relied heavily on his selection of particular historical measurements he deemed more accurate than others (G. Slocum, 1955). Slocum’s criticism was illustrated quite well by Stig Fonselius and his coworkers, operators of a network of Scandinavian CO2 measurement sites that had been set up in 1954. Fonselius, et al. (1956) cataloged a large number of CO2 measurements that had been made since the early 1800’s and prepared this graph:

Figure 1 from Fonselius, et al., 1956. The circled values are those selected by Callendar as well as those recorded in 1955 by the Scandinavian network.

As you can easily see, anyone taking the totality of the data as the CO2 record would be hard pressed to argue there had been an obvious increase over time. Callendar had argued in his papers that many of the measurements, particularly early ones, had been conducted with poor equipment and/or at locations, like the middle of large cities, likely to display elevated CO2 levels due to local sources of CO2 pollution (factories, etc.). While nobody disputed that many CO2 measurements had probably been inaccurate, Slocum argued the totality of data was not yet sufficient to prove atmospheric CO2 had been rising, and that a more standardized data set was needed.

Around the same time, oceanographer Roger Revelle and physical chemist Hans Suess were starting to bring nuclear physics to bear on the question (Revelle & Suess, 1957). Their work involved carbon-14, an isotope of carbon present in atmospheric carbon dioxide but not present in fossil fuels (if you’re interested, see my primer on carbon-14). Revelle and Suess and other scientists reasoned that, if atmospheric CO2 levels were increasing due mainly to the burning of fossil fuels, the proportion of atmospheric CO2 containing carbon-14 should be decreasing. In fact, Suess did find that tree rings from recent years were depleted in carbon-14 compared with old tree rings:

Table 5 from Revelle & Suess, 1957. The values in the right column are the percentage reductions in carbon-14 found in tree rings during the indicated years, relative to old tree rings.

But the reductions appeared lower than could be expected based on Callendar’s estimate that the atmospheric CO2 level had increased by some 6% or more. Further, using data on the carbon-14 contents of the atmosphere and of carbonaceous materials extracted from the ocean surface (namely, seashells, fish flesh, and seaweed), Revelle & Suess calculated that a molecule of CO2 in the atmosphere would be absorbed into the ocean surface within an average of about 10 years, and that the overall ocean was mixed within several hundred years. Based on the immensity of the oceans, Revelle & Suess concluded that Callendar’s claims seemed improbable. Moreover, assuming fossil fuels continued to be used at about the rate they were being used in the mid-1950’s, they calculated that the ocean would prevent anything but a modest increase in atmospheric CO2 well into the future.

Guy Callendar’s “last word” during this period was in a 1958 paper applying an additional 20 years of measurements and analysis to his 1938 catalog of atmospheric CO2 measurements, as shown in this graph:

Figure 1 from G. S. Callendar, 1958. Numbered points are measurements of the concentration of CO2 in the free air, North Atlantic region, 1870-1956. Black line is the calculated CO2 from combustion of fossil fuels.

Dr. Brown & Mr. Escombe’s year 1900 measurements of about 290 ppm CO2 are the point labelled “d” in the plot. The atmospheric CO2 concentration in the North Atlantic region appeared to have increased to around 320 ppm by the year 1956. At the same time, Callendar (1961) and Landsberg & Mitchell, Jr. (1961) independently continued to document that the Earth, at all latitudes, had been warming over the same period:

Figure 3 from Callendar, 1961. Temperature fluctuations for the zones of the Earth, 5-year annual departures from the mean 1901-1930.

Callendar acknowledged the contradiction between his analyses and the carbon-14 measurements, but was unapologetic:

“. . . the observations show a rising trend which is similar in amount to the addition from fuel combustion. This result is not in accordance with recent radio carbon data, but the reasons for the discrepancy are obscure, and it is concluded that much further observational data is required to clarify this problem.”

On the need for further measurements, Callendar, Revelle, Suess, and other scientists agreed. If you read the linked papers on this page, you’ll find many mentions of the upcoming International Geophysical Year (1957-1958), a period of international governmental funding of Earth sciences interestingly intertwined with a Cold War competition for scientific prestige, the launching of the first satellites by the Soviet Union and the United States, and the beginning of the Space Race. As you will see in the next episode of this series, new measurements were coming largely as a result of this funding.

This period is a confusing chapter of climate science, but it presents a terrific example of the self-correcting nature of the scientific method. Pioneering scientists like Callendar test obscure hypotheses, often relying on scant initial data. Their conclusions, if compelling, inspire other scientists both to make more measurements and to check their work. “Watchdog” scientists (like Slocum) point out deficiencies in their analyses. Scientists from other disciplines (Plass, Revelle & Suess) apply alternative techniques to see whether the results are consistent. Predictive scientists (Plass) extend the conclusions of early work to formulate predictions that can be tested. If a hypothesis is correct – if it’s the truth – then any accurate measurement will confirm it. Any prediction based on it will come true. Where there is an apparent contradiction, or where a prediction fails to come true, more measurements are needed to resolve the contradiction.

Keep this in mind as we go forward. We will, of course, be applying these principles to the findings supporting the hypothesis of anthropogenic global warming. But also bear in mind that any alternative hypothesis must stand up to the same tests. It’s not enough to say, as my own Senator Ron Johnson (R-WI) did,

“It’s far more likely that it’s sunspot activity or just something in the geologic eons of time.” [Journal Sentinel 8/16/2010]

Well, okay, if it’s sunspots or something (what thing?), let’s see the data. Do measurements of sunspot activity correlate with our observations of Earth’s climate? Scientists have been thinking about and studying this since the 1800’s and making concerted measurements since the early 1900’s. We should be in a position, after all that work, to support our claims with evidence.

Read on, as we get into the data that resulted from the calls for study by Callendar, Revelle, and Suess. As for the “geologic eons of time,” we will actually take a look at that, too. Has the scientific controversy evident in this episode persisted? Or, are people who claim it’s controversial stuck in the ’50’s? Read on to find out!

Back to page contents


Episode 4. Dave Keeling persists in a great idea

In 1953, Charles David (“Dave”) Keeling, a just-graduated Ph.D. chemist with an interest in geology, was looking for a job. He got one as a postdoctoral researcher at Caltech, where a professor employed him to experimentally confirm a rather esoteric hypothesis about the balance between carbon stored in limestone rocks, carbonate in surface water, and atmospheric carbon dioxide. To do this, Dave realized he would first need to have a very accurate estimate of the CO2 content of the air. In investigating the available data on that subject, he found what we encountered at the end of Episode 3 – a great deal of variability in the reported measurements. In fact, it had become widely believed that the CO2 concentration in air might vary significantly from place to place and from time to time, depending on the movements of various air masses and local effects due to the respiration of plants, etc. Dave decided he would need his own way of very accurately measuring the CO2 concentration in air.

Dave developed a new method of measuring the CO2 content of air by collecting air samples in specialized 5-liter flasks, condensing the CO2 out of the air using liquid nitrogen (which had just recently become commercially available), separating the CO2 from water vapor by distillation, and measuring the condensed CO2 volume using a specialized manometer he developed by modifying a design published in 1914. Dave’s new method was accurate to within 1.0 ppm of CO2 concentration. If you’re interested, you can read more about it in his 1958 paper, “The concentration and isotopic abundances of atmospheric carbon dioxide in rural areas,” in which he reported the results of repeated atmospheric CO2 measurements he made at 11 remote stations, including Big Sur State Park, Yosemite National Park, and Olympic National Park, at different elevations and at all times of the day and night.

In his autobiographical account, Keeling admitted he took many more air samples than probably required for this work largely because he was having fun camping in beautiful state and national parks. The great number of samples paid off, though, as they enabled him to make some important observations about daily fluctuations in the atmospheric CO2 level. He found that, in forested locations, maximum CO2 concentrations occurred in the late evening or early morning hours and minimum CO2 concentrations occurred in the afternoon. In non-forested locations, the CO2 concentrations were very similar to the minimum (afternoon) levels measured in forested locations, as well as earlier published levels in maritime polar air collected north of Iceland. In all these locations, the minimum measured CO2 concentrations were pretty consistent, in the range of 307-317 ppm. By isotopic analysis of the carbon-13/carbon-12 ratio of CO2 collected in the forested areas, Keeling determined that the elevated CO2 levels measured at non-afternoon hours in forested areas were due to respiration of plant roots and decay of vegetative material in the soil. He posited that afternoon meteorological conditions resulted in mixing of the near-surface air layer influenced by vegetative processes with higher air that was constant in CO2 concentration.

Basically, the results of Dave’s camping adventures with 5-liter vacuum flasks suggested three important conclusions: (1) care should be taken to sample air using specific methods and under conditions not influenced by industrial pollution or vegetative processes (sample at rural locations in the afternoon); (2) if such care was taken, maybe the CO2 concentration in the atmosphere was virtually the same everywhere, from the old-growth forests of Big Sur to the pristine sea air north of Iceland; and (3) if that was the case, the global atmospheric CO2 concentration in 1956 was about 310 ppm.

Federal agencies, including the US Weather Bureau, were working to identify scientific studies to undertake using the substantial government geophysical research funding anticipated during the International Geophysical Year. Dave reported to a US Weather Bureau researcher his new CO2 measurement method and his results pointing to a potential constancy of global CO2 levels. This resulted in Dave’s installation at the Scripps Institution of Oceanography, directed by Roger Revelle and his associate, Hans Suess. You may remember Revelle and Suess from Episode 3. They were in the midst of publishing a paper concluding that much of the excess CO2 from fossil fuel combustion should be rapidly conveyed into the deep oceans. However, they remained intrigued by Callendar’s analyses, apparently to the contrary, and thought it worthwhile to undertake a dedicated program of atmospheric CO2 measurements at multiple locations.

With funding from Scripps and the US Weather Bureau, Keeling was to make continuous CO2 measurements with a newly developed infrared instrument at remote locations on Mauna Loa, a 13,000-foot volcano in Hawaii, and at Little America, Antarctica. The infrared instruments were to be calibrated by the gas sampling technique Dave had developed at Caltech, and 5-liter flasks were to be collected from other strategic places on the Earth, including on airplane flights and trans-ocean ships. The measurements commenced at Mauna Loa, Hawaii in 1958, and the first measured CO2 concentration was 313 ppm.

Continuous weekly CO2 measurements have been conducted at Mauna Loa ever since. The results are freely available to the public here. You can download the data yourself (as can, presumably, House Representatives, Senators, and the President). I did, and I plotted the weekly measurements as this blue curve, which has become known as the “Keeling Curve”:

Keeling Curve 4-22-19
(Updated 4-22-2019) “Keeling Curve,” a plot of weekly atmospheric CO2 measurements made by the Scripps Institution of Oceanography at Mauna Loa, Hawaii from 1958 to present. The curve was plotted by me using Scripps weekly data from the Mauna Loa observatory, downloaded here. Blue: Data from 1958 through 2018. Red: 2019 data. For fun and context, I added some significant human events to the Earth’s recent CO2 timeline.
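(If you’d like to make your own version of this plot, here is a minimal sketch of the idea. The filename and column names are illustrative only; check the header of the file you actually download from the Scripps link above, and note that the Scripps files flag missing weeks with negative values.)

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the weekly Mauna Loa data. Filename and column names are
# hypothetical stand-ins for whatever the downloaded file uses.
df = pd.read_csv("mlo_weekly_co2.csv")          # columns: date, co2_ppm
df["date"] = pd.to_datetime(df["date"])
df = df[df["co2_ppm"] > 0]                      # drop flagged missing weeks

plt.plot(df["date"], df["co2_ppm"], color="blue", linewidth=0.8)
plt.xlabel("Year")
plt.ylabel("Atmospheric CO2 concentration (ppm)")
plt.title("Weekly CO2 at Mauna Loa (the Keeling Curve)")
plt.show()
```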

Keeling’s very first observation was a seasonal cycle in atmospheric CO2 concentration. The atmospheric CO2 concentration reached a maximum in May, just before the local plants put on new leaves. It then declined, as the plants withdrew CO2 from the atmosphere through photosynthesis, until October, when the plants dropped their leaves. This was, incredibly and quite literally, the breathing of the Earth, which you can clearly see in Keeling’s first measurements (1960, 1963, 1965).

Figures 9a and 9b from Pales & Keeling, 1965. Atmospheric CO2 measurements made at the Mauna Loa Observatory in 1958 and 1959.

The first few years of measurements also confirmed remarkable agreement between measurements taken at Mauna Loa, in Antarctica, on trans-Pacific air flights, and at other locations:

Figure 1 from C. D. Keeling, 1960.

By 1960, the Scripps workers had concluded that the average atmospheric CO2 concentration was rising year-on-year. As you can see by the blue curve above, both the seasonal “breathing” of the Earth’s plants and increasing average CO2 concentration, measured at Mauna Loa, have continued every single year, without interruption, since Keeling’s first measurement in 1958.

No informed person disputes the correctness of the blue curve above. The Mauna Loa CO2 record makes the most compelling graph because it is our only uninterrupted CO2 record. But it has been corroborated for decades by many other scientists who have made measurements all over the world. The Scripps Institution of Oceanography has made measurements at 12 sampling stations from the Arctic to the South Pole, and spread across the latitudes in between. You can get daily updates of the Mauna Loa CO2 concentration here. The National Oceanic and Atmospheric Administration also operates a globally distributed system of air sampling sites, based on which it calculates a global average atmospheric CO2 concentration that is periodically updated here.

In fact, we now know 57% of the CO2 produced by the burning of fossil fuels has stayed in the atmosphere, according to the Mauna Loa CO2 record (see here for more information). So, what about the analysis of Roger Revelle and Hans Suess (1957) from Episode 3, which suggested the CO2-absorbing power of Earth’s deep oceans would save us the hassle of worrying about our CO2 emissions? The early 1957 conclusions were based on measurement of the steady-state rate of exchange of CO2 between air and seawater. That is, the average time a CO2 molecule floats around in the atmosphere before it is “traded” for one dissolved in the surface of the ocean, independently of any net change of the CO2 concentration in either the air or the seawater. Revelle and Suess estimated that steady state exchange rate at around 10 years, and reasoned this meant that, if new CO2 were introduced into the atmosphere, a matching increase in the CO2 surface concentration of the seawater would occur within about 10 years.

Around the same time Dave Keeling was beginning his CO2 measurements at Mauna Loa, Roger Revelle and other scientists were learning that the above assumption ignored an important buffering effect of the dissolved salts in seawater, which causes seawater to “resist” increases in its CO2 concentration (see more in this 1959 paper). Thus, when the concentration of CO2 in the atmosphere increases, the net concentration of CO2 in the ocean surface increases by an amount more than 10 times smaller. After decades of further study, this buffering effect is well understood and is routinely measured in the oceans as a quantity known as the Revelle Factor. It explains why Callendar was right about increasing atmospheric CO2, and why we can’t count on the deep oceans to help with our CO2 problem on any but geological time scales of several thousands of years (for more details see this paper).
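For the mathematically inclined, here is a rough sketch of how the Revelle Factor is conventionally defined (my notation, not taken verbatim from the papers linked above): it is the ratio of the fractional change in the CO2 partial pressure of seawater to the fractional change in the seawater’s total dissolved inorganic carbon (DIC),

R = (ΔpCO2 / pCO2) / (ΔDIC / DIC) ≈ 10

A Revelle Factor of about 10 means a 10% increase in atmospheric CO2 drives only about a 1% increase in the carbon content of the surface ocean, which is the “more than 10 times smaller” figure above.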

So, at least, we can say with certainty we’ve settled the question of whether combustion of fossil fuels has increased atmospheric CO2. A multitude of independent measurements tell us it has. When we started this story in Episode 1 around the year 1900, the atmospheric CO2 concentration at the Royal Botanical Gardens was 290 ppm. Dave Keeling’s first measurement at Mauna Loa in 1958 was 8% higher. When I first watched Star Wars at the drive-in in 1977, the CO2 concentration in the air around me was 16% higher. By the time Barack Obama was elected President in 2008, it was 32% higher. The average 2017 Mauna Loa reading was 406.6 ppm, 40% higher than the CO2 concentration in the year 1900. As you can see by the upward bend of the blue curve above, the atmospheric CO2 concentration is increasing at an accelerating rate.
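(If you like to check arithmetic, here is a two-line sketch using the two endpoint values quoted in the text; the intermediate percentages work the same way.)

```python
# Percent increase of atmospheric CO2 relative to the ~290 ppm measured
# around 1900; both values appear in the text above.
kew_1900, mlo_1958, mlo_2017 = 290.0, 313.0, 406.6
print(f"1958: {(mlo_1958 / kew_1900 - 1) * 100:.0f}% higher")   # -> 8% higher
print(f"2017: {(mlo_2017 / kew_1900 - 1) * 100:.0f}% higher")   # -> 40% higher
```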

So, how big is that change in the context of Earth’s history? To find out, it would seem we would have to go backward in time. As it turns out, we can! (Sort of.) Stay tuned for more!

Back to page contents


Episode 5. Icy time capsules

In Episode 4, we saw Dave Keeling and coworkers discover the atmospheric CO2 concentration has been on a marked upward sweep, from about 290 ppm in 1900 to over 400 ppm now, and accelerating. Well, is that unusual? Is that a big swing? Or, does the CO2 concentration vary a lot due to natural causes?

Since Dave Keeling only began our continuous, high-accuracy CO2 measurements in 1958, it would seem we would need a time machine to figure that out. In some of the loneliest places on Earth, it turns out, nature has been quietly making time capsules for us.

In parts of Greenland and Antarctica, the snow never melts. In between the snowflakes, tiny volumes of air are trapped. As the years go by, each layer of snow is compacted under new layers. The snow is eventually compacted into ice, and the air is entrapped in minute, isolated bubbles. Geologists in heavy coats prospect for those historical bubbles, little bits of past atmospheres. Good spots to prospect are where it snows very often, such that the snow and ice are deep and the annual layers thick. One such place is Law Dome, Antarctica, a coastal location of Antarctica where the snowfall is as much as 225 lbs of snow per square foot per year.

(A) Field tents at Law Dome, Antarctica (Australian Antarctic Division). Ice core drilling was conducted in the tent in the foreground. (B) Slice from an ice core showing entrapped, ancient air bubbles (Norwegian Polar Institute). (C) Section of an ice core showing visible seasonal layers (Wikimedia Commons). (D) A researcher selects ice cores for greenhouse gas analysis at an Australian ice core storage facility (Australian Antarctic Division).

Ice cores are drilled out using cylindrical drills. Layers in the ice are dated, sometimes visually (see image C above), most times using more sophisticated methods. For example, a rare, heavy isotope of oxygen, O-18, is present in the frozen H2O of Antarctic precipitation at a higher concentration in summer than in winter. Thus, the years in an ice core can be counted as summer stripes and winter stripes, through isotopic analysis of the oxygen in ice layers using a mass spectrometer.

Scientists in the 1980’s expended considerable effort developing accurate methods of harvesting and measuring the composition of the old atmospheric air trapped in ice core bubbles. Since CO2 is water soluble, it’s important not to allow any of the ice to melt while you’re getting the air out. The figure below, from a 1988 paper, shows a schematic diagram of an apparatus used to measure the CO2 concentrations in gas samples retrieved from Law Dome ice cores. This has become known as the “cheese grater” technique, and is still used for CO2 analysis of ice cores.

Figure 1 of Etheridge, Pearman & de Silva, 1988. Schematic diagram of “cheese grater” and associated gas condensing equipment for harvesting ice core air samples for analysis.

In a cold room (to prevent any melting), an ice core section is inserted in a cylinder with raised cutting blades on the inside, like an inside-out cheese grater. This is put inside a vacuum flask and shaken on a machine, crushing the ice inside. The released gases are drawn by a vacuum pump first over a water vapor trap, cooled to -100 degrees Celsius, which condenses and removes the water vapor. The dry sample is then made to flow over a “cold finger,” cooled by liquid helium to a frigid -269 degrees Celsius, cold enough to condense to liquid all the gases in the air sample. Once all the gas has been drawn out of the sample, the cold finger is isolated and warmed, and the accumulated gas sample is drawn into a gas chromatograph, a standard piece of analytical equipment for separating the gas constituents from each other and measuring their concentrations.

Between 1987 and 1993, Australian and French scientists working at Law Dome drilled 3 separate ice cores to depths of as much as three quarters of a mile. Samples of these ice cores have been analyzed by various groups. Below, in green, is a plot of data from a 2006 study of CO2 concentration from these ice cores going back over 2000 years.

2000y CO2 2017 update v3
(Updated 01-12-2018) Publicly available Scripps ice core-merged data, downloaded and plotted by me. Green: Ice core data from Law Dome, 0 C.E. to 1957 (see references here and here). Blue circles: Average yearly data from atmospheric sampling at Mauna Loa and South Pole, 1958-2016. Red square: 2017 average measured at Mauna Loa, Hawaii. Human experience milestones added by me.

The historical CO2 data tells a story of remarkable stability for 90% of human experience since Biblical times. In fact, until around 1850, the atmospheric CO2 concentration averaged 279 ppm and never strayed outside a narrow range between 272 ppm and 284 ppm (see black lines on the plot below):

2000y CO2 2017 with limits update
Plot of Scripps ice core-merged data showing the pre-industrial average (black dashed line) and range (black solid lines) of CO2 concentrations going back to 0 C.E.

Around the time of the First and Second Industrial Revolutions (attended by the advent of coal-fired steam engines and the petroleum industry, respectively), atmospheric CO2 began its relentless upward sweep that continues today. By the time Dr. Brown and Mr. Escombe were doing CO2 measurements at the Royal Botanical Gardens around the year 1900, and certainly by the time Guy Callendar and Dave Keeling were publishing their CO2 measurements and analyses starting in the late 1930’s, the atmospheric CO2 concentration had already departed significantly from the pre-industrial range. The March 30, 2017 direct measurement at Mauna Loa was 47% higher than the average CO2 concentration that had persisted, until very recently, since classical antiquity.

The rate of increase of the atmospheric CO2 level is also strongly accelerating. The graph below shows the rate of change of CO2 concentration over the past two millennia. (If you remember your calculus, I obtained the graph below by taking the derivative of the graph above.)

2000y rate of change 2017 update
Rate of change of atmospheric CO2 concentration in parts per million per year (ppm/year).
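(For those curious about the mechanics: a derivative of a measured, rather than analytical, curve is computed numerically. Here is a minimal sketch of the idea, with a handful of made-up sample points standing in for the thousands of points in the real merged record linked above.)

```python
import numpy as np

# Illustrative sample points only; the real merged ice core + Mauna Loa
# record has far more points, unevenly spaced in time.
year    = np.array([1800.0, 1850.0, 1900.0, 1958.0, 2016.0])
co2_ppm = np.array([ 281.0,  284.0,  290.0,  313.0,  404.0])

# Finite differences; np.gradient handles the uneven spacing of the dates.
rate = np.gradient(co2_ppm, year)   # ppm per year
print(rate.round(3))                # the rate climbs steeply toward the present
```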

Prior to the Industrial Revolutions, the atmospheric CO2 concentration changed very little from year to year, and the rate of change hovered around zero. Following the Industrial Revolutions, the rate of change was positive much more often than it was negative; the CO2 concentration was increasing. Immediately following World War II, an unprecedented period of positive and increasing rate of change of the CO2 concentration commenced. Some climatologists have labelled the time period between the end of World War II and today as the “Great Acceleration.” During this period, the global population doubled in just 50 years, while the size of the global economy grew by a factor of 15 (Steffen, Crutzen & McNeill, 2007). At the same time, the global CO2 concentration has not only increased to levels unprecedented in previous human experience, but the rate of that increase has sped up from year to year. In 2016 (the hottest global year on record), the rate of increase reached 2.24 ppm/year.

The question for us is, how high do we wish to allow the atmospheric CO2 concentration to go? For me, I have to say the data shown above is alarming. The fact that, in spite of the data above, we are still having discussions about “putting coal miners back to work” is terrifying.

“It will bring back manufacturing jobs across the country, coal jobs across the country. Across the energy sector, we have so much opportunity, George. And the last administration had an idea of keeping it in the ground. We need to be more independent, less reliant upon foreign energy sources. And this is an opportunity.” (EPA Head, Scott Pruitt, explaining to ABC News Anchor, George Stephanopoulos, the merits of President Trump’s executive order of March 28, 2017, seeking to redefine the government’s role in protecting the environment)

In a future episode in this series, we will get into the details of how historical temperature records have been created and linked to the CO2 concentrations above. But there is already enough information on this website to show that our prodigious CO2 production, if unabated, will lead to prodigious warming. The physics of the greenhouse effect are well understood and have been refined by scientists since the effect was first proposed in 1824. It is a mathematical certainty that more CO2 in the atmosphere will cause warming. As we saw in Episode 3, physicist Gilbert Plass used this known math and some of the first computers to predict in 1956 that the combustion of fossil fuels would lead to a warming of about 1 degree Celsius by around the year 2000, and that has come to pass.

In a 2013 paper, the respected climatologist James Hansen and co-workers calculated that the Earth’s fossil fuel reserves are sufficient to raise the average land surface temperature by 20 degrees Celsius (36 degrees Fahrenheit). Try adding that to the summer temperature where you live. Since humans require a wet bulb temperature less than 35 degrees Celsius (95 degrees Fahrenheit) to maintain body temperature, this temperature change would literally make most of the Earth uninhabitable for humans in the summer. As an engineer, it’s impossible for me to imagine a workable adaptation to this problem that could be accomplished on the short time scale over which this change is presently on track to occur. In fact, given the comfortable stability in CO2 concentration humans have “grown up” with, there is nothing to suggest our social systems are prepared to deal with many of the consequences of the rapid climate changes we would experience on the current trajectory. Our farm land will be moving toward the poles. (Will we then clear more carbon-absorbing forests as it moves?) Our most valuable coastal real estate will be submerged.

As for the consideration of jobs, I suspect it will always be plausible to make the argument that jobs in fossil fuel reliant sectors of our economy will be eliminated by shifting to more sustainable sources of energy. It seems to me that new jobs will be created making solar panels, solar concentrators, and wind turbines. With respect to energy independence, I would argue that the sun shines and wind blows in all regions of the Earth. In any case, given the conclusions of the last paragraph, it would seem the only reasonable conclusion is, yes, as much as it may pain us, we will need to leave much of our remaining fossil fuels in the ground.

Back to page contents


Episode 6. The “geologic eons of time”

“I absolutely do not believe in the science of man-caused climate change. It’s not proven by any stretch of the imagination. It’s far more likely that it’s sunspot activity or just something in the geologic eons of time.”
-My own U.S. Senator, Ron Johnson, R-WI (Journal Sentinel, August 16, 2010)

“It’s a very complex subject. I’m not sure anybody is ever going to really know.”
-Donald Trump (New York Times interview, November 22, 2016)

“I think that measuring with precision human activity on the climate is something very challenging to do…”
-Scott Pruitt, EPA Administrator (CNBC Interview, March 9, 2017)

Mr. Pruitt is right, of course. Measuring with precision [the influence of] human activity on the climate is indeed challenging. Just like landing folks on the moon and returning them safely home. Or sending automobile-sized robots to drive themselves around on Mars taking photographs and analyzing soil samples and sending the results back to us on Earth. Or making giant aluminum tubes with wings that can carry hundreds of people by air to destinations anywhere on the globe in 24 hours or less with a safety record better than that of horse-drawn carriages. Or eradicating smallpox. Or making it possible for most of us to communicate with one another using our voices, text, images or videos, globally, in real time and at a moment’s notice, with little wireless devices we carry around in our pockets.

Once you recall we have accomplished all those rather challenging things, you may not be shocked to learn we have, indeed, also measured with precision the influence of human activity on the climate. Not only that, as we have seen in previous episodes and will continue to see, scientists have made these high-precision measurements publicly available. Anyone with web access can download and review much of the data. The detailed methods with which the precision measurements were conducted, and the resulting data analyzed, are also publicly available in scientific publications, the quality of which has been verified through peer review (many of these are accessible as links on this website). Presumably, as Head of the United States Environmental Protection Agency, Mr. Pruitt has ready access to means for reviewing the precision measurements at his convenience.

And, as it turns out, we don’t have to speculate, as Senator Johnson evidently does, about mysterious somethings (sunspots maybe?) in the “geologic eons of time.” That’s because, as we saw in Episode 5 of this series, evidence of events during those “geologic eons” is available for study.

In Episode 5, we saw how tiny bubbles of old atmospheres, trapped and preserved in ice as deep as three quarters of a mile below ground at Law Dome, Antarctica, and extracted from ice cores, have enabled us to construct a measured record of atmospheric CO2 concentration over the past 2000 years. Thanks to the exceptionally high rate of snowfall at Law Dome, this 2000-year record has a very high resolution. But ice cores have been extracted at other locations in Antarctica, too, and some of those locations feature deeper ice.

Image credit: U.S. Department of Energy, Carbon Dioxide Information Analysis Center. Map of Antarctica showing locations of ice core drilling operations.

The deepest ice cores have been extracted by the European Project for Ice Coring in Antarctica (EPICA) at Dome C. EPICA has extracted ice cores about two miles deep at Dome C, and those ice cores contain air bubbles trapped up to 800,000 years ago. Additionally, a collaborative project between Russia, the U.S., and France extracted ice cores as deep as 2.25 miles below ground at Vostok station, from which have been captured atmospheric samples from up to 420,000 years ago. Combining CO2 measurements from the Dome C ice cores, Vostok ice cores, Law Dome ice cores, and direct atmospheric measurements at Mauna Loa and the South Pole gives us this continuous plot of atmospheric CO2 concentrations going back a whopping 800,000 years:

800kY 2017 update
(Updated 01-22-2018) Publicly available 800 KYr ice core data and Scripps ice core-merged data, downloaded and plotted by me. Original data sources: (A) Dome C (Luthi et al. 2008) measured at University of Bern; (B) Dome C (Siegenthaler et al. 2005) measured at University of Bern; (C) Dome C (Siegenthaler et al. 2005) measured at LGGE Grenoble; (D) Vostok (Petit et al. 1999, Pepin et al. 2001) measured at LGGE Grenoble; (E) Dome C (Monnin et al. 2001) measured at University of Bern; (F) Law Dome (Keeling et al. 2005, Meure et al. 2006); (G) Average yearly data from atmospheric sampling at Mauna Loa and South Pole (“Keeling Curve”); (H) Average Mauna Loa measurement of 2017 (406.6 ppm). Human and other hominid experience milestones added by me with reference to Wikipedia.

More details about the measurement methods and access to the data sets are available at this website and by clicking links to the original scientific publications in the caption above.

The green and blue colored data in the graph above are the 2000-year Law Dome measurements and direct atmospheric CO2 measurements since 1958, respectively, that we plotted in Episode 5. They are shoved way over to the right now, dwarfed in time by the massive amount of historical data collected from the deeper ice cores at Vostok and Dome C.

I’m not sure what Senator Johnson meant by “the geologic eons of time.” But, insofar as we are interested in how CO2 has changed over a time period of interest to the success and survival of humans on Earth, I’d say 800,000 years fits the bill. To put that in context, anatomically modern humans appeared on the planet only 200,000 years ago. So, the CO2 record above goes back 4 times as long as the entirety of human experience. In fact, it goes back 200,000 years longer than fossil evidence of Homo heidelbergensis, the hominid thought likely to be the common evolutionary ancestor of Neanderthals and humans. (I included these and some other human and hominid milestones on the graph above. I find this useful for the purpose of putting geological and human events in perspective.)

In Episode 5, we saw that, over the past 2000 years, humans experienced atmospheric CO2 concentrations between 272 and 284 ppm prior to the Industrial Revolutions when we started to burn gobs of fossil fuels. The data in this episode extends that range somewhat, to a human experience of 184-287 ppm. The maximum pre-industrial concentration in human experience occurred 126,000 years ago, and it was roughly matched at the time of the Second Industrial Revolution, when we started to burn oil at an industrial scale. Since then, it has been up and up, such that our CO2 level as of April 29, 2017 is 43% higher than the maximum CO2 level over the entire pre-industrial experience of humans spanning 200,000 years.

800kY plot 2 update 2017
Same plot of atmospheric CO2 concentrations over the past 800,000 years, showing the average pre-industrial CO2 concentration during that period (dashed line), the minimum and maximum pre-industrial concentrations during that period, and the minimum and maximum concentrations during all of pre-industrial human experience (that is, between about 200,000 years ago and the Industrial Revolutions).

And, in the context of the “geologic eons of time,” this is happening quickly! As we did for the shorter data set in Episode 5, we can take the derivative of the graph above to see the rate of change of the atmospheric CO2 concentration in parts per million per year:

Rate of change in atmospheric CO2 concentration in parts per million per year (ppm/year).

The answer is the same as we saw in Episode 5, but it’s all the more striking in the context of an 800,000 year record. Not only are we far above any “natural” CO2 level in the past 800,000 years, since the Industrial Revolutions we have been increasing that CO2 level at a rate much faster than Earth has experienced over at least that time period. And the rate of increase continues to accelerate.

When you hear about “controversy” in climate science, uncertainty about the Earth’s response to this super-fast rate of change is what it’s about. It’s not about whether CO2 from our burning of fossil fuels is causing global climate change. (It is.) The uncertainty (which the popular media may refer to as “controversy”) is about how extremely and how quickly Earth’s climate will respond to the rapid change in atmospheric CO2. Questions like: How quickly will the land-based ice sheets in Antarctica and Greenland melt, contributing to sea level rise? How much and how quickly will the reduced reflectivity of the Earth, as a result of the melting of the reflective snow and ice, contribute to additional warming?

To a scientist like me, experienced with rate-of-change graphs, the plot above is terrifying. It shows what we refer to as “going vertical,” that is, departing from the normal process at an accelerating rate. I am a product developer, experienced with defining and controlling the conditions required to manufacture new products. People like me want to keep a graph of a critical process parameter (in this case, CO2 concentration) within narrow limits. From this point of view, the Earth has “manufactured” humans. This has occurred, until very recently, within narrow limits of the atmospheric CO2 concentration. We are now departing rapidly from those narrow limits. As an engineer, I would say we need to get that critical process parameter back in control, as soon as possible. Otherwise, we risk a failure of our manufacturing process. Since the manufactured product, in this case, is us, we have a strong interest in getting the process under control.

In the next episode, we link the historical CO2 record directly to the global temperature record.

Back to page contents


Episode 7. Our global thermometer since 1850

In the last 3 episodes of our history of global climate change evidence, we’ve focused on measurement of Earth’s atmospheric CO2 record, finding in the last episode that it’s now over 40% higher than during the entire pre-industrial experience of the human species spanning over 200,000 years. But we have not checked in on global temperature measurements since Episode 3, where the intrepid steam engineer Guy Callendar (1961), along with Landsberg & Mitchell, Jr. (1961), had independently measured what appeared to be a slight but discernible warming between 1880 and the late 1950’s. You may also recall from Episode 3 that the physicist Gilbert Plass had used some of the first computers to refine calculations of infrared absorption by CO2, predicting we would observe about a 1 degree temperature increase between the years 1900 and 2000, whereupon we would also begin to observe obvious effects of climate change.

Well, the year 2000 has come and gone and we have thermometers all over the world. Let’s grade Dr. Plass’ work, shall we?

In the 1960’s and 1970’s, others continued to document surface temperature records from collections of meteorological stations, but the data were gathered primarily from stations in the Northern Hemisphere and there wasn’t a standardized method of obtaining a truly global temperature average. During that time, James Hansen, a physicist and astronomer at the NASA Goddard Institute for Space Studies, was studying the planet Venus. Specifically, he was calculating the influence of Venus’ thick atmosphere on its extremely hot surface temperature. (Fun fact: scientists believe Venus’ atmosphere several billion years ago was similar to Earth’s and it had liquid water on its surface, but Venus now has a thick atmosphere and a scorching surface temperature of 864 degrees Fahrenheit due to the occurrence of a runaway greenhouse effect.)

In the late 1970’s, Hansen turned his attention to similar calculations of the effects of Earth’s atmosphere on its surface temperature. As part of this work, he tackled the problem of creating a standardized method for calculating global average temperature trends. The method begins with the recognition that, while absolute temperatures are widely variable from place to place on the Earth, even for locations relatively close to one another, temperature changes of nearby locations tend to be very similar. For example, while the absolute temperatures in New York and Pittsburgh might be quite different on a particular day, if one is having a hotter than average month, the other is likely having a month hotter than average by around the same amount. Thus, global temperature trends are plotted, not as absolute temperatures, but as temperature differences, called “temperature anomalies,” relative to some reference temperature.

The second key element of the method is that the Earth’s surface is divided up into a grid formed by squares of equally spaced latitude and longitude lines, such that each square contains a sufficient number of weather stations to obtain an accurate record of historical temperature data. At any given time in history, then, the temperatures of the squares are averaged to get an estimate of the global average temperature. Various statistical methods are used to correct for errors, such as the known artificial urban warming around weather stations in or near cities. The gathering of sufficient, widespread temperature data to apply this method began in the late 1800’s. Hansen’s method was initially published in the peer-reviewed scientific journal, Science, in 1981, and has since been updated as the techniques have continued to improve (1987, 2010).
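To make the method concrete, here is a toy sketch of the anomaly-and-grid averaging just described. Everything in it is fabricated for illustration (five grid cells, made-up temperatures); the real analyses in the linked Hansen papers use many more cells, many stations per cell, and careful error corrections. The final step mirrors the re-baselining to a common reference period used in the plot below.

```python
import numpy as np

rng = np.random.default_rng(0)
lats  = np.array([60.0, 30.0, 0.0, -30.0, -60.0])   # grid-cell center latitudes
years = np.arange(1880, 2020)

# Fabricated absolute temperatures per cell: a slow warming trend plus noise.
temps = 14 + 0.008 * (years - 1880) + rng.normal(0, 0.3, (lats.size, years.size))

# 1. Convert each cell's record to anomalies relative to its own baseline
#    period (here 1901-1930, the baseline Callendar also used).
base  = (years >= 1901) & (years <= 1930)
anoms = temps - temps[:, base].mean(axis=1, keepdims=True)

# 2. Average the cells with cos(latitude) weights, because cells near the
#    poles cover less of the Earth's surface than cells near the equator.
w = np.cos(np.radians(lats))
global_anom = (anoms * w[:, None]).sum(axis=0) / w.sum()

# 3. Different groups publish against different baselines; to compare their
#    curves, re-center each series on a common period (e.g., 1891-2010).
ref = (years >= 1891) & (years <= 2010)
global_anom -= global_anom[ref].mean()
```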

Similar methods have now been applied independently by four major research groups. They make their data publicly available for download (see links in the caption below). Here are the four readings of the “global thermometer” (orange, pink, red, and purple lines) plotted on top of the global CO2 record (green and blue circles) we saw in Episode 5:

(Updated 6-6-2019) All data publicly available, downloaded and plotted by me. Green and blue circles: atmospheric CO2 concentration from Law Dome ice cores (green) and direct atmospheric sampling (blue) from Scripps (see figure captions in Episode 5 for detailed references). Orange line: Temperature anomaly, 1880-2017, according to U.S. NASA Goddard Institute for Space Studies (public data, reference). Pink line: Temperature anomaly, 1880-2017, according to U.S. NOAA National Climatic Data Center (public data, reference). Red line: Temperature anomaly, 1850-2017, according to U.K. Hadley Centre/Climate Research Unit (public data, reference). Purple line: Temperature anomaly, 1891-2017, according to Japan Meteorological Agency (public data, reference). All temperature anomalies re-scaled by me to be relative to a common reference baseline of the 1891-2010 average temperature.

Due to differences between the chosen data sources, gridding methods, and error correction methods used by the four independent groups (for details, see references in the caption above), the four temperature records are not identical. They show remarkable agreement, however. They generally have peaks and valleys in the same places, and their basic conclusions are all the same – the world is about 1.1 degrees Celsius warmer now than it was in pre-industrial times. Check out the video below, where the NASA and NOAA gridded data have been used to show how different parts of the globe have changed in temperature.

Video credit: NASA Goddard Space Flight Center (link to web page). Video using a color coding of NASA and NOAA gridded global temperature anomaly data to show how the Earth’s temperature has changed since 1880.

There is no obvious evidence of a “Chinese hoax” here. Instead, these appear to be the serious, well considered and extensively peer-reviewed conclusions of four independently funded and well-respected scientific groups (a British group, a Japanese group, and 2 U.S. groups – one of which, NASA, has brought us other generally well-regarded scientific achievements such as the moon landings).

In their 1981 paper, James Hansen and his coworkers calculated the temperature increase, relative to the global temperature around 1975, at which we would have a greater than 98% statistical confidence that global warming is “real” (not just a result of random temperature variations). That would be when the temperature rose above the light grey range in this graph, about 0.2 degrees C higher than the 1975 temperature, which the NASA scientists predicted would occur in the 1990’s.

Figure 7 from Hansen, et al. (1981). Calculation of the temperature change, relative to the temperature in the late 1970’s, at which our statistical confidence that global warming had exceeded previous natural variation would reach >85% confidence (represented by the dark grey range) and >98% (light grey range).

A look at the temperature data above shows that this had indeed occurred by the 1990’s. Now, we are a full 0.8-1.0 degrees C above the 1975 temperature, and there can really be no doubt.

Strikingly, the temperature graphs above have almost exactly the same shape as the CO2 graph! But, if we’ve been paying attention to our history of evidence, this should not be a surprise. Rather, it should be a confirmation of our expectations. Sure, the Earth’s climate is a highly complex system, and there have been real questions about things like the role of the deep oceans, as we saw in Episode 3. But those questions were settled by around 1960, by which time Dave Keeling had also begun direct measurements of the atmospheric CO2 concentration. Once we see CO2 going up, we expect warming with mathematical certainty. Based on physics known since the early 1800’s, CO2 absorbs infrared radiation emitted from the Earth’s surface, trapping heat in the atmosphere. It’s as simple as that. At the end of the day, the basic physics driving global warming are far simpler than those at work every moment inside your smart phone.
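To put a rough number on that expectation: a widely used approximation (Myhre, et al., 1998) says the extra heat trapped by CO2 grows with the logarithm of the concentration ratio. Here is a back-of-envelope version in Python, assuming (and this is an illustrative simplification, not a measurement) an equilibrium climate sensitivity of about 0.8 degrees C per W/m^2:

```python
import math

# Simplified logarithmic forcing approximation for CO2 (Myhre et al., 1998)
C0 = 290.0   # CO2 concentration circa 1900, ppm
C = 410.0    # approximate present-day CO2 concentration, ppm
forcing = 5.35 * math.log(C / C0)   # extra trapped heat, W/m^2 (~1.85)

sensitivity = 0.8                   # assumed warming per unit forcing, degC per (W/m^2)
print(f"Forcing: {forcing:.2f} W/m^2")
print(f"Implied equilibrium warming: {forcing * sensitivity:.1f} degC")  # ~1.5 degC
```

That crude estimate lands in the same ballpark as the observed ~1.1 degrees C of warming, with the oceans’ thermal lag plausibly accounting for much of the difference.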

Anyone denying the reality of global warming would not only have to explain why at least four formidable groups of well-respected scientists, not evidently influenced by Chinese hoaxers, don’t know how to process data from thermometers. They would also need to explain how the undeniable increase in atmospheric CO2 from the combustion of fossil fuels has somehow not resulted in warming, when anyone with a basic laboratory infrared instrument can verify the infrared absorption of CO2. In fact, did you notice in Episode 4 how the weekly atmospheric CO2 concentration at Mauna Loa is measured? By the infrared absorption of collected air samples! So, every time we measure the CO2 concentration of the atmosphere, by a method precise enough to reveal the seasonal respiration of plants, we verify the very physical phenomenon that drives global warming!

OK, so it’s time to grade Dr. Gilbert Plass’ 1956 prediction: around 1 degree Celsius of warming between 1900 and 2000, along with readily observable effects of global climate change, due to infrared absorption by increased atmospheric CO2. The verdict?

  • Actual warming between 1900 and 2000? Around 0.8 degrees C. Not bad. Maybe an A-. But pretty impressive given that Dr. Plass was using the world’s very first computers and considering only the effects of infrared absorption by CO2.
  • Readily observable effects of global climate change? Absolutely.

Back to page contents


Episode 8. Of islanders, aliens, and frogs. A cosmic test for humanity. Part 1.

Though the details are not known for certain, most who have studied it believe the first inhabitants of the island, numbering not more than 150 people, arrived between 400 and 1200 AD in wooden canoes from previously settled Pacific islands that may have been as far as 2,000 miles away. They were highly skilled seagoing navigators who, over previous generations, had used detailed observations of the sun, stars, seabird behavior, wave formations, winds, and weather, together with extensive accumulated knowledge maintained in oral tradition and songs, to discover nearly every island in the vast Polynesian Triangle of the Pacific. They had arrived at a faraway corner of that explored territory, to this day among the most remote inhabited locations on Earth. The first people off the boats encountered an isolated tropical paradise, forested with multiple species of up to 50-foot trees, including possibly the largest palm trees in the world, and populated by six species of indigenous land birds.

Ancient Rapa Nui
Image credit: Wikipedia. A digital recreation of the island’s ancient landscape.

Undaunted by their isolation, they set about rapidly building a complex, vibrant, and thriving agricultural civilization on the island; it would eventually reach a population of 10,000-15,000 people. Oral tradition, later recorded by European missionaries, held that nine separate clans, each with its own chief, were ruled over by a high chief, the eldest of the first-born descendants of Hotu Matu’a, the island’s legendary founder. Over generations, the clans paid homage to their ancestors by erecting over a hundred giant stone monuments, unique in the world, up to 32 feet tall and weighing as much as 90 tons. The precise methods by which these Stone Agers accomplished that impressive feat, testifying to their ingenuity and artistry, a deep spirituality, and a cooperative society with an apparent luxury of time and resources, are still a matter of controversy and wonder in today’s digital age.

But, by just a century after the island civilization’s peak, things had gone terribly awry.

When Dutch explorers happened upon the island on April 5 (Easter Sunday), 1722, they found a largely bald landscape, with no tree over 10 feet tall, and a population of 2,000-3,000 inhabitants living in a radically diminished condition. With no wood left capable of making a seaworthy vessel, the islanders were stranded and had lost much of their former fishing range. Twenty-one tree species and all of the indigenous land birds were extinct.

Easter Island today
Image credit: Wikipedia. Modern Easter Island. Impressive statues (moai). Mostly treeless.

What happened on Easter Island during that tragic 100 years?

Again, the details are debated, but the broad strokes are fairly clear. Some combination of intensified agriculture to support the expanding population, rats, monument construction, and possibly climate change caused the rapid and nearly complete deforestation of the island. Aside from the clearing of forests for fuel and agriculture, large trees may have been felled as rollers to transport the heavy monuments. Polynesian rats, stowaways on the boats that had carried the original settlers, had no real predators on the island and ate the trees’ seeds. Some speculate that the Little Ice Age, beginning around 1650, may have additionally stressed the large palm trees. Without the protection of the trees, the fertile topsoil began to dry up and blow away. Over-hunting of the land birds by humans, or rats, or both drove them to extinction while, at the same time, access to fish protein was dramatically reduced by the loss of wood for large boats. As resource depletion continued, the society’s organized class system gave way to loosely organized, warrior-led bands that fought fiercely and took to toppling each other’s statues in anger.

Toppled moai
Image credit: Wikimedia Commons. A toppled moai lies face down in the foreground of a treeless landscape.

The absence of written records on the island leaves much room to speculate about the state of mind of the people who inflicted on themselves such a seemingly predictable fate. The island is some 15 miles across at its widest point; a single person could survey the state of its entirety in a matter of a few days. Yet, someone cut down the very last of its remaining trees, thus eliminating the possibility of escape from a deteriorating environment. What were they thinking as they did that?

  • Were they our proverbial frog in a soup pot, unaware of their developing crisis because it appeared to occur slowly? Had each succeeding generation become accustomed to a “new normal” with fewer trees, less productive farms, and less cultural emphasis on fishing, until it seemed like no big deal to fell the last few remaining trees on the island?
  • Were they engaged in internal conflict, such that the folks who felled the last few trees believed if they didn’t someone else surely would?
  • Had the tree-felling folks successfully convinced the chiefs that the alternatives were too costly, or that the rumored decline of farmland productivity was a hoax?
  • Did they believe they would be supernaturally delivered from their declining state by the deified ancestors to whom their statues paid homage, such that they may have felled the very last trees as rollers to transport statues they hoped would hasten their deliverance?
  • Did they simply place more value on the now than on the future?
  • Did they fell the last trees in deep sorrow, having realized their fate but, after concerted effort, having failed to come up with any social or technological solutions to the problem of the rats and their need for fuel?

We will almost certainly never know the answers to these questions but, as we shall see, reflecting on them will be important to our own future.

It’s difficult to get inside the heads of ancient people as they faced the intensifying degradation of their environment’s ability to support them. But it turns out the process can be understood, modeled, and even predicted using cold, hard math. In the graph below, Bill Basener, an applied mathematician, and coworkers modeled the interaction between a human population (top equation) and the resources on which the humans depend (bottom equation). The human population, P, grows at a growth rate, a, but its survivability is constrained by access to resources, R (in the case of Easter Island, primarily arable farmland and trees). The resources self-replenish at growth rate, c, have a maximum carrying capacity on the island, K, and are harvested at a per-person rate, h. Solving this pair of equations using assumptions appropriate to Easter Island (see details in the figure caption below) yields a population curve, represented by the solid line, which closely matches actual population values estimated from the archaeological record on the island (x’s on the graph).

Easter Island population
Graph of the calculated population of Easter Island, adapted from Figure 1 of Basener & Ross (2005), with inset equations from Basener, et al. (2008). P on the y-axis is the number of people on the island; t on the x-axis is the year. “x” marks in the graph are estimated actual populations according to archaeological evidence. The top equation is the equation of the line on the graph, where Pn is the number of people on the island in year n. The bottom equation is the equation for the resources on the island, where Rn is the maximum number of people that can be supported by the available resources in year n. The model begins in the year 400, with Pn = 50 people and Rn = 70,000 (a value consistent with the estimated starting amount of arable land on the island). a is the fractional annual population growth = 0.0044 in this model (a value consistent with other pre-WWII human populations). c = 0.001 is the fractional annual self-replenishment rate of the resources. h = 0.025 is the annual rate at which each person harvests the resources. K = 70,000 is the environment’s maximum carrying capacity for resources in the absence of people.
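Since the inset equations don’t reproduce well here, below is a minimal Python sketch of the model, assuming the standard logistic form of the Basener-Ross equations (my reading of the papers cited in the caption) and using the caption’s parameter values:

```python
def simulate_island(P0=50.0, R0=70_000.0, a=0.0044, c=0.001, h=0.025,
                    K=70_000.0, start_year=400, end_year=1900):
    """Iterate the coupled population/resource difference equations.

    P: people on the island. R: resources, in units of the number of
    people they can support. Assumed form (after Basener & Ross, 2005):
        P[n+1] = P[n] + a * P[n] * (1 - P[n] / R[n])
        R[n+1] = R[n] + c * R[n] * (1 - R[n] / K) - h * P[n]
    """
    P, R, history = P0, R0, []
    for year in range(start_year, end_year + 1):
        history.append((year, P, R))
        P_next = P + a * P * (1 - P / R)
        R_next = R + c * R * (1 - R / K) - h * P
        P, R = max(P_next, 0.0), max(R_next, 1e-9)  # clamp to avoid negative values
    return history

for year, P, R in simulate_island()[::250]:
    print(f"{year:4d}: population ~{P:7.0f}, resources ~{R:7.0f}")
```

With the caption’s numbers, the population climbs for roughly a millennium, peaks above 10,000, and then crashes as the resource term collapses, just as in the figure.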

There it is, in stark, mathematical detail. The math is relatively simple and based on common-sense assumptions about how a population of people and the resources they depend on might interact. And it can’t be far off: it matches the archaeological record quite closely at key times (when the population was large, and after it rapidly became small again). The simplified math doesn’t perfectly capture the ultimate fate of the Easter Island people; in fact, the population didn’t fall to zero. The archaeological record shows that people shifted to resources they hadn’t used earlier (including rats), enabling the survival of a fraction of the population, in a state of relative poverty, into the 1700’s. After that, the population was significantly affected, both negatively and positively, by contact with foreigners, and the present population of around 6,000 people relies significantly on resources from outside the island.

Still, the complex native civilization on the island, built over a millennium, ended abruptly, over the course of about 100 years. It’s rather sobering to consider the human experiences that must have attended the rapid, downward sweep at the right side of the graph. An intensifying scarcity of fuel and food. A cooperative, seagoing civilization giving way to increasingly separate, competing bands. (Isolationism.) Proud farmers and fishers learning to survive on native grasses and rats. Fierce internal conflicts. Starvation.

One wonders if, at the height of the civilization’s powers in the 1600’s, thriving but surely confronted with mounting and ever more visible evidence of their environmental impacts, the people of Easter Island sensed the coming catastrophe.

It wasn’t the only mathematically possible outcome.

Indeed, according to some mathematically possible scenarios, the original Easter Island population would still be going strong to this day, even indefinitely. Below are four mathematically possible scenarios according to the equations above. In each, the island’s resources appear as a green line, while the human population is represented by the black line.

Easter Island Scenarios
Four mathematically possible scenarios for Easter Island, according to the coupled population and resource equations of Basener, et al. (2008). (a) The base scenario, presented in the previous figure, that closely matches the historical record at key points. The self-replenishment rate of the resources is exceeded by the rate at which humans harvest them (c < h), resulting in exponential disappearance of the resources coincident with the exponential human population growth, and ultimately leading to collapse before the year 1800. (b) A scenario in which the growth rate of the human population, a, is reduced to 1/4 that of (a). Inevitable collapse still occurs, but the human civilization lasts much longer, until after the year 4400. (c) A scenario with the same population growth as (a), but in which the humans support themselves using a resource that self-replenishes faster than it is harvested (c > h). Following an exponential growth phase, the human population and resources come into balance with one another, and the civilization survives indefinitely. (d) A scenario that starts out identically to the slow-growing but resource-intensive scenario (b). Around the year 4400, an advanced civilization finds a way to transition to a more quickly replenishing resource (c > h), ensuring its lasting survival.

The historically representative scenario is shown in (a), where the root of the problem is easy to see: the green resource curve falls exponentially as the human population rises exponentially. Before 1800, the resources are depleted, and the human civilization rapidly collapses.

In (b), the people manage to control their population growth rate to only 1/4 of the growth rate in (a). This might be done, for example, by instituting some version of China’s “One Child” law. Inevitable collapse still occurs, but it takes a lot longer — until after the year 4400.

In (c), the population growth is the same as in (a), but the people manage to sustain themselves using a resource that self-replenishes more quickly than it’s harvested. After an initial growth phase, the human population comes into balance with the island’s resources, and both live on indefinitely.

A key lesson of scenarios (b) and (c) is that controlling population growth, which is what we often think of first when considering Earth’s limits, is not in itself sufficient to avoid eventual civilization collapse. Rather, the key to indefinite survival is finding a way to live on resources that self-replenish faster than they are consumed.

Controlling population growth can give people time to think, however. Scenario (d) starts the same way as (b), with a slow-growing population that is consuming resources much faster than they are replenished. Around the year 4400, however, a technologically advanced civilization comes to understand its predicament and finds a way to consume resources that self-replenish faster than they are harvested. (Perhaps the humans learn to practice irrigation and discover a tree species that grows very rapidly.) This transition ensures the civilization’s indefinite survival on the island.
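For what it’s worth, panels (a) through (c) correspond to one-parameter changes in the sketch above (again, under my assumed form of the equations); panel (d) additionally requires changing c partway through the run:

```python
boom_and_bust = simulate_island()                              # (a): c < h, collapse before 1800
slow_growth   = simulate_island(a=0.0044 / 4, end_year=5000)   # (b): slower growth, collapse much later
sustainable   = simulate_island(c=0.03, end_year=5000)         # (c): c > h, population levels off
# (d): run (b) until ~4400, then continue with c > h; a small
# extension of simulate_island (a switch year for c) does the trick.
```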

But why are we talking about ancient islanders on a blog about modern global climate change? What do we have to learn from mathematically modeling the fate of these archaic people?

20130722_annotated_earth-moon_from_saturn_1920x1080
Image credit: NASA/JPL-Caltech/Space Science Institute. Our Earth appears as a pale blue dot in this photo taken by the Cassini spacecraft on July 19, 2013, orbiting Saturn at a distance of 898 million miles from Earth. We, all of us, are floating together on an isolated, finite celestial island in space.

Well, of course, we live on a finite, isolated paradise. An island in space. In a project that has taken several thousand years, we have built a complex, thriving human civilization. The only intelligent civilization we know of, capable of pondering its origins and meaning and sending spacecraft to take photos of itself from 898 million miles away.

But we have to imagine our civilization is constrained by the limits of our island, which our population has rapidly filled up. And, as we have seen in previous episodes of this series, there are troubling signs that we are nearing those limits. Can the lessons and math of Easter Island be applied to our bigger island?

Can we discover math to avoid the Easter Islanders’ fate?

We look at that in Episode 9. Stay tuned…

Back to page contents


Episode 9. Of islanders, aliens, and frogs. A cosmic test for humanity. Part 2.

“But where are they?” exclaimed the physicist, Enrico Fermi. He was sitting at lunch one summer day in 1950 at the Los Alamos National Laboratory with another physicist, Edward Teller, and two nuclear scientists, Emil Konopinski and Herbert York. Each of the scientists had contributed vitally to the creation of the atomic bombs that ended World War II. (Incidentally, 2 of the 4 scientists were American immigrants, 1 was a first-generation American, and 1 was part Native American. You might say they perfectly represented the strength of the American melting pot.)

Improbably, Fermi was referring to space aliens.

Intrigued? If not, you’re brain dead! Everyone wonders about alien stuff!

It was the continuation of a conversation about the possibilities and limitations of interstellar travel that had started earlier in the day, and the reasoning was this:

  • Our galaxy contains billions of stars like our sun, and many of those other suns are billions of years older than ours;
  • If it’s common for stars like our sun to have Earth-like planets, some of those may have developed intelligent life and civilizations like our own;
  • Some of those civilizations may have had a “head start” of billions of years on ours;
  • They may have developed interstellar travel (which didn’t seem like a great stretch: these same 4 guys, having recently unlocked the secrets of nuclear fission, had been discussing interstellar travel earlier that day, and other civilizations might have had billions of years longer to think about it; consider the strides we’ve made in transportation in just a couple hundred years);
  • Even at a “slow” pace of a fraction of the speed of light, it would only take a few million years for an interstellar-travelling civilization to cross the entirety of our Milky Way galaxy;
  • So why hadn’t space aliens already landed on the White House lawn? Why hadn’t we seen evidence of them with telescopes, etc.?
  • In short, “Where are they?”

The above set of questions would later be called the Fermi paradox. Today, there is a Wikipedia article about it, and it’s the subject of systematic and active study by astronomers, cosmologists, and astrobiologists. At the time Fermi first posed the question in 1950, it was immediately compelling to many scientists. Given a growing realization among them that “Earth-like” (wet, warm) planets were possibly abundant in our galaxy, it suggested that there might be some “Great Filter” that prevented intelligent civilizations like ours from either arising or lasting long on those planets. Optimistically, the Great Filter was something related to the biological evolution of intelligent life, making us a unique or very rare success story. Pessimistically, intelligent life was fairly common, but the Great Filter was some existential challenge that prevents intelligent civilizations from lasting very long. You have to imagine scientists in the 1950’s thinking, “like they all discover nuclear fission and then get in a fight and blow themselves up.” It was a conundrum, and the answer seemed important to the fate of humanity. Searching for evidence of the other civilizations that, it seemed, should be out there began to look like a worthwhile scientific effort.

A few years later, in 1959, a pair of physicists, Giuseppe Cocconi and Philip Morrison, published a paper in one of the most selective scientific journals, Nature, making the case that the best way to look for alien civilizations was to search for radio signals from them. Unlike light, which is blocked by interstellar dust, radio waves travel unobstructed through great distances of space. And, they reasoned, a civilization as advanced as or more advanced than ours would likely have learned to manipulate and communicate with radio as we have. Thus was born an international radio astronomy effort that persists to this day as the Search for Extraterrestrial Intelligence (SETI). By 1961, a group of scientists led by the astrophysicist Frank Drake, and including a young Carl Sagan (whom some readers will remember as the writer and presenter of the 1980’s TV series, Cosmos), had boiled the Fermi paradox down to a short mathematical equation, called the Drake equation, that could be very efficiently written on a postage stamp:

N = R* × fp × np × fL × fi × fc × L

  • N is what the scientists wanted to know, the number of alien civilizations we can detect radio signals from;
  • R* is the rate of star formation (number of stars that form each year);
  • fp is the fraction of those stars with planets;
  • np is the average number of planets, given a star has them, that are “Earth-like” (where life could potentially form);
  • fL is the fraction of those planets on which life does form;
  • fi is the fraction of those planets on which the life evolves intelligence;
  • fc is the fraction of those intelligent species that develop civilizations involving radio communications; and
  • L (rather frighteningly) is the average lifetime of those civilizations.

This may seem silly, but it’s not. Boiling a nebulous question like, “Where are the aliens?” down into a set of (at least potentially) quantifiable factors turned one big question into several smaller questions that different groups of scientists could actually work on.
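To make that concrete, here is the factorization as code. The function is just a product; aside from the roughly measured star and planet terms, every input below is an illustrative guess (as the text says, the last four factors are simply unknown), chosen only to show how the pieces combine:

```python
def drake(R_star, f_p, n_p, f_L, f_i, f_c, L):
    """Drake equation: expected number of detectable civilizations."""
    return R_star * f_p * n_p * f_L * f_i * f_c * L

N = drake(R_star=2.0,  # stars formed per year in our galaxy (rough)
          f_p=1.0,     # fraction of stars with planets (measured; see below)
          n_p=0.2,     # habitable planets per star (measured; see below)
          f_L=0.1,     # guess: 10% of habitable planets develop life
          f_i=0.01,    # guess: 1% of those evolve intelligence
          f_c=0.1,     # guess: 10% of those build radio-capable civilizations
          L=10_000)    # guess: average civilization lifetime, in years
print(N)  # -> 0.4 with these guesses; nudge any factor and N swings wildly
```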

And they have. NASA’s many robotic missions in the decades since have had a variety of scientific objectives, some of the most important of which have literally been to measure key terms in the Drake equation.

Starting in our own neighborhood, we’ve conducted a multitude of robotic missions to Mars, culminating in the landing and operation of 4 remote-controlled rovers (the largest of them car-sized), 2 of which are driving around now. Geological evidence collected by these rovers has conclusively shown that, between 3 and 4 billion years ago, Mars had standing bodies of liquid water and water rushing across parts of its surface. Mars today has weather and a climate, and climate models like those used on Earth actively predict its day-to-day weather conditions.

victoria2_opportunity
Image credit: NASA/JPL-Caltech/Cornell Univ. The steep cliffs of Victoria crater, showing layers of exposed bedrock providing a geological record of billions of years that NASA’s mobile geology lab Opportunity studied in 2006.

Our other planetary neighbor, Venus, has proven harder to explore in detail due to its current harsh surface conditions, but recent NASA models consistent with data from robotic missions and Earth-based observations suggest it could have had a water ocean and habitable temperatures for as much as 2 billion years of its early lifetime before it became a hellish place due to a runaway greenhouse effect.

We’ve also looked much further afield in search of planets beyond our solar system. Since 1995, astronomers have been able to observe stars with sufficient precision to detect the “wobble” caused by planets orbiting them. Since 2009, the Kepler space telescope has hunted planets by staring at a field of stars long enough to see the cyclic dimming as orbiting planets pass in front of them. The nature of each planet and its distance from its sun can be sorted out based on the extent and periodicity of the dimming.
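The sorting-out arithmetic is surprisingly simple, at least to first order. The fractional dimming gives the planet’s size (the blocked light scales as the square of the planet-to-star radius ratio), and the time between dips gives the orbital distance through Kepler’s third law. A minimal sketch, assuming a star with the Sun’s mass and radius and a circular orbit:

```python
import math

SUN_RADIUS_IN_EARTH_RADII = 109.2

def planet_from_transit(depth, period_years):
    """Back out planet size and orbital distance from a transit signal.

    depth: fractional dip in starlight (e.g., 0.0001 for a 0.01% dip)
    period_years: time between successive dips
    Assumes a Sun-like star and a circular orbit.
    """
    radius_earths = math.sqrt(depth) * SUN_RADIUS_IN_EARTH_RADII  # depth = (Rp/Rs)^2
    orbit_au = period_years ** (2.0 / 3.0)  # Kepler's third law: a^3 = T^2 (solar units)
    return radius_earths, orbit_au

# An Earth twin: a ~0.0084% dip repeating once per (Earth) year
print(planet_from_transit(depth=0.000084, period_years=1.0))  # -> (~1.0 Earth radii, ~1.0 AU)
```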

Based on these explorations, we now know the following:

  • Our own solar system, in its history, has hosted at least 2 and maybe 3 habitable planets.
  • Planets are not static, and the habitability of a planet can change. Mars was once habitable, and now it isn’t (at least for complex life). Earth wasn’t always habitable for us. Before the Great Oxidation Event about 2 billion years ago, when photosynthesizing bacteria began filling the atmosphere with oxygen, we wouldn’t have been able to breathe.
  • fp in the Drake equation, the fraction of stars with planets, is about 1. Just about every star you see in the night sky hosts at least 1 planet.
  • np in the Drake equation, the average number of planets in a star’s “habitable zone” where water would be liquid, is about 0.2. That is, about 1 out of every 5 stars hosts a world that has the right temperature for life like our own.

To me, these discoveries are incredible. There are a lot of planets that could potentially host life, and even intelligent civilizations! But what are the chances that any one of them develops life, intelligent life, and a civilization like ours? And, how long does such a civilization typically last? We are still lacking any good information about the last 4 terms in the Drake equation (fL, fi, fc, and L). It would seem we will be lacking that information for some time, since we so far know of only one intelligent civilization (us), and getting to other stars in reasonable time frames remains a substantial technical challenge.

In a 2016 paper, astrophysicist Adam Frank and astrobiologist Woody Sullivan showed that we can still draw important conclusions about alien civilizations (conclusions relevant to our own project of civilization) with what we know now. The two scientists re-arranged the Drake equation to ask, not how many alien civilizations exist now, but an easier and still interesting question: How many alien civilizations have ever existed in the history of the observable universe? This question can be expressed in a re-formulated Drake equation:

N_ever = N* × fp × np × fL × fi × fc

Now, N* is the total number of stars, which we know: there are about 2×10^22 in the observable universe. We know the next two terms from the astronomy work described above; we have no idea about the following three terms; and the pesky L term (the average lifetime of intelligent civilizations) drops away, because we are asking how many civilizations have ever existed in the observable universe, not how many exist right now. Frank & Sullivan combined the three remaining unknown terms into a single fraction, fbt: the fraction of habitable-zone planets, over the history of the universe, that go on to develop life and, from it, a technological civilization:

N_ever = N* × fp × np × fbt

Now, we can ask and answer the question, “What is the likelihood that we are alone in the history of the observable universe?” That is, what would fbt, the probability that a habitable planet develops a technological civilization, have to be in order for N_ever to be just 1? We can calculate the answer based on what we know:

fbt = 1 / (N* × fp × np) = 1 / (2×10^22 × 1.0 × 0.2) = 2.5×10^-22

That is, for us to be alone in the history of the universe, the probability of a physically habitable planet developing a technological civilization would have to be less than 2.5×10^-22, less than a 0.000000000000000000025% chance. To put that in context, the chance you will be struck by lightning 3 times during your lifetime is about 1×10^-12. That’s a probability 4 billion times larger than fbt would have to be, given the sheer number of habitable planets, for us to be the lone civilization to have developed in the history of the universe. If we are the only one, then nature would have to be incredibly biased against the development of intelligent life.

There are optimists and pessimists. Since we’re talking probabilities, we’re all free to think what we want. As for me,

  • I don’t generally fret that I might get struck by lightning 3 times; and
  • The above math makes me think we are almost certainly not the only intelligent civilization to have faced the challenges of filling up its home planet. In fact, there have very likely been thousands. This is an amazing conclusion to ponder.

What wisdom can we glean from that knowledge? What common experiences might we share with, perhaps, thousands of alien civilizations possibly living, or having lived, on worlds our telescopes have already seen? Without crossing over to science fiction, what can we say we know about what an alien civilization might be like?

We know a defining feature of any civilization like ours would be its ability to harness energy from its planet’s star. Prior to civilization, each person had the energy of one person with which to do stuff. Now, if you live in an industrialized country, you probably use the equivalent of about 50 people’s energy every day just to control the temperature in your house. If you jump in your car that gets 25 mi/gal and start driving, you immediately begin using the energy equivalent of about 12 people. A fundamental feature of a civilization like ours is the ability to magnify our power by harnessing and directing a star’s energy. That’s exactly what we’re doing whenever we make a bonfire, drive a car, fly on an airplane, or send a text message. Dolphins and chimpanzees are smart. They use tools and communicate through language. But they don’t build fires. Each dolphin or chimpanzee has exactly the energy of one dolphin or chimpanzee at its disposal. They don’t harness additional energy from the sun; hence, they are not civilized.

Since we’ve spent decades studying other planets, we know pretty much for certain what sources of energy would be available for any intelligent aliens seeking to develop a civilization:

  • Burning stuff (combustion). We started by burning trees, which stored energy from the sun and converted it to biomass over a period of years. Later, we discovered our Earth had given us a great gift: fossilized biomass (oil, gas, coal) from millions of years of its previous experiments with life. Most of the energy that fuels our civilization still comes from burning stuff.
  • Hydro/Tides. If a planet has water or other liquids flowing on its surface, that motion can be harnessed to generate energy.
  • Wind. If the planet’s atmosphere generates wind, the wind can be used to harvest energy.
  • Solar. The planet’s star’s radiation energy can be directly harvested by low-tech (think black plastic), high-tech (think photo-voltaic), or organic (think photosynthesis) methods.
  • Geothermal. Heat from deep within the planet, left over from the planet’s formation, generated by the decay of radioactive elements, or produced by tidal flexing, can be tapped by a civilization on the planet’s surface.
  • Nuclear. If the planet has stores of radioactive elements like uranium, the energy released when their atoms are split by nuclear fission can be captured and used. We also believe we may someday generate energy by nuclear fusion, by fusing hydrogen nuclei to make helium like the stars themselves do. But we haven’t yet proven it.

That’s pretty much the whole list. We know, because we’ve studied lots of planets and stars astronomically or sent robots to visit them. Energy in the universe comes from stars; stars shine on planets and pull on them gravitationally; and those are the ways of directing a star’s energy if you live on a planet.

To the extent that it’s successful, any alien civilization’s energy use will eventually affect its home planet. Some methods of energy use will affect the planet more strongly than others. In our case, as we’ve been studying in this series, burning stuff creates carbon dioxide, which traps outgoing infrared radiation, heating the atmosphere and then the oceans.

What happens when the aliens’ population gets large, when it begins to fill up its home planet and when its energy use begins to create significant planetary responses?

In a 2018 paper, Adam Frank and three other scientists applied to this question the same type of math that was applied to Easter Island in Episode 8. In that episode, two linked mathematical equations, describing the growth of the human population on the island and the growth and consumption of the resources on which the humans depended, closely reproduced the early growth and eventual catastrophic collapse of the human population captured in the archaeological record.

Here, the scientists applied the same type of math to an entire planet (also a sort of island) inhabited by an intelligent, civilization building population. One equation modeled the population growth and its consumption of energy. A second, interdependent equation modeled the response (temperature rise) of the planet to the method of energy generation. There were two means of generating energy. The first (like fossil fuel combustion) caused a strong planetary response. The second (like solar) caused a mild planetary response. At some point (either soon or late after detecting the planetary response), the population could switch its energy generation method from the first method to the second method.
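The authors’ full model has more going on, but its flavor can be captured in a toy version. Below is a minimal Python sketch (my own simplification, not the paper’s actual equations): the population grows logistically against a carrying capacity that shrinks as the planetary temperature rises, and the temperature is forced in proportion to energy use, with the forcing dropping when the civilization switches sources:

```python
def civilization(switch_year, f_high=0.0005, f_low=0.00005,
                 growth=0.05, recovery=0.01, years=600):
    """Toy coupled civilization/planet model in the spirit of Frank, et al. (2018).

    pop: population (arbitrary units); temp: planetary temperature anomaly.
    The carrying capacity shrinks as temp rises; temp rises in proportion
    to the population's energy use and relaxes slowly on its own. The
    forcing coefficient drops at switch_year (the move to low-impact energy).
    """
    pop, temp, trajectory = 1.0, 0.0, []
    for year in range(years):
        capacity = max(100.0 * (1.0 - 0.5 * temp), 1.0)  # hotter planet supports fewer people
        forcing = f_high if year < switch_year else f_low
        pop = max(pop + growth * pop * (1.0 - pop / capacity), 0.0)
        temp = max(temp + forcing * pop - recovery * temp, 0.0)
        trajectory.append((year, pop, temp))
    return trajectory

soft_landing = civilization(switch_year=120)  # early switch: eases onto a plateau, like plot B
die_off = civilization(switch_year=300)       # late switch: overshoot and die-off, like plot A
```

Run with an early switch_year, the population levels off gently; run late, and it overshoots, crashes to a fraction of its peak, and only then stabilizes. The published model adds more realism, but the time sensitivity of the switch is already visible in the toy.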

The scientists found four broad categories of solutions to their equations, depending on the rate at which the planet responded to the population’s energy generation and the timing of the population’s switch to the lower impact energy resource. The four different types of outcomes are shown in the plots below, where the solid line is the population and the dashed line is the planetary “temperature.”

Frank Figure
Figure from Chapter 5 of Adam Frank’s 2018 book, which I recommend. Four types of histories for an intelligent civilization on a planet with two sources of energy (one with a high impact on the planet and one with a low impact). The civilization starts by using the high-impact source and switches to the low-impact one at some point. The size of the surviving population (if any) depends on the interplay between the speed of the planet’s response to using the first source, as well as the timing of the switch.

Rather frighteningly, the most common outcome was some extent of a “die-off,” as shown in plot A. In these numerous scenarios, growth in high impact energy use drove a significant change in the planetary state that strongly reduced the planet’s capacity to support the population, even after the population switched to the lower impact energy source. This often resulted in significant population reduction before restoration of the planet to an equilibrium (though changed) state; in plot A, the surviving population was only about a third of the peak population. Two out of every 3 people died during the collapse.

Some simulations provided hope. Populations that switched to the more sustainable energy source early enough, relative to the planetary response, were able to achieve a soft approach to a sustained population, as shown in plot B.

Populations unable or unwilling to change to a sustainable resource were doomed to collapse, as shown in plot C. The time before collapse was dictated only by the rate of response of the planet.

Perhaps scariest of all was the type of scenario shown in plot D. In these scenarios, the population switched to the more sustainable resource, but too late. The planetary system and the population appeared to begin to stabilize, before suddenly rushing to collapse as planetary feedbacks took over. This, I think, is one of the most underappreciated features of climate change. Geology is slow. We can already see glimmers of the kinds of positive feedbacks that, once underway, could drive our planet to a very different state despite our best efforts. Arctic ice melts, reducing the solar reflectivity of the entire Arctic region of the Earth and causing the Earth to absorb more solar radiation. Thawing Arctic permafrost releases formerly trapped methane, a greenhouse gas roughly 20 times as potent as carbon dioxide.

We can’t necessarily call off climate change when we decide we’ve had enough. The decision is time sensitive.

A cosmic test.

Why are we talking about aliens?

Here’s the thing. In American politics (which appear to be mirroring the politics of the democratic world), we are currently divided into two tribes. The voice of each tribe, through monetary distortions of the systems that elect our politicians, is dominated by the most extreme elements of the tribe. Each tribe is telling itself a story about climate change.

The story being told by the extreme elements of each tribe is objectively wrong.

The Story of Tribe #1. Humans are greedy. We are carelessly, wantonly destroying the Earth. From the beginning, the burning of fossil fuels was a nasty, unnatural method of fueling the fires of our greedy desires and Earth’s destruction. We are immorally destroying the planet, primarily in the service of the very rich. We should be ashamed for burning fossil fuels.

The Story of Tribe #2. The Earth and its resources are our birthright. We have used those resources, including fossil fuels, to great effect. Unburdened capitalism is beneficial and has unleashed the full power of the human spirit. Limiting that progress with fake “evidence” of problems is immoral. The costs of addressing your unproven problems are unjustified. Even, “God has guaranteed us an ultimate solution to our problems.”

These are my own interpretations of the two extremes that currently animate us. Please forgive me any inaccuracies, and try to honestly answer whether you identify more strongly with one of them.

I, myself, identify more strongly with one of them (Tribe #1) but, having studied this for a year, I know it’s a mistaken position. We are not a greedy, evil species that’s wantonly consuming a helpless Earth. The Earth is fine. It was fine before there was oxygen in its atmosphere for us to breathe, and it may well be fine when the next life evolves that’s well adapted to the planetary temperature we create. Further, we are not some greedy, evil blight on the Earth because we burn fossil fuels. In fact, the fossil fuels were a gift from Earth’s previous life experiments. They are fundamental to us having built a civilization in which we can have this conversation. On any alien planet, given our understanding of astronomy and physics, the same energy resources would very likely be used first. We are not greedy and evil, we have just been trying to do what humans do — live comfortably, free ourselves from the threats of diseases and predators, raise kids. Earth’s fossil resources were a gift to us, one we have used to build our civilization, but one our scientific evidence tells us we dare not use much longer.

The story told by Tribe #2 is also problematic. The Earth is not our birthright. Our own solar system has featured at least one, and maybe two, other planets that have been habitable at one point but lost their habitability. There are no guarantees. The only protections we have from a bleak future, as a species, are knowledge and acting on that knowledge.

In fact, evidence gathered from significant study of both nearby and faraway worlds makes a strong case that the urgent challenge we currently face with climate change is a cosmic test that would face any intelligent, technological civilization on any planet in the universe. Further, it almost certainly has faced other alien civilizations already, perhaps thousands. It may well be the “Great Filter” proposed as one solution of the Fermi paradox.

This should focus our thinking. Just as much smart work has been required to prevent a potentially civilization-ending nuclear war, smart and coordinated work will be required to find our way through this challenge. We really have no excuse. Thanks to decades of work, we have the technology to switch to low-impact energy sources. The only thing standing in our way is our own ability to agree on a set of facts, compromise on a rational set of solutions, and execute. A basic law of life is “survival of the fittest,” and this is probably a cosmic test of our civilization’s societal fitness.

But what course of action can we all agree on? We need a framework in which we can make steady (and rapid) progress while still arguing about the details. I’ve put quite a bit of thought into that, and I’ve read widely over the last couple years. I believe there is such a framework. It’s a framework we’ve already used with great success (hint: it gave you your smart phone). It’s one that could ensure consistent progress, while allowing all of us the freedoms of choice we value.

That will be the subject of Episode 10. Stay tuned.

Note: Episodes 8 and 9 of this series rely heavily on ideas in the book, Light of the Stars: Alien Worlds and the Fate of the Earth, by Adam Frank (2018). I recommend it.

MW@BixbyCreek_BobWestern
Image credit: Bob Western

Back to page contents


Episode 10 in preparation. Stay tuned…